Using Skills

Skills give your AI coding agent browser superpowers -- scrape, crawl, search, and automate from within your development workflow. Browsr provides skill files for Claude, Codex, and OpenClaw.

Quick install

Auto-detect your agent and install the right skill:

curl -fsSL https://browsr.dev/install.sh | sh

Install for a specific agent

# Claude
curl -fsSL https://browsr.dev/install.sh | sh -s -- --agent claude

# Codex
curl -fsSL https://browsr.dev/install.sh | sh -s -- --agent codex

# OpenClaw
curl -fsSL https://browsr.dev/install.sh | sh -s -- --agent openclaw

# Multiple agents at once
BROWSR_AGENT=claude,codex,openclaw curl -fsSL https://browsr.dev/install.sh | sh

Set your API key

After installing, export your API key so the agent can use it:

export BROWSR_API_KEY="bak_..."

Get your key from Settings > API Keys at browsr.dev.
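To confirm the key is actually exported in your shell before handing off to the agent, a quick check like the one below can help. It only tests for the `bak_` prefix shown above; the helper name and messages are illustrative, not part of Browsr:

```shell
# Sanity-check that BROWSR_API_KEY is exported and has the "bak_"
# prefix the docs show. Purely local; makes no API calls.
check_browsr_key() {
  case "${BROWSR_API_KEY:-}" in
    bak_*) echo "BROWSR_API_KEY looks valid" ;;
    "")    echo "BROWSR_API_KEY is not set" >&2; return 1 ;;
    *)     echo "BROWSR_API_KEY has an unexpected prefix" >&2; return 1 ;;
  esac
}

check_browsr_key
```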

What skills enable

Once installed, your AI agent can:

  • Scrape any URL and get back markdown, HTML, screenshots, or structured JSON
  • Crawl entire sites with path filtering and depth control
  • Search Google and optionally scrape each result
  • Automate browsers with 40+ deterministic commands
  • Relay your real Chrome tab to the API
  • Extract structured data with AI-powered schemas
  • Observe live browser state including DOM snapshots and interactive elements
  • Spin up sandboxed shell containers to run code alongside browser automation

All of this happens through the Browsr API -- the agent calls curl commands under the hood.
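This page doesn't document the underlying routes, but as a rough sketch, one of those curl calls might look like the following. The `/v1/scrape` path, request body, and `api.browsr.dev` host are assumptions for illustration only; the skill files encode the real endpoints:

```shell
# Hypothetical shape of a scrape request the agent might issue.
# Endpoint path and JSON fields are guesses, not the documented API.
browsr_scrape() {
  if [ -z "${BROWSR_API_KEY:-}" ]; then
    echo "set BROWSR_API_KEY first" >&2
    return 1
  fi
  curl -s "https://api.browsr.dev/v1/scrape" \
    -H "Authorization: Bearer $BROWSR_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"url\": \"$1\", \"format\": \"markdown\"}"
}

browsr_scrape "https://example.com"
```

The agent generates calls like this on your behalf; you rarely need to write them yourself.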

Usage

Once installed, just ask your agent to scrape, crawl, search, or automate a browser. It will use the Browsr API automatically.

Examples:

  • "Scrape example.com and give me the content as markdown"
  • "Crawl the docs site and extract all page titles"
  • "Search for 'best rust web frameworks' and summarize the top 5 results"
  • "Navigate to the login page, fill in the form, and screenshot the dashboard"
  • "Take a screenshot of the pricing page and extract all plan names and prices as JSON"
  • "Open a shell, install puppeteer, and run a script that checks if the signup flow works"
  • "Scrape the top 10 HN stories, then crawl each link and summarize the articles"
  • "Observe the current session state and tell me what interactive elements are on the page"

Available commands

Skills expose the full Browsr API surface to your agent. Here's what each command category covers:

  • Scrape (scrape, scrape with formats): Extract markdown, HTML, links, images, screenshots, or structured JSON from any URL
  • Crawl (crawl): Breadth-first multi-page crawling with depth and path controls
  • Search (search, search with scrape): Web search with optional full-page scraping of results
  • Navigate (navigate_to, refresh, go_back, go_forward): Page navigation and history
  • Interact (click, type_text, press_key, hover, focus, select_option, check, clear): Element interaction
  • Extract (get_text, get_title, get_content, get_attribute, evaluate, extract_structured_content): Content and data extraction
  • Wait (wait_for_element, wait_for_navigation): Synchronization primitives
  • Mouse & Keyboard (move_mouse_to, scroll_to, drag, drag_to): Low-level input
  • Screenshot (screenshot): Viewport or full-page captures
  • Observe (observe): Full DOM snapshot with interactive element map
  • Sessions (create, list, destroy): Persistent browser context management
  • Profiles (save_profile, load_profile): Reusable browser identity (cookies, localStorage)
  • Shell (shell create, shell exec, shell stop): Sandboxed code execution containers
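To see how these commands compose, here is a traced login-and-screenshot sequence using the command names from the list above. `browsr_call` is a hypothetical stand-in that just prints each step; the real transport (the curl calls the skill generates) is not shown on this page:

```shell
# browsr_call is a tracing stub, not a real CLI: it echoes the command
# it would send so the sequence can be read and tested offline.
browsr_call() {
  echo "browsr $*"
}

browsr_call create                                   # Sessions: new browser context
browsr_call navigate_to "https://example.com/login"  # Navigate
browsr_call type_text "#email" "user@example.com"    # Interact
browsr_call click "button[type=submit]"              # Interact
browsr_call wait_for_navigation                      # Wait
browsr_call screenshot                               # Screenshot
browsr_call destroy                                  # Sessions: clean up
```

In practice you describe this flow in plain language and the agent chooses the commands itself.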

Public skill files

The raw skill files are hosted at: