# Using Skills
Skills give your AI coding agent browser superpowers -- scrape, crawl, search, and automate from within your development workflow. Browsr provides skill files for Claude, Codex, and OpenClaw.
## Quick install

Auto-detect your agent and install the right skill:

```sh
curl -fsSL https://browsr.dev/install.sh | sh
```
## Install for a specific agent

```sh
# Claude
curl -fsSL https://browsr.dev/install.sh | sh -s -- --agent claude

# Codex
curl -fsSL https://browsr.dev/install.sh | sh -s -- --agent codex

# OpenClaw
curl -fsSL https://browsr.dev/install.sh | sh -s -- --agent openclaw

# Multiple agents at once
BROWSR_AGENT=claude,codex,openclaw curl -fsSL https://browsr.dev/install.sh | sh
```
## Set your API key

After installing, export your API key so the agent can use it:

```sh
export BROWSR_API_KEY="bak_..."
```
Get your key from Settings > API Keys at browsr.dev.
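A quick way to verify the key is set and actually exported (agents run as subprocesses and only see exported variables) is a sketch like the one below. The `bak_` prefix check simply mirrors the key format shown above; adjust it if your keys differ.

```sh
# Sanity-check that BROWSR_API_KEY is set and exported so agent
# subprocesses inherit it. The bak_ prefix check mirrors the key
# format shown above and is only a heuristic.
if [ -z "${BROWSR_API_KEY:-}" ]; then
  echo "BROWSR_API_KEY is not set" >&2
elif printenv BROWSR_API_KEY >/dev/null; then
  case "$BROWSR_API_KEY" in
    bak_*) echo "BROWSR_API_KEY is exported and looks valid" ;;
    *)     echo "warning: key does not start with bak_" >&2 ;;
  esac
else
  echo "BROWSR_API_KEY is set but not exported; run: export BROWSR_API_KEY" >&2
fi
```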
## What skills enable
Once installed, your AI agent can:
- Scrape any URL and get back markdown, HTML, screenshots, or structured JSON
- Crawl entire sites with path filtering and depth control
- Search Google and optionally scrape each result
- Automate browsers with 40+ deterministic commands
- Relay your real Chrome tab to the API
- Extract structured data with AI-powered schemas
- Observe live browser state including DOM snapshots and interactive elements
- Spin up sandboxed shell containers to run code alongside browser automation
All of this happens through the Browsr API -- the agent calls curl commands under the hood.
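To make "curl commands under the hood" concrete, here is a sketch of the kind of request the agent might issue for a scrape. The endpoint path and JSON field names below are illustrative assumptions, not documented API; the actual request is left commented out so nothing is sent.

```sh
# Hypothetical scrape request as the agent might construct it.
# The endpoint path (/api/scrape) and JSON fields (url, formats)
# are assumptions for illustration -- consult the Browsr API
# reference for the real request shape.
BROWSR_API_KEY="${BROWSR_API_KEY:-bak_example}"
body='{"url": "https://example.com", "formats": ["markdown"]}'

# Dry run: show the request instead of sending it.
echo "POST https://browsr.dev/api/scrape"
echo "Authorization: Bearer ${BROWSR_API_KEY}"
echo "$body"

# To actually send it:
# curl -s -X POST "https://browsr.dev/api/scrape" \
#   -H "Authorization: Bearer $BROWSR_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$body"
```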
## Usage
Once installed, just ask your agent to scrape, crawl, search, or automate a browser. It will use the Browsr API automatically.
Examples:
- "Scrape example.com and give me the content as markdown"
- "Crawl the docs site and extract all page titles"
- "Search for 'best rust web frameworks' and summarize the top 5 results"
- "Navigate to the login page, fill in the form, and screenshot the dashboard"
- "Take a screenshot of the pricing page and extract all plan names and prices as JSON"
- "Open a shell, install puppeteer, and run a script that checks if the signup flow works"
- "Scrape the top 10 HN stories, then crawl each link and summarize the articles"
- "Observe the current session state and tell me what interactive elements are on the page"
## Available commands
Skills expose the full Browsr API surface to your agent. Here's what each command category covers:
| Category | Commands | Description |
|---|---|---|
| Scrape | scrape, scrape with formats | Extract markdown, HTML, links, images, screenshots, or structured JSON from any URL |
| Crawl | crawl | Breadth-first multi-page crawling with depth and path controls |
| Search | search, search with scrape | Web search with optional full-page scraping of results |
| Navigate | navigate_to, refresh, go_back, go_forward | Page navigation and history |
| Interact | click, type_text, press_key, hover, focus, select_option, check, clear | Element interaction |
| Extract | get_text, get_title, get_content, get_attribute, evaluate, extract_structured_content | Content and data extraction |
| Wait | wait_for_element, wait_for_navigation | Synchronization primitives |
| Mouse & Keyboard | move_mouse_to, scroll_to, drag, drag_to | Low-level input |
| Screenshot | screenshot | Viewport or full-page captures |
| Observe | observe | Full DOM snapshot with interactive element map |
| Sessions | create, list, destroy | Persistent browser context management |
| Profiles | save_profile, load_profile | Reusable browser identity (cookies, localStorage) |
| Shell | shell create, shell exec, shell stop | Sandboxed code execution containers |
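To illustrate how the Navigate/Interact/Screenshot commands above might chain together in a session, here is a hedged sketch. The session-command endpoint and payload shapes are assumptions invented for illustration, so the actual POST is commented out and the loop only prints what it would send.

```sh
# Hypothetical command sequence against a browser session.
# Endpoint and payload shapes are assumptions, not documented API.
SESSION_ID="${SESSION_ID:-sess_example}"

for cmd in \
  '{"command": "navigate_to", "url": "https://example.com/login"}' \
  '{"command": "type_text", "selector": "#email", "text": "me@example.com"}' \
  '{"command": "click", "selector": "button[type=submit]"}' \
  '{"command": "screenshot"}'
do
  echo "would POST to session $SESSION_ID: $cmd"
  # curl -s -X POST "https://browsr.dev/api/session/$SESSION_ID/command" \
  #   -H "Authorization: Bearer $BROWSR_API_KEY" \
  #   -H "Content-Type: application/json" \
  #   -d "$cmd"
done
```

The command names (`navigate_to`, `type_text`, `click`, `screenshot`) come from the table above; everything else here is placeholder scaffolding.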
## Public skill files
The raw skill files are hosted at: