# Browsr
Browsr is a headless browser automation platform built in Rust. Design scraping sequences with AI, then replay the same commands deterministically at scale.
## Get your API key

- Sign in at browsr.dev
- Go to Settings > API Keys
- Create a new key and copy it (prefixed `bak_`)
- Export it in your shell:

```bash
export BROWSR_API_KEY="bak_..."
```
## Quick examples
### Scrape a page

CLI:

```bash
browsr scrape https://example.com
```

cURL:

```bash
curl -X POST https://api.browsr.dev/v1/scrape \
  -H "x-api-key: $BROWSR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "formats": ["markdown"]}'
```
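The cURL call maps straight onto a plain HTTP request; a minimal TypeScript sketch using `fetch` (no SDK assumed — the endpoint, header, and payload are copied from the example above, and nothing is assumed about the response shape):

```typescript
// Pure helper so the /v1/scrape payload can be inspected without a network call.
// Field names ("url", "formats") come from the cURL example above.
function scrapeBody(url: string, formats: string[] = ["markdown"]): string {
  return JSON.stringify({ url, formats });
}

// Send the request. The response shape is not documented here, so it is
// returned as unknown.
async function scrape(url: string): Promise<unknown> {
  const res = await fetch("https://api.browsr.dev/v1/scrape", {
    method: "POST",
    headers: {
      "x-api-key": process.env.BROWSR_API_KEY ?? "",
      "Content-Type": "application/json",
    },
    body: scrapeBody(url),
  });
  if (!res.ok) throw new Error(`scrape failed: HTTP ${res.status}`);
  return res.json();
}
```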
### Search the web

CLI:

```bash
browsr search "best headless browser APIs" --limit 5
```

cURL:

```bash
curl -X POST https://api.browsr.dev/search \
  -H "x-api-key: $BROWSR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "best headless browser APIs", "limit": 5}'
```
### Crawl a site

CLI:

```bash
browsr crawl https://example.com --limit 20 --max-depth 2
```

cURL:

```bash
curl -X POST https://api.browsr.dev/v1/crawl \
  -H "x-api-key: $BROWSR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com", "limit": 20, "max_depth": 2, "formats": ["markdown"]}'
```
### Navigate and extract content

CLI:

```bash
browsr navigate https://news.ycombinator.com
browsr exec --session-id SESSION_ID \
  '[{"command":"get_content","data":{"selector":"#hnmain","kind":"markdown"}}]'
```

cURL:

```bash
curl -X POST https://api.browsr.dev/commands \
  -H "x-api-key: $BROWSR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "commands": [
      {"command": "navigate_to", "data": {"url": "https://news.ycombinator.com"}},
      {"command": "wait_for_element", "data": {"selector": ".titleline", "timeout_ms": 5000}},
      {"command": "get_content", "data": {"selector": "#hnmain", "kind": "markdown"}}
    ]
  }'
```
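A command batch is plain data, so it can be assembled programmatically before being sent. A TypeScript sketch — the three command names and their fields are copied from the example above; nothing else from the 40+ command set is assumed here:

```typescript
// A /commands batch is an ordered list of {command, data} objects that the
// browser replays deterministically.
type BrowserCommand = { command: string; data: Record<string, unknown> };

function commandsPayload(commands: BrowserCommand[]): string {
  return JSON.stringify({ commands });
}

// Same sequence as the cURL example: navigate, wait, extract.
const batch: BrowserCommand[] = [
  { command: "navigate_to", data: { url: "https://news.ycombinator.com" } },
  { command: "wait_for_element", data: { selector: ".titleline", timeout_ms: 5000 } },
  { command: "get_content", data: { selector: "#hnmain", kind: "markdown" } },
];

const payload = commandsPayload(batch);
// POST `payload` to https://api.browsr.dev/commands with the x-api-key header,
// exactly as in the cURL example above.
```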
### Run an agent task

CLI:

```bash
browsr run "Open the docs homepage and summarize the navigation sections"
```

cURL:

```bash
curl -X POST https://api.browsr.dev/agent/run \
  -H "x-api-key: $BROWSR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"task": "Open the docs homepage and summarize the navigation sections"}'
```
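Every endpoint above follows the same authenticated POST-with-JSON shape, so one small helper covers scrape, search, crawl, commands, and agent runs alike. A sketch that keeps request building pure (and therefore inspectable offline); the paths and bodies are copied from the cURL examples, and nothing about the responses is assumed:

```typescript
// Build the fetch arguments for any Browsr POST endpoint shown above.
function buildRequest(path: string, payload: object) {
  return {
    url: `https://api.browsr.dev${path}`,
    init: {
      method: "POST",
      headers: {
        "x-api-key": process.env.BROWSR_API_KEY ?? "",
        "Content-Type": "application/json",
      },
      body: JSON.stringify(payload),
    },
  };
}

// Usage — paths and bodies taken from the examples above:
//   const { url, init } = buildRequest("/agent/run", {
//     task: "Open the docs homepage and summarize the navigation sections",
//   });
//   const res = await fetch(url, init);
```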
## Features
### Scrape

Extract markdown, HTML, screenshots, and structured JSON from any page. AI-powered extraction with schemas and natural language prompts.

Scrape docs →

### Crawl

Breadth-first crawl entire sites with path filtering and depth control. Every page extracted in the formats you choose.

Crawl docs →

### Search

Search Google programmatically and optionally scrape each result page. Returns titles, URLs, descriptions, and full page content.

Search docs →

### Commands

40+ deterministic browser commands: navigate, click, type, scroll, drag, manage cookies, run JavaScript, and capture screenshots.

Commands docs →

### Sessions

Persistent browser contexts with cookies, viewport, and history. Maintain state across multiple API calls for multi-step workflows.

Sessions docs →

### Chrome Relay

Bridge your real Chrome tab to the API with the Browsr extension. Automate authenticated sessions without sharing credentials.

Relay docs →

### Profiles

Save browser identity (cookies, localStorage, user agent) as reusable profiles. Persist login state across sessions.

Profiles docs →

### Skills

Give your AI coding agent browser superpowers. One-line install for Claude, Codex, and OpenClaw.

Skills docs →

### Shells

Spin up AI sandboxes on the fly. Container-based execution with filesystem access and artifact capture.

Shells docs →

### SDKs

Official JavaScript and Rust clients for the Browsr API. Type-safe, ergonomic wrappers for every endpoint.

SDK docs →

## Open source

The core browser engine, CLI, and Rust client are open source. Run browsr locally for development, debugging, and self-hosted scraping. See Open Source.