Quickstart
Get started with Browsr in under 2 minutes.
1. Get your API key
- Sign in at browsr.dev
- Go to Settings > API Keys
- Create a new key and copy it
Then set it in your environment:
export BROWSR_API_KEY="bak_..."
2. Create a session (optional)
Most commands work without a session -- Browsr spins up a temporary browser for you. Create a session when you need state to persist across multiple calls (cookies, login, multi-step flows).
CLI:
browsr sessions create
# → session_id: "abc123"

cURL:
curl -X POST https://api.browsr.dev/sessions \
-H "x-api-key: $BROWSR_API_KEY" \
-H "Content-Type: application/json" \
-d '{"headless": true}'
Pass --session-id SESSION_ID (CLI) or "session_id": "SESSION_ID" (API) to any subsequent command to reuse the session.
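If you're scripting the CLI, you can capture the session ID from the output and feed it to later commands. A minimal sketch, assuming the `session_id: "..."` output format shown above (adjust the extraction if your CLI version prints something different):

```shell
# Pull the session ID out of the CLI output shown in step 2.
extract_session_id() {
  sed -n 's/.*session_id: "\([^"]*\)".*/\1/p'
}

# With a live CLI you would run:
#   SESSION_ID=$(browsr sessions create | extract_session_id)
# Demonstrated here against the sample output from above:
echo 'session_id: "abc123"' | extract_session_id
# → abc123
```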
3. Scrape a page
CLI:
browsr scrape https://example.com

cURL:
curl -X POST https://api.browsr.dev/v1/scrape \
-H "x-api-key: $BROWSR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com",
"formats": ["markdown", "links"]
}'
Response:
{
"success": true,
"data": {
"markdown": "# Example Domain\n\nThis domain is for use in illustrative examples...",
"links": [
{"href": "https://www.iana.org/domains/example", "text": "More information..."}
],
"metadata": {
"title": "Example Domain",
"sourceURL": "https://example.com",
"status_code": 200
}
}
}
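The scrape response is plain JSON, so any JSON tool can pull individual fields out of it. A quick sketch against the sample response above (`python3` stands in for `jq` here; the shape follows the example, not necessarily the full schema):

```shell
# Grab a field from the scrape response (sample payload from above, truncated).
response='{"success": true, "data": {"markdown": "# Example Domain", "metadata": {"title": "Example Domain", "status_code": 200}}}'
echo "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["data"]["metadata"]["title"])'
# → Example Domain
```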
4. Run browser commands
CLI:
# Navigate to a page
browsr navigate https://news.ycombinator.com --session-id SESSION_ID

# Execute commands
browsr exec --session-id SESSION_ID \
'[{"command":"wait_for_element","data":{"selector":".titleline","timeout_ms":5000}},{"command":"get_content","data":{"selector":"#hnmain","kind":"markdown"}}]'

cURL:
# Execute commands (use session_id from step 2)
curl -X POST https://api.browsr.dev/commands \
-H "x-api-key: $BROWSR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"session_id": "SESSION_ID",
"commands": [
{"command": "navigate_to", "data": {"url": "https://news.ycombinator.com"}},
{"command": "wait_for_element", "data": {"selector": ".titleline", "timeout_ms": 5000}},
{"command": "get_content", "data": {"selector": "#hnmain", "kind": "markdown"}}
]
}'
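For longer command lists, keeping the payload in a file is easier to maintain than a long inline `-d` string; `curl -d @commands.json` sends it the same way. A sketch using the payload from above:

```shell
# Same payload as the inline example, kept in a file for readability.
cat > commands.json <<'EOF'
{
  "session_id": "SESSION_ID",
  "commands": [
    {"command": "navigate_to", "data": {"url": "https://news.ycombinator.com"}},
    {"command": "wait_for_element", "data": {"selector": ".titleline", "timeout_ms": 5000}},
    {"command": "get_content", "data": {"selector": "#hnmain", "kind": "markdown"}}
  ]
}
EOF

# Sanity-check the JSON, then send it with: curl ... -d @commands.json
python3 -m json.tool commands.json > /dev/null && echo "commands.json OK"
```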
5. Crawl a site
CLI:
browsr crawl https://example.com --limit 5 --max-depth 2

cURL:
curl -X POST https://api.browsr.dev/v1/crawl \
-H "x-api-key: $BROWSR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"url": "https://example.com",
"limit": 5,
"max_depth": 2,
"formats": ["markdown"]
}'
6. Search the web
CLI:
browsr search "best headless browser APIs" --limit 5

cURL:
curl -X POST https://api.browsr.dev/search \
-H "x-api-key: $BROWSR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"query": "best headless browser APIs",
"limit": 5,
"formats": ["markdown"]
}'
7. Observe browser state
CLI:
browsr observe --session-id SESSION_ID

cURL:
curl -X POST https://api.browsr.dev/observe \
-H "x-api-key: $BROWSR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"session_id": "SESSION_ID",
"use_image": true,
"include_content": true
}'
8. Use a client SDK
JavaScript
import { BrowsrClient } from '@browsr/sdk';
const client = new BrowsrClient({ apiKey: process.env.BROWSR_API_KEY });
const result = await client.scrape('https://example.com', {
formats: ['markdown', 'links']
});
console.log(result.data.markdown);
Rust
use browsr_client::BrowsrClient;

#[tokio::main] // assumes a Tokio async runtime
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = BrowsrClient::new("https://api.browsr.dev")
        .with_api_key(std::env::var("BROWSR_API_KEY")?);
    let result = client.scrape_url("https://example.com").await?;
    println!("{}", result.data.markdown.unwrap());
    Ok(())
}
Next steps
- Scrape -- all scrape options and formats
- Crawl -- multi-page crawling
- Search -- web search with optional scraping
- API Overview -- endpoint map and integration patterns
- Commands -- full command reference
- CLI -- remote-first terminal usage
- Sessions -- persistent browser contexts
- Skills -- install Browsr for your AI agent