
Quickstart

Get started with Browsr in under 2 minutes.

1. Get your API key

  1. Sign in at browsr.dev
  2. Go to Settings > API Keys
  3. Create a new key and copy it
Export the key in your shell so the CLI and SDKs can read it:

export BROWSR_API_KEY="bak_..."

2. Create a session (optional)

Most commands work without a session -- Browsr spins up a temporary browser for you. Create a session when you need state to persist across multiple calls (cookies, login, multi-step flows).

browsr sessions create
# → session_id: "abc123"

Pass --session-id SESSION_ID (CLI) or "session_id": "SESSION_ID" (API) to any subsequent command to reuse the session.
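When calling the API, the request body can be assembled in code rather than templated by hand. A minimal Python sketch of the session-reuse pattern — only the "session_id" field comes from the docs above; the "url" field is a hypothetical illustration of a request parameter:

```python
import json

session_id = "abc123"  # value returned by `browsr sessions create`

# Reuse the session by including "session_id" in the request body.
# NOTE: the "url" field is a hypothetical example parameter, not
# taken from the docs above.
body = {"url": "https://example.com", "session_id": session_id}
payload = json.dumps(body)
print(payload)
```

The same `body` can be reused across calls for the lifetime of the session, so cookies and login state carry over.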

3. Scrape a page

browsr scrape https://example.com
{
  "success": true,
  "data": {
    "markdown": "# Example Domain\n\nThis domain is for use in illustrative examples...",
    "links": [
      {"href": "https://www.iana.org/domains/example", "text": "More information..."}
    ],
    "metadata": {
      "title": "Example Domain",
      "sourceURL": "https://example.com",
      "status_code": 200
    }
  }
}
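The response is plain JSON, so it is easy to consume from any language. A small Python sketch that parses the sample response above and pulls out the fields you will usually want (the response is captured as a string here; in practice it would come from the CLI or API):

```python
import json

# The sample scrape response from above, captured as a string.
response_text = """
{
  "success": true,
  "data": {
    "markdown": "# Example Domain\\n\\nThis domain is for use in illustrative examples...",
    "links": [
      {"href": "https://www.iana.org/domains/example", "text": "More information..."}
    ],
    "metadata": {
      "title": "Example Domain",
      "sourceURL": "https://example.com",
      "status_code": 200
    }
  }
}
"""

result = json.loads(response_text)
if result["success"]:
    # Page content as markdown, plus every outbound link.
    print(result["data"]["metadata"]["title"])
    for link in result["data"]["links"]:
        print(link["href"], "-", link["text"])
```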

4. Run browser commands

# Navigate to a page
browsr navigate https://news.ycombinator.com --session-id SESSION_ID

# Execute commands
browsr exec --session-id SESSION_ID \
  '[{"command":"wait_for_element","data":{"selector":".titleline","timeout_ms":5000}},{"command":"get_content","data":{"selector":"#hnmain","kind":"markdown"}}]'
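Hand-writing that JSON argument gets error-prone as command lists grow. A sketch in Python that builds the same payload programmatically — the command names and "data" fields are taken verbatim from the example above, nothing new is assumed:

```python
import json

# Build the command array passed to `browsr exec`: each entry is a
# {"command": ..., "data": {...}} object, executed in order.
commands = [
    {"command": "wait_for_element",
     "data": {"selector": ".titleline", "timeout_ms": 5000}},
    {"command": "get_content",
     "data": {"selector": "#hnmain", "kind": "markdown"}},
]

payload = json.dumps(commands)
print(payload)  # pass this string as the argument to `browsr exec`
```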

5. Crawl a site

browsr crawl https://example.com --limit 5 --max-depth 2

6. Search the web

browsr search "best headless browser APIs" --limit 5

7. Observe browser state

browsr observe --session-id SESSION_ID

8. Use a client SDK

JavaScript

import { BrowsrClient } from '@browsr/sdk';

const client = new BrowsrClient({ apiKey: process.env.BROWSR_API_KEY });

const result = await client.scrape('https://example.com', {
  formats: ['markdown', 'links']
});
console.log(result.data.markdown);

Rust

use browsr_client::BrowsrClient;

let client = BrowsrClient::new("https://api.browsr.dev")
    .with_api_key(std::env::var("BROWSR_API_KEY").unwrap());

let result = client.scrape_url("https://example.com").await?;
println!("{}", result.data.markdown.unwrap());

Next steps

  • Scrape -- all scrape options and formats
  • Crawl -- multi-page crawling
  • Search -- web search with optional scraping
  • API Overview -- endpoint map and integration patterns
  • Commands -- full command reference
  • CLI -- remote-first terminal usage
  • Sessions -- persistent browser contexts
  • Skills -- install Browsr for your AI agent