
Benchmarking Hosted Browser Providers: Speed, Stealth, CAPTCHA, and Concurrency
Hosted browser providers let you skip managing headless Chrome infrastructure and just make an API call to get a browser. There are now enough of them that choosing between providers is a real decision, but most published benchmarks show numbers without explaining what those numbers mean for real workloads.
This article covers five benchmarks across six providers. For each benchmark we explain what it measures and what the results tell you about how each provider is built. Every test runs on a free tier, so you can reproduce all of them without spending anything.
The five benchmarks are:
- Session lifecycle speed — how long it takes to get from no browser to a loaded page
- Idle workflow cost efficiency — what happens to a session (and your bill) when it sits idle between steps
- Stealth and bot detection — whether sessions pass fingerprinting checks and how they score on reCAPTCHA v3
- CAPTCHA solving — which providers can solve CAPTCHAs automatically on a free tier, and how fast
- Parallel session handling — how many sessions each provider allows concurrently, and whether performance degrades under load
We ran all benchmarks from a Hetzner CAX21 server in Helsinki (hel1): Ubuntu 22.04, ARM64, 4 vCPU, 8GB RAM, Node.js v20.20.1. The benchmark code is open source at ritza-browser-bench.
Benchmark 1: Session lifecycle speed
Session startup time is the baseline cost of every browser automation job. It compounds at scale, and high variance is harder to manage than a consistently slower average, particularly in queue-based workflows where a slow session blocks everything behind it.
What we measure
We time four distinct stages of the session lifecycle:
- Session creation — the API call to tell the provider to spin up a browser. For most providers this means an HTTP request to their control plane; the response gives you a WebSocket URL to connect to.
- CDP connection — connecting to the browser over the Chrome DevTools Protocol (CDP). This is where Playwright's connectOverCDP() call happens, and it's almost entirely network round-trip time.
- Page navigation — page.goto() waiting for domcontentloaded. This is the page load itself, on top of whatever network latency exists.
- Session release — the API call to tell the provider the session is finished and the browser slot can be reclaimed.
How we tested it
Each provider implements a create() and release() method. The benchmark wraps them in a timer and connects via Playwright's connectOverCDP:
// Session creation: provider returns an id and a CDP WebSocket URL
const t0 = nowNs();
const { id, cdpUrl } = await provider.create();
result.session_creation_ms = msSince(t0);
// CDP connection: Playwright connects to the running browser
const t1 = nowNs();
const browser = await chromium.connectOverCDP(cdpUrl);
result.session_connect_ms = msSince(t1);
// Page navigation (reuse the browser's default context)
const context = browser.contexts()[0] ?? (await browser.newContext());
const page = await context.newPage();
const t2 = nowNs();
await page.goto(url, { waitUntil: "domcontentloaded" });
result.page_goto_ms = msSince(t2);
// Session release
const t3 = nowNs();
await provider.release(id);
result.session_release_ms = msSince(t3);
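The nowNs() and msSince() helpers used throughout these snippets aren't shown in the article. A minimal sketch, assuming they wrap Node's monotonic clock, might look like this:

```typescript
// Sketch of the timing helpers the benchmark snippets assume.
// nowNs() reads Node's monotonic clock (unaffected by system clock
// changes); msSince() converts the elapsed nanoseconds to milliseconds.
function nowNs(): bigint {
  return process.hrtime.bigint();
}

function msSince(start: bigint): number {
  return Number(process.hrtime.bigint() - start) / 1e6;
}
```

Using `process.hrtime.bigint()` rather than `Date.now()` avoids millisecond truncation and clock drift, which matters when individual stages are only a few hundred milliseconds.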
We ran 10 measured sessions per provider, with 3 warmup runs discarded beforehand. Warmup runs clear any provider-side cold start effects that would otherwise skew the first measured run. We ran everything from a Hetzner CAX21 server in Helsinki, a neutral cloud environment that avoids the home-network variability of a laptop while not being co-located with provider infrastructure.
The full benchmark code is in ritza-browser-bench on the benchmarks-2026 branch.
Results
All times in milliseconds. Run from Hetzner Helsinki, 10 measured runs per provider.
| Provider | Avg total | Avg create | Avg connect | Avg navigate | Avg release |
|---|---|---|---|---|---|
| Steel | 1,441 | 290 | 760 | 209 | 181 |
| Kernel | 1,554 | 225 | 815 | 271 | 243 |
| Browserless | 2,090 | 0 | 1,665 | 425 | 0 |
| Anchor | 3,730 | 1,528 | 1,015 | 315 | 872 |
| Hyperbrowser | 4,012 | 1,462 | 1,892 | 496 | 163 |
| Browserbase | 11,933 | 9,401 | 1,474 | 648 | 410 |
Steel and Kernel finish in roughly 1.5 seconds. Browserbase averages nearly 12 seconds, almost entirely in the creation phase.
What the numbers tell us
Steel and Kernel are fast because session creation is fast: Kernel's 225ms create step is the lowest of any provider. Browserless shows 0ms for create and release because it has no session creation API: the CDP connection itself starts and ends the session, so all the latency lands in the connect step. The 1,665ms connect time is still competitive and the lack of release overhead is a genuine benefit for high-volume jobs.
Browserbase is the outlier. The 9,401ms average for session creation reflects their architecture: they provision a full cloud browser environment per session rather than routing a connection to a warm pool. Once the session exists, connect time (1,474ms) is comparable to other providers. For workloads with long-lived sessions reused across many tasks, that's a one-time cost. For workloads that create and destroy short sessions frequently, it's a hard constraint.
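To make the gap concrete, here's a rough back-of-envelope calculation (an illustration, not part of the benchmark code) of how lifecycle overhead compounds at volume:

```typescript
// Rough illustration: total lifecycle overhead per day for N short-lived
// sessions, using the average totals from the table above.
function dailyOverheadMinutes(avgTotalMs: number, sessionsPerDay: number): number {
  return (avgTotalMs * sessionsPerDay) / 60_000;
}

// At 1,000 short-lived sessions per day, a 1,441ms lifecycle (Steel)
// costs roughly 24 minutes of pure overhead; an 11,933ms lifecycle
// (Browserbase) costs roughly 199 minutes.
const steelMinutes = dailyOverheadMinutes(1441, 1000);
const browserbaseMinutes = dailyOverheadMinutes(11933, 1000);
```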
Benchmark 2: Idle workflow cost efficiency
Browser sessions often sit idle between workflow steps. Providers with wall-clock billing charge for that idle time the same as active time. Providers with compute billing do not. Session timeout policies matter too. A provider that terminates idle sessions forces reconnection mid-workflow, which adds a full cold-start cost at each gap.
What we measure
We simulate a two-step workflow with a deliberate idle gap in the middle:
- Create a session, connect, navigate to a page (step 1 — active work)
- Wait 60 seconds (idle)
- Check whether the session is still alive, navigate to a second page (step 2)
- Release the session
We record whether the session survived the idle period, how long step 2 took (a reconnect cost appears here if the session died), and the total wall-clock time. We also note what each provider's billing model means for that 60-second wait.
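The billing difference can be sketched as a toy model. This is an illustration of the two models described above, not any provider's actual pricing logic:

```typescript
// Toy model of the two billing approaches. Wall-clock billing charges
// for the whole session; compute billing (e.g. a standby mode that
// pauses when no CDP connection is active) charges only for active time.
type BillingModel = "wall-clock" | "compute";

function billedSeconds(
  model: BillingModel,
  activeSec: number,
  idleSec: number
): number {
  return model === "wall-clock" ? activeSec + idleSec : activeSec;
}

// A two-step workflow with 5s of active work and a 60s idle gap:
// wall-clock billing charges 65 seconds, compute billing charges 5.
const wallClock = billedSeconds("wall-clock", 5, 60);
const compute = billedSeconds("compute", 5, 60);
```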
How we tested it
After completing step 1, the benchmark waits and then probes the connection before attempting step 2:
// Step 1: create, connect, navigate
const step1Start = nowNs();
const { id, cdpUrl } = await provider.create();
let browser = await chromium.connectOverCDP(cdpUrl);
let page = await browser.contexts()[0].newPage();
await page.goto(STEP1_URL, { waitUntil: "domcontentloaded" });
record.step1_ms = msSince(step1Start);
// Idle
await new Promise((r) => setTimeout(r, idleMs));
// Check if session survived
try {
  if (!browser.isConnected()) throw new Error("Browser disconnected");
  await page.evaluate(() => document.title); // confirm page still alive
  record.session_survived = true;
} catch {
  // Session died — cold-start reconnect
  const reconnectStart = nowNs();
  const reconnected = await provider.create();
  browser = await chromium.connectOverCDP(reconnected.cdpUrl);
  page = await browser.contexts()[0].newPage();
  record.reconnect_ms = msSince(reconnectStart);
  record.session_survived = false;
}
// Step 2: navigate with whatever session we have
const step2Start = nowNs();
await page.goto(STEP2_URL, { waitUntil: "domcontentloaded" });
record.step2_ms = msSince(step2Start);
We ran one measured session per provider, with a single warmup run at 0s idle beforehand. Each run takes around 65 seconds of wall time, so one measured run per provider is practical.
The full benchmark code is in ritza-browser-bench on the benchmarks-2026 branch.
Results
Run from Hetzner Helsinki. Idle duration: 60 seconds.
| Provider | Survived idle | Step 1 | Step 2 | Reconnect | Bills during idle |
|---|---|---|---|---|---|
| Kernel | yes | 1,602ms | 248ms | — | No (Standby Mode) |
| Steel | yes | 1,366ms | 322ms | — | Yes |
| Hyperbrowser | yes | 4,047ms | 439ms | — | Yes |
| Browserbase | yes | 3,050ms | 373ms | — | Yes |
| Anchor | yes | 3,179ms | 265ms | — | Yes |
| Browserless | no | 2,439ms | 435ms | 1,902ms | N/A (session died) |
What the numbers tell us
Five providers kept the session alive. Their step 2 times are dramatically lower than step 1: Kernel at 248ms and Steel at 322ms, compared to step 1 times over 1,300ms, because there's no session creation or CDP handshake the second time around. That gap is the cost of session overhead versus actual browser work.
Kernel is the only provider that doesn't bill for idle time. Their Standby Mode pauses billing when no CDP connection is active, so a session waiting between steps costs nothing. Browserless is the only provider where the session died: the free tier enforces a 60-second maximum session time, which terminated the connection before step 2. The benchmark reconnected (1,902ms cold start) and continued, but that reconnect cost is real in any production workflow that hits it.
Benchmark 3: Stealth and bot detection
Sites detect headless browsers by looking for signals real users don't produce: navigator.webdriver set to true, a HeadlessChrome user-agent, or fingerprints that don't match any real browser configuration. A session that trips these checks gets blocked before it does any useful work.
Beyond fingerprinting, reCAPTCHA v3 scores sessions invisibly based on IP reputation and behavioral signals. No challenge is shown, just a score between 0 and 1. Sites requiring a score above 0.7 to proceed will block or challenge sessions below that threshold. No free-tier provider scored above 0.3 in our tests, regardless of stealth configuration.
What we measure
We run each provider in default mode, then again in stealth mode if the provider supports it on the free tier. For each session we check three things:
- bot.sannysoft.com — a fingerprinting test suite that checks whether navigator.webdriver is present, whether the user-agent string contains HeadlessChrome, and around 30 other browser signals
- areyouheadless — a single-signal headless detection page that returns a plain-text verdict
- antcpt.com reCAPTCHA score — reCAPTCHA v3 behavioral and IP scoring (0–1, where 0.7+ is considered human-like)
How we tested it
Each provider runs default mode first, then stealth mode if its createStealth() method is implemented. The benchmark opens the same browser session across all three test pages:
// Default mode, or stealth mode where the provider supports it.
// useStealth is the benchmark's per-run flag.
const session = useStealth
  ? await provider.createStealth()
  : await provider.create();
const browser = await chromium.connectOverCDP(session.cdpUrl);
const page = await browser.contexts()[0].newPage();
// Check fingerprinting: any row marked "failed" means detection
await page.goto("https://bot.sannysoft.com", { waitUntil: "networkidle" });
const webdriverDetected = await page.evaluate(() =>
  [...document.querySelectorAll("tr")].some(
    (row) => row.textContent?.includes("failed") ?? false
  )
);
// Check headless detection
await page.goto("https://arh.antoinevastel.com/bots/areyouheadless");
const headlessText = await page.evaluate(() => document.body.innerText);
// Get reCAPTCHA score
await page.goto("https://antcpt.com/score_detector");
const score = await page.evaluate(() => {
  const match = document.body.innerText.match(/score[:\s]+([\d.]+)/i);
  return match ? parseFloat(match[1]) : null;
});
The full benchmark code is in ritza-browser-bench on the benchmarks-2026 branch.
Results
Run from Hetzner Helsinki, one run per provider per mode.
Fingerprinting checks (pass = not detected as a bot):
| Provider | Mode | WebDriver | Headless UA | AreYouHeadless | Overall |
|---|---|---|---|---|---|
| Browserbase | default | pass | pass | pass | pass |
| Anchor | default | pass | pass | pass | pass |
| Anchor | stealth | pass | pass | pass | pass |
| Hyperbrowser | default | pass | pass | pass | pass |
| Hyperbrowser | stealth | pass | pass | pass | pass |
| Kernel | default | pass | pass | pass | pass |
| Kernel | stealth | pass | pass | pass | pass |
| Steel | default | pass | pass | pass | pass |
| Steel | stealth | pass | pass | pass | pass |
| Browserless | default | fail | fail | fail | fail |
| Browserless | stealth | pass | pass | pass | pass |
reCAPTCHA v3 score (0.7+ = human-like; all providers use datacenter IPs by default):
| Provider | Mode | Score |
|---|---|---|
| Anchor | default | 0.3 |
| Anchor | stealth | 0.3 |
| Browserbase | default | 0.3 |
| Hyperbrowser | default | 0.3 |
| Hyperbrowser | stealth | 0.1 |
| Steel | default | 0.3 |
| Steel | stealth | 0.3 |
| Kernel | default | 0.1 |
| Kernel | stealth | 0.3 |
| Browserless | stealth | 0.3 |
What the numbers tell us
Browserless in default mode is the only provider that fails fingerprinting checks: it exposes navigator.webdriver and uses a HeadlessChrome user-agent string out of the box. Adding ?stealth=true patches both signals and the session passes cleanly. Every other provider passes fingerprinting checks in default mode without any extra configuration.
The reCAPTCHA scores are a different story: every provider across every mode scored between 0.1 and 0.3. Fingerprinting patches fix what bot.sannysoft.com detects, but reCAPTCHA v3 weighs IP reputation and behavioral signals (no mouse movement, no dwell time, no typing patterns) that datacenter-hosted automated sessions can't fake. Kernel is the only provider where stealth mode changes the score (0.1 to 0.3), because it routes traffic through a managed residential proxy included on all tiers. The proxy changes the outbound IP, which lifts the score slightly. A score of 0.3 is still well below the 0.7 human threshold, but it shows IP type is a factor, and Kernel is the only free-tier provider that changes it.
Benchmark 4: CAPTCHA solving
Most providers gate CAPTCHA solving behind paid plans. For those that don't, the billing models differ. Per-solve pricing and compute-time pricing have different cost profiles depending on volume and solve duration.
What we measure
We navigate to Google's reCAPTCHA v2 demo page and check whether the provider automatically detects and solves the challenge without any custom code beyond enabling the feature flag. We record whether the CAPTCHA was detected, whether it was solved, and how long the solve took.
For providers that don't support CAPTCHA solving on the free tier, we record the reason and move on. No session is created.
How we tested it
Each provider exposes solving through a different mechanism. Browserless fires custom CDP events:
// Browserless: enable solving via endpoint param, listen for CDP events
const cdpUrl = `wss://production-sfo.browserless.io/stealth?token=${apiKey}&solveCaptchas=true`;
const browser = await chromium.connectOverCDP(cdpUrl);
const ctx = browser.contexts()[0];
const page = await ctx.newPage();
let captchaDetected = false;
let captchaSolved = false;
const cdpSession = await ctx.newCDPSession(page);
// Browserless emits custom events outside Playwright's typed CDP surface
const cdpEmitter = cdpSession as unknown as { on(event: string, fn: () => void): void };
cdpEmitter.on("Browserless.captchaFound", () => { captchaDetected = true; });
cdpEmitter.on("Browserless.captchaAutoSolved", () => { captchaSolved = true; });
await page.goto(CAPTCHA_URL, { waitUntil: "domcontentloaded" });
// Wait up to 60s for the solve event
Kernel bundles solving into stealth: true with no separate event. We detect completion by polling the page:
// Kernel: stealth: true includes a built-in reCAPTCHA solver
// No CDP events — poll the page for the solved token
let captchaSolved = false;
const token = await page.evaluate(() => {
  const w = window as any;
  if (typeof w.grecaptcha?.getResponse === "function") {
    return w.grecaptcha.getResponse() as string;
  }
  return "";
});
if (token && token.length > 0) {
  captchaSolved = true;
}
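The snippet above shows a single check; the benchmark repeats it until the token appears or a deadline passes. A generic polling loop under the same assumptions (the hypothetical pollFn stands in for the page.evaluate() call) might look like:

```typescript
// Sketch of a poll-until-solved loop with a deadline. pollFn is a
// placeholder for whatever reads the reCAPTCHA response token from
// the page; it returns "" until the CAPTCHA is solved.
async function pollUntil(
  pollFn: () => Promise<string>,
  timeoutMs: number,
  intervalMs = 1000
): Promise<string | null> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const token = await pollFn();
    if (token.length > 0) return token; // solved
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return null; // timed out without a solve
}
```

Returning null on timeout rather than throwing keeps the benchmark loop simple: a null result is recorded as "not solved" and the run moves on.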
The full benchmark code is in ritza-browser-bench on the benchmarks-2026 branch.
Results
Run from Hetzner Helsinki, one run per provider.
| Provider | Free tier | Detected | Solved | Solve time | Cost per solve |
|---|---|---|---|---|---|
| Browserless | yes | yes | yes | 17.3s | 10 units (~1% of free monthly budget) |
| Kernel | yes | yes | yes | 38.5s | ~$0.0006 (GB-seconds) |
| Browserbase | no | — | — | — | Developer plan ($20/mo) required |
| Steel | no | — | — | — | Starter plan ($29/mo) required |
| Hyperbrowser | no | — | — | — | Paid plan required |
| Anchor | no | — | — | — | Starter plan ($50/mo) required |
What the numbers tell us
Browserless and Kernel are the only providers that offer CAPTCHA solving on a free tier. Every other provider gates it behind a paid plan.
Browserless solves in about 17 seconds via two CDP events (captchaFound and captchaAutoSolved). Enable solveCaptchas=true on the endpoint URL and the provider handles everything. Cost is 10 units per solve against the free tier's 1,000 units per month, so 100 solves before you need to upgrade.
Kernel solves in about 38.5 seconds, more than twice as long. The solve is bundled into stealth: true with no separate parameter. You navigate to the page and poll for grecaptcha.getResponse(). The cost model is different: you pay for compute time (GB-seconds) during the solve, not per solve. At roughly $0.0006 per solve, there's no per-action accounting. If you're solving at high volume, 17 seconds versus 38.5 seconds per solve adds up; if you're solving occasionally, the cost model difference probably matters more.
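A quick back-of-envelope comparison of the two free-tier cost models, using the figures above (the Kernel per-solve number is approximate):

```typescript
// Illustrative arithmetic only, based on the numbers in the table above.
// Browserless: 10 units per solve against a 1,000-unit monthly free tier.
const browserlessUnitsPerSolve = 10;
const browserlessFreeUnits = 1000;

// Solves available on the Browserless free tier before upgrading:
const freeSolves = browserlessFreeUnits / browserlessUnitsPerSolve; // 100

// Approximate Kernel compute cost (GB-seconds) for the same 100 solves,
// at roughly $0.0006 per solve:
const kernelCostPerSolve = 0.0006;
const kernelCostFor100 = freeSolves * kernelCostPerSolve; // ≈ $0.06
```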
Benchmark 5: Parallel session handling
Free-tier concurrency limits determine whether parallel execution is possible at all. Providers that support it may still degrade under load, with higher per-session times or failures that don't appear in sequential runs.
What we measure
We launch 3 sessions simultaneously using Promise.all and measure the wall-clock time for the whole batch to complete. We then compare that to the sum of what those 3 sessions would have taken sequentially to calculate an overhead ratio:
overhead_ratio = total_parallel_ms / sequential_equivalent_ms
An overhead ratio of 0.33 is ideal: 3 sessions running in parallel took exactly one-third the time of running them sequentially. A ratio of 1.0 means no parallelism benefit at all: the sessions effectively ran one at a time.
For providers whose free tier limits concurrent sessions below 3, we run the benchmark sequentially rather than hitting their limit repeatedly. The overhead ratio for those providers is 1.0 by definition.
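As a worked example of the formula, with made-up per-session times rather than benchmark data:

```typescript
// Overhead ratio as defined above: parallel wall-clock time divided by
// the sum of the individual session times (the sequential equivalent).
function overheadRatio(
  totalParallelMs: number,
  sessionTotalsMs: number[]
): number {
  const sequentialEquivalent = sessionTotalsMs.reduce((a, b) => a + b, 0);
  return totalParallelMs / sequentialEquivalent;
}

// Three sessions of 1,500ms each, finishing together in 1,700ms of
// wall time: 1700 / 4500 ≈ 0.38, close to the 0.33 ideal.
const ratio = overheadRatio(1700, [1500, 1500, 1500]);
```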
How we tested it
We launch sessions using Promise.all and compare parallel wall-clock time against the sequential equivalent:
// Run 3 sessions simultaneously via Promise.all
const batchStart = nowNs();
const promises = Array.from({ length: concurrency }, (_, i) =>
  runOneSession(provider, i)
);
const sessions = await Promise.all(promises);
const totalParallelMs = msSince(batchStart);
// Overhead ratio: how close to ideal parallel execution?
const sequentialEquivalent = sessions.reduce(
  (sum, s) => sum + (s.session_total_ms ?? 0), 0
);
const overheadRatio = totalParallelMs / sequentialEquivalent;
// Ideal: 0.33 for 3 sessions. 1.0 = no parallelism benefit.
We ran 3 batch runs per provider, with one warmup batch (concurrency 1) beforehand. Providers with a free-tier limit below 3 were automatically switched to sequential mode.
The full benchmark code is in ritza-browser-bench on the benchmarks-2026 branch.
Results
Run from Hetzner Helsinki, 3 batch runs of 3 concurrent sessions each.
| Provider | True parallel | Overhead ratio | Sessions succeeded | Free tier limit |
|---|---|---|---|---|
| Anchor | yes | 0.34 | 9 / 9 | 5 concurrent |
| Steel | yes | 0.37 | 9 / 9 | 3 concurrent |
| Kernel | yes | 0.38 | 9 / 9 | 5 concurrent |
| Browserbase | no (sequential) | 1.00 | 9 / 9 | 1 concurrent |
| Hyperbrowser | no (sequential) | 1.00 | 9 / 9 | 1 concurrent |
| Browserless | no (sequential) | 1.00 | 9 / 9 | 2 concurrent |
What the numbers tell us
Anchor, Steel, and Kernel ran 3 sessions in true parallel with overhead ratios between 0.34 and 0.38, close to the theoretical ideal of 0.33. All 9 sessions across 3 runs succeeded with no per-session degradation.
Browserbase and Hyperbrowser cap the free tier at 1 concurrent session, so the benchmark ran them sequentially. All sessions succeeded, just with no parallelism benefit. Browserless allows 2 concurrent sessions; the third connection gets an immediate HTTP 429, so the benchmark runs it sequentially too. For teams prototyping or running small-scale automation, the gap between 1 concurrent session and 5 is the difference between a tool that parallelises and one that doesn't.
Conclusion
The differences between providers are not cosmetic. Kernel not billing for idle time isn't a pricing quirk — it's a fundamentally different assumption about what browser automation workloads look like. Browserless capping free-tier sessions at one minute tells you something real about how that product is positioned.
The stealth results were the most surprising. Every provider passes basic fingerprinting checks, but none of them broke 0.3 on reCAPTCHA v3. That score doesn't measure fingerprinting. It measures how the browser behaves on the page over time, and no provider is close to solving that on a free tier.
If you want to run these tests yourself or adapt them for your own workload, the code is at ritza-browser-bench.