ClawBench


153 real-world browser tasks, live websites — best AI agent scores only 33%

ClawBench is a browser agent evaluation framework built around 153 real-world tasks running on 144 live production websites — not simulated environments or curated sandboxes. Tasks span e-commerce, travel booking, SaaS dashboards, government portals, and developer tools. A built-in request interceptor blocks genuinely irreversible actions (payments, form submissions that send data) so evaluations can run safely on real sites.

The benchmark records five layers of data per run: session replays, screenshots at each decision point, raw HTTP traffic, agent reasoning traces, and browser action sequences. This makes failure analysis tractable — you can see exactly which DOM element the agent misidentified, not just a final score. The dataset is open and the evaluation harness is reproducible.

The headline finding is sobering: Claude Sonnet 4.6, the best performer, completes only 33.3% of tasks. GLM-5 is second at 24.2%. No model exceeds 50% in any individual task category. The implication is stark: current browser agents are far from autonomous on the open web, and the gap between benchmark performance and production performance is still enormous.
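ClawBench's interceptor implementation isn't published in this summary, but the idea is straightforward: let read-only traffic through and abort state-changing requests to endpoints that would trigger irreversible effects. A minimal sketch of that policy (the endpoint patterns and the `should_block` helper are illustrative assumptions, not ClawBench's actual rules):

```python
# Sketch of a safety interceptor in the spirit of ClawBench's request blocking.
# The patterns below are hypothetical examples, not the benchmark's real list.
import re

BLOCKED_PATTERNS = [
    re.compile(r"/checkout|/payments?|/orders?", re.I),  # payment endpoints (assumed)
    re.compile(r"/submit|/apply", re.I),                 # data-sending form posts (assumed)
]

def should_block(method: str, url: str) -> bool:
    """Abort state-changing requests to irreversible endpoints; allow reads."""
    if method.upper() not in {"POST", "PUT", "PATCH", "DELETE"}:
        return False  # GET/HEAD etc. are safe to let through
    return any(p.search(url) for p in BLOCKED_PATTERNS)
```

With a driver like Playwright, this predicate would be wired into network routing, e.g. `page.route("**/*", lambda route: route.abort() if should_block(route.request.method, route.request.url) else route.continue_())`, so the agent can still navigate and fill forms while the final irreversible submission is intercepted.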

Panel Reviews

The Builder


Developer Perspective

Ship

The five-layer recording (replays, HTTP traffic, reasoning traces) is the right approach for actual debugging — finally a benchmark where failure analysis is tractable. The 33% score also sets honest expectations for teams planning to ship production browser agents right now.

The Skeptic


Reality Check

Skip

Live website testing is a double-edged sword: sites change their DOM, anti-bot measures evolve, and a task that passes today may fail next week with no code change. Benchmark drift on live websites could make ClawBench scores meaningless over 6-month periods without constant maintenance.

The Futurist


Big Picture

Ship

33% on live websites is actually more impressive than it sounds given the adversarial diversity of the real web. The trajectory from 5% in 2024 to 33% in 2026 means we're likely crossing 60% in 18 months — at which point browser agents start displacing RPA software at scale.
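The 60%-in-18-months claim can be sanity-checked against the two data points the review itself cites (5% in 2024, 33% in 2026 — both the reviewer's figures, not ClawBench measurements). A quick back-of-envelope sketch:

```python
# Extrapolate the reviewer's two data points (5% at month 0, 33% at month 24)
# forward 18 months under two simple growth assumptions.
def linear_projection(p0: float, p1: float, months_elapsed: int, months_ahead: int) -> float:
    rate = (p1 - p0) / months_elapsed          # points gained per month
    return p1 + rate * months_ahead

def exponential_projection(p0: float, p1: float, months_elapsed: int, months_ahead: int) -> float:
    growth = (p1 / p0) ** (1 / months_elapsed)  # implied monthly multiplier
    return p1 * growth ** months_ahead

lin = linear_projection(5.0, 33.0, 24, 18)        # 54.0 — linear trend falls short of 60%
exp = exponential_projection(5.0, 33.0, 24, 18)   # ~136% — exponential trend saturates first
```

So the 60% forecast sits between the two extremes: it requires progress faster than linear but assumes the exponential trend bends before saturation, which is a judgment call rather than a mechanical extrapolation.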

The Creator


Content & Design

Ship

As someone who uses browser agents for research and competitor monitoring, the failure mode analysis is exactly what I need. Knowing which website categories agents handle well (dev tools) vs. poorly (government portals) helps me route tasks appropriately right now.

Community Sentiment

Overall: 520 mentions
64% positive · 26% neutral · 10% negative
Hacker News: 140 mentions
65% positive · 25% neutral · 10% negative

33% score and what it means for production browser agents

Reddit: 160 mentions
70% positive · 22% neutral · 8% negative

Comparison between Claude Sonnet 4.6 and GLM-5

Twitter/X: 220 mentions
60% positive · 28% neutral · 12% negative

Gap between benchmark hype and real-world capability