Anthropic Ships Claude Opus 4.7 with 3× Vision Resolution and New 'xhigh' Reasoning Mode
Anthropic released Claude Opus 4.7 today with 3× higher vision input resolution, a new 'xhigh' effort level for fine-grained reasoning control, and stronger performance on agentic multi-step coding tasks. Pricing holds at $5/M input, $25/M output.
Anthropic today released Claude Opus 4.7, the latest iteration of its flagship model, with a set of targeted upgrades that build on the Opus 4 architecture without a full model generation jump.
The headline capability is a 3× improvement in vision input resolution — images can now be processed at up to 2,576 pixels on the long edge, versus the previous ~850px ceiling. This meaningfully changes what vision tasks Opus can handle: dense technical diagrams, high-resolution UI screenshots, scanned documents, and detailed maps are now usable inputs without aggressive downscaling that previously caused information loss.
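Because the ceiling applies to the long edge, a 3× linear resolution jump works out to roughly 9× the pixel count after downscaling. A minimal sketch of the fit-to-ceiling arithmetic (the 2,576px and ~850px figures come from the article; the 4K screenshot size is an illustrative example, and the function name is mine):

```python
def fit_long_edge(width: int, height: int, ceiling: int) -> tuple[int, int]:
    """Scale (width, height) so the longer side is at most `ceiling`,
    preserving aspect ratio. No-op if the image already fits."""
    long_edge = max(width, height)
    if long_edge <= ceiling:
        return width, height
    scale = ceiling / long_edge
    return round(width * scale), round(height * scale)

# A 4K UI screenshot (3840x2160) under each ceiling:
old = fit_long_edge(3840, 2160, 850)    # previous ceiling -> (850, 478)
new = fit_long_edge(3840, 2160, 2576)   # Opus 4.7 ceiling -> (2576, 1449)
print(old, new)
```

At the old ceiling that screenshot keeps about 406k pixels; at the new one, about 3.7M, which is why dense diagrams and small UI text stop being lossy.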
A new 'xhigh' effort level joins the existing high/medium/low reasoning controls. Anthropic describes it as giving the model more "thinking budget" for tasks where additional deliberation improves output quality — legal reasoning, complex financial analysis, and multi-step agentic plans are the cited use cases. The tradeoff is latency: xhigh responses take noticeably longer and cost proportionally more, but for tasks where correctness matters more than speed, the option is welcome.
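As a rough illustration, selecting the effort level might look like the payload below. This is a hypothetical sketch: the field name `effort`, its placement, and the model string `claude-opus-4-7` are assumptions for illustration, not confirmed API details; only the level names come from the article. Check Anthropic's API reference for the actual parameter.

```python
# Hypothetical request payload -- field names are assumptions, not the
# documented Anthropic API shape; only the effort level names are from
# the article.
request = {
    "model": "claude-opus-4-7",          # assumed model identifier
    "effort": "xhigh",                   # one of: "low", "medium", "high", "xhigh"
    "messages": [
        {"role": "user", "content": "Plan the refactor step by step."},
    ],
}
print(request["effort"])
```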
Agentic coding improvements are notable: the model's ability to plan and execute multi-step coding workflows — where it issues multiple tool calls, writes and revises code, and navigates project structure across turns — is measurably improved on the GDPval-AA benchmark suite. Teams running Claude Code or custom coding agents will likely see the biggest practical benefit here.
Pricing stays flat at $5/M input tokens, $25/M output tokens, keeping Opus 4.7 in the same tier as its predecessor.
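At those per-million-token rates, per-request cost is simple arithmetic. A quick helper (the 12k-in / 3k-out token counts are illustrative, not from the article):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_price: float = 5.00, out_price: float = 25.00) -> float:
    """Cost in USD at per-million-token prices ($5/M input, $25/M output)."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Example: a call with 12,000 input tokens and 3,000 output tokens.
print(f"${request_cost(12_000, 3_000):.3f}")  # $0.135
```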
Panel Takes
The Builder
Developer Perspective
“The vision resolution jump and xhigh effort mode are the two things that actually matter here. High-res vision means I can pass in real screenshots instead of carefully cropped fragments. xhigh is interesting for complex planning tasks where I currently have to prompt-engineer around shallow responses. Real upgrades, not just marketing.”
The Skeptic
Reality Check
“A 4.7 mid-cycle release is Anthropic's standard maintenance cadence now, not a breakthrough. The xhigh effort mode is a fancy way of saying 'we charge you more and think longer.' GDPval-AA benchmark improvements are self-reported. Solid incremental upgrade — just don't oversell it.”
The Futurist
Big Picture
“The pattern emerging across labs is clear: vision quality, reasoning effort control, and agentic reliability are the three levers being pulled in parallel. The fact that Anthropic ships vision resolution improvements on an Opus point release suggests they're catching up fast on multimodal depth — previously a Google DeepMind advantage.”