Anthropic Launches Claude Managed Agents — $0.08/hr to Run Your AI Agents in Anthropic's Cloud
Anthropic launched Claude Managed Agents on April 8, a fully managed cloud infrastructure for running autonomous Claude agents. The platform handles sandboxed execution, credential management, state persistence, and multi-agent coordination at $0.08 per agent runtime hour. Early customers include Notion, Rakuten, Sentry, and Asana.
## Anthropic Launches Claude Managed Agents
On April 8, 2026, Anthropic shipped Claude Managed Agents — a fully managed cloud harness for running Claude autonomously in production. The launch addresses the core pain point that has slowed enterprise agent adoption: the infrastructure tax of building reliable agentic systems from scratch.
### What It Is
Claude Managed Agents handles everything below the application layer: isolated container provisioning per agent, state checkpointing for long-running sessions (hours, not minutes), scoped credential management, tool orchestration, and error recovery. The Sessions API introduces two primary endpoints — `POST /v1/sessions` to create and run an agent session, and `GET /v1/sessions/{id}/stream` for SSE streaming of agent outputs. The Environments API lets teams configure container templates with pre-installed tooling.
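The article names the two Sessions API endpoints but not their schemas. As a hedged sketch, here is how creating a session might look; the base URL, model name, and payload fields (`model`, `environment`, `instructions`) are assumptions for illustration, not a documented contract:

```python
import json
import urllib.request

API_BASE = "https://api.anthropic.com"  # assumed base URL

def build_create_session_request(api_key: str, prompt: str, environment: str) -> urllib.request.Request:
    """Build a POST /v1/sessions request (endpoint from the announcement).

    The payload fields below are illustrative assumptions, not a
    documented schema.
    """
    body = json.dumps({
        "model": "claude-sonnet-4-5",     # assumed model identifier
        "environment": environment,        # an Environments API template name (assumed)
        "instructions": prompt,
    }).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/v1/sessions",
        data=body,
        headers={
            "x-api-key": api_key,
            "content-type": "application/json",
        },
        method="POST",
    )

req = build_create_session_request("sk-...", "Triage open issues", "python-ci")
print(req.full_url)      # https://api.anthropic.com/v1/sessions
print(req.get_method())  # POST
```

A client would then follow up with `GET /v1/sessions/{id}/stream` and consume the SSE stream of agent outputs.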
Pricing is additive: standard Claude token costs plus **$0.08 per agent runtime hour**. For a team running 100 agent-hours per week, that's $8 — trivial compared to the engineering cost of building equivalent infra.
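The surcharge math is simple enough to sanity-check. A minimal sketch using the $0.08/hr rate from the announcement (token costs excluded, since those are billed separately):

```python
RUNTIME_RATE_USD_PER_HOUR = 0.08  # Managed Agents surcharge from the launch pricing

def runtime_cost(agent_hours: float) -> float:
    """Runtime surcharge only; standard Claude token costs are separate."""
    return agent_hours * RUNTIME_RATE_USD_PER_HOUR

print(f"${runtime_cost(100):.2f}")     # $8.00 per week for 100 agent-hours
print(f"${runtime_cost(24 * 7):.2f}")  # $13.44 per week for one agent running 24/7
```

As the numbers show, the fee stays small per agent; it only becomes material at fleet scale, a point the Skeptic raises below.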
### Same-Day Releases
Anthropic shipped two companion tools on April 8 alongside Managed Agents:
- **Ant CLI**: The official command-line client for the Claude API, with YAML-native versioning of agent configs, prompts, and tool definitions. It treats Claude infrastructure as deployable code artifacts.
- **Advisor Tool (beta)**: A new API pattern that pairs a fast executor model with a higher-intelligence advisor model for mid-generation reasoning steps. Beta header: `advisor-tool-2026-03-01`.
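The only concrete detail published for the Advisor Tool is the beta flag value. A hedged sketch of the request headers; the `anthropic-beta` header name follows the convention of other Anthropic API betas and is an assumption here, as is the version string:

```python
def advisor_headers(api_key: str) -> dict:
    """Headers for an Advisor Tool (beta) request.

    The beta flag value comes from the announcement; the header names
    around it are assumed from Anthropic's existing API conventions.
    """
    return {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",            # assumed API version
        "anthropic-beta": "advisor-tool-2026-03-01",  # flag from the announcement
        "content-type": "application/json",
    }

headers = advisor_headers("sk-...")
print(headers["anthropic-beta"])  # advisor-tool-2026-03-01
```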
### Industry Context
The launch comes two weeks after OpenAI expanded its own agent runtime and one week after Google deepened Gemini's agent orchestration in Vertex AI. Anthropic has been notably cautious about productizing agents — the Managed Agents launch signals a shift from "best model" positioning to "best agentic platform" strategy.
Internal benchmarks cited at launch showed a **10-point improvement in task success rates** versus standard API prompting on the same tasks, attributed primarily to better state management and error recovery loops. Early access customers Notion, Rakuten, Sentry, and Asana are in production.
## Panel Takes
### The Builder: Developer Perspective
“This solves a real infrastructure problem. Managing sandboxes, state persistence, and credential scoping for long-running agents is months of DevOps work — I've built parts of this twice and it's never fun. At $0.08/hr, I'll happily pay Anthropic to handle it rather than reinvent that wheel.”
### The Skeptic: Reality Check
“The 10-point benchmark improvement claim needs scrutiny — what tasks, what baseline, what success criteria? And $0.08/hr sounds cheap until your agents are running 24/7 at scale. The bigger concern is vendor lock-in: agent configs, state management, and environment templates are now Anthropic-proprietary formats.”
### The Futurist: Big Picture
“This is Anthropic declaring that the platform layer is as important as the model layer. The real competition isn't which model is smartest — it's which cloud makes agents most reliable and cheapest to operate at scale. Anthropic running your agents means they also get telemetry on failure modes, which compounds into model improvement over time.”