CoreWeave Lands Anthropic in Multi-Year Cloud Deal — Stock Jumps 13%
CoreWeave and Anthropic announced a multi-year cloud infrastructure agreement to run Claude workloads at production scale across NVIDIA GPU clusters in 43 US data centers, sending CoreWeave stock up 13% and making Anthropic the ninth of the ten leading AI model providers to become a CoreWeave customer.
CoreWeave announced on April 10, 2026 that it has signed a multi-year agreement with Anthropic to provide cloud infrastructure for the Claude family of AI models. The deal gives Anthropic access to CoreWeave's NVIDIA GPU clusters across 43 US data centers, with capacity coming online in phases starting later this year.
The announcement pushed CoreWeave's stock up 13% on the day, a significant move for the recently IPO'd company, which has positioned itself as the infrastructure layer for frontier AI. With Anthropic on board, CoreWeave now counts nine of the ten leading AI model providers as platform customers, a remarkable consolidation of AI compute spend at a single independent cloud provider.
The deal is notable because Anthropic already has substantial compute agreements with Google and Broadcom through TPU deals announced earlier in April. Running Claude on CoreWeave's NVIDIA infrastructure in parallel signals that Anthropic is deliberately multi-cloud on compute, hedging against hardware availability constraints as its run-rate revenue has reportedly surged to $30 billion annually.
Financial terms were not disclosed, but deals of this nature for frontier model providers typically run into nine-figure annual commitments. CoreWeave's ability to land Anthropic despite competing against hyperscalers like AWS, Google Cloud, and Azure on price and access underscores its differentiated value proposition: pure-play GPU density with less overhead than full-stack cloud providers.
For the AI infrastructure market, this reinforces CoreWeave's emerging role as the de facto neutral backbone for frontier AI — analogous to what Equinix became for internet infrastructure. The question is whether its independence is durable as hyperscalers accelerate their own GPU fleet buildouts.
Panel Takes
The Builder
Developer Perspective
“Anthropic going multi-cloud on compute is smart infrastructure hygiene, but the more interesting signal is CoreWeave's ability to compete with Google's own TPU infrastructure. For developers, more competition in AI compute should eventually mean lower API costs and better reliability SLAs.”
The Skeptic
Reality Check
“CoreWeave's stock popping 13% on a deal with no disclosed financials tells you more about investor narrative than business fundamentals. The AI compute market is winner-take-all until the hyperscalers finish their GPU buildouts — CoreWeave's independent window may be narrower than the market is pricing.”
The Futurist
Big Picture
“CoreWeave landing nine of the ten top model providers is an infrastructure lock-in story playing out in real time. If they execute on reliability and price, they become the Switzerland of AI compute — trusted by all sides precisely because they're not a competitor in the model layer. That's a durable moat if they can maintain it.”