Anthropic's Project Glasswing Enlists Big Tech to Secure Open Source — With an Unreleased Claude Model
Anthropic launched Project Glasswing on April 9, assembling AWS, Apple, Google, Microsoft, JPMorganChase, and NVIDIA to secure critical open-source software infrastructure — backed by $100M in usage credits, $4M in donations to OSS security orgs, and a previously unreleased model called Claude Mythos Preview.
Anthropic has assembled the most significant open-source security coalition in years with Project Glasswing — a new initiative that combines commercial resources with a previously unreleased AI model to identify and fix vulnerabilities in critical open-source software.
At the center of Glasswing is **Claude Mythos Preview**, an unreleased Anthropic model that the company says can find software vulnerabilities at a level rivaling all but the most elite human security researchers. Using an unreleased model is notable: it suggests Anthropic is holding back capabilities specifically for high-trust, high-stakes deployments before broader release — a preview of what a "safety-first rollout" looks like in practice.
The $100M in usage credits is structured to give open-source security projects access to Anthropic's models for vulnerability research, while the $4M in direct donations flows to established open-source security organizations. Backers include AWS, Apple, Google, Microsoft, JPMorganChase, and NVIDIA — a rare cross-competitive coalition that signals the industry has agreed software supply chain security is pre-competitive infrastructure.
Project Glasswing arrives at a critical moment: the AI era has dramatically increased the volume of code being shipped, much of it AI-generated, and open-source security tooling has not kept pace. Anthropic is betting that AI can be used to secure the AI-accelerated software ecosystem — with a model powerful enough to actually move the needle. If Claude Mythos Preview can find novel CVEs at scale and coordinate responsible disclosure with maintainers, this could be one of the most impactful AI safety-adjacent deployments of 2026.
**Panel Takes**

**The Builder (Developer Perspective)**

“A $100M resource commitment with a top-tier vulnerability-finding model could meaningfully reduce the attack surface of open-source software. The key question is governance — who decides which projects get prioritized, and how are coordinated disclosures handled with maintainers who are often volunteers?”

**The Skeptic (Reality Check)**

“Good PR, uncertain impact. Critical open-source projects have been chronically underfunded for decades, and past corporate coalitions haven't fixed that. The real test is verifiable results: confirmed CVEs found and patched within 12 months. Without that, this is a press release.”

**The Futurist (Big Picture)**

“Anthropic just established a precedent: frontier AI models as infrastructure security tools, deployed collectively by the industry. If Claude Mythos Preview can find novel vulnerabilities at scale, we may look back at Glasswing as the moment AI became essential to software trustworthiness — not just productivity.”