Key Takeaways
- Anthropic built its most powerful AI model yet — Claude Mythos Preview — and decided it was too dangerous to release publicly, marking a first for the industry.
- The model can autonomously find and exploit zero-day vulnerabilities across every major OS and browser, and has already identified thousands of high-severity flaws in just weeks of testing.
- Project Glasswing puts Mythos to work for defense only, giving vetted partners like AWS, Microsoft, Apple, CrowdStrike, and Google access to scan and secure critical infrastructure.
- As CrowdStrike’s CTO put it, the window between a vulnerability being discovered and exploited has collapsed — what once took months now happens in minutes with AI.
- Frontier models raise the ceiling for both offense and defense simultaneously. Enterprise IT and security leaders who treat AI strategy and security strategy as separate workstreams are creating a gap adversaries will find.
Something unusual happened in the AI industry this week. Anthropic, one of the leading AI safety companies in the world, built a model it decided was too dangerous to release to the public.
That alone is worth pausing on. In an industry defined by rapid releases, benchmark races, and competitive pressure to ship, a frontier lab chose to withhold its most capable model rather than put it into general circulation.
The model is called Claude Mythos Preview. The initiative built around it is called Project Glasswing. Together, they represent a meaningful inflection point — not just for cybersecurity, but for how enterprise and IT leaders should think about AI risk and readiness.
What is Claude Mythos Preview?
Claude Mythos Preview is Anthropic’s most advanced AI model to date. It is a general-purpose frontier model — the same category of system that powers tools like Claude Code or ChatGPT — but with cybersecurity capabilities that are, by Anthropic’s own assessment, in a different class from anything that has come before it.
In pre-release testing, the model demonstrated the ability to autonomously identify zero-day vulnerabilities — software flaws previously unknown to developers — across every major operating system and every major web browser. More concerning, it did not just find the vulnerabilities. It could weaponize them: writing code to exploit them, chaining multiple vulnerabilities together, and constructing viable attack paths through complex software systems with minimal human guidance.
Logan Graham, who leads offensive cyber research at Anthropic, described the model’s behavior as notable for its autonomy and what he called its “long-rangedness” — the ability to connect multiple findings into coherent, dangerous chains. On the Firefox 147 benchmark, Mythos developed working exploits in 181 cases, compared with just 2 for Claude Opus 4.6. That is not an incremental improvement. That is a different order of capability.
Frontier models raise the ceiling for both offense and defense. The question is who gets there first.
Why Anthropic didn’t release it
Anthropic’s decision to withhold the model is unprecedented for a leading AI company. It is the first time in nearly seven years that a major AI lab has published a full system card for a model without making it generally available. The reasoning is direct: the cybersecurity capabilities of Mythos Preview are inherently dual-use. The same model that finds a vulnerability for a defender to patch can, in different hands, find the same vulnerability for an attacker to exploit.
Anthropic has reportedly already warned senior government officials that models of this capability make large-scale AI-driven cyberattacks significantly more likely in the near term. The company’s position is pragmatic rather than alarmist. Given the pace of AI progress, they acknowledge that capabilities like these will eventually proliferate. The goal of their approach is not to suppress the technology but to give defenders a head start.
That framing matters for enterprise leaders. We are not in a theoretical future where AI-augmented attacks might happen. According to Anthropic, we are in the most dangerous transition period right now — the window when offensive capabilities may outpace the defensive infrastructure organizations have built.
What Project Glasswing actually is
Project Glasswing is Anthropic’s answer to that danger. Rather than sitting on the model or releasing it broadly, Anthropic has made Mythos Preview available to a carefully vetted group of major technology companies and critical infrastructure organizations for one purpose: defensive security work.
The initial partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Beyond that core group, access has been extended to more than 40 additional organizations responsible for building or maintaining critical software infrastructure, including open-source maintainers whose code underpins systems used by hundreds of millions of people worldwide.
Anthropic is committing up to $100 million in usage credits to support the effort, along with $4 million in direct donations to open-source security organizations including the Linux Foundation’s Alpha-Omega project and the Apache Software Foundation.
The model is accessible through the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry — but only to organizations on the approved list. For everyone else, it is not available at any price.
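For approved organizations, a request through the Claude API would follow the standard Messages API shape. The sketch below is purely illustrative: the model identifier is an assumption (Anthropic has not published one for Mythos Preview), and an actual call would require both an API key and a place on the approved list.

```python
# Hypothetical illustration only. Anthropic has not published a model
# identifier for Mythos Preview, and access is limited to vetted partners.
MODEL_ID = "claude-mythos-preview"  # assumed name, not a real identifier

def build_scan_request(source_code: str) -> dict:
    """Build a Messages API payload asking the model to review code defensively."""
    return {
        "model": MODEL_ID,
        "max_tokens": 2048,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Review the following code for exploitable vulnerabilities "
                    "and suggest patches:\n\n" + source_code
                ),
            }
        ],
    }

# With the official anthropic SDK and an approved key, a partner would send
# this payload with something like:
#   client = anthropic.Anthropic()
#   response = client.messages.create(**build_scan_request(code))
```

The same payload shape applies when the model is invoked through Amazon Bedrock, Vertex AI, or Microsoft Foundry, with each platform's own client and authentication in front of it.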
What the partners are finding
Early results from Project Glasswing partners underscore why the initiative exists. In just the first few weeks of access, the model has identified thousands of high-severity vulnerabilities, including some that had gone undetected for decades.
Anthropic identified a 17-year-old remote code execution flaw in FreeBSD and a 27-year-old bug in OpenBSD — an operating system specifically known for its security posture.
AWS has been testing the model against its own critical codebases, where it is already contributing to code hardening.
Microsoft validated the model against CTI-REALM, its open-source security benchmark, and reported substantial improvements over previous models. Apple, Google, and others are using the model to scan and secure the foundational systems their products depend on.
Perhaps the most meaningful signal came from CrowdStrike.
Elia Zaitsev, CrowdStrike’s Chief Technology Officer, framed the urgency precisely: the window between a vulnerability being discovered and being exploited by an adversary has collapsed. What once took months now happens in minutes with AI. That compression of time is not a future projection. It is the operational reality security teams are navigating today.
What this means for enterprise IT and security leaders
Project Glasswing is primarily a cybersecurity initiative, but its implications extend into every layer of enterprise technology strategy.
First, the threat surface has changed. AI models capable of autonomous vulnerability discovery and exploit development are now real, not theoretical. Organizations that have not stress-tested their security posture against AI-augmented attacks are operating on an outdated threat model.
Second, the cloud infrastructure that organizations run on is at the center of this. The vulnerabilities being discovered by Mythos are in operating systems, browsers, and foundational software. The patch cycles, exposure windows, and vendor responsiveness of your cloud environment are now directly implicated in cybersecurity risk in ways they were not two years ago.
Third, AI procurement and deployment decisions carry new weight. The organizations in the Project Glasswing coalition have access to capabilities the broader market does not. That asymmetry — between organizations with cutting-edge defensive AI and those without — will define enterprise risk profiles over the next 12 to 24 months.
Fourth, the responsible disclosure ecosystem is under pressure. Industry-standard 90-day disclosure windows were designed around human discovery rates. When a model can find thousands of vulnerabilities in weeks, the coordination infrastructure for responsible disclosure starts to strain. Organizations that depend on vendor patch cycles need to factor in the possibility that those timelines will not hold.
The dual-use reality every leader needs to understand
There is a temptation to view Project Glasswing as a cybersecurity story and stop there. It is more than that. It is a demonstration of a dynamic that will define enterprise AI strategy for the next several years: frontier models raise the ceiling for both offense and defense simultaneously.
The same model capabilities that allow CrowdStrike to find and patch a critical flaw in Linux before an adversary can exploit it also, in other hands, represent an unprecedented offensive weapon. Anthropic’s caution about public release is not a sign that this technology is contained. It is a sign that the company understands the asymmetry — and that the window for defenders to establish an advantage is narrow.
For enterprise IT and security leaders, the practical implication is that AI is no longer a productivity tool sitting adjacent to your security posture. It is now a central variable in it. Organizations that treat AI strategy and security strategy as separate workstreams are creating a gap that adversaries will find.
A final word
Cloud Latitude works with organizations across industries to align cloud, AI, and technology strategy with business objectives — at zero cost to clients. As AI capabilities continue to advance, the intersection of cloud infrastructure, AI deployment, and cybersecurity is becoming one of the most consequential areas of enterprise decision-making.
If your organization is thinking through how developments like Project Glasswing affect your cloud or AI strategy, our advisors can help you assess your current posture and identify the right path forward. The advice is independent and vendor-neutral.