What Is Claude Mythos? Why Security Teams Need to Act Now
Written by
Brandon Veiseh
Anthropic just announced something that should put every security and engineering team on alert. Claude Mythos Preview is a new frontier AI model, and the company behind it is so concerned about its offensive cybersecurity capabilities that they have chosen not to release it to the public. Instead, they've launched Project Glasswing, a coordinated initiative to give defenders a head start before these capabilities inevitably spread.
Here's what you need to know and what you can do about it today.
What Is Claude Mythos and Why Won't Anthropic Release It?
Claude Mythos Preview is Anthropic's most capable model to date, a general-purpose frontier model that sits above their existing Claude Opus, Sonnet, and Haiku tiers. But what makes Mythos unprecedented isn't its general intelligence. It's its ability to autonomously discover and exploit software vulnerabilities at a scale and speed that no human, and no previous AI model, has come close to matching.
In internal testing over the past several weeks, Mythos identified thousands of high-severity zero-day vulnerabilities across every major operating system and every major web browser. Some of these flaws had survived decades of human code review and millions of automated security scans. One was a 27-year-old bug in OpenBSD, an operating system renowned for its security posture. Another was a 16-year-old vulnerability in FFmpeg that had evaded every fuzzer thrown at it, despite being executed over 5 million times by automated testing tools. A third, CVE-2026-4747, was a 17-year-old remote code execution flaw in FreeBSD that grants unauthenticated root access, and Mythos found and exploited it fully autonomously.
The performance gap is staggering. When benchmarked against Claude Opus 4.6 on Firefox JavaScript engine vulnerabilities, Opus produced working shell exploits twice out of several hundred attempts. Mythos succeeded 181 times, a roughly 90x improvement.
Anthropic did not train Mythos specifically for offensive security. According to their technical report, these capabilities "emerged as a downstream consequence of general improvements in code, reasoning, and autonomy." That's the part that should concern you most: every future frontier model, from any lab, is on this same trajectory.
What Is Project Glasswing?
Because of these capabilities, Anthropic assembled a coalition of twelve major technology, finance, and open-source organizations, including AWS, Apple, Microsoft, Google, CrowdStrike, Palo Alto Networks, NVIDIA, Cisco, Broadcom, JPMorgan Chase, and the Linux Foundation, and gave them early access to Mythos Preview exclusively for defensive security work. The initiative also extends to more than 40 additional organizations that maintain critical software infrastructure.
Anthropic is backing the effort with $100 million in usage credits and $4 million in direct donations to open-source security organizations. The goal is straightforward: find and patch the most critical vulnerabilities in the world's most widely used software before models with similar capabilities reach adversaries.
As CrowdStrike's CTO Elia Zaitsev noted, adversaries will inevitably look to exploit the same capabilities, and the right response is not to slow down but to move together, faster.
Why Should Engineering and Security Teams Be Worried?
The economics of vulnerability discovery just collapsed. Work that once required a skilled human researcher spending weeks can now be done autonomously in hours for a few dollars of compute. According to one analysis, Mythos found the OpenBSD vulnerability for roughly $50 in compute.
This isn't theoretical. It's happening now, inside a gated program. And as Anthropic's Frontier Red Team Cyber Lead told VentureBeat, frontier AI capabilities are likely to advance substantially over just the next few months. Other frontier labs are on similar trajectories, and open-source models will eventually close the gap.
Most organizations' application security programs were built around a core assumption: finding and exploiting vulnerabilities is hard, expensive, and slow. That assumption is no longer valid. Every legacy codebase, every open-source dependency, every piece of internal tooling is now exposed in a way it has never been before.
The companies inside Project Glasswing get a head start. Everyone else needs to start hardening now.
How Can You Protect Your Applications Before These Capabilities Spread?
This is exactly the category of problem that MindFort was built to solve. MindFort deploys autonomous AI security agents that continuously red-team your applications, discover exploitable vulnerabilities, and remediate them, generating patches as pull requests with full threat model explanations.
Rather than waiting for an annual pen test or triaging thousands of false positives from a traditional DAST scanner, MindFort's agents work around the clock across your entire stack. They probe authentication flows, business logic, and APIs: the same classes of vulnerability that Mythos has proven AI can now find at scale.
What makes this particularly urgent is the self-learning capability built into MindFort's agents. As your codebase evolves, the agents adapt, learning your infrastructure topology, application behavior, and organizational patterns to become more effective over time. They don't just find vulnerabilities once. They continuously probe, try new attack methods, and remember past attempts, much like the always-on AI Red Team agents that Project Glasswing partners are now using to defend their own systems.
When a vulnerability is found, MindFort doesn't just file a ticket. It generates a PR with the minimal code change needed to fix the issue, complete with a threat model explaining what was found and how the fix works. For teams stretched thin, this automated remediation closes the loop between discovery and resolution in minutes, not weeks.
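To make this class of finding concrete, here is a hypothetical illustration (not actual MindFort output; all names and data are invented for this sketch) of a common business-logic flaw, an insecure direct object reference, alongside the kind of minimal code change an automated remediation PR would propose:

```python
# Hypothetical sketch of an IDOR (insecure direct object reference) flaw
# and its minimal fix. All identifiers and data here are invented.

INVOICES = {
    101: {"owner": "alice", "amount": 420},
    102: {"owner": "bob", "amount": 99},
}

def get_invoice_vulnerable(invoice_id, requesting_user):
    # BUG: returns any invoice by ID without checking ownership.
    # An attacker can simply enumerate IDs to read other users' data.
    return INVOICES.get(invoice_id)

def get_invoice_fixed(invoice_id, requesting_user):
    # FIX: the minimal change a remediation PR would make, with the
    # threat model (ID enumeration -> data exposure) in its description.
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        return None  # deny access rather than leak another user's record
    return invoice

# "bob" can read alice's invoice through the vulnerable path...
assert get_invoice_vulnerable(101, "bob") is not None
# ...but not through the fixed one, while legitimate access still works.
assert get_invoice_fixed(101, "bob") is None
assert get_invoice_fixed(101, "alice")["amount"] == 420
```

No scanner signature catches this flaw, because the code is syntactically fine; only an agent that reasons about who should be allowed to fetch which object will flag it, which is why business-logic testing matters here.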
What Should You Do Right Now?
The window between defenders and attackers having access to these AI capabilities is narrow. The Linux Foundation's CEO Jim Zemlin put it clearly: open source maintainers, whose software underpins much of the world's critical infrastructure, have historically been left to figure out security on their own. That is changing, but most organizations are not yet prepared.
Waiting for your next annual pen test is no longer a viable security strategy. Start hardening your applications today. Deploy continuous, autonomous security testing that keeps pace with your release cycle. And accept that the era of AI-powered vulnerability discovery is here. The only question is whether you find the bugs first, or an attacker does.
See how autonomous security agents can protect your applications →

About the author
Brandon Veiseh
Co-Founder & CEO of MindFort. Previously led product at ProjectDiscovery and built AI tools for offensive security at NetSPI. Founded his first startup building NLP models for network packet inspection.