Anthropic’s decision to restrict its Claude Mythos Preview model to a small set of defensive partners was framed as a vulnerability-discovery containment measure. Bruce Schneier, writing on May 8, makes a sharper argument: the containment is mostly performative because competitors already have comparable capability — and the real systemic risk is much bigger than any single CVE.
What Mythos actually does
Mythos is a Claude variant that Anthropic says is so effective at finding software flaws that it will not be made generally available. Instead, the company is letting select organizations point it at their own codebases. The headline data point Schneier references: Mozilla used Mythos to surface 271 vulnerabilities in Firefox, all of which have since been fixed, closing them off before attackers could exploit them.
The catch is that OpenAI’s GPT-5.5 demonstrates what Schneier calls comparable capability, and smaller, cheaper models have already reproduced the underlying technique on public benchmarks. Anthropic’s restriction policy may slow the proliferation curve for a few weeks. It does not change the curve itself.
The asymmetry that matters
The core argument is about defender-versus-attacker speed. AI is now meaningfully better at finding and exploiting vulnerabilities than it is at finding and fixing them. The fix step requires a code change, a regression test, a release process, and a deployment window — none of which a frontier model can run end-to-end for an organization that isn’t already automating its security pipeline. Discovery, by contrast, runs as fast as inference.
Schneier’s near-term forecast is therefore counterintuitive but coherent: defenders gain a permanent advantage on systems they actively scan with these models (every Mozilla-grade organization), while attackers gain a transient but real advantage on every system that isn’t being scanned (most of the long tail). The gap closes only when scanning becomes default infrastructure rather than a sponsored research program.
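The speed gap behind this forecast can be sketched as a toy model. The constants and function names below are illustrative assumptions of mine, not figures from Schneier's essay: discovery runs on inference time, remediation runs on release-process time, and an unscanned system never starts the remediation clock at all.

```python
# Toy model of the discovery/remediation asymmetry. The numbers are
# illustrative assumptions, not data from the essay.

DISCOVERY_H = 2.0          # AI-assisted discovery: roughly inference speed
REMEDIATION_H = 14 * 24.0  # code change + regression test + release + deploy

def attacker_window(scanned: bool, attacker_discovery_h: float) -> float:
    """Hours a flaw stays exploitable once an attacker's model finds it.

    scanned: the defender runs discovery continuously, so the fix lands
    DISCOVERY_H + REMEDIATION_H hours after the flaw appears.
    """
    if scanned:
        fix_at = DISCOVERY_H + REMEDIATION_H
        return max(0.0, fix_at - attacker_discovery_h)
    # Unscanned long tail: no defender clock is running, so the window
    # stays open until something else closes it.
    return float("inf")
```

Under these assumptions, a scanned system bounds attacker exposure at roughly the remediation lag, while an unscanned one does not bound it at all. That is the sense in which scanning-by-default, not model restriction, is what changes the curve.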
Why software is the easy version of this problem
The part of the essay practitioners should sit with is what comes after software. Mythos-class models are not specialized for code. They are general reasoners pointed at a rule system, and software is one of many rule systems in modern life. Tax codes, insurance policies, election regulations, healthcare billing schemes, building codes, content-moderation rules, and the contractual surface of large platforms are all rule systems with exploitable structure. Schneier flags this directly, noting that the same machinery that finds integer overflows in OpenSSL can be pointed at the U.S. tax code or a state's election law and asked the same question: where is the loophole?
What this means
For security leaders, the immediate move is straightforward: get vulnerability-discovery AI into your defensive pipeline before someone else gets it into their offensive pipeline. The harder, longer conversation is governance. If your organization owns or operates a rule system that matters — a benefits program, a marketplace policy, a fraud-detection ruleset — assume an AI capable of mining it for loopholes already exists, and start designing those rules with adversarial AI testing in the loop. Schneier closes with a line worth keeping on the wall: adapting to this new reality will be hard, but we do not have any choice.
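A minimal sketch of what "into your defensive pipeline" can mean in practice, assuming a hypothetical `ai-scan` command and JSON output format (both are stand-ins for whatever discovery tool an organization actually adopts): run the scanner on every merge and fail the build on any finding not already in a triaged baseline.

```python
import json
import subprocess
import sys

def new_findings(scan_json: str, baseline: set[str]) -> set[str]:
    """Finding IDs in the scanner's JSON output not yet triaged."""
    return {f["id"] for f in json.loads(scan_json)} - baseline

def gate(repo_path: str, baseline: set[str]) -> int:
    """CI exit code: nonzero if the scan surfaces anything new."""
    result = subprocess.run(
        ["ai-scan", "--format", "json", repo_path],  # hypothetical CLI
        capture_output=True, text=True, check=True,
    )
    new = new_findings(result.stdout, baseline)
    for finding_id in sorted(new):
        print(f"new finding: {finding_id}", file=sys.stderr)
    return 1 if new else 0
```

The baseline set is what turns a one-off research scan into default infrastructure: every change either stays clean or forces an explicit triage decision before it ships.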