Anthropic announced earlier this month that it is withholding its Claude Mythos Preview model from general availability on the grounds that it is “materially more capable” at finding and exploiting software vulnerabilities than any prior release. Instead of a standard rollout, the company is shipping the model to roughly 50 organisations under a programme it is calling Project Glasswing — a defensive-only access track where the model is pointed at an organisation’s own codebase to find and patch vulnerabilities preemptively.
## What the model actually does
Per Schneier’s reading of the announcement, Mythos Preview can chain multiple bugs together and drive an exploitation workflow with “minimal human involvement” — the kind of end-to-end capability that was still aspirational a year ago. Glasswing flips that capability around: the model is given source access and tasked with producing patched pull requests before an adversary with the same tooling would find the bug. For defenders with the right access, this is the first time the offensive and defensive sides of the asymmetry have pointed in the same direction.
## The Schneier counter-read
Schneier, who has been measured about every previous “dangerous capability” announcement, offers a more sceptical frame:
- Smaller models are already close. Security firm Aisle reproduced the same vulnerability-finding results using older, cheaper, publicly available models. The capability is not uniquely Anthropic’s — it’s on a faster curve than the gatekeeping implies.
- “Finding for the purposes of fixing is easier for an AI than finding plus exploiting.” Defenders get the cheaper half of the workflow first. That advantage is real but structural: it erodes the moment a similarly capable model appears in an open-weights release.
- Messaging vs. capability. Anthropic’s PR framing (safety-constrained access, curated partner list, glassy branding) has received mostly uncritical coverage. The underlying point — that AI vuln-discovery is going to be available to whoever is willing to run a GPU — has received less.
## What this means for practitioners
Three practical questions for security leadership in the next quarter:
- Can your codebase be handed to a model? If the answer is “no”, because of provenance, compliance, or vendor-IP constraints, then Glasswing-style defensive tooling doesn’t apply to you, and the offensive side of the capability will reach you first.
- Are you measuring time-to-detect new CVEs in your stack, not just time-to-patch? Schneier’s companion post on how attackers are thinking about AI (linked above) makes the argument that the detection pipeline is where AI changes the game.
- Budget for volume. Dark Reading’s 2026 predictions piece frames the coming year as an “AI arms race” with malware autonomy; the practical implication is that both sides ship more, faster, and the triage queue (see our coverage of NIST’s CVE enrichment cut) is where the pressure shows up.
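The detect-versus-patch split in the second question above is easy to lose in a single “time to remediate” number. A minimal sketch of tracking the two intervals separately, assuming hypothetical per-CVE lifecycle records (the `CveEvent` fields and dates are illustrative, not from the source):

```python
from dataclasses import dataclass
from datetime import date
from statistics import median

@dataclass
class CveEvent:
    """One CVE's lifecycle in our stack (all fields hypothetical)."""
    cve_id: str
    published: date  # public disclosure date
    detected: date   # first flagged by our own tooling
    patched: date    # fix deployed

def time_to_detect(e: CveEvent) -> int:
    # Days between public disclosure and our own detection --
    # the interval Schneier argues AI compresses for attackers.
    return (e.detected - e.published).days

def time_to_patch(e: CveEvent) -> int:
    # Days between detection and a deployed fix.
    return (e.patched - e.detected).days

events = [
    CveEvent("CVE-2026-0001", date(2026, 1, 5), date(2026, 1, 9), date(2026, 1, 12)),
    CveEvent("CVE-2026-0002", date(2026, 1, 7), date(2026, 1, 20), date(2026, 1, 21)),
]

print("median time-to-detect (days):", median(time_to_detect(e) for e in events))
print("median time-to-patch (days):", median(time_to_patch(e) for e in events))
```

Reporting the two medians side by side makes it visible when a fast patch pipeline is masking a slow detection pipeline, which is the failure mode the volume argument predicts.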
Read Schneier’s full post for the load-bearing argument. The defensive window is real, and short.