They're Not Releasing It to You. That's the Point.
- Rich Washburn

- Apr 9
- 4 min read


Two stories dropped this week that most people read separately. They're the same story.
Anthropic is previewing its next model — codenamed Mythos — to about 40 major enterprise partners through a program called Glass Wing. Not the public. Not developers. Not you. Microsoft, Oracle, and a handful of other institutional players get early access, specifically because Mythos is so capable at finding security vulnerabilities that Anthropic didn't feel comfortable releasing it broadly until the attack surface it exposes can be at least partially patched.
Simultaneously, Axios is reporting that OpenAI's next model — internally called Spud — will also receive a limited rollout to a small group of companies, with no public release planned.
Two of the most powerful AI labs on earth. Two new frontier models. Both going to the same class of institutional partner first.
That's not a coincidence. That's a policy.
What Mythos Actually Did
Let me put on my other hat for a second. Before any of this — the writing, the AI strategy work — I spent years in cybersecurity. CEH certified. The kind of work where you don't just understand the defense posture, you understand how the other side thinks, what they're looking for, and why a 27-year-old vulnerability in a production system is actually more dangerous than a brand new one. So when I say what Anthropic found with Mythos is significant, I mean it differently than most people writing about this.
Here's what they reported: Mythos autonomously analyzed codebases and surfaced thousands of previously unknown zero-day vulnerabilities. Not helped a researcher find them. Not suggested where to look. Found them. Independently. At scale.
Two examples they made public: a 27-year-old vulnerability in FreeBSD — one of the most hardened Unix operating systems in existence, the kind that underpins high-security infrastructure — and a 16-year-old vulnerability in FFmpeg, the audio/video codec library embedded in essentially everything that processes media. Twenty-seven years. In FreeBSD. Undiscovered. That's not a parlor trick. That's a fundamental shift in what the threat landscape looks like.
Why the Old Playbook Just Got Harder
Here's what people outside the security world don't fully grasp: most successful cyberattacks don't exploit brand-new vulnerabilities. They exploit old ones that nobody got around to patching. Legacy code. Forgotten libraries. Systems that haven't been touched in years because they work and nobody wants to break them.
The entire enterprise security model is built on a fundamental assumption: that finding zero-days is hard. It requires rare expertise, significant time investment, and a level of patience and precision that limits who can do it. Nation-state actors. Sophisticated criminal organizations. Well-funded APT groups. That assumption just became significantly less reliable.
When a model like Mythos can scan a codebase and surface vulnerabilities that escaped human detection for nearly three decades — at machine speed, at scale, without fatigue — the scarcity that made zero-day discovery expensive and rare starts to erode.
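To make the shift concrete, here's a deliberately toy sketch of the oldest form of automated scanning: pattern-matching a codebase for calls that have historically produced memory-safety bugs in legacy C. A model like the one described reasons about data flow and semantics far beyond anything like this — every name, pattern, and snippet below is illustrative, not Anthropic's method — but even this trivial loop runs identically over a million files, at machine speed, without fatigue.

```python
import re

# Illustrative only: C library calls long associated with buffer overflows.
RISKY_CALLS = {
    "strcpy":  "unbounded copy; classic buffer-overflow source",
    "sprintf": "unbounded format write",
    "gets":    "reads input without a length limit",
}

def scan_source(source: str) -> list[tuple[int, str, str]]:
    """Return (line_number, call, reason) for each risky call found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, reason in RISKY_CALLS.items():
            if re.search(rf"\b{call}\s*\(", line):
                findings.append((lineno, call, reason))
    return findings

# A hypothetical fragment of decades-old legacy C.
legacy_c = """
void greet(char *name) {
    char buf[16];
    strcpy(buf, name);   /* unchecked length: overflows past 15 chars */
}
"""
for lineno, call, reason in scan_source(legacy_c):
    print(f"line {lineno}: {call} -- {reason}")
```

The gap between this and what's being reported is the point: grep-level scanning has existed for decades and still missed a 27-year-old bug; semantic, model-driven analysis is a different category of search.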
Layer 8
In network architecture, the OSI model has seven layers. In security circles, we've always joked about Layer 8. Layer 8 is the human.
Every serious security professional will tell you the same thing: you can have the most robust technical posture on earth, religiously applied best practices, defense-in-depth architecture, zero-trust network segmentation — and all of it can be undone by one person in marketing clicking a link in a phishing email. The most expensive breaches in history weren't technical failures. They were human failures. Social engineering. Spear phishing. Pretexting. Vishing. The adversary bypassed the firewall entirely because they didn't need to go through it — they just called someone and asked for the password.
AI makes this worse before it makes it better. On the attack side, the same models that write better code write better phishing emails. More personalized. More contextually accurate. Harder to flag. On the defense side, the answer is the same AI reading those emails before the human does — not filtering on keywords, actually reading. Understanding context, checking sender history, flagging anomalies in communication patterns.
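A minimal sketch of the defensive side of that idea, reduced to plain heuristics: check inbound mail against sender history and flag urgency and credential cues. A real AI-assisted pipeline would score full message context with a model rather than keyword lists; every cue, threshold, and name here is an assumption made for illustration.

```python
from dataclasses import dataclass

# Illustrative social-engineering signals; a real system would use many more.
URGENCY_CUES = ("urgent", "immediately", "verify your password",
                "account suspended", "wire transfer")

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def flag_email(msg: Email, known_senders: set[str]) -> list[str]:
    """Return human-readable reasons this message looks risky."""
    reasons = []
    if msg.sender.lower() not in known_senders:
        reasons.append(f"no prior history with sender {msg.sender}")
    text = f"{msg.subject} {msg.body}".lower()
    for cue in URGENCY_CUES:
        if cue in text:
            reasons.append(f"urgency/credential cue: {cue!r}")
    return reasons

# Hypothetical phishing attempt against a user who only knows one sender.
suspicious = Email(
    sender="it-support@paypa1-security.com",
    subject="URGENT: verify your password",
    body="Your account suspended. Act immediately.",
)
print(flag_email(suspicious, known_senders={"alice@corp.example"}))
```

The design point is the one in the paragraph above: the check runs on every message before the human sees it, and unlike the human, it applies the same scrutiny on a Thursday afternoon as on a Monday morning.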
Machines don't get complacent. A security script that runs every five minutes runs identically every time, indefinitely, without getting lazy on a Thursday afternoon.
The Spud Pattern
Now layer in the OpenAI side of this. Spud — no public release, limited enterprise rollout, no announced timeline for general availability.
Same pattern as Glass Wing. Capability so significant that the labs are managing the deployment deliberately, prioritizing the institutional players who have the infrastructure to handle it responsibly.
This is a new kind of capability tiering. The most powerful models are no longer racing to be public-facing products first. They are becoming enterprise infrastructure — deployed inside the security perimeter of organizations capable of containing them.
That raises a question the industry hasn't fully answered: What does it mean for the competitive landscape when the most capable AI tools are available to Fortune 500 companies and nation-states but not to the startup founder or independent researcher? What happens to the democratization thesis when the most powerful models are Glass Wing products?
What the Next 18 Months Look Like
The asymmetric window is real but bounded. Attackers will use these tools aggressively in the near term. The initial period before defenders fully integrate equivalent capabilities will be the highest-risk window — more sophisticated attacks, faster zero-day exploitation, more credible deepfake-assisted pretexting.
Organizations that treat this moment as a signal to accelerate their AI-augmented security posture will widen their defensive moat. Organizations that treat it as noise will find out the hard way.
The Glass Wing program is Anthropic essentially ringing the bell. Twenty-seven-year-old vulnerabilities in FreeBSD don't stay private forever. The vendors got the head start. The question is whether they use it.
Defenders always have the home field advantage. But that only matters if you show up to play.
I spent years in cybersecurity before pivoting to AI strategy. When Anthropic finds a 27-year-old zero-day in FreeBSD, that's not a benchmark number. That's a wake-up call.