
The Government Isn't Flip-Flopping on AI. It's Just Moving at Government Speed.



Audio: AI at Government Speed

There's a story going around right now that the Trump administration is reversing course on AI — that after spending a year tearing down Biden-era oversight, the White House is quietly rebuilding it. The framing is irresistible: political hypocrisy, a made-for-TV U-turn, the deregulators becoming the regulators. But that framing misses the more important story.

What's actually happening isn't a flip-flop. It's a collision — between the speed at which AI is developing and the speed at which governments are capable of responding to anything. Those two clocks have never run at the same rate. And the gap between them is going to keep producing moments exactly like this one, regardless of who's in office.


The Clock Problem

When the Biden administration stood up the AI Safety Institute at NIST (the body later rebranded as the Center for AI Standards and Innovation, or CAISI), Claude Mythos didn't exist. The category of AI model that could genuinely alarm a central bank's cybersecurity team didn't exist. The working assumption in Washington at that point was that AI was a competitive race to be won, not a weapons system to be managed.


That assumption wasn't wrong, exactly. It was just priced off a model that aged out faster than anyone expected.

When the Trump administration dismantled that framework earlier this year, they were operating on a similar logic — that heavy-handed oversight would slow American AI dominance while doing nothing to stop the actual risks. Politically, that position was easy to hold. The risks were still largely theoretical.


Then Anthropic previewed Claude Mythos to a handful of British banks and government agencies. And suddenly the risks weren't theoretical anymore. The UK's National Cyber Security Centre started scrambling. So did the Financial Conduct Authority and the Bank of England. These aren't organizations that panic easily. When they start asking each other what to do about a single AI model, that's a signal worth taking seriously.

Washington is now taking it seriously. Not because of a change in politics, but because of a change in the actual threat landscape.


This Was Always Going to Look Like a U-Turn

Here's the thing about governing a technology whose capabilities roughly double every 12 to 18 months: every policy position you take is going to look wrong within two years. Not because your analysis was bad, but because the underlying technology moves faster than any governance framework is designed to accommodate.


This is the nuclear analogy that actually holds up. The atomic bomb wasn't regulated because politicians were visionaries. It was regulated because Hiroshima happened. The entire architecture of nuclear oversight — the treaties, the inspections, the international bodies — was built reactively, after the world understood what the technology could actually do. Nobody designed that framework in advance. They built it in the aftermath, under pressure, with imperfect information.


AI is tracking toward a similar pattern. The difference is that the "warning shot," if Claude Mythos and its successors are that warning shot, is happening before a Hiroshima. That's genuinely unusual. And it means the window to build the oversight architecture is still open, even if it's narrowing. The government isn't failing at AI policy. It's running a 20th-century governance machine against a 21st-century acceleration curve. The institutional bandwidth simply doesn't exist to keep pace in real time.


What's Being Proposed

The current plan under discussion — an executive order creating an AI working group with government and industry representatives, potentially including the NSA, the National Cyber Director's office, and the Office of the Director of National Intelligence — is modest. It's a working group. It's a process for figuring out what the process should be.


That's not nothing. A formal government review process for new AI models, with the right agencies in the room, is the beginning of a real framework. The UK is building something similar, and that British model appears to be directly informing what Washington is now considering.

The irony is real: this does look like a rebuild of what was torn down. But it looks that way because the underlying need didn't go away when the political winds shifted; it accelerated. What wasn't necessary eight months ago is necessary now. That's not hypocrisy. That's the technology moving faster than the politics.


The Real Question

The debate about whether AI should be regulated is effectively over. Every major government on earth is moving toward some form of oversight. The question was never if — it was always when and how. The more important question now is whether the catch-up can happen fast enough. Because the gap between where the technology is and where the governance frameworks are is still enormous, and it's not closing on its own.


The models being previewed to bank regulators today are not the models that will be deployed two years from now. The working group being proposed in Washington will be figuring out its mandate while the technology it's supposed to govern continues to evolve underneath it.

That's the real story. Not a U-turn. A race — between the speed of AI and the speed of the institutions trying to understand it.

The institutions are behind. They know it. And they're finally moving.

The question is whether they can move fast enough.


Rich Washburn is a technologist and AI strategist. He advises organizations on AI infrastructure, security, and implementation at richwashburn.com.
