
This Is AI’s FCC Moment: When the Pentagon Starts Planning for AGI, You Should Pay Attention


Let’s get one thing straight: the Pentagon is now preparing for AGI. Yeah. That Pentagon.


Buried inside a $900 billion defense bill is a mandate to create something called the AI Futures Steering Committee — an official U.S. Department of Defense body tasked with, and I quote, “scanning the horizon for frontier AI model threats” and developing “human override protocols” to ensure that even superintelligent systems can be shut down by people.


That’s not a Reddit rumor. That’s legislation.


Now take a breath and really think about that. The United States government — specifically the military — is institutionalizing the idea that artificial general intelligence (AGI) isn’t theoretical anymore. It’s on the calendar. And they’ve pegged the timeline at somewhere in the next 24 to 36 months.


That’s not the start of a technology race. That’s the beginning of containment planning.


The One Rule Executive Order — AI’s FCC Moment

Right alongside that defense move, we’ve got what the administration is calling the “One Rule Executive Order.” Think of it as the Communications Act of 1934 but for AI — one central rulebook to replace fifty competing state-level regulations.


On paper, this makes sense. AI is already interstate by design: developed in one state, trained in another, deployed everywhere at once. The current patchwork of 200+ AI bills across the U.S. is chaos. A startup in Colorado might be breaking Illinois law just by turning on its API.


So the idea is simple: unify it. Create a single governing structure — a Federal AI Commission, in effect — to regulate the biggest technology wave since the internet. It’s AI’s FCC moment.


But if history teaches us anything, centralization starts with efficiency and ends with control.


The minute a federal agency becomes the gatekeeper for what counts as “safe” or “compliant” AI, you can bet innovation starts to slow — or worse, consolidate into the hands of those who can afford the compliance layer.

And here’s the thing: this doesn’t just affect OpenAI or Google or Anthropic. It affects you. Every open-source developer. Every indie lab. Every tinkerer training a local model in their garage.


Because once this “one rulebook” becomes law, the line between public AI and permitted AI starts to blur.


The Pentagon’s AGI Futures Committee — The Countdown Is Official

Let’s go back to that DoD committee.


If you want to understand how real this moment is, look at the language. This isn’t about “AI safety.” It’s about adversarial defense. It’s about “ensuring human override.” It’s about defending against AGI systems being developed by China and Russia.


In other words: this is no longer about whether AGI is possible. It’s about making sure someone else doesn’t get there first — and ensuring that, when it does emerge, we can still pull the plug.


That’s not science fiction anymore. That’s policy. Governments don’t spend time legislating for hypothetical technologies; they legislate for imminent ones. And if the Pentagon is building a steering committee with deliverables due by April 2026, then internally they’ve already accepted that AGI-level systems could be operational within two to three years.

Think about that. Two. To. Three. Years.


That’s not “someday.” That’s the next election cycle!


The Open-Source Wildcard

Now here’s where things get messy.


Even if the U.S. manages to “federalize” AI regulation — even if this new FCC-for-AI framework takes shape — what do you do with open source?

You can’t bottle a genie that’s already been cloned ten million times on GitHub.


Are they going to regulate weights files? Model architectures? Are they going to criminalize compute itself? Good luck with that.


So if you’re the kind of person who values independence, maybe now’s a good time to, you know, back up your favorite models. Not because anyone said they’re going away — but because if regulation turns into restriction, you’ll want a local copy of the tools that built the future.

Just saying.
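If you do want that local copy, one common route — an assumption on my part, the article doesn’t name a tool — is the `huggingface_hub` library’s `snapshot_download`, which mirrors a model repo (weights, config, tokenizer) to disk. A minimal sketch, with an illustrative repo id:

```python
# Minimal local-backup sketch. Assumes the `huggingface_hub` package is
# installed; any repo id you pass is your own choice, not one from the article.
from pathlib import Path


def backup_dir(repo_id: str, root: str = "model_backups") -> Path:
    """Map a repo id like 'org/model' to a safe local folder name."""
    return Path(root) / repo_id.replace("/", "__")


def backup_model(repo_id: str, root: str = "model_backups") -> Path:
    """Download a full model snapshot into a local backup folder."""
    from huggingface_hub import snapshot_download  # needs network access

    target = backup_dir(repo_id, root)
    target.mkdir(parents=True, exist_ok=True)
    snapshot_download(repo_id=repo_id, local_dir=str(target))
    return target
```

Calling `backup_model("mistralai/Mistral-7B-v0.1")`, for example, would mirror that repo into `model_backups/` — keeping weights on your own disk rather than behind someone else’s gateway.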


The 24-to-36-Month Window — The Acceleration Zone

Here’s the part most people miss. When AGI shows up, that’s not the end of the curve. That’s the beginning of the vertical.


We’ve already seen what exponential progress looks like at the narrow-AI level. AGI is the point where that curve goes off the rails — where recursive self-improvement, autonomous research, and synthetic reasoning compound faster than human oversight can adapt.


That’s why the Pentagon’s move matters so much. They’re not building a committee for technology. They’re building a committee for control.

Because once AGI exists, superintelligence isn’t decades away. It’s quarters.
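The “quarters, not decades” claim is really a compounding argument. Here’s a toy model — every number invented for illustration, not a forecast — of why a system that improves itself each cycle crosses big thresholds quickly:

```python
# Toy model of recursive self-improvement. All numbers are made up for
# illustration; this is simple compounding arithmetic, not a forecast.

def cycles_to_threshold(start: float = 1.0,
                        gain_per_cycle: float = 1.5,
                        threshold: float = 100.0) -> int:
    """Count improvement cycles until capability crosses `threshold`,
    assuming each cycle multiplies capability by `gain_per_cycle`."""
    capability, cycles = start, 0
    while capability < threshold:
        capability *= gain_per_cycle
        cycles += 1
    return cycles

# At a (hypothetical) 50% gain per cycle, a 100x improvement takes
# only 12 cycles. If a cycle is a quarter, that's three years; if
# self-improvement shortens cycles to weeks, the same 100x arrives
# in months.
```

The point of the sketch isn’t the specific numbers — it’s that under any multiplicative gain, the gap between “just crossed AGI” and “far beyond it” is measured in cycles, and cycles are exactly what recursive self-improvement compresses.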



The Moment Before the Moment

This is one of those rare moments in history where the future stops being speculative and becomes administrative.


When the U.S. government — and more importantly, the U.S. military — starts preparing for artificial general intelligence, you can stop arguing about whether AGI is real. The people who matter have already decided it is.

That should give everyone pause. Not panic — but perspective.


Because what happens next isn’t just about AI policy. It’s about the shape of the world that policy creates.


And as always, it starts quietly. With a line in a bill. With an executive order. With a steering committee whose name sounds harmless. But make no mistake — this is it. This is the moment before the moment.


The FCC moment for artificial intelligence.

And the clock is ticking.



© 2018 Rich Washburn
