You Can Protest AI. You Cannot Stop It.

At 3:45 in the morning on April 10, 2026, a 20-year-old man walked up to Sam Altman's home in San Francisco and threw a Molotov cocktail at the gate. He's in custody. No one was hurt. The investigation is ongoing.

But I want to talk about something bigger than the fire. Because that fire is a symptom of a conversation we keep having wrong.


THE PAUSE ARGUMENT HAS A FATAL FLAW

In the weeks leading up to this attack, dozens of protesters marched through San Francisco — from Anthropic's headquarters to OpenAI to xAI — demanding that AI labs pause frontier development. The argument is sincere. The anxiety behind it is real. Some of the smartest people in the world have signed open letters, left their jobs at major labs, and spent their careers warning about what happens if we get this wrong.

I don't dismiss any of that.

But here is the problem with the pause argument, stated as plainly as I know how: China is not pausing.


The People's Republic of China has made artificial intelligence the centerpiece of its national strategy. Their 2017 AI development plan set a target to lead the world in AI by 2030. They are not behind schedule.

DeepSeek shocked the Western AI industry with what it could do and what it cost to build. Their military is integrating AI into autonomous weapons systems, surveillance infrastructure, and economic forecasting at a pace that doesn't leave room for philosophical debate.

Russia is not pausing. The Gulf states are not pausing. Every nation-state with the resources and the ambition to compete is running — not walking — toward this technology.


So when someone throws a Molotov cocktail at Sam Altman's house, or when protesters demand that OpenAI hit the brakes, the demand carries an implicit assumption: that the brakes apply to everyone simultaneously. They don't. They apply only to us.


THIS IS THE INTERNET. THIS IS ELECTRICITY. IT DOES NOT CARE ABOUT YOUR FEELINGS.

There is a version of this argument that gets made about every transformative technology in history. People protested the railroad. People burned textile mills during the Industrial Revolution. People warned that the printing press would destroy civilization — and in some ways, they weren't entirely wrong. Every major technological shift displaces something. Jobs. Power structures. Entire ways of life. The anxiety is always legitimate. The technology always arrives anyway.


AI is not different. It is, if anything, more inevitable. Because it is not a product — it is a capability. And capabilities, once discovered, do not get undiscovered. The mathematics behind large language models, neural networks, and transformer architecture exist. They are known. They are documented. They are being replicated by research teams on six continents right now.


You can pause OpenAI. You cannot pause the idea.

What you can do — what the pause argument actually accomplishes in practice — is slow down the people who are, whatever their flaws, operating inside a democratic system with congressional oversight, academic peer review, and public accountability. You leave the field to those who have none of those constraints. That is not a safety strategy. That is a surrender strategy.


CONTROL OF AI IS CONTROL OF THE FUTURE

I want to be precise about what is actually at stake here, because the conversation often gets lost in the abstract. Whoever builds and controls the most capable AI systems will have an asymmetric advantage in every domain that matters: economic output, military capability, scientific research, public health, education, infrastructure, and information. Not a small advantage. A decisive, compounding, generational advantage.


This is not science fiction. This is the conclusion of every serious geopolitical AI assessment produced in the last five years — by the National Security Commission on AI, by RAND, by the OECD, by the European Commission. The consensus is not that AI might be strategically important. The consensus is that it already is.


So the question is not whether AI gets built. It gets built. The question is who builds it, under what values, accountable to whom. Do we want that capability developed inside a system with a free press, an independent judiciary, competitive elections, and the ability for citizens to protest outside the headquarters of the companies building it — including, apparently, the ability to throw a bottle at the CEO's house, be arrested, and face trial by jury? Or do we want to cede that ground to a system where none of those things exist? That is the actual choice on the table. And the people demanding a pause, however sincerely, are — perhaps unintentionally — arguing for the second option.


WHAT WE SHOULD ACTUALLY BE DEMANDING

The energy behind the anti-AI movement is not wasted energy. It is misdirected energy. The right demand is not stop. The right demand is build this responsibly and prove it.


That means real transparency from labs about what they're building and what risks they're managing. It means federal regulatory frameworks with actual teeth, not voluntary commitments. It means international coordination — not a unilateral pause, but genuine multilateral agreements with verification mechanisms. It means investment in AI safety research that keeps pace with capability research. It means making sure that the economic benefits of this technology don't accrue only to the people who were already winning.


Those are hard problems. They require sustained political will, technical expertise, and institutional capacity that we are only beginning to build.

But they are the right problems to be working on.

Throwing a bottle at a gate at 3:45 in the morning is not working on any of them. Marching to demand a pause that only applies to democracies is not working on any of them.


THE BOTTOM LINE

Sam Altman is fine. His family is fine. The man who threw the bottle is in custody. But the pressure that produced that moment is not going away. It is going to intensify as AI moves faster and displaces more and becomes more visible in every corner of life. The anxiety is real and it deserves a real response — from the companies building this technology, from the government institutions tasked with overseeing it, and from the public conversation we choose to have about it.

What it does not deserve is to be aimed at a gate at 3:45 in the morning.

The train is moving. The only question worth asking is who's driving it.

I know which answer I'm working toward.

This article was written and published with assistance from ARIA, my AI research and content partner. The irony is not lost on me.


© 2018 Rich Washburn