
Sam Altman Just Published His Terms of Surrender



Last week, Sam Altman published a 13-page document called Industrial Policy for the Intelligence Age. The press covered it as a bold policy vision. I read it as something else — a man who knows what's coming writing the terms before someone else does.


The three headline proposals have gotten most of the attention. A Public Wealth Fund, seeded by AI companies, distributing profits directly to citizens. A shift in the tax burden from payroll to capital gains and corporate income, because AI will hollow out the payroll tax base that funds Social Security and Medicare. And a subsidized four-day, 32-hour workweek at full pay — what Altman calls the "AI efficiency dividend."

On the surface, these sound generous. Look closer and a different picture emerges.


The Tell Is in the Structure

OpenAI just completed its conversion from a nonprofit to a for-profit company. The organization founded on the explicit premise that AI should benefit all of humanity now has a fiduciary duty to shareholders. The week that conversion was finalized, Altman published a paper about distributing AI wealth to the public. The document frames worker benefits — expanded retirement contributions, subsidized healthcare, childcare assistance — as corporate responsibilities rather than government guarantees. Which means if automation eliminates your job, your employer-sponsored healthcare and retirement match disappear with it. OpenAI acknowledges this tension with a nod toward "portable benefit accounts," but stops short of proposing universal, government-backed coverage. The people most displaced by AI are the least protected by the plan designed for them.


Greg Brockman, OpenAI's president, has donated millions to Donald Trump. And while this paper proposes taxing AI-driven capital gains more heavily, Marc Andreessen, one of the most influential voices in the tech investment community, has made clear he'll back political candidates specifically to prevent that kind of taxation. OpenAI is proposing policies that its own political allies have organized against. Either they don't expect these proposals to pass, or they're positioning for a negotiated outcome somewhere in the middle. Neither interpretation is particularly reassuring.


The Washington Post called it a PR document. Gizmodo called it vague. That's not unfair. Anthropic published a nearly identical policy blueprint six months ago. The timing here isn't coincidental — it arrived alongside the midterm election cycle and the Trump administration's push toward a national AI framework. This is bipartisan positioning as much as it is policy.


The Deeper Problem With the Altman Model

Here's what I think is actually going on beneath the policy language.

Altman's framework, at its core, treats citizens as passive recipients of AI-generated wealth. The Public Wealth Fund distributes profits to people. The four-day workweek gives people more time. The robot tax funds the social safety net. In every case, the human is downstream of the machine. The AI produces. The government redistributes. You receive.


That's not a new social contract. That's a subscription model for society.

It solves the immediate economic tension — people need income, AI is generating it — but it doesn't solve the longer problem, which is that a population of passive recipients has no leverage, no productive identity, and no mechanism for building intergenerational wealth. You're not an owner. You're a beneficiary.


Musk's Model Is Philosophically Different — and Worth Taking Seriously

Elon Musk has been circling a different idea for a while now. At the U.S.-Saudi Investment Forum last November, he said work will become "optional" and money "irrelevant" within 10 to 20 years as AI and robotics generate effectively unlimited abundance. That sounds like the same endpoint as Altman — but the path is different. The version worth examining is the compute allocation model — the idea that instead of giving citizens cash dividends, you give them productive capacity. A slice of AI compute in the cloud. Yours to use, rent, or run like a business. Not a check. A stake.

The distinction matters. A check makes you a recipient. Compute makes you a participant. You can use your allocation to run your own AI agents. You can rent it to enterprises that need inference capacity. You can build something on top of it. You have to make decisions about it. You're not passive — you're operating, even if what you're operating is computational rather than physical.


It's a harder sell politically because it requires people to engage rather than just receive. But it's a more durable model because it doesn't create a permanent dependent class — it creates a distributed ownership class. Universal Basic Compute instead of Universal Basic Income. Agency instead of stipend. There are real challenges. Compute depreciates faster than cash. Managing a cloud allocation is not trivial. The infrastructure to actually distribute and manage compute at citizen scale doesn't exist yet. But the orbital compute layer SpaceX is building, the Terafab production capacity Tesla and xAI are assembling, the inference-at-scale that Groq enables — those aren't just commercial products. They're the prerequisites for a distributed compute economy to actually function.
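The trade-off flagged above (compute depreciates, but it also earns) can be made concrete with a toy model. Every number here is a hypothetical assumption chosen for illustration — none of these rates come from Altman's paper, Musk's remarks, or any real compute market:

```python
# Toy model: a one-time cash dividend vs. an equal-value compute allocation.
# All parameters are hypothetical illustrations, not real market figures.

def cash_value(dividend: float, years: int, inflation: float = 0.03) -> float:
    """Real purchasing power of a one-time cash grant, eroded by inflation."""
    return dividend * (1 - inflation) ** years

def compute_value(grant: float, years: int,
                  depreciation: float = 0.25,   # assumed: hardware loses value fast
                  rental_yield: float = 0.30) -> float:  # assumed: annual rental income
    """Residual asset value plus cumulative rental income from a compute stake."""
    asset = grant
    income = 0.0
    for _ in range(years):
        income += asset * rental_yield  # rent out this year's capacity
        asset *= (1 - depreciation)     # then the hardware depreciates
    return asset + income

if __name__ == "__main__":
    for years in (1, 5, 10):
        print(f"year {years:2d}: cash {cash_value(1000, years):7.0f}  "
              f"compute {compute_value(1000, years):7.0f}")
```

Under these made-up rates the compute stake pulls ahead of the eroding cash grant, but dial the depreciation up or the rental yield down and the ordering flips — which is exactly the implementation question the model has to answer before it's more than a slogan.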


What I'm Actually Watching For

Altman's paper is going to generate hearings, op-eds, and think tank reports for the next six months. Most of it will miss the point.

The real question isn't which social safety net mechanism we bolt onto a disrupted economy. It's whether the people living through this disruption end up as owners or dependents. Whether the AI economy produces a distributed participant class or a permanent beneficiary class.

Altman's model, whatever its intentions, trends toward the latter. The compute allocation model, whatever its implementation challenges, trends toward the former.


We built the internet and handed most of the value to twelve companies. We built mobile and handed most of the value to two operating systems. We are now building the infrastructure of intelligence — the chips, the models, the orbital compute layer, the physical robots — and the same question is in front of us again. Who owns the means of intelligence?

The answer to that question is more important than any of the three policies in Altman's paper.



— Rich




© 2018 Rich Washburn
