The Stack That Changes Everything
- Rich Washburn



In January 2026, I wrote a short piece called "He Just Said It Out Loud." Elon was at Davos, and he said what we'd been living inside for months - that the limiting factor for AI isn't chips. It's power. I was deep in a facility assessment that week, tracing fiber paths and mapping generator loads, thinking: yes. Exactly that.
That article was about energy. This one is about what comes after. Because there's a bigger story forming, and it's worth slowing down to actually look at the shape of it.
Four Companies. One Stack.
There are roughly four companies right now that aren't competing with each other in the traditional sense. They're not fighting over attention, market share, or platform dominance. They're assembling something more like a vertical monopoly on the infrastructure of intelligence itself.
Nvidia builds the processors. xAI builds the models. Tesla builds the embodied physical layer - robots and autonomous vehicles that act in the world. SpaceX builds the connectivity backbone and, if the FCC filing holds, eventually the orbital compute layer that delivers intelligence everywhere on Earth that needs it.
That's not four companies. That's a stack. Silicon to software to steel to space. And the combined market value - roughly $8 trillion - understates what it represents, because you can't replicate the integration. You can build a GPU competitor. You can train a model. You can't build the ten years of reusable rocket launches that give you 65% of everything currently in orbit and a credible path to one million satellites.
The old Big Tech empires - Google, Meta, Apple, Amazon - were built on platforms that monetized human attention. Search, social, mobile, shopping. The total addressable market was human eyeballs and the time between them. These four are monetizing something different: the capacity to compute, to decide, to act. The ceiling on that market isn't population. It's physics.
The Numbers Are Not Normal
Nvidia's fiscal year 2026: $215.9 billion in revenue, up 65% year-over-year. The data center division, in Q4 alone: $68.1 billion. More than Nike makes in an entire year, every three months.
The Blackwell chip delivers 30x the inference performance of its predecessor. Jensen Huang just unveiled Vera Rubin - built on TSMC's 3nm process, 336 billion transistors, promising 10x the inference throughput per watt of Blackwell. And then he paid $20 billion for Groq - a company whose chips aren't GPUs at all - because he knows inference is projected to be 10x larger than training by 2028, and he's not going to let anyone else own that market.
That last move is the tell. Nvidia isn't defending its position. It's cannibalizing it before someone else does.
Meanwhile, Big Tech is spending $600-700 billion in capital expenditure on AI infrastructure in 2026 alone. Google: $185 billion. Meta: $135 billion. Amazon: $118 billion. Almost all of it flows through Nvidia. They are the tollbooth, the mine, and increasingly the map.
The SpaceX Layer Is the Part Nobody's Ready For
In January, SpaceX filed with the FCC to launch up to 1 million satellites designed to function as orbital data centers, each generating 100 kilowatts of compute from near-constant solar exposure. A million satellites at 100 kilowatts each is 100 gigawatts of AI compute in orbit. And then in March, Tesla, SpaceX, and xAI announced Terafab - a $20-25 billion chip fabrication facility in Austin targeting one terawatt of computing power annually, on a 9-month release cadence. Faster than Nvidia. Faster than AMD. Built by the people who already have the rockets to put it into orbit.
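For scale, that 100-gigawatt figure is simple enough to sanity-check. A minimal back-of-the-envelope sketch, using only the constellation size and per-satellite power quoted above (the names and structure are mine, not anything from the filing):

```python
# Back-of-the-envelope check of the orbital compute figure quoted above.
# Assumptions come straight from the article: up to 1,000,000 satellites,
# each producing 100 kW of solar-powered compute.

SATELLITE_COUNT = 1_000_000      # proposed constellation size (FCC filing)
POWER_PER_SAT_KW = 100           # kW of compute per satellite

total_kw = SATELLITE_COUNT * POWER_PER_SAT_KW
total_gw = total_kw / 1_000_000  # 1 GW = 1,000,000 kW

print(f"Orbital compute capacity: {total_gw:,.0f} GW")  # prints 100 GW
```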
A startup called Starcloud has already launched the first AI server into orbit - a 60-kilogram satellite carrying an Nvidia H100, launched aboard a SpaceX rocket, that successfully trained a language model in space. This isn't a vision statement. It's a proof of concept. The future of compute isn't in a Loudoun County data center. It's distributed, it's closer to the edge, and eventually it's orbital. That's not a prediction anymore. It's an active engineering project with FCC filings attached.
The Bottleneck Nobody's Talking About
Here's what I keep coming back to when I look at this stack honestly.
The power problem is being addressed at every layer - behind-the-meter energy, modular facilities, orbital solar. The compute problem is being attacked by Nvidia, Groq, Terafab, and a dozen custom silicon programs. The model problem is a race with no clear finish line but a lot of funded runners.
What doesn't get discussed nearly enough is the efficiency of the math inside the machines. When a supercomputer runs a physics simulation - whether that's orbital trajectory modeling, fluid dynamics for aerospace, or plasma physics for fusion energy research - it isn't running one clean calculation. It's running billions of discrete steps, and with every step, minute errors accumulate. The simulation drifts. To compensate, current software forces the processor to iterate - smaller steps, more guesses, more power, more time - just to stay anchored to physical reality.
This is called energy drift, and it's the silent tax on every serious computation happening in HPC today. It's why plasma physics simulations take days. It's why trajectory modeling for orbital mechanics burns enormous computational budgets. It's why the beautiful efficiency numbers on the chip specs don't translate cleanly to real-world workload performance.
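To make the drift concrete, here's a minimal sketch rather than anything resembling production HPC code: a toy harmonic oscillator integrated two ways at the same step size. The naive explicit Euler scheme pumps a little spurious energy in at every step, so the simulation drifts away from reality; a symplectic variant of the same first-order method keeps the energy bounded. The structure of the math, not the throughput of the chip, decides how much compute you burn staying anchored:

```python
# Toy demonstration of energy drift (illustrative only, not production HPC code).
# System: simple harmonic oscillator, x'' = -x, exact total energy = 0.5.

def explicit_euler(x, v, dt):
    """Naive first-order step: multiplies the energy by (1 + dt^2) every step."""
    return x + dt * v, v - dt * x

def symplectic_euler(x, v, dt):
    """Same cost per step, but structure-preserving: energy error stays bounded."""
    v_new = v - dt * x            # update velocity first...
    return x + dt * v_new, v_new  # ...then position using the new velocity

def energy(x, v):
    return 0.5 * (v * v + x * x)

dt, steps = 0.01, 10_000          # roughly 16 oscillation periods at this step size
for name, step in (("explicit Euler", explicit_euler),
                   ("symplectic Euler", symplectic_euler)):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = step(x, v, dt)
    drift = energy(x, v) / 0.5 - 1.0
    # explicit Euler drifts by roughly +170%; symplectic stays within about 1%
    print(f"{name:>16}: relative energy error after {steps} steps = {drift:+.2%}")
```

Shrinking the step size is the brute-force fix for the first scheme - exactly the "smaller steps, more guesses, more power" tax described above. Choosing better-structured math is the fix that doesn't cost more silicon.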
The hardware roadmap - Blackwell, Vera Rubin, Terafab - solves for raw throughput. The orbital compute vision solves for deployment scale. Neither of them solves for the mathematical inefficiency baked into the simulation software itself.
That's a different problem. And it's the one that will determine whether the orbital compute thesis actually closes the economics it promises.
If you're going to run physics simulations on a satellite drawing 100 kilowatts of solar-powered compute - with no ground-based redundancy, a thermal envelope that doesn't forgive waste, and mission-critical precision requirements - you need the math to be right the first time. Iterative approximation is not a viable architecture for orbital hardware. The margin for computational waste in space-based workloads is essentially zero.
Where This Is Going
The stack is real. The capital behind it is real. The FCC filings, the Terafab announcement, the Colossus build-out in Memphis, the Optimus deployment in Tesla factories - these aren't visions. They're engineering timelines with dollar figures attached.
What I'm watching for is the layer between the hardware and the applications. The places where raw compute hits the actual problem - the physics engine, the trajectory solver, the fluid dynamics model - and either delivers or doesn't. I work at the infrastructure layer of this industry. I spend my time thinking about where compute lives, how power gets to it, and what the real-world bottlenecks look like when you're not reading a press release. From that vantage point, the hardware story makes complete sense. The ground segment is the on-ramp. The federated architecture, the teleport infrastructure, the edge compute layer - that's what makes the orbital compute vision usable rather than theoretical.
But the efficiency story - the math story - that's the one that will quietly determine whether this entire stack performs at the level it promises.
The companies building what comes next are already here. The question worth asking is whether the math running inside their machines is ready for what they're about to demand from it.




