The Signs Are Everywhere. Math Is Eating the Physical World Now.
- Rich Washburn



A few weeks ago I wrote about the Algorithmic Multiplier — the idea that when silicon hits its physical ceiling, mathematics steps in and multiplies what the hardware can do. I used the 56K modem as the anchor. TurboQuant as the proof. The thesis: the next great leap in compute won't come from a new chip. It'll come from the mathematicians. I didn't expect the next proof point to arrive so fast. Or from this direction.
This week, Disney Research, Google DeepMind, and NVIDIA quietly released Newton — an open-source, GPU-accelerated physics simulation engine. Apache 2.0 license. Linux Foundation project. Already 3,800 GitHub stars before most people noticed it existed. On the surface, it's a robotics tool. Look deeper, and it's a sign of something much larger.
What Newton Actually Is
Newton is a physics simulation engine built on NVIDIA Warp and backed by MuJoCo — the physics backend that's become the standard for robotics research. It handles everything: rigid bodies, soft bodies, cloth, cables, granular materials, fluids, inverse kinematics, multi-physics coupling. All GPU-accelerated. All open source.
But the feature that matters most isn't any of those. It's differentiability. Newton supports differentiable simulation. That means you can backpropagate through the physics itself — run the simulation forward, measure the outcome, calculate the error, and adjust the parameters backward through the entire physical model. You can train AI policies directly inside simulated reality without ever touching a real robot, a real manufacturing floor, or a real environment.
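That forward-measure-backpropagate loop is easy to see in miniature. Here's a toy sketch in plain Python — not Newton's API, just the idea: a 1-D projectile integrated with forward Euler, where we differentiate the loss through the integrator by hand and tune the launch speed with gradient descent. All the parameters (timestep, gravity, target) are illustrative assumptions.

```python
# Toy differentiable simulation: tune a launch speed so the projectile's
# final position hits a target, by differentiating THROUGH the integrator.
# Illustrative only -- Newton/Warp build this gradient automatically.

def simulate(v0, steps=100, dt=0.01, g=9.81):
    """Forward Euler: integrate 1-D position under constant gravity."""
    x, v = 0.0, v0
    for _ in range(steps):
        x += v * dt
        v -= g * dt
    return x

def train(target, iters=100, lr=0.3, steps=100, dt=0.01):
    """Gradient descent on loss = (final_x - target)^2."""
    v0 = 0.0
    # For Euler under constant gravity, d(final_x)/d(v0) = steps * dt,
    # so d(loss)/d(v0) = 2 * (final_x - target) * steps * dt.
    dx_dv0 = steps * dt
    for _ in range(iters):
        final_x = simulate(v0, steps, dt)
        grad = 2.0 * (final_x - target) * dx_dv0
        v0 -= lr * grad
    return v0

v0 = train(target=2.0)
print(simulate(v0))  # final position converges onto the 2.0 m target
```

The same structure — run physics forward, push gradients backward — is what lets a policy, not just a scalar like `v0`, be optimized inside the simulator.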
In plain terms: instead of buying a thousand robots and running them for months to generate training data, you run the simulation. Mathematically. On GPUs. At speeds and scales that physical hardware can't touch.
Sound familiar?
The Same Playbook, Different Domain
In the modem era, the copper wire was the constraint. The pipe was full. Math stepped in and compressed the signal — not to make the wire faster, but to make every inch of it do more work.
In AI, GPU memory was the constraint. TurboQuant stepped in and compressed the vectors — not to build new hardware, but to make every gigabyte of existing VRAM do six times the work.
In robotics and physical AI, real-world experience is the constraint. Training embodied AI systems requires contact with reality — and reality is slow, expensive, dangerous, and hard to parallelize. You can't run a thousand simultaneous robot training sessions in a warehouse.
But you can run ten million simultaneous physics simulations on a GPU cluster.
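The parallelism is the whole trick: every environment runs the same physics step on different state, which is exactly the shape of work GPUs are built for. A hypothetical sketch in plain Python, with lists standing in for GPU tensors and made-up parameters throughout:

```python
# Batched rollouts: step many independent environments in lockstep.
# On a GPU, each list comprehension below would be one tensor op
# across all environments at once; this is just the shape of the idea.
import random

def step_all(xs, vs, dt=0.01, g=9.81):
    """Advance every environment one timestep, in lockstep."""
    xs = [x + v * dt for x, v in zip(xs, vs)]
    vs = [v - g * dt for v in vs]
    return xs, vs

N = 10_000                 # environments; a real cluster runs far more
random.seed(0)
xs = [0.0] * N
vs = [random.uniform(0.0, 10.0) for _ in range(N)]  # candidate launch speeds

for _ in range(100):       # one simulated second, all environments at once
    xs, vs = step_all(xs, vs)

# Score every rollout in one pass: which candidates landed near a 2 m target?
hits = sum(1 for x in xs if abs(x - 2.0) < 0.1)
print(hits, "of", N, "rollouts hit the target")
```

Ten thousand rollouts here cost one loop; the physical-world equivalent would be ten thousand robots and ten thousand seconds of wall-clock risk.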
Newton is the compression algorithm for physical reality. Instead of building more robots to generate more training data, you compress the problem into mathematics. You simulate the physics with enough fidelity that the policy trained inside the simulation transfers cleanly to the real world — what researchers call sim-to-real transfer. Math is eating the physical world now. Not metaphorically. Literally.
Why the Founders Tell the Whole Story
The three organizations that initiated Newton are not random. Each one represents a layer of the stack. NVIDIA built the compute layer. Their investment in the physical AI stack — from Isaac Sim to now Newton — is a deliberate bet that the next wave of AI training doesn't happen in language. It happens in physics. Google DeepMind contributed MuJoCo, the physics backend that became the de facto standard for robotics control research. DeepMind has been training agents inside simulated environments for over a decade. They know better than anyone that the bottleneck isn't the algorithm. It's the fidelity and speed of the simulation underneath it.
Disney Research is the wild card — and the most telling signal of all. Disney doesn't release tools like this for academic credit. They have theme parks, animatronics, autonomous character systems, and physical AI challenges that most robotics labs have never encountered. When Disney shows up as a co-founder of an open-source physics engine, it means this technology has already been validated against real industrial problems.
Three organizations. Three layers of the same stack. All converging on the same conclusion: the next frontier of physical AI runs through simulation, not hardware.
The Infrastructure Implication Nobody's Talking About
Here's where it gets interesting from an infrastructure standpoint — and it's something we think about constantly at Data Power Supply. The narrative around AI compute has been almost entirely focused on training large language models and serving inference at scale. That story is real. The hyperscaler capex numbers are staggering, and the power demands are only accelerating. But Newton signals a second wave of GPU demand that most infrastructure planners haven't fully priced in yet.
Differentiable physics simulation at scale is not a light workload. Running ten million parallel robot training environments — the kind of throughput that makes sim-to-real transfer actually useful — requires dense, sustained GPU compute. Not inference chips. Not CPUs. Full training-class GPU clusters, running continuously, generating the synthetic physical experience data that embodied AI systems need to learn.
Every robotics company, every autonomous vehicle program, every industrial automation platform that adopts Newton-style simulation-first training is going to need a place to run it. That compute has to live somewhere. It has to be powered. It has to be cooled.
This is exactly the infrastructure gap that Data Power Supply was built to address — high-density, high-availability compute environments for workloads that the hyperscalers aren't optimized to serve at the edge, and that enterprises can't afford to build themselves. The physical AI training wave is coming. The facilities need to be ready before the demand arrives, not after.
The Sign of the Times
In the last 30 days, I've watched Google compress AI memory 6x with pure mathematics. I've watched a 4-billion-parameter model run offline, agentically, on an iPhone — no data center required. And now three of the most sophisticated engineering organizations on earth have open-sourced a tool that compresses physical reality itself into GPU-trainable mathematics.
Every one of these is the same story. Math eating a constraint that hardware alone couldn't solve.
The pattern is clear. We are not in an era of diminishing returns on compute. We are in an era of mathematical leverage — where the right abstraction layer makes existing hardware do ten times the work it was doing six months ago. This is what I meant when I wrote about the Algorithmic Multiplier. The modem analogy wasn't nostalgia. It was a warning about what was coming. The warning is now current events.
What This Means If You're Building Anything
If you're building physical AI systems — robotics, autonomous vehicles, industrial automation, defense applications — Newton just handed you a simulation stack that would have cost millions to build internally. The barrier to training sophisticated physical AI policies just dropped dramatically.
If you're building compute infrastructure, the message is the same as it was after TurboQuant: raw GPU count is increasingly not the moat. The mathematical layer on top of the hardware is where the leverage lives.
If you're an investor, this is the moment to ask which of your portfolio companies are building mathematical leverage into their systems — and which ones are still just buying more hardware and hoping for the best.
The signs are everywhere. Math is eating the physical world.
Newton is just the latest proof.