Yann LeCun’s Quiet Power Move
- Rich Washburn

- Nov 12, 2025
- 4 min read


Why the Godfather of Deep Learning Might Be Plotting the Next AI Revolution — And Why It Matters More Than You Think
You know those moments where something big is happening, but it doesn’t come with fireworks or a keynote stage? It’s just... quiet. Subtle. But seismic?
This might be one of those moments.
Word’s coming out — mostly whispered, not shouted — that Yann LeCun, Meta’s Chief AI Scientist, is preparing to exit the company and start his own research lab or startup. The kind that doesn’t just compete with what we’ve got now… but maybe redefines where we’re going entirely.
And if you know LeCun’s history, this tracks. He’s not the loudest in the room — never has been. But he’s been one of the most important.
He’s the reason we have convolutional neural networks in the first place. He’s been pushing the boundaries of self-supervised learning for decades.
While others have been polishing chatbots and padding token counts, he’s been asking deeper questions:
What if LLMs aren’t intelligence, just a first draft? What if real intelligence needs to understand the world, not just autocomplete it?
And now… he might be stepping out to prove it.
LLMs Are the Tools — Not the Destination
Let’s not get it twisted: LLMs are incredible. They’re the rubber meeting the road in AI right now. Tools like GPT, Claude, Llama — they’re redefining productivity, creativity, even how we interact with machines.
But they’re tools, not intelligence.
They don’t know the world. They don’t reason through cause and effect. They don’t plan; they respond. It’s autocomplete with a turbocharger.
What LeCun is building? That’s different.
He’s chasing what we haven’t built yet. The real thing. The deeper intelligence. The one that can model the world, anticipate what might happen next, and take coherent action. The thing we’ve been pointing at when we say AGI — whether we’ve admitted it or not.
Enter JEPA: Intelligence That Predicts, Not Just Parrots
At the core of this vision is something called JEPA — Joint Embedding Predictive Architecture. I’ve been watching this for a while now, but it hasn’t really hit the mainstream yet. It should.
JEPA doesn’t care about predicting the next word in a sentence. It’s not built on text. It’s about learning representations of the world itself. Not “what did humans say about it?” but “what is it, how does it move, and what happens next?”
Think:
Predictive video models that learn physics and causality.
Agents that build internal models of the world and simulate different outcomes.
AI that can actually reason, plan, and understand, rather than just reflect what it’s read.
Sound familiar? It should — it echoes the path toward embodied intelligence, toward systems that don’t just talk about reality, but live inside it.
LLMs taught the machine to speak. JEPA might teach it to think.
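To make the contrast with next-token prediction concrete, here’s a minimal, illustrative PyTorch sketch of the JEPA idea: a context encoder and a small predictor are trained to match the embedding that a separate, slowly updated target encoder assigns to the hidden part of a scene, so the loss lives in representation space rather than in pixels or tokens. The module names, dimensions, and the random “two views” stand-in data below are placeholders of my own, not anything from Meta’s actual JEPA code.

```python
# Minimal JEPA-style sketch (illustrative only, not Meta's implementation).
# Idea: encode the visible context, then train a predictor to match the
# *embedding* of the hidden/future part -- prediction happens in
# representation space, not in pixel or token space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy encoder mapping a flat input patch to a latent vector."""
    def __init__(self, in_dim=256, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

context_encoder = Encoder()
target_encoder = Encoder()                      # updated by EMA, not by gradients
target_encoder.load_state_dict(context_encoder.state_dict())
for p in target_encoder.parameters():
    p.requires_grad_(False)

predictor = nn.Sequential(                      # predicts target latents from context latents
    nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128)
)
opt = torch.optim.AdamW(
    list(context_encoder.parameters()) + list(predictor.parameters()), lr=1e-4
)

def ema_update(online, target, tau=0.996):
    """Let the target encoder slowly track the online one, keeping targets stable."""
    with torch.no_grad():
        for po, pt in zip(online.parameters(), target.parameters()):
            pt.mul_(tau).add_(po, alpha=1 - tau)

for step in range(1000):
    # Stand-ins for two views of the same scene: the visible context and the
    # masked/future region the model must "imagine". Real JEPA variants use
    # image or video patches here.
    context = torch.randn(32, 256)
    target = context + 0.1 * torch.randn(32, 256)

    pred = predictor(context_encoder(context))   # predicted latent of the hidden part
    with torch.no_grad():
        tgt = target_encoder(target)             # actual latent of the hidden part

    loss = F.smooth_l1_loss(pred, tgt)           # compare in embedding space
    opt.zero_grad(); loss.backward(); opt.step()
    ema_update(context_encoder, target_encoder)
```

The design choice that matters: because the objective is defined on embeddings, the model is free to ignore unpredictable low-level detail and spend its capacity on the structure that actually helps it anticipate what happens next.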
The Quiet Yoda Makes His Move
Here’s the part that gets interesting: this isn’t just a research paper move. LeCun’s reportedly talking to investors.
And if that startup launches, it could pull in some serious heat — funding, talent, intellectual horsepower. The kind of minds that are frustrated with the LLM arms race and ready to build something more foundational.
That’s a Jobsian move. Not loud. Not flashy. But deeply strategic.
Because look at where we are now: the LLM space is maturing fast. GPT-4 Turbo, Claude 2.1, Gemini, DeepSeek… they’re all part of the same evolutionary curve. They’re impressive, and they’re transitional.
LeCun’s move might signal the next curve is about to begin.
And if he’s right? We’ll look back on JEPA the way we now look back on AlexNet in 2012: as the quiet beginning of something massive.
What I’m Watching Next
This is still early. Nothing officially confirmed. Meta hasn’t said anything. LeCun’s keeping his NYU post. But the tension inside Meta is well‑documented — especially around the shift from open research to product‑first pipelines.
But if this startup launches, here’s what I’ll be watching:
Who’s around the table? I want to know who joins him. The co‑founders, the researchers, the VCs. That will tell us how serious this is.
What are they building? Is it a lab? A foundation model startup? An open research collective? What kind of compute are they throwing at JEPA?
Where’s the inflection point? The day we see a JEPA‑trained model outperform an LLM in planning, prediction, or task execution — that’s when the paradigm starts to shift.
Also: this could push OpenAI, Anthropic, and DeepMind to race harder on world models. The game just changed.
Final Thought: Don’t Sleep on the Quiet Ones
Yann LeCun isn’t loud. But he’s been right a lot. He was right about neural nets before anyone cared. He was pushing self-supervised learning long before it became the hot thing. And now he’s saying: “LLMs aren’t the final answer.”
If you’re building in this space, you need to track this. Because if he gets it right — if JEPA works — we’re about to run the cycle all over again. New architecture. New models. New stack. New rules.
And this time, the goal isn’t just talking with machines. It’s making them understand.
Let’s see who follows him.



