
Rocket Ships Without Pilots Are Missiles: The AI Literacy Crisis


In Q2 of 2025, the “Magnificent Seven” tech giants spent over $100 billion on AI infrastructure. One quarter. Three months. That’s $1.1 billion every single day.


And that wasn’t marketing or R&D. That was just data centers.


If you want a measure of how big a deal AI is, there it is. The largest, most profitable companies on the planet are betting more than $12,000 every second on this shift. Governments are scrambling to publish AI initiatives. Nations are racing to wire up their infrastructure. Politicians are campaigning on AI strategy. Everyone recognizes this is the next industrial revolution.
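If the per-second number sounds made up, here’s the back-of-the-envelope math behind it, sketched as a few lines of Python (assuming a 91-day quarter and a round $100 billion, both rough figures for illustration):

# Rough math: what $100B of capex in one quarter works out to per day and per second.
quarterly_spend = 100_000_000_000           # dollars, rounded for illustration
days_in_quarter = 91                        # Q2 2025: April through June
per_day = quarterly_spend / days_in_quarter          # roughly $1.1 billion per day
per_second = per_day / (24 * 60 * 60)                # roughly $12,700 per second
print(f"${per_day:,.0f} per day, ${per_second:,.0f} per second")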


So here’s the paradox: AI is such a big deal that the world is investing at railroad-building scale, yet teaching people how to actually use it is treated as no big deal at all.


We’re building rocket ships for every citizen and every classroom, but we’re not teaching anyone to fly.


When my daughter was very young, I took her to a school science fair. The classic cornstarch-and-water demo was supposed to show off a liquid that behaves like a solid under pressure. Instead, they used cornmeal. No magic—just mush. And instead of correcting it, the instructor smiled and told the kids: “See? That’s how it works.”


The problem wasn’t the mistake. It was pretending the mush was science. The kids didn’t just fail to learn—they learned something wrong.

That’s what’s happening with AI right now. We’re distributing it everywhere, but without training, without literacy, without guidance.


People are confidently learning the wrong lessons.


Teachers Without Confidence

I work with educators every week. And I’ve seen both sides of this coin.

On one hand, many teachers feel less confident walking into the classroom because of AI. They don’t know what to expect. They don’t know how to integrate it. They feel undermined. That’s exactly backwards: a tool that should make them more capable is leaving them feeling less so.


On the other hand, I’ve seen what happens when they get even a little bit of training.

  • Bob, a financial literacy instructor who barely uses a computer (but is fine on his iPad, because “it’s just a big phone”), used AI to grade handwritten loan calculations, explain errors, and tutor his students through budgeting projects. His reaction? Pure shock. “It felt like every student had their own tutor.”

  • Teresa, a high school math teacher and tech-savvy by nature, handed AI a stack of handwritten geometry homework. It graded every page, gave detailed feedback, and even added motivational notes. She told me: “I’ve always wanted to give this kind of feedback, but there just isn’t time.”


Then she realized something bigger. She could use AI to make lessons fun. She admitted she doesn’t always know what’s “cool” in her students’ world—but AI does. So she started tailoring lessons through the lens of whatever her students were into. Suddenly kids were laughing, engaged, and—without even noticing—learning faster. “They think I’m the best teacher in the world,” she said, shaking her head in disbelief.


That’s the continuum in action. Confidence up. Engagement up. Learning up. And all it took was training.


Corporate Checkboxes vs. Human Potential

So why aren’t we seeing this everywhere? Because AI initiatives are being steered by the wrong priorities.


Big tech thinks in corporate checkboxes: ship it, package it, slap a logo on it, count the subscriptions. That’s fine for apps or SaaS models. But this isn’t another productivity tool. This is intelligence. This is personalized education for every human being—kids with learning differences, adults retraining for new careers, parents just trying to keep up.


And we’re ignoring that once-in-history opportunity, because “number of seats sold” looks better on a quarterly report than “confidence gained by teachers.”


The Rocket Ship Problem

Here’s the truth: people will use AI. These rocket ships are already on the launchpad. But without training, without literacy, without guidance, they’re not rockets anymore. They’re missiles.


They’ll still go up—but they’ll come down, often in the wrong direction, often without control.


And we’re already seeing it:

  • AI cults that treat outputs as gospel.

  • Businesses making decisions without fact-checking.

  • Students confidently learning the wrong lessons.


Rocket ships without pilots are dangerous.

This isn’t theory. I see it every day.


One client told me: “Within a week, I went from overwhelmed to confidently building, breaking, and fixing. What we built in a few weeks outperformed the $250,000 CRM we had before.”


Another: “...transformed our skepticism into confidence. We now work more efficiently and are more competitive in our market.”


And another: “His uncanny ability to simplify the complex and deliver practical solutions sets him apart. I refer him to my clients, and he delivers exceptional results.”


They’re not really praising me. What they’re experiencing is what happens when complexity becomes clarity. That’s the superpower of AI.


I don’t have that superpower—I’ve just been working with it long enough to understand how to use it. Anyone can. It’s not locked up. It’s not reserved for specialists. It’s nearly free, sitting there, waiting for whoever takes the time to learn how to steer it.


The magic isn’t me. The magic is the tool. The opportunity is in knowing how to use it.


If we care enough to spend $100 billion in 90 days building infrastructure, we can care enough to invest in literacy. Every AI initiative—whether in education, business, or government—should put 20–30% of its budget into training.


Because AI isn’t broken. Literacy is.


And unless we fix that, the rocket ships we’re building won’t lift humanity. They’ll come down as missiles.


As always, if you’re looking for a place to start:

  • For educators and parents, I’ve built a permanent hub with free resources, prompts, and walkthroughs: richwashburn.com/abc. These tools are designed to give teachers confidence and students momentum. My hope is that they help spark the continuum of learning we’ve been talking about.


  • For everyone else who’s wrestling with GPT-5, I just released NOVA — Next-Generation Optimized Virtual Advisor. Think of it as the framework that turns GPT-5 from “frustrating” into a structured workhorse. Direct answers, transparent reasoning, options and trade-offs, and a practical action plan every time. NOVA is public and ready to use: 👉 [Try NOVA]


I’ve got a big place in my heart for education and AI, but the truth is this: literacy matters for all of us. If you ever have questions or want help finding your footing, don’t hesitate to reach out.


