
Prompting Is Dead. Long Live the Conversation.




I've written about this before. A few times, actually — from different angles, at different points in the AI curve. The latent space piece from late 2023. The "New to AI" post from March. The NOVA piece where I started unpacking what GPT-5 actually demands from you.

But I want to bring it full circle, because a conversation I had this week crystallized something I've been circling for a while.

We were talking about how the whole prompting obsession has basically become theater. And someone said it better than I could: "It's more about delivering your intent with a little bit of structure and then just dumping in whatever you bring to the table."

That's it. That's the whole shift.


The MIT Moment Nobody Talks About

MIT Sloan ran a piece not long ago that made the rounds — the headline was something like "Prompt engineering is so 2024. Try these prompt templates instead." And people shared it everywhere like it was a revelation.


Here's the thing: even that framing is already dated.

The piece was essentially arguing that instead of one-time crafted prompts, you should use reusable prompt templates — structural scaffolding you can drop into any workflow. Which is fine advice. Genuinely useful. But it's still operating under the assumption that the way you phrase the input is the lever. It isn't. Not anymore.


That MIT piece — and most of the "prompt engineering is dead" content you're seeing — is solving for the wrong problem. They moved from crafting individual prompts to systematizing prompt templates. That's a marginal improvement on a strategy that's already been superseded.

The real shift isn't about the prompt at all.


What Actually Changed

Go back to 2022. The earliest public AI wave. Models were genuinely fragile. If you didn't structure your input carefully, you got garbage out. Prompt engineering felt like magic because it kind of was — you were essentially compensating for the model's limitations with linguistic precision.


I wrote about this in October 2023 — the whole latent space activation piece. The idea that the right phrasing could unlock different layers of the model's knowledge. "Let's think step by step." "Take a deep breath." Chain-of-thought prompting. Tree-of-thought. All of it was real, and it worked, and it mattered.


Then GPT-4 happened. And I noted this in the NOVA article — GPT-4 had internal prompting. It was silently rewriting your messy inputs behind the scenes. You could throw it half-formed thoughts and it figured them out. The rough edges disappeared and people got lazy. Myself included.

Then GPT-5 arrived and broke that comfort. It's less forgiving. It demands precision and intention again — but a different kind of precision. Not the "add these magic words to your prompt" kind. The "you need to actually know what you want" kind.

Which leads to the real point.


The Prompt Was Never the Problem. You Were.

Here's what three-plus years of heavy AI use has taught me:

The people who get the most out of these models aren't the ones with the best prompt libraries. They're the ones who can think clearly and communicate intent. That's it. Full stop.


The "8 prompts that change everything" content works because clarity works — but the clarity comes from the person, not the template. You could throw those same templates at someone who doesn't know what they're trying to accomplish and you'd get nothing useful.

The prompt is just the transmission medium. What matters is what you're transmitting.


What "Talking to Your AI" Actually Means

The shift I'm describing isn't subtle — it's architectural.

Old model: Construct a carefully engineered input → receive output → tweak the input → repeat.


New model: Have a conversation. Show up with your actual thinking. Dump in the context. Change your mind mid-sentence. Say "no, not like that — more like this." Add examples on the fly. Contradict yourself and then clarify. Bring whatever's in your head, in whatever form it's in.

The models are now better at inferring intent from natural, messy input than they are at parsing over-structured prompts. Too many rules. Too many nested instructions. Too many "act as a" and "you must always" and "never under any circumstances." You don't get better results. You get stiff, confused output that sounds like it was written by someone who'd never spoken to another human.


What actually works:

Context over commands. Don't tell the model how to answer. Tell it why you need the answer, what you're building, what problem you're solving. Context is the fuel. Commands are just the steering wheel.

Your language, not AI language. I've talked about this before — what I call cognitive zip files. Analogies, shorthand, references that compress a lot of meaning into a few words. The model picks it up instantly. Your way of explaining things is more powerful than the generic AI-speak everyone's imitating.

Iteration over perfection. The best prompt is often the one you arrive at after three exchanges, not the one you constructed before the conversation started. Let the model push back. Refine in motion.

Intent + structure, not structure alone. A little bit of structure is useful — knowing what you want, roughly what format you need, what the output should do in the world. But the intent behind it is what the model is actually working with.
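If you work with chat-model APIs, that difference shows up directly in the messages you send. A minimal sketch in the common role/content chat format — everything here (the product, the drafts, the wording) is illustrative, not from any real system, and no API is actually called:

```python
# Two ways to frame the same request, in the common chat-message format.
# All content below is hypothetical and purely for illustration.

# Old habit: stack commands and constraints on top of the model.
over_structured = [
    {"role": "system", "content": (
        "You are a world-class copywriter. You must always use active voice. "
        "Never exceed 100 words. Never use jargon. Act as a marketing expert."
    )},
    {"role": "user", "content": "Write a product description."},
]

# The shift: lead with context and intent, then refine in conversation.
conversational = [
    {"role": "user", "content": (
        "I'm launching a note-taking app for researchers. The landing page "
        "needs a short description. The audience is skeptical of AI hype, "
        "so plain and concrete beats clever. Here's my rough draft: ..."
    )},
    {"role": "assistant", "content": "Here's a tighter version: ..."},
    {"role": "user", "content": (
        "No, not like that. Less salesy, more like a colleague describing it."
    )},
]

# The conversation itself is the prompt: each turn adds context the model
# works with, instead of piling rules into a single one-shot instruction.
print(len(conversational), "conversational turns vs",
      len(over_structured), "for the one-shot version")
```

The first version tells the model how to behave; the second tells it why you need the answer and corrects course mid-stream — which is the whole point of the bullets above.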


The Conversation Is the Prompt

I want to sit with that for a second.

When you talk to a smart person — a good advisor, a trusted colleague, someone who genuinely understands your world — you don't craft a "prompt" before speaking to them. You talk. You think out loud. You give them background. You reference things from last week. You half-explain something and then say "you know what I mean." And they get it. Because they have context. Because they know you. Because they're working with the full picture you're bringing into the room, not just the sanitized sentence you pre-composed.


That's what modern AI is becoming. And the people who figure this out first — the ones who stop performing for the model and start actually communicating with it — are going to run laps around the prompt template collectors.


The conversation is the prompt. The intent is the engineering.

You don't need to learn prompt frameworks.

You need to learn how to think clearly and say what you mean.

Everything else is just packaging.




© 2018 Rich Washburn
