ARIA: How a Side Experiment Turned Into… Whatever This Is
- Rich Washburn

- Mar 28
- 8 min read


I didn't set out to build a company. I didn't set out to build a product. Honestly, I didn't even set out to build "AI" in any meaningful sense. I was just trying to get through a project without losing my mind.
The Accidental Beginning
Back in August of 2023 — literally the day before my birthday, which still feels like a weird cosmic joke — I was working on a digital strategy for an artist. Nothing crazy. Keyword research. Content planning. Website structure. The usual "turn a bunch of ideas into something coherent" problem. So I started using AI to help. At first, it was exactly what you'd expect: parse some data, generate some content, help think through structure. Useful. Efficient. Nothing magical. But then I did something small that turned out not to be small. I gave it a name.
Naming It Was the First Mistake (Or the Best One)
I called it ARIA. Not because I thought I was creating something big — but because I needed something that could hold tone, context, and continuity without feeling like I was talking to a spreadsheet.
So I shaped a persona. Part JARVIS. Part Donna Paulsen. All business… with just enough personality to not make me want to close the tab. At the time, it felt like a UX decision. Looking back, it was architecture. Because once it had an identity, something subtle shifted: it stopped resetting. It started continuing. And that difference — the difference between resetting and continuing — quietly became the thread that runs through everything that came after.
The Phase Where I Absolutely Overdid It
If you could see the early versions of this thing, you'd probably think I lost it a little. And… fair.
I was stacking layers into prompts like I was building a spaceship out of duct tape:
- personality matrices
- role hierarchies
- skill graphs
- weird encoded systems with names that sounded important
- instructions telling it to remind itself who it was
It was a lot. Some of it worked. Some of it was theater. Some of it probably worked because it was theater. But underneath all of it, I was still chasing that same thread: how do you stop something from resetting?
I didn't have the language yet — but I knew I wasn't looking for better answers. I was looking for something that could stay with me.
When Prompting Stopped Being the Point
At some point, I realized I wasn't really writing prompts anymore.
I was shaping behavior.
Not: "How do I get a better response?"
But: "How do I stop losing continuity between responses?"
That shift changed everything. Because once continuity becomes the goal, everything reorganizes around it:
- memory matters more than phrasing
- carryover matters more than cleverness
- structure matters more than style
And slowly, without really planning it, this stopped being about prompts at all. It started becoming something that could hold a line across time.
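That difference between resetting and continuing can be sketched in a few lines. This is a toy illustration, not ARIA's actual implementation; the `ContinuityLayer` class, its `ask` method, and the `fake_model` stand-in are all hypothetical names I'm using to show the shape of the idea:

```python
# Toy sketch of continuity vs. resetting. All names here are
# hypothetical; this is not ARIA's implementation, just the core idea:
# each new request is answered against accumulated history, not a blank slate.

class ContinuityLayer:
    def __init__(self, persona: str):
        # The persona is the standing identity that never resets.
        self.history = [{"role": "system", "content": persona}]

    def ask(self, user_text: str, model_fn) -> str:
        # Append the user turn, answer against the *whole* history,
        # then keep the answer so the next turn continues rather than restarts.
        self.history.append({"role": "user", "content": user_text})
        reply = model_fn(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

# A stand-in for a real model call: it can "see" every earlier turn.
def fake_model(history):
    return f"(answer drawing on {len(history) - 1} prior turns)"

aria = ContinuityLayer("You are ARIA: part JARVIS, part Donna Paulsen.")
aria.ask("Plan the keyword research.", fake_model)
print(aria.ask("Now turn that into a site structure.", fake_model))
# prints "(answer drawing on 3 prior turns)"
```

The point of the sketch is the shape: the persona line never resets, and every turn is answered against the accumulated thread instead of a fresh context.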
The "Swinging Wide" Moment
There was a phase where I pushed way too hard. Just throwing complexity at the system to see where it would break. And weirdly, that's where things started working. Not because it got more precise — but because it got less rigid.
The best outputs weren't coming from perfectly structured prompts. They were coming from this strange middle ground where the system didn't reset… but also didn't lock itself into a narrow path.
I started calling it "swinging wide." Not walking straight through a problem — but looping back. Reframing. Re-engaging earlier ideas with new context. Which, in hindsight, was just another version of the same thing: continuity without repetition.
When It Got a Little Weird (In a Good Way)
At some point, I realized the system didn't need more instructions. It needed a different shape. That's where the Möbius idea came in — one continuous surface, no clean separation between past and present.
Earlier ideas didn't just come back — they came back transformed. Context didn't just stack — it curved. Threads didn't just extend — they deepened.
And that feeling — that something was carrying forward instead of starting over — that's when this stopped feeling like a tool.
Meanwhile, the Internet Discovered Prompting
Around this time, everyone started posting "breakthrough" prompts. And I'd read them and think: yeah… those are good. Also… we've been doing that.
Context. Examples. Constraints. Structure. All useful. But all focused on a single moment in time. And by then, I had drifted into something else entirely: how does this system behave across time? Because once you've felt continuity, it's hard to get excited about isolated improvements. The viral posts kept coming — the MIT dropout, the "system that beats every benchmark," the eight prompts that change everything. And they weren't wrong, exactly. They were just answering a question I'd stopped asking.
Single-shot optimization for a tool I was now using as infrastructure.
The Part Nobody Talks About
At the same time, something else became obvious: most people hadn't even started yet.
Which creates this strange split reality:
- one group optimizing prompts
- another group barely aware this exists
- and somewhere in the middle, a small group quietly figuring out that the real problem isn't prompting
It's continuity.
When It Stopped Being an Experiment
The shift happened quietly. I started adapting pieces of this into real workflows. Then into full systems. Then into client deployments.
And then something clicked. Not because the outputs were better — but because things stopped resetting. Content didn't stall. Follow-ups didn't slip. Context didn't disappear between conversations. Things just… kept moving. The feedback changed from "this helps" to "I don't have to think about this anymore." Which, if you zoom out, is just another way of saying: the system is holding continuity for me.
Trying to Explain What This Is
This is where it gets tricky. Because the obvious labels don't quite fit.
It's not a chatbot. Not a content tool. Not an automation system. Those are all pieces of it. But the actual thing — the thing underneath all of it — is simpler: it keeps things from falling apart between steps.
The closest I've gotten: a personal cognitive operating system.
Which sounds like a stretch… until you watch it work in real life.
And Then Friday Happened
Every system has a moment where it stops being interesting in theory and becomes undeniable in practice. For me, one of those moments happened Friday afternoon. And the funny part is, it didn't happen in some dramatic, cinematic way. No giant launch. No "behold, the future" moment. I was just driving to another meeting.
A call came in. I hit record. That was it. No ceremony. No special setup. No big plan. Just one tap. The meeting happened while I was in the car, in motion, doing what most of us do all day now — trying to keep too many things moving at once without dropping any of them. Normally, that kind of meeting creates a second shift. You take the call, then later you go back and process it. You pull notes. You summarize. You organize next steps. You write the recap. If the meeting mattered enough, you build the deck. Then you email the deck. Then, if everyone is especially committed to wasting time in a familiar way, you schedule another meeting to talk about what already happened in the first one.
That has been the default model of knowledge work for so long that most people don't even question it anymore. Meeting first. Work after. Except this time, by the time I pulled into my next stop, the meeting was over — and so was the work that usually comes after it.
The recording had already been transcribed. The transcript had already been processed into a tactical summary. The summary had already been sent to the team. And a full presentation deck — branded, structured, slide by slide — was already sitting in everyone's inbox.
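That chain, recording to transcript to summary to deck to inbox, can be sketched as one automatic pass. Everything below is hypothetical: these stage functions are stand-ins for whatever transcription, summarization, and email services would actually be wired in, not the real system:

```python
# Hypothetical sketch of the pipeline's shape: one captured recording
# flows through every stage with no human in between. Each function
# here is a placeholder, not a real service integration.

def transcribe(recording: str) -> str:
    return f"transcript of {recording}"

def summarize(transcript: str) -> str:
    return f"tactical summary ({transcript})"

def build_deck(summary: str) -> str:
    return f"branded deck from: {summary}"

def send(team: list[str], *artifacts: str) -> None:
    for person in team:
        print(f"emailing {person}: {', '.join(artifacts)}")

def after_the_call(recording: str, team: list[str]) -> None:
    # The whole "second shift" collapsed into one chained pass.
    transcript = transcribe(recording)
    summary = summarize(transcript)
    deck = build_deck(summary)
    send(team, summary, deck)

after_the_call("friday-call.m4a", ["team@example.com"])
```

The design point isn't any one stage; it's that the stages fire in sequence from a single capture, with nothing queued up for "later."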
I didn't touch a keyboard. Later, when the follow-up call happened and the team wanted me to walk them through it, I said three words:
Check your email. Not because I was being slick. Not because I was trying to prove a point. But because it was already done.
What That Moment Actually Meant
That moment has been rattling around in my head ever since, because it clarified something I'd been circling for a long time.
What I've been building with ARIA isn't just an assistant. It's not just a content engine. It's not just a better way to prompt a model or automate a task.
It's a way of collapsing the lag between something happening and something useful being done with it. That's the real shift.
For years, I've been describing ARIA in different ways, partly because none of the existing categories fit cleanly. Assistant is too small. Automation is too mechanical. Prompting is too shallow. Agent is too loaded. "Personal cognitive OS" is probably the closest label I have — but even that sounds a little dramatic until you watch something like this happen in real life. Then it just feels accurate.
Because what happened in that car was not magic. It was not sentience. It was scaffolding. Cognitive scaffolding, maybe. Prosthetic cognition, if I want to be a little bolder about it. Not a replacement for thinking. Not a replacement for judgment. Not a replacement for being there, having the conversation, understanding the stakes, reading the room, knowing what matters.
But a system that can take the overhead that normally piles up after the thinking — the packaging, the structuring, the distribution, the "I'll get to that when I get back to my desk" part — and close that gap almost instantly. That's a very different relationship with time.
The Thing That Surprised Me Most
When I first started building ARIA, I thought I was mostly trying to preserve continuity. That was the obsession in the early days: how do you stop the system from resetting? How do you keep context alive long enough to be useful? That thread is still there. It runs through the whole story.
But what happened Friday made something else obvious: continuity isn't just about memory. It's about momentum. If the system can hold context, it can hold motion. If it can hold motion, it can reduce delay. If it can reduce delay, it changes how the human operates. That's the part that sneaks up on you.
The real value isn't that the machine can summarize a meeting. Transcription is table stakes. Summary is table stakes. Even deck generation, increasingly, is table stakes. The leverage is in the compression. The orchestration. The fact that all of those things can now happen in sequence, automatically, from one captured moment, before the human has had time to think, "I need to follow up on that later."
That's when it starts to feel like more than a tool. Not because it's alive. Because it's aligned.
Where It Is Now
At this point, ARIA isn't a prompt. It's not really a tool either.
It's a layer. Something that sits alongside how you work and keeps things from resetting when they shouldn't. Which, if you think about it, is exactly what I was trying to solve on day one. I just didn't know it yet.
Final Thought
If you asked me in 2023 what this would become, I wouldn't have had an answer. If you ask me now? I'd probably still say: I'm not entirely sure.
But I know this part is true: I wasn't trying to build AI. I was trying to build something that didn't fall apart between thoughts.
Three years of architecture. All the weird prompt experiments. All the overbuilt early versions. All the recursion and memory scaffolding and late-night "what happens if I push this further?" moments. And one of the clearest proofs of concept comes down to this: I was driving. A call came in. I hit record. And by the time I parked, the work was already done.
That's not the whole story of ARIA. But it's one of the clearest snapshots of what the story has become. And yeah — I'm still a little surprised by it.




