What Actually Hit Me After the DeepStation Event
- Rich Washburn



I’ve been thinking about last night pretty much since I woke up.
It was one of those events where nothing felt overly flashy, but a couple things stuck in a way that doesn’t really let go. Not because they were perfectly explained, but because they pointed at something real.
There were two talks. Very different angles, but together they kind of formed a bigger picture that I don’t think either one was explicitly trying to make.
The first talk: work is getting compressed

Livio Zanardo kicked things off talking about AI as leverage.
Not in the hype sense. More in the practical sense of stop doing things that don’t need to exist anymore.
A lot of it came down to this idea that work is breaking apart into smaller pieces. Instead of hiring full-time roles, you bring in exactly what you need, when you need it. Fractional work, specialized capability, very targeted execution. That part wasn’t surprising. What was interesting is what sits underneath it.
If you follow that logic out, what’s really happening is that the connection between time, labor, and identity starts to loosen. You don’t need to spend all your time producing in the same way anymore. You don’t need to define yourself by a single role. And that’s where it gets weird.
Because once execution gets easier, once access to capability gets easier, the real constraint isn’t doing the work anymore. It’s deciding what actually matters.
That’s where my question came from
At some point I asked about what an economy based on meaning actually looks like. Not productivity. Not output. Meaning. And the answer kind of circled something important without fully landing on it.
If AI gives you time back, and if systems remove a lot of the friction between idea and execution, then people are going to start orienting around different things. Family. Building. Curiosity. Community. Creating things just because they matter to them. Not because they have to.
I’ve been writing about this already: the shift from being defined by what you do to being defined by who you are. Sitting there last night, it felt less theoretical and more like something that’s actually starting to happen. Not evenly. Not cleanly. But it’s moving.
The second talk went the other direction
Muntaser Syed took it straight into the technical side. Explainable AI. Black boxes. Models making decisions. On the surface, it’s a very different conversation. More engineering, more systems, more detail. But underneath it, it’s about trust.
Because we are already in a world where systems are making decisions that matter. Credit, healthcare, risk, all of it. And if you can’t explain why something made a decision, then you’re not really in control of it.
You’re just accepting the output. That’s a problem.
This is where things clicked for me
Because what he was really talking about, whether he framed it this way or not, is something I’ve been working through for a while: fiduciary intelligence.
The idea that intelligence shouldn’t just be capable. It should be aligned.
Not just giving you answers, but actually operating in a way that reflects your interests, your context, your goals.
Right now most AI systems are basically tools. You ask, it responds. You prompt, it generates. That’s fine. It works. But it’s shallow. The next layer is something different. Systems that understand context over time, that reflect your own patterns back to you, that don’t just agree with you but actually push when something doesn’t make sense. That’s a very different relationship. And you don’t get there without explainability. Without structure. Without being able to see what’s happening under the hood.
So now you’ve got these two threads
On one side, work is getting abstracted. Execution is getting easier. Time is opening up. Identity starts to detach from labor. On the other side, decision-making is getting handed to systems. Which means trust, accountability, and alignment suddenly matter a lot more.
Put those together and you get something bigger than either talk on its own. We’re moving into a world where people have more freedom to decide what matters to them, but the systems they rely on have to be a lot more trustworthy.
That’s the real shift
Most people are still focused on tools. How do I use this? How do I automate that? How do I go faster?
That’s fine, but it’s not the interesting part. The interesting part is what happens when you don’t have to optimize your life around labor in the same way anymore, and the systems helping you operate need to actually be accountable to you. That’s a different kind of world.
Walking out of it
Honestly, the event was just good. Good people, good conversations, a lot of energy in the room. You could feel people trying to figure things out in real time. And I really appreciate that these conversations are happening at all. Because the reality is, most of the world isn’t even here yet. Most people haven’t even meaningfully used AI, and even fewer are thinking about it at this level. The fact that rooms like this exist, where people are actively working through these ideas, matters more than people realize. Especially seeing younger builders thinking this way. That stood out. They’re not just using the tools, they’re thinking structurally about where this is going. We need more of that.
Big thanks to Livio Zanardo and Muntaser Syed for putting real thought into their presentations and actually pushing the conversation forward.
And appreciate Praveen for hosting and bringing everyone together. Always a good room, always good energy. Looking forward to the next one.



