The Next Blocks: An AI Systems Map for 2026–2028
- Rich Washburn

- 1 day ago


A lot has happened since OpenClaw hit. And it wasn't that long ago.
I usually do these — this week in AI, this month, whatever. I haven't done one in a while. But enough is in place now that I can say: from where I'm standing, here's what I see coming. And I think most of this lands inside a three-year arc. Let me map it out.
Context Frame
We are not in AGI. We are in pre-AGI industrialization — infrastructure scaling, systems hardening, capability distribution.
At the same time:
- 84% of humanity has never used AI
- 0.04% are actually building with it
That's not a small gap. That's a structural asymmetry. And it means we're simultaneously in early infrastructure phase and extreme capability divergence. That combination is what makes this moment unusual.

The Nine Next Blocks
1. Ambient Capture Layer — The Input Revolution
Right now, using AI requires intent. You have to decide to use it. That changes. The next layer is continuous ingestion — voice, meetings, documents, screens, workflows, decisions — feeding into a pipeline that captures, structures, interprets, and acts. You stop "using AI." It becomes a background process of your life and work.
This is how the 84% crosses over. Not through a better chatbot. Through invisibility.
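The capture → structure → interpret → act pipeline described above can be sketched in a few lines. This is a toy illustration, not a real product API — the stage names, keyword rules, and `Event` type are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """A raw signal from the ambient layer: a meeting, a document, a screen."""
    source: str
    payload: str
    tags: list = field(default_factory=list)

def capture(raw: str, source: str) -> Event:
    return Event(source=source, payload=raw.strip())

def structure(event: Event) -> Event:
    # Toy "structuring": tag events by a trivial keyword rule.
    if "decide" in event.payload.lower():
        event.tags.append("decision")
    return event

def interpret(event: Event):
    # Only tagged decisions trigger downstream action in this sketch.
    return f"log decision from {event.source}" if "decision" in event.tags else None

def act(events: list) -> list:
    actions = []
    for e in events:
        a = interpret(structure(e))
        if a:
            actions.append(a)
    return actions

stream = [capture("We decided to ship Friday", "meeting"),
          capture("Lunch menu attached", "email")]
print(act(stream))  # only the decision produces an action
```

The point of the shape: the user never invokes anything — events flow in continuously, and only some of them surface as actions.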
2. Persistent Personal Memory — The True Moat
The model isn't the moat. Memory is.
Curated, structured, long-term memory — preferences, decisions, patterns, tone, workflows, outcomes — turns a stateless tool into a stateful system that compounds over time. Key principle: relevance over volume.
Models commoditize. Memory differentiates.
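"Relevance over volume" can be made concrete with a toy memory store where retrieval ranks by relevance times a curated weight, so a low-weight memory loses to a relevant one no matter how often it was logged. Everything here (the `MemoryStore` class, the overlap scoring) is an illustrative sketch, not any particular product's memory API:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    weight: float  # curated importance, not raw frequency

class MemoryStore:
    """Toy long-term memory: recall ranks by (query relevance x weight)."""
    def __init__(self):
        self.items = []

    def remember(self, text: str, weight: float = 1.0):
        self.items.append(Memory(text, weight))

    def recall(self, query: str, k: int = 2):
        q = set(query.lower().split())
        def score(m: Memory) -> float:
            overlap = len(q & set(m.text.lower().split()))
            return overlap * m.weight
        ranked = sorted(self.items, key=score, reverse=True)
        return [m.text for m in ranked[:k] if score(m) > 0]

store = MemoryStore()
store.remember("prefers concise weekly reports", weight=2.0)
store.remember("mentioned liking coffee once", weight=0.2)
store.remember("weekly planning happens on Mondays", weight=1.0)
print(store.recall("draft the weekly report"))
```

The coffee memory never surfaces for a report-drafting query; the two memories that actually bear on the task do. That filtering discipline is what makes memory compound instead of clog.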
3. Closed-Loop Execution Systems — The Flywheel
Data → insight → recommendation → execution → feedback → repeat.
This removes the lag between thinking and doing. It turns AI from an advisor into an operational system. As the loop tightens, human and system co-evolve. This is where leverage explodes.
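The flywheel is literally a loop in code: each pass turns data into insight, insight into a recommendation, and execution back into new state. A minimal numeric sketch (the target-chasing example is invented purely to show the structure):

```python
def closed_loop(state, analyze, recommend, execute, rounds):
    """Data -> insight -> recommendation -> execution -> feedback -> repeat."""
    for _ in range(rounds):
        insight = analyze(state)          # data -> insight
        action = recommend(insight)       # insight -> recommendation
        state = execute(state, action)    # execution feeds back into the data
    return state

state = closed_loop(
    state=0.0,
    analyze=lambda s: 10 - s,        # insight: the gap to a target of 10
    recommend=lambda gap: gap / 2,   # recommendation: close half the gap
    execute=lambda s, step: s + step,
    rounds=5,
)
print(state)  # converges toward 10 as the loop iterates
```

Each iteration's output becomes the next iteration's input — that re-entry is what distinguishes a flywheel from a one-shot query.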
4. Agent Orchestration — The Composition Layer
One agent is a tool. A system of agents — research, writing, analysis, routing, execution, QA — bound together by shared memory and task decomposition — is infrastructure. Specialists beat generalists. Systems beat tools.
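A bare-bones sketch of the composition idea: specialist agents, shared memory, and an orchestrator that decomposes a goal and routes each step. The agent names and the dict-as-shared-memory are illustrative choices, not a reference to any real framework:

```python
shared_memory = {}  # shared state every specialist can read and write

def research(task: str) -> str:
    shared_memory["facts"] = f"facts about {task}"
    return "researched"

def write(task: str) -> str:
    facts = shared_memory.get("facts", "nothing yet")
    return f"draft on {task} using {facts}"

def qa(draft: str) -> str:
    # Toy QA gate: reject drafts written without research.
    return draft if "facts" in draft else "rejected"

AGENTS = {"research": research, "write": write, "qa": qa}

def orchestrate(topic: str) -> str:
    """Decompose a goal into specialist steps and route each one."""
    AGENTS["research"](topic)
    draft = AGENTS["write"](topic)
    return AGENTS["qa"](draft)

print(orchestrate("memory systems"))
```

Notice that no single agent is impressive; the value is in the binding — shared memory plus a fixed decomposition turns three narrow functions into a pipeline.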
5. Human Confirmation Layer — The Control Layer
AI proposes. Human confirms. AI executes.
This model wins because it minimizes risk, preserves trust, and increases speed without losing control. The ratio in practice: 90% automated, 10% human directional input. Full autonomy is a product liability problem. Guided autonomy is a competitive advantage.
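The propose → confirm → execute pattern is simple enough to sketch directly. The confirmation policy here (block anything destructive) is an invented stand-in for whatever directional input a real human gate would provide:

```python
def propose(task: str) -> dict:
    return {"task": task, "plan": f"automated plan for {task}"}

def execute(proposal: dict) -> str:
    return f"done: {proposal['plan']}"

def run_with_confirmation(task: str, confirm) -> str:
    """AI proposes, a human gate confirms, and only then does AI execute."""
    proposal = propose(task)
    if not confirm(proposal):              # the 10% human directional input
        return f"halted: {task} awaiting human direction"
    return execute(proposal)               # the 90% automated path

approve = lambda p: "delete" not in p["task"]  # toy confirmation policy
print(run_with_confirmation("send weekly report", approve))
print(run_with_confirmation("delete production data", approve))
```

The structural point: execution is unreachable except through the gate. Autonomy is guided by construction, not by convention.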
6. Data Ownership and Sovereignty — The Structural Layer
Own your data. Own your behavior patterns. Own your memory. Own your outputs and workflows. This breaks the SaaS extraction model. The coming divide isn't between AI users and non-users. It's between people renting intelligence and people owning it.
7. Multimodal Grounding — The Perception Layer
When AI can see your screen, read your documents, and understand your live systems — it stops being an assistant and starts being an operator.
This is the difference between a consultant who reads a brief and one who's actually in the room.
8. Trust and Provenance Layer — The Verification Layer
As agents take more actions, trust becomes the constraint. Who authorized this? What was the reasoning? What changed?
Trust becomes scarcer than intelligence. Build the layer early.
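The three questions — who authorized this, what was the reasoning, what changed — map naturally onto a hash-chained provenance log, the same basic trick an append-only audit trail uses. A minimal sketch, with the entry fields and actor names invented for illustration:

```python
import hashlib
import json

def record(log: list, actor: str, action: str, reasoning: str) -> dict:
    """Append a provenance entry chained to the previous one by hash."""
    prev = log[-1]["hash"] if log else "genesis"
    entry = {"actor": actor, "action": action,
             "reasoning": reasoning, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute the chain: a tampered entry breaks every later link."""
    prev = "genesis"
    for e in log:
        body = {k: v for k, v in e.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

log = []
record(log, "agent:writer", "drafted email", "user asked for follow-up")
record(log, "human:rich", "approved send", "draft matched intent")
print(verify(log))   # True
log[0]["action"] = "deleted inbox"
print(verify(log))   # False: tampering is detectable
```

Each entry answers who (actor), why (reasoning), and what (action); the chaining answers whether any of it was quietly rewritten afterward.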
9. Cognitive Architecture Layer — The Alignment Layer
Memory segmentation. Reasoning vs. action separation. Recursive loops. Decision frameworks.
When you build these in deliberately, AI behaves less like a tool and more like a thinking system attached to a human. That's not a metaphor. That's an engineering choice.
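Two of those choices — memory segmentation and the reasoning/action split — can be shown in miniature. In this sketch (all names and the three-segment layout are illustrative), `reason` is pure and only reads memory, while `act` is the single place side effects are allowed:

```python
# Segmented memory: different lifetimes, different purposes.
memory = {
    "working": [],                    # scratch for the current task
    "episodic": [],                   # record of what was actually done
    "semantic": {"tone": "direct"},   # stable facts and preferences
}

def reason(goal: str) -> list:
    """Pure planning: reads memory, proposes steps, changes nothing."""
    tone = memory["semantic"]["tone"]
    return [f"outline {goal}", f"write {goal} in a {tone} tone"]

def act(step: str) -> str:
    """Execution: the only function permitted to mutate state."""
    memory["episodic"].append(step)
    return f"did: {step}"

plan = reason("status update")
results = [act(s) for s in plan]
print(results)
print(memory["episodic"])
```

Because planning cannot mutate state, you can inspect, log, or veto a plan before anything happens — which is exactly the property the confirmation layer above depends on.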
The Connective Tissue — The Real Game
There needs to be a layer that standardizes interaction between agents, memory systems, tools, and humans. That handles context passing, identity, permissions, task routing, and state management.
Working name: Cognitive Infrastructure Layer (CIL)
Think of it not as a model and not as an app — but as the protocol that connects them.
I've said before this might become a new kind of TCP/IP layer. I still think that. Without this layer, everything stays fragmented. Agents don't scale. Memory is siloed. Systems break under complexity.
With it — systems become interoperable. Intelligence becomes composable. Workflows become programmable.
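No such protocol exists yet, but its core object would be something like a standard message envelope. A speculative sketch of what a CIL-style envelope might carry — sender identity, routing, permissions, and context — with every field name invented here:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Envelope:
    """A toy CIL-style message: identity, routing, permissions, context."""
    sender: str
    recipient: str
    task: str
    context: dict = field(default_factory=dict)
    permissions: set = field(default_factory=set)
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def route(env: Envelope, handlers: dict) -> str:
    """Enforce permissions at the protocol layer, then dispatch."""
    if env.task not in env.permissions:
        return f"denied: {env.sender} lacks permission for {env.task}"
    return handlers[env.recipient](env)

handlers = {"agent:summarizer": lambda e: f"summary of {e.context['doc']}"}

ok = Envelope("agent:planner", "agent:summarizer", "summarize",
              context={"doc": "Q3 notes"}, permissions={"summarize"})
bad = Envelope("agent:planner", "agent:summarizer", "delete",
               context={"doc": "Q3 notes"}, permissions={"summarize"})
print(route(ok, handlers))
print(route(bad, handlers))
```

The design point: when identity, permissions, and context travel inside every message, any compliant agent can interoperate with any other — which is what "protocol, not app" means in practice.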
The System Loops
These nine blocks don't sit in isolation. They're self-reinforcing:
Loop 1: More capture → better memory → better outputs → more trust → more delegation
Loop 2: Insight feeds action → action generates data → data sharpens insight
Loop 3: Owned systems → higher trust → deeper integration → higher value
Loop 4: More grounded AI → better decisions → more permission → more autonomy
Each loop feeds the others. Intelligence emerges at the edge of structured complexity, not in chaos and not in rigid order.
What This All Collapses Into
Not a better chatbot. Not a single smarter agent. Not a bigger model.
A personal, persistent, multi-agent cognitive system that observes, remembers, reasons, advises, executes, and learns — aligned to a human or an organization.
The final distillation:
1. Invisible input (capture)
2. Persistent identity (memory)
3. Continuous loops (execution)
4. Composable agents (systems)
5. Owned data (control)
6. Grounded perception (context)
7. Verified outputs (trust)
8. Structured cognition (architecture)
9. Unified protocol layer (connective tissue)
One line: AI is moving from stateless tool → stateful system → personal infrastructure.
That's the arc. And we are very early in it. Most of what I just described either exists in prototype or is actively being built right now. The assembly is the work. The people doing that assembly — understanding how the blocks fit, where the loops close, what the connective tissue needs to be — those are the people who will operate at a different level for the next decade. Three years. Maybe less. Get in before the next jump.



