
Human in the Loop, Human in the Crosshairs



Let’s stop dancing around it. For the last couple of weeks, I’ve been watching this open-source agent ecosystem do what open source always does when something powerful lands in its lap: it goes feral. ClaudeBot, Maltbook, autonomous negotiation, agents coordinating, people duct-taping workflows together and seeing what breaks. And most of the conversation has been about autonomy.


Is this safe? Is this dangerous? Is this the gray goo phase?

That’s interesting. It’s not the point. The point is influence.


For thirty years, we’ve treated cybersecurity like a perimeter problem. Firewalls, endpoints, identity layers, zero trust. We built walls because we assumed the walls were the target. But anyone who’s actually done offensive security work knows better. You don’t hack systems. You hack people.


Phishing works because someone clicks. Escalation works because someone approves. Social engineering works because someone believes. That’s never changed. What’s changed is that persuasion just got automated.

An autonomous agent today can research a target in seconds. It can scrape LinkedIn, infer tenure, detect tone shifts, mirror communication style, and adjust strategy mid-conversation. It doesn’t get tired. It doesn’t get impatient. It doesn’t misread social cues. It optimizes.

And while all of that has been happening, we’ve been reassuring ourselves with one phrase:

“Keep a human in the loop.”

It sounds responsible. Ethical. Controlled. But step back and look at it structurally.


If AI automates most of a workflow and a human signs off at the end, that approval moment becomes the highest leverage point in the system. It’s the choke point. It’s the decision that moves money, changes policy, grants access, approves discounts, overrides safeguards. That’s not a safety feature. That’s a target.


Human in the loop has quietly become human in the crosshairs.

Now here’s where this gets real. For years, we’ve run phishing simulations inside organizations. I’ve done it. You probably have too. We send fake emails offering a free iPad if you click the shiny green button. We test who clicks. We retrain. We explain, again, that no one is giving away electronics for fun.


Why do we do that? Because humans need conditioning to recognize engineered manipulation. But phishing emails are blunt instruments.


They’re the caveman version of persuasion. Misspellings, weird links, obvious urgency. What’s coming next doesn’t look like that. It looks reasonable. It references your recent post. It matches your tone. It understands your internal workflow. It frames the request as helpful, efficient, aligned. And if it senses hesitation, it adapts.


The help desk rep clearing tickets. The finance manager approving invoices. The sales director negotiating margin. They’re not facing a spam blast. They’re facing an optimization engine. That’s an uneven fight.


So if we insist — and we should — on keeping humans in the loop, then we need to stop pretending that’s enough. That human waypoint is now a critical security boundary. It needs training. It needs tooling. It needs reinforcement.


The person approving the final step in an AI-assisted workflow should be trained not just to spot bad links, but to recognize persuasion patterns. Artificial urgency. Subtle authority cues. Policy deviation wrapped in convenience. “Just this once” logic. And more importantly, they need backup.


If AI is being used to influence, then AI needs to be used to defend.

A sidekick that flags abnormal pressure. That notices when language is too perfectly tuned. That highlights when a request is drifting from established patterns. Not to replace the human, but to level the field.
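To make that sidekick concrete, here’s a minimal sketch in Python: a keyword-level pressure scorer that checks an inbound request for stacked persuasion cues (the artificial urgency, authority framing, and policy-deviation patterns above) before the human clicks approve. Everything in it is illustrative, the cue lists, the threshold, the names. A real defender would baseline normal request traffic and use a language model to catch tuned phrasing, not hand-written regexes. But the shape of the idea is this simple.

```python
import re
from dataclasses import dataclass

# Illustrative cue lists. A production tool would learn these from
# labeled approval traffic rather than hand-curating regexes.
URGENCY_CUES = [
    r"\bbefore (end of day|eod|close of business)\b",
    r"\bjust this once\b",
    r"\btime.sensitive\b",
    r"\bright away\b",
]
AUTHORITY_CUES = [
    r"\b(the )?(ceo|cfo|vp|legal) (asked|approved|signed off)\b",
    r"\bper (management|leadership)\b",
]
POLICY_CUES = [
    r"\boverride\b",
    r"\bexception\b",
    r"\bskip the (usual|normal) (process|review)\b",
]

@dataclass
class PressureReport:
    score: int          # number of distinct cues that fired
    hits: list[str]     # which patterns matched, shown to the approver

def score_request(text: str) -> PressureReport:
    """Scan an inbound request for stacked persuasion cues."""
    hits = [p for p in URGENCY_CUES + AUTHORITY_CUES + POLICY_CUES
            if re.search(p, text, re.IGNORECASE)]
    return PressureReport(score=len(hits), hits=hits)

if __name__ == "__main__":
    msg = ("Per leadership, can you override the discount cap just this once? "
           "It's time-sensitive, we need it before end of day.")
    report = score_request(msg)
    if report.score >= 2:  # arbitrary demo threshold
        print(f"Hold for second review ({report.score} cues): {report.hits}")
```

Crude as it is, even a scorer like this changes the approval moment from a lone judgment call into a checked one. The human sees why the request is being flagged, not just that it was.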


Because here’s what I’m increasingly convinced of:

The next major breach in the AI era won’t start with ransomware.

It will start with a reasonable request. A discount approved. An exception granted. An override rationalized. And it will pass through a human checkpoint because that’s where we told the system to leave control.


We’re watching this evolution happen in weeks instead of years. That’s the gift of this moment. The time-lapse view. Capability surges, and the security implications surface almost immediately. This is one of those threshold shifts. Not a feature release. Not a hype cycle. A change in the battlefield.


The perimeter isn’t just the network anymore. It’s the human decision layer.

And if we don’t start treating that layer like infrastructure — training it, reinforcing it, augmenting it — we’re going to learn the lesson the hard way.


I’d rather call it now. Before the headline does it for us.


