
The AI Haters Have a Point—Now What?



Every time I scroll, there’s another meme about how AI is ruining civilization. The temptation is to roll my eyes, laugh, and move on. But here’s the thing: those memes aren’t just harmless jokes. They’re getting millions of views. And perception at that scale has weight. It shapes how people think, how regulators act, and how companies invest.


We’ve even reached the point where Amazon delivery bots and little lunchbox-sized couriers on college campuses are getting harassed—called “clankers” or “tin skins,” sometimes even kicked over on camera for clout. Now, obviously, a robot can’t feel racism. But here’s the funny-sad part: if it were a person, it would be horrifying. In reality, the victim is just someone waiting on pad thai that now isn’t showing up. Which is still… kinda bad. Check your soul, bud.


This is the cultural soup AI lives in: jokes, memes, and hostility mixed with genuine anxiety. So instead of dunking on the haters, I went a different route: I asked AI itself to scrape the internet and tell me what the criticisms really are. What emerged wasn’t one or two complaints, but an entire ecosystem of unease. And here’s the kicker: a lot of it makes sense.


Take art. Legally, whether AI “steals” creative work is still tangled up in courtrooms. But morally? Most people don’t care about legal nuance. They just know it feels wrong when a machine mimics an artist’s style without permission or credit. That gut-level reaction—“hey, that’s mine”—isn’t going away with clever arguments about fair use. If we want artists on board, we need better answers than “the algorithm doesn’t care.”


Or think about the flood of low-effort AI content online. People call it “slop,” and honestly, they’re not wrong. There’s a ton of mush out there. But slop is a choice, not an inevitability. A hammer can build a cathedral or smash a window—it depends who’s holding it. Mediocrity has always been with us (remember your friend’s first NaNoWriMo novel?). What AI changes is the scale. And when the volume knob goes from “a few bad books” to “the internet clogged with half-baked essays,” it’s easy to see why people recoil.


Jobs are the other criticism you can’t hand-wave away. People are losing work. I’ve talked to developers, marketers, even high school kids trying to get grocery jobs who’ve already felt it. That hurts. Long-term, I believe this technology liberates us from drudgery. A post-labor economy isn’t utopia—it’s a logical step. But transitions are brutal, and telling someone who just lost their paycheck to “hang in there, the future’s bright” doesn’t cut it. If we’re serious about acceleration, we have to be just as serious about building the safety nets and new pathways that make it survivable.


And then there’s the “AI makes mistakes” angle. Hallucinations, overconfidence, weird errors—fair. But let’s not pretend humans don’t hallucinate, misremember, and confidently spread nonsense at family dinners. The tech will keep improving. The real question is whether people learn when to trust it, when to double-check, and when to bring their own judgment to the table.


On the darker side: surveillance capitalism. Palantir and others are already building dossiers with the help of AI. That’s not a conspiracy theory—it’s happening. Technology has always been dual-use. Electricity can power an MRI or an electric chair. The question isn’t whether AI can be misused—it’s whether we put boundaries in place so the worst-case uses aren’t the defaults.


Some criticisms don’t stand up to scrutiny (AI’s environmental impact is a drop in the bucket compared to trucking, aluminum smelting, or even almond farming). But perception is sticky. If people believe AI equals climate doom, that belief shapes narratives, even when the math doesn’t. Optics matter as much as data.


Which brings me back to trust—the thread tying all of this together. Do people trust that AI will respect their work, their livelihoods, their humanity? Do they trust the companies behind it not to cash out their privacy for ad revenue? Right now, the answer is shaky at best. That’s the real challenge—not whether the models can write sonnets or debug code, but whether society believes they’ll be used in ways that lift us up instead of grinding us down.


And here’s where my accelerationist-with-heart side kicks in. I want us to go faster, yes. Because the upsides—personalized medicine, universal tutors, post-labor economies—are too big to stall out. But speed without responsibility isn’t acceleration; it’s careening. Civilization is path-dependent. Take the wrong fork, and you wander in the dark for decades. Take the right one, and you get the true ending—the one where humans and machines co-create something extraordinary.


So laugh at the memes, sure. Even wince when some poor food-delivery bot gets drop-kicked on TikTok. But don’t lose sight of the bigger picture: the future is being written right now, in code and in culture. If we have the courage to face the criticisms head-on instead of brushing them off, we just might steer toward the golden path.





© 2018 Rich Washburn
