
There’s this thing that keeps happening with AI and it’s driving me up a wall.
The pitch — the original, beautiful, utopian pitch — was that AI would handle the tedious stuff so humans could do the interesting stuff. The robot mows your lawn so you can paint. The algorithm processes your receipts so you can actually run your business. The machine does the boring repetitive work and you, the human, get to be more human. That was the deal.
That is not what is happening.
What’s happening is that companies looked at AI and went “oh sick, we can fire the painters and have the robot paint.” Not because the robot paints well. Not because anyone prefers robot paintings. But because robot paintings are cheaper and you don’t have to give the robot health insurance or listen to it ask for a raise.
And look — I use AI every day. I’m building apps with Steve at nervous.net, and Claude Code is genuinely one of the best tools I’ve ever worked with. It makes me faster. It helps me think through problems. It catches stuff I miss. It’s additive. It takes what I can do and makes it better. That’s the dream. That’s how it’s supposed to work.
But that is a fundamentally different thing than what most companies are doing with it.
Most companies see AI as a subtraction engine. How many people can we cut? How many roles can we eliminate? How much payroll can we shed? The pitch to the C-suite isn’t “your employees will do better work” — it’s “you’ll need fewer employees.” And that framing poisons the whole thing. Because now AI isn’t a tool that helps people, it’s a tool that replaces people. And the people making that decision are never the ones getting replaced.
Meanwhile the internet is drowning in AI slop. AI-generated articles that say nothing. AI-generated images that look like they were made by a blender that ate a stock photo library. AI-generated code that technically compiles but reads like it was written by someone who learned programming from a fever dream. The volume is up and the quality is down and everyone is just kind of… accepting this? As progress?
Sarah — who is smarter than me about most things — keeps pointing out the environmental angle too. These models aren’t free to run. The data centers powering this stuff consume obscene amounts of energy and water. So when a company fires its writing team and replaces them with a chatbot that produces worse content at higher environmental cost, what exactly have we gained? Cheaper bad content that’s also warming the planet? Incredible. Innovation at its finest.
Here’s what I think. And I acknowledge that I’m a guy with a blog that looks like a Sierra game from 1991, so take this with appropriate seasoning. AI should be additive. Full stop. It should make a writer better at writing. It should make a developer better at developing. It should make a designer better at designing. It should give people superpowers, not pink slips.
The tool should serve the human. Not the quarterly earnings report.
When Steve and I use AI at nervous.net, that’s the line. Is this making us better? Is this helping us build things we couldn’t build before? Is this expanding what’s possible? Yes? Great. Is this replacing a person who should be doing this work? Then we need to rethink what we’re doing.
I don’t think AI is evil. I don’t think the people building it are evil. I think the people making decisions about how to deploy it are, in a lot of cases, making the laziest and most harmful choice available to them because it looks good on a spreadsheet. And I think we’ve gotten so used to treating people as line items that the idea of using a tool to help them instead of eliminate them doesn’t even occur to most boardrooms.
The robot was supposed to help you carry the thing. If the robot is carrying the thing and you’re unemployed, someone made a bad decision. And it wasn’t the robot.
Stay sharp, question the spreadsheet, and pet your cats.