When Everyone Has a Superpower
AI gave everyone the same superpower. Now everything looks the same. On homogeneity, creative atrophy, and why imperfection is the next edge.

Same shelf. Different apartment.
There's never been a better time to build things.
You have an idea for an app? You can ship a working prototype before your coffee gets cold. A bug that would've taken half a day to trace? Solved in seconds. A landing page, a business plan, a logo — things that used to require entire teams and weeks of back-and-forth now materialize in minutes.
And it's not just developers. Everyone's building. Every day, new apps pop up like mushrooms after rain. People who never wrote a line of code are launching products. Designers are producing in an afternoon what used to take a week. Writers are publishing at a pace that would've been physically impossible two years ago.
This is genuinely amazing. The barrier between "having an idea" and "making it real" has never been this thin. AI didn't just lower it — it practically removed it.
So here's the thing that's been quietly bothering me: if everyone suddenly has the same superpower, what happens next?
The sameness machine
Look around. Really look.
That startup's landing page looks like the other startup's landing page. That technical blog post reads like the other technical blog post. The code in that pull request follows the exact same patterns as every other AI-assisted pull request you've reviewed this month.
This isn't a coincidence. It's the inevitable side effect of a brilliant feature.
AI is exceptionally good at giving you the best known solution to a problem. Ask it to build a login flow, and you'll get the login flow that works best — based on thousands of existing implementations it has already learned from. Ask it to write a cover letter, and you'll get the cover letter that statistically performs well — based on every successful cover letter ever written.
This is incredibly useful. It's also quietly making everything look the same.
Think about IKEA. It's a masterpiece of problem-solving. Need a bookshelf? Here's the most space-efficient, affordable, well-designed bookshelf that millions of people have already validated. Need a desk? Same thing. Every problem solved practically, beautifully, at scale. Sound familiar?
But walk into anyone's first apartment — the same shelf, the same lamp, the same couch. Everything functional, everything affordable, everything... identical. There's nothing wrong with any single piece. But the room tells you nothing about who lives there.

When everyone asks the same oracle for answers, the oracle gives everyone its favorite answer. And slowly, without anyone noticing, the edges disappear. The quirks vanish. The things that made your solution yours get optimized away in favor of the things that made the most popular solution popular.
We're not converging on excellence. We're converging on consensus.
The problem nobody's solved before
I ran into this firsthand not long ago.
A client of mine had a complex evaluation process — the kind that was entirely manual, built on years of personal judgment, gut feeling, and honestly, sometimes pure whim. The kind of decision-making where you look at the data and just know. No spreadsheet, no algorithm. Just experience and instinct.
They wanted to automate it. Reasonable enough.
So I threw AI at the problem. I spent a month — a full month — trying to get it to work. I fed it rules, but the rules were deeply intertwined, dependent on each other in ways that resisted any clean logic. Every time I pinned down one condition, three others shifted.
The problem wasn't that AI couldn't execute. It executed beautifully — on problems that already had solutions somewhere out there. But this? This was one client's unique mess. Nobody had solved it before. There was no Stack Overflow thread, no best practice, no training data.
AI had nothing to converge on. So it just... spun.
Eventually, I sat down and solved it myself. Set theory. Simple, boring, human set theory. I looked at the overlapping conditions, drew the circles, found the intersections. It wasn't elegant. It wasn't innovative in any grand sense. But it was a solution that didn't exist before I created it.
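For the curious, the approach can be sketched in a few lines. The rule names and record IDs below are made up for illustration — the real rules were the client's — but the idea is the same: express each rule as the set of records it applies to, then enumerate the intersections so the tangled dependencies become explicit regions of a Venn diagram.

```python
from itertools import combinations

# Hypothetical evaluation rules, each expressed as the set of
# record IDs that satisfy it (in practice, computed from data).
rules = {
    "high_revenue": {1, 2, 3, 5, 8},
    "long_tenure":  {2, 3, 5, 7},
    "low_risk":     {3, 5, 8, 9},
}

# Enumerate every pairwise and higher-order intersection: each one
# is a distinct "circle overlap" where the rules interact.
overlaps = {}
for r in range(2, len(rules) + 1):
    for combo in combinations(rules, r):
        overlaps[combo] = set.intersection(*(rules[name] for name in combo))

# The records that satisfy all three rules at once:
core = overlaps[("high_revenue", "long_tenure", "low_risk")]
print(sorted(core))  # → [3, 5]
```

Once every overlap is laid out like this, the "rules that shift when you pin down another rule" stop being mysterious: each combination of conditions is just a region you can inspect directly.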
That's the moment that stuck with me. Not because I'm smarter than AI — I'm obviously not, at most things. But because this particular problem needed something AI fundamentally doesn't do: start from nothing.
The beauty of imperfection
Here's what's interesting: our brains are actually wired to reject the kind of perfection AI produces.
Researchers in cognitive neuroscience have been studying how we respond to music emotionally. Their findings are telling: small imperfections in a performance — slightly irregular timing, a note that lands just a hair late — are perceived by listeners as more expressive. And expressivity, it turns out, is one of the biggest drivers of aesthetic appeal.
It goes deeper than that. Our brains maintain a predictive model of what we're hearing. When something deviates from our prediction — a so-called "prediction error" — it signals that our mental model can be improved. And we like that. We actively seek out environments where we can learn, where we're slightly surprised, where things don't go exactly as expected.
In other words: the brain doesn't crave perfection. It craves the interesting. And interesting, by definition, is what you didn't see coming.
The Japanese understood this centuries ago. They have a word for it — wabi-sabi — a philosophy that finds beauty in imperfection, impermanence, and incompleteness. And they have kintsugi, the art of repairing broken pottery with gold, making the cracks the most visible part of the piece. Not hiding the flaw. Celebrating it.
Now think about what AI does. It takes thousands of inputs and produces the smoothest, most predictable, most statistically average output. The prediction errors disappear. The cracks get filled in. Everything is polished to the point where there's nothing left to surprise you.
It's technically flawless. And that's exactly the problem.
The muscle you're losing
Here's what worries me more than homogeneity.
When you outsource every problem to AI, you get faster. Obviously. But you also stop doing the thing that made you good at solving problems in the first place: struggling with them.
There's a reason junior developers are told to build things from scratch before using frameworks. Not because frameworks are bad — they're wonderful. But because the act of wrestling with a problem, failing, trying a different angle, failing again, and eventually finding your own way through it — that's what builds the muscle.
AI skips that entire process. It gives you the answer. And the answer is usually good. Sometimes it's great. But you didn't get there. You just arrived.
I catch myself doing this. I'll reach for AI on something I absolutely could have figured out on my own. Not because I'm lazy — because it's faster. And every time I do, I'm making a trade: speed now, capability later.
The real danger isn't that AI makes bad work. It makes good work. That's precisely why it's so easy to let it do everything. And one day you're staring at a problem that nobody's solved before — your client's unique mess, your company's weird edge case, your product's unexpected constraint — and the muscle you need to solve it has quietly atrophied.
Not because you lost it. Because you stopped using it.

The ceiling is still yours
I'm not saying stop using AI. That would be absurd. I use it every day. It's the most powerful tool I've ever had access to, and I genuinely believe it's making the world more productive in ways we're only beginning to understand.
But a tool is only as valuable as the person wielding it.
When everyone has access to the same tool, the differentiator isn't the tool — it's what you bring to it that the tool can't generate on its own. Your judgment. Your taste. Your ability to look at a problem sideways and see something that isn't in any training data.
The trends of tomorrow won't come from AI. They never could. Trends come from people who break patterns, not from systems that perfect them. Every meaningful shift in music, design, technology, and culture happened because someone did something that didn't make sense yet. Something that no model trained on the past would ever suggest.
AI will keep raising the floor. It will make the average better. It will make the baseline higher. And that's genuinely great. But the ceiling? The ceiling is still yours.
So use AI. Use it aggressively. Let it handle the things it's brilliant at — the known problems, the established patterns, the best practices. But don't let it fool you into thinking that's all there is.
Because the moment everyone's work looks the same, the world will start craving what's different. And different is something only you can make.
Don't let the honeymoon make you forget that you're the one who fell in love in the first place.
Sources: Research on musical imperfection and aesthetic appeal referenced from "Why imperfection is the secret to making better music" — MusicRadar, based on cognitive neuroscience research at Goldsmiths, University of London.