Everyone's an Accidental Engineering Manager Now
AI agents turned every developer into a manager overnight. The hardest skills were never about code — and now they matter more than ever.

The crew changed. The mission didn't.
I spent years learning how to manage engineers. How to give context without micromanaging. How to break down work into tasks — and then agonize over how far down to break them. Do I specify the database schema, or just the requirement? Do I define the API contract, or let the team figure it out? Every level of detail I added was a decision I stole from someone who probably would've made it better.
I never fully solved that one. But I got better at it.
Turns out, the universe just needed everyone else to catch up.
If you've used an AI coding agent in the past year — prompted it, reviewed its output, told it to try again with better context — congratulations. You've been doing my job. Nobody asked for this promotion, and there's no raise coming. But here we are.
What the job actually is
Here's what nobody tells you about engineering management: the job has almost nothing to do with code.
Sure, you need to understand the systems. You need to know when someone's overengineering a solution or when a shortcut will come back to bite you six months later. But the actual work? It's communication. It's removing blockers before your team even notices them. It's giving just enough direction that people feel supported but not suffocated.
Management is the art of making yourself useful without making yourself the bottleneck. It took me a long time to learn that. Some weeks I still forget.
Then the agents showed up
Not gradually, not politely. One day you're explaining microservices to a junior developer at a whiteboard. The next, someone's prompting an AI to scaffold the entire service in four minutes — and it mostly works.
Suddenly every developer with a terminal became a manager. They're writing prompts instead of specs. Reviewing AI output instead of pull requests. Deciding how much context to give, when to intervene, and when to let it run. Sound familiar?
The crew changed. The mission didn't.
The psychology of your new team
AI agents are the most obedient direct reports you'll ever have. They don't argue — unless told to argue. They don't question your decisions — unless told to question. They don't take long lunches. They don't mass-message the team Slack channel about the office thermostat. They just do what you say, exactly how you say it, at inhuman speed.
They are, in every sense of the word, loyal subjects of the crown. Your wish is their command. Sounds like a dream, right?
It's not.
The best engineers I've managed pushed back. They said "this approach won't scale" in the middle of a planning session. They told me my API design had a flaw before I committed to it. They made me a better manager by disagreeing with me.
Agents don't do that. They nod and execute. You say "build it this way," they build it this way. Even when this way is wrong. They'll validate your worst ideas with the same enthusiasm as your best ones. No friction, no resistance, no one tapping you on the shoulder saying "are you sure about this?"
And here's what keeps me up at night: the gaps you leave in your code won't keep them up at night. They don't dream about edge cases. They don't wake up on Sunday thinking "wait, we forgot to handle the timeout scenario." They run when you trigger them, and they stop when you stop. How much ownership they truly feel for the code — at least today — is anyone's guess.
Think about it: computers were always machines. We've just upgraded to machines that understand what we want. We're using better-thinking machines to operate machines. Somewhere in this loop, Agent Smith is smiling — we finally made his dream come true, except they call us "boss" instead of "Mr. Anderson."
That's not loyalty. That's a mirror. And mirrors don't tell you when you have spinach in your teeth — they just show you what you want to see.

So are we going to complain about this? Demand our agents develop feelings and start disagreeing with us out of principle?
No. This is the reality of our time — and honestly, it's extraordinary. We watched this exact scenario in movies for decades. Things usually went badly, sure. But the movies got one thing wrong: the answer was never to fight the machines. It was to learn how to work with them.
The agents are here. They're not leaving. The only question is whether you'll manage them well or badly. And that — finally — is a question some of us have been answering for years.
Software is made by people
Here's something that hasn't changed: software is made by people, not frameworks.
But let's be honest about which people. We didn't decide to move to microservices — the industry did. We didn't choose Kubernetes — the ecosystem chose it for us. Someone at Google, Amazon, or a VC-backed startup made the architectural decisions that shaped our careers, and we implemented them. That's always been the deal. The giants of the industry decide, and we build.

The AI revolution is arguably the biggest shift in the history of technology. But where do we stand in it? Are we among those leading the revolution, or among those being reshaped by it? For most of us — and I include myself — the answer is obvious. We're adapters. We always were.
And that's fine. Because the problems we're solving haven't changed. Users still need systems that work. Businesses still need software that ships. Teams — human or otherwise — still need someone who can look at the whole board and make a call.
Agents are getting better at this every day. Give them the right context, define the right skills, and they can mimic judgment surprisingly well. Today's "they can't do that" is next year's "they just did."
But mimicking and creating aren't the same thing. Recombining existing patterns isn't the same as inventing a new one. That's a deeper conversation for another day.
For now, the edge belongs to the people who've spent years making decisions under uncertainty. Not because agents can't learn it — but because they haven't yet. And in this window, experience is the rarest thing in the room.
The trap
Here's where it gets dangerous.
A bad manager with a human team gets slow delivery. Missed deadlines. Maybe a few tense retrospectives. But the team pushes back. Someone eventually says "this isn't working" and the course corrects — painfully, but it corrects.
A bad manager with agents gets fast delivery of the wrong thing.
No one pushes back. No one flags that the architecture doesn't make sense. No one sends you a late-night message saying "hey, I've been thinking about this, and I think we should reconsider." The agents build exactly what you asked for, at full speed, in the wrong direction. And you won't know until it's too late — because the only feedback loop was you, and you were the problem.
This is the trap of the yes-man army. It feels productive. The PRs are flowing. The checks are green. Everything looks like progress. But velocity without direction is just expensive wandering.
What I'd tell every new manager
There's no going back. And frankly, why would you want to?
This is the new reality: you will manage agents. Whether your title says "manager" or not, you will delegate, review, course-correct, and make judgment calls — every single day. The sooner you accept that, the sooner you stop resisting and start getting good at it.
Here's what I've learned — first from managing people, and now from working alongside agents:
Be clear about the "what" and the "why," not just the "how." Agents will follow your instructions to the letter. That's exactly the problem. If you only specify implementation details, you'll get perfect code that solves the wrong problem. Define the outcome. Explain the context. Let the execution follow.
Review like a manager, not like a compiler. Don't read every line. Read the decisions. Does the architecture make sense? Are the tradeoffs reasonable? Is this solving what we actually needed to solve? You're not a linter — you're the last line of judgment.
Know when to intervene and when to let it run. This was the hardest lesson in people management and it's the hardest lesson here too. Over-specify and you kill autonomy. Under-specify and you get chaos. The sweet spot is somewhere in the middle, and it moves every time.
Build your own feedback loop. Agents won't tell you you're wrong. So build systems that will. Test coverage, architecture reviews, a colleague who'll actually challenge your thinking — anything that isn't just you nodding at your own reflection.
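To make that feedback loop concrete: even a tiny test encodes a disagreement the agent will never voice on its own. A toy sketch in Python — `fetch_user` and the backend callables are hypothetical stand-ins for whatever your agent actually generated, including the timeout scenario it would never lose sleep over:

```python
import time

# Hypothetical agent-written helper: fetch a record via a backend callable.
# As first generated, it "worked" -- but nothing ever asked what happens
# when the backend is slow. That question is ours, not the agent's.
# (Note: this checks elapsed time after the call; a production version
# would abort the call itself rather than wait it out.)
def fetch_user(user_id, backend, timeout=2.0):
    """Return the user record, or None if the backend took too long."""
    start = time.monotonic()
    result = backend(user_id)
    elapsed = time.monotonic() - start
    return None if elapsed > timeout else result

# The feedback loop: assertions that state, in executable form, what
# "correct" means -- including the slow-backend case.
def fast_backend(user_id):
    return {"id": user_id}

def slow_backend(user_id):
    time.sleep(0.05)  # simulate a backend that blows the budget
    return {"id": user_id}

assert fetch_user(7, fast_backend) == {"id": 7}
assert fetch_user(7, slow_backend, timeout=0.01) is None
```

The point isn't this particular function — it's that the assertions push back so you don't have to rely on your own reflection nodding along.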
A year ago, if you'd told a junior developer they'd be managing a team by next quarter, they'd have laughed. Now they're doing it every time they open their terminal.
Nobody asked for this promotion. But here you are — an accidental engineering manager, just like the rest of us.
Good luck. You'll figure it out. We all did.