Stop forcing AI tools on your engineers
What software engineers really need from their managers right now
There is A LOT of pressure in tech.
Every couple of days a new article pops up claiming that engineers are X% more productive, or that company Y laid off hundreds of developers because they’re no longer needed.
CEOs and executives read this and think:
“Why don’t I see more features released in our company? Our engineers are outdated and they don’t adopt AI tools fast enough. Let’s become an AI-first company!”
Well, even if that’s not exactly how it plays out, the result is the same: engineers everywhere are pressured to ship faster, adopt the latest tools, and become “AI-first”.
And your engineers need their manager’s help in that crazy battle.
Let’s start by using my favorite mental model - Inversion.
If I wanted to make sure that AI tools adoption would end in a disaster, what would I do?
1. Force it
Cursor is the hot thing right now. So I would forbid all engineers from using IntelliJ/PyCharm.
Also, every new internal tool would have to be built with Bolt or Lovable.
Oh, and we MUST have agents. No more simple API calls. Do we need weather data? Create a ‘Weather agent’ that will get us the info.
The agent thing really drives me crazy. If someone uses the word ‘agent’ vaguely in a conversation, I immediately stop listening to them. Don’t call a switch statement with API calls an agent. There’s nothing wrong with saying ‘code’.
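To make that concrete, here is a minimal sketch (the function, intents, and endpoint are all hypothetical) of what often gets demoed as a ‘weather agent’. It’s a switch over plain API calls - in other words, code:

```python
import datetime

import requests  # any HTTP client would do


def handle(intent: str) -> str:
    """A so-called 'agent' that is really just a switch over API calls."""
    match intent:
        case "weather":
            # One ordinary HTTP request to a (hypothetical) weather API.
            resp = requests.get(
                "https://api.example.com/weather",
                params={"city": "Berlin"},
                timeout=5,
            )
            return resp.json()["summary"]
        case "time":
            return datetime.datetime.now(datetime.timezone.utc).isoformat()
        case _:
            return "No handler for that intent."
```

No planning, no autonomy, no model deciding which tool to call - just a dispatch. Useful? Sure. An agent? No.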
2. Give AI-adoption ratings
Let’s start praising people who adopt the most tools, and use them the most.
We can create a leaderboard based on tokens used! Oh, and let’s tie promotions to it - if you didn’t use 10M tokens in a month, you can’t be promoted!
Finally, we have a measurable metric for the next performance review!
You think I’m kidding, but I’ve seen posts like this on LinkedIn, bragging about how the ‘best’ employees are eating through the tools budget. This is so nuts. Why are we back to praising usage instead of outcomes? Let’s measure lines of code again! Goodhart’s law at its best.
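And if you doubt how fast such a metric would be gamed, here’s a hedged sketch (assuming the OpenAI Python SDK, openai>=1.0 - any LLM API has the same problem) of how a ‘top performer’ climbs a tokens-used leaderboard without shipping anything:

```python
from openai import OpenAI  # assumed SDK; the trick works with any provider

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Burn tokens in a loop: the leaderboard number goes up,
# the product does not improve one bit.
for _ in range(1_000):
    client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Repeat the word 'token' 500 times."}],
    )
```

The moment usage becomes the target, usage stops telling you anything about outcomes.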
3. Kill the ‘old’ way of writing code
Engineers who write code line by line should be ridiculed. Everyone should develop a habit of asking the AI to do things. If you catch someone updating a log message manually, reprimand them in front of everyone.
AI should write your tests. It should debug your production incidents. It should design your screens. No more manual work in our company!
Anton, aren’t you overreacting?
The result of all three, as I hope you’ll agree, would be quite a disaster.
Let’s think about it - what is your goal?
“Become an AI-Native company”?
“Adopt the best tools”?
“Release faster”?
“Show investors you are adapting to AI, to raise your stock price”?
If it’s the last one, there’s nothing I can do about it. But if it’s one of the first three, hopefully the actual goal behind it is to serve your customers better and grow your business.
Why, then, would you focus on tools and not on outcomes? If you optimize for tool adoption, don’t be surprised if you end up with a slower pace and a complete mess in a year.
Your app is probably already in the enshittification phase - maybe you should focus on that first?
So what SHOULD EMs do?
Look, I’m definitely NOT saying that we should ignore those tools. There ARE engineers who are old-fashioned and resist without giving it a real chance. You do have a responsibility to help your team adapt to the new world (and it IS a new world).
But there are so many better ways to do it:
GIVE time to explore
There is no zero-cost adoption in existing companies. You are not building a todo-list SaaS, where you can spin up a full solution in 30 minutes using Cursor.
Adapting any AI tool to an existing (and big) codebase is hard, and it takes time. You need to experiment with it during actual work, evaluate what works better, and constantly tweak the flow.
This will hurt in the short run - but you will see a speed increase in the long run, assuming you let people choose the right tools and the right use cases.
Take a couple of enthusiastic engineers, and let them lead this effort. Reduce their workload by 20%, and ask them to play with the latest toys during their work.
Or, take a whole week/month off the roadmap to explore.
Share what worked in YOUR org
Don’t talk about things from other companies - in 90% of cases they won’t be relevant to you. Instead, understand what worked great in YOUR company, and share that with everyone.
I promise you, most engineers are not resisting just for the sake of it, or out of fear for their job security. It’s because you can’t really tell Claude to ‘improve the performance of our webapp’ and expect something useful.
Yes, you can and should use it to help you debug, analyze, plan, and brainstorm. And it IS getting better at writing code in complex codebases (the todo-list approach taken in Cursor 1.2 is a great one, imo).
But it’s still early days.
Do you honestly think that ALL of your engineers are worse than those at the companies you read about in the news? I hope not - and I hope you know you have some great engineers.
When your engineers see something that works, they’ll adopt it themselves. Trust their judgment.
Give people time to adopt it their way
The world is not ending. It’s not ‘AI or die’.
In my opinion, instead of focusing on what tools your engineers use, you should care about what work they deliver.
So if you have 6 engineers - 3 who use AI tools and 3 who don’t - and the first 3 are doing a much better job (faster, higher quality, whatever you want to measure), understand why. And if they’ve found a great way to use AI that works - awesome! Share it with the other 3.
If you think it’s not possible in a big org - read this article by Monday, a $15B public company. They had the guts to take 5 weeks to explore, and did it by:
Weekly lectures by internal AI champions; not theory, but real use cases and workflows they already used.
Hands-on training and workshops led by peers.
Weekly demo sessions, with 127 (!) submissions overall - one simple rule: show what’s live, what failed, and what you learned.
It’s not perfect (I didn’t like the part where they said “it’s not a trade-off with the roadmap”), but I feel it’s at least a step in the right direction.
Final words
Now it’s time for the caveats.
I DO think there is pressure on engineers to level up, as I wrote 2 weeks ago:
I believe that those tools will continue to improve, and it’s worth spending your time playing with them and adopting what works for you.
Also, if you are working on a completely fresh codebase, or on a PoC - the gains can be huge. In the last 2 months I was able to build something that would previously have taken me a year.
But I have the privilege of working alone, doing whatever I want, in a single, simple repo - a classic use case for AI.
So I know my opinions may come off as contradictory, and I hope you got my point.
2 great articles on the topic:
Software engineering with LLMs in 2025: reality check.
A counter to the “My AI Skeptic Friends Are All Nuts” article.
Discover weekly
What if there was a button that instantly revealed everything that’s ever been said about you when you weren’t around? Would you press it? An interesting thought exercise.
The rise of the AI-native employee, whose author shares her experience from working at Lovable. I believe it’s a great article, and that ones like this create the FOMO in leaders (“if the team at Lovable can, why can’t we?”).
Every side-hustle I’ve shipped (and how it went), by a PM who shipped side projects for a decade and shares the REAL numbers behind them. A fun read.