Discussion about this post

Ella

Interesting analysis! There's something I don't get though: You said "Teams that didn’t perform code reviews had a x1.9 times higher output (~59 expert hours/developer), but x2.4 times more bugs! (8.9/developer)."

However, you count bugs by looking at PRs, not at JIRA, which means you are measuring bugs solved, not bugs created. So what you are really saying is that a company with higher output (i.e. more features) also solves more bugs. That doesn't mean they are creating more bugs! They are just doing more of everything.

I think you should normalize the bugs solved, taking the higher output into account. For example, if due to the lack of code reviews I get 1.9x as much code in, then assuming I split my time 50/50 between features and bugs (a big assumption!), I also solve 1.9x as many bugs. So if you see 2.4x the bugs, the lack of code reviews raised bug fixes from an expected 1.9x to 2.4x, not from 1x to 2.4x. That's roughly a 1.3x increase (2.4 / 1.9 ≈ 1.26).
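
A minimal sketch of that normalization, assuming the proportional-scaling argument above; the 1.9x and 2.4x figures are the ones quoted from the post:

```python
# Normalize the bug-fix ratio by the output ratio (figures quoted from the post).
# Assumption from the argument above: if nothing else changed, bug fixes would
# scale with output, so only the excess beyond 1.9x is plausibly attributable
# to skipping code reviews.

output_ratio = 1.9    # no-review teams ship ~1.9x the output
bug_fix_ratio = 2.4   # ...and merge ~2.4x the bug-fix PRs

normalized_increase = bug_fix_ratio / output_ratio
print(f"Normalized bug increase: {normalized_increase:.2f}x")  # ~1.26x, i.e. roughly 1.3x
```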

In reality, I don't work on features and bugs 50/50. It really depends on the culture. In some companies you have dedicated time for bug fixing (e.g. one week every month). But assuming all bugs get resolved is always wrong; you can assume all major bugs are resolved, but beyond that it really depends. So maybe the correct way to analyze this would be to only look at PRs for major bugs. WDYT?

Daniil Shykhov

Fascinating analysis! The correlation between review speed and productivity is something every team needs to see.

2 more comments...