Holding managers accountable (Part 2)
I reckon I failed to add enough detail to the previous post, in particular on why it's more difficult to hold engineering managers accountable compared to ICs. Power has a role to play here, or at least the amount of influence managers can exert to get out of trouble. Beyond that, there are two points I want to touch on.
The first one is time. The successes and failures of a manager are measured in months and years, whereas an IC can be measured on a much shorter timeframe. For example, if a manager is working with a problematic IC, there can be early signs that the person is improving. That said, extreme cases of erratic behaviour are rare: most coaching happens with individuals well within the middle of the bell curve. Not only can it be difficult to see immediate results, it's also tough to establish causality, or even correlation. How can I prove that my last two months of coaching on communication have led to better writing from my report? There could be dozens of reasons outside our 1:1s why my report has improved. What if they have become worse at it? Does that mean I should have done even more? And at what point do we shift the responsibility spotlight back to the individual, rather than the manager?

What about dealing with tech debt, or any long-duration technical initiative? Managers are usually the ones sponsoring and planning big refactorings in the codebase. If they don't deliver on the original deadline, what should happen? It's not uncommon for ICs to miss deadlines, or for a task to move to the next sprint. If the consequences are minor, or non-existent, for ICs, what should happen for managers? Without context to fill in the blanks, it's difficult to give a good answer. I ultimately believe that if worst comes to worst, it's still on the manager rather than the IC. Moreover, managers operate at a higher level, which implies more scrutiny. ICs' work, on the other hand, can be measured on a sprint-by-sprint cadence. It's trivial to check whether they are delivering on what they agreed upon. It's also easy to see whether they are reviewing other people's work, and whether they are contributing to discussions and RFCs.
The second point is the domain in which they operate. Managers work on people, processes and technology. As previously discussed, measuring human improvement is difficult. It's arduous to establish a relationship between feedback and subsequent growth: whether or not the manager provided feedback, the report may have changed for better or worse regardless. This doesn't mean the manager is off the hook just because progress is difficult to measure. Long-term poor performance from an IC, or a team, must be dealt with, and that responsibility cannot be thrown out of the window. Compared to people, the efficacy of a process is easier to measure, and a manager can always ask for feedback with an anonymous form. This should provide enough qualitative data to tell whether a team is happy with a new process. From an IC's perspective, most of the domain is technology. It's again easy to measure whether an IC is being effective and efficient at technical work: we look at the quality of their output and how often they deliver on time. We can also look at core skills, like communication, and whether they are improving or deteriorating over time.
These are two reasons why holding managers accountable is more difficult compared to ICs. The results of their labour are measured on different timeframes: ICs usually see them in the short term, while managers see them over the medium to long term. The domain in which they operate also differs, and the effectiveness of coaching humans can be both difficult to measure and difficult to correlate with outcomes.