Lately I’ve been doing a lot of thinking about proficiency and skill. Where does proficiency come from? How can we quantify and measure it? How is it different from skills that non-practitioners have?
So here are some of my thoughts. You are fully entitled to vehemently disagree.
Take a football coach. Without a doubt, a good coach would know and understand the mechanics of the game. Now take a good product manager. I would argue that a good product manager, like a good football coach, knows and understands how software engineering is done.
This being said, engineering and product management are entirely different spaces with different responsibilities and measures of productivity. Nobody expects product managers to contribute to engineering velocity, and nobody expects football coaches to get in the game.
There is a level of indirection there. Knowing and doing are different skill sets, different levels of abstraction, and different expectations. Can you say that you are proficient in developing software if you are a knower, not a doer? I don’t think so, but as long as there is some objective underpinning to the skill, it shouldn’t matter whether you do or know. It’s hard to argue with the fact that there are good product managers out there. Same for football coaches.
Let’s set aside the do-er vs. know-er distinction for a second. Regardless of what form proficiency takes, I tend to think of it as a tight bound on the number of mistakes per unit of work. For software engineers it translates to the following scenario. Say you are tasked with building a certain module, you do the work, and then you lose your git repository and all the local copies of what you’ve written.
The question is: if you had to do it over again from scratch, would you spend less time, and if so, how much less? True proficiency to me lies somewhere between 0 and 10% improvement, because it corresponds to a lack of new learning. Admittedly this measure is flawed, because it would confuse true proficiency with an inability to learn, but those two are so far apart that there should be other means to distinguish them.
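The redo-time measure can be sketched as a toy calculation. Everything here is illustrative: the function names, the 10% threshold, and the example numbers are my own assumptions, not anything rigorous.

```python
# A toy formalization of the "redo the lost work" thought experiment.
# All names and thresholds are illustrative assumptions.

def redo_improvement(first_attempt_hours: float, redo_hours: float) -> float:
    """Fraction of time saved when repeating the same task from scratch."""
    return (first_attempt_hours - redo_hours) / first_attempt_hours

def looks_proficient(first_attempt_hours: float, redo_hours: float,
                     threshold: float = 0.10) -> bool:
    """Heuristic: a true expert's redo is at most ~10% faster,
    because there was little left to learn the first time through."""
    return 0.0 <= redo_improvement(first_attempt_hours, redo_hours) <= threshold

# An expert redoes a 40-hour module in 38 hours: 5% faster.
print(looks_proficient(40, 38))   # True
# A learner redoes it in 25 hours: 37.5% faster, so they learned a lot.
print(looks_proficient(40, 25))   # False
```

The big gap between the two outcomes is the point: a large speedup on the redo is evidence of learning, not of mastery.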
If you do end up speeding up by a lot when you do something over, then that’s a sign that you’ve learned something, and there was something for you to learn in the process, so by implication you are not proficient in whatever it is you were doing.
Making people re-do their work just to measure their proficiency would be untenable from a business point of view, not to mention a major drag on employee motivation. Thankfully, estimating tasks, something engineers are asked to do very often, is a bit like doing the exercise in your head the first time around, and then doing it in code for real.
I am not saying this is exactly the same, but there is at least some correspondence: if your estimates are grossly off, it’s likely that you missed something important that you would account for if you had the chance to estimate again. Once you loop through enough iterations, you will start getting closer to that very tight bound I mentioned earlier.
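The estimation proxy can be illustrated with a few invented numbers: as the same engineer estimates a series of similar tasks, the relative gap between estimate and actual shrinks toward the tight bound. The data below are made up purely for illustration.

```python
# Hypothetical track record: estimated vs. actual days per task.
# The numbers are invented; the point is the shrinking relative error.

estimates = [5, 8, 10, 12, 9]       # estimated days per task
actuals   = [9, 12, 13, 13, 9.5]    # actual days taken

for i, (est, act) in enumerate(zip(estimates, actuals), start=1):
    error = abs(act - est) / act    # relative estimation error
    print(f"task {i}: relative error {error:.0%}")
```

Under the post's model, a consistently tiny relative error would stand in for the "less than 10% faster on redo" signal, without anyone having to throw away a repository.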
There are other factors, like external dependencies and personal tendencies to overcommit, that make this messy, but it’s a model that, at least under ideal circumstances (external and social factors neutralized), should work pretty well. Isn’t it depressing, though? To think that once you become good at something, you not only stop making mistakes, you stop learning as well.
What do you think?