There are only two hard things in Computer Science: cache invalidation and naming things. -- Phil Karlton
I agree with Phil, and I'd add estimation to the list. Of the three it's the hardest, because the nature of time means you can never truly solve it.
Estimation is the theme around which an engineer's life revolves: Is it completed? Can you have this ready by Friday? Based on this task it ought to take only X.
Humans look for patterns, whether those patterns are in the past, the present or speculation about the future. For a long time, estimates in our industry have been cloaked in descriptions such as 'best guess' and 'informed estimate', giving the illusion that we are actually quite accurate.
I'd prefer we re-framed estimates as 'speculation'. We cannot predict the future. Our preference for patterns means we make the false assumption that they exist, and to combat this we create theoretical 'models'. The moniker 'model' implies that predicting future events is somehow possible.
The issue I take with models is similar to the one Nassim Taleb raises in his book The Black Swan: a model anchored around large 'unexpected' events (usually the largest deviation we've seen before) does not protect us from future deviations. Had we built our model the day before the largest previous deviation occurred, we'd have been vulnerable and exposed to it. Humans also tend to assume large deviations are spread out and can't occur in close succession.
Speculation can bite us in development
We load test our services to ensure they can handle increased throughput and stress without degrading completely. Common practice is to use the largest deviation from the norm as the benchmark for the stress test, producing false confidence until an even larger deviation occurs, at which point we alter the test to reflect it.
Instead, approach stress testing with 'this is the point at which it's acceptable for our system to go down' and work backwards from there.
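Working backwards can be sketched as a search: fix the error rate you're willing to tolerate, then find the highest load that stays under it. Everything below is illustrative, not a real load-testing tool's API: `run_at_load` is a toy stand-in for an actual load-test driver, and the `capacity` figure is invented; in practice the error rates come from measurement, not a formula.

```python
# A minimal sketch of "work backwards from the acceptable failure point".
# run_at_load is a hypothetical stand-in for a real load-test run; here it's
# a toy model in which errors appear once load exceeds a made-up capacity.

def run_at_load(rps: int) -> float:
    """Hypothetical load test: returns the observed error rate at `rps`."""
    capacity = 750  # illustrative only; a real test measures this empirically
    return 0.0 if rps <= capacity else min(1.0, (rps - capacity) / capacity)

def max_acceptable_load(acceptable_error: float, ceiling: int = 10_000) -> int:
    """Binary search for the highest load whose error rate we can tolerate."""
    lo, hi = 0, ceiling
    while lo < hi:
        mid = (lo + hi + 1) // 2  # bias upwards so the search terminates
        if run_at_load(mid) <= acceptable_error:
            lo = mid  # still within our acceptable failure point
        else:
            hi = mid - 1
    return lo

print(max_acceptable_load(0.05))  # highest rps with at most 5% errors
```

The point is the direction of the question: rather than benchmarking against the largest deviation seen so far, you declare up front where failure is acceptable and let the test tell you how close you are to it.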
You complete a low-complexity engineering task and are then asked to estimate a 'similar' complexity task. The human urge, infallible optimism (and the usual scrum/agile estimation pressure) is to say it will take the same amount of time as the previous task. This is a dangerous game: the number of variables in play that could shorten or lengthen the task is impossible to account for.
There is pressure on a CEO to deliver good quarterly results; the same pressure is applied to engineers to provide the 'right' estimate.
Our need to create a narrative that leads to this point
Retrospectives are a fundamental aspect of Agile, and I enjoy them as a forum for discussing processes and tools. Yet they suffer from occurring after the event. With successful tasks we pat ourselves on the back because events unfolded as we predicted. With unsuccessful tasks we analyse what happened and come up with a neat narrative about why we failed. We construct this narrative because we need one to rationalise what happened, even if it is incorrect.
Let me adapt another of Nassim's ideas to our domain:
Given a complex program with no logging but a breakpoint where we can pause and inspect the current variables, experienced engineers could give a good prediction of what will happen next, based on their skill at reading code and applying its logic to the current variable values. You'd receive a spectrum of guesses, but I'd imagine they'd converge in roughly the same space.
Now take the same program, run with no logging, where only the printed end result (be it a value or an exception) is available. The same group of engineers will give completely different answers about what occurred.
Both of these examples exist within a space where mathematics and logic preside. We ought to see the folly in producing narratives for past events in an even more complex field such as estimation, which includes the mathematics and logic of the program plus the social and human aspects of a team.
The further we predict, the less flexible we are
So what's harder than estimating a task? Estimating how many tasks fit together over a long duration, where the complexity and the number of variables in play both rise. The problem is compounded by any waste that occurs along the way (be it design or technical debt).
Having debt is a strong statement about future goals and direction
'Look, Mr Convey, you said in a previous article that we can manage technical debt like monetary debt; I'm happy with the loan I've taken out.' You are happy with the amount of debt given current conditions, and most likely because of a 'model' based on a previous large deviation.
Deviations in the engineering world can be negative, such as security exploits or scaling problems, where your technical debt exposes or hampers you. But you can also experience positive deviations in your market: a rival competitor goes under and you spot a specific area where your company could excel; a lack of technical debt may allow you to capture and exploit it to the fullest extent.
So without estimates, what do we have?
Tie ourselves less to future predictions and keep as many options open as possible. Planning a feature that will take eight months doesn't account for changing market conditions or unknown events; we'll either end up way over the estimate or have gamed it to come in under.
Short spikes for unknown problems are the single best tool in engineering. After all, you don't start building a house without first checking the subsoil.
People within the Agile world inherently translate numerical estimates into days
Put in place a pre- and post-retro. Planning tasks using only numerical estimates is flawed; move to a world of planning tasks via an abstract complexity rating plus initial guesses at blockers. Wait, what? Are we now estimating blockers? We normally only list blockers after the event, so list them before instead, and use them as hypotheses to counteract the narrative.
Move all departments to be as non-blocking as possible. A big driver of estimates in engineering is that other departments are blocking in how they conduct their work; marketing wants to hit a specific date. If marketing is basing its effectiveness on a date in the future, shouldn't we be a little more worried about the extrapolation needed to get there? Dates are less important than the quality of the output; let the latter take care of the former.
There is no silver bullet. Some sort of fuzzy estimation on shorter-lived tasks seems like a good compromise. Avoid anything numerical, and even difficulty ratings; use colour-coded blocker risk instead, as it's easier to think about threats to a task when you remove time pressure. Some companies (see Apple) have to hit certain industry timelines and then bugfix afterwards; they have a loyal user base who tolerate this.
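To make the shape of this concrete, here is one possible sketch of planning with an abstract complexity rating plus colour-coded blocker risk, with the suspected blockers written down up front so the post-retro can compare them against what actually happened. All of the names here (`Task`, `Risk`, the example backlog) are illustrative inventions, not any real tool's data model.

```python
# A sketch of non-numeric planning: abstract complexity plus colour-coded
# blocker risk, with blockers guessed before the work starts so the
# post-retro can test those guesses rather than invent a narrative.

from dataclasses import dataclass, field
from enum import Enum

class Complexity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class Risk(Enum):
    GREEN = "unlikely to be blocked"
    AMBER = "plausible blockers identified"
    RED = "known external dependency"

@dataclass
class Task:
    name: str
    complexity: Complexity
    blocker_risk: Risk
    suspected_blockers: list[str] = field(default_factory=list)

# Hypothetical backlog: no days, no story points, just complexity and risk.
backlog = [
    Task("migrate auth service", Complexity.HIGH, Risk.RED,
         ["waiting on infra team", "unknown legacy integrations"]),
    Task("add audit logging", Complexity.LOW, Risk.GREEN),
]

# Red tasks are candidates for a short spike before committing to them;
# in the post-retro, suspected_blockers become hypotheses to check.
reds = [t.name for t in backlog if t.blocker_risk is Risk.RED]
print(reds)
```

The design choice worth noting is that nothing here converts to time: the colours drive ordering and spike decisions, and the up-front blocker list gives the retrospective something falsifiable to discuss.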
If high quality is your guiding metric then estimates become a marginal factor; they won't add any quality. Avoid shipping something just to meet an estimate with the intention of adding quality later.
As always, I'd love to hear what you think about this article, so feel free to leave a comment below or reach out to me on Twitter.