In my last blog post, I covered my thoughts on the tenets of technical debt. One of those thoughts has been playing on my mind ever since:
Treat technical debt exactly as you'd treat monetary debt: it can be a good thing, allowing fuelled growth, ambitious plans and a multiplier effect.
Most teams don't treat tech debt this way; instead they splurge on new features, creating a long list of debt. Many teams have a loose rule of paying off some tech debt each iteration, yet for most of them all debt is equal. Thus we prioritise paying off our 1% interest student loan over the 3546% loan we took out two years ago.
How can we fix this?
Development teams need a way to categorise debt that is easy to understand. Big O notation allows someone to gain a quick understanding of the run time of an algorithm as input size grows. Bond credit ratings give a quick check on the confidence and risk of a bond as time progresses. Why don't we have a system like this for tech debt?
We need a system like this for tech debt: not something 100% accurate, but a quick way to signal how serious the debt in a system is. So why not just use story points? Here's the issue: tech debt is also like monetary debt in that it accumulates the longer you leave it. You can write some less-than-stellar code and revisit it in two weeks to improve it, but how does that scale when you revisit it in six months? Or when a new developer takes it over and you're not around to assist them?
Software being "Done" is like lawn being "Mowed". — Jim Benson (@ourfounder), August 29, 2016
The technical debt inflation scale
Introducing the TDI scale, which categorises technical debt into five code quality levels, giving product owners a clue as to how much harder debt becomes to fix as time progresses.
The higher the quality of the code, the easier it is to rework, revisit or rewrite. High quality code still accumulates technical debt, but at a slower rate: revisiting it one month out is like revisiting it four months out, carrying little penalty. The longer you leave low quality code alone, the harder it becomes to manage!
High quality
High quality code is thoroughly tested at both a function/class and a system-wide level. Tests are thorough and detailed, covering not only happy-path cases but many corner cases.
The code is modular and follows well-known engineering principles (SRP/DRY) with a focus on clear, readable code. Documentation includes high-level designs and explanations, along with information on domain problems. Pull request and commit messages are thoughtful and provide a timelined snapshot of the process.
A developer who hasn't worked on the code before will be able to grasp how it works immediately and become productive. Code is ever evolving and every part of a system accumulates tech debt; code of this quality accumulates it at a much slower pace, meaning the refactoring needed is both infrequent and easy.
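To make the SRP point concrete, here's a minimal sketch (all names are invented for illustration) that keeps parsing, validation and persistence as separate responsibilities, so each piece can change and be tested on its own:

```python
# Each function owns one responsibility, so each can change
# (and be tested) independently of the others.
def parse_order(raw):
    # "SKU, quantity" -> structured order
    sku, qty = raw.split(",")
    return {"sku": sku.strip(), "qty": int(qty)}

def validate_order(order):
    # Business rules live here, not mixed into parsing or storage.
    if order["qty"] <= 0:
        raise ValueError("quantity must be positive")
    return order

def save_order(order, store):
    # Persistence kept behind a simple dict-like interface.
    store[order["sku"]] = order["qty"]
    return order
```

Pipelining them, `save_order(validate_order(parse_order(raw)), store)`, keeps each seam easy to swap or test in isolation.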
Upper medium quality
Similar modularity and conciseness to high quality code, but constrained by tech debt or system design decisions. Testing is still thorough but may lack some of the corner cases and detail present in high quality code.
A developer new to this code will have a small ramp-up time before becoming productive, with the caveat that the work is carried out close to when the code was first written. A lot of teams think they are writing high quality code but actually fall into upper or lower medium quality. There is a sweet spot for revisiting this code: with care and follow-up in the first few weeks, there is the chance to upgrade it from upper medium to high quality.
Lower medium quality
Tests are sparse or limited, often because a dependency or another part of the larger system can't be mocked out. Less modular code makes changes more brittle and time consuming. Less detailed tests mean refactoring is possible, but the likelihood of creating an edge-case bug is much higher.
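To show what "can't mock out a dependency" looks like in practice, here's a hedged sketch (class and endpoint names are made up) contrasting a baked-in dependency with an injected one:

```python
import urllib.request

class ReportBuilder:
    # Hard to test: the network dependency is baked in, so a unit
    # test needs a live endpoint or heavyweight patching.
    def build(self):
        data = urllib.request.urlopen("https://example.com/stats").read()
        return len(data)

class TestableReportBuilder:
    # Easier to test: the dependency is injected, so a test can pass
    # a stub instead of touching the network.
    def __init__(self, fetch):
        self._fetch = fetch

    def build(self):
        return len(self._fetch())
```

The second version costs one constructor parameter and buys you fast, reliable tests.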
This quality can suit areas that have to be fixed but don't change a lot; teams can absolutely choose to ship code of this quality, as it shouldn't have any glaring defects. If it's not going to change a lot, then paying off bugs isn't too bad if done right away. This can be a great example of leveraging technical debt as a multiplier.
An easy way to spot the difference between upper and lower medium quality is the care taken over documentation. Commits often follow a format of:
Added new tests
Fixed user bug
Improved cache service
These sorts of messages leave little information and no breadcrumb trail for your future self or other developers. This lack of care usually correlates with poorer abstractions, methods or classes that handle too many responsibilities, and components and variables with unhelpful names.
If you want to improve your commit messages, then this tweet gives a great tip that'll help in under 30 seconds:
Low quality
No tests, or only limited happy-path ones; magic numbers; production methods containing conditional statements that only execute if tests are running. Long methods and large classes, bad variable naming and unclear paths through the logic. Use this level only for spikes and throwaway code; if you are building something more significant than an MVP then you are asking for trouble.
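To show the "only executes if tests are running" anti-pattern concretely, here's a tiny sketch (the flag and function are invented for illustration):

```python
TESTING = False  # flipped on by the test suite

def send_welcome_email(email_address):
    # Anti-pattern: production code branches on a test flag, so the
    # test suite never exercises the path that actually runs in
    # production -- the tests pass while proving nothing.
    if TESTING:
        return "skipped (test mode)"
    # imagine a real SMTP call here
    return f"sent to {email_address}"
```

The fix is the same dependency injection idea as above: pass in the mail transport, and let tests supply a stub rather than the production code special-casing itself.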
Low quality is the most damaging of all the levels (even though there is one level lower than it), because to a product owner it gives the illusion of quality: the happy path works and it doesn't crash all the time. Often this level gets voted higher and placed in lower medium quality. This is the biggest battleground in development; fight to get things correctly evaluated! The illusion of quality means teams and companies place this code in a lower risk category and are resistant to large-scale changes. This only becomes clear when either the feature cannot scale or large changes are needed.
It's tricky for new developers to get up to speed, and there are lots of gotchas that'll trip them up. Functions can actually produce the inverse of what their names say, which places a high cognitive load on the developer while reading. Estimates against this code level often vary wildly and are almost always wrong.
Tests are sparse to nonexistent, complicated and brittle, and tied to one specific part of the system; look for test names such as testUserWorks. 95% of the time, code of this quality ships with no tests and the feature is manually tested over and over. Junior developers and senior staff are equally capable of producing this dangerous low-level junk.
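The difference a test name makes can be sketched like this (`create_user` is a hypothetical stand-in for the real code under test):

```python
import unittest

def create_user(username):
    # Hypothetical stand-in for the code under test.
    if not username:
        raise ValueError("username required")
    return {"username": username}

class UserTests(unittest.TestCase):
    # Smell: the name reveals nothing about the behaviour being
    # checked, so a failure tells you almost nothing.
    def testUserWorks(self):
        self.assertTrue(create_user("alice"))

    # Better: one behaviour per test, named after the expectation,
    # so a red test reads as a broken requirement.
    def test_create_user_rejects_empty_username(self):
        with self.assertRaises(ValueError):
            create_user("")
```

When `test_create_user_rejects_empty_username` fails, the name alone tells you which behaviour regressed; when `testUserWorks` fails, the archaeology begins.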
Classes contain huge methods with tons of responsibilities. Poor variable names and abstractions leak into other areas of the system. Expect poor performance or downright dangerous edge cases (think retry mechanisms that loop and eat up all your threads).
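That retry hazard can be sketched as follows; `fetch_with_retry` shows the danger and `fetch_with_bounded_retry` one conventional fix (both names invented for illustration):

```python
import time

def fetch_with_retry(fetch):
    # Dangerous: no retry cap and no backoff. If the dependency
    # stays down, this spins forever -- run once per request and it
    # will eat every available thread.
    while True:
        try:
            return fetch()
        except ConnectionError:
            pass  # retry immediately, forever

def fetch_with_bounded_retry(fetch, attempts=3, base_delay=0.01):
    # Safer sketch: bounded attempts with exponential backoff,
    # failing loudly once the retry budget is spent.
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

A capped retry with backoff degrades gracefully; the unbounded version turns a flaky dependency into a site-wide outage.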
If code like this is actively developed upon, you are either reliant on an incompetent developer or will lose a huge amount of time raising the code quality. A new developer faces an almost impossible task to fix or add features without system-wide issues and potentially long periods of downtime.
Closing thoughts and musings
Don't trick yourself into thinking you've created a higher quality level of code than you really have.
If you can answer a resounding yes to the following, then you can probably justify the rating you've given it:
- Would you show it to someone you respect?
- Would it be easy to work on for a developer who had just joined the company?
- If allowed, would you show this as an example of your work in an interview?
- Is this better than the average quality of the codebase?
If you always rate your code as high quality, then I fear this tweet may apply to you!
"A 10x developer", aka "a tech debt loan shark" — Phil Calçado (@pcalcado), June 26, 2016
If it's possible to downgrade code from one level to another, can you also raise the quality bar?
Quality is a fluid concept; it can move in any direction. Raising the quality bar can be achieved in multiple ways and is not limited to code:
- Super scary code? Add documentation as you gain better understanding!
- Super scary code? Wrap it in a single test and get sign-off from a product expert; one small foothold means you can establish a beachhead and add more tests in the future.
- Terrible documentation? Can you use any existing tests to produce a rough start of expected inputs and outputs?
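The "single test beachhead" idea is essentially a characterization test: pin down what the scary code currently does, get a product expert to sign off on the numbers, then refactor behind it. A minimal sketch, with an invented legacy pricing function standing in for the real code:

```python
def legacy_price_pence(quantity, tier):
    # Hypothetical stand-in for legacy code nobody fully understands.
    price = quantity * 999
    if tier == "gold":
        price = price * 9 // 10  # undocumented gold discount
    return price

def test_legacy_price_characterization():
    # Pin the current behaviour, however odd, before touching the
    # code. Each expected value should be confirmed by a product
    # expert rather than assumed correct.
    assert legacy_price_pence(1, "standard") == 999
    assert legacy_price_pence(10, "gold") == 8991
```

One passing characterization test is a small foothold, but it turns "we daren't touch this" into "we can refactor and know instantly if behaviour changed".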
Can my code be a mixture of different quality levels, e.g. good code but bad tests, or great PRs but medium code?
Of course. Every developer has their strengths and weaknesses, and it's important to improve each facet. Don't be fooled into thinking that PRs are somewhere you can skimp on quality; the inverse is also true, as poor code isn't improved purely by a great PR.
Team bloat can be an indicator of tech debt
If your team is growing and velocity per developer is going down, then you've got quite significant tech debt. A common industry response is to add more and more team members to fight technical debt, but this creates domain-knowledge debt that hampers the path to recovery.
Should we apply a quality rating system to requirements?
I think this would be a great idea; rarely do poor requirements translate into great code. Code, at the end of the day, is just instructions to create features. If you struggle to understand a requirement, then the code in question will reflect that. The ability to push back against a requirements document and label it (high/upper medium/lower medium/poor) again allows product owners to make more informed decisions.
How much debt is acceptable as a percentage of a project's backlog?
This is up to each team to decide and manage. Categorising quality and overlaying it against how likely you are to revisit the code should give you more confidence in knowing your risk of defaulting. We don't let people take out 10x mortgages, so we should avoid piling up tech debt as a team.
As always, I'd love to hear what you think about this article, so feel free to either leave a comment below or reach out to me on Twitter.