The concept of technical debt has popped up recently with a couple of our clients. Specifically, these clients are interested in how to measure it across their application or product portfolio. It came up most recently as we were brainstorming the measures for a scorecard for a large technology group that wanted to track their progress towards increasing speed and decreasing costs through strategic architecture consolidation.
Note: the term has been used as a metaphor in the software development world for a while, though more commonly at the level of individual projects and codebases than at the portfolio level. Ward Cunningham apparently coined the term originally to describe when a development team “chooses a design or construction approach that’s expedient in the short term but that increases complexity and is more costly in the long term.” Here’s a writeup by Steve McConnell that is dated but very comprehensive if you want the full background.
But back up to the portfolio level … assume we define technical debt as maintenance costs for a product/technology that could be avoided by making a certain investment. Of course, not all technical debt is bad. There are very good reasons to incur debt, especially when you can make a case that it accelerates innovation or addresses customer needs. But many times debt is incurred needlessly, due to ignorance or poor decision-making. Clearly, if we had those decisions to make all over again, we probably wouldn’t end up with the same unique blend of technologies, tools and methods underlying our portfolio. Did we really need that one-off DBMS license? Do we still want to support and enhance applications written in FoxPro? Why can’t we resist trying a new source code control tool on every project? These questions come up, but often too late to do anything about them.
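To make that definition a little more concrete, here’s a minimal sketch (in Python, with entirely made-up applications, dollar figures, and field names) of what “avoidable maintenance cost” might look like when rolled up across a portfolio. It isn’t a recommended model, just an illustration of the arithmetic behind the definition.

```python
# Illustrative only: a toy model of "technical debt as avoidable maintenance cost".
# All applications, figures, and field names are hypothetical.

portfolio = [
    # (application, current annual maintenance, estimated annual maintenance
    #  after remediation, one-time investment required to remediate)
    ("legacy FoxPro app",        500_000, 150_000, 400_000),
    ("one-off DBMS reporting",   200_000, 120_000, 100_000),
    ("standards-compliant app",  300_000, 300_000,       0),
]

for name, current, after, investment in portfolio:
    avoidable = current - after  # the annual "interest" we keep paying
    payback_years = investment / avoidable if avoidable else float("inf")
    print(f"{name}: avoidable maintenance ${avoidable:,}/yr, "
          f"payback in {payback_years:.1f} years")

total_interest = sum(current - after for _, current, after, _ in portfolio)
print(f"Portfolio-wide avoidable maintenance: ${total_interest:,}/yr")
```

The useful part of framing it this way is that “debt” stops being an abstraction: each application carries an annual interest payment, and the remediation investment has a payback period you can argue about.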
So the question is: could we measure something early in the project lifecycle, as we are making decisions about the technologies, tools and methods to be used, that tracks the technical debt we incur by veering off a defined path?
We talked about it extensively with our clients and in both cases came to the conclusion that a lagging indicator like maintenance spend, combined with more of a leading indicator like compliance with architectural standards, was the most direct way to measure it. The concept is a very useful metaphor for helping business executives understand the implications of technology decisions, but measuring it directly seems impractical.
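For what it’s worth, here’s a rough sketch of how those two indicators might be combined on a portfolio scorecard. The compliance checklist, the weighting, and the data are all assumptions for illustration, not the model we built with either client.

```python
# Hypothetical scorecard combining a lagging indicator (maintenance spend)
# with a leading indicator (compliance with architectural standards).
# Standards, weights, and data are illustrative assumptions.

standards = {"approved_dbms", "approved_language", "standard_scm", "standard_build"}

applications = [
    {"name": "order entry", "maintenance_spend": 450_000,
     "standards_met": {"approved_dbms", "approved_language",
                       "standard_scm", "standard_build"}},
    {"name": "legacy reporting", "maintenance_spend": 700_000,
     "standards_met": {"standard_scm"}},
]

for app in applications:
    compliance = len(app["standards_met"] & standards) / len(standards)
    # Treat the non-compliant share of maintenance spend as a crude proxy
    # for the "interest" being paid on technical debt.
    debt_exposure = app["maintenance_spend"] * (1 - compliance)
    print(f'{app["name"]}: compliance {compliance:.0%}, '
          f'estimated debt exposure ${debt_exposure:,.0f}/yr')
```

The compliance score is the leading part (you can compute it at decision time, before any maintenance dollars are spent), while the maintenance spend it gets multiplied against is the lagging part that eventually validates or contradicts it.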
But I would love to hear from those of you who may have some ideas or, even better, some history with measuring it effectively!