My 2010 keynote at the Google Test Automation Conference considered the dimensions of software testability and their implications.
The presentation is serialized in the following posts.
We’ve become blasé (even impatient and demanding) about the incredible advances in computing capacity (see the nearby graph of Moore’s law). But those advances would not have been possible without standard test features in all kinds of digital logic devices. “Design for Testability”, including standardized built-in self-test, is critical for all kinds of digital devices, but it is little known outside of hardware design and manufacturing.
Could a similar approach help to make software cheaper, better, faster? I thought so, but as with many promising hardware/software analogies, it turned out that the software problem was unbounded and more complex.
To begin with, what is “software testability” and why does it matter?
Testability is the degree of difficulty of testing a system. It is determined both by aspects of the system under test and by its development approach.
Let’s assume the following about software development, other things being equal and on average:
Testability determines the limit to which the risk of costly or dangerous bugs can be reduced to an acceptable level.
That is, poor testability means you’ll probably ship/release a system with more nasty bugs than is prudent.
There are about 130 individual factors that contribute to testability. Their combined effect can be measured with two ratios:
Improved testability means that we can do more testing and/or that we are more likely to find a bug when we look.
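To make that concrete, here is one way to read those two ratios; this is an illustrative sketch of my own, not the keynote’s exact definitions. Suppose T is the number of tests we can run per unit of effort, and p is the probability that a single test reveals a latent bug. Then, for a fixed testing effort E, the expected number of bugs revealed is roughly

\[ \text{bugs found} \;\approx\; E \times T \times p \]

Improving testability raises T (more testing for the same effort), raises p (each test is more likely to expose a bug), or both; poor testability caps both ratios and thus limits how much risk any amount of testing can remove.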
In Part 2, I explain what makes for untestability.