Robert V. Binder

Software Testability, Part 1: What is it?

[Figure: Graph of Moore's Law]

My 2010 keynote at the Google Test Automation Conference considered the dimensions of software testability and their implications.

This presentation is serialized in the following posts.


What is Testability?

We’ve become blasé (even impatient and demanding) about the incredible advances in computing capacity (see the nearby graph of Moore’s Law). But this would not have been possible without standard test features in all kinds of digital logic devices. “Design for Testability”, including standardized built-in self-test, is critical for all kinds of digital devices, but it is little known outside of hardware design and manufacturing.

Could a similar approach help to make software cheaper, better, faster?  I thought so, but it turned out that, like many promising hardware/software analogies, the software problem was unbounded and more complex.

To begin with, what is “software testability” and why does it matter?

Testability is the degree of difficulty of testing a system. It is determined by aspects of both the system under test and its development approach.

  • Higher testability: more and better tests at the same cost.
  • Lower testability: fewer and weaker tests at the same cost.

Let’s assume the following about software development. Other things being equal, and on average:

  • Development (including testing) occurs with a fixed budget, so the key question is how to optimize the value produced.
  • Testing adds value by minimizing the bugs in a released system.
  • Sooner is better: We’re better off when we release our software product sooner.
  • Escapes are bad: The older a bug gets, the nastier (more expensive) it becomes.
  • Fewer tests means more escapes: Suppose our tests have 1:100 odds of finding a bug and there are 1,000 latent bugs in our system. We need to run at least 100,000 tests to find all of them. But suppose we run only 50,000 tests and release: we’ll probably ship with about 500 latent bugs (see the sketch after this list).
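
Here is a minimal sketch of that escape arithmetic in Python. It is my own illustration, not from the original post: it assumes each test independently finds (and removes) one latent bug with a fixed probability, which is the simplest model consistent with the numbers above.

    # Illustrative escape model: each test independently finds (and
    # removes) one latent bug with probability p_find_per_test.
    def expected_escapes(latent_bugs, tests_run, p_find_per_test=0.01):
        """Expected latent bugs remaining after a test run."""
        expected_found = min(tests_run * p_find_per_test, latent_bugs)
        return latent_bugs - expected_found

    print(expected_escapes(1000, 100000))  # 0.0   -- the full 100,000-test budget
    print(expected_escapes(1000, 50000))   # 500.0 -- half the tests, ~500 escapes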

Testability determines the extent to which the risk of costly or dangerous bugs can be reduced to an acceptable level.

That is, poor testability means you’ll probably ship/release a system with more nasty bugs than is prudent.

About 130 individual factors contribute to testability. Their combined effect can be measured with two ratios:

  • Efficiency: average tests per unit of effort. Or, how much testing can we get done with the time, technology, and people on hand?
  • Effectiveness: average probability of killing a bug per unit of effort.

Improved testability means we can do more testing, and it increases the odds that we’ll find a bug when we look.
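
As a rough illustration of the two ratios (mine, not from the post; the function names and example numbers are hypothetical), here is how they might be computed from a test run’s raw counts. One plausible reading of effectiveness is the per-test kill probability scaled by how many tests a unit of effort buys:

    def efficiency(tests_run, effort_hours):
        # Average tests per unit of effort.
        return tests_run / effort_hours

    def effectiveness(bugs_found, tests_run, effort_hours):
        # Per-test kill probability times tests per unit of effort:
        # expected bug kills per unit of effort.
        kill_probability_per_test = bugs_found / tests_run
        return efficiency(tests_run, effort_hours) * kill_probability_per_test

    # Example: 2,000 tests in 40 hours of effort, finding 50 bugs.
    print(efficiency(2000, 40))          # 50.0 tests per hour
    print(effectiveness(50, 2000, 40))   # 1.25 expected bug kills per hour

On this reading, raising either ratio improves the same bottom line: more bugs killed for the same budget.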

In Part 2, I explain what makes for untestability.


