Robert V. Binder

Software Testability, Part 3: Accidental Untestability

This post covers part three of my 2010 talk on testability.

Aren’t the dancing hamsters a stitch?

Not so funny if you have to test code whose stability or controllability makes you feel like you’re wearing the hula-hoop.

To reveal a bug, a test must:

  • Reach the buggy code
  • Trigger the bug
  • Propagate the incorrect result to an observable interface
  • Observe the incorrect result
  • Recognize the incorrect result as a failure

The absence of any of these conditions means a test will fail in its mission to reveal bugs. Jeff Voas provided a canonical example of how even simple code can be hard to test in his excellent analysis of testability.

int scale(int j) {         // assumes a 16-bit int
    j = j - 1;             // bug: should be j = j + 1
    j = j / 30000;
    return j;
}

Exhaustive testing of this trivial function is possible: with a 16-bit int, there are only 65,536 possible values for input j, so embedding the function in a loop that generates all of them is easy and fast.

But how many of these tests would reveal the bug? Only six inputs produce an incorrect result: -30001, -30000, -1, 0, 29999, and 30000. The other 65,530 run without any failure, so the bug is not revealed. How many developers would have thought to try the exhaustive test? Bugs like this are easily missed.

So even trivial programs can limit testability, even for trivial bugs. Why? Executing buggy code doesn’t always produce an observable failure (the technical term for this is “coincidental correctness”).

Do you remember the Y2K IT pandemic? Systems that had worked just fine for decades were certain to fail when run after December 31, 1999. This is an example of a bug that produces an incorrect result only for certain data values – in that case, when the truncated year field advanced from 99 to 00. Many systems interpreted the 00 as 1900 (not 2000), resulting in many kinds of failures.

Complexity also contributes to untestability. There are many technical definitions of complexity, but for my talk I looked for visual and aural metaphors. I like Jackson Pollock’s Autumn Rhythm as a visualization:

Jackson Pollock's Autumn Rhythm

What can we do about complexity? Essential complexity is the implementation-independent scope of the functions and aspects a system must provide. Simply put, more is more. But if the “more” is necessary to solve the application problem at hand, it can’t be avoided.

Accidental complexity is often avoidable and is usually the result of careless amateur development or selfish local optimizations. In fact, this is so common that it has its own anti-pattern: Big Ball of Mud. Of course, this diminishes testability and helps to hide many bugs.

Other implementation aspects often decrease testability.

  • Non-deterministic dependencies
  • Race conditions
  • Message latency
  • Threading
  • Unsynchronized updates to shared and unprotected data

The dancing hamsters are a great visual metaphor for this – cute but very unstable. To avoid creating systems that produce instability as an unnecessary side effect, part 4 explains how white-box (implementation) techniques can improve testability.

 


