Robert V. Binder

Software Testability, Part 2: Controllability and Observability

July 20, 2011  |  Blog, Software Testing, Testability

What makes a software system easier or harder to test?

The general aspects are controllability and observability.

This post covers part two of my 2010 talk on testability.


Controllability determines the work it takes to set up and run test cases and the extent to which individual functions and features of the system under test (SUT) can be made to respond to test cases.

  • What do we have to do to run our test cases?
  • How hard (expensive) is it to do this?
  • Does the SUT make it impractical to run some kinds of tests?
  • Given a testing goal, do we know enough to produce an adequate test suite?
  • How much tooling can we afford?
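
To make the cost question concrete, here is a minimal sketch in Python (hypothetical names, not from the talk) of a common controllability fix: a function that reads the wall clock directly is hard to drive into a given state, while one that accepts an injected time can be put into any state instantly.

```python
from datetime import datetime, timezone

def is_happy_hour(now=None):
    """Return True between 17:00 and 19:00 UTC.

    Accepting an optional 'now' makes the function controllable:
    a test can inject any time instead of waiting for the wall clock.
    """
    now = now or datetime.now(timezone.utc)
    return 17 <= now.hour < 19

# A test can now drive the SUT into any state directly:
assert is_happy_hour(datetime(2011, 7, 20, 18, 0, tzinfo=timezone.utc))
assert not is_happy_hour(datetime(2011, 7, 20, 9, 0, tzinfo=timezone.utc))
```

The same pattern (pass in the dependency rather than reaching out for it) applies to file systems, networks, and random seeds.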

Observability determines the work it takes to evaluate test results and the extent to which the response of the system under test (SUT) to test cases can be verified.

  • What do we have to do to determine test pass/fail?
  • How hard (expensive) is it to do this?
  • Does the SUT make it impractical to fully determine test results?
  • Do we know enough to decide test pass/fail?
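
A small illustration of the pass/fail question (hypothetical Python, illustrative only): a function that fails silently forces the tester to probe hidden state, while one that returns an explicit outcome record makes the verdict cheap to decide.

```python
def transfer(accounts, src, dst, amount):
    """Move 'amount' between accounts; return an explicit outcome record.

    Returning an outcome (instead of failing silently) makes the
    result observable: a test asserts on the record, not hidden state.
    """
    if accounts.get(src, 0) < amount:
        return {"ok": False, "reason": "insufficient funds"}
    accounts[src] -= amount
    accounts[dst] = accounts.get(dst, 0) + amount
    return {"ok": True, "src_balance": accounts[src]}

accounts = {"a": 100, "b": 0}
result = transfer(accounts, "a", "b", 150)
assert result == {"ok": False, "reason": "insufficient funds"}
```

A silent version of the same function would force each test to re-query every account to detect the failure.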

I identified 130 unique contributors to testability in my 1993 CACM article. There are six main factors.

  • Representation: How do we know what to test? Are requirements and specifications available? Are they complete? Current?
  • Implementation: What is the structure of the as-built system — is it so complex that it is likely to hide bugs? How hard is it to set up and interact with features of interest?
  • Built-in Test: Does the system under test have any built-in features that help us set up and evaluate tests? Does it provide logging? Self-checking?
  • Test Suite: Is the test suite organized to make it extensible, or is it an all-or-nothing monolith?
  • Test Tools: Are tools available that can automate regression testing? To what extent can we easily change test suites?
  • Test Process: Does the testing workflow complement the development workflow? Is testing an equal partner in developing requirements and features, or an afterthought?
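
As a sketch of the Built-in Test idea (illustrative Python, not from the article): a class that carries its own invariant check gives every test, and production logging, a free oracle.

```python
import logging

class BoundedQueue:
    """FIFO queue with a built-in self-check: an invariant method
    that the class runs after every mutation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = []

    def put(self, item):
        self._items.append(item)
        self._check()

    def get(self):
        item = self._items.pop(0)
        self._check()
        return item

    def _check(self):
        # Built-in test: log and fail fast if the invariant is broken,
        # rather than letting corruption surface far from its cause.
        if not 0 <= len(self._items) <= self.capacity:
            logging.error("invariant violated: size=%d", len(self._items))
            raise AssertionError("BoundedQueue invariant violated")
```

With the self-check in place, an overfilled queue fails at the offending `put` instead of silently corrupting later reads.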

These general principles provide a framework for analysis.  Here are some specific examples of how actual systems hinder testability.

| Test Focus | Aspects that reduce Controllability | Aspects that reduce Observability |
| --- | --- | --- |
| GUIs | Impossible without abstract widget set/get; brittle with one. Latency. Dynamic widgets, specialized widgets. | Cursor for structured content. Noise, non-determinism in non-text output. Proprietary lock-outs. |
| OS Exceptions | Hundreds of OS exceptions to catch, hard to trigger. | Silent failures. |
| Objective-C | Test drivers can't anticipate objects defined on the fly. | Instrumenting objects on the fly. |
| DBMS and *unit test harness | Too rich to mock; can't easily replicate transaction locking for scalability testing. | Must develop separate queries to the DBMS, possibly interleaved with transaction locking. |
| Multi-tier CORBA | Identifying a transaction path; getting all distributed objects into a desired state. | Can't trace message propagation over nodes (like tracert). |
| Cellular Base Station | RF loading/propagation vary with base station population. Actual system is the only environment with realistic load. | Proprietary lock-outs. Never "off-line." |
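
For the GUI row, the "abstract widget set/get" layer might look like this sketch (hypothetical Python, with an in-memory fake standing in for a real automation backend): tests address widgets by logical name, so they survive the layout and widget changes that break coordinate-based scripts.

```python
class FakeWidget:
    """Stand-in widget holding a single value."""
    def __init__(self):
        self.value = None

class FakeDriver:
    """In-memory stand-in for a real GUI automation backend."""
    def __init__(self):
        self._widgets = {}

    def find(self, name):
        # Look up (or lazily create) a widget by logical name.
        return self._widgets.setdefault(name, FakeWidget())

class AbstractScreen:
    """Tests call set/get on logical names, never pixel coordinates."""
    def __init__(self, driver):
        self._driver = driver

    def set(self, name, value):
        self._driver.find(name).value = value

    def get(self, name):
        return self._driver.find(name).value

screen = AbstractScreen(FakeDriver())
screen.set("amount", "42")
assert screen.get("amount") == "42"
```

Swapping the fake driver for a real one leaves every test unchanged; the abstraction is what keeps the suite from being brittle.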

Part 3 explains some technical aspects that reduce testability.


