This post covers part two of my 2010 talk on testability. The general aspects of testability are controllability and observability.
Controllability determines the work it takes to set up and run test cases and the extent to which individual functions and features of the system under test (SUT) can be made to respond to test cases.
Observability determines the work it takes to verify test results and the extent to which the SUT's response to test cases can be observed and checked.
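To make the two aspects concrete, here is a minimal sketch (not from the original talk) of a component with one controllability hook and one observability hook. All names in it (`ThermostatController`, `forceSensorReading`, `lastCommand`) are hypothetical.

```java
// Hypothetical SUT: a thermostat controller with explicit test hooks.
public class ThermostatController {
    private double sensorReading;    // normally fed by hardware
    private String lastCommand = ""; // most recent actuator command

    // Controllability: a test can force the SUT into a specific state
    // instead of waiting for the real sensor to reach that value.
    public void forceSensorReading(double celsius) {
        this.sensorReading = celsius;
    }

    // Production behavior under test.
    public void regulate(double targetCelsius) {
        lastCommand = (sensorReading < targetCelsius) ? "HEAT_ON" : "HEAT_OFF";
    }

    // Observability: the response is exposed directly, so a test can
    // verify it without inferring it from side effects.
    public String lastCommand() {
        return lastCommand;
    }

    public static void main(String[] args) {
        ThermostatController sut = new ThermostatController();
        sut.forceSensorReading(15.0);                // controllability: set up state directly
        sut.regulate(20.0);                          // exercise the SUT
        if (!"HEAT_ON".equals(sut.lastCommand())) {  // observability: check the response
            throw new AssertionError("expected HEAT_ON, got " + sut.lastCommand());
        }
        System.out.println("test passed: " + sut.lastCommand());
    }
}
```

The less work it takes to add hooks like these (or the more of them the design already provides), the more testable the system.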
I identified 130 unique contributors to testability in my 1993 CACM article; they group into six main factors.
These general principles provide a framework for analysis. Here are some specific examples of how actual systems hinder testability.
| Test Focus | Aspects that reduce Controllability | Aspects that reduce Observability |
| --- | --- | --- |
| GUIs | Impossible without an abstract widget set/get layer; brittle even with one (see the sketch after this table). Latency. Dynamic and specialized widgets. | Cursor for structured content. Noise and non-determinism in non-text output. Proprietary lock-outs. |
| OS exceptions | Hundreds of OS exceptions to catch, hard to trigger. | Silent failures. |
| Objective-C | Test drivers can't anticipate objects defined on the fly. | Instrumenting objects on the fly. |
| DBMS and *unit test harness | Too rich to mock; can't easily replicate transaction locking for scalability testing. | Must develop separate queries to the DBMS, possibly interleaved with transaction locking. |
| Multi-tier CORBA | Identifying a transaction path; getting all distributed objects into a desired state. | Can't trace message propagation across nodes (as tracert does). |
| Cellular base station | RF loading/propagation vary with the base station population. The actual system is the only environment with realistic load. | Proprietary lock-outs. Never "off-line." |
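The GUI row mentions an abstract widget set/get layer. The sketch below shows one way such a layer might look; it is an illustration, not a real toolkit API, and every name in it (`GuiDriver`, `FakeGuiDriver`, `LoginScreenCheck`, the widget ids) is an assumption.

```java
import java.util.HashMap;
import java.util.Map;

// Controllability/observability seam for GUI tests: drive and read widgets
// by logical name rather than through concrete toolkit objects.
interface GuiDriver {
    void set(String widgetId, String value); // controllability: put a value into a widget
    String get(String widgetId);             // observability: read a widget's current value
}

// In-memory fake standing in for a real toolkit binding (Swing, web, etc.).
class FakeGuiDriver implements GuiDriver {
    private final Map<String, String> widgets = new HashMap<>();
    @Override public void set(String widgetId, String value) { widgets.put(widgetId, value); }
    @Override public String get(String widgetId) { return widgets.getOrDefault(widgetId, ""); }
}

public class LoginScreenCheck {
    public static void main(String[] args) {
        GuiDriver gui = new FakeGuiDriver();
        // The test talks only to the abstraction, so a toolkit change
        // does not break every test script (less brittleness).
        gui.set("username", "alice");
        gui.set("password", "secret");
        if (!"alice".equals(gui.get("username"))) {
            throw new AssertionError("username widget did not hold the value set");
        }
        System.out.println("widgets are controllable and observable through the abstraction");
    }
}
```

Even with an abstraction like this, latency, dynamic widgets, and specialized widgets still add work, which is why the GUI row lists them separately.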
Part 3 explains some technical aspects that reduce testability.