The Blob is a science-fiction B movie released in 1958. A mass of mobile goo (the Blob) arrives from outer space, grows exponentially, and consumes people and buildings in a terrified small town.
If you’re not familiar with Agile jargon, a “backlog” is the work that remains to be done. “BackBlob” is a not-too-subtle play on that term.
The “given-when-then” (GWT) narratives used in Behavior-Driven Development (BDD) and Acceptance Test-Driven Development (ATDD) are single slices of an astronomically large input and state space. Tools like JBehave and SpecFlow help to structure this text, associate it with hand-crafted test cases, and then track testing status. However, they do not address the fundamental problem of adequate testing – achieving a meaningful exploration of the system under test.
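If you haven’t seen a GWT slice wired up in practice, here is a minimal sketch of what one looks like as a JBehave step class. The scenario and all identifiers (a hypothetical account-withdrawal feature, the Account stand-in) are mine for illustration, not from the talk:

```java
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.When;
import org.jbehave.core.annotations.Then;
import static org.junit.Assert.assertEquals;

// One hand-crafted GWT slice: a single point in a huge input/state space.
public class WithdrawalSteps {

    // Trivial stand-in for the system under test (hypothetical).
    static class Account {
        private double balance;
        Account(double balance) { this.balance = balance; }
        void withdraw(double amount) { balance -= amount; }
        double getBalance() { return balance; }
    }

    private Account account;

    @Given("an account with a balance of $balance")
    public void anAccountWithBalance(double balance) {
        account = new Account(balance);
    }

    @When("the customer withdraws $amount")
    public void customerWithdraws(double amount) {
        account.withdraw(amount);
    }

    @Then("the balance is $expected")
    public void balanceIs(double expected) {
        assertEquals(expected, account.getBalance(), 0.001);
    }
}
```

Each scenario in the story text binds to exactly one path through these steps – which is the point: every slice you want covered is another scenario someone has to write and maintain by hand.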
Worse, I think they lead to a false sense of completeness, both as specifications and as tests. Although producing GWTs is a big step up from undocumented tribal knowledge, it is nowhere near a complete specification and even further from an adequate test suite.
But even if a BDD collection were complete, it has sustainability limitations: to retain value as a system evolves, it must be maintained and then re-run. Test automation helps with the re-run part, but significant work is still needed for maintenance, on top of the work needed to develop the next incremental feature set. This is true of any hard-coded collection of non-parameterized requirements and test cases. Over time, as a BDD/test code base grows, it either crowds out development or, more typically, is simply ignored. Either is high-risk. That’s how Agile teams get eaten by the testing BackBlob.
With model-based testing, one maintains test models – not test cases – and then generates executable test code. As existing features change and new features are added, the test model is updated accordingly. Previously generated tests and test code can be discarded with no maintenance; a fresh test suite is automatically regenerated. The size and complexity of a test model grows much more slowly than that of the test suites it generates, so updating a model takes far less effort than maintaining an ever-growing collection of test instances. Because models are simpler, the chance of update errors and omissions is reduced. Most importantly, model maintenance provides early validation that can reveal bugs and adverse feature interactions that regression tests based on stale requirements would let escape.
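SpecExplorer’s machinery is far more sophisticated, but the core idea can be sketched in a few lines: encode the rules of behavior once as a model, then let a generator walk the model to emit as many concrete test sequences as you like. This toy generator (the names and structure are mine, not SpecExplorer’s) does a bounded random walk over the same hypothetical account model:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy model-based test generator: the *model* (state + rules) is the
// maintained artifact; the generated test sequences are disposable.
public class AccountModel {

    private double balance = 0;                  // model state
    private final List<String> trace = new ArrayList<>();

    // Model rules: an effect, plus a guard saying when the action is enabled.
    void deposit(double amount) {
        balance += amount;
        trace.add(String.format("deposit(%.2f) -> balance %.2f", amount, balance));
    }

    boolean withdrawEnabled(double amount) { return amount <= balance; }

    void withdraw(double amount) {
        balance -= amount;
        trace.add(String.format("withdraw(%.2f) -> balance %.2f", amount, balance));
    }

    // A bounded random walk over the model; each walk is one generated test.
    static List<String> generateTest(long seed, int steps) {
        AccountModel m = new AccountModel();
        Random rnd = new Random(seed);
        for (int i = 0; i < steps; i++) {
            double amount = 1 + rnd.nextInt(100);
            if (rnd.nextBoolean() && m.withdrawEnabled(amount)) {
                m.withdraw(amount);
            } else {
                m.deposit(amount);
            }
        }
        return m.trace;
    }

    public static void main(String[] args) {
        // When the model changes, just regenerate a fresh suite --
        // there are no hand-maintained test cases to update.
        for (long seed = 0; seed < 3; seed++) {
            System.out.println("Test " + seed + ": " + generateTest(seed, 5));
        }
    }
}
```

In a real setup, each generated step would be replayed against the system under test, with the model’s expected balance serving as the oracle; here the traces are simply printed. The maintenance payoff is that a new rule (say, an overdraft limit) is one guard in the model, not an edit to hundreds of hand-written scenarios.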
The talk includes a demo of SpecExplorer. I don’t discuss its theory, how to model, or its more exotic features, but I do show the basic steps needed to do MBT with it.
One of the developer/testers in the audience tweeted about the talk with the hashtag #MoreModelsLessTests, a catchphrase I had tossed out. That almost says it all, except that with models we can automatically produce many, many more tests while driving the cost of maintaining them to very near zero.
A video of the talk is on YouTube: http://ow.ly/qMkb4
The slides are on SlideShare: http://www.slideshare.net/robertvbinder/taking-bddtothenextlevel
The YouTube video couldn’t include the video clips from “The Blob” (1958) that I used to illustrate the concepts and have a little fun, so here they are:
In the movie, the Blob is finally beaten back (spoiler alert) by freezing it – and that’s what I think model-based testing can do to the testing BackBlob.