Robert V. Binder

Déjà vu All Over Again – The Mobile Testing Nightmare

I attended a great talk today about testing mobile applications, given by Lee Barnes of Utopia Solutions. It recounted the Rubik's-cube permutations that affect mobile app quality and reliability: multiple stacks; multiple handheld devices and form factors; constrained battery life, memory, and processing power; variable wireless connectivity; different behavior under dedicated and multi-tasking handheld OSes; and sensitivity to non-application events (power sleep, incoming call/text, drained battery, camera use, dropped call, lost connection, etc.). And then there is the sheer proliferation of devices, apps, networks, stacks, and so on.

In 2000, a Gartner analyst termed this the “mobile testing nightmare.”

Lee observed that some of his clients are quite surprised to discover that mobile app testing isn’t simpler than testing desktop apps.

He included an overview of some current test automation tools for mobile apps. The take-away? Available tools leave a lot to be desired for automating functional and regression testing of mobile applications. Why?

  • They do not provide effective cross-platform support. Suppose you want to develop the same app for iOS, Windows Mobile, Android, etc. There is little if any support for factoring out the commonality in a useful way, either during development or test.
  • There is no reliable “object” interface for the component parts of mobile app GUIs, so test automation is still dependent on very brittle screen captures and unreliable character recognition.
  • Some tools require re-compilation of application code to add in special test interfaces, but this is problematic for many reasons — not least that the binary you ship is no longer the binary you tested.
  • Emulation and shared devices are useful, but only up to a point.
  • Although some products are reported to provide script-level integration with existing test automation environments like HP’s Quality Center, this doesn’t address any of the fundamental issues of device controllability and observability.
  • They provide no support for end-to-end testing.
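The brittleness of screen-capture automation (the second bullet) can be shown with a toy model. This is a minimal sketch with invented names — no real tool's API — in which a widget's pixels depend on both its label and the device theme, so an image-based locator recorded under one theme "fails" under another even though nothing is functionally wrong, while a lookup through an object interface stays stable:

```python
# Toy illustration (hypothetical names, not any real tool's API) of why
# pixel matching is brittle while an object interface is stable.
import hashlib

def render_button(label: str, theme: str) -> bytes:
    """Stand-in for a rasterized widget: pixels depend on label AND theme."""
    return hashlib.sha256(f"{label}:{theme}".encode()).digest()

# The object tree a platform *could* expose through a hook API.
screen_objects = {"login_button": {"label": "Login", "enabled": True}}

# Image-based locator: record the pixels under the default theme.
baseline = render_button("Login", theme="light")

def find_by_image(signature: bytes, theme: str) -> bool:
    # Matches only if the current rendering is pixel-identical to the baseline.
    return render_button("Login", theme) == signature

def find_by_object_id(object_id: str) -> bool:
    # Queries the widget tree; rendering changes don't matter.
    return object_id in screen_objects

# An OS update switches the default theme...
assert find_by_image(baseline, theme="light")      # passed yesterday
assert not find_by_image(baseline, theme="dark")   # "fails" today: no bug, just pixels
assert find_by_object_id("login_button")           # stable either way
```

Every cosmetic change — theme, font, resolution, anti-aliasing — forces the image baseline to be re-captured, which is exactly the maintenance burden the bullet describes.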

Lee’s recommendation is to develop smart manual test strategies, and use available test automation only where it provides a clear advantage.

These were exactly the same problems I set out to solve when I founded mVerify ten years ago. Although we made a lot of progress, we didn’t get past most of the above barriers.

  • The GUI object-interface problem will never be solved until platform vendors provide an API that can “hook” all of the software components (“objects”) that comprise a GUI. This I know because I asked all of them to do this, repeatedly, and was basically told to go to hell. Unless and until such an API is provided, there can be no reliable and effective test automation for mobile apps.
  • Another problem is automating on-device controllability and observability without side-effects that mask real bugs or create spurious ones. After three architectural iterations on this problem, I know how to do it. It is not cheap or easy, and it cannot be done without in-stack hooks.
  • End-to-end testing (e.g., making a server send a message of interest to a handheld while controlling the handheld and observing its response) requires a distributed test harness. Multi-stack agents have to be controlled from a single automated implementation of a test plan. The last version of MTS had a limited but useful implementation of this. Sound simple? I can promise you it isn’t.
  • The cross-platform solution is the philosopher’s stone of automated testing. It would require that functional tests be composed at a high level of abstraction (this takes skill in abstract test design, and apps implemented with abstractable functions, so it isn’t only a technical challenge). Then the abstract actions have to be mapped to all (not just one) of the application-specific and platform-specific interfaces. Platform-unique features have to be blended in too. In principle, feasible. In practice, the complexity is daunting. And again, there is no hope unless a complete “hook” API, deep in each platform’s stack, is available.
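The compose-once, map-per-platform idea in the last bullet is essentially an adapter pattern. Here is a minimal sketch under that assumption — every name (Driver, AndroidDriver, tap, and so on) is invented for illustration, not a real framework — in which one abstract test runs unchanged against each platform's concrete mapping:

```python
# Hypothetical sketch: abstract test steps written against an interface,
# with each platform supplying a concrete mapping. All names are invented.
from abc import ABC, abstractmethod

class Driver(ABC):
    """Abstract actions a test is composed from."""
    @abstractmethod
    def tap(self, widget: str) -> str: ...
    @abstractmethod
    def read_text(self, widget: str) -> str: ...

class AndroidDriver(Driver):
    # Maps abstract actions onto a (pretend) Android-specific interface.
    def tap(self, widget: str) -> str: return f"adb-tap:{widget}"
    def read_text(self, widget: str) -> str: return f"android-text:{widget}"

class IOSDriver(Driver):
    # Same abstract actions, different platform-specific mapping.
    def tap(self, widget: str) -> str: return f"xcui-tap:{widget}"
    def read_text(self, widget: str) -> str: return f"ios-text:{widget}"

def login_test(driver: Driver) -> list[str]:
    """One abstract test, runnable against every platform mapping."""
    return [driver.tap("username_field"),
            driver.tap("login_button"),
            driver.read_text("status_banner")]

for drv in (AndroidDriver(), IOSDriver()):
    print(login_test(drv))
```

The hard part the bullet points to is not this skeleton but the concrete drivers: each one needs deep in-stack hooks to actually locate widgets and observe responses, and platform-unique features still have to be blended in outside the shared abstraction.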

When I founded mVerify, we had one main competitor, who received over 20 times as much funding as we did from leading VCs. I didn’t raise any VC investment – we raised some Angel money and mostly bootstrapped. Despite that, our products were essentially the same. Our competitor had a few features we didn’t (and we had some they didn’t). Neither of us had a complete solution, and neither firm exists today.

However, I did learn what it will take to make the mobile testing nightmare go away. I wish present-day tool providers well as they grapple with the same challenges I did, because an effective solution is still very much needed. But, unless some kind of disruption in this space makes it possible to pay for the substantial development needed, I don’t expect to see much more than the limited present-day technology.

Until that happens, I think Lee Barnes has it about right. A well-crafted test plan conducted by skilled testers, even if it mostly relies on fingers and eyeballs, is still the best (and unfortunately only) way to test mobile apps.

 


