Building testable software

We’ve all worked with software which wasn’t designed with testing in mind; as developers and as testers, many of us will have been stung by this. Try as we might, we can’t vouch for the completeness, robustness or performance of some component or function of our product. This leads to anxiety, premature greying and poor digestion.

Software can be untestable (or difficult to test, which amounts to the same thing) for a variety of reasons: component X is too tightly integrated with some poorly-understood blob of code and can’t be tested in isolation; appropriate hooks for automated testing haven’t been built into the code; testers lack the tools to interact with the system they’re supposed to test.

Don’t allow yourself to get into any of these situations. If your team (and managers!) value quality, that means (among other things) designing for test from the outset.

Testability as a ‘Done’ criterion

  • Before any code or functionality can be considered complete, it must be demonstrated that the UI which exercises that code can be driven from the team’s automated testing framework. It must be possible for tests in that framework to find and interact with any UI component, and to make assertions about its state.
  • If functionality to be tested relies on external services or unimplemented code, some mechanism to simulate those services or code must be provided. (A sketch of both criteria follows this list.)
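
To make that concrete, here’s a minimal sketch of both criteria in Python, assuming Selenium WebDriver as the team’s automated testing framework. The URL, element IDs, Order class and inventory service interface are all hypothetical, invented purely for illustration.

    # A sketch only: the URL, element IDs and service interface are invented.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class FakeInventoryService:
        """Stands in for the real, external inventory service, so orders
        can be tested before that service exists or is reachable."""
        def reserve(self, sku, quantity):
            # Always succeeds; a richer fake could simulate failure modes.
            return {"sku": sku, "quantity": quantity, "reserved": True}

    class Order:
        """Hypothetical production class, included only so the fake
        service has something to plug into."""
        def __init__(self, inventory):
            self.inventory = inventory

        def submit(self, sku, quantity):
            return self.inventory.reserve(sku, quantity)["reserved"]

    def test_submit_button_updates_order_status():
        # Criterion 1: the framework can find and drive any UI component,
        # and can make assertions about its state.
        driver = webdriver.Firefox()
        try:
            driver.get("http://localhost:8080/orders/new")
            driver.find_element(By.ID, "sku").send_keys("ABC-123")
            driver.find_element(By.ID, "submit-order").click()
            status = driver.find_element(By.ID, "order-status")
            assert status.text == "Submitted"
        finally:
            driver.quit()

    def test_order_reserves_stock_without_the_real_service():
        # Criterion 2: the external dependency is simulated, not called.
        assert Order(FakeInventoryService()).submit("ABC-123", 2) is True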

The case for automated testing

The purpose of testing is not to “assure quality”, but to provide information to stakeholders on the state of the software: its fitness for purpose, its completeness and its robustness. This information allows these stakeholders to make informed decisions about the demands and risks involved in releasing the software.

Automated testing can provide one very useful piece of information: that large swathes of functionality continue to work in the same way today as they did yesterday. Even if your functional test harness never catches a single bug, it’s providing useful information to stakeholders every time it runs.

Automation done badly

You’ve probably seen test automation done badly; my favourite example of terrible test automation involved parachuting a contractor into the test team on an already-late project. He then spent several weeks, working alone, building an automated GUI test harness using a proprietary tool for which the company owned only a single (very expensive) license. Over a couple of days at the end of the contract period, there was a ‘handover’ of the automation code to manual testers with very basic programming skills and no experience of automation. Yes, I was one of those testers.

We tried our best, but within a few short weeks the software had evolved further, the test harness started to break and everyone on the project team lost faith in the harness’ ability to tell us anything useful about the state of the software. Soon, the test harness was abandoned, leaving a lot of people with a bad taste in their mouths regarding the cost-effectiveness and fruitfulness of automated testing.

Automation done well

First and foremost, any test harness must be robust and must produce reports the project team trusts. That means the harness’ architecture must be transparent, and the tests must be clear, easy to understand and easy to maintain; the sketch below shows one common way to achieve this.
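
One way to get that clarity is to separate what a test claims from how the harness drives the UI. The sketch below uses the page object pattern with Selenium; every name in it (the URL, the element locators, LoginPage itself) is hypothetical.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        """All knowledge of the login page's markup lives here, so the
        test below can be read without knowing anything about Selenium."""
        URL = "http://localhost:8080/login"
        USERNAME = (By.ID, "username")
        PASSWORD = (By.ID, "password")
        SIGN_IN = (By.ID, "sign-in")
        ERROR = (By.CSS_SELECTOR, ".error-banner")

        def __init__(self, driver):
            self.driver = driver
            driver.get(self.URL)

        def sign_in(self, username, password):
            self.driver.find_element(*self.USERNAME).send_keys(username)
            self.driver.find_element(*self.PASSWORD).send_keys(password)
            self.driver.find_element(*self.SIGN_IN).click()

        def error_message(self):
            return self.driver.find_element(*self.ERROR).text

    def test_wrong_password_shows_helpful_error():
        # The test reads as a statement about behaviour, not about markup.
        driver = webdriver.Firefox()
        try:
            page = LoginPage(driver)
            page.sign_in("alice", "wrong-password")
            assert "check your password" in page.error_message()
        finally:
            driver.quit()

Anyone on the team can read that test and say exactly what it claims about the software, which is precisely what makes its reports worth trusting.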

Doing automation properly requires understanding what automated testing can and cannot deliver; it requires making pragmatic choices about what is cost-effective and appropriate to automate. Among the questions to ask are:

  • What is the most appropriate choice of testing framework?
  • Does the team have sufficient skill to build an automated harness, to build the support functions that enable testing, and to build and maintain the tests themselves?
  • If not, do we invest the time and effort to train staff appropriately?
  • How do we choose which tests to automate?
  • Who tests the test harness?
  • Is the product in such a state of flux that UI elements, APIs and other interfaces are changing daily?

If the cost of updating the test harness to handle a change in existing functionality is greater than the cost of changing the functionality itself, it’s likely that the test harness architecture is not optimal.
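
Continuing the hypothetical page object sketch from earlier: if the front-end team renames the sign-in control, a well-layered harness absorbs the change at a single point, and the cost of the update stays trivially small.

    # In LoginPage, one locator changes; every test that calls sign_in()
    # keeps working untouched. (Element IDs are hypothetical, as before.)
    SIGN_IN = (By.ID, "sign-in-button")   # was (By.ID, "sign-in")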

May 10, 2011

