Writing solid test plans

Introduction

This short document provides the background you need to write good test plans. It explains that there are a number of perspectives from which you must view a piece of software in order to test it properly.

You might like to print this out and read it on paper. Your feedback is welcome!

Understand the business requirements

A tester is a proxy for the most important stakeholder in the entire project: the customer. It’s important that the tester has a good understanding of why a customer wants a new feature to work in a particular way. Knowing this helps you to predict what the feature should and should not do, which in turn feeds into your test plan.

If the Business Requirement Document is made available to you, read it. Ask questions. Print it out. Scribble in the margins. Don’t assume that the customer has actually thought things out properly for themselves. Try to find holes - situations that the author has missed. Talk to the person in your company who is the direct customer liaison.

Understand the technical specification

The Business Requirement Document outlines why the customer needs a given feature and what they want it to do. The Technical Specification describes how the feature will be implemented. Look out for mismatches between the two documents - assume that the person who translated the business requirement into a technical specification doesn’t have perfect knowledge. Expect to find mismatches, ambiguities and contradictions.

You can assume the document author has imperfect knowledge, but if you find an ambiguity in the document, don’t assume that anyone else will resolve it the way you might: bring it to the attention of the document author and make sure the document is amended to resolve the ambiguity. Check that the resolution had input from the customer. This is testing.

Even without the Business Requirements Document (or Functional Specification) you can still make yourself very useful (and annoying) by asking penetrating questions about the Technical Specification. Ask why the particular architecture has been chosen - is it a fast and cheap solution, or a compromise enforced by the existing architecture? Will it result in a solid, maintainable, modular system with minimal interdependencies?

Know your domain

Domain knowledge sounds grand, but it just means “knowing your stuff”. This is something that comes with practice; everyone starts off knowing virtually nothing about the product they’re testing. Make sure you have an experienced mentor who can sketch out the product architecture at various levels of detail as your knowledge grows. Challenge yourself to understand as much as you can about the inter-relationships between the various parts of the system you’re testing. Talk to the developers and get them to explain how components work internally. Don’t believe anybody who tells you that black box testing is the best way to test; it’s just one tool among many in the toolkit of an experienced tester.

Scoping out a test plan

Now that you’ve got an understanding of what the customer wants, how the development team are going to build it and the way the current system works, it’s time to start sketching out a test plan.

Speak with your team lead and decide on the major test headings and the goal of each. Some example test headings follow. They start with functional tests, looking at the system through a microscope, then move to larger and larger views until you are testing the entire system as a whole - taking into account not only the software under test, but the foreign systems with which it has to communicate and the hardware and network on which it runs.

Functional tests

These tests are designed to verify that the system behaves as intended. The simplest functional tests will verify components and the tasks you can perform with them: for example, creating, reading, updating and deleting a user account. These operations are known as CRUD - the most elementary operations you can perform on a database-backed system. Write separate tests for each of these activities.
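As a sketch of what ‘a separate test for each activity’ might look like, here’s a minimal pytest example. The UserStore class is an imaginary in-memory stand-in for whatever system you’re actually testing; only the shape of the tests matters.

```python
import pytest


class UserStore:
    """Imaginary in-memory stand-in for the real system under test."""

    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create(self, name):
        uid, self._next_id = self._next_id, self._next_id + 1
        self._users[uid] = {"id": uid, "name": name}
        return uid

    def read(self, uid):
        return self._users[uid]

    def update(self, uid, name):
        self._users[uid]["name"] = name

    def delete(self, uid):
        del self._users[uid]


@pytest.fixture
def store():
    return UserStore()


# One test per CRUD activity, each with a single expected outcome.
def test_create_user(store):
    uid = store.create("alice")
    assert store.read(uid)["name"] == "alice"


def test_update_user(store):
    uid = store.create("alice")  # prerequisite, not the behaviour under test
    store.update(uid, "alicia")
    assert store.read(uid)["name"] == "alicia"


def test_delete_user(store):
    uid = store.create("alice")  # prerequisite
    store.delete(uid)
    with pytest.raises(KeyError):
        store.read(uid)
```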

Remember that the ‘read’ part of CRUD includes searching; this can quickly multiply out into a large number of tests if you have several search parameters. In general, the correct approach is to write a separate test for each individual search parameter and then write tests for combinations of parameters. Paging through results in a UI is also a class of ‘read’ test.
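A sketch of that approach, again in pytest: the search function and the USERS data set are hypothetical, but the pattern - one parametrized test per individual parameter, then explicit tests for combinations - carries over to any search feature.

```python
import pytest

USERS = [
    {"name": "alice", "country": "IE", "active": True},
    {"name": "bob", "country": "LK", "active": False},
]


def search(users, **criteria):
    """Hypothetical search: return users matching every supplied parameter."""
    return [u for u in users if all(u.get(k) == v for k, v in criteria.items())]


# A separate test for each individual search parameter...
@pytest.mark.parametrize("field, value, expected_names", [
    ("country", "IE", ["alice"]),
    ("active", False, ["bob"]),
])
def test_search_by_single_parameter(field, value, expected_names):
    result = search(USERS, **{field: value})
    assert [u["name"] for u in result] == expected_names


# ...then tests for combinations of parameters.
def test_search_by_combined_parameters():
    result = search(USERS, country="IE", active=True)
    assert [u["name"] for u in result] == ["alice"]
```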

The same CRUD approach can be taken with API tests; APIs are used to create, read update and delete entities in the system.

Each test only needs to have one expected outcome. If your test has several expected outcomes interspersed between a chain of activities, break these out into separate tests and set up the necessary preceding steps as prerequisites.
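In pytest, those prerequisites sit naturally in a fixture. The can_log_in rule below is invented for illustration; the point is that the test body checks exactly one outcome.

```python
import pytest


def can_log_in(user):
    """Hypothetical business rule: only ACTIVE users may log in."""
    return user["state"] == "ACTIVE"


@pytest.fixture
def suspended_user():
    # The chain of preceding activities lives here as a prerequisite,
    # not in the test body. A real suite would drive the system's API or UI.
    user = {"name": "carol", "state": "NEW"}
    user["state"] = "ACTIVE"
    user["state"] = "SUSPENDED"
    return user


def test_suspended_user_cannot_log_in(suspended_user):
    # One test, one expected outcome.
    assert can_log_in(suspended_user) is False
```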

Other types of functional tests which need to be covered are state change tests: for example, moving a user from the NEW to the ACTIVE and then to the SUSPENDED state. State change tests are a subset of the CRUD tests - after all, moving a user from NEW to ACTIVE is a type of update.

Data validation is the final type of basic functional test. These types of test are concerned with verifying that the system can cope with input in expected and unexpected formats - for example, try to create a user with no name, or a name with one million characters. Does the system cope properly with these types of input? Does it swallow the input and then break when you try to read the data back? See the TROLL page for more information on the different types of input checking that need to be tested.
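Data validation tests lend themselves to parametrization. The create_user function and its 1-to-255-character rule below are assumptions made up for the sketch; substitute your system’s real validation rules.

```python
import pytest


def create_user(name):
    """Hypothetical validation rule: a name must be 1 to 255 characters."""
    if not name or len(name) > 255:
        raise ValueError("invalid name")
    return {"name": name}


@pytest.mark.parametrize("bad_name", [
    "",               # no name at all
    None,             # missing entirely
    "x" * 1_000_000,  # a name with one million characters
])
def test_create_user_rejects_bad_names(bad_name):
    # The system must reject the input cleanly, not swallow it and
    # break later when the data is read back.
    with pytest.raises(ValueError):
        create_user(bad_name)
```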

Remember that it’s not just UIs and APIs that need data validation: if the system has to read data from a configuration file, check how it copes with bad or missing data in the file.

Integration tests

Integration testing focuses on how the system’s components work together. The components of a typical system include remote APIs, databases, mail servers, message queues and gateways to third-party systems. These interfaces may use clearly defined protocols (HTTP, SOAP, SMTP…) or proprietary ones (Oracle’s network interface, WebLogic’s T3 protocol…).

Integration testing has two main focuses - validating that the component parts of the system work together faultlessly under normal operation and verifying that the system copes properly with failures in remote systems. The latter type of testing is called negative or failure testing.

The failure or unavailability of some remote systems is fatal to the system; it cannot function at all without them. A typical example is the database. Failures in other remote systems may be recoverable - you need to understand how the system should cope with these failures before you begin writing tests.

Integration testing requires a subtle approach. What if the database is clustered? The system should be able to fail over to the still-functioning database with no loss of data.

Any remote system which relies on the network must be carefully integration tested: network problems can mean that an expected response from a remote system never arrives. How does the system behave? Does it wait forever for the lost response? Note how these tests are similar to but subtly different from data validation tests at the functional test level.

It’s possible to create a rock-solid system which is very tolerant of badly-behaved remote systems if we are able to build simulators of those remote systems - simulators which can be triggered to produce bad input, or no input at all, simulating errors at various levels of the network stack: application, transport, internet and link.
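Here’s a minimal, self-contained sketch of that idea at the transport level: a throwaway simulator that accepts a connection and then never replies, so we can verify the client gives up after a bounded wait instead of hanging forever. The one-second timeout is an arbitrary choice for the example.

```python
import socket
import threading

import pytest


def test_client_times_out_on_lost_response():
    # Simulator: a server that accepts the connection but never answers,
    # standing in for a remote system whose response is lost on the network.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))      # bind to any free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def accept_and_say_nothing():
        conn, _ = srv.accept()
        threading.Event().wait(5)   # hold the line open, send nothing
        conn.close()

    threading.Thread(target=accept_and_say_nothing, daemon=True).start()

    client = socket.socket()
    client.settimeout(1.0)          # the behaviour under test: a bounded wait
    client.connect(("127.0.0.1", port))
    client.sendall(b"PING\n")
    with pytest.raises(socket.timeout):
        client.recv(1024)           # must give up, not wait forever
    client.close()
    srv.close()
```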

A key idea to remember when performing integration testing is to ensure that your system is very conservative in what it sends out, and very liberal in what it receives. In practice, that means we should validate (error-check) our data before we send it and we should validate any data we receive before we attempt to do anything else with it.

System tests

You read about state change tests at the functional test level. The focus of those tests is to verify that the GUI (or API) can be used to move an entity through various states. State change tests at the system testing level depend on the functional tests and integration tests working properly, and are focused on what happens when an entity tries to interact with other entities in the system while in various states. Another way of looking at it is to think of the actions which are associated with certain entities. For example, if a user is in the SUSPENDED state, what happens when they try to make a purchase? These types of state-change test are also called lifecycle tests, and belong at the system test level, not at the functional test level.

If two (or more) entities are interacting, draw up a matrix of the states that these entities can be in and write tests for each of those state combinations. For example, a user and their account can each be in several different states at the same time, but only some combinations are valid. Find out which combinations are possible on the system and test these, then verify that the invalid combinations are disallowed. Here’s a sample matrix for an imaginary payments system, with a sketch of a matrix-driven test after it:

User and Account state matrix

| Account state | User: NEW | User: ACTIVE | User: SUSPENDED | User: TERMINATED |
| --- | --- | --- | --- | --- |
| NEW | valid | valid | valid | valid |
| ACTIVE | invalid - the account can’t be made ACTIVE before the user is made ACTIVE | valid | invalid - the account must move to SUSPENDED at the same time the user moves to SUSPENDED | invalid - if the user is terminated, all their accounts must be terminated first |
| SUSPENDED | invalid - the account can’t be SUSPENDED before the user is made ACTIVE | valid - a user may be ACTIVE while their account is SUSPENDED | valid - a SUSPENDED user’s accounts are also SUSPENDED | invalid - if the user is terminated, all their accounts must be terminated first |
| TERMINATED | invalid - the account can’t be TERMINATED before the user is made ACTIVE | valid - a user may be ACTIVE while their account is TERMINATED | valid - a SUSPENDED user may have a TERMINATED account | valid - a TERMINATED user’s accounts are also TERMINATED |
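A matrix like this maps directly onto a parametrized test, one per cell. In the sketch below, system_allows is a placeholder that simply consults the matrix; in a real suite it would drive the system under test and observe whether the combination is accepted or rejected.

```python
import pytest

STATES = ["NEW", "ACTIVE", "SUSPENDED", "TERMINATED"]

# The 'valid' cells of the matrix above, as (user_state, account_state) pairs.
VALID_COMBINATIONS = {
    ("NEW", "NEW"), ("ACTIVE", "NEW"), ("SUSPENDED", "NEW"), ("TERMINATED", "NEW"),
    ("ACTIVE", "ACTIVE"),
    ("ACTIVE", "SUSPENDED"), ("SUSPENDED", "SUSPENDED"),
    ("ACTIVE", "TERMINATED"), ("SUSPENDED", "TERMINATED"),
    ("TERMINATED", "TERMINATED"),
}


def system_allows(user_state, account_state):
    """Placeholder: a real test would try to put a user and account into
    these states via the system's API or UI and report what happened."""
    return (user_state, account_state) in VALID_COMBINATIONS


# One generated test per cell of the matrix: 4 x 4 = 16 tests.
@pytest.mark.parametrize("user_state", STATES)
@pytest.mark.parametrize("account_state", STATES)
def test_user_account_state_combination(user_state, account_state):
    expected = (user_state, account_state) in VALID_COMBINATIONS
    assert system_allows(user_state, account_state) is expected
```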

Performance tests

For the time being, see my post on performance testing guidelines, which includes tips on how to write test plans for performance testing.

Other types of test

Usability testing

Usability testing is another perspective that you have to keep in mind while testing. It is the job of verifying that human users of the software can carry out the tasks for which it was designed with minimal effort and confusion. Usability testing is one of the few testing functions which is impossible to automate!

The software delivered to you might match the specification perfectly, but if you have to scroll around a UI endlessly, make several clicks where one would do, or work around a badly chosen GUI component, then the software has usability issues.

Usability issues most often come to light at the specification stage. The job of finding them at specification time is made easier if you’ve been provided with mockups or ‘wireframes’ of how the UIs will look. Unfortunately, usability issues found after the implementation has been done are often treated as low priority, so it’s important to find them as early as possible.

Test design

Be clear in what you’re testing

State in your test plan whether the test or set of tests is functional, integration or any other type. Make sure you keep to that perspective while writing your tests - don’t get sucked into checking what happens in the database when writing a functional test, for example.

For every test case, state the reason for your test: this sounds inane, but sometimes it’s just not obvious why something is being tested. For example, indicate if the test is part of a series of CRUD tests. Refer to use cases in the source documents - the business requirements document, functional specification and technical specification.

As well as having a reason for being executed, every test should have just one expected outcome. You should know in advance what the result will be - even long before you get your fingers on running code. If you find yourself dithering over an expected outcome, congratulate yourself for having found a hole in the specification and alert the person responsible for maintaining the specification - you’ve just found a specification bug! Finding bugs early is one of the best prizes of Quality Assurance.

Separating tests and data

A test case specifies a set of actions which must be performed and the data which must be supplied as part of those actions. Manual tests typically specify the data in the test case.

Deciding whether to separate tests and data for automated testing purposes depends on the complexity of your functionality. If you have simple functionality (for example, Google search) then it makes sense to separate out the tests from the data: a series of simple tests can iterate over vast data sets.

If you have complex functionality (for example, complex entities within your system which go through state transitions and can perform actions on other entities in your system), separating tests and data is not recommended.

Some of the ‘data’ are entities which must be created in the system. Even if you separate the data from the tests, you still have to create these entities in the system anyway, so ultimately you do the same amount of work in two different places - and it’s difficult to keep the two coordinated.

For example, if a test needs a suspended user, you have to add code to the ‘test’ module to create a suspended user for this test. Then the other properties of the suspended user have to be listed in a ‘data’ file, which will be read by the test. This doesn’t really make sense! It is much better to create the user and set their state and other properties as and when the test requires it.
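A sketch of the recommended shape, with invented make_user and attempt_purchase helpers: the entity is created and driven into the required state inside the test’s own code, so there is no separate data file to keep in sync.

```python
def make_user(name, state="NEW"):
    """Hypothetical helper: create a user, then drive it into the required
    state. A real suite would call the system's API or UI here."""
    user = {"name": name, "state": "NEW"}
    user["state"] = state
    return user


def attempt_purchase(user):
    """Hypothetical business rule: only ACTIVE users may purchase."""
    return user["state"] == "ACTIVE"


def test_suspended_user_purchase_is_refused():
    # The prerequisite entity is built right here, with exactly the
    # properties the test needs - no parallel 'data' file required.
    user = make_user("dana", state="SUSPENDED")
    assert attempt_purchase(user) is False
```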

Avoiding inter-dependencies

It’s essential to ensure that you don’t use one test case to set up data or state for a following test case. This introduces interdependencies between tests and makes the tests impossible to run in isolation. It’s a difficult habit to break if you are used to manual testing, but must be avoided once you move on to building automated tests.

Grouping your tests logically

It should be clear by now that a test is an item that can be independently validated (for example: “Enter the service URL; the home page appears”).

Tests can be grouped into a test case, most often a logical series of steps. In turn, test cases can be grouped into a test scenario, which is typically a complete business transaction.

Be aware that this approach can lead you into chaining tests together, setting up interdependencies between them. As stated earlier, this is not good practice if the tests are to be automated. When automating tests, don’t just assume that you can string together a bunch of existing functional automated tests to produce a valid test for a complete business transaction. Instead, write the business transaction test as a separate test. Taking that approach means that the functional tests can be run standalone and so can the business transaction test.
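A sketch of what a standalone business-transaction test can look like, with invented register, add_to_basket and checkout helpers. It performs its own setup from scratch rather than relying on earlier functional tests having run first.

```python
# Hypothetical helpers modelling a complete purchase transaction.
def register(name):
    return {"name": name, "state": "ACTIVE", "basket": []}


def add_to_basket(user, item):
    user["basket"].append(item)


def checkout(user):
    items, user["basket"] = user["basket"], []
    return {"status": "PAID", "items": items}


def test_complete_purchase_transaction():
    # Standalone end-to-end test: does its own setup, start to finish,
    # so it can run in isolation from the functional tests.
    user = register("erin")
    add_to_basket(user, "book")
    order = checkout(user)
    assert order["status"] == "PAID"
    assert order["items"] == ["book"]
```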

Conclusion

After reading this document, I hope you’ll have a feel for the fact that testing requires you to look at a system from a variety of distances and angles.

You can get right in and close-up, verifying the behaviour of components on an individual GUI page, or you can stand back to see the system interacting with other systems around it.

You can view the same component from different directions: checking how its state changes and how it interacts with other components. The design of a component affects its usability.

No matter which angle you’re looking at a system from, it’s very important to be clear in your own mind where you are standing, and not to confuse one perspective of the system with another. When writing a test, always ask yourself what you are trying to test. Don’t try to make each test a complex, all-singing, all-dancing test which does fifteen different things. It may feel efficient when you’re running the test, but this approach makes it very hard to diagnose what’s happening when your test triggers the inevitable bug.

Acknowledgements

Thanks to Ciara for suggesting the entity state interaction matrix, to Chaminda for a very illuminating discussion on the merits (or otherwise) of separating tests and data, and to Sisira for feedback on logical test structure.

May 30, 2005

