Test Documentation Set
Another “I was asked the other day” post. This time, the topic was around test documentation.
Which documents comprise a mature set of test documentation?
- Project Test Plan – This document outlines the high-level strategy from a testing perspective. Each feature is discussed briefly, as are the resources required by the project (staffing, hardware, software). The primary audience of this document is not test, but other people/groups in the organization who do not need to know the nitty-gritty details but have input on the overall direction the testing will take. If there are any large test-support tasks happening during the duration of the project, they should also go here. A couple of test-support tasks might be “use a new hardware management strategy” or “automate all new features”.
- Feature Test Plan – This document outlines the strategy for testing a single feature, and it is the most important document for the individual tester as it puts them in the right frame of mind to approach the problem. Included is a detailed description of the feature and a description of the approach(es) to testing it. Example: the display of CGI-generated information appears in all supported browsers. Ultimately, any tester should be able to be handed this document and achieve good test coverage. Nowadays, an automation section should also be included which outlines how the feature could be automated, even if it is not going to be automated at this time. Or, even more importantly, what obstacles exist to prevent automation, which would be the start of a conversation with development.
- Feature Test Cases – Now these are where the nitty-gritty details are. I’m of the school of thought that they should be detailed enough to not allow for any confusion regarding the goal of the test, but not so detailed that a 5 year old could do them. If a feature has all its test cases completed, then it has 100% test completion — as we know it now. The last part is important. It could be that a new attack vector is discovered, or new feature x has a cascading effect on only feature m, either of which will cause you to create new test cases. This is a big topic, so I’ll (try to) do a separate post on test cases in the near future.
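To make the automation point concrete, here is a minimal sketch of what a single test case might look like once automated. Everything in it is illustrative: `render_status_page` is a hypothetical stand-in for the feature under test (the real feature would be driven through a browser or HTTP request), and the point is only that the test's goal is unambiguous while the mechanics stay flexible.

```python
def render_status_page(host: str) -> str:
    """Hypothetical stand-in for the feature under test: in a real
    project this would drive the actual CGI endpoint or browser."""
    return f"<html><body><h1>Status for {host}</h1></body></html>"


def test_status_page_shows_host():
    """Goal: the generated page displays the requested host name.

    Detailed enough that the goal cannot be confused, but not so
    detailed that it dictates every keystroke.
    """
    html = render_status_page("server01")
    assert "server01" in html
    assert html.startswith("<html>")


if __name__ == "__main__":
    test_status_page_shows_host()
    print("test_status_page_shows_host passed")
```

The docstring carries the test's goal, so the case remains auditable even when the implementation details underneath it change.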
At this point you might be thinking “well, show us an example of each”. That is no simple task, really. What might work in my environment might not work in yours. Another problem this might create is that people could just modify it by changing the names of the features, etc. There is nothing more dangerous than someone armed with a template who does not understand why they are putting certain information in. This is especially true in test. If your test cases are incomplete or inaccurate, you lose the auditability and accountability that is necessary in testing. You also lose the ability to respond to “did you test x?” when a bug comes in from the field. If your strategies are missing or underdeveloped, then every time a tester approaches a feature there is a large chance that they will do it differently. This seems like a good thing from a breadth-of-coverage perspective, but you also lose all your previous coverage this round.
“But all this documentation is overhead my schedule cannot afford!” Your schedule has to afford it. You will never have enough time in the schedule to do everything you want, but you must have a well-developed test documentation set. Not only because I say so, but because
- it allows you to ramp up new members of the test group more quickly than you could without it
- it allows for business to continue as usual if someone leaves (or is hit by a bus)
- it allows you to outsource some of your testing should you choose that model at some point in the future
- it allows you to use non-testers in your testing which can be a useful method of checking usability