Consistency, or more specifically the lack thereof, is one of the things that drives me crazy. Inconsistency happens everywhere, not just in testing. For instance, should I order a large hot chocolate and a muffin from the coffee shop at the bottom of our building, it will end up costing one of three different prices depending on who punches the order into the till. The difference is only a few cents, so I don’t bring it up.

I see a similar pattern in organizations that do not have mature testing processes in place. Tester A will test Feature X in an ad hoc way and get decent coverage, and Tester M will also test Feature X in an ad hoc way and also get decent coverage. The definition of what ‘decent’ is could, and in practice usually does, vary widely depending on the understanding of the feature, the understanding of the changes (risks) associated with the feature, etc. This inconsistency has the potential to introduce a difference measured not in cents, but in very real dollars, especially in some of the more litigious jurisdictions.

So how do you address this problem and, as a result, drag your testing group (kicking and screaming if necessary) a notch or two up the maturity ladder? Test Cases.

“But we don’t have time to develop extensive Test Cases!!”

That may be so, but I don’t seem to recall saying you need to have extensive Test Cases. Traditionally, when people think of Test Cases, something like the following pops into their head:

“Right-click the ‘Name’ field to activate it. Now that it is activated, type in ‘John Doe’ and hit ‘Enter’…”

I agree whole-heartedly that you do not have the time to waste on Test Cases like that. The problem with this level of specificity is that you test only one specific path through the code for that sub-feature, every release. And while this is extremely consistent, I can guarantee you that your customers are not going to limit themselves to the data you used, which limits its testing worth.

What I propose instead is a consistent approach to testing a feature. Let’s use certificate authentication as an example because it’s massively complex (and having had to test it for 6 years, I know a lot of the nooks and crannies). This is approximately what my test cases would look like:

OCSP

  • location externally specified
  • location embedded in certificate
  • location both externally specified and embedded in certificate
  • ocsp server unavailable
  • response: certificate found
  • response: certificate not found
  • ca certificate not found
  • certificate revoked
  • ca certificate revoked
  • certificate still valid
  • age of crl in ocsp server invalid

That is more or less it for OCSP, which is a pretty big part of modern PKI. Notice that I did not specify what CA should have issued the certificate, which OCSP server to test against, the DN of the certificate, etc. This lets the tester use their own data and approaches, which will exercise the code to a greater degree, but at the end of the testing cycle you will still know that all aspects of the feature have been covered.
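
If you want to see how a checklist like this could be captured in a slightly more structured form, here is a minimal sketch using pytest. The fixture and test names are mine (hypothetical, not from any particular product); each stub only names a condition that must be covered and deliberately leaves the CA, the certificates, and the responder up to the tester.

import pytest


@pytest.fixture
def ocsp_environment():
    # Tester-supplied setup: a CA, issued certificates, and an OCSP responder.
    # This sketch skips so it can be run as-is; replace with real setup.
    pytest.skip("supply your own CA, certificates, and OCSP responder")


def test_responder_location_externally_specified(ocsp_environment):
    """Responder location configured outside the certificate."""


def test_responder_location_embedded_in_certificate(ocsp_environment):
    """Responder location taken from the certificate itself."""


def test_ocsp_server_unavailable(ocsp_environment):
    """Behaviour when the responder cannot be reached."""


def test_certificate_revoked(ocsp_environment):
    """Responder reports the end-entity certificate as revoked."""


def test_ca_certificate_revoked(ocsp_environment):
    """Responder reports the issuing CA's certificate as revoked."""

How each stub gets filled in is up to whoever runs it, which is exactly the point: the list fixes what must be covered, not the data used to cover it.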

Generating this list is pretty simple as well, since it should come almost straight from the requirements. And if it doesn’t, then your requirements need modification/fleshing out.