A couple weeks ago I responded to a question on comp.software.testing about how I write test cases. We also discussed it tonight in the class I’m currently teaching. Since it’s come up twice recently, I figured it was worthy of a post.

As with most things, I prefer not to artificially constrain my test cases. This means that rather than writing something like 'put adam into the username field' as part of a registration test suite, I would just have 'put a name in the username field' (or something equally non-specific about the test data). The logic behind this is that if I tell myself, or whoever is using the test case, to use 'adam' every time, then this test only has a good shot at finding a bug once: the first time. Every other time the test is used, the same execution path is run – four letters, all lowercase.

If I leave it open, I will get more variation in the test data. If I were looking to be lazy, and I usually am, I might go so far as to write a quick script to generate random usernames of random length that may or may not meet the validation criteria. Connect the two with some logic and you have pretty much entered the land of Model Based Testing.
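For the curious, here is a minimal sketch of what such a generator might look like in Python. The validation rule here (3–20 ASCII letters or digits) is a made-up stand-in; swap in whatever your requirements document actually says:

    import random
    import re
    import string

    # Hypothetical validation rule for this sketch: 3-20 ASCII letters/digits.
    # Replace with whatever the requirements document actually says.
    def is_valid(username):
        return re.fullmatch(r"[A-Za-z0-9]{3,20}", username) is not None

    def random_username():
        # Lengths deliberately range outside the valid bounds, and the
        # alphabet includes characters the rule rejects, so some names
        # should pass validation and some should fail.
        length = random.randint(0, 30)
        alphabet = string.ascii_letters + string.digits + "._- é漢"
        return "".join(random.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        for _ in range(10):
            name = random_username()
            # Feed each name to the registration form; the oracle above
            # tells you whether the app is expected to accept or reject it.
            print(repr(name), "->", "accept" if is_valid(name) else "reject")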

But because there is always an exception (without exception), if specific test criteria triggered a bug, list that as something specific to test.

Another thing I leave out of test cases is the specific expectation. I’ve seen lots of test cases which stated as the expectation 'The test should succeed.' Well, duh! Of course the test should succeed. That falls under the category of fluff and needs to be removed. I also don’t like seeing expectations that say the user should be presented with the following error: 'The username is already in use.' By embedding information into your test case that is likely already contained in a requirements document, you increase the likelihood that the two documents will not jibe in the future. I always test with the requirements at my fingertips, so if I care about the error message, I can find the desired content quickly and easily in the document that truly matters.

'Should' is a pretty dumb word to see in test cases as well. Either a test Must pass, or it Must Not pass. None of this 'In this situation, it should do this' nonsense. It either did what you wanted it to, or it didn’t.

There was also a question about format. I don’t care. The important thing is the content itself. If you like tables, good on you, but they tend to lead you down the road to including expectations and other unnecessary items.

One person asked about providing a column (in a table) to show the result. This too is unnecessary. Remember what I said regarding 'should': all tests need to pass or have a bug reported. If you are desperate for another column, put the number of the bug that causes a test to fail. And remember to remove it when the bug is fixed.

So what does this translate to in ‘real’ test cases?

Login Test Cases

  • Valid username / password
  • Missing username / missing password
  • Missing username / password that exists in database
  • Valid username / missing password
  • Security
    • XSS
    • SQL Injection
    • Secure connection
  • Non-English username
  • Non-English password
  • Non-information-leaking error message text
  • Lock-out after X incorrect attempts
  • Incorrect attempt counter resets after successful login
  • Online Help
  • Accessibility
  • Usability
  • Maintainability
  • Field lengths
  • Consistency

As you can see, it’s a pretty complete list and was pretty fast to write. Nowhere does it state expected results, as those are captured in the requirements; and since the default state for any test is to pass, I have not included anything implying a test could end any other way.
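If you eventually automate some of these cases, the same philosophy carries over: keep the data open-ended and keep the oracle in one place. Here is a minimal pytest-style sketch; login() and requirements_say_valid() are hypothetical stand-ins for your app hook and your requirements-derived oracle, not real APIs:

    import random
    import string

    import pytest

    # Hypothetical stand-ins -- drive your real app and encode your real
    # requirements here. The oracle lives in one place instead of being
    # pasted into every test case.
    def login(username, password):
        ...

    def requirements_say_valid(username, password):
        ...

    def random_string(max_len=20):
        # Fresh data every run, so repeated executions walk new paths.
        n = random.randint(1, max_len)
        return "".join(random.choice(string.ascii_letters + string.digits) for _ in range(n))

    @pytest.mark.parametrize("username,password", [
        (random_string(), random_string()),  # valid username / password
        ("", ""),                            # missing username / missing password
        ("", random_string()),               # missing username / password only
        (random_string(), ""),               # valid username / missing password
    ])
    def test_login(username, password):
        # The test must pass or a bug gets filed -- no 'should' about it.
        assert login(username, password) == requirements_say_valid(username, password)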

I can also give this list to another tester and they could test just as effectively in this app. One could argue their testing would be even more effective, since they don’t already know how to make the app work and so won’t unconsciously steer it down the happy path.