Throw out your Automated Tests
In the era of ‘Automate Everything!’ fanaticism coming from the Agile community, and with more and more vendors releasing products that promise to automate you towards zero shipped defects, it is near blasphemy to suggest a testing strategy without automation. What is even worse is to suggest that time be invested in creating scripts only to toss them aside once they are complete. But that is exactly what I am suggesting.
Static analysis tools provide high value with little maintenance once the initial tuning is complete, and the usefulness of unit tests has been well demonstrated by the TDD crowd, so we want to keep both of those around. Classical end-user functional automation (think QTP and Selenium), however, provides little long-term value, and as such should be discarded.
If that wasn’t a controversial enough statement, this one will certainly top it: you should still go through the exercise of writing these tests. Why? By doing so, you learn how the feature being automated is wired together. If you are going to do good automation you will inevitably answer these questions (and many more), as the sketch after this list illustrates:
- What pages are involved?
- What are the field validations?
- Where is information stored in the database?
- What manipulations happen to the data?
- What are the business rules that govern the process?
- Are the developers providing sane ids for objects?
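To make that concrete, here is a minimal sketch of the kind of script I mean, written with Selenium WebDriver in Python. The URL, element ids, and validation message are all hypothetical, stand-ins for an imaginary registration feature; the point is that you cannot write even this much without learning which page the feature lives on, what the field validation says, and whether the developers provided sane ids.

```python
# Hypothetical Selenium sketch: submit a bad email address on a registration
# page and check the validation message. Every URL and locator here is an
# assumption about an imaginary application, not a real one.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    # What pages are involved? You have to find out just to write this line.
    driver.get("https://example.test/register")

    # Are the developers providing sane ids? You discover that here.
    driver.find_element(By.ID, "email").send_keys("not-an-email")
    driver.find_element(By.ID, "submit").click()

    # What are the field validations? Writing the assertion forces the question.
    message = driver.find_element(By.ID, "email-error").text
    assert "valid email" in message.lower(), message
finally:
    driver.quit()
```

And once a script like this works, the database and business-rule questions tend to follow naturally, because you have to know where the data landed to decide what to assert next.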
All the answers to the above help you form other, better, more targeted questions to ask the software. This deep learning of the product is the true value of the automated tests you have written. The value is not in catching a regression bug, as that was hopefully found by the static analysis or unit tests. Once an automated test is written it also starts to fall victim to the Pesticide Paradox, where bugs learn to “hide” from the tests or are otherwise immune to them.
I’ve been doing automation for a number of years on a variety of platforms. During that time I would guess that running the scripts found fewer than 10 bugs. I’ve found many times that number while writing the tests.
So why throw out the scripts when they have already been created? Shouldn’t you just keep running them? I suppose you could, if you commit to not spending any time fixing a test when it ‘breaks’. The reason for this comes down to maintenance. Any time you spend reidentifying GUI objects on the screen or otherwise tinkering with a script is time you have lost from testing. Time is almost always a constrained resource, and we are best served by using it as effectively as possible.
What we need to do, then, is change how we look at test automation. Instead of treating it as a safety blanket against regressions that provides management with a nice warm-and-fuzzy feeling, test automation needs to be reclassified as a technique that aids in the exploratory testing of an application at a technological level.
Put another way, automated functional testing is not about the end result, it is about the journey.
I can only accept partial blame for this idea. The seed was planted by Michael back in December, though I suspect I have run much further with it than was discussed then.