History of a Large Test Automation Project Using Selenium
I’ve known Chris McMahon through the Internets for a number of years, but only met him in person at Agile 2009. His talk was immediately after mine and seemed to be one of only a few purely experiential sessions: this is what the situation was, this is what we did, and this is what we learned in the process. I like this kind of session a lot more than pure workshop ones, though I’m beginning to see their value too. So even though I know a bit of inside baseball about the Socialtext automation rig, I still got over two pages of notes. Not to mention the paper he said exists somewhere, which I’m sure he’ll add a link to in the comments. (Won’t ya, Chris…)
- Manual test cases looked the same-ish as their automated counterparts
- Tests were named after features
- Manual tests were not there for regression purposes, but as placeholders for future automation
- Poverty of oracle problem – an automated check only reports the one problem it was written to look for
- Tip – Break out of the frame to increase testability (frame-hopping sketch after these notes)
- Once the manual test is automated, throw out the manual one
- There is a need for both canned data and data created by the tests themselves. Canned data is for when a test requires a large data set up front
- Lewis Carroll poems make great (string) seed data (especially for search)
- If you are creating data, make it unique with something like a Julian timestamp (timestamp sketch after these notes)
- Weave a ‘web’ of tests through your application
- 90/9/1 Rule – 90% of people read content, 9% comment on it, 1% create new content
- Smell – All automated tests share the same initial path through the system (i.e., a branching strategy rather than a web)
- Smell – Past a certain number of steps (~200 in Socialtext’s case) a script becomes too large to (easily) diagnose failures in, and needs to be refactored
- Ran scripts end-to-end, continuously
- If a script fails, run it again immediately and only mark it as failed on the second failure. (But don’t lose the fact that it needed a second run – retry sketch after these notes)
- Scripts can have manual prompts in them as human inspection points; the automation still speeds up the screen traversal and environment handling around them (prompt sketch after these notes)
- Four points for efficient, long-term automation:
  - Ability to abstract into fixtures / a DSL (DSL sketch after these notes)
  - Strive for feature coverage
  - Build a web of tests
  - Have fast, routine reporting of failures
- The only reason to automate is to give expert testers time to manually test
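A few hedged sketches of some of these ideas follow, in present-day Python Selenium terms rather than Socialtext’s 2009 rig; every URL, locator, and helper name below is invented for illustration. First, the frame tip: if you can’t break the application out of its frame entirely, WebDriver can at least hop in and out of frames explicitly so the rest of the script isn’t trapped inside one.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://wiki.example.test/page-inside-a-frameset")  # hypothetical URL

# Locators only resolve inside the current frame, so step into it first...
driver.switch_to.frame("content")            # "content" is an invented frame name
driver.find_element(By.ID, "save").click()   # invented element id

# ...then break back out to the top-level document, otherwise every later
# locator in the script is stuck searching inside that one frame.
driver.switch_to.default_content()

driver.quit()
```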
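Next, the unique-data tip. A minimal sketch of the timestamp-suffix idea; whether you use a true Julian date or plain epoch milliseconds matters less than guaranteeing that each run creates data that can’t collide with a previous run’s.

```python
import time
from datetime import datetime, timezone

def unique_name(prefix="wikitest page"):
    """Suffix generated records with epoch milliseconds so every run
    creates fresh, non-colliding data."""
    return f"{prefix} {int(time.time() * 1000)}"

def julian_timestamp(dt=None):
    """The classic Julian timestamp: days since noon on 1 January 4713 BC.
    The Unix epoch falls at Julian day 2440587.5."""
    dt = dt or datetime.now(timezone.utc)
    return dt.timestamp() / 86400.0 + 2440587.5

print(unique_name())                 # e.g. 'wikitest page 1735689600123'
print(round(julian_timestamp(), 5))
```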
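The retry-on-first-failure rule, as a sketch: only a second consecutive failure marks the test failed, but the retry itself is logged so flaky tests stay visible instead of being quietly papered over. The harness names here are illustrative, not Socialtext’s.

```python
import random

def run_with_retry(test_fn, log):
    """Run a test; rerun once on failure. Only a second failure fails the
    run, but the fact that a retry was needed is always recorded."""
    if test_fn():
        return True
    log.append(f"{test_fn.__name__}: failed once, retrying")
    passed = test_fn()
    log.append(f"{test_fn.__name__}: {'passed on retry' if passed else 'failed twice'}")
    return passed

def flaky_check():
    """Stand-in for a real test that fails intermittently."""
    return random.random() > 0.3

log = []
print(run_with_retry(flaky_check, log), log)
```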
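The manual-prompt idea can be as blunt as pausing the run: the script drives all the tedious navigation and environment setup, then stops at the screens a human should actually look at. A minimal sketch:

```python
def human_checkpoint(message):
    """Pause an otherwise-automated run so a person can inspect the screen."""
    input(f"[INSPECT] {message} -- press Enter to continue: ")

# e.g. after the script has driven through login and a dozen setup screens:
human_checkpoint("Check that the dashboard widgets rendered correctly")
```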
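Finally, the fixture/DSL point. Socialtext’s rig (as I understand it) drove Selenium from wiki pages; as a stand-in, here’s what the same abstraction idea looks like with today’s Python bindings: raw Selenium calls hidden behind intent-level verbs so a test reads like the manual steps it replaced. All class names, URLs, and locators are invented.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class WikiDsl:
    """Intent-level verbs over raw Selenium calls, so tests read at the
    level of the feature rather than a pile of clicks and locators."""

    def __init__(self, driver, base_url):
        self.driver = driver
        self.base_url = base_url

    def log_in(self, user, password):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.NAME, "username").send_keys(user)
        self.driver.find_element(By.NAME, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

    def create_page(self, title, body):
        self.driver.get(f"{self.base_url}/new_page")
        self.driver.find_element(By.NAME, "title").send_keys(title)
        self.driver.find_element(By.NAME, "body").send_keys(body)
        self.driver.find_element(By.ID, "save").click()

    def page_exists(self, title):
        self.driver.get(f"{self.base_url}/{title}")
        return title in self.driver.title

# A test then reads like the feature it covers, not like the DOM:
dsl = WikiDsl(webdriver.Firefox(), "https://wiki.example.test")
dsl.log_in("tester", "secret")
dsl.create_page("Jabberwocky", "'Twas brillig, and the slithy toves...")
assert dsl.page_exists("Jabberwocky")
```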