My session at this year’s Agile Tour Toronto had the catchy title of ‘From Start to Success with Web Automation’. I think it went well and the feedback cards and post-talk hallway conversations seem to back that up. Here is the video, slides and commentary.

[From Start To Success with Web Automation](http://www.slideshare.net/agoucher/from-start-to-success-with-web-automation "From Start To Success with Web Automation")

Elisabeth Hendrickson was in Europe recently and gave a keynote in which she identified a number of Key Practices of Agile Testing. While I wasn’t there, she did list them off on Twitter.

  • Collective Test Ownership
  • Continuous Integration
  • Rehearsed Release
  • Automated Technical (code-level) Tests
  • Test Driven Development
  • Exploratory Testing
  • Automated Business (functional) Tests
  • Acceptance Test Driven Development

Of these Key Practices, three directly involve automation at the web level: ATDD, ET and Automated Business (functional) Tests. Get really specific and they also touch on CI and Collective Test Ownership. Clearly, we need to be able to succeed in our web automation efforts if we want to succeed in Agile Testing.

The problem is that teams rarely come close to succeeding with them. So much so that automating the front-end of applications is considered a flagrant waste of time and effort in some circles. I’ve had success though, and I think the reasons for it are applicable beyond just the projects I have worked on.

If we are aiming for successful automation, we should look at some of the attributes a successful script commonly has. In other words, successful automation is…

  • Paranoid – Never ever, ever, ever, ever, ever, ever, ever trust the client. Just because something says something happened does not mean it has. Account information claimed to be updated? Check the database! As soon as you find yourself trusting things, you will get burned. Get burned too publicly too many times and you will find your time to work on automation being taken away and/or the results just shrugged off as inaccurate. Perhaps ‘never trust’ is too strong. How about ‘trust, but verify’ instead. (A sketch of such a check follows this list.)
  • Efficient and Effective – A script exists to exercise a single piece of functionality. Just because you can test your whole freakin’ app in one script does not mean it is a good idea. True, there will likely be a huge number of ways in which your script could fail to complete successfully, but those specific failure modes should be pointedly exercised by some other script. For example, in order to get to the page you want, you have to log in to the system. Don’t check the whole login process; just take for granted that it will work. Your login scripts will verify the functionality of the login system. Don’t worry about it anywhere else.
  • A Student of History and Linguistics – Automation isn’t new. A lot of research and experience has been generated in the area over the last 10 – 15 years. Sometimes you have to invent a better wheel, but you should know about the design of earlier wheels as well. Do your research and build up a library. Something like xUnit Test Patterns should be on the bookshelf of anyone doing automation, for instance. Also know which language to use when. Don’t fall into the trap of thinking you need to automate in Java just because that is what your application is written in. I, for example, usually automate in either Ruby or Jython, regardless of what the underlying application is written in.
  • Intelligent and Wise – In Dungeons & Dragons, Intelligence is your intellect or smarts and Wisdom is how you apply it to the task at hand. Your scripts should be strutting about with 19s or higher in both categories. They should be able to interact with their environment to build their own data and even decision trees. They should also include their own oracles to determine whether the right thing happened, at the right time, in the right manner.
  • Modest – I kept switching between ‘modest’ and ‘humble’ for this slide. The concept here is that when your script breaks, it doesn’t try to hide the fact. It says, very obviously, ‘I couldn’t do what you asked here because of this.’ When this happens it should also clean up the mess it left behind so your environment is not in a completely untrustworthy state.
  • Automates Checks and Facilitates Testing – Michael Bolton has a meme running right now about the difference between Testing and Checking. The key difference between the two is whether the oracle is automated or not. Automation can include the oracle as a check, or it can zoom through your application to a specific page to facilitate testing by a human. Don’t confuse the purpose of your script when creating it. Though of course your facilitation script could include a number of checks so the tester knows the environment is, at minimum, ready for their sapience.
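
As a minimal sketch of that ‘trust, but verify’ check (the connection details, table and column names are all assumptions, and the mysql gem stands in for whatever your stack actually uses):

<pre lang="ruby">require 'rubygems'
require 'mysql'

# The UI said the account was updated; ask the database whether that is true.
def assert_email_really_changed(account_id, expected_email)
  dbh = Mysql.real_connect("localhost", "app_user", "secret", "app_db")
  row = dbh.query("SELECT email FROM accounts WHERE id = #{account_id}").fetch_row
  assert_equal expected_email, row[0], "The UI claimed an update; the database disagrees"
ensure
  dbh.close if dbh
end
</pre>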

Now that you know what your end scripts will look like, you are ready to start writing stuff. Right? Well, not quite yet. Before we do that, we need to know a bit about who the creators, consumers and maintainers of them will be. I broadly break these into two divisions: those who can code and those who can’t. It is my experience that you have a far greater chance of success if you target the geeks of the organization first.

Geeks know how to code. They are comfortable inside a text editor, and seeing xUnit-style automation with all its quirks is not going to faze them in the least. They also know how to Build their own Lightsaber, and by applying the DRY principle they will abstract common sequences of commands into helper methods/fixtures. These helper methods will form the basis of your organization’s DSL.

Non-geeks are not second-class citizens in successful automation by any means; they just bring a different skill set to the table. Often this is in the form of Subject Matter Expertise, and what they need is a way to use it efficiently and effectively within the framework. This is best achieved through a DSL that abstracts the technical details away from the business details. And because you had the geeks work on automating stuff first, you already have the beginnings of that. The non-technical tester doesn’t care that there are 250 steps behind the scenes for buying 50 shares of AAPL; they just want to be able to call ‘buy_stock(AAPL, 50)’ and have it magically work.
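
As a sketch of the idea (the buy_stock helper, its locators and the confirmation text are all invented for illustration; the module is meant to be mixed into a Test::Unit::TestCase so assert and @browser are available):

<pre lang="ruby"># A geek-built helper layer hiding the Selenium plumbing behind business language.
module TradingDSL
  def buy_stock(symbol, quantity)
    @browser.open "/trade"
    @browser.type "symbol", symbol
    @browser.type "quantity", quantity.to_s
    @browser.click "place_order", :wait_for => :page
    assert @browser.text?("Order placed"), "Buy of #{quantity} #{symbol} was not confirmed"
  end
end
</pre>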

Yes, now you can start actually creating an automated script.

Record

The first step in a successful automated script is to record the basic skeleton by some means. With the Selenium suite, you use the Selenium-IDE extension for Firefox. In the actual talk I recorded a search operation on a local WordPress installation in the IDE. Don’t forget to add your verify and assert statements. If there is no way for the script to fail, it is not testing (or even checking) in my opinion.

I almost never save a script within Se-IDE. Instead I’ll export it out to a real language and run it from within Se-RC. Yes, even if it is just a ‘simple’ script. ‘Simple’ scripts have a tendency to increase in complexity over time.

Add Power

The real power of Automation is unlocked when you have a real language, not just vendor-script. I’ve argued before that Se-IDE is powerful enough to tempt you to stay in it, but not powerful enough to really get stuff done.

Here is the script as exported from Se-IDE into Ruby.

<pre lang="ruby">require "selenium"
require "test/unit"

class direct_export 

But like most things in the open source world (especially the Ruby part of it), the code the IDE produces isn't up to the latest coolness. So with a bit of modification to use the [selenium-client](http://selenium-client.rubyforge.org/) gem, we have this code.

<pre lang="ruby">require 'rubygems'
require "test/unit"
gem "selenium-client", ">=1.2.15"
require "selenium/client"

class CategoriesTest 
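
Filled out as a minimal sketch (the s and searchsubmit locators and the result text are assumptions about the WordPress theme in use, as are the server details), the selenium-client version looks something like this:

<pre lang="ruby">require 'rubygems'
require "test/unit"
gem "selenium-client", ">=1.2.15"
require "selenium/client"

class CategoriesTest < Test::Unit::TestCase
  def setup
    # assumes a local Se-RC server and a local WordPress install
    @browser = Selenium::Client::Driver.new(
      :host => "localhost", :port => 4444, :browser => "*firefox",
      :url => "http://localhost/", :timeout_in_second => 60)
    @browser.start_new_browser_session
  end

  def teardown
    @browser.close_current_browser_session
  end

  def test_search
    @browser.open "/"
    @browser.type "s", "selenium"                      # the theme's search box
    @browser.click "searchsubmit", :wait_for => :page  # the theme's submit button
    assert @browser.text?("Search Results"), "the results page never appeared"
  end
end
</pre>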

Power!

Or at least the potential for power. The first step in adding power is often to data-drive your script: abstracting it so it reads its data from an external source, which is something Se-IDE can't (easily) do. In an environment of geeks and non-geeks this is often accomplished through the use of CSV files. The chief advantage is that you can add scenarios without changing the commands that are executed; only the inputs change. Here is the same script modified to be data-driven through CSV.

<pre lang="ruby">require 'rubygems'
require "test/unit"
gem "selenium-client", ">=1.2.15"
require "selenium/client"

class CategoriesTest 
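
Sketched out under the same assumptions, plus an invented search_terms.csv whose rows hold a search term and a string the results page must contain:

<pre lang="ruby">require 'rubygems'
require 'csv'
require "test/unit"
gem "selenium-client", ">=1.2.15"
require "selenium/client"

class CategoriesTest < Test::Unit::TestCase
  def setup
    @browser = Selenium::Client::Driver.new(
      :host => "localhost", :port => 4444, :browser => "*firefox",
      :url => "http://localhost/", :timeout_in_second => 60)
    @browser.start_new_browser_session
  end

  def teardown
    @browser.close_current_browser_session
  end

  def test_searches_from_csv
    # adding a scenario means adding a row to the file, not a line of code
    CSV.foreach("search_terms.csv") do |term, expected|
      @browser.open "/"
      @browser.type "s", term
      @browser.click "searchsubmit", :wait_for => :page
      assert @browser.text?(expected), "no results for '#{term}'"
    end
  end
end
</pre>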

More Power!

We can go even further though. We want scripts that are Intelligent enough to data-drive themselves. Because we have the full power of a real language at our disposal, we can hook into the database directly and let the script do its thing. Of course, in order to do this you need to understand at a very deep level what is going on in your application. That knowledge acquisition is a Good Thing though: the more you understand the system, the more complete your mental model becomes and the better testing and more thorough checking you can accomplish.

Again, same script, but driven from the database.

<pre lang="ruby">require 'rubygems'
require "test/unit"
gem "selenium-client", ">=1.2.15"
require "selenium/client"

class CategoriesTest 
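
In sketch form once more, this time letting the script build its own data out of the standard WordPress schema (the connection credentials are made up):

<pre lang="ruby">require 'rubygems'
require 'mysql'
require "test/unit"
gem "selenium-client", ">=1.2.15"
require "selenium/client"

class CategoriesTest < Test::Unit::TestCase
  def setup
    @browser = Selenium::Client::Driver.new(
      :host => "localhost", :port => 4444, :browser => "*firefox",
      :url => "http://localhost/", :timeout_in_second => 60)
    @browser.start_new_browser_session
  end

  def teardown
    @browser.close_current_browser_session
  end

  def test_search_finds_every_published_post
    # every published post ought to be findable through search
    dbh = Mysql.real_connect("localhost", "wp_user", "secret", "wordpress")
    dbh.query("SELECT post_title FROM wp_posts WHERE post_status = 'publish'").each do |row|
      title = row[0]
      @browser.open "/"
      @browser.type "s", title
      @browser.click "searchsubmit", :wait_for => :page
      assert @browser.text?(title), "search could not find '#{title}'"
    end
    dbh.close
  end
end
</pre>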

OMG! [Cosmic Spider-man](http://en.wikipedia.org/wiki/Fictional_history_of_Spider-Man#Cosmic_Spider-Man)-esque Power!

This is the level you want your scripts functioning at for true success. Once you are here, you can have your scripts run continuously if you have a smart runner that notices when new scripts are added into the mix.

One problem this doesn't solve is the Permutation Madness problem, which comes from the boatload of browser and OS configurations we need to care about in today's environment. Selenium Grid is designed to solve it: using a centralized machine, you can farm out your script execution to various slave machines, each of which can have a different configuration.
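
From the script's side the change is small; a hedged sketch (the hub host name, environment label and application URL are all invented) is mostly a matter of where the driver points:

<pre lang="ruby">require 'rubygems'
gem "selenium-client", ">=1.2.15"
require "selenium/client"

# Same script, different :host; the Grid hub hands the session to whichever
# slave machine offers the requested environment.
browser = Selenium::Client::Driver.new(
  :host => "grid-hub.example.com",          # the hub instead of a local RC
  :port => 4444,
  :browser => "Firefox on Windows",         # an environment name registered with the hub
  :url => "http://wordpress.example.com/",  # must be reachable from the slave machines
  :timeout_in_second => 120)
browser.start_new_browser_session
</pre>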

This solution doesn't scale so well though. Suddenly you need a whole farm of machines to run your tests, and keeping them all patched will suck up scarce resources. Virtualization helps, but you still need to manage the VMs. Companies like [Sauce Labs](http://saucelabs.com) and [Browser Mob](http://browsermob.com) exist to remove that maintenance burden from you (and to add other value too).

(Had I been thinking, I would have run my script in Sauce Labs' [OnDemand](http://saucelabs.com/products/sauce-ondemand) cloud, but I wasn't, and I didn't know how sketchy the wireless would be at the venue.)

Thus far we have covered what a successful script looks like and a recipe for achieving it. But how do you know when you are at risk of running off the rails? In programming terms, you check for smells. Here are the ones I mentioned in the talk. There are assuredly more.

  • I need to re-record - This hints that you are staying in the Record phase too long and have not added things like error handling and the robustification that is possible in a real language. If the application has changed significantly (and intentionally) enough that your script no longer operates at all, throw the script out and start with a clean slate. Don’t try to fix the existing one.
  • Number of Steps - Various companies are learning and publishing the optimum script length for ease of maintainability and readability, and it seems to average around the 200 step/action mark. If you have 1000 steps in a script then you really need to examine what it is doing. Odds are it is really a couple of scripts that have organically grown into a single one.
  • Automate Everything - The Cult of Automation is alive and well in the Agile community. Some things should be automated, some things should not be. Learn the difference.
  • Staying too long at a phase of maturity - Similar to the first one; don’t get stuck in Se-IDE, Se-RC or Se-Grid. Just because it was the right level before does not mean it is the right one now: new problems surface, new variations of old problems occur. If your Se-RC scripts take 9 hours to run because they run synchronously, then you are likely (well) overdue for the move to Se-Grid or one of the commercial offerings.
  • Trust - Again, never trust in automation. Verify that everything isn’t a lie, even a well-intentioned one.

The last thing I talked about was Patterns for Success. I put them last because I didn’t know if I was going to run out of time and figured it was more important to get the Smells (Danger!) in than the Patterns. I didn’t run out of time so the decision ended up being irrelevant.

  • Build a web - Approach your application from multiple angles in order to build a web across it. Just as a spider catches bugs in its web, your automation can catch them in its. (This was one of the key points of Chris McMahon’s Agile 2009 talk.)
  • Tags - I was taught to organize my scripts by functionality (in Mercury training circa 2000), but the rise of User Stories has people also grouping by User. Both systems can work to great success, but there are inherent issues of overlap in them. I’ve been messing with the idea of ‘tagging’ scripts in addition to these structures, which removes the overlap problem by making the overlap an explicit, important part of the scheme: scripts are organized on disk by functionality, but tagged with the User(s) they affect. Runners need some modification though, as do Test Management Systems (if you are stuck using them).
  • Metaframeworks - I publicly demonstrated for the first time the Metaframework I am working on in my ‘spare time’ (heh, no wonder it is taking forever). A Metaframework will run and aggregate the results of scripts written in a number of different languages, which lets people write them in whatever they are most comfortable with. The point is to exercise the application, not your power in dictating the language tests must be in.
  • Sync ‘n Run - See this post for a larger discussion, but essentially it is ‘check everything into version control so deployment is just a sync operation’.
  • Design for Parallelism - It is better to design your scripts from the get-go so that they can be run (massively) in parallel than to have to hack that in later. Things like file and database row contention become issues. The same techniques application developers use to deal with these problems apply just as well to your automation.
  • Data Doesn’t Have to be Real - Input data has rules, and as long as it adheres to those rules then you are golden. It doesn’t matter that you cannot pronounce the First Name the script generated because, well, you don’t have to. It just needs to be accepted, processed and returned correctly by the system. (A sketch of such a generator follows this list.)
  • Test Discovery - Mentioned before, but having a runner which can automatically detect a new script added into the available scripts pool is powerful. This means that you never have to turn off your scripts.
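
A sketch of that rule-adhering-but-unpronounceable idea (the length limits stand in for whatever your field's actual rules are):

<pre lang="ruby"># Generates a capitalized, alphabetic name between 2 and max_length characters;
# nobody needs to be able to say it out loud.
def random_first_name(max_length = 12)
  length = 2 + rand(max_length - 1)
  (1..length).map { ('a'..'z').to_a[rand(26)] }.join.capitalize
end
</pre>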

And after a couple clarifying questions, that was it.