YSlow and Continuous Integration
YSlow is almost indispensable as a quick way of testing for performance-robbing issues in your product. But because it is a Firefox extension, it is relegated to the realm of manual testing rather than being integrated into the fast-feedback loop a continuous integration server provides. I’ve even made a worksheet in the past to assist in collecting this information manually.
Jay Goldman pinged me this afternoon about how you would integrate Firebug with a CI server, which managed to jiggle the cogs of my brain just right, and I plunged down the rabbit hole. This post explains how, if I didn’t have a whole bunch of stuff on my plate right now, I would get YSlow to fail builds.
Data Collection
It appears that YSlow has the ability to report its information back to a central server. You can see that a number of people have wired this up over at the public Show Slow site. That is cool and all, but do we really want our incomplete code and sites broadcast for the world to see? Of course not. What is super useful is that Show Slow is open source.
Show Slow is just a PHP app with a REST API that captures YSlow information. So download it onto an internal server somewhere, change a couple of lines in config.php, and you are ready to start collecting performance information.
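To make the data flow concrete, here is a minimal sketch of the kind of beacon request YSlow fires at a Show Slow server. The path (`/beacon/yslow/`) and the parameter names (`url`, `w`, per-rule keys like `ynumreq`) are assumptions for illustration; check your Show Slow install for the real beacon format.

```python
from urllib.parse import urlencode

def beacon_url(server, page_url, overall_score, grades):
    """Build a GET-style beacon URL reporting a page's YSlow results
    to an internal Show Slow server. All parameter names here are
    illustrative, not the documented beacon contract."""
    params = {"url": page_url, "w": overall_score}
    params.update(grades)  # per-rule scores, e.g. {"ynumreq": 90}
    # Sort for a stable, predictable query string
    return server.rstrip("/") + "/beacon/yslow/?" + urlencode(sorted(params.items()))

print(beacon_url("http://showslow.internal", "http://dev.example.com/",
                 82, {"ynumreq": 90, "ycdn": 50}))
```

The point is simply that "collecting performance information" is nothing more exotic than HTTP requests landing on your internal server, which is why the rest of this scheme is easy to automate.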
Data Production
There are two ways I can think of to generate data:
- Manually – in this scenario, you have all the developers and testers report their YSlow results to your internal Show Slow server. But this means everyone needs to remember to run YSlow while using the app. You can counter that by having it run automatically, but then you are reporting information on far more sites than necessary.
- Automated – the better way is to incorporate data production into your automated functional tests. For instance, have your Selenium tests load a custom Firefox profile which has YSlow installed and configured to automatically record your data. Data is collected, and all of it is relevant (or shouldn’t be in your tests in the first place).
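The automated approach above might be wired up like this. The preference names (`extensions.yslow.*`) and the `.xpi` path are assumptions for illustration; verify them against your YSlow build. Shown with the WebDriver-style API, but Selenium RC achieves the same thing with a `-firefoxProfileTemplate` directory.

```python
def yslow_profile_prefs(beacon_url):
    """Firefox preferences that (hypothetically) tell YSlow to run on
    every page load and beacon its results to an internal Show Slow
    server. Confirm these preference names against your YSlow version."""
    return {
        "extensions.yslow.autorun": True,      # run YSlow on every page load
        "extensions.yslow.optinBeacon": True,  # enable beaconing
        "extensions.yslow.beaconUrl": beacon_url,
    }

# Sketch of using the prefs in a Selenium test (commented out so this
# fragment stays self-contained; requires Firefox and the selenium package):
#
# from selenium import webdriver
# profile = webdriver.FirefoxProfile()
# profile.add_extension("yslow.xpi")  # YSlow packaged as an .xpi
# for name, value in yslow_profile_prefs(
#         "http://showslow.internal/beacon/yslow/").items():
#     profile.set_preference(name, value)
# driver = webdriver.Firefox(firefox_profile=profile)
# driver.get("http://dev.example.com/")  # YSlow beacons as pages load
```

Because the beaconing rides along with your existing functional tests, every test run doubles as a performance-data run with no extra effort.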
Failing Builds
Here is a site’s detail page in Show Slow. Notice how all the categories and values are available.
We can now use something like Selenium RC to load up that page and make decisions about whether or not the results are acceptable. Heck, you could grab the page with wget or something and parse the HTML to make the same decision without the overhead of Selenium.
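The wget-style approach might look like the sketch below: pull the detail page and scrape the overall score out of the HTML. The regex assumes the score appears as text like "Overall score: 82"; adjust it to the markup your Show Slow version actually emits.

```python
import re

SCORE_RE = re.compile(r"Overall score:\s*(\d+)")  # assumed page wording

def overall_score(html):
    """Return the overall YSlow score found in the detail page, or None."""
    match = SCORE_RE.search(html)
    return int(match.group(1)) if match else None

def check_page(html, threshold=80):
    """True if the page's YSlow score meets the build threshold."""
    score = overall_score(html)
    return score is not None and score >= threshold

# In a CI step you would fetch the page and exit non-zero to fail the
# build, something like:
#   from urllib.request import urlopen
#   import sys
#   html = urlopen("http://showslow.internal/details/?url=...").read().decode("utf-8")
#   sys.exit(0 if check_page(html) else 1)
```

The threshold check is the whole trick: once the script exits non-zero, any CI server will mark the build as failed without needing to know anything about YSlow.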
That’s it. A successful build is now contingent upon the performance characteristics YSlow reports. (Or, to do it properly, someone should write a Hudson plugin that takes YSlow beacon information directly.)
Disclaimer: I’ve only run partway down this rabbit hole. If you pursue this down to where the rabbits are hiding, let me know. I’ll be trying this fully in a month or so, I think.