I had a bit of an epiphany yesterday, if perhaps a bit late. I’ve been concentrating on the standard places and techniques when approaching the task of improving the overall quality of our product(s): things like mining data logs, checking unit test coverage, fleshing out quality-driven processes, etc.

But what occurred to me was that our calculators are really just red herrings in terms of priorities. The value we create for our customers and consumers is not the calculator interface (any dev team could recreate that; some have, in fact). The value is in the questions and their underlying research and equations. Not the software.

Think about it for a second.

  • Google is nothing without the PageRank and AdSense algorithms
  • Idee’s value is far more than its huge database of photos; it lies in the algorithms they use to do their visual search magic
  • Points.com’s real linchpin in the loyalty-program exchange market is not their website or technology stack; it is their deep integration with the programs’ back-end systems that gives them the competitive advantage

None of these are what people think of as the ‘core’ product, but each is absolutely necessary to it.

I often use the analogy of an onion (or tree) when describing how software is built: there is usually a core set of classes/code onto which features and functionality are added layer by layer. (Unlike onions or trees, the core of software tends to grow over time too, but that breaks the analogy, so let’s ignore it.) Testing things at the center of the onion generally produces the greatest bang for the buck, because they are used by the most other things. The trick, then, is identifying the center of the onion.
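
To make that concrete, here’s a minimal sketch of what testing the center of the onion might look like in practice; the payment formula and names are hypothetical stand-ins, not our real code. The point is that the tests hit the core equation directly, with no calculator UI in between.

```python
# Hypothetical example: a core formula and tests aimed straight at it.
# The function names and the equation are stand-ins, not our actual calculators.

def monthly_payment(principal, annual_rate, years):
    """Fixed-rate loan payment via the standard amortization formula."""
    n = years * 12        # total number of payments
    r = annual_rate / 12  # periodic interest rate
    if r == 0:
        return principal / n  # no interest: just divide evenly
    return principal * r / (1 - (1 + r) ** -n)


def test_known_value():
    # $100,000 at 6% over 30 years works out to roughly $599.55/month.
    assert abs(monthly_payment(100_000, 0.06, 30) - 599.55) < 0.01


def test_zero_interest_divides_evenly():
    assert monthly_payment(12_000, 0.0, 1) == 1_000
```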

So the question I have been rolling around in my head is: have I been testing the right thing? Well… no, I don’t think so (or at least not with the appropriate weighting). And its follow-up: how do I test it? No idea, but I’m about to ask a tonne of questions when I get into the office. And questioning stuff is what testers do best.