Investment Modeling: A Software Tester's Perspective
Cem Kaner was at TASSQ this evening to talk about applying test techniques to the evaluation of the models we build into software: to help us decide whether we are building the right thing, and to address validation and accreditation of software instead of burying our heads in verification. And he did, though going into the meeting I had focused on the sentences before and after that one, the ones mentioning 'automated exploratory testing', a topic I was keenly interested in hearing Cem's thoughts on as an automation consulting tester. With that in mind, I thought this was (personally) the poorest of the talks I have seen him deliver. Until, that is, Paul Carvalho reframed it as a case study, an experience report. With that frame of reference, it was an excellent talk that showcased in-depth, methodical test probing of various financial models in a way only someone with his training and background could provide. Here are my notes.
- There are green bananas and ripe bananas and rotten bananas and big bananas and little bananas. But by and large, a banana is a banana; this was about commodity-level software testing. The lesson: don't be a banana (though a preferred alternative fruit was not provided)
- There are a number of levels of testing; the five that fit on the slide were:
- checking
- basic exploration
- systemic variation
- business value
- expert investigation
I first interpreted this as a linear progression, but as he described them it became clear it is more that everyone does 1 and 2, and then people tend to specialize in 3, 4, or 5.
- The last three in the list allow a tester to 'earn their keep'
- The decision to automate a regression test is a matter of economics, not principle (a rough break-even sketch follows these notes)
- Automation is a starting point for saying a build is ready to test, not that it is ready to ship
- The most important slide in the deck, I think, was the one on the seven risks to any model:
- model – the model is theoretically incorrect
- characterization – the model is correct, the spec is wrong
- comprehension – we misunderstood the model; our code accurately implements the wrong model
- implementation – our code inaccurately reflects our intent
- execution / environmental – platform too slow, volume issues, etc
- tool – the test tool misleads
- scope – the model is properly developed but it is not appropriate to today’s circumstances
- Implementation risk is the easiest to find and has the least business value
- Computer programs are solutions to people’s problems
- If you understand the subject matter of the business and use that expertise in test design, then you provide true business value
- There is a skill in empirical research that we have as testers that can be applied to any area that uses software, and that software is just a representation of a human model
- Focus your work on answering the questions you are trying to answer rather than on supporting your automation
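
Cem's point that automating a regression test is an economic decision is easier to see with a toy break-even calculation. This is my own sketch, not something from the talk, and every number in it is an invented placeholder:

```python
# Rough break-even sketch for "automation is a matter of economics".
# All figures are made-up placeholders; substitute your own estimates.

def break_even_runs(cost_to_automate_hrs: float,
                    maintenance_hrs_per_run: float,
                    manual_hrs_per_run: float) -> float:
    """Number of regression runs after which automating the test pays off."""
    saving_per_run = manual_hrs_per_run - maintenance_hrs_per_run
    if saving_per_run <= 0:
        return float("inf")  # automation never pays for itself
    return cost_to_automate_hrs / saving_per_run

# Example: 8 hours to build the automated test, 15 minutes of upkeep per run,
# 1 hour to run the same check manually.
print(break_even_runs(8, 0.25, 1.0))  # ~10.7 runs before automation is cheaper
```

If the test will be run fewer times than that before it is retired or rewritten, the economics say leave it manual; no principle about automation changes the answer.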
Even with the correct context likely in place for the presentation, I have two complaints, both minor in the grand scheme of things.
- In his 'opening rant against regression testing' he characterized agile development as having gradually failed. Maybe it is because part of what I do is agile coaching-ish, but I felt that came off wrong and detracted from his point.
- The slides are terrible. I mean, they are loaded with information, but they are massively overloaded. The deck is not the presentation; the presenter (Cem in this case) is the presentation. Had it been anyone less trained and practiced in public speaking, they could have just read from the slides and been coherent. Of course, I get the opposite complaint about my decks, so let's call it a wash.
The title page of the deck mentioned that he will be delivering a keynote at this year's CAST, which seems like a good enough excuse to plug it. I won't be there, as my wife is at a conference in Alabama at the same time and then it is Agile the following week, but it should be good. (As always.)