Our job as testers is to communicate quality-related information to the stakeholders. I think the audience of this blog has little trouble accepting this. But anytime you have a communication point, there is opportunity for things to get lost in translation or worse. This post takes a couple of slides from one of my teaching decks, explains the sort of communication different groups need to be effective, and suggests a format for that communication to occur in.

Before you can communicate anything, you need to understand who your audience is. Is it internal to your test group? Or up to the corporate product team (which I’ll call external)? These two audiences have very different information needs because they have different types of decisions to make.

Internal Communication

Internal audiences want to know information that will help them test or test more effectively.

  • Things like code turmoil stats will help them zoom in on the riskiest (changiest) parts of the code. The tools to generate these are often home-made and are always specific to a certain version control system. Here is a blurb from an SVN turmoil script which describes the rationale for this information:

    Code turmoil is a metric indicating the number of lines of code changed, added and deleted from one revision of a code body to another. In our case we are measuring the amount of turmoil between two specified revisions of a Subversion repository. Code turmoil is often used as an indication of how stable a project is. High turmoil means a higher possibility of new bugs being introduced and indicates lots of new changes that must be tested before release. Given this logic, it is expected that code turmoil will get progressively lower as a release approaches.

  • The complete list of bugs fixed but not yet verified is another way of guessing where the risk is in the application. It also lets you help others on your team by taking things out of their queue when you are already in an area of the application that needs something verified.
  • A list of all bugs, sorted by component, will help illustrate where your bugs are clustering. Be careful though: a small cluster in an area could simply mean that area hasn't been tested much.
  • The list of bugs reported by customers is a useful thing to have available, as it can tell you where your testing wasn't quite sufficient as well as give you an idea of how customers are actually using your product.
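As a sketch of how a turmoil number might be computed (hypothetical; the real scripts are version-control-specific), the core of it is just counting added and removed lines in unified diff output, such as what `svn diff -r OLD:NEW` produces:

```python
def turmoil(diff_text):
    """Count (added, removed) lines in a unified diff.

    Feed it the output of e.g. `svn diff -r 100:200`; the sum of the
    two counts is the turmoil between those revisions.
    """
    added = removed = 0
    for line in diff_text.splitlines():
        # Skip the file-header lines ('+++ path', '--- path').
        if line.startswith("+++") or line.startswith("---"):
            continue
        if line.startswith("+"):
            added += 1
        elif line.startswith("-"):
            removed += 1
    return added, removed

sample = """\
--- trunk/foo.py (revision 100)
+++ trunk/foo.py (revision 200)
@@ -1,2 +1,3 @@
-old_line = 1
+new_line = 1
+another_line = 2
"""
print(turmoil(sample))  # (2, 1): two lines added, one removed
```

Run over successive revision ranges, the totals give you the trend line the blurb above describes: you want to see it falling as the release approaches.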

External Communication

External audiences generally need information that helps them understand where the project is in relation to where they think/want it to be.

  • Bugs >= Release Criteria is a number that a lot of organizations watch very carefully. Let’s say you have a four-level bug classification scheme and company policy is to make a ship/no-ship decision only after there are no level 1 or 2 bugs. You can almost be assured that the first question the status meeting chair asks the testing representative is what this number is.
  • Bugs Found/Fixed/Verified is another popular one. The theory behind this is that as the release gate gets closer, the rate at which bugs are found will decrease, the rate of fixes will follow, and the numbers found, fixed and verified will converge. This is somewhat like a management-consumable version of the turmoil report described above.
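Both numbers fall out of a simple query over the bug list. A minimal sketch, assuming hypothetical record fields (`severity`, `status` — your tracker will name these differently) and the "no open level 1 or 2 bugs" policy from above:

```python
# Hypothetical bug records; field names and values are illustrative,
# not from any particular bug tracker.
bugs = [
    {"id": 101, "severity": 1, "status": "open"},
    {"id": 102, "severity": 2, "status": "fixed"},
    {"id": 103, "severity": 3, "status": "verified"},
    {"id": 104, "severity": 4, "status": "open"},
]

# Bugs >= Release Criteria: unresolved bugs at or above the bar
# (severity 1 and 2 are the most severe in this scheme).
blocking = [b for b in bugs
            if b["severity"] <= 2 and b["status"] == "open"]

# Found/Fixed/Verified: the running totals management tracks
# from one status meeting to the next.
found = len(bugs)
fixed = sum(1 for b in bugs if b["status"] in ("fixed", "verified"))
verified = sum(1 for b in bugs if b["status"] == "verified")

print(len(blocking), found, fixed, verified)  # 1 4 2 1
```

The interesting part isn’t any single snapshot but the trend: found flattening while fixed and verified catch up is what a closing release gate looks like.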

Metrics

I’m not going to wade into the debate about metrics in testing and their use. Well, maybe I’ll stick my toe in and say something inflammatory like Numbers with context are information; numbers without context are metrics. Want to start a fight between the various testing factions? Bring up metrics, their collection, their use and their value.

What I will say, though, is that you need to understand that since metrics are just numbers, they can be gamed. If not directly, then through the manipulation of inputs. Because of this, using them for decisions that really matter to people (like whether they get a bonus or not) should be done with great care. The result will likely not make either side of the transaction happy.

Dashboards

Dashboards are a great, no, awesome way to communicate what is going on in a project. Why? Because they are visual. A lot of what I think of in terms of these can be traced back to Ron Jeffries’ Big Visible Charts, so go read that now.

Back? Great. Here are my further ideas:

  • Take over a whiteboard as the ‘test’ board. I had a large one at one company which had the status of each project, who was doing what, and what was next in the queue. It got to the point where management would look at the board first before asking me questions. If you are on a distributed team, there are ‘virtual whiteboard’ applications available, or even a wiki will do.
  • At one point I saw a set of magnets you could get which had green happy faces, yellow neutral faces and red sad faces. You could see from a distance how things were going based on the dominant colour. I can’t find that link at the moment, but coloured markers work just as well (they’re just messier).
  • The depth of the dashboard depends, again, on the audience. Internal audiences might want it down to the OS/browser level of specificity, whereas management typically wants things at the feature or product level.
  • Open-concept offices, while they continue to be all the rage, suck at this sort of thing as they often lack the necessary wall space.

So those are my thoughts on the types of communication two very different audiences (internal and external) need in order to be effective, and how to deliver that information.