I love it when I’m thinking about something and a paper or blog post from someone else arrives that makes most of my point for me. This time it is James Lyndsay’s *The Irrational Tester*, in which he talks about various biases, something I’ve mentioned a couple of times previously. What I wanted to write about was the Broken Windows theory. Since he licensed the book under Creative Commons, here is the relevant section in its entirety.

*When considering perceptions of quality and bug fixing priorities, it may be interesting to consider an idea which first came to prominence in issues of civic order in New York.

Kelling and Wilson, in their 1982 magazine article Broken Windows, put forward the intuitive but controversial idea that vandalism is encouraged by vandalism, and that swift fixes to small damage in public spaces are an important part of preventing larger damage. Their idea became the basis of the “zero-tolerance” approach to street crime in New York. These ideas are put to the test in experiments described in Keizer’s The Spreading of Disorder. The experimenters looked at people’s behaviour in situations where rules had been visibly flouted.

In one experiment, they set up a barrier across a shortcut to a car park. The barrier had two notices on it; one to forbid people to trespass, the other to forbid people to lock their bicycles to the barrier. 82% of people pushed past the barrier when it had bicycles locked to it. 27% pushed past when the bicycles were locked up a metre away. Three times as many people were prepared to break the trespass rule when the bicycle rule had already been broken. Similar effects were observed for littering and for theft. It made no difference whether rules had been set legally, by social norms, or arbitrarily. People were more likely to deviate from ‘normal’ behaviour in an environment where a rule – any rule – had been broken ostentatiously.

Imagine, then, a parallel with obvious yet minor bugs. If a cosmetic bug is left unfixed, those who see the bug may be less inclined to follow ‘normal’ behaviour. If the software is under construction and the bug is regularly seen by those building the software, could it act to discourage conscientious development? If the system is in use, might an obvious bug nudge users towards carelessness with their data? Both these questions are speculative, but if we accept that pride in our work tilts us to do better work, we might also accept that we could allow ourselves to work less carefully on a sub-standard product.*

This is one of the key things to think about when trying to build a culture of quality. And in my experience, it has to come from the top. (See Illusions, Sheriffs and Grassroots for a larger discussion.)

So what sort of things act as broken windows?

  • Builds allowed to remain red
  • Declining or stagnant coverage numbers (one mechanical guard against this is sketched after this list)
  • Testing not included in project meetings
  • Seat-of-the-pants project management; lean principles say you should wait until the last responsible moment to make a decision, but seat-of-the-pants decisions often come well past the responsible moment.
  • Test phase done by non-testers
  • Lack of post-release planning: who will support customers, who will maintain the code, and so on.
  • “We’ll fix that later” as a mantra. Technical debt is one thing, but debt needs to be recorded and tracked somehow, not just left to the ether.
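
To make the first two items concrete: some broken windows can be boarded up mechanically. Here is a minimal sketch of a coverage “ratchet” a build could run as a failing step. It assumes coverage.py’s `coverage json` report; the baseline file name and the exact behaviour are illustrative, not any particular team’s setup.

```python
#!/usr/bin/env python3
"""Minimal coverage ratchet sketch: fail the build if coverage slips.

Assumes coverage.py has already produced coverage.json via `coverage json`.
The baseline file name is hypothetical; a real setup would commit it so the
ratchet survives between builds.
"""
import json
import sys
from pathlib import Path

REPORT = Path("coverage.json")            # output of `coverage json` (assumed present)
BASELINE = Path("coverage_baseline.txt")  # hypothetical checked-in baseline


def main() -> int:
    current = json.loads(REPORT.read_text())["totals"]["percent_covered"]
    baseline = float(BASELINE.read_text()) if BASELINE.exists() else 0.0

    if current < baseline:
        # A red build, on purpose: the window gets fixed before anything else.
        print(f"Coverage fell from {baseline:.1f}% to {current:.1f}%; failing build.")
        return 1

    # Ratchet the baseline upward so coverage can stagnate at worst, never slide.
    BASELINE.write_text(f"{current:.2f}\n")
    print(f"Coverage {current:.1f}% (baseline {baseline:.1f}%); OK.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The point is less the script than the posture: the build itself refuses to let the window stay broken, rather than relying on someone noticing the glass.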

As usual, there are going to be other things that happen to a project which act as broken windows, but those are just a few that start waving warning flags in front of me.