I never got around to making an accompanying slide deck for the presentation that never was, so here are the notes I would have created it from. This is what I originally submitted to the Agile selection committee.

*Since the rise of the *Unit frameworks, the number of tools which can be incorporated into a product’s build has increased at a rapid rate. All these tools, be they for style analysis, source code idiom detection or bytecode security patterns, serve one purpose: to answer a question about the build.

But are you asking it to answer the right question? And does that answer raise a different question? This presentation will look at the common types of build analysis tools and discuss which questions they should be used to answer, which is often not the same as the questions they are actually used to answer.*

Here are the notes for the individual sections.

  • Cult of the Green Bar
    • Too often, when people see that the IDE or CI server is ‘running green’, they interpret it as ‘good to test’. But does it really mean that? Really? That was a trick question…
    • The green-means-go idea is now so ingrained in the market that it was specifically mentioned in a video the folks at PushToTest did at Google.
    • Belief in the green bar can fail you:
      • when the tests do not exercise the conditions that are actually failing in production
      • when the tests that really need to run cannot be written in a simple manner
      • when tests are just plain missing
    • So what does the green bar tell you? Only that the existing tests all ran in the expected manner (see the sketch below)
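
To make that concrete, here is a minimal sketch (the class, method and numbers are mine, not from the talk) of a suite that runs green while the case that actually fails in production is never exercised:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class BillSplitterTest {

    // Hypothetical production code: split a bill evenly between people.
    static int perPerson(int totalCents, int people) {
        return totalCents / people; // throws ArithmeticException when people == 0
    }

    @Test
    public void splitsEvenly() {
        // The only test we have covers the happy path, so the bar is green...
        assertEquals(2500, perPerson(10000, 4));
    }

    // ...but nothing here exercises people == 0, which is exactly the
    // condition that is failing in production.
}
```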
  • Coverage
    • Coverage measurement is often used as justification for the Cult of the Green Bar
    • But, but, but, the bar is green AND we have 97.635% coverage. It MUST be ready to ship
    • The term coverage is vague at best. Wikipedia alone lists five different types of coverage:
      • function
      • statement
      • branch
      • path
      • entry/exit
    • Coverage provided by tests that lack context is just a number
    • The only thing coverage tells you is where you are at this point in time. This in turn lets you ask whether or not any change (or lack of change) is ok (see the sketch below)
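
As an illustration of how slippery the term is, here is a small example (the class and numbers are mine): a single happy-path test gives this method 100% statement coverage but only 50% branch coverage, and the untested branch is exactly where a bug would hide.

```java
public class DiscountService {

    // One if with no else branch. A single test that passes goldCustomer = true
    // executes every statement (100% statement coverage) but only one of the two
    // branches (50% branch coverage); the no-discount path is never run.
    public double priceFor(double base, boolean goldCustomer) {
        double price = base;
        if (goldCustomer) {
            price = base * 0.9;
        }
        return price;
    }
}
```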
  • Static Analysis
    • Static analysis tools try to add context to your tests
    • but…
      • there might be a bug in the tools
      • the tool(s) might not know your programming paradigms
      • a high false-positive rate or a lack of results tuning might cause people to ignore errors they should actually be paying attention to (the wheat-to-chaff ratio)
      • you might not care about the errors that the tool returns
      • the tools of course only know what they have been told to know about
      • some tools are pickier than others; are they more or less picky than you? (see the sketch below)
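
As a hypothetical example of the wheat-to-chaff problem (the helper below is mine, not taken from any particular tool's documentation): many analyzers will flag a swallowed exception, yet in a best-effort cleanup it may be exactly what you intended.

```java
import java.io.Closeable;
import java.io.IOException;

public class ShutdownHelper {

    // Analyzers commonly flag ignored exceptions, but during a best-effort
    // shutdown there is nothing useful to do with the failure. Whether this
    // finding is wheat or chaff is a judgement the tool cannot make for you.
    public static void closeQuietly(Closeable resource) {
        try {
            if (resource != null) {
                resource.close();
            }
        } catch (IOException ignored) {
            // intentionally ignored
        }
    }
}
```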
  • Complexity
    • cyclomatic complexity is defined as the number of linearly independent paths through the code
    • common practice is to target functions/methods with high complexity first
    • there is (of course) some debate on whether or not this practice makes any sense or is just another useless number to confuse us
    • start by looking at code that reports a complexity of > 35 (see the counting sketch below)
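
For anyone who has not counted it by hand, here is a small sketch (the method is mine) of the shortcut many tools use: decision points plus one, with short-circuit operators counted as extra decision points.

```java
public class ShippingRules {

    // Decision points: the outer if (1), the && inside it (2), the for loop (3),
    // and the inner if (4). Cyclomatic complexity = 4 + 1 = 5, nowhere near
    // the > 35 threshold mentioned above.
    public static double shippingCost(double weight, boolean express, double[] surcharges) {
        double cost = 5.0;
        if (express && weight < 10.0) {
            cost += 7.5;
        }
        for (double s : surcharges) {
            if (s > 0) {
                cost += s;
            }
        }
        return cost;
    }
}
```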
  • Dependency – Which is better? 1 bad dependency or 3 good ones?