There was a bit of a flurry of opinions and counter-opinions in the podosphere recently between Joel Spolsky and Uncle Bob (Robert Martin). John Cook summarizes the back-and-forth with links to each escalation and subsequent de-escalation. I don’t listen to most of the podcasts involved, but the one I do listen to is StackOverflow, so I heard the excellent conversation when Uncle Bob was recently the guest on that podcast, where it turns out they are more or less saying the same thing only differently (as usually happens when two smart people argue).

Here is the audio and the transcript.

The conversation started with just general rambling and a bit of discussion around ‘principles’, which in testing lingo could be called ‘best practices’.

  • There is some deep thing wrong with programmers which makes them good programmers
  • Dependency Magnets – a module that many other modules depend on, so that any change to it forces the dependent modules to be retested (sketched after this list)
  • Understand where your guiding principles originated. Do they apply to your environment?
  • Independent deployability is still an issue
  • Controlling the deployment hairball is a Good Thing
  • Not only do you want to independently deploy to your customers, but within your own organization as well
  • At what point does knowing the rules make things worse?
  • If you just blindly follow principles without knowing why each exists, all you end up with is a well-formatted mess
  • There are two types of developers: those who think about what they are doing, and those who don’t
  • Developers in the second group can transition to the first
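
The dependency-magnet and independent-deployability points are easier to see in code. Here is a minimal, hypothetical Java sketch (the TaxRules, TaxRateSource, and BillingModule names are mine, not from the podcast): when many modules depend directly on one concrete class, every change to that class forces its dependents to be retested and redeployed, while an interface owned by the consumer confines the ripple to a small adapter.

```java
// Hypothetical example: TaxRules is a dependency magnet because many
// modules (BillingModule among them) depend on it directly, so any change
// to it forces those modules to be retested and redeployed.
class TaxRules {
    double rateFor(String region) {
        return "EU".equals(region) ? 0.20 : 0.07;
    }
}

// Loosening the coupling: the consumer depends on an interface it owns...
interface TaxRateSource {
    double rateFor(String region);
}

class BillingModule {
    private final TaxRateSource taxes;

    BillingModule(TaxRateSource taxes) {
        this.taxes = taxes;
    }

    double totalWithTax(double net, String region) {
        return net * (1 + taxes.rateFor(region));
    }
}

// ...and only this small adapter has to change when TaxRules changes.
class TaxRulesAdapter implements TaxRateSource {
    private final TaxRules rules = new TaxRules();

    @Override
    public double rateFor(String region) {
        return rules.rateFor(region);
    }
}
```

With the interface in place, BillingModule can be unit tested against a fake TaxRateSource and deployed on its own schedule, which is what controlling the deployment hairball buys you.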

At the 36-minute mark the really good part begins. In it they talk about unit testing and code coverage. Robert Martin is right up there with Kent Beck in the unit-testing hierarchy.

  • 100% code coverage is a good goal, but not a requirement
  • There is a point where there is no point trying to increase your coverage
  • Uncle Bob keeps his coverage at around 90% of lines of code
  • If you are feeling pain while working with your code, it’s a smell that something needs to change: refactoring, more tests, etc.
  • Unit tests
    • describe in excruciating detail and with unerring accuracy how the code they are testing works
    • lack ambiguity
    • written in a language the programmer understands
  • If some of those unit tests don’t exist, what you have is a couple of statements of the specification, not the specification itself
  • Some things are too stupid to test (getters and setters)
  • If there is an if statement, write a test (see the sketch after this list)
  • Changing the screen resolution is a common failure mode of GUI tests
  • There seems to be a good argument for not automating tests of the GUI itself; automate the underlying business rules that the GUI is using instead
  • Only test the GUI through the GUI
  • But of course, you need a major separation of the front and back ends
  • If you disabled the command line on every Unix programmer’s machine, the quality of UIs would increase dramatically overnight
  • Writing tests first puts you into the mindset that forces the separation of things
  • TDD also means that good rules of design get used more frequently and to a greater degree
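
As a concrete illustration of the ‘if there is an if statement, write a test’ rule, here is a minimal JUnit 4 sketch using hypothetical names of my own (Discount, priceFor). It also shows two of the other points above: the trivial getter gets no test, and because the business rule lives in a plain class with no GUI dependency, it can be tested directly rather than through a GUI.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical business rule with a single if statement: orders at or
// above a threshold get 10% off. No GUI anywhere in sight.
class Discount {
    private final int threshold;

    Discount(int threshold) {
        this.threshold = threshold;
    }

    int getThreshold() {               // too stupid to test
        return threshold;
    }

    double priceFor(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        if (quantity >= threshold) {   // one if statement, two tests below
            return total * 0.9;
        }
        return total;
    }
}

public class DiscountTest {
    private final Discount discount = new Discount(10);

    @Test
    public void smallOrdersPayFullPrice() {
        assertEquals(10.0, discount.priceFor(5, 2.0), 0.0001);
    }

    @Test
    public void bulkOrdersGetTenPercentOff() {
        assertEquals(18.0, discount.priceFor(10, 2.0), 0.0001);
    }
}
```

Each test reads like a line of the specification, which is the ‘unit tests describe the code in excruciating detail’ point in miniature.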

And then they do a weekly feature where they discuss an answer found on the StackOverflow site.

  • Two smells around switch statements (sketched below)
    1. The switch has a whole bunch of outgoing case statements, each with its own dependency, which turns the switch into a dependency magnet
    2. The same switch gets replicated all over the place
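
Here is a hypothetical Java sketch of both smells and the usual cure, polymorphism (the PaymentType and gateway names are mine, not from the discussion).

```java
// Stand-ins for classes that, in a real system, would each live in a
// different module, which is exactly why the switch below becomes a
// dependency magnet.
class CardGateway    { void charge(double amount) { System.out.println("card "    + amount); } }
class PaypalGateway  { void send(double amount)   { System.out.println("paypal "  + amount); } }
class InvoicePrinter { void print(double amount)  { System.out.println("invoice " + amount); } }

enum PaymentType { CARD, PAYPAL, INVOICE }

// Smell 1: each case pulls in a different dependency. Smell 2: this same
// switch tends to be copy-pasted everywhere a PaymentType is handled.
class PaymentProcessor {
    void process(PaymentType type, double amount) {
        switch (type) {
            case CARD:    new CardGateway().charge(amount);   break;
            case PAYPAL:  new PaypalGateway().send(amount);   break;
            case INVOICE: new InvoicePrinter().print(amount); break;
        }
    }
}

// The usual cure: one polymorphic interface with an implementation per
// case. Callers depend only on Payment, and the switch survives in exactly
// one place, the factory that creates the right implementation.
interface Payment {
    void process(double amount);
}

class CardPayment implements Payment {
    public void process(double amount) { new CardGateway().charge(amount); }
}

class PaypalPayment implements Payment {
    public void process(double amount) { new PaypalGateway().send(amount); }
}

class InvoicePayment implements Payment {
    public void process(double amount) { new InvoicePrinter().print(amount); }
}

class PaymentFactory {
    static Payment forType(PaymentType type) {
        switch (type) {               // the only switch left
            case PAYPAL:  return new PaypalPayment();
            case INVOICE: return new InvoicePayment();
            default:      return new CardPayment();
        }
    }
}
```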