A frequent discussion that I see is the one that goes along the lines of “you need to program in order to test” or its opposite, “if you see the code it will bias your testing”. Like most discussions of this type, neither side is entirely wrong, nor entirely right. But both are wrong in excluding the other.

This is how I cross-section the testing community.

On the left we have people who, if asked, would call themselves developers first and testers second. The classic example of this sort of thing is the tester groups at Microsoft or Google. These are programmers whose code output happens to be in the sphere of testing: scripts, frameworks, tools, and so on. Nothing wrong with that; the approach works in their contexts.

On the right we have people who are testers first. Their primary tool is their brain, not the compiler, and they tend to hang out with James Bach, Michael Bolton, et al., practicing exploratory testing to avoid the Minefield Problem and the Pesticide Paradox. Another approach that works in their contexts.

The sweet spot, I think, for the future of testing is where the two groups collide.

An agile tester, to me, is someone who not only knows how to think like a tester, but also knows how to write test code when appropriate (Selenium, small scripts to help exploration, etc.). They can also read the code looking for inspiration and have a conversation with a developer about a specific implementation detail.
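One of those “small scripts to help exploration” might be no more than a few lines. As a hedged illustration (the form fields and values here are invented, not from any real system), here is a quick Python sketch that enumerates input combinations for a hypothetical login form, so a session of exploratory testing doesn't accidentally skip a pairing:

```python
# Throwaway exploration aid: enumerate every combination of a few
# interesting values for a hypothetical login form. The fields and
# values below are made up for illustration.
from itertools import product

usernames = ["alice", "", "a" * 256]      # typical, empty, oversized
passwords = ["secret", "", "p@ss word"]   # typical, empty, embedded space
remember_me = [True, False]

cases = list(product(usernames, passwords, remember_me))
for user, pw, remember in cases:
    print(f"user={user!r} pw={pw!r} remember={remember}")

print(f"{len(cases)} combinations to explore")
```

The point isn't the script itself; it's that a few minutes with the compiler (or interpreter) can feed the brain-driven part of the job.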

In my completely unscientific survey of ‘testers’ at Agile and from conversations online, the bulk of people who might call themselves ‘agile testers’ know how to code. Automation, for instance, is a programming activity.

If you believe, as I do, that this sort of testing will become more and more mainstream, then having one or two programming languages in your toolbox will become more and more of a requirement. I suspect that people not in the overlap will continue to be marginalized.