Reviewing some of the older posts on this blog, I’m surprised at some of the statements I’ve made in the past about the function and nature of testing; my approach to testing has evolved more than I’d realised over the years!
The first of those changes is my rejection of the term ‘Quality Assurance’, or ‘QA’, to define the role of the tester. I consider the term ‘QA Engineer’ equivalent to overblown titles like ‘Sanitation Engineer’ and ‘Social Media Evangelist’. On an Agile team, it’s not the responsibility of one person to ‘assure quality’, as though some sort of referee were needed to keep those wayward developers in line. This may sound heretical, but in fact it’s the whole team’s job to assure quality. In my experience, that’s best done by teams using Agile techniques. Not just some pick-and-mix Agile approach, but by adhering to the Agile principles in general and to methodologies like Scrum or XP in particular. I detest the term ‘best practice’, but I can tell you that the teams I’ve worked on that got closest to a pure Agile approach also produced the best quality software.
I’m also in a period of evolution regarding automated testing; most of my early experience of it was so bad that I really doubted whether it could bring business value. Now, being more familiar with the tools of automated testing, and having a better appreciation of where automation provides the most value, I’m a strong advocate — though perhaps I’ve swung too far and don’t practise enough exploratory testing. All my testing now is repeatable, but I wonder if my attention is as broad as it was when I was a purely manual tester. I’m with Michael Bolton and the other folks in the Context-driven school when they say it’s important to draw a distinction between automated checks and sapient manual tests. They are not equivalent.
Another area where my thinking is evolving is in the field of testing metrics. It’s easy to collect and collate data on testing, but it’s hard to do that in a way that actually gives a useful view of the state of the project, or the readiness of the software for production. Right now, I don’t believe that a few bald numbers can do that (especially not bug counts, even when graphed against historical statistics). I’m more of the view that a holistic, conversational story about the state of the project is more likely to provide stakeholders with the information that they need. (That’s some statement, coming from an arch-rationalist like me.)
And this brings us to the crux of it: testing is about providing relevant information to stakeholders so that they can decide whether a release is fit for production. Just as customers are now expected to engage in the development cycle at periodic intervals, so project stakeholders must ensure that they are informed about the state of the project at every stage, not just in the weeks before the go-live date or during the hardening sprint. We testers must engage with these stakeholders all the way along to ensure that we’re providing good information.
So, regarding my older posts, caveat lector: let the reader beware. If you see something in a new or old post that you agree or disagree with, let me know and we can have a conversation. I’m always interested in honing my thinking on testing and in hearing new ideas. Changing an opinion in the light of new evidence is not a weakness; it’s a foundation of science. As Jeff Atwood explains, to avoid confirmation bias, it’s best to have strong opinions, weakly held.
Finally, a big thank you to those people who care enough about testing to have challenged and engaged me over the years. Among my colleagues: Frank Somers, Chaminda Peiris, Sisira Kumara, Augusto Evangelisti and Cormac Redmond. Of the folks I know via the net and through books: James Bach, Cem Kaner, Michael Bolton, Jerry Weinberg, Robert Glass and Michael Nygard. Thanks to all of you, whether we agreed on things or not!