Book review: “The Mighty Micro”

As well as my other hats, I’m a bit of an amateur historian of computing. Recently, I’ve been reading this decades-old book, The Mighty Micro, by Christopher Evans, which bravely forecasts the effect of the micro-computer on society, up to the year 2000. It’s an excellent and accessible read and will get you wondering why the future didn’t pan out quite the way Evans predicted. I’ve written a review of “The Mighty Micro” on Goodreads. Let me know what you think!

May 25, 2012

A cry for creativity

I keep seeing entities getting created in our automated test harness with mind-numbingly tedious names like entity1, entity2, entity3.

I’d love it if people would exercise a bit of inventiveness and create entities with more interesting and memorable names.

There are a few reasons for this. First, entity1, entity2 and entity3 imply some kind of order, but most of the time entity1 doesn’t come before entity2 in any meaningful way; they’re just instances of a class of thing. Worse, depending on how these entities are indexed, these similarly-named, sequentially-numbered entities can produce an unbalanced tree and trigger terrible performance during testing.
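
That unbalanced-tree claim is easy to demonstrate with a toy example. Here’s a rough sketch (a naive binary search tree, not any real database index — real indexes are usually self-balancing, but sorted input is still their worst case for some structures):

```python
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # plain, unbalanced BST insert
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def depth(root):
    return 0 if root is None else 1 + max(depth(root.left), depth(root.right))

# zero-padded so lexicographic order matches numeric order, as in a naive harness
sequential = [f"entity{i:03d}" for i in range(100)]
varied = sequential[:]
random.seed(1)
random.shuffle(varied)  # stands in for unpredictable, memorable names

tree_seq = None
for name in sequential:
    tree_seq = insert(tree_seq, name)

tree_var = None
for name in varied:
    tree_var = insert(tree_var, name)

print(depth(tree_seq))  # 100: every insert chains to the right, a linked list
print(depth(tree_var))  # far shallower: roughly logarithmic in the node count
```

One hundred sequential names give a tree one hundred levels deep; the same names in unpredictable order give a tree around fifteen levels deep. Lookups in the first cost a linear scan.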

Secondly, those bland names don’t tell me anything about what those entities are for, or what discriminates them. The name of something should tell me something interesting about it.

For example, chances are somebody called Samhbh Górsky has an Irish mammy and a Polish daddy. But ‘keystore_server_4.jks’? It’s a keystore, but for what purpose? Not a clue. (Do you also see the opportunity for internationalisation tests?)

Go on, next time you have to pick a list of names, go crazy and choose fun names that are memorable, mean something and might even trigger a bug — and you’ll avoid red herrings during performance testing.

May 11, 2012

Recommended reading for testers

An old workmate of mine, Janesh, got in touch to ask for my recommendations of testing books for his team. I’ve been meaning to write a few reviews of the books I’ve read or dipped into, so I’m glad of the push to encourage me to write a few lines on each.

Let’s start with the practical, hands-on testing books, then detour into books I’ve found inspirational and which provide background history on computing and software engineering. In a later post I’ll review books on project management and finish up with books designed to help you sharpen your thinking skills - because testing is all about thinking.

The practical books

Lessons Learned in Software Testing

Cem Kaner, James Bach, Bret Pettichord. 286 pages, 2002

If you only ever read one book on software testing, it should be this one. It’s divided into very short articles (called lessons) of as little as one paragraph and no longer than a couple of pages, which distil the hard-won experiences of the authors into pithy, digestible nuggets of wisdom. I guarantee you’ll be a better tester after reading this book. There are almost three hundred lessons; some sample titles include “Report the problem clearly, but don’t try to solve it”, “‘Enough testing’ means ‘enough information for my clients to make good decisions’” and “Capture replay fails”. You may not agree with every lesson in the book, but that’s the authors’ intention - they want to stir you up and get you thinking critically about your craft and about how you can hone your skills as a tester.

Testing Computer Software

Cem Kaner, Jack Falk, Hung Quoc Nguyen, 479 pages, 1993

Four hundred and seventy-nine pages of turgid prose. Despite being co-authored by one of the finest thinkers on testing, Cem Kaner, this book is exactly the sort of door-stop that gives testing its reputation as a boring job for boring people. Nonetheless, if you can get through a chapter without falling asleep, you’ll build up a solid foundation on which to extend your testing expertise. Maybe one to consider if you’ve drifted into testing (like a lot of us!) and have gaps in your knowledge that you know you need to fill. Don’t make this the first book on testing you read - you’ll never come back for more.

How to Break Web Software

Mike Andrews, James A Whittaker, 219 pages, 2006

I can’t remember how I found this book - I think it was recommended to me, but I can’t remember by whom. Whoever it was, thanks, because this is probably one of the most hands-on, practical books on testing I’ve read. It’s a bit dated now (it was published in 2006) - there’s a CD in the back flap - but it’ll introduce you to the wonderful world of white-hat hacking, which includes the joys of SQL injection attacks, cross-site scripting and a whole host of other techniques which will make you realise just how little you actually know about security testing, and give you just enough tools and techniques to make you dangerous. Happy hacking!

Agile Testing

Lisa Crispin, Janet Gregory, 533 pages, 2009

This book is good, but it’s not as good as I was hoping. If you, your team or management are new to Agile, it’s a very helpful guide for testers on an Agile team. It depends what you’re looking for, though - if you want specifics of how to test, you won’t find them here - it’s more of an Agile project management book for testers.

Release It!

Michael T Nygard, 350 pages, 2007

This is a book for the tester who’s interested in the bigger technical picture.

If you think you’ve got a handle on your product’s scalability and fitness for production, read this book and shiver. It’s full of tales from the trenches of complex, multi-tier clustered systems that failed spectacularly in their first hours in production. This book outlines the many ways in which complex software can fail - and how to design, engineer and deploy your product differently to make it less prone to these kinds of failures.

The inspirational books

Hackers

Steven Levy, 455 pages, 1984

Steven Levy’s classic 1984 portrait of the hackers, misfits and dreamers who shaped the world of computing as we know it today. Levy tells the story of the giants of computing when they weren’t quite so giant: Bill Gates, Richard Stallman, Steve Wozniak, Steve Jobs and a whole stellarium of lesser-known, but no less talented, hacking heroes. Evokes the feel of the computer labs at MIT in the 1960s superbly.

Where Wizards Stay Up Late

Katie Hafner, Matthew Lyon, 304 pages, 1996

A sober but nonetheless fascinating account of the people who built the Internet. What’s amazing about this book is that it feels as though you’re reading ancient history, but most of the protagonists in the book are still alive and still contributing to the state of the art.

Rebel Code

Glyn Moody, 343 pages, 2001

A flawed but fun read on the world of free software and in particular, Linux and its author, Linus Torvalds. Torvalds’ public spats with Andy Tanenbaum on the merits of monolithic versus microkernel operating system design are the stuff of legend, and retold well here. History has proved Torvalds right; we’re still waiting for the HURD to be ready for prime time - though of course the HURD wasn’t Tanenbaum’s baby.

The Soul of a New Machine

Tracy Kidder, 293 pages, 1981

Tracy Kidder’s outstanding nuts-and-bolts account of the race by Data General in the late 1970s to build a new minicomputer. It’s all here: the long hours, cut-throat business practices and sheer technical wizardry. Probably the definitive account of the business of hardware engineering.

The Cuckoo’s Egg

Clifford Stoll, 356 pages, 1990

A brilliant account of a hacking attack on the Lawrence Berkeley network which Stoll was responsible for managing. This is pure thriller material - after detecting the intruder on the network, Stoll tells the story of the relentless hunt - over the course of a year - to catch the infiltrator.

More reviews to come; this should keep you in dead tree reading matter for a while! Do let me know what you think of any or all of these books and which ones you’d recommend for me.

Since discovering Alibris, the Amazon of second-hand books, I’ve been going a bit crazy ordering books - to the extent that I’m now in the lucky position of having a considerable backlog of books to read. Books can be found on Alibris for as little as a euro if you’re not fussy about having the cleanest possible copy. (I’m not.) As I finish the backlog, I’ll pop reviews up here. Take a look at this Flickr set to get a taste of what’s to come. Thanks for reading!

July 20, 2011

Building testable software

We’ve all worked with software which wasn’t designed with testing in mind; as developers and as testers, many of us will have been stung by this. Try as we might, we can’t vouch for the completeness, robustness or performance of some component or function of our product. This leads to anxiety, premature greying and poor digestion.

Software can be untestable (or difficult to test, it amounts to the same thing) for a variety of reasons; component X is too tightly integrated with some poorly-understood blob of code and can’t be tested in isolation; appropriate hooks for automated testing haven’t been built into the code; testers lack the tools to interact with the system they’re supposed to test.

Don’t allow yourself to get into any of these situations. If your team (and managers!) value quality, that means (among other things) designing for test from the outset.

Testability as a ‘Done’ criterion

  • Before any code or functionality can be considered complete, it must be proven that the UI functionality to exercise that code can be driven from the team’s automated testing framework. It must be possible for tests in that framework to find and interact with any UI component, and to make assertions about its state.
  • If functionality to be tested relies on external services or unimplemented code, some mechanism to simulate those services or code must be provided.
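
As a sketch of that second criterion, here’s one common shape for simulating an external dependency: a hand-rolled fake that implements the same interface the production code depends on. (The service, names and rates here are all hypothetical, purely for illustration.)

```python
class RateService:
    """The interface the production code depends on."""
    def get_rate(self, currency: str) -> float:
        raise NotImplementedError

class FakeRateService(RateService):
    """Test double: returns canned rates and records every request made."""
    def __init__(self, rates):
        self.rates = rates
        self.requests = []

    def get_rate(self, currency):
        self.requests.append(currency)
        return self.rates[currency]

def convert(amount, currency, service):
    """Code under test: depends only on the interface, never on the network."""
    return amount * service.get_rate(currency)

fake = FakeRateService({"EUR": 1.0, "USD": 1.25})
print(convert(100, "USD", fake))  # 125.0, deterministic, no live service needed
print(fake.requests)              # ['USD'], so interactions can be asserted on too
```

Because the code under test only ever sees the interface, the real service can be slow, flaky, unimplemented or simply absent, and the tests still run.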

The case for automated testing

The purpose of testing is not to “assure quality”, but to provide information to stakeholders on the state of the software: its fitness for purpose, its completeness and its robustness. This information allows these stakeholders to make informed decisions about the demands and risks involved in releasing the software.

Automated testing can provide useful information that large swathes of functionality continue to function in the same way today as they did yesterday. Even if your functional test harness never catches a single bug, it’s providing useful information to stakeholders every time it runs.
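
In its simplest form, that information is just a set of pinned expectations. A toy sketch (the function and its expected outputs are made up for illustration):

```python
def format_price(cents):
    """Stand-in for real product code under test (hypothetical)."""
    return f"€{cents // 100}.{cents % 100:02d}"

def test_format_price_unchanged():
    # Pinned expectations: a red run means behaviour drifted since yesterday.
    assert format_price(0) == "€0.00"
    assert format_price(199) == "€1.99"
    assert format_price(1000) == "€10.00"

test_format_price_unchanged()
print("behaving as it did yesterday")
```

The test will probably never find a new bug; its value is the green run itself, which tells stakeholders that this behaviour hasn’t changed.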

Automation done badly

You’ve probably seen test automation done badly; my favourite example of terrible test automation involved parachuting a contractor into the test team on an already-late project, where he then spent several weeks, working alone, building an automated GUI test harness using a proprietary tool, for which the company owned only a single (very expensive) licence. Over a couple of days at the end of the contract period, there was a ‘handover’ of the automation code — to manual testers with very basic programming skills and no experience of automation. Yes, I was one of those testers.

We tried our best, but within a few short weeks the software had evolved further, the test harness started to break and everyone on the project team lost faith in the harness’ ability to tell us anything useful about the state of the software. Soon, the test harness was abandoned, leaving a lot of people with a bad taste in their mouths regarding the cost-effectiveness and fruitfulness of automated testing.

Automation done well

First and foremost, any test harness must be robust and provide reports that are trusted by the project team. That means the harness’ architecture must be transparent, the tests must be clear and easy to understand and tests should be easy to maintain.

Doing automation properly requires understanding what automated testing can and cannot deliver; it requires making pragmatic choices about what is cost-effective and appropriate to automate. Among the questions to ask are:

  • What is the most appropriate choice of testing framework?
  • Does the team have sufficient skill to build an automated harness, build the support functions to enable testing and build and maintain the tests themselves?
  • If not, do we invest the time and effort to train staff appropriately?
  • How do we choose which tests to automate?
  • Who tests the test harness?
  • Is the product in such a state of flux that UI elements, APIs and other interfaces are changing daily?

If the cost of updating the test harness to handle a change in existing functionality is greater than the cost of changing the functionality itself, it’s likely that the test harness architecture is not optimal.
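
One widely-used way of keeping that update cost down is the page-object pattern: locators live in one place, and tests speak the language of the application rather than of the markup. A minimal sketch, with a fake driver standing in for a real Selenium-style browser handle (all names here are hypothetical):

```python
class FakeDriver:
    """Stand-in for a Selenium-like browser handle; just records actions."""
    def __init__(self):
        self.actions = []
    def type(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    # If the markup changes, these locators change here, once, rather than
    # in every test that touches the login screen.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "#login-button"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).log_in("tester", "s3cret")
print(driver.actions[-1])  # ('click', '#login-button')
```

When a selector changes, one class changes with it; the tests themselves, written in terms of log_in, stay untouched.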

May 10, 2011

“Don’t give us unexpected or surprising information”

Just got a curious instruction from a product manager: “Bugs logged in sprint 3 should focus on functionality delivered in sprint 3 or previous sprints.”

I take issue with that: it’s a tester’s job to highlight holes in requirements, specifications, architectures, implementation choices and missing features — and, yes, in implemented functionality too. If we restrict ourselves to only the latter, we’re not doing our job as dispassionate investigators. A tester’s job is to highlight shortcomings and risks of any sort so that product managers and technical experts can make informed decisions.

The message I hear in that instruction is “Don’t give us unexpected or surprising information”. You may not want to hear it, but I’m going to continue to do that.

September 20, 2010

Testing luminaries

This post has moved across two blogging platforms during its life. I preserve it here as a snapshot of my thinking about testing at the time I wrote it.

I’m passionate about software testing, so I’ve been truly inspired by finally getting to hear testing guru Michael Bolton speak and chat to him afterwards over pints in the pub. What a privilege - thanks Michael!

Michael’s in Ireland to conduct a couple of courses. Very generously, he agreed to devote an entire evening to presenting, at no charge - thank you Michael, Anne-Marie Charrett and SoftTest Ireland - a talk on the topic of “The Two Futures of Software Testing”. Michael’s a raconteur, highly sociable, very approachable and an authority on testing and many other topics. He’s my favourite kind of thinker, a synthesist. He’s an intellectual omnivore.

Michael dropped quite a few names during the talk. If you care about testing, all of these people are worth paying attention to — they’re helping to shape testing as a craft and put it on a solid intellectual foundation. There are plenty of writers on testing to be found on the web, and an enormous amount of what you’ll find is trite, shallow and inarticulate - so when you discover just how smart and iconoclastic the top testing thinkers are, it’s a real eye-opener.

I’ll namecheck Michael Bolton first: take a look at Michael’s blog and Twitter stream.

Michael works closely with James Bach (@jamesmarcusbach), who’s a cantankerous, argumentative and highly intelligent commentator and testing innovator. His brother Jon Bach (@jbtestpilot), with whom James works closely, is less abrasive and probably just as insightful. If you want to raise your profile in testing, James is the man to pay attention to - he’s probably the first testing writer and blogger to come up on the radar of testers curious to find other like-minded souls.

Cem Kaner (“Kem KANE ur” — glad to have got that one straight) is probably the most traditionally academic of this lot — he’s been a strong influence on Bolton and Bach. He’s not on Twitter but has been thinking deeply and writing about testing for decades.

Jerry Weinberg is the grand-daddy of testing and is enormously influential on each of the people I’ve mentioned.

Janet Gregory (@janetgregoryca) and Lisa Crispin (@lisacrispin) collaborated on the excellent book Agile Testing and are active bloggers and tweeters.

Ajay Balamurugadas (@ajay184f) is one of the co-founders of Weekend Testing, which has taken off in the last year. There are chapters in India, Europe and Australia. Find one in your timezone and join in!

There are plenty of other interesting and worthwhile testing people to be found on Twitter and across the blogosphere — start with the people I’ve mentioned and let them lead you to other interesting folks.

The secret of Twitter is to find a good client. If you use a Nokia phone running Symbian, I highly recommend Gravity — it’s actually better than any other client, desktop or mobile, that I’ve used so far. I miss it on Android - so if anyone can point me at a Twitter client that can equal Gravity, let me know!

For blogs, I use Google Reader. It aggregates all of the blogs you follow into one place and lets you dip in comfortably when you can. There’s a mobile version of Google Reader which works very well on simple phone browsers.

Lastly, you might have noticed that I haven’t said a single thing about the content on any of these testers’ blogs - that’s intentional! Check them out yourself. If you haven’t read anything by any of these folks yet, you’re in for a most delicious surprise at the quality of writing and depth of self-analysis going on in the field of testing.

September 15, 2010