
Thursday 23 July 2015

Was this information helpful?

No, Outlook, it wasn't.

Sunday 19 October 2014

Firefox on HiDPI Linux laptops

For a few months now I've been using the very lovely Dell XPS 15 9530 as my main laptop. I'm running Linux Mint 17 "Qiana" on it. The Cinnamon desktop handles the HiDPI screen well, but certain apps don't scale as nicely as others - among them Firefox, which displayed its UI elements and web pages at the screen's native resolution - 3200x1800.

I finally took the time to sort that out today. The fix is extremely simple: Just open the Firefox about:config page, find the value layout.css.devPixelsPerPx and change it from the default -1.0 to 2.0. The change is picked up instantly. Take a look at the difference:
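If you'd rather not rely on the setting surviving in prefs.js, the same pref can be dropped into the profile's user.js file, which Firefox reads at startup. A sketch (the profile directory name below is a placeholder; look under ~/.mozilla/firefox for your real one):

```shell
# Persist the HiDPI fix in user.js so it is reapplied at every startup.
# "xxxxxxxx.default" is a placeholder for your actual profile directory.
profile=~/.mozilla/firefox/xxxxxxxx.default
mkdir -p "$profile"
echo 'user_pref("layout.css.devPixelsPerPx", "2.0");' >> "$profile/user.js"
```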

Native resolution:


Next, I need to sort out The Gimp...

Monday 8 September 2014

Getting the most out of Cygwin

A default install of Cygwin can feel very alien, even if you're familiar with command-line Linux. Part of that is down to Cygwin's not-great default integration with your Windows account. Another part of it is the dopey installer defaults which leave you with a half-crippled environment.

As you install Cygwin...

Choose the appropriate architecture

If you have a 64-bit machine, use the 64-bit installer. This will allow you to make the most of the memory on your machine.

Run the installer as Administrator

This is not obvious, but not doing it this way is the cause of many broken Cygwin installs.

Keep the setup defaults

Did I mention that Cygwin Setup is brittle? The rest of Cygwin is much smoother, I promise.

Choose some extra packages

Search for vim and check all the options you see.
Choose curl. Choose wget. Choose file. Choose openssh.
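If you'd rather script this step, the Cygwin setup program also accepts package selections on the command line via its -q (quiet mode) and -P (package list) options. A sketch, run from a Windows command prompt in the folder containing the installer:

```shell
# Unattended install of the extra packages in one go.
setup-x86_64.exe -q -P vim,curl,wget,file,openssh
```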

Don't use the Irish mirror

It's a dog. Pick one from a UK university instead.

After you've installed Cygwin...

Here's what I always do after installing Cygwin for the first time on a new machine:

Edit the shortcut to mintty to make it a login shell

Find the Cygwin Terminal in your start menu and right-click it. Choose 'Properties'.
Change the value of the argument to mintty.exe in 'Target' as follows:
C:\cygwin\bin\mintty.exe -e /bin/bash --login
This change will cause the shell opened in the terminal program to be a login shell - which means your ~/.bash_profile file will always be read when you open a new terminal.
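You can verify the change took effect from inside the new terminal:

```shell
# A login shell reads ~/.bash_profile; confirm bash was started as one.
shopt -q login_shell && echo "login shell" || echo "not a login shell"
```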

Make your home directory a symbolic link to c:\Users\<me>

Open a Cygwin shell. You'll be in your Cygwin home directory, which has none of the familiar stuff you'd expect to see when you open Windows Explorer. We'll fix that now.
Move all of the contents of your Cygwin home directory into your Windows home directory (using find's -exec avoids the word-splitting problems you'd otherwise hit with filenames containing spaces):
$ find . -mindepth 1 -maxdepth 1 -exec mv -t "$(cygpath -u "$USERPROFILE")" {} +
Delete your now-empty Cygwin home directory and create a symbolic link with the same name pointing to your Windows home directory:
$ cd /home && rmdir "$USER" && ln -s "$(cygpath -u "$USERPROFILE")" "$USER"
Change directory back into your new home directory and start Windows Explorer:
$ cd ~ && explorer . &
Presto, you'll see all your usual files! That means you can now use Cygwin to - for example - navigate to your "Documents" folder with the command cd ~/Documents

Fix the colourful but wonky-looking default prompt

Using your favourite editor, open the file ~/.bash_profile. It may not exist; that's OK, you can just create it.
If it doesn't exist, add the following lines to it:
export PATH=$PATH:.:~/bin
PS1="[\u@\h:\w]$ "
If it does, just add the last line.
If you're feeling a bit brave, you can try the following subtle but useful prompt instead:
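```shell
# A quieter prompt: dimmed working directory, plain dollar sign.
# (One example of the kind of thing I mean; the dim escape code
# assumes a terminal that supports it, which mintty does.)
PS1='\[\e[2m\]\w\[\e[0m\] \$ '
```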

Configure your corporate proxy in your bash environment

If you use a corporate proxy, use the following as a starting point. These settings go into your ~/.bash_profile. Note that https_proxy conventionally still points at an http:// URL (the proxy itself is spoken to over plain HTTP), and that no_proxy entries should be comma-separated without spaces; wildcard support in no_proxy varies by tool:
export ftp_proxy=http://corpproxy:8080
export http_proxy=http://corpproxy:8080
export https_proxy=http://corpproxy:8080
export no_proxy="localhost,127.0.0.1,10.*,192.168.*"
Remember that ~/.bash_profile is only read when your shell is started, so open a new terminal to get the changes.
Having the proxies set in your environment will allow lots of command-line tools that use the network to work properly: apt-cyg, youtube-dl, curl, wget, links and so on.
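Alternatively, you can make the current shell pick up the changes immediately by re-reading the profile:

```shell
# Re-read the profile in the running shell instead of opening a new terminal.
[ -f ~/.bash_profile ] && source ~/.bash_profile
echo "http_proxy is ${http_proxy:-unset}"
```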

Install apt-cyg for command-line installation of Cygwin packages

As you've probably already noticed, the Cygwin installer is very clunky. I bypass it by using a tool that bears a passing similarity to Debian's apt-get - it's called apt-cyg.
If you need to use a proxy, follow the steps above, then follow these instructions to install apt-cyg.

Set up some useful aliases

Add these to your ~/.bash_profile:
alias ls='ls --color'
alias ll='ls -l'

Friday 29 August 2014

ISO 29119 is dead in the water

The ISO are attempting to standardise testing, but there's no consensus in the testing community. In particular:

'An opponent of the specification, Scotland-based test consultant James Christie, does not like the idea of a standardized approach to software testing nor the approach of the ISO effort. "ISO 29119 puts too much emphasis on process and documentation rather than the real testing," Christie says. "Of course that is not its purpose or the intention of the people who have developed it. However, I have seen in practice how people react when they are dealing with a messy, complex problem and there are detailed, prescriptive standards and processes on hand. They focus on complying with the standard and lose sight of the real goal."' 

Sound familiar? ;-)

Take a look at the conversation on Twitter to get a feel for the level of opposition.

I think the Context Driven school do a much better job of providing testers with guidance and a rich education; see Cem Kaner's Black Box Software Testing course and James Bach's Rapid Software Testing courses. Cem Kaner's course materials are available online under a Creative Commons license. The context-driven approach is about descriptivism rather than prescriptivism.

Thursday 28 November 2013

Softest Ireland: Nathalie Rooseboom de Vries presents "How to catch a high-speed train - End to end testing at NS Hispeed"

Softest Ireland host occasional presentations by testers from Ireland and around the world; Janet Gregory and Michael Bolton have been previous guests. These presentations are free to attend, you just have to be prepared to make the time.

Nathalie's talk was particularly worthwhile. She was the sole tester on the end-to-end (E2E) testing of the ticketing system of Fyra, the high-speed rail connection between Belgium and the Netherlands.

These are the key ideas she shared. Some are borrowed from her spare-time pursuit as a casualty simulation victim, during which she gets to observe medical personnel at work. I'm always interested in hearing ideas from folks who are open to cross-fertilisation of ideas into testing from other disciplines.

  • Draw a "talking picture" of the system; a simplified diagram of each of the components and the paths through each over time, for each business flow. Use this to discuss testing coverage with non-technical staff.
  • Ask stakeholders "What is your worst nightmare?" Prioritise these scenarios.
  • Radiate confidence. The calmer you are, the calmer the stakeholders will be.
  • Cultivate relationships with the business folks; find ways to get to know your team.
  • Ask for help. Asking for help is very empowering, as it is difficult to refuse.
  • Use checklists - you can't memorize everything, especially when you're under pressure.
  • Implement a time-out protocol.
  • When running an end-to-end test, have a pre-agreed signal to indicate a "No Test" situation. This is a flag you can raise to indicate a critical issue discovered during a test (but perhaps unrelated to the test being executed). When the "No Test" flag is raised, the teams must come together to implement and install a patch on the E2E system. The best Nathalie and her team were able to do from an initial "No Test" flag to getting the patch into the E2E environment was four hours.

Interestingly, she also outlined how she got hold of a dedicated end-to-end environment - she commandeered the best acceptance testing rig, and then attached components to it as they became available.

The event was very well attended, and the question and answer session during the second half of the talk was better than most, perhaps because everyone can get their heads around a train ticketing system and so the questions tended to delve into interesting details of the challenges Nathalie faced.

Nathalie blogs at Female funTESTic Fanatic and she's @FunTESTic on Twitter. Thanks for the talk, Nathalie! Hope your broken toe heals soon!

Friday 27 September 2013

Boot Repair will save your life

A few weeks back I was doing some work on my partner Aisling's multi-boot laptop. I managed to trash the bootloader by launching the Windows 7 recovery program. The laptop got into a sickening reboot loop. Aisling was delivering a presentation the very next day and needed her laptop, so I had to fix this right now. Gulp.

Six seconds of Googling presented me with Boot Repair:

I didn't imagine such a simple looking tool could do the job, but I booted the laptop off a copy of Ubuntu I had lying around on a USB stick, installed Boot Repair, fired it up and mashed the "Recommended repair" button. A minute or two and one reboot later, the laptop was presenting me with the complete GRUB menu again, with Windows 7, Ubuntu 12.04 and all the other magic partitions available. Phew.

The next day, Aisling's presentation went beautifully and the laptop behaved itself. Kudos to the folks at Boot Repair!

Saturday 14 September 2013

A superb summary of formal specification methods

You probably know I'm a bit of a computer history buff; probably because this field is so dynamic and young that its history seems short enough to get a handle on. (Turns out that's wrong, but who cares, I'm hooked.)

I've started watching the 1982 BBC TV series The Computer Programme (watch it in its entirety at archive.org (...and consider giving this wonderful organisation a donation!)). At the end of the first episode (starting at 21:39), author and technology commentator Rex Malik forecasts how the ubiquity of computers in society will transform our lives over the coming decades - in fact up to our present time. He's astonishingly prescient!

Rex Malik on The Computer Programme

At the start of the clip, the camera pans in close-up over a stack of books on Rex's desk, to the sound of him mashing keys on a manual typewriter. The title of one of those books caught my attention, and after I'd heard Rex's accurate prognosis, I reckoned his books were worth taking another look at.

One of them is entitled Every Object is a System, by Dr Patrick Doyle. The book is out of print, and some casual Googling doesn't turn up any second-hand copies, but I did find a summary of the book, in Bernie Cohen's sharp, witty, short and (as far as I can tell) accurate A Brief History of Formal Methods (PDF). (...he's a little too liberal in his use of scare quotes, but we'll forgive him that.)

Here's what Cohen has to say about Patrick Doyle and his book Every Object is a System:

In the early 70s, Dr. Patrick Doyle, a mathematician with the Irish Life Insurance Company in Dublin, was commissioned to develop a sales commission tracking system. Not being a ‘systems analyst’, he tackled the problem in an unconventional way: by constructing a model of the required system in set theory. Although he believed that the model he had constructed captured all the requirements of the potential users of the system, he felt that it should be signed off as an acceptable specification before he proceeded to implement it. So he offered the appropriate authority, the Board itself, an interesting alternative: either to receive a long, rather boring and probably ambiguous English-language document, which he could derive from his model, or to follow a short course in elementary set theory which would enable the Board members to read and understand his specification in its original form. The Board took the course, read and understood the formal specification, made some suggestions for change and signed it off. Doyle turned the model into a collection of precise software module specifications which he passed to a small team of (non-mathematical) programmers, who coded and ‘integrated’ the modules. The system worked first time! Paddy Doyle was so far ahead of his time that he had to publish his own book, Every Object is a System (still available from its author), in which he presents his unique view of the rôle of mathematics in information system design, concluding that, ultimately, it is an exercise in topological manifolds.

That article was published in the journal Formal Aspects of Computing in January 1995.

I'm still not sold on formal methods, but the field is nonetheless worth paying attention to. Perhaps the most revealing passage about translating formal methods into a language comprehensible to end users comes in the very next paragraph in Cohen's paper:

At about the same time, Jean-Raymond Abrial and Steve Schuman, in the IRIA laboratory in France, were also investigating the use of set theory as a medium for system specification. They called their notation Z (after Zermelo and Fränkel, who had defined the well-founded set theory on which they relied). Z was taken up by the Programming Research Group at Oxford University, by then under the leadership of Strachey's successor, Tony Hoare, where it was enriched, supported by tools and applied to several real problems in industry and commerce. One of these was the CAVIAR system for administering visitors to STL Harlow, ITT's main laboratory. Abrial himself interviewed the client, Gladys, who manually maintained the records and bookings for the 12000 visitors who passed through STL each year, and constructed the (very elegant) Z specification.
However, unlike Doyle, he made no attempt to instruct Gladys in the mysteries of set theory. Instead, he ‘validated’ his model by deriving from it ten theorems (’emergent’ properties of the model), each of which could be cast in the form of a simple, English-language statement about the system, such as: ‘No two visitors shall share the same hotel room’, and asked Gladys to confirm, or deny, them. Gladys gladly did so and the system was duly implemented.

The model's 'emergent properties' sound to me a lot like the tests that emerge from a specification when using Acceptance Test Driven Development (ATDD). To find out how to do that well, take a look at How to transform bad acceptance tests into awesome ones, by Augusto Evangelisti. Augusto and I used to work together, and it looks like he's taken ATDD to a very clean level indeed. In my opinion, this approach is the future of automated testing.

Saturday 24 August 2013

Falsehoods programmers believe about time

Almost exactly two years ago, Patrick McKenzie wrote Falsehoods programmers believe about names, which lists common assumptions about how human beings are named. As the offspring of Dutch immigrants living in Ireland, this post rang very true to me - my family background forces me to notice names and the assumptions people make about them. Patrick’s post is one I’ve returned to again and again when I need inspiration when testing how names are stored in systems I work on.

Now Noah Sussman brings us Falsehoods programmers believe about time, in the same spirit as Patrick’s post. Here are a few of them. Go read the full list.

  • The system clock will always be set to the correct local time.
  • The system clock will always be set to a time that is not wildly different from the correct local time.
  • If the system clock is incorrect, it will at least always be off by a consistent number of seconds.
  • The server clock and the client clock will always be set to the same time.
  • The server clock and the client clock will always be set to around the same time.
  • Ok, but the time on the server clock and time on the client clock would never be different by a matter of decades.

Update: W3.org (the organisation for standardisation on the web) has put together a decent page entitled Personal names around the world. It provides real-world examples of names that don't fit the western naming approach, explains the conventions behind these different name presentations and then makes recommendations for form design. Well worth your time.

Friday 19 July 2013

Rights Management Services

I've just found out about Microsoft's Rights Management Services, a way to apply rights management to documents produced with Microsoft Office. I wonder if the resulting initials 'RMS' were accidental?

Wednesday 10 July 2013

Sequence diagrams from text to SVG

Here's a great sequence diagram creator; it turns a description of a sequence diagram in plain text into an SVG file. I've just tried importing one of the generated files into The Gimp at an enormous resolution, and it looks superb.

Here's an example of a text description:

Title: Here is a title
A->B: Normal line
B-->C: Dashed line
C->>D: Open arrow
D-->>A: Dashed open arrow

This results in the following diagram, embedded here as a PNG graphic:

I like this kind of tool because it gets us back to the old Unix philosophy of small tools that do one thing well (...but this time around, we use JavaScript to wire them together.)

Sunday 7 July 2013

"Simply hold the ctrl key and double click on any non-button area of the window chrome - not the contents"

I work with software, a lot.

It's probably safe to say I live software. So I'm comfortable with the idioms of software development and software use. The older I get the more I care about software usability. Excuse me while I just explode with apoplexy here at utterly insane software usability design.

My partner teaches courses on how to write for the web. Her background isn't technical, so for her, a computer is just a toolkit to get stuff done. One of the tools in the toolkit is LibreOffice Impress, the software libre alternative to Microsoft Powerpoint. (Personally, I detest presentation software because it prioritises format over content and it's so widely abused, but that's a separate rant.)

Anyway, I noticed she'd accidentally pulled the "Slides" pane out of the UI into a separate window. I tried the usual stuff of dragging the window towards various nearby edges to get it to dock back into the main UI, to no avail. A bit of Googling turned up this solution: It's necessary to hold the CTRL key while double-clicking on any non-interactive part of the undocked pane.

This is just beyond insane in so many ways I simply cannot find words to express my disgust at this user experience paradigm. It's 2013. We've had graphical user interfaces since, I don't know, 1985 on the Amiga and the mid-1990s on Windows PCs. There is no earthly reason, beyond utter contemptuous disdain for end users, to specify a user interface interaction that requires the user to hold down a meta key and perform an operation normally reserved for file icons. (When was the last time you double-clicked on a web app?) How am I supposed to figure this out? There is absolutely no cue provided by the interface to allow the user to discover this interaction by themselves. I can't think of a single other user interface interaction convention which would lead me to try this warped combination of very specific actions all at the same time.

Is it the case now that interface designers are off the hook because no matter how obscure the steps required to perform an operation, Google will always serve up an answer provided to some other lost soul who previously found themselves in the same situation?

Monday 13 May 2013

Good writing on testing

Finding the people who write intelligently on testing can be a daunting task. For example, you won't find much quality content on the testing forums on LinkedIn.

Huib Schoots has a terrific list of testing blogs that he considers worth paying attention to. If, like me, you keep abreast of thinking on testing, you'll recognise many of the names on the list - but there are also several there that I haven't come across before. It's great to see a curated list of good writing on testing. Plenty of reading for the weeks ahead! Dip in, see what you like and drop me a comment.

Huib's "Colleagues" List

Wednesday 8 May 2013

Markov chains and formal proofs

I've posted this link before, when this blog was on the late lamented Posterous, but when I went looking for it again, it had disappeared - so here it is again, for good measure. Hat tip to Kent Beck for originally tweeting about this article explaining Markov chains.

I'm reading some interesting follow-on papers, in particular, a pair by James A Whittaker (Yes, that James A Whittaker) "A Markov Chain Model for Statistical Software Testing" and "Markov Analysis of Software Specifications" as well as a graduate thesis by Ivan S. Zapreev entitled "Model Checking Markov Chains: Techniques and Tools". I'm interested in the effect of a program's cyclomatic complexity on the feasibility of using Markov chains for testing purposes.

Until now, I haven't found formal proofs of anything other than trivial software systems to be convincing; they appear to rely on a perfect test oracle, namely a specification that's so well specified that it may as well be the software system itself. Formal approaches such as Dijkstra's tend to approach software systems in some idealised way, not as gloriously nonlinear entities executing on top of preemptively multi-tasking operating systems and making use of imperfect APIs over unreliable networks... and even if I concede that it's possible to prove that a software system is internally perfectly consistent, that tells me nothing about how well that system actually solves the needs of the customer paying for it.

But in any case, I think Markov chains are worth exploring.

Wednesday 13 March 2013

Stop using spreadsheets

If you use spreadsheet software, but you don't use any mathematical functions, please stop right now. You're inflicting pain on yourself and your readers.

Ugly presentation

By using a spreadsheet for purposes other than doing calculations, you're choosing to skip the formatting that you would normally do for a document you expect to print out. Because you're not thinking about formatting, you're sacrificing legibility. For example, Excel doesn't smoothly scroll horizontally across the page. The jarring cell-by-cell jumps are really unpleasant for your readers. Why are you presenting the data in a way that forces your readers to scroll?

A document has one author, but several readers. As the document author, the onus is on you to make your document as legible as possible. Wikipedia describes the message of Steve Krug's book "Don't Make Me Think" like this: "...a program or web site should let users accomplish their intended tasks as easily and directly as possible". Exactly the same rule applies to documents as it does to programs.

The context is separate to the spreadsheet

I can't remember the number of times I've received a spreadsheet whose explanatory text is in the accompanying email. Without the covering email, the spreadsheet loses important contextual information. What does all this stuff mean? Who is it important to? Where do I start? Is there any content on any of these other unlabeled worksheets? No? Why haven't they been deleted?

No version control

By disseminating the spreadsheet by email, you've just created multiple competing copies of your document.

To your reader, the spreadsheet they have in front of them is the canonical version. Trouble is, they're probably editing it right now to provide you with updated or corrected information. Let's say they email their edits back to you (and the distribution list). Now you have the problem of manually merging their changes to produce the most up-to-date version. What did they change? Better check every row! Let's assume you do the merge (without missing anything), and email the updated spreadsheet around to everyone again. Now there are 2 x (recipient list + you) copies out in the ether. Remember what I said about the canonical version of the spreadsheet being the one in front of your readers right now? Human fallibility being what it is, the chance that everyone is looking at the most up-to-date version is close to zero.

Microsoft Office does have reviewing and rudimentary version control built in, but these features don't fix the problem of multiple stale copies of your document continuing to exist forever in email folders and saved in random places on other people's desktop hard drives - or worse - network shares.

Tools like Sharepoint (primarily a web-based document repository which poorly emulates elements of wikis and version control systems) attempt to address this; instead of emailing the document, you save it to Sharepoint and email the link instead. However if your recipients open the link in any browser other than Internet Explorer, they lose the ability to use Sharepoint's locking and versioning features. And you can't control what browsers your recipients use.

Using a dedicated source-control system isn't the answer either, because spreadsheets are binary blobs. Version control systems are designed to work with text: they store only the differences between successive versions and can show meaningful comparisons between them, neither of which works with an opaque binary file.

The solution

Use a wiki. This solves all of the problems I've described, quickly and easily. There are many wikis out there; some of them can be up and running in minutes. Many offer rich-text editors. Others support only Markdown, a simple text markup language.

Just enough presentation

Wiki pages can be as pretty as newspaper layouts; take a look at any Wikipedia page. You'll see a table of contents, probably a boxout containing summary information, and the information on the page presented under headings. Tabular data is easily readable. Editing the tabular data in Markdown is straightforward too, though a little more complex than using Excel. Remember, as the document author, the onus is on you to make your content decipherable, not on your readers to decipher it. Wikis encourage you to format your data, but only to a certain extent. Enough formatting is enough.
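For instance, a small status table in Markdown is nothing more than plain text (the rows here are made up for illustration):

```markdown
| Test case | Owner | Status |
|-----------|-------|--------|
| Login     | Anna  | Pass   |
| Checkout  | Bob   | Fail   |
```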

In-line context

You're back in the land of free-text; you no longer have to fight with cell formatting (merge cells, wrap text, text alignment...). You can simply provide an introductory paragraph for your readers, before proceeding to the meat of the data. You were going to write that in the covering email anyway, right?

Built-in version control

There's only one version of a wiki page - the one at the link you provide. Your readers can edit the page in their browsers and hit "Save" - and everyone gets to see their updates immediately, or at least on the next page refresh. If two readers are simultaneously editing the current version, the second editor to save their changes will get a warning that there have been changes to the page since they started editing. The onus is now on your readers to manage merging, not you. If someone over-writes changes accidentally, it's a simple matter to roll back to an earlier version.

If you really need to, you can enable role-based access control, so that every user has to be authenticated before they can make edits. (...but why would you want this? Presumably you're trying to foster collaboration and information exchange. Why prevent stakeholders from contributing?)


It's easy to reach for a spreadsheet in order to quickly knock out some tabular data - but please think again before doing it next time. Your quickly-assembled spreadsheet is likely to live a lot longer and have a broader readership than you anticipate. Do yourself and all of your readers a favour, install a wiki and use that instead. They'll thank you for it.

Tuesday 11 September 2012

Cem Kaner on “The Oracle Problem and the Teaching of Software Testing”

Cem Kaner, one of the best thinkers on software testing, has a new blog post out: “The Oracle Problem and the Teaching of Software Testing”.

Kaner is not a prolific blogger, but what he produces is well worth paying attention to. He’ll be presenting a new iteration of his software testing course soon, so this is partly a sales pitch for the course, but in spite of that, the blog post is really worth your time. Apart from the useful discussion of how to know whether the expected outcome of a given test is actually correct, I especially like his references to “testasauruses” and evidence for them in the fossil record.

Thursday 16 August 2012

What we're actually doing when interacting with an 'intuitive' interface

I've become pretty cheesed off with the term 'intuitive', as in 'this shiny touch interface is so intuitive'.

I get to observe my 87-year old intelligent-but-half-blind mother negotiating web pages and desktop applications. Her efforts to learn the conventions of modern GUI interaction have shown me that what us techies consider the application of intuition is, in fact, the application of a rich set of contradictory heuristics about visually subtle and temporally fleeting graphical cues.

There is absolutely no consistency at all to GUI design - that's why, when we're confronted with something that causes those heuristics to break, we're lost. Right-click? Apple-click? Double-click? Pinch-to-zoom? Swipe-to-dismiss? I don't think any of us figured those out for ourselves, much as we'd like to think we did.

Saturday 26 May 2012

Opinions change!

Reviewing some of the older posts on this blog, I'm surprised at some of the statements I've made in the past about the function and nature of testing; my approach to testing has evolved more than I'd realised over the years!

First of those changes is my rejection of the term 'Quality Assurance', or 'QA', to define the role of the tester. I consider the term 'QA Engineer' equivalent to overblown titles like 'Sanitation Engineer' and 'Social Media Evangelist'. On an Agile team, it's not the responsibility of one person to 'assure quality', as though some sort of referee were needed to keep those wayward developers in line. This may sound heretical, but in fact it's the whole team's job to assure quality. In my experience, that's best done by teams using Agile techniques. Not just some pick-and-mix Agile approach, but by adhering to the Agile principles in general and to methodologies like Scrum or XP in particular. I detest the term 'best practice', but I can tell you that the teams that I've worked on that got closest to a pure Agile approach also produced the best quality software.

I'm also in a period of evolution regarding automated testing; most of my early experience of it was so bad that I really doubted whether it could bring business value. Now, being more familiar with the tools of automated testing, and having a better appreciation of where automation provides most value, I'm a strong advocate, but perhaps I've swung too far and don't practice enough exploratory testing. All my testing now is repeatable, but I wonder if my attention is as broad as it was when I was a purely manual tester. I'm with Michael Bolton and the other folks in the Context-driven school when they say it's important to draw a distinction between automated checks and sapient manual tests. They are not equivalent.

Another area where my thinking is evolving is in the field of testing metrics. It's easy to collect and collate data on testing, but it's hard to do that in a way that actually gives a useful view of the state of the project, or the readiness of the software for production. Right now, I don't believe that a few bald numbers can do that (especially not bug counts, even when graphed against historical statistics). I'm more of the view that a holistic, conversational story about the state of the project is more likely to provide stakeholders with the information that they need. (That's some statement, coming from an arch-rationalist like me.)

And this brings us to the crux of it: testing is about providing relevant information to stakeholders so that they can decide whether this release is fit for production. Just as customers are now expected to engage in the development cycle at periodic intervals, so project stakeholders must ensure that they are informed about the state of the project at every stage, not just in the weeks before the go-live date or during the hardening sprint. We testers must engage with these stakeholders all the way along to ensure that we're providing good information.

So, on my older posts, caveat lector - let the reader beware. If you see something in a new or old post that you agree or disagree with, let me know and we can have a conversation. I'm always interested in honing my thinking on testing and in hearing new ideas. Changing an opinion in the light of new evidence is not a weakness; it's a foundation of science. As Jeff Atwood explains, to avoid confirmation bias, it's best to have strong opinions, weakly held.

Finally, a big thank you to those people who care enough about testing to have challenged and engaged me over the years. Among my colleagues: Paul O'Neill, Frank Somers, Chaminda Peiris, Sisira Kumara, Augusto Evangelisti and Cormac Redmond. Of the folks I know via the net and through books: James Bach, Cem Kaner, Michael Bolton, Jerry Weinberg, Robert Glass and Michael Nygard. Thanks to all of you, whether we agreed on things or not! 

Book review: "The Mighty Micro"

As well as my other hats, I'm a bit of an amateur historian of computing. Recently, I've been reading this decades-old book, "The Mighty Micro", by Christopher Evans, which bravely forecasts the effect of the micro-computer on society, up to the year 2000. It's an excellent and accessible read and will get you wondering why the future didn't pan out quite the way Evans predicted. I've written a review of "The Mighty Micro" on Goodreads. Let me know what you think!

Friday 11 May 2012

A cry for creativity

I keep seeing entities getting created in our automated test harness with mind-numbingly tedious names like entity1, entity2, entity3.

I'd love it if people would exercise a bit of inventiveness and create entities with more interesting and memorable names. There are a few reasons for this. First, entity1, entity2 and entity3 imply some kind of order, but most of the time entity1 doesn't come before entity2 in any meaningful way; they're just instances of a class of thing. Secondly, those bland names tell me nothing about what the entities are for, or what distinguishes them. The name of something should tell me something interesting about it. For example, chances are somebody called Samhbh Górsky has an Irish mammy and a Polish daddy. But 'keystore_server_4.jks'? It's a keystore, but for what purpose? Not a clue. (Do you also see the opportunity for internationalisation tests?)
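To make the idea concrete, here's a minimal sketch of what I mean. The helper names, the example people and the keystore "purposes" are my own inventions for illustration, not anything from a real harness:

```python
# Sketch: purpose-revealing test-data names instead of entity1, entity2, ...
# All names here are hypothetical examples, not taken from any real system.

def keystore_name(purpose, owner):
    """Compose a keystore filename that says what it's for and who owns it."""
    return f"{owner}_{purpose}_keystore.jks"

def make_test_people():
    """Test people whose names carry meaning - and might even trigger bugs."""
    return [
        "Samhbh Górsky",  # Irish/Polish mix: exercises non-ASCII handling
        "O'Brien",        # apostrophe: exercises quoting and escaping
        "Ng",             # very short surname: exercises length validation
    ]

# Compare 'keystore_server_4.jks' with a name that explains itself:
print(keystore_name("tls_client_auth", "billing_service"))
# → billing_service_tls_client_auth_keystore.jks
```

A nice side effect is that when a test fails, the name in the log already tells you which awkward case broke.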

Go on, next time you have to pick a list of names, go crazy and choose fun names that are memorable, mean something and might even trigger a bug.

Saturday 14 January 2012

Information leakage can sink the ship

I'm in the market for a new van in which to store and transport my kiting gear. I'd found a likely candidate on a second-hand car website and used the "email the seller" link to send a message. Some days later (having heard nothing from the seller) I came back and hit 'refresh' on the page, to check if the vehicle was still available. (Yes, I leave many of my machines running most of the time.) The browser prompted me that I was about to resubmit the page, and of course the tester in me reckoned that would be a good idea. 

This is what I got in response:

Here it is in plain text:

Server Error in '/' Application.

Cannot open database "ABG" requested by the login. The login failed.
Login failed for user 'sa'.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. 

Exception Details: System.Data.SqlClient.SqlException: Cannot open database "ABG" requested by the login. The login failed.
Login failed for user 'sa'.

Source Error: 

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace: 

[SqlException (0x80131904): Cannot open database "ABG" requested by the login. The login failed.
Login failed for user 'sa'.]
System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject) +1019
System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection) +108
System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory) +126
System.Data.SqlClient.SqlConnection.Open() +125
Microsoft.Practices.EnterpriseLibrary.Data.Database.GetNewOpenConnection() +140
Microsoft.Practices.EnterpriseLibrary.Data.Database.GetOpenConnection(Boolean disposeInnerConnection) +74
Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteScalar(DbCommand command) +49
ABGData.DAL_Email.EmailTypeInsert(EmailType type, String emailBody, Int32 nSiteID) +287
ABGBusiness.Utils.BL_EmailUtils.EmailTypeInsert(EmailType type, String emailBody) +60
CommercialsWebsite.Global.Application_Error(Object sender, EventArgs e) +6971
System.EventHandler.Invoke(Object sender, EventArgs e) +0
System.Web.HttpApplication.RaiseOnError() +174

Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

While this information is obviously useful to application developers, it's also very useful to attackers. Look at what's been given away here. First, the database name and the database username; my guess is that, with unimaginative, generic names like these, the password for user 'sa' is 'sa' or 'password'.

The website also divulges the exact version number of the application framework and some details about the structure of the application.

Before deploying an application, it's important to remember that information which serves as a useful diagnostic tool for developers and testers during development may give less scrupulous investigators a way to compromise your application in production.
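Assuming this is a plain ASP.NET site (the stack trace suggests it is), the standard mitigation is to switch off detailed error pages for remote users in web.config. Visitors then see a friendly error page, while developers browsing on the server itself still get the full stack trace. The error page path below is a placeholder; substitute your own:

```xml
<!-- web.config: show detailed errors only to requests from the local machine.
     ~/Error.aspx is a hypothetical page name; point this at your own. -->
<configuration>
  <system.web>
    <customErrors mode="RemoteOnly" defaultRedirect="~/Error.aspx" />
  </system.web>
</configuration>
```

Setting mode to "On" hides the detail from everyone, local requests included, which is a reasonable choice for a production box nobody should be browsing from anyway.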
