04 February 2015
Around a year ago I wrote a blatantly tongue-in-cheek article about the perils of testing software, which was basically me letting off some steam during what was essentially an exercise in paying off technical debt. Looking back, the views I formed were made before I had fully got to grips with everything my current company does, and before I fully understood the position of the project, but I still feel that the sentiment in the article has few faults.


The underlying problem was that there was a serious absence of functional testing, but in the absence of any real specification, and of the people who wrote the business logic in the first place, asking development teams to up the code coverage was the only thing that could realistically be done. As a result people were having to up the code coverage for modules they neither wrote nor understood, with the predictable result of harder-to-hit edge-case protection getting stripped out, and the tests asserting whatever the business logic seemed to accept or reject. Yes, it meant problems down the road, but in the long term I don't think the actual outcome was significantly worse than any other realistic outcome.

Test-driven development

I still fundamentally disagree with test-driven development, as to me it is putting the cart before the horse. Turning a specification into a test-case has all the hazards that plague turning specifications directly into business logic, but it does not provide any discernible upside. To me the supposed selling point of TDD is that it turns specifications into program constraints without regard for secondary concerns such as program efficiency, but the DevOps culture that TDD seems to be at home with does not care about these in the first place. TDD only seems to make any real sense when either debugging, or when a feature slots right into an already well-defined framework.
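For readers unfamiliar with the mechanics being criticised, the TDD loop is: write a failing test derived from the specification, then write just enough business logic to make it pass. A minimal sketch, using the stock leap-year example rather than anything from the project described here:

```python
# Step 1 of the TDD loop: the test exists before the code, acting as an
# executable form of the specification ("years divisible by 4 are leap
# years, except centuries, except every fourth century").
def test_is_leap_year():
    assert is_leap_year(2024)
    assert not is_leap_year(2023)
    assert not is_leap_year(1900)   # century: not a leap year
    assert is_leap_year(2000)       # fourth century: leap year again

# Step 2: write just enough logic to make the test pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```

The hazard alluded to above is that the test is itself a translation of the specification, and a misreading (say, forgetting the fourth-century exception) gets baked into the constraints rather than caught by them.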

On the plus side, the infrastructure provided for TDD fits in well with traditional development methods. Even back in the 1990s, development of ADTs involved writing contrived test scenarios that today would be considered basic unit tests, but without any test framework they were normally thrown away. When working with a test framework there is the mentality of tests being kept around, and this is the critical point. In the long term the real purpose of tests is not working out whether the software works now, but whether it still works when various people have hacked around with it.
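As an illustration of the point about keeping tests around, here is a sketch of the kind of contrived ADT scenario that would once have been thrown away, written against Python's unittest framework so it survives as a regression check (the Stack class is a stand-in, not code from the project):

```python
import unittest

class Stack:
    """A minimal ADT of the sort 1990s test scenarios were written for."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()
    def is_empty(self):
        return not self._items

class StackTest(unittest.TestCase):
    # Kept in the framework, these run again every time someone
    # hacks around with Stack, not just on the day it was written.
    def test_pop_returns_last_pushed_item(self):
        s = Stack()
        s.push(1)
        s.push(2)
        self.assertEqual(s.pop(), 2)

    def test_pop_on_empty_stack_raises(self):
        self.assertRaises(IndexError, Stack().pop)
```

Run with `python -m unittest` from the containing directory; the same scenarios that would once have been discarded now answer the long-term question of whether the code still works.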

Context-less testing

The problem I faced at the time I wrote the last article was having to deal with the demand to increase testing, whereas those having to carry this out had little or no information to work with other than the source code that needed testing. When a programmer writes tests for their own code, they are at least writing tests based on some mental model of what is correct behaviour, but what was happening here was flying blind. What was particularly noticeable was that firewall code quickly got stripped out, which soon resulted in misdirected bug reports when the more subtle errors ended up getting detected a long way away from where they actually happened.
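To make concrete what I mean by firewall code: defensive checks at a module boundary that make bad input fail loudly at the boundary rather than quietly corrupt results downstream. A hypothetical sketch (the function and its rules are invented for illustration):

```python
def apply_discount(price, rate):
    # Firewall code: validate at the boundary so a bad value blows up
    # here, with a clear message, rather than surfacing three modules
    # away as a mysteriously wrong total.
    if price < 0:
        raise ValueError("price must be non-negative, got %r" % price)
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be in [0, 1], got %r" % rate)
    return price * (1.0 - rate)
```

These branches are exactly the hard-to-hit lines that drag coverage figures down, so when coverage is the only metric the temptation is to delete them; the error still occurs, but it is detected far from its cause, and the bug report points at the wrong module.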

In hindsight much of my criticism was due to being used to off-the-shelf specifications (usually RFCs), but the nature of the project made it very difficult to have such authoritative documentation at hand. The company maintains an extensive Confluence (Wiki) site that is meant to document things like architectural decisions, but I have my doubts about how effective it is. I certainly got a bit of a reputation for highlighting correct but undesirable program behaviour (i.e. wrong but not a bug). Looking back the company had little choice but to go through a painful period of upping coverage without access to any information beyond the business logic being tested.

Automation culture

Although I made attempts at getting my last company to use the sort of automation that the Joel Test talks about, my writing about living dangerously was partly a lamentation on how this effort was basically a failure and how it led to what I considered avoidable problems down the road. Nevertheless what I once considered something of an admittedly logical extreme turned out to be standard practice elsewhere. To this day I consider exposure to this “logical extreme” to be one of the biggest things I have gained from my current company. And yes, I did mention in my annual appraisal that exposure to this extensive use of tools was one of the reasons I accepted the job offer.

Automated testing has a high up-front cost, but it means that edge cases get hit quickly without having to think about them. Time and time again at previous companies my colleagues and I got burned for not testing “this” or “that” particular scenario, but when you are a developer at the bottom of the totem pole you ain't gonna remember everything that needs testing. When I have applied for new jobs recently and been given a programming task that requires supplying test-cases, I have by instinct provided an automated test-suite instead. This is what made me realise how much I had changed.
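The point about not having to remember every scenario can be sketched as a table-driven suite: each “this” or “that” case that once burned someone becomes a row, and the suite remembers them so no developer has to. The function under test here is a hypothetical stand-in:

```python
def normalise_whitespace(text):
    # Hypothetical function under test: collapse runs of any whitespace
    # into single spaces and trim the ends.
    return " ".join(text.split())

# One row per scenario that once caused a burn; adding a new row is the
# whole cost of never forgetting that scenario again.
EDGE_CASES = [
    ("", ""),              # empty input
    ("   ", ""),           # whitespace only
    ("a  b", "a b"),       # internal run of spaces
    ("\ta\nb ", "a b"),    # mixed tabs, newlines, trailing space
]

def run_suite():
    """Return a list of (input, expected, actual) for every failing case."""
    failures = []
    for given, expected in EDGE_CASES:
        actual = normalise_whitespace(given)
        if actual != expected:
            failures.append((given, expected, actual))
    return failures
```

The up-front cost is writing the harness once; thereafter every run re-checks every scenario, which is precisely what a human at the bottom of the totem pole cannot be relied upon to do.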

Testing vs. development

This is the real killer. Time spent testing is time not spent developing, and time not spent developing means not meeting some (realistic or otherwise) business requirements. At a previous company I only ever printed two emails: one was a diatribe about getting fully-featured bug-free software delivered by date X, which in short was a challenging time-frame. The other was an issue list forwarded from the customer post-delivery, which contained things that did not surprise me. The latter I decided to only read over a liquid lunch, and at the time it convinced me that the company was about to collapse. Since leaving I have looked at the company's account summaries via Companies House, and my fears turned out to be well grounded. The company survived, but it was a close call.

Looking back at when I first read The Mythical Man-Month, my views on testing were really about when someone should stop developing, because at the time my experience with testing was seeing how it constantly got squeezed out. Testing anything other than the absolute latest code means making the kind of assumptions that the hammering-in of last-minute features tends to break, and even if there are no bugs there are likely to be other quality issues. My view of the need to consolidate was as much about ironing out seams as it was about bugs, and more often than not neither was budgeted for.