The Joel test

03 July 2012
I cannot remember exactly when I first came across the Joel Test, but over the last two or so years I have semi-regularly come back to it and noted how my attitudes towards it have changed. It is clearly rough-and-ready and in places shows its age, yet I find myself in increasing agreement with it. However, my reasons for agreement are usually different from Joel's justifications: for instance, while at face value I disagree with the need for daily builds, I think all the prerequisites for doing daily builds should be present. Joel also misses points I consider important.

My own check-list

For items 1 through to 7 and item 9, any sane company should be able to manage at least a partial yes. An outright "no, none of this stuff" on any of these and the company is in a minefield. Items 8 and 10-12 I will explain later. From my experience, my personal checklist is the following six items:

Source code revision control

Source code revision control has come a long way since the days when CVS was king, but I do know people who tried using it for group work and whom the experience put off using any form of revision control. Most of the problems stemmed from CVS being little more than an extension of RCS to multiple files, and hence it underplayed the importance of files as a collection. Joel was rather unconditional in calling CVS fine, but that was 11 years ago and better systems are now available.

The problem with manual version control is that when it fails, it fails spectacularly. It only works in the first place when there are extremely rigid boundaries between the areas each developer works on, and even then developers end up passing around entire source-code sets. This is a work-flow that is very unforgiving towards fast turnaround of fixes, and more often than not no-one except the build-master pays any attention to integration. All hell breaks loose if someone happens to be away and someone else has to modify their code - the odds of that change getting lost once the 'owner' returns are sky-high.

A major point that Joel misses, mainly due to the age of his test, is that source control (especially Subversion) also gives you automated version numbering. Manual version numbering only really works when dealing with public releases, and even then it is a pain as this one number has to describe components with very different histories. Start making patch versions of software in the field, and the only thing that really works is automatically embedding repository revision numbers into binaries.
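As a sketch of what embedding the revision looks like in practice, assuming Subversion's `svnversion` tool (the file name `version.h` and the macro name are my own placeholders), a build step can generate a header carrying the working-copy revision:

```shell
# Generate version.h containing the current working-copy revision at build time.
# Assumes Subversion's svnversion tool; falls back to "unknown" if it is missing.
REV=$(svnversion . 2>/dev/null || echo unknown)
printf '#define BUILD_REVISION "%s"\n' "$REV" > version.h
cat version.h
```

The binary then includes `version.h` and can report `BUILD_REVISION` in its version output, so a patch build shipped to the field can always be traced back to the exact revision it was built from.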

Centralised & automated builds

There is always going to be a master copy of a project, regardless of whether this is intended or not. The only difference is whether or not there is a dependency on a single build-master. When there is, the dependency usually goes unacknowledged, and that is when trouble starts. You know there are issues when someone needs to be called in at 8pm just to run make and an NSIS script. And god help you if the build process, as is more likely than not, requires a special set-up on a workstation that has just died.

It should be possible for any developer to make a change and then be able to roll a new build ready for shipping, and doing all this requires automation. In the Joel Test the general thrust of the argument is that automation removes the scope for silly mistakes, but my view is that automation is the best way to document a process. Unlike a hand-over document written during a notice period, shell scripts are unambiguous. While I do not yet fully agree with Joel's third point on mandating daily builds, all the technical requirements for being able to do them are things I think a sane build system should have. As soon as you start thinking about all this, the Utopia Joel mentions of even making CD ISOs no longer seems that far-fetched.

Issue tracking

When someone is working on a major bug, anything minor is going to go in one ear and out the other. Without any form of issue database, things such as "possible improvement" and even "is this correct?" simply vanish. I have known cases where what I thought at the time were minor issues cropped up as major customer complaints 3 months down the line.

Consolidation is essential

Unlike with pay-day financial products, consolidation is not only good, it is essential. Early on in a project it is OK to steam ahead with adding features, but at some point solidifying what you've got is required. As a project progresses, priority has to be given to fixing bugs, and at some point a feature freeze has to occur. Hammering in new features at the last moment may be commercially desirable, but it sends QA completely out the window.

Joel talks about the time cost of fixing a bug, but the real problem is smoothing over seams. In one project I saw what was a borderline-experimental file converter get wrapped up and bolted into an about-to-ship project, which was basically asking for trouble. It was miraculous that the converter itself worked properly, but it still featured disproportionately in client feedback.

Early use-case analysis is essential

Joel talks about schedules and specifications, but what really has to be tied down is what the end user is to expect. A schedule boils down to when a feature freeze occurs, and a specification boils down to elaboration on the feature list. If programmers do not even have the fixed target of a feature list, then the only realistic expectation is a crock of shit. It is pot luck whether the design the programmer chooses is accommodating to a given feature if they do not have prior knowledge of its requirement.

Proper provision

Unlike in the 1990s, even fairly high-end systems are not that expensive - you can get a quad-core i7 with a full-HD display for circa £500, which after reclaiming VAT works out at about the cost of 3-4 days' employment. If that. The economics of PhD lab equipment do not apply. On the software side the picture is a little more complex. In many cases the best tools for the task are the free ones - in my case the lack of a decent Windows .ico icon editor is the only exception that springs to mind. Nevertheless this does lead to the trap of believing that free software is inevitably better (try getting Gimp to output raw RGB pixels).

What Joel does not address is that in some cases it is not just about your main desktop. There will be cases where you need more than just your main development machine. What works properly over the local loop-back may fall apart completely as soon as parts are put onto separate physical machines. In other cases you might be developing a module that is simply too dangerous to even test on your main development machine - one case that springs to mind was when I was writing the code that manages the auto-reformatting of hot-swapped hard drives. I was lucky that differences between the target hardware and my desktop only resulted in a blue-screen.

The one part that fundamentally applies is Don't Torture Programmers, as that covers much more than just how good tools are. I know one start-up company that provided the latest Visual Studio, but the boss insisted that everyone used his custom preferences file, which had several keyboard short-cuts remapped from the defaults. XEmacs might be baroque as hell, but I use it because I worked out how to make it auto-brace exactly to my preferred style, and no other editor even comes close.

What about 8, 10 & 12?

Quiet conditions?
I agree with much of the sentiment stated in item 8, but I am not entirely convinced that putting everyone into their own office is actually a good idea. In fact I dislike entirely quiet conditions, and welcome the odd unplanned break. Maybe my opinion would be different had I had to share an office with marketing. All good programmers will have programmed at 5am at some point, and there is a reason they stayed up that late.
Testers?
Difficult call, as the economics presented are not the case in the UK. However the real issue is having people other than the program's developers do testing, and getting people from other software groups to test would fill this gap. Given how much this point is predicated on the previous points being in place, I do not treat it as a stand-alone test.
Hall-way usability
I never quite understood this point, as grabbing 5 people in my company would be a significant proportion of the total number of people there. Philosophically the ideas are basically the same as with item 10 on testers.

And finally: Writing code in interviews

I used to be in favour of this idea, but have since fallen out with this approach, for much the same reason I think brainteasers are too blunt an instrument. If you have interviews people have to prepare for, you end up with people optimised for interview preparation. A better test is being able to give technical opinions on programming languages and/or code, as in reality a good programmer is really a good debugger. Being able to type out the code for a linked list at a keyboard does not mean you will be able to do it on paper under interview stress.