Wednesday, June 25, 2008

Many Hands Make Light Work, But I’ve Only Got Two

When choosing how to allocate resources, it can be difficult to do an effective cost-benefit analysis in a short period of time. A technique that I stumbled upon is to look at things from the perspective of “what would I do if I were the only person on the team?”

In 1992, I convinced my manager at the Open Software Foundation (OSF) to create the position of tool smith and allow me to work on the OSF Development Environment (ODE) full time. Now that I had much more time for development, I made many more changes to ODE than I ever had before, and there was much more testing to be done.

Initially, I didn’t realize just how much testing ODE really required. Not only were there no automated tests for ODE, there were no documented test cases either. The test plan consisted entirely of “round up as many volunteers as you can and have each volunteer test on one of the nine platforms.” What each volunteer actually tested was entirely at their discretion. It was different volunteers every time, and there was no record of what they did, only a “seems fine to me” or an “I found these problems.” Once the number of problems got down to an acceptable level, we’d ship the new version.

After a couple of releases it started to sink in that I had to do something differently. My first attempt was to document a standard set of test cases. At first this seemed to work really well. Testers commented that testing was much easier and took much less time, and I felt like I was getting a consistent set of results from a consistent set of tests. But a test/fix cycle requires running the same tests over and over again until things stabilize, and following the same steps again and again becomes pretty mind-numbing. Pretty soon I couldn’t get the volunteers I needed, and I was starting to suspect that people were skipping steps.
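For the curious, a test case from that documented list would have looked something like the sketch below. The ODE specifics are long gone, so this is a reconstruction of the general shape rather than an actual entry:

```
Test case 12: clean build from scratch
Steps:     1. Check out a fresh sandbox.
           2. Run a full build of all targets.
Expected:  Build completes with exit status 0;
           no error messages in the build log.
```

The key property is that a test case is nothing more than steps plus expected results, which becomes important shortly.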

I soon discovered another problem. As bug reports came in from production use, and as I thought of test cases that had been missing, the list of test cases mushroomed. I was very careful not to include overlapping test cases, but still the list grew and grew and grew.

Since the QA of the tool was ultimately my responsibility, I had to pick up the testing slack. The combination of the ever-increasing list of test cases and the shrinking pool of volunteers soon made the QA of a new release all but impractical. I would end up spending far more of my time testing than coding.

But then I remembered how I had gotten the job of tool smith in the first place: by automating myself out of my previous job of manually kicking off builds on all platforms. A test case is basically a set of steps to take plus the expected results. Automating test cases is basically coding, and that was much more fun than manual testing. A couple of inspired weekends later I had automated all of the test cases and added many new ones as well.
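To make that concrete, here is a minimal sketch of what automating the clean-build case above might look like. I’ve used modern Python and unittest for illustration; the command name and expected output are hypothetical stand-ins, since ODE’s real invocations aren’t reproduced here:

```python
import subprocess
import unittest

class OdeSmokeTests(unittest.TestCase):
    """Automated versions of manual test cases: each test runs the
    documented steps and checks the expected results itself."""

    def run_cmd(self, args):
        # Run the tool under test, capturing output and exit status.
        return subprocess.run(args, capture_output=True, text=True)

    def test_clean_build_from_scratch(self):
        # Manual test case: "run a full build; expect exit status 0
        # and no error messages in the log." The "build" command is a
        # hypothetical stand-in for ODE's actual build invocation.
        result = self.run_cmd(["build", "clean", "all"])
        self.assertEqual(result.returncode, 0)
        self.assertNotIn("ERROR", result.stderr)

if __name__ == "__main__":
    unittest.main()
```

Once the expected results live next to the steps like this, the whole suite can run unattended on every platform and report pass/fail without a human judging the output.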

That was the last time I ever relied on manual tests.

Read more from: "Zero to Hyper Agile in 90 Days or Less"
