Thursday, May 31, 2007

Developing the Right Software With Test

It occurs to me that almost everywhere I've seen software developed, testing is mistakenly treated as a distinct task. Here are three unpleasant side-effects I see in conventional testing:

  1. The best people want to be creative, not test someone else's work
  2. Test coverage asymptotically approaches functionality
  3. Test pushes come at the end of late cycles
But I think there is something we can do about it.

Test Becomes a Non-Creative Effort

Despite pushes from management, testing often fails to tap into creative reservoirs. Testing involves creative effort, to be sure. I've worked with many creative testers; however, I've rarely seen testers apply their passion for quality, reliability, and measurement to the early planning of original functionality. To participate, testers are asked to put on the hats of other roles, roles typically filled by their co-workers. So the bright, the ambitious, and the talented want little to do with testing.

Test is a Subset of Functionality

Without an integrated test and measurement focus, features are defined and developed in one phase, then handed off to test in another. Testing is planned with a focus on what was developed previously, typically without precise or up-to-date documentation. This virtually guarantees that some functionality will ship without test coverage, regardless of whether code coverage is high or, more often, low. After all, testing like this can only asymptotically approach the developed functionality.

Test Push Comes Last, Gets Cut

Despite efforts to parallelize testing and development, the test push typically comes last. As a tester I might say, "don't ship it, I haven't tested that enough." Management comes back with, "is it broken?" What can I say, but "I don't know." Or worse, I say: "I want to add another test suite," and they reply: "It's too late to start new testing." And they're right. If we find a new bug so late in the game, we can't take a fix without the risk of new regressions. So they make the tough decision, and cut the test.

So What Can We as Testers Do?

One thing we can try to do, and should do, is to require developers to include unit tests with every check-in. Chromatic from ONLamp compares the number of test assertions in the Perl vs. the Ruby standard libraries. He makes a nice comparison, and his implication is correct: software would be of higher quality if we had more unit tests and more test assertions per unit of code. But I see another, deeper point.
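To make "unit tests with every check-in" concrete, here is a minimal sketch in Python. The function and its test cases are hypothetical, not from any real library; the point is simply that the tests ride alongside the code in the same commit, and each assertion pins down one piece of intended behavior:

```python
import unittest

def normalize_username(name):
    """Hypothetical feature code: trim whitespace and lowercase a username."""
    return name.strip().lower()

class TestNormalizeUsername(unittest.TestCase):
    """Unit tests checked in alongside the function they cover."""

    def test_strips_whitespace(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_lowercases(self):
        self.assertEqual(normalize_username("BOB"), "bob")

    def test_empty_string_stays_empty(self):
        self.assertEqual(normalize_username(""), "")

if __name__ == "__main__":
    unittest.main()
```

A check-in policy like Chromatic implies would reject a commit that adds `normalize_username` without the test class next to it.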

I like how Marc Clifton puts it in his Code Project article on Unit Test Patterns:
Unit testing must be formalized so that it becomes a real engineering discipline rather than an ad hoc approach...the unit test is supposed to test the code that the programmer writes. If the programmer writes bad code to begin with, how can you expect anything of better quality in the tests?...the unit test should be written first, before the code that is to be tested.

Aha! I think he's on to something. But even this focuses too strongly on raw programming. The push, and the discipline, is not just about unit testing. It's got to start with product and feature design. As soon as you decide to develop a new feature, get your testers involved. I'm getting at test-driven development, or rather Example Driven Design, as Peter Provost puts it in his excellent video on test-driven design.

How will you know when you're successful? What has to be true? What assumptions are you making? There are your test cases. There is what you've got to develop to be successful. When put this way, it's obvious that test and dev and design must come together.
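Those questions map directly onto a test-first workflow: the examples that define success are written before the feature exists. Here is a sketch, again in Python, using a made-up discount feature; the function name, rules, and values are assumptions for illustration:

```python
import unittest

# Test-first: the assertions below answer "how will you know when you're
# successful?" and were conceived before apply_discount was implemented.

def apply_discount(price, percent):
    """Hypothetical feature: reduce price by percent, rejecting bad input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100.0), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # Success criterion: a 10% discount on $100 yields $90.
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_full_discount_is_free(self):
        # Assumption made explicit: 100% off means the item is free.
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_rejects_negative_percent(self):
        # Assumption made explicit: negative discounts are invalid.
        with self.assertRaises(ValueError):
            apply_discount(100.0, -5)

if __name__ == "__main__":
    unittest.main()
```

Each test is one of the "what has to be true" answers written down as an executable example; the implementation exists only to make them pass.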

I'd like to see more testers asserting product quality from the very beginning. You testers are, after all, the ones responsible for asserting product quality in the end. Focus more on what it is you're trying to build, rather than the code that makes it happen. As Professor Bill Arms from the Department of Computer Science at Cornell University puts it, "Most failed software development projects fail because they develop the wrong software."

If you can pull it off, think of the peace of mind you'll have when product quality is measured from the very beginning, and nothing is developed that doesn't have corresponding tests. You'll wonder how you ever developed software before. And your testers will proudly stand as shining stars in your organization since you've empowered them to achieve their mandate.

Have an opinion on the role of test in software development? Post a comment, and let me and our readers know.
