Friday, April 18, 2008

An Uncomfortable Truth about Agile Testing

There is a good article by Jeff Patton on some of the bitter truths of agile testing.

In organizations that have adopted agile development, I often see a bit of a culture clash between testers and the rest of the staff. Testers will ask for product specifications that they can test against to verify that what was built meets them, which is a reasonable thing to ask for. The team often has a user story and some acceptance criteria in which any good tester can quickly poke holes. "You can't release the software like this," the tester might say. "You don't have enough validation on these fields here. And what should happen if we put large amounts of data into this field?"

"This is just the first story," I'd say. “"We'll have more stories later that will add validation and set limits on those fields. And, to be honest, those fields may change after we demonstrate the software to the customer--that's why we're deferring adding those things now."

"Well, then there's no point in testing now," the testers would usually say. "If the code changes, I'll just need to re-test all this stuff anyway. Call me when things stop changing."

I can understand their concern, but I also know we need to test what we've built so far--even if it's incomplete, even if it will change. That's when I realized what testing is about in agile development. Let me illustrate with a story:

Imagine you're working in a factory that makes cars. You're the test driver, testing the cars as they come off the assembly line. You drive them through an obstacle course and around a track at high speed, and then you certify them as ready to buy. You wear black leather gloves and sunglasses. (I'm sure it doesn't work that way, but humor me for a minute.)

For the last week, work has been a bit of a pain. When you start up your fifteenth car of the day, it runs rough and then dies after you drive it one hundred yards from the back door of the plant. You know it's the fuel pump again, because the last five defective cars you've found have all had bad fuel pumps. You reject the car and send it back to have the problem properly diagnosed and fixed. You may test this car again tomorrow.

Now, some of you might be thinking, "Why don't they test those fuel pumps before they put them into the cars?" And you're right, that would be a good idea. In the real world, they probably test every car part along the way before it gets assembled. In the end, they'll still test the finished car. Testing the parts of the car improves the quality downstream, all the way to when the car is finally finished.

Testing in agile development is done for much the same reason. A tester on an agile team may test a screen that's half finished, missing some validation, or missing some fields. It's incomplete--only one part of the software--but testing it in this incomplete stage helps reduce the risk of failures downstream. It's not about certifying that the software is done, complete, or fit to ship. To do that, we'd need to drive the "whole car," which we'll do when the whole car is complete.

By building part of the software and demonstrating that it works, we're able to complete one of the most difficult types of testing: validation.

I can't remember when I first heard the words verification and validation. They seemed like nonsense to me; the two words sounded like synonyms. Now I know the distinction, and it's important. Verification means the software conforms to its specification; in other words, it does what you said it would do without failing. Validation means the software is fit for use, that it accomplishes its intended purpose. Ultimately, software has no value unless it accomplishes its intended purpose, and that's a difficult thing to assure until you actually use it for that purpose. The person best qualified to validate the software is someone who would eventually use it. Even if target users can't tell me conclusively that the software will meet its intended purpose, they often can tell me if it won't, along with what I might change to make it more likely that it will.
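To make the distinction concrete, here is a minimal sketch, in Python, of what a verification-style test might look like. The field, the 50-character limit, and the function names are all hypothetical, invented for illustration; they're not from any real project.

    # Verification: checking the software against its stated specification.
    # The spec here is an assumption: "a name must be 1 to 50 characters."
    MAX_NAME_LENGTH = 50

    def name_is_valid(name: str) -> bool:
        """Return True if the name field conforms to the (assumed) spec."""
        return 0 < len(name) <= MAX_NAME_LENGTH

    def test_normal_name_is_accepted():
        assert name_is_valid("Ada Lovelace")

    def test_empty_name_is_rejected():
        assert not name_is_valid("")

    def test_oversized_input_is_rejected():
        # The "large amounts of data" probe the tester asked about earlier.
        assert not name_is_valid("x" * 10_000)

Tests like these verify that the field does what the specification says. Whether a 50-character limit actually serves the people using the software is a validation question, and only a user can answer it.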

Building software in small pieces and vetting those pieces early allows us to begin validating sooner. Inevitably, this sort of validation results in changes to the software. Change arrives a bit at a time, continuously throughout an agile development process, which agile practitioners believe is better than getting it all at once in a big glut at the end of the project, when time is tight and tensions are high.

Being a tester in an agile environment is about improving the quality of the product before it's complete. It also means becoming an integrated and important part of the development team. Testers help ensure the software--each little bit that's complete--is verified before its users validate it. They're involved early, helping describe acceptance criteria before the code is written, and their experience is valuable for finding the kinds of issues that will likely cause validation problems with customers.
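As one illustration of that early involvement, acceptance criteria agreed on with a tester might be captured as test stubs before the feature exists. This is a hypothetical sketch using pytest; the feature and all names are invented, not taken from the article:

    import pytest

    @pytest.mark.skip(reason="feature not yet built; criteria agreed with tester")
    def test_saving_a_contact_shows_confirmation():
        """Given a filled-in contact form, saving it shows a confirmation."""
        ...

    @pytest.mark.skip(reason="input limits deferred to a later story")
    def test_pasting_huge_input_fails_gracefully():
        """Pasting very large input must produce an error, not a crash."""
        ...

Stubs like these let the tester poke holes before the code is written, and each skip marker records why a check is deferred rather than forgotten.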

At the end of the day, an agile tester will likely pore over the same functionality many times, verifying it as additional parts are added or changed. A typical agile product should see more test time than its non-agile counterpart. The uncomfortable truth for testers in agile development is that all of this involves hard work and a fair amount of retesting the same functionality over and over again. In my eyes, this makes the tester's role more relied upon and critical than ever.
