
Tuesday, May 13, 2008

What Should a Successful Tester Be

Source : thinking tester

  • Self-driven, with a high level of inner drive for learning new things – no fear of the unknown.
  • Spontaneous – thinks on their feet – good in an emergency.
  • Agile and adaptable.
  • A love for science (physics/chemistry), mathematics, and philosophy.
  • A love for problems and puzzles.
  • A hunger for self-expression – writing, speaking.
  • Organized skepticism – constantly challenging their own thoughts.

U Me Aur Hum

A good movie, "U Me Aur Hum", was released three weeks ago. There was a nice line in it: "Sometimes the journey becomes long due to the distance between two...." It struck me then how this applies to our workplace. Here is what I thought about it.....

In my view, this line perfectly matches our workplace and our working style. We always work in a team, but everybody is really working for themselves (whether for money, knowledge, or whatever). Thinking "Only I have to grow" takes us on a long journey.

What I have always observed in organizations (not only in V2!!) is the lack of understanding between QA and Dev. Although a team is said to be a combination of QA + Dev, the fact is something else. When you compare their understanding, mentality, etc., you find a huge difference.

It is the nature of the work that forces them together, not any natural affinity in the real sense. We almost always find a DEBATE mood when the two come to the same stage.

That ultimately affects the work we are doing, and once more it becomes a looong journey.

What’s the reason for that???

Some of the lines from Developers

"It is none of your job to know how did we write the code, all that you have to do is go and click some buttons!"

"If I do (UNIT) Testing, what will you do?"

And what do we throw back as our answer? Trying to produce more and more bugs and making our bug reports as rigid as possible (without analyzing the root cause of those bugs)? Have we ever sat with the developers and tried to analyze the priority/severity of a bug or a release?

Guys, is this the QA we are doing?? Or is it the SDLC they are following?

That always creates distance in a team. And we are the ones who have to pay for it, and we are paying (long nightmares, huge pressure, and of course poor appraisals!!!).

Guys…. always try to cooperate with your team and shorten your journey.

Friday, May 9, 2008

A Story of a Tester

On a dark and foggy night, a small figure lay huddled on the railway tracks leading to the Mumbai station. I was taken aback to see someone in that position at midnight with no one around. With curiosity taking the front seat, I went near the body and tried to investigate. There was blood all over the body, which was lying face down. It seemed that a ruthless blow from the last train could have caused the end of this body, which seemed to be that of a guy around my age. Amidst the gory flow of blood, I could see a folded white envelope fluttering in the midnight wind. Carefully, I took the blood-stained envelope and was surprised to see the phrase "appraisal letter" on it. With curiosity rising every moment, I wasted no time in opening the envelope to see if I could find some details about the dead guy. The tag around the body's neck and the jazzy appraisal cover gave me the hint that he might be a software engineer. I opened the envelope to find a shining paper on which the appraisal details were typed in flying colors. Thunder broke into my ears and lightning struck my heart when I saw the appraisal amount of the dead guy!!!!! My God, it was not even as much as the cost of the paper on which the appraisal details were printed.... My heart poured out for the guy, and huge calls were heard inside my mind saying, "No wonder this guy died such a miserable death"... As a fellow worker in the same industry, I thought I should mourn for him for the sake of respect, and I stood there with a heavy heart thinking of the shock he would have experienced when his manager placed the appraisal letter in his hand. I am sure his heart would have stopped and his eyes would have gone blank for a few seconds, looking at the next-to-nothing increment in his salary.


While I mourned for him, for a second my hands froze on seeing the employee's name in the appraisal letter... Hey, what a strange coincidence: this guy's name was the same as mine, including the initials. This was interesting. With some mental strength, I turned the body over and nearly fainted. The guy not only had my name but also looked exactly like me. Same looks, same build, same name.... it was me who was lying dead there!!!!!!!! While I was lost in that shock, I felt someone patting my shoulder. My heart stopped completely, I could not breathe, and I sprang up in fear to see who was behind me......... Splash!!! went the glass of water on my laptop screen as I came out of my wild dream to see my manager standing behind my chair, patting my shoulder and saying, "Wake up, man! Come to the Opera meeting room. I have your appraisal letter ready"!!!

Thursday, May 8, 2008

KLOC - What Does It Mean for Software Testing

Introduction to KLOC

Lines of Code (LOC) is one of the software metrics most commonly used for software measurement. A thousand lines of code is counted as one KLOC. This metric helps us gauge the size and complexity of a software application.

What does it mean for software testing

We test applications with the intention of seeing whether the promised functionality works or not. Any deviation here is considered a bug. So each of these bugs must originate from some line of code in the product.

So it is understood that when the size of the code is larger, there is a chance of more bugs in the product. Most quality processes even talk about some number of issues per KLOC being fine or of acceptable quality (even though there is a lot of subjectivity in that).

Defect density is arrived at as the number of bugs / KLOC for the product under test. Defect density is one of the metrics used to measure the quality of the product, and most quality processes talk about this metric.
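As a quick illustration (a minimal sketch in Python; the bug count and code size below are invented for the example), the calculation is straightforward:

    # Defect density = number of bugs / KLOC, as described above.
    def defect_density(bug_count, lines_of_code):
        """Return defects per KLOC (thousand lines of code)."""
        kloc = lines_of_code / 1000.0
        return bug_count / kloc

    # Example: 45 bugs reported against a 30,000-line product.
    print(defect_density(45, 30_000))  # -> 1.5 defects per KLOC

A process might then compare this figure against whatever defects-per-KLOC threshold it has agreed on before sign-off.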

Concerns

The concern with this approach is how these values are measured. The general bias with KLOC is that people tend to count only the executable lines of code in the product.

But each and every line in the product may not be code at all, and we may not execute every one of them. If this is not taken care of, then the numbers for issues related to documentation, images, installation, etc. might be misleading.

Since we are looking at KLOC as the size of the product, it is better to include each and every entity that affects that size. That is helpful for both the development and test teams.

Use Cyclomatic Complexity to determine the Risk and Test Scenarios

Cyclomatic Complexity (CC) is a software metric (mostly a code-based metric) used to measure the number of independent execution paths in an application. Introduced by Thomas McCabe in 1976, it measures the number of linearly independent paths through a program module; in graph terms, M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the module's control-flow graph.

It helps developers determine the independent execution paths and the baseline unit tests they need in order to validate them. Using this, developers can ensure that all the paths have been tested at least once. That is a great comfort for developers and their managers.

It is better to write JUnit tests for all these linearly independent paths and integrate them with a code coverage tool. The coverage reports help you focus on the uncovered paths and improve the code coverage.
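As a rough illustration (sketched in Python with pytest-style tests rather than JUnit, purely for brevity), a function with two decision points has a cyclomatic complexity of 3, so three tests are enough to cover its linearly independent paths:

    # Two decision points, so CC = 2 + 1 = 3: there are three
    # linearly independent paths through this function.
    def classify(value):
        if value < 0:
            return "negative"
        if value == 0:
            return "zero"
        return "positive"

    # One test per independent path (pytest-style).
    def test_negative_path():
        assert classify(-5) == "negative"

    def test_zero_path():
        assert classify(0) == "zero"

    def test_positive_path():
        assert classify(7) == "positive"

Running these under a coverage tool would show every branch of classify() exercised at least once.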

It also helps to evaluate the risk associated with the application. The following are the results published by SEI and they are being followed widely to determine the health of the code base.

Cyclomatic Complexity   Risk Evaluation
1-10                    A simple program, without much risk
11-20                   More complex, moderate risk
21-50                   Complex, high-risk program
Greater than 50         Untestable program (very high risk)

Explore more at Cyclomatic Complexity in Software Technology Roadmap from SEI.

Use metrics to evaluate the risk early in the cycle & improve your test coverage

YouTube India is Launched

Source : TechLads

With News Corp's MySpace having launched in India, can Google-owned YouTube be far behind? The popular video-sharing Web site today launched its local Indian version at www.youtube.co.in.

YouTube India is different in that it features a localized home page plus search functions, allowing users to create and share videos, discover the most popular and relevant videos in India, and generally connect with other Indian and global users. Over time, YouTube India is expected to have an entirely 'local' flavor and feature the content and functionality most desired by Indian users.

YouTube India sports local features like promoted videos, featured videos, home page promotions, a localized user interface and help center, user support, and community features (video ratings, sharing, and content flagging), and it intends to make it easier for the Indian YouTube community to search and view videos from India. In addition, content uploaded by users in India will show up as 'top favourites' and 'recommended content' on the local YouTube Web site. YouTube India also aims to facilitate exchange within the large Indian NRI community.

Meanwhile, YouTube India has already inked partnerships with the likes of UTV, NDTV, India TV, Zoom TV, Rajshri Films, Eros Entertainment, IIFA, the Ministry of Tourism, IIT Delhi, and KrishCricket, to name a few, with the objective of bringing exclusive Indian content to users in new ways. Enjoy, guys!

Thursday, April 24, 2008

Peeling the Performance Onion

The Infernal Onion
When we run a performance testing scenario, we usually start with a light load and measure response times as the load increases. You would expect that the response time would increase as the load increases, but you might not anticipate the dreaded "knee" in the performance curve. Figure 1 shows the hockey stick shape of the typical performance curve.

Figure 1: The classic hockey stick

The knee is caused by non-linear effects related to resource exhaustion. For example, if you exhaust all physical memory, the operating system will start swapping memory to disk, which is much slower than physical memory. Sometimes a subsystem like a Java interpreter or application server might not be configured to use all available memory, so memory limitations can bite you even if you have plenty of free memory. If your CPU horsepower gets oversubscribed, threads will start to thrash as the operating system switches between them to give each a fair share of timeslices. If you have too many threads trying to access a disk, the disk cache may no longer give you the performance boost that it usually does. And if your network traffic approaches the maximum possible bandwidth, collisions may impact how effectively you can use that bandwidth.
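A toy model (my own sketch in Python, not from the original article) shows why the curve bends so sharply. In a simple M/M/1 queue, response time is R = S / (1 - rho), where S is the service time and rho the utilization, so R stays almost flat and then explodes as utilization approaches 1:

    def response_time(arrival_rate, service_time=0.01):
        """M/M/1 response time: R = S / (1 - rho)."""
        rho = arrival_rate * service_time  # utilization, 0..1
        if rho >= 1.0:
            return float("inf")  # the resource is saturated
        return service_time / (1.0 - rho)

    # Response time barely moves at first, then shoots up as the
    # load nears the capacity of 100 requests/second (1/S).
    for load in (10, 50, 80, 90, 95, 99):
        print(f"{load} req/s -> {response_time(load):.4f} s")

Real systems are messier than this formula, but the hockey-stick shape it produces is exactly the knee described above.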

Figure 2
When we tune the performance of the system, we try to move that knee to the right so we can handle increasing load as long as possible before the response time shoots off the scale. This tuning often happens near the scheduled end of a project when most of the system is functional enough to allow for system-level performance testing. When you improve the performance of the system, what should you anticipate to happen next? Sometimes you're still limited by the same kind of bottleneck, though the knee has moved and overall performance is better. Often, though, you'll uncover a new bottleneck that is now the limiting factor in your performance (shown in figure 2). It may be that you're now exhausting a different resource, or that a different part of the system is exhausting the same resource as before. Figure 2 shows a second bottleneck that was masked by the first one.


This is an application of "Rudy's Rutabaga Rule" from Jerry Weinberg's The Secrets of Consulting. The rule is "Once you eliminate your number one problem, number two gets a promotion." Maybe if you get enough bottlenecks out of the way, you can achieve your performance goals for your system. But don't get frustrated if each change to the system only improves performance by a small amount. Figure 3 illustrates why. (See below.)

If your system doesn't hog one resource significantly more than any other resource, then your bottlenecks will be stacked closely together. Removing each layer will only make a small improvement; you'll most likely slam into another bottleneck waiting nearby.


It Won't Go Fast if It Doesn't Go at All
Testing the system's performance tells us how fast each user can complete a task, and how many users it can support. A related concept is reliability, where we look at how long the system can operate before encountering a failure. You might want to devise a reliability test that doesn't step up the load the way a performance test often does. Not all projects do reliability testing, though, so you might be conducting performance testing before the system's reliability is solid. In that case, you'll usually find latent reliability issues in the system during performance testing. So, when you run a performance test, there's a chance that you'll encounter a failure that renders the rest of your test run invalid.

There is a random nature to most reliability issues: You will probably have more than one reliability issue in the same build of your software that can bite you during a performance test. Whether you encounter one of the latent reliability bugs and which one you see first depends on a roll of the dice. Also, be prepared for bug fixes to unmask hidden reliability bugs the same way performance bottlenecks can hide behind each other.

Another related system attribute that comes into play is robustness--the ability of the system to gracefully handle unexpected input. You can test the robustness of your software with a stress test, which may involve simply ramping up the load in a performance test until you encounter a failure. Robustness issues tend to be easier to consistently reproduce than reliability issues. If you keep hitting the same failure at the same point in your test, you probably have a robustness issue where the system doesn't respond with a reasonable error message when some system resource is completely exhausted. For example, if both your physical memory and swap space are exhausted, a request to allocate more memory will fail, and often the end user doesn't get a useful error explaining that the server is too busy to complete a requested task. Even if the system does fail gracefully, your performance test needs to watch for errors, because it's important to know if any of the simulated users didn't actually get the results they asked for.

Figure 3

Handling the Onion Without Tears
Here are a few tips to improve upon the typically slow progress of peeling off one bottleneck at a time.
  • Consider designing and running different types of performance-test scenarios that may each identify different bottlenecks. This changes the onion-peeling process so it's somewhat more parallelized. This presumes that the system is stable enough to survive these different tests and give meaningful, correlatable results.
  • Make sure your performance tests catch errors that indicate if the system isn't doing the work the test expects it to be doing (reliability or simple functional bugs); a sketch follows this list. This takes a lot of extra work in most load-test tools, but it's important because failures can render your performance measurements invalid.
  • Perform stress testing early in your project to identify where the hard limits are. Run the tests all the way up to system failure--i.e., run "tip-over tests."
  • Balance reliability testing with performance testing. The less reliable the system, the more unpredictable your performance testing will be. If it crashes at any level of load, no matter how slight, your performance results are not meaningful. You are still on the outer skin of the onion.
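Here is the sketch mentioned above: a minimal Python load-test worker (assuming the third-party requests library and a hypothetical /search endpoint; the "results" validation check is invented for illustration) that records errors alongside timings so bad runs can be discarded:

    import time
    import requests  # third-party HTTP library

    URL = "http://localhost:8080/search?q=cages"  # hypothetical endpoint

    def timed_request(url):
        """Time one request and check the system did real work."""
        start = time.monotonic()
        resp = requests.get(url, timeout=30)
        elapsed = time.monotonic() - start
        # An error page that returns quickly would otherwise look
        # like excellent performance, so validate the response.
        ok = resp.status_code == 200 and "results" in resp.text
        return elapsed, ok

    timings, errors = [], 0
    for _ in range(100):
        elapsed, ok = timed_request(URL)
        timings.append(elapsed)
        errors += 0 if ok else 1

    # The timing numbers are only meaningful if errors stayed low.
    print(f"avg: {sum(timings) / len(timings):.3f} s, errors: {errors}")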
The best approach is to do performance modeling hand-in-hand with performance tests that validate the model. Performance models identify bottlenecks before you even start coding. Do unit- and subsystem-level testing early in the project that covers resource allocation, performance, and reliability. Most performance issues should already be resolved before you start peeling the onion at the system level. Try using simple models like spreadsheets during design, and then more sophisticated dynamic models, perhaps with the help of a commercial tool (e.g., Hyperformix), to simulate and predict behaviors; i.e., design for performance and reliability from the beginning to grow a smaller, fewer-layered onion. Make sure that resource allocation, performance, and reliability testing are part of unit and integration testing.

Friday, April 18, 2008

A Tester’s Tips for Dealing with Developers

When I started my career as a software tester, I was made aware of an ongoing antagonism between developers and testers. And it took me no time or effort to be convinced that this is all too common. I received the kind of unwelcome response from developers that I think all testers experience at some point during their careers.

From indifferent shrugs to downright hostility (sometimes cloaked as sympathetic smiles), a tester has to endure a lot from developers. It can be hard to keep a positive attitude. But it’s up to us to keep our priorities straight, and push toward a quality project.

I picked up a beautiful line from Cem Kaner’s Testing Computer Software: “The best tester is not the one who finds the most bugs or who embarrasses the most developers. The best tester is the one who gets the most bugs fixed.”

So how can we do that?

Be Cordial and Patient
As a tester you may find it more difficult to convince a developer about a defect you’ve found. Often, if a tester exposes one bug, the programmer will be ready with ten justifications. It’s sometimes difficult for developers to accept the fact that their code is defective—and someone else has detected it.

Developers need support from the testing team, who can assure them that finding new bugs is desirable, healthy, and important in making the product the best it can be. A humanistic approach will always help the tester know the programmer better. Believe me, in no time the same person could be sitting with you and laughing at mistakes that introduced bugs. Cordiality typically helps in getting the developer to say “yes” to your bug report. An important first step!

Be Diplomatic
Try presenting your findings tactfully, explaining the bug without blame. “I am sure this is a minor bug that you could handle in no time. This is an excellent program so far.” Developers will likely welcome that.

Take a psychological approach. Praise the developer’s job from time to time. The reason why most developers dislike our bug reports is very simple: They see us as tearing down their hard work. Some testers communicate with developers only when there is a problem. For most developers, the software is their own baby, and you are just an interfering outsider. I tell my developers that because of them I exist in the company and because of me their jobs are saved. It’s a symbiotic and profitable relationship between a tester and a developer.

Don’t Embarrass
Nobody likes mistakes to be pointed out. That’s human nature. Try explaining the big-picture need for fixing that particular bug rather than just firing bulky bug reports at developers. A deluge of defects not only irritates the developer, it makes your hard work useless for them.

Just as one can’t test a program completely, developers can’t design programs without mistakes, and they need to understand this before anything else. Errors are expected; they’re a natural part of the process.

You Win Some, You Lose Some
I know of testers who make their bug reports as rigid as possible. They won’t even listen to the developer’s explanations for not being able to fix a bug or implement a feature. Try making relaxed rules for yourself. Sit with the developer and analyze the priority and severity of a bug together. If the developer has a valid and sensible explanation behind her reluctance to change something, try to understand her. Just be sure to know where to draw the line in protecting the ultimate quality of your product.

Be Cautious
Diplomacy and flexibility do not replace the need to be cautious. Developers often find an excuse to say that they refused to fix a bug because they did not realize (or you did not tell them) how serious the problem was. Design your bug reports and test documents in a way that clearly lays out the risks and seriousness of issues. What’s even better is to conduct a meeting and explain the issues to them.

A smart tester is one who keeps a balance between listening and implementing. If a developer can’t convince you a bug shouldn’t be fixed, it’s your duty to convince him to fix it.

12 Bug writing tips

1. Be very specific when describing the bug. Don't leave any room for interpretation. More concise means less ambiguous, so less clarification will be needed later on.

2. Call windows by their correct names (the name displayed in the title bar); this eliminates some ambiguity.

3. Don't be repetitive. Don't repeat yourself. Also, don't say things twice or three times.

4. Try to limit the number of steps to recreate the problem. A bug written with 7 or more steps can be hard to read; it is usually possible to shorten the list.

5. Start describing where the bug begins, not before. For example, you don't have to describe how to load and launch the application if the application crashes on exit.

6. Proofreading the bug report is very important. Run it through a spell checker before submitting it.

7. Make sure that all step numbers are sequenced (no missing step numbers and no duplicates).

8. Please make sure that you use sentences. This is a sentence. This not sentence.

9. Don't use a condescending or negative tone in your bug reports. Don't say things like "It's still broken" or "It is completely wrong".

10. Don't use vague terms like "It doesn't work" or "not working properly".

11. If there is an error message involved, be sure to include the exact wording of the text in the bug report. If there is a GPF (General Protection Fault), be sure to include the name of the module and the address of the crash.

12. Once the text of the report is entered, you don't know whose eyes will see it. You might think it will go only to your manager and the developer, but it could show up in other places you are not aware of: reports to senior management or clients, the company intranet, future test scripts or test plans. The point is that the bug report is your work product, and you should take pride in your work.
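To make the tips concrete, here is a short, hypothetical bug report that follows them (the application, window names, module name, and error text are all invented for illustration):

    Title: "Save Project" dialog crashes when the file name is left empty

    Steps to reproduce:
    1. In the Project Editor window, select File > Save Project.
    2. In the Save Project dialog, leave the File name field empty.
    3. Click Save.

    Result: The application crashes with the message "Unhandled
    exception in module projsave.dll at 0042:00001A3F".
    Expected: A message such as "Please enter a file name", with the
    dialog remaining open.

Note the specific title, the window called by its title-bar name, three sequenced steps, and the exact error text.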

Banana Testing

Once upon a time there was a man named Andy. Andy was a normal guy like any one of us, with strengths and weaknesses. His strength was that he did a great job at work, but his weakness was that he was a procrastinator, leaving everything for the last minute.

It was the beginning of December, and Andy realized that he had accumulated all his vacation days for the year and had to use them or lose them. Luckily, he wasn’t working on anything urgent, so he immediately got approval for a two-week vacation, and booked a flight to Hawaii. He went home, woke up the next morning and started to pack because his flight was that afternoon.

As soon as he took out his suitcase, he realized that he had no clothes to pack, so he ran out to the department store to pick up some vacation-wear, sunglasses, etc. He got hungry on his way back home and, in a rush, stopped at a fruit store to pick up something to eat. He wanted to buy a banana, but the storekeeper would only sell him a bunch, not a single banana. They got into a small argument, but in the end Andy gave in and bought the whole bunch because he was hungry and in a rush.

When Andy got home, he dropped the bananas on the windowsill, threw his department store bags into the open suitcase, zipped it up, and ran out of the house to the airport.

Two weeks went by, and Andy returned home. As soon as he opened the door, he was greeted by a strange smell. He walked around the house and saw something black on the windowsill that was oozing and dripping, soggy and mushy. There were gnats and flies buzzing around it, and it smelled fermented. He got close, took a good whiff, stuck his finger in it and tasted it, and declared "these must be rotten bananas" as he passed out.

Why did I tell you this story? To describe what software testing is. In its most generic form, there are three basic elements of testing.[1] They are:


Please see Comments for more........

Is Software Testing Advancing or Stagnating?

Software testing got its start as a discipline back in 1976, and we are still following the same standards and methods. Here is an interesting article by Steve Whitchurch asking whether software testing is advancing or stagnating.

In 1976, Michael Fagan published his first paper on Design and Code Inspections. He talked about how inspections could reduce errors in software. In the same year, Glenford Myers wrote Software Reliability Principles and Practices. In this book, Myers talks about testing philosophies—emphasizing the importance of designing good test cases. He goes on to describe test cases which test the code as if it’s in a sealed, black box. In 1979, Myers wrote his fifth book, The Art of Software Testing, which soon became the bible of the Software Quality movement. In this book, he talks about the importance of Inspections, Black and White Box testing, and the benefits of regression testing.

Sounds like a solid beginning. So, what’s my point?

My point is this: I don't think testing has advanced since Fagan and Myers wrote their first papers and books. We are still using the same methods to perform our work. We are still asking the same questions.

Now, I’m not suggesting that there haven’t been any important books written in the time since the books Myers wrote. In fact, since then many fine books have been written on the subject of software testing.

In 1983, seven years after Myers' software reliability book, Boris Beizer wrote Software Testing Techniques, a very good book on the subject of software testing. Beizer gives the terms Black Box and White Box testing new names—Functional Testing and Structural Testing respectively. But for the most part he talks about testing methods similar to Myers.

In 1995, a full nineteen years after Myers’ book, Edward Kit wrote Software Testing in The Real World, another good book on software testing. But still, Kit talks about Functional Testing (Black Box) as well as White Box Testing.

But if you have been in the business for any length of time, you get a distinct sense of déjà-vu. If you don’t believe me, take a look at the next testing conference advertisement you get in the mail. Then think about that talk you attended years ago. The one where the speaker described a testing oracle that would create test cases for you. Have you ever seen such a tool that really worked on real code? I doubt it.

What about the CMM and ISO 9000? These processes were going to help us produce high-quality software. How many of you are still using them? Have they solved your quality issues?

Like most of you, I create functional test cases, update regression test suites, and attend an occasional code review, all in the name of process improvement. But I haven't seen anything new or revolutionary impact my world.

Again, I’m not minimizing or trying to downplay software quality process. But, like most of you, I work in the real world of tight deadlines and poor requirements. Most of the time I don’t even have real functional specifications. Software engineering documentation—what’s that?

So thinking back to Myers’ 1976 book and all the testing books and conferences since, have we advanced or are we stagnating?

Let’s just say, I feel the algae growing.

Please put your comments.......... :)

An Uncomfortable Truth about Agile Testing

There is a good article by Jeff Patton on some bitter truths of agile testing.

In organizations that have adopted agile development, I often see a bit of a culture clash with testers and the rest of the staff. Testers will ask for product specifications that they can test against to verify that what was built meets the specifications, which is a reasonable thing to ask for. The team often has a user story and some acceptance criteria in which any good tester can quickly poke holes. "You can't release the software like this," the tester might say. "You don't have enough validation on these fields here. And what should happen if we put large amounts of data in this?"

"This is just the first story," I'd say. “"We'll have more stories later that will add validation and set limits on those fields. And, to be honest, those fields may change after we demonstrate the software to the customer--that's why we're deferring adding those things now."

"Well, then there's no point in testing now," the testers would usually say. "If the code changes, I'll just need to re-test all this stuff anyway. Call me when things stop changing."

I can understand their concern, but I also know we need to test what we've built so far--even if it's incomplete, even if it will change. That's when I realized what testing is about in agile development. Let me illustrate with a story:

Imagine you're working in a factory that makes cars. You're the test driver, testing the cars as they come off the assembly line. You drive them through an obstacle course and around a track at high speed, and then you certify them as ready to buy. You wear black leather gloves and sunglasses. (I'm sure it doesn't work that way, but humor me for a minute.)

For the last week, work has been a bit of a pain. When you start up your fifteenth car of the day, it runs rough and then dies after you drive it one hundred yards from the back door of the plant. You know it's the fuel pump again, because the last five defective cars you've found have all had bad fuel pumps. You reject the car and send it back to have the problem properly diagnosed and fixed. You may test this car again tomorrow.

Now, some of you might be thinking, "Why don't they test those fuel pumps before they put them into the cars?" And you're right, that would be a good idea. In the real world, they probably test every car part along the way before it gets assembled. In the end, they'll still test the finished car. Testing the parts of the car improves the quality downstream, all the way to when the car is finally finished.

Testing in agile development is done for much the same reason. A tester on an agile team may test a screen that's half finished, missing some validation, or missing some fields. It's incomplete--only one part of the software--but testing it in this incomplete stage helps reduce the risk of failures downstream. It's not about certifying that the software is done, complete, or fit to ship. To do that, we'd need to drive the "whole car," which we'll do when the whole car is complete.

By building part of the software and demonstrating that it works, we're able to complete one of the most difficult types of testing: validation.

I can't remember when I first heard the words verification and validation. It seemed like such nonsense to me; the two words sounded like synonyms. Now I know the distinction, which is important. Verification means it conforms to specification; in other words, the software does what you said it would do without failing. Validation means that the software is fit for use, that it accomplishes its intended purpose. Ultimately, the software has no value unless it accomplishes its intended purpose, which is a difficult thing to assure until you actually use it for its intended purpose. The best person qualified to validate the software is a person who would eventually use it. Even if the target users can't tell me conclusively that it will meet its intended purpose, they often can tell me if it won't, as well as what I might change to be more certain that it will meet its intended purpose.

Building software in small pieces and vetting those pieces early allows us to begin to validate sooner. Inevitably, this sort of validation results in changes to the software. Change comes a bit at a time, or it arrives continuously throughout an agile development process, which agile people believe is better than getting it all at once in a big glut at the end of the project, when time is tight and tensions are high.

Being a tester in an agile environment is about improving the quality of the product before it's complete. It also means becoming an integrated and important part of the development team. Testers help ensure that the software--each little bit that's complete--is verified before its users validate it. Testers are involved early to help describe acceptance criteria before the product is written. Their experience is valuable for finding issues that would likely cause validation problems with customers.

At the end of the day, an agile tester will likely pore over the same functionality many times to verify it as additional parts are added or changed. A typical agile product should see more test time than its non-agile counterpart. The uncomfortable truth for testers in agile development is that all of this involves hard work and a fair amount of retesting the same functionality over and over again. In my eyes, this makes the tester's role more relied upon and more critical than ever.

What Are You Working On?

Summary: Goals and requirements drive the work schedules of all projects. Some of these requests are necessary to the success of the current project; others are not so critical. Yet sometimes we lose sight of this and spend many work hours trying to complete more than can be done within the timeframe of a project. Here is a short story about how we chase goals and miss requirements.

Story: Kris, a technical lead, saw Tom, one of her project's developers, running down the hall. She was curious but didn't interfere. A few hours later, she saw him running back. He stopped and made a U-turn into her cube.

"Kris, do you have a minute?" he asked.

"Sure, what's up?" she said.

"Well, Danny over in marketing wants me to add these things to the screen, and I was wondering if you could take a look at it?"

Kris started reviewing the changes and asked, "Tom, is this why you've been running around all day?"

"Uh, yeah. Why?"

"Because what Danny wants is something that's a goal, not a requirement for release. Remember the product roadmap? This feature is for the next release but was a goal for this release. We need to finish the requirements before we think about the goals. Let's go talk to the project manager and see if anything's changed."

Many projects have more requirements than can be finished in the desired project time. Some of those unfinished requirements turn into goals. Other times, the project team has internal goals it wants to accomplish. Or marketing has said it would be nice if the team could achieve performance or reliability greater than what was specified. Or the organization wants faster projects. All of these are goals, and the team should satisfy them only after it satisfies the requirements.

Separate Goals from Requirements
It's easy, especially at the beginning of a project, for a project team and the people who request deliverables (some of which are requirements) to be unable to differentiate between goals and requirements. The project team might be excited about the project and want to do everything. The people who want the release might feel as if there's pressure for everything in this release. But it's too easy for the project team members to be sidetracked if they haven't differentiated between goals and requirements.

Here's how I do it. First, take your requirements (if you have them). Now, for each requirement, ask the person who gave you this requirement into which bucket this item belongs. The buckets are:

1. Product requirements required for this release
2. Product requirements for some time in the future
3. Project goals, such as a reliability or performance measurement that exceeds the product requirement
4. Team goals--things the team would like to do (e.g., pay down some technical debt by investing in more unit tests)
5. Organization goals, such as finishing this project before the next one needs to start

Only the first item in this list represents the project's requirements. Everything else is a goal.

Define Release Criteria
Now that you know the requirements, you can test whether or not you've bucketed everything correctly by defining release criteria. The release criteria should be a subset of the requirements. If you find anything creeping in from the goals, you know you either have more requirements than you thought or your release criteria are not actual release criteria--they're goals someone wants you to deliver.

Here's how this could play out in a project. Imagine you have an online store, and you have a requirement of improving the search for a specific set of items--say, canary cages--by ten percent to bring total search time for canary cages to less than two seconds. In addition, you know that for next quarter's release, you need to bring canary cage search to less than one second--as well as bring the search for all cages to less than 1.5 seconds. This is a goal for this release, but a requirement for next release. If you can see how to fulfill that requirement now for no more money and time than you're already spending, fine. But if you can't, you only work on canary cage search.

In addition, the team members realize they've incurred some technical debt by missing some unit tests, and they don't have all the performance tests they want for all cage searches. They do have performance tests for canary cages.

As a project manager, I would talk to the team and ask where the unit tests are missing. If team members were going to work in those areas of the code, I might ask them to timebox any additional unit test development--i.e., make sure that the team members develop unit tests for the code they're developing (only for code they're touching) and to timebox the time they spend doing that. We would agree on how much time to spend fulfilling the requirements for release because the time the team members spend developing tests for already-running code is time they're not spending on finishing the features for this release.

This conversation tends to be tricky. I don't want to prevent people from providing more information about the code base, and I don't want people to wander all over the code adding unit tests. I tell them that and ask them to monitor their time and let me know if adding more unit tests is taking more time than they thought. An organizational goal for this release might be to meet the quarterly deadline for the release.

Case In Point
One organization I previously worked with had only eighteen-month releases. They wanted to transition to quarterly releases but suspected that moving from an eighteen-month release straight to a three-month release would be too difficult, so they first halved their current cycle to set the requirement. For the first project, the requirement was nine months from start to release, with a goal of three months to finish the project. The project took twelve months to complete. At the retrospective, the team members discussed what they could do differently during the next project and planned for a six-month duration. They met that six-month requirement and learned what they needed to do for a three-month cycle. Having the original goal helped them learn what they needed to do, even though they couldn't meet the goal.

Summary
Once Tom realized what he was doing--working on a goal instead of a requirement--he asked Kris for help in discussing the issue with Danny. Danny thought the team should change what they were doing to accommodate his request to move the feature from goal to requirement for this release, so they checked with the project manager. The project manager explained to Danny that it was too late in this release to change the requirements, but she would be happy to discuss reordering his requirements in the future.

Your conversations might require more people to make a decision about whether this feature is a goal or requirement, but having the conversation will help everyone do what they need to for the current project. Remember, project requirements and goals are different, and you should treat them differently. Spend your time on the requirements, and then attend to the goals.