Thursday, April 24, 2008

Peeling the Performance Onion

The Infernal Onion
When we run a performance testing scenario, we usually start with a light load and measure response times as the load increases. You would expect that the response time would increase as the load increases, but you might not anticipate the dreaded "knee" in the performance curve. Figure 1 shows the hockey stick shape of the typical performance curve.

Figure 1: The classic hockey stick

The knee is caused by non-linear effects related to resource exhaustion. For example, if you exhaust all physical memory, the operating system will start swapping memory to disk, which is much slower than physical memory. Sometimes a subsystem like a Java interpreter or application server might not be configured to use all available memory, so memory limitations can bite you even if you have plenty of free memory. If your CPU horsepower gets oversubscribed, threads will start to thrash as the operating system switches between them to give each a fair share of timeslices. If you have too many threads trying to access a disk, the disk cache may no longer give you the performance boost that it usually does. And if your network traffic approaches the maximum possible bandwidth, collisions may impact how effectively you can use that bandwidth.
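If you want to see that curve for your own system before reaching for a commercial load-testing tool, a small script that steps up the number of concurrent simulated users and records response times is enough to make the knee visible. The following is a minimal sketch, not a substitute for a real load-test tool; the target URL, step sizes, and request counts are placeholder assumptions you would replace with your own scenario.

```python
# Minimal step-load sketch: ramp up concurrent "users" and record how the
# median response time grows, so the knee in the curve becomes visible.
# The URL, step sizes, and request counts below are illustrative placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/search?q=test"  # hypothetical endpoint
REQUESTS_PER_USER = 20

def one_request(url: str) -> float:
    """Time a single request and return its duration in seconds."""
    start = time.perf_counter()
    with urlopen(url, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

def run_step(concurrent_users: int) -> float:
    """Run one load step and return the median response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [
            pool.submit(one_request, TARGET_URL)
            for _ in range(concurrent_users * REQUESTS_PER_USER)
        ]
        timings = [future.result() for future in futures]
    return statistics.median(timings)

if __name__ == "__main__":
    for users in (1, 5, 10, 20, 40, 80):
        median = run_step(users)
        print(f"{users:3d} users -> median response {median:.3f} s")
```

Plotting the printed medians against the user counts gives you the hockey stick of Figure 1; the load step where the median suddenly jumps is where some resource has started to saturate.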

Figure 2
When we tune the performance of the system, we try to move that knee to the right so we can handle increasing load as long as possible before the response time shoots off the scale. This tuning often happens near the scheduled end of a project, when most of the system is functional enough to allow system-level performance testing. When you improve the performance of the system, what should you expect to happen next? Sometimes you're still limited by the same kind of bottleneck, though the knee has moved and overall performance is better. Often, though, you'll uncover a new bottleneck that is now the limiting factor in your performance: it may be that you're now exhausting a different resource, or that a different part of the system is exhausting the same resource as before. Figure 2 shows a second bottleneck that was masked by the first one.


This is an application of "Rudy's Rutabaga Rule" from Jerry Weinberg's The Secrets of Consulting. The rule is "Once you eliminate your number one problem, number two gets a promotion." Maybe if you get enough bottlenecks out of the way, you can achieve your performance goals for your system. But don't get frustrated if each change to the system only improves performance by a small amount. Figure 3 illustrates why. (See below.)

If your system doesn't hog one resource significantly more than any other resource, then your bottlenecks will be stacked closely together. Removing each layer will only make a small improvement; you'll most likely slam into another bottleneck waiting nearby.


It Won't Go Fast if It Doesn't Go at All
Testing the system's performance tells us how fast each user can complete a task, and how many users it can support. A related concept is reliability, where we look at how long the system can operate before encountering a failure. You might want to devise a reliability test that doesn't step up the load the way a performance test often does. Not all projects do reliability testing, though, so you might be conducting performance testing before the system's reliability is solid. In that case, you'll usually find latent reliability issues in the system during performance testing. So, when you run a performance test, there's a chance that you'll encounter a failure that renders the rest of your test run invalid.

There is a random nature to most reliability issues: You will probably have more than one reliability issue in the same build of your software that can bite you during a performance test. Whether you encounter one of the latent reliability bugs and which one you see first depends on a roll of the dice. Also, be prepared for bug fixes to unmask hidden reliability bugs the same way performance bottlenecks can hide behind each other.

Another related system attribute that comes into play is robustness--the ability of the system to gracefully handle unexpected input. You can test the robustness of your software with a stress test, which may involve simply ramping up the load in a performance test until you encounter a failure. Robustness issues tend to be easier to consistently reproduce than reliability issues. If you keep hitting the same failure at the same point in your test, you probably have a robustness issue where the system doesn't respond with a reasonable error message when some system resource is completely exhausted. For example, if both your physical memory and swap space are exhausted, a request to allocate more memory will fail, and often the end user doesn't get a useful error explaining that the server is too busy to complete a requested task. Even if the system does fail gracefully, your performance test needs to watch for errors, because it's important to know if any of the simulated users didn't actually get the results they asked for.
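In practice that means each simulated user should validate the response it receives, not just time it. Here is a rough sketch of such a check; the expected marker text and the error categories are assumptions for illustration, since what counts as a "good" response depends entirely on your application.

```python
# Sketch: each simulated user verifies the response it gets back, so that
# errors and half-failed requests are counted instead of silently timed.
from dataclasses import dataclass
from urllib.error import HTTPError, URLError
from urllib.request import urlopen

EXPECTED_MARKER = b"order confirmed"  # hypothetical text a good response contains

@dataclass
class Result:
    ok: bool
    reason: str

def checked_request(url: str) -> Result:
    """Fetch the URL and validate the response, not just its latency."""
    try:
        with urlopen(url, timeout=30) as response:
            body = response.read()
    except HTTPError as err:          # server answered with an error status
        return Result(False, f"HTTP {err.code}")
    except URLError as err:           # connection refused, timeout, DNS, etc.
        return Result(False, f"connection error: {err.reason}")
    if EXPECTED_MARKER not in body:   # server answered, but not with the goods
        return Result(False, "response did not contain the expected content")
    return Result(True, "ok")
```

Tally the failed results per load step; a step whose error rate climbs sharply is telling you about reliability or robustness, and its timing numbers should not be trusted as performance data.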

Figure 3

Handling the Onion Without Tears
Here are a few tips to improve upon the typically slow progress of peeling off one bottleneck at a time.
  • Consider designing and running different types of performance-test scenarios that may each identify different bottlenecks. This changes the onion-peeling process so it's somewhat more parallelized. This presumes that the system is stable enough to survive these different tests and give meaningful, correlatable results.
  • Make sure your performance tests catch errors that indicate if the system isn't doing the work the test expects it to be doing (reliability or simple functional bugs). This takes a lot of extra work in most load-test tools, but it's important because failures can render your performance measurements invalid.
  • Perform stress testing early in your project to identify where the hard limits are. Run the tests all the way up to system failure--i.e., run "tip-over tests."
  • Balance reliability testing with performance testing. The less reliable the system, the more unpredictable your performance testing will be. If it crashes at any level of load, no matter how slight, your performance results are not meaningful. You are still on the outer skin of the onion.
The best approach is to do performance modeling hand in hand with performance tests that validate the model. Performance models can identify bottlenecks before you even start coding. Do unit- and subsystem-level testing early in the project that covers resource allocation, performance, and reliability, so that most performance issues are already resolved before you start peeling the onion at the system level. Try using simple models like spreadsheets during design, and then more sophisticated dynamic models, perhaps with the help of a commercial tool (e.g., Hyperformix), to simulate and predict behavior. In other words, design for performance and reliability from the beginning so you grow a smaller onion with fewer layers. Make sure that resource allocation, performance, and reliability testing are part of unit and integration testing.
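To give a flavor of what even a spreadsheet-level model can predict, here is a sketch of a back-of-the-envelope capacity calculation using a simple M/M/1 queueing approximation. The per-transaction service demands are invented numbers standing in for measurements or design estimates; the point is that such a model flags which resource saturates first, before any system-level test exists.

```python
# Back-of-the-envelope capacity model: for each resource, utilization is
# arrival rate x service demand, and a simple M/M/1 approximation estimates
# the time a transaction spends at that resource. The demands below are
# invented figures; replace them with measured or estimated values.

# Seconds of each resource consumed by one transaction (assumed values).
SERVICE_DEMANDS = {
    "cpu": 0.020,
    "disk": 0.008,
    "network": 0.004,
}

def predicted_response_time(arrivals_per_second: float) -> float:
    """Sum the per-resource residence times, or raise if a resource saturates."""
    total = 0.0
    for resource, demand in SERVICE_DEMANDS.items():
        utilization = arrivals_per_second * demand
        if utilization >= 1.0:
            raise ValueError(f"{resource} saturates at this load (bottleneck)")
        total += demand / (1.0 - utilization)  # M/M/1 residence time
    return total

if __name__ == "__main__":
    for rate in (10, 25, 40, 45, 49):
        try:
            print(f"{rate:3d} tx/s -> {predicted_response_time(rate) * 1000:.1f} ms")
        except ValueError as err:
            print(f"{rate:3d} tx/s -> {err}")
```

With the assumed numbers, the CPU saturates at about 50 transactions per second, so the model predicts both the position of the knee and the first bottleneck you will meet when you start peeling.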

Friday, April 18, 2008

A Tester’s Tips for Dealing with Developers

When I started my career as a software tester, I was made aware of an ongoing antagonism between developers and testers, and it took no time or effort to convince me that it is all too common. I received the kind of unwelcome response from developers that I think all testers experience at some point during their careers.

From indifferent shrugs to downright hostility (sometimes cloaked as sympathetic smiles), a tester has to endure a lot from developers. It can be hard to keep a positive attitude. But it’s up to us to keep our priorities straight, and push toward a quality project.

I picked up a beautiful line from Cem Kaner’s Testing Computer Software: “The best tester is not the one who finds the most bugs or who embarrasses the most developers. The best tester is the one who gets the most bugs fixed.”

So how can we do that?

Be Cordial and Patient
As a tester you may find it more difficult to convince a developer about a defect you’ve found. Often, if a tester exposes one bug, the programmer will be ready with ten justifications. It’s sometimes difficult for developers to accept the fact that their code is defective—and someone else has detected it.

Developers need support from the testing team, who can assure them that finding new bugs is desirable, healthy, and important in making the product the best it can be. A humanistic approach will always help the tester know the programmer better. Believe me, in no time the same person could be sitting with you and laughing at mistakes that introduced bugs. Cordiality typically helps in getting the developer to say “yes” to your bug report. An important first step!

Be Diplomatic
Try presenting your findings tactfully and explaining the bug without blame: "I am sure this is a minor bug that you could handle in no time. This is an excellent program so far." Most developers will jump at that and welcome it.

Take a psychological approach. Praise the developer's work from time to time. The reason most developers dislike our bug reports is very simple: they see us as tearing down their hard work. Some testers communicate with developers only when there is a problem. For most developers, the software is their own baby, and you are just an interfering outsider. I tell my developers that I have a job because of them, and that their jobs are safer because of me. It's a symbiotic and profitable relationship between a tester and a developer.

Don’t Embarrass
Nobody likes mistakes to be pointed out. That’s human nature. Try explaining the big-picture need for fixing that particular bug rather than just firing bulky bug reports at developers. A deluge of defects not only irritates the developer, it makes your hard work useless for them.

Just as one can’t test a program completely, developers can’t design programs without mistakes, and they need to understand this before anything else. Errors are expected; they’re a natural part of the process.

You Win Some, You Lose Some
I know of testers who make their bug reports as rigid as possible. They won’t even listen to the developer’s explanations for not being able to fix a bug or implement a feature. Try making relaxed rules for yourself. Sit with the developer and analyze the priority and severity of a bug together. If the developer has a valid and sensible explanation behind her reluctance to change something, try to understand her. Just be sure to know where to draw the line in protecting the ultimate quality of your product.

Be Cautious
Diplomacy and flexibility do not replace the need to be cautious. Developers often find an excuse to say that they refused to fix a bug because they did not realize (or you did not tell them) how serious the problem was. Design your bug reports and test documents in a way that clearly lays out the risks and seriousness of issues. What’s even better is to conduct a meeting and explain the issues to them.

A smart tester is one who keeps a balance between listening and implementing. If a developer can’t convince you a bug shouldn’t be fixed, it’s your duty to convince him to fix it.

12 Bug-Writing Tips

1. Be very specific when describing the bug. Don't leave any room for interpretation; the more precise the description, the less ambiguity and the less clarification will be needed later on.

2. Call windows by their correct names (the name displayed in the title bar) to eliminate some ambiguity.

3. Don't be repetitive. Don't repeat yourself. Also, don't say things twice or three times.

4. Try to limit the number of steps to recreate the problem. A bug written with seven or more steps can be hard to read, and it is usually possible to shorten the list.

5. Start the description where the bug begins, not before. For example, you don't have to describe how to load and launch the application if the application crashes on exit.

6. Proofreading the bug report is very important. Run it through a spell checker before submitting it.

7. Make sure that all step numbers are sequenced (no missing step numbers and no duplicates).

8. Please make sure that you use complete sentences. This is a sentence. This not sentence.

9. Don't use a condescending or negative tone in your bug reports. Don't say things like "It's still broken" or "It is completely wrong".

10. Don't use vague terms like "It doesn't work" or "not working properly".

11. If there is an error message involved, include the exact wording of the text in the bug report. If there is a GPF (General Protection Fault), include the name of the module and the address of the crash.

12. Once the text of the report is entered, you don't know whose eyes will see it. You might think it will go only to your manager and the developer, but it could show up in other documents you are not aware of: reports to senior management or clients, the company intranet, future test scripts or test plans. The point is that the bug report is your work product, and you should take pride in your work.

Banana Testing

Once upon a time there was a man named Andy. Andy was a normal guy like any one of us, with strengths and weaknesses. His strength was that he did a great job at work, but his weakness was that he was a procrastinator, leaving everything for the last minute.

It was the beginning of December, and Andy realized that he had accumulated all his vacation days for the year and had to use them or lose them. Luckily, he wasn’t working on anything urgent, so he immediately got approval for a two-week vacation, and booked a flight to Hawaii. He went home, woke up the next morning and started to pack because his flight was that afternoon.

As soon as he took out his suitcase, he realized that he had no clothes to pack, so he ran out to the department store to pick up some vacation-wear, sunglasses, etc. He becomes hungry on his way back home and, in a rush, stops at a fruit store to pick up something to eat. He wants to buy a banana, but the storekeeper will only sell him a bunch, not a single banana. They get into a small argument, but in the end Andy gives in and buys the whole bunch because he's hungry and in a rush.

When Andy gets home he drops the bananas on the windowsill, throws his department store bags into the open suitcase, zips it up and runs out of the house to the airport.

Two weeks go by and Andy returns home. As soon as he opens the door, he is greeted by a strange smell. He walks around the house, and he sees something black on the windowsill that’s oozing and dripping, soggy and mushy. There are gnats and flies buzzing around it, and it smells fermented. He gets close, takes a good whiff, sticks his finger in it and tastes it, and declares “these must be rotten bananas” as he passes out.

Why did I tell you this story? To describe what software testing is. In its most generic form, there are three basic elements of testing.[1] They are:


Please see the comments for more.

Is Software Testing Advancing or Stagnating?

Software testing as a discipline got started around 1976, and we are still following the same standards and methods. Here is an interesting article by Steve Whitchurch asking whether software testing is advancing or stagnating.

In 1976, Michael Fagan published his first paper on Design and Code Inspections. He talked about how inspections could reduce errors in software. In the same year, Glenford Myers wrote Software Reliability Principles and Practices. In this book, Myers talks about testing philosophies—emphasizing the importance of designing good test cases. He goes on to describe test cases which test the code as if it’s in a sealed, black box. In 1979, Myers wrote his fifth book, The Art of Software Testing, which soon became the bible of the Software Quality movement. In this book, he talks about the importance of Inspections, Black and White Box testing, and the benefits of regression testing.

Sounds like a solid beginning. So, what’s my point?

My point is this: I don't think testing has advanced since Fagan and Myers wrote their first papers and books. We are still using the same methods to perform our work. We are still asking the same questions.

Now, I’m not suggesting that there haven’t been any important books written in the time since the books Myers wrote. In fact, since then many fine books have been written on the subject of software testing.

In 1983, seven years after Myers' software reliability book, Boris Beizer wrote Software Testing Techniques, a very good book on the subject of software testing. Beizer gives the terms Black Box and White Box testing new names—Functional Testing and Structural Testing respectively. But for the most part he talks about testing methods similar to Myers.

In 1995, a full nineteen years after Myers’ book, Edward Kit wrote Software Testing in The Real World, another good book on software testing. But still, Kit talks about Functional Testing (Black Box) as well as White Box Testing.

But if you have been in the business for any length of time, you get a distinct sense of déjà-vu. If you don’t believe me, take a look at the next testing conference advertisement you get in the mail. Then think about that talk you attended years ago. The one where the speaker described a testing oracle that would create test cases for you. Have you ever seen such a tool that really worked on real code? I doubt it.

What about the CMM and ISO 9000? These processes were going to help us produce high-quality software. How many of you are still using them? Have they solved your quality issues?

Like most of you, I create functional test cases, update regression test suites, and attend an occasional code review, all in the name of process improvement. But I haven't seen anything new or revolutionary impact my world.

Again, I’m not minimizing or trying to downplay software quality process. But, like most of you, I work in the real world of tight deadlines and poor requirements. Most of the time I don’t even have real functional specifications. Software engineering documentation—what’s that?

So thinking back to Myers’ 1976 book and all the testing books and conferences since, have we advanced or are we stagnating?

Let’s just say, I feel the algae growing.

Please post your comments. :)

An Uncomfortable Truth about Agile Testing

There is a good article by Jeff Patton on some bitter truths of agile testing.

In organizations that have adopted agile development, I often see a bit of a culture clash with testers and the rest of the staff. Testers will ask for product specifications that they can test against to verify that what was built meets the specifications, which is a reasonable thing to ask for. The team often has a user story and some acceptance criteria in which any good tester can quickly poke holes. "You can't release the software like this," the tester might say. "You don't have enough validation on these fields here. And what should happen if we put large amounts of data in this?"

"This is just the first story," I'd say. “"We'll have more stories later that will add validation and set limits on those fields. And, to be honest, those fields may change after we demonstrate the software to the customer--that's why we're deferring adding those things now."

"Well, then there's no point in testing now," the testers would usually say. "If the code changes, I'll just need to re-test all this stuff anyway. Call me when things stop changing."

I can understand their concern, but I also know we need to test what we've built so far--even if it's incomplete, even if it will change. That's when I realized what testing is about in agile development. Let me illustrate with a story:

Imagine you're working in a factory that makes cars. You're the test driver, testing the cars as they come off the assembly line. You drive them through an obstacle course and around a track at high speed, and then you certify them as ready to buy. You wear black leather gloves and sunglasses. (I'm sure it doesn't work that way, but humor me for a minute.)

For the last week, work has been a bit of a pain. When you start up your fifteenth car of the day, it runs rough and then dies after you drive it one hundred yards from the back door of the plant. You know it's the fuel pump again, because the last five defective cars you've found have all had bad fuel pumps. You reject the car and send it back to have the problem properly diagnosed and fixed. You may test this car again tomorrow.

Now, some of you might be thinking, "Why don't they test those fuel pumps before they put them into the cars?" And you're right, that would be a good idea. In the real world, they probably test every car part along the way before it gets assembled. In the end, they'll still test the finished car. Testing the parts of the car improves the quality downstream, all the way to when the car is finally finished.

Testing in agile development is done for much the same reason. A tester on an agile team may test a screen that's half finished, missing some validation, or missing some fields. It's incomplete--only one part of the software--but testing it in this incomplete stage helps reduce the risk of failures downstream. It's not about certifying that the software is done, complete, or fit to ship. To do that, we'd need to drive the "whole car," which we'll do when the whole car is complete.

By building part of the software and demonstrating that it works, we're able to complete one of the most difficult types of testing: validation.

I can't remember when I first heard the words verification and validation. They seemed like nonsense to me; the two words sounded like synonyms. Now I know the distinction, and it's important. Verification means the software conforms to its specification; in other words, it does what you said it would do without failing. Validation means that the software is fit for use, that it accomplishes its intended purpose. Ultimately, the software has no value unless it accomplishes its intended purpose, which is a difficult thing to assure until you actually use it for that purpose. The person best qualified to validate the software is someone who would eventually use it. Even if the target users can't tell me conclusively that it will meet its intended purpose, they often can tell me if it won't, as well as what I might change to be more certain that it will.

Building software in small pieces and vetting those pieces early allows us to begin to validate sooner. Inevitably, this sort of validation results in changes to the software. Change comes a bit at a time, or it arrives continuously throughout an agile development process, which agile people believe is better than getting it all at once in a big glut at the end of the project, when time is tight and tensions are high.

Being a tester in an agile environment is about improving the quality of the product before it's complete. It also means becoming an integrated and important part of the development team. Testers help ensure that the software--each little bit that's complete--is verified before its users validate it. Testers are involved early to help describe acceptance criteria before the product is written. Their experience is valuable in finding issues that are likely to cause validation problems with customers.

At the end of the day, an agile tester will likely pore over the same functionality many times to verify it as additional parts are added or changed. A typical agile product should see more test time than its non-agile counterpart. The uncomfortable truth for testers in agile development is that all of this involves hard work and a fair amount of retesting the same functionality over and over again. In my eyes, this makes the tester's role more relied upon and more critical than ever.

What Are You Working On?

Summary: Goals and requirements drive the work schedules of all projects. Some of the resulting requests are necessary to the success of the current project; others are not so critical. Yet sometimes we lose sight of this and spend many work hours trying to complete more than can be done within the timeframe of a project. Here is a short story about how we chase goals and miss requirements.

Story: Kris, a technical lead, saw Tom, one of her project's developers, running down the hall. She was curious, but didn't interfere. A few hours later, she saw him running back. He stopped and made a U-turn into her cube.

"Kris, do you have a minute?" he asked.

"Sure, what's up?" she said.

"Well, Danny over in marketing wants me to add these things to the screen, and I was wondering if you could take a look at it?"

Kris started reviewing the changes and asked, "Tom, is this why you've been running around all day?"

"Uh, yeah. Why?"

"Because what Danny wants is something that's a goal, not a requirement for release. Remember the product roadmap? This feature is for the next release but was a goal for this release. We need to finish the requirements before we think about the goals. Let's go talk to the project manager and see if anything's changed."

Many projects have more requirements than can be finished in the desired project time. Some of those unfinished requirements turn into goals. Other times, the project team has internal goals it wants to accomplish. Or marketing has said it would be nice if the team could achieve performance or reliability greater than what was specified. Or the organization wants faster projects. All of these are goals, and the team should satisfy the goals after it satisfies the requirements.

Separate Goals from Requirements
It's easy, especially at the beginning of a project, for a project team and the people who request deliverables (some of which are requirements) to fail to differentiate between goals and requirements. The project team might be excited about the project and want to do everything. The people who want the release might feel pressure to get everything into this release. But it's too easy for the project team members to be sidetracked if they haven't differentiated between goals and requirements.

Here's how I do it. First, take your requirements (if you have them). Now, for each requirement, ask the person who gave you this requirement into which bucket this item belongs. The buckets are:

1. Product requirements required for this release
2. Product requirements for some time in the future
3. Project goals, such as a reliability or performance measurement that exceeds the product requirement
4. Team goals--things the team would like to do (e.g., pay down some technical debt by investing in more unit tests)
5. Organization goals, such as finishing this project before the next one needs to start

Only the first item in this list represents the project's requirements. Everything else is a goal.

Define Release Criteria
Now that you know the requirements, you can test whether or not you've bucketed everything correctly by defining release criteria. The release criteria should be a subset of the requirements. If you find anything creeping in from the goals, you know you either have more requirements than you thought or your release criteria are not actual release criteria--they're goals someone wants you to deliver.

Here's how this could play out in a project. Imagine you have an online store, and you have a requirement of improving the search for a specific set of items--say, canary cages--by ten percent to bring total search time for canary cages to less than two seconds. In addition, you know that for next quarter's release, you need to bring canary cage search to less than one second--as well as bring the search for all cages to less than 1.5 seconds. This is a goal for this release, but a requirement for next release. If you can see how to fulfill that requirement now for no more money and time than you're already spending, fine. But if you can't, you only work on canary cage search.
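If it helps to make the distinction concrete, a release-criteria check for this example might look like the sketch below. Everything in it is hypothetical: the timed_search stand-in, the sample count, and the thresholds simply mirror the numbers in the scenario, with only the two-second requirement allowed to gate the release.

```python
# Sketch of a release-criteria check: the canary-cage search requirement
# (< 2 s) gates the release; the faster target is tracked as a goal only.
# timed_search is a stand-in for issuing and timing a real search request.
import statistics
import time

REQUIREMENT_SECONDS = 2.0   # this release's requirement for canary cage search
GOAL_SECONDS = 1.0          # next release's target, a goal for now

def timed_search(term: str) -> float:
    """Placeholder for issuing a real search request and timing it."""
    start = time.perf_counter()
    # ... issue the search for `term` against the store here ...
    return time.perf_counter() - start

def check_release_criteria(samples: int = 20) -> bool:
    """Return True only if the requirement is met; report the goal separately."""
    timings = [timed_search("canary cage") for _ in range(samples)]
    median = statistics.median(timings)
    meets_requirement = median < REQUIREMENT_SECONDS
    meets_goal = median < GOAL_SECONDS
    print(f"median search time: {median:.2f} s "
          f"(requirement met: {meets_requirement}, goal met: {meets_goal})")
    return meets_requirement  # only the requirement blocks the release
```

The one-second figure is reported but never blocks this release; when next quarter arrives, it gets promoted from goal to requirement and the constant moves with it.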

In addition, the team members realize they've incurred some technical debt by missing some unit tests, and they don't have all the performance tests they want for all cage searches. They do have performance tests for canary cages.

As a project manager, I would talk to the team and ask where the unit tests are missing. If team members were going to work in those areas of the code, I might ask them to timebox any additional unit test development--i.e., make sure that the team members develop unit tests for the code they're developing (only for code they're touching) and to timebox the time they spend doing that. We would agree on how much time to spend fulfilling the requirements for release because the time the team members spend developing tests for already-running code is time they're not spending on finishing the features for this release.

This conversation tends to be tricky. I don't want to prevent people from providing more information about the code base, and I don't want people to wander all over the code adding unit tests. I tell them that and ask them to monitor their time and let me know if adding more unit tests is taking more time than they thought. An organizational goal for this release might be to meet the quarterly deadline for the release.

Case In Point
One organization I previously worked with had only eighteen-month releases. They wanted to transition to quarterly releases but suspected they needed to get there in stages; they thought moving from an eighteen-month release to a three-month release might be too difficult. So they cut their release time in half and made that the requirement. For the first project, the requirement was nine months from start to release, with a goal of three months to finish the project. The project took twelve months to complete. At the retrospective, the team members discussed what they could do differently during the next project and planned for a six-month duration. They met that six-month requirement and learned what they needed to do for a three-month cycle. Having the original goal helped them learn what they needed to do, even though they couldn't meet the goal.

Summary
Once Tom realized what he was doing--working on a goal instead of a requirement--he asked Kris for help in discussing the issue with Danny. Danny thought the team should change what they were doing to accommodate his request to move the feature from goal to requirement for this release, so they checked with the project manager. The project manager explained to Danny that it was too late in this release to change the requirements, but she would be happy to discuss reordering his requirements in the future.

Your conversations might require more people to make a decision about whether this feature is a goal or requirement, but having the conversation will help everyone do what they need to for the current project. Remember, project requirements and goals are different, and you should treat them differently. Spend your time on the requirements, and then attend to the goals.

Tuesday, April 15, 2008

Change your strategy

One day, there was a blind man sitting on the steps of a building with a hat by his feet and a sign that read: "I am blind, please help".

A creative publicist walking by stopped and observed that the man had only a few coins in his hat. He dropped in a few more coins and, without asking for permission, took the sign, turned it around, and wrote a new announcement. He placed the sign by the blind man's feet and left. That afternoon the publicist passed by the blind man again and noticed that his hat was full of bills and coins. The blind man recognized his footsteps and asked whether it was he who had rewritten the sign, and what he had written on it.

The publicist responded, "Nothing that was not true. I just rewrote your sign differently." He smiled and went on his way. The blind man never knew, but his new sign read: "TODAY IS SPRING AND I CANNOT SEE IT".

Change your strategy when something does not go your way and you'll see it will probably be for the best. Have faith that every change is best for our lives.

ONE BEDROOM FLAT...


Written by an Indian software engineer -- a bitter reality

As is the dream of most parents, I had acquired a degree in software engineering and joined a company based in the USA, the land of the brave and of opportunity. When I arrived in the USA, it was as if a dream had come true.

Here at last I was in the place where I wanted to be. I decided I would stay in this country for about five years, in which time I would have earned enough money to settle down in India. My father was a government employee, and after his retirement the only asset he could acquire was a decent one-bedroom flat. I wanted to do something more than he had. I started feeling homesick and lonely as time passed. I used to call home and speak to my parents every week using cheap international phone cards. Two years passed: two years of burgers at McDonald's and pizzas and discos, and two years of watching the foreign exchange rate and getting happy whenever the rupee value went down.

Finally I decided to get married. I told my parents that I had only ten days of holidays and that everything must be done within those ten days. I got my ticket booked on the cheapest flight. I was jubilant and was actually enjoying shopping for gifts for all my friends back home; if I missed anyone, there would be talk. After reaching home I spent one week going through all the photographs of girls, and as the time was getting shorter I was forced to select one candidate. My in-laws told me, to my surprise, that I would have to get married in two or three days, as I would not get any more holidays. After the marriage, it was time to return to the USA. After giving some money to my parents and telling the neighbors to look after them, we returned to the USA.

My wife enjoyed this country for about two months and then she started feeling lonely. The frequency of calling India increased to twice a week, sometimes three times a week. Our savings started diminishing. After two more years we started to have kids. Two lovely kids, a boy and a girl, were gifted to us by the Almighty. Every time I spoke to my parents, they asked me to come to India so that they could see their grandchildren. Every year I decided to go to India, but partly work and partly monetary conditions prevented it. Years went by and visiting India remained a distant dream. Then suddenly one day I got a message that my parents were seriously sick. I tried, but I couldn't get any holidays and thus could not go to India. The next message I got was that my parents had passed away, and as there was no one to perform the last rites, the society members had done whatever they could. I was depressed. My parents had passed away without seeing their grandchildren.

After a couple more years passed, much to my children's dislike and my wife's joy, we returned to India to settle down. I started to look for a suitable property, but to my dismay my savings fell short and property prices had gone up during all these years. I had to return to the USA.

My wife refused to come back with me, and my children refused to stay in India. My two children and I returned to the USA after promising my wife I would be back for good after two years. Time passed by; my daughter decided to marry an American, and my son was happy living in the USA. I decided I had had enough, wound up everything, and returned to India. I had just enough money to buy a decent two-bedroom flat in a well-developed locality.

Now I am 60 years old, and the only time I go out of the flat is for the routine visit to the nearby temple. My faithful wife has also left me and gone to the holy abode. Sometimes I wonder: was it worth all this? My father, even after staying in India, had a house to his name, and I too have the same

nothing more.

I lost my parents and my children for just ONE EXTRA BEDROOM. Looking out of the window, I see a lot of children dancing. This damned cable TV has spoiled our new generation; these children are losing their values and culture because of it. I get occasional cards from my children asking if I am alright. Well, at least they remember me.

Now perhaps after I die it will be the neighbors again who will perform my last rites. God bless them. But the question still remains: was all this worth it?

I am still searching for an answer...!!!

INDIAN SOFTWARE ENGINEER

Tuesday, April 8, 2008

Never listen with a predetermined notion


A teacher teaching Maths to seven-year-old Arnav asked him, “If I give you one apple and one apple and one apple, how many apples will you have?” Within a few seconds Arnav replied confidently, “Four!”

The dismayed teacher was expecting an effortless correct answer (three). She was disappointed. “Maybe the child did not listen properly,” she thought. She repeated, “Arnav, listen carefully. If I give you one apple and one apple and one apple, how many apples will you have?”

Arnav had seen the disappointment on his teacher’s face. He calculated again on his fingers. But inwardly he was also searching for the answer that would make the teacher happy. His search was not for the correct answer, but for the one that would make his teacher happy. This time, hesitatingly, he replied, “Four…”

The disappointment stayed on the teacher’s face. She remembered that Arnav liked strawberries. She thought maybe he didn’t like apples and that was making him lose focus. This time, with exaggerated excitement and a twinkle in her eyes, she asked, “If I give you one strawberry and one strawberry and one strawberry, then how many will you have?”

Seeing the teacher happy, young Arnav calculated on his fingers again. There was no pressure on him, but a little on the teacher. She wanted her new approach to succeed. With a hesitating smile young Arnav enquired, “Three?”

The teacher now had a victorious smile. Her approach had succeeded. She wanted to congratulate herself. But one last thing remained. Once again she asked him, “Now if I give you one apple and one apple and one more apple how many will you have?”

Promptly Arnav answered, “Four!”

The teacher was aghast. “How Arnav, how?” she demanded in a little stern and irritated voice.

In a voice that was low and hesitating young Arnav replied, “Because I already have one apple in my bag.”

“When someone gives you an answer that is different from what you expect, don’t assume they are wrong. There may be an angle that you have not understood at all. You have to listen and understand, but never listen with a predetermined notion.”

Folk Song Kondaliyu’s Glimpse-Remake-Remix

From: DeshGujarat.com