Thursday, December 15, 2005

Software Testing: Coloring the Boxes from Black to White

In my last article, I dealt with the virtually never-ending battle between manual testing and automated testing. Today, let us leave that debate in its place and stir up a new fracas in the world of software testing – let us try to bridge the gap between white box and black box testing.

Last year, a chance encounter with one of my cousins landed me in a remote, underdeveloped locality. This cousin of mine is a field reporter by profession and was on duty, investigating a murder that had taken place the previous night; I was just accompanying him with the sole interest of exploring a reporter’s day-to-day life.

As we proceeded in our search for the truth and began questioning the locals at random, I was taken aback by their narrations. None of the stories matched exactly, and it was difficult for me to draw a clear picture of the actual incident. It reminded me of the story of the blind men and the elephant, wherein each blind man described the elephant according to his own perspective.

Nevertheless, my companion was a seasoned professional, and it took him just a couple of hours to filter the diverse stories and extract the relevant information, which matched exactly with the report of the police, who joined us a while later. Now, what’s the moral of the story, and why on earth am I telling it when I was supposed to be igniting an intense debate between white box and black box testing?

What I want to convey is that one single perspective can never give you a full, accurate picture of a given event or incident, and it does not take blind men to hold widely different perspectives on the same matter. This is true not only of events and incidents, but also of software systems.

Testing computer software is one activity in the Software Development Life Cycle (SDLC) in which having concurrent views of the software system is an absolute must. The 4+1 view model of software architecture, which insists on describing a system through multiple concurrent views, bears testimony to the same principle. Likewise, no single type of test provides enough information to quantify the quality of a software system; we must subject the “box” (the software system) to multiple types of tests to ensure that the final software meets the desired quality standards.

Black box testing and white box testing are the two broad categories of software testing that together give testers a much better perspective on system quality. Set them apart, however, and neither can claim to be the sole leader in the software testing arena.

The concept of black box testing is true to its literal meaning because it hides the intricacies of the software system from the tester. The tester accesses only the interfaces exposed by the system and checks whether the system behaves as expected. Testing the software this way provides several benefits:
· It ensures that the software conforms to its customer’s requirements specification.
· It ensures that all the software components are properly integrated to produce the desired output.
· The time required to perform the tests is minimal.
· It does not require a software engineer to perform the tests.
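To make the idea concrete, here is a minimal sketch of black box testing in Python. The function `apply_discount` is purely hypothetical; the point is that the tests exercise only its advertised contract (inputs in, outputs checked) and know nothing about how it is implemented.

```python
import unittest

# Hypothetical function under test. A black box tester sees only its
# contract: price and percent go in, a discounted price comes out.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class BlackBoxDiscountTest(unittest.TestCase):
    """Exercises the public interface only; never peeks at the code."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(50.0, 100), 0.0)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(BlackBoxDiscountTest)
    unittest.TextTestRunner(verbosity=2).run(suite)
```

Notice that anyone who can read the requirements specification could write these tests; that is exactly why black box testing does not demand a software engineer.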

However, black box testing has one serious demerit. Being independent of the software’s internal architecture, it cannot determine the efficiency of the code, nor does it guarantee that every line of code has been exercised. This is where white box testing scores higher.

When white box testing is allocated enough time and resources, you can be confident that every line of code has been exercised, and you can also gauge the code’s relative efficiency. Moreover, white box testing lets you identify errors that the software does not explicitly and immediately expose.
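The white box approach can be sketched just as simply. The routine below is hypothetical; the point is that the test inputs are chosen by reading the code, one per branch, so that every branch (and hence every line) of the function is known to execute.

```python
# Hypothetical routine with three branches. A white box tester reads the
# code and picks one input per branch to guarantee full branch coverage.
def classify_temperature(celsius):
    if celsius < 0:
        return "freezing"
    elif celsius < 30:
        return "moderate"
    else:
        return "hot"

# One carefully chosen input per branch exercises every line:
assert classify_temperature(-5) == "freezing"   # branch 1: celsius < 0
assert classify_temperature(20) == "moderate"   # branch 2: 0 <= celsius < 30
assert classify_temperature(35) == "hot"        # branch 3: celsius >= 30
print("all branches covered")
```

A black box tester might happen to hit all three branches by luck; the white box tester hits them by design, and can prove it with a coverage tool.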

Thus you see that my idea of relating software testing to a story of widely different viewpoints was not at all a fruitless one. In fact, it is precisely because white box and black box testing take such different approaches that you get the complete picture only when you unite them – “United we stand, divided we fall” could well be the slogan of these two testing methodologies, which must be bridged in the larger interest of software testing.

Thursday, November 10, 2005

Software Testing

Test Automation vs. Manual Testing: The Battle Continues…

The letter ‘t’ in the term “testing” bears paramount significance when we talk in terms of software products. That’s because “testing” and “time” go hand in hand, and no software tester in this world can deny that a little extra time could have helped him produce better results. It’s not that whining about the lack of time has become a habit for software testers; rather, it’s an unfortunate truth that software testing is the SDLC phase that gets the least time and attention.

The fact that software development projects normally follow an iterative approach makes it extra difficult for software testers to execute their tasks correctly and efficiently. Although the iterative approach helps developers mend the fissures in their software, it certainly makes life difficult for the testers by piling up testing debt.

With each iteration, the testers have to test a lot more than they tested in the previous iteration, and within a reduced time frame at that. As a result, testers are compelled to sacrifice one form of testing or another, thereby increasing the risk of bugs remaining in the final product.

Under such circumstances, test automation does seem to be the silver bullet, promising immense reductions in test execution time. For example, a recent survey found that test automation could condense five days of manual testing effort (40 working hours, assuming an eight-hour day) into one hour of automated execution – a 97.5% reduction in execution time.

Software, since its birth, has been cursed by the “change” factor. It’s a common belief that software code can be easily changed, and so customers change their requirements frequently as the software develops. Seldom do they understand that such changes add complexity to the software, and that it takes an enormous effort on the testers’ part to perform regression testing and ensure that no bugs have been introduced by the changes.

When it comes to regression testing, there can be no better solution than test automation. Instead of testing only the area around the changes, test automation allows you to retest the entire software after every modification using less time and fewer resources.
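The idea behind an automated regression suite can be sketched in a few lines. Everything here is hypothetical (the `slugify` function and its recorded cases are invented for illustration): a table of previously verified input/output pairs is replayed in full after every code change, instead of re-testing by hand only the area around the change.

```python
# Hypothetical function under regression test: turns a title into a URL slug.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Previously verified behaviour, recorded as (input, expected output) pairs.
# Every code change triggers a full replay of this table.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Padded  ", "padded"),
    ("MixedCase", "mixedcase"),
]

def run_regression_suite():
    """Replay every recorded case; return the list of failures."""
    return [(inp, expected, slugify(inp))
            for inp, expected in REGRESSION_CASES
            if slugify(inp) != expected]

if __name__ == "__main__":
    failures = run_regression_suite()
    print("regression failures:", failures)
```

The table grows with each iteration, but replaying it costs machine time, not tester time – which is exactly the economics that makes automated regression testing attractive.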

Now, does that mean that manual testing of computer software stands nowhere near test automation? The answer is no. For all its virtues, test automation has its own limitations, and cost is one of them. Much like an ERP package, deploying and configuring an appropriate test automation tool incurs huge costs, often far more than running the tests manually.

Further, software testing involves some tasks that can never be automated. Tasks such as inquiry and analysis are not something a machine can do. Even where testing activities can be automated, there is no guarantee that the results will be accurate. The bigger problem is that, unlike a software feature, it is difficult to tell when the result produced by the test automation software is wrong. Ironically, it is the tester who is made the scapegoat every time the results go wrong because of an ill-built automation tool.

The fact of the matter is that “test automation” is itself something of a misnomer, simply because it requires a human being to select the right tool for the right purpose and to verify the results. Test automation merely helps human testers perform their task more effectively. Nevertheless, one cannot ignore the fact that the nature of the software testing process, and the time allocated for it, does demand some kind of automation.