Identify Testing Types and Exit Criteria


Identifying Manual / Automated Test Types

The types of tests that need to be designed and executed depend entirely on the objectives of the application, i.e., the measurable end state the organization strives to achieve. For example, if the application is a financial application used by a large number of individuals, special security and usability tests need to be performed.

However, three types of tests are nearly always required: function, user interface, and regression testing. Function testing comprises the majority of the testing effort and is concerned with verifying that the functions work properly. It is a black-box-oriented activity in which the tester is unconcerned with the internal behavior and structure of the application. User interface testing, or GUI testing, verifies the user's interaction with the functional window structure. It ensures that object state dependencies function properly and provide useful navigation through the functions. Regression testing retests the application in light of changes made during debugging, maintenance, or the development of a new release.
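A minimal sketch of the black-box regression idea described above. The interest-calculation function and its expected values are hypothetical; the point is that the suite exercises only inputs and outputs, never internals, and is rerun unchanged after every modification.

```python
# Black-box regression check sketch for a hypothetical financial function.
# The function, cases, and expected values are illustrative assumptions.

def monthly_interest(balance, annual_rate):
    """Hypothetical function under test: simple monthly interest."""
    return round(balance * annual_rate / 12, 2)

def run_regression_suite():
    """Re-run the same expected-result checks after every change."""
    cases = [
        (1200.00, 0.12, 12.00),   # nominal case
        (0.00,    0.12,  0.00),   # boundary: zero balance
        (1200.00, 0.00,  0.00),   # boundary: zero rate
    ]
    return [(b, r) for b, r, expected in cases
            if monthly_interest(b, r) != expected]

print(run_regression_suite())  # an empty list means no regressions
```

If a later change to `monthly_interest` alters any expected result, the failing cases appear in the returned list, which is exactly the regression signal the text describes.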

Other types of tests that need to be considered include system and acceptance testing. System testing is the highest level of testing; it evaluates the functionality as a total system, along with its performance and overall fitness for use. Acceptance testing is an optional user-run test that demonstrates the ability of the application to meet the user's requirements. This test may or may not be performed, depending on the formality of the project; sometimes the system test suffices.

Finally, the tests that can be automated with a testing tool need to be identified. Automated tests provide three benefits: repeatability, leverage, and increased functional coverage. Repeatability enables automated tests to be executed more than once, consistently. Leverage comes from replaying previously captured tests and from tests that can be programmed with the tool, which might not have been feasible without automation. As applications evolve, more and more functionality is added; with automation, the test library grows alongside the application so that functional coverage is maintained.
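The "leverage" benefit can be sketched as programmatically expanding a few seed cases into a suite far larger than a manual tester could repeat by hand. The `validate_amount` input rule below (a positive number with at most two decimals) is an assumption invented for illustration.

```python
# Sketch of automation leverage: generate many test cases from a pattern.
# validate_amount and its rules are hypothetical assumptions.

def validate_amount(text):
    """Hypothetical field rule: a positive number with at most 2 decimals."""
    try:
        value = float(text)
    except ValueError:
        return False
    if value <= 0:
        return False
    return "." not in text or len(text.split(".")[1]) <= 2

def generate_cases():
    """Expand a few seeds into a larger suite of (input, expected) pairs."""
    cases = []
    for whole in range(1, 4):
        for frac in ("", ".5", ".55", ".555"):
            cases.append((f"{whole}{frac}", frac != ".555"))
    cases += [("0", False), ("-1", False), ("abc", False)]
    return cases

results = [(t, validate_amount(t) == ok) for t, ok in generate_cases()]
print(all(passed for _, passed in results))  # True: every case behaves as expected
```

Because the cases are generated rather than captured one by one, the same loop can be rerun identically on every build, which is the repeatability the text describes.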

Identifying the Test Exit Criteria

One of the most difficult and political problems is deciding when to stop testing, since it is impossible to know when all the defects have been detected. There are at least four criteria for exiting testing:

Scheduled testing time has expired: This criterion is very weak, since it has nothing to do with verifying the quality of the application. It does not take into account that the number of test cases may be inadequate, or that any remaining defects may simply not be easily detectable.

Some predefined number of defects discovered: The problem with this criterion is knowing how many defects to expect, since the number may be either underestimated or overestimated. If the number of defects is underestimated, testing will be incomplete. Potential solutions include drawing on experience with similar applications developed by the same development team, predictive models, and industry-wide averages. If the number of defects is overestimated, testing may never be completed within a reasonable time frame. A possible solution is to estimate a completion time by plotting defects detected per unit of time. If the rate of defect detection decreases dramatically, there may be "burnout," an indication that a majority of the defects have been discovered.
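The "burnout" signal above can be sketched as a simple trend check on defects detected per unit of time. The weekly counts and the 50 percent threshold below are illustrative assumptions, not prescribed values.

```python
# Sketch of the "burnout" indicator: flag a sharply falling detection rate.
# The history data and the drop_ratio threshold are illustrative assumptions.

def detection_rate_falling(weekly_defects, drop_ratio=0.5):
    """True if the latest period's detections have fallen below
    drop_ratio times the peak rate seen in earlier periods."""
    if len(weekly_defects) < 2:
        return False
    peak = max(weekly_defects[:-1])
    return weekly_defects[-1] < drop_ratio * peak

# Defects found per week during a hypothetical system test cycle:
history = [14, 18, 11, 6, 3]
print(detection_rate_falling(history))  # True: the rate has dropped sharply
```

In practice the same data would be plotted over time, as the text suggests; a sustained sharp decline, rather than a single low week, is what supports a decision to stop.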

All the formal tests execute without detecting any defects: A major problem with this criterion is that the tester is not motivated to design destructive test cases that force the program to its design limits; the tester's job is considered complete when the test program surfaces no more errors. The tester is thus motivated not to find errors and may subconsciously write test cases that show the program to be error free. This criterion is valid only if a rigorous and comprehensive test case suite has been created that approaches 100 percent coverage, and the difficulty then becomes determining when the suite is truly comprehensive. If it is believed to be, a good strategy at this point is to continue with ad hoc testing. Ad hoc testing is a black-box technique in which the tester lets his or her mind run freely to enumerate as many test conditions as possible. Experience has shown that it can be a very powerful supplemental technique.

Combination of the above: Most testing projects use a combination of these exit criteria. It is recommended that all the formal tests be executed; any further ad hoc testing is then constrained by the time available.
