Regression testing is the re-running of test cases that a program has previously executed correctly, in order to detect failures introduced by changes or corrections made during software development and maintenance.
These failures arise from incomplete or incorrect changes and are often witnessed as (unexpected) side effects in apparently unrelated application areas. A commonly cited figure in the IT industry is that one in six (roughly seventeen percent) of correction attempts is itself defective.
This high rate of introduced defects is exacerbated when developers maintain a large number of poorly documented, integrated systems with which they often have little or no experience. Regression testing can then be highly effective at detecting subtle side effects and overlooked inter-relationships within these environments, thus reducing risk.
In regression testing, standard actions in a test procedure are carried out and the expected responses are checked for correctness. Failure of the system to reproduce any of the expected responses implies that the system may have regressed (one or more defects may have been introduced), or that the regression test itself is out of date or incorrect.
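The replay-and-compare cycle described above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: `calculate_discount` is a hypothetical function under test, and the "golden" expected responses stand in for outputs recorded from a previously correct version.

```python
# Minimal sketch of a regression check: replay known inputs and
# compare results against previously recorded ("golden") outputs.
# `calculate_discount` is a hypothetical function under test.

def calculate_discount(order_total):
    """Business rule under test: 10% off orders of 100 or more."""
    return order_total * 0.9 if order_total >= 100 else order_total

# Expected responses recorded from a previously correct version.
GOLDEN_CASES = [
    (50, 50),       # below threshold: no discount
    (100, 90.0),    # at threshold: discount applies
    (200, 180.0),   # above threshold
]

def run_regression(cases):
    """Return (input, expected, actual) tuples for every failed case."""
    failures = []
    for order_total, expected in cases:
        actual = calculate_discount(order_total)
        if actual != expected:
            failures.append((order_total, expected, actual))
    return failures

print(run_regression(GOLDEN_CASES))  # an empty list means no regression detected
```

Note that a non-empty failure list may equally mean the golden values themselves are stale, matching the point above that the test, not the system, may be at fault.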
If all responses are as expected, defects may still have been introduced; in that case they have escaped detection. Each defect reported from live or field use, having escaped detection during regression testing, must be carefully analysed and the regression test suite(s) updated to catch that defect, and similar ones, in the future.
The main source of regression test cases is usually re-use of unit, integration, or system test cases. It is good practice to batch test cases into logically cohesive test suites, rather than maintain a single huge regression test. This allows different subsets of tests to be executed under time pressure, or where there is confidence that only certain tests need to be run.
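One simple way to realise this batching is to register test cases under named suites and run only the suites that matter for a given build. The suite and test names below are illustrative assumptions, not from any particular tool:

```python
# Sketch: group regression test cases into cohesive suites so that
# subsets can be run under time pressure. All names are illustrative.

def check_login():    return True   # stand-ins for real test cases
def check_checkout(): return True
def check_reports():  return True

SUITES = {
    "critical": [check_login, check_checkout],  # run on every build
    "extended": [check_reports],                # run when time allows
}

def run_suites(names):
    """Run only the named suites; return {test_name: passed}."""
    results = {}
    for name in names:
        for test in SUITES[name]:
            results[test.__name__] = test()
    return results

print(run_suites(["critical"]))  # subset run under time pressure
```

Test frameworks such as pytest offer the same idea natively through markers and test selection expressions, so in practice the grouping is usually declared rather than hand-rolled.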
When first creating a regression test suite, the choice of tests can be guided by the 80/20 principle: twenty percent of system functions are likely to be used eighty percent of the time. These highly used screens, transactions, menus, or fields ought therefore to be the first candidates for regression tests. This is easy to understand if we consider one of these popular functions failing: the company call centre would be inundated with calls, and the majority of users would be negatively affected. If, however, one of the less common functions has a problem, fewer users will have used it and thus discovered the defect.
Further tests to add to regression suites may be guided by risk considerations. Certain failures may not occur often, but should they occur, they would have a highly negative business impact. These higher-risk business areas, modules, screens, or transactions should therefore be tested every time the system and/or its environment changes.
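Risk-based selection is often formalised as likelihood multiplied by impact. The sketch below shows one way to pick the areas that must always be regression-tested; the areas, scores, and threshold are illustrative assumptions:

```python
# Sketch of risk-based test selection: score each area by failure
# likelihood and business impact, and always regression-test areas
# whose risk score meets a threshold. All values are illustrative.

AREAS = {
    "payments":  {"likelihood": 2, "impact": 5},  # rare but severe
    "reporting": {"likelihood": 3, "impact": 2},
    "help_text": {"likelihood": 1, "impact": 1},
}

def must_test(areas, threshold=6):
    """Return areas whose risk score (likelihood * impact) meets the threshold."""
    return sorted(
        name for name, a in areas.items()
        if a["likelihood"] * a["impact"] >= threshold
    )

print(must_test(AREAS))  # ['payments', 'reporting']
```

Raising the threshold narrows the mandatory set, which is one way to decide what to drop first under time pressure.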
Additional regression tests can be added for application areas that are known to be difficult to maintain and have a history of high failure rates.
Regression testing can begin at the unit level, where unit tests may be adapted and rerun after changes to the unit have been made. Regression testing should then continue through the integration, system, user acceptance, and operational phases of the software development life cycle.
As a minimum, regression tests should be run prior to releasing builds into the broader community and/or the company's live environment. These tests help detect major anomalies that could have serious cost, schedule, productivity, and/or company-image implications.
Web systems and other multi-user systems might have ongoing regression tests run at regular intervals. For example, one such test may check that all hyperlinks on a web site remain correct and reachable. Links to other sites may become outdated, or may even be corrupted by hackers in a security breach.
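A recurring link check of this kind can be sketched with the standard library's HTML parser. The page content and the reachability stub below are illustrative; a real scheduled run would fetch each URL (for example with `urllib.request`) rather than consult a fixed set:

```python
# Sketch of a recurring link-check regression test. The HTML sample
# and the reachability stub are illustrative assumptions.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def find_broken_links(html, is_reachable):
    """Return the hrefs in `html` that `is_reachable` rejects."""
    parser = LinkExtractor()
    parser.feed(html)
    return [url for url in parser.links if not is_reachable(url)]

PAGE = '<a href="/home">Home</a> <a href="http://example.com/old">Old</a>'
KNOWN_GOOD = {"/home"}  # stub: pretend only /home still resolves

print(find_broken_links(PAGE, lambda url: url in KNOWN_GOOD))
```

Injecting the reachability check as a function keeps the test runnable offline while allowing the scheduled production run to plug in a real HTTP probe.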
Regression testing at regular intervals can also answer production questions such as: "Is the performance of our major transactions within acceptable time limits?" or "Is some factor slowing our response times on an important transaction?"
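Such a timed check amounts to comparing a transaction's measured duration against an agreed response-time budget. The transaction body and the 0.5-second budget below are illustrative assumptions:

```python
# Sketch of a performance regression check: fail if a key transaction
# exceeds an agreed response-time budget. Values are illustrative.
import time

RESPONSE_BUDGET_SECONDS = 0.5

def major_transaction():
    """Stand-in for e.g. an order-placement call."""
    time.sleep(0.01)
    return "ok"

def within_budget(transaction, budget):
    """Time one run of `transaction`; return (passed, elapsed_seconds)."""
    start = time.perf_counter()
    transaction()
    elapsed = time.perf_counter() - start
    return elapsed <= budget, elapsed

ok, elapsed = within_budget(major_transaction, RESPONSE_BUDGET_SECONDS)
print(ok)
```

In practice a single measurement is noisy, so a real check would time several runs and compare a median or percentile against the budget.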
Regression tests of non-functional application attributes such as performance, usability, or security are also very important. A very small change to the code or design may, for example, have a significant impact on system performance. Note also that debilitating changes may not even be within the application software: changes known to have had dire consequences include updates to a PC BIOS, operating system software, network cards, or third-party database versions.
Regression testing is by definition repetitive, and many of the tests are therefore likely to be suited to test automation. Test automation can deliver reduced testing costs after a few test iterations compared to labour-intensive manual testing.
Many companies that use regression testing conduct a very abbreviated check test (sometimes called a 'smoke' or 'sanity' test) on newly delivered code before starting their formal regression tests. This often saves time, because the abbreviated test commonly exposes obvious errors (for example, a whole form may not be displayed because it failed to compile against a changed database format). Removing this type of problem before running the time-consuming regression scenarios means developer help can be obtained earlier, and prevents testers completing a significant portion of the regression run before finding such problems.
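The smoke-test gate described above can be sketched as a short pipeline step: run a handful of fast checks first, and only start the long regression run if they all pass. The check names are illustrative assumptions:

```python
# Sketch of a smoke-test gate: run a few fast checks before committing
# to the full regression run. All names are illustrative.

def app_starts():        return True
def main_form_renders(): return True  # would catch the form/compile failure

SMOKE_CHECKS = [app_starts, main_form_renders]

def smoke_test_passes():
    return all(check() for check in SMOKE_CHECKS)

def run_pipeline(full_regression):
    """Gate the expensive regression run behind the smoke test."""
    if not smoke_test_passes():
        return "blocked: smoke test failed, build returned to developers"
    return full_regression()

print(run_pipeline(lambda: "full regression suite executed"))
```

Keeping the smoke checks fast is the whole point: the gate should cost minutes, not the hours the full regression run takes.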
The control and make-up of the test environment is as critical for regression testing as it is for other testing types within the same software development life cycle phase. Refer to the article on page eight, which discusses creating a testing environment.
Regression test suites, whether manual or automated, are an important company asset. They therefore need to be backed up, placed under configuration management, and kept current to deliver maximum benefit to their owners. Specific ownership of, and responsibility for, the regression test suites needs to be clearly defined.
Rahnuma is a technical content writer at Software Testing Stuff. A software engineer by degree and a dynamic content creator by passion, she brings to the table over three years of writing experience in the tech niche. Combining her enthusiasm for writing and technology, she loves to share her thoughts on the latest tech trends.