Continuing from my previous post on Test Strategy, I am describing it here in more detail.
A test strategy can be defined as a high-level management approach that establishes adequate confidence in the software product being tested, while ensuring that cost, effort and timelines all stay within acceptable limits.
A test strategy can exist at different levels:
- An enterprise-wide test strategy, to set up a test group for the entire organization
- An application or product test strategy, to set up the test requirements of a software product for its entire life
- A project-level test strategy, to set up test plans for a single project life cycle
The purpose of a test strategy is to:
- Define the objectives, timelines and approach for the testing effort
- List the various testing activities and define roles and responsibilities
- Identify and coordinate the test environment and data requirements before the testing phase starts
- Communicate the test plans to stakeholders and obtain buy-in from business clients
- Proactively minimize fire-fighting during the testing phase
A good test strategy answers questions such as:
- When should testing be stopped?
- What should be tested?
- What can remain untested?
Answering these questions requires understanding the risks involved.
1. Development-related risks include:
- Poorly controlled project timelines
- Complexity of the code
- Less-skilled programmers
- Defects in existing code
- Problems in team coordination
- Lack of reviews and configuration control
2. Testing-related risks include:
- Lack of specifications
- Lack of domain knowledge
- Lack of testing and platform skills
- Lack of a test bed and test data
- Lack of sufficient time
3. Usage-related risks include:
- Dynamic frequency of usage
- Complexity of the user interface
- High business impact of the function
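These risk factors can feed a simple risk-based prioritization. The sketch below (plain Python, with hypothetical areas and scores of my own invention) orders candidate test areas by likelihood of failure times business impact:

```python
# Risk-based prioritization sketch: score each area by likelihood of
# failure and business impact (1-5 scales; areas and scores below are
# hypothetical, not from the post).

def risk_score(likelihood: int, impact: int) -> int:
    """Exposure = likelihood of failure x business impact."""
    return likelihood * impact

areas = {
    "payment processing": (4, 5),  # complex code, high business impact
    "user login":         (3, 5),  # heavily used, high impact
    "report export":      (2, 2),  # stable, low impact
}

# Test the highest-exposure areas first.
prioritized = sorted(areas, key=lambda a: risk_score(*areas[a]), reverse=True)
```

The highest-exposure areas then get the most testing effort, and low-exposure areas may fall under "what can remain untested".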
The economics of testing drive several of these decisions:
- The earlier a bug is detected, the cheaper it is to fix.
- Splitting the testing into smaller parts and then aggregating ensures a quicker debug-and-fix cycle.
- Bugs found in critical functions take more time to fix.
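The first point is often expressed as a relative cost multiplier per phase. The figures below are common rule-of-thumb values for illustration only, not data from this post:

```python
# Illustrative rule-of-thumb multipliers for the relative cost of fixing
# a defect depending on the phase in which it is found.
cost_multiplier = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "system test": 20,
    "production": 100,
}

def fix_cost(phase: str, base_cost: float = 100.0) -> float:
    """Estimated cost of fixing one defect found in the given phase."""
    return base_cost * cost_multiplier[phase]
```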
Defining a test strategy typically involves these steps:
- Determine the objective and scope of the testing.
- Identify the types of tests required.
- Based on the external and internal risks, add, modify or delete processes.
- Plan the environment, test bed, test data and other infrastructure.
- Plan a strategy for managing changes, defects and timeline variations.
- Decide both the in-process and the post-process metrics to match the objectives.
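The outputs of the steps above can be captured in a simple structure. The field names in this sketch are my own illustration, not a standard schema:

```python
from dataclasses import dataclass, field

# Hypothetical container for the outputs of the strategy-definition steps.
@dataclass
class TestStrategy:
    objective: str
    scope: list = field(default_factory=list)
    test_types: list = field(default_factory=list)
    environments: list = field(default_factory=list)
    in_process_metrics: list = field(default_factory=list)
    post_process_metrics: list = field(default_factory=list)

strategy = TestStrategy(
    objective="Verify release 2.0 meets agreed quality goals",
    scope=["order module", "billing module"],
    test_types=["functional", "regression", "performance"],
    environments=["QA", "staging"],
    in_process_metrics=["defects found per week"],
    post_process_metrics=["defect leakage to production"],
)
```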
For maintenance projects, the strategy should also answer:
- How much longer will the software be supported, and is it worthwhile strategizing to improve its testing?
- Is it worthwhile to incrementally automate the testing of the existing code and functionality?
- How much of the old functionality should we test to gain confidence that the new code has not broken the old code?
- Should the same set of people support the testing of multiple software maintenance projects?
- What analysis of the current defect database will help us improve the development itself?
For product testing, additional questions apply:
- How much longer is the product going to last? This includes multiple versions, since test cases and artifacts can continue to be used across versions.
- Would automation be worth considering?
- How much testing should we do per release (minor, patch, major, etc.)?
- Do we have a risk-based test plan that will allow releases to be made earlier than planned?
- Is the development cycle iterative? Should the test cycle follow suit?
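The per-release question can be made concrete with a mapping from release type to the test suites to run. The suite names and the mapping here are illustrative assumptions:

```python
# Sketch of per-release test scoping: the release type decides which
# suites run. The mapping is an illustrative assumption.
SUITES_BY_RELEASE = {
    "patch": ["smoke"],
    "minor": ["smoke", "regression"],
    "major": ["smoke", "regression", "performance", "full functional"],
}

def suites_for(release_type: str) -> list:
    """Return the test suites to execute; unknown types get smoke only."""
    return SUITES_BY_RELEASE.get(release_type, ["smoke"])
```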
A test strategy document typically covers the following areas.
1. Objective and scope of testing
- What is the business objective of testing?
- What are the quality goals to be met by the testing effort?
- To what extent will each application be tested?
- What external systems will not be tested?
- What systems and components need to be tested?
2. Test approach
- The different phases of testing that are required
- The different types of testing
- Test coverage
3. Testing process
- Definition of the testing process life cycle
- Creation of testing-related templates, checklists and guidelines
- Methodology for test development and execution
4. Test environment
- Specification of the test environment setup
- Planning for test execution cycles
- Hardware and software requirements
- Test data creation, database requirements and setup
- Configuration management, maintenance of the test bed and build management
5. Test automation
- Criteria and feasibility of test automation
- Test tool identification
- Test automation strategy (effort, timelines, etc.)
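One rough way to judge automation feasibility is a break-even estimate: automation pays off once the cumulative manual execution cost exceeds the cost of building the automation plus its per-run cost. The sketch below uses illustrative figures:

```python
# Break-even sketch for automation feasibility. All costs are in the same
# arbitrary unit (e.g. hours); the figures passed in are illustrative.

def breakeven_runs(manual_cost: float, build_cost: float,
                   automated_run_cost: float) -> int:
    """Smallest run count at which automating becomes cheaper than manual."""
    if manual_cost <= automated_run_cost:
        raise ValueError("automation never pays off per run")
    runs = 1
    while runs * manual_cost <= build_cost + runs * automated_run_cost:
        runs += 1
    return runs
```

If the suite will be executed fewer times than the break-even count over its remaining life, manual execution may be the cheaper choice.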
6. Team organization
- Testing team organization and reporting structure
- The different roles within the testing activities and their corresponding responsibilities
- Who to escalate to, and when?
7. Defect management
- Categorization of defects based on criticality and priority
- Definition of a workflow for the disposition of defects
- Techniques and tools for tracking defects
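A minimal model of such a defect workflow might look like the following; the severity categories and states are typical examples, not a standard:

```python
# Minimal defect model: severity (criticality) and priority are separate
# axes, and status moves through a fixed workflow. States and categories
# below are typical examples only.
SEVERITIES = ("critical", "major", "minor", "cosmetic")
WORKFLOW = ["new", "assigned", "fixed", "retested", "closed"]

class Defect:
    def __init__(self, summary: str, severity: str, priority: int):
        assert severity in SEVERITIES
        self.summary = summary
        self.severity = severity
        self.priority = priority      # 1 = fix first
        self.status = WORKFLOW[0]

    def advance(self) -> str:
        """Move the defect to the next state in the workflow."""
        i = WORKFLOW.index(self.status)
        if i < len(WORKFLOW) - 1:
            self.status = WORKFLOW[i + 1]
        return self.status

d = Defect("Checkout crashes on empty cart", "critical", 1)
d.advance()  # new -> assigned
```

Keeping severity and priority separate matters: a cosmetic defect on the home page may be low severity but high priority.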
8. Status reporting
- Status meetings for communicating testing status
- Format and content of the different status reports
- Periodicity of each status report
- Distribution list for each report
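The numbers in a status report can be derived mechanically from the defect log. The record format below is an assumption for illustration:

```python
from collections import Counter

# Sketch of a periodic status summary built from a defect log; the
# record fields are assumptions for illustration.
defect_log = [
    {"id": 1, "severity": "critical", "status": "open"},
    {"id": 2, "severity": "minor",    "status": "closed"},
    {"id": 3, "severity": "major",    "status": "open"},
]

def status_summary(log) -> dict:
    """Counts that a weekly status report might carry."""
    return {
        "total": len(log),
        "open": sum(1 for d in log if d["status"] == "open"),
        "by_severity": dict(Counter(d["severity"] for d in log)),
    }
```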
9. Risk management
- Identification of all testing-related risks, their impact and exposure
- Plan for mitigating and managing these risks
10. Configuration and change management
- List of testing artifacts under version control
- Tools and techniques for configuration management
- Plan for managing requirement changes
- Models for assessing the impact of changes on testing
- Process for keeping test artifacts in sync with development artifacts
11. Metrics
- What metrics are to be collected? Do they match the strategic objectives?
- What will be the techniques for collecting metrics?
- What tools will be employed to gather and analyze metrics?
- What process improvements are planned based on these metrics?
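Two commonly collected metrics, defect density and defect removal efficiency, can be computed directly from raw counts. The formulas are standard; the example numbers are illustrative:

```python
# Two common in-process/post-process metrics computed from raw counts.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_removal_efficiency(found_in_test: int, found_in_prod: int) -> float:
    """Share of all known defects caught before release."""
    total = found_in_test + found_in_prod
    return found_in_test / total if total else 1.0
```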