Develop Initial Test Strategy

During the Inception phase, the team should consider developing an initial test strategy so as to identify how they will approach verification and validation. This includes thinking through how you intend to address each of the concerns described in the sections below.

Staffing Strategy

You need to determine the type of people you intend to have performing testing activities so that you can bring the right people onto the team when needed.

Options Trade-Offs
Generalizing Specialists. Someone with one or more deep specialties, in this case in testing, a general understanding of the overall delivery process, and the desire to gain new skills and knowledge.
  • Provides greater flexibility for staffing, greater potential for effective collaboration, greater potential for overall productivity, and greater career opportunities
  • It takes time for existing specialists to grow their skills, and some people prefer to remain specialized; This option requires development skills so that team members can write automated tests, not just manual ones
Exploratory Testers. Someone who is skilled at identifying where solutions exhibit unexpected or broken behavior. Often includes ad hoc manual regression for dependent functionality.
  • Find potential defects that the stakeholders may not have thought of
  • Requires significant time and effort to gain this skill; Exploratory testing is a manual endeavor
Test automation specialists. Someone with the ability to write automated tests/checks.
  • Supports the creation of automated regression tests, which in turn enables teams to safely evolve their solutions
  • Requires investment in the automated tests themselves, which may appear expensive in the short term when it comes to legacy software
Manual testers. Someone with the skill to develop test cases from requirements, write manual test scripts, and follow test scripts to identify and record the behavior exhibited by the solution under test.
  • The solution is validated to some extent and supports a structured approach to user acceptance testing (UAT)
  • A very slow and expensive approach to solution validation


Teaming Strategy

You need to decide how testing and quality assurance (QA) professionals will work with or as part of the delivery team so that everyone involved knows how the team(s) are organized.

Options (Ordered) Trade-Offs
Whole team/embedded testers. People with testing skills are embedded directly into the delivery team so as to test the solution as it is being developed.
  • Improved collaboration; Faster feedback between testers and developers
  • Often requires people to become generalizing specialists
Parallel independent test team. An independent test team works in parallel to the delivery team to handle the more difficult or expensive testing activities.
  • Can fulfill regulatory requirements around independent verification; Can decrease the cost of some forms of testing; Enables highly skilled testers to focus on their specialty; Useful where the team does not have access to a production-like environment, where it is difficult for the team to test sophisticated transactions across systems such as legacy integrations (for example end of month batch processing) or where a traditional cycle-based regression of test cases is required.
  • Increases complexity of the testing process; Requires a strategy to manage potential defects discovered by the independent testers; Can lengthen Transition phase; Complicates what it means to be “done”
Independent test team. Some testing activities, often user acceptance testing (UAT) and system integration testing (SIT), are left to the end of the lifecycle to be performed by a separate team.
  • Easy strategy for existing traditional testers to adopt; Focuses UAT and SIT efforts at the end of the delivery lifecycle
  • Lengthens the Transition phase substantially; Increases overall risk due to key testing activities being pushed to the end of the lifecycle; Increases average cost of fixing defects due to the long feedback cycle
External (outsourced) test team. An independent test team staffed by a different organization, typically an IT service provider.
  • The testing effort is often less expensive due to wage differences; May be only way to gain access to people with testing skills
  • Requires significant (and expensive) requirements documentation to be provided to the testing team; Lengthens the Transition phase substantially; Increases overall risk due to key testing activities being pushed to end of lifecycle; Increases average cost of fixing defects due to long feedback cycle


Level of Detail

You need to identify the level of detail for the test strategy documentation you are developing so as to set expectations as to how much documentation is needed.

Options (Ordered) Trade-Offs
Goals driven. A high-level collection of testing and quality assurance (QA) principles or guidelines meant to drive the team’s decision making around testing and QA.
  • Provides sufficient guidance to skilled team members
  • Insufficient guidance for low-skilled team members (they will require coaching or help via non-solo strategies); May be insufficient for some regulations, particularly life-critical ones
Lightweight overview. A concise description of your test strategy, potentially in point form.
  • Provides sufficient guidance for most team members; Often sufficient for regulatory environments
  • May not be read by senior team members
Detailed specification. A descriptive and thorough test strategy document.
  • Provides sufficient guidance for outsourced/external testing and for low-skilled team members
  • Very expensive to create and maintain; Often allowed to get out of sync with other team artifacts; Very likely to be ignored by skilled team members


Test Approaches

You need to identify the general testing approaches/categories that you intend to follow so that you know what skills people need and potentially to identify the type of test tooling required.

Options Trade-Offs
Blackbox. The solution is tested via its external interface, such as the user interface (UI) or application programming interface (API). Note: Acceptance criteria are typically implemented at the black box level.
  • Enables you to test a large percentage of scenarios; Can be a very good starting point for your testing efforts
  • Difficult to test internal components, or portions thereof
Clearbox. The internals of the solution are tested. Note: Also known as “white box” and developer unit tests are typically at the white box level.
  • Potential to test all scenarios
  • Can be very expensive; Requires intimate knowledge of the design and implementation technologies
Exploratory. An experimental approach to testing where the goal is to find the weak points of the solution where it exhibits unexpected behaviors or even breaks. Sometimes called negative testing.
  • Enables you to test for the things that the stakeholders didn’t think of; Often identifies problems that would have otherwise been found in production (and hence expensive to fix)
  • Requires significant skill
Confirmatory. The validation that the solution works as requested. Sometimes called “testing to the specification” or positive testing.
  • Confirms that you’ve produced what you said you would
  • Falsely assumes that your stakeholders are able to identify all of the requirements; Can unfortunately motivate an expensive “big requirements up front (BRUF)” approach


Development Strategy

You need to identify how the team will approach development so that you know how to properly staff the team with people who have testing and programming skills.

Options (Ordered) Trade-Offs
Test-first programming. The developer(s) writes one or more tests before they write the production code that fulfills those tests. Sometimes called test-driven programming or test-driven development (TDD).
  • Supports confirmatory testing; Enables the team to safely evolve their solution; Supports regression testing by the team; Results in better design by forcing team members to think through design before committing to code; Ensures writing of unit tests are not “forgotten” or not done due to time constraints
  • Requires significant discipline and skill amongst team members; May require significant investment in writing tests when working with legacy software
Test-after programming. The developer(s) write some production code then write the test(s) that validate the code works as requested.
  • Easier for the team to get going with regression testing; Requires less discipline
  • Requires testing skills within the team; May require significant investment in writing tests when working with legacy software
Testless programming. The developer(s) write some code then provide it to someone else to validate.
  • Potential starting point for teams made up of specialists (i.e. some team members are programmers, others just focus on testing)
  • Often leads to a slow, mini-waterfall approach to iterations; Motivates longer iterations as a result of the mini-waterfall; Supports less-effective testing strategies (e.g., manual testing)
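
The test-first flow can be illustrated with a small Python sketch; the `slugify` function and its rules are hypothetical. In practice the test below would be written first (and seen to fail), and then the minimal production code to make it pass would be added:

```python
# Test-first sketch: the test is written before the production code.
# The slugify function and its rules are hypothetical examples.
def test_slugify() -> None:
    assert slugify("Agile Testing 101") == "agile-testing-101"
    assert slugify("  extra   spaces ") == "extra-spaces"

# Minimal production code written to make the failing test pass.
def slugify(title: str) -> str:
    return "-".join(word.lower() for word in title.split())

test_slugify()  # green: both assertions pass
```

The discipline is in writing no more production code than the current failing test demands, then refactoring with the test as a safety net.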


Test Environment(s) Platform Strategy

You need to identify your strategy(ies) for how you intend to deploy new test platforms or, better yet, leverage existing test platforms.

Options (Ordered) Trade-Offs
Cloud-based. Your test environment is hosted in a private or public cloud environment.
  • Potential for very efficient use of test platform resources; Potential for quick and easy access to testing platforms
  • A private cloud environment will need to be built and maintained or a public cloud environment contracted for and operated; May not be possible to fully approximate production.
Physical. Separate hardware and software is provided for testing.  
  • Provides greatest opportunity to approximate production.
  • Testing environments are often underfunded and difficult or slow to access for development teams (injecting bottlenecks into your process)
Virtual. Virtualization software is used to provide a test environment.
  • Very flexible way for multiple teams to share physical testing environments
  • It may not be possible to fully approximate your production environments; You still need a physical environment on which the virtual environment can run


Test Environment(s) Equivalency Strategy

You need to identify your approach to how closely your test environments will represent your production environments.

Options (Ordered) Trade-Offs
Production equivalent. Your test environment is an exact, or at least very close, approximation of your production environment. This includes hardware and software configurations identical to what is available in production.
  • Provides greatest level of assurance
  • Can be very expensive; Appropriate for pre-production test environments in high-risk situations
Production approximate. An approximation, often involving significantly less hardware than what is operational, of production is used for testing.
  • Less expensive; Appropriate for team integration test environments and for pre-production test environments in low-risk situations
  • Your tests will miss production problems, risking very expensive fixes
Disparate. The testing environment is significantly different from production. Disparate test environments are often built using inexpensive hardware or are simply a partition on a developer’s workstation.
  • Very inexpensive testing environments; Appropriate for developer workstations
  • Very poor at finding integration problems due to poor approximation of production


Non-Functional Testing Strategy

You need to identify your approach(es) to validating non-functional, also known as quality of service (QoS), requirements for your solution. Non-functional requirement categories include, but are not limited to, the options listed in the table below.

Options Trade-Offs
Security. Typical security issues include confidentiality, authentication, authorization, integrity, and non-repudiation.
  • Discover security problems before they occur
  • Security testing often requires deep expertise in security and potentially expensive testing tools
Usability. End users are asked to perform certain tasks to explore issues around ease of use, task time, and user perception of the experience.
  • Discover usability problems while they are still relatively easy to address
  • Usability testing often requires deep experience in user experience (UX) skills; Often requires access to potential end users
Performance. Determine the speed or effectiveness of your solution or portion thereof.
  • Discover performance problems before your solution is released into production
  • May require significant amounts of test data
Availability. Ensure service reliability and your solution’s fault tolerance (the ability to respond gracefully when a software or hardware component fails).
  • Ensure that your solution fulfills availability and reliability requirements
  • Often requires long-running tests; More of a production monitoring issue
Concurrency. The goal is to detect problems around locking, deadlocking, semaphores and single-threading bottlenecks. Sometimes called multi-user testing.
  • Ensure that your solution works when many simultaneous users are working with it
  • Often requires long-running, complex tests
Resilience/reliability. Determine whether your solution will continue to operate over the long term.
  • Ensure that your solution will operate for a long period of time
  • Often requires long-running tests; More of a production monitoring issue
Volume/rate. Determine that your solution will perform properly under heavy load or large volumes of data. Sometimes called load testing or stress testing.
  • Ensure that your solution works under heavy load
  • Requires significant amounts of test data
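
As a minimal illustration of a performance check, the following Python sketch times an operation against an assumed response-time budget; real performance testing typically uses dedicated load-testing tools, realistic data volumes, and production-like environments:

```python
import time

# Minimal sketch of a performance check (the operation and its one-second
# budget are illustrative assumptions): assert that a unit of work stays
# within an agreed response-time target.
def operation(n: int) -> int:
    return sum(range(n))  # stand-in for the work being measured

def timed(fn, *args) -> float:
    """Return the wall-clock seconds taken by one call to fn."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

BUDGET_SECONDS = 1.0  # assumed service-level target
elapsed = timed(operation, 100_000)
assert elapsed < BUDGET_SECONDS, f"too slow: {elapsed:.3f}s"
```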


Test Suite Strategy

You need to identify the approach that you’re taking to your test suites so that you know how your regression test tooling needs to be configured.

Options (Ordered) Trade-Offs
Multiple regression test suites. There are several regression test suites, such as a fast-running suite that runs on developer workstations, a team integration suite that runs on the team integration sandbox (this suite may run for several minutes or even hours), a nightly test suite, and more.
  • Enables quick test suites to support team regression testing; Supports testing in complex environments; Supports the running of test suites on different cadences (run on commits, run at night, run on weekends, …)
  • Requires several testing environments; Requires strategy for deploying between testing environments; Often requires defect reporting strategy
Single regression test suite. All tests are invoked via a single test suite.
  • Supports testing in straightforward situations
  • Requires creation and maintenance of a test environment
Manual testing. Tests are run manually, often by someone following a documented test script.
  • Appropriate for exploratory testing
  • Very ineffective for regression testing (automate your tests instead); Very slow and expensive; Does not work with modern iterative and agile strategies
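
The multiple-suite option can be sketched with a simple tag-based registry in Python; the suite names and tests are illustrative stand-ins for the tag or marker mechanisms that real test frameworks provide:

```python
import time

# Sketch of splitting tests into multiple regression suites (names and
# tests are illustrative assumptions).
SUITES = {"fast": [], "nightly": []}

def suite(name):
    """Decorator that registers a test function into a named suite."""
    def register(fn):
        SUITES[name].append(fn)
        return fn
    return register

@suite("fast")
def test_addition():
    assert 1 + 1 == 2  # quick check suited to developer workstations

@suite("nightly")
def test_slow_integration():
    time.sleep(0.01)   # stand-in for an expensive end-to-end scenario
    assert True

def run(name):
    """Run one suite: e.g. 'fast' on every commit, 'nightly' on a schedule."""
    for test in SUITES[name]:
        test()
    return len(SUITES[name])

print(run("fast"))  # runs only the quick suite
```

The same tests can then be executed on different cadences simply by invoking different suite names from your automation tooling.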


Test Data Source

You need to identify your approach to obtaining test data to support your testing efforts.

Options Trade-Offs
Original production data. A copy, often a subset, of actual “live” data is used.
  • Easy source of accurate test data
  • Production data, including subsets of it, is often protected by privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the US; Current production data may not cover all test scenarios; Often too much data, requiring you to take a subset of it
Masked production data. Original production data is used where some data elements, typically data that can be used to identify an individual or organization, are transformed into non-identifying values.
  • Easy source of accurate test data with privacy concerns addressed
  • Current production data may not cover all test scenarios; Often too much data, requiring you to take a subset of it
Generated data. Large amounts of test data, often with random data values, are generated.
  • Very effective for volume testing
  • Very ineffective for anything other than volume testing due to the random nature of the data values generated
Engineered data. Test data is purposefully created so as to provide known data values to support specific test scenarios.
  • Potential to cover all testing scenarios
  • Many problems occur with production data that you haven’t predicted; Expensive and time consuming to engineer test data
Service/source virtualization. Application of mock or simulation software to enable testing of difficult-to-access solution components (e.g., hardware components or external systems).
  • Simulates systems that you cannot safely or economically test against
  • May not fully simulate the actual system; You still need to test against the actual system
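
As an illustration of the masked production data option, the following Python sketch replaces identifying fields with deterministic pseudonyms; the field names and masking scheme are assumptions, and a real masking pipeline would be driven by your specific privacy requirements:

```python
import hashlib

# Sketch of masking production data (field names and scheme are assumptions):
# identifying elements are replaced with deterministic pseudonyms so that
# records remain joinable across tables while no longer identifying anyone.
IDENTIFYING_FIELDS = {"name", "email", "ssn"}

def mask_value(value: str) -> str:
    """Deterministic pseudonym: the same input always maps to the same token."""
    return "MASK-" + hashlib.sha256(value.encode()).hexdigest()[:8]

def mask_record(record: dict) -> dict:
    """Transform identifying elements, leave everything else untouched."""
    return {
        key: mask_value(str(value)) if key in IDENTIFYING_FIELDS else value
        for key, value in record.items()
    }

customer = {"name": "Pat Example", "email": "pat@example.com", "balance": 42.50}
masked = mask_record(customer)
assert masked["balance"] == 42.50            # non-identifying data untouched
assked = masked["name"] != customer["name"]  # identifying data transformed
assert masked["name"].startswith("MASK-")
```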


Automation Strategy

You need to determine the level of automation you intend to implement for your test and possibly deployment suites so that you know what tooling support you need and what your team’s potential ability to evolve your solution will be.

Options (Ordered) Trade-Offs
Continuous deployment (CD). When your solution is successfully integrated within a given environment or sandbox it is automatically deployed (and hopefully automatically integrated via CI) in the next level environment.
  • Automates “grunt work” around deployment; Supports regression testing in complex environments; Enables the continuous delivery lifecycle
  • Requires investment in CD tooling
Continuous integration (CI). When something is checked in, such as a source code file or image file, the solution is automatically built again. The changed code is compiled, the solution is integrated, the solution is tested, and optionally code analysis is performed.
  • Automates “grunt work” around building your solution; Supports regression testing in each of your test environments; Important step towards continuous delivery; Key enabler of agile solution delivery
  • Requires investment in CI tooling
Scripts. One or more scripts are manually run to build the solution.
  • Important step towards CI (you need the scripts for your CI tool(s) to invoke)
  • Overhead of running the scripts means team members will do it less often, leading to longer feedback cycles
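
The scripts option, the stepping stone toward CI, can be sketched in Python; the step names and commands here are trivial placeholders for your real compile, test, and analysis tooling, and a CI server would simply invoke a script like this on every check-in:

```python
import subprocess
import sys

# Sketch of a build script that a CI tool could invoke (step commands are
# illustrative placeholders; substitute your real build and test tooling).
STEPS = [
    ("compile", [sys.executable, "-c", "print('compiling...')"]),
    ("test",    [sys.executable, "-c", "print('running regression suite...')"]),
]

def build() -> bool:
    """Run every step in order, stopping at the first failure (the CI gate)."""
    for name, cmd in STEPS:
        if subprocess.run(cmd).returncode != 0:
            print(f"step '{name}' failed -- build is broken")
            return False
    return True

print("build passed" if build() else "build failed")
```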


Defect Reporting

You need to identify how you intend to report/record defects, if at all, so that the team knows how they will do so and what tools they will require.

Options (Ordered) Trade-Offs
Conversation. The defect is reported to the appropriate developer(s) by speaking to them.
  • Fast and efficient way to communicate the issue
  • Not sufficient if you require documentation about the defect; Does not support defect tracking measurements
Agile management tool. A management tool such as Atlassian Jira or VersionOne is used to document and report the defect.
  • Developers are likely already using such a tool to manage their other work items; Defects can be easily treated as just another work item type; Supports defect tracking and reporting
  • Existing testers may not be familiar with this tooling; Test teams that have to support a multi-modal IT department may need to use different tools to report defects back to different teams
Bug/defect tracker. A defect tracker, such as Bugzilla or Quickbugs, is used to document and report the defect.
  • Specific tooling, including reporting, around defect tracking offers potential for best of breed; Possible to track quality metrics such as escaped defects
  • Requires the team to adopt one more tool; May not integrate well, if at all, with any agile work management software being used; May make it harder to make all work visible due to integration challenges
Test management tool. A test management tool such as HP Quality Center/ALM is used to document and report the defect.
  • Existing testers will be familiar with existing test management software; Possible to track quality metrics such as escaped defects
  • Test management tools often automate unnecessary traditional test management bureaucracy; Requires the team to adopt one more tool; May not integrate well, if at all, with any agile work management software being used; May make it harder to make all work visible due to integration challenges


Quality Governance Strategies

You need to identify the quality strategies that the team intends to adopt to govern the quality of the work they will produce.

Options (Ordered) Trade-Offs
Non-solo work. People work together via practices such as pairing, mob programming, and modeling with others.
  • Enables knowledge, skill, and information sharing between team members; Potential defects/issues found and hopefully addressed at point of injection, leading to higher quality and lower cost of defect removal
  • Development can be a bit slower and more expensive in practice than people working alone (although this is often more than made up for in lower cost of addressing defects)
Automated code analysis. Code analysis tools such as Cast and SonarQube are used to either statically or dynamically evaluate the source code to identify potential quality problems.
  • Automates a lot of the “grunt work” of code reviews; Potential to find a very wide range of common defect types; Effective way to ensure common coding conventions are followed
  • Not all potential defects can be found automatically
Quality guidelines. Quality guidelines – including but not limited to code quality, data quality, and documentation quality – are shared with delivery teams.
  • Simple way to capture common values and principles to motivate improved quality and consistency; Captures common, cross-team attributes for Definition of Done (DoD)
  • Some developers require detailed instructions (so codify them with your code analysis tooling)
Informal reviews. Work is reviewed and feedback is provided, often in a lightweight manner.
  • Great technique for sharing skills, promoting common values within the team, and for finding potential defects; May be sufficient for some regulatory compliance situations
  • Longer feedback cycle than automated code analysis or non-solo strategies
Formal reviews. Work is reviewed in a structured manner, often following defined procedures.
  • Supports some regulatory compliance requirements
  • Long feedback cycle; Can require significant planning and documentation overhead
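
To illustrate what automated code analysis does under the hood, here is a minimal Python sketch of a single rule that flags functions exceeding an assumed length guideline; production tools such as SonarQube apply hundreds of such rules across many defect categories:

```python
import ast

# Sketch of one automated code-analysis rule (the limit is an assumed team
# guideline): flag functions longer than N lines.
GUIDELINE_MAX_LINES = 5

SOURCE = '''
def short():
    return 1

def long_one():
    a = 1
    b = 2
    c = 3
    d = 4
    e = 5
    return a + b + c + d + e
'''

def functions_over_limit(source: str, limit: int) -> list:
    """Return names of functions whose body spans more than `limit` lines."""
    tree = ast.parse(source)
    return [
        node.name
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
        and (node.end_lineno - node.lineno) > limit
    ]

print(functions_over_limit(SOURCE, GUIDELINE_MAX_LINES))  # flags long_one only
```

Running such checks in the CI pipeline makes the quality guidelines executable rather than aspirational.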