Independent Testing and Agile Teams

The majority of testing, and in simple situations all of it, is performed by the agile delivery team itself. This is because we strive to have cross-functional “whole teams” that have the capability and accountability to perform the activities of solution delivery from beginning to end. For organizations new to agile this means embedding testers on agile teams; for organizations experienced in agile it means motivating agile team members to gain testing skills over time (often via close collaboration with people who already have those skills).

This blog is organized into the following topics:

  1. Parallel Independent Testing
  2. Why Independent Testing?
  3. Questionable Reasons to Adopt Independent Testing
  4. But That Isn’t Agile!

Parallel Independent Testing

The whole-team approach to development, where agile teams test to the best of their ability, is a great start, but it isn’t sufficient in some scaling situations. In these situations, described below, you need to consider instituting a parallel independent test team that performs some of the more difficult or expensive forms of testing. As you can see in Figure 1, the basic idea is that on a regular basis the development team makes their working build available to the independent test team. The cadence varies – we’ve seen teams do so at the end of each iteration, on a nightly basis, or as part of their continuous deployment (CD) strategy – and needs to be negotiated between the two teams (a minimal sketch of such a hand-off follows Figure 1).

Figure 1. Independent testing with an agile team.

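To make this hand-off concrete, here is a minimal Python sketch of “making the working build available” to the independent test team, together with the updated release notes discussed below. The paths, manifest format, and notes convention are illustrative assumptions that the two teams would negotiate, not part of any particular CI product:

```python
#!/usr/bin/env python3
"""Hypothetical build hand-off script. Everything here (paths, manifest
format, release-notes convention) is an illustrative assumption, not part
of any specific CI toolchain."""

import hashlib
import json
import shutil
from datetime import datetime, timezone
from pathlib import Path

BUILD_ARTIFACT = Path("dist/app.tar.gz")                 # produced by the dev team's CI
HANDOFF_DIR = Path("/shared/independent-test/incoming")  # agreed drop location


def publish_build(changes_since_last_drop: str) -> Path:
    """Copy the build plus a manifest so testers know exactly what they got."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = HANDOFF_DIR / stamp
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(BUILD_ARTIFACT, target / BUILD_ARTIFACT.name)

    # A checksum lets the test team verify they received the exact build.
    manifest = {
        "build": BUILD_ARTIFACT.name,
        "sha256": hashlib.sha256(BUILD_ARTIFACT.read_bytes()).hexdigest(),
        "published": stamp,
        "changes_since_last_drop": changes_since_last_drop,
    }
    (target / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return target


if __name__ == "__main__":
    publish_build("Added payment retry logic; upgraded TLS library.")
```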

The goal of this independent testing effort is not to redo the confirmatory testing that is already being done by the development team, but instead to identify the defects that may have fallen through the cracks. The implication is that this independent test team does not need a detailed requirements specification, although they may need updated release notes, including a list of changes since the last time the development team sent them a build. Instead of testing against the specification, the independent testing effort typically focuses on:

  1. Investigative/exploratory testing. Confirmatory testing approaches, such as test-driven development (TDD), validate that you’ve implemented the requirements as they’ve been described to you. But what happens when requirements are missed? What happens when requirements are narrowly focused on a point-specific solution rather than on the overall ecosystem? User stories are a great way to explore functional requirements, but defects surrounding non-functional requirements such as security, usability, and performance tend to be missed by teams new to this approach (the sketch after this list illustrates the difference in mindset).
  2. Production readiness testing. This is sometimes called pre-production system integration testing. The system that you’re building must “play well” with the other systems currently in production when your system is released. To do this properly you must test against versions of other systems that are themselves currently under development, implying that you need access to updated versions on a regular basis. This is fairly straightforward in small organizations, but if your organization has dozens, if not hundreds, of IT teams underway it becomes overwhelming for individual development teams to gain such access. A more efficient approach is to have an independent test team be responsible for such enterprise-level system integration testing.
  3. Difficult testing. Many forms of testing require sophisticated skills and sometimes even expensive tooling. Security testing is a classic example.
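To make the difference in mindset concrete, here is a hypothetical pytest sketch. The `banking.transfer` function, the account IDs, and the 200 ms budget are assumptions for illustration only; the first test confirms the story as written, while the other two probe for things the story never said:

```python
"""Illustrative pytest sketch contrasting confirmatory and exploratory/
non-functional checks. `banking.transfer` is a hypothetical stand-in for
the system under test; thresholds are examples a team would tune."""

import time

import pytest

from banking import transfer  # hypothetical module under test


def test_transfer_debits_and_credits():
    """Confirmatory (TDD-style): validates the story as written."""
    result = transfer(source="A-100", target="B-200", amount=50.00)
    assert result.source_balance_delta == -50.00
    assert result.target_balance_delta == +50.00


def test_transfer_rejects_negative_amounts():
    """An exploratory finding turned into a regression test: the story
    never mentioned negative amounts, but an attacker (or a typo) might."""
    with pytest.raises(ValueError):
        transfer(source="A-100", target="B-200", amount=-50.00)


def test_transfer_completes_within_budget():
    """Non-functional: a crude performance probe. Real load testing needs
    dedicated tooling; this just catches gross regressions early."""
    start = time.perf_counter()
    transfer(source="A-100", target="B-200", amount=50.00)
    assert time.perf_counter() - start < 0.200  # 200 ms example budget
```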


Why Independent Testing?

There are several reasons why you should consider independent testing:

  1. Regulatory compliance. Disciplined agile teams that find themselves in strict regulatory compliance situations, typical in systems engineering and life-critical environments, may be required by law to have some of their testing performed independently. Having said that, only a portion of your testing efforts will need to be performed independently. We have yet to run into a regulation that says that all of your testing needs to be performed independently, although we have seen several organizations that choose to interpret the regulations that way.
  2. Production readiness testing becomes prohibitively expensive in multi-team environments. Development teams may not have the resources required to perform effective production readiness testing, resources that from an economic point of view must be shared across multiple teams. For example, if there are five other development teams in addition to my own then chances are each team can do the work required to integrate and test against the builds of the other teams. But what if there are ten other teams? Twenty? Fifty? The number of team-to-team integration points grows combinatorially – with N teams there are N(N-1)/2 pairs – so the integration and testing work quickly becomes unaffordable for each team to do on its own (the sketch after this list puts numbers on this). The implication is that you will discover that you need an independent test team, working in parallel to the development team(s), that addresses these sorts of issues.
  3. The average cost of fixing defects rises exponentially the longer you wait. For close to four decades Barry Boehm and other researchers have gathered data showing that the average cost of addressing a defect rises exponentially the longer it takes to fix. This is depicted in Figure 2. The implication is that we want to find defects as early as we can; ideally we want to build quality in from the start and not inject the defects to begin with. In traditional environments some forms of testing, in particular system integration testing (SIT) and user acceptance testing (UAT), were left to the end for the convenience of the people doing the testing (“we need to have everything in place before we can test it”). This results in very expensive and risky defect repair. Parallel independent testing often proves to be a bit more complex for testers, at least at first, but results in much more economical repair efforts.
  4. Complex technical environments. When you’re working with multiple technologies, legacy systems, or legacy data sources, the testing of your system can become very difficult. System integration testing (SIT), performance testing, load testing, and security testing all become more complex and more important in these situations.
  5. Large or geographically distributed teams. Large or geographically distributed agile teams are often subdivided into smaller teams, and when this happens system integration testing of the overall system can become complex enough that a separate team should consider taking it on. In fact, SAFe prescribes this with its System Team strategy (which is virtually identical to the strategy described here). Granted, the reason you have such teams in the first place is that you’re facing significant domain or technical complexity, or both.
  6. Outsourcing. Teams that are organizationally distributed, for example when some of the work is being outsourced to another organization, will very likely want to perform independent testing to validate the work being performed by the other company(ies). Read this article about agile outsourcing strategies.
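As a back-of-the-envelope illustration of reason 2: with N teams there are N(N-1)/2 potential team-to-team integration points, so the work grows far faster than the team count. A tiny, self-contained Python calculation (the team counts mirror the examples above):

```python
def integration_pairs(n_teams: int) -> int:
    """Each pair of teams is a potential integration point: N(N-1)/2."""
    return n_teams * (n_teams - 1) // 2


for n in (6, 11, 21, 51):  # "my team" plus 5, 10, 20, or 50 others
    print(f"{n:3d} teams -> {integration_pairs(n):5d} integration pairs")

# Output:
#   6 teams ->    15 integration pairs
#  11 teams ->    55 integration pairs
#  21 teams ->   210 integration pairs
#  51 teams ->  1275 integration pairs
```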


Figure 2: Average cost of change curve.



Questionable Reasons to Adopt Independent Testing

We’ve run into a few rather poor excuses to justify independent testing over the years:

  1. Your testing/quality staff prefer to work in a traditional manner. They may insist on testing from a detailed requirements specification – testing that is better performed by the team itself. They may insist on waiting to test the entire solution once it’s “ready”, instead of incrementally testing the system while it’s being built (enabling much cheaper defect fixing, as discussed earlier). They may insist on all testing being done independently, instead of embedding people with testing skills on solution delivery teams. All of these choices reflect a traditional culture, not an agile one. The real solution is to overcome these cultural challenges and help your testing staff gain the skills and mindset required to work in an agile manner.
  2. Testing is outsourced. Some organizations choose to outsource their testing to an external organization that focuses on testing. By itself this is a sourcing decision, not a quality strategy: it tends to lengthen the feedback cycle between developers and testers and works against the whole-team approach described earlier.


But That Isn’t Agile!

Bullshit! Please excuse my language. Disciplined Agile is pragmatic, going beyond the limited approach promoted by many agile purists. These purists will claim that you don’t need parallel independent testing, and in simple situations this is clearly true. The good news is that it’s incredibly easy to determine whether or not your independent testing effort is providing value: simply compare the likely impact of the defects/change stories being reported with the cost of doing the independent testing (a minimal example of this arithmetic follows below). In short, whole-team testing works well for agile in the small, but for more complex systems and in some tactical scaling situations you need to be more sophisticated.
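As a minimal sketch of that comparison, with entirely made-up numbers (your own estimates go in their place):

```python
def independent_testing_net_value(
    defects_found: int,
    avg_escape_cost: float,     # estimated cost had each defect reached production
    avg_early_fix_cost: float,  # cost to fix when caught by the independent testers
    team_cost: float,           # cost of running the independent test team
) -> float:
    """Positive result suggests the independent testing effort pays for itself."""
    avoided = defects_found * (avg_escape_cost - avg_early_fix_cost)
    return avoided - team_cost


# Example: 40 defects caught this quarter, $12k average production-escape cost
# vs. $1k to fix early, against a $250k quarterly cost for the test team.
print(independent_testing_net_value(40, 12_000.0, 1_000.0, 250_000.0))
# -> 190000.0 (clearly worth it under these assumed numbers)
```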

2 thoughts on “Independent Testing and Agile Teams”

  1. Gary K. Evans

    Scott, You are spot-on, and this reality needs to be repeated again and again within our Agile community. For small or simple, self-contained projects the classic “test everything within the sprint” mantra can be quite effective. But until people on these projects find themselves facing enterprise complexity they will not understand that their purity must yield to pragmatism. One of our great guiding principles in all product development is to reduce or eliminate dependencies. Reducing dependencies within a sprint, and doing the story testing within the sprint, are quite reasonable goals. But reducing these dependencies within the sprint does not necessarily address the external, integration dependencies always present on large, inter-dependent efforts. We must always “shift left” our test effort, but that does not preclude the need for end of effort integration testing, or even parallel, independent testing when the test space starts to require more time than the sprint length. Complexity is a stubborn thing.

  2. John Gagon

    There is a rule of thumb, too, that the further a team is from a stakeholder, the more bias there is if you rely on that same team instead of a parallel one. I have worked at a company that refused to have independent testers; the developers felt exhausted and abused, doing much more testing than development. They were constantly in debug mode instead of producing. The message that caused so much turnover was that developers were expected to just get it done, and that nobody cared about the wasted time. This ultimately burned out the team after some late-night puzzler bugs on top of the regular exhaustion and chaos.

    The other extreme is the developer who just throws non-compiling junk over the test team’s wall, using Notepad as an IDE. Developers need to at least compile and smoke test, use tools to catch unbuildable code, and communicate.

    I believe strongly in all those early activities – pair programming, developer confirmation – but also in an independent, unbiased black-box functional test of everything, unless the developer is also the stakeholder (the solo entrepreneur/startup developer, those rare cases). Also, team swarming on testing works very well for extreme POV/bias removal. Bias is the enemy. Even the independent coder can use a UX proofer.

    Typed via mobile

