Tag Archives: agile

Acceleration: An Agile Productivity Metric

Metrics

A common question that customers ask us is how to measure productivity on agile teams. If you can easily measure productivity you can easily identify what is and isn't working for you in a given situation and adjust accordingly. One way to do so is to look at acceleration, the change in velocity over time.

A common metric captured by agile teams is their velocity. Velocity is an agile measure of how much work a team can do during a given iteration. It is typically measured using an arbitrary point system that is unique to a given team or program. For example, my team might estimate that a given work item is two points worth of effort whereas your team might think that it’s seven points of effort. The important thing is consistency: if there is another work item requiring similar effort, my team should estimate that it’s two points and your team seven points. With a consistent point system in place, each team can accurately estimate the amount of work that it can do in the current iteration by assuming that it can achieve the same amount of work as last iteration (an XP concept called “yesterday’s weather”). So, if my team delivered 27 points of functionality last iteration we can reasonably assume that, all things being equal, we can do the same this iteration.

It generally isn’t possible to use velocity as a measure of productivity because teams measure in different units. For example, suppose we have two teams, A and B, each with 5 people, each working on a web site, and each running two-week iterations. Team A reports a velocity of 17 points for their current iteration and team B a velocity of 51 points. Both teams are comprised of 5 people, therefore team B must be three times (51/17) as productive as team A. No! Team A is reporting in their point system and team B in theirs, so you can’t compare them directly. The traditional strategy, one that is also suggested in the Scaled Agile Framework (SAFe), would be to ask the teams to use the same unit of points.  This might be viable with a small number of teams, although with five or more teams it would likely require more effort than it is worth.  Regardless of the number of teams that you have, it would minimally require some coordination to normalize the units and perhaps even training plus the development and support of velocity calculation guidelines. That sounds like unnecessary bureaucracy that I would prefer to avoid. Worse yet, so-called “consistent” measurements such as function points (FPs) are anything but consistent because there’s always some sort of fudge factor involved in the calculation process that varies by individual estimator.

An easier solution exists: instead of comparing velocities, you calculate the acceleration of each team. Acceleration is the change in velocity over time. The formula for acceleration is:

(NewVelocity – InitialVelocity)/InitialVelocity

For example, consider the reported velocities of each team shown in the table below. Team A’s velocity is increasing over time whereas team B’s velocity is trending downwards. All things being equal, you can assume that team A’s productivity is increasing whereas B’s is decreasing. At the end of iteration 10, if we wanted to calculate the acceleration since the previous iteration (#9), it would be (23-22)/22 = 4.5% for Team A and (40-41)/41 = -2.4% for Team B.  So Team A improved their productivity 4.5% during iteration 10 and Team B decreased their productivity 2.4% that iteration.  A better way to calculate acceleration is to look at the difference in velocity across multiple iterations, as this helps to smooth out the numbers: as you can see in the table, velocity fluctuates naturally over time (something scientists might refer to as noise).  Let’s calculate acceleration over 5 iterations instead of just one, in this case comparing the velocities of iteration 6 and iteration 10.  For Team A the acceleration would be (23-20)/20 = 15% and for Team B (40-45)/45 = -11.1% during the 5 iteration period, or 3% and -2.2% respectively on average per iteration.

Iteration   Team A Velocity   Team B Velocity
1           17                51
2           18                49
3           17                50
4           18                47
5           19                48
6           20                45
7           19                44
8           21                44
9           22                41
10          23                40
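
To make these calculations concrete, here is a small Python sketch that computes the accelerations above from the velocities in the table; the function and variable names are mine, purely for illustration:

```python
def acceleration(initial_velocity, new_velocity):
    """Percentage change in velocity: (new - initial) / initial."""
    return (new_velocity - initial_velocity) / initial_velocity

team_a = [17, 18, 17, 18, 19, 20, 19, 21, 22, 23]  # iterations 1 through 10
team_b = [51, 49, 50, 47, 48, 45, 44, 44, 41, 40]

# Single-iteration acceleration, iteration 9 to 10.
print(f"Team A: {acceleration(team_a[8], team_a[9]):+.1%}")  # +4.5%
print(f"Team B: {acceleration(team_b[8], team_b[9]):+.1%}")  # -2.4%

# Smoothed across several iterations (6 to 10) to reduce the noise.
print(f"Team A: {acceleration(team_a[5], team_a[9]):+.1%}")  # +15.0%
print(f"Team B: {acceleration(team_b[5], team_b[9]):+.1%}")  # -11.1%
```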


Normalizing Acceleration for Team Size

The calculations that we performed above assumed that everything on the two teams remained the same. That assumption is likely a bit naïve.  It could very well be that people joined or left either of those teams, something that would clearly impact the team’s velocity and therefore its acceleration.  Let’s work through an example.  We’ve expanded the first table to include the size of each team in each iteration.  We’ve also added a column showing the average velocity per person per iteration for each team, calculated by dividing the velocity by the team size for that iteration.  Taking the effect of team size into account, the average acceleration per iteration over the last five iterations for Team A is (1.9-1.8)/1.8/5 = 1.1% and for Team B is (5-5)/5/5 = 0%.

Iteration   Team A Velocity   Team A Size   Team A Velocity Per Person   Team B Velocity   Team B Size   Team B Velocity Per Person
1           17                10            1.7                          51                10            5.1
2           18                10            1.8                          49                10            4.9
3           17                11            1.5                          50                10            5.0
4           18                11            1.6                          47                9             5.2
5           19                11            1.7                          48                9             5.3
6           20                11            1.8                          45                9             5.0
7           19                12            1.6                          44                8             5.5
8           21                12            1.8                          44                8             5.5
9           22                12            1.8                          41                8             5.1
10          23                12            1.9                          40                8             5.0
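
Here is a similar sketch, this time normalizing for team size as described above; again, the names and data layout are illustrative:

```python
def velocity_per_person(velocity, team_size):
    return velocity / team_size

# (velocity, team size) pairs for iterations 6 and 10, from the table above.
team_a_iter6, team_a_iter10 = (20, 11), (23, 12)
team_b_iter6, team_b_iter10 = (45, 9), (40, 8)

def normalized_acceleration(initial, new, iterations):
    """Average per-iteration acceleration of velocity per person."""
    v0 = velocity_per_person(*initial)
    v1 = velocity_per_person(*new)
    return (v1 - v0) / v0 / iterations

print(f"Team A: {normalized_acceleration(team_a_iter6, team_a_iter10, 5):+.1%}")  # +1.1%
print(f"Team B: {normalized_acceleration(team_b_iter6, team_b_iter10, 5):+.1%}")  # +0.0%
```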

Similarly, perhaps there was a holiday during one iteration. When there are ten working days per iteration and you lose one or more of them to holidays, it can have a substantial impact on velocity as well.  As a result you may want to take the number of working days per iteration into account in your calculation; you would effectively calculate average acceleration per person per day in this case.  Frankly I’m not too worried about this issue as it affects everyone within your organization in pretty much the same way, and it’s easy to understand why there was a “blip” in the data for that iteration.


What Does Acceleration Tell You?

When using acceleration in practice, there are three scenarios to consider:

  1. Positive acceleration. This is an indication that productivity may be rising on the team, although it does not indicate the cause of that increase.
  2. Zero acceleration. This is an indication that the team’s productivity is remaining flat, and that perhaps they should consider holding retrospectives regularly and then acting on the results of those retrospectives. Better yet, they can “dial up” their process improvement efforts by adopting something along the lines of the Disciplined Agile Framework.
  3. Negative acceleration. If the acceleration is negative then productivity on the team is going down, likely an indicator of quality and/or teamwork problems.

Of course it’s not wise to govern simply by the numbers, so instead of assuming what is going on we would rather go and talk with the people on the two teams. In doing so you might find out that team A has adopted quality-oriented practices, such as continuous integration and static code analysis, that team B has not, indicating that you might want to help team A share their learnings with other teams.


Monetizing Acceleration

Monetizing acceleration is fairly straightforward. For example, assume your acceleration is 0.7%, there are five people on the team, your annual burdened cost per person is $150,000 (your senior management staff should be able to tell you this number), and you have two-week iterations. Per iteration, the average burdened cost per person is $150,000/26 = $5,770. The productivity improvement per iteration for this team is therefore $5,770 * 5 * 0.007 = $202. If the acceleration stayed constant at 0.7%, the overall productivity improvement for the year would be (1.007)^26 (assuming the team works all 52 weeks of the year), which is 1.198, or 19.8%. This would be a savings of $148,500 (pretty much the equivalent of one new person).
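
Here is a quick sketch of that arithmetic in Python, using the example’s assumed figures; note that the prose rounds some intermediate values, so the printed results differ slightly in the final digits:

```python
team_size = 5
annual_cost_per_person = 150_000   # burdened cost, per the example
iterations_per_year = 26           # two-week iterations, all year
acceleration_rate = 0.007          # 0.7% per iteration

cost_per_person_per_iteration = annual_cost_per_person / iterations_per_year
print(f"${cost_per_person_per_iteration:,.0f}")   # $5,769 (the text rounds to $5,770)

savings_per_iteration = cost_per_person_per_iteration * team_size * acceleration_rate
print(f"${savings_per_iteration:,.0f}")           # $202

# Compounded over a year of iterations.
annual_factor = (1 + acceleration_rate) ** iterations_per_year
print(f"{annual_factor - 1:.1%}")                 # 19.9% (the text truncates to 19.8%)

annual_savings = (annual_factor - 1) * annual_cost_per_person * team_size
print(f"${annual_savings:,.0f}")                  # $149,140 (the text, using 19.8%, gets $148,500)
```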

Another approach is to calculate the acceleration for the year by comparing the velocity from the beginning of the year to the end of the year (note that you want to avoid comparing iterations near any major holidays). So, if the team’s velocity the first week of February 2015 was 20 points and the same team’s velocity the first week of February 2016 was 23 points, that’s an acceleration of (23-20)/20 = 15% over a one-year period, for a savings of $112,500.


Advantages of the Acceleration Metric

There are several advantages to using acceleration as an indicator of productivity over traditional techniques such as FP counting:

  1. It’s easy to calculate. We worked through two common variations earlier; you’ll need to experiment to determine what works for you.
  2. It is inexpensive. Acceleration is based on information already being collected by the team, their velocity, so there is no extra work for the team to do, assuming of course that the team hasn’t decided to take a #NoEstimates approach.
  3. It is easy to automate. For example, most agile management tools (e.g. VersionOne, Rally, Jira, Microsoft TFS) calculate velocity automatically from their work item list/product backlog and do velocity trend reporting via their team dashboard functionality. This trend reporting is effectively a visual representation of the team’s acceleration (or deceleration as the case may be).
  4. You can easily adjust for changing team size.
  5. You can easily monetize this metric.
  6. It is unitless. The “units” are % change in points per iteration, or % change in points per time period depending on the way that you want to look at it. Because it’s a percentage you can use it as a basis of comparison.
  7. You can apply this across a department. It is fairly straightforward to roll up the acceleration of project teams into an overall acceleration measure for a larger program or portfolio simply by taking a weighted average based on team size (a small sketch of this calculation follows this list). However, this is only applicable to teams that are in a position to report an accurate acceleration (the agile and iterative teams) and of course are willing to do so.
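
For the department-level roll-up in point 7, here is a small Python sketch of the weighted average; the team names, sizes, and accelerations are made up for illustration:

```python
teams = [
    {"name": "A", "size": 12, "acceleration": 0.011},   # illustrative values
    {"name": "B", "size": 8,  "acceleration": 0.000},
    {"name": "C", "size": 6,  "acceleration": -0.004},
]

# Weighted average based on team size.
total_people = sum(team["size"] for team in teams)
portfolio_acceleration = sum(
    team["size"] * team["acceleration"] for team in teams
) / total_people
print(f"Portfolio acceleration: {portfolio_acceleration:+.2%}")  # +0.42%
```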


Potential Disadvantages of Acceleration

Of course, nothing is perfect, and there are a few potential disadvantages:

  1. It can be gamed. Acceleration is derived from velocity which in turn is derived from manually-collected measures, and anything gathered manually can be easily gamed.
  2. Management must be flexible. For this to be acceptable senior management must be willing to think outside the “traditional metrics box”. Using a non-standard, simple metric to calculate productivity? Preposterous! Directly measuring what you’re truly interested in instead of calculating trends over long periods of time? Doubly preposterous!
  3. The terminology sounds scientific. Terms such as velocity and acceleration can motivate some of us to start believing that we understand the “laws of IT physics”, something that we highly doubt we understand as an industry.

We hope that you’ve found this blog post valuable.

An Exploratory “Lean Startup” Lifecycle

One of the key philosophies of the Disciplined Agile Delivery (DAD) framework is that it presents software development teams with choices and guides them through making the right choices given the situation they face.  In other words, it helps them to truly own their process.  Part of doing so is to choose the software delivery lifecycle (SDLC) that is the best fit for their context.  In this blog posting we overview the DAD Exploratory lifecycle which is based in part on Lean Startup strategies.

This lifecycle can be applied in two ways:

  1. As a replacement for the Inception phase of other lifecycles.  In the Inception phase we invest a short yet sufficient amount of time and effort to validate that the initiative being considered makes sense and to gain agreement on the stakeholders’ vision.  In a situation where the actual need and value of what is being proposed is in question, this approach is a very good way to determine the true market need before scaling up the initiative and moving into the Construction phase.
  2. As the implementation approach in the Construction phase.  After applying the Exploratory approach to validate your vision, a decision needs to be made regarding which of the four DAD lifecycles to apply as we move into Construction. For instance, you may choose to use DAD’s Scrum-based basic agile lifecycle if there is sufficient confidence from the learnings in the Inception phase regarding the viability of the vision.  However, if there remains some uncertainty regarding the feature set to be delivered it may make more sense to continue using the Exploratory lifecycle to build out the product in Construction.

The following diagram overviews the DAD Exploratory lifecycle.  This lifecycle is followed by agile or lean teams that find themselves in startup or research situations where their stakeholders have an idea for a new product but do not yet understand what is actually needed by their user base.  As a result the team needs to quickly explore what the market wants via a series of quick learning experiments.

Disciplined Agile Lifecycle Exploratory

Now let’s describe how the Exploratory lifecycle works.  There are six activities to this lifecycle:

  1. Envision.  Your team will explore the idea and identify a potential strategy for implementing it.  This could be as simple as getting a few people together in a room to model storm both the business vision and your technical options on whiteboards and paper.  You want to do just enough thinking to identify a viable hypothesis about what your customers actually want.  This hypothesis needs to be testable, which implies that you need to identify how you are going to measure the effectiveness of the new functionality that you produce.
  2. Build a little.  Your team should invest just enough effort to build a solution that tests the hypothesis.  In lean parlance you want to build what’s known as a minimum viable product (MVP).  The amount of effort will vary, from several days to several weeks – your goal is to make something available very quickly so that you can test your hypothesis.
  3. Deploy.  Once your current solution is ready it is deployed into an environment where you can test your hypothesis. This deployment may be to a subset of your customers, in many ways what used to be called an “alpha” or “beta” release, so that you can determine whether the solution is of interest to them.
  4. Observe & measure.  Once the solution is available in production you want to determine which aspects of it, if any, are of interest to your user base.  To do this you will need to instrument your solution so that it logs data regarding important events within it (a minimal sketch of such instrumentation follows this list).  For example, you may decide to record when a screen/page is accessed, when a sale occurs, when certain business functions are invoked, and so on.  The idea is that you want to understand which functionality end users find useful, which functionality leads to customer retention, which functionality leads to greater sales, … whatever is important to you.  Generating this data enables you to monitor, to observe and measure, how well the new functionality is received by your user base.  This in turn allows you to make a fact-based go-forward decision.  If the functionality is well received then you may choose to continue with the existing strategy and add more functionality.  Or your strategy may be so successful that you decide you’re ready to productize the development of this solution.  If the functionality wasn’t well received your team might choose to pivot and continue in another direction, or even give up completely.
  5. Cancel.  Sometimes you discover that the product idea isn’t going to work out after all.  In fact, this is particularly common in research and development (R&D) environments as well as startups.  The advantage is that if an idea is going to fail, it is better to learn that quickly so that you can devote your energies to other strategies.
  6. Productize.  After several iterations of building a little, deploying, and then observing & measuring, you determine that you’ve identified a product that will be successful in the marketplace (or, in the case of internal application development, successful with your user base).  As described above, although you may choose to continue following this lifecycle, a common decision is for the team to adopt one of the other DAD lifecycles – such as the Scrum-based agile lifecycle, the Kanban-based Lean lifecycle, or the Continuous Delivery lifecycle – and effectively treat the time spent following this lifecycle as their Inception phase.
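
As mentioned in the Observe & measure activity, here is a minimal sketch of what event instrumentation might look like in Python; the event names, fields, and logging approach are illustrative assumptions, and in practice you would more likely use your analytics platform’s SDK:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
usage_log = logging.getLogger("usage")

def record_event(event_type, user_id, **details):
    """Log a structured usage event that can later be aggregated and analyzed."""
    usage_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "user": user_id,
        **details,
    }))

# The kinds of events the text suggests recording:
record_event("page_viewed", user_id=42, page="checkout")
record_event("sale_completed", user_id=42, amount=19.99)
```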

To summarize, the DAD process framework takes a flexible, non-prescriptive approach to software-based solution delivery.  As a result of this philosophy DAD supports several development lifecycles, one of which is the Lean-Startup-based Exploratory lifecycle described in this posting.  This lifecycle is typically followed in situations where you are unsure of what your user base wants, and sometimes even when you are unsure of who your user base (your customers) will even be.

Does your team own its process or merely rent it?

Process ownership

An important philosophy within both the agile and lean communities is that a team should own its process. In fact, one of the principles behind the Agile Manifesto is “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.” The idea is that teams should be empowered to tailor their approach, including both their team structure and the process that they follow, to meet the unique needs of the situation that they find themselves in. Teams that own their process will tailor it over time as they learn how to work together, adopting new techniques and tweaking existing ones to increase their effectiveness.

As with most philosophies this one is easy to proselytize but not so easy to actually adopt. When it comes to process improvement, teams will exhibit a range of behavior in practice. Some teams see process as a problem and actively seek to ignore process-related issues. Some teams are ambivalent towards process improvement and generally stick with what they’ve been told to do. And some teams see process improvement as an opportunity to become more effective both as a team and as individuals. This range of behaviors isn’t surprising from a psychology point of view, although it can be a bit disappointing from an agile or lean point of view. It has led me to think that perhaps some teams choose to “own” their process but many more still seem to prefer to simply rent it.

The behaviors of people who rent something are generally different than those who own something. Take flats for example. When you rent a flat (called an apartment in North America) you might do a bit of cosmetic work, such as painting and hanging curtains, to make it suitable for your needs. But people rarely put much more effort than that into tailoring their rental flat because they don’t want to invest money in something that isn’t theirs, even though they may live in the flat for several years. It isn’t perfect but it’s good enough. When you own a flat (called a condo in North America) you are much more likely to tailor it to meet your needs. Painting and window dressings are a good start, but you may also choose to renovate the kitchen and bathroom, update the flooring, and even reconfigure the layout by knocking down or moving some walls. One of the reasons why you choose to own a flat is so that you can modify it to meet your specific needs and taste.

You can observe similar behaviors when it comes to software process. Teams that are merely “process renters” will invest a bit of time to adopt a process, perhaps taking a two-day course where they’re taught a few basic concepts. They may make a few initial tailorings of the process, adopt some new role names, and even rework their workspace to better fit the situation that they face. From then on they do little to change the way that they work together. They rarely hold process improvement sessions such as retrospectives, and if they do they typically adopt changes that require minimal effort. Harder improvements, particularly those requiring new skills that take time and effort to learn, are put off to some point in the distant future which never seems to come. Such behavior may be a sign that this “team” is not a team at all, but instead a group of individuals who are marginally working together on the same solution. They adopt the trappings of the method, perhaps they spout new terminology and hold the right meetings, but few meaningful changes are actually made.

Process owners behave much differently. Teams that own their process will regularly reflect on how well they’re working and actively seek to get better. They experiment with new techniques, and some teams will even measure how successful they are at implementing the change. Teams that are process owners will often get coaching to help them improve, both at the individual and at the team level. Process owners strive to understand their process options, even the ones that are not perfectly agile or lean, and choose the ones that are best for the situation they find themselves in.

The Disciplined Agile Delivery (DAD) framework is geared for teams that want to own their process. The DAD framework is process goal-driven, not prescriptive, making your process choices explicit and more importantly providing guidance for selecting the options that make the most sense for your team. This guidance helps your team to get going in the right direction and provides options when you realize that you need to improve. DAD also supports multiple lifecycles because we realize that teams find themselves in a range of situations – sometimes a Scrum-based lifecycle makes sense, sometimes a lean lifecycle is a better fit, sometimes a continuous delivery approach is best, and sometimes you find yourself in a situation where an exploratory (or “Lean Startup”) lifecycle is the way to go.

You have choices, and DAD helps guide you to making the choices that are right for you in your given context. By providing process guidance DAD enables your team to more easily own its own process and thereby increase the benefit of following agile or lean approaches.

Evolving the Goals Overview Diagram

A couple of months ago we decided to evolve the DAD goals overview diagram. The new version is shown in Figure 1 and the previous version in Figure 2.

Figure 1. The new goals overview diagram.
Lifecycle Goals

Figure 2. The previous diagram.
Lifecycle Goals

There are several interesting changes in the diagram:

  1. It’s a mind map. We’ve always been uncomfortable with the table format of Figure 2 but had never gotten around to updating the diagram.  Based on feedback from several people, including a few Black Belts, we moved towards using mind map notation.
  2. Color.  We’re using a color scheme that is consistent with the colors used in the DAD lifecycle diagrams.  Expect updates to the process goal diagrams in the near future for consistency.
  3. Team focus.  For the past year or so we’ve been moving away from the project-oriented terminology that we used in the DAD book.  For example, in Figure 1 we now say “Fulfill the Team Mission” instead of “Fulfill the Project Mission”.  We’re seeing DAD applied more and more by product teams, often following a lean or continuous delivery version of the lifecycle, that have evolved beyond the project mindset.  So we felt we should refactor away from the project-oriented terminology we used in the first edition of the DAD book and thereby remove some unfortunate bias we may have injected into the DAD framework.

As always, we’re open to any feedback that you may have to help us improve this material.  Thanks in advance.

Got Discipline?

Got Discipline

A common question we get regarding Disciplined Agile Delivery (DAD) is “What makes DAD more disciplined than other approaches to agile?”  It’s a fair question, particularly from someone who is new to DAD.  This blog posting explores this question, explicitly summarizing the strategies that exhibit a greater level of discipline in DAD compared with what Mark and I see in many agile teams today.

It is clear that many common agile practices require discipline.  For example, on agile teams it takes discipline to hold concise, streamlined coordination/Scrum meetings; to consistently deliver business value every iteration; to test continuously throughout the lifecycle; to improve your process “in flight”; to work closely with stakeholders; and much more.  Discipline is very important to the success of agile teams (see The Discipline of Agile for a detailed discussion), and DAD takes it to the next level in the following ways:

  1. Reducing the feedback cycle. Techniques that shorten the time between doing something and getting feedback about it are generally lower risk and result in lower cost to address any changes than techniques with longer feedback cycles.  Many of these techniques require agile team members to have new skills and to take a more disciplined approach to their work than they may have in less-than-agile situations.  There are several ways to shorten the feedback cycle that are common to agile software development and adopted by the DAD framework.  These techniques, listed in order of immediacy, include non-solo development (e.g. pair programming), active stakeholder participation, continuous integration (CI), continuous deployment (CD), short iterations/sprints, and short release cycles.
  2. Continuous learning. Continuous learning is an important aspect of agile software development in general, not just DAD.  However, DAD explicitly addresses the need for three levels of learning: individual, team, and organizational/enterprise.  It also addresses the need for three categories of learning: domain, technical, and process.  Continuous learning strategies include active stakeholder participation, coaching, mentoring, individual learning, non-solo development, proving the architecture with working code, spikes, retrospectives/reflections, sharing lessons learned between teams, and stakeholder demonstrations.
  3. Incremental delivery of consumable solutions.  Being able to deliver potentially shippable software increments at the end of each iteration is a good start that clearly requires discipline.  The DAD process framework goes one step further and advises you to explicitly produce a potentially consumable solution every iteration, something that requires even greater discipline.  Every construction iteration your team requires the discipline to create working software that is “done”, to write deliverable documentation such as operations manuals and user documentation, to address consumability (usability), to consider organizational change issues pertaining to your solution, and operations and support issues (an aspect of DevOps).
  4. Being process goal-driven.  The DAD framework promotes a process goal-driven approach. For each goal we describe the issues pertaining to that goal. For example, with initial project planning you need to consider issues such as the amount of initial detail you intend to capture, the amount of ongoing detail throughout the project, the length of iterations, how you will communicate the schedule (if at all), and how you will produce an initial cost estimate (if at all). Each issue can be addressed by several strategies, each of which has trade-offs. Our experience is that this goal-driven, suggestive approach provides just enough guidance for solution delivery teams while being sufficiently flexible so that teams can tailor the process to address the context of the situation in which they find themselves. The challenge is that it requires significant discipline by agile teams to consider the issues around each goal and then choose the strategy that is most appropriate for them.
  5. Enterprise awareness.  Whether you like it or not, as you adopt agile you will be constrained by the organizational ecosystem, and you need to act accordingly.  It takes discipline to be enterprise aware, to work with enterprise folks who may not be completely agile yet, and to have the patience to help them.  It takes discipline to work with your operations and support staff in a “DevOps” manner throughout the lifecycle, particularly when they may not be motivated to do so.  Despite the fact that governing bodies such as project management offices (PMOs), architecture and database authorities, and operations may indeed be a source of impediments to your DAD adoption, these authorities serve important functions in any large enterprise.  Therefore a disciplined approach to proactively working with them, and being a positive change agent to make collaboration with them more effective, is required.
  6. Adopting a full delivery lifecycle.  Despite some agilists’ reluctance to admit that projects go through phases, the DAD process framework explicitly recognizes that they do.  Building serious solutions requires a lot more than just doing the cool construction stuff.  It takes discipline to ignore the rhetoric and frame your project within the scope of a full delivery lifecycle.  The basic and advanced DAD lifecycles explicitly depict pre-delivery activities, a three-phase delivery lifecycle, and post-delivery activities (operations and support).
  7. Streamlining inception activities.  We devoted a lot of material in the DAD book to describing how to effectively initiate a DAD project.  Unfortunately, in our experience we have seen many organizations treat this phase as an opportunity to do massive amounts of upfront documentation in the form of project plans, charters, and requirements specifications.  Some people have referred to the practice of doing too much transitory documentation up front on an agile project (during what Scrum calls Sprint 0) as Water-Scrum-Fall.  We cannot stress enough that this is NOT the intent of the Inception phase.  While we provide many alternatives for documenting your vision in Inception, from very heavy to very light, you should take a minimalist approach to this phase and strive to reach the stakeholder consensus milestone as quickly as possible.  If you are spending more than a few weeks on this phase, you may be regressing to a Water-Scrum-Fall approach.  It takes discipline to be aware of this trap and to streamline your approach as much as possible.
  8. Streamlining transition activities.  In most mid-to-large sized organizations the deployment of solutions is carefully controlled, particularly when the solutions share architectures and have project interdependencies.  For these reasons release cycles to your stakeholders are less frequent than you would like because of existing complexities within the environment.  However, the ability to frequently deploy value to your stakeholders is a competitive advantage; therefore you should reduce the release cycle as much as possible.  This requires a great degree of discipline in areas such as pre-production integration and deployment testing; regular coordination between project teams and with operations and support staff; change management around both technology and requirements; and adoption of continuous deployment practices to such a degree that very frequent deployments are the norm and the Transition “phase” becomes an automated transition activity.
  9.  Adopting agile governance. It is easier to avoid your traditional governance and tell management that “agile is different” than it is to work with your governors to adapt your governance to properly guide the delivery of your agile teams.  Every organization has a necessary degree of governance and there are ways to make it especially effective on agile initiatives.  It takes discipline to work with your governors to help them understand how disciplined agile teams operate and then discipline to accept and conform to the resulting governance process.
  10. Moving to lean. For each DAD process goal we describe a range of options to address the issues pertaining to that goal.  These options range from traditional/heavier approaches, which we generally advise against except in very specific situations, to agile strategies, to very lean strategies.  Generally, the leaner the strategy the greater the discipline it requires.

Adopting a disciplined approach to agile delivery requires the courage to rethink some of the agile rhetoric and make compromises where necessary for the benefit of the “whole enterprise” and not just the whole team.  In our experience most agile projects make certain compromises that are not classically agile in order to get the job done.  Rather than hiding this and fearing reprisals from those who would accuse you of regressing to a traditional approach, it is better to have the courage to take a pragmatic approach to using agile in your situation.

Effective application of DAD certainly requires discipline and skill, but in our experience the key determinant of success is the ability and willingness of the team to work well together and with stakeholders, both within and external to the team.  For more writings about discipline within the DAD framework, select “Discipline” from the blog category list.

Comparing DAD to the Rational Unified Process (RUP) – Part 2

This post is a follow-up to Comparing DAD to the Rational Unified Process (RUP) – Part 1.  In that post I described in some detail why Disciplined Agile Delivery (DAD) is not “Agile RUP”.  DAD is quite different in both approach and content.  There are however some very good principles that the Unified Process (UP) incorporates that are not part of mainstream agile methods.  This post describes what parts of the UP made it into the DAD process decision framework.

DAD suggests a full delivery lifecycle approach similar to RUP’s.  DAD recognizes that, despite some agile rhetoric, projects do indeed go through specific phases.  RUP explicitly has four phases: Inception, Elaboration, Construction, and Transition.  For reasons that I described in the last post, DAD does not include an explicit Elaboration phase.  However, the Elaboration milestone still exists in DAD, as I will describe shortly.  As the DAD basic lifecycle diagram shows, DAD has three of the four RUP phases.

Disciplined Agile Lifecycle Basic

  • The Inception phase.  An important aspect of DAD is its explicit inclusion of an Inception phase where project initiation activities occur.  As Scott Ambler says in one of his posts, “Although phase tends to be a swear word within the agile community, the reality is that the vast majority of teams do some up front work at the beginning of a project.  While some people will mistakenly refer to this effort as Sprint/Iteration 0 it is easy to observe that on average this effort takes longer than the general perception (the 2009 Agile Project Initiation survey found the average agile team spends 3.9 weeks in Inception)”.  So in DAD’s Inception phase (usually one iteration) we do some very lightweight visioning activities to properly frame the project.  The milestone for this phase is to obtain “Stakeholder consensus” on how to proceed.  In the book we describe various strategies to get through the Inception phase as quickly as possible, what needs to be done, and how to obtain stakeholder consensus.
  • The Construction phase.  This phase can be viewed as a set of iterations (Sprints in Scrum parlance) to build increments of the solution.  Within each iteration the team applies a hybrid of practices from Scrum, XP, Agile Modeling, Agile Data, and other methods to deliver the solution.  DAD recommends a risk-value approach of prioritizing work in the early iterations, which draws from the RUP principle of mitigating risk as early as possible in the project by proving the architecture with a working solution.  We therefore balance delivering high-value work with delivering work related to mitigating these architectural risks.  Ideally, in the early iterations we deliver stories/features related to both high business value and risk mitigation (hence DAD’s “risk-value” lifecycle).  It is worthwhile to have a checkpoint at the end of the early iterations to verify that our technical risks have indeed been addressed.  DAD has an explicit milestone for this called “Proven architecture”.  This is similar to the RUP Elaboration milestone without risking the confusion that the Elaboration phase often caused for RUP implementations.  All agile methods seek to deliver value into the hands of the stakeholders as quickly as possible.  In many if not most large enterprises it is difficult to actually deliver new increments of the solution at the end of each iteration.  DAD recognizes this reality and assumes that in most cases there will be a number of Construction iterations before the solution is actually deployed to the customer.  As we make clear in the book, although this is the classic DAD pattern, you should strive to release your solution on a much more frequent basis in the spirit of achieving the goal of “continuous delivery”.  The milestone for the end of Construction is that we have “Sufficient functionality” to deploy to the stakeholders.  This is the same milestone as RUP’s Construction milestone.  During the Construction phase it may make sense to periodically review the progress of the project against the vision agreed to in Inception and potentially adjust course.  These optional milestones in DAD are referred to as “Project viability”.
  • The Transition phase.  DAD recognizes that for sophisticated enterprise agile projects, deploying the solution to the stakeholders is often not a trivial exercise.  To account for this reality DAD incorporates the RUP Transition phase, which is usually one short iteration.  As DAD teams, as well as the enterprise overall, streamline their deployment processes, this phase should become shorter and ideally disappear over time as continuous deployment becomes possible.  RUP’s Transition milestone is achieved when the customer is satisfied and self-sufficient.  DAD changes this to “Delighted stakeholders”.  This is similar to lean’s delighted customers, but we recognize that in an enterprise there are more stakeholders to delight than just customers, such as production support for instance.  One aspect of RUP’s Transition phase that is not clear is when during the phase deployments actually take place.  Clearly stakeholders aren’t delighted and satisfied the day the solution goes “live”.  There is usually a period of stabilization, tuning, training, and so on before the stakeholders are completely happy.  So DAD has a mid-Transition milestone called “Production ready”.  Some people formalize this as a “go/no go” decision.

So in summary, DAD frames an agile project within the context of an end-to-end risk-value lifecycle with specific milestones to ensure that the project is progressing appropriately.  These checkpoints give specific opportunities to change course, adapt, and progress into the next phases of the project.  While the lifecycle is similar to that of RUP, as described in Part 1 of this post it is important to realize that the actual work performed within the iterations is quite different and far more agile than a typical RUP project.

At Scott Ambler + Associates we are getting a lot of inquiries from companies seeking help to move from RUP to the more agile yet disciplined approach that DAD provides.

Strategies for Verifying Non-Functional Requirements

Early in the lifecycle, during the Inception phase, disciplined agile teams will invest some time in initial requirements envisioning and initial architecture envisioning. One of the issues to be considered as part of requirements envisioning is identifying non-functional requirements (NFRs), also called quality of service (QoS) or simply quality requirements. The NFRs will drive many of the technical decisions that you make when envisioning your initial architectural strategy. These NFRs should be captured somewhere (in a previous blog I explored the options available to you) and implemented during Construction. It isn’t sufficient to simply implement the NFRs; you must also validate that you have done so appropriately. In this blog posting I overview a collection of agile strategies that you can apply to validate NFRs.

A mainstay of agile validation is the philosophy of whole team testing. The basic idea is that the team itself is responsible for validating its own work; it doesn’t simply write some code and then throw it over the wall to testers to validate. For organizations new to agile this means that testers sit side-by-side with developers, working together and learning from one another in a collaborative manner. Eventually people become generalizing specialists, T-skilled people, who have sufficient testing skills (among other skills).

Minimally your developers should be performing regression testing to the best of their ability, adopting a continuous integration (CI) strategy in which the regression test suite(s) are run automatically many times a day.  Advanced agile teams will take a test-driven development (TDD) approach where a single test is written just before just enough production code to fulfill that test.  Regardless of when tests are written by the development team, either before or after the production code, some tests will validate functional requirements and some will validate non-functional requirements.
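
To make this concrete, here is a hedged sketch of an automated check for a non-functional requirement that could live in such a regression suite, written in the style of a pytest test; the service stub and the 200 ms threshold are assumptions for the sake of the example:

```python
import time

def fetch_salary_record(employee_id):
    # Stand-in for a call to the real system under test.
    time.sleep(0.01)
    return {"employee_id": employee_id, "salary": 100_000}

def test_salary_lookup_meets_performance_nfr():
    # NFR (assumed): salary lookups must complete within 200 ms.
    start = time.perf_counter()
    fetch_salary_record(42)
    elapsed = time.perf_counter() - start
    assert elapsed < 0.2, f"lookup took {elapsed:.3f}s, NFR allows 0.2s"
```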

Whole team testing is great in theory, and it is a strategy that I wholeheartedly recommend, but in some situations it proves insufficient.  It is wonderful to strive to have teams with sufficient skills to get the job done, but sometimes the situation is too complex to allow that.  There are some types of NFRs which require significant expertise to address properly: NFRs pertaining to security, usability, and reliability for example.  Validating these types of requirements, or worse yet even identifying them, requires skill and sometimes even specialized (read: expensive) tooling.  It would be a stretch to assume that all of your delivery teams will have this expertise and access to these tools.

Recognizing that whole team testing may not sufficiently address validating NFRs, many organizations will supplement their whole team testing efforts with parallel independent testing.  With this approach a delivery team makes their working builds available to a test team on a regular basis, minimally at the end of each iteration, and the testers perform the types of testing on it that the delivery team is either unable or unlikely to perform.  Knowing that some classes of NFRs may be missed by the team, independent test teams will look for those types of defects.  They will also perform pre-production system integration testing and exploratory testing, to name a few.  Parallel independent testing is also common in regulatory compliance environments.

From a verification point of view some agile teams will perform either formal or informal reviews.  Experienced agilists prefer to avoid reviews due to their inherently long feedback cycle, which increases the average cost of addressing found defects, in favor of non-solo development strategies such as pair programming and modeling with others.  The challenge with non-solo strategies is that managers unfamiliar with agile techniques, or perhaps the real problem is that they’re still overly influenced by disproved traditional theories of yesteryear, believe that non-solo strategies reduce team productivity.  When done right non-solo strategies increase overall productivity, but the political battle required to convince management to allow your team to succeed often isn’t worth the trouble.

Another strategy for validating NFRs is code analysis, both dynamic and static.  There is a range of analysis tools available to you that can address NFR types such as security, performance, and more.  These tools will not only identify potential problems with your code; many of them will also provide summaries of what they found, metrics that you can leverage in your automated project dashboards.  This strategy of leveraging tool-generated metrics is a technique which IBM calls Development Intelligence and is highly suggested as an enabler of agile governance in the DAD framework.  Disciplined agile teams will include invocation of code analysis tools in their CI scripts to support continuous validation throughout the lifecycle.

Your least effective validation option is end-of-lifecycle testing; in the traditional development world this would be referred to as a testing phase.  The problem with this strategy is that you in effect push significant risk, and significant cost, to the end of the lifecycle.  It has been known for several decades now that the average cost of fixing defects rises the longer it takes you to identify them, motivating you to adopt the more agile forms of testing that I described earlier.  Having said that, I still run into organizations in the process of adopting agile techniques that haven’t really embraced agile and, as a result, still leave most of their testing effort to the least effective time to do such work.  If you find yourself in that situation you will need to validate NFRs, in addition to functional requirements, at that point.

To summarize, you have many options for validating NFRs on agile delivery teams.  The secret is to pick the right one(s) for the situation that you find yourself in.  The DAD framework helps to guide you through these important process decisions, describing your options and the trade-offs associated with each one.  For a more detailed discussion of agile validation techniques you may find my article Agile Testing and Quality Strategies to be of value.

Strategies for Implementing Non-Functional Requirements

Non-functional requirements, also known as quality of service (QoS) or technical requirements, are typically system-wide, thus they apply to many, and sometimes all, of your functional requirements.  Part of ensuring that your solution is potentially consumable each iteration is ensuring that it fulfills its overall quality goals, including applicable NFRs.  This is particularly true for life-critical and mission-critical solutions.  Good sources for NFRs include your enterprise architects and operations staff, although any stakeholder is a potential source.

Chapter 8 of the Disciplined Agile Delivery book, written by Mark Lines and myself, overviews several strategies for capturing and then implementing NFRs.  As your stakeholders tell you about functional requirements they will also describe non-functional requirements (NFRs).  These NFRs may describe security access rights, availability requirements, performance concerns, or a host of other issues, as we saw in my blog posting regarding initial architecture envisioning.  There are three basic strategies, which can be combined, for capturing NFRs:

  1. Technical stories.  A technical story is a documentation strategy where the NFR is captured as a separate entity that is meant to be addressed in a single iteration.  Technical stories are in effect the NFR equivalent of a user story. For example “The system will be unavailable to end users no more than 30 seconds a week” and “Only the employee, their direct manager, and manager-level human resource people have access to salary information about said employee” are both examples of technical stories.
  2. Acceptance criteria for individual functional requirements.  Part of the strategy of ensuring that a work item is done at the end of an iteration is to verify that it meets all of its acceptance criteria.  Many of these acceptance criteria will reflect NFRs specific to an individual usage requirement, such as “Salary information is read-only accessible by the employee”, “Salary information is read-only accessible by their direct manager”, “Salary information is read/write accessible by HR managers”, and “Salary information is not accessible to anyone without specific access rights” (a test-based sketch of these criteria follows this list).  So in effect NFRs are implemented because they become part of your “done” criteria.
  3. Explicit list.  Capture NFRs separately from your work item list in a separate artifact.  This provides you with a reminder of the issues to consider when formulating acceptance criteria for your functional requirements.  In the Unified Process this artifact was called a supplementary specification.
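
As referenced in strategy 2 above, here is a sketch of how the salary-access acceptance criteria might be expressed as automated tests; the access-control function is an illustrative stand-in for whatever your solution actually provides:

```python
def can_read_salary(viewer_role, viewer_id, employee_id, manager_id):
    """Illustrative rule: the employee, their direct manager, and HR managers only."""
    if viewer_role == "hr_manager":
        return True
    if viewer_id == employee_id:
        return True
    return viewer_role == "manager" and viewer_id == manager_id

def test_employee_can_read_own_salary():
    assert can_read_salary("employee", viewer_id=1, employee_id=1, manager_id=2)

def test_direct_manager_can_read_salary():
    assert can_read_salary("manager", viewer_id=2, employee_id=1, manager_id=2)

def test_unrelated_user_cannot_read_salary():
    assert not can_read_salary("employee", viewer_id=3, employee_id=1, manager_id=2)
```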

Of course a fourth option would be to not capture NFRs at all.  In theory I suppose this would work in very simple situations, but it clearly runs a significant risk of the team building a solution that doesn’t meet the operational needs of the stakeholders.  This is often a symptom of a team only working with a small subset of their stakeholder types (e.g. only working with end users but not operations staff, senior managers, and so on).

So what are the implications of implementing NFRs given the three capture strategies above?  Although in the book we make this sort of comparison via a table to improve consumability, in this blog posting I will use prose due to width constraints.  Let’s consider each one:

  1. Technical stories.  The advantages of this approach are that it is a simple strategy for capturing NFRs and that it works well for solutions with a few NFRs or simple NFRs.  But the vast majority of NFRs are cross-cutting aspects of several functional stories and as a result cannot be implemented within a single iteration.  This strategy also runs the risk of teams leaving NFRs to the end of the Construction phase, thereby pushing technical risk to the end of the lifecycle where it is most difficult and expensive to address.
  2. Acceptance criteria.  This is a quality-focused approach which makes the complexity of an individual functional requirement apparent, working well with test-driven approaches to development.  NFR details are typically identified on a just-in-time (JIT) basis during construction, fitting in well with a disciplined agile approach.  But because many NFRs are cross-cutting, the same NFR will be captured for many functional requirements.  It also requires the team to remember and consider all potential NFR issues (see the figure in my previous posting) for each functional requirement.  You will still need to consider NFRs as part of your initial architecture efforts, otherwise you risk a major rework effort during the Construction phase because you missed a critical cross-cutting concern.
  3. Explicit list.  This strategy enables you to explore NFRs early in the lifecycle and then address them in your architecture.  The list can be used to drive identification of acceptance criteria on a JIT basis.  But NFR documents can become long for complex systems (due to the large number of NFRs).  This can be particularly problematic when you have a lot of NFRs that are specific to a small number of functional requirements.  Teams lacking in discipline may not write down the non-functional requirements, trusting that they will remember to address them when identifying acceptance criteria for individual stories.


The advice that Mark and I give in the book is that in most situations you should maintain an explicit list and then use that to drive identification of acceptance criteria as we’ve found that it’s more efficient and lower risk in the long run.  Of course capturing NFRs is only one part of the overall process of addressing them.  You will also need to implement and validate them during construction, as well as address them in your architecture.

An important issue that relates to NFRs such as consumability, supportability, and operability is that of deliverable documentation.  The start of the project is the best time to identify the required documentation that must be created as part of the overall solution.  This potentially includes operations manuals, support manuals, training materials, system overview materials (such as an architecture handbook), and help manuals, to name a few.  These deliverable documents will be developed and kept up to date via the continuous documentation practice.

In my next blog posting, the fourth in this series, I will describe strategies for verifying non-functional requirements.

Disciplined Agile Architecture: Initial Architecture Envisioning

An important aspect of Disciplined Agile Delivery (DAD) is its explicit inclusion of an Inception phase where project initiation activities occur.  Although phase tends to be a swear word within the agile community, the reality is that the vast majority of teams do some up front work at the beginning of a project.  Some people will mistakenly refer to this effort as Sprint/Iteration 0, but it is easy to observe that on average this effort takes longer than a single iteration (the 2009 Agile Project Initiation survey found the average agile team spends 3.9 weeks in Inception, and the November 2010 Agile State of the Art survey found that agile teams have Construction iterations of a bit more than 2 weeks in length).

Regardless of terminology, agile teams are doing some up front work.  Part of that initial work is identifying an initial technical architecture, typically via some initial architecture envisioning (see http://www.agilemodeling.com/essays/initialArchitectureModeling.htm).  Because your architecture should be based on actual requirements, otherwise you’re “hacking in the large”, your team will also be doing some initial requirements envisioning (see http://www.agilemodeling.com/essays/initialRequirementsModeling.htm) in parallel.  Your architecture will be driven in part by functional requirements but more often by the non-functional requirements, also called quality of service (QoS) or simply quality requirements.  Some potential quality requirements are depicted in the figure below (this figure is taken from the Disciplined Agile Delivery book but was first published in Agile Architecture Strategies).

Architectural views and concerns

Some architects mistakenly believe that you need to do detailed up front modeling to capture these quality requirements and then act upon them.  Not only is this untrue, it also proves to be quite risky in practice; see my discussion about Big Modeling Up Front (BMUF) for more details.  Disciplined agilists instead do just enough initial modeling up front and then address the details on a just-in-time (JIT) basis throughout construction.  Of course it’s important to recognize that “just enough” varies depending on the context of the situation: teams finding themselves at scale will need to do a bit more modeling than those that don’t.  It’s also important to recognize that addressing non-functional requirements throughout construction requires more than just architectural modeling skills.  This topic will be the focus of my next blog posting in this series.

Potential Misconceptions about Agile Architecture

Recently at the Scott W. Ambler + Associates site we received a series of questions from someone who wanted to better understand how architecture issues are addressed on agile project teams.  It seemed to me that the questions were sufficiently generic to warrant a public response instead of a private one.  So, over the next few days I’m going to write several blog postings here to address the issues that were brought up in the questions.  It’s important to note that I will be answering from the point of view of Disciplined Agile Delivery (DAD), and not agile in general.  Other agile methods may provide different advice than DAD does on this subject, or no advice at all in some cases.

The goal of the first blog posting in this series is to address several potential misconceptions that appeared in the email.  I want to start here so as to lay a sensible foundation for the follow-on postings.

Partial Misconception #1: Agile can be prefixed in iteration 0 by architectural design

I’ve named this a “partial misconception” for a few reasons:

  1. Disciplined agile teams do some up-front work.  This is called the Inception Phase in DAD, although other methods may refer to it as iteration/sprint 0, warm up, initiation, or other names.  Up-front work is an explicit part of DAD.
  2. Iteration 0 isn’t an accurate term.  Although I have used this term in the past when discussing project initiation, the reality is that the average agile team spends about a month doing project initiation activities whereas the average iteration length is two weeks.  So, Inception really isn’t a proper iteration.
  3. Inception is more than just architecture.  Several activities typically occur at this point, including initial architecture envisioning, initial requirements envisioning, initial release planning, and putting the team together.

Chapters 6 through 12 in Disciplined Agile Delivery describe these project initiation activities in detail.  Also, I recently wrote that it requires discipline to keep Inception short.

Partial Misconception #2: On principle, Agile is against “big” anything

This is also a “partial misconception” for several reasons:

  1. There is in fact a lot of agile rhetoric against big artifacts.  It’s very easy to find agile writings about the challenges with big requirements up front (BRUF), big modeling up front (BMUF) in general, and detailed up front planning for instance.
  2. Disciplined agile is against needless waste, not “big” things.  Many traditional modeling and planning practices prove to be quite wasteful in practice.  A serious cultural challenge that the traditional community has is that they are afraid to throw out the bathwater because they assume that the baby will go with it.  I believe that Disciplined Agile Delivery (DAD), and Agile Modeling before it, make it quite clear that it’s possible to gain the benefit of thinking before doing without taking on the very serious problems around doing too much thinking before doing.  So, have the discipline to keep the thinking “baby” yet discard any needless documentation “bathwater”.
  3. In rare situations it's appropriate to create "big" artifacts.  Disciplined agilists aim for sufficient artifacts, the size of which will depend on the context of the situation that your team finds itself in.  In a recent article for Dr. Dobb's Journal, Disciplined Agile Architecture, I explicitly explored how initial architecture envisioning on an agile project may result in "big" artifacts in some situations.  Mind you, such situations are very rare once you set aside the cultural imperatives to create big artifacts that persist because some people still haven't made the jump to a disciplined agile approach, but they do happen.  One of the strengths of the DAD process decision framework is that it is goal driven, not prescriptive, and explicitly explores the tradeoffs surrounding the amount of detail to capture and when to capture it.

Partial Misconception #3: Refactoring system architecture beyond mid-implementation is much more expensive than refactoring components

Once again, this is a partial misconception.  I suspect the problem stems partly from a lack of understanding of what refactoring is really all about (a recurring problem with experienced traditionalists) and partly from a lack of understanding of how architecture is addressed by disciplined agile teams.  Some thoughts:

  1. Refactorings are simple, not difficult.  The goal of refactoring is to make SMALL changes to your design that improve its quality without changing its semantics in any practical manner.  This is true of code refactorings, database refactorings, user interface refactorings, and other types of refactorings.  Small changes are inexpensive to make given the appropriate skills, tools, and organizational environment; see the code sketch after this list for a simple example.
  2. Architectural rework (not refactoring) is often difficult.  Rework, or rewrites, are very large changes whose goal is typically to replace large portions of your solution.  Yes, the later in the lifecycle such rework occurs, the more expensive it is likely to be, because you've built more on top of the architecture that is now being reworked.  This is a general issue, not just an agile one.
  3. Disciplined agile teams get going in the right direction to begin with. The practice of initial architecture envisioning, which we describe in detail in Chapter 9 of Disciplined Agile Delivery, aims to think through the architectural strategy before getting into construction.
  4. Disciplined agile teams prove their architecture works early. The first construction milestone, prove the architecture, reduces the risk of architectural rework.  The goal is to prove that the architecture works by building a working end-to-end skeleton of the solution which implements critical or difficult technical requirements (a minimal sketch of such a skeleton appears after the summary below).  This is an agile "fail fast" strategy, or as we say in DAD a "succeed early" strategy, that reduces technical risk on your project.  As an aside, including explicit lightweight milestones such as this is one of many agile governance aspects built right into DAD.
  5. Disciplined agile teams have an architectural role.  This role is called Architecture Owner and one of the responsibilities of the person in this role is to guide the team in architectural issues throughout the entire DAD lifecycle.
  6. There are no guarantees.   No matter how smart your approach, there’s still a chance that rework can happen.  For instance, you can be mid-way through a project and the vendor of a major architectural component of your solution decides to withdraw it from the market.  Or the vendor goes out of business.  Or perhaps your firm is taken over by another firm and the new owners decide to inflict, oops I mean bless you with, their architectural strategy.  Stuff happens.  Once again, this is a general issue, not specifically an agile one.
  7. Quality decreases the cost of rework.  Disciplined agilists will write high-quality code, with a full regression test suite in place, at all times during Construction.  It’s easier to rework high quality artifacts compared with low quality artifacts, so if you get stuck having to perform rework at least the pain is minimized.  My article Agile Testing and Quality Strategies overviews many techniques.
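
To make the distinction between refactoring and rework concrete, here is a minimal sketch of an Extract Method refactoring in Java.  The class and method names are hypothetical, chosen purely for illustration; the point is that the change is small and the externally observable behavior is unchanged.

    // Before: a method that mixes a calculation with a formatting concern.
    public class InvoiceFormatter {
        public String summarize(double subtotal, double taxRate) {
            double total = subtotal + (subtotal * taxRate);
            return "Total due: $" + String.format("%.2f", total);
        }
    }

    // After an Extract Method refactoring: the same class, with identical
    // behavior, but the total calculation now has a name of its own and
    // can be reused and tested in isolation.
    public class InvoiceFormatter {
        public String summarize(double subtotal, double taxRate) {
            return "Total due: $"
                + String.format("%.2f", calculateTotal(subtotal, taxRate));
        }

        private double calculateTotal(double subtotal, double taxRate) {
            return subtotal + (subtotal * taxRate);
        }
    }

A change this small is cheap to make, and the regression test suite mentioned in point 7 is what lets you make it with confidence.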

In short, disciplined agile teams do what they can to avoid architectural rework in the first place: they have an explicit architecture owner role focused on architectural issues throughout the entire lifecycle, they identify a viable architectural strategy early in the project, they prove that the strategy works early in Construction, and they produce high-quality artifacts throughout the lifecycle that are easier to rework if needed.  Combined with continuous documentation practices and a focus on producing artifacts that are just sufficient for the situation at hand, this proves to be far more effective than traditional strategies that assume you require large up-front investments in "big" artifacts, that rely on validation techniques such as architecture reviews instead of the far more concrete feedback of working code, and that often leave quality efforts to the end of the lifecycle (thereby increasing the cost of any rework).
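
To illustrate the "prove the architecture" milestone with something concrete, here is a minimal sketch of what an end-to-end skeleton and its test might look like in Java with JUnit 4.  The service and repository names are hypothetical assumptions for illustration only; the point is that one trivial request travels through every architectural layer, however thinly implemented, to demonstrate that the layers actually collaborate.

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;

    import org.junit.Test;

    // Minimal layers for the skeleton: a repository contract and an
    // in-memory implementation standing in for the real data store.
    interface CustomerRepository {
        long save(String name);
        String findName(long id);
    }

    class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<Long, String> rows = new HashMap<Long, String>();
        private long nextId = 1;

        public long save(String name) {
            long id = nextId++;
            rows.put(id, name);
            return id;
        }

        public String findName(long id) {
            return rows.get(id);
        }
    }

    // A thin service layer that delegates to the repository.
    class CustomerService {
        private final CustomerRepository repository;

        CustomerService(CustomerRepository repository) {
            this.repository = repository;
        }

        long register(String name) {
            return repository.save(name);
        }

        String findName(long id) {
            return repository.findName(id);
        }
    }

    // The skeleton test: pushes one trivial request through every layer to
    // prove that the layers collaborate end to end. It is not intended to
    // prove business logic.
    public class ArchitectureSkeletonTest {
        @Test
        public void oneRequestTravelsThroughEveryLayer() {
            CustomerService service =
                new CustomerService(new InMemoryCustomerRepository());
            long id = service.register("Jane Doe");
            assertEquals("Jane Doe", service.findName(id));
        }
    }

In a real skeleton the in-memory repository would be replaced by a thin slice through your actual data store, message bus, or other high-risk architectural components, because those integrations are precisely what you are trying to prove.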

I plan two follow-on blog postings in this series, one exploring how initial architecture envisioning works and one about how to address initial quality requirements (also called non-functional requirements or quality of service requirements) on disciplined agile projects.  Stay tuned!

At Scott W. Ambler + Associates we offer a one-day workshop entitled Agile Architecture: A Disciplined Approach that you should consider if you’re interested in this topic.  We also offer coaching and mentoring services around agile architecture.