Category Archives: Earned Value Management

Applying the principles of Earned Value Management to software projects, for a more accurate understanding of where the project really stands. The emphasis is on “lite” Earned Value, for ease of use.

With Earned Value, you are much less likely to “kid yourself” about the true state of the project (and other people are less likely to present unrealistic status to you).

Agile with Fixed Scope

It’s a common misconception that agile processes can’t be used with fixed scope. A number of the founders of the agile movement invented their forms of agile on fixed-scope projects. As I write this, I myself am working on an 18-month project with about 20 people and a fixed(ish) scope (see below). So it can be done. But how?

There are several different strategies you can use:

Strategy 1: Fix the scope and flex the price

This keeps scope management very simple: you just build all of it. The catch is that it may take longer than you expected, so you may need to flex the price through a time-and-materials contract or some kind of sharing of financial risk. Understandably, this risk of cost overruns renders this simple approach unsuitable in many environments.

Strategy 2: Work in priority order and stop when the money runs out

(Admittedly, this is not exactly fixed scope.) This is very commonly recommended on agile projects, too commonly in my opinion.  But again, it has the virtue of being relatively simple.  Do the most beneficial stuff first, leaving the least beneficial until last.  When the money runs out, just stop and don’t do the rest.  Agile makes this approach possible – but not mandatory.

Strategy 3: Implement remaining features more simply when short of time (“Feature Thinning”)

There are many factors that influence the effort required to develop a feature (or user story, depending on your terminology).  Some of those factors are probably under your control: e.g. How extensive is the validation? How much effort do we put into optimising the user experience (UX) and appearance?  Do we fully automate everything, or do we allow manual overrides so we don’t have to code every single  edge case?  Can we think of something that would save development time, and still meet the overall business goal (in a different way from what was originally expected)?

If you are using good earned value tracking you should know, within the first quarter of the project, whether you are likely to run out of time at the end.  Once you find that out, immediately start seizing all opportunities to simplify the remaining 75% of the project.  Because you have good earned value tracking, you can justify the simplifications to your stakeholders.  The aim is to deliver all of the planned business benefits, just with simpler implementations than might have been originally expected.
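
To make that early warning concrete, here’s a minimal sketch of the simplest possible extrapolation. It’s my own illustration – the function, dates and numbers are hypothetical, and it isn’t taken from any of the articles or tools linked below.

```python
# A minimal sketch of forecasting the finish date from progress so far.
# Assumption: future progress continues at the average rate achieved to date.
from datetime import date, timedelta

def projected_finish(start: date, today: date, percent_complete: float) -> date:
    """Extrapolate the finish date from the progress (EV) achieved so far."""
    if percent_complete <= 0:
        raise ValueError("need some completed work before forecasting")
    elapsed_days = (today - start).days
    total_days_at_current_rate = elapsed_days * 100.0 / percent_complete
    return start + timedelta(days=round(total_days_at_current_rate))

# Hypothetical 18-month project: a quarter of the calendar gone, 20% of the work done.
start, planned_finish = date(2012, 1, 1), date(2013, 7, 1)
forecast = projected_finish(start, today=date(2012, 5, 15), percent_complete=20)
print(forecast, ">" if forecast > planned_finish else "<=", planned_finish)
# The forecast lands months past the planned finish – time to start simplifying.
```

On a real chart you would do this graphically, by extending the progress line’s trend, but the arithmetic behind that trend line is no more complicated than this.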

We’re using a variant of this strategy on my current project.  We built the highly-used parts of the system first, taking a lot of care with their appearance and usability. The second half of the project consists of functionality that is much less commonly used, so here usability and appearance are much less important. (If it takes a user a few extra minutes to do something, it doesn’t really matter if they only do that thing a few times each year.) So for this second half of the project, we have consciously shifted our design approach away from ease of use and towards simplicity of implementation.  Because we are using earned value-like tracking, we can justify this change of approach to users and management.

Strategy 4: Split each feature (or user story) into essential and nice-to-have parts

This is a refinement of the previous strategy. Right from the start of the project, you split features/user stories into two pieces: an essential minimum piece, which you implement early, and nice-to-have embellishments (such as advanced data validation or visual styling), which you defer to the tail end of the project.  If you run out of time, you drop some of the embellishments from the tail, and still deliver a working system with the full scope of capability/functionality.

Strategy 5: Make multiple passes over each story, doing the basics first and then improving it later

Similar to Strategy 4, but you may “visit” a given user story three or more times within the project, instead of just twice as in Strategy 4.  I like this in theory, but in practice I think it’s too hard to use earned value or burn-chart tracking with this strategy.  In Strategy 4, by contrast, I feel that earned value remains (just) feasible.

[Tim Wright’s comment, below, gives more details on how this strategy can be done.]

Summary

The last three strategies are all variations on a theme. Within a single project, you may use several of them, and maybe also resort to strategy 2 for a few user stories.

I recently heard the phrase “value management” to describe the work of deciding not only what to build, but also how simply or thoroughly to build it. The aim is to meet the business goals with the optimal expenditure of effort – i.e. do what needs to be done, without overspending on superfluous details.

Further Reading

All of the following are excellent.

Alistair Cockburn’s Trim the Tail.  A rich explanation of the theory and practice of strategies 4 and 5, with significant additional benefits in risk management.

Alistair’s list of related strategies.

Jeff Patton’s concept of Feature Thinning (aka Managing Scale): Jeff’s a leading practitioner of Strategies 3, 4 & 5. See: Finish on time by managing scale, Difference between incremental and iterative, and Accurate estimation = red herring. Jeff has often used these techniques on fixed-scope, fixed-price projects.

A description from an agile company called Atomic Object of how they operate with fixed budget and controlled (rather than fixed) scope: here and here.

Martin Fowler’s Scope Limbering

The opening section of my own agile earned value article (pdf) has more info on why fixed scope is a valid option in agile.

I’ve also posted a summary of estimation tips for agile projects.

Key Resources

Here’s a brief summary of the key Earned Value resources on this blog:

  • Earned Value in two sentences: Earned Value in a Nutshell
  • Introduction, from an Agile perspective: Agile Charts
  • Introduction, from a classic Earned Value Management (aerospace/DoD) perspective: Software Tech News article (was also re-printed in the PMI’s Measurable News)
  • A comprehensive 30-page article, compatible with both the agile and traditional approaches to Earned Value.
  • Video: “An animated graphical introduction to Earned Value”. (Update, 2013: sorry, I never got around to posting this one.  It was supposed to be a video of the talks I gave at NZCS events in 2010.  Let me know if you’re interested, and I’ll see what I can do.)
  • “Starter kit” spreadsheet (released with the kind permission of my former employer): Starter Kit
  • Important guidelines for successfully using Earned Value: Rules of the Green Line

Finally, note that none of this is necessarily new or unfamiliar.  Some good project managers do something similar almost intuitively, but often with less graphical display. However, I think that many projects, both agile and traditional, slip unknowingly into weaker forms of progress tracking that are dangerous and misleading – resulting in nasty surprises late in the project. The lite earned value approach described on the above pages is the best way I know of to avoid such late surprises.  (Especially if you do the risky user stories/features in the first third of the project, as per Alistair Cockburn’s Design as Knowledge Acquisition.)

Converting Apples to Oranges

There are two common errors when forecasting the final cost of a project.  One is to compare actual cost with planned cost.  The other is to compare actual progress with planned progress.  Both are wrong. 

Earned Value teaches us that the only valid measure is to compare actual cost with actual progress.   This may seem a bit like “comparing apples with oranges” – we seem to be comparing things that are not the same.  The trick is, before we compare them, we convert them both to the same numerical units.   That’s what makes the comparison possible and enables all the predictive goodness of EVM.
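
As a concrete (and entirely hypothetical) illustration of what “the same numerical units” means, here’s a minimal sketch: express spend as a fraction of the budget, express progress as a fraction of the scope, and only then compare the two. The names and numbers below are my own and not from any real project.

```python
# A minimal sketch: compare actual cost with actual progress, in the same units.
def estimate_at_completion(actual_cost: float, budget: float,
                           points_done: float, points_total: float) -> float:
    """Forecast the final cost by comparing actual cost with actual progress."""
    cost_fraction = actual_cost / budget             # money used, as a fraction of budget
    progress_fraction = points_done / points_total   # work earned, as a fraction of scope
    cpi = progress_fraction / cost_fraction          # cost performance index (EV / AC)
    return budget / cpi                              # standard EAC = BAC / CPI forecast

# Hypothetical numbers: 40% of the budget spent, but only 30% of the work done.
print(estimate_at_completion(actual_cost=400_000, budget=1_000_000,
                             points_done=120, points_total=400))
# -> ~1,333,333: trending roughly a third over budget
```

Comparing the spent 40% with a planned 40%, or the earned 30% with a planned 30%, would tell you nothing about the final cost; only the cost-versus-progress pairing does.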

(Thanks to Glen Alleman for inspiring this brief post, with his comments here).

Rules of the Green Line

I usually draw Earned Value charts with the cost (AC) line in red, and the progress (EV) line in green. In this post, I’m going to outline some basic rules for getting a good “green line”.  The recommendations in this post are for people doing “lite” Earned Value as described in my posts and live demo.

The basic principle

When you’re part way through the project, you want the green line to give an accurate trend.  That way, you can use it for planning and making decisions.  If it were significantly inaccurate, it wouldn’t be much use to you.

Actions

So, how do you make sure that it is as accurate as (practically) possible?  Here are suggestions that I’ve come up with, after about 5 years with this style of “lite” Earned Value.  These suggestions are specific to software development projects using agile(ish) processes.

  1. Aim for an approximately linear project structure.  In other words, try to have each month of the project contain approximately the same mix of design, build and test.  Don’t load all the design into the front of the project (waterfall style), because that undermines the predictive power of EV.  EV predictions work best when the nature of the work is roughly the same in all stages of the project – something that is (approximately) achievable with agile processes.
  2. Know your test strategy.  From an EV perspective, the nicest approach is to achieve the agile ideal of spreading the testing work uniformly throughout the project. But I recognise that some projects, to a lesser or greater degree, do need a User Acceptance Testing phase (with real users) at the end.  That can be dangerous from an EV perspective, because it offers a temptation to consider things “done” even when their quality is not known.  In the worst case, this can get ugly: you think you’ve finished everything, and both your red and green lines are at 90-something percent.  But then you start UAT – which unearths lots of problems. As you fix the problems your cost keeps on rising, up and up way past 100%.  Ouch!  That’s why it’s much, much, much better to do a first round of UAT as you go.  Then, before go-live, you may do a second round, but if the first round was taken seriously, the second round shouldn’t find many problems or create many surprises.  (By the way, regardless of how you do formal testing with real users, the testers in your team should always test as you go.)
  3. Fix defects as fast as you find them.  As per the previous point, it’s best to test as you go.  But that’s not enough: to get the benefit you must also fix defects promptly – don’t test as you go but then leave resolution of the defects to the end!  An approach I like is to say that we are aiming to have no more than x defects open at any given time.  Some agile projects set x near zero.  For some projects – e.g. an 18-month project with go-live at the end – I’m comfortable with values of x around 30, but not much higher.  Regardless of the value of x, setting a value for it basically forces your “fix rate” to approximately equal your “find rate” – which is what we need for our earned value tracking to be trustworthy.
  4. A task only counts towards the green line when it’s complete.  Partially complete tasks don’t count at all.  This is the most conservative approach and therefore, I strongly believe, the most realistic.  If, as discussed above, you must have a UAT phase at the end, you should still do at least some testing at the time that you develop the feature, so you know whether it’s “done”.
  5. The green line is always based on estimated task sizes.  You must NEVER tweak the Earned Value numbers for completed tasks to match how long they actually took.  That completely screws things up, because now your past is measured in “actuals” but your future is measured in “estimates” – so you lose all ability to predict the future from the past. To reiterate, the green line must be based on estimated task sizes (see the sketch after this list).
  6. If you add new scope during the project, make the estimated sizes for new tasks consistent with tasks already in the project.  E.g. say you are measuring task sizes in “points”, and you are thinking of adding a 10 point task. When you add it, do a little sanity check to see, “Is it really the same size as the 10 point tasks that we already have?”.  You want all “10 point tasks” to be roughly the same size, regardless of when you added them to the project.
  7. If you split a task, reallocate its original points to the sub-tasks. For instance if you divide an epic user story into several smaller ones, make sure that the total point value of the small stories equals the value of the original epic.  I.e. chopping up a task mustn’t change the total number of points in the project.
  8. Don’t worry about “derived requirements”; don’t even track them.  Derived requirements are those (hopefully) little things that are not stated in the planned scope, but which turn out to be essential to implementing it.  I often visualise them as little gaps in the stated requirements – implicitly and unavoidably part of the project, but not foreseen in our planning. For Earned Value purposes, the easiest way to handle these is to not track them at all.  Just implement them as necessary, without entering them in your Earned Value system/tool/spreadsheet. For a discussion of why this makes sense, see my STN and Encyclopedia articles.
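
Here is the sketch referred to in rule 5 – a minimal illustration (my own, with hypothetical names; it is not from any particular tool) of how rules 4, 5 and 7 interact: only complete tasks earn value, the value earned is always the estimated size, and splitting a task must not change the project’s total points.

```python
# A minimal sketch of rules 4, 5 and 7 for the green line.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    estimated_points: float   # rule 5: never overwritten with actuals
    complete: bool = False    # rule 4: partially complete tasks earn nothing

def green_line_percent(tasks: list[Task]) -> float:
    """Earned Value as a percentage of total estimated scope."""
    total = sum(t.estimated_points for t in tasks)
    earned = sum(t.estimated_points for t in tasks if t.complete)
    return 100.0 * earned / total

def split_task(tasks: list[Task], name: str, pieces: list[tuple[str, float]]) -> None:
    """Rule 7: replace a task with sub-tasks whose points sum to the original."""
    original = next(t for t in tasks if t.name == name)
    assert abs(sum(p for _, p in pieces) - original.estimated_points) < 1e-9, \
        "splitting a task must not change the project's total points"
    tasks.remove(original)
    tasks.extend(Task(piece_name, points) for piece_name, points in pieces)

# Example: splitting an epic doesn't move the green line by itself.
backlog = [Task("login", 5, complete=True), Task("reporting epic", 10)]
split_task(backlog, "reporting epic", [("summary report", 6), ("csv export", 4)])
print(green_line_percent(backlog))   # still 5 / 15 = 33.3%
```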

I hope you find these suggestions useful.  “Lite” approaches to Earned Value are still evolving, and are not yet well documented.  So, suggestions for improving this list are most welcome.

 

Encyclopedia Article

This page provides shortcuts to my Earned Value article in the Encyclopedia of Software Engineering.

  • Link to online copy. As of 27 June 2011, you can purchase just the EV article (without having to buy the whole encyclopedia). [Update, March 2013: the publishers seem to have reverted to selling it only as part of the larger work.  I’m corresponding with them to find out what happened to the single-article purchase option.]
  • Hard copy of the whole encyclopedia (about 120 articles on other topics, in addition to my one on Earned Value.  Each article is about 30 pages and all are peer reviewed by experts in the relevant field.)

I’m sorry but I cannot provide copies of the article here, since the copyright is held by the publisher.  By the way, I make no money from the sale of the article, so I hope you’ll consider me reasonably unbiased when I say that it’s well worth reading.  (Well, OK, about as unbiased as an author can be about his own work!)

For those who’ve seen my Earned Value talk with the animated interactive charts: this article includes some important details which I could not fit into the talk – in particular how to obtain objective “progress” numbers, and a discussion of how Earned Value relates to the critical path on software projects.

The article also forms a “bridge” between the purely graphical approach, which I use throughout my talk, and the numerical/mathematical approach which is the norm in the EV community.  As such, I believe it is one of the few pieces of writing on Earned Value which brings together all four of the following viewpoints:

  • Agile project management
  • “Traditional” (non-agile) project management
  • An intuitive graphical “feel” for the subject
  • The standard mathematical formulae of Earned Value

I hope you find it to be balanced, reasonably comprehensive and, most importantly of all, useful.

Recent Earned Value Posts of Interest

Glen Alleman posted on the difficulty (or otherwise) of the maths in EVM, and several of us commented.  He  followed that with an example in which the maths is very simple, which I quite liked as an introduction to Earned Value.

Marcin Niebudek wrote an article in which he adds a budget line to an agile burn chart, measuring both in percentages.  Nice to see I’m not the only one doing that. (Even if he does draw the chart up the "wrong" way ;-)

Updates

June 2011: another interesting one from Glen Alleman, on the relationship between the ANSI standard criteria and simple/agile EV.

July 2011: A nice burn chart example from the folks at Atomic Object.  It doesn’t include the cost line, and I’d prefer if it did, but it does offer a clean solution to some of the challenges of charting change.

Speaking on Earned Value at NZ Computer Society Conference

I’ll be speaking at the New Zealand Computer Society’s 50th Anniversary Conference, on the topic of Earned Value Management.

I’m looking forward to being part of an interesting conference, and hopefully helping to lift the profile of EVM in New Zealand.

Link to presentation abstract

(The abstract’s reference to “Number 8 wire” may be unclear to overseas readers :-)  In this part of the world, the phrase simply means “New Zealand ingenuity”.)

Earned Value in a Nutshell

I’ve been looking for a way to describe the “essence” of Earned Value Management (EVM).  How can I describe the core of what EVM is about – without resorting to an impenetrable jungle of acronyms?

This is particularly important when describing it to people outside EVM’s traditional strongholds of defense and aerospace.  Outside those areas, EVM is under-utilised, and I suspect much of the reason is due to its apparent complexity.  I’ve been an EVM fan for about 5 years now, and I still come across unfamiliar acronyms.  If EVM is to be more widely used, it has to be presented in a way that is accessible to a wide audience.

Here’s what I came up with:
Continue reading Earned Value in a Nutshell

Does our intuition fail us?

Why do so many projects seem to be OK, but, when you get near the end, they turn out not to be OK after all?  Everyone thought you were going to make the target date, but at the last minute… well, no, you couldn’t.

I’d like to suggest an answer.  Let’s illustrate it with an example.  Consider an agile project that’s been estimated at 375 points in size. (To my non-agile readers, “points” are just a relative measure of task/feature size.  So for instance, a 20 point feature is estimated to require twice as much work as a 10 point one.  In this project, all the features add up to 375  points).

Also, imagine that our sample project is scheduled to take 12 weeks and we are now halfway through. After 6 weeks, the team has completed 132 points’ worth of work.   The team leader reports that they are a little behind, since by this time they should have finished about 187 points (half of 375).  After speaking with everyone on the team, he is confident that they can make up the lost ground.

Question: how much faster will they have to work, if they are to finish the project on time? Continue reading Does our intuition fail us?
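
For readers who want to check their own answer against the raw arithmetic before clicking through (this is just my back-of-the-envelope working, not the linked post’s discussion):

```python
# Back-of-the-envelope: compare the velocity needed with the velocity achieved so far.
points_total, points_done = 375, 132
weeks_total, weeks_elapsed = 12, 6

velocity_so_far = points_done / weeks_elapsed                                    # 22 points/week
velocity_needed = (points_total - points_done) / (weeks_total - weeks_elapsed)   # 40.5 points/week
print(f"required pace is {velocity_needed / velocity_so_far:.0%} of the current pace")
# -> 184%, i.e. the team would have to work roughly 84% faster
```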

Agile Charts Part II – The EVM Perspective

I was recently invited to write an article on agile-style EVM charts for Software Tech News, a publication of the US Department of Defense.  The audience was the traditional EVM community within the DoD, so I wrote the article from a traditional EVM perspective.  By way of background, EVM is governed by the ANSI/EIA-748 standard within the DoD.

Thanks to the generous re-print policy of Software Tech News, here is a copy of the article:

EarnedValueForAgileProjects.pdf

It was also re-printed by the PMI, in this 2009 issue of Measurable News.

(If you are looking for the same material, but addressed to an agile audience rather than an ANSI/EIA-748 audience, see ‘Agile Charts’.)