January 22, 2010 | John Rusk | 4 Comments

Why do so many projects seem to be OK, but, when you get near the end, turn out not to be OK after all? Everyone thought you were going to make the target date, but at the last minute… well, no, you couldn’t. I’d like to suggest an answer.

Let’s illustrate it with an example. Consider an agile project that’s been estimated at 375 points in size. (To my non-agile readers, “points” are just a relative measure of task/feature size. So, for instance, a 20 point feature is estimated to require twice as much work as a 10 point one. In this project, all the features add up to 375 points.) Also, imagine that our sample project is scheduled to take 12 weeks and we are now half way through. After 6 weeks, the team has completed 132 points’ worth of work. The team leader reports that they are a little behind, since by this time they should have finished about 187 points (half of 375). After speaking with everyone on the team, he is confident that they can make up the lost ground.

Question: how much faster will they have to work if they are to finish the project on time? Will they have to work 10% faster than they have so far? 20% faster? 30% faster? Get your own gut feel for the answer, then scroll down.

The answer is eighty four percent. To deliver on time, the team has to produce 84% more output, per week, than they did in the past. (So far they have averaged 132 / 6 = 22 points per week; to finish the remaining 243 points in the remaining 6 weeks they need 40.5 points per week, and 40.5 / 22 ≈ 1.84.) That’s almost double.

Most teams, I suspect, don’t realise just how much faster they need to go. We look at the status and think, “Well, we’re a little behind, but it’s not too bad.” We might even crunch some numbers. In the example above, the team should have completed 50% of the work, but they have only done 35%. You look at the figures, see 50 and 35, do the subtraction, and end up with 15%. That doesn’t seem too bad. 15% is not very much, and seems well within our abilities to compensate for – perhaps with a little extra work, and good intentions to “work smarter”. But we’re fooling ourselves. We are forgetting two things:

The gap is not 15% of the project; it is 15 percentage points out of the 50 percentage points we are supposed to have completed by now. 15 / 50 is actually a 30% deficit. So we are further behind than we think.

Not only are we further behind than we think, but catching up is harder than we expect. We look at the remainder of the project and think, “We just have to go a little faster than we planned.” That’s true enough, but we forget that “faster than planned” really means “much, much faster than we have actually been going”. After all, we do not have to achieve an improvement relative to the plan, we have to achieve an improvement relative to reality!

It is these two factors which mean that it takes an 84% speed-up to compensate for a “15%” gap.

An example to do in your head

In the example above, I deliberately used numbers that were hard to manipulate in your head. (Well, at least, they were hard to manipulate in mine 😉) So here’s a really simple formulation of a similar problem, just so you can convince yourself that I’m not talking rubbish.

Again, imagine a project at the half-way point. This time, the team has done exactly one third of the work, i.e. 33%. Think of it this way: in their progress to date, they have done one third of the work in half the time. To finish on schedule, they must now do two thirds of the work in the other half of the time. They must do twice as much work in the same amount of time – a 100% increase in speed.

Conclusion

Teams can’t speed up by 84% or 100%.
It just doesn’t happen. (At least, not when holding costs constant – and generally, not even when spending extra.) When projects fall behind, our intuition lets us down. We have to rely on sound data and analysis instead. I believe the best answer is to use Earned Value analysis. A simple Earned Value chart like this goes a long way towards correcting our faulty assumptions.
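For readers who want to check the arithmetic themselves, here is a minimal sketch in Python (not part of the original post) that reproduces the 84% and 100% figures from the examples above. The function name and structure are purely illustrative.

```python
# Quick check of the arithmetic in the post: 375 points, 132 done after
# 6 of 12 weeks. The helper name is illustrative only.

def required_speedup(total_points, points_done, weeks_elapsed, weeks_total):
    """How much faster (as a fraction of actual velocity so far) the team
    must work to finish the remaining points in the remaining weeks."""
    actual_velocity = points_done / weeks_elapsed              # points/week so far
    remaining_points = total_points - points_done
    remaining_weeks = weeks_total - weeks_elapsed
    required_velocity = remaining_points / remaining_weeks     # points/week needed
    return required_velocity / actual_velocity - 1

# The main example: prints ~0.84, i.e. an 84% speed-up is needed.
print(required_speedup(375, 132, 6, 12))

# The "in your head" example: one third done at half-time -> 1.0, a 100% speed-up.
print(required_speedup(300, 100, 6, 12))
```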
I would go even further. In the vast majority of cases, teams can't really speed up even by 20%. That almost never happens in healthy work environments. Usually the problem is estimating. Estimates are usually too small, but that is to be expected. We rarely make the effort to make them realistic, either with reliable velocity measurement or with proper statistical analysis of past records. Now, if our estimates are wrong in the first half of the project, they aren't going to be any better in the second half. The team will keep losing ground at the same rate. As a side note: I know teams which can speed up by 100% or more in terms of productivity, but that is nothing you'd call a healthy work environment. It is actually very poor management which created an incentive to produce very poor results in the first part of the project, so when people get back to their normal speed, velocity skyrockets. Anyway, that's a subject for a different story.
I agree, in that speed-ups of about 20% or more are probably impossible if the team is "healthy" to start with.
A colleague of mine suggested a technique that’s worked very well on a current project. Borrowing from cricket, our project “dashboard” includes a “Run Rate” graph. The Run Rate graph is a simple column chart produced by Excel. (Pity I can’t show a picture here.) The first column (“Estimated”) is the number of hours per day of effort that we assumed when we took our effort estimates and turned them into a timeline. The second column (“Actual”) is the number of hours per day of actual effort we’ve expended on the project (so this doesn’t count production support, completing timecards, team meetings, etc.). The third column (“Required”) is the number of hours per day of effort we need if we are to hit the target date. It works really well – easy to explain to anyone, and easy for them to interpret reliably.
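Here is a minimal sketch of that Run Rate idea, using made-up figures and matplotlib in place of the Excel column chart (nothing here comes from the commenter's actual project):

```python
# A rough sketch of the "Run Rate" dashboard described above: three bars for
# estimated, actual, and required project-effort hours per day.
# All figures are hypothetical; matplotlib stands in for the Excel chart.
import matplotlib.pyplot as plt

days_elapsed = 30
days_remaining = 20
estimated_hours_per_day = 24        # rate assumed when estimates became a timeline
actual_hours_so_far = 600           # project effort only (no support, timecards, meetings)

total_estimated_hours = estimated_hours_per_day * (days_elapsed + days_remaining)
actual_rate = actual_hours_so_far / days_elapsed
required_rate = (total_estimated_hours - actual_hours_so_far) / days_remaining

plt.bar(["Estimated", "Actual", "Required"],
        [estimated_hours_per_day, actual_rate, required_rate])
plt.ylabel("Effort hours per day")
plt.title("Run Rate")
plt.show()
```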