September 29, 2010 | John Rusk

I usually draw Earned Value charts with the cost (AC) line in red and the progress (EV) line in green. In this post, I'm going to outline some basic rules for getting a good "green line". The recommendations in this post are for people doing "lite" Earned Value as described in my posts and live demo.

The basic principle

When you're part way through the project, you want the green line to give an accurate trend. That way, you can use it for planning and making decisions. If it were significantly inaccurate, it wouldn't be much use to you.

Actions

So, how do you make sure that it is as accurate as (practically) possible? Here are the suggestions I've come up with after about 5 years with this style of "lite" Earned Value. These suggestions are specific to software development projects using agile(ish) processes.

Aim for an approximately linear project structure. In other words, try to have each month of the project contain approximately the same mix of design, build and test. Don't load all the design into the front of the project (waterfall style), because that undermines the predictive power of EV. EV predictions work best when the nature of the work is roughly the same in all stages of the project – something that is (approximately) achievable with agile processes.

Know your test strategy. From an EV perspective, the nicest approach is to achieve the agile ideal of spreading the testing work uniformly throughout the project. But I recognise that some projects, to a greater or lesser degree, do need a User Acceptance Testing phase (with real users) at the end. That can be dangerous from an EV perspective, because it offers a temptation to consider things "done" even when their quality is not known. In the worst case, this can get ugly: you think you've finished everything, and both your red and green lines are at 90-something percent. But then you start UAT – which unearths lots of problems.
As you fix the problems, your cost keeps on rising, up and up, way past 100%. Ouch! That's why it's much, much, much better to do a first round of UAT as you go. Then, before go-live, you may do a second round, but if the first round was taken seriously, the second round shouldn't find many problems or create many surprises. (By the way, regardless of how you do formal testing with real users, the testers in your team should always test as you go.)

Fix defects as fast as you find them. As per the previous point, it's best to test as you go. But that's not enough: to get the benefit, you must also fix defects promptly – don't test as you go but then leave resolution of the defects to the end! An approach I like is to say that we are aiming to have no more than x defects open at any given time. Some agile projects set x near zero. For some projects – e.g. an 18-month project with go-live at the end – I'm comfortable with values of x around 30, but not much higher. Regardless of the value of x, setting a value for it basically forces your "fix rate" to approximately equal your "find rate" – which is what we need for our earned value tracking to be trustworthy.

A task only counts towards the green line when it's complete. Partially complete tasks don't count at all. This is the most conservative approach and therefore, I strongly believe, the most realistic. If, as discussed above, you must have a UAT phase at the end, you should still do at least some testing at the time that you develop the feature, so you know whether it's "done".

The green line is always based on estimated task sizes. You must NEVER tweak the Earned Value numbers for completed tasks to match how long they actually took. That completely screws things up, because now your past is measured in "actuals" but your future is measured in "estimates" – so you lose all ability to predict the future from the past. To reiterate, the green line must be based on estimated task sizes.
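The two rules above (count only completed tasks, and always at their estimated size) can be sketched in a few lines of code. This is my illustration, not the author's tool; the task fields and names are assumptions for the example.

```python
# Sketch of a "green line" (EV %) calculation under the rules above.
# Each task has an estimated size in points and a done flag; the names
# and structure here are illustrative assumptions, not from the post.

def earned_value_percent(tasks):
    """EV% = estimated points of COMPLETED tasks / total estimated points.

    Partially complete tasks contribute nothing, and completed tasks
    count at their original ESTIMATE, never their actual cost.
    """
    total = sum(t["points"] for t in tasks)
    earned = sum(t["points"] for t in tasks if t["done"])
    return 100.0 * earned / total

tasks = [
    {"name": "login screen",  "points": 5,  "done": True},
    {"name": "report export", "points": 10, "done": True},
    {"name": "audit trail",   "points": 10, "done": False},  # in progress: counts 0
]
print(earned_value_percent(tasks))  # 15 of 25 points earned -> 60.0
```

Note that actual hours spent never appear in this function: they belong on the red (AC) line only, which is what lets you compare the two lines and trust the trend.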
If you add new scope during the project, make the estimated sizes for new tasks consistent with tasks already in the project. E.g. say you are measuring task sizes in "points", and you are thinking of adding a 10-point task. When you add it, do a little sanity check: "Is it really the same size as the 10-point tasks that we already have?" You want all "10-point tasks" to be roughly the same size, regardless of when you added them to the project.

If you split a task, reallocate its original points to the sub-tasks. For instance, if you divide an epic user story into several smaller ones, make sure that the total point value of the small stories equals the value of the original epic. I.e. chopping up a task mustn't change the total number of points in the project.

Don't worry about "derived requirements"; don't even track them. Derived requirements are those (hopefully) little things that are not stated in the planned scope, but which turn out to be essential to implementing it. I often visualise them as little gaps in the stated requirements – implicitly and unavoidably part of the project, but not foreseen in our planning. For Earned Value purposes, the easiest way to handle these is to not track them at all. Just implement them as necessary, without entering them in your Earned Value system/tool/spreadsheet. For a discussion of why this makes sense, see my STN and Encyclopedia articles.

I hope you find these suggestions useful. "Lite" approaches to Earned Value are still evolving, and are not yet well documented. So, suggestions for improving this list are most welcome.