Category Archives: Estimation and Pricing

Estimation of effort, and costing/pricing, for software development projects

Creating an Agile Contract

Writing an agile contract, without changing your procurement process, is like forcing a square peg into a round hole.  As an industry, we’ve tried to do so for more than a decade, and we have to accept it doesn’t work.  We need a revised procurement process.

This post outlines what I believe such a process should look like. It leverages decades-old techniques to provide trustworthy rigour.  It aligns with the realities of agile projects and provides significant protection to both parties.

The flavour of agile assumed here is “Target-Driven Agile”.  Target-Driven Agile is the flavour that sits most comfortably with corporate purchasers, but it’s also the hardest to create contracts for. See my earlier post on flavours of agile for background.

This post is long. I didn’t have time to make it shorter.  Sorry ;-)

Step A: Business case for initiation

This first step in the procurement process is entirely internal to the purchasing organisation.  Do we have a rough idea for a project? And do we know, very roughly indeed, what it might cost?

At this stage, it’s important not to lock in a budget. Doing so would make a mockery of the pricing and estimation steps that follow.  Nevertheless, we still need to arrive at a very rough idea of what the project might cost, in order to decide whether it’s worth proceeding any further.  The best answer I know of is Reference Class Forecasting: don’t set the price of this project, but instead find the costs of other similar projects that have already been completed.  Reference Class Forecasting is:

  • Recognised as the safest approach to early-stage estimation
  • A clear signal to everyone that, while we have prices of other projects that are similar, we have not finalised the price of this project yet.
  • Arguably more achievable, at this stage, than any other approach to cost estimation. Other approaches require detail, and personnel, that the project doesn’t have yet.
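Under the hood, Reference Class Forecasting is just distribution arithmetic over past outcomes. Here’s a minimal sketch, assuming you can assemble a list of final costs from comparable completed projects (the function name and all the numbers are illustrative, not from any real dataset):

```python
# Reference Class Forecasting, minimally: rather than estimating this
# project bottom-up, look at the distribution of actual final costs of
# similar completed projects. (Illustrative numbers, not real data.)

def reference_class_range(costs, low_pct=0.2, high_pct=0.8):
    """Return a rough (low, high) cost range from past project costs,
    using simple linear-interpolated percentiles."""
    ranked = sorted(costs)
    def pct(p):
        i = p * (len(ranked) - 1)
        lo, hi = int(i), min(int(i) + 1, len(ranked) - 1)
        return ranked[lo] + (ranked[hi] - ranked[lo]) * (i - lo)
    return pct(low_pct), pct(high_pct)

past_costs = [180_000, 250_000, 320_000, 410_000, 650_000]  # hypothetical
low, high = reference_class_range(past_costs)
print(f"Rough budget range: {low:,.0f} to {high:,.0f}")
```

With only a handful of reference projects the percentiles are rough, but as noted above, rough is enough to decide whether the project is worth taking to the next step.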

Where do you get the data from? Some customer organisations might have already completed similar projects in-house; while others might be able to ask non-competing organisations who have done similar work. The latter is particularly likely to be useful to public-sector organisations, since they don’t compete with their counterparts in other jurisdictions.

If you can’t find any data on previous projects, then it becomes more difficult.  Instead of Reference Class Forecasting you may have to resort to other approaches, such as expert judgement from external consultants. (Take care not to use consultants who may want to bid for the actual work.)  On the other hand, if you can find some data, but not enough for any reliable statistical analysis, my personal view is that you’ve probably got enough.  Yes, with a limited dataset you can’t use all parts of the Reference Class Forecasting technique, but I suspect you’ll still get enough information to decide whether to proceed to the next steps, and that’s all you need.

It’s also important not to lock in the scope at this stage. Go for a very brief high-level description of the scope rather than traditional “requirements definition”.

Step B: Vendor Selection

The second step is about finding a vendor, and agreeing a contract with them.  This is where this suggested process differs most from old-school procurement.  Here vendor selection is broken down into several sub-steps to allow discovery and collaboration in the definition of scope, price and contract.  There are three benefits of the discovery and collaboration: start as we mean to go on (in terms of agile culture), produce a wisely-defined contract, and reduce project risks to acceptable levels.

Step B-1:  Use QBS to create a vendor shortlist

Use Qualifications-based Selection (QBS) to create a ranked short-list of vendors best able to deliver your project.  QBS is a rigorous process, well proven in architecture and civil/structural engineering.  Under QBS you’re allowed to ask any question except price.  See my IITP Newsline article for an explanation of QBS.

Following the QBS process, proceed to Step B-2 with the top-ranked vendor. Do this without discussing price. At this point, don’t even mention the results of your Reference Class Forecasting, for fear of warping the process through the anchoring effect.

Step B-2: Reality check before investing in scoping

If your organisation is an experienced purchaser of software, and you found good data in your Reference Class Forecasting, I suggest you skip this step and proceed to Step B-3.  For you, this step B-2 won’t add any value, and may warp or complicate the process.

But if you weren’t able to find much data in your Reference Class Forecasting, or if your organisation is inexperienced in procuring software, then before proceeding any further I think it’s wise to make sure that your cost expectations are not totally unrealistic. It’s important to make this check in a way that stays true to the spirit of QBS.  So keep the numbers vague and only do this after you’ve identified the top-ranked vendor.  Ask them just one question, phrased something like this: “We want to make sure our expectations are realistic. Based on what you know so far, do you think this project will be in the tens of thousands, hundreds of thousands, or millions?”  The vendor may answer, “Probably in the hundreds of thousands or low millions”.   If that’s in line with your Reference Class Forecast, then proceed to the next step without any further ado. But if you thought it was going to cost only 20k, then you’ve discovered useful information!  Someone’s expectations are out of whack!  Find out whether it’s you, or the vendor, by discussing your respective expectations and the reasons for them.  Following these discussions, you’ll need to decide whether to proceed to the next step; to cancel the project; or, following the rules of QBS, to irreversibly abandon your first-choice vendor and try your second choice.  But remember, you’re not negotiating price here, you’re just detecting whether it’s sensible to invest in the next step.

Step B-3: Engage with preferred vendor to define scope

This is about having both parties involved in the collaborative work of defining project scope. By involving the vendor, you’re able to leverage their expertise. And after all, isn’t leveraging external expertise one reason why you outsourced in the first place?  Involving the vendor also increases their understanding of the problem at hand, setting the scene for success in the stages that follow.

Collaboratively defining the scope is not trivial. So the vendor should not be expected to do it for free.  Instead, I’d suggest Time and Materials at a heavily discounted rate.  Why the discount? Because the vendor will make their fair profit later, in the main project, and from the customer’s point of view, the discount minimises financial exposure at this stage.

During this step, as the vendor learns more about the customer’s business, and the customer learns more about what the vendor can do, both parties should be encouraged to suggest changes to the scope – especially simplifications.

Scope definition should be at an appropriate level for agile projects, as I described here.  The outputs of this Step B-3 are as follows. They can all be incorporated in, or referenced by, the final contract.

    1. Scope, documented as a simple list
    2. Sizing points assigned to all scope items. (The points here are simply for future reference in this project. They do not, and cannot, form any baseline for comparison to other projects, because agile points are not comparable between projects.)
    3. Brief documentation of the likely architectural approach

Step B-4: Negotiate contract with preferred vendor

Having been involved in defining the scope, the vendor now has a good understanding on which to base a price.  They should proceed to do so now, preparing an estimated price for the scope that was itemised in the previous step.

Once they have done so, they should share that price with the customer.  If the vendor’s price is higher than the customer is comfortable with, the customer should negotiate with the vendor to seek agreement. The customer may:

  • Share, at last, the results of their own Reference Class Forecasting. Relatively objective data such as this tends to be a good negotiating tool.
  • Ask the vendor to seek more cost-effective design options.
  • Agree with the vendor to drop specific lower-priority elements from scope.

In addition to negotiating price, you also need to agree all the normal parts of a contract, such as ownership of intellectual property etc.  Five things deserve special attention, because they need to be different from traditional waterfall contracts:

  • A target cost pricing mechanism. Agree on a form of target-cost pricing. Why target cost? Because it gives both parties the security of overall parameters and boundaries, while still allowing a necessary degree of flexibility to support agility.
  • Timing of payments. Closely related to the target cost mechanism is the timing of payments. Will the timing be based on calendar dates, effort expended by the vendor, or progress made? If you choose the latter, I would strongly recommend that it be measured by percentage of total scope completed (measured in “story points”), rather than old-school milestones. Milestones pull the project in a waterfall direction, whereas paying by percentage of total scope completed retains the necessary agile freedom to adjust the order of development.
  • Progress reporting using Agile-style Earned Value. I suggest this should be mandated in the contract.  See Step C for how it should be used.
  • External advice. I suggest this should be mandated in the contract. See Step C for how it should be used.
  • Scope management approach. Active and continuous scope management is key to the success of an agile project. The team should actively thin features, and occasionally “fatten” others, based on what they learn during the project. These discussions and decisions cause numerous small changes to the scope. Without shared expectations of how these changes will be handled, this can become a contentious issue. Whatever your chosen approach, it’s necessary to retain enough flexibility and empowerment to allow the agile team to be agile, but also to have enough control to provide good project management and governance. I’d suggest the contract should specify that:
    • Feature thinning is to be a goal of the entire team, and that the entire team is to look for opportunities to do so in all their discussions with users.
    • If the revised features still align with the scope as documented (in the above-mentioned Simple List) then no further discussion or approval is required. But, in any instances where cost may be materially increased OR where thinned/dropped features will not meet the scope as (concisely) documented:
      • The team must seek agreement from one designated senior customer-side employee (the Product Owner). Verbal agreement is enough for small changes; email for big ones. If agreement is given, the team may proceed immediately with the change.
      • AND each month the Product Owner must provide a summary of the material changes to the Project Steering Committee (or its equivalent governance group). The Steering Committee may request further discussion or justification of the changes, and in rare cases may ask the Product Owner to reverse their decision (even if the team has already started, or finished!). Yes, there is a cost to such reversals. But, as long as you have a competent Product Owner, I contend that there would be a greater cost in having the team always wait to hear back from the Steering Committee before proceeding.
    • Work will be sequenced such that lower-benefit features will be left until the end. If time becomes scarce at the end of the project, they may be dropped, if the drop is approved through the above-mentioned Product Owner and Steering Committee process.
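To make the target-cost bullet above a little more concrete: one common form of target-cost contract shares any overrun or underrun between the parties at an agreed ratio (sometimes called “pain/gain share”). Here is a minimal sketch of the arithmetic, with a hypothetical 50/50 share ratio and illustrative numbers; it shows one possible mechanism, not a recommendation of specific terms:

```python
# Target-cost pricing, minimally: the parties agree a target cost up
# front; any underrun or overrun against it is shared at an agreed
# ratio. All numbers are hypothetical.

def target_cost_payment(actual_cost, target_cost, vendor_share=0.5):
    """Total paid to the vendor: actual cost, adjusted so the vendor
    carries vendor_share of any overrun (and keeps the same share of
    any underrun as a bonus)."""
    variance = target_cost - actual_cost      # positive = underrun
    return actual_cost + vendor_share * variance

# Underrun: target 1,000,000, actual 900,000 -> vendor is paid 950,000
# (the customer saves 50,000; the vendor earns 50,000 above cost).
print(target_cost_payment(900_000, 1_000_000))
# Overrun: actual 1,200,000 -> vendor is paid 1,100,000
# (each party absorbs 100,000 of the overrun).
print(target_cost_payment(1_200_000, 1_000_000))
```

Real target-cost contracts usually separate the vendor’s fee from cost, and often cap each party’s exposure; the sketch shows only the shared-variance idea.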

Step B-5: End of Vendor Selection

If everything is OK so far, sign the contract with the preferred vendor.  If no suitable agreement can be reached, re-start at Step B-2 with the second-choice vendor.  This option, to fall back to the second-choice vendor, is a key element of the QBS process.  It gives the first-choice vendor an incentive to be reasonable, and gives the customer protection against getting stuck with a dud vendor.

Falling back to the second-choice vendor should be rare, thanks to Steps B-1 and B-2. But, if it does happen, most of step B-3 will have to be repeated with the new vendor. That’s just the way it is.  The new vendor needs that engagement to reach sufficient understanding.  You can’t short-circuit it by asking them to simply read documents created by the first vendor.  Besides, you don’t like the first vendor any more!  So best not to ask the second one to rely on their work!

Step C: Execute the project, tracking always and correcting when necessary

Now that you have your contract, go ahead and execute the project. As discussed earlier in this series, one of the key protections in the agile process is rapid detection and resolution of issues.  So you need to make sure that’s happening.  I suggest two key mechanisms:

  • Earned-value-style burn charts. The burn chart should cover the whole project, not the current sprint only. (You can have a separate sprint chart too, if you like). The whole-of-project chart gives a lot of information to allow detection of problems. It should be updated regularly, e.g. weekly, and shared with both the team and the customer.
  • Independent expert review and advice. This is based on a suggestion from Phillip Lee (thanks Phillip!). Phillip’s original suggestion applied to very large government projects. For private-sector projects, and even public ones up to a few million dollars in size, I’d suggest something like this: select a person (or very small group) from outside both organisations to act as an external advisor. Have them visit the project at least monthly, look for the following things, and report back to both customer and vendor:
      • Earned value. Is the aforementioned whole-of-project burn chart being prepared in such a way that it fairly and correctly depicts the project?
      • Order of work, with regard to business value. Is the order of work in the project appropriate for on-going agile scope management?
      • Order of work, with regard to risk smoothing. Are risky items being appropriately timed, during approximately the first 2/3rds of the planned duration?
      • Business/User Acceptance Testing. If concurrent BAT/UAT is possible, is it being performed and are the tests sufficiently representative of real-world usage?
      • Deployments. The team should be regularly deploying with a realistic deployment process to a realistic environment (even if not production yet). Is that the case?
      • Is feature-thinning happening?
      • Meaningful retrospectives. To what extent are issues raised in retrospectives being dealt with? What is the advisor’s impression of whether the team feels free to raise concerns and break bad news, in retrospectives and/or in more private channels?
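The whole-of-project burn chart in the first bullet boils down to comparing two fractions: share of scope completed versus share of budget spent. A minimal sketch, assuming scope is measured in story points (all numbers illustrative):

```python
# Agile-style earned value, minimally: compare the share of total scope
# completed (in points) with the share of budget spent. If scope
# completion lags spend, the project is trending over budget.
# Numbers are illustrative only.

def earned_value_status(points_done, total_points, spent, budget):
    """Return (scope_complete, budget_spent) as fractions of the whole."""
    return points_done / total_points, spent / budget

scope, spend = earned_value_status(points_done=300, total_points=1000,
                                   spent=400_000, budget=1_000_000)
print(f"{scope:.0%} of scope done for {spend:.0%} of budget")
# 30% done for 40% spent: an early warning, caught while there's
# still time to respond.
```

A full burn chart plots these two quantities over time, sprint by sprint, but even the single comparison flags trouble early.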

If the external advisors are paid by the customer and turn up wearing suits, odds are no-one on the vendor side will trust them! So, I’d suggest that vendor and customer should split the advisor bill 50/50, and let that fact be known to the team. I’d also suggest that the advisors should act, and probably dress, relatively informally.  Ideally, the team should be as open with the advisors as they are with each other. After all, everyone is aiming for the same thing: a successful project.

It’s important that no-one is forced to comply with the advisors’ recommendations.  They’re just recommendations.  I recall being part of a very successful project that used external advisors.  We found that they identified very relevant problems, but that often the right solution, in our particular context, was somewhat different from what they suggested.


Any agile contract needs to allow these four types of learning to continue during the project:

  • What should we build?
  • How (technically) should we build it?
  • What will it cost?
  • How should we work together?

A traditional procurement process basically forces the first three of those questions to be answered before the project even begins, because they have to be answered in order to write a traditional contract.  The answers then get frozen into the contract, discouraging or even preventing on-going learning after that point. (Or at least, discouraging learning from being acted upon.)

Contrast that with the process I’ve proposed here.  It puts some of the learning up front, in the collaborative engagement at step B-3, but it does so in a way that is workable for both parties and is compatible with Target-Driven agile. The early collaboration produces answers sufficient to serve as the basis for a fair and realistic contract.  Following the signing of the contract, learning continues.  The Target Cost contract balances financial protection with on-going learning about “what will it cost”.  And the contractually-agreed procedures for agile scope management allow on-going learning about “what should we build”.   The mandatory external advice makes sure the learning is happening.

The proposed process draws on Reference Class Forecasting, Qualifications-Based Selection, and Earned Value Management.  Each of these techniques is the best available approach to the problem it solves.  Each is backed by significant research and practical experience.  Each comes from outside the agile community – giving an impartial flavour to the proposed process.

Yes, the proposed process is new and may seem complicated at first.  But, if any easy answer existed, it would have been all over the internet by now!  In the absence of easy fixes, we need the courage to try difficult ones.  If we don’t, we’ll be stuck with old contracts ruining new Agile.

[This is the final post in my series on contracts for agile projects]

Don’t Sabotage Agility

I see two main things going wrong when agile projects are done under traditional contracts.

The first problem is that the parameters of the project – cost, scope and time – are set far too early. The result is often an infeasible project.  No contract, no matter how it’s worded, can fully protect you in that situation.  Even if the contract puts all the financial risk onto the other party, there are plenty of risks that you can’t contract out of – not least of which is the embarrassment and reputational damage of a failed project.

The second problem is that there’s not enough discussion of scope flexibility during the project.  You don’t necessarily have to out-and-out drop features, but it might be in everyone’s interest to simplify or change them. When I look back at our award-winning large agile project at OSPRI, these discussions were a hallmark of the project and a key reason why it finished exactly on time and slightly under budget.  And yet, if you asked the business now, “what did you miss out on, as a result of scope change during the project?” – I suspect they’d be hard pressed to think of any significant features that were completely dropped.  (We did, in fact, drop two quite large features.  One was no longer needed due to business change, and the other was something that nobody seems to have missed. But mostly, we thinned features instead of dropping.)

Open discussion about scope flexibility doesn’t just help the team to deliver on time and on budget. It also helps the team to allocate their resources effectively.  By thinning features, you can take the money you save and devote it to adding value elsewhere.

Finally, scope flexibility is key to being a true professional in software development.  After reading about how professionals in other fields work, I am convinced that software engineers have an obligation to re-evaluate and discuss the scope of each feature as they work.  In fact, I’d go so far as to say that, “if you produce a system that exactly matches its pre-written specification, you have acted unprofessionally.”  And yet traditional contracts encourage exactly this failure of professionalism.

[The next, and final, post in this series will describe what I would do, if it was up to me(!), to preserve agility in software procurement and contracts]

Contracts: Outline of a Target-Driven Agile project

Recently I wrote about Target-Driven Agile.  Now, I’d like to outline what a Target-Driven agile project actually looks like.  Of course, as discussed previously, there are many possible variations.  This is the way I like to do it.

(Note: These are just the key steps/phases in a Target-Driven project. In my next post, I’ll outline some thoughts on how contracts can support these steps.)

Step 1: Reach an understanding of the scope

This is about getting a broad, high-level understanding of what we are, and are not, aiming to accomplish in the project.

If there’s someone who already knows the business very well, and is both trusted and authorised to make all necessary decisions, then just ask them.

But usually it’s not that easy. It’s rare to find one person that fits that brief.  Perhaps the needs of the business are so diverse that no one person can adequately, and fairly, represent them all.  Perhaps there are several strong voices within the business, and they’re all saying different things!  Perhaps no-one has separated the “must-haves” from the “nice-to-haves”. So for most projects it’s necessary to work with various people, to craft a high-level description of scope that’s broadly agreed upon.  This usually requires the analysis and facilitation skills of a Business Analyst (or similar), but the exact approach will depend on the project and organisation.

A key point here is that we’re not aiming for a detailed waterfall-style requirements phase. For an agile project this step should be much briefer. For instance, the FDD variety of agile calls this step “Develop an Overall Model” and expects it to take about 2 weeks of effort for every 6 months of eventual software development time.  Personally, I’ve noticed this stage often takes longer – more like 1 month for every 6 months of ensuing development.  I’m comfortable with that 1:6 ratio, but if it was to take much longer than that, I’d fear it degenerating into a waterfall-style requirements phase.

By the way, the people who do this work will gain a lot of useful knowledge about the business, so it’s a good idea if they remain as members of the team during the rest of the project.


Step 2: Document the scope as a simple list

You need to write down the output of step 1. If you’re using User Stories, you can list the names of the epic (coarse-grained) user stories. Otherwise you might list “features”, again with just their names and a few other details.

Some useful tips:

  • This should be a list, not a document.  Most agile teams store these lists in some kind of purpose-built tool (there are dozens of different tools – free, cheap and expensive). For the smallest projects, I’ve seen a list in Excel work surprisingly well.
  • The number of items in your list depends on the size and nature of your project.  As far as I can tell, you typically end up with at least 20 on even a small project.  Big projects that I’ve seen tend to have a few hundred.  I suspect the list becomes hard to understand once there’s more than about 200, and that teams with big projects probably move to writing coarser-grained “bigger” epic stories to keep the count under that threshold. (Each epic will get broken into smaller pieces later in the project, on a just-in-time basis).
  • You might find it useful to group or categorize them into broader “themes”.  Story Maps are one way to do this.  Some agile tools are flexible enough to let you choose your own theming approach and visually group stories by those themes.
  • It’s generally not worthwhile to document dependencies between the individual items. For justification of this view, see an FDD perspective on dependencies (see 5th reply in message thread), and the general perspective towards the end of this page.


Step 3: Understand your likely architectural approach

You don’t have to design everything up front, but you need a basic idea of your general architectural direction, before you go much further.   To illustrate this point, a team using .NET and SQL Server might set a direction that looks something like this:

  • .NET application over SQL Server
  • ORM and no stored procs
  • ASP.NET MVC for the user interface
  • Bootstrap for styling and layout
  • A list of key external systems you intend to integrate with, and the technological approach for integration with each (SOAP, REST, a Message Bus,…)
  • … plus a few more details about the internal structure of your app.  E.g. as a general rule, where will your core business logic go?  How will the UI layer (ASP.NET MVC in this example) connect to that logic?

Since you’re agile, you may change some of these later. But for now, you need some idea of where you’re heading, in order to make progress on the next steps.

Sometimes it’s hard to settle on a direction, especially if there’s unfamiliar technology involved.  So you can build something here.  Maybe even build the same thing, in two different ways, and compare.


Step 4: Assign sizing “points” to all items in the scope

This is the usual agile practice of assigning relative sizes to each user story/feature.  Because it’s common agile practice, I won’t include the details here – except to say that for a Target-Driven project you need to assign points up-front to the entire scope.  I.e. everything that’s on the list we made at Step 2.  Why not just assign points to some of them, and do the rest later?  Because:

  • If part of your project is unsized, you can’t make any useful predictions about how long you’ll take to finish it.  On a Target-Driven project, we want to make predictions of total cost and duration.
  • Sizing everything up front makes it easier to size things consistently.  Why? Because at this early stage you have the same level of ignorance about everything!  Contrast this with the alternative approach of sizing some stories half way through the project.  Half way through, you know the completed stories very well, but the future stories poorly. This discrepancy of understanding can trick you into under-estimating the difficulty of the future stories.  Sizing everything at the start solves this problem, because they’re all future stories.
  • Remember that what we need here is relative sizes, not absolute.  Furthermore, we only need the relative sizing to be right “on average”.  If some future sprint has, say, 10 stories in it, it doesn’t matter if it turns out that some should have had more points than we gave them and some should have had fewer, as long as the errors approximately cancel out across the sprint as a whole. I.e. as long as the sprint as a whole has about the right number of points.

Since this exercise is only about setting relative sizes, it doesn’t necessarily need to be an onerous task.   On one project I worked on, with a budget in the low millions of dollars, I don’t recall there being any more than about 25 person-hours spent assigning these points.  But we were fortunate to have two very experienced people doing the sizing, one of whom knew the scope very well (since he’d worked on the previous steps) and one who knew the technology very well.  In your case, you might need more than two people, or they might want more time.  But if they ask for lots more time… remind them that you’re only asking them for relative sizes.  You’re not asking them to actually say how long each feature will take to develop.  You’re just asking for some careful(ish) educated guesses, such that their average “30 point” story will indeed turn out to be about 3 times as much work as their average “10 point” story, and so on.
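The payoff of sizing everything up front comes once the project is running: observed velocity converts the remaining points directly into whole-of-project predictions. A minimal sketch (the function and all numbers are hypothetical):

```python
# Why size the whole scope up front: with every item pointed, observed
# velocity converts directly into whole-of-project predictions.
# Hypothetical numbers throughout.

def forecast(total_points, points_done, sprints_elapsed, cost_per_sprint):
    """Predict remaining sprints and remaining cost from velocity so far."""
    velocity = points_done / sprints_elapsed          # points per sprint
    sprints_left = (total_points - points_done) / velocity
    return sprints_left, sprints_left * cost_per_sprint

# After 4 sprints, 200 of 1000 points done, at 80,000 per sprint:
sprints_left, cost_left = forecast(1000, 200, 4, 80_000)
print(sprints_left, cost_left)   # 16.0 sprints, 1,280,000 remaining
```

Note that if part of the scope were unpointed, neither prediction would be possible – which is the first bullet’s argument in miniature.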

Step 5: Build incrementally with key safety practices

Whether agile or waterfall, all Target-Driven projects make predictions about duration, cost and the scope that will actually be delivered.   Predictions are prone to error, so:

  • Waterfall projects attempt to reduce the likelihood of bad predictions, through planning and signoffs.
  • Agile projects attempt to reduce the impact of bad predictions, through fast detection and response.

In my experience, the agile approach works best. But you have to run the project well.  You have to put in place key safety practices that allow you to detect and fix issues rapidly.  Here are some of my favourites:

Detection techniques

  • Transparent, objective, tracking of progress and cost. This is probably the most important detection tool.  I won’t write about it here, because I have a dozen other pages on my site about it, under the heading “Earned Value”.  Here’s the introductory one, and here are very important details for anyone using it on Target-Driven agile.
  • Daily Standups.  This is the best-known agile technique for detecting whether there are any surprises, and whether anyone needs help.
  • User involvement “inside the sprint”.   I’m a fan of the team showing early versions of each feature to the users, then making corrections and improvements based on that feedback – all inside the sprint.  This gets the feedback as soon as possible – often when the feature is not even finished but just barely demo-able.  This allows the team to quickly respond to what the users say, with minimal wastage and rework.  It works well in 4-week sprints, but is probably almost impossible in 1-week sprints.  Alistair Cockburn said some great stuff on this – how 1-week sprints require you to get the user feedback in a following sprint, but I can’t find the reference.  In short, there are pros and cons of different sprint lengths.
  • “Formal” Testing soon after the sprint.  In many organisations, formal “User Acceptance Testing” (UAT) or “Business Acceptance Testing” (BAT) is conducted with a wider group of users before go-live.  If your agile project is going live in a “big bang” (e.g. it’s replacing an existing system on a specific date) you might be tempted to run this UAT/BAT near the end of the project.  But I think it’s much safer to run many smaller rounds of formal testing during the project.  Where I work, we tried this by doing formal BAT of each sprint’s output, in the month following completion of that sprint. It worked well and gave us useful information much earlier.
  • Regular deployments.  Even if you’re going live in a big bang, you should still “deploy” to a non-production environment at least once every 1 to 3 months (if not much more often). Deploying flushes out issues, and proves the system’s stability (or otherwise!).
  • Risk smoothing. Don’t leave risky stuff to the end!  Smooth your risks out over the lifetime of the project, with a bias to flushing them out sooner rather than later. As we know, the tail end of a project already has plenty of risks relating to go-live, requests for scope change and other unexpected events.  So don’t also try to tackle technical risks there.  Move the technically-difficult stories earlier. Don’t necessarily put them right at the very start – there you probably have enough risk just forming, storming and norming the team, and in standing up the architecture. So consider spreading the technically difficult implementation work between about the 15% complete and 65% complete stages of your project, with any potential show-stoppers towards the front of that range.  (See Alistair Cockburn’s “Trimming the Tail” for his excellent take on this.)
  • Retrospectives. It’s now common practice, at the end of each iteration in an agile project, for the team to get together and reflect on how the iteration went. What can they learn from it?  Are there any niggling concerns?  Humans are incredibly perceptive.  As long as you can create a culture where people are comfortable airing concerns and bad news, you will learn a lot from retrospectives.

Techniques for Rapid Response

  • Retrospectives.  Yes, this is both a detection technique and a response technique.  Solutions to a great many problems can be generated in retrospectives.
  • Daily Standups, and follow-up chats.  When a problem comes up in standup, you don’t necessarily have to solve it in the standup (that can make for lengthy and counter-productive standups). But you can, and I think should, get the relevant team members together immediately after the standup to explore solution options.
  • Allow spare capacity for trouble-shooting.  There are two possible approaches to this.  On large projects, I like to use both.  The first is: don’t plan your sprints so that everyone is scheduled to be busy for 100% of the available hours; instead, use something like 75% to allow plenty of time for people to help each other.  The second is to have an “impediment remover” with hands-on skills. The Scrum flavour of agile says that the Scrum Master is responsible for “removing any impediments to the team’s progress”.  But it’s become conventional in the industry to fill this role with someone who doesn’t personally remove certain kinds of impediments, particularly those of a more technical nature.  It seems that most Scrum Masters don’t write code, can’t personally fix a broken build, and can’t suggest technical solutions to a thorny design problem.  This is OK as long as the team has access to someone who can help with technical impediments.  The role can be a busy one. When I filled it on a 20-person Target-Driven project, it wasn’t full-time but was frequently the biggest part of my working week.
  • Know your strategy for responding to projected cost overruns.  Thanks to your accurate and transparent tracking of progress (above) you’ll soon find out if your project is heading over budget.  But then what? You need to be ready with a range of responses. You also need to be prepared to be creative, and possibly invent new responses to suit your circumstances. This topic deserves a blog post of its own, so I can only summarize it briefly here.   Broadly speaking, the possible responses fall into four categories, and you can mix-and-match from all four.
    1. Adjust the scope.  You don’t necessarily have to drop features outright.  You might just simplify them.  Jeff Patton wrote about it very well, under the heading “Late projects caused by poor estimation and other red herrings“.  Alistair Cockburn’s Trim the tail is also very relevant.
    2. “Borrow from the future”.  You might do things that will save you time now, but will have costs in the future. Running up technical debt is one way to do this, although of course it’s not always a good one! Another, which I’ve used, is to have each developer specialize in those areas of the system where they are most productive. This made us quicker in the short term, but had two downsides.  In the short term, we were exposed to more risk if one person was sick or otherwise unavailable. In the long term, we had a “knowledge debt” because some people didn’t know how to maintain certain parts of the system. This meant that, in the future, we would need to find time for them to learn those areas.  In our case, with an immovable deadline for the initial go-live, this particular trade-off made sense.
    3. Look for ways to increase developer productivity. You should be doing this anyway but you might find that, when pushed, you become more creative ;-)  Two things that help a lot are keeping your architecture simple, and acting on ideas from retrospectives.  You’ll need to keep measuring progress, as always, to see whether the changes are working.
    4. Just bite the bullet and spend more money. In some circumstances, and with appropriate controls, this might be the business’s best option.  It helps, of course, if the iterations already completed prove to the business that the project is delivering the right software and operating in a transparent and stable manner.

6. Schedule a gap between end-of-coding and any target go-live date

I’ve listed this last, because it comes at the end of the project. But obviously you need to plan for it from the start.  It’s useful for piloting the system with a small group of users (if appropriate in your case), for absorbing some schedule overrun, for acting on late-breaking user-feedback, and numerous other purposes.

In the next post in this series, I’ll share some thoughts on how to wrap a contract around these steps.

[Updated 12 Oct 2015 with minor edits for clarity]

Contracts: Two flavours of agile

There are many forms of agile. Some do support setting price and scope up front.  Here, I outline two overall flavours of agile – one which supports fixed scope and price, and one which does not.


Much as the Old Town in a European city is the center of the city, but doesn’t itself have a center (all the little twisty streets are roughly equal in “centerness”), so agile looks like a single place from a distance, but isn’t a single place, and the closer you get to the center, the more you see there isn’t a single center.

The Agile Manifesto was written by over a dozen people with their own world views and their own multi-centeredness, so it’s no wonder if there is no center to agile itself.

Alistair Cockburn, one of the 17 co-authors

Agile has always been a wide-ranging term.  In the beginning, it encompassed several “light” software development processes that had been developed in the 1990s.  These included Scrum, Extreme Programming (XP), Crystal, Dynamic Systems Development Method (DSDM), and Feature Driven Development (FDD).  Those processes were represented by some of the original authors of the Agile Manifesto. Other authors subscribed to no particular methodology, but shared the group’s interest in pragmatic lightweight processes.

So agile looks like this: a large “bubble”, with smaller defined bubbles within it.



XP and Scrum have become by far the most popular, so much so that many people think that the XP/Scrum way is the only way to do agile. That’s simply not true.

Each “bubble” has a different emphasis.  XP and Scrum emphasise the ability to handle changing requirements.  FDD and DSDM lean more towards identifying a full(ish) set of requirements up front (in a relatively lightweight way, of course).  Crystal emphasises efficiency and “habitability”. (Habitability = “Would the team willingly work this way again?”)

Note also that there’s lots of white space in between the little bubbles – plenty of space for your team to do something that doesn’t fit with any one of the published methodologies, but is still “agile”. (Which is a topic for another day… ;-)

How does this relate to contracts?

When considering contracts for agile projects, it’s helpful to simplify the diverse landscape of agile processes.  For contract purposes, I suggest we can group all the different types of agile into just two flavours.

Flavour 1: Exploratory Agile

  • Don’t have a fixed project scope up front
  • To a significant degree, scope is discovered as the project proceeds
  • Cost is either unknown in advance, or is specified by timeboxing the entire project: “We’ll work in priority order. After we’ve spent $X, we’ll just stop”.
  • Useful for environments where some of the following apply:
    • We cannot know what we need when we start (e.g. R&D projects, or others with very high degree of novelty or business uncertainty)
    • We expect very high degrees of change.  E.g. launching a new commercial product and learning what to do next from user feedback, and maybe even pivoting to a completely different direction.
    • As long as each iteration delivers business value in excess of what it cost, it’s worthwhile for us to continue.

Most on-line articles about agile contracts assume that this flavour is used. That’s fine, up to a point. It’s OK for authors and companies to say, “The exploratory flavour is mandatory for our kind of agile”.  As we saw above, there are many kinds of agile and people are perfectly entitled to set the rules for their own work.  However, it is not OK to say, “The exploratory flavour is mandatory for all kinds of agile”.  That misrepresents the beliefs of those who drafted the Agile Manifesto.  It’s also just plain wrong.

On the positive side, the Exploratory flavour is genuinely useful in many contexts and is probably the easiest way for a team to get started with agile. But it’s not the only game in town.

Flavour 2: Target-Driven Agile

  • Do have an overall scope.  This scope is defined during the early stages of the project.
  • Expect some changes, refinement and feature-thinning, but on the whole aim to deliver more-or-less the original scope
  • May also have an overall budget, which is also set during the early stages of the project.  The budget might be fixed, or it might be a target with controlled flexibility.
  • Most commonly seen with the FDD and DSDM flavours of agile, but is also possible with Scrum. (In Scrum, you’re allowed to define the backlog up-front, if you want)
  • Useful for environments where some of the following apply:
    • The project is replacing an existing system.  It only makes sense to conduct the project if we can be reasonably sure of building enough scope to successfully replace the old system, at a price we can afford.
    • The business can’t proceed without knowing what they are getting into – in terms of scope and cost.
    • With a few weeks or months of business analysis, depending on system size, it is actually possible to identify the project scope. Typically the scope would be identified in the form of a few dozen, or maybe a few hundred, “epic” user stories. Preparing such a list is likely to be achievable when the project is addressing a known need in an established business. It’s less likely to be possible in startups or R&D.

This is my favourite flavour of agile.  Why? Because it’s a fair question for the business to ask, “What are we getting into?”   If you were about to spend that much money, you’d ask too.  Exploratory Agile dodges the question; Target-Driven Agile answers it.

[This is the first in a short series of posts on Contracts for Target Driven-Agile.  Here’s the next.]

Quick notes on contracts

At today’s IITP Lightning Talk/Panel Discussion, I promised to post some links about how each agile project tends to need its own process, tailored to its own particular situation. Here are those links, and some rough notes on a few other things too:

Tailoring process to each project

The main author on this is Alistair Cockburn. He’s researched and written about why each project needs its own process, and how to cost-effectively do that process configuration. Here’s a quick outline of how to do it, and here’s a much more in-depth description (complete with links to research).

By the way, such tailoring is potentially a challenge to formulating a contract (as per today’s IITP panel); however, in practice I think most of the tailoring will focus on a level of detail below what the contract would cover.  The contract would work at a higher level, specifying the overall approach to managing time, scope, cost, risk etc.  While there are still many choices to be made at that higher level, it seems realistic to me to pick one “flavour” of agile for contractual purposes, and to expect to continue with that overall flavour throughout the project.   I posted some outlines of a few “flavours” here, as they relate to scope and cost management. After today’s panel I really need to do a more detailed follow-up post, covering more than just scope and cost!

Norwegian Agile Contract

Here’s a link to that standard agile contract, from Norway, which I mentioned.

Feature thinning

Here’s a quick description, including some links to additional info.

Agile is an umbrella term

There are many “defined” types of agile, and a great many others that are not explicitly defined.  The defined ones include XP, FDD, Scrum, Crystal, DSDM, and Adaptive Software Development.  I mention this just to illustrate the variety of what “agile” means.

Just as an example, FDD is quite different from the better known Scrum and XP variants.

The tension between being specific and being flexible

When you start out with agile, it helps to have a very specific formulation of what to do. Basically a set of rules.  As you gain experience, it makes sense to start to look beyond the initial set of rules.  This causes difficulties – for instance when someone experienced (e.g. me, today!) says that agile can be many different things.  That’s true, but not very helpful as a starting point for organisations that haven’t tried it yet!

Authors addressing this include Andy Hunt and Jeff Patton.

This tension between being specific and being flexible is, I believe, one of the key challenges in sharing ideas about contracts for agile projects.  Maybe that will be a blog post another day…

Great software pricing research

Most software engineers have an intuitive sense that the industry is approaching pricing and estimation in the wrong way.  But we’ve lacked data to prove, or disprove, our intuitions. Magne Jørgensen and his colleagues, at the Simula Research Laboratory, are doing awesome research to fill the gap.

Some of what they’ve found will support your intuitions (e.g. the danger of price as a selection tool) but some may surprise you (you might have some bad estimation habits). Here are some highlights, just from the last few years:

A Strong Focus on Low Price When Selecting Software Providers Increases the Likelihood of Failure in Software Outsourcing Projects.  Empirical evidence for the Winner’s Curse in software development.

The Influence of Selection Bias on Effort Overruns in Software Development Projects. More on the winner’s curse.

What We Do and Don’t Know About Software Development Effort Estimation. The title says it all!

Myths and Over-Simplifications in Software Engineering. A timely reminder of the dangers of confirmation bias when considering how we should go about software development. Similar subject matter to Laurent Bossavit’s Leprechauns of Software Engineering.

The Ignorance of Confidence Levels in Minimum-Maximum Software Development Effort Intervals.  A study confirming a point which Steve McConnell makes early in “Software Estimation: Demystifying the Black Art” – namely that in practice “90% confident” requires a much wider range than we think it does.

Software Development Effort Estimation: Why It Fails and How to Improve It. The third-to-last slide (how to get percentage confidence intervals without the problems of min-max approaches) is excellent. Just one catch, which would have affected many of the teams I’ve worked in.  The technique requires 10 to 20 prior projects, each with estimated and actual costs.  I suspect that many estimators don’t have ready access to such data. (Maybe organisations need to improve how they keep these records, but that’s not the whole solution. Some teams simply don’t have enough history, IMHO).
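For readers who want the flavour of that percentile-based approach, here’s a rough sketch of the general idea as I understand it: take the actual-vs-estimated cost ratios from your past projects, and use their percentiles to turn a new point estimate into an interval.  The `effort_interval` function and the example figures below are mine, not Jørgensen’s, so treat this as an illustration of the concept rather than the exact method in the slides.

```python
# Sketch: derive a confidence interval for a new estimate from the
# distribution of actual/estimated ratios on past projects.
# All figures below are invented for illustration.

def effort_interval(history, new_estimate, confidence=0.90):
    """history: list of (estimated, actual) effort pairs from past projects.
    Returns (low, high) bounds for new_estimate at the given confidence."""
    ratios = sorted(actual / est for est, actual in history)
    tail = (1 - confidence) / 2
    lo_i = int(tail * (len(ratios) - 1))          # lower-tail percentile index
    hi_i = round((1 - tail) * (len(ratios) - 1))  # upper-tail percentile index
    return new_estimate * ratios[lo_i], new_estimate * ratios[hi_i]

# Invented example: 10 past projects, (estimate, actual) in developer-days
past = [(100, 110), (80, 120), (120, 115), (90, 150), (60, 65),
        (200, 260), (150, 140), (70, 100), (110, 180), (95, 105)]
low, high = effort_interval(past, new_estimate=100)
print(round(low), round(high))  # prints: 93 167
```

Note how wide the interval is, even though most of the example projects overran only modestly – which is exactly the point the research makes about “90% confident” ranges.  And, as noted above, this only works if you actually have the 10 to 20 prior projects’ worth of estimate-vs-actual data.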

Better Selection of Software Providers Through Trialsourcing. “In this article we show that differences between software providers in terms of productivity and quality can be very large and that traditional means of evaluating software providers … fail to separate the competent from the incompetent ones.”  Describes using trial projects to select suppliers.

Numerical anchors and their strong effects on software development effort estimates.   Text not yet available.  Looks like a good one though.  In the meantime, here’s Wikipedia’s background material about anchoring.

First Impressions in Software Development Effort Estimation: Easy to Create and Difficult to Neutralize.  Another on anchoring (this time with full text).

From Origami to Software Development: a Review of Studies on Judgment-Based Predictions of Performance Time.  Interesting title, but no full text yet.


QBS and Novopay

Here in New Zealand we’ve had a lot of press recently about Novopay, a system for paying school teachers that has suffered from significant, embarrassing (and very widely-publicised) problems.

I was recently asked to write an article on the subject for the Institute of IT Professionals (IITP), explaining how I believe the emphasis on price, in tender evaluation, is probably a significant root cause of such failures – and how there is a much better way to structure tenders.  The article seems to have been very well received – with one (male) reader even going so far as to tweet, “Who is John Rusk and can I have his babies?”  (Which is both the best and the weirdest feedback I’ve ever received!) Continue reading QBS and Novopay

In-house development vs Outsourcing

I spent over 16 years working for IT consultancies.  Organisations who needed software would come to us, generally negotiate an up-front price, and we’d write the software for them.  This kind of work, outsourced software development, was the focus of my working life… until 9 months ago.  That was when I moved to an internal role at the Animal Health Board. The Board is not an IT organisation, but rather a registered charity tasked with the single job of eradicating bovine tuberculosis from New Zealand.  It turns out that, to eradicate a microscopic bacterium from an entire country(!), you need a lot of software to keep track of everything.  We’re re-developing one of our core applications, and we’re doing it in-house.

After 9 months in this very different world of in-house software development, I can share several observations:

  1. I like it.  In outsourced development, there was always an invisible “wall” between the developers and the customer – caused by contract and price negotiations, sensitivity over whether acting on user feedback was chargeable or implicit in the original price, and a tendency for both organisations to think in terms of “us versus them”. With in-house development, the wall is gone. Gone also are the considerable costs and frustrations the “wall” used to create.
  2. Making sure the “wall” stays gone requires conscious effort from managers and staff at all levels.  In some organisations it would be easy to slip back into “us and them”, between the “business” and “IT” functions of the single organisation.  At the Animal Health Board, we’ve been blessed with management who started the project off on the right foot, and with users and team members who kept it heading in the right direction.
  3. No matter how you run your project, you need good people.  I feel very lucky that good people were brought into key roles before I joined, and that we’ve been able to continue attracting good people as the project has continued.  My advice, to any organisation starting in-house development for the first time, would be to be very careful with your initial hiring choices, since so much will depend on them.  As for continuing to attract good people, I think it helps that we are a unique, focussed organisation, which does good both for the New Zealand economy and for wildlife conservation and biodiversity.  So my suggestion would be that, if your organisation has tangible benefits in the real world, promote that fact to attract staff.
  4. I believe the team, which is a mix of contract and permanent staff,  is at least as good as what we would have got through any outsourcing vendor – and quite possibly better.  Plus, we have the bonus of better contact between team and business, and lower costs per developer-hour.
  5. A challenging project is still a challenging project, no matter how you do it.  Bringing this major project in house hasn’t guaranteed we’ll get an effective system at an affordable cost, but it has improved the odds.


In-house development has been effective and enjoyable.  It would now be my preference for projects that meet the following criteria:

  • The project is big enough to justify the costs of recruiting and building a team.  (E.g. there’s at least 12 months’ worth of work for the team)
  • But it’s not too big. (E.g. building a team of between 10 and 20 people, as we have done, is OK, but if you think you need 100 people, and your organisation has never built an IT project team before, maybe it’s safer to find a large vendor who already has the 100 people!  Having said that, a 100-person project is risky no matter how you do it, so maybe it’s worth looking for ways to cut scope, extend timeframes, and do it with a smaller team.)
  • There’s a workable resourcing plan for after the project goes live.  For instance, if you expect maintenance to require no more than one full-time person, after go-live, how will you keep someone on to do that work?  If they are your only programmer, won’t they get lonely and leave?  What if you need them less than full time?  In such cases, outsourced maintenance may be a better option – although bear in mind that your vendor will also struggle to maintain expertise in your system if your needs are low or intermittent. (Since they, like you, can’t devote a full-time person to the system if there’s not enough work.) In our case, at the Animal Health Board, we know we have significant IT work beyond this project, so running an on-going internal IT team was always part of the plan. (And in fact, we’ve already had such a team for several years.)

Throwing Down the Gauntlet

For many years, my career goal was to make outsourced development work better.  I wanted to solve the problems of the “wall” that gets between customer and supplier. But now I see many of those problems can often be sidestepped, by moving the development in-house.  So I’d like to propose two challenges to IT consultancies:

  • What value do you add, over and above what your customer could do in-house?  Years ago, it used to be easy for IT consultancies to possess rare knowledge.  For instance, before the internet there were only two ways to learn how to develop with a complex product like Oracle: read the manuals, which took up about 1 metre of shelf space, or sit next to an experienced user.  I found both at an IT consultancy.  But today, I’d use the internet instead.   In the internet age, an IT consultancy must do more than simply possess people and documentation.
  • How much of your day-to-day activity, and your fee structure, is devoted to simply ensuring the survival of your organisation?  There may be a significant cost to ensuring your organisation survives – in sales, marketing and management.  Ultimately, your customers pay that cost through the fees you charge them.  Is the survival of your organisation worth that much, to your customers?

I’m not saying that outsourcing is always a bad choice, just that the inherent advantages of in-house development raise the bar for those who wish to sell outsourcing.

For customers, investing in IT might make sense even when it’s not your core business.

Estimation Summary for Agile Projects – Nov 2011

It’s over 7 years since my first post on contracts for agile projects.  In the years since, I’ve worked almost exclusively on agile projects with fixed scope, learning some real-life lessons along the way.

So here are some of the key points that now I keep in mind when considering estimation, pricing and contracts for agile projects.  These points are particularly relevant when customers have asked for fixed price and fixed scope.

Continue reading Estimation Summary for Agile Projects – Nov 2011

Million, Billion, Trillion

Humans are bad at understanding large numbers.   Our education system successfully trains us to understand the relative magnitudes of small numbers,  but for larger numbers we tend to fall back on an intuitive logarithmic scale.  So we underestimate the real difference between, say, a million and a billion.

Here’s a wee table I put together, to help with visualising large numbers.  The table is based on the thickness of a New Zealand $1 coin.

  • We start with just one, which is 2.74mm thick (about twice as thick as a US quarter).
  • Then, imagine we make a stack of 1000 coins.  It would be about the height of a person (a tall person, with their arms up-stretched).
  • For a million coins, we would have to lie the stack on its side.  It would form a “sausage” of coins about the length of an airport runway.
  • For a billion coins, the sausage would be as long as a country (specifically, India).
  • For a trillion coins, we’d end up way out in space (almost 8 times as far away as the moon).  That’s hard to visualise, because there’s nothing there.  So how about we try $14 trillion instead (the size of the US public debt)?  If we had the US public debt in $1 coins, the stack would reach all the way to Venus.
Number       Scale   Length
$1k          10^3    person
$1 million   10^6    airport runway
$1 billion   10^9    India
$1 trillion  10^12   Moon × 8

So to visualise the difference between a thousand, a million and a billion, you can say: “person, runway, India”.
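The arithmetic behind these comparisons is easy to check for yourself.  This sketch assumes only the 2.74mm coin thickness quoted above; the landmark comparisons in the comments are the same rough, rounded figures used in the table.

```python
# Back-of-envelope check of the coin-stack comparisons above.
# Assumes a NZ $1 coin is 2.74mm thick; landmark lengths are approximate.
COIN_THICKNESS_MM = 2.74

def stack_length_km(n_coins):
    """Length of n coins laid end to end, in kilometres."""
    return n_coins * COIN_THICKNESS_MM / 1_000_000

print(stack_length_km(10**3))   # ~0.003 km: a tall person, arms up-stretched
print(stack_length_km(10**6))   # ~2.7 km: an airport runway
print(stack_length_km(10**9))   # ~2,740 km: roughly the length of India
print(stack_length_km(10**12))  # ~2.74 million km: many times the Moon's distance
```

Each jump is a factor of 1000, which is exactly what our logarithmic intuition flattens out: a runway and India *feel* closer together than they are.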


Footnotes Nov 2011:

1. Here’s an amazing visualisation of dollar values at various scales

2. Further to the billion-coins-in-India example: what if we lined up a billion people, instead of a billion coins?  That’s approximately the actual population of India.   To get a line the same length as the billion coins, we’d have to ask the people to stand about 500-abreast.  I.e. the population of India could stretch from the north to the south of the country, if they formed a long, skinny loosely-packed crowd approximately 500m wide.

3. A similar crowd in New Zealand, with the entire population of 4 million stretching the full length of the country, would be only 2 or 3 people wide – in part because the country is sparsely populated, and in part because it’s a long skinny country.

Jan 2012:

4. The number of stars in the observable universe is estimated at 7 × 10^22.   Given the world population, that’s 10 trillion stars per person – which equates to one stack of coins, reaching most of the way to Venus, for every person alive today.