Don’t Sabotage Agility

I see two main things going wrong when agile projects are done under traditional contracts.

The first problem is that the parameters of the project – cost, scope and time – are set far too early. The result is often an infeasible project.  No contract, no matter how it’s worded, can fully protect you in that situation.  Even if the contract puts all the financial risk onto the other party, there are plenty of risks that you can’t contract out of – not least of which is the embarrassment and reputational damage of a failed project.

The second problem is that there’s not enough discussion of scope flexibility during the project.  You don’t necessarily have to out-and-out drop features, but it might be in everyone’s interest to simplify or change them. When I look back at our award-winning large agile project at OSPRI, these discussions were a hallmark of the project and a key reason why it finished exactly on time and slightly under budget.  And yet, if you asked the business now, “what did you miss out on, as a result of scope change during the project?” – I suspect they’d be hard pressed to think of any significant features that were completely dropped.  (We did, in fact, drop two quite large features.  One was no longer needed due to business change, and the other was something that nobody seems to have missed. But mostly, we thinned features instead of dropping them.)

Open discussion about scope flexibility doesn’t just help the team to deliver on time and on budget. It also helps the team to allocate their resources effectively.  By thinning features, you can take the money you save and devote it to adding value elsewhere.

Finally, scope flexibility is key to being a true professional in software development.  After reading about how professionals in other fields work, I am convinced that software engineers have an obligation to re-evaluate and discuss the scope of each feature as they work.  In fact, I’d go so far as to say that, “if you produce a system that exactly matches its pre-written specification, you have acted unprofessionally.”  And yet traditional contracts encourage exactly this failure of professionalism.

[The next, and final, post in this series will describe what I would do, if it was up to me(!), to preserve agility in software procurement and contracts]

Feature Thinning

Feature Thinning is the agile practice of simplifying the scope and implementation of specific features, on a case-by-case basis.  Often, given their growing knowledge of the technology and business domain, an agile team can suggest simpler alternatives to what the users originally asked for.  Often, these simpler alternatives still give all the key benefits, at much less cost.

It is important to thin features whenever you can, because the money you save will come in handy in other areas of the project.  For instance, the users may look at some other feature that you’ve built and say, “Yes, that’s exactly what we asked for. But now that we’ve had the chance to try it out, we realise we should have asked for something different.” There’s nothing wrong with that! It’s good and normal.  For instance, on one project we completely re-designed and re-built a whole series of key screens based on user feedback like this – and thank goodness we did! But you can only afford to delight your users like that if you have saved money elsewhere in the project – which is why it’s so important to thin features wherever you can.

A note on terminology.  Jeff Patton is the key author who has written about feature thinning.  But he didn’t call it “feature thinning”.  He originally used terms like “managing scale [of each feature]”.  I spoke with Jeff recently, and he confirmed that he now prefers the term “feature thinning”.

For more, see Jeff’s most excellent post on estimates, red herrings and dead fish.

[This post is a brief digression from my current series on contracts for agile projects. The final two instalments of that series will follow…]

Contracts: Outline of a Target-Driven Agile project

Recently I wrote about Target-Driven Agile.  Now, I’d like to outline what a Target-Driven agile project actually looks like.  Of course, as discussed previously, there are many possible variations.  This is the way I like to do it.

(Note: These are just the key steps/phases in a Target-Driven project. In my next post, I’ll outline some thoughts on how contracts can support these steps.)

Step 1: Reach an understanding of the scope

This is about getting a broad, high-level understanding of what we are, and are not, aiming to accomplish in the project.

If there’s someone who already knows the business very well,  and is both trusted and authorized to make all necessary decisions, then just ask them.

But usually it’s not that easy. It’s rare to find one person that fits that brief.  Perhaps the needs of the business are so diverse that no one person can adequately, and fairly, represent them all.  Perhaps there are several strong voices within the business, and they’re all saying different things!  Perhaps no-one has separated the “must-haves” from the “nice-to-haves”. So for most projects it’s necessary to work with various people, to craft a high-level description of scope that’s broadly agreed upon.  This usually requires the analysis and facilitation skills of a Business Analyst (or similar), but the exact approach will depend on the project and organisation.

A key point here is that we’re not aiming for a detailed waterfall-style requirements phase. For an agile project this step should be much briefer. For instance, the FDD variety of agile calls this step “Develop an Overall Model” and expects it to take about 2 weeks of effort for every 6 months of eventual software development time.  Personally, I’ve noticed this stage often takes longer – more like 1 month for every 6 months of ensuing development.  I’m comfortable with that 1:6 ratio, but if it were to take much longer than that, I’d fear it degenerating into a waterfall-style requirements phase.

By the way, the people who do this work will gain a lot of useful knowledge about the business, so it’s a good idea if they remain as members of the team during the rest of the project.

Step 2: Document the scope as a simple list

You need to write down the output of step 1. If you’re using User Stories, you can list the names of the epic (coarse-grained) user stories. Otherwise you might list “features”, again with just their names and a few other details. (There’s a small sketch of such a list after the tips below.)

Some useful tips:

  • This should be a list, not a document.  Most agile teams store these lists in some kind of purpose-built tool (there are dozens of different tools – free, cheap and expensive). For the smallest projects, I’ve seen a list in Excel work surprisingly well.
  • The number of items in your list depends on the size and nature of your project.  As far as I can tell, you typically end up with at least 20 on even a small project.  Big projects that I’ve seen tend to have a few hundred.  I suspect the list becomes hard to understand once there’s more than about 200, and that teams with big projects probably move to writing coarser-grained “bigger” epic stories to keep the count under that threshold. (Each epic will get broken into smaller pieces later in the project, on a just-in-time basis).
  • You might find it useful to group or categorize them into broader “themes”.  Story Maps are one way to do this.  Some agile tools are flexible enough to let you choose your own theming approach and visually group stories by those themes.
  • It’s generally not worthwhile to document dependencies between the individual items. For justification of this view, see an FDD perspective on dependencies (see 5th reply in message thread), and the general perspective towards the end of this page.
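To make that concrete, here’s a minimal sketch of such a list expressed as plain data. The story names and themes are invented for illustration; in practice the list would live in your agile tool or, on the smallest projects, a spreadsheet.

```python
# A hypothetical Step 2 scope list: epic-level stories, names only, optionally
# grouped into broader themes. Sizing points come later, in Step 4.
scope = [
    {"theme": "Registrations", "epic": "Register a new herd"},
    {"theme": "Registrations", "epic": "Transfer herd ownership"},
    {"theme": "Movements",     "epic": "Record animal movements"},
    {"theme": "Reporting",     "epic": "Produce compliance reports"},
    # ... typically at least 20 items; big projects may have a few hundred
]

for item in scope:
    print(f"{item['theme']:>15}: {item['epic']}")
```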

Step 3: Understand your likely architectural approach

You don’t have to design everything up front, but you need a basic idea of your general architectural direction, before you go much further.   To illustrate this point, a team using .NET and SQL Server might set a direction that looks something like this:

  • .NET application over SQL Server
  • ORM and no stored procs
  • ASP.NET MVC for the user interface
  • Bootstrap for styling and layout
  • A list of key external systems you intend to integrate with, and the technological approach for integration with each (SOAP, REST, a Message Bus,…)
  • … plus a few more details about the internal structure of your app.  E.g. as a general rule, where will your core business logic go?  How will the UI layer (ASP.NET MVC in this example) connect to that logic?

Since you’re agile, you may change some of these later. But for now, you need some idea of where you’re heading, in order to make progress on the next steps.

Sometimes, it’s hard to settle on a direction, especially if there’s unfamiliar technology involved.  So it can pay to build something here.  Maybe even build the same something, in two different ways, and compare.
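As a toy illustration of that last bullet – and only an illustration, since the example stack above is ASP.NET MVC while this sketch uses Python with invented names – here’s one common answer: core business logic in a service layer, with a thin UI layer that merely delegates to it.

```python
# Hypothetical layering sketch; the classes and business rule are invented.

class HerdRepository:
    """Data-access layer - an ORM would typically sit behind something like this."""
    def __init__(self):
        self._herds = {1: {"id": 1, "location": "Waikato"}}

    def get(self, herd_id):
        return self._herds[herd_id]


class HerdService:
    """Core business logic lives here, not in the UI layer."""
    def __init__(self, repo):
        self.repo = repo

    def move_herd(self, herd_id, destination):
        herd = self.repo.get(herd_id)
        if destination == herd["location"]:
            raise ValueError("Destination must differ from current location")
        herd["location"] = destination
        return herd


class HerdController:
    """Thin UI layer: translate the request, delegate, return a view model."""
    def __init__(self, service):
        self.service = service

    def post_move(self, herd_id, destination):
        herd = self.service.move_herd(herd_id, destination)
        return {"status": "ok", "herd": herd}


controller = HerdController(HerdService(HerdRepository()))
print(controller.post_move(1, "Canterbury"))
```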

Step 4: Assign sizing “points” to all items in the scope

This is the usual agile practice of assigning relative sizes to each user story/feature.  Because it’s common agile practice, I won’t include the details here – except to say that for a Target-Driven project you need to assign points up-front to the entire scope.  I.e. everything that’s on the list we made at Step 2.  Why not just assign points to some of them, and do the rest later?  Because:

  • If part of your project is unsized, you can’t make any useful predictions about how long you’ll take to finish it.  On a Target-Driven project, we want to make predictions of total cost and duration.
  • Sizing everything up front has the advantage that it can be easier to size things consistently.  Why? Because at this early stage you have the same level of ignorance about everything!  Contrast this with the alternative approach of sizing some stories half way through the project.  Half way through, you know the completed stories very well, but the future stories poorly. This makes it more difficult to size future stories correctly relative to the past ones: you know how difficult the past ones were, but you don’t know about the future ones. This discrepancy of understanding can trick you into under-estimating the difficulty of the future stories.  Sizing everything at the start solves this problem, because they’re all future stories.
  • Remember that what we need here is relative sizes, not absolute.  Furthermore, we only need the relative sizing to be right “on average”.  If some future sprint has, say, 10 stories in it, it doesn’t matter if it turns out that some should have had more points than we gave them and some should have had fewer, as long as the errors approximately cancel out across the sprint as a whole. I.e. as long as the sprint as a whole has about the right number of points.

Since this exercise is only about setting relative sizes, this doesn’t necessarily need to be an onerous task.   On one project I worked on, with a budget in the low millions of dollars, I don’t recall there being any more than about 25 person-hours spent assigning these points.  But we were fortunate to have two very experienced people doing the sizing, one of whom knew the scope very well (since he’d worked on the previous steps) and one who knew the technology very well.  In your case, you might need more than two people, or they might want more time.  But if they ask for lots more time… remind them that you’re only asking them for relative sizes.  You’re not asking them to actually say how long each feature will take to develop.  You’re just asking for some careful(ish) educated guesses, such that their average “30 point” story will indeed turn out to be about 3 times as much work as their average “10 point” story, and so on.
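To show what those relative points buy you, here’s a toy calculation (all numbers invented) of the kind of prediction a Target-Driven project can make once a few real sprints have established a velocity:

```python
# A toy forecast showing how up-front relative sizing feeds a Target-Driven
# prediction of duration and cost. Every number here is made up.

total_points = 1200        # sum of points across the whole Step 2 list
velocity = 60              # points completed per sprint, measured on real sprints
sprint_weeks = 4
cost_per_sprint = 80_000   # fully loaded team cost per sprint

sprints_needed = total_points / velocity          # 20 sprints
print(f"Duration: ~{sprints_needed * sprint_weeks:.0f} weeks")
print(f"Cost:     ~${sprints_needed * cost_per_sprint:,.0f}")
```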

Step 5: Build incrementally with key safety practices

Whether agile or waterfall, all Target-Driven projects make predictions about duration, cost and the scope that will actually be delivered.   Predictions are prone to error, so:

  • Waterfall projects attempt to reduce  the likelihood of bad predictions, through planning and signoffs.
  • Agile projects attempt to reduce the impact of bad predictions, through fast detection and response.

In my experience, the agile approach works best. But you have to run the project well.  You have to put in place key safety practices that allow you to detect and fix issues rapidly.  Here are some of my favourites:

Detection techniques

  • Transparent, objective tracking of progress and cost. This is probably the most important detection tool.  I won’t write about it here, because I have a dozen other pages on my site about it, under the heading “Earned Value”.  Here’s the introductory one, and here are very important details for anyone using it on Target-Driven agile. (A toy calculation of the core idea appears after this list.)
  • Daily Standups.  This is the best-known agile technique for detecting whether there are any surprises, and whether anyone needs help.
  • User involvement “inside the sprint”.   I’m a fan of the team showing early versions of each feature to the users, then making corrections and improvements based on that feedback – all inside the sprint.  This gets the feedback as soon as possible – often when the feature is not even finished but just barely demo-able.  This allows the team to quickly respond to what the users say, with minimal wastage and rework.  It works well in 4-week sprints, but is probably almost impossible in 1-week sprints.  Alistair Cockburn said some great stuff on this – about how 1-week sprints require you to get the user feedback in a following sprint – but I can’t find the reference.  In short, there are pros and cons of different sprint lengths.
  • “Formal” Testing soon after the sprint.  In many organisations, formal “User Acceptance Testing” (UAT) or “Business Acceptance Testing” (BAT) is conducted with a wider group of users before go-live.  If your agile project is going live in a “big bang” (e.g. it’s replacing an existing system on a specific date) you might be tempted to run this UAT/BAT near the end of the project.  But I think it’s much safer to run many smaller rounds of formal testing during the project.  Where I work, we tried this by doing formal BAT of each sprint’s output, in the month following completion of that sprint. It worked well and gave us useful information much earlier.
  • Regular deployments.  Even if you’re going live in a big bang, you should still “deploy” to a non-production environment at least once every 1 to 3 months (if not much more often). Deploying flushes out issues, and proves the system’s stability (or otherwise!).
  • Risk smoothing. Don’t leave risky stuff to the end!  Smooth your risks out over the lifetime of the project, with a bias to flushing them out sooner rather than later. As we know, the tail end of a project already has plenty of risks relating to go-live, requests for scope change and other unexpected events.  So don’t also try to tackle technical risks there.  Move the technically-difficult stories earlier. Don’t necessarily put them right at the very start – there you probably have enough risk just forming, storming and norming the team, and in standing up the architecture. So consider spreading the technically difficult implementation work between about the 15% complete and 65% complete stages of your project, with any potential show-stoppers towards the front of that range.  (See Alistair Cockburn’s “Trimming the Tail” for his excellent take on this.)
  • Retrospectives. It’s now common practice, at the end of each iteration in an agile project, for the team to get together and reflect on how the iteration went. What can they learn from it?  Are there any niggling concerns?  Humans are incredibly perceptive.  As long as you can create a culture where people are comfortable airing concerns and bad news, you will learn a lot from retrospectives.
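Here’s the toy illustration of objective tracking promised in the first bullet above. The numbers are invented, and this is only the core idea – my “Earned Value” pages have the real details.

```python
# Compare % of points genuinely completed with % of budget spent.
# A gap between the two is an objective early warning. Numbers invented.

total_points = 1200
points_done = 420        # count only stories that are genuinely finished
budget = 1_600_000
spent = 640_000

percent_complete = points_done / total_points     # 0.35 - value earned so far
percent_spent = spent / budget                    # 0.40 - money gone so far

cost_performance = percent_complete / percent_spent   # < 1.0 warns of overrun
projected_final_cost = budget / cost_performance      # simple extrapolation

print(f"{percent_complete:.0%} done, {percent_spent:.0%} spent")
print(f"Projected final cost: ~${projected_final_cost:,.0f}")
```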

Techniques for Rapid Response

  • Retrospectives.  Yes, this is both a detection technique and a response technique.  Solutions to a great many problems can be generated in retrospectives.
  • Daily Standups, and follow-up chats.  When a problem comes up in standup, you don’t necessarily have to solve it in the standup (that can make for lengthy and counter-productive standups). But you can, and I think should, get the relevant team members together immediately after the standup to explore solution options.
  • Allow spare capacity for trouble-shooting.  There are two possible approaches to this.  On large projects, I like to use both.  The first is: don’t plan your sprints so that everyone is scheduled to be busy for 100% of the available hours; instead, use something like 75% to allow plenty of time for people to help each other (there’s a toy capacity calculation after this list).  The second is to have an “impediment remover” with hands-on skills. The Scrum flavour of agile says that the Scrum Master is responsible for “removing any impediments to the team’s progress”.  But it’s become conventional in the industry to fill this role with someone who doesn’t personally remove certain kinds of impediments, particularly those of a more technical nature.  It seems that most Scrum Masters don’t write code, can’t personally fix a broken build, and can’t suggest technical solutions to a thorny design problem.  This is OK as long as the team has access to someone who can help with technical impediments.  The role can be a busy one. When I filled it on a 20-person Target-Driven project, it wasn’t full-time but was frequently the biggest part of my working week.
  • Know your strategy for responding to projected cost overruns.  Thanks to your accurate and transparent tracking of progress (above) you’ll soon find out if your project is heading over budget.  But then what? You need to be ready with a range of responses. You also need to be prepared to be creative, and possibly invent new responses to suit your circumstances. This topic deserves a blog post of its own, so I can only summarize it briefly here.   Broadly speaking, the possible responses fall into four categories, and you can mix-and-match from all four.
    1. Adjust the scope.  You don’t necessarily have to drop features outright.  You might just simplify them.  Jeff Patton wrote about it very well, under the heading “Late projects caused by poor estimation and other red herrings”.  Alistair Cockburn’s Trim the Tail is also very relevant.
    2. “Borrow from the future”.  You might do things that will save you time now, but will have costs in the future. Running up technical debt is one way to do this, although of course it’s not always a good one! Another, which I’ve used, is to have each developer specialize in those areas of the system where they are most productive. This made us quicker in the short term, but had two downsides.  In the short term, we were exposed to more risk if one person was sick or otherwise unavailable. In the long term, we had a “knowledge debt” because some people didn’t know how to maintain certain parts of the system. This meant that, in the future, we would need to find time for them to learn those areas.  In our case, with an immovable deadline for the initial go-live, this particular trade-off made sense.
    3. Look for ways to increase developer productivity. You should be doing this anyway but you might find that, when pushed, you become more creative ;-)  Two things that help a lot are keeping your architecture simple, and acting on ideas from retrospectives.  You’ll need to keep measuring  progress, as always, to see whether the changes are working.
    4. Just bite the bullet and spend more money. In some circumstances, and with appropriate controls, this might be the business’s best option.  It helps, of course, if the iterations already completed prove to the business that the project is delivering the right software and operating in a transparent and stable manner.
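And, as promised above, a toy sketch of the “plan to about 75%” idea. The hours and names are invented:

```python
# Leave ~25% slack per person for helping team-mates and trouble-shooting.
available_hours = {"Dev A": 70, "Dev B": 70, "Tester": 60}  # per sprint (invented)
utilisation_target = 0.75

plannable = {person: round(h * utilisation_target)
             for person, h in available_hours.items()}
print(plannable)   # {'Dev A': 52, 'Dev B': 52, 'Tester': 45}
```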

Step 6: Schedule a gap between end-of-coding and any target go-live date

I’ve listed this last, because it comes at the end of the project. But obviously you need to plan for it from the start.  It’s useful for piloting the system with a small group of users (if appropriate in your case), for absorbing some schedule overrun, for acting on late-breaking user-feedback, and numerous other purposes.

In the next post in this series, I’ll share some thoughts on how to wrap a contract around these steps.

[Updated 12 Oct 2015 with minor edits for clarity]

Contracts: Two flavours of agile

There are many forms of agile. Some do support setting price and scope up front.  Here, I outline two overall flavours of agile – one which supports fixed scope and price, and one which does not.


Much as the Old Town in a European city is the center of the city, but doesn’t itself have a center (all the little twisty streets are roughly equal in “centerness”), so agile looks like a single place from a distance, but isn’t a single place, and the closer you get to the center, the more you see there isn’t a single center.

The Agile Manifesto was written by over a dozen people with their own world views and their own multi-centeredness, so it’s no wonder if there is no center to agile itself.

Alistair Cockburn, one of the 17 co-authors

Agile has always been a wide-ranging term.  In the beginning, it encompassed several “light” software development processes that had been developed in the 1990s.  These included Scrum, Extreme Programming (XP), Crystal, Dynamic Systems Development Method (DSDM), and Feature Driven Development (FDD).  Those processes were represented by some of the original authors of the Agile Manifesto. Other authors subscribed to no particular methodology, but shared the group’s interest in pragmatic lightweight processes.

So agile looks like this: a large “bubble”, with smaller defined bubbles within it.



XP and Scrum have become by far the most popular, so much so that many people think that the XP/Scrum way is the only way to do agile. That’s simply not true.

Each “bubble” has a different emphasis.  XP and Scrum emphasise the ability to handle changing requirements.  FDD and DSDM lean more towards identifying a full(ish) set of requirements up front (in a relatively lightweight way, of course).  Crystal emphasises efficiency and “habitability”. (Habitability = “Would the team willingly work this way again?”)

Note also that there’s lots of white space in between the little bubbles – plenty of room for your team to do something that doesn’t fit with any one of the published methodologies, but is still “agile”. (Which is a topic for another day… ;-)

How does this relate to contracts?

When considering contracts for agile projects, it’s helpful to simplify the diverse landscape of agile processes.  For contract purposes, I suggest we can group all the different types of agile into just two flavours.

Flavour 1: Exploratory Agile

  • Don’t have a fixed project scope up front
  • To a significant degree, scope is discovered as the project proceeds
  • Cost is either unknown in advance, or is specified by timeboxing the entire project: “We’ll work in priority order. After we’ve spent $X, we’ll just stop”.
  • Useful for environments where some of the following apply:
    • We cannot know what we need when we start (e.g. R&D projects, or others with very high degree of novelty or business uncertainty)
    • We expect very high degrees of change.  E.g. launching a new commercial product and learning what to do next from user feedback, and maybe even pivoting in a completely different direction.
    • As long as each iteration delivers business value in excess of what it cost, it’s worthwhile for us to continue.

Most on-line articles about agile contracts assume that this flavour is used. That’s fine, up to a point. It’s OK for authors and companies to say, “The exploratory flavour is mandatory for our kind of agile”.  As we saw above, there are many kinds of agile and people are perfectly entitled to set the rules for their own work.  However, it is not OK to say, “The exploratory flavour is mandatory for all kinds of agile”.  That misrepresents the beliefs of those who drafted the Agile Manifesto.  It’s also just plain wrong.

On the positive side, the Exploratory flavour is genuinely useful in many contexts and is probably the easiest way for a team to get started with agile. But it’s not the only game in town.

Flavour 2: Target-Driven Agile

  • Do have an overall scope.  This scope is defined during the early stages of the project.
  • Expect some changes, refinement and feature-thinning, but on the whole aim to deliver more-or-less the original scope
  • May also have an overall budget, which is also set during the early stages of the project.  The budget might be fixed, or it might be a target with controlled flexibility.
  • Most commonly seen with the FDD and DSDM flavours of agile, but is also possible with Scrum. (In Scrum, you’re allowed to define the backlog up-front, if you want)
  • Useful for environments  where some of the following apply:
    • The project is replacing an existing system.  It only makes sense to conduct the project if we can be reasonably sure of building enough scope to successfully replace the old system, at a price we can afford.
    • The business can’t proceed without knowing what they are getting into – in terms of scope and cost.
    • With a few weeks or months of business analysis, depending on system size, it is actually possible to identify the project scope. Typically the scope would be identified in the form of a few dozen, or maybe a few hundred, “epic” user stories. Preparing such a list is likely to be achievable when the project is addressing a known need in an established business. It’s less likely to be possible in startups or R&D.

This is my favourite flavour of agile.  Why? Because it’s a fair question for the business to ask, “What are we getting into?”   If you were about to spend that much money, you’d ask too.  Exploratory Agile dodges the question; Target-Driven Agile answers it.

[This is the first in a short series of posts on Contracts for Target Driven-Agile.  Here’s the next.]

Quick notes on contracts

At today’s IITP Lightning Talk/Panel Discussion, I promised to post some links about how each agile project tends to need its own process, tailored to its own particular situation. Here are those links, and some rough notes on a few other things too:

Tailoring process to each project

The main author on this is Alistair Cockburn. He’s researched and written about why each project needs its own process, and how to cost-effectively do that process configuration. Here’s a quick outline of how to do it, and here’s a much more in-depth description (complete with links to research).

By the way, such tailoring is potentially a challenge to formulating a contract (as per today’s IITP panel); however, in practice I think most of the tailoring will focus on a level of detail below what the contract would cover.  The contract would work at a higher level, specifying the overall approach to managing time, scope, cost, risk etc.  While there are still many choices to be made at that higher level, it seems realistic to me to pick one “flavour” of agile for contractual purposes, and to expect to continue with that overall flavour throughout the project.   I posted some outlines of a few “flavours” here, as relates to scope and cost management. After today’s panel I really need to do a more detailed follow-up post, covering more than just scope and cost!

Norwegian Agile Contract

Here’s a link to that standard agile contract, from Norway, which I mentioned.

Feature thinning

Here’s a quick description, including some links to additional info.

Agile is an umbrella term

There are many “defined” types of agile, and a great many others that are not explicitly defined.  The defined ones include XP, FDD, Scrum, Crystal, DSDM, and Adaptive Software Development.  I mention this just to illustrate the variety of what “agile” means.

Just as an example, FDD is quite different from the better known Scrum and XP variants.

The tension between being specific and being flexible

When you start out with agile, it helps to have a very specific formulation of what to do. Basically a set of rules.  As you gain experience, it makes sense to start to look beyond the initial set of rules.  This causes difficulties – for instance when someone experienced (e.g. me, today!) says that agile can be many different things.  That’s true, but not very helpful as a starting point for organisations that haven’t tried it yet!

Authors addressing this include Andy Hunt and Jeff Patton.

This tension between being specific and being flexible is, I believe, one of the key challenges in sharing ideas about contracts for agile projects.  Maybe that will be a blog post another day…

Your thoughts on a simple waterfall vs agile comparison

I’m seeking feedback on the following comparison of agile vs waterfall.(*)  The comparison is to be used as background information for a panel discussion on agile contracts, so it emphasizes those aspects which I felt were most relevant to that topic.  I’ve tried to keep it agnostic as to the exact flavour of agile to be used.

Waterfall | Agile
Requirements always identified up front | Requirements may be identified up front, in a concise list
Users sign off documents | Users try out the software regularly
Integrate and stabilize at end | Integrate and stabilize frequently
Progress is measured by milestones | Progress is measured by % complete (with continuous testing)
Reduce likelihood of bad predictions through planning & signoffs | Reduce impact of bad predictions through fast detection & response
Value: delivering on promises | Value: openness

What are your thoughts?  I’m particularly interested in your thoughts on the second-to-last line, about the approach to “bad predictions”.  Does that make sense as it stands?  Do I need to add text explaining that I’m talking about all kinds of predictions – not only how long things will take to build, but also what should be built?

(*) Yes, I know, presenting agile and waterfall as opposites is logically flawed, since there’s no “opposite of agile”.  But we need something as background/context for the panel audience.

Great software pricing research

Most software engineers have an intuitive sense that the industry is approaching pricing and estimation in the wrong way.  But we’ve lacked data to prove, or disprove, our intuitions. Magne Jørgensen and his colleagues, at the Simula Research Laboratory, are doing awesome research to fill the gap.

Some of what they’ve found will support your intuitions (e.g. the danger of price as a selection tool) but some may surprise you (you might have some bad estimation habits). Here are some highlights, just from the last few years:

A Strong Focus on Low Price When Selecting Software Providers Increases the Likelihood of Failure in Software Outsourcing Projects.  Empirical evidence for the Winner’s Curse in software development.

The Influence of Selection Bias on Effort Overruns in Software Development Projects. More on the winner’s curse.

What We Do and Don’t Know About Software Development Effort Estimation. The title says it all!

Myths and Over-Simplifications in Software Engineering. A timely reminder of the dangers of confirmation bias when considering how we should go about software development. Similar subject matter to Laurent Bossavit’s Leprechauns of Software Engineering.

The Ignorance of Confidence Levels in Minimum-Maximum Software Development Effort Intervals.  A study confirming a point which Steve McConnell makes early in “Software Estimation: Demystifying the Black Art” – namely that in practice “90% confident” requires a much wider range than we think it does.

Software Development Effort Estimation: Why It Fails and How to Improve It. The third-to-last slide (how to get percentage confidence intervals without the problems of min-max approaches) is excellent. Just one catch, which would have affected many of the teams I’ve worked in.  The technique requires 10 to 20 prior projects, each with estimated and actual costs.  I suspect that many estimators don’t have ready access to such data. (Maybe organisations need to improve how they keep these records, but that’s not the whole solution. Some teams simply don’t have enough history, IMHO).
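For what it’s worth, here’s my rough sketch of how such a technique might work – an assumption on my part, not the method from the slides: use the actual-to-estimated ratios from past projects to put an empirical interval around a new estimate.

```python
# Rough sketch only (my assumption about the approach, not the slides' exact
# method). All numbers are invented.

past_estimates = [100, 250, 80, 400, 150, 220, 90, 310, 180, 120]  # days
past_actuals   = [130, 240, 110, 520, 160, 300, 95, 400, 260, 140] # days

ratios = sorted(a / e for a, e in zip(past_actuals, past_estimates))

new_estimate = 200
low, high = ratios[1], ratios[-2]   # drop the extremes: roughly the middle 80%
print(f"Estimate {new_estimate} days -> likely range "
      f"{new_estimate * low:.0f} to {new_estimate * high:.0f} days")
```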

Better Selection of Software Providers Through Trialsourcing. “In this article we show that differences between software providers in terms of productivity and quality can be very large and that traditional means of evaluating software providers … fail to separate the competent from the incompetent ones.”  Describes using trial projects to select suppliers.

Numerical anchors and their strong effects on software development effort estimates.   Text not yet available.  Looks like a good one though.  In the meantime, here’s Wikipedia’s background material about anchoring.

First Impressions in Software Development Effort Estimation: Easy to Create and Difficult to Neutralize.  Another on anchoring (this time with full text).

From Origami to Software Development: a Review of Studies on Judgment-Based Predictions of Performance Time.  Interesting title, but no full text yet.


People Skills, distilled

Here’s a 6-point summary of my “People Skills” talk.  The points are in pairs, two about negotiation, two about the “arrows of communication”, and two about mindset.

Identify interests
Generate options

Share your stories
Ask for their experiences

Don’t try to win the meeting
Test your assumptions

Note that there’s far more to people skills than these 6 points. The book Crucial Conversations has many more learnable skills – about 25 in total.  The 6 I’ve listed here are the ones that seem particularly important to me, and which flow together (somewhat) in the structure of the talk I give about People Skills.


Identifying Interests

A key part of good negotiation, or negotiation-like discussions such as those about design of a new product, is identifying the interests of all parties.

I suspect there’s a very common mistake made when identifying interests: assuming what the other person’s interests are, instead of asking them.

But it gets worse. In my opinion, assumptions are particularly dangerous when they are assumptions about the other person’s motivations or attitudes.  E.g. “He wants all the glory for coming up with the idea”.  There are several things wrong with assumptions of this type:

  • Firstly, they distract attention from the real interests that we should be focussing the discussion on: what business benefits does the other person want to obtain?
  • Secondly, they encourage us to fall back into a Unilateral Control mindset.  I find it better if I simply don’t make any assumptions of this type.  Instead of making assumptions about the other person’s motives, I focus on the actual business problem at hand, and seek to learn more about their practical interests in relation to the business problem.

As you talk (and listen) openly about the actual business problem, you’re likely to find that the other person’s motives are not too bad.  No-one comes to work to deliberately do a poor job. On some level, virtually everyone wants a good outcome for the business they work for.  Making negative assumptions about their motives is usually mistaken and almost always a distraction and waste of your time.

Mindsets, distilled

It’s not easy to summarise the wonderful work of Chris Argyris. His work on mindsets, namely the Unilateral Control mindset and the Mutual Learning mindset, seems particularly difficult to summarise – and yet it’s so vitally important to anyone who works with other people.

Here’s my latest attempt at approachable wording for the two mindsets.

Unilateral Control:  (common, and counter-productive)

“Guess what they’re thinking.
Don’t trigger negative emotions.
Get them to do what you want”.

Mutual Learning: (works better)

“Test assumptions (about what they’re thinking).
Share valid information.
Seek well-informed agreement.”

(In this context, “unilateral” simply means “one sided” and “mutual” means “we’re all in this together”.)