Contracts: Outline of a Target-Driven Agile project

Recently I wrote about Target-Driven Agile.  Now, I’d like to outline what a Target-Driven agile project actually looks like.  Of course, as discussed previously, there are many possible variations.  This is the way I like to do it.

(Note: These are just the key steps/phases in a Target-Driven project. In my next post, I’ll outline some thoughts on how contracts can support these steps.)

Step 1: Reach an understanding of the scope

This is about getting a broad, high-level understanding of what we are, and are not, aiming to accomplish in the project.

If there’s someone who already knows the business very well,  and is both trusted and authorized to make all necessary decisions, then just ask them.

But usually it’s not that easy. It’s rare to find one person that fits that brief.  Perhaps the needs of the business are so diverse that no one person can adequately, and fairly, represent them all.  Perhaps there are several strong voices within the business, and they’re all saying different things!  Perhaps no-one has separated the “must-haves” from the “nice-to-haves”. So for most projects it’s necessary to work with various people, to craft a high-level description of scope that’s broadly agreed upon.  This usually requires the analysis and facilitation skills of a Business Analyst (or similar), but the exact approach will depend on the project and organisation.

A key point here is that we’re not aiming for a detailed waterfall-style requirements phase. For an agile project this step should be much briefer. For instance, the FDD variety of agile calls this step “Develop an Overall Model” and expects it to take about 2 weeks of effort for every 6 months of eventual software development time.  Personally, I’ve noticed this stage often takes longer – more like 1 month for every 6 months of ensuing development.  I’m comfortable with that 1:6 ratio, but if it were to take much longer than that, I’d fear it degenerating into a waterfall-style requirements phase.

By the way, the people who do this work will gain a lot of useful knowledge about the business, so it’s a good idea if they remain members of the team for the rest of the project.

Step 2: Document the scope as a simple list

You need to write down the output of Step 1. If you’re using User Stories, you can list the names of the epic (coarse-grained) user stories. Otherwise you might list “features”, again with just their names and a few other details.

Some useful tips:

  • This should be a list, not a document.  Most agile teams store these lists in some kind of purpose-built tool (there are dozens of different tools – free, cheap and expensive). For the smallest projects, I’ve seen a list in Excel work surprisingly well.
  • The number of items in your list depends on the size and nature of your project.  As far as I can tell, you typically end up with at least 20 on even a small project.  Big projects that I’ve seen tend to have a few hundred.  I suspect the list becomes hard to understand once there’s more than about 200, and that teams with big projects probably move to writing coarser-grained “bigger” epic stories to keep the count under about 100 or 200. (Each epic will get broken into smaller pieces later in the project, on a just-in-time basis).
  • You might find it useful to group or categorize them into broader “themes”.  Story Maps are one way to do this.  Some agile tools are flexible enough to let you choose your own theming approach and visually group stories by those themes.
  • It’s generally not worthwhile to document dependencies between the individual items. For justification of this view, see an FDD perspective on dependencies (see 5th reply in message thread), and the general perspective towards the end of this page.

Step 3: Understand your likely architectural approach

You don’t have to design everything up front, but you need a basic idea of your general architectural direction, before you go much further.   To illustrate this point, a team using .NET and SQL Server might set a direction that looks something like this:

  • .NET application over SQL Server
  • ORM and no stored procs
  • ASP.NET MVC for the user interface
  • Bootstrap for styling and layout
  • A list of key external systems you intend to integrate with, and the technological approach for integration with each (SOAP, REST, a Message Bus,…)
  • … plus a few more details about the internal structure of your app.  E.g. as a general rule, where will your core business logic go?  How will the UI layer (ASP.NET MVC in this example) connect to that logic?

Since you’re agile, you may change some of these later. But for now, you need some idea of where you’re heading, in order to make progress on the next steps.

Sometimes, it’s hard to settle on a direction, especially if there’s unfamiliar technology involved.  So you can build something here.  Maybe even build the same thing, in two different ways, and compare them.

Step 4: Assign sizing “points” to all items in the scope

This is the usual agile practice of assigning relative sizes to each user story/feature.  Because it’s common agile practice, I won’t include the details here – except to say that for a Target-Driven project you need to assign points up-front to the entire scope.  I.e. everything that’s on the list we made at Step 2.  Why not just assign points to some of them, and do the rest later?  Because:

  • If part of your project is unsized, you can’t make any useful predictions about how long you’ll take to finish it.  On a Target-Driven project, we want to make predictions of total cost and duration. (We’ll cover predictions in the next post).
  • Sizing everything up front has the advantage that it can be easier to size things consistently.  Why? Because at this early stage you have the same level of ignorance about everything!  Contrast this with the alternative approach of sizing some stories half way through the project.  Half way through, you know the completed stories very well, but the future stories poorly. This makes it harder to size future stories correctly relative to the past ones: you know how difficult the past ones were, but not the future ones. This discrepancy of understanding can trick you into under-estimating the difficulty of the future stories.  Sizing everything at the start solves this problem, because at that stage they’re all future stories.
  • Remember that what we need here is relative sizes, not absolute.  Furthermore, we only need the relative sizing to be right “on average”.  If some future sprint has, say, 10 stories in it, it doesn’t matter if some turn out to be smaller than we estimated and some bigger, as long as the errors roughly cancel out across the sprint as a whole.

Since this exercise is only about setting relative sizes, this doesn’t necessarily need to be an onerous task.   On one project I worked on, with a budget in the low millions of dollars, I don’t recall there being any more than about 25 person-hours spent assigning these points.  But your mileage may vary.  We were fortunate to have two very experienced people doing the sizing, one of whom knew the scope very well (since he’d worked on the previous steps) and one who knew the technology very well.  They worked efficiently together, and were comfortable with the approach.  In your case, you might need more than two people, or they might want more time.  But, if they ask for lots more time… remind them that you’re only asking them for relative sizes.  You’re not asking them to actually say how long each feature will take to develop.  You’re just asking for some careful(ish) educated guesses, such that their average “30 point” story will indeed turn out to be about 3 times as much work as their average “10 point” story, and so on.

(By the way: when using up-front sizing like this, these point values are used for overall project tracking, but they should not be the last word on what will, and won’t, fit in a particular sprint. For that, you need to ask the developers to make their own commitments at sprint planning time, and you need to respect what they say.  In fact, on a story-by-story basis, I never cross-check individual developer commitments against the points that were assigned at the start of the project.  Such a comparison would be unhealthy and unnecessary. All I care about is totals: how many of the originally-estimated points, in total, does the whole team complete in a typical sprint?)
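As a rough sketch of how those sprint totals feed a forecast (the numbers and function name here are invented for illustration, not from a real project):

```python
import math

# Totals-only tracking: forecast the remaining sprints from the team's
# typical sprint total, measured in originally-estimated points.

def remaining_sprints(total_points, completed_points, points_per_sprint):
    """How many more sprints at the current velocity."""
    remaining = total_points - completed_points
    return math.ceil(remaining / points_per_sprint)

# Suppose the full scope was sized at 1200 points up front, 450 points
# are done, and the team completes about 75 points in a typical sprint.
print(remaining_sprints(1200, 450, 75))  # 10
```

Note that this uses only totals, never individual story comparisons, which is exactly the point made above.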

Step 5: Build incrementally with key safety practices

Whether they are agile or waterfall, all Target-Driven projects make predictions about duration, cost and the scope that will actually be delivered.   Predictions are prone to error, so:

  • Waterfall projects attempt to reduce  the likelihood of bad predictions, through planning and signoffs.
  • Agile projects attempt to reduce the impact of bad predictions, through fast detection and response.

In my experience, the agile approach works best. But you have to run the project well.  You have to put in place key safety practices that allow you to detect and fix issues very rapidly.  Here are some of my favorites:

Detection techniques

  • Transparent, objective, tracking of progress and cost. This is probably the most important detection tool.  I won’t write about it here, because I have a dozen other pages on my site about it, under the heading “Earned Value”.  Here’s the introductory one, and here are very important details for anyone using it on Target-Driven agile.
  • Daily Standups.  This is the best-known agile technique for detecting whether there are any surprises, and whether anyone would like some help.
  • User involvement “in the sprint”.   I’m a fan of the team showing early versions of each feature to the users, then making corrections and improvements based on that feedback – all inside the sprint.  This gets the user feedback as soon as possible – often when the feature is not even finished, but just barely demo-able.  This allows the team to respond to what the users say very quickly, with minimal wastage and rework.  It works well in 4-week sprints, but it’s probably almost impossible in 1-week sprints.  Alistair Cockburn said some great stuff on this – how 1 week sprints require you to get the user feedback in a following sprint, but I can’t find the reference at present.  Anyway, there are pros and cons of different sprint lengths.
  • “Formal” Testing soon after the sprint.  In many organisations, formal “User Acceptance Testing” (UAT) or “Business Acceptance Testing” (BAT) is conducted with a wider group of users before go-live.  If your agile project is going live in a “big bang” (e.g. it’s replacing an existing system on a specific date) you might be tempted to run this UAT/BAT near the end of the project.  But I think it’s much safer to run many smaller rounds of formal testing during the project.  At work, we tried this by doing formal BAT of each sprint’s output, in the month following completion of that sprint. It worked well, and gave us useful information much earlier than we otherwise would have obtained it.
  • Regular deployments.  Even if you’re going live in a big bang, you should still “deploy” to a non-production environment at least once every 1 to 3 months (if not much more often). Deploying flushes out issues, and proves the stability (or otherwise!) of the system.
  • Risk smoothing. Don’t leave all the risky stuff to the end!  Smooth your risks out over the lifetime of the project, with a bias to flushing them out sooner rather than later. As we know, the tail end of a project already has plenty of risks relating to go-live, scope change and other unexpected events.  So don’t also try to tackle technical risks there.  Move the technically-difficult stories earlier. But don’t necessarily put them right at the start – there you may have enough risk just in forming, storming and norming the team, and in standing up the architecture.  Therefore, consider spreading the technically difficult implementation work between about the 15% complete and 65% complete stages of your project, with any potential show-stoppers towards the front of that range.  (See Alistair Cockburn’s “Trimming the Tail” for his excellent take on this.)
  • Retrospectives. It’s now common practice, at the end of each iteration in an agile project, for the team to get together and reflect on how the iteration went. What can they learn from it?  Are there any niggling concerns?  Humans are incredibly perceptive.  As long as you can create a culture where people are comfortable airing concerns and bad news, you will learn a lot from retrospectives.
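As a bare-bones sketch of the kind of arithmetic behind transparent, objective tracking (the numbers are invented, and this is only an illustration of the idea, not the full Earned Value treatment linked above):

```python
# Objective progress and cost projection from up-front sizing points.

def percent_complete(points_done, total_points):
    """Progress as a share of the originally-sized scope."""
    return 100.0 * points_done / total_points

def projected_total_cost(spent_so_far, points_done, total_points):
    """Naive linear projection: assumes cost per point stays constant."""
    return spent_so_far * total_points / points_done

# 300 of 1000 points done, $400k spent so far:
print(percent_complete(300, 1000))               # 30.0
print(projected_total_cost(400_000, 300, 1000))  # about $1.33 million
```

Even something this simple, tracked honestly sprint by sprint, tells you early whether the project is heading over budget.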

Techniques for Rapid Response

  • Retrospectives.  Yes, this is both a detection technique and a response technique.  Solutions to a great many problems can be generated in retrospectives.
  • Daily Standups, and follow-up chats.  When a problem comes up in stand-up, you don’t necessarily have to solve it in the standup (that can make for lengthy and counter-productive standups). But you can, and I think should, get the relevant team members together immediately after the standup to explore solution options.
  • Allow spare capacity for trouble-shooting.  There are two possible approaches to this.  On large projects, I like to use them both together.  The first is: don’t plan your sprints so that everyone is scheduled to be busy for 100% of the available hours; instead, use something like 75%, to allow plenty of time for people to help each other.  The second is to have an impediment remover with hands-on skills. The Scrum flavour of agile says that the Scrum Master is responsible for “removing any impediments to the team’s progress”.  It’s become conventional in the industry to fill this role with someone who doesn’t personally remove certain kinds of impediments, particularly those of a more technical nature.  It seems that most Scrum Masters don’t write code, can’t personally fix a broken build, and can’t suggest technical solutions to a thorny design problem.  This is not a bad thing, as long as your team has access to someone who can.  When the team encounters a technical obstacle which they either can’t spare the time for, or are not sure how to solve, who will help them? This can become a busy role.  When I filled it on a 20-person Target-Driven project, it usually wasn’t full-time but it was frequently the biggest part of my working week.
  • Know your strategy for responding to projected cost overruns.  Thanks to accurate and transparent tracking of progress (above) you’ll soon know if your project is heading over budget.  But then what? What will you do about it?  You need to be ready with a range of possible responses. You also need to be prepared to be creative, and possibly invent new solutions to suit your circumstances. This topic deserves a blog post of its own, so I can only summarize it briefly here.   Broadly speaking, the possible responses fall into four categories, and you can mix-and-match from all four.
    1. Adjust the scope.  You don’t necessarily have to drop features outright.  You might just simplify them.  Jeff Patton wrote about it very well, under the heading “Late projects caused by poor estimation and other red herrings”.  Alistair Cockburn’s Trim the Tail is also very relevant.
    2. “Borrow from the future”.  You might do things that will save you time now, but will have costs in the future. Running up technical debt is one way to do this, although of course it’s not always a good one! Another, which I’ve used, is to have each developer specialize in those areas of the system where they are most productive. This made us quicker in the short term, but had two downsides.  In the short term, we were exposed to more risk if one person was sick, or otherwise unavailable. In the long term, we had a “knowledge debt”.  Some people didn’t know how to maintain certain parts of the system. This meant that, in the future, we would need to find time for them to learn those areas.  In our case, with an immovable deadline for the initial go-live, this particular trade-off made sense.
    3. Look for ways to increase developer productivity. You should be doing this anyway, but you might find that, when pushed, you become more creative ;-)  Two things that help a lot are keeping your architecture simple, and acting on ideas from retrospectives.  You’ll need to keep measuring  progress, as always, to see whether the changes are working.
    4. Just bite the bullet and spend more money. In some circumstances, and with appropriate controls, this might be the business’s best option.  It helps, of course, if the iterations already completed prove to the business that the project is delivering the right software and operating in a transparent and stable manner.
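The spare-capacity rule above can be sketched in a couple of lines (the 75% load factor and team numbers are illustrative, not a prescription):

```python
# Sprint capacity planning with deliberate slack for trouble-shooting.

def plannable_hours(team_size, hours_per_person, load_factor=0.75):
    """Hours to schedule committed work against; the remainder is slack
    for helping each other and removing impediments."""
    return team_size * hours_per_person * load_factor

# An 8-person team with 60 available hours each per sprint:
print(plannable_hours(8, 60))  # 360.0 planned, leaving 120 in reserve
```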

Step 6: Schedule a gap between end-of-coding and any target go-live date

I’ve listed this last, because it comes at the end of the project. But obviously you need to plan for it from the start.  It’s useful for piloting the system with a small group of users (if appropriate in your case), for absorbing some schedule overrun, for acting on late-breaking user-feedback, and numerous other purposes.

In the next post in this series, I’ll share some thoughts on how to wrap a contract around these steps.

Contracts: Two flavours of agile

There are many forms of agile. Some do support setting price and scope up front.  Here, I outline two overall flavours of agile – one which supports fixed scope and price, and one which does not.


Much as the Old Town in a European city is the center of the city, but doesn’t itself have a center (all the little twisty streets are roughly equal in “centerness”), so agile looks like a single place from a distance, but isn’t a single place, and the closer you get to the center, the more you see there isn’t a single center.

The Agile Manifesto was written by over a dozen people with their own world views and their own multi-centeredness, so it’s no wonder if there is no center to agile itself.

Alistair Cockburn, one of the 17 co-authors

Agile has always been a wide-ranging term.  In the beginning, it encompassed several “light” software development processes that had been developed in the 1990s.  These included Scrum, Extreme Programming (XP), Crystal, Dynamic Systems Development Method (DSDM), and Feature Driven Development (FDD).  Those processes were represented by some of the original authors of the Agile Manifesto. Other authors subscribed to no particular methodology, but shared the group’s interest in pragmatic lightweight processes.

So agile looks like this: a large “bubble”, with smaller defined bubbles within it.



XP and Scrum have become by far the most popular, so much so that many people think that the XP/Scrum way is the only way to do agile. That’s simply not true.

Each “bubble” has a different emphasis.  XP and Scrum emphasise the ability to handle changing requirements.  FDD and DSDM lean more towards identifying a full(ish) set of requirements up front (in a relatively lightweight way, of course).  Crystal emphasises efficiency and “habitability”. (Habitability = “Would the team willingly work this way again?”)

Note also that there’s lots of white space in between the little bubbles – plenty of space for your team to do something that doesn’t fit with any one of the published methodologies, but is still “agile”. (Which is a topic for another day… ;-)

How does this relate to contracts?

When considering contracts for agile projects, it’s helpful to simplify the diverse landscape of agile processes.  For contract purposes, I suggest we can group all the different types of agile into just two flavours.

Flavour 1: Exploratory Agile

  • Don’t have a fixed project scope up front
  • To a significant degree, scope is discovered as the project proceeds
  • Cost is either unknown in advance, or is specified by timeboxing the entire project: “We’ll work in priority order. After we’ve spent $X, we’ll just stop”.
  • Useful for environments where some of the following apply:
    • We cannot know what we need when we start (e.g. R&D projects, or others with very high degree of novelty or business uncertainty)
    • We expect very high degrees of change.  E.g. launching a new commercial product and learning what to do next from user feedback, and maybe even pivoting to a completely different direction.
    • As long as each iteration delivers business value in excess of what it cost, it’s worthwhile for us to continue.

Most on-line articles about agile contracts assume that this flavour is used. That’s fine, up to a point. It’s OK for authors and companies to say, “The exploratory flavour is mandatory for our kind of agile”.  As we saw above, there are many kinds of agile and people are perfectly entitled to set the rules for their own work.  However, it is not OK to say, “The exploratory flavour is mandatory for all kinds of agile”.  That misrepresents the beliefs of those who drafted the Agile Manifesto.  It’s also just plain wrong.

On the positive side, the Exploratory flavour is genuinely useful in many contexts and is probably the easiest way for a team to get started with agile. But it’s not the only game in town.

Flavour 2: Target-Driven Agile

  • Do have an overall scope.  This scope is defined during the early stages of the project.
  • Expect some changes, refinement and feature-thinning, but on the whole aim to deliver more-or-less the original scope
  • May also have an overall budget, which is also set during the early stages of the project.  The budget might be fixed, or it might be a target with controlled flexibility.
  • Most commonly seen with the FDD and DSDM flavours of agile, but is also possible with Scrum. (In Scrum, you’re allowed to define the backlog up-front, if you want)
  • Useful for environments  where some of the following apply:
    • The project is replacing an existing system.  It only makes sense to conduct the project if we can be reasonably sure of building enough scope to successfully replace the old system, at a price we can afford.
    • The business can’t proceed without knowing what they are getting into – in terms of scope and cost.
    • With a few weeks or months of business analysis, depending on system size, it is actually possible to identify the project scope. Typically the scope would be identified in the form of a few dozen, or maybe a few hundred, “epic” user stories. Preparing such a list is likely to be achievable when the project is addressing a known need in an established business. It’s less likely to be possible in startups or R&D.

This is my favourite flavour of agile.  Why? Because it’s a fair question for the business to ask, “What are we getting into?”   If you were about to spend that much money, you’d ask too.  Exploratory Agile dodges the question.  Target-Driven Agile answers it.

[This is the first in a short series of posts on Contracts for Target-Driven Agile.  Here’s the next.]

Quick notes on contracts

At today’s IITP Lightning Talk/Panel Discussion, I promised to post some links about how each agile project tends to need its own process, tailored to its own particular situation. Here are those links, and some rough notes on a few other things too:

Tailoring process to each project

The main author on this is Alistair Cockburn. He’s researched and written about why each project needs its own process, and how to cost-effectively do that process configuration. Here’s a quick outline of how to do it, and here’s a much more in-depth description (complete with links to research).

By the way, such tailoring is potentially a challenge to formulating a contract (as per today’s IITP panel) however in practice I think most of the tailoring will focus on a level of detail below what the contract would cover.  The contract would work at a higher level, specifying the overall approach to managing time, scope, cost, risk etc.  While there are still many choices to be made at that higher level, it seems realistic to me to pick one “flavour” of agile for contractual purposes, and to expect to continue with that overall flavour throughout the project.   I posted some outlines of a few “flavours” here, as relates to scope and cost management. After today’s panel I really need to do a more detailed follow-up post, covering more than just scope and cost!

Norwegian Agile Contract

Here’s a link to that standard agile contract, from Norway, which I mentioned.

Feature thinning

There are some good links to this in the “further reading” section of this page.  BTW, the page itself is about agile with fixed scope, and some ways to approach it.

Agile is an umbrella term

There are many “defined” types of agile, and a great many others that are not explicitly defined.  The defined ones include XP, FDD, Scrum, Crystal, DSDM, and Adaptive Software Development.  I mention this just to illustrate the variety of what “agile” means.

Just as an example, FDD is quite different from the better known Scrum and XP variants.

The tension between being specific and being flexible

When you start out with agile, it helps to have a very specific formulation of what to do. Basically a set of rules.  As you gain experience, it makes sense to start to look beyond the initial set of rules.  This causes difficulties – for instance when someone experienced (e.g. me, today!) says that agile can be many different things.  That’s true, but not very helpful as a starting point for organisations that haven’t tried it yet!

Authors addressing this include Andy Hunt and Jeff Patton.

This tension between being specific and being flexible is, I believe, one of the key challenges in sharing ideas about contracts for agile projects.  Maybe that will be a blog post another day…

Your thoughts on a simple waterfall vs agile comparison

I’m seeking feedback on the following comparison of agile vs waterfall (*).   The comparison is to be used as background information for a panel discussion on agile contracts, so it emphasizes those aspects which I felt were most relevant to that topic.  I’ve tried to keep it agnostic as to the exact flavour of agile to be used.

Waterfall                                                        | Agile
Requirements always identified up front                          | Requirements may be identified up front, in a concise list
Users sign off documents                                         | Users try out the software regularly
Integrate and stabilize at end                                   | Integrate and stabilize frequently
Progress is measured by milestones                               | Progress is measured by % complete (with continuous testing)
Reduce likelihood of bad predictions through planning & signoffs | Reduce impact of bad predictions through fast detection & response
Value: delivering on promises                                    | Value: openness

What are your thoughts?  I’m particularly interested in your thoughts on the second-to-last line, about the approach to “bad predictions”.  Does that make sense as it stands?  Do I need to add text explaining that I’m talking about all kinds of predictions – not only how long things will take to build, but also what should be built?

(*) Yes, I know, presenting agile and waterfall as opposites is logically flawed, since there’s no “opposite of agile”.  But we need something as background/context for the panel audience.

Great software pricing research

Most software engineers have an intuitive sense that the industry is approaching pricing and estimation in the wrong way.  But we’ve lacked data to prove, or disprove, our intuitions. Magne Jørgensen and his colleagues, at the Simula Research Laboratory, are doing awesome research to fill the gap.

Some of what they’ve found will support your intuitions (e.g. the danger of price as a selection tool) but some may surprise you (you might have some bad estimation habits). Here are some highlights, just from the last few years:

A Strong Focus on Low Price When Selecting Software Providers Increases the Likelihood of Failure in Software Outsourcing Projects.  Empirical evidence for the Winner’s Curse in software development.

The Influence of Selection Bias on Effort Overruns in Software Development Projects. More on the winner’s curse.

What We Do and Don’t Know About Software Development Effort Estimation. The title says it all!

Myths and Over-Simplifications in Software Engineering. A timely reminder of the dangers of confirmation bias when considering how we should go about software development. Similar subject matter to Laurent Bossavit’s Leprechauns of Software Engineering.

The Ignorance of Confidence Levels in Minimum-Maximum Software Development Effort Intervals.  A study confirming a point which Steve McConnell makes early in “Software Estimation: Demystifying the Black Art” – namely that in practice “90% confident” requires a much wider range than we think it does.

Software Development Effort Estimation: Why It Fails and How to Improve It. The third-to-last slide (how to get percentage confidence intervals without the problems of min-max approaches) is excellent. Just one catch, which would have affected many of the teams I’ve worked in.  The technique requires 10 to 20 prior projects, each with estimated and actual costs.  I suspect that many estimators don’t have ready access to such data. (Maybe organisations need to improve how they keep these records, but that’s not the whole solution. Some teams simply don’t have enough history, IMHO).
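For what it’s worth, here’s my reading of that historical-ratio idea as a sketch (the past-project numbers are invented, and the exact percentile mechanics in the slides may differ from this simplification):

```python
# Confidence intervals from the spread of past actual/estimated ratios.

def interval_from_history(new_estimate, past_estimates, past_actuals,
                          low_pct=0.10, high_pct=0.90):
    """Scale a new estimate by the empirical spread of past
    actual/estimated ratios, giving a rough ~80% interval."""
    ratios = sorted(a / e for a, e in zip(past_actuals, past_estimates))

    def pct(p):
        # Simple empirical percentile: index into the sorted ratios.
        i = min(int(p * len(ratios)), len(ratios) - 1)
        return ratios[i]

    return new_estimate * pct(low_pct), new_estimate * pct(high_pct)

# Ten past projects' estimated and actual costs (in $k):
estimated = [100, 80, 120, 90, 200, 150, 60, 110, 95, 130]
actual    = [120, 80, 180, 100, 260, 160, 75, 120, 110, 170]

low, high = interval_from_history(100, estimated, actual)
# With this history, a new $100k estimate spans roughly $107k to $150k.
```

Notice that with a history of mostly overruns, even the low end of the interval sits above the raw estimate, which is precisely the kind of insight min-max guessing tends to miss.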

Better Selection of Software Providers Through Trialsourcing. “In this article we show that differences between software providers in terms of productivity and quality can be very large and that traditional means of evaluating software providers … fail to separate the competent from the incompetent ones.”  Describes using trial projects to select suppliers.

Numerical anchors and their strong effects on software development effort estimates.   Text not yet available.  Looks like a good one though.  In the meantime, here’s Wikipedia’s background material about anchoring.

First Impressions in Software Development Effort Estimation: Easy to Create and Difficult to Neutralize.  Another on anchoring (this time with full text).

From Origami to Software Development: a Review of Studies on Judgment-Based Predictions of Performance Time.  Interesting title, but no full text yet.


People Skills, distilled

Here’s a 6-point summary of my “People Skills” talk.  The points are in pairs, two about negotiation, two about the “arrows of communication”, and two about mindset.

Identify interests
Generate options

Share your stories
Ask for their experiences

Don’t try to win the meeting
Test your assumptions

Note that there’s far more to people skills than these 6 points. The book Crucial Conversations has many more learnable skills. (About 25 in total).  The 6 I’ve listed here are the ones that seem particularly important to me, and which flow together (somewhat) in the structure of the talk I give about People Skills.


Identifying Interests

A key part of good negotiation, or negotiation-like discussions such as those about design of a new product, is identifying the interests of all parties.

I suspect there’s a very common mistake made when identifying interests: assuming what the other person’s interests are, instead of asking them.

But it gets worse. In my opinion, assumptions are particularly dangerous when they are assumptions about the other person’s motivations or attitudes.  E.g. “He wants all the glory for coming up with the idea”.  There are several things wrong with assumptions of this type:

  • Firstly, they distract attention from the real interests that we should be focussing the discussion on: what business benefits does the other person want to obtain?
  • Secondly, they encourage us to fall back into a Unilateral Control mindset.  I find it better if I simply don’t make any assumptions of this type.  Instead of making assumptions about the other person’s motives, I focus on the actual business problem at hand, and seek to learn more about their practical interests in relation to the business problem.

As you talk (and listen) openly about the actual business problem, you’re likely to find that the other person’s motives are not too bad.  No-one comes to work to deliberately do a poor job. On some level, virtually everyone wants a good outcome for the business they work for.  Making negative assumptions about their motives is usually mistaken, and almost always a distraction and a waste of your time.

Mindsets, distilled

It’s not easy to summarise the wonderful work of Chris Argyris. His work on mindsets, namely the Unilateral Control mindset and the Mutual Learning mindset, seems particularly difficult to summarise – and yet it’s so vitally important to anyone who works with other people.

Here’s my latest attempt, at approachable wording for the two mindsets.

Unilateral Control:  (common, and counter-productive)

“Guess what they’re thinking.
Don’t trigger negative emotions.
Get them to do what you want”.

Mutual Learning: (works better)

“Test assumptions (about what they’re thinking).
Share valid information.
Seek well-informed agreement.”

(In this context, “unilateral” simply means “one sided” and “mutual” means “we’re all in this together”. )

The dynamics of trends

I think this lovely quote, originally about scientific research, probably explains a lot about how trends come and go in software engineering.

after a new paradigm is proposed, the [publication] process is tilted toward positive results. But then, after a few years, the academic incentives shift—the paradigm has become entrenched—so that the most notable results are now those that disprove the theory.

From . The article quotes a study by John Ioannidis, who writes:

It can be proven that most claimed research findings are false…

…for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.

From (Emphasis added)

An Experiment in Think-First Development

I conducted an experiment today. I chose a problem which Ron Jeffries solved with TDD. I took the opposite approach.  I sat for about 5 minutes and thought about the solution.  Then I wrote down the code, added a unit test, ran the test to find the errors (there were 3), added one more test, re-ran both tests, and I was done.

What did I learn?

  1. I’m reasonably happy with my “think first” solution.
  2. I like it because it represents the solution in a very direct way.  It’s something my mind can relate to.  The design embodies a “Unit Metaphor”.  I just made that term up ;)  I mean a small-scale version of XP’s System Metaphor – a way of thinking about this unit of code that makes sense to me, as a human.
  3. I don’t think I would have come up with such a direct solution if I’d worked test-first.  I believe I would have been led to the solution in a much more round-about way,  and vestiges of the journey would have remained in the final code.
  4. During TDD the code “speaks” to you.  But I question whether it speaks with a sufficiently creative voice.  Can it really “tell” you a good Unit Metaphor?  Or does it merely tell you about improved variations of itself?  If the Unit Metaphor is missing at the start, will it remain missing for ever? (And it probably will be missing at the start, because as a good TDD practitioner you deliberately didn’t think about it at the start, right? ;)
  5. As an aside, maybe this example problem is too small.  Ron got a 6000-word blog post out of it, but is it really a big enough problem to serve as a test-bed of design and coding techniques?  Maybe our online discussion of TDD is skewed by the necessity of using relatively small examples.  I don’t know….

What I do know (or at least strongly believe ;-) is that a certain degree of directness helps humans understand code, and a little up-front thought may help to create that directness. The trick, I suggest, is to seek a simple Unit Metaphor during your up-front thinking.

The Design Problem

The problem posed was to write code to create textual output in a “diamond” pattern, like this:

- - A - -
- B - B -
C - - - C
- B - B -
- - A - -

(spaces added here, just for readability).

Obviously it should be parameterized, to produce diamonds of various sizes.  The next size up has a “D” line in the middle, surrounded by two “C” lines.

This coding problem was previously mentioned by Seb Rose and Alistair Cockburn.

Comparing the Solutions 

(If you want to try writing your own solution, best to do that now, before following the links to Ron’s solution and mine).

Ron’s solution is in Ruby.  You can find it at the bottom of this page.

My solution is in C#, since that’s the language I know best.  You can find it, and the two unit tests, in this text file.

Comparing the two, Ron’s looks more visually appealing at first glance. The methods are shorter, like methods are “supposed” to be, and it’s doing some clever stuff with symmetry: generating only one quarter of the output and mirroring it to produce the rest.
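To make that symmetry idea concrete, here’s a rough sketch of it in Python. (Ron’s actual solution is in Ruby, linked above; the function name `diamond_by_symmetry` and the details below are mine, just to illustrate the technique of building one quadrant and mirroring it.)

```python
def diamond_by_symmetry(size):
    """Illustrative sketch: build only the top-left quadrant,
    then mirror it left-to-right and top-to-bottom."""
    # Top-left quadrant, including the centre row and centre column.
    quadrant = []
    for distance in range(size):
        letter = chr(ord('A') + distance)
        cells = ['-'] * size
        cells[size - 1 - distance] = letter
        quadrant.append(cells)
    # Mirror each row, dropping the duplicated centre column.
    top = [''.join(cells + cells[-2::-1]) for cells in quadrant]
    # Mirror vertically, dropping the duplicated centre row.
    return '\n'.join(top + top[-2::-1])

print(diamond_by_symmetry(3))
# --A--
# -B-B-
# C---C
# -B-B-
# --A--
```

The appeal of this style is that each mirroring step is a one-liner; the cost, arguably, is that the reader has to reconstruct the whole diamond in their head.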

Mine looks uglier.  The implementation is one 24-line method. (I think I’ve violated a few published coding standards right there!) But it does its work in a very straightforward way: it builds up the diamond one complete line at a time, directly modelling the current width of the diamond by keeping track of the edge’s “current distance from the centre”.
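For comparison, here’s the shape of that direct, line-at-a-time approach sketched in Python. (My real solution is in C#, linked above; this is not that code, just an illustration of the same idea: one pass, one complete line per step, tracking distance from the centre.)

```python
def diamond(size):
    """Sketch of the 'direct' approach: build the diamond one
    complete line at a time, tracking the letter's current
    distance from the centre column."""
    width = 2 * size - 1
    centre = size - 1
    lines = []
    # Distance grows 0..size-1 down to the widest line, then shrinks again.
    for distance in list(range(size)) + list(range(size - 2, -1, -1)):
        letter = chr(ord('A') + distance)
        row = ['-'] * width
        row[centre - distance] = letter  # left edge
        row[centre + distance] = letter  # right edge (same cell when distance is 0)
        lines.append(''.join(row))
    return '\n'.join(lines)

print(diamond(3))
# --A--
# -B-B-
# C---C
# -B-B-
# --A--
```

Every line of output corresponds to one loop iteration, which is exactly the kind of directness I mean by a “Unit Metaphor”: the code’s structure matches the picture in your head.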

My, totally biased(!), view is that the direct, single-method implementation is actually easier for humans to make sense of and reason about.

BTW George Dinwiddie posted another solution here.

An Aside About Timing

It’s worth noting that my initial 5 minutes of thinking produced the general shape of the solution, but not all the details.  The actual coding, including the two tests, took about 18 minutes – an embarrassing proportion of which was consumed by the three bugs and by details of C# that I really should have known (e.g. I felt sure there was a built-in “IsOdd” method somewhere for integers.  But apparently there’s not.)

I think I would have taken longer to produce a solution with a pure TDD approach.  Of course, I can’t prove that because, as Ron points out in his post, it’s impossible for one person to realistically test two different approaches to the same problem – since any second attempt is polluted by knowledge gained in the first.


For the record, I also enjoy test-first. Particularly on really complex problems, or on simple ones when I’m suffering from writer’s block.

What I object to, and feel uncomfortable with, is the common implication that there’s only one true way to build software. People differ. Projects differ. Elements within projects differ. We should embrace those differences, and draw on our full range of tools – including up-front thought.