Category Archives: Agile Software Development

This blog began as a software development blog, on the topic of Agile software development. These are the original posts, from before I changed focus to “the neglected essentials of software development”.

Feature Thinning

Feature Thinning is the agile practice of simplifying the scope and implementation of specific features, on a case-by-case basis.  Often, given their growing knowledge of the technology and business domain, an agile team can suggest simpler alternatives to what the users originally asked for.  Often, these simpler alternatives still give all the key benefits, at much less cost.

It is important to thin features whenever you can, because the money you save will come in handy in other areas of the project.  For instance, the users may look at some other feature that you’ve built and say, “Yes, that’s exactly what we asked for. But now that we’ve had the chance to try it out, we realise we should have asked for something different.” There’s nothing wrong with that! It’s good and normal.  For instance, on one project we completely re-designed and re-built a whole series of key screens based on user feedback like this – and thank goodness we did! But you can only afford to delight your users like that if you have saved money elsewhere in the project – which is why it’s so important to thin features wherever you can.

A note on terminology.  Jeff Patton is the key author who has written about feature thinning.  But he didn’t call it “feature thinning”.  He originally used terms like “managing scale [of each feature]”.  I spoke with Jeff recently, and he confirmed that he now prefers the term “feature thinning”.

For more, see Jeff’s most excellent post on estimates, red herrings and dead fish.

[This post is a brief digression from my current series on contracts for agile projects. The final two instalments of that series will follow…]

Your thoughts on a simple waterfall vs agile comparison

I’m seeking feedback on the following comparison of agile vs waterfall(*).  The comparison is to be used as background information for a panel discussion on agile contracts, so it emphasizes those aspects which I felt were most relevant to that topic.  I’ve tried to keep it agnostic as to the exact flavour of agile to be used.

| Waterfall | Agile |
| --- | --- |
| Requirements always identified up front | Requirements may be identified up front, in a concise list |
| Users sign off documents | Users try out the software regularly |
| Integrate and stabilize at end | Integrate and stabilize frequently |
| Progress is measured by milestones | Progress is measured by % complete (with continuous testing) |
| Reduce likelihood of bad predictions through planning & signoffs | Reduce impact of bad predictions through fast detection & response |
| Value: delivering on promises | Value: openness |

What are your thoughts?  I’m particularly interested in your thoughts on the second-to-last line, about the approach to “bad predictions”.  Does that make sense as it stands?  Do I need to add text explaining that I’m talking about all kinds of predictions – not only how long things will take to build, but also what should be built?

(*) Yes, I know, presenting agile and waterfall as opposites is logically flawed, since there’s no “opposite of agile”.  But we need something as background/context for the panel audience.

An Experiment in Think-First Development

I conducted an experiment today. I chose a problem which Ron Jeffries solved with TDD. I took the opposite approach.  I sat for about 5 minutes and thought about the solution.  Then I wrote down the code, added a unit test, ran the test to find the errors (there were 3), added one more test, re-ran both tests, and I was done.

What did I learn?

  1. I’m reasonably happy with my “think first” solution.
  2. I like it because it represents the solution in a very direct way.  It’s something my mind can relate to.  The design embodies a “Unit Metaphor”.  I just made that term up ;)  I mean a small-scale version of XP’s System Metaphor – a way of thinking about this unit of code that makes sense to me, as a human.
  3. I don’t think I would have come up with such a direct solution if I’d worked test-first.  I believe I would have been led to the solution in a much more round-about way,  and vestiges of the journey would have remained in the final code.
  4. During TDD the code “speaks” to you.  But I question whether it speaks with a sufficiently creative voice.  Can it really “tell” you a good Unit Metaphor?  Or does it merely tell you about improved variations of itself?  If the Unit Metaphor is missing at the start, will it remain missing for ever? (And it probably will be missing at the start, because as a good TDD practitioner you deliberately didn’t think about it at the start, right? ;)
  5. As an aside, maybe this example problem is too small.  Ron got a 6000-word blog-post out of it, but is it really a big enough problem to serve as a test-bed of design and coding techniques?  Maybe our online discussion about TDD is skewed by the inevitable necessity to use relatively small examples.  I don’t know….

What I do know (or at least strongly believe ;-) is that a certain degree of directness helps humans understand code, and a little up-front thought may help to create that directness. The trick, I suggest, is to seek a simple Unit Metaphor during your up-front thinking.

The Design Problem

The problem posed was to write code to create textual output in a “diamond” pattern, like this:

- - A - -
- B - B -
C - - - C
- B - B -
- - A - -

(spaces added here, just for readability).

Obviously it should be parameterized, to produce diamonds of various sizes.  The next size up has a “D” line in the middle, surrounded by two “C” lines.

This coding problem was previously mentioned by Seb Rose and Alistair Cockburn.

Comparing the Solutions 

(If you want to try writing your own solution, best to do that now, before following the links to Ron’s solution and mine).

Ron’s solution is in Ruby.  You can find it at the bottom of this page.

My solution is in C#, since that’s the language I know best.  You can find it, and the two unit tests, in this text file.

Comparing the two, Ron’s looks more visually appealing  at first glance. The methods are shorter, like methods are “supposed” to be, and it’s doing some clever stuff with generating only one quarter of the output and using symmetry to produce the rest.

Mine looks uglier.  The implementation is one 24-line method. (I think I’ve violated a few published coding standards right there!). But it does its work in a very straightforward way. It builds up the diamond one complete  line at a time.  It directly models the current width of the diamond, by keeping track of the edge’s “current distance from the centre”.

My, totally biased(!), view is that the direct, single-method implementation is actually easier for humans to make sense of and reason about.
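For readers who want the flavour of that “direct” approach without opening the linked file, here is a minimal sketch – in Python rather than the original C#, and with names of my own choosing – of building the diamond one complete line at a time, tracking each letter’s offset from the centre column:

```python
# A sketch of the direct, line-at-a-time approach described above.
# (Python rather than the author's original C#; the function name is mine.)
def diamond(last_letter):
    """Return the diamond for 'A'..last_letter as a list of strings."""
    size = ord(last_letter) - ord('A')   # offset of the widest letter, e.g. 'C' -> 2
    width = 2 * size + 1                 # the diamond is width x width characters
    lines = []
    # Letter offsets down the rows: 0..size..0 ('A'..'C'..'A' when last_letter == 'C')
    for i in list(range(size + 1)) + list(range(size - 1, -1, -1)):
        row = ['-'] * width
        # Each letter appears i columns either side of the centre column.
        row[size - i] = row[size + i] = chr(ord('A') + i)
        lines.append(''.join(row))
    return lines
```

Calling `diamond('C')` reproduces the five-line pattern shown earlier; the whole thing stays one short function, in the spirit of the single-method C# version.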

BTW George Dinwiddie posted another solution here.

An Aside About Timing

It’s worth noting that my initial 5 minutes of thinking produced the general shape of the solution, but not all the details.  The actual coding, including the two tests, took about 18 minutes – an embarrassing proportion of which was consumed by the three bugs and by details of C# that I really should have known (e.g. I felt sure there was a built-in “IsOdd” method somewhere for integers.  But apparently there’s not.)

I think I would have taken longer to produce a solution with a pure TDD approach.  Of course, I can’t prove that because, as Ron points out in his post, it’s impossible for one person to realistically test two different approaches to the same problem – since any second attempt is polluted by knowledge gained in the first.

Wrap-Up

For the record, I also enjoy test-first. Particularly on really complex problems, or on simple ones when I’m suffering from writer’s block.

What I object to, and feel uncomfortable with, is the common implication that there’s only one true way to build software. People differ. Projects differ. Elements within projects differ. We should embrace those differences, and draw on our full range of tools – including up-front thought.

Agile with Fixed Scope

It’s a common misconception that agile processes can’t be used with fixed scope.  A number of the founders of the agile movement invented their forms of agile on fixed-scope projects. As I write this, I myself am working on an 18-month project with about 20 people and a fixed(ish) scope (see below).  So it can be done.  But how?

There are several different strategies you can use:

Strategy 1: Fix the scope and flex the price

This keeps scope management very simple: you just build all of it.  The catch is it may take longer than you expected, so you may need to flex the price through a time-and-materials contract or some kind of sharing of financial risk.  Understandably, this risk of cost overruns renders this simple approach unsuitable in many environments.

Strategy 2: Work in priority order and stop when the money runs out

(Admittedly, this is not exactly fixed scope.) This is very commonly recommended on agile projects, too commonly in my opinion.  But again, it has the virtue of being relatively simple.  Do the most beneficial stuff first, leaving the least beneficial until last.  When the money runs out, just stop and don’t do the rest.  Agile makes this approach possible – but not mandatory.

Strategy 3: Implement remaining features more simply when short of time (“Feature Thinning“)

There are many factors that influence the effort required to develop a feature (or user story, depending on your terminology).  Some of those factors are probably under your control: e.g. How extensive is the validation? How much effort do we put into optimising the user experience (UX) and appearance?  Do we fully automate everything, or do we allow manual overrides so we don’t have to code every single  edge case?  Can we think of something that would save development time, and still meet the overall business goal (in a different way from what was originally expected)?

If you are using good earned value tracking you should know, within the first quarter of the project, whether you are likely to run out of time at the end.  Once you find that out, immediately start seizing all opportunities to simplify the remaining 75% of the project.  Because you have good earned value tracking, you can justify the simplifications to your stakeholders.  The aim is to deliver all of the planned business benefits, just with simpler implementations than might have been originally expected.
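To illustrate the kind of early-warning projection involved, here is a rough sketch of a standard earned-value forecast.  The figures and function are hypothetical, purely for illustration:

```python
# A hypothetical illustration of the early warning that earned value tracking
# gives. All numbers below are made up for the example.
def estimate_at_completion(budget, actual_cost, percent_complete):
    """Project total cost, assuming the spending efficiency so far continues."""
    earned_value = budget * percent_complete   # value of the work actually done
    cpi = earned_value / actual_cost           # cost performance index
    return budget / cpi                        # projected cost at completion

# A quarter of the way in: we've spent 300 of a 1000 budget but are only
# 25% complete, so the projection shows an overrun.
eac = estimate_at_completion(1000, 300, 0.25)  # projects a total cost of 1200
```

A projection like that, made at the 25% mark, is what justifies starting to simplify the remaining three quarters of the project immediately.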

We’re using a variant of this strategy on my current project.  We built the highly-used parts of the system first, taking a lot of care with their appearance and usability. The second half of the project consists of functionality that is much less commonly used, so here usability and appearance are much less important. (If it takes a user a few extra minutes to do something, it doesn’t really matter if they only do that thing a few times each year.) So for this second half of the project, we have consciously shifted our design approach away from ease of use and towards simplicity of implementation.  Because we are using earned value-like tracking, we can justify this change of approach to users and management.

Strategy 4: Split each feature (or user story) into essential and nice-to-have parts

This is a refinement of the previous strategy. Right from the start of the project, you split features/user stories into two pieces: an essential minimum piece, which you implement early, and nice-to-have embellishments (such as advanced data validation or visual styling) which you defer to the tail end of the project.  If you run out of time, you drop some of the embellishments from the tail, and still deliver a working system with the full scope of capability/functionality.

Strategy 5: Make multiple passes over each story, doing the basics first and then improving it later

Similar to strategy 4, but you may “visit” a given user story 3 or more times within the project, instead of just twice as in Strategy 4.  I like this in theory, but in practice I think it’s too hard to use earned value or burn-chart tracking with this strategy.  Whereas in strategy 4, I feel that earned value remains (just) feasible.

[Tim Wright’s comment,  below, gives more details on how this strategy can be done]

Summary

The last three strategies are all variations on a theme. Within a single project, you may use several of them, and maybe also resort to strategy 2 for a few user stories.

I recently heard the phrase “value management” to describe the work of deciding not only what to build, but also how simply or thoroughly to build it.  The aim is to meet the business goals with the optimal expenditure of effort – i.e. do what needs to be done, without overspending on superfluous details.

Further Reading

All of the following are excellent.

Alistair Cockburn’s Trim the Tail.  A rich explanation of the theory and practice of strategies 4 and 5, with significant additional benefits in risk management.

Alistair’s list of related strategies.

Jeff Patton’s concept of Feature Thinning (aka Managing Scale): Jeff’s a leading practitioner of Strategies 3, 4 & 5. See: Finish on time by managing scale, Difference between incremental and iterative, and Accurate estimation = red herring.  Jeff has often used these techniques on fixed-scope, fixed-price projects.

Description from an agile company called Atomic Object, of how they operate with fixed budget and controlled (rather than fixed) scope: here and here.

Martin Fowler’s Scope Limbering

The opening section of my own agile earned value (pdf) has more info on why fixed scope is a valid option in agile.

I’ve also posted a summary of estimation tips for agile projects.

Faith, doubt and evidence – agile as religion versus agile as social science

My first exposure to Agile was in the early 2000s, when a friend lent me the Schwaber and Beedle Scrum book.  It kept talking about mysterious “emergent properties”, which were apparently good things that would inevitably happen when you followed the process.  But it never really explained why those properties would emerge and, to my mind, it never gave satisfactory evidence that they really would emerge.

That was my first taste of agile as a “belief-based movement”.  I have no problem with the book itself. As I explain below, its approach is ideal for beginners. However, in asking readers to take certain things on faith it set a dangerous precedent.  I vividly remember a presentation, again on Scrum, which I paid to attend some years later.  The audience was asked to accept some truly remarkable claims based only on the word of the presenter. It felt like something which no professional event should ever resemble: the recruiting session of a strange religious cult.  The kind where the charismatic speaker sways all listeners with his eloquence and fervour… then gets arrested three years later for dodgy behaviour.

Agile as a belief system

At its worst and most cult-like extremes, belief-based agile can be dismissed out-of-hand, and rightly so.  But at the milder end of the scale, such as the Schwaber-Beedle book, it serves a valid purpose.  It presents a clear and compelling introduction to agile, which is great for people who have never heard about it before.  It worked for me.  Our team tried what we read in the book, and we loved it. It was great to have such an introduction to agile.  Without that simplicity, we might never have got started.

But then we started to struggle.  Our projects didn’t seem to fit the book.  We had constraints which the book didn’t even mention.  We began to suspect Scrum was good for other people but not quite right for us.

But what bothered me more was that I still didn’t understand it.  What the heck are these so-called emergent properties anyway? And how do you make one?

Agile as (social) science

It wasn’t until I read Alistair Cockburn’s book Crystal Clear that things started to make sense.  The book is not about Scrum; it’s about Alistair’s rather different formulation of agile.  And yet, it was only by reading Crystal Clear that I actually understood Scrum.  Although it uses different terminology, Crystal Clear filled in the missing pieces and answered my questions about Scrum’s “emergent properties”.

But Crystal Clear did three things which were even more important:

  1. It showed that agile can be based on evidence and science.  I’m not talking here about the popular misconception of science, in which we have blind faith in white-coated experts, but about real science where even the experts question their own judgement and follow the evidence wherever it may lead.  Crystal Clear grew out of Alistair’s PhD research (which in turn grew from his process work at IBM and elsewhere). Based on observations of real-world projects, Alistair formed hypotheses about the factors that led to success in software development.  Then he painfully discarded one hypothesis after another as the evidence demanded.  He eventually arrived at the family of methodologies known as Crystal, of which Crystal Clear is the version applicable to small teams.  (Alistair’s development of Crystal preceded the founding of the agile movement.  He was later one of the 17 co-creators of the Agile Manifesto.)
  2. It showed that agile should be flexible, to suit the nature of humans.  It is impossible to read Crystal and not be struck by how human-friendly it is;  how it allows and requires variation to suit your circumstances; and how one-size-fits-all rules are the very antithesis of agile.
  3. It explained that simple belief-based formulations of agile are the starting point, not the destination.  The common rigid, belief-based formulation of agile is an ideal learning tool, but it’s only the starting point on the learning journey, not the end.

Conclusion

Today, I happily use Scrum – informed by and infused with a healthy dose of Crystal Clear.  If you are not familiar with Crystal Clear and Alistair’s work in general, you owe it to yourself and your project to learn.

Become practiced in making changes

Agile has always embraced change.  But businesses are still tempted to minimise those changes, by trying to get things “right” first time.

On my current project, we’ve realised that minimising change is not necessarily a good thing. Why? After the system goes live, we must be able to change it.  (In our opinion, an unmaintainable system would be a failure.)  So… if we are going to successfully change the system after go-live, then we must practice making changes before go-live.  Without such practice, how would we know whether the architecture is maintainable?  How would we know whether the team has the necessary skills and confidence to safely change the code?

So each mistake, dead-end, or change of direction has a silver lining: it’s a chance for us to practice the essential skill of changing our code.  We need these mistakes to happen, so we can practice correcting them.

Pair Programming Wrap-Up

I previously blogged about the nature of expertise, and resulting questions about the efficiency of expert-expert pairings.  Here I’d like to tidy up some loose ends with regard to the relationship between expertise and pair programming.

How does expertise develop?

Kahneman’s book points out that expertise develops, virtually inevitably, when a practitioner has long experience in a field where both of the following apply:

  1. There is indeed some correlation between decisions and outcomes. (Conversely, no amount of practice will ever give you expertise in predicting the next outcome on a roulette wheel!) In the field of software development, our design decisions do indeed influence the outcomes of correctness, performance and maintainability.
  2. Practitioners receive feedback on their decisions (so that their inner neural network can learn). In other words, we make decisions, and then actually see how those decisions affect the correctness, performance and maintainability of our software. As long as we maintain an open-minded willingness to learn, and we don’t change jobs too often, we will receive this feedback and therefore expertise will develop.

A case in favour of pairing

While I’ve questioned the efficiency of expert-expert pairings, the science of expertise suggests a possible benefit of expert-novice pairings – a benefit which I have not seen described elsewhere. I suspect that such pairing may help the novice to develop expertise more quickly. Not because of what the expert says (as implied by some other descriptions of expert-novice pairings), but because of what the expert does. Remember that much of what we call “expertise” takes place unconsciously. So the expert couldn’t put it into words even if they tried. But, perhaps, there is some value in less experienced programmers seeing how the expert works. Maybe that will give the less-experienced member of the pair a chance to pick up on things which the expert could never teach verbally.

A related point is that much of programming is about how the software is produced. Yes, you can learn from reading other people’s code after it’s finished (and we should do this more often) but I think we might learn more from observing how that code is produced – What kind of things did the author try as they worked? Where did they backtrack? How did unit tests guide their work? What are their special Google search tricks for finding relevant information?

(Note: don’t forget the conventional advice that “pair programming is best done by peers”. An expert-novice pairing is arguably not at all the same thing as regular pair programming.  My point here is that it might be a valuable tool for transferring the expert’s “automatic” (“non-conscious”) skills.  I would see it being used in moderation, rather than for 6 or 8 hours a day like regular pairing.)

The mathematics of paired experts

I discussed my previous post with a friend before writing it. We agreed that experts can’t discuss the first part of their thinking (the automatic bit), and that discussing the second part (the effortful bit) takes longer because the experts have to slow down to the speed of human speech.  However my friend suggested that the collaboration will still pay off, due to improved quality of the final decision.

That could be a very strong argument if feedback from a peer was the only kind of feedback available.  But, it’s not.  Experts also get feedback from the situation, via unit tests etc.

Example: An expert is looking for the cause of a defect.  The bug is reproducible, but the expert doesn’t know what causes it.  Hunting for the bug is like a search.  At each step in the process, the expert chooses a “move” that will narrow the range of possible causes.  Moves may include setting particular breakpoints, looking for patterns in the inputs that cause the bug, stepping through the code, adding additional unit tests, and so on.  Each move narrows down the search space, until eventually the defect is found.  The better the move, the more it narrows the search space.  A really good first move might eliminate 90% of possible causes, leaving only 10% remaining.  A less efficient move may eliminate only 50% of the possible causes, leaving the other 50% still to be searched.

Question: if the expert is paired with another, will the pair choose substantially better moves?  Yes, they probably will.  After some discussion, the pair may choose a “90% move” (eliminating 90% of possible causes) while an expert working alone might only come up with a “70% move”.

But what about the time cost of the pair’s discussion?  In the time that the pair spend discussing one move, a solo expert might try several. Imagine that a solo expert can make two moves in the time it takes a pair to discuss one.  If the solo expert tries one 70% move, followed by another, they’ll cut the search space down to 9% of the original space. [ (100% – 70%)^2 =  30%^2 = 9%].   That’s about the same amount of progress as the pair’s “90% move”.   So the solo expert has produced the same result as the pair, in the same amount of time, but for only half the personnel cost.  If the solo expert’s cycle time is even shorter still, allowing them to try 3 or 4 things in the time that the pair tries one, the solo expert will outperform the pair in both cost and elapsed time.
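The arithmetic above can be sketched in a few lines of Python.  The function and the percentages are just illustrations of the reasoning, not data:

```python
# The search-space arithmetic made explicit: a "move" that eliminates
# fraction f of the possible causes leaves (1 - f) of the search space.
def remaining_space(move_fractions):
    """Fraction of the original search space left after a sequence of moves."""
    remaining = 1.0
    for f in move_fractions:
        remaining *= (1.0 - f)
    return remaining

solo = remaining_space([0.7, 0.7])  # two 70% moves leave 9% of the space
pair = remaining_space([0.9])       # one 90% move leaves 10% of the space
# Roughly the same progress in the same elapsed time, at half the personnel cost.
```

The comparison obviously hinges on the assumed cycle times and move quality; changing those assumptions changes the outcome, which is exactly the point of the paragraph above.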

In summary I suggest that, by using a shorter cycle time, a solo expert might outperform a pair of experts.  This is a special case of the general principle that “starting to iterate” often beats “improving the plan”.

So how should experts interact?

If experts avoid pair programming, will they end up working completely alone?  No.  I think there are at least two options worth considering:

Option 1: Expert “pair preview”

I’m suggesting this based on a real life example.  I’m calling it “preview” rather than “review” because it comes before the code is written.  I’ll anonymise it by changing the names to “Alice” and “Bob”.

Alice was (and still is) an awesome programmer, whose expertise I greatly respect.  She was working alone on a difficult part of the system.  Once Alice had an initial design in mind, she ran it by Bob for review.  So far, so good.  But the review is where things came unstuck.  Alice and Bob didn’t know Kahneman’s concept of effortful versus automatic thought.  Alice described her proposed design to Bob.  Bob understood it, but had a hunch that an alternative design would be simpler.  Bob didn’t know it, but his half-formed alternative design was generated by the automatic “part” of his mind.  Because it was automatically-generated, it seemed like nothing but a vague hunch.  Consequently neither Bob nor Alice gave it the attention it deserved.

In hindsight, they should have recognised Bob’s hunch for what it was: an instinctive product of Bob’s long expertise, and therefore something worth looking into.  Perhaps Alice should have spent a couple of days spiking Bob’s idea.  That would have tested the hunch, and by having Alice do the spiking (rather than Bob, who had suggested it), we would have leveraged Alice’s existing detailed knowledge of the problem space.  (Having Alice do the spike would have also made it psychologically easier for her to embrace the new idea, if it did indeed prove to be the best.)

Unfortunately, no such spiking happened, and the two programmers did not manage to have an effective discussion of the problem.  Alice spent about 5 weeks implementing her proposed design.  Some years later, it proved inadequate to the growing needs of the system, at which point it was re-implemented along the lines of Bob’s hunch – in only 2.5 days.

Option 2: “Expert Escalates”

Even experts get stuck.  Part of being a responsible expert is to realise when you’re not making productive progress, and seek out someone to bounce ideas off.  Perhaps they will see something that you haven’t, or spark some new ideas.

What about the evidence?

Two commenters on my previous post suggested that my concerns were all very well, but couldn’t be valid because the research shows that pair programming works.  I promised to respond with some comments on “the evidence”.

It turns out that the evidence is rather more mixed than you might suppose. Here’s a quote from the abstract of a meta-analysis done by the excellent Simula Research Laboratory.

…between-study variance is significant, and there are signs of publication bias among published studies on pair programming.  A more detailed examination of the evidence suggests that pair programming is faster than solo programming when programming task complexity is low and yields code solutions of higher quality when task complexity is high. The higher quality for complex tasks comes at a price of considerably greater effort
— from here, or alternate link [emphasis added]

The finding, that paired effort for complex tasks is much greater than solo effort, appears consistent with my above concerns and reasoning about paired experts.

Expertise versus Pair Programming

Let’s start this post with a thought experiment.  Not in software development, but in playing chess.

Imagine two novice chess players, working as a team. (We’ll assume their opponent is a computer, so it can’t overhear them talk.)  Our two novices will benefit greatly from their collaboration.  They’ll discuss all their thinking – everything from possible moves, to correcting each other’s mistakes, to “can you remind me how a knight moves?”.  Working together like this they will make fewer mistakes, and generate better moves than each would have done alone.

But what if we paired two chess masters?  Scientific research proves that the thinking of experts is different. Much of it happens as automatic pattern recognition rather than conscious reasoning.  When a chess master looks at a chess position, (good) possible moves spring to mind immediately.  As described in my previous post, these “candidate moves” are generated directly by the brain’s underlying neural network.  Neural networks can’t explain their own reasoning, so the master doesn’t consciously know why those moves came to mind.  Of course, the moves are indeed based on the master’s long experience, but the mapping from prior experience to current ideas is not open to inspection by the conscious mind.


Compliance to Spec Considered Unprofessional

In keeping with the theme of this blog, which is “the neglected essentials of software development”, I’d like to share something I’ve learned recently.  It’s about how professional people in other fields think – people like architects, town planners and doctors.  As they work, they engage in an on-going

…conversation with the situation.

What a lovely phrase.  It comes from the classic book The Reflective Practitioner: How Professionals Think In Action by Donald Schön. Schön was an influential professor at MIT.  His book was published in 1983, so none of his case studies come from IT, and yet he finds something that agile software developers will recognise.  In all the professions he studied, professionals relied heavily on feedback and reflection (i.e. being observant and thoughtful, and changing your mind as necessary).  He found that this approach is not just the normal way for professionals to work – it’s also the best way.

Here are some favourite quotes from the book:

At the same time that the enquirer [i.e. professional] tries to shape the situation…, he must hold himself open to the situation’s back-talk. He must be willing to enter into new confusions and uncertainties.  Hence, he must adopt a kind of double vision.  He must act in accordance with the view he has adopted, but he must realize that he can always break it open [change it] later, indeed must break it open later in order to make new sense of his transaction with the situation.

… he recognizes that the situation, having a life of its own distinct from his intentions, may foil his [plans] and reveal new meanings.

Implications

Schön’s work sheds light on some key issues in software development:

Issue 1 – Contracts for Agile Projects

Too many contracts emphasise doing “what you said you were going to do”, rather than “what turns out to be best”.

Issue 2 – BDUF

Consider the agile aversion to Big Design Up Front (BDUF).  BDUF is not an on-going conversation with the situation.  It’s just a period of thinking, followed by an attempt to give the situation a lecture!

Issue 3 – Handing over “designs”

There are implications for passing work from one person to another.  Some years ago, as a software architect, I had difficulty working with a more junior developer.  I now realise what was going on.  He thought I was handing over a description of what he should build; but I thought I was handing over an incomplete conversation with the situation.

It’s difficult to hand over an in-flight conversation, for someone else to continue on your behalf.  And yet, when a senior hands over a so-called “design” to a junior, that is exactly what’s happening:

Here’s this conversation I’ve been having with the problem. Here’s a rough outline of what I said and how the situation replied.  So I think we might proceed as follows…<outline of suggested solution goes here>.  But, don’t just take my word for it.  Continue the conversation, thinking and altering course when necessary, as I would have done if I had continued the conversation myself. Finally, if you need to change course, but don’t know how, ask me and we’ll figure it out together.”

That’s the verbose way to say it.  Unfortunately, we often cut corners and say, “Here, build this”.  Junior programmers don’t know that “Here, build this” really means, “Please continue my conversation.”  In fact lots of senior programmers don’t know that either – I know I didn’t.  It was only when I read Schön’s book that I finally had the mental tools to really understand the situation.

Issue 4 – “BA” or “Designer”

Finally, the conversation metaphor tells us about the role of business analysts, on both traditional and agile projects.  At the recent #10yrsagile event in Utah, Jeff Patton and others commented on the difference between “BAs” and “designers”. The “designer” model directly supports “conversations with the situation”, since the person in the designer role clearly undertakes such dialogue with the situation.  But in the BA model, if interaction between the BA and developer is either (i) infrequent, (ii) asynchronous or (iii) one-way, then the conversation with the situation breaks down.

Example

Here is a detailed, real example of a conversation with the situation while I was coding.

Support from Other Authors

Several other authors have shared similar findings.  For instance, in his book “The Design of Design” Fred Brooks quotes the following diagram (originally from Maher, Poon and Boulanger):

[crocodile diagram: the problem space and the solution space co-evolving over time]
In this diagram, the problem and solution both shape each other, as we move left-to-right through time.  (I remember it as the “crocodile diagram”, since the zig-zags look like the teeth of a crudely-drawn crocodile.)

Jeff Patton eloquently explains the rationale of building something different from what you initially expected.

And, of course, no discussion of these topics would be complete without referring to the classic paper from Jack Reeves. Schön’s work adds weight to his arguments.

Conclusion

When we put all this together, we are led to a surprising conclusion regarding specifications (and their agile equivalents, such as acceptance tests or criteria that are prepared in advance of coding).

If you produce a system that exactly matches its pre-written specification, you have acted unprofessionally.
Either you didn’t maintain a conversation with the situation after you got the spec; or you kept on talking with the situation, but you ignored its advice whenever it disagreed with the spec.

Regrettably, we are all encouraged to make this mistake.  On traditional projects, there may be implicit or explicit pressure to minimise the number of change requests.  On agile projects, there may be pressure to keep the velocity up, and some developers may respond by doing just enough to meet the pre-written acceptance tests, and quite deliberately “not asking too many questions”.  In so doing, they rob the project of their natural talent for “looking around and taking initiative”.  Finally, on projects of all kinds, there is often insufficient time budgeted for the Validation (fitness to purpose) part of “Verification and Validation”.
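As a hypothetical illustration (the discount scenario and the names below are invented here, not taken from any project in this post), a pre-written acceptance test can pass while the conversation with the situation has quietly stopped:

```python
# Pre-written acceptance criterion: "orders over $100 get a 10% discount".
# Written before coding, it freezes one early understanding of the rule.

def discount(order_total):
    """Apply the discount exactly as the pre-written spec describes."""
    if order_total > 100:
        return round(order_total * 0.10, 2)
    return 0.0

# The acceptance tests pass, so velocity looks fine...
assert discount(150.00) == 15.00
assert discount(80.00) == 0.0

# ...but "conversing with the situation" would surface questions the spec
# never answered: does the discount apply at exactly $100? Before or after
# tax? Per order, or per customer per month? Passing the pre-written tests
# is verification; asking these questions is validation.
```

The point is not that the code is wrong, but that meeting the spec exactly, without asking further questions, is the corner-cutting this section describes.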

I hope that, with increasing recognition of Schön’s work, will come the realisation that true professionals do not comply with a spec, they surpass it.

10 Years Agile – a perspective from outside the room

Recently Alistair Cockburn invited 32 agile practitioners to join him at the Snowbird ski resort, for a retrospective on the agile movement as a whole.  I followed the event via Twitter, sitting under a sun umbrella here in the southern hemisphere, reading the tweets from the snow.

The event was both a celebration of the first 10 years of agile, and an attempt to answer three questions which Alistair circulated before the meeting:

  1. What problems in software or product development have we solved (and therefore should not simply keep re-solving)?
  2. What problems are fundamentally unsolvable (so therefore we should not keep trying to “solve” them)?
  3. What problems can we sensibly address – problems that we can mitigate with money, effort or innovation? (These, therefore, are the problems we should set our attention to next.)

There was clearly much to celebrate from the past 10 years of agile, but when it came to the future, there seemed to be concerns in several areas: the slowness with which good practices are adopted in some quarters; the risk of losing momentum now that there is no longer a common “enemy” (as heavy process was, when agile began); and a perceived lack of strength, clarity and consensus in the group’s answer to Alistair’s third question – “what next?”

Reflecting on those concerns, here are my “ah-has” from my virtual participation:

Gradual Change is Normal (Part 1)

Mike Cohn wrote a great post likening agile to Object Orientation.  Eventually, OO was taken for granted, and people stopped talking about it.  He suggests that agile will go the same way, and I agree.

However, these things can take a really, really long time. Here’s an example that extends Mike’s OO one: By the late 90s, OO had blossomed so much that the premier OO conference, OOPSLA, moved away from its original goal of making OO practical and understandable.  OOPSLA’s work was done.  But, at the turn of the century, there were still a lot of people who weren’t using OO.  Legions of VB6 developers making business apps on the Microsoft platform were not using OO at all.  Slowly, over the decade that followed, OO seeped into the Microsoft development community.  Languages were replaced and improved, and starting in about 2007 we saw wide adoption of Object-Relational Mapping and a variant of Smalltalk’s decades-old Model-View-Presenter pattern.  (Yes, Microsoft played a role in the timing, but so did changing sentiment within the developer community.)

There are two key things to note from this example:

  • We are currently seeing (real) OO go mainstream in the Microsoft community – about a decade after it went mainstream in most other parts of the development community.
  • This happened without any industry association driving it.  There was no equivalent to the Agile Alliance, cajoling Microsoft developers into using Objects.  There was no need for such a body, because OO had become adequately embedded in the industry’s consciousness back in the 90s.  It was like a virus that had infected enough of the population that it could no longer be eliminated.  Slowly, over the decade that followed, it reached the rest.

So don’t sweat it. Agile isn’t going away.  It will continue to influence people, and to be embraced by new users, long after the early adopters have taken it for granted and stopped talking about it.

Gradual Change is Normal (Part 2)

In a post on his delightfully-named blog, Jonathan House wrote the following after the Snowbird event:

Does Agile really make a difference? – Not so much a question that showed up at the conference, but one that kept running around in my head, kicking over garbage cans and spray-painting the cat. … It’s clear we’ve made excellent progress over the past 10 years, but it’s still not so clear to me why businesses aren’t beating down the door to really adopt Agile throughout the enterprise …

Jonathan, I’m fairly sure this is not a reflection on agile.  It is a reflection on business.  Business leaders often fail to follow advice, even when they actually agree that the advice is good.  I don’t say this out of personal dissatisfaction; it is a well-researched fact.  There’s even a book on the subject, called The Knowing-Doing Gap by Jeff Pfeffer and Bob Sutton.  (I mentioned Pfeffer and Sutton in my Agile Roots talk.  As an agilist and geek, I find their work extremely credible and relevant.)

I am convinced that you are observing not a failure in agile, but a significant problem in business in general.

(Having said that, it wouldn’t hurt if we in the agile community familiarized ourselves with how the more self-aware sections of the business community already understand agile, just with different words and under different names.  It will be much more polite, and therefore more likely to succeed, if we listen before we lecture.)

Diversity is Good

I think some people who were following the Snowbird event may have been disappointed at an apparent lack of consensus.  I know I was, at first.  But then I remembered that diversity is good.  Think of the importance of genetic diversity in populations of animals; remember how the messy diversity of capitalism out-performed Soviet central planning; and think back to the diversity of the original 17 signatories to the Agile Manifesto – a diversity which was alive and well both before and after they signed the Manifesto.

By having a range of people, with differing interests and priorities, the agile community is much more likely to successfully address the many and varied challenges of the next decade.

I think the point with diversity is not just to accept it, but to actually encourage it.  Good agile, particularly by experienced practitioners, will be situationally specific.  I suspect this point has not fully seeped into the overall consciousness of the wider agile community.  Perhaps one of the tasks for the next 10 years is to promote the acceptance of “situationally specific agile”, and to help people learn (i.e. practice) how to do it.

There is Plenty to Do

Several people pointed out that agile lacks a clear “enemy” now.  When agile started, heavyweight process was the enemy. But today there is no clearly-identifiable external enemy.  So what is the point of the agile movement now?

David Anderson seemed to reflect the mood of the group when he wrote:

The mission now is incremental improvement. It’s evolution, education and improving levels of maturity, rather than a revolution. The enemy is now within. The enemy is as Joshua Kerievsky put it “all the crap I see out there” despite 10 years of Agile methods.

David Anderson went on to write

I don’t believe the Agile movement knows how to operate without something to revolt against. Agile came, it served its purpose, it had a positive effect. What next? Perhaps it is time to move on?

Is there really nothing more to do?  Nothing more than combating the enemies within: complacency and poor execution?  I can see the logic of what Joshua and David (and others) are saying, but please, please don’t let it be true!  There is so much more to do.  Here are just a few topics that come to mind:

  • The spread of agile beyond software (as per Jonathan House’s comment above, and others on the day)
  • Jeff Patton’s story mapping and feature thinning – two wonderful techniques which deserve to become much more prominent in agile’s second decade
  • Advancement of knowledge/skill in Ri-level/situationally-specific agile
  • Philippe Kruchten’s herd of elephants (elaborated/commented on here by Scott Ambler)
  • Really paying attention to “Individuals and Interactions”.  For 10 years we’ve been getting this first point of the Agile Manifesto round the wrong way!!  So you can’t tell me there’s nothing left to do ;-)  James Coplien phrased it well in his outstanding reflection on the first 10 years: “We have test scripts and jUnit trumping individuals and interactions”.
  • Sharing our failures as well as our successes, as several people mentioned on the day
  • Contracts for agile projects – currently, only a partially-solved problem at best.

We’ve only been doing knowledge work in teams for a short part of human history.  The previous millennia did not prepare us well for the last few decades of complex, abstract co-operation in non-trivial teams.  So of course there is still more to learn.

Summary

We need to keep thinking, keep talking and keep learning.

We need to keep agile’s sense of humanity – of valuing, respecting and caring for people in the workplace.

We need to retain and treasure the global and local communities of practitioners, which the agile movement has created.

We need to keep all these things, even though there is no longer a common enemy to unite us.  We need to learn to operate in an agile community that doesn’t just accept, but actively promotes, its own diversity.

I recently stumbled across a quote from the French poet Stéphane Mallarmé:

to define is to kill, to suggest is to create

Let us not define our future.  Let’s suggest it.