August 16, 2015 | John Rusk

Recently I wrote about Target-Driven Agile. Now, I’d like to outline what a Target-Driven agile project actually looks like. Of course, as discussed previously, there are many possible variations. This is the way I like to do it. (Note: these are just the key steps/phases in a Target-Driven project. In my next post, I’ll outline some thoughts on how contracts can support these steps.)

Step 1: Reach an understanding of the scope

This is about getting a broad, high-level understanding of what we are, and are not, aiming to accomplish in the project.

If there’s someone who already knows the business very well, and is both trusted and authorized to make all necessary decisions, then just ask them. But usually it’s not that easy. It’s rare to find one person who fits that brief. Perhaps the needs of the business are so diverse that no one person can adequately, and fairly, represent them all. Perhaps there are several strong voices within the business, and they’re all saying different things! Perhaps no-one has separated the “must-haves” from the “nice-to-haves”. So for most projects it’s necessary to work with various people to craft a high-level description of scope that’s broadly agreed upon. This usually requires the analysis and facilitation skills of a Business Analyst (or similar), but the exact approach will depend on the project and organisation.

A key point here is that we’re not aiming for a detailed waterfall-style requirements phase. For an agile project this step should be much briefer. For instance, the FDD variety of agile calls this step “Develop an Overall Model” and expects it to take about 2 weeks of effort for every 6 months of eventual software development time. Personally, I’ve noticed this stage often takes longer – more like 1 month for every 6 months of ensuing development. I’m comfortable with that 1:6 ratio, but if it were to take much longer than that, I’d fear it degenerating into a waterfall-style requirements phase.

By the way, the people who do this work will gain a lot of useful knowledge about the business, so it’s a good idea if they remain as members of the team during the rest of the project.

Step 2: Document the scope as a simple list

You need to write down the output of Step 1. If you’re using User Stories, you can list the names of the epic (coarse-grained) user stories. Otherwise you might list “features”, again with just their names and a few other details. Some useful tips:

- This should be a list, not a document. Most agile teams store these lists in some kind of purpose-built tool (there are dozens of different tools – free, cheap and expensive). For the smallest projects, I’ve seen a list in Excel work surprisingly well. (A small illustrative sketch of such a list follows after these tips.)
- The number of items in your list depends on the size and nature of your project. As far as I can tell, you typically end up with at least 20 on even a small project. Big projects that I’ve seen tend to have a few hundred. I suspect the list becomes hard to understand once there’s more than about 200, and that teams with big projects probably move to writing coarser-grained “bigger” epic stories to keep the count under that threshold. (Each epic will get broken into smaller pieces later in the project, on a just-in-time basis.)
- You might find it useful to group or categorize the items into broader “themes”. Story Maps are one way to do this. Some agile tools are flexible enough to let you choose your own theming approach and visually group stories by those themes.
- It’s generally not worthwhile to document dependencies between the individual items. For justification of this view, see an FDD perspective on dependencies (see 5th reply in message thread), and the general perspective towards the end of this page.
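To make this concrete, here is a minimal sketch of what such a list might look like if you kept it as simple structured data rather than in a dedicated tool. The epic names, themes and fields are invented for illustration only; they are not from any real project.

```python
from collections import defaultdict

# A purely illustrative scope list: one entry per epic/feature.
# Real projects would usually keep this in an agile tracking tool or a spreadsheet.
scope = [
    {"epic": "Customer self-registration", "theme": "Onboarding"},
    {"epic": "Search existing accounts",   "theme": "Account management"},
    {"epic": "Monthly statement export",   "theme": "Reporting"},
    # ... typically at least 20 entries, even on a small project
]

# Group the epics by theme for a quick overview of the scope.
by_theme = defaultdict(list)
for item in scope:
    by_theme[item["theme"]].append(item["epic"])

for theme, epics in by_theme.items():
    print(f"{theme}: {len(epics)} epic(s)")
```

The point is only that each item needs a name (and perhaps a theme) at this stage; everything else can wait until the epic is broken down later, just in time.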
Step 3: Understand your likely architectural approach

You don’t have to design everything up front, but you need a basic idea of your general architectural direction before you go much further. To illustrate this point, a team using .NET and SQL Server might set a direction that looks something like this:

- .NET application over SQL Server
- ORM and no stored procs
- ASP.NET MVC for the user interface
- Bootstrap for styling and layout
- A list of key external systems you intend to integrate with, and the technological approach for integration with each (SOAP, REST, a Message Bus, …)
- … plus a few more details about the internal structure of your app. E.g. as a general rule, where will your core business logic go? How will the UI layer (ASP.NET MVC in this example) connect to that logic?

Since you’re agile, you may change some of these later. But for now, you need some idea of where you’re heading, in order to make progress on the next steps. Sometimes it’s hard to settle on a direction, especially if there’s unfamiliar technology involved. So you can build something here. Maybe even build the same something, in two different ways, and compare.

Step 4: Assign sizing “points” to all items in the scope

This is the usual agile practice of assigning relative sizes to each user story/feature. Because it’s common agile practice, I won’t include the details here – except to say that for a Target-Driven project you need to assign points up-front to the entire scope, i.e. everything that’s on the list we made at Step 2.

Why not just assign points to some of them, and do the rest later? Because:

- If part of your project is unsized, you can’t make any useful predictions about how long you’ll take to finish it. On a Target-Driven project, we want to make predictions of total cost and duration.
- Sizing everything up front has the advantage that it can be easier to size things consistently. Why? Because at this early stage you have the same level of ignorance about everything! Contrast this with the alternative approach of sizing some stories half way through the project. Half way through, you know the completed stories very well, but the future stories poorly. This makes it more difficult to size future stories correctly relative to the past ones, because you know how difficult the past ones were, but you don’t yet know about the future ones. This discrepancy of understanding can trick you into under-estimating the difficulty of the future stories. Sizing everything at the start solves this problem, because they’re all future stories.

Remember that what we need here is relative sizes, not absolute. Furthermore, we only need the relative sizing to be right “on average”. If some future sprint has, say, 10 stories in it, it doesn’t matter if it turns out that some should have had more points than we gave them and some should have had fewer, as long as the errors approximately cancel out across the sprint as a whole – i.e. as long as the sprint as a whole has about the right number of points.

Since this exercise is only about setting relative sizes, it doesn’t necessarily need to be an onerous task. On one project I worked on, with a budget in the low millions of dollars, I don’t recall there being any more than about 25 person-hours spent assigning these points. But we were fortunate to have two very experienced people doing the sizing: one who knew the scope very well (since he’d worked on the previous steps) and one who knew the technology very well. In your case, you might need more than two people, and they might want more time. But if they ask for lots more time… remind them that you’re only asking them for relative sizes. You’re not asking them to actually say how long each feature will take to develop. You’re just asking for some careful(ish) educated guesses, such that their average “30 point” story will indeed turn out to be about 3 times as much work as their average “10 point” story, and so on.
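To show why sizing the whole list matters for prediction, here is a minimal sketch. The point values, velocity and sprint length are invented for illustration; they are not figures from the project described above.

```python
# Illustrative only: point values and team velocity are invented.
# Every item on the Step 2 list gets a relative size in points.
sized_scope = {
    "Customer self-registration": 30,
    "Search existing accounts":   10,
    "Monthly statement export":   20,
    # ... every remaining epic must be sized too, or no forecast is possible
}

total_points = sum(sized_scope.values())   # 60 in this toy example

# Suppose the team's measured (or initially assumed) velocity is
# 15 points per two-week sprint.
velocity_per_sprint = 15
sprints_remaining = total_points / velocity_per_sprint
print(f"Roughly {sprints_remaining:.1f} sprints of work remain")

# Relative sizing only needs to be right "on average": if one 10-point story
# turns out to be 14 points of effort and another turns out to be 6, the
# sprint-level total (and therefore the forecast) is still about right.
```

Nothing here requires the sizes to be absolute estimates of hours; the forecast only depends on the total points and the observed velocity, both expressed in the same relative units.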
Step 5: Build incrementally, with key safety practices

Whether agile or waterfall, all Target-Driven projects make predictions about duration, cost and the scope that will actually be delivered. Predictions are prone to error, so:

- Waterfall projects attempt to reduce the likelihood of bad predictions, through planning and signoffs.
- Agile projects attempt to reduce the impact of bad predictions, through fast detection and response.

In my experience, the agile approach works best. But you have to run the project well. You have to put in place key safety practices that allow you to detect and fix issues rapidly. Here are some of my favourites:

Detection techniques

- Transparent, objective tracking of progress and cost. This is probably the most important detection tool. I won’t write about it here, because I have a dozen other pages on my site about it, under the heading “Earned Value”. Here’s the introductory one, and here are very important details for anyone using it on Target-Driven agile. (A simplified sketch of the kind of arithmetic involved follows after this list.)
- Daily Standups. This is the best-known agile technique for detecting whether there are any surprises, and whether anyone needs help.
- User involvement “inside the sprint”. I’m a fan of the team showing early versions of each feature to the users, then making corrections and improvements based on that feedback – all inside the sprint. This gets the feedback as soon as possible – often when the feature is not even finished but just barely demo-able. This allows the team to quickly respond to what the users say, with minimal wastage and rework. It works well in 4-week sprints, but is probably almost impossible in 1-week sprints. Alistair Cockburn said some great stuff on this – how 1-week sprints require you to get the user feedback in a following sprint – but I can’t find the reference. In short, there are pros and cons of different sprint lengths.
- “Formal” testing soon after the sprint. In many organisations, formal “User Acceptance Testing” (UAT) or “Business Acceptance Testing” (BAT) is conducted with a wider group of users before go-live. If your agile project is going live in a “big bang” (e.g. it’s replacing an existing system on a specific date) you might be tempted to run this UAT/BAT near the end of the project. But I think it’s much safer to run many smaller rounds of formal testing during the project. Where I work, we tried this by doing formal BAT of each sprint’s output, in the month following completion of that sprint. It worked well and gave us useful information much earlier.
- Regular deployments. Even if you’re going live in a big bang, you should still “deploy” to a non-production environment at least once every 1 to 3 months (if not much more often). Deploying flushes out issues, and proves the system’s stability (or otherwise!).
- Risk smoothing. Don’t leave risky stuff to the end! Smooth your risks out over the lifetime of the project, with a bias to flushing them out sooner rather than later. As we know, the tail end of a project already has plenty of risks relating to go-live, requests for scope change and other unexpected events. So don’t also try to tackle technical risks there. Move the technically-difficult stories earlier. Don’t necessarily put them right at the very start – there you probably have enough risk just forming, storming and norming the team, and in standing up the architecture. So consider spreading the technically difficult implementation work between about the 15% complete and 65% complete stages of your project, with any potential show-stoppers towards the front of that range. (See Alistair Cockburn’s “Trimming the Tail” for his excellent take on this.)
- Retrospectives. It’s now common practice, at the end of each iteration in an agile project, for the team to get together and reflect on how the iteration went. What can they learn from it? Are there any niggling concerns? Humans are incredibly perceptive. As long as you can create a culture where people are comfortable airing concerns and bad news, you will learn a lot from retrospectives.
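I won’t reproduce the Earned Value material referenced above. Purely as a simplified sketch of the general idea of transparent progress-and-cost tracking (not the specific method described on those pages, and with all figures invented), the core comparison looks something like this:

```python
# Simplified, illustrative tracking of progress vs cost. All numbers invented.
total_points      = 400        # total relative size of the scope (from Step 4)
points_completed  = 120        # stories genuinely "done" so far
budget            = 1_000_000  # total approved budget, in dollars
spent_to_date     = 380_000    # actual cost so far

fraction_done  = points_completed / total_points   # 0.30
fraction_spent = spent_to_date / budget             # 0.38

# A crude projection: if the cost per point observed so far continues,
# what will delivering the whole scope cost?
projected_total_cost = spent_to_date / fraction_done   # ~1,266,667

print(f"{fraction_done:.0%} of scope done, {fraction_spent:.0%} of budget spent")
print(f"Projected total cost at the current rate: ${projected_total_cost:,.0f}")
```

The value of this kind of tracking is that the gap between “done” and “spent” shows up early, while there is still time to respond.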
Techniques for Rapid Response

- Retrospectives. Yes, this is both a detection technique and a response technique. Solutions to a great many problems can be generated in retrospectives.
- Daily Standups, and follow-up chats. When a problem comes up in stand-up, you don’t necessarily have to solve it in the standup (that can make for lengthy and counter-productive standups). But you can, and I think should, get the relevant team members together immediately after the standup to explore solution options.
- Allow spare capacity for trouble-shooting. There are two possible approaches to this. On large projects, I like to use both. The first is: don’t plan your sprints so that everyone is scheduled to be busy for 100% of the available hours; instead, use something like 75% to allow plenty of time for people to help each other. The second is to have an “impediment remover” with hands-on skills. The Scrum flavour of agile says that the Scrum Master is responsible for “removing any impediments to the team’s progress”. But it’s become conventional in the industry to fill this role with someone who doesn’t personally remove certain kinds of impediments, particularly those of a more technical nature. It seems that most Scrum Masters don’t write code, can’t personally fix a broken build, and can’t suggest technical solutions to a thorny design problem. This is OK as long as the team has access to someone who can help with technical impediments. The role can be a busy one. When I filled it on a 20-person Target-Driven project, it wasn’t full-time but was frequently the biggest part of my working week.
- Know your strategy for responding to projected cost overruns. Thanks to your accurate and transparent tracking of progress (above), you’ll soon find out if your project is heading over budget. But then what? You need to be ready with a range of responses. You also need to be prepared to be creative, and possibly invent new responses to suit your circumstances. This topic deserves a blog post of its own, so I can only summarize it briefly here. Broadly speaking, the possible responses fall into four categories, and you can mix-and-match from all four:
  - Adjust the scope. You don’t necessarily have to drop features outright. You might just simplify them. Jeff Patton wrote about this very well, under the heading “Late projects caused by poor estimation and other red herrings”. Alistair Cockburn’s “Trim the Tail” is also very relevant. (A rough worked example of the arithmetic follows after this list.)
  - “Borrow from the future”. You might do things that will save you time now, but will have costs in the future. Running up technical debt is one way to do this, although of course it’s not always a good one! Another, which I’ve used, is to have each developer specialize in those areas of the system where they are most productive. This made us quicker in the short term, but had two downsides. In the short term, we were exposed to more risk if one person was sick or otherwise unavailable. In the long term, we had a “knowledge debt”, because some people didn’t know how to maintain certain parts of the system. This meant that, in the future, we would need to find time for them to learn those areas. In our case, with an immovable deadline for the initial go-live, this particular trade-off made sense.
  - Look for ways to increase developer productivity. You should be doing this anyway, but you might find that, when pushed, you become more creative 😉 Two things that help a lot are keeping your architecture simple, and acting on ideas from retrospectives. You’ll need to keep measuring progress, as always, to see whether the changes are working.
  - Just bite the bullet and spend more money. In some circumstances, and with appropriate controls, this might be the business’s best option. It helps, of course, if the iterations already completed prove to the business that the project is delivering the right software and operating in a transparent and stable manner.
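Continuing the invented figures from the earlier tracking sketch, here is a rough illustration of the “adjust the scope” arithmetic: how much remaining scope would need to be dropped, simplified, or otherwise handled to bring the projection back within budget. It is a simplification for illustration, not a recipe.

```python
# Illustrative only: same invented figures as the tracking sketch above.
budget            = 1_000_000
spent_to_date     = 380_000
points_completed  = 120
total_points      = 400

cost_per_point    = spent_to_date / points_completed      # ~3,167 per point so far
remaining_budget  = budget - spent_to_date                 # 620,000
affordable_points = remaining_budget / cost_per_point      # ~196 points affordable
remaining_points  = total_points - points_completed        # 280 points still to build

points_to_remove = max(0, remaining_points - affordable_points)
print(f"Roughly {points_to_remove:.0f} points of remaining scope need to be "
      "dropped, simplified, or funded some other way")
```

In practice you would mix this with the other three categories of response, rather than cutting scope alone.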
Step 6: Schedule a gap between end-of-coding and any target go-live date

I’ve listed this last, because it comes at the end of the project. But obviously you need to plan for it from the start. It’s useful for piloting the system with a small group of users (if appropriate in your case), for absorbing some schedule overrun, for acting on late-breaking user feedback, and for numerous other purposes.

In the next post in this series, I’ll share some thoughts on how to wrap a contract around these steps.

[Updated 12 Oct 2015 with minor edits for clarity]