October 2, 2006 | John Rusk

Steve Yegge points out that it’s very hard to do a valid scientific experiment in software development: “You can’t have the same team do the same project twice; a bunch of stuff changes the second time around. You can’t have 2 teams do the same project; it’s too hard to control all the variables, and it’s prohibitively expensive to try it in any case. The same team doing 2 different projects in a row isn’t an experiment either. About the best you can do is gather statistical data across a lot of teams doing a lot of projects, and try to identify similarities, and perform some regressions, and hope you find some meaningful correlations.”

That’s exactly how Crystal Clear came about. Alistair Cockburn spent a lot of time studying successful teams, looking for the similarities, and then documented what he found. That’s not the kind of half-baked fad which Steve criticises; it’s about as scientific as it gets (in our field). In fact, it was scientific enough to earn Alistair his Ph.D. His doctoral dissertation is on-line and makes surprisingly easy reading. (Or you can find a shorter outline here.) I encourage you to read it, to see how Alistair grappled with the very problems Steve describes. Eventually, he solved them much more successfully than Steve would expect, and he documented his results as Crystal Clear.

Steve makes some very valid criticisms of the Agile movement. However, his assertion of a lack of scientific rigour is incorrect (in Alistair’s case, at least). Alistair conducted exactly the kind of research which Steve calls for, without any preconceived notions of what the outcome should be (he started years before the agile movement was founded). He came up with a simple, effective process – which happens to be agile. Interestingly, Alistair’s research-based process is somewhat different from XP. As I’ve said before, XP is a subset of agile, not a synonym.