Most software engineers have an intuitive sense that the industry is approaching pricing and estimation in the wrong way.  But we’ve lacked data to prove, or disprove, our intuitions. Magne Jørgensen and his colleagues at the Simula Research Laboratory are doing awesome research to fill the gap.

Some of what they’ve found will support your intuitions (e.g. the danger of price as a selection tool) but some may surprise you (you might have some bad estimation habits). Here are some highlights, just from the last few years:

A Strong Focus on Low Price When Selecting Software Providers Increases the Likelihood of Failure in Software Outsourcing Projects.  Empirical evidence for the Winner’s Curse in software development.

The Influence of Selection Bias on Effort Overruns in Software Development Projects. More on the winner’s curse.

What We Do and Don’t Know About Software Development Effort Estimation. The title says it all!

Myths and Over-Simplifications in Software Engineering. A timely reminder of the dangers of confirmation bias when considering how we should go about software development. Similar subject matter to Laurent Bossavit’s Leprechauns of Software Engineering.

The Ignorance of Confidence Levels in Minimum-Maximum Software Development Effort Intervals.  A study confirming a point which Steve McConnell makes early in “Software Estimation: Demystifying the Black Art” – namely that in practice “90% confident” requires a much wider range than we think it does.

Software Development Effort Estimation: Why It Fails and How to Improve It. The third-to-last slide (how to get percentage confidence intervals without the problems of min-max approaches) is excellent. Just one catch, which would have affected many of the teams I’ve worked in.  The technique requires 10 to 20 prior projects, each with estimated and actual costs.  I suspect that many estimators don’t have ready access to such data. (Maybe organisations need to improve how they keep these records, but that’s not the whole solution. Some teams simply don’t have enough history, IMHO).
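
To make that concrete, here is a minimal Python sketch of how I understand the approach: take the actual/estimated effort ratios from your past projects, and read the interval for a new estimate off the empirical percentiles of those ratios. The exact procedure in the slides may differ, and every number below is invented, so treat this as an illustration rather than the method itself.

    # A rough sketch, not necessarily the authors' exact procedure: derive
    # an interval for a new estimate from the empirical distribution of
    # past actual/estimated effort ratios. All project data below is made up.
    def effort_interval(history, new_estimate, confidence=0.90):
        """history: (estimated, actual) effort pairs from prior projects.
        Returns a (low, high) interval for new_estimate at the given
        confidence level, e.g. 0.90 for a 90% interval."""
        ratios = sorted(actual / estimated for estimated, actual in history)
        n = len(ratios)
        tail = (1.0 - confidence) / 2.0
        # Crude empirical percentiles (no interpolation); with only 10-20
        # data points the extremes carry a lot of weight, which is why the
        # technique needs that much history in the first place.
        low_ratio = ratios[max(0, int(tail * n))]
        high_ratio = ratios[min(n - 1, int((1.0 - tail) * n))]
        return new_estimate * low_ratio, new_estimate * high_ratio

    # Twelve hypothetical past projects: (estimated, actual) person-days.
    past = [(100, 130), (80, 75), (60, 90), (200, 260), (50, 55), (120, 180),
            (90, 85), (150, 210), (70, 95), (110, 140), (40, 60), (130, 150)]

    low, high = effort_interval(past, new_estimate=100, confidence=0.90)
    print(f"90% interval: {low:.0f} to {high:.0f} person-days")

Notice that the interval comes out asymmetric (here roughly 94 to 150 person-days for a point estimate of 100), stretching further above the estimate than below it – which is what the overrun findings above would lead you to expect.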

Better Selection of Software Providers Through Trialsourcing. “In this article we show that differences between software providers in terms of productivity and quality can be very large and that traditional means of evaluating software providers … fail to separate the competent from the incompetent ones.”  Describes using trial projects to select suppliers.

Numerical anchors and their strong effects on software development effort estimates.  Text not yet available.  Looks like a good one though.  In the meantime, here’s Wikipedia’s background material about anchoring.

First Impressions in Software Development Effort Estimation: Easy to Create and Difficult to Neutralize.  Another on anchoring (this time with full text).

From Origami to Software Development: a Review of Studies on Judgment-Based Predictions of Performance Time.  Interesting title, but no full text yet.