I tend to forget that I've written a lot of articles and tutorials that live outside the content management system I now use to run devdaily.com. Last night I stumbled across an article I wrote on software cost estimating that I think is still pretty good, so to make it easier to find, here's a link to my software cost estimating tutorial.
(As you'll see from their look and feel, those pages are formatted very differently, but you'll still be on the devdaily.com website.)
If estimating software development projects is important to you, I hope you enjoy that article.
Was the lowballing in WBS estimates due to (a) an incomplete WBS, with tasks omitted, or tasks included but specified as less functionally complex than they turned out to be; or (b) underestimates of the individual WBS tasks, with the task list reasonably complete and the specs accurate? Or if both, what was the rough proportion of the two kinds of errors?
Thanks. --David Lewis
There may have been a small "incomplete WBS" problem, but it was much more an underestimating problem: probably at least 90% underestimating and 10% incomplete WBS tasks.
Because our company was a consulting firm, we had to have very solid specs before we started writing any application, and we also tracked changes to requirements. We also knew what our work environment would be like, so we knew the overhead related to meetings and other processes.
In my opinion, at that point in our company's evolution, we hadn't asked developers to estimate their tasks often enough, so they were overly optimistic. They all got much better at estimating over time, but still tended to underestimate.
I don't think I included this in the tutorial, but at one point our company bid on software projects on a fixed-price basis, and if I had bid them solely on our WBS estimates, we would have lost a lot of money.
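That last point, that bidding straight off the raw WBS estimates would have lost money, suggests applying a historical correction factor to the task totals before pricing a fixed-price bid. Here's a minimal sketch of that idea; the task names, hours, 1.5x correction factor, and hourly rate are all hypothetical illustrations, not figures from the article.

```python
def padded_bid(task_hours, correction_factor, hourly_rate):
    """Apply a historical correction factor to raw WBS task estimates,
    then price the adjusted total at the given hourly rate."""
    raw_total = sum(task_hours.values())          # sum of raw task estimates
    adjusted_total = raw_total * correction_factor  # pad for known optimism
    return adjusted_total * hourly_rate

# Hypothetical WBS: three tasks totaling 80 raw hours.
wbs = {
    "login screen": 16,
    "reporting module": 40,
    "database layer": 24,
}

# With a 1.5x correction the 80 raw hours become 120 billable hours.
bid = padded_bid(wbs, correction_factor=1.5, hourly_rate=100)
print(bid)  # 12000.0
```

The correction factor itself would come from tracking actuals against estimates over several projects, which is exactly the kind of data the consulting firm's spec and change tracking made possible.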