I agree with this response. We are normally not asked to predict for situations where something big changes in the team. But I of course acknowledge that these things do happen. When you have a stable team, the numbers that this method yields are also very stable.
My experience is that most project managers take a non-probabilistic approach.
Say you have your usual task breakdown and assign a time/budget estimate to each task in terms of "low", "most likely", and "high". The intuitive answer is to sum up the "most likely" values for your total estimate. However, this ignores the probability that a delay in one task affects others.
Instead, if you take into account the covariance relationship between tasks (using historical or simulated data), you often find that the "most likely" summation has quite a low probability of being met. For the org that applied this, there was less than a 20% chance we'd meet or beat that intuitive estimate. No wonder we were chronically over budget and over schedule!
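To make that concrete, here is a rough Monte Carlo sketch in Python with made-up numbers. It uses independent triangular distributions, so it doesn't even model the correlated delays mentioned above; the right-skew alone is enough to make the "most likely" sum optimistic:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up (low, most_likely, high) estimates in days for five tasks.
tasks = [(2, 3, 8), (1, 2, 5), (4, 6, 15), (3, 5, 12), (2, 4, 10)]

naive_total = sum(most_likely for _, most_likely, _ in tasks)

# Sample each task from a triangular distribution and sum across tasks.
n = 100_000
samples = np.column_stack(
    [rng.triangular(low, mode, high, size=n) for low, mode, high in tasks]
)
totals = samples.sum(axis=1)

p_meet = (totals <= naive_total).mean()
print(f"sum of 'most likely' estimates: {naive_total} days")
print(f"chance of meeting or beating it: {p_meet:.0%}")
```

Adding positive correlation between the task durations (as real historical data tends to show) pushes that probability down even further.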
I've been reading "Software Estimation: Demystifying the Black Art" by Steve McConnell.
He introduces a distinction that, at least for me, has been instrumental: estimations and plans are different things.
Estimations are honest, based on past performance data, and probabilistic by their very nature.
Plans, on the other hand, are built with a target date in mind, taking into account the estimate previously made, desired delivery dates from customers, and everything else we are so used to.
By planning task completion dates closer to the estimates, you decrease the risk of the plan failing. You can build a shorter schedule by assuming that staff will work overtime, assuming more optimistic estimates, and so on, but then the risk of failure will be higher. That risk will, of course, never be zero.
It's a simple distinction, but it has important implications. We no longer feel pressured into making pessimistic, and therefore dishonest, estimates just out of fear of being pressed to cut the schedule. It also gave us a better argumentative tool for negotiating schedules with our clients.
I think it's also useful for making all the probabilities a bit clearer to project managers. It's like, "OK, I know you need me to commit to a delivery date, but I'm also going to make clear that there are some risks involved, and I want everybody to be aware of them."
That's an important distinction. The way we handled it was by letting managers define their acceptable level of risk and then using the model to produce estimates in that context.
For example, if they were OK with a 60% chance of meeting or beating a cost estimate, the forecast could be much more aggressive than under, say, a management expectation of a 90% chance of being on budget.
It’s a straightforward enough primer that it can be done in Excel, including simulating the data if necessary.
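If it helps, here is that risk-level idea sketched in Python rather than Excel; the simulated cost distribution is a stand-in, not real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a simulated total-cost distribution (e.g. from a model like the
# one sketched earlier, or from an Excel simulation). Units are arbitrary.
totals = rng.triangular(20, 28, 55, size=100_000)

# The estimate you quote is just the percentile matching the accepted risk level.
for confidence in (0.50, 0.60, 0.80, 0.90):
    estimate = np.quantile(totals, confidence)
    print(f"{confidence:.0%} chance of coming in at or under: {estimate:.1f}")
```

The 60% figure comes out noticeably lower (more aggressive) than the 90% figure, which is exactly the trade-off being negotiated.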
Even if this type of model is too simple for actual estimation, it’s a useful (and sobering) tool to help managers understand why their intuitive estimates can so often be incorrect.
You can intuit how much “active time” it will take you, personally, to do something. How can you intuit how long a task is going to spend in a queue waiting to be worked on because your team doesn’t have capacity, or another team “down the chain” doesn’t have capacity?
We have queuing theory because people are bad at intuiting the latter, and I don’t even think we're anywhere close to good, as an industry, at intuiting the former.
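To get a feel for why queue time resists intuition, here is a back-of-envelope sketch using the textbook M/M/1 formulas (single server, exponential arrivals and service; real teams are messier than this, so treat it as illustration only):

```python
# M/M/1: utilization rho = lam / mu, average wait in queue Wq = rho / (mu - lam).
mu = 1.0  # service rate: tasks the team finishes per day, on average

for rho in (0.5, 0.7, 0.8, 0.9, 0.95):
    lam = rho * mu            # arrival rate that produces this utilization
    wq = rho / (mu - lam)     # average days a task waits before anyone starts it
    print(f"utilization {rho:.0%}: average queue wait {wq:.1f} days")
```

Going from 50% to 95% utilization multiplies the average wait by about 19x, which is exactly the kind of nonlinearity people fail to intuit.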
You can talk about BS (in the context of software) like queuing theory, or you can actually write software. I suggest The Mythical Man-Month.
Sometimes I think humans developed language only to be able to pretend to be doing something:
The best hunter of the tribe kills a mammoth, but he is not verbally talented. Now an army of bureaucrats appears and tells everyone that they were instrumental in slaying the prey by applying some BS methodology. The tribe is gaslit, and the bureaucrats gain importance, influence, and economic wealth.
Queuing theory is a branch of mathematics. In a software context, it is useful for things like predicting server capacity and program response times. It is also regularly used to predict things like hospital wait times.
Here is a very good introduction; I hope you can learn something new from it (:
There's no actual use of queuing theory in the article, though; it's just mentioned as a sort of irrelevant justification. It's not even a Monte Carlo simulation, it's a bootstrap. You definitely don't need queuing theory to run a bootstrap.
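For anyone unfamiliar with the distinction, a plain bootstrap forecast looks roughly like this (not the article's code, just a generic sketch with made-up historical cycle times):

```python
import numpy as np

rng = np.random.default_rng(7)

observed_days = np.array([1, 2, 2, 3, 3, 4, 5, 8, 13])  # made-up past cycle times
remaining_tasks = 20

# Resample observed durations with replacement and sum them, many times over.
totals = np.array([
    rng.choice(observed_days, size=remaining_tasks, replace=True).sum()
    for _ in range(10_000)
])

print(f"median forecast: {np.quantile(totals, 0.5):.0f} days")
print(f"85th percentile: {np.quantile(totals, 0.85):.0f} days")
```

No distributional assumptions and no queuing theory anywhere in it.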
If you build a general ledger application for the 10th time, sure, forecasting is fairly straightforward. Nothing I do at my (very large, non-tech but highly software-driven) employer has ever been done here before. All estimates are officially treated as if they were accurate to the day, but changes happen so often during the lifetime of a project that you may as well use a random number. I call it a "nod nod wink wink" estimate: everyone wants it to be accurate, but no one really expects it to mean anything, other than the budget people.
One of my favorite managers required that I give him estimates.
I hated it because we both knew the number was bullshit.
On the other hand, I still found it beneficial to have to think about the estimate and give him something, even if at times it was a guess. It meant I focused better, stayed on task, and often delivered on time anyway.
I'm not saying everyone needs the accountability rails, but some people excel with this particular helper.