well, this is a good example of the dangers of dropping to the lower level of the opposition; taken out of context, the images in the slides have a counter-productive effect on the presentation. You just always have to hold higher standards, or get dragged into the mud.
This only happens when we (and I do mean all of us, not just the general IT crowd, but people, society as a whole) are too serious about ourselves, to the point of being uptight about it. I can't get insulted by the shiny pile, I can't get insulted by the monkey. At best I'll think it's mildly funny and adds some... shine to the presentation; at worst I'll think it's simply a lack of good taste and move on.
On a natural log scale, a 0.01 change in a coefficient (X) is approximately a 1% change in the response (Y), whereas a 0.01 change on a log10 scale is about a 2.3% change in response.
Therefore, using the example: if you see a 0.06 difference in coefficients on a natural log scale, you know the difference in response is approximately 6%, whereas you'll need a calculator if it's on a log10 scale.
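A quick sanity check of the arithmetic (the helper function here is just for illustration; the exact percent change implied by a log-scale coefficient b is exp(b) - 1 on a natural log scale and 10**b - 1 on log10):

```python
import math

def pct_change(coef, base="e"):
    """Exact percent change in Y implied by a log-scale coefficient."""
    factor = math.exp(coef) if base == "e" else 10 ** coef
    return 100 * (factor - 1)

print(round(pct_change(0.01, "e"), 2))   # natural log: ~1.01%, so "≈1%" holds
print(round(pct_change(0.01, "10"), 2))  # log10: ~2.33%, i.e. the 2.3% above
print(round(pct_change(0.06, "e"), 2))   # natural log: ~6.18%, close to 6%
print(round(pct_change(0.06, "10"), 2))  # log10: ~14.82%, not eyeball-able
```

The "≈1% per 0.01" shortcut works because exp(b) ≈ 1 + b for small b; on log10 the multiplier is ln(10) ≈ 2.3, which is why the approximation stops being mental arithmetic.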
To go an entire World Cup without conceding a goal is unprecedented, and I would expect the probabilities to reflect this. It seems that the probabilities are calculated independently.
Nope, from the wp article: "The bus factor is the total number of key developers who would need to be incapacitated [...] to send the project into such disarray that it would not be able to proceed"
Note that often (e.g. when comparing the bus factor of projects of different sizes) it makes sense to normalize by dividing by the size of the team, in which case the answer to asogi's question is "Yes, the normalized bus factor is 1/n".
> when comparing bus factor of projects of different sizes [] it makes sense to normalize by dividing by the size of the team
Consider two projects, a one-man project with a bus factor of, obviously, 1, and a 100-man project with so much redundancy within the group that the bus factor is an impressively robust 50. The standard bus factor immediately tells us that the big project is much much less fragile.
The "normalized bus factor" you describe (1 for the solo project, 1/2 for the huge project) perversely tells us that the big project is much more fragile, in that it will be disabled by the loss of 50% of the team, while the solo project will only go down after a full 100% of the team is eliminated.
Why does it make more sense to count redundancy by "percentage of the team, whatever size the team might be" rather than by "number of setbacks before project failure"? In general, the bigger team really does make the project more robust, not less robust.
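To make the comparison concrete, here is the arithmetic on the two hypothetical projects above (the numbers are the ones assumed in the comment, not real data):

```python
# Two hypothetical projects: (bus_factor, team_size).
projects = {
    "solo": {"bus_factor": 1, "team_size": 1},
    "big":  {"bus_factor": 50, "team_size": 100},
}

for name, p in projects.items():
    normalized = p["bus_factor"] / p["team_size"]
    print(f'{name}: raw={p["bus_factor"]}, normalized={normalized}')

# Raw bus factor: big (50) >> solo (1), i.e. big is far more robust.
# Normalized:     big (0.5) < solo (1.0), i.e. the ranking flips.
```

The raw number counts setbacks before failure; the normalized number measures the fraction of the team that must be lost, and those two orderings can disagree.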
Hm. I wonder if there's a way to distinguish between "If this particular person gets hit by a bus, we're screwed" versus "If any of these n people get hit by a bus, we're screwed" (which is a lot worse).
How about looking at the number of people who must be removed in order to bring the bus factor down to 1? We can call it the "bus co-factor" :). With the bus factor we're picking the most critical people first and removing the fewest of them; with the bus co-factor we're picking the least critical people first and removing the most of them.
There's probably already some name and many theorems in graph theory for both of these ideas.
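A brute-force sketch of both quantities on a made-up ownership map. Everything here is an assumption for illustration: the stall condition (the project stalls as soon as some task has no remaining owner), the task names, and the people.

```python
from itertools import combinations

# Hypothetical task -> owners map.
owners = {
    "backend":  {"alice", "bob"},
    "frontend": {"bob", "carol"},
    "deploy":   {"alice", "dave"},
}

def bus_factor(owners):
    """Size of the smallest group whose loss stalls the project."""
    team = set().union(*owners.values())
    for k in range(1, len(team) + 1):
        for gone in combinations(team, k):
            if any(not (people - set(gone)) for people in owners.values()):
                return k
    return len(team)

def bus_cofactor(owners):
    """Greatest number of people removable such that the remaining
    team still functions but has a bus factor of exactly 1."""
    team = set().union(*owners.values())
    best = 0
    for k in range(len(team)):
        for gone in combinations(team, k):
            remaining = {t: p - set(gone) for t, p in owners.items()}
            if all(remaining.values()) and bus_factor(remaining) == 1:
                best = max(best, k)
    return best

print(bus_factor(owners), bus_cofactor(owners))  # → 2 2
```

Here no single loss stalls the project (bus factor 2), but losing carol and dave leaves bob single-handedly holding the frontend, which is exactly the "one more bus and we're done" state the co-factor measures.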
You can treat it like a reverse big-O notation. You figure out the bus factor for each sub-task/system in your project (this is the risk factor) and then determine the priority of each sub-task/system. If a GUI is "easy" or low priority for your project, one GUI person doesn't mean your project has a bus factor of 1. But if networking is critical and you have one person who understands the novel telecom protocol stack you're using, that's a bus factor of 1.
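One way to read this comment as code (the subsystem breakdown and the rule "only critical subsystems count" are my interpretation of the comment, not an established definition):

```python
# Hypothetical subsystem breakdown: (number_of_experts, is_critical).
subsystems = {
    "gui":        (1, False),  # "easy"/low priority: doesn't count
    "networking": (1, True),   # one person knows the telecom stack
    "database":   (3, True),
}

def project_bus_factor(subsystems):
    """Effective bus factor: the smallest expert count among
    critical subsystems; non-critical ones are ignored."""
    critical = [n for n, is_critical in subsystems.values() if is_critical]
    return min(critical) if critical else None

print(project_bus_factor(subsystems))  # → 1, driven by networking
```

Under this reading, the solo GUI person never drags the project's number down, while the solo networking expert immediately pins it at 1.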