> The review highlights a number of experiments that show how actual human reasoning differs from maximizing utility.
The conventional definition of utility is quite strict: for example, it must be a function of the final outcome only, and there is no model uncertainty (that would go under the title of ambiguity aversion instead). So maybe it's not that surprising that such a specific, narrow concept doesn't describe all of human behaviour and needs to be extended. But since you can extend it enough to describe some interesting behaviours, is it really necessary to focus specifically on the utility function, rather than on the other things people might be maximizing?
Sorry, I don't really understand your comment. Assuming that people maximize utility is a useful model for certain tasks. Kahneman's work shows that people's decisions differ systematically from any kind of rational maximization, and are explained better when you allow for biases such as anchoring, loss aversion, and substituting an easier related question for the hard one actually being asked.
My point is that utility is a rather narrowly defined concept, so if you find a situation where people don't seem to be maximizing any utility function, one of the possibilities is that the concept of a utility function is too narrowly defined. Things like anchoring, loss aversion, and ambiguity aversion can all be modelled; the only thing you lose is the name "utility function". Maybe the utility function needs to depend on the entire history of states (for loss aversion), or maybe the question being asked is subject to uncertainty, or there is a fundamental amount of model uncertainty. All of those can be modelled in probabilistic terms.
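For concreteness, one standard way to write down loss aversion is the value function from Kahneman and Tversky's own prospect theory, which scores changes relative to a reference point r rather than final wealth (the symbols \alpha, \beta, \lambda below are just the usual illustrative parameters, nothing from the review):

    v(x) = (x - r)^\alpha              if x \ge r
    v(x) = -\lambda (r - x)^\beta      if x < r,    with \lambda > 1

With \lambda a bit above 2, as it is usually estimated, a loss hurts roughly twice as much as an equal gain helps, which is exactly the kind of reference/history dependence that a "function of the final outcome only" can't express.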
So if rational maximization means that people have a utility function that they maximize, then yes, rational maximization is not what people do. But that is partly the fault of how the definition of utility was chosen.
Von Neumann and Morgenstern showed that, as long as people's choices among gambles are consistent in the sense of satisfying a few axioms (completeness, transitivity, continuity, independence), there is a cardinal utility function whose expected value those choices maximize.
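Roughly stated, the representation theorem says there is a u, unique up to a positive affine transformation, such that for any two lotteries L and M

    L \succeq M   \iff   \sum_i p_i u(x_i) \ge \sum_i q_i u(x_i)

where p_i and q_i are the probabilities that L and M assign to outcome x_i.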
What Kahneman and Tversky observed is that people don't even choose consistently: the choice depends on how the options are presented, for instance on whether the subject frames an outcome as a loss or as a smaller-than-expected gain. No matter how you define a utility function, it will not always be maximized, so it's not a question of defining the function less narrowly. You can present two games with mathematically identical sets of outcomes, and people will consistently rank them differently.
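The canonical illustration (paraphrasing their 1981 disease-framing study from memory):

    Gain frame:  A: 200 of 600 people are saved for sure.
                 B: 1/3 chance all 600 are saved, 2/3 chance nobody is saved.   -> most choose A
    Loss frame:  C: 400 of 600 people die for sure.
                 D: 1/3 chance nobody dies, 2/3 chance all 600 die.             -> most choose D

A and C are the same distribution over final outcomes, as are B and D, so no single u over final outcomes can make both majority choices maximizing.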
Anyway, it's a very good and important book, and doesn't have much to do with Bayesian statistics.
[Ninja-edited since HN doesn't let me respond further below...if you can show the outcomes K & T observed are in fact consistent with a more broadly defined utility function, then you too can win a Nobel prize!]
> What Kahneman and Tversky observed is that people don't even choose consistently: the choice depends on how the options are presented, for instance on whether the subject frames an outcome as a loss or as a smaller-than-expected gain. No matter how you define a utility function, it will not always be maximized.
I disagree. I think you are assuming that people accept questions at face value and unfailingly trust the experimenter; under that assumption, equivalent but differently stated problems really would be equivalent, and you would reach that conclusion.
But when people use heuristics, those heuristics are grounded in their experience, and are like a prior on the meaning of the question. Stating the same question in two different ways and getting different answers means either that there is no utility function, or that the "utility function" depends (through model uncertainty, for example) on the exact phrasing of the question.
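To make that concrete, here is one way you could formalize the "prior on the meaning of the question" (my notation, not anything from the book). Let m range over possible interpretations of the question and a over the available answers, and suppose the subject averages over interpretations:

    E[u | phrasing, a] = \sum_m P(m | phrasing) E[u | m, a]

Two phrasings the experimenter considers equivalent can still induce different P(m | phrasing), and therefore different answers, without the underlying u changing at all.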
My point is that these discussions are very closely tied to the kinds of assumptions you make about how people reason, what is rational, and what inputs the utility function has. Kahneman and Tversky got around this problem, I think, by doing something eminently reasonable: postulating a clear and unambiguous definition of a utility function. But the concept of "rationality" is richer than that, so the conversation should not stop there.
> postulating a clear and unambiguous definition of a utility function. But the concept of "rationality" is richer than that
The word "rationality" may be ambiguous, as most words describing anything complex are, but the authors attempted to provide a clear model and work within those bounds. When we begin discussing the ideas informally, and using terms in a broader and more colloquial sense, then we're at fault if the results have become muddied.
The authors demonstrated a reasonable utility function, one which most people upon reflection would agree is logical, and demonstrated that people do not consistently act in a way that maximizes that function.
We can always move the goalposts and claim that if people appear to be acting irrationally, it's because we simply don't understand their concept of rationality (or the more complex function they're maximizing). But that seems rather circular; it would be nice to hear examples of a richer concept of rationality, in the context of the authors' experiments, that might explain the seemingly inconsistent behavior.