Judgment Under Uncertainty: Heuristics and Biases

The thirty-five chapters in this book describe various judgmental heuristics and the biases they produce, not only in laboratory experiments but in important social, medical, and political situations as well.

This book will be useful to a wide range of students and researchers, as well as to decision makers seeking to gain insight into their judgments and to improve them.

Reviews of Judgment Under Uncertainty: Heuristics and Biases

I read this book because it and Gödel, Escher, Bach were mentioned in the same breath in Eliezer Yudkowsky's incomparable Harry Potter and the Methods of Rationality. My development as a scientist and rationalist has been intertwined in some unlikely ways with the Harry Potter phenomenon. I first encountered game theory about eight years ago in a book called The Science of Harry Potter.

These concepts have been written about a lot in pop-science settings, and I think the treatment in these papers is much more nuanced than the typical presentation. One example of that nuance is the discussion of generally and implicitly accepted "rules of conversation" that are often violated by experimenters: broadly speaking, the assumption that one's interlocutor will be "informative, truthful, relevant, and clear." For almost anyone interested in the topic, however, I would recommend first reading Kahneman's excellent and accessible book, Thinking, Fast and Slow.

The underlying premise of this book is that there are much more fundamental biases in human judgment. I believe the editor, Kahneman, has written another book directed more towards a lay audience, Thinking, Fast and Slow. I will look into reading that too, but I'm sure a lot of its material is drawn from these scientific studies. Here are a few examples. Humans tend to be uncharitable in making judgments of others; when seeking explanations of others' behavior, we tend to attribute more to characteristics of the individual than to the situation in which they find themselves. This is the fundamental attribution error, in which we "infer broad personal dispositions and expect consistency in behavior or outcomes across widely disparate situations and contexts." We also neglect base rates: if an editor is very confident a manuscript will get published because of the excellent writing, he rarely takes into account the success rates of similar books. Integrating base-rate data with intuitive judgments (pulling the intuitive estimate back toward the base rate) is referred to as regression, and it leads to better estimates.

Here's a list of the essays contained in the book and a brief description of each:

  • Judgment under uncertainty: Heuristics and biases. More of a summary of the entire book, with introductory concepts including representativeness (e.g., what is the probability that object A belongs to class B?), misconceptions of chance (truly random events don't seem random to humans), and sample size (humans are bad at taking the effects of sample size into account when making decisions).
  • Subjective probability: A judgment of representativeness. Humans evaluate the representativeness of a sample by looking for similarities to the population of interest and for the apparent "randomness" of the sample.
  • Evidential impact of base rates. Even when given base-rate data, humans rarely take it into account, using their initial intuitions rather than the hard numbers provided by scientific studies (a small worked sketch follows this list).
  • The availability bias in social perception and interaction / The simulation heuristic. There are two kinds of judgments where availability can play a role: how easily past information is recalled, and how easily new situations are created using the imagination.
  • Informal covariation assessment: Data-based versus theory-based judgments. Humans are really bad at evaluating covariation, because they look at a limited selection of the data.
  • For those condemned to study the past: Heuristics and biases in hindsight. Historians do "play new tricks on the dead in every generation."
  • Evaluation of compound probabilities in sequential choice. Humans are really bad at compound probabilities, i.e., probabilities built from sequences of events (also illustrated in the sketch after this list).
  • Conservatism in human information processing. Bayes' theorem gives the user updated probabilities based on new information; humans revise their probabilities in the right direction, but by less than Bayes' theorem prescribes.
  • The best-guess hypothesis in multi-stage inference. When making multi-stage inferences, humans tend to use the best-guess hypothesis: make your best guess and act as if it were 100% true, rather than taking into account the other possibilities that still might exist.
  • Inferences of personal characteristics on the basis of information retrieved from one's memory. When making decisions based on memory, one should take into account (1) the diagnostic value and (2) the reliability of the information available.
  • The vitality of mythical numbers. Another good one; it looks at how humans can be overconfident in quick calculations.
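
To make the base-rate and compound-probability points concrete, here is a minimal sketch. The numbers are invented for illustration (they are not figures from the book); it just shows how Bayes' theorem combines a base rate with diagnostic evidence, and how quickly the probability of a sequence of events shrinks compared with intuition.

```python
# Illustrative sketch only: the base rate, likelihoods, and step counts
# below are invented numbers, not figures from the book.

def bayes_posterior(base_rate, p_evidence_given_true, p_evidence_given_false):
    """Combine a base rate with diagnostic evidence via Bayes' theorem."""
    numerator = p_evidence_given_true * base_rate
    denominator = numerator + p_evidence_given_false * (1 - base_rate)
    return numerator / denominator

# Base-rate neglect: suppose only 5% of submitted manuscripts get published
# (the base rate), and "excellent writing" is seen in 80% of published books
# but also in 30% of unpublished ones.  The intuitive judge anchors on the
# 80%; Bayes' theorem says the evidence only raises the chance to about 12%.
print(bayes_posterior(0.05, 0.80, 0.30))   # ~0.12

# Compound probabilities: a plan needing 8 sequential steps, each 90% likely
# to succeed, has well under a 50% chance of overall success, even though
# people tend to judge it close to the probability of a single step.
print(0.90 ** 8)                           # ~0.43
```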

While I agree with Sam that there's good stuff in here (little needles in a bed of hay) and think that this academic work could easily have been compressed into a work half its size, it was also written in the '80s, and things *were* different back then. One concept immediately seemed applicable to tying together three things in my head: the likelihood of free will for inanimate objects like point particles, when to defy data, and the likelihood of free will for an observer like you, the reader.

  • A bias that I haven't seen anyone else point out: "Shit has happened even when I was more constipated than this and I had to push much harder" came out while I was thinking of this stuff.
  • ...and another one: toplevel bias.

In general, the book added a good couple of new biases to the list for me to watch out for. This, among other things, is why it turns out to be a good book to read along with Karl Popper's The Open Society and Its Enemies.

Kahneman was awarded the 2002 Nobel Memorial Prize in Economic Sciences for his work on prospect theory.

  • Language: English
  • Genre: Psychology
  • Rating: 4.12
  • Pages: 544
  • Published: April 30, 1982 by Cambridge University Press
  • ISBN-10: 0521284147
  • ISBN-13: 9780521284141