Rare Events (Thinking, Fast and Slow, Chapter 30)
I visited Israel several times during a period in which suicide bombings in buses were relatively common—though of course quite rare in absolute terms. There were altogether 23 bombings between December 2001 and September 2004, which had caused a total of 236 fatalities. The number of daily bus riders in Israel was approximately 1.3 million at that time. For any traveler, the risks were tiny, but that was not how the public felt about it. People avoided buses as much as they could, and many travelers spent their time on the bus anxiously scanning their neighbors for packages or bulky clothes that might hide a bomb.
I did not have much occasion to travel on buses, as I was driving a rented car, but I was chagrined to discover that my behavior was also affected. I found that I did not like to stop next to a bus at a red light, and I drove away more quickly than usual when the light changed. I was ashamed of myself, because of course I knew better. I knew that the risk was truly negligible, and that any effect at all on my actions would assign an inordinately high “decision weight” to a minuscule probability. In fact, I was more likely to be injured in a driving accident than by stopping near a bus. But my avoidance of buses was not motivated by a rational concern for survival. What drove me was the experience of the moment: being next to a bus made me think of bombs, and these thoughts were unpleasant. I was avoiding buses because I wanted to think of something else.
My experience illustrates how terrorism works and why it is so effective: it induces an availability cascade. An extremely vivid image of death and damage, constantly reinforced by media attention and frequent conversations, becomes highly accessible, especially if it is associated with a specific situation such as the sight of a bus. The emotional arousal is associative, automatic, and uncontrolled, and it produces an impulse for protective action. System 2 may “know” that the probability is low, but this knowledge does not eliminate the self-generated discomfort and the wish to avoid it. System 1 cannot be turned off. The emotion is not only disproportionate to the probability, it is also insensitive to the exact level of probability. Suppose that two cities have been warned about the presence of suicide bombers. Residents of one city are told that two bombers are ready to strike. Residents of another city are told of a single bomber. Their risk is lower by half, but do they feel much safer?
Many stores in New York City sell lottery tickets, and business is good. The psychology of high-prize lotteries is similar to the psychology of terrorism. The thrilling possibility of winning the big prize is shared by the community and reinforced by conversations at work and at home. Buying a ticket is immediately rewarded by pleasant fantasies, just as avoiding a bus was immediately rewarded by relief from fear. In both cases, the actual probability is inconsequential; only possibility matters. The original formulation of prospect theory included the argument that “highly unlikely events are either ignored or overweighted,” but it did not specify the conditions under which one or the other will occur, nor did it propose a psychological interpretation of it. My current view of decision weights has been strongly influenced by recent research on the role of emotions and vividness in decision making. Overweighting of unlikely outcomes is rooted in System 1 features that are familiar by now. Emotion and vividness influence fluency, availability, and judgments of probability—and thus account for our excessive response to the few rare events that we do not ignore.
Overestimation and Overweighting
What is your judgment of the probability that the next president of the United States will be a third-party candidate?
How much will you pay for a bet in which you receive $1,000 if the next president of the United States is a third-party candidate, and no money otherwise?
The two questions are different but obviously related. The first asks you to assess the probability of an unlikely event. The second invites you to put a decision weight on the same event, by placing a bet on it.
How do people make the judgments and how do they assign decision weights? We start from two simple answers, then qualify them. Here are the oversimplified answers:
People overestimate the probabilities of unlikely events.
People overweight unlikely events in their decisions.
Although overestimation and overweighting are distinct phenomena, the same psychological mechanisms are involved in both: focused attention, confirmation bias, and cognitive ease.
Specific descriptions trigger the associative machinery of System 1. When you thought about the unlikely victory of a third-party candidate, your associative system worked in its usual confirmatory mode, selectively retrieving evidence, instances, and images that would make the statement true. The process was biased, but it was not an exercise in fantasy. You looked for a plausible scenario that conforms to the constraints of reality; you did not simply imagine the Fairy of the West installing a third-party president. Your judgment of probability was ultimately determined by the cognitive ease, or fluency, with which a plausible scenario came to mind.
You do not always focus on the event you are asked to estimate. If the target event is very likely, you focus on its alternative. Consider this example:
What is the probability that a baby born in your local hospital will be released within three days?
You were asked to estimate the probability of the baby going home, but you almost certainly focused on the events that might cause a baby not to be released within the normal period. Our mind has a useful capability to focus spontaneously on whatever is odd, different, or unusual. You quickly realized that it is normal for babies in the United States (not all countries have the same standards) to be released within two or three days of birth, so your attention turned to the abnormal alternative. The unlikely event became focal. The availability heuristic is likely to be evoked: your judgment was probably determined by the number of scenarios of medical problems you produced and by the ease with which they came to mind. Because you were in confirmatory mode, there is a good chance that your estimate of the frequency of problems was too high.
The probability of a rare event is most likely to be overestimated when the alternative is not fully specified. My favorite example comes from a study that the psychologist Craig Fox conducted while he was Amos’s student. Fox recruited fans of professional basketball and elicited several judgments and decisions concerning the winner of the NBA playoffs. In particular, he asked them to estimate the probability that each of the eight participating teams would win the playoff; the victory of each team in turn was the focal event.
You can surely guess what happened, but the magnitude of the effect that Fox observed may surprise you. Imagine a fan who has been asked to estimate the chances that the Chicago Bulls will win the tournament. The focal event is well defined, but its alternative—one of the other seven teams winning—is diffuse and less evocative. The fan’s memory and imagination, operating in confirmatory mode, are trying to construct a victory for the Bulls. When the same person is next asked to assess the chances of the Lakers, the same selective activation will work in favor of that team. The eight best professional basketball teams in the United States are all very good, and it is possible to imagine even a relatively weak team among them emerging as champion. The result: the probability judgments generated successively for the eight teams added up to 240%! This pattern is absurd, of course, because the sum of the chances of the eight events must add up to 100%. The absurdity disappeared when the same judges were asked whether the winner would be from the Eastern or the Western conference. The focal event and its alternative were equally specific in that question and the judgments of their probabilities added up to 100%.
To assess decision weights, Fox also invited the basketball fans to bet on the tournament result. They assigned a cash equivalent to each bet (a cash amount that was just as attractive as playing the bet). Winning the bet would earn a payoff of $160. The sum of the cash equivalents for the eight individual teams was $287. An average participant who took all eight bets would be guaranteed a loss of $127! The participants surely knew that there were eight teams in the tournament and that the average payoff for betting on all of them could not exceed $160, but they overweighted nonetheless. The fans not only overestimated the probability of the events they focused on—they were also much too willing to bet on them.
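The size of the guaranteed loss follows directly from the figures quoted above; a quick check of the arithmetic (a sketch in Python, using only the numbers given in the passage):

```python
# Figures from the passage: cash equivalents for the eight team bets
# summed to $287, but only one team can win, paying at most $160.
total_cash_equivalents = 287
max_payoff = 160

# A participant who took all eight bets would pay $287 and collect at
# most $160, locking in this loss.
guaranteed_loss = total_cash_equivalents - max_payoff
print(guaranteed_loss)  # 127
```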
These findings shed new light on the planning fallacy and other manifestations of optimism. The successful execution of a plan is specific and easy to imagine when one tries to forecast the outcome of a project. In contrast, the alternative of failure is diffuse, because there are innumerable ways for things to go wrong. Entrepreneurs and the investors who evaluate their prospects are prone both to overestimate their chances and to overweight their estimates.
As we have seen, prospect theory differs from utility theory in the relationship it suggests between probability and decision weight. In utility theory, decision weights and probabilities are the same. The decision weight of a sure thing is 100, and the weight that corresponds to a 90% chance is exactly 90, which is 9 times more than the decision weight for a 10% chance. In prospect theory, variations of probability have less effect on decision weights. An experiment that I mentioned earlier found that the decision weight for a 90% chance was 71.2 and the decision weight for a 10% chance was 18.6. The ratio of the probabilities was 9.0, but the ratio of the decision weights was only 3.83, indicating insufficient sensitivity to probability in that range. In both theories, the decision weights depend only on probability, not on the outcome. Both theories predict that the decision weight for a 90% chance is the same for winning $100, receiving a dozen roses, or getting an electric shock. This theoretical prediction turns out to be wrong.
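The ratio comparison in the paragraph above can be checked directly; a minimal sketch, where 71.2 and 18.6 are the measured decision weights quoted in the text:

```python
# Utility theory: decision weight equals probability, so the ratio of
# weights for a 90% chance vs. a 10% chance is 9.0.
probability_ratio = 90 / 10

# Prospect theory (measured): the same probabilities carry decision
# weights of 71.2 and 18.6 in the experiment cited.
weight_ratio = 71.2 / 18.6

print(probability_ratio)       # 9.0
print(round(weight_ratio, 2))  # 3.83 — far less sensitive to probability
```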
Psychologists at the University of Chicago published an article with the attractive title “Money, Kisses, and Electric Shocks: On the Affective Psychology of Risk.” Their finding was that the valuation of gambles was much less sensitive to probability when the (fictitious) outcomes were emotional (“meeting and kissing your favorite movie star” or “getting a painful, but not dangerous, electric shock”) than when the outcomes were gains or losses of cash. This was not an isolated finding. Other researchers had found, using physiological measures such as heart rate, that the fear of an impending electric shock was essentially uncorrelated with the probability of receiving the shock. The mere possibility of a shock triggered the full-blown fear response. The Chicago team proposed that “affect-laden imagery” overwhelmed the response to probability. Ten years later, a team of psychologists at Princeton challenged that conclusion.
The Princeton team argued that the low sensitivity to probability that had been observed for emotional outcomes is normal. Gambles on money are the exception. The sensitivity to probability is relatively high for these gambles, because they have a definite expected value.
What amount of cash is as attractive as each of these gambles?
A. 84% chance to win $59
B. 84% chance to receive one dozen red roses in a glass vase
What do you notice? The salient difference is that question A is much easier than question B. You did not stop to compute the expected value of the bet, but you probably knew quickly that it is not far from $50 (in fact it is $49.56), and the vague estimate was sufficient to provide a helpful anchor as you searched for an equally attractive cash gift. No such anchor is available for question B, which is therefore much harder to answer. Respondents also assessed the cash equivalent of gambles with a 21% chance to win the two outcomes. As expected, the difference between the high-probability and low-probability gambles was much more pronounced for the money than for the roses.
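The expected-value anchor mentioned for question A is easy to verify (a one-line check; question B offers no comparable computation, which is exactly the point):

```python
# Question A: an 84% chance to win $59.
expected_value = 0.84 * 59
print(round(expected_value, 2))  # 49.56, close to the rough $50 anchor
```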
To bolster their argument that insensitivity to probability is not caused by emotion, the Princeton team compared willingness to pay to avoid gambles:
21% chance (or 84% chance) to spend a weekend painting someone’s three-bedroom apartment
21% chance (or 84% chance) to clean three stalls in a dormitory bathroom after a weekend of use
The second outcome is surely much more emotional than the first, but the decision weights for the two outcomes did not differ. Evidently, the intensity of emotion is not the answer.
Another experiment yielded a surprising result. The participants received explicit price information along with the verbal description of the prize. An example could be:
84% chance to win: A dozen red roses in a glass vase. Value $59.
21% chance to win: A dozen red roses in a glass vase. Value $59.
It is easy to assess the expected monetary value of these gambles, but adding a specific monetary value did not alter the results: evaluations remained insensitive to probability even in that condition. People who thought of the gift as a chance to get roses did not use price information as an anchor in evaluating the gamble. As scientists sometimes say, this is a surprising finding that is trying to tell us something. What story is it trying to tell us?
The story, I believe, is that a rich and vivid representation of the outcome, whether or not it is emotional, reduces the role of probability in the evaluation of an uncertain prospect. This hypothesis suggests a prediction, in which I have reasonably high confidence: adding irrelevant but vivid details to a monetary outcome also disrupts calculation. Compare your cash equivalents for the following outcomes:
21% (or 84%) chance to receive $59 next Monday
21% (or 84%) chance to receive a large blue cardboard envelope containing $59 next Monday morning
The new hypothesis is that there will be less sensitivity to probability in the second case, because the blue envelope evokes a richer and more fluent representation than the abstract notion of a sum of money. You constructed the event in your mind, and the vivid image of the outcome exists there even if you know that its probability is low. Cognitive ease contributes to the certainty effect as well: when you hold a vivid image of an event, the possibility of its not occurring is also represented vividly, and overweighted. The combination of an enhanced possibility effect with an enhanced certainty effect leaves little room for decision weights to change between chances of 21% and 84%.
The idea that fluency, vividness, and the ease of imagining contribute to decision weights gains support from many other observations. Participants in a well-known experiment are given a choice of drawing a marble from one of two urns, in which red marbles win a prize:
Urn A contains 10 marbles, of which 1 is red.
Urn B contains 100 marbles, of which 8 are red.
Which urn would you choose? The chances of winning are 10% in urn A and 8% in urn B, so making the right choice should be easy, but it is not: about 30%–40% of students choose the urn with the larger number of winning marbles, rather than the urn that provides a better chance of winning. Seymour Epstein has argued that the results illustrate the superficial processing characteristic of System 1 (which he calls the experiential system).
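The urn comparison reduces to two fractions; a minimal check that urn A is in fact the better choice:

```python
# Urn A: 1 red marble out of 10; Urn B: 8 red marbles out of 100.
p_win_a = 1 / 10    # 0.10
p_win_b = 8 / 100   # 0.08

# The smaller urn gives the better chance, despite having fewer
# winning marbles — the intuition pulled the other way.
print(p_win_a > p_win_b)  # True
```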
As you might expect, the remarkably foolish choices that people make in this situation have attracted the attention of many researchers. The bias has been given several names; following Paul Slovic I will call it denominator neglect. If your attention is drawn to the winning marbles, you do not assess the number of nonwinning marbles with the same care. Vivid imagery contributes to denominator neglect, at least as I experience it. When I think of the small urn, I see a single red marble on a vaguely defined background of white marbles. When I think of the larger urn, I see eight winning red marbles on an indistinct background of white marbles, which creates a more hopeful feeling. The distinctive vividness of the winning marbles increases the decision weight of that event, enhancing the possibility effect. Of course, the same will be true of the certainty effect. If I have a 90% chance of winning a prize, the event of not winning will be more salient if 10 of 100 marbles are “losers” than if 1 of 10 marbles yields the same outcome.
The idea of denominator neglect helps explain why different ways of communicating risks vary so much in their effects. You read that “a vaccine that protects children from a fatal disease carries a 0.001% risk of permanent disability.” The risk appears small. Now consider another description of the same risk: “One of 100,000 vaccinated children will be permanently disabled.” The second statement does something to your mind that the first does not: it calls up the image of an individual child who is permanently disabled by a vaccine; the 999,999 safely vaccinated children have faded into the background. As predicted by denominator neglect, low-probability events are much more heavily weighted when described in terms of relative frequencies (how many) than when stated in more abstract terms of “chances,” “risk,” or “probability” (how likely). As we have seen, System 1 is much better at dealing with individuals than categories.
The effect of the frequency format is large. In one study, people who saw information about “a disease that kills 1,286 people out of every 10,000” judged it as more dangerous than people who were told about “a disease that kills 24.14% of the population.” The first disease appears more threatening than the second, although the former risk is only half as large as the latter! In an even more direct demonstration of denominator neglect, “a disease that kills 1,286 people out of every 10,000” was judged more dangerous than a disease that “kills 24.4 out of 100.” The effect would surely be reduced or eliminated if participants were asked for a direct comparison of the two formulations, a task that explicitly calls for System 2. Life, however, is usually a between-subjects experiment, in which you see only one formulation at a time. It would take an exceptionally active System 2 to generate alternative formulations of the one you see and to discover that they evoke a different response.
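The claim that the first risk is only about half the second can be verified from the figures quoted in the study:

```python
# "1,286 people out of every 10,000" expressed as a proportion:
rate_frequency_format = 1286 / 10000  # 0.1286, i.e., 12.86%
# "24.14% of the population" expressed as a proportion:
rate_percent_format = 24.14 / 100     # 0.2414

# The frequency-format disease kills at roughly half the rate, yet
# participants judged it more dangerous.
print(rate_frequency_format < rate_percent_format)  # True
```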
Experienced forensic psychologists and psychiatrists are not immune to the effects of the format in which risks are expressed. In one experiment, professionals evaluated whether it was safe to discharge from the psychiatric hospital a patient, Mr. Jones, with a history of violence. The information they received included an expert’s assessment of the risk. The same statistics were described in two ways: Patients similar to Mr. Jones are estimated to have a 10% probability of committing an act of violence against others during the first several months after discharge.
Of every 100 patients similar to Mr. Jones, 10 are estimated to commit an act of violence against others during the first several months after discharge.
The professionals who saw the frequency format were almost twice as likely to deny the discharge (41%, compared to 21% in the probability format). The more vivid description produces a higher decision weight for the same probability.
The power of format creates opportunities for manipulation, which people with an axe to grind know how to exploit. Slovic and his colleagues cite an article that states that “approximately 1,000 homicides a year are committed nationwide by seriously mentally ill individuals who are not taking their medication.” Another way of expressing the same fact is that “1,000 out of 273,000,000 Americans will die in this manner each year.” Another is that “the annual likelihood of being killed by such an individual is approximately 0.00036%.” Still another: “1,000 Americans will die in this manner each year, or less than one-thirtieth the number who will die of suicide and about one-fourth the number who will die of laryngeal cancer.” Slovic points out that “these advocates are quite open about their motivation: they want to frighten the general public about violence by people with mental disorder, in the hope that this fear will translate into increased funding for mental health services.” A good attorney who wishes to cast doubt on DNA evidence will not tell the jury that “the chance of a false match is 0.1%.” The statement that “a false match occurs in 1 of 1,000 capital cases” is far more likely to pass the threshold of reasonable doubt. The jurors hearing those words are invited to generate the image of the man who sits before them in the courtroom being wrongly convicted because of flawed DNA evidence. The prosecutor, of course, will favor the more abstract frame—hoping to fill the jurors’ minds with decimal points.
Decisions from Global Impressions
The evidence suggests the hypothesis that focal attention and salience contribute to both the overestimation of unlikely events and the overweighting of unlikely outcomes. Salience is enhanced by mere mention of an event, by its vividness, and by the format in which probability is described. There are exceptions, of course, in which focusing on an event does not raise its probability: cases in which an erroneous theory makes an event appear impossible even when you think about it, or cases in which an inability to imagine how an outcome might come about leaves you convinced that it will not happen. The bias toward overestimation and overweighting of salient events is not an absolute rule, but it is large and robust.
There has been much interest in recent years in studies of choice from experience, which follow different rules from the choices from description that are analyzed in prospect theory. Participants in a typical experiment face two buttons. When pressed, each button produces either a monetary reward or nothing, and the outcome is drawn randomly according to the specifications of a prospect (for example, “5% to win $12” or “95% chance to win $1”). The process is truly random, so there is no guarantee that the sample a participant sees exactly represents the statistical setup. The expected values associated with the two buttons are approximately equal, but one is riskier (more variable) than the other. (For example, one button may produce $10 on 5% of the trials and the other $1 on 50% of the trials). Choice from experience is implemented by exposing the participant to many trials in which she can observe the consequences of pressing one button or another. On the critical trial, she chooses one of the two buttons, and she earns the outcome on that trial. Choice from description is realized by showing the subject the verbal description of the risky prospect associated with each button (such as “5% to win $12”) and asking her to choose one. As expected from prospect theory, choice from description yields a possibility effect—rare outcomes are overweighted relative to their probability. In sharp contrast, overweighting is never observed in choice from experience, and underweighting is common.
The experimental situation of choice by experience is intended to represent many situations in which we are exposed to variable outcomes from the same source. A restaurant that is usually good may occasionally serve a brilliant or an awful meal. Your friend is usually good company, but he sometimes turns moody and aggressive. California is prone to earthquakes, but they happen rarely. The results of many experiments suggest that rare events are not overweighted when we make decisions such as choosing a restaurant or tying down the boiler to reduce earthquake damage.
The interpretation of choice from experience is not yet settled, but there is general agreement on one major cause of underweighting of rare events, both in experiments and in the real world: many participants never experience the rare event! Most Californians have never experienced a major earthquake, and in 2007 no banker had personally experienced a devastating financial crisis. Ralph Hertwig and Ido Erev note that “chances of rare events (such as the burst of housing bubbles) receive less impact than they deserve according to their objective probabilities.” They point to the public’s tepid response to long-term environmental threats as an example.
These examples of neglect are both important and easily explained, but underweighting also occurs when people have actually experienced the rare event. Suppose you have a complicated question that two colleagues on your floor could probably answer. You have known them both for years and have had many occasions to observe and experience their character. Adele is fairly consistent and generally helpful, though not exceptional on that dimension. Brian is not quite as friendly and helpful as Adele most of the time, but on some occasions he has been extremely generous with his time and advice. Whom will you approach?
Consider two possible views of this decision:
It is a choice between two gambles. Adele is closer to a sure thing; the prospect of Brian is more likely to yield a slightly inferior outcome, with a low probability of a very good one. The rare event will be overweighted by a possibility effect, favoring Brian.
It is a choice between your global impressions of Adele and Brian. The good and the bad experiences you have had are pooled in your representation of their normal behavior. Unless the rare event is so extreme that it comes to mind separately (Brian once verbally abused a colleague who asked for his help), the norm will be biased toward typical and recent instances, favoring Adele.
In a two-system mind, the second interpretation appears far more plausible. System 1 generates global representations of Adele and Brian, which include an emotional attitude and a tendency to approach or avoid. Nothing beyond a comparison of these tendencies is needed to determine the door on which you will knock. Unless the rare event comes to your mind explicitly, it will not be overweighted. Applying the same idea to the experiments on choice from experience is straightforward. As they are observed generating outcomes over time, the two buttons develop integrated “personalities” to which emotional responses are attached.
The conditions under which rare events are ignored or overweighted are better understood now than they were when prospect theory was formulated. The probability of a rare event will (often, not always) be overestimated, because of the confirmatory bias of memory. Thinking about that event, you try to make it true in your mind. A rare event will be overweighted if it specifically attracts attention. Separate attention is effectively guaranteed when prospects are described explicitly (“99% chance to win $1,000, and 1% chance to win nothing”). Obsessive concerns (the bus in Jerusalem), vivid images (the roses), concrete representations (1 of 1,000), and explicit reminders (as in choice from description) all contribute to overweighting. And when there is no overweighting, there will be neglect. When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.
Speaking of Rare Events
“Tsunamis are very rare even in Japan, but the image is so vivid and compelling that tourists are bound to overestimate their probability.”
“It’s the familiar disaster cycle. Begin by exaggeration and overweighting, then neglect sets in.”
“We shouldn’t focus on a single scenario, or we will overestimate its probability. Let’s set up specific alternatives and make the probabilities add up to 100%.”
“They want people to be worried by the risk. That’s why they describe it as 1 death per 1,000. They’re counting on denominator neglect.”