Broadening the Frame

Daniel Kahneman’s Thinking, Fast and Slow

Of the many striking findings presented in Daniel Kahneman’s Thinking, Fast and Slow, two, when juxtaposed, are particularly jarring: (1) that rationality, as Kahneman says, is served by “broader framing,” and (2) that moral intuitions are reversible through frame-switching. I’ll define “frame” below. But first, to illustrate the general theoretical framework Kahneman developed with his colleague Amos Tversky—prospect theory—consider the following pair of problems:

Problem 1: Which do you choose?
          Get $900 for sure OR a 90% chance of getting $1,000 and a 10% chance of getting nothing

Problem 2: Which do you choose?

          Lose $900 for sure OR a 90% chance of losing $1,000 and a 10% chance of losing nothing

Overwhelmingly, people prefer the sure thing ($900 for sure) in problem 1 and the gamble (a 90% chance of losing $1,000 and a 10% chance of losing nothing) in problem 2. Both choices conform to the predictions of prospect theory. Prospect theory says, in essence, three things: (a) that people evaluate outcomes with respect to a reference point (e.g., one’s current wealth), (b) that people have diminished sensitivity to gains and losses the further these gains and losses are from the reference point (e.g., the difference in valuation between a loss of $900 and $1,000 is less than that between a loss of $100 and $200), and (c) that people are loss averse (i.e., they weight prospective losses more heavily than prospective gains). Prospect theory is often represented by this diagram:

[Figure: the prospect theory value function, an S-shaped curve with psychological value on the y-axis and dollar gains/losses on the x-axis, steeper below the reference point than above it. Points A (the sure gain) and B (the gamble) mark valuations on the gain side; points C (the gamble) and D (the sure loss) mark valuations on the loss side.]

Kahneman writes that if prospect theory had a flag, the image of this diagram would be on it. Psychological value is measured on the y-axis; the dollar amount of gains or losses is measured on the x-axis. The reference point is the origin where gains and losses are zero, so that psychological valuation is neutral at that point. Diminished sensitivity to gains and losses the further they are from the reference point is captured by the S-shape (the shallower slopes as we move further in either direction). And loss aversion is seen in the fact that the slope of the curve is steeper below the reference point than above it.

In problem 1, people prefer the sure gain of $900 over the 90% chance of getting $1,000 because of loss aversion: they’re so averse to the 10% prospect of getting nothing at all in the case of the gamble that they lean heavily toward the sure thing and thus prefer point A to point B. In problem 2, people prefer the gamble because of diminished sensitivity to gains and losses the further they are from their reference point: in this case, there is a 90% chance of losing even more than $900, but the difference between $900 and $1,000 seems slight from the perspective of the reference point of no gains or losses (plus there is a 10% chance of losing nothing at all). So people prefer point C to point D.

This exercise demonstrates the principle that people are risk averse when there’s a prospect of gain and risk-seeking when there’s a prospect of loss. We see this in the diagram: to the right of the origin, sure things have higher psychological valuations than the expected values of gambles; to the left of the origin, the expected values of gambles have higher (i.e., less negative) psychological valuations than sure things.
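To make the curve concrete, here is a minimal sketch in Python of a prospect-theory value function applied to problems 1 and 2. The functional form and parameters (α = β = 0.88, λ = 2.25) are Tversky and Kahneman’s later published estimates, assumed here for illustration, and the sketch omits prospect theory’s probability weighting:

```python
# A minimal sketch of a prospect theory value function. Parameters are
# Tversky and Kahneman's later (1992) estimates, assumed for illustration;
# prospect theory's probability weighting is omitted.

ALPHA = 0.88   # diminished sensitivity to gains
BETA = 0.88    # diminished sensitivity to losses
LAMBDA = 2.25  # loss aversion: losses loom about 2.25x larger than gains

def value(x: float) -> float:
    """Psychological value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * (-x) ** BETA

def gamble_value(outcomes) -> float:
    """Probability-weighted psychological value of a gamble,
    given (probability, amount) pairs."""
    return sum(p * value(x) for p, x in outcomes)

# Problem 1: the sure $900 is valued above the gamble (risk aversion in gains).
print(value(900), gamble_value([(0.9, 1000), (0.1, 0)]))    # ~397.9 > ~392.9

# Problem 2: the gamble is valued above the sure loss (risk seeking in losses).
print(gamble_value([(0.9, -1000), (0.1, 0)]), value(-900))  # ~-884 > ~-895
```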

To take another illustration, imagine you’re presented, in two consecutive tasks, with the following choices:

Task 1: Choose between:
          A: a sure gain of $240
          B: a 25% chance of gaining $1,000 and a 75% chance of gaining nothing

Task 2: Choose between:
          C: a sure loss of $750
          D: a 75% chance of losing $1,000 and a 25% chance of losing nothing

Again, people overwhelmingly show the patterns predicted by prospect theory. When there’s a prospect of gain, people are risk averse, so they select A in the first case. And when there’s a prospect of loss, people are risk-seeking, so they choose D in the second case. (Kahneman and Tversky found in their initial experiment that 73 percent of subjects chose A and D, while just 3 percent chose B and C.) And note that in task 1, people choose the sure gain over the gamble even though the value of the sure gain, $240, is less than the expected value of the gamble, $250. As in the first illustration of prospect theory above, the impetuses behind these choices are loss aversion (in the selection of A over B) and diminished sensitivity to gains and losses the further they are from the reference point (in the selection of D over C).

Framing

Related to the notion of a reference point is that of a frame. A frame is simply the manner in which a choice is presented, independent of the information used in making the choice. In the above examples, the frame consists of the presentation of choices as two consecutive tasks, one involving gains, the other losses. But these choices can be reframed as one large problem with four options rather than two consecutive problems with two options each. In the new version, each option corresponds to one possible combination of the original choices: AC, AD, BC, and BD. As noted, people overwhelmingly choose AD when the choice problems are presented as two independent single-choice problems. It takes a bit of work to combine the two choices and view them in a single frame. (It requires the use of what Kahneman calls System 2 cognition, described below.) But once the hard work of reframing the options has been done, we have:

Choose between:
          AD: a 25% chance of gaining $240 and a 75% chance of losing $760
          BC: a 25% chance of gaining $250 and a 75% chance of losing $750

A no-brainer. BC dominates AD. Almost no one will now select AD, the combination overwhelmingly chosen when the choices are presented independently. What this says, as Kahneman tells us, is that “rationality is served by broader framing.” By broadening the frame, we incorporate more information into our assessment of the choices in a way that enforces overall consistency. In effect, we become more rational.
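This broader frame can be constructed mechanically. Here is a minimal sketch in Python that enumerates the four combinations and their payoff distributions (assuming, for the all-gamble combination BD, that the two gambles resolve independently); it shows BC paying strictly more than AD at identical probabilities:

```python
from itertools import product

# Each option is a list of (probability, payoff) pairs.
task1 = {"A": [(1.0, 240)],                  # sure gain of $240
         "B": [(0.25, 1000), (0.75, 0)]}     # 25% chance of gaining $1,000
task2 = {"C": [(1.0, -750)],                 # sure loss of $750
         "D": [(0.75, -1000), (0.25, 0)]}    # 75% chance of losing $1,000

def combine(opt1, opt2):
    """Broaden the frame: the joint payoff distribution from taking both options."""
    dist = {}
    for (p1, x1), (p2, x2) in product(opt1, opt2):
        dist[x1 + x2] = dist.get(x1 + x2, 0) + p1 * p2
    return sorted(dist.items())

for n1, n2 in product(task1, task2):
    payoffs = combine(task1[n1], task2[n2])
    ev = sum(p * x for x, p in payoffs)
    print(f"{n1}{n2}: {payoffs}  EV = {ev:+.2f}")

# AD: [(-760, 0.75), (240, 0.25)]  EV = -510.00
# BC: [(-750, 0.75), (250, 0.25)]  EV = -500.00  <- better at every probability
```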

In the rational agent model of economics, by contrast, choices are invariant to framing, so that subjects would be just as likely to choose A and D when the choices are presented as one large problem as they would be when the choices are presented as two consecutive problems. Kahneman and Tversky famously found otherwise.

Framing and Moral Intuition

Now let’s take a very different problem, one that Kahneman says is his favorite illustration of the framing effect. The problem comes from economist Thomas Schelling. Schelling explained to his students that the federal income tax allows a per-child standard exemption and that the exemption is independent of income. He then asked: “Should the child exemption be larger for the rich than for the poor?” Overwhelmingly, his students said no. But Schelling then noted that the reference number of children, zero, is arbitrary. Suppose now that it’s two, and consider this question: “Should the childless poor pay as large a surcharge as the childless rich?” His students again overwhelmingly said no. The problem, of course, is that these answers conflict. The rich paying a higher surcharge than the poor on two or fewer unborn children when the reference number is two is deemed desirable. The rich getting a larger exemption than the poor when the reference number is zero is deemed undesirable. The thing is, these two states of affairs are equivalent.[1]
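To see the equivalence concretely, here is a minimal sketch in Python using the hypothetical figures from note 1 below. The baseline tax is invented, and treating the exemption as a direct reduction of the tax bill (rather than of taxable income) is a simplification:

```python
BASE_TAX = 20_000  # hypothetical tax bill for a childless family

def tax_exemption_frame(n_children: int, per_child_relief: int) -> int:
    """Reference point = 0 children: each child reduces the tax bill."""
    return BASE_TAX - n_children * per_child_relief

def tax_surcharge_frame(n_children: int, surcharge: int, reference: int = 2) -> int:
    """Reference point = 2 children: start from the two-child family's bill
    and add a surcharge for each child short of the reference."""
    two_child_tax = BASE_TAX - reference * surcharge
    return two_child_tax + (reference - n_children) * surcharge

# The two descriptions generate the identical schedule of tax bills:
for amount in (2000, 2500):  # the poor and rich figures from note 1
    for n in (0, 1, 2):
        assert tax_exemption_frame(n, amount) == tax_surcharge_frame(n, amount)

# So "the rich pay a larger surcharge than the poor" ($2,500 vs. $2,000 per
# missing child) is the very schedule one rejects as "a larger exemption for
# the rich than for the poor" ($2,500 vs. $2,000 per child).
```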

Here’s Kahneman’s comment:

You have moral intuitions about the difference between the rich and the poor, but these intuitions depend on an arbitrary reference point, and they are not about the real problem. This problem—the question about actual states of the world—is how much tax individual families should pay, how to fill the cells in the matrix of the tax code. You have no compelling moral intuitions to guide you in solving that problem. Your moral feelings are attached to the frames, to descriptions of reality rather than to reality itself. The message about the nature of framing is stark: framing should not be viewed as an intervention that masks or distorts an underlying preference. At least in this instance…there is no underlying preference that is masked or distorted by the frame. Our preferences are about framed problems, and our moral intuitions are about descriptions, not about substance. (p. 370)

Note the difference between the above exercises. The sure-thing-versus-gamble task involves frame-broadening. There we see that broadening the frame, as Kahneman says, “serves rationality,” with more information incorporated into a larger framework that enforces consistency. The tax exemption task involves frame-switching. In this case, we entertain two informationally equivalent states of affairs that evoke contradictory moral intuitions. By pondering both frames simultaneously, we in effect broaden the frame. And we wind up with a perspective that may enhance rationality but robs us of any moral impetus in the matter.

System 1 vs. System 2

To set these considerations in the broader frame of Kahneman’s book, the book is about the interplay between two modes of cognition, modes Kahneman calls System 1 and System 2. System 1 operations are, as Kahneman says, “fast, automatic, effortless, associative, and difficult to control or modify.” System 2 operations are “slower, serial, effortful, and deliberately controlled.” In the above exercises, our immediate intuitions regarding the given choices are system 1 operations. The effortful reframing of the choices (done by Kahneman and Tversky in the first case and by Schelling in the second) is a system 2 operation.

System 1 operations are costless. Evolution has hardwired them into our brains, so we perform them effortlessly and automatically. System 2 operations are costly. They require allocation of the finite resource of attention and thus run on a budget. The picture that emerges is economic and, I would say, Darwinian. Navigating the complex human environment is cognitively taxing. So natural selection has bequeathed us many instincts or heuristics that make the task viable, enabling us to economize where possible on costly deliberation. But to meet the demands of flexibility imposed by that environment, we’ve also been given rational powers, system 2 capabilities. These enable us to reframe states of affairs in ways that enhance our ability to reason about them.

The above examples show how beneficial this capacity can be. Once the effortful system 2 work of reframing the choices is done, it’s very easy for the human mind to utilize the new frames to conceive states of affairs more clearly and reason about them more effectively. In the sure-thing-versus-gamble task, once the combined alternatives are laid out, it’s easy to see that BC dominates AD. In the tax exemption case, once the reversal of moral intuition is shown, it would be difficult (though not, alas, impossible) to stubbornly insist that one of the arbitrary moral intuitions evoked by one of the narrow frames is the “right” one. So once the hard work of broadening the frame is done, the benefits can be spread widely, even to those who took no part in the effortful system 2 work.

The problem natural selection faces is to design a brain that can perform, at minimal cost (minimal glucose metabolism), the cognitive tasks needed for the organism to pass its genes to the next generation. Obviously, for humans, the more these tasks can be delegated to system 1, all else equal, the better. So system 1 is the default mode of cognition: anything that can be handled by system 1 is handled by system 1, and system 2 is engaged only when it must be. This follows a basic economic principle, that of least effort, a principle that living organisms must respect. One result, however, is systematic overreliance on system 1 and underreliance on system 2, with adverse effects in many circumstances. Documenting those effects has been the life’s work of Kahneman and Tversky.

Humans differ from other animals in that we can construct our own frames, using our system 2 capabilities. A frame could be one’s intuitive perceptions of the world but also a theory or a sophisticated mathematical model. Morality, predominantly a human trait, is largely about framing: frame-switching, as when one takes the perspective of another; or frame-broadening, as when one seeks to reconcile competing interests. A frame is a nonlinguistic cognitive space, a kind of workspace in which one “sees” how components of states of affairs are related.

Importantly, a frame is independent of the states of affairs themselves, so multiple frames of varying breadth can be used to assess a single state of affairs.

Efficiency vs. Flexibility

Cognition is a functionality that enables an organism to respond flexibly to environmental contingencies. So the great evolutionary advantage of cognition is flexibility. At the same time, cognition is always under pressure to perform its functions efficiently, at minimal cost. A frame ideally balances these considerations. It makes certain mental contents—thoughts, ideas, responses, etc.—accessible, which fosters flexibility, and certain other contents inaccessible, which fosters efficiency.

There is, however, a tension between flexibility and efficiency. Since re-framing (frame-broadening or frame-switching) is effortful, once a frame is adopted, people will tend to stick to it even when re-framing is warranted by new information. Thus, people tend to lock into frames when interpreting states of affairs, limiting cognitive flexibility. Kahneman and Tversky’s central contribution to psychology is probably their identification of the various heuristics or cognitive shortcuts (availability, representativeness, anchoring, etc.) humans utilize in judgment and decision-making. These heuristics reflect the tradeoff between flexibility and efficiency. In particular, they enhance efficiency at the cost of flexibility.[2]

Consider the “availability heuristic,” i.e., assessing probability by the ease with which examples come to mind. One may, for example, conclude that smoking is safe because a grandparent who smoked two packs a day lived to be 100 or that Frenchmen are “jerks” because of one unpleasant encounter with a Frenchman. Inference here relies on what is salient in memory. In making such inferences, one isn’t rationally exploiting all relevant information. One is simply consulting concrete, available exemplars. One is, in effect, tied to a frame, inhibiting flexibility.

Or consider the “representativeness heuristic,” i.e., judging probabilities on the basis of resemblance. The most famous illustration of the representativeness heuristic is the “Linda problem,” in which test subjects are given this description:

Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice and also participated in antinuclear demonstrations.

The subjects are then asked to rank, in order of probability, various statements describing Linda’s present employment and activities. Two of the items are (a) “Linda is a bank teller” and (b) “Linda is a bank teller and active in the feminist movement.” Go ahead. Consider which of these is more probable.

Overwhelmingly, people say “b,” a choice that exhibits the conjunction fallacy. If Linda is a “bank teller” and “active in the feminist movement,” she is necessarily also “a bank teller.” And it cannot be more probable that she’s both of these things than that she is one of them. Originally, Kahneman and Tversky obtained this result when they embedded the two statements above among six others. But Kahneman later found that even when the choices were confined to these two, people still mostly considered “b” more probable. A striking result.

Why would people be so confused? In making their choice, people are consulting their concept (image, stereotype) of a woman who is active in the feminist movement. And Linda resembles that concept. So in the case of “representativeness,” people consult a constructed image or stereotype and look for resemblances between that stereotype and a real-world exemplar. In the case of “availability,” by contrast, people consult concrete exemplars that lie in memory. But the underlying phenomenon is the same. In both cases, people utilize images (nonlinguistic information) to make inferences. To override these system 1 intuitions and assess probabilities correctly, one must engage in the effortful system 2 work of broadening one’s frame and thinking logically about the “event space.” Of course, it helps to have a background in probability theory, which would enable one to use statistical tools to assess states of affairs in broader perspective.
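The logic that the representativeness intuition overrides is the conjunction rule: P(A and B) ≤ P(A), whatever B is. A minimal sketch in Python, with invented probabilities:

```python
# The conjunction rule: P(teller AND feminist) can never exceed P(teller),
# no matter how well Linda fits the feminist stereotype. Numbers invented.

p_teller = 0.05                 # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.90  # even if nearly all such tellers are feminists...

p_both = p_teller * p_feminist_given_teller

assert p_both <= p_teller       # always true, since P(B given A) <= 1
print(p_teller, round(p_both, 3))  # 0.05 0.045
```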

An even more brazen manifestation of this general phenomenon is “anchoring.” Anchoring occurs when people are primed to focus on a certain value for a quantity and then, asked to estimate the actual value of the quantity, produce an estimate close to the value on which they were primed. For instance, Kahneman and Tversky once rigged a spinning wheel to stop at either 10 or 65 and asked test subjects to write down whatever number came up. Then they asked their subjects (1) “Is the percentage of African nations among UN members larger or smaller than the number you just wrote?” and (2) “What is your best guess of the percentage of African nations in the UN?” Obviously, whatever number comes up on a spinning wheel should be irrelevant to one’s estimate of the percentage of African nations in the UN. Yet, as Kahneman notes, “The average estimates of those who saw 10 and 65 were 25% and 45%, respectively.” Anchoring, Kahneman tells us, is “one of the most reliable and robust results of experimental psychology” (p. 119).

Another example. Justice should be impartial. So judges in our society undergo an onerous apprenticeship: law school, the bar exam, years of legal practice, and finally, for a select few, elevation to the bench. The reason for this extended vetting is so that we can have confidence that those chosen will have the expertise and character to overcome normal human biases and administer justice fairly. Thus, one demonstration of anchoring is particularly unsettling:

German judges with an average of more than 15 years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence her to a term in prison greater or less, in months, than the number showing on the dice. Finally, the judges were asked to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who had rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%. (pp. 125-6)
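Kahneman quantifies such effects with an anchoring index: the ratio of the spread in average estimates to the spread in anchors, where 100% means estimates track the anchor one-for-one and 0% means the anchor is ignored. A minimal sketch in Python, applied to the two studies above:

```python
def anchoring_index(low_anchor, high_anchor, est_at_low, est_at_high):
    """Ratio of the spread in average estimates to the spread in anchors."""
    return (est_at_high - est_at_low) / (high_anchor - low_anchor)

# German judges: dice anchors of 3 and 9, mean sentences of 5 and 8 months.
print(anchoring_index(3, 9, 5, 8))      # 0.5 -- the 50% effect quoted above

# UN study: wheel anchors of 10 and 65, mean estimates of 25% and 45%.
print(anchoring_index(10, 65, 25, 45))  # ~0.36
```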

Evidence suggests that anchoring occurs partly because the suggested value subconsciously primes people to utilize it in forming their estimate, a system 1 operation. Evidence also suggests that when people try to reason their way to a plausible estimate, a system 2 operation, they move from the primed value toward the correct value but stop short (“insufficient adjustment,” as it’s called). Either way, anchoring embodies the principles illustrated above with availability and representativeness. Cognition is costly, so we utilize the cognitive contents that are accessible. In the cases of availability and representativeness, what’s accessible are images, so we consult them in forming judgments. In the case of anchoring, what’s accessible is a value on which we’ve been primed, so we consult that. System 2 cognition, which enables us to broaden our frames and surmount biases, is costly. So people stay with their intuitive frames to the extent possible, i.e., until they have good reason to doubt them.

Biases

The moral of Kahneman and Tversky’s work is that human cognition isn’t omniscient but an evolved capacity—one that confers enormous advantages but is costly and hence involves tradeoffs. The grand evolutionary tradeoff, as noted above, appears to be between flexibility and efficiency. Flexibility is fostered by the ease with which disparate mental contents can be summoned within a frame. Efficiency arises from the intense focus on elements within a frame to the exclusion of all else. The various heuristics identified by Kahneman and Tversky are energy-saving devices, cognitive shortcuts that promote efficiency at the expense of flexibility.

Kahneman and Tversky’s work, in a nutshell, documents various heuristics that promote cognitive efficiency and corresponding biases that limit cognitive flexibility. (Hence, the name: the “heuristics and biases” approach.)

Among several biases of judgment and choice that Kahneman cites throughout his book is one he labels “what you see is all there is” (or WYSIATI). This is a tendency people have to presume that all information relevant to a cognitive task is present-to-mind, accessible within one’s present frame, compelling people to systematically base conclusions on limited evidence. As an example, Kahneman cites a study where participants were exposed to either the plaintiff’s or the defense’s version of events in a court case. Fully aware that they had just one side of the story, and having the information needed to construct the other side’s version of events, people overwhelmingly based their assessments on the side of the story they heard. As Kahneman writes,

Participants who saw one-sided evidence were more confident of their judgments than those who saw both sides. This is just what you would expect if the confidence that people experience is determined by the coherence of the story they manage to construct from available information. It is the consistency of the information that matters for a good story, not its completeness. Indeed, you will often find that knowing little makes it easier to fit everything you know into a coherent pattern. (p. 87)

Coherence is usually an effective criterion of truth. Since information is invariably partial, the best method available for learning about the world beyond one’s evidentiary base is typically to look at how relationships cohere at the micro level and extrapolate outward. The alternative, verified correspondence to reality, is simply not available to most people most of the time. Thus, in making sense of the world, WYSIATI can be adaptive. As Kahneman says, it helps explain “why we can think fast, and how we are able to make sense of partial information in a complex world” (p. 87). It will, however, sometimes lead to erroneous inferences.

Kahneman usually illustrates biases with prosaic examples from everyday life. But presumably these biases are found in more sophisticated cognition as well, e.g., an economist using a mathematical model to represent the economy. Such frame-drawing is essential for seeing the systematic relationships that are the object of economic enquiry. But there’s always the question of whether the economist is suffering from WYSIATI bias, of whether, in building a model, the economist is making the “right” simplifications. A case can be made that such bias helped blind the profession to the dangers of financial deregulation and the housing bubble.

Rationality

Kahneman and Tversky’s work leads to an illuminating perspective on rationality. “Rationality,” as I shall define it, is (1) taking a broader perspective on states of affairs, i.e., broadening one’s frame, and (2) a state of openness to new information: a willingness and ability to re-frame states of affairs to accommodate new information. Through our rational powers, we can create altered cognitive spaces in which states of affairs can be represented and reasoned about more effectively. However, rationality is not our default mode of cognition. The more powerful player is system 1. For good reason, Kahneman says, system 1 is the “star of the show,” the main subject of his book. If one wants to understand the persistent, systematic judgments and choices people make, one studies primarily system 1, not system 2, cognition.

Though rationality is not a stable property of individuals (except perhaps idealized, i.e., nonexistent, ones), it can be a stable property of institutions. I’m thinking of science. The modern world exists because of the institutionalized openness to new information and the automatic re-framing of states of affairs, when necessary, embodied by science. Individual scientists have, of course, all the biases that Kahneman and Tversky identified as inherent to humans generally. But science itself is a means of systematically overcoming those biases.

What about markets? In some ways, markets resemble science. One may, for example, have a belief about the direction of a price. Since disconfirmation of a false belief in this case is tangible (one loses money), there is systematic testing of one’s hypothesis, and in the long run, corrections get made. So rationality has a place in markets. But it’s hardly the “star of the show,” contrary to what many economists came to believe over the last four decades. The kind of information markets utilize is ephemeral, i.e., it’s generally about demand or supply conditions in specific places over finite time periods. Much of that information may be obsolete before equilibrium outcomes even occur. The human participants in markets are, of course, subject to all the biases Kahneman and Tversky identified. And those biases undoubtedly play a significant role in market outcomes. The implication is that the proper object of economic enquiry isn’t just long-run rational-expectations equilibria but the inherent chaos of markets: the processes that lead to panics, manias, crashes, bubbles, etc. Behavioral economics, inspired by the work of Kahneman and Tversky, has of course taken up that charge.

The heuristics Kahneman and Tversky identified evolved because they were adaptive in the environment in which humans evolved. But what about the modern world? Are these heuristics adaptive today? For most purposes of daily life, no doubt, they’re still adaptive. But in an industrialized democracy, citizens are expected to make choices about economic policy, foreign affairs, government budgets, environmental policy, etc., all of which require system 2 cognition. The information relevant to these issues is overwhelming. The frame-building needed to accommodate it is effortful—so much so that we mostly outsource it to others: pundits, politicians, religious authorities, public intellectuals, academics, etc. The problem is that the people doing the frame-building have varying expertise and their own motives for framing issues in certain ways. Politicians, in particular, are incentivized to draw frames that exploit the moral intuitions of potential voters while enabling them to enact policies that benefit not the public but narrow private interests, e.g., campaign donors and industries with large lobbying presences.

The cost-benefit ratio of the heuristics Kahneman and Tversky identified has almost certainly changed in modern times, arguably for the worse, since the potential consequences of locking into frames are more severe now than in earlier times. Earlier humans may, for example, have been no better than modern humans at absorbing scientific findings about human effects on the environment. But their failure to do so normally wouldn’t have threatened their existence. There are exceptions, of course—episodes, well documented in Jared Diamond’s magisterial Collapse, where societies destroyed their resource bases and thus themselves. In modern times, however, we have an institutionalized frame-broadening process, science, that tells us what the dangers are and how they might be addressed. Whether we will use it to this end, however, remains to be seen.




Notes


1 If at a reference point of zero children, the exemption is $2,000 per child, a rich or poor family with two children has a total exemption of $4,000. If at a reference point of two children, the rich pay a surcharge of $2,500 per unborn child up to two, while the poor pay just $2,000 per unborn child up to two, a rich family with two children has a de facto exemption of $5,000, while a poor family still has an effective exemption of just $4,000.

2 Typically, it should be noted, the efficiency gains of narrow framing outweigh the costs of lost flexibility. When a cat encounters a mouse, it should optimally focus its limited attentional resources on the task at hand. That means limiting its thoughts, ideas, responses, etc., to just those relevant to the task of catching the mouse, avoiding distractors. One could say that when the cat stalks its prey, its “frame” narrows. Cognitive flexibility is compromised, but the net effect is positive (for the cat).

Broadening the Frame by Matt Carlson