Broadening the Frame by Matt Carlson



A Thought on Expectations Formation

A psychological experiment of the 1960s, called the Wason selection task, revealed a curious phenomenon. The task, developed by psychologist Peter Wason, confronts a test subject with four cards placed on a table. On one side of each card is a letter; on the reverse side is a number. The visible faces show:

[ A ]     [ M ]     [ 6 ]     [ 5 ]

And the subject is given a rule such as: If a card has a vowel on one side, then it has an even number on the reverse side. The subject is then asked to indicate all and only those cards that must be turned over to determine whether the rule is true.

The correct answers are “A” and “5.” Why? To verify that the rule holds, we want to find potential violators of it. If we find none, the rule holds. The rule says that if a card has a vowel on one side, then it has an even number on the reverse side. So we check the “A” card to see whether it has an odd number on the reverse side. If it does, the rule is falsified. If it doesn’t, the rule holds (at least in the case of this card). We wouldn’t check the “M” card, since the rule says nothing about cards with consonants on them. So whatever is on the reverse side of the “M” card, it couldn’t violate the rule. We also wouldn’t check the “6” card. The rule says that if a card has a vowel on it, then it must have an even number on the reverse side. It doesn’t say that if a card has an even number on it, then it must have a vowel on the reverse side. So turning over the “6” card could not tell us whether the rule holds. Finally, we would check the “5” card. Again, the rule says that if a card has a vowel on one side, then it must have an even number on the reverse side. But what if a card has an odd number, like “5,” on one side and a vowel, like “A,” on the reverse side? That would violate the rule. So we must check the “5” card.
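For readers who like to see the selection criterion spelled out mechanically, here is a minimal Python sketch. The card faces come from the example above; the function names are mine.

```python
# A card must be turned over only if its hidden side could falsify the rule
# "if a card has a vowel on one side, then it has an even number on the other."

def is_vowel(letter):
    return letter.upper() in "AEIOU"

def must_turn_over(visible_face):
    if isinstance(visible_face, str):   # a letter is showing; a number is hidden
        return is_vowel(visible_face)   # a vowel could conceal an odd number
    return visible_face % 2 != 0        # an odd number could conceal a vowel

cards = ["A", "M", 6, 5]
print([card for card in cards if must_turn_over(card)])  # -> ['A', 5]
```

Running it lists exactly the two cards whose hidden sides could reveal a violation.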

Nearly everyone gets this wrong. People usually correctly choose the “A” card, but only 5 to 10 percent also choose the “5” card. Usually, subjects choose two cards: “A” and “6.” Why this seemingly dismal performance? One might think it shows that humans have a poor grasp of conditional implication. But that turns out not to be the case. Consider this contrasting example from follow-up studies carried out by various researchers:

You’re a police officer charged with enforcing the drinking laws. You enter a tavern where the tavern owner says: If a person is 20 years old or younger, he or she is not drinking alcohol. Each card represents one person. The reverse sides of the first two cards state whether the person is drinking alcohol. The reverse sides of the second two cards state the person’s age.

[ Age: 15 ]     [ Age: 27 ]     [ Drinking Beer ]     [ Drinking Tea ]

You’re asked to indicate all and only those cards that must be turned over to determine whether the tavern owner’s statement is true. The answers are, of course, “Age: 15” and “Drinking Beer.” You’ll check what a 15-year-old, but not a 27-year-old, is drinking. And you’ll check the age of a beer drinker but not a tea drinker. Logically, the task is the same as the abstract version of the task described above. In both cases, one seeks to verify a conditional rule by looking for violators of it. Yet nearly everyone gets this one right. Why the difference?

Psychologists have debated the issue for many years. My view is that there’s a semantic and a syntactic aspect to it. The semantic aspect is that people understand from everyday life what the conditional statement of the tavern owner means. That is, they know about drinking laws and how they work. The syntactic aspect is that, to understand even the concrete rule, one must also grasp the structure of the statement, its syntax. My hypothesis is that one’s understanding of the semantics of the situation (the context) cues an underlying syntactic structure (part of universal grammar) onto which the content is fitted. By fitting the content to an underlying syntactic structure, I propose, one gains a full grasp of conditional implication, including the inference rules modus ponens and modus tollens. With the abstract rule, by contrast, there is no semantic clue to cue the relevant syntactic structure, so people don’t grasp conditional implication in this case.
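For reference, the two inference rules can be stated schematically (standard logical notation, not part of the original argument):

```latex
\[
\text{modus ponens: } \frac{p \rightarrow q \qquad p}{q}
\qquad\qquad
\text{modus tollens: } \frac{p \rightarrow q \qquad \neg q}{\neg p}
\]
```

It is the modus tollens pattern that licenses turning over the “5” card in the abstract task and checking the beer drinker’s age in the concrete one.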

To illustrate, consider what a proposition communicates. First, take a simple declarative sentence: “The unicorn lies in the rain.” It’s a concrete statement. It picks out a real (real in a world that includes unicorns) occurrence that humans can easily grasp conceptually. Comprehension of the sentence does not mean imagining a “pattern” of a unicorn lying in the rain but rather an actual unicorn lying in the rain, raindrops moistening its coat, etc. Now take the conditional statement, “If it rains, the unicorn lies beneath the tree.” The topic is no longer a unicorn but a rule, namely, the rule that if it rains, the unicorn lies beneath the tree. Superficially, the sentence may seem to depict a mere pattern—the unicorn lying beneath the tree when it rains. But there’s more to it than that. The sentence expresses a regularity: it says that whenever it rains, the unicorn lies beneath the tree. Thus, in grasping the sentence, one imagines not just a pattern but some plausible, perhaps vaguely understood, mechanism underlying the pattern—some reason why the unicorn (systematically) lies beneath the tree when it rains. Such an inference can occur in multiple ways. But in this case, it occurs through the semantics of the situation—through an understanding of context. Humans, with their experience of the world, know about the undesirability of exposure to rain. So if an animal goes beneath a tree when it rains, we generally presume it’s to avoid getting wet. This suggests an implicit syntax, an additional “clause” not explicitly stated in the sentence but there nonetheless. I’ll set it in brackets:

“If it rains, the unicorn lies beneath the tree [to avoid getting wet].”

For some, it may not be “to avoid getting wet” but to avoid getting watermarks on its horn. Nevertheless, to grasp the statement, one must posit something in the implicit clause, some underlying mechanism that associates antecedent and consequent systematically. The statement in effect says that there’s something in the nature of things that causes the unicorn to go beneath the tree when it rains. Absent the implicit clause, the sentence lacks meaning. It would be like an incomplete sentence (say, “The unicorn beneath the tree”).

In the concrete version of the selection task (the drinking age rule), there is also an implicit clause:

“If a person is 20 years old or younger, he or she is not drinking alcohol [since we’re prohibiting such activity in accordance with the drinking laws].”

As with the unicorn, it’s the semantics of the situation (our knowledge of the drinking laws) that compels us to posit this particular mechanism as the underlying force generating the pattern.

In the abstract version of the selection task, by contrast, there’s no such mechanism. Confronted with the rule, “If a card has a vowel on one side, then it has an even number on the reverse side,” people can conceive of no plausible reason why a card with a vowel on one side should have an even number on the reverse side. There could be a game that has such a rule, which people could learn. And having learned it, people will perform the modus tollens inference nearly effortlessly. But when confronted with the rule initially, we perceive it to be about as meaningful as an incomplete sentence.

Thus, the key factor distinguishing performance on the abstract and concrete versions of the selection task appears to be that in the concrete case, one perceives a mechanism (a law, a social convention) that links antecedent and consequent, whereas in the abstract case, one perceives no such mechanism.

Note that the posited mechanism needn’t be well understood. To take another example: “If a mushroom is parasol-shaped and has white gills [description of a poisonous amanita mushroom], then one shouldn’t eat it.” The test of whether people grasp conditional implication is whether they can make the modus tollens inference, i.e., whether they can tell that if one eats such a mushroom and suffers no ill effects, then either the rule is wrong, or the mushroom isn’t in fact white-gilled and parasol-shaped. And it seems fair to say that most people will grasp this. But note that people needn’t know specifically why some mushrooms are toxic or how the toxins affect the body, etc. They need only understand that there’s something in the nature of things that ensures the stated consequent.

Another example: “If I flick the light switch, then the light turns on.” Again, people needn’t understand how light bulbs work to grasp conditional implication here, only that there’s something in the nature of things such that flicking light switches causes lights to turn on. It seems fair to say that most people will be able to make the modus tollens inference: if one flicks the switch and nothing happens, there’s something wrong in the normal workings of the mechanism whereby flicking switches causes lights to turn on.

Now to the implications for expectations formation. “Rational expectations” is often described as the idea that people will not be systematically wrong. In terms of the above analysis, this would seem to mean: if people believe if x then y, and then not-y occurs, yet x has also occurred, people won’t continue to believe if x then y. Suppose people believe, “If we set this three-year wage contract so that wages rise by 2 percent per year, then real purchasing power will be constant over that period.” And now suppose the contract is set in the specified way, yet real purchasing power falls. People are confronted with a falsification. And if they’re rational, they’ll discard their prior belief.

Parsing the rule according to the above schema, we have: “If we set this three-year wage contract so that wages rise by 2 percent per year, then real purchasing power will remain constant over that period [because inflation over the next three years will be 2 percent annually].” People generally won’t know exactly why inflation would be 2 percent annually. They may vaguely accept that the government, perhaps the central bank, will act in a way that keeps inflation at that rate. Or they may view it as resulting from something the president does. Or they may think it’s a physical law. But the idea that there is some concrete mechanism underlying their belief about the maintenance of purchasing power at a constant level is integral to the belief. The question of whether rational expectations applies in this case then is whether, when confronted by a false consequent, people make the modus tollens inference, i.e., whether, when not-y occurs (though x has occurred), people rationally conclude that if x then y is false. And probably in the case of wage contracts and inflation, people will be rational in this sense.
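To make the falsification concrete, here is a minimal numerical sketch. The 2 percent wage growth comes from the example; the 5 percent inflation rate is purely hypothetical, chosen only to show how the consequent fails.

```python
# Real purchasing power under a fixed nominal wage path.
# Wage growth of 2% per year is from the example above; the 5% inflation
# rate is a hypothetical value used to illustrate the falsification.

wage_growth = 0.02
actual_inflation = 0.05

nominal_wage = 100.0   # index the starting wage at 100
price_level = 1.0

for year in range(1, 4):
    nominal_wage *= 1 + wage_growth
    price_level *= 1 + actual_inflation
    print(f"year {year}: real purchasing power = {nominal_wage / price_level:.1f}")

# Output: roughly 97.1, 94.4, 91.7. Purchasing power falls each year, so the
# belief "wages rise 2%/yr => purchasing power is constant" has a false
# consequent, and modus tollens tells the wage-setters to discard the belief.
```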

So “rationality” doesn’t mean having perfect knowledge but efficiently utilizing all available information to discard false beliefs. This, of course, is what science is all about. Science is a system for efficiently allocating the best available information to those who can use it to update or falsify prior beliefs. The result is a body of unfalsified hypotheses, i.e., knowledge. But note that even scientific “facts” are posited relationships between variables, where the mechanisms underlying the relationships are ultimately mysterious. Hence, there is a certain frothiness even in science, especially at the cutting edge.

The key point here is that there is an inherent mimetic aspect to the transmission of information. A rule such as if x then y is transmitted from mind to mind, with the implicit bracketed expression that specifies the mechanism linking x and y at least somewhat (and perhaps very) murky in the minds of both informer and informed. The paradigmatic illustration is a recipe. A recipe is an algorithm that can be passed intact from mind to mind, enabling its possessor to replicate a particular dish. No one, including the recipe’s inventor, need understand the chemistry by which the recipe has its effect. But the extraordinary thing is that, even without knowledge of the underlying mechanisms, many people can, by following a series of if-then rules, achieve the same result.

Additionally, observe that information structures, as Kenneth Arrow noted many years ago, are indivisible. In the case of a recipe, this means one cannot modify it and still have the same dish. One way of looking at this: it’s easily imaginable that one’s utility is maximized, given one’s resources, with recipe A, and that a slight modification of the algorithm, to recipe A’ (not a change in the quantities of the ingredients but in the process of combining them), causes utility to drop significantly, but that the initial utility level is re-attained through a massive modification of the recipe, to recipe A”. In other words, there are “non-convexities,” a problem for consumer choice theory. The indivisibility of information suggests that chosen options can be utility-maximizing, somewhat arbitrary (since utility may be maximized at disparate points along a utility curve), and widely shared (since information structures are indivisible, and the transmission of information is mimetic).
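A toy sketch of the non-convexity claim, with entirely hypothetical utility numbers:

```python
# Utility as a function of which recipe variant is used (hypothetical values).
# A slight procedural modification (A -> A') drops utility sharply; only a
# massive modification (A'') re-attains the original level, so the maximum
# is reached at two disparate points rather than along a smooth curve.

utility = {
    "A":   10.0,   # original recipe
    "A'":   4.0,   # slightly modified procedure
    "A''": 10.0,   # massively modified procedure
}

top = max(utility.values())
maximizers = [recipe for recipe, u in utility.items() if u == top]
print(maximizers)  # -> ['A', "A''"]
```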

Rational expectations is a long-run equilibrium condition that can be useful in thought experimentation. By assuming that all agents are in effect scientists, the theorist eliminates the black box of human cognition from the analysis (controls for the “randomness” of human psychology), laying bare the “systematic” forces in the economy. Such a condition plausibly applies to the world of science. Science as an institution systematically funnels the best available information to scientists in a given field.

Experimentation is simplified through controls, so there is minimal ambiguity about when a claim if x then y has been falsified. And there is an ethic of acceptance when one’s belief or hypothesis has been falsified. I say “ethic,” but there’s no real choice in the matter. To refuse to accept empirical disproof of a hypothesis is to relegate oneself to pariah status, like the religious heretics of old.

Misdirection occurs in science, of course. Eugenics and the bleeding cure for ailments are examples. But these are “short-run” phenomena. Historically, the “short run” in science could be quite long, as with the bleeding cure and the Ptolemaic model of the solar system. But it’s fair to say that the “short run” in science gets shorter and shorter. The cold fusion “breakthrough” of Pons and Fleischmann lasted about three weeks. When the South Korean scientist Hwang Woo Suk faked stem cell data, it took just a few years for science to correct its error.

A kind of leveling also occurs in the economic world in the long run. If an expected price is wrong, the real world will reveal that tangibly, typically in the form of lost money. In the “short run” in which we live, however, beliefs and expectations are constantly buffeted by many forces. And unlike in science, there’s no disciplined subjection of beliefs to controlled experiment. Falsification in the economic world occurs but often through “hard landings”—certainly not through controlled laboratory experiments—and typically with significant costs.

A bubble occurs when people purchase an asset because they expect its price to keep rising. The idea that a price will keep rising implies that people believe the phenomenon is systematic, i.e., that there’s something in the nature of things generating the continuous price rise. In effect, people believe, “If I purchase asset x, then I will get a capital gain [because something is causing the price of asset x to rise].” The bracketed expression is necessarily present and necessarily vague: present, because people must believe the relationship is systematic (otherwise, they won’t act); vague, because the mechanism that would underlie such a relationship is largely hidden.

Why would such a meme get propagated? Wishful thinking is no doubt part of it. The economic self-interest of speculators seeking a well-timed exit is another factor. And of course all this feeds on itself: a rising price reinforces belief in a rising price and thus in the meme that says something is systematically generating the price rise. Then there’s the issue of irreducible uncertainty. By definition, from inside a bubble, one can’t tell with certainty that one is in a bubble. Until the bubble bursts, and there’s a tangible falsification of the belief, there will be debate about whether a bubble exists. And as long as there’s debate, interested parties will spin arguments or narratives convincing to at least some that the asset is a good investment. Bubbles burst when the price starts to fall, i.e., when supply outstrips demand. The meme is then empirically disconfirmed, and people have what Paul Krugman has called their “Wile E. Coyote moment” and start plummeting back to earth.

Two properties of information help explain bubbles: (1) the inherently mimetic aspect of information transfer, and (2) the indivisibility of information. Together they imply that many people can simultaneously hold the same wrong belief. Given the prevalence of certain conditions—a surfeit of savings, asymmetric information between borrowers and lenders, wealth inequality combined with aspirational consumption proclivities (as has been extensively analyzed by Robert Frank), moral hazard of banks (either because they’re too big to fail or because they can pass on loans to other parties), and minimal banking regulation—destructive bubbles will be commonplace (“white swans” rather than “black swans,” as Nouriel Roubini and Stephen Mihm have described them).