So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and that of others begins. One implication of the naturalness with which we divide cognitive labor, they write, is that there's "no sharp boundary between one person's ideas and knowledge" and "those of other members" of the group. This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn't have amounted to much. When it comes to new technologies, incomplete understanding is empowering. Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It's one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I'm talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea.
In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.) Sloman and Fernbach see this effect, which they call the "illusion of explanatory depth," just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We've been relying on one another's expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history.
As Mercier and Sperber write, this is one of many cases in which the environment changed too quickly for natural selection to catch up. Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, The Knowledge Illusion: Why We Never Think Alone (Riverhead), with a look at toilets. Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that's been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?
Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they'd earlier been satisfied with. This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren't the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments. Among the many, many issues our forebears didn't worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It's no wonder, then, that today reason often seems to fail.
Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two. In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who'd come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else's were actually their own, and vice versa. About half the participants realized what was going on.
Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who'd originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who'd started out pro-capital punishment were now even more in favor of it; those who'd opposed it were even more hostile. If reason is designed to generate sound judgments, then it's hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, "bent on confirming its belief that there are no cats around," would soon be dinner.
To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it's a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our hypersociability. Mercier and Sperber prefer the term "myside bias." Humans, they point out, aren't randomly credulous. Presented with someone else's argument, we're quite adept at spotting the weaknesses. Almost invariably, the positions we're blind about are our own. A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry.
Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber's argument runs, more or less, as follows: Humans' biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups. "Reason is an adaptation to the hypersocial niche humans have evolved for themselves," Mercier and Sperber write.
Habits of mind that seem weird or goofy or just plain dumb from an "intellectualist" point of view prove shrewd when seen from a social "interactionist" perspective. Consider what's become known as "confirmation bias," the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it's the subject of entire textbooks' worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime. The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question.
It isn't any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who's followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way? In a new book, The Enigma of Reason (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.
Once again, midway through the study, the students were informed that they'd been misled, and that the information they'd received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who'd received the first packet thought that he would avoid it. The students in the second group thought he'd embrace it. "Even after the evidence for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs," the researchers noted. In this case, the failure was "particularly impressive," since two data points would never have been enough information to generalize from. The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can't think straight was shocking.
Those assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded. "Once formed," the researchers observed dryly, "impressions are remarkably perseverant." A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank and George. Frank's bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men's responses on what the researchers called the "risky-conservative choice test." According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who'd been put "on report" by his supervisors several times.
They identified the real note in only ten instances. As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they'd been obtained from the Los Angeles County coroner's office—the scores were fictitious. The students who'd been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong. In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened.
Why Facts Don't Change Our Minds
The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. Illustration by Gérard Dubois. In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones. Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless.