The Knowledge Illusion
$17.00
| Quantity | Discount |
| --- | --- |
| 5+ | $12.75 |
Description
“The Knowledge Illusion is filled with insights on how we should deal with our individual ignorance and collective wisdom.” —Steven Pinker
We all think we know more than we actually do.
Humans have built hugely complex societies and technologies, but most of us don’t even know how a pen or a toilet works. How have we achieved so much despite understanding so little? Cognitive scientists Steven Sloman and Philip Fernbach argue that we survive and thrive despite our mental shortcomings because we live in a rich community of knowledge. The key to our intelligence lies in the people and things around us. We’re constantly drawing on information and expertise stored outside our heads: in our bodies, our environment, our possessions, and the community with which we interact—and usually we don’t even realize we’re doing it.
The human mind is both brilliant and pathetic. We have mastered fire, created democratic institutions, stood on the moon, and sequenced our genome. And yet each of us is error prone, sometimes irrational, and often ignorant. The fundamentally communal nature of intelligence and knowledge explains why we often assume we know more than we really do, why political opinions and false beliefs are so hard to change, and why individual-oriented approaches to education and management frequently fail. But our collaborative minds also enable us to do amazing things. The Knowledge Illusion contends that true genius can be found in the ways we create intelligence using the community around us.

“In The Knowledge Illusion, the cognitive scientists Steven Sloman and Philip Fernbach hammer another nail into the coffin of the rational individual… positing that not just rationality but the very idea of individual thinking is a myth.” —The New York Times Book Review
“Sloman and Fernbach offer clever demonstrations of how much we take for granted, and how little we actually understand… The book is stimulating, and any explanation of our current malaise that attributes it to cognitive failures—rather than putting it down to the moral wickedness of one group or another—is most welcome. Sloman and Fernbach are working to uproot a very important problem… [The Knowledge Illusion is] written with vigour and humanity.” —Financial Times
“The Knowledge Illusion is at once both obvious and profound: the limitations of the mind are no surprise, but the problem is that people so rarely think about them… In the context of partisan bubbles and fake news, the authors bring a necessary shot of humility: be sceptical of your own knowledge, and the wisdom of your crowd.” —The Economist
“A breezy guide to the mechanisms of human intelligence.” —Psychology Today
“In an increasingly polarized culture where certainty reigns supreme, a book advocating intellectual humility and recognition of the limits of understanding feels both revolutionary and necessary. The fact that it’s a fun and engaging page-turner is a bonus benefit for the reader.” —Publishers Weekly
“An utterly fascinating and unsettling book, The Knowledge Illusion shows us how everything we know is bound together with knowledge of others. Sloman and Fernbach break down many of our assumptions about science, how we think and how we know anything at all about the world in which we live. Despite the wide-scale deconstruction, the authors are upbeat… Anyone engaged in the work of nurturing healthy and flourishing communities will ultimately have to wrestle with the questions posed in this book. Sloman and Fernbach help us to do so gracefully, acknowledging the truth of how little we know, and finding hope in this precarious situation.” —Relevant Magazine
“We all know less than we think we do, including how much we know about how much we know. There’s no cure for this condition, but there is a treatment: this fascinating book. The Knowledge Illusion is filled with insights on how we should deal with our individual ignorance and collective wisdom.” —Steven Pinker, Johnstone Family Professor of Psychology, Harvard University, and author of How the Mind Works and The Stuff of Thought
“I love this book. A brilliant, eye-opening treatment of how little each of us knows, and how much all of us know. It’s magnificent, and it’s also a lot of fun. Read it!” —Cass R. Sunstein, coauthor of Nudge and founder and director, Program on Behavioral Economics and Public Policy, Harvard Law School
Steven Sloman is a professor of cognitive, linguistic, and psychological sciences at Brown University. He is the editor in chief of the journal Cognition. He lives with his wife in Providence, Rhode Island. His two children have flown the coop.
Philip Fernbach is a cognitive scientist and professor of marketing at the University of Colorado’s Leeds School of Business. He lives in Boulder, Colorado, with his wife and two children.

One
What We Know
Nuclear warfare lends itself to illusion. Alvin Graves was the scientific director of the U.S. military’s bomb testing program in the early fifties. He was the person who gave the order to go ahead with the disastrous Castle Bravo detonation discussed in the last chapter. No one in the world should have understood the dangers of radioactivity better than Graves. Eight years before Castle Bravo, in 1946, Graves was one of eight men in a room in Los Alamos, the nuclear laboratory in New Mexico, while another researcher, Louis Slotin, performed a tricky maneuver the great physicist Richard Feynman nicknamed “tickling the dragon’s tail.” Slotin was experimenting with plutonium, one of the radioactive ingredients used in nuclear bombs, to see how it behaved. The experiment involved closing the gap between two hemispheres of beryllium surrounding a core of plutonium. As the hemispheres got closer together, neutrons released from the plutonium reflected back off the beryllium, causing more neutrons to be released. The experiment was dangerous. If the hemispheres got too close, a chain reaction could release a burst of radiation. Remarkably, Slotin, an experienced and talented physicist, was using a flathead screwdriver to keep the hemispheres separated. When the screwdriver slipped and the hemispheres crashed together, the eight physicists in the room were bombarded with dangerous doses of radiation. Slotin took the worst of it and died in the infirmary nine days later. The rest of the team eventually recovered from the initial radiation sickness, though several died young of cancers and other diseases that may have been related to the accident.
How could such smart people be so dumb?
It’s true that accidents happen all the time. We’re all guilty of slicing our fingers with a knife or closing the car door on someone’s hand by mistake. But you’d hope a group of eminent physicists would know to depend on more than a handheld flathead screwdriver to separate themselves from fatal radiation poisoning. According to one of Slotin’s colleagues, there were much safer ways to do the plutonium experiment, and Slotin knew it. For instance, he could have fixed one hemisphere in position and raised the other from below. Then, if anything slipped out of position, gravity would separate the hemispheres harmlessly.
Why was Slotin so reckless? We suspect it’s because he experienced the same illusion that we have all experienced: that we understand how things work even when we don’t. The physicists’ surprise was like the surprise you feel when you try to fix a leaky faucet and end up flooding the bathroom, or when you try to help your daughter with her math homework and end up stumped by quadratic equations. Too often, our confidence that we know what’s going on is greater at the beginning of an episode than it is at the end.
Are such cases just random examples, or is there something more systematic going on? Do people have a habit of overestimating their understanding of how things work? Is knowledge more superficial than it seems? These are the questions that obsessed Frank Keil, a cognitive scientist who worked at Cornell for many years and moved to Yale in 1998. At Cornell, Keil had been busy studying the theories people have about how things work. He soon came to realize how shallow and incomplete those theories are, but he ran into a roadblock. He could not find a good method to demonstrate scientifically how much people know relative to how much they think they know. The methods he tried took too long or were too hard to score or led participants to just make stuff up. And then he had an epiphany, coming up with a method to show what he called the illusion of explanatory depth (IoED, for short) that did not suffer from these problems: “I distinctly remember one morning standing in the shower in our home in Guilford, Connecticut, and almost the entire IoED paradigm spilled out in that one long shower. I rushed into work and grabbed Leon Rozenblit, who had been working with me on the division of cognitive labor, and we started to map out all the details.”
Thus a method for studying ignorance was born, a method that involved simply asking people to generate an explanation and showing how that explanation affected their rating of their own understanding. If you were one of the many people that Rozenblit and Keil subsequently tested, you would be asked a series of questions like the following:
1. On a scale from 1 to 7, how well do you understand how zippers work?
2. How does a zipper work? Describe in as much detail as you can all the steps involved in a zipper’s operation.
If you’re like most of Rozenblit and Keil’s participants, you don’t work in a zipper factory and you have little to say in answer to the second question. You just don’t really know how zippers work. So, when asked this question:
3. Now, on the same 1 to 7 scale, rate your knowledge of how a zipper works again.
This time, you show a little more humility by lowering your rating. After trying to explain how a zipper works, most people realize they have little idea and thus lower their knowledge rating by a point or two.
This sort of demonstration shows that people live in an illusion. By their own admission, respondents thought they understood how zippers work better than they did. When people rated their knowledge the second time as lower, they were essentially saying, “I know less than I thought.” It’s remarkable how easy it is to disabuse people of their illusion; you merely have to ask them for an explanation. And this is true of more than zippers. Rozenblit and Keil obtained the same result with speedometers, piano keys, flush toilets, cylinder locks, helicopters, quartz watches, and sewing machines. And everyone they tested showed the illusion: graduate students at Yale as well as undergraduates at both an elite university and a regional public one. We have found the illusion countless times with undergraduates at a different Ivy League university, at a large public school, and testing random samples of Americans over the Internet. We have also found that people experience the illusion not only with everyday objects but with just about everything: People overestimate their understanding of political issues like tax policy and foreign relations, of hot-button scientific topics like GMOs and climate change, and even of their own finances. We have been studying psychological phenomena for a long time and it is rare to come across one as robust as the illusion of understanding.
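To make the paradigm concrete, here is a minimal sketch, in Python, of how the before-and-after ratings translate into a measured illusion. The individual ratings are invented for illustration; only the 1-to-7 scale and the typical drop of a point or two come from the description above.

```python
# Minimal sketch of scoring the illusion of explanatory depth (IoED).
# Each pair holds one participant's self-rated understanding (1-7 scale)
# before and after attempting an explanation. Values are invented.
ratings = [
    (5, 3),
    (6, 4),
    (4, 4),
    (5, 3),
]

pre = [before for before, _ in ratings]
post = [after for _, after in ratings]

mean_pre = sum(pre) / len(pre)
mean_post = sum(post) / len(post)

print(f"Mean rating before explaining: {mean_pre:.1f}")
print(f"Mean rating after explaining:  {mean_post:.1f}")
print(f"Average drop (the IoED effect): {mean_pre - mean_post:.1f} points")
```

With these made-up numbers the average rating falls by 1.5 points after the explanation attempt, matching the point-or-two drop that Rozenblit and Keil's participants showed.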
One interpretation of what occurs in these experiments is that the effort people make to explain something changes how they interpret what “knowledge” means. Maybe when asked to rate their knowledge, they are answering a different question the first time they are asked than they are the second time. They may interpret the first question as “How effective am I at thinking about zippers?” After attempting to explain how the object works, they instead assess how much knowledge they are actually able to articulate. If so, their second answer might have been to a question that they understood more as “How much knowledge about zippers am I able to put into words?” This seems unlikely, because Rozenblit and Keil used such careful and explicit instructions when they asked the knowledge questions. They told participants precisely what they meant by each scale value (1 to 7). But even if respondents were answering different questions before and after they tried to explain how the object worked, it remains true that their attempts to generate an explanation taught them about themselves: They realized that they have less knowledge that they can articulate than they thought. This is the essence of the illusion of explanatory depth. Before trying to explain something, people feel they have a reasonable level of understanding; after explaining, they don’t. Even if they lower their score because they’re defining the term “knowledge” differently, it remains a revelation to them that they know relatively little. According to Rozenblit and Keil, “many participants reported genuine surprise and new humility at how much less they knew than they originally thought.”
A telling example of the illusion of explanatory depth can be found in what people know about bicycles. Rebecca Lawson, a psychologist at the University of Liverpool, showed a group of psychology undergraduates a schematic drawing of a bicycle that was missing several parts of the frame as well as the chain and the pedals.
She asked the students to fill in the missing parts. Try it. What parts of the frame are missing? Where do the chain and pedals go?
It’s surprisingly difficult to answer these questions. In Lawson’s study, about half the students were unable to complete the drawings correctly (you can see some examples on the next page). They didn’t do any better when they were shown the correct drawing as well as three incorrect ones and were asked to pick out the correct one. Many chose pictures showing the chain around the front wheel as well as the back wheel, a configuration that would make it impossible to turn. Even expert cyclists were far less than perfect on this apparently easy task. It is striking how sketchy and shallow our understanding of familiar objects is, even objects that we encounter all the time and that operate via mechanisms that are easily perceived.
How Much Do We Know?
So we overestimate how much we know, suggesting that we’re more ignorant than we think we are. But how ignorant are we? Is it possible to estimate how much we know? Thomas Landauer tried to answer this question.
Landauer was a pioneer of cognitive science, holding academic appointments at Harvard, Dartmouth, Stanford, and Princeton and also spending twenty-five years trying to apply his insights at Bell Labs. He started his career in the 1960s, a time when cognitive scientists took seriously the idea that the mind is a kind of computer. Cognitive science emerged as a field in sync with the modern computer. As great mathematical minds like John von Neumann and Alan Turing developed the foundations of computing as we know it, the question arose whether the human mind works in the same way. Computers have an operating system that is run by a central processor that reads and writes to a digital memory using a small set of rules. Early cognitive scientists ran with the idea that the mind does too. The computer served as a metaphor that governed how the business of cognitive science was done. Thinking was assumed to be a kind of computer program that runs in people’s brains. One of Alan Turing’s claims to fame is that he took this idea to its logical extreme. If people work like computers, then it should be possible to program a computer to do what a human being can. Motivated by this idea, he took up the question Can machines think? in his classic 1950 paper, “Computing Machinery and Intelligence.”
In the 1980s, Landauer decided to estimate the size of human memory on the same scale that is used to measure the size of computer memories. As we write this book, a laptop computer comes with around 250 to 500 gigabytes of long-term storage. Landauer used several clever techniques to measure how much knowledge people have. For instance, he estimated the size of an average adult’s vocabulary and calculated how many bytes would be required to store that much information. He then used that result to estimate the size of the average adult’s entire knowledge base. The answer he got was half of a gigabyte.
He also made the estimate in a completely different way. Many experiments have been run by psychologists that ask people to read text, look at pictures, or hear words (real or nonsensical), sentences, or short passages of music. After a delay of between a few minutes and a few weeks, the psychologists test the memory of their subjects. One way to do this is to ask people to reproduce the material originally presented to them. This is a test of recall and can be quite punishing. Do you think you could recall a passage right now that you had heard only once before, a few weeks ago? Landauer analyzed a number of experiments that weren’t so hard on people. The experiments tended to test recognition—whether participants could identify a newly presented item (often a picture, word, or passage of music) as one that had been presented before or not. In some of these experiments, people were shown several items and had to pick the one they had seen before. This is a very sensitive way of testing memory; people would be able to do well even if their memories were weak. To estimate how much people remembered, Landauer relied on the difference in recognition performance between a group that had been exposed to the items and a group that had not. This difference is as pure a measure of memory as one can get.
Landauer’s brilliant move was to divide the measure of memory (the difference in recognition performance between the two groups) by the amount of time people spent learning the material in the first place. This told him the rate at which people are able to acquire information that they later remember. He also found a way to take into account the fact that people forget. The remarkable result of his analysis is that people acquire information at roughly the same rate regardless of the details of the procedure used in the experiment or the type of material being learned. They learned at approximately the same rate whether the items were visual, verbal, or musical.
Landauer next calculated how much information people have on hand—what the size of their knowledge base is—by assuming they learn at this same rate over the course of a seventy-year lifetime. Every technique he tried led to roughly the same answer: 1 gigabyte. He didn’t claim that this answer is precisely correct. But even if it’s off by a factor of 10, even if people store 10 times more or 10 times less than 1 gigabyte, it remains a puny amount. It’s just a tiny fraction of what a modern laptop can retain. Human beings are not warehouses of knowledge.
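To see how the arithmetic of such an estimate hangs together, here is a rough back-of-the-envelope sketch. The 1 gigabyte total and the seventy-year lifetime come from the text; the sixteen waking hours per day is our own illustrative assumption, so the implied learning rate is a ballpark figure, not Landauer's published number.

```python
# Back-of-the-envelope check on Landauer's lifetime-knowledge estimate.
# Given roughly 1 GB accumulated over a 70-year lifetime, what average
# retention rate does that imply? The waking-hours figure is assumed.
SECONDS_PER_YEAR = 365 * 24 * 3600

lifetime_years = 70
waking_hours_per_day = 16  # assumption, not a figure from the book
waking_seconds = lifetime_years * SECONDS_PER_YEAR * (waking_hours_per_day / 24)

total_bits = 1e9 * 8  # ~1 gigabyte, per Landauer's estimate

implied_rate = total_bits / waking_seconds
print(f"Waking seconds in {lifetime_years} years: {waking_seconds:.2e}")
print(f"Implied retention rate: {implied_rate:.1f} bits per second")
```

The answer works out to a few bits per second of durably retained information, which underlines the point: whatever the exact rate, a lifetime's accumulation is dwarfed by an ordinary laptop's storage.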
From one perspective, this is shocking. There is so much to know and, as functioning adults, we know a lot. We watch the news and don’t get hopelessly confused. We engage in conversations about a wide range of topics. We get at least a few answers right when we watch Jeopardy! We all speak at least one language. Surely we know much more than a fraction of what can be retained by a small machine that can be carried around in a backpack.
But this is only shocking if you believe the human mind works like a computer. The model of the mind as a machine designed to encode and retain memories breaks down when you consider the complexity of the world we interact with. It would be futile for memory to be designed to hold tons of information because there’s just too much out there.
Additional information
| Weight | 9.4 oz |
| --- | --- |
| Dimensions | 0.8 × 5.51 × 8.2 in |
| Imprint | |
| ISBN-13 | |
| ISBN-10 | |
| Author | Steven Sloman, Philip Fernbach |
| Audience | |
| BISAC | |
| Subjects | BUS019000, knowledge, money, problem solving, decision making, behavioral science, PSY003000, ignorance, illusion, entrepreneurship, business books, consciousness, IQ test, cognitive science, science books for adults, cognitive psychology, biology gifts, neuroscience gifts, A.I., human intelligence, hive mind, deliberation, Human nature, business, science, education, intuition, innovation, creativity, human behavior, group think, memory, community, psychology, mind, intelligence, cognition, artificial intelligence, Brain, neuroscience, iq, economy, ai |
| Format | |