August 27, 2015

The Trouble With Intuition


Figure © Daniel Simons

When subjects were shown a video and told to count the number of times a basketball was passed, half of the viewers failed to notice a research assistant in a gorilla suit strolling through the scene.

"How do I love thee? Let me count the ways." Those lines by Elizabeth Barrett Browning, written while she was being courted by Robert Browning, and among the most famous in all of poetry, open one of 44 of her love poems that are collectively known as Sonnets From the Portuguese. The Sonnets were first published in book form in 1850, as part of the second edition of her collected poems. At least that's what poetry scholars and bibliophiles thought for several decades until Thomas J. Wise announced that he had discovered a previously unknown earlier printing.

Wise was a celebrated British collector of rare books and manuscripts in the late 19th and early 20th centuries; the catalog of his private library filled 11 volumes. In 1885 an author named W.C. Bennett showed Wise several copies of a 47-page, privately printed pamphlet of the Sonnets dated 1847 and marked "not for publication." Private printings of literature were not unusual in that era. What was unusual was the discovery of a previously unknown collection of such important poetry that predated the first known public printing. Wise immediately recognized the rarity and value of the pamphlets, and bought one for £10 (about $1,200 today). Over the ensuing years, Wise discovered other previously unknown collections of minor works by major authors, including some by Alfred Tennyson, Charles Dickens, and Robert Louis Stevenson. Collectors and libraries snapped up those volumes, and Wise's fame and wealth grew.

At first glance, Wise's items seemed authentic, especially to a buyer considering just one pamphlet at a time. Each one fit nicely with the rest of its author's body of work. For example, the date of Browning's private printing of the Sonnets corresponded with a gap of four years between when the poems were finished and when they were officially published. The pamphlets also appeared to be authentic in format and typography—just how an expert would expect them to look and feel. Although the steady stream of new discoveries by Wise did raise isolated suspicions that something might be amiss, the pamphlets he distributed were broadly respected as genuine for decades.

Some 45 years after Wise found the private edition of the Sonnets, two British book dealers, John Carter and Graham Pollard, decided to investigate his finds. They re-examined the Browning volume and identified eight reasons why its existence was inconsistent with typical practices of the era. For example, none of the copies had been inscribed by the author, none were trimmed and bound in the customary way, and the Brownings never mentioned the special private printing in any letters, memoirs, or other documents.

The array of circumstantial evidence was impressive, but not conclusive. Carter and Pollard found their smoking gun with scientific analysis. First they documented that all paper used for printing before the 1860s was created from rags, straw, or a strawlike material called esparto. They then examined one of Wise's Sonnets pamphlets under a microscope and found that the paper was made from chemically treated wood pulp, a technique that wasn't used in Britain until the 1870s. The 1847 edition had to be a fake. The two dealers proved that nearly half of the other Wise pamphlets they examined were also fraudulent, and published their findings in a 412-page book. Wise denied the charges until he died, in 1937, but subsequent investigations confirmed Carter and Pollard's work. Today Wise is remembered as one of the greatest forgers of all time.

If you have read Malcolm Gladwell's 2005 book, Blink, which is subtitled The Power of Thinking Without Thinking, this tale might seem familiar. Blink begins with a similar story, about an ancient Greek statue known as a kouros that was offered to the Getty Museum in Los Angeles. The curators believed the kouros to be genuine, and, relying on scientific tests of its authenticity, they bought it for nearly $10 million. But other art historians, upon first viewing the statue, instantly thought that it was hinky. The former director of the Metropolitan Museum of Art said his first reaction was "fresh"—as in, too fresh-looking to be so old. A Greek archaeologist "saw the statue and immediately felt cold." According to Gladwell, those experts' intuitions proved correct, and the initial scientific tests that authenticated the statue turned out to have been faulty.

Gladwell uses the kouros forgery to launch his case for the surprising power of intuitive snap judgments and instinctive gut feelings, which he calls "rapid cognition." As he puts it, "there can be as much value in the blink of an eye as in months of rational analysis." Gladwell goes on to argue that rapid intuitions often outperform rational analyses, and that excessive thinking can lead us to mistakenly second-guess what we know in our gut to be true. Is that conclusion merited? In the case of Wise's pamphlets, the top experts and collectors of the time trusted in their authenticity, but their rapid judgments were wrong, and only painstaking systematic analysis, which integrated multiple types of information from a variety of sources, uncovered the truth. And even for the kouros, expert intuition was divided: The Getty's curators must have initially thought the statue looked authentic, or they wouldn't have considered buying it in the first place. In fact, some experts still believe the kouros to be authentic, and the Getty today labels it "Greek, about 530 B.C., or modern forgery."

Cases in which forgeries intuitively appear real but are later discovered through analysis to be frauds are fairly common in the art world. Many of the master forger Han van Meegeren's paintings hung in galleries around the world before scientific analysis showed that they were not authentic Vermeers. Indeed, the skill of the forger lies precisely in creating works that appear at first glance, even to experts, to be genuine, and that can be exposed as fakes only through lengthy, expensive study. Like Wise's pamphlets, the infamous "Hitler Diaries" were declared authentic and made public in the 1980s before paper-testing proved that they had been created after the end of World War II.

Gladwell's message in Blink has been interpreted by some readers as a broad license to rely on intuition and dispense with analysis, which can lead to flawed decisions. In his book, Too Big to Fail: The Inside Story of How Wall Street and Washington Fought to Save the Financial System From Crisis—and Themselves (Viking, 2009), the New York Times journalist Andrew Ross Sorkin notes that the former Lehman Brothers president Joseph Gregory was a devotee of Blink who even hired Gladwell to lecture his employees "on trusting their instincts when making difficult decisions." (Gregory was removed from power as his firm circled the bankruptcy drain in 2008.)

Intuition means different things to different people. To some it refers to a sudden flash of insight, or even the spiritual experience of discovering a previously hidden truth. In its more mundane form, intuition refers to a way of knowing and deciding that is distinct from and complements logical analysis. The psychologist Daniel Kahneman nicely contrasts the two: "Intuitive thinking is perception-like, rapid, effortless. ... Deliberate thinking is reasoning-like, critical, and analytic; it is also slow, effortful, controlled, and rule-governed." Intuition can help us make good decisions without expending the time and effort needed to calculate the optimal decision, but shortcuts sometimes lead to dead ends. Kahneman received the Nobel Memorial Prize in Economic Sciences in 2002 for his work with the late Amos Tversky showing how people often rely on intuitive heuristics (rules of thumb) rather than rational analysis, and how those mental shortcuts often lead us to make decisions that are systematically biased and suboptimal.

Gerd Gigerenzer, director of the Max Planck Institute for Human Development and author of Gut Feelings: The Intelligence of the Unconscious (Viking, 2007), takes a more benign view of intuition: Intuitive heuristics are often well adapted to the environments in which the human mind evolved, and they yield surprisingly good results even in the modern world. For example, he argues, choosing to invest in companies based on whether you recognize their names can produce reasonably good returns. The same holds for picking which tennis player is likely to win a match. Recognition is a prime example of intuitive, rapid, effortless cognition. Gigerenzer's book jacket describes his research as a "major source for Malcolm Gladwell's Blink," but the popular veneration of intuitive decision-making that sprang from Blink and similar works lacks the nuance of Gigerenzer's claims or those of other experimental psychologists who have studied the strengths and limits of intuition.

The idea that hunches can outperform reason is neither unique nor original to Malcolm Gladwell, of course. Most students and professors have long believed that, when in doubt, test-takers should stick with their first answers and "go with their gut." But data show that test-takers are more than twice as likely to change an incorrect answer to a correct one as vice versa.

Intuition does have its uses, but it should not be exalted above analysis. Intuition can't be beat when we are deciding which ice cream we like more, which songs are catchier, or which politician is most charismatic. The essence of those examples is the absence of any objective standard of quality—there's no method of analysis that will decisively determine which supermodel is more attractive or which orchestra audition was superior. The key to successful decision making is knowing when to trust your intuition and when to be wary of it. And that's a message that has been drowned out in the recent celebration of intuition, gut feelings, and rapid cognition.

There is, moreover, one class of intuitions that consistently leads us astray—dangerously astray. These intuitions are stubbornly resistant to analysis, and it is exactly these intuitions that we shouldn't trust. Unfortunately, they are also the intuitions that we find the most compelling: mistaken intuitions about how our own minds work.

We met in the late 1990s at Harvard University, where Dan was a new psychology professor and Chris was a graduate student. As part of an undergraduate laboratory course Dan was teaching, we decided to re-examine some landmark studies the cognitive psychologist Ulric Neisser conducted in the 1970s. In one of those experiments, observers counted the number of times a group of three people wearing white shirts passed a basketball to one another while ignoring three people wearing black shirts who were also passing a ball. In the middle of the video, a woman carrying an open umbrella walked through the scene. Surprisingly, many of the observers didn't notice her. Some psychologists assumed that this failure was a side effect of the unusual video displays Neisser used—the players and the umbrella woman were all partially transparent and looked ghostly, making them somewhat harder to see. As a class project, we decided to test whether people could miss something that was opaque and fully visible.

We filmed the basketball-passing game with a single camera and, like Neisser, we had a female research assistant stroll through the game with an open umbrella. We also made a version in which we replaced the umbrella woman with a woman in a full-body gorilla suit, even having her stop in the middle of the game, turn toward the camera, thump her chest, and exit on the other side of the display nine seconds later. People might miss a woman, we thought, but they would definitely see a gorilla.

We were wrong. Fifty percent of the subjects in our study failed to notice the gorilla! Later research by others, with equipment that tracks subjects' eye movements, showed that people can miss the gorilla even when they look right at it. We were stunned, and so were the subjects themselves. When they viewed the video a second time without counting the passes, they often expressed shock: "I missed that?!" A few even accused us of sneakily replacing the "first tape" with a "second tape" that had a gorilla added in.

The finding that people fail to notice unexpected events when their attention is otherwise engaged is interesting. What is doubly intriguing is the mismatch between what we notice and what we think we will notice. In a separate study, Daniel Levin, of Vanderbilt University, and Bonnie Angelone, of Rowan University, read subjects a brief description of the gorilla experiment and asked them whether they would see the gorilla. Ninety percent said yes. Intuition told those research subjects (and us) that unexpected and distinctive events should draw attention, but our gorilla experiment revealed that intuition to be wrong. There are many cases in which this type of intuition—a strong belief about how our own minds work—can be consistently, persistently, and even dangerously wrong.

The existence of this class of faulty intuitions would just be an academic curiosity if it did not have such significant practical consequences. If you believe you will notice unexpected events regardless of how much of your attention is devoted to other tasks, you won't be vigilant enough for possible risks. Consider talking or texting on a cellphone while driving. Most people who do this believe, or act as though they believe, that as long as they keep their eyes on the road, they will notice anything important that happens, like a car suddenly braking or a child chasing a ball into the street. Cellphones, however, impair our driving not because holding one takes a hand off the wheel, but because holding a conversation with someone we can't see—and often can't even hear well—uses up a considerable amount of our finite capacity for paying attention.

Flawed intuitions about the mind extend to virtually every other domain of cognition. Consider eyewitness memory. In the vast majority of cases in which DNA evidence exonerated a death-row inmate, the original conviction was based largely on the testimony of a confident eyewitness with a vivid memory of the crime. Jurors (and everyone else) tend to intuitively trust that when people are certain, they are likely to be right. Almost all of us have precise memories of how we heard about the attacks of 9/11 or, if we're old enough, the Challenger explosion or President John F. Kennedy's assassination. But you should not be certain that your detailed memories of those events are accurate. Study after study has shown that memories of important events like those are no more accurate than run-of-the-mill memories. They are more vivid, and we are therefore more confident about their accuracy, but that confidence is largely an illusion.

Other intuitions about the mind's workings fail in the same way. For example, it's easy to fall prey to the belief that you understand complex systems better than you really do. This instinct played a role in the financial crisis, especially among investors who bought newfangled mortgage-related bonds whose risks they did not truly appreciate.

The most troublesome aspect of intuition may be the misleading role it plays in how we perceive patterns and identify causal relationships. When two events occur in close temporal proximity, and the first one plausibly could have caused the second one, we tend to infer that this is what must have happened. A tendency to jump to that conclusion is not a bad "default setting" for the human mind, especially in light of the circumstances in which it evolved. In a nonindustrialized society, with no computers, Internet, Google, or even public libraries to access information, the only ways to infer cause and effect were personal experiences and the stories told by others. If a friend ate berries from a particular bush and soon became sick, you might wisely avoid those berries yourself. But your friend's illness might have had nothing to do with the berries.

To determine whether two events are truly associated, we must consider how frequently each one occurs by itself, and how frequently they occur together. With just one or a few anecdotes, that's impossible, so it pays to err on the side of caution when inferring the existence of an association from a small number of examples. Verifying the existence of a genuine association becomes trivial, though, when we can rely on the accumulated experience of hundreds, thousands, or even millions of people. We can decide which car to buy based on the compiled ratings in Consumer Reports rather than on the rantings of a disgruntled owner who happens to be a cousin (or on the manufacturer's slick ad campaign). We can rely on accumulated data, but too often we don't. Why not? Because our intuitions respond to vivid stories, not abstract statistics.
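The arithmetic behind that comparison is easy to sketch. As a toy illustration (the counts below are invented for the berry anecdote from the previous paragraph), judging an association requires all four cells of the who-ate/who-got-sick table, while an anecdote supplies only one:

```python
# Invented counts: an anecdote reports only the "ate berries AND got
# sick" cell; assessing association requires all four cells.
ate_sick, ate_well = 3, 27        # people who ate the berries
not_sick, not_well = 10, 90      # people who did not

p_sick_if_ate = ate_sick / (ate_sick + ate_well)
p_sick_if_not = not_sick / (not_sick + not_well)

# Both rates are 0.10: illness is no more common among berry eaters,
# so there is no association, however vivid the friend's story.
print(p_sick_if_ate, p_sick_if_not)
```

With these (hypothetical) numbers, berry eaters get sick at exactly the baseline rate; the lone vivid anecdote would have pointed to a cause that the full table rules out.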

Imagine that your 2-year-old child is diagnosed with an ear infection. Your pediatrician prescribes an antibiotic and, within 48 hours, your child feels better and the infection is gone. Did the antibiotic work? There is no evidence that it did. The infection might have resolved on its own without antibiotics. The first step in demonstrating the efficacy of a drug is to show that improvement rates are higher for people who receive it than for those who do not. That association is necessary, but it still does not show that the drug caused the improvement. The crucial second step is to randomly assign some patients to receive the antibiotic for their ear infections and others to receive a placebo. Only if the antibiotic group healed faster than the placebo group could you conclude that the antibiotic caused the improvement.
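A small simulation makes the point concrete. Assuming, hypothetically, that 80 percent of ear infections clear up on their own and that the drug adds nothing at all, the treated group still shows an impressive "cure rate"; only the placebo arm exposes it as baseline recovery:

```python
import random

random.seed(0)  # reproducible toy simulation

def recovers(base_rate=0.80, drug_boost=0.00):
    """One simulated child; drug_boost=0 means the drug does nothing."""
    return random.random() < base_rate + drug_boost

# 1,000 children per arm; the "antibiotic" here is a pure placebo.
treated = sum(recovers(drug_boost=0.00) for _ in range(1000))
placebo = sum(recovers() for _ in range(1000))

# Both groups recover at roughly the 80% baseline. Seen alone, the
# treated group's rate looks like proof that the antibiotic worked.
print(treated / 1000, placebo / 1000)
```

The numbers and recovery rate are invented for illustration, but the logic is the article's: without the comparison group, spontaneous recovery is indistinguishable from a drug effect.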

Compared with epidemiological studies and clinical trials, anecdotes—with their lack of control groups—look downright pitiful. Yet we rely on anecdotal causal reasoning all the time, without even realizing the giant leaps of logic we are making. In a recent issue of The New Yorker, John Cassidy writes about U.S. Treasury Secretary Timothy Geithner's efforts to combat the financial crisis. "It is inarguable," writes Cassidy, "that Geithner's stabilization plan has proved more effective than many observers expected, this one included." It's easy for even a highly educated reader to pass over a sentence like that one and miss its unjustified inference about causation. The problem lies with the word "effective." How do we know what effect Geithner's plan had? History gives us a sample size of only one—in essence, a very long anecdote. We know what financial conditions were before the plan and what they are now (in each case, only to the extent that we can measure them reliably—another pitfall in assessing causality), but how do we know that things wouldn't have improved on their own had the plan never been adopted? Perhaps they would have improved even more without Geithner's intervention, or much less. The "data" are consistent with all of those possibilities, but Cassidy and most of his readers are drawn to the most intuitive conclusion: that Geithner's 2009 plan caused the improvements seen in 2010.

We are not naïvely arguing that people should trust only double-blind studies with random assignment when inferring cause. If a man points a loaded gun at us, we don't doubt the outcome of his pulling the trigger, and we won't wait to be shown a peer-reviewed journal article about the appropriate controlled experiment before we start running. There is a plausible and well-established mechanism by which bullets fired from a gun kill people. In simple situations like that, we can safely generalize from a set of principles (Newtonian mechanics) that are well understood and that do involve causal tests that long ago proved the principles correct. In the Geithner example, though, and in many, many other situations, there is no simple analogy with well-understood causal relationships. For complex systems like the global economy, human physiology, or the human mind itself, inferring cause from single examples is not logically justified, because we do not have a complete enough understanding of the internal workings of the system.

Take the case of the perceived link between childhood vaccinations and autism. Nowadays children receive several vaccines before age 2, and autism is often diagnosed in 2- and 3-year-olds. When a child is diagnosed with autism, parents naturally and understandably seek possible causes. Vaccination involves the introduction of unusual foreign substances (dead viruses, attenuated live viruses, and preservative chemicals) into the body, so it's easy to imagine that those things could profoundly affect a child's behavior. But more than a dozen large-scale epidemiological studies, involving hundreds of thousands of subjects, have shown that children who were vaccinated are no more likely to be diagnosed with autism than are children who were not vaccinated. In other words, there is no association between vaccination and autism. And in the absence of an association, there cannot be a causal link.

Many people who believe that vaccination can cause autism are aware of those data. But the intuitive cause-detector in our minds is driven by stories, not statistics, and once a compelling story leads us to ascribe an effect to a cause, we can hold to that belief as stubbornly as when we trust in our ability to talk on a phone while driving—or to spot a person wearing a gorilla suit. In a way, intuition and statistics are like oil and water: They can easily coexist in our minds without ever interacting. That's one reason some in the media continue to treat the vaccine-autism link as a "controversy"—the emotional stories of parents have a constant tug on our beliefs because their effects can't be wiped away by knowing the statistics, no matter how solid they are.

Malcolm Gladwell is regarded as an exceptional science writer in part because of the effective way he uses stories. But it's not just that Gladwell is a better storyteller than his peers. He deploys his stories—anecdotes, really—as part of a compelling rhetorical strategy. Gladwell surrounds his arguments with examples that suggest an association, letting his readers infer the causal relationships he wants to convey. In Blink, he begins his argument with a case in which intuition revealed the kouros fraud, and readers conclude for themselves that putting more trust in their intuition can make them better thinkers. Indeed, experiments have shown that the more mental work readers have to do to infer a cause from a set of facts, the more memorable the causal inference will be. Gladwell, like most good writers, is a master of letting readers "discover" his argument rather than hitting them over the head with it.

There is nothing wrong with using Gladwell's rhetorical technique, as long as the examples are truly illustrative of a valid causal relationship. Charities do this when they highlight the plight of a single individual, with a name and a face, rather than the numerical magnitude of a problem: The stories bring in more money than the statistics. The danger comes from the fact that we promiscuously infer cause from such positive anecdotes in the absence of proper evidence, or even in the face of contradictory evidence. Most people aren't inveterate skeptics vigilantly testing each anecdote to make sure it is representative of an overall pattern.

The actress Jenny McCarthy has used her celebrity to promote proposed cures for autism, such as a special diet she designed for her own autistic son. She often talks about the thousands of parents who have let her know that her regimen helped their children. McCarthy believes, and wants her audience to believe, that those parents have made a valid inference about the effects of the diet. The accumulation of examples in which a possible cause and effect co-occur, no matter how emotionally compelling, provides no evidence of a true association. In McCarthy's case, parents who tried her cure and had no success are unlikely to write to her. Parents who didn't try the cure at all are even less likely to drop her a note reporting that their children got better without trying it—especially if they are among the millions of parents who have never even heard of her proposal.

To know whether intuition should trump analysis, we need more than case studies of initial impressions that were later vindicated. What we need to know is how often experts intuitively identify a forgery despite preliminary scientific analysis suggesting that it was genuine (the kouros case), and how often experts intuitively believe a piece to be genuine only to be proven wrong (the Thomas J. Wise forgeries). Conversely, how often do experts make the mistake of intuiting a forgery when scientific analysis later proves the work to be authentic? Without comparing how frequently intuitions outperform analysis for both genuine and fake items, there is no way to draw general lessons about the power of intuition.

The kouros example is effective because it capitalizes on our tendency to generalize from a single positive association, leading to the conclusion that intuition trumps reason. But in this case, a bit of thought would show that conclusion to be unlikely, even within the confined realm of art fakery. Think about how often experts throughout history have been duped by forgers because intuition told them that they were looking at the real thing. It is ironic that Gladwell (knowingly or not) exploits one of the greatest weaknesses of intuition—our tendency to blithely infer cause from anecdotes—in making his case for intuition's extraordinary power.

Intuition is not always wrong, but neither is it a shortcut around the hard work of logical analysis and rational choice. The trouble with intuition is that while intuitive modes of thought are easier to use than analytical modes, they are poorly adapted to many circumstances and decisions we face in the modern world. If we follow our gut instincts, we will talk on the telephone while we drive, have too much trust in eyewitnesses, and believe we know what causes what—in health care, finance, politics, and every other domain—without even realizing that we haven't considered the right evidence, let alone come to the right conclusions.

Daniel J. Simons is a professor of psychology at the University of Illinois at Urbana-Champaign. Christopher F. Chabris is an assistant professor of psychology at Union College in New York. They are the authors of the new book The Invisible Gorilla, and Other Ways Our Intuitions Deceive Us (Crown Publishers).


1. marka - June 01, 2010 at 07:51 pm

Thank you, thank you, thank you, for this article!

This reliance on Blink and its ilk is one of my pet peeves, and is difficult to dislodge, as noted.

I've seen these kinds of results often enough to underscore the many studies supporting skepticism about the overreaching claims for intuition (and positive thinking, and health cures, and ... so on). Don't bother us with the facts ... we know what's what.

One, in particular, that initially stumped me in my trial experiences - why would jurors, and occasionally judges, fall for an illogical argument that was demonstrably false? The pattern I detected was the confidence with which the advocate pushed the matter -- the more confident, the more absurd an argument could be advanced. Checking on the work of Tetlock @ Berkeley, among others, provided some support for that notion: the inverse relationship between confidence and accuracy.

So, evidence-based medicine initiatives, among others, have an uphill battle. My spouse - an allopathic physician - notes how large numbers of practising physicians reject evidence-based medicine, and instead rely on their own anecdotal experiences. And that this rejection means that evidence is harder to come by -- many physicians won't participate in studies. A number of surveys & other studies support this - so even in a 'scientific' field such as medicine, individual experience will often trump evidence. Don't bother me with statistics -- I know what's what ...

2. goxewu - June 03, 2010 at 09:29 am

There's a lot wrong with this post. I take "intuition" to be fairly synonymous with "hunch" and "gut feeling," especially in terms of making a decision. The errors in reasoning of post hoc ergo propter hoc (thinking that just because A preceded B, A must have caused B), mistaking correlation for causation, and giving too much credence to anecdotal evidence are not, at least to me, errors of relying on "intuition."

And "intuition" itself is often simply rapid subconscious reasoning, a mental activity similar to a physical reflex. When I'm playing third base in a softball game and a line drive is coming right at my head, I don't consciously calculate the speed and trajectory of the ball (or try to recall a statistical survey of third basemen knocked unconscious); I just duck. And I'm "right" in the sense that if I'd not ducked, the ball would have hit me in the face. Similarly, when I make an "intuitive" decision to, say, take a different driving route to Aunt Bertha's house, I'm actually rapidly and subconsciously collating the factors of weather, terrain, traffic, etc., in a rather reasonable, albeit speeded-up and subconscious manner.

There's also a danger that in overly dismissing anecdotal evidence we discount that great body of anecdotal evidence everybody over the age of toddlerhood possesses: "experience." People who have a lot of experience in, say, whitewater rafting, might look at a given section of river and tell you that it's just too dangerous to navigate. They don't have stats on the water speed, height and depth of the turbulence, or what percentage of rafters who've attempted to navigate this section this season have capsized, but they're probably right and they're not relying on "intuition." And Warren Buffett seems to have done rather well with investments based on anecdotal evidence--what his wife sees people buying at the mall.

A doctor's experience is increasingly discounted in allopathic--i.e., science-based Western--medicine. Doctors are less and less willing to diagnose even mild problems without subjecting the patient to an increasing number of tests with printout results. Of course, most of this is covering their rear ends against possible malpractice suits (a legit concern), but sometimes the doctor is a member of a "medical corporation" with a financial interest in a testing company.

Finally, the whole gorilla experiment proves little about "intuition." If the subjects were told simply to watch the video and then failed to see the gorilla, the experiment might signify something. But common sense (experience + reason) and even "intuition" tells you that if you tell people to concentrate intensely on X (e.g., the number of times the basketball is passed), they're liable to miss Y (e.g., the gorilla), which is totally unrelated to X. Any cook can relate an incident in which concentrating on one dish caused him/her to miss the smoke and fire of another dish burning right there in the same kitchen.

What I learned from this post was less not to rely on "intuition," and more not to rely on those two lab-coated gorillas walking through scenes of common sense--psychology professors.

3. gahnett - June 03, 2010 at 06:59 pm

I think there are a lot of interesting ideas in this post, but I agree with goxewu that it is not arranged well.

For example, there should be a discussion about sensibility--that part of us that gets shaped with experience and is the source of our judgment.

We draw on it when faced with situations where there is little information, which results in a quick decision--an intuitive process. Thus, there is analysis--just at a different timescale.

4. raghuvansh1 - June 04, 2010 at 07:42 am

Intuition is really a very vague term; only once we define it can we say something about it. Long ago Einstein said that imagination is more important than knowledge. I think he told half the truth. How can imagination develop without knowledge? The same is true of intuition. Intuition develops with experience and knowledge. Sometimes it may be wrong; sometimes it is correct. Writing a book on intuition is making fools of people to make money. There are no rules of intuition.

5. dank48 - June 04, 2010 at 09:00 am

Great article, well written, well arranged, well presented.

This ought to be included with every copy of Malcolm Gladwell's book, just for balance.

6. dank48 - June 04, 2010 at 11:44 am

And maybe I'm being unfair to Goxewu, but I think there's something self-refuting about his/her criticisms of the article: "When I'm playing third base in a softball game and a line drive is coming right at my head, I don't consciously calculate the speed and trajectory of the ball (or try to recall a statistical survey of third basemen knocked unconscious); I just duck. And I'm 'right' in the sense that if I'd not ducked, the ball would have hit me in the face."

It seems to me that Goxewu's missing the point. That's reflex, not intuition. And a better, more productive response would be to get the glove up and catch the ball. I think the article deserves a more careful reading, with more thought and less shooting from the hip.

7. goxewu - June 04, 2010 at 12:23 pm

Re #6:

Thanks to dank48 for referring to my comment so that I can correct the horrible error he revealed to be in it:

"When I'm coaching third base in a softball game..." If I'm playing third base, I should obviously use those lightning reflexes to catch the line drive.

But I didn't say that the third-base/line-drive incident was an example of intuition. I said that intuition is "often...a mental activity similar to a physical reflex." Ducking the line drive is an example of a physical reflex; choosing the driving route to Aunt Bertha's is an example of intuition.

A comment, too, "deserves a more careful reading, with more thought and less shooting from the hip."

8. dank48 - June 04, 2010 at 03:09 pm

Thanks to Goxewu, gulp, for the character-building. I asked for it, I got it, and I deserve it. Nicely understated, too. Yes, there's a difference between coaching third base and playing third base, as I might have realized, were I capable of, ah, reading more carefully and less given to shooting from the hip. "Touché" doesn't really seem to cover the case; "skewered" or "transfixed," perhaps, might do it.

Today certainly isn't my day for preaching, as Elmer Gantry might have said. Even though I still think it was a really fine article, I should at least have the decency to get off the soapbox.

9. goxewu - June 04, 2010 at 04:07 pm

Damn! Now I have a role model in dank48, that I have to live up to the next time I'm wrong on a comment. I'm too thin-skinned, snarly, self-obsessed, and insecure to do that. (Good thing I'm never, ever, wrong, huh?)

(Hey, I made the mistake about coaching, not playing, third base--not dank48.)

10. dr_zed - June 04, 2010 at 04:11 pm

I find the terms "pre-rational," rational, and "trans-rational" useful in this discussion. The first two terms are self-explanatory. Pre-rational would include reflexive activities that don't rely heavily on higher cognitive function, like ducking to avoid a projectile coming at your head. I'm not sure that fits with my conception of intuition.

Trans-rational describes activity that, while built upon higher cognitive function, operates with rational discourse in suspension. This would include activities that most of us would agree function better without cognitive "chatter," like the perfect golf swing or a virtuoso violin performance. This coincides with my conception of intuition.

11. nacrandell - June 04, 2010 at 05:33 pm

Sadly, the people that missed the gorilla vote.

12. alex2001 - June 04, 2010 at 08:10 pm

It seems to me that goxewu falsely believes the authors to be against the validity of intuition at all times. To the contrary, they acknowledge the work of Gigerenzer and his nuanced views, and I don't think they would disagree that quick actions, especially physical actions, are best made intuitively, or based on reflex. The point they make is that given proper time, more information and reasoned analysis usually yields better results.

A different take on the intuition vs. reason debate can be found in James Surowiecki's The Wisdom of Crowds. He discusses the accuracy of certain types of information markets where many people have a wide range of information; as long as they develop their opinions independently, their accumulated predictions are more reliable than any single expert's.

13. pulseguy - June 04, 2010 at 11:18 pm

The article just didn't have a lot to do with what people normally refer to as intuition.

Being told to focus on something to the exclusion of something else--white-shirted passers versus black-shirted passers--is almost the opposite of a situation where intuition leaps up and tells you something. The fact that the gorilla was missed has nothing to do with intuition.

The author would respond, I think, that he wasn't saying missing the gorilla was a lack of intuition, but that people think they wouldn't have missed the gorilla. He says they are wrong about what they intuitively believe about their minds. What people think they might or might not do also has nothing to do with intuition.

It seems to me, and this is just my intuition speaking, but I think the author doesn't like anyone making what he thinks is an irrational decision, doesn't like Gladwell, and hasn't really ever thought about intuition much whatsoever.

Just my intuition.

Intuition, for me, is that quiet little thought that pops clearly into my head and says, "don't do this," or "do this," which I ignore often at my peril. I was driving along a twisty road where cars notoriously drive slowly, and I notoriously drive quickly. I tend to pass cars there. My intuition, that quiet voice, said, "Don't pass this car in front." Very clearly. I paid attention to it for a while, then I grew impatient and passed him. I rounded a corner and a truck was in my lane heading toward me. No real danger--I drove off the road onto the shoulder, but I hit a hole and ripped my tire in half. As I did it I knew, I just knew, I should have listened to my intuition.

Hey, who knows? Maybe if I hadn't passed the car in front of me, he would have rounded the corner just as the truck did, not responded as I did, and the trucker, the driver in front of me, and I would all have died in a flaming accident. I can't prove this would not have happened.

But, I still think I should have sat back and followed my intuition.

14. stevewitham - June 04, 2010 at 11:43 pm

Emphasizing the value of rationality, and studying the pitfalls of intuition and of intuition about intuition, are really valuable. There's a strange sour note in the presentation, though:

The authors, people they quote, and some commenters may have missed a couple chapters of _Blink_ that discuss the (sometimes horrific) results of bad intuition, when slower reasoning is better, methods to guard against bad intuitions, ways to train the intuition, ways to recognize when one is and isn't in the area that one's intuition is good at, institutional efforts to train, e.g., police officers to think beyond first impressions in the heat of decisionmaking, and cases where narrowed-information methods have objectively tested superior to more-informed methods.

But other than that, repeatedly exaggerating the first chapter is fair.

Gladwell may be better at storytelling than sober lecturing, but _Blink_ was not the one-sided promotion of intuition over reason some take it to stand for.

15. pancho_angry - June 05, 2010 at 01:14 am

Whether intuition is on the mark or not is beside the point. Intuition, by definition, cannot be definitive and conclusive and, therefore, it merely promotes further conflict and chaos.

Not convinced? See what religion has wrought.

16. carholm - June 05, 2010 at 06:15 am

Both pulseguy and goxewu seem to have missed the gorilla in commenting on the gorilla experiment.

It is true that missing the gorilla has nothing to do with intuition; that is not the point of the observation. The point is that the participants intuitively expected to notice it.

Where intuition comes into play, then, is in the expectation that you will notice something as noticeable as a gorilla when your attention is focused elsewhere.

To quote the article:
"Intuition told those research subjects (and us) that unexpected and distinctive events should draw attention, but our gorilla experiment revealed that intuition to be wrong."

So when goxewu states:
"But common sense (experience + reason) and even "intuition" tells you that if you tell people to concentrate intensely on X (e.g., the number of times the basketball is passed), they're liable to miss Y (e.g., the gorilla), which is totally unrelated to X. Any cook can relate an incident in which concentrating on one dish caused him/her to miss the smoke and fire of another dish burning right there in the same kitchen."

this does not match the intuitive belief of the research subjects, who clearly believed the opposite to be the case.

In this case, concentrating intensely on the outcome of the experiment seems to have led some readers to miss the real point of the example, i.e., the gap between the expected outcome, based on the intuitive beliefs of research subjects and researchers alike, and the actual outcome.

17. goxewu - June 05, 2010 at 08:45 am

Re #11:

I'm surprised that somebody didn't hit that high hanging curveball by saying, "Yeah, but the gorilla votes for [fill in the blank with whatever politician/party you don't like]."

Re #12:

"It seems to me that goxewu falsely believes the authors to be against the validity of intuition at all times."

No. What I said at the outset of #2 is, "There's a lot wrong with this post." A lot wrong, not everything wrong, and not the authors allegedly being "against the validity of intuition at all times." (BTW, is "It seems to me..." an intuitive judgment?)

Re #16:

"The point is that the participants intuitively expected to notice it."

I think what carholm means is, "the participants expected to notice it intuitively." That's more accurate, but still wrong. See below.

But more to the point: a) What the participants "expected" is somewhat irrelevant to the issue of intuition. b) The subjects were not--according to the authors' recounting of the experiment in the post--told to "expect" anything (e.g., nobody said, "There's going to be something else going on here besides people passing a basketball; will you be able to spot it?"). c) Problems in perception--e.g., people not noticing a retrospectively obvious event X while they're under orders to concentrate on perceiving Y--don't seem to have a lot to do with intuition as commonly understood. (I think that pulseguy, in #13, is pretty close with, "Intuition, for me, is that quiet little thought that pops clearly into my head and says, 'don't do this', or 'do this.'")

18. carholm - June 05, 2010 at 11:51 am

RE #17

I will rephrase:

The research subjects had a gut feeling - independently of the actual observation - that they would be able to notice something as obvious as a gorilla walking through the stage of the experiment.

The example with the gorilla also involved asking research subjects--separate from the experiment itself--whether they would expect to notice the gorilla, to which an overwhelming majority replied "yes." Many of the group that had actually been involved in the experiment could not believe afterward that they had failed to notice it.

"What is doubly intriguing is the mismatch between what we notice and what we think we will notice. In a separate study, Daniel Levin, of Vanderbilt University, and Bonnie Angelone, of Rowan University, read subjects a brief description of the gorilla experiment and asked them whether they would see the gorilla. Ninety percent said yes. Intuition told those research subjects (and us) that unexpected and distinctive events should draw attention, but our gorilla experiment revealed that intuition to be wrong."

This kind of pre-analytical, gut-feeling judgment clearly falls within the author's definition of intuition. And indeed it would be strange not to include pre-experiment expectations under the concept of intuition.

It is not the problem of perception that is the main point. It is the gap between what people have a gut feeling they will be able to perceive and what they will actually perceive.

The part where the author argues that intuition comes into play is in the belief that "If this is the experiment, I will be able to notice the gorilla," which is what 90 percent of the subjects asked apparently believed. And apparently, that intuitive judgment is in the vast majority of cases inaccurate.

Whether they notice it or not has nothing to do with intuition; that is true. But that is not what the author argues, as I see it.

19. goxewu - June 05, 2010 at 01:38 pm

Re #18:

"The example with the gorilla also involved asking research subjects - separate from the experiment itself - whether they would expect to notice the gorilla, to which an overwhelming majority replied 'yes'."

Crucial: Were the subjects asked this BEFORE or AFTER they participated in the experiment?

If they were asked before the experiment, one would think--common-sensibly--that they couldn't fail to notice the gorilla.

If they were asked afterward, their answers--"Yeah, I would've thought I'd notice something as obvious as the gorilla"--would seem to have value only as an opinion survey of the subjects' estimate of their "intuition"*, and not intuition itself.

* Intuition vs. perception. There seem to be two basically different views of intuition operative here. Pulseguy and I rather think it's thought/act, i.e., a thought that causes you to decide or act on something. The authors and carholm seem to regard it as a perceptual phenomenon, i.e., missing seeing something that's hiding in plain sight because you're consciously looking for something else (especially if you're following orders from people in labcoats with clipboards telling you to look for something else).

Intuition doesn't = noticing, and not noticing doesn't = lack of intuition. I still go with pulseguy.

20. carholm - June 05, 2010 at 02:31 pm


As I read it, the group that was asked had not done the experiment. They had had the experiment described to them, and based on that description strongly felt that they would notice the gorilla.

"* Intuition vs. perception. There seem to be two basically different views of intuition operative here. Pulseguy and I rather think it's thought/act, i.e., a thought that causes you to decide or act on something. The authors and carholm seem to regard it as a perceptual phenomenon, i.e., missing seeing something that's hiding in plain sight because you're consciously looking for something else (especially if you're following orders from people in labcoats with clipboards telling you to look for something else)."

I think you misunderstood my view. As I see it, expressing an expectation is a "thought/act". That is what I - and I think the authors too - think of as the "intuitive element" in this part of the article.

The thing about noticing or not noticing an object hiding in plain sight is, I agree, a question of perception.

The key observation was that the experiment yielded a result that the authors perceived as counterintuitive because it contradicted their intuition (and that of the research subjects), which told them that the research subjects would notice something as obvious as a gorilla walking past right in front of them.

The experiment in itself is immaterial except as an example of how a prediction made on a gut feeling was contradicted by the outcome of an actual experiment.

The author then goes on to say that many people daily make decisions based on the intuitive belief that they will be able to notice unexpected objects appearing in front of them.

But the author doesn't say that noticing has anything to do with intuition. And neither do I.

Intuition, in this case, comes into play in the gut feeling that you will be able to do something based on a description of a scenario.

The actual experiment does not test intuition itself. It tests perception. The failure to notice the gorilla is a failure of perception.

It is the strong but erroneous belief that you will always or most likely be able to notice it that is the result of intuition failing.

21. goxewu - June 05, 2010 at 03:26 pm

If somebody asks a bunch of subjects whether or not they think they'd rush to rescue a fallen comrade, under hostile fire, on the battlefield, and they majorly reply, "Yes," and then somebody does an experiment that proves the majority of people would run away and desert the fallen comrade--does that prove anything but people's general tendency to overrate themselves? In this hypothetical case, it's about bravery. In the gorilla experiment, it's about perception.

So people tend to overrate their ability to perceive. So what? People tend to overrate their abilities to play sports, learn a language, impress the girls, impress the boys, cook an omelette, bicycle to work, select clothes that look good on them, train a dog, swim across a lake, read "War and Peace" this summer, etc., etc., etc., etc. All these a priori conceits that turn out not to be true are "the result of intuition failing"?

And even still within the gorilla experiment, it's difficult to see how the question could get anything but a "Yes" answer, because in the verbal description it's impossible to hide/disguise the gorilla. "If you were asked to watch a brief video of a bunch of people passing a basketball around, and to count the number of passes they made, but a gorilla would walk right through the group--would you notice the gorilla?" Who would say, "No," or even, "Gee, I don't know"? Nobody, that's who.

The relation between those majorly "Yes" answers by one separate group of people to the verbal description and the majorly "No" results with another separate group of people who see the video proves nothing except that there's a difference between a verbal description of an event and a video of that event.

Are we sure that this post didn't first appear in The Onion?

22. pulseguy - June 05, 2010 at 06:29 pm

Carholm...No, I did make the point that the researchers were commenting on people's error in what they believed they would notice, and not on the fact that the subjects missed the gorilla. I just don't think making an error in perception, or an error in what you think you would have seen or would do in a given situation, has anything to do with intuition. I'm completely in agreement with goxewu in this regard. His last post covers it really well.

And...I might as well say it. I'm probably not as good a dancer as I think I am. Especially after a few beers. But I don't think that says anything about intuition, because I don't think that when I say "Dude, after a few beers, I can really dance," it is something I intuit. It is my erroneous self-perception.

There are many, many situations we find ourselves in in which a purely rational decision is not possible. Most situations for that matter are too open-ended to be much more than guesswork.

Math problems, crossword puzzles, and sudokus are closed systems. They have a solution that can be confirmed. Almost nothing else is like that. Even a lot of academia is really open-ended; there is just a consensus that says "this is the right answer." It might not be, but it gets the tick and not the x after the answer, so it appears less open-ended than it really is.

Yet, we are often forced to make decisions with very incomplete data, and in situations that are changing in ways that can't be calculated. In those situations we tend to use intuition.

(Many of you probably are in academia. I'm in business. I know for a fact most of my decisions are made, because I am forced into it, with very incomplete data. I go with my head, my experience, and my intuition. Most of my decisions tend to be reasonably correct. I also know I miss a lot visually.....poor eyesight. I have no doubt I would miss the gorilla. I don't think my eyesight has much at all to do with my intuition.)

What is intuition? I don't know. But, I sometimes get a clear idea on which way to go, and it works more than it doesn't. I'm not saying it is mystical either. Perhaps it is the mind calculating way faster than the conscious rational part of me can comprehend. Perhaps I am really relying on past experience at those times without knowing it. Maybe there is some sort of group intelligence that we have not yet discovered. Who knows? But, I do think it is a worthy thing to research. I just don't think the article, nor the gorilla really dealt with the subject.

23. nilmat - June 06, 2010 at 01:06 pm

The assumption made here is that people overestimate the likelihood that they'll see the gorilla because they're using intuition, not logic. However, there's no way to know that this is the case. For example, someone might think "hey, I'm an excellent driver, which must mean that I see unexpected things that come into my field of vision, which must mean that I would see the gorilla." This may not be true, but it's not a failure of intuition. It's a failure of logic. This study, as do many others, clearly shows that we have a tendency to overestimate our abilities. What it does not show is how 90% of the people asked came to the conclusion that they would likely see the gorilla.

24. rclariana - June 07, 2010 at 07:51 am

I'm pretty sure that the gorilla experiment results are an example of the Zeigarnik effect, a memory phenomenon (not attention). If you 'paused' the video 1 second before the end for a randomly selected half of the participants, that half would remember the gorilla 99% of the time; but for the half that 'finished' the video, you get the results that you observed.

25. nomentanus - June 10, 2010 at 11:27 pm

This ignores all the most relevant research; Antonio Damasio's name isn't even mentioned, for starters. Richard Wiseman--not mentioned. Nor Ken Paller. Nor research that shows reason beaten by intuition in very complex choices such as house purchases. It ain't reason that beats intuition, it's empiricism (the traditional enemy of reason and Idealism). But if I may be a bit testy: you have to respect others' research, not just your own, for that to work well.

It also thoroughly conflates feeling, intuition, bias, and heuristics.

26. toddstark - June 13, 2010 at 10:49 am

A discussion that includes the endless dialectic of intuitive vs. deliberate strategies for solving problems should also consider the interesting work of Dijksterhuis and colleagues and their "deliberation without attention effect." Their data may be relevant to some of these points as well.

The comment by stevewitham that Gladwell is unfairly said to promote rapid cognition over deliberation is very reasonable, I think. I've been guilty of criticizing Gladwell's book on this basis as well, but it is mostly in the power of his storytelling that this emphasis comes out. If you read carefully, he truly does acknowledge the pros and cons of deliberation vs. rapid cognition in some places.

27. zthudson - June 14, 2010 at 12:58 am

The authors think that Gladwell argues that "intuition trumps analysis." But Gladwell was exploring when and why intuition trumps analysis and when it doesn't. One of my favorite sections of the book was when Gladwell analyzed why intuition failed so utterly in the Amadou Diallo shooting. It annoys me that they accuse him of being tricky when they simply miss his point.
