The Bad Science Reporting Effect

The press coverage of the so-called “QWERTY effect” in early March left me somewhat worried about how easy it is to publish bad science, and absolutely appalled at the state of science reporting.

The alleged effect is that average reported positivity or happiness scores are slightly higher for words containing more letters typed with the right hand on a QWERTY keyboard.

By late on March 8, Mark Liberman at Language Log had re-examined the relevant statistics, noting that the effect is extremely weak: if it existed, it could explain only about a tenth of one percent of the variance in positive vs. negative affective judgments about words.

He then replicated the effect on a new data set. “It’s comforting,” he says, “to see apparent confirmation … in an independently-collected data set, with a similar adjusted multiple r2 of 0.0013. At least, it’s comforting until we recognize that the source of this data was a random number generator.”

Yes, he created new meaningless data by random re-sampling and re-pairing, and got a slight bias of about the same size toward one arbitrary group rather than the other. (A second random set showed a similar effect in the other direction, and a third showed no bias either way.)
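Liberman’s point is easy to reproduce in miniature. The following is a hypothetical sketch, not his actual script: it pairs a right-hand-letter score for each of 1,000 random “words” with purely random “valence” scores, and computes the resulting r². The letter assignment and word generator here are my own assumptions for illustration; the moral is that pure noise routinely yields an r² on the same tiny order as the reported effect.

```python
import random

# Right-hand letters under one common QWERTY hand assignment (an
# assumption for this illustration; the papers' own list had an error).
RIGHT = set("yuiophjklnm")

def right_side_advantage(word):
    """Right-hand letters minus left-hand letters in the word."""
    r = sum(c in RIGHT for c in word)
    return 2 * r - len(word)

def r_squared(xs, ys):
    """Squared Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

random.seed(1)
# 1,000 random letter strings standing in for "words" ...
words = ["".join(random.choice("abcdefghijklmnopqrstuvwxyz")
                 for _ in range(random.randint(3, 8)))
         for _ in range(1000)]
xs = [right_side_advantage(w) for w in words]
# ... paired with meaningless "valence" scores from a random generator.
ys = [random.gauss(0, 1) for _ in words]

print(round(r_squared(xs, ys), 4))  # small, on the order of 0.001
```

With 1,000 random pairs, the expected r² under the null hypothesis is about 1/1000 = 0.001, which is precisely the neighborhood of the published adjusted r² of 0.0013.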

Much worse was to come by the evening of March 13. Liberman obtained a new and improved data set of happiness/positivity scores on high-frequency words, ten times bigger than the original one but roughly agreeing with it. He re-ran the experiment on this improved data. The result: total absence of the alleged right-hand bias (0.000004 of the variance; a result at least this large would arise by mere chance about 85 percent of the time). There is no QWERTY effect. It was all spurious. [At least, if Liberman did his re-run correctly. Naturally, the replication is bitterly disputed: the authors maintain that Liberman has it all wrong. See the reference to Language Log at the end of the post. —GKP, 24 March 2012.]

The two researchers, Kyle Jasmin of University College London and Daniel Casasanto of the New School, were so careless that the list of left-hand letters they gave in two papers (it was “q, w, e, r, t, a, s, d, f, g, z, x, c, y, b”) has an obvious mistake: y should be v. They seem to have taken much more care in matters like generating a catchy effect name and a press release.

Publicity for the unresult of their paper in Psychonomic Bulletin &amp; Review has garnered them some appallingly stupid press coverage (“The Keyboards Are Changing Our Language!”; “Just Typing ‘LOL’ Makes You Happy”; etc.). The worst I saw was in the Metro, a free tabloid in Britain: “SEX is depressing—but only if you use your left hand,” they began. “Typing letters with your left hand conveys more negative emotions than typing with your right, British and U.S. scientists say.” (The authors’ paper says nothing about anything “conveying” negative emotions, of course.) And in conclusion: “despite their meaning, words such as ‘lonely’ cheer us up more than, say, ‘sex’.” (If there were ever a worse example of illicit inference about particular cases from aggregated results, don’t show it to me; I might cry.)

One might argue that the two young psychologists are not responsible for jokey press reports. But they are not blameless. Jasmin told Wired: “Technology changes words, and by association languages. It’s an important thing to look at.” All of this is false. There has been no demonstration that technology “changes words.” If the connotative valences of some words did alter slightly for some reason, that wouldn’t change the language at all. And above all, this is not “an important thing to look at”: no scientific importance would attach to a very weak correlation between spelling and affective attitudes toward isolated words, even if there were one.

Liberman cites Joseph Simmons, Leif D. Nelson, and Uri Simonsohn’s paper “False-Positive Psychology,” published in Psychological Science last year, on how “undisclosed flexibility in data collection and analysis allows presenting anything as significant.” It uses computer simulations and experiments to show how “a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not” and “how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis.”
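One of Simmons, Nelson, and Simonsohn’s simplest points can be simulated directly. The sketch below is my own illustration in the spirit of their argument, not their code: when the null hypothesis is true, a “flexible” analyst who measures two outcomes and reports whichever comes out significant roughly doubles the nominal 5 percent false-positive rate. The sample sizes and the normal-approximation test are assumptions made for brevity.

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(2)
strict = flexible = 0
trials = 2000
for _ in range(trials):
    # Null is true: all groups drawn from the same distribution,
    # with two independent outcome measures per "experiment."
    p1 = z_test_p([random.gauss(0, 1) for _ in range(20)],
                  [random.gauss(0, 1) for _ in range(20)])
    p2 = z_test_p([random.gauss(0, 1) for _ in range(20)],
                  [random.gauss(0, 1) for _ in range(20)])
    strict += p1 < 0.05                    # one pre-specified test
    flexible += (p1 < 0.05 or p2 < 0.05)   # report whichever "works"

# The strict rate stays near the nominal 5%; the flexible rate
# climbs toward 1 - 0.95**2, i.e. nearly 10%.
print(strict / trials, flexible / trials)
```

And that is with only one undisclosed degree of freedom; the paper shows that combining several (extra conditions, optional stopping, covariates) pushes the false-positive rate far higher still.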

Compounding this problem with psychological science is the pathetic state of science reporting: how unacceptably easy it is to publish total fictions about science and falsely claim relevance to real everyday life.

[Update: You can read Casasanto and Jasmin responding to Liberman here, and also Liberman's rejoinder to their response here — a response that Casasanto and Jasmin insist is still in error.]
