What do you get when you combine small sample sizes, poor research design, and too many variables? A whole lot of erroneous "findings." Stay with me for this; I’m about to blow your mind.


You may have missed it (I certainly did), but over the holidays the Journal of the American Medical Association published an enigmatic little article about nutrition research: “Disclosures in Nutrition Research: Why It Is Different.” The title may suggest a real tub-thumper, an anti-industry screed full of dark accusations (at least, it did to me), but it’s not. Instead, it’s a modest, temperate little suggestion that nutrition researchers might do well to be a bit more open about their conflicts of interest.

Well, duh, huh?

“Nutrition research is among the most contentious fields of science,” the authors write. “Although the totality of an individual’s diet has important effects on health, most nutrients and foods individually have ambiguously tiny (or nonexistent) effects. Substantial reliance on observational data for which causal inference is notoriously difficult also limits the clarifying ability of nutrition science. When the data are not clear, opinions and conflicts of interest, both financial and nonfinancial, may influence research articles, editorials, guidelines, and laws.”

Is that worth another duh? Probably.

Where things start to get interesting is when the article explains what it means by conflicts. It calls out industry funding but argues that “the puritanical view that accepting funding from the food industry ipso facto automatically biases the results is outdated.” There are plenty of ways for nutrition researchers to have conflicts—from the indirect sort of financial rewards you get from publishing a best-selling diet guide to the way that advocacy groups have to keep their supporters happy by sticking to the party line. And then there’s the subtler kind of conflict: Researchers often come to believe what they’ve discovered or the causes they advocate. It’s all bias, and even “white hat” bias ought to be disclosed. (The National Institutes of Health describes “white hat” bias, in lay language, as “bias leading to distortion of information in the service of what may be perceived to be righteous ends.” It’s especially common in obesity research.)

Who could disagree? Which, of course, raises the question of why we’re hearing it at all—and why nutrition research is being singled out.


To answer that, it helps to know that the lead author of the piece is a guy named John Ioannidis. If you don’t know the name, you should. He’s a mathematician and physician, a professor at Stanford, and one of the 20 most-cited researchers in the world. (“This probably only proves that citation metrics are highly unreliable,” he writes in his Stanford bio.)

Ioannidis’s claim to fame is a pair of extraordinary articles about medical research. The first, published in the Journal of the American Medical Association in July 2005, examined all the highly cited original clinical research studies published in three top medical journals between 1990 and 2003, comparing them with later studies that used bigger samples. There were 45 studies claiming that a specific medical intervention worked. Seven were later disproven, another seven were shown to have overstated the intervention’s effectiveness, 20 were replicated, and 11 went unchallenged.

Think of what that means: Fewer than half of the 45 studies (20) were confirmed by subsequent research, and of the 34 that researchers did put to the test, roughly 40 percent (14) were overturned or weakened.
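If you want to check my arithmetic, here it is laid out explicitly (the counts come from the tally above; the variable names are mine):

```python
# Recounting the 2005 JAMA follow-up study described above.
contradicted = 7   # later studies reversed the original finding
overstated = 7     # later studies found a much weaker effect
replicated = 20    # later studies confirmed the finding
unchallenged = 11  # no one tried to replicate

total = contradicted + overstated + replicated + unchallenged  # 45

confirmed_rate = replicated / total            # share of all 45 confirmed
retested = total - unchallenged                # the 34 that were retested
overturned_rate = (contradicted + overstated) / retested

print(f"{total} studies; {confirmed_rate:.0%} confirmed; "
      f"{overturned_rate:.0%} of the retested ones overturned or weakened")
# → 45 studies; 44% confirmed; 41% of the retested ones overturned or weakened
```

That 44 percent is "fewer than half," and 14 of 34 is where "roughly 40 percent" comes from.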

This is, needless to say, not the kind of thing you want to hear about the great scientific engine driving modern medicine. And things got even worse a month later, when the open-access journal PLoS Medicine published an Ioannidis article with the discouraging title “Why Most Published Research Findings Are False.” In it, he described mathematically the factors that affect the likelihood of a particular study being true. It’s a pretty complex argument that I can’t really do justice to here, but it boils down to this: In any given field, there are lots of potential relationships to be looked at; some are real, some not. And the ratio between the reals and the nots has an enormous impact on whether the results of any individual study are likely to be true. That ratio (“R”) gets modified by things like the frequency of false positive and false negative results, researcher bias, and so forth. In many fields, Ioannidis explains, it’s hard to get to a point where a study is likelier true than false, and in some fields, positive research results do little more than measure the net bias of scientists working in the field.
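For the curious, here is a sketch of that calculation, using the positive-predictive-value formula from the PLoS Medicine paper. The sample values for significance, power, pre-study odds, and bias are my own illustrative choices, not Ioannidis’s:

```python
def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Post-study probability that a claimed finding is true.

    R     -- pre-study odds that a probed relationship is real
    alpha -- false-positive rate (the significance threshold)
    beta  -- false-negative rate (1 minus statistical power)
    u     -- bias: share of would-be non-findings reported as findings anyway
    """
    true_positives = (1 - beta) * R + u * beta * R
    false_positives = alpha + u * (1 - alpha)
    return true_positives / (true_positives + false_positives)

# A well-powered study of a plausible hypothesis (1:1 odds) does fine...
print(f"{ppv(R=1.0):.2f}")         # → 0.94
# ...but long-shot hypotheses plus even modest bias sink the odds below a coin flip.
print(f"{ppv(R=0.1, u=0.3):.2f}")  # → 0.20
```

Notice that in the second case nothing about the individual experiment changed; only the field it sits in did, which is exactly Ioannidis’s point.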

When is research likelier to be wrong? Ioannidis lists a few situations: Smaller studies yield less reliable results. Studies that look at smaller effects (a hangnail as opposed to instant death) are more likely to be wrong. You should also be wary of fields that are testing large numbers of potential relationships (the way that genetic researchers are screening thousands of genes to see what effects they might have on health), fields with lots of flexibility in experimental design, fields where there are lots of potential financial conflicts, and “hot” fields with lots of researchers jumping on board.

Your spider sense should be tingling just now. Because what Ioannidis is describing is nutrition research: Tiny studies, often with tens of patients, a jillion food components to test against a jillion possible health effects, generally poor research designs (especially studies based on observation rather than controlled clinical trials), and bias—financial and ideological—everywhere you look. Oh, and effect sizes: “Smoking increases the risk of many cancers approximately 10- to 20-fold,” Ioannidis writes, “but red meat intake may increase the risk of colorectal cancer 1.02-fold (or may have no effect), and intake of fruits or vegetables may decrease the risk of cancer 1.002-fold per serving (or may have no effect).” Relative risks that close to 1.0 are down around the level where Ioannidis says false positives are almost ubiquitous.

The way the math seems to work out (though I’m no expert), the more factors you have lowering the likelihood of accurate results, the bigger the impact bias has. Which means that nutrition research needs to worry about bias, and disclose it, more than many other fields do.

I wanted to ask Ioannidis what he had in mind when he published the JAMA piece, but he never got back to me. But here’s what I take away from it: One of the world’s foremost experts on scientific error thinks that it’s possible or even likely that most scientific findings are wrong. And he thinks that nutrition research is in a worse state than that.

And that, I’d say, gets more than a duh. It gets an uh-oh, or maybe a “holy shit.”

The story doesn’t stop there. Come around next time, and I’ll introduce you to a researcher who’s taken the obvious next step: He doesn’t just find nutrition research problematic. He thinks it’s almost all completely wrong, and that today’s obsession with food as the key to health is just bunk. He’ll explain why he thinks so (he’s pretty persuasive), why the committee preparing for the next round of the United States dietary standards doesn’t want to hear about it—and, of course, what he’s doing now that his university has dumped him.

See you then.

Patrick Clinton

Patrick Clinton is a long-time journalist and educator. He edited the Chicago Reader during the politically exciting years that surrounded the election of the city’s first black mayor, Harold Washington; University Business during the early days of for-profit universities and online instruction; and Pharmaceutical Executive during a period that saw the Vioxx scandal and the ascendancy of biotech. He has written and worked as a staff editor for a variety of publications, including Chicago, Men’s Journal, and Outside (for which he ran down the answer to everyone’s most burning question about porcupines). For seven years, he taught magazine writing and editing at Northwestern University's Medill School of Journalism.