Privileged Knowledge

In 2002 journalist Gary Taubes wrote a very influential article titled “What If It’s All Been a Big Fat Lie?” about how institutional science clings to the myth that fat harms your health despite substantial science to the contrary. So I was looking forward to his article in last Sunday’s New York Times titled “Do We Really Know What Makes Us Healthy?”

What a disappointment. It is so filled with misinformation, sloppy reasoning, and false logic that it would take me an hour to untangle it. So I’ll stick to how the article promotes privileged knowledge.

The article is an extended criticism of the observational studies that end up in contradictory headlines, such as whether hormone replacement therapy does or doesn’t protect women against heart attacks. The problem, he says, is that “they cannot inherently determine causation.” What can determine causation, he says, is a clinical trial. His reasoning, which has become a commonplace, is that observational studies can only show an association between one thing and another, while clinical trials can establish cause and effect. Unfortunately, he’s wrong. Mr. Taubes needs to read David Hume or, more recently, George Lakoff on the concept of causation.

Provocation studies such as clinical trials only show associations as well. They, like observational studies, draw their conclusions from statistical relationships. But provocation studies, and clinical trials in particular, are privileged as the “gold standard” because of the sociology of institutional science, the self-legitimating source of health knowledge that Mr. Taubes seems to accept uncritically. Everyone falls in line behind this privileged position. I do not.

In a provocation study, people are assigned to different groups. Some are given a treatment (that is, provoked) while others are given nothing or a fake (that is, a placebo). At the end of the study, the researchers use statistical methods to associate an endpoint such as a heart attack or stroke with who was and wasn’t provoked, for example with conventional hormone replacement therapy.

As an aside, we have discussed many times over the years that these conventional HRT studies are deeply flawed because the drugs used do not have the same chemical structure as a woman’s own hormones. Hormone replacement using bioidentical hormones is an entirely different matter. This is an issue that Mr. Taubes misses entirely. But I digress.

In an observational study, unlike a clinical trial, no one is provoked. Instead, researchers observe what people do, either directly or indirectly, associating specific attributes of each person’s life with an endpoint, using the same statistical methods as provocation studies. The official position is that observational studies are more problematic than provocation studies because they are less controlled.
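The point that both study designs rest on the same statistical comparison can be sketched in a few lines of Python. The counts below are invented purely for illustration; what matters is that the arithmetic linking exposure to outcome does not care whether researchers assigned the exposure (a trial) or merely recorded it (an observational study):

```python
def relative_risk(exposed_events, exposed_total, control_events, control_total):
    """Risk of the endpoint in the exposed group divided by the risk
    in the unexposed group. The formula is identical for trials and
    observational cohorts."""
    risk_exposed = exposed_events / exposed_total
    risk_control = control_events / control_total
    return risk_exposed / risk_control

# Hypothetical clinical trial: 30 heart attacks among 1,000 people
# assigned the treatment, 20 among 1,000 assigned a placebo.
trial_rr = relative_risk(30, 1000, 20, 1000)

# Hypothetical observational cohort with the same counts among
# self-selected users and non-users of the treatment.
cohort_rr = relative_risk(30, 1000, 20, 1000)

print(trial_rr, cohort_rr)  # the same number either way: 1.5
```

Either way the output is an association, a relative risk of 1.5 in this made-up example. The difference between the two designs lies in how the groups were formed, not in the statistics that follow.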

What this means is that the less like real life a study is, the more highly regarded it is as science. In my opinion, this is nonsense. Good science is about asking the right question and paying attention to the information you get when the answers arrive.

Both provocation and observational studies provide valuable information. The problem is that scientists and the journalists who follow them do not know how to report that information so it will be useful knowledge for us, unless you’re a geek like me. Instead, it’s turned into headlines and answers to health questions that go way beyond what the study’s data actually support. Which, in a way, is the problem that concerns Mr. Taubes.

What I wonder is why he’s stirring up skepticism about observational studies. Early in the article he says that “because these studies often provide the only available evidence outside the laboratory on critical issues of our well-being, they have come to play a significant role in generating public-health recommendations.”

And yet his article undermines this very role by promoting skepticism of observational studies but not of clinical trials. One word: Vioxx. That drug passed its clinical trials, yet was withdrawn after its cardiovascular risks became undeniable. If an observational study discovers an association between cell phone use and brain tumors, should we be skeptical, as Mr. Taubes suggests?

And where is your experience in this stew? Recognition of electrohypersensitivity has come about because people experienced it, made the associations, and caused trouble. It hasn’t come from statistical analysis.

Mr. Taubes’s article is a promotion piece for privileged information from institutional science. It promotes the culture of expertise: only the experts know what’s good for you. Don’t believe it. Or better still, use your own best judgment.