Who Watches the Watchers?

STUDIES SHOW: HOW RESEARCHERS CAN MANIPULATE STATISTICS AND HOW THE MEDIA GOBBLES IT UP UNCRITICALLY
The magnificent Great Blue Heron
Could measured use of DDT have saved not only
the heron, but also millions of human lives?

Editor’s Note: Any issue where science and public policy collide can fall prey to some combination of political opportunism and scientific corruption. Even when motives are pure, there is still potential for well-intentioned researchers to go down paths that are later revealed to be completely off track. When powerful vested interests and deeply rooted emotions intersect, the truth is only one card in the deck, hard to find, and relatively easy to stack.

The following report by veteran EcoWorld science correspondent Edward Wheeler identifies the “seven deadly sins” of epidemiological studies, and shows how many of these flawed studies pass from the laboratory press release into the uncritical hands of journalists and, before you know it, are enshrined in new legislation or regulations. But far too often these studies are not nearly as conclusive as they are made to appear, and the actions we consequently take are not rational.


The point of all this goes beyond just epidemiology, to the relationship between scientific inquiry, media reporting, popular sentiment and public policy. Scientists who indulge in dramatic proclamations, becoming rich and famous in the process, need ongoing critical review. Today one has to ask: Is scientific peer review a way to challenge conventional wisdom and expose conclusions that aren’t clearly indicated by the underlying data, or has peer review become precisely the opposite – a way to exclude contrarian notions? Have certain scientifically developed hypotheses prematurely assumed the mantle of truth beyond debate?

Who will watch the watchers, when the watchers are our scientists, whose currency of reason is so arcane, so specialized and diverse, that nobody, not even among the scientists themselves, has sufficient credentials to question the conventional wisdom? The first step is to remember the fallibility of studies, to restore the innate and vital skepticism of journalists, and to remind the public that debate is the crucible of truth. To that end, read on. – Ed “Redwood” Ring

Studies Show – How researchers can manipulate statistics and how the media gobbles it all up uncritically.
by Edward Wheeler, April 29, 2008
California’s magnificent Central Coast,
home to the elusive California Condor.
Saving this precious species is one of
environmentalism’s finest achievements.

We should all be scared, VERY scared. It seems as if every day a new “study” is reported somewhere in the national media showing a statistical association between diet, lifestyle, or environmental chemicals and some disease or disorder.

Do you eat the “wrong” foods such as red meat, hot dogs, french fries, coffee, alcohol, grilled meats, too much fat, artificial sweeteners, preservatives, or NOT eat enough vegetables?

Are you overweight, and don’t exercise enough? Do you use deodorants, mouthwash, nail polish, electric razors or blankets, cell phones? Do you live near power lines, use birth control pills or take hormone treatments, have some radon in your basement, breathe polluted air or second hand smoke?

Do you worry and fret about all these things after reading the terrifying results of some new study? Then you will surely die of some form of cancer or heart disease sometime next week, probably from the stress and lost sleep of worrying so much!

All these studies are called epidemiological studies, which seek to find statistical correlations, mostly quite subtle, between diet, lifestyle, or environmental factors and disease. Real sciences, like chemistry and physics, seek to find cause and effect. Epidemiological studies supply only statistical links between this or that risk factor and some disease. Such studies almost never prove cause and effect, and they are subject to researcher bias and political agendas, poor design, confounding variables, bad data gathering and more. Unfortunately, most reporters who write articles on these studies are scientifically ignorant and simply parrot whatever the study authors say.
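
To see why confounding is such a trap, here is a minimal sketch (in Python, with made-up probabilities chosen purely for illustration) of how a harmless habit can show a strong statistical link to disease simply because it travels with a real risk factor:

```python
import random

random.seed(42)

# Toy model: smoking is the only real cause of disease; coffee drinking does
# nothing, but smokers happen to drink more coffee (the confounder at work).
def simulate_person():
    smoker = random.random() < 0.3
    coffee = random.random() < (0.8 if smoker else 0.3)     # coffee tracks smoking
    disease = random.random() < (0.20 if smoker else 0.02)  # only smoking matters
    return smoker, coffee, disease

people = [simulate_person() for _ in range(100_000)]

def disease_rate(group):
    return sum(d for _, _, d in group) / len(group)

coffee_drinkers = [p for p in people if p[1]]
abstainers = [p for p in people if not p[1]]

# Crude comparison: coffee "looks" dangerous...
print("disease rate, coffee drinkers:", round(disease_rate(coffee_drinkers), 3))
print("disease rate, abstainers:     ", round(disease_rate(abstainers), 3))

# ...but stratify by the confounder (smoking) and the "effect" vanishes.
for smoker in (True, False):
    stratum = [p for p in people if p[0] == smoker]
    with_coffee = [p for p in stratum if p[1]]
    without = [p for p in stratum if not p[1]]
    print(f"smoker={smoker}: coffee {disease_rate(with_coffee):.3f} vs none {disease_rate(without):.3f}")
```

A study that never measured smoking would report only the crude comparison, and a headline would be born.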

Mark Twain popularized the saying, “there are three kinds of lies: lies, damned lies, and statistics.” An even better quote comes from the renowned epidemiologist Alvan Feinstein of Yale University, who observed that “statistics are like a bikini bathing suit: what is revealed is interesting, but what is concealed is crucial.” Let’s look at the history of this field of study.

Epidemiology: “The study of the distribution of diseases in populations and of factors that influence the occurrence of disease.” Classical epidemiology was fathered by an Italian physician, Bernardino Ramazzini. Around 1700, he started looking into the possibility that various diseases in patients might be connected to their occupations. For example, miners and chemical workers might have some lung disease because they are exposed to dust, various chemicals, or toxic metals over the course of their careers. Years later a London surgeon, Percivall Pott, noted that virtually everyone he treated for cancer of the scrotum was a chimney sweep. Hmmm, he must have thought to himself: it’s a non-communicable disease, so I wonder if all that soot and coal tar they breathe and get all over them every day might be the cause. This was a monumental proof of concept!

Classical epidemiology is like police work. If there is an outbreak of some kind of stomach ailment in a number of people in a city who all seek medical treatment (they were up all night throwing up and sitting on the pot, maybe some even died), public health investigators would seek to determine what history all these sick people might have in common. If it turns out 95% of them ate at Joe’s diner within the last few days, odds are Joe was serving E. coli burgers or maybe Salmonella oysters. A simple test would confirm it.

A more recent example is a disease first identified in 1976, later named Legionnaires’ disease, a form of pneumonia unknown before then. Hundreds of men were affected, and 32 died. It was found that all of them had attended an American Legion convention in Philadelphia. Voila, investigators identified a bacterium living in the ventilation system of the hotel where the convention took place. That is classical epidemiology. Now let’s discuss a branch of epidemiology that uses “clinical trials” to try to find the facts about disease, cause, and prevention.

Probably the first “study” we now might loosely call a clinical trial occurred in 1753. Scurvy was a common illness among sailors at the time. James Lind, a surgeon in the British Royal Navy, wondered if perhaps it had something to do with the fact that sailors on long voyages ate almost no fresh fruits and vegetables. We now know that scurvy is caused by a deficiency of vitamin C, but at the time the necessity of vitamins to our health was unknown. He tested his hypothesis by dividing a number of scurvy-stricken sailors into two groups, one of which was given fresh fruit (which we now know contains vitamin C), while the other group continued eating hardtack and rum. ALL the sailors sucking on citrus got over their illness, while ALL the sailors in what we would now call the no-veggie “control” group still had scurvy. Eureka! From then on British sailors sucked on limes and stayed healthy, while those poor French and Spanish sailors stayed sick and lost lots of sea battles to the British.

Another classic example of an early clinical trial was conducted by Walter Reed, a U.S. Army medical officer stationed in Cuba around 1900. Yellow fever was rampant at the time, and he wondered why it was only prevalent in tropical climates. His trial could never be done today for ethical reasons. He suspected mosquitoes might somehow be spreading the disease through their bites. He recruited a small number of healthy volunteers, half of whom were deliberately bitten by mosquitoes, while the other half were not. Most of the poor guys with the bites came down with yellow fever, and one of them died! None of the bite-free guys got the fever. That was definitive, whereas today various studies and trials rarely are (with one famous exception that I discussed in my EcoWorld article entitled “Chemophobia”).

The following is a perfect example of how an epidemiological study should be conducted in order to give definitive results, with NO question about the findings, even though it was never planned as a study and would be considered unethical and far too expensive to conduct deliberately. AND this was a really BIG study.

Back in the 1960s, this study enrolled tens of millions of volunteers (the test group) who agreed to inhale huge amounts of suspected carcinogens every day of their lives for at least 20 years, AT THEIR OWN EXPENSE! The same number of people who did not inhale the suspected carcinogens (the control group) was compared with the test group after 20-30 years to determine the rates of various cancers in the two groups. Absolutely unequivocal results showed that people in the test group had an increased incidence of various cancers and heart disease over the control group, and the most striking result was that people in the test group were about 15 times more likely to get lung cancer than people in the control group!
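
The arithmetic behind a figure like “15 times more likely” is a simple relative risk. Here is a minimal sketch (Python; the counts are invented to make the calculation come out to 15, and are not the actual study data):

```python
# Relative risk from a hypothetical cohort comparison.
# These counts are invented for illustration; they are NOT the real study numbers.
smokers_total, smokers_lung_cancer = 1_000_000, 1_500
nonsmokers_total, nonsmokers_lung_cancer = 1_000_000, 100

risk_exposed = smokers_lung_cancer / smokers_total          # incidence among smokers
risk_unexposed = nonsmokers_lung_cancer / nonsmokers_total  # incidence among non-smokers

relative_risk = risk_exposed / risk_unexposed
print(f"Relative risk of lung cancer, smokers vs non-smokers: {relative_risk:.1f}x")
# -> 15.0x with these made-up counts. A ratio this large is hard to explain away by
#    chance or confounding, unlike the 1.1x-1.3x ratios behind most epiBS headlines.
```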

Thus we now know for sure that smoking can cause lung cancer and various other health problems. Now THAT was a really good epidemiological study! It is, however, not even conceivable to design and carry out such a clinical trial deliberately: the ethics would be unacceptable, and the time and expense would be prohibitive. So let’s look at how those “study” authors do things today. The reader may have already figured out that I perceive most of these “studies” to be what is often called “junk science.” I do not, however, believe real science is involved at all in most statistical studies, so I prefer to call them “bad (pretend) science,” or BS for short. I won’t go into the statistical details and methods, but I will show many wonderful examples of famous BS. You can get the underlying methods by reading Steven Milloy’s “Science Without Sense” and “Junk Science Judo.”

Here are the “seven deadly sins” of epidemiology (epiBS from now on) as practiced today:

1) Have a political, health, or moral agenda and design (rig) your study in order to get the results you want. This applies to all sides of the political spectrum and official government agencies. Real scientific method is: put forth a hypothesis, then gather data to determine whether your hypothesis is correct or not. EpiBS method is: Have a mandated or acceptable conclusion in mind, then go select only the data that appear to support your already reached conclusion (see famous example below).

2) Assume that a statistical correlation that you found in your latest study between some disease or disorder and some exposure to some perceived risk factor is proof of a cause and effect relationship. EVEN if there is no apparent biological reason to think so, you can still think of some improbable rationalization for your results!

3) Data dredging: Don’t bother with any hypothesis prior to gathering your data; just ask a large group of people lots of questions about lifestyle, diet, drinking habits, etc., over a period of time. Feed the data into your computer statistics program and see if something correlates with something. Who knows what you might find? (A simulation of this sin appears after this list.)

4) Don’t bother to verify any data you gather through questionnaires. Just assume nobody ever mis-remembers or lies about their lifestyle, diet, shoe size, or anything else you might have thought to ask about. Ask a subject how much alcohol they drink per day, and they will understate the amount by a factor of 3 or 4 at least. It’s like a wife asking her husband how much money he lost playing the slots at the casino.

5) Design studies that are fatally flawed from the beginning, but because you don’t know anything about the biochemistry involved (after all, you are either a medical guy or a statistician), you have no clue why you got the associations you did, but you believe it and publish it anyway.

6) If your study doesn’t find any association between, say, radon exposure and lung cancer, perform a meta-analysis combining the weak, statistically insignificant results of numerous studies by other researchers with your own (or skip doing a study of your own entirely). It doesn’t matter how good or bad or even how similar in design all those studies were (the old apples-and-oranges comparison); combining them just might give a statistically significant correlation.

7) ALWAYS call the news media immediately after finding an association between, say, exposure to some hot issue chemical and some disease state. The reporters know absolutely nothing about how these studies are done and will uncritically report whatever you say. Your study will make big headlines tomorrow, and you will be quoted as saying, “the results are important, but more research is needed”. That translates into, “I need more grant money to continue to do BS.”
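
Sin #3 is easy to demonstrate with a few lines of code. The sketch below (Python, entirely simulated data) invents 50 “exposures” and one “disease,” none of which have anything to do with each other, then tests every exposure against the disease at the usual 5% significance level. A couple of spurious “significant” associations pop out almost every run, and any one of them could be tomorrow’s headline:

```python
import random
import math

random.seed(0)

N_SUBJECTS = 2_000
N_EXPOSURES = 50     # diet items, gadgets, habits... all pure noise here
Z_CUTOFF = 1.96      # two-sided 5% significance threshold

# Disease assigned at random, unrelated to any exposure.
disease = [random.random() < 0.10 for _ in range(N_SUBJECTS)]
exposures = [[random.random() < 0.50 for _ in range(N_SUBJECTS)]
             for _ in range(N_EXPOSURES)]

def two_proportion_z(exposed_flags):
    """Z statistic comparing disease rates in exposed vs unexposed subjects."""
    exposed = [i for i, e in enumerate(exposed_flags) if e]
    unexposed = [i for i, e in enumerate(exposed_flags) if not e]
    p1 = sum(disease[i] for i in exposed) / len(exposed)
    p2 = sum(disease[i] for i in unexposed) / len(unexposed)
    pooled = sum(disease) / N_SUBJECTS
    se = math.sqrt(pooled * (1 - pooled) * (1 / len(exposed) + 1 / len(unexposed)))
    return (p1 - p2) / se

false_alarms = [k for k in range(N_EXPOSURES)
                if abs(two_proportion_z(exposures[k])) > Z_CUTOFF]
print(f"'Significant' associations found by pure dredging: {len(false_alarms)} of {N_EXPOSURES}")
# At a 5% threshold we expect about 50 * 0.05 = 2-3 spurious hits -- each one a
# potential scary press release, even though every variable here is random noise.
```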

The most egregious example of epiBS I will give combines deadly sins #1 and #6, and its results have had enormous implications for nanny-state public policy. The U.S. Environmental Protection Agency’s (EPA) original mission was to establish rules and regulations meant to protect the environment, for example from air and water pollution. Over the years, however, mission creep occurred, and the agency now sees its mission as protecting public health. This gives it vastly more power to institute regulations of a kind it was never originally intended to issue.

In 1993, the EPA conducted a now infamous study that kicked off the anti-smoking crusade that continues today. At the time, more than 30 epidemiological studies from around the world had been conducted to see if the spouses of smokers were more likely to get lung cancer than spouses of non-smokers. None of them was definitive; at most they showed a very weak correlation. Some of those studies actually suggested (also weakly) that exposure to second-hand smoke, or environmental tobacco smoke (ETS), might actually protect the spouse against getting lung cancer. This is a plausible biological process called “hormesis,” i.e., very low levels of exposure to a toxin can protect a person against high levels of exposure later.

The EPA has even admitted that the average annual exposure to ETS particles for a non-smoker is less than actively smoking one cigarette. In any case, the EPA ignored those studies and selected only 11 studies to combine in a meta-analysis that they hoped would establish a statistically significant correlation between ETS and lung cancer in spouses of smokers. They also chose not to include in their meta-analysis any of some 30 available studies that were designed to determine if ETS in the workplace, as opposed to in the home, could be responsible for an increased risk of lung cancer in non-smokers so exposed. A large majority of those workplace studies found no statistically significant association between workplace ETS exposure and lung cancer risk. Is that why they did not include any of those studies in their meta-analysis?

The meta-analysis used by the EPA to analyze the effects of environmental tobacco smoke (ETS, or second-hand smoke) committed two of the cardinal sins of epidemiology. First, they selected only those studies that might show that ETS causes lung cancer. Thus they designed it to be a one-tailed test. That means you assume a priori that the test substance can only be bad, so you don’t include any data that might show the opposite of what you expect (or want) to see. Including all studies in their meta-analysis, even those that may indicate that ETS could possibly be beneficial, would have made it a statistically acceptable two-tailed test.

The second cardinal sin they committed is that they used a 90% confidence interval (CI), instead of the gold-standard 95%, in order to get a statistically significant result. What does that mean, a statistics-ignorant person might ask? At the 95% level, a “statistically significant” result has roughly a 1 in 20 chance of arising from pure chance when there is no real effect (p = 0.05 in stat language). Good epidemiological studies use that 0.05 threshold. The EPA chose a 90% level (a 1 in 10 chance of a false result instead of 1 in 20) because they knew beforehand that their results would not be significant otherwise. In other words, they rigged the “study” to get the result they wanted: epiBS in its most flagrant form. They knew that smoking is bad for the health of smokers, but they couldn’t regulate smoking unless they could claim ETS could cause disease in innocent bystanders. This is perverting science: they believe that, for a worthy cause, the end justifies the means.
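
To see how much work that choice of confidence level does, here is a small sketch (Python; the relative risk and standard error are hypothetical values picked only to illustrate the mechanics, not the EPA’s actual numbers). A weak relative risk whose 95% confidence interval crosses 1.0 (“no effect”) can slip under the wire once the interval is narrowed to 90%:

```python
import math

# Hypothetical weak association, for illustration only (not the EPA's actual data):
relative_risk = 1.19   # a 19% excess risk
se_log_rr = 0.10       # standard error of log(relative risk)

def confidence_interval(rr, se, z):
    """Confidence interval for a relative risk, computed on the log scale."""
    return math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)

for label, z in (("95% CI (z = 1.960)", 1.960), ("90% CI (z = 1.645)", 1.645)):
    lo, hi = confidence_interval(relative_risk, se_log_rr, z)
    verdict = "NOT significant" if lo <= 1.0 <= hi else "'significant'"
    print(f"{label}: {lo:.2f} to {hi:.2f} -> {verdict}")
# With these numbers the 95% interval spans 0.98-1.45 (includes 1.0, not significant),
# while the 90% interval spans 1.01-1.40 (excludes 1.0, suddenly "significant").
# Nothing about the data changed; only the yardstick did.
```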

What, you still think ETS causes lung cancer in non-smokers? The EPA epiBS meta-analysis was done in 1993. A study sponsored by the World Health Organization in 1998, which covered seven countries over seven years, showed no statistically significant increase in cancer risk for spouses and co-workers of smokers. It was, however, another meta-analysis. I don’t like meta-analyses in general, even when there is no political agenda involved, as there was in the EPA study. So has there been one huge study done right, no meta-analysis BS? YES!

In 2003, a study published in the British Medical Journal found no relationship between exposure to passive smoke and mortality. It was a HUGE, very believable study: it spanned 39 years and included over 35,000 Californians. So why is such a really good study ignored by the media and the epiBS community? POLITICAL CORRECTNESS?

Disclaimer: I DON’T SMOKE, AND I AM VERY ANNOYED BY ETS, so don’t accuse me of loving tobacco companies. I agree with laws banning smoking in public buildings, transportation, and enclosed areas where one must go to do business.

About the Author: Dr. Wheeler earned a Ph.D. in chemistry from the University of California, Berkeley in 1970. As a research scientist for the U.S. Department of Agriculture in Berkeley, he did pioneering research on how nutritional status and cancer are interrelated, and how our immune systems handle food-borne carcinogens. He published 25 research papers in peer-reviewed scientific journals and gave numerous talks (and listened to many, many more) at various scientific meetings. He left the USDA to work for Nabisco in New Jersey as head of the food science research unit. Now retired, he writes brilliant articles for EcoWorld pro bono. He is the resident contrarian for ecoworld.com.
