Getting Risk Right: Understanding the Science of Elusive Health Risks, by Geoffrey C. Kabat, Columbia University Press, 248 pages, $35
Eating bacon and ham four times a week could make asthma symptoms worse. Drinking hot coffee and tea may cause cancer of the esophagus. South Africa's minister of health warns that doggy-style sex is a major cause of stroke and cancer in men. And those claims come from the health headlines of just one December week.
The media inundate us daily with studies that seem to show that modern life is increasingly risky. Most of those stories must be false, given that life expectancy for American men and women, respectively, has risen from 71.8 and 78.8 years in 1990 to 76.3 and 81.1 years now. Apparently, we are suffering through an epidemic of bad epidemiology.
When it comes to separating the wheat of good public health research from the chaff of studies that are mediocre or just plain bad, Albert Einstein College of Medicine epidemiologist Geoffrey Kabat is a national treasure. "Most research findings are false or exaggerated, and the more dramatic the result, the less likely it is to be true," he declares in his excellent new book Getting Risk Right. Kabat's earlier book, 2008's Hyping Health Risks (Columbia University Press), thoroughly dismantled the prevalent medical myths that man-made chemicals, electromagnetic fields, radon, and passive smoking were significant causes of such illnesses as cancer and heart disease. His new book shows how scientific research so often goes wrong—and how hard it is for it to go right.
Kabat first reminds readers that finding a correlation between phenomena X and Y does not mean that X causes Y. Nevertheless, many researchers are happy to overinterpret such findings to suggest causation. "If researchers can slip into this way of interpreting and presenting results of their studies," observes Kabat, "it becomes easier to understand how journalists, regulators, activists of various stripes, self-appointed health gurus, promoters of health-related foods and products, and the public can make the unwarranted leap that the study being reported provides evidence of a causal relationship and therefore is worthy of our interest."
He offers some principles to keep in mind when evaluating studies. First and foremost is the toxicological maxim that the dose makes the poison. The more exposure to a toxin, the greater the harm. Potency matters greatly too. Often very sensitive assays show that two different compounds can bind to the same receptors in the body, but what really matters biologically is how avidly and how strongly one binds compared to the other.
Another principle: Do not confuse hazard, a potential source of harm, with risk, the likelihood that the hazard will cause harm. Consider bacon. The influential International Agency for Research on Cancer declared bacon a hazard for cancer in 2015, but the agency does not make risk assessments. Eating two slices of bacon per day is calculated to increase your lifetime risk of colorectal cancer from 5 to 6 percent. Put that way, I suspect most people would choose to continue to enjoy cured pork products.
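The bacon figures illustrate why the framing of a risk matters so much. A quick sketch of the arithmetic, using only the two numbers quoted above, shows how the same finding can be reported as a modest one-point absolute increase or a scarier-sounding relative one:

```python
# Absolute vs. relative risk, using the lifetime colorectal cancer
# figures quoted in the review (5% baseline, 6% with two slices of
# bacon per day). Everything below follows by arithmetic.

baseline = 0.05    # lifetime risk without daily bacon
with_bacon = 0.06  # lifetime risk with two slices per day

absolute_increase = with_bacon - baseline               # 0.01 -> one percentage point
relative_increase = (with_bacon - baseline) / baseline  # 0.20 -> a 20% relative increase

print(f"Absolute increase: {absolute_increase:.0%}")  # prints "Absolute increase: 1%"
print(f"Relative increase: {relative_increase:.0%}")  # prints "Relative increase: 20%"
```

A headline built on the relative figure ("bacon raises cancer risk 20 percent") sounds far more alarming than the absolute one ("from 5 percent to 6 percent"), even though both describe the identical finding.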
Kabat also argues that an editorial bias skews the scientific literature toward publishing results suggesting harms. Such findings, he notes, get more attention from other researchers, from regulators, from journalists, and from activists. Ever since Rachel Carson's 1962 book Silent Spring wrongly linked cancer with exposures to trace amounts of pesticides, the American public has been primed to blame external causes rather than personal behaviors for their health problems. Unfortunately, as Kabat notes, the existence of an alarmed and sensitized public is all too useful to regulators and other interest groups. He quotes an honest but incautious remark in the air pollution researcher Robert Phalen's 2010 testimony to the California Air Resources Board: "It benefits us personally to have the public be afraid, even if these risks are trivial."
Kabat suggests that the precautionary principle—"better safe than sorry"—is largely an ideological ploy to alarm the public into supporting advocates' policy preferences. He also decries "the simplistic notion that the 'consensus among scientists' is always correct." He notes that the scientific consensus once held that ulcers were caused by spicy foods and stress instead of bacteria, and that estrogen-progestin therapy protected post-menopausal women against heart disease instead of increasing their risk of breast cancer. "The history of medical science is littered with long-held dogmas that, when confronted by better evidence, turned out to be wrong," he observes.
Kabat then offers two case studies in how epidemiology has been misused. The first involves cellphones. After a couple of decades of research, the bulk of epidemiological evidence has found that they have not increased the incidence of brain cancer. (A recent experiment did report that exposure to cellphone radio waves for nine hours per day boosted cancer in male, but not female, rats.) Despite the overwhelming evidence that cellphones are safe to use, in 2015 the city council of Berkeley, California, succumbed to scaremongering and passed an ordinance requiring retailers to warn consumers not to carry their phones in their pants pockets, their shirt pockets, or their bras.
Kabat's second example involves "endocrine disruption," an idea attributing ill effects to man-made substances, such as the plastic softener Bisphenol A (BPA), that supposedly mimic the behavior of hormones like estrogen and testosterone. Exposure to such substances has allegedly produced epidemics of lower sperm counts and hypospadias, a condition in which the opening of the urethra is located on the underside of the penis.
This hypothesis developed after the discovery that the daughters of women who took therapeutic doses of the synthetic estrogen DES to prevent miscarriage had a higher risk of vaginal cancer. But to make a long story short, the amount of DES to which those women were exposed was 100,000 times greater than average BPA exposure today. On top of that, BPA's potency is 10,000 times lower than that of DES, which means that the estrogenic effect of current exposures to BPA is 1 billion times lower than that of the DES exposures that were associated with increased risks of cancer.
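The billion-fold figure is just the product of the two ratios the paragraph cites, as a quick back-of-the-envelope check confirms:

```python
# Sanity check on the DES vs. BPA comparison above, using only the
# two ratios the text cites: exposure and estrogenic potency.

exposure_ratio = 100_000  # DES doses vs. average BPA exposure today
potency_ratio = 10_000    # DES estrogenic potency vs. BPA

# The combined difference in estrogenic effect is the product.
effect_ratio = exposure_ratio * potency_ratio
print(f"{effect_ratio:,}")  # prints "1,000,000,000" -- one billion
```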
Proponents of the endocrine-disruption theory throw away the maxim that the dose makes the poison, proposing instead the novel notion that small amounts might actually have bigger effects than larger amounts. They even claim to have experiments to prove this. Unfortunately, nobody outside of their insular world has been able to replicate their studies. In a 2013 review article, a group of toxicologists damningly concluded, "Taking into account the large resources spent on this topic, one should expect that, in the meantime, some endocrine disruptors that cause actual human injury or disease should have been identified." Yet "with the exception of natural or synthetic hormones, not a single, man-made chemical endocrine disruptor has been identified that poses an identifiable, measurable risk to human health."
What about falling sperm counts and the alleged increase in deformed penises? The research has not actually found that sperm counts are down. And a 2010 review in the Journal of Pediatric Urology concluded that "the epidemiologic data on this issue amassed to date clearly demonstrates that the bulk of evidence refutes claims for an increase in hypospadias rates."
As Kabat shows, good epidemiology can identify the real causes of real diseases. He traces how researchers linked renal failure in Belgian women to a Chinese herbal weight loss concoction mistakenly adulterated with the Aristolochia plant, which contains a toxin peculiarly damaging to kidneys. An American researcher then connected the Belgian cases to an epidemic of renal failure among farmers in the Balkans. It turns out that the farmers ate bread made from their own wheat, which was grown in fields infested with Aristolochia.
Kabat also recounts how researchers determined that human papillomavirus (HPV) is the chief cause of cervical cancer. This process began when a physician in the 1960s figured out that a type of lymphoma afflicting African children must be associated with some infectious disease. Kabat traces the epidemiological and experimental work that led to the finding that HPV causes about 5 percent of all cancers in the world. A woman infected with HPV is at a 100- to 500-fold greater risk of getting cervical cancer than a woman without such an infection. (By comparison, a smoker is at a 20- to 50-fold greater risk for lung cancer than a nonsmoker.) Thanks to these discoveries, there is now a vaccine that can prevent this scourge.
"As we have seen," concludes Kabat, "the landscape in which health risks are studied and in which findings are disseminated is pervaded by false claims, oversold results, biases operating at the level of observational studies as well as psychological and cognitive biases, and professional and political agendas." Getting Risk Right is a potent antidote to the toxic misinformation polluting our public health discourse.