Two weeks ago (while I was camping in Colorado), Michael Siegel highlighted a study presented at the American Heart Association's annual meeting that further undermines widely publicized claims that smoking bans lead to immediate, dramatic reductions in heart attack rates. As I have been saying since anti-tobacco activists began making these claims in 2003, hundreds of jurisdictions have smoking bans, and you would expect heart attack rates to decline in some of them purely by chance while rising or remaining essentially unchanged in others. If you focus only on the jurisdictions where heart attacks happen to fall substantially—such as Helena, Montana, or Pueblo, Colorado—it is not hard to create a misleading impression. But as Siegel notes, "The studies which have systematically examined the effect of smoking bans on heart attacks in all cities across the country that have implemented such bans have found that while heart attacks have declined in many cities, they have increased in others. The overall effect is nil, or very close to it." The new study (PDF), by Robin Mathews of the Duke Clinical Research Institute, fits this pattern.
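To make the point concrete, here is a minimal simulation of that cherry-picking problem. Everything in it is an illustrative assumption (500 hypothetical towns, roughly 60 heart attacks a year in each), not data from any of the studies discussed here. Even with a true ban effect of exactly zero, chance alone hands you a handful of Helenas:

```python
# A back-of-the-envelope sketch of the point above: with hundreds of
# jurisdictions and no true effect at all, chance alone produces some
# towns with dramatic "declines" after a ban and others with dramatic
# "increases." All numbers here are invented for illustration.
import random

random.seed(1)
TOWNS = 500        # hypothetical jurisdictions with new smoking bans
BASELINE = 60.0    # assumed expected heart attacks per year in a small town

def yearly_count():
    # Normal approximation to a Poisson count (mean = variance = BASELINE)
    return max(1.0, random.gauss(BASELINE, BASELINE ** 0.5))

changes = []
for _ in range(TOWNS):
    before = yearly_count()   # year before the ban
    after = yearly_count()    # year after the ban; true effect is zero
    changes.append(100.0 * (after - before) / before)

print(f"mean change across towns: {sum(changes) / TOWNS:+.1f}%")
print("towns with a 25%+ drop by chance:", sum(c <= -25 for c in changes))
print("towns with a 25%+ rise by chance:", sum(c >= 25 for c in changes))
```

With these assumptions the average change hovers near zero, but dozens of towns clear the 25 percent threshold in each direction. Report only the biggest drops and you have a string of miracle studies.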
Mathews looked at heart attack rates among people 65 and older (measured by hospital admissions and Medicare claims) in 74 U.S. cities the year before and the year after the implementation of smoking bans. Overall, there was a 3 percent decline. In his presentation of the study, Mathews concludes that "the measured impact of [smoking bans] on AMI [acute myocardial infarction] rates after ban implementation was less than previously estimated from published literature." That's a bit of an understatement, for several reasons. First, ban boosters have cited reductions as big as 47 percent—more than 15 times the change Mathews found. Second, Mathews did not compare the cities with smoking bans to cities without smoking bans, so we don't know whether heart attacks fell more in the first group than in the second. (Siegel notes that a 3 percent drop is smaller than the nationwide declines seen in recent years.) Third, when Mathews restricted his analysis to the 43 cities with laws that represented "a meaningful increase in restrictiveness," he found no statistically significant decline in heart attacks. That means heart attacks fell more in the other 31 cities, the ones that made insignificant changes to their laws, than in cities that tightened restrictions in a way that had a practical impact—which makes no sense if smoking bans really do drive down heart attack rates. Nevertheless, Siegel writes in a follow-up post, "a number of anti-smoking researchers" argue that Mathews' study "actually supports the prior research." Mathews himself, who says one of his research goals was "to validate the existing effect estimate," evidently was hoping for more politically convenient results as well.
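The second objection, the missing comparison group, is the classic difference-in-differences point, and a sketch with invented numbers shows why it matters. Below, ban cities and control cities get the same assumed 3 percent secular decline and a true ban effect of zero. A naive before-and-after look at the ban cities alone recovers the secular trend and calls it a ban effect; netting out the control cities recovers roughly nothing:

```python
# The missing comparison, sketched as a difference-in-differences check.
# Both groups of cities share the same assumed secular decline, and the
# true ban effect is zero. All parameters are invented for illustration,
# not taken from the Mathews study.
import random

random.seed(2)
SECULAR_DECLINE = 0.03   # assumed nationwide trend, ban or no ban
BASELINE = 500.0         # assumed heart attacks per city per year

def rate(multiplier):
    # Expected count times the trend multiplier, plus sampling noise
    return BASELINE * multiplier + random.gauss(0, BASELINE ** 0.5)

ban_before  = [rate(1.0) for _ in range(74)]
ban_after   = [rate(1.0 - SECULAR_DECLINE) for _ in range(74)]
ctrl_before = [rate(1.0) for _ in range(74)]
ctrl_after  = [rate(1.0 - SECULAR_DECLINE) for _ in range(74)]

def pct_change(before, after):
    return 100.0 * (sum(after) - sum(before)) / sum(before)

naive = pct_change(ban_before, ban_after)          # ban cities only
did = naive - pct_change(ctrl_before, ctrl_after)  # net of control cities
print(f"naive before/after change in ban cities: {naive:+.1f}%")
print(f"difference-in-differences estimate:      {did:+.1f}%")
```

Under these assumptions the naive estimate lands near minus 3 percent, exactly the kind of number the Mathews study reports, while the difference-in-differences estimate sits near zero.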
As Siegel notes, such politically convenient results are the ones that tend to be published in scientific journals and publicized in the general press:
It seems clear that the explanation for the discrepancy [between published studies and broader analyses like Mathews'] is publication bias. There are many factors operating which discourage researchers from reporting "negative" findings. It is also much more difficult to get negative findings published, especially on this topic. No researchers are running out to publish a study showing no decline in heart attacks following a smoking ban.
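Siegel's mechanism is easy to model. In the toy simulation below (all parameters invented), every study has a true effect of zero, but studies that happen to find a sizable decline are far more likely to be "published." The published literature then reports a substantial average drop that the full set of studies does not support:

```python
# A toy model of publication bias: the true effect in every study is
# zero, but large apparent declines are much more likely to be written
# up and accepted. All parameters here are assumptions for illustration.
import random

random.seed(3)
STUDIES = 200
NOISE_SD = 8.0   # assumed sampling error per study, in percentage points

# Each study's estimated change in heart attacks; true effect is zero
estimates = [random.gauss(0.0, NOISE_SD) for _ in range(STUDIES)]

# Assumed selection rule: declines bigger than one standard error are
# very likely to be published; null or positive results rarely are.
published = [e for e in estimates
             if (e < -NOISE_SD and random.random() < 0.9)
             or random.random() < 0.1]

print(f"mean of all {STUDIES} studies:   {sum(estimates) / len(estimates):+.1f}%")
print(f"mean of {len(published)} published studies: "
      f"{sum(published) / len(published):+.1f}%")
```

The full set of studies averages out near zero; the "published" subset shows a large apparent decline. No individual researcher has to do anything dishonest for the literature as a whole to mislead.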
No matter how biologically implausible it is to expect big drops in heart attacks immediately after smoking bans take effect, tobacco control researchers, journal editors, anti-smoking activists, public health officials, and health reporters all want it to be true—so much so that a CDC-commissioned report issued by the Institute of Medicine two years ago endorsed this basic story, ignoring a nationwide analysis that found heart attacks were as likely to rise in cities with smoking bans as they were to fall. (That study, available at the time as a working paper from the National Bureau of Economic Research, was later published by the Journal of Policy Analysis and Management.) The authors of the IOM report hedged their bets by declining to estimate the magnitude of the effect, leaving open the possibility that it is in practice indistinguishable from zero. That way they never have to set the record straight.
Christopher Snowdon has more here.