Researchers at the Open Science Collaboration (OSC) trying to replicate the results of 100 prominent psychology studies found that only about 40 percent of them could be replicated. These disheartening results were published in Science and widely reported in the popular press. The finding was in line with a growing number of instances in which researchers reported that they could not reproduce the published work of their colleagues. In fact, as early as 2005, Stanford University statistician John Ioannidis had asserted that most published research findings are false.
Now some leading psychologists are charging that the results published by the Open Science Collaboration are, in fact, false. In a rebuttal commentary published today in Science, they argue that mistakes in the various replication attempts led the Collaboration researchers to dramatically underestimate the actual reliability of the studies being examined. Among other things, they claim that
many of OSC's replication studies drew their samples from different populations than the original studies did. An original study that measured Americans' attitudes toward African-Americans was replicated with Italians, who do not share the same stereotypes; an original study that asked college students to imagine being called on by a professor was replicated with participants who had never been to college; and an original study that asked students who commute to school to choose between apartments that were short and long drives from campus was replicated with students who do not commute to school.
These are pretty clearly serious flaws, if true. The researchers also question the overall study's statistical analysis of the replication data:
OSC used a benchmark that did not take into account the multiple sources of error in their data, used a relatively low-powered design that demonstrably underestimates the true rate of replication, and permitted considerable infidelities that almost certainly biased their replication studies toward failure. As a result, OSC seriously underestimated the reproducibility of psychological science.
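The critics' power argument is easy to see in a quick simulation. Here is a minimal sketch of my own (not the OSC's or the rebuttal authors' actual analysis; the effect size, sample sizes, and significance test are all assumed for illustration): even if every original finding reflected a real effect, replications run at modest statistical power would still "fail" a predictable fraction of the time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not the OSC's actual parameters):
true_effect = 0.4        # standardized effect size (Cohen's d), assumed
n_per_group = 50         # replication sample size per group, assumed
n_replications = 10_000  # simulated replication attempts

successes = 0
for _ in range(n_replications):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(true_effect, 1.0, n_per_group)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n_per_group
                 + control.var(ddof=1) / n_per_group)
    # Count a "successful replication" as a significant effect in the
    # original direction (z > 1.96, i.e. one-sided alpha of ~2.5 percent).
    if diff / se > 1.96:
        successes += 1

print(f"Simulated replication rate: {successes / n_replications:.2f}")
# Prints roughly 0.51 even though the effect is real in 100% of cases.
```

With these assumed numbers, the simulated "replication rate" comes out near 50 percent despite every underlying effect being real. Whether the OSC's actual replications were underpowered in this way is, of course, exactly what the two camps are disputing.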
According to the New York Times, University of Virginia psychologist Brian Nosek, who headed up the OSC replication study, countered
that the critique was highly biased: "They are making assumptions based on selectively interpreting data and ignoring data that's antagonistic to their point of view."
Only time will tell whether the rebuttal is the grousing of some embarrassed old-timers or whether psychological research really is more reliable than the OSC found. Stay tuned.
In the meantime, the Open Science Collaboration is working on a replicability study of cancer biology research. Perhaps results derived from cells and lab rats will prove more reproducible than those from psychological experiments.
For more background, read my feature article on the replicability problem, "Broken Science."