Evaluating the quality of Covid-19 research is challenging, even for the scientists who study it. Studies are rapidly pouring out of labs and hospitals, but not all of that information is rigorously vetted before it makes its way into the world. Some studies are small and anecdotal. Others are based on bad data or misplaced assumptions. Many are released as preprints without peer review. Others are hyped up with big press releases that overstate the results—but when scientists are finally able to dive into the research, sometimes the study isn’t as groundbreaking as it seemed.
Take hydroxychloroquine, an antimalarial drug that appeared promising in the early stages of the pandemic. Anecdotal evidence from a Chinese hospital performing a clinical trial showed the drug might have some benefits, and an early trial in France seemed promising. The US Food and Drug Administration authorized the medication for emergency use in Covid-19 patients. But then the story got complicated. One trial found the drug increased the death rate among patients, but was later retracted because it relied on data that could not be verified. Finally, a large-scale, double-blind trial found the drug didn’t hurt patients, but didn’t help them, either. The FDA revoked its emergency use authorization for the drug on June 15.
The promise of hydroxychloroquine rose and fell in just three short months, a lightning-fast turnaround for scientific research. Keeping up with this flood of information about coronavirus therapies and reviewing all the studies coming out is a daunting task, especially for readers without a research or medical background who just want to know what’s going on and how to stay healthy. “We can’t expect everyone to be able to pick up any research paper and know that it’s high quality,” says Elizabeth Stuart, a professor at the Johns Hopkins Bloomberg School of Public Health.
Stuart is part of a team of colleagues at Johns Hopkins who run the Novel Coronavirus Research Compendium (NCRC). The team includes statisticians, epidemiologists, and experts on vaccines, clinical research, and disease modeling; together they rapidly review new studies and make reliable information accessible to the public. For those of us who don’t have advanced degrees in these areas, distinguishing an inflated headline from a genuinely important discovery can feel impossible. But by looking at where a study was published, what data it uses, and how it fits into the larger body of scientific research, even the armchair experts among us can start to be more savvy science information consumers.
Here are a few things experts say we should all do when evaluating new research.
Check the Source
First step: Look at where it was published. That can offer clues about things like whether the research is finished or still in revision, if it’s been reviewed by other scientists, or whether it’s rigorous enough to be accepted by top journals like the Journal of the American Medical Association, The Lancet, or The New England Journal of Medicine.
Normally, scientists submit their studies to scholarly journals. Editors review each one, and if a study’s design or data don’t live up to the journal’s standards, they might reject it outright. Otherwise, the editor sends the paper out to a group of scientists working in the authors’ field. Those peers examine the study even more closely for mistakes and send it back to the authors with suggestions for ways to make the paper stronger. The authors then revise their paper to address those concerns. The whole process, from the time authors submit a paper until it’s finally published, generally takes about a year.