“Hype” or Uncertainty: The Reporting of Initial Scientific Findings in Newspapers

One of the cornerstones of scientific research is the reproducibility of findings. Novel scientific observations need to be validated by subsequent studies in order to be considered robust. This has proven to be somewhat of a challenge for many biomedical research areas, including high-impact studies in cancer research and stem cell research. The fact that an initial scientific finding of a research group cannot be confirmed by other researchers does not mean that the initial finding was wrong or that there was any foul play involved. The most likely explanation in biomedical research is tremendous biological variability. Human subjects and patients examined in one research study may differ substantially from those in follow-up studies. Biological cell lines and tools used in basic science studies can vary widely, depending on details such as the medium in which cells are kept in a culture dish. The variability in findings is not a weakness of biomedical research; in fact, it is a testament to the complexity of biological systems. Therefore, initial findings always need to be treated with caution and presented with their inherent uncertainty. Once subsequent studies – often with larger sample sizes – confirm the initial observations, the findings are viewed as more robust and gradually become accepted by the wider scientific community.

Even though most scientists become aware of the scientific uncertainty associated with an initial observation as their careers progress, non-scientists may be puzzled by shifting scientific narratives. People often complain that “scientists cannot make up their minds” – citing, for example, newspaper reports stating that drinking coffee may be harmful, only to be contradicted later by reports lauding the beneficial health effects of coffee. Accurately communicating scientific findings as well as the inherent uncertainty of such initial findings is a hallmark of critical science journalism.

A group of researchers led by Dr. Estelle Dumas-Mallet at the University of Bordeaux studied how much uncertainty newspapers communicate to the public when reporting initial medical research findings in their recently published paper “Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings”. Dumas-Mallet and her colleagues examined 426 English-language newspaper articles published between 1988 and 2009 which described 40 initial biomedical research studies. They focused on scientific studies in which a risk factor such as smoking or old age had been newly associated with a disease such as schizophrenia, autism, Alzheimer’s disease or breast cancer (a total of 12 diseases). The researchers only included scientific studies which had subsequently been re-evaluated by follow-up research, and found that less than one third of the initial studies had been confirmed by subsequent research. Dumas-Mallet and her colleagues were therefore interested in whether the newspaper articles, which were published shortly after the release of the initial research paper, adequately conveyed the uncertainty surrounding the initial findings and thus prepared their readers for subsequent research that might confirm or invalidate the initial work.

The University of Bordeaux researchers specifically examined whether the headlines of the newspaper articles were “hyped” or “factual”, whether the articles mentioned that this was an initial study, and whether they clearly indicated the need for replication or validation by subsequent studies. Roughly 35% of the headlines were “hyped”. One example of a “hyped” headline was “Magic key to breast cancer fight” instead of a more factual headline such as “Scientists pinpoint genes that raise your breast cancer risk”. Dumas-Mallet and her colleagues found that even though 57% of the newspaper articles mentioned that these medical research studies were initial findings, only 21% of newspaper articles included explicit “replication statements” such as “Tests on larger populations of adults must be performed” or “More work is needed to confirm the findings”.

The researchers next examined which characteristics of the newspaper articles made them more likely to convey the uncertainty or preliminary nature of the initial scientific findings. Newspaper articles with “hyped” headlines were less likely to mention the need for replicating and validating the results in subsequent studies. On the other hand, newspaper articles which included a direct quote from one of the research study authors were three times more likely to include a replication statement. In fact, approximately half of all the replication statements mentioned in the newspaper articles were found in author quotes, suggesting that many scientists who conducted the research readily emphasize the preliminary nature of their work. Another interesting finding was the gradual shift over time in conveying scientific uncertainty. “Hyped” headlines were rare before 2000 (only 15%) and became more frequent during the 2000s (43%). On the other hand, replication statements were more common before 2000 (35%) than after 2000 (16%). This suggests a trend towards conveying less uncertainty after 2000, which is surprising because the debate about scientific replicability in the biomedical research community seems to have become much more widespread in the past decade.

As with all scientific studies, we need to be aware of the limitations of the analysis performed by Dumas-Mallet and her colleagues. They focused on a very narrow area of biomedical research – newly identified risk factors for selected diseases. It remains to be seen whether other areas of biomedical research, such as the treatment of diseases or basic science discoveries of new molecular pathways, are also reported with “hyped” headlines and without replication statements. In other words, this research on “replication statements” in newspaper articles also needs to be replicated. It is also unclear whether the worrisome trend of overselling the robustness of initial research findings after the year 2000 still persists, since the analysis by Dumas-Mallet and colleagues did not include studies published after 2009. One would hope that the recent discussions among scientists about replicability issues would reverse this trend. Even though the findings of the University of Bordeaux researchers need to be replicated by others, science journalists and readers of newspapers can glean some important information from this study: one needs to be wary of “hyped” headlines, and it can be very useful to interview the authors of scientific studies when reporting on new research, especially asking them about the limitations of their work. “Hyped” newspaper headlines and an exaggerated sense of certainty in initial scientific findings may erode the long-term trust of the public in scientific research, especially if subsequent studies fail to replicate the initial results. Critical and comprehensive reporting of biomedical research studies – including their limitations and uncertainty – by science journalists is therefore a very important service to society which contributes to science literacy and science-based decision making.

Reference

Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2018). Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings. Science Communication, 40(1), 124-141.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Novelty in science – real necessity or distracting obsession?

It may take time for a tiny step forward to show its worth.
ellissharp/Shutterstock.com

Jalees Rehman, University of Illinois at Chicago

In a recent survey of over 1,500 scientists, more than 70 percent of them reported having been unable to reproduce other scientists’ findings at least once. Roughly half of the surveyed scientists ran into problems trying to reproduce their own results. No wonder people are talking about a “reproducibility crisis” in scientific research – an epidemic of studies that don’t hold up when run a second time.

Reproducibility of findings is a core foundation of science. If scientific results only hold true in some labs but not in others, then how can researchers feel confident about their discoveries? How can society put evidence-based policies into place if the evidence is unreliable?

Recognition of this “crisis” has prompted calls for reform. Researchers are feeling their way, experimenting with different practices meant to help distinguish solid science from irreproducible results. Some people are even starting to reevaluate how choices are made about what research actually gets tackled. Breaking innovative new ground is flashier than revisiting already published research. Does prioritizing novelty over replication naturally lead to this point?

Incentivizing the wrong thing?

One solution to the reproducibility crisis could be simply to conduct lots of replication studies. For instance, the scientific journal eLife is participating in an initiative to validate and reproduce important recent findings in the field of cancer research. The first set of these “rerun” studies was recently released and yielded mixed results: the results of two of the five studies were reproducible, one was not, and the two remaining studies did not provide definitive answers.

There's no need to restrict this sort of rerun study to cancer research – reproducibility issues can be spotted across various fields of scientific research.

Researchers should be rewarded for carefully shoring up the foundations of the field.
Alexander Raths/Shutterstock.com

But there’s at least one major obstacle to investing time and effort in this endeavor: the quest for novelty. The prestige of an academic journal depends at least partly on how often the research articles it publishes are cited. Thus, research journals often want to publish novel scientific findings which are more likely to be cited, not necessarily the results of newly rerun older research.

A study of clinical trials published in medical journals found the most prestigious journals prefer publishing studies considered highly novel and not necessarily those that have the most solid numbers backing up the claims. Funding agencies such as the National Institutes of Health ask scientists who review research grant applications to provide an “innovation” score in order to prioritize funding for the most innovative work. And scientists of course notice these tendencies – one study found the use of positive words like “novel,” “amazing,” “innovative” and “unprecedented” in paper abstracts and titles increased almost ninefold between 1974 and 2014.
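To make concrete how such a trend in word usage could be measured, here is a minimal Python sketch that tallies hype words in abstracts grouped by year; the word list and the toy abstracts are invented for illustration and are not the methodology of the cited study.

```python
from collections import Counter

# Illustrative word list; the cited study used its own, larger set of positive terms.
HYPE_WORDS = {"novel", "amazing", "innovative", "unprecedented"}

def hype_word_frequency(abstracts_by_year):
    """Return, for each year, the number of hype words per 1,000 words of abstract text."""
    freq = {}
    for year, abstracts in abstracts_by_year.items():
        words = [w.strip(".,;:").lower() for a in abstracts for w in a.split()]
        hits = sum(n for word, n in Counter(words).items() if word in HYPE_WORDS)
        freq[year] = 1000 * hits / max(len(words), 1)
    return freq

# Toy example with invented abstracts
abstracts_by_year = {
    1974: ["We describe a method to measure enzyme activity in liver cells."],
    2014: ["We report a novel and unprecedented mechanism with innovative implications."],
}
print(hype_word_frequency(abstracts_by_year))
```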

Genetics researcher Barak Cohen at Washington University in St. Louis recently published a commentary analyzing this growing push for novelty. He suggests that progress in science depends on a delicate balance between novelty and checking the work of other scientists. When rewards such as funding of grants or publication in prestigious journals emphasize novelty at the expense of testing previously published results, science risks developing cracks in its foundation.

Houses of brick, mansions of straw

Cancer researcher William Kaelin Jr., a recipient of the 2016 Albert Lasker Award for Basic Medical Research, recently argued for fewer “mansions of straw” and more “houses of brick” in scientific publications.

One of his main concerns is that scientific papers now inflate their claims in order to emphasize their novelty and the relevance of biomedical research for clinical applications. By exchanging depth of research for breadth of claims, researchers may be at risk of compromising the robustness of the work. By claiming excessive novelty and impact, researchers may undermine the actual significance of their work because they may fail to provide solid evidence for each claim.

Kaelin even suggests that some of his own work from the 1990s, which transformed cell biology research by discovering how cells can sense oxygen, may have struggled to get published today.

Prestigious journals often now demand complete scientific stories, from basic molecular mechanisms to proving their relevance in various animal models. Unexplained results or unanswered questions are seen as weaknesses. Instead of publishing one exciting novel finding that is robust, and which could spawn a new direction of research conducted by other groups, researchers now spend years gathering a whole string of findings with broad claims about novelty and impact.

There should be more than one path to a valuable journal publication.
Mehaniq/Shutterstock.com

Balancing fresh findings and robustness

A challenge for editors and reviewers of scientific manuscripts is assessing the novelty and likely long-term impact of the work in front of them. The eventual importance of a new, unique scientific idea is sometimes difficult to recognize even for peers who are grounded in existing knowledge. Many basic research studies form the basis of future practical applications. One recent study found that of basic research articles that received at least one citation, 80 percent were eventually cited by a patent application. But it takes time for practical significance to come to light.

A collaborative team of economics researchers recently developed an unusual measure of scientific novelty by carefully studying the references of a paper. They ranked a scientific paper as more novel if it cited a diverse combination of journals. For example, a scientific article citing a botany journal, an economics journal and a physics journal would be considered very novel if no other article had cited this combination of varied references before.

This measure of novelty allowed them to identify papers which were more likely to be cited in the long run. But it took roughly four years for these novel papers to start showing their greater impact. One may disagree with this particular indicator of novelty, but the study makes an important point: It takes time to recognize the full impact of novel findings.
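As a rough illustration of how a journal-combination novelty measure of this kind could work, here is a minimal Python sketch; the paper list, journal names and the pair-counting logic are illustrative assumptions rather than the exact procedure used by the economics researchers.

```python
from itertools import combinations

def journal_pair_novelty(papers):
    """For each paper (given as a list of journals it cites), count how many of its
    cited-journal pairs have never appeared together in any earlier paper.
    Papers are assumed to be ordered chronologically."""
    seen_pairs = set()
    novelty_scores = []
    for cited_journals in papers:
        pairs = set(combinations(sorted(set(cited_journals)), 2))
        new_pairs = pairs - seen_pairs   # journal combinations nobody has cited together before
        novelty_scores.append(len(new_pairs))
        seen_pairs |= pairs              # later papers no longer get credit for these pairs
    return novelty_scores

# Hypothetical example: the third paper combines botany with economics and physics,
# pairings not cited together before, so it scores as more novel.
papers = [
    ["J Botany", "Plant Cell"],
    ["J Physics", "Phys Rev"],
    ["J Botany", "J Economics", "J Physics"],
]
print(journal_pair_novelty(papers))  # [1, 1, 3]
```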

Realizing how difficult it is to assess novelty should give funding agencies, journal editors and scientists pause. Progress in science depends on new discoveries and following unexplored paths – but solid, reproducible research requires an equal emphasis on the robustness of the work. By restoring the balance between demands and rewards for novelty and robustness, science will achieve even greater progress.

Jalees Rehman, Associate Professor of Medicine and Pharmacology, University of Illinois at Chicago

This article was originally published on The Conversation. Read the original article.

Critical Science Writing: A Checklist for the Life Sciences

One major obstacle in the “infotainment versus critical science writing” debate is that there is no universal definition of what constitutes “critical analysis” in science writing. How can we decide whether or not critical science writing is adequately represented in contemporary science writing or science journalism if we do not have a standardized method of assessing it? For this purpose, I would like to propose the following checklist of points that can be addressed in news articles or blog-posts which focus on the critical analysis of published scientific research. This checklist is intended for the life sciences – biological and medical research – but it can easily be modified and applied to critical science writing in other areas of research. Each category contains examples of questions which science writers can direct towards members of the scientific research team or institutional representatives, or answer by performing an independent review of the published scientific data. These questions will have to be modified according to the specific context of a research study.


1. Novelty of the scientific research:

Most researchers routinely claim that their findings are novel, but are the claims of novelty appropriate? Is the research pointing towards a fundamentally new biological mechanism or introducing a completely new scientific tool? Or does it just represent a minor incremental advance in our understanding of a biological problem?


2. Significance of the research:

How does the significance of the research compare to the significance of other studies in the field? A biological study might uncover new regulators of cell death or cell growth, but how many other such regulators have been discovered in recent years? How does the magnitude of the effect in the study compare to the magnitude of effects in other research studies? Suppressing a gene might prolong the survival of a cell or increase the regeneration of an organ, but have research groups published similar effects in studies which target other genes? Some research studies report effects that are statistically significant, but are they also biologically significant?
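To illustrate that last distinction, here is a small, hypothetical Python sketch in which a difference of a single percentage point becomes highly statistically significant simply because the sample size is enormous; all numbers are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical data: per-experiment cell survival is ~50% in controls and ~51% after
# a gene knockdown: a biologically tiny 1-percentage-point difference.
n = 100_000
control = rng.normal(loc=50.0, scale=10.0, size=n)
treated = rng.normal(loc=51.0, scale=10.0, size=n)

t_stat, p_value = stats.ttest_ind(treated, control)
effect = treated.mean() - control.mean()

print(f"difference in means: {effect:.2f} percentage points")
print(f"p-value: {p_value:.2e}")  # extremely small, i.e. "statistically significant"
```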


3. Replicability:

Have the findings of the scientific study been replicated by other research groups? Does the research study attempt to partially or fully replicate prior research? If the discussed study has not yet been replicated, is there any information available on the general replicability success rate in this area of research?


4. Experimental design:

Did the researchers use an appropriate experimental design for the current study by ensuring that they included adequate control groups and addressed potential confounding factors? Were the experimental models appropriate for the questions they asked and for the conclusions they are drawing? Did the researchers study the effects they observed at multiple time points or just at one single time point? Did they report the results of all the time points or did they just pick the time points they were interested in?

Examples of issues: 1) Stem cell studies in which human stem cells are transplanted into injured or diseased mice are often conducted with immune-deficient mice to avoid rejection of the human cells. Some studies do not assess whether the immune deficiency itself impacted the injury or disease, which could be a confounding factor when interpreting the results. 2) Studies investigating the impact of the 24-hour internal biological clock on gene expression are sometimes performed in humans and animals that maintain a regular sleep-wake schedule. This obscures the cause-effect relationship because one is unable to ascertain whether the observed effects are truly regulated by an internal biological clock or whether they merely reflect changes associated with being awake versus asleep.


5. Experimental methods:

Are the methods used in the research study accepted by other researchers? If the methods are completely novel, have they been appropriately validated? Are there any potential artifacts that could explain the findings? How did the findings in a dish (“in vitro”) compare to the findings in an animal experiment (“in vivo”)? If new genes were introduced into cells or into animals, was the level of activity comparable to levels found in nature, or were the gene expression levels 10-, 100- or even 1000-fold higher than physiologic levels?

Examples of issues: In stem cell research, a major problem faced by researchers is how stem cells are defined, what constitutes cell differentiation and how the fate of stem cells is tracked. One common problem that has plagued peer-reviewed studies published in high-profile journals is the inadequate characterization of the stem cells and of the function of the mature cells derived from them. Another problem in the stem cell literature is that stem cells are routinely labeled with fluorescent markers to help track their fate, yet it is becoming increasingly apparent that unlabeled cells (i.e. non-stem cells) can emit non-specific fluorescence that is quite similar to that of the labeled stem cells. If a study does not address such problems, some of its key conclusions may be flawed.


6. Statistical analysis:

Did the researchers use the appropriate statistical tests to assess the validity of their results? Were the experiments adequately powered (i.e., did they have a sufficient sample size) to draw valid conclusions? Did the researchers pre-specify the number of repeat experiments, animals or humans in their experimental groups prior to conducting the studies? Did they modify the number of animals or human subjects in the experimental groups during the course of the study?
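For readers who want a sense of what “adequately powered” means in practice, here is a brief sketch using the statsmodels power module; the effect size, power and significance thresholds are placeholder assumptions rather than values from any particular study.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many subjects per group would be needed to detect a "medium" effect
# (Cohen's d = 0.5) with 80% power at the conventional alpha of 0.05?
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"required sample size per group: {n_per_group:.0f}")  # roughly 64

# Conversely: what power did a study with only 8 animals per group have
# to detect that same effect?
power = analysis.solve_power(effect_size=0.5, nobs1=8, alpha=0.05)
print(f"achieved power with n=8 per group: {power:.2f}")  # roughly 0.15
```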


7. Consensus or dissent among scientists:

What do other scientists think about the published research? Do they agree with the novelty, significance and validity of the scientific findings as claimed by the authors of the published paper or do they have specific concerns in this regard?


8. Peer review process:

What were the major issues raised during the peer review process? How did the researchers address the concerns of the reviewers? Did any journals previously reject the study before it was accepted for publication?


9. Financial interests:

How was the study funded? Did the organization or corporation which funded the study have any say in how the study was designed, how the data was analyzed and what data was included in the publication? Do the researchers hold any relevant patents, own stock or receive other financial incentives from institutions or corporations that could benefit from this research?


10. Scientific misconduct, fraud or breach of ethics:

Are there any allegations or concerns about scientific misconduct, fraud or breach of ethics in the context of the research study? If such concerns exist, what are the specific measures taken by the researchers, institutions or scientific journals to resolve the issues? Have members of the research team been previously investigated for scientific misconduct or fraud? Are there concerns about how informed consent was obtained from the human subjects?


This is just a preliminary list, and I would welcome any feedback on how to improve it in order to develop tools for assessing the critical analysis content in science writing. It may not always be possible to obtain the pertinent information. For example, since the peer review process is usually anonymous, it may be impossible for a science writer to find out what occurred during peer review if the researchers themselves refuse to comment on it.

One could assign a point value to each of the categories in this checklist and then score individual science news articles or science blog-posts that discuss specific research studies. A more in-depth discussion of any issue should result in a greater point score for that category.

Points would not only be based on the number of issues raised but also on the quality of analysis provided in each category. Listing all the funding sources is not as helpful as providing an analysis of how the funding could have impacted the data interpretation. Similarly, if the science writer notices errors in the experimental design, it would be very helpful for the readers to understand whether these errors invalidate all major conclusions of the study or just some of its conclusions. Adding up all the points would then generate a comprehensive score that could become a quantifiable indicator of the degree of critical analysis contained in a science news article or blog-post.
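Here is a bare-bones Python sketch of how such a scoring scheme might be implemented; the category weights and the example scores are arbitrary placeholders that would need calibration, and, as noted above, the points awarded in each category should ultimately reflect the quality of the analysis rather than a simple tally of issues mentioned.

```python
# Checklist categories (mirroring the list above) with an arbitrary maximum point
# value per category; these weights are placeholders, not a validated rubric.
MAX_POINTS = {
    "novelty": 5,
    "significance": 5,
    "replicability": 5,
    "experimental design": 5,
    "experimental methods": 5,
    "statistical analysis": 5,
    "consensus or dissent": 5,
    "peer review process": 5,
    "financial interests": 5,
    "misconduct or ethics": 5,
}

def critical_analysis_score(article_scores):
    """Sum the per-category points awarded to one article, capping each category
    at its maximum, and return the total alongside the maximum possible score."""
    total = sum(min(article_scores.get(cat, 0), cap) for cat, cap in MAX_POINTS.items())
    return total, sum(MAX_POINTS.values())

# Hypothetical example: an article that discusses replication and funding in depth
# but says nothing about the peer review process.
example = {"novelty": 3, "replicability": 5, "financial interests": 4}
score, maximum = critical_analysis_score(example)
print(f"critical analysis score: {score}/{maximum}")  # 12/50
```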


********************

EDIT: The checklist now includes a new category – scientific misconduct, fraud or breach of ethics.