One major obstacle in the “infotainment versus critical science writing” debate is that there is no universal definition of what constitutes “critical analysis” in science writing. How can we decide whether critical analysis is adequately represented in contemporary science writing or science journalism if we do not have a standardized method of assessing it? For this purpose, I would like to propose the following checklist of points that can be addressed in news articles or blog-posts which focus on the critical analysis of published scientific research. This checklist is intended for the life sciences – biological and medical research – but it can be easily modified and applied to critical science writing in other areas of research. Each category contains examples of questions that science writers can direct towards members of the scientific research team or institutional representatives, or answer by performing an independent review of the published scientific data. These questions will have to be modified according to the specific context of a research study.
1. Novelty of the scientific research:
Most researchers routinely claim that their findings are novel, but are the claims of novelty appropriate? Is the research pointing towards a fundamentally new biological mechanism or introducing a completely new scientific tool? Or does it just represent a minor incremental growth in our understanding of a biological problem?
2. Significance of the research:
How does the significance of the research compare to the significance of other studies in the field? A biological study might uncover new regulators of cell death or cell growth, but how many other such regulators have been discovered in recent years? How does the magnitude of the effect in the study compare to the magnitude of effects in other research studies? Suppressing a gene might prolong the survival of a cell or increase the regeneration of an organ, but have research groups published similar effects in studies which target other genes? Some research studies report effects that are statistically significant, but are they also biologically significant?
3. Replicability of the research:
Have the findings of the scientific study been replicated by other research groups? Does the research study attempt to partially or fully replicate prior research? If the discussed study has not yet been replicated, is there any information available on the general replicability success rate in this area of research?
4. Experimental design:
Did the researchers use an appropriate experimental design for the current study by ensuring that they included adequate control groups and addressed potential confounding factors? Were the experimental models appropriate for the questions they asked and for the conclusions they are drawing? Did the researchers study the effects they observed at multiple time points or just at one single time point? Did they report the results of all the time points or did they just pick the time points they were interested in?
Examples of issues: 1) Stem cell studies in which human stem cells are transplanted into injured or diseased mice are often conducted with immune deficient mice to avoid rejection of the human cells. Some studies do not assess whether the immune deficiency itself impacted the injury or disease, which could be a confounding factor when interpreting the results. 2) Studies which investigate the impact of the 24-hour internal biological clock on the expression of genes sometimes perform the studies in humans and animals who maintain a regular sleep-wake schedule. This obscures the cause-effect relationship because one is unable to ascertain whether the observed effects are truly regulated by an internal biological clock or whether they merely reflect changes associated with being awake versus asleep.
5. Experimental methods:
Are the methods used in the research study accepted by other researchers? If the methods are completely novel, have they been appropriately validated? Are there any potential artifacts that could explain the findings? How did the findings in a dish (“in vitro”) compare to the findings in an animal experiment (“in vivo”)? If new genes were introduced into cells or into animals, was the level of activity comparable to levels found in nature or were the gene expression levels 10-, 100- or even 1000-fold higher than physiologic levels?
Examples of issues: In stem cell research, a major problem faced by researchers is how stem cells are defined, what constitutes cell differentiation and how the fate of stem cells is tracked. One common problem that has plagued peer-reviewed studies published in high-profile journals is the inadequate characterization of the stem cells and of the function of the mature cells derived from them. Another problem in the stem cell literature is the fact that stem cells are routinely labeled with fluorescent markers to help track their fate, but it is increasingly becoming apparent that unlabeled cells (i.e. non-stem cells) can emit a non-specific fluorescence that is quite similar to that of the labeled stem cells. If a study does not address such problems, some of its key conclusions may be flawed.
6. Statistical analysis:
Did the researchers use the appropriate statistical tests to test the validity of their results? Were the experiments adequately powered (have a sufficient sample size) to draw valid conclusions? Did the researchers pre-specify the number of repeat experiments, animals or humans in their experimental groups prior to conducting the studies? Did they modify the number of animals or human subjects in the experimental groups during the course of the study?
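To give a sense of what "adequately powered" means in practice, here is a minimal sketch of a standard sample size calculation for comparing two group means. It uses the common normal-approximation formula with Cohen's d as the standardized effect size; the function name and the chosen effect sizes are illustrative, not taken from any particular study.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means.

    Uses the standard normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    where d is Cohen's standardized effect size.
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z(power)           # desired statistical power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A conventionally "large" effect (d = 0.8) needs far fewer subjects
# per group than a "small" effect (d = 0.2) at 80% power.
print(sample_size_per_group(0.8))  # 25
print(sample_size_per_group(0.2))  # 393
```

A science writer who knows the reported effect size and group sizes can use a calculation like this as a rough plausibility check: a study claiming to detect a small effect with only a handful of animals per group is likely underpowered.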
7. Consensus or dissent among scientists:
What do other scientists think about the published research? Do they agree with the novelty, significance and validity of the scientific findings as claimed by the authors of the published paper or do they have specific concerns in this regard?
8. Peer review process:
What were the major issues raised during the peer review process? How did the researchers address the concerns of the reviewers? Did any journals previously reject the study before it was accepted for publication?
9. Financial interests:
How was the study funded? Did the organization or corporation which funded the study have any say in how the study was designed, how the data was analyzed and what data was included in the publication? Do the researchers hold any relevant patents, own stock or receive other financial incentives from institutions or corporations that could benefit from this research?
10. Scientific misconduct, fraud or breach of ethics:
Are there any allegations or concerns about scientific misconduct, fraud or breach of ethics in the context of the research study? If such concerns exist, what are the specific measures taken by the researchers, institutions or scientific journals to resolve the issues? Have members of the research team been previously investigated for scientific misconduct or fraud? Are there concerns about how informed consent was obtained from the human subjects?
This is just a preliminary list and I would welcome any feedback on how to improve this list in order to develop tools for assessing the critical analysis content in science writing. It may not always be possible to obtain the pertinent information. For example, since the peer review process is usually anonymous, it may be impossible for a science writer to find out details about what occurred during the peer review process if the researchers themselves refuse to comment on it.
One could assign a point value to each of the categories in this checklist and then score individual science news articles or science blog-posts that discuss specific research studies. A more in-depth discussion of any issue should result in a greater point score for that category.
Points would not only be based on the number of issues raised but also on the quality of analysis provided in each category. Listing all the funding sources is not as helpful as providing an analysis of how the funding could have impacted the data interpretation. Similarly, if the science writer notices errors in the experimental design, it would be very helpful for the readers to understand whether these errors invalidate all major conclusions of the study or just some of its conclusions. Adding up all the points would then generate a comprehensive score that could become a quantifiable indicator of the degree of critical analysis contained in a science news article or blog-post.
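The scoring scheme described above can be sketched in a few lines of code. The category names mirror the checklist; the 0–3 scale and the example scores are hypothetical illustrations of the idea, not a calibrated instrument.

```python
# Categories taken from the checklist above.
CATEGORIES = [
    "novelty", "significance", "replicability", "experimental design",
    "experimental methods", "statistical analysis", "consensus or dissent",
    "peer review", "financial interests", "misconduct or ethics",
]

def critical_analysis_score(scores):
    """Sum per-category scores for one article.

    Hypothetical scale: 0 = not addressed, 1 = mentioned in passing,
    2 = discussed, 3 = in-depth analysis of quality (e.g. explaining
    how funding could have shaped data interpretation, not merely
    listing funding sources).
    """
    for category, value in scores.items():
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        if not 0 <= value <= 3:
            raise ValueError(f"score out of range for {category}: {value}")
    # Categories the article never touches contribute zero.
    return sum(scores.values())

# Example: an article that analyzes funding in depth but only
# mentions novelty in passing and ignores everything else.
article = {"financial interests": 3, "novelty": 1}
print(critical_analysis_score(article))  # 4
```

Out of a maximum of 30 under this illustrative scale, such a comprehensive score could serve as a rough, quantifiable indicator of critical analysis content, while the per-category breakdown shows where an article's analysis is thin.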
EDIT: The checklist now includes a new category – scientific misconduct, fraud or breach of ethics.