“Hype” or Uncertainty: The Reporting of Initial Scientific Findings in Newspapers

One of the cornerstones of scientific research is the reproducibility of findings. Novel scientific observations need to be validated by subsequent studies in order to be considered robust. This has proven to be somewhat of a challenge for many biomedical research areas, including high-impact studies in cancer research and stem cell research. The fact that an initial scientific finding of one research group cannot be confirmed by other researchers does not mean that the initial finding was wrong or that there was any foul play involved. The most likely explanation in biomedical research is tremendous biological variability. Human subjects and patients examined in one research study may differ substantially from those in follow-up studies. Biological cell lines and tools used in basic science studies can vary widely, depending on details as minute as the medium in which cells are kept in a culture dish. The variability in findings is not a weakness of biomedical research; in fact, it is a testament to the complexity of biological systems. Initial findings therefore always need to be treated with caution and presented with their inherent uncertainty. Once subsequent studies – often with larger sample sizes – confirm the initial observations, they are viewed as more robust and gradually become accepted by the wider scientific community.

Even though most scientists become aware of the scientific uncertainty associated with an initial observation as their careers progress, non-scientists may be puzzled by shifting scientific narratives. People often complain that “scientists cannot make up their minds” – citing newspaper reports which state that drinking coffee may be harmful, only to be subsequently contradicted by reports which laud the beneficial health effects of coffee drinking. Accurately communicating scientific findings, as well as the inherent uncertainty of such initial findings, is a hallmark of critical science journalism.

A group of researchers led by Dr. Estelle Dumas-Mallet at the University of Bordeaux studied the extent of uncertainty communicated to the public by newspapers when reporting initial medical research findings in their recently published paper “Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings”. Dumas-Mallet and her colleagues examined 426 English-language newspaper articles published between 1988 and 2009 which described 40 initial biomedical research studies. They focused on scientific studies in which a new risk factor, such as smoking or old age, had been associated with a disease such as schizophrenia, autism, Alzheimer’s disease or breast cancer (12 diseases in total). The researchers only included scientific studies which had subsequently been re-evaluated by follow-up studies, and found that less than one third of the initial studies had been confirmed by subsequent research. Dumas-Mallet and her colleagues were therefore interested in whether the newspaper articles, which were published shortly after the release of the initial research paper, adequately conveyed the uncertainty surrounding the initial findings and thus adequately prepared their readers for subsequent research that might confirm or invalidate the initial work.

The University of Bordeaux researchers specifically examined whether the headlines of the newspaper articles were “hyped” or “factual”, whether the articles mentioned that this was an initial study, and whether they clearly indicated the need for replication or validation by subsequent studies. Roughly 35% of the headlines were “hyped”. One example of a “hyped” headline was “Magic key to breast cancer fight”, in contrast to a more factual headline such as “Scientists pinpoint genes that raise your breast cancer risk”. Dumas-Mallet and her colleagues found that even though 57% of the newspaper articles mentioned that these medical research studies were initial findings, only 21% of newspaper articles included explicit “replication statements” such as “Tests on larger populations of adults must be performed” or “More work is needed to confirm the findings”.

The researchers next examined the key characteristics of the newspaper articles which were more likely to convey the uncertainty or preliminary nature of the initial scientific findings. Newspaper articles with “hyped” headlines were less likely to mention the need for replicating and validating the results in subsequent studies. On the other hand, newspaper articles which included a direct quote from one of the research study authors were three times more likely to include a replication statement. In fact, approximately half of all the replication statements mentioned in the newspaper articles were found in author quotes, suggesting that many scientists who conducted the research readily emphasize the preliminary nature of their work. Another interesting finding was a gradual shift over time in conveying scientific uncertainty. “Hyped” headlines were rare before 2000 (only 15%) and became more frequent during the 2000s (43%). Conversely, replication statements were more common before 2000 (35%) than after 2000 (16%). This suggests a trend towards conveying less uncertainty after 2000, which is surprising because the debate about scientific replicability in the biomedical research community seems to have become much more widespread in the past decade.

As with all scientific studies, we need to be aware of the limitations of the analysis performed by Dumas-Mallet and her colleagues. They focused on a very narrow area of biomedical research – newly identified risk factors for selected diseases. It remains to be seen whether other areas of biomedical research, such as the treatment of diseases or basic science discoveries of new molecular pathways, are also reported with “hyped” headlines and without replication statements. In other words, this research on “replication statements” in newspaper articles also needs to be replicated. It is also not clear whether the worrisome post-2000 trend of overselling the robustness of initial research findings still persists, since Dumas-Mallet and colleagues did not analyze studies published after 2009. One would hope that the recent discussions about replicability issues among scientists would reverse this trend. Even though the findings of the University of Bordeaux researchers need to be replicated by others, science journalists and readers of newspapers can glean some important information from this study: one needs to be wary of “hyped” headlines, and it can be very useful to interview the authors of scientific studies when reporting about new research, especially asking them about the limitations of their work. “Hyped” newspaper headlines and an exaggerated sense of certainty in initial scientific findings may erode the public’s long-term trust in scientific research, especially if subsequent studies fail to replicate the initial results. Critical and comprehensive reporting of biomedical research studies – including their limitations and uncertainty – by science journalists is therefore a very important service to society which contributes to science literacy and science-based decision making.

Reference

Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2018). Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings. Science Communication, 40(1), 124-141.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.


Neutrality, Balance and Anonymous Sources in Science Blogging – #scioStandards

This is Part 2 of a series of blog posts in anticipation of the Upholding Standards in Scientific Blogs session (Session 10B, #scioStandards), which I will be facilitating at noon on Saturday, March 1 at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA). Please read Part 1 here. The goal of these blog posts is to raise questions which readers can ponder and hopefully discuss during the session.


1. Neutrality

Neutrality is prized by scientists and journalists. Scientists are supposed to report and analyze their scientific research in a neutral fashion. Similarly, journalistic professionalism requires a neutral and objective stance when reporting or analyzing news. Nevertheless, scientists and journalists are also aware that there is no perfect neutrality. We are all subject to conscious and unconscious biases, and how we report data or events is colored by those biases. Not only is it impossible to be truly “neutral”, but one can even question whether “neutrality” should be a universal mandate. Neutrality can make us passive, especially when we see a clear ethical mandate to take action. Should one report in a neutral manner about genocide instead of becoming an advocate for the victims? Should a scientist who observes the destruction of ecosystems report on it in a neutral manner? Is it acceptable, or perhaps even required, for such a scientist to abandon neutrality and become an advocate for protecting those ecosystems?

Science bloggers or science journalists have to struggle to find the right balance between neutrality and advocacy. Political bloggers and journalists who are enthusiastic supporters of a political party will find it difficult to preserve neutrality in their writing, but their target audiences may not necessarily expect them to remain neutral. I am often fascinated and excited by scientific discoveries and concepts that I want to write about, but I also notice how my enthusiasm for science compromises my neutrality. Should science bloggers strive for neutrality and avoid advocacy? Or is it understood that their audiences do not expect neutrality?

 

2. Balance

One way to increase objectivity and neutrality in science writing is to provide balanced views. When discussing a scientific discovery or concept, one can also cite or reference scientists with opposing views. This underscores that scientific opinion is not a monolith and that most scientific findings can and should be challenged. However, the mandate to provide balance can also lead to “false balance”, in which two opposing opinions are presented as equivalent perspectives even though one of the two sides has little to no scientific evidence to back up its claims. The overwhelming majority of climatologists (97-98%) agree that humans are driving global warming, so it would be “false balance” to give equal space to opposing fringe views. Most science bloggers would also avoid “false balance” when it comes to reporting on the scientific value of homeopathy, since nearly every scientist in the world agrees that homeopathy has no scientific data to back it up.

But how should science bloggers decide what constitutes “necessary balance” versus “false balance” when writing about areas of research where the scientific evidence is more ambivalent? What about a scientific discovery which 80% of scientists think is a landmark finding and 20% of scientists believe is a fluke? How does one find out about the scientific rigor of the various viewpoints, and how should a blog post reflect these differences in opinion? Press releases of universities or research institutions usually only cite the researchers who conducted a scientific study, but how does one find out about other scientists who disagree with the significance of the new study?

 

3. Anonymous Sources

Most scientific peer review is conducted with anonymous sources. The editors of peer reviewed scientific journals send out newly submitted manuscripts to expert reviewers in the field but they try to make sure that the names of the reviewers remain confidential. This helps ensure that the reviewers can comment freely about any potential flaws in the manuscript without having to fear retaliation from the authors who might be incensed about the critique. Even in the post-publication phase, anonymous commenters can leave critical comments about a published study at the post-publication peer review website PubPeer. The comments made by anonymous as well as identified commenters at PubPeer played an important role in raising questions about recent controversial stem cell papers. On the other hand, anonymous sources may also use their cover to make baseless accusations and malign researchers. In the case of journals, the responsibility lies with the editors to ensure that their anonymous reviewers are indeed behaving in a professional manner and not abusing their anonymity.

Investigative political journalists also often rely on anonymous sources and whistle-blowers to receive critical information that would have otherwise been impossible to obtain. Journalists are also trained to ensure that their anonymous sources are credible and that they are not abusing their anonymity.

Should science bloggers and science journalists also consider using anonymous sources? Would unnamed scientists provide a more thorough critical appraisal of the quality of scientific research or would this open the door to abuse?

 

I hope that you leave comments on this post, tweet your thoughts using the #scioStandards hashtag and discuss your views at the Science Online conference.

Is It Possible To Have Excess Weight And Still Be Healthy?

Is it possible to be overweight or obese and still be considered healthy? Most physicians advise their patients who are overweight or obese to lose weight because excess weight is a known risk factor for severe chronic diseases such as diabetes, high blood pressure or cardiovascular disease. However, in recent years, a controversy has arisen regarding the actual impact of increased weight on an individual’s life expectancy or risk of suffering from heart attacks. Some researchers argue that being overweight (body mass index between 25 and 30; calculate your body mass index here) or obese (body mass index greater than 30) primarily affects one’s metabolic health, and that it is the prolonged exposure to metabolic problems which in turn leads to cardiovascular disease or death.
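Since the body mass index is central to these definitions, here is a minimal sketch of the calculation (BMI is weight in kilograms divided by the square of height in meters) using the cut-offs cited above; the function names are mine, purely for illustration:

```python
def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by the square of height in meters."""
    return weight_kg / height_m ** 2

def weight_category(bmi: float) -> str:
    """Categories as defined in the article: 25-30 overweight, over 30 obese."""
    if bmi >= 30:
        return "obese"
    if bmi >= 25:
        return "overweight"
    return "not overweight"

bmi = body_mass_index(weight_kg=85, height_m=1.75)
print(f"BMI = {bmi:.1f} -> {weight_category(bmi)}")  # BMI = 27.8 -> overweight
```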


 

According to this view, merely having excess weight is not dangerous. It only becomes a major problem if it causes metabolic problems such as high cholesterol levels, high blood sugar levels and diabetes or high blood pressure. This suggests that there is a weight/health spectrum which includes overweight or obese individuals with normal metabolic parameters who are not yet significantly impacted by the excess weight (“healthy overweight” and “healthy obesity”). The other end of the spectrum includes overweight and obese individuals who also have significant metabolic abnormalities due to the excess weight and these individuals are at a much higher risk for heart disease and death because of the metabolic problems.

Other researchers disagree with this view and propose that all excess weight is harmful, independent of whether the overweight or obese individuals have normal metabolic parameters. To resolve this controversy, researchers at Mount Sinai Hospital and the University of Toronto recently performed a meta-analysis of major clinical studies, comparing mortality (risk of death) and heart disease (as defined by events such as heart attacks) in normal weight, overweight and obese individuals grouped by their metabolic health.

The study was recently published in the Annals of Internal Medicine (2013) as “Are Metabolically Healthy Overweight and Obesity Benign Conditions?: A Systematic Review and Meta-analysis” and provided data on six groups of individuals: 1) metabolically healthy and normal weight, 2) metabolically healthy and overweight, 3) metabolically healthy and obese, 4) metabolically unhealthy and normal weight, 5) metabolically unhealthy and overweight and 6) metabolically unhealthy and obese. The researchers could only include studies which had measured metabolic health (normal blood sugar, blood pressure, cholesterol, etc.) alongside weight.

The first important finding was that metabolically healthy overweight individuals did NOT have a significantly higher risk of death and cardiovascular events when compared to metabolically healthy normal weight individuals. The researchers then analyzed the risk profile of the metabolically healthy obese individuals and found that their risk was 1.19-fold higher than that of their normal weight counterparts, but this slight increase in risk was not statistically significant: the 95% confidence interval was 0.98 to 1.38, and for the finding to be statistically significant, the lower bound of the confidence interval would have needed to be higher than 1.0 instead of 0.98.

The researchers then decided to exclude studies which did not provide at least 10 years of follow-up data on the enrolled subjects. This new rule excluded studies which had shown no significant impact of obesity on survival. When the researchers re-analyzed their data after the exclusions, they found that metabolically healthy obese individuals did have a statistically significant higher risk! Metabolically healthy obese subjects had a 1.24-fold higher risk, with a confidence interval of 1.02 to 1.55. The lower bound of the confidence interval was now just above the 1.0 threshold, making the result statistically significant.
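To make the confidence-interval logic concrete, here is a minimal sketch of how a 95% confidence interval for a relative risk is typically computed (on the log scale) and checked against the 1.0 threshold. The counts below are hypothetical, purely for illustration; they are not the data of this meta-analysis:

```python
import math

def relative_risk_ci(events_exp, total_exp, events_ctl, total_ctl, z=1.96):
    """Relative risk and its 95% CI, using the standard log-scale approximation."""
    rr = (events_exp / total_exp) / (events_ctl / total_ctl)
    se_log_rr = math.sqrt(
        1 / events_exp - 1 / total_exp + 1 / events_ctl - 1 / total_ctl
    )
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: 120 deaths/events among 1,000 obese subjects vs.
# 100 among 1,000 normal weight subjects
rr, lower, upper = relative_risk_ci(120, 1000, 100, 1000)
significant = lower > 1.0  # the entire interval must lie above 1.0
print(f"RR = {rr:.2f}, 95% CI {lower:.2f}-{upper:.2f}, significant: {significant}")
```

With these made-up counts the interval spans 1.0, so the 1.2-fold increase would not be deemed statistically significant – the same logic by which the 1.19-fold estimate (CI 0.98 to 1.38) above fails the threshold while the 1.24-fold estimate (CI 1.02 to 1.55) passes it.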

Another important finding was that among metabolically unhealthy individuals, all three groups (normal weight, overweight, obese) had a similar risk profile. Metabolically unhealthy normal weight subjects had a three-fold higher risk than metabolically healthy normal weight individuals. The metabolically unhealthy overweight and obese groups also had a roughly three-fold higher risk when compared to their metabolically healthy counterparts. This means that metabolic parameters are far more important predictors of cardiovascular health than weight alone (compare the rather small 1.24-fold increase in risk with the 3-fold increase).

Unfortunately, the authors of the study did not provide a comprehensive discussion of these findings. Instead, they conclude that there is no “healthy obesity” and suggest that all excess weight is bad, even if one is metabolically healthy. The discussion section of the paper glosses over the important finding that metabolically healthy overweight individuals do not have a higher risk. The authors also do not emphasize that even the purported effects of obesity in metabolically healthy individuals were only marginally significant. The editorial accompanying the paper is even more biased and carries the definitive title “The Myth of Healthy Obesity”. “Myth” is a rather strong word considering the rather small impact of the individuals’ weight on their overall risk.

 

Some press reports also went along with the skewed interpretation presented by the study authors and the editorial.

 

A BBC article describing the results stated:

 

It has been argued that being overweight does not necessarily imply health risks if individuals remain healthy in other ways.

The research, published in Annals of Internal Medicine, contradicts this idea.

 

This BBC article conflates the terms overweight and obese, ignoring the fact that the study showed that metabolically healthy overweight individuals actually do not have a higher risk.

 

The New York Times blog cited a study author:

 

“The message here is pretty clear,” said the lead author, Dr. Caroline K. Kramer, a researcher at the University of Toronto. “The results are very consistent. It’s not O.K. to be obese. There is no such thing as healthy obesity.”

 

Suggesting that the message is “pretty clear” is somewhat overreaching. One of the key problems with using this meta-analysis to reach definitive conclusions about “healthy overweight” or “healthy obesity” is that the study authors and the editorial equate increased risk with being unhealthy. Definitions of what constitutes “health” or “disease” should be based on scientific parameters (biomarkers in the blood, functional assessments of cardiovascular health, etc.) and not just on increased risk. Men have a higher risk of dying from cardiovascular disease than women. Does this mean that being a healthy man is a myth? Another major weakness of the study is that it included no data on regular exercise. Numerous studies have shown that regular exercise reduces the risk of cardiovascular events. It is quite possible that the mild increase in cardiovascular risk in the metabolically healthy obese group is due, in part, to lower levels of exercise.

This study does not prove that healthy obesity is a “myth”. Overweight individuals with normal metabolic health do not yet have a significant elevation in their cardiovascular risk. At this stage, one can indeed be “overweight” as defined by one’s body mass index but still be considered “healthy”, as long as all the other metabolic parameters are within normal ranges and one abides by general health recommendations such as avoiding tobacco and exercising regularly. If an overweight person progresses to becoming obese, he or she may be at slightly higher risk for cardiovascular events even if their metabolic health remains intact. The important take-home message from this study is that while obesity itself can be a risk factor for cardiovascular disease, it is far more important to ensure metabolic health – by controlling cholesterol levels and blood pressure, preventing diabetes, and encouraging regular exercise – than to focus solely on an individual’s weight.

 


Kramer, C.K., Zinman, B., & Retnakaran, R. (2013). Are metabolically healthy overweight and obesity benign conditions?: A systematic review and meta-analysis. Annals of Internal Medicine, 159(11), 758-769. PMID: 24297192

‘Infotainment’ and Critical Science Journalism

I recently wrote an op-ed piece for the Guardian in which I suggested that there is too much emphasis on ‘infotainment’ in contemporary science journalism and too little critical science journalism. The response to the article was unexpectedly strong, provoking some hostile comments on Twitter, and some of the angriest comments seemed to indicate a misunderstanding of the core message.

One of the themes that emerged in response to the article was the Us-vs.-Them perception that “scientists” were attacking “journalists”. This was surprising because, as a science blogger, I had assumed that I, too, was a science journalist. My definitions of scientist and journalist tend to be rather broad and inclusive. I think of scientists with a special interest and expertise in communicating science to a broad readership as science journalists. I also consider journalists with a significant interest and expertise in science to be scientists. My inclusive definitions of scientists and journalists have been influenced in part by an article written by Bora Zivkovic, an outstanding science journalist and scientist and the person who inspired me to become a science blogger. As Bora Zivkovic reminds us, scientists and journalists have a lot in common: they are supposed to be critical and skeptical, they obtain and analyze data, and they communicate their findings to an audience after carefully evaluating their data. However, it is apparent that some scientists and journalists are protective of their respective domains. Some scientists may not accept science journalists as fellow scientists unless they are part of an active science laboratory. Conversely, some journalists may not accept scientists as fellow journalists unless their primary employer is a media organization. For the purpose of this discussion, I will therefore try to use the more generic term “science writing” instead of “science journalism”.

Are infotainment science writing and critical science writing opposites? This was one of the major questions that arose in the Twitter discussion. The schematic below illustrates infotainment and critical science writing.

[Schematic: a triangle whose corners are information, entertainment and critical analysis]

Although this schematic of a triangle might seem oversimplified, it is a tool that I use to help me in my own science writing. “Critical science writing” (base of the triangle) tends to provide information and critical analysis of scientific research to the readers. Infotainment science writing minimizes the critical analysis of the research and instead focuses on presenting content about scientific research in an entertaining style. Scientific satire as a combination of entertainment and critical analysis was not discussed in the Guardian article, but I think that this too is a form of science writing that should be encouraged.

Articles or blog-posts can fall anywhere within this triangle, which is why infotainment and critical science writing are not true dichotomies; they simply have distinct emphases. Infotainment science writing can include some degree of critical analysis, and critical science writing can be somewhat entertaining. However, it is rare for science writing (or other forms of writing) to strike a balance that includes accurate scientific information, entertainment, and a profound critical analysis that challenges the scientific methodology or the scientific establishment, all in one article. In American political journalism, Jon Stewart and the Daily Show are perhaps one example of how one can inform, entertain and be critical – all in one succinct package. Contemporary science writing that is informative and entertaining (‘infotainment’) rarely challenges the scientific establishment the way Jon Stewart challenges the political establishment.

Is ‘infotainment’ a derogatory term? Some readers of the Guardian article assumed that I was not only claiming that all science journalism is ‘infotainment’, but also putting down ‘infotainment’ science journalism. There is nothing wrong with writing about science in an informative and entertaining manner; ‘infotainment’ science writing should therefore not be construed as a derogatory term. There are, however, differences between good and sloppy infotainment science writing. Good infotainment science writing is accurate in terms of the scientific information it conveys, whereas sloppy infotainment science writing discards scientific accuracy to maximize hype and entertainment value. Similarly, there is good and sloppy critical science writing. Good critical science writing is painstakingly careful in the analysis of the scientific data and its scientific context, reviewing numerous related scientific studies in the field and putting the scientific work in perspective. Sloppy critical science writing, on the other hand, might single out one scientific study and attempt to discredit a whole area of research without examining context. Examples of sloppy critical science writing can be found in the anti-global-warming literature, which homes in on a few minor scientific discrepancies but ignores the fact that 97-98% of climate scientists agree that humans are the primary cause of global warming.

Instead of just discussing these distinctions in abstract terms, I will use some of my prior blog-posts to illustrate differences between different types of science writing, such as infotainment, critical science writing or scientific satire. I find it easier to critique my own science writing than that of other science writers, probably because I am plagued by the same self-doubts that most writers struggle with. The following analysis may be helpful for other science writers who want to see where their articles and blog-posts fall on the information – critical analysis – entertainment spectrum.

 

A. Infotainment science writing

Infotainment science writing allows me to write about exciting or unusual new discoveries in a fairly manageable amount of time, without having to extensively review the literature in the field or perform an in-depth analysis of the statistics and every figure in the study under discussion. After providing some background for the non-specialist reader, one can focus on faithfully reporting the data in the paper and the implications of the work without discussing all the major caveats and pitfalls in the published paper. This writing provides a bit of an escapist pleasure for me, because so much of my time as a scientist is spent performing a critical analysis of the experimental data acquired in my own laboratory or in-depth reviews of scientific manuscripts and grants of either collaborators or as a peer reviewer. Infotainment science writing is a reminder of the big picture, excitement and promise of science, even though it might gloss over certain important experimental flaws and caveats of scientific studies.

Infotainment Science Writing Example 1: Using Viagra To Burn Fat

This blog-post discusses a paper published in the FASEB Journal which suggested that white (“bad”) fat cells could be converted into brown (“good”) fat cells using Viagra. The study reminded me of a collision between two groups of spam emails: weight loss meets Viagra. The blog-post provides background on white and brown adipose tissue and then describes the key findings of the paper. A few limitations of the study are mentioned, such as the fact that the researchers never documented weight loss in the mice they treated, and the fact that the paper ignores the long-term consequences of chronic Viagra treatment. The reason I consider this piece an infotainment style of science writing is that there were numerous additional criticisms of the research study that could have been brought to the attention of the readers. For example, the researchers concluded that the fat cells were being converted into brown fat based only on indirect measures, without adequately measuring metabolic activity and energy expenditure. It is also not clear why the researchers did not extend the duration of the animal studies to show that the Viagra treatment could induce weight loss. If all of these criticisms had been included in the blog-post, the fun Viagra-weight-loss idea would have drowned in a whirlpool of details.

Infotainment Science Writing Example 2: The Healing Power of Sweat Glands

The idea of “icky” sweat glands promoting wound healing was the main hook. Smelly apocrine sweat glands and eccrine sweat glands are defined in the background of this blog-post, and the findings of the paper published in the American Journal of Pathology are summarized. Limitations of the study included the minimal investigation of the mechanism of regeneration, the open question of whether cells primarily proliferate or differentiate to promote wound healing, and an important issue: does sweating itself affect the regenerative capacity of the sweat glands? Although these limitations are briefly mentioned in the blog-post, they are not discussed in depth, and no comparison is made between the observed wound healing effects of sweat gland cells and the wound healing capacity of other cells. This blog-post is heavy on the “information” end, and it provides little entertainment other than evoking the image of a healing sweat gland.

 

B. Critical science writing

Critical science writing is exceedingly difficult because it is time-consuming and challenging to present critiques of scientific studies in a jargon-free manner. An infotainment science blog-post can be written in a matter of a few hours. A critical science writing piece, on the other hand, requires an in-depth review of multiple studies in the field to better understand the limitations and strengths of each report.

Critical Science Writing Example 1: Bone Marrow Cell Infusions Do NOT Improve Cardiac Function After Heart Attack

This blog-post describes an important negative study conducted in Switzerland. Bone marrow cells were injected into the hearts of patients in one of the largest randomized cardiovascular cell therapy trials performed to date. The researchers found no benefit of the cell injections on cardiac function. This research has important implications because it could stave off quack medicine. Clinics in some countries offer “miracle cures” to cardiovascular patients, claiming that the stem cells in the bone marrow will heal their diseased hearts. Desperate patients who fall for these scams fly to other countries, undergo risky procedures and end up spending $20,000 or $40,000 out of pocket for treatments that simply do not work. This blog-post is in the critical science writing category because it not only mentions some limitations of the Swiss study, but also puts the clinical trial into the context of the problems associated with unproven therapies. It does not specifically discuss other bone marrow injection studies, but it provides a link to an editorial I wrote for an academic journal which contains all the pertinent references. A number of readers of the Guardian article raised the question of whether one can make such critical science writing entertaining, but I am not sure how to incorporate entertainment into this type of analysis.

Critical Science Writing Example 2: Cellular Alchemy: Converting Fibroblasts Into Heart Cells

This blog-post was a review of multiple distinct studies on converting fibroblasts – found either in the skin or the heart – into beating heart cells. The various research groups described the outcomes of their research, but the studies were not perfect replications of each other. For example, one study that reported a very low efficiency of fibroblast conversion not only used cells derived from older animals but also used a different virus to introduce the genes. The challenge for a critical science writer is to decide which of these differences need to be highlighted, because obviously not all differences and discrepancies can be adequately accommodated in a single article or blog-post. I decided to highlight the electrical heterogeneity of the generated cells as the major limitation of the research, because this seemed like the most likely problem when trying to move this work forward into clinical therapies. Regenerating a damaged heart following a heart attack would be the ultimate goal, but do we really want to create islands of heart cells that have distinct electrical properties and could give rise to heart rhythm problems?

 

C. Science Satire

In closing, I just want to briefly mention scientific satire – satirical or humorous descriptions of real-life science. One of the best science satire websites is PhD Comics, because the comics do a brilliant job of portraying real world science issues, such as the misery of PhD students and the vicious cycle of not having enough research funding to apply for research funding. My own attempts at scientific satire take the form of spoof news articles such as “Professor Hands Out “Erase Undesirable Data Points” Coupons To PhD Students” or “Academic Publisher Unveils New Journal Which Prevents All Access To Its Content”. Science satire is usually not informative, but it can provide entertainment and some critical introspection. This kind of satire is best suited for people with experiences that allow them to understand inside jokes. I hope that we will see more writing that satirizes the working world of how scientists interpret data, compete for tenure and grants or interact with graduate students.

 

[View the story “Reactions to the ‘Critical Science Journalism’ piece in The Guardian” on Storify: https://storify.com/jalees_rehman/reactions-to-critical-science-journalism-piece-in]

Are Scientists Divided Over Divining Rods?

When I read a statement which starts with “Scientists are divided over……“, I expect to learn about a scientific controversy involving scientists who offer distinct interpretations or analyses of published scientific data. This is not uncommon in stem cell biology. For example, scientists disagree about the differentiation capacity of adult bone marrow stem cells. Some scientists are convinced that these adult stem cells have a broad differentiation capacity and that a significant proportion can turn into heart cells or brain cells. On the other hand, there are many stem cell researchers who disagree and instead believe that adult bone marrow stem cells are very limited in their differentiation capacity. Both groups of scientists can point to numerous experiments and papers published in peer-reviewed scientific journals which back up their respective points of view. At any given stem cell meeting, the percentages of scientists favoring one view over the other can range from 30% to 70%, depending on who is attending and who is organizing that specific stem cell conference. We still have not reached a consensus in this field, so I think it is reasonable to say “scientists are divided over the differentiation capacity of adult bone marrow stem cells“.

In contrast, when it comes to the issue of global warming, there is a broad consensus in the scientific community. A 2010 study in the Proceedings of the National Academy of Sciences by Anderegg and colleagues reviewed published papers and statements made by climate researchers. The authors found that 97% to 98% of climate researchers were convinced by the scientific evidence for anthropogenic climate change, i.e. that humans are primarily responsible for global warming. When there is such a broad consensus among scientists and such overwhelming scientific data that supports anthropogenic climate change, one cannot really say “scientists are divided” merely because two or three scientists out of one hundred are not convinced.

Today, when I saw the headline “Scientists divided over device that ‘remotely detects hepatitis C’ ” in the Guardian, I assumed that a major scientific study had been published describing a new way to diagnose Hepatitis C and that there was considerable disagreement among Hepatitis C experts as to the value of this new device. To my surprise, I found this description in the Guardian:

The device the doctor held in his hand was not a contraption you expect to find in a rural hospital near the banks of the Nile.

 For a start, it was adapted from a bomb detector used by the Egyptian army. Second, it looked like the antenna for a car radio. Third, and most bizarrely, it could – the doctor claimed – remotely detect the presence of liver disease in patients sitting several feet away, within seconds.

 The antenna was a prototype for a device called C-Fast. If its Egyptian developers are to be believed, C-Fast is a revolutionary means of using bomb detection technology to scan for hepatitis C – a strongly contested discovery that, if proven, would contradict received scientific understanding, and potentially change the way many diseases are diagnosed.

This “C-Fast device”, co-developed by the Egyptian liver specialist Gamal Shiha, sounded like magic, and sure enough, even the Guardian referred to it as a “mechanical divining rod“.

Witnessed in various contexts by the Guardian, the prototype operates like a mechanical divining rod – though there are digital versions. It appears to swing towards people who suffer from hepatitis C, remaining motionless in the presence of those who don’t. Shiha claimed the movement of the rod was sparked by the presence of a specific electromagnetic frequency that emanates from a certain strain of hepatitis C.

After I read the remainder of the article, it turned out that there are no published scientific studies confirming that this rod, antenna or wand can detect hepatitis viruses at a distance. The article says it “has been successfully trialled in 1,600 cases across three countries, without ever returning a false negative result”, but this data has not been published in a peer-reviewed journal. As a scientist and a physician, I am of course very skeptical. The physicians using this device claim it has 100% sensitivity without presenting the data in a peer-reviewed forum. But what is even more surprising is the suggestion that electromagnetic signals travel from the virus in a patient’s body to this remote device, without any scientific evidence to back this up.

The Guardian then also quotes a University College London expert:

“If the application can be expanded, it is actually a revolution in medicine,” said Pinzani, head of UCL’s liver institute. “It means that you can detect any problem you want.”

 By way of example, Pinzani said the device could conceivably be used to instantaneously detect certain kinds of cancer symptoms: “You could go into a clinic, and a GP could find out if you had a tumour marker.”

This expert is already fantasizing about cancer diagnostics with this divining rod even though there is no credible published scientific data. The Guardian article also mentions that well-known scientific journals have rejected articles about this new device and that the “scientific basis has been strongly questioned by other scientists“, but the Guardian is compromising its journalistic integrity by presenting this as a legitimate scientific debate and claiming that “scientists are divided” in the title of the article. How can scientists be divided if the data has not been made public and if it has not undergone peer review? For now, this claim of a diagnostic divining rod is pure sensationalism and not an actual scientific controversy. Such sensationalism will attract many readers, but it should not be an excuse for shoddy journalism.

 

Image Credit: Public domain image of Otto Edler von Graeve in 1913 with a divining rod via Wikimedia Commons

UPDATE: The comment thread of the Guardian article indicates that Pinzani feels misrepresented by the article and cites a letter that Pinzani has purportedly written in response to the article. I am not able to verify whether this letter was indeed written by him and how exactly Pinzani was misrepresented by the Guardian.

UPDATE February 26, 2012: The Guardian has now changed the headline to Scientists sceptical about device that ‘remotely detects hepatitis C’. I think this headline is much better than the previous one which suggested that “scientists were divided”. I still think that newspapers and magazines sometimes unnecessarily portray pseudo-scientific viewpoints as legitimate, equal partners in a scientific debate. This type of even-handedness only makes sense if certain viewpoints are backed up by rigorous scientific studies.

Is Kindness Key to Happiness and Acceptance for Children?

The study “Kindness Counts: Prompting Prosocial Behavior in Preadolescents Boosts Peer Acceptance and Well-Being” published by Layous and colleagues in the journal PLOS One on December 26, 2012 was cited by multiple websites as proof of how important it is to teach children to be kind. NPR commented on the study in the blog post “Random Acts Of Kindness Can Make Kids More Popular“, and the study was also discussed in ScienceDaily in “Kindness Key to Happiness and Acceptance for Children“, Fox News in “No bullies: Kind kids are most popular” and the Huffington Post in “Kind Kids Are Happier And More Popular (STUDY)“.

According to most of these news reports, the design of the study was rather straightforward. Schoolchildren ages 9 to 11 in a Vancouver school district were randomly assigned to two groups for a four-week intervention: half of the children were asked to perform kind acts, while the other half were asked to keep track of pleasant places they visited. Happiness and acceptance by their peers were assessed at the beginning and the end of the four-week intervention period. The children were allowed to choose the “acts of kindness” or the “pleasant places”. The “acts of kindness” group chose acts such as sharing their lunch or giving their mothers a hug. The “pleasant places” group chose to visit places such as the playground or a grandparent’s house.

At the end of the four week intervention, both groups of children showed increased signs of happiness, but the news reports differed in terms of the impact of the intervention on the acceptance of the children.

 

The NPR blog reported:

… the children who performed acts of kindness were much more likely to be accepting of their peers, naming more classmates as children they’d like to spend time with.

This would mean that the children performing the “acts of kindness” were the ones who became more accepting of others.

 

The conclusion in the Huffington Post was quite different:

 

The students were asked to report how happy they were and identify classmates they would like to work with in school activities. After four weeks, both groups said they were happier, but the kids who had performed acts of kindness reported experiencing greater acceptance from their peers  –  they were chosen most often by other students as children the other students wanted to work with.

The Huffington Post interpretation (a re-post from Livescience) was that the children performing the “acts of kindness” became more accepted by others, i.e. more popular.

 

Which of the two interpretations was the correct one? Furthermore, how significant were the improvements in happiness and acceptance?

 

I decided to read the original PLOS One paper and I was quite surprised by what I found:

The manuscript (in its published form, as of December 27, 2012) had no figures and no tables in the “Results” section. The entire “Results” section consisted of just two short paragraphs. The first paragraph described the affect and happiness scores:

 

Consistent with previous research, overall, students in both the kindness and whereabouts groups showed significant increases in positive affect (γ00 = 0.15, S.E. = 0.04, t(17) = 3.66, p<.001) and marginally significant increases in life satisfaction (γ00 = 0.09, S.E. = 0.05, t(17) = 1.73, p = .08) and happiness (γ00 = 0.11, S.E. = 0.08, t(17) = 1.50, p = .13). No significant differences were detected between the kindness and whereabouts groups on any of these variables (all ps>.18). Results of t-tests mirrored these analyses, with both groups independently demonstrating increases in positive affect, happiness, and life satisfaction (all ts>1.67, all ps<.10).

 

There are no actual values given, so it is difficult to know how big the changes are. If a starting score is 15, then a change of 1.5 is only a 10% change. On the other hand, if the starting score is 3, then a change of 1.5 represents a 50% change. The Methods section of the paper also does not describe the statistics employed to analyze the data. Just relying on arbitrary p-value thresholds is problematic, but if one were to use the infamous p-value threshold of 0.05 for significance, one can assume that there was a significant change in the affect or mood of children (p-value <0.001), a marginally significant trend of increased life satisfaction (p-value of 0.08) and no really significant change in happiness (p-value of 0.13).
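The baseline problem is easy to state in code; this trivial sketch simply restates the arithmetic of the hypothetical example above:

```python
# The same absolute change means very different things on different baselines,
# which is why reporting changes without baseline scores obscures effect size.
change = 1.5
for baseline in (15.0, 3.0):
    print(f"baseline {baseline:>4}: +{change} is a "
          f"{100 * change / baseline:.0f}% increase")
# baseline 15.0: +1.5 is a 10% increase
# baseline  3.0: +1.5 is a 50% increase
```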

It is surprising that the authors do not show the actual scores for each of the two groups. After all, one of the goals of the study was to test whether performing “acts of kindness” has a bigger impact on happiness and acceptance than visiting “pleasant places” (the “whereabouts” group). There is a generic statement “No significant differences were detected between the kindness and whereabouts groups on any of these variables (all ps>.18)”, but what were the actual happiness and satisfaction scores for each of the groups? The next sentence is also cryptic: “Results of t-tests mirrored these analyses, with both groups independently demonstrating increases in positive affect, happiness, and life satisfaction (all ts>1.67, all ps<.10).” Does this mean that p<0.1 was the threshold of significance? Do these p-values refer to the post-intervention versus pre-intervention analysis for each tested variable in each of the two groups? If yes, why not show the actual data for both groups?

 

The second (and final) paragraph of the Results section described acceptance of the children by their peers. Children were asked whom they “would like to be in school activities [i.e., spend time] with”:

 

All students increased in the raw number of peer nominations they received from classmates (γ00 = 0.68, S.E. = 0.27, t(17) = 2.37, p = .02), but those who performed kind acts (M = +1.57; SD = 1.90) increased significantly more than those who visited places (M = +0.71; SD = 2.17), γ01 = 0.83, S.E. = 0.39, t(17) = 2.10, p = .05, gaining an average of 1.5 friends. The model excluded a nonsignificant term controlling for classroom size (p = .12), which did not affect the significance of the kindness term. The effects of changes in life satisfaction, happiness, and positive affect on peer acceptance were tested in subsequent models and all found to be nonsignificant (all ps>.54). When controlling for changes in well-being, the effect of the kindness condition on peer acceptance remained significant. Hence, changes in well-being did not predict changes in peer acceptance, and the effect of performing acts of kindness on peer acceptance was over and above the effect of changes in well-being.

 

This is again just a summary of the data, and not the actual data itself. Going to “pleasant places” increased the average number of “friends” (I am not sure I would use “friend” to describe someone who nominates me as a potential partner in a school activity) by 0.71, while performing “acts of kindness” increased the average number of friends by 1.57. This did answer the question raised by the conflicting news reports: according to the presented data, the “acts of kindness” kids were more accepted by others, and there was no data on whether they also became more accepting of others. I then looked at the Methods section to understand the statistics and models used for the analysis, and found that no details were included in the paper. The Methods section just ended with the following sentences:

 

Pre-post changes in self-reports and peer nominations were analyzed using multilevel modeling to account for students’ nesting within classrooms. No baseline condition differences were found on any outcome variables. Further details about method and results are available from the first author.
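For readers unfamiliar with the term, “multilevel modeling to account for students’ nesting within classrooms” usually refers to a mixed-effects model with a random intercept for each classroom. The sketch below illustrates that general approach on simulated data; it is my own illustration under that assumption, not the authors’ actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classrooms, n_per_class = 19, 20

df = pd.DataFrame({
    "classroom": np.repeat(np.arange(n_classrooms), n_per_class),
    "kindness": np.tile([0, 1], n_classrooms * n_per_class // 2),
})
# Simulated change in peer nominations: a fixed condition effect plus
# classroom-level noise (the "nesting") and individual-level noise
classroom_noise = rng.normal(0.0, 0.5, n_classrooms)
df["nomination_change"] = (
    0.7
    + 0.8 * df["kindness"]
    + classroom_noise[df["classroom"]]
    + rng.normal(0.0, 2.0, len(df))
)

# Fixed effect for the kindness condition, random intercept per classroom
model = smf.mixedlm("nomination_change ~ kindness", df, groups=df["classroom"])
result = model.fit()
print(result.summary())
```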

 

Based on reviewing the actual paper, I am quite surprised that PLOS One accepted it for publication. There are minimal data presented in the paper, no actual baseline scores regarding peer acceptance or happiness, incomplete methods and the rather grand title of “Kindness Counts: Prompting Prosocial Behavior in Preadolescents Boosts Peer Acceptance and Well-Being” considering the marginally significant data. One is left with many unanswered questions:

1) What if kids had not been asked to perform additional “acts of kindness” or additional visits to “pleasant places” and had instead merely logged these positive activities that they usually performed as part of their routine? This would have been a very important control group.

2) Why did the authors only show brief summaries of the analyses and omit to show all of the actual affect, happiness, satisfaction and peer acceptance data?

3) Did the kids in both groups also become more accepting of their peers?

 

It is quite remarkable that going to places one likes, such as a shopping mall, is just as effective as pro-social behavior (performing “acts of kindness”) in terms of improving happiness and well-being. The visits to pleasant places also helped gain peer acceptance, just not quite as much as performing acts of kindness. However, the somewhat selfish-sounding headline “Hanging out at the mall makes kids happier and a bit more popular” is not as attractive as the warm and fuzzy headline “Random acts of kindness can make kids more popular”. This may be the reason why the “prosocial” or “kindness” aspect of this study was emphasized so strongly by the news media.

 

In summary, the limited data in this published paper suggests that children who are asked to intentionally hang out at places they like and keep track of these for four weeks seem to become happier, similar to kids who make an effort to perform additional acts of kindness. Both groups of children gain acceptance by their peers, but the children who perform acts of kindness fare slightly better. There are no clear descriptions of the statistical methods, no actual scores for the two groups (only the changes in scores are shown) and important control groups (such as children who keep track of their positive activities, without increasing them) are missing. Therefore, definitive conclusions cannot be drawn from these limited data. Unfortunately, none of the above-mentioned news reports highlighted the weaknesses, and instead jumped on the bandwagon of interpreting this study as scientific evidence for the importance of kindness. Some of the titles of the news reports even made references to bullying, even though bullying was not at all assessed in the study.

This does not mean that we should discourage our children from being kind. On the contrary, there are many moral reasons to encourage our children to be kind, and there is no need for a scientific justification for kindness. However, if one does invoke science as a reason for kindness, it should be based on scientifically rigorous and comprehensive data.

 

The PhD Route To Becoming a Science Writer

If you know that you want to become a science writer, should you even bother with obtaining a PhD in science? There is no easy answer to this question. Any answer is bound to reflect the personal biases and experiences of the person answering the question. The science writer Akshat Rathi recently made a good case for why an aspiring science writer should not pursue a PhD. I would like to offer a different perspective, which is primarily based on my work in the life sciences and may not necessarily apply to other scientific disciplines.

I think that obtaining a PhD in science is a very reasonable path for an aspiring science writer, and I will list some of the “Pros” as well as the “Cons” of going the PhD route. Each aspiring science writer has to weigh the “Pros” and “Cons” carefully and reach a decision that is based on their individual circumstances and goals.

Pros: The benefits of obtaining a science PhD

 

1. Actively engaging in research gives you a first-hand experience of science

A PhD student works closely with a mentor to develop and test hypotheses, learn how to perform experiments, analyze data and reach conclusions based on the data. Scientific findings are rarely clear-cut. A significant amount of research effort is devoted to defining proper control groups, dealing with outliers and trouble-shooting experiments that have failed. Exciting findings are not always easy to replicate. A science writer who has had to actively deal with these issues may be in a better position to appreciate these intricacies and pitfalls of scientific research than someone without this first-hand experience.

 

2. PhD students are exposed to writing opportunities

All graduate students are expected to write their own PhD thesis. Many PhD programs also require that the students write academic research articles, abstracts for conferences or applications for pre-doctoral research grants. When writing these articles, PhD students usually work closely with their faculty mentors. Most articles or grant applications undergo multiple revisions until they are deemed to be ready for submission. The process of writing an initial draft and then making subsequent revisions is an excellent opportunity to improve one’s writing skills.

Most of us are not born with an innate talent for writing. To develop writing skills, the aspiring writer needs to practice and learn from critiques of one’s peers. The PhD mentor, the members of the thesis committee and other graduate students or postdoctoral fellows can provide valuable critiques during graduate school. Even though most of this feedback will likely focus on the science and not the writing, it can reveal whether or not the readers were able to clearly understand the core ideas that the student was trying to convey.

 

3. Presentation of one’s work

Most PhD programs require that students present their work at departmental seminars and at national or international conferences. Oral presentations for conferences need to be carefully crafted so that the audience learns about the background of the work, the novel findings and the implications of the research – all within the tight time constraint of a 15-20 minute time slot. A good mentor will work with PhD students to teach them how to communicate the research findings in a concise and accurate manner. Some presentations at conferences take the form of a poster, but the challenge of designing a first-rate poster is quite similar to that of a short oral presentation. One has to condense months or years of research data into a very limited space. Oral presentations as well as poster presentations are excellent opportunities to improve one’s communication skills, which are a valuable asset for any future science writer.

 

4. Peer review

Learning to perform an in-depth critical review of scientific work is an important prerequisite for an aspiring science writer. When PhD students present at departmental seminars or at conferences, they interact with a broad range of researchers who can offer perspectives on the work that are distinct from those the students have encountered in their own laboratory. Such scientific dialogue helps PhD students learn how to critically evaluate their own results and realize that there can be many distinct interpretations of their data. Manuscripts or grant applications submitted by the PhD student undergo peer review by anonymous experts in the field. The reviews can be quite harsh and depressing, but they also help PhD students and their mentors identify potential flaws in their scientific work. The ability to critically evaluate scientific findings is further honed when PhD students participate in journal clubs to discuss published papers or assist their mentors in the peer review of manuscripts.

 

5. Job opportunities

Very few writers derive enough income from their writing to cover their basic needs. This is true not only for science writers but for writers in general, and it forces many writers to take on jobs that help pay the bills. A PhD degree provides the aspiring science writer with a broad range of professional opportunities in academia, industry or government. After completing the PhD program, the science writer can take on such a salaried job while building a writing portfolio and seeking out a paid position as a science writer.

 

6. Developing a scientific niche

It is not easy to be a generalist when it comes to science writing. Most successful science writers acquire in-depth knowledge in selected areas of science. This enables them to understand the technical jargon and methodologies used in that area of research and to read the original scientific papers, so that they do not have to rely on secondary sources for their science writing. Conducting research, writing and reviewing academic papers and attending conferences during graduate school all contribute to the development of such a scientific niche. Having a niche is especially important when one starts out as a science writer, because it helps define the initial focus of the writing and provides “credentials” in the eyes of prospective employers. This does not mean that one is forever tied to this scientific niche: science writers, like scientists, routinely branch out into other disciplines once they have established themselves.

 

Cons: The disadvantages of obtaining a science PhD

 

1. Some PhD mentors abuse their graduate students

It is no secret that a number of PhD mentors treat graduate students as if they were merely an additional pair of hands. Instead of being given opportunities to develop thinking and writing skills, students are sometimes forced simply to produce large amounts of experimental data.

 

2. Some of the best science writers did not obtain PhDs in science

Even though I believe that obtaining a PhD in science is a good path to becoming a science writer, I am also aware that many excellent science writers did not take this route. Instead, they focused on developing their writing skills in other venues. One such example is Steve Silberman, a highly regarded science writer. He has written many outstanding feature articles for magazines and blog posts for his superb PLOS blog Neurotribes. Steve writes about a diverse array of topics related to neuroscience and psychology, but has also developed certain niche areas of expertise, such as autism research.

 

3. Science writer is not a career that garners much respect among academics

PhD degrees are usually obtained under the tutelage of tenure-track or tenured academics. Their natural bias is to assume that “successful” students should follow a similar career path, i.e. obtain a PhD, engage in postdoctoral research and pursue a tenure-track academic career. Unfortunately, alternate career paths, such as becoming a science writer, are not seen in a very positive light. The mentor’s narcissistic pleasure of seeing a trainee follow in their footsteps is not the only reason for this. Current academic culture is characterized by a certain degree of snobbery that elevates academic research careers and looks down on alternate careers. This lack of respect for alternate careers can be very disheartening for the student. Some PhD mentors or programs may not even take on students who disclose that their ultimate goal is to become a science writer rather than to pursue a tenure-track academic career.

 

4. A day only has 24 hours

Obtaining a PhD is a full-time job. Conducting experiments, analyzing and presenting data, reading journal articles, writing chapters for the thesis and manuscripts – all of these activities are very time-consuming. It is not easy to carve out time for science writing on the side, especially if the planned science writing is not directly related to the PhD research.

 

Choosing the right environment

 

The caveats mentioned above highlight that a future science writer has to choose a PhD program carefully. The labs or mentors that publish the most papers in high-impact journals, or that happen to be located in one’s favorite city, are not necessarily the ones best suited to prepare a student for a future career as a science writer. On the other hand, a lab that runs its own research blog signals an interest in science communication and writing. A frank discussion with a prospective mentor about the career goal of becoming a science writer will also reveal how the mentor feels about science writing and whether the mentor would be supportive of such an endeavor. The most important take-home message is that the criteria one uses for choosing a PhD program have to be tailored to the career goal of becoming a science writer.

 

Image via Wikimedia Commons (Public Domain): Portrait of Dmitry Ivanovich Mendeleev wearing the Edinburgh University professor robe by Ilya Repin.

Science Journalism and the Inner Swine Dog

A search of the PubMed database, which indexes scholarly biomedical articles, reveals that 997,508 articles were published in the year 2011, which amounts to roughly 2,700 articles per day. Since the database does not include all published biomedical research articles, the actual number of published biomedical papers is probably even higher. Most biomedical researchers work in defined research areas, so perhaps only 1% of the published articles may be relevant for their research. As an example, the major focus of my research is the biology of stem cells, so I narrowed down the PubMed search to articles containing the expression “stem cells”. I found that 14,291 “stem cells” articles were published in 2011, which translates to an average of 39 articles per day (assuming that one reads scientific papers on weekends and during vacations, which is probably true for most scientists). Many researchers also tend to have two or three areas of interest, which further increases the number of articles one needs to read.


Needless to say, it has become impossible for researchers to read all the articles published in their fields of interest, because if they did, they would have no time left to conduct experiments of their own. To avoid drowning in this information overload, researchers have developed multiple strategies for surviving and navigating their way through all the published data. These strategies include relying on the recommendations of colleagues, focusing on articles published in high-impact journals, perusing only articles that are directly related to one’s own work, or reading only articles that have been cited or featured in major review articles, editorials or commentaries. As a stem cell researcher, I can use the above-mentioned strategies to narrow down the stem cell articles that I ought to read to a manageable number of about three or four a day. However, scientific innovation is fueled by the cross-fertilization of ideas, and the most creative ideas often arise from combining seemingly unrelated research questions. The challenge for me is therefore not only to stay informed about important developments in my own areas of interest; I also need to know about major developments in other scientific domains such as network theory, botany or neuroscience, because discoveries in such “distant” fields could inspire me to develop innovative approaches in my own work.
In order to keep up with scientific developments outside of my area of expertise, I have begun to rely on high-quality science journalism, which can be found in selected print and online publications and in science blogs. Good science journalists accurately convey complex scientific concepts in simple language without oversimplifying the actual science. This is easier said than done, because it requires a solid understanding of the science as well as excellent communication skills. Most scientists are not trained to communicate with a general audience, and most journalists have had very limited exposure to actual scientific work. To become good science journalists, scientists have to be trained in the art of communicating results to non-specialists, or journalists have to acquire the scientific knowledge pertinent to the topics they want to write about. The training of science journalists requires time, resources and good mentors.
Once they have completed their training and start working as science journalists, they still need adequate time, resources and mentors. When writing about an important new scientific development, good science journalists do not just repeat the information provided by the researchers or contained in the press release of the university where the research was conducted. Instead, they perform the necessary fact-checking to ensure that the provided information is indeed correct. They also consult the scientific literature as well as other scientific experts to place the new development in the context of existing research. Importantly, science journalists then analyze the new development, separating the actual scientific data from speculation and pointing out the limitations and implications of the work. Writing for a very broad audience poses an additional challenge: the readership includes members of the general public interested in new scientific findings, politicians and industry representatives who may base political and economic decisions on scientific findings, patients and physicians who want to stay informed about innovative new treatments and, as mentioned above, scientists who want to know about research outside of their own areas of expertise.
Unfortunately, I do not think that it is widely appreciated how important high-quality science journalism is and how much effort it requires. Limited resources, constraints on a journalist’s time and the pressure to publish sensationalist articles that exaggerate or oversimplify the science in order to attract a larger readership can all compromise the quality of the work. Two recent examples illustrate this: the so-called Jonah Lehrer controversy, in which the highly respected and popular science journalist Jonah Lehrer was found to have fabricated quotes, plagiarized and oversimplified research, and the more recent case in which the Japanese newspaper Yomiuri Shimbun ran a story about the use of induced pluripotent stem cells to treat patients with heart disease that turned out to be based on a researcher’s fraudulent claims. The case of Jonah Lehrer was a big shock for me. I had enjoyed reading a number of the articles and blog posts he had written, and at first it was difficult for me to accept that his work contained so many errors and so much evidence of misconduct. Boris Kachka has recently written a very profound analysis of the Jonah Lehrer controversy in New York Magazine:

Lehrer was the first of the Millennials to follow his elders into the dubious promised land of the convention hall, where the book, blog, TED talk, and article are merely delivery systems for a core commodity, the Insight.

The Insight is less of an idea than a conceit, a bit of alchemy that transforms minor studies into news, data into magic. Once the Insight is in place—Blink, Nudge, Free, The World Is Flat—the data becomes scaffolding. It can go in the book, along with any caveats, but it’s secondary. The purpose is not to substantiate but to enchant.

Kachka’s expression “Insight” captures our desire to believe in simple narratives. Any active scientist knows that scientific findings tend to be more complex and more difficult to interpret than we anticipated. There are few simple truths or “Insights” in science, even though part of us wants to seek out these elusive simple truths. The metaphor that comes to mind is the German expression “der innere Schweinehund”, which literally translates to “the inner swine dog”. The expression may evoke the image of a chimeric pig-dog beast created by a mad German scientist in a Hollywood World War II movie, but in Germany it is used to describe a metaphorical inner creature that wants us to be lazy, seek out convenience and avoid challenges. In my view, scientific work is an ongoing battle with our “inner swine dog”. We start experiments with simple hypotheses and models, and we are usually quite pleased with results that confirm these anticipated findings, because they allow us to be intellectually lazy. However, good scientists know that more often than not scientific truths are complex, and we need to force ourselves to continuously challenge our own scientific concepts. Usually this involves performing more experiments, analyzing more data and trying to interpret the data from many different perspectives. Overcoming this intellectual laziness requires work, but most of us who are passionate about science enjoy these challenges and seek out opportunities to battle our “inner swine dog” instead of succumbing to a state of perpetual intellectual laziness.
When I read Kachka’s description of why Lehrer was able to get away with his fabrications and over-simplifications, I realized that it was probably because Lehrer gave us the narratives we wanted to believe. He provided “Insight” – portraying scientific research in a false shroud of certainty and simplicity. Even though many of us look forward to overcoming intellectual laziness in our own work, we may not be used to challenging our “inner swine dog” when we learn about scientific topics outside of our own areas of expertise. This is precisely why we need good science journalists, who challenge us intellectually by avoiding over-simplifications.

A different but equally instructive case of poor science journalism occurred when the widely circulated Japanese newspaper Yomiuri Shimbun reported in early October 2012 that the Japanese researcher Hisashi Moriguchi had transplanted induced pluripotent stem cells into patients with heart disease. This was quite a sensation, because it would have been the first transplantation of such stem cells into human patients. For those of us in the field of stem cell research, the story came as a big surprise and did not sound very believable: it suggested that the work had been performed in the United States, and most of us knew that obtaining approval for using such stem cells in clinical studies would have been very challenging. However, it is very likely that many people unacquainted with the complexities of using stem cells in patients believed the story. Within days, it became apparent that the researcher’s claims were fraudulent. He had said that he conducted the studies at Harvard, but Harvard stated that he was not currently affiliated with the university and that there was no evidence of any such studies ever being conducted there. His claims about how he derived the cells and how quickly he had supposedly performed the experiments were also debunked.
This was not the first incident of scientific fraud in the world of stem cell research, and it unfortunately will not be the last. What makes this incident noteworthy is how the newspaper Yomiuri Shimbun responded to its reporting of the fraudulent claims. It removed the original story from its website and issued public apologies for its poor reporting. The English-language version of the newspaper listed the mistakes in an article entitled “iPS REPORTS–WHAT WENT WRONG / Moriguchi reporting left questions unanswered”. The problems included inadequate fact-checking of the researcher’s claims and affiliations by the reporter and a failure to consult other scientists about whether the findings sounded plausible. Interestingly, the reporter had identified some red flags and concerns:

–Moriguchi had not published any research on animal experiments.
–The reporter had not been able to contact people who could confirm the iPS cell clinical applications.
–Moriguchi’s affiliation with Harvard University could not be confirmed online.
–It was possible that different cells, instead of iPS cells, had been effective in the treatments.
–It was odd that what appeared to be major world news was appearing only in the form of a poster at a science conference.
–The reporter wondered if it was really possible that transplant operations using iPS cells had been approved at Harvard.
The reporter sent the e-mail to three others, including another news editor in charge of medical science, on the same day, and the reporter’s regular updates on the topic were shared among them.
The science reporter said he felt “at ease” after informing the editors about such dubious points. After receiving explanations from Moriguchi, along with the video clip and other materials, the reporter sought opinions from only one expert and came to believe the doubts had been resolved.

In spite of these red flags, the reporter and the editors decided to run the story. They gave in to their intellectual laziness and to the desire to run a sensational story instead of tediously following up on all the red flags. They had a story about a Japanese researcher making a ground-breaking discovery in a very competitive area of stem cell research, and this was a story their readers would probably love. This unprofessional conduct is why the reporter and the editors received reprimands and penalties for their actions. Another article in the newspaper summarizes the punitive measures:

Effective as of next Thursday, The Yomiuri Shimbun will take disciplinary action against the following officials and employees:
–Yoshimitsu Ohashi, senior managing director and managing editor of the company, and Takeshi Mizoguchi, corporate officer and senior deputy managing editor, will each return 30 percent of their remuneration and salary for two months.
–Fumitaka Shibata, a deputy managing editor and editor of the Science News Department, will be replaced and his salary will be reduced.
–Another deputy managing editor in charge of editorial work for the Oct. 11 edition will receive an official reprimand.
–The salaries of two deputy editors of the Science News Department will be cut.
–A reporter in charge of the Oct. 11 series will receive an official reprimand.

I have mixed feelings about these punitive actions. I think it is commendable that the newspaper apologized without reservations or excuses and listed its mistakes. The reprimands and penalties also highlight that the newspaper takes its science journalism very seriously and recognizes the importance of high professional standards. The penalties were also more severe for the editors than for the reporter, which may reflect the fact that the reporter did consult the editors, and they decided to run the story even though the red flags had been pointed out to them. My concern arises from the fact that I am not sure punitive actions will solve the problem, and they leave a lot of questions unanswered. Did the newspaper evaluate whether its science journalists and editors had been appropriately trained? Did the science journalist have the time and resources to conduct his or her research in a conscientious manner? Importantly, will science journalists be given appropriate resources and be protected from pressures or constraints that encourage unprofessional science journalism? We do not know the answers to these questions, but providing the infrastructure for high-quality science journalism is probably going to be more useful than punitive actions alone. We can also hope that media organizations all over the world learn from this incident, recognize the importance of science journalism and put mechanisms in place to ensure its quality.

Image via Wikimedia Commons / Norbert Schnitzler: Statue “Mein Innerer Schweinhund” in Bonn