The Road to Bad Science Is Paved with Obedience and Secrecy

We often laud the intellectual diversity of a scientific research group because we hope that the multitude of opinions can help point out flaws and improve the quality of research long before it is finalized and written up as a manuscript. The recent events surrounding the research in one of the world’s most famous stem cell research laboratories at Harvard show us the disastrous effects of suppressing diverse and dissenting opinions.


The infamous “Orlic paper” was a landmark research article published in the prestigious scientific journal Nature in 2001, which showed that stem cells contained in the bone marrow could be converted into functional heart cells. After a heart attack, injections of bone marrow cells reversed much of the damage by creating new heart cells and restoring heart function. It was called the “Orlic paper” because its first author was Donald Orlic, but the lead investigator of the study was Piero Anversa, a professor and highly respected scientist at New York Medical College.

Anversa had established himself as one of the world’s leading experts on the survival and death of heart muscle cells in the 1980s and 1990s, but with the start of the new millennium, Anversa shifted his laboratory’s focus towards the emerging field of stem cell biology and its role in cardiovascular regeneration. The Orlic paper was just one of several highly influential stem cell papers to come out of Anversa’s lab at the onset of the new millennium. A 2002 Anversa paper in the New England Journal of Medicine – the world’s most highly cited academic journal – investigated the hearts of human organ transplant recipients. This study showed that up to 10% of the cells in the transplanted heart were derived from the recipient’s own body. The only conceivable explanation was that after a patient received another person’s heart, the recipient’s own cells began maintaining the health of the transplanted organ. The Orlic paper had shown the regenerative power of bone marrow cells in mouse hearts, but this new paper now offered the more tantalizing suggestion that even human hearts could be regenerated by circulating stem cells in their bloodstream.


A 2003 publication in Cell by the Anversa group described another ground-breaking discovery, identifying a reservoir of stem cells contained within the heart itself. This latest tour de force found that the newly uncovered heart stem cell population resembled the bone marrow stem cells because both groups of cells bore the same stem cell protein called c-kit and both were able to make new heart muscle cells. According to Anversa, c-kit cells extracted from a heart could be re-injected back into a heart after a heart attack and regenerate more than half of the damaged heart!

These Anversa papers revolutionized cardiovascular research. Prior to 2001, most cardiovascular researchers believed that the cell turnover in the adult mammalian heart was minimal because soon after birth, heart cells stopped dividing. Some organs or tissues such as the skin contained stem cells which could divide and continuously give rise to new cells as needed. When skin is scraped during a fall from a bike, it only takes a few days for new skin cells to coat the area of injury and heal the wound. Unfortunately, the heart was not one of those self-regenerating organs. The number of heart cells was thought to be more or less fixed in adults. If heart cells were damaged by a heart attack, then the affected area was replaced by rigid scar tissue, not new heart muscle cells. If the area of damage was large, then the heart’s pump function was severely compromised and patients developed the chronic and ultimately fatal disease known as “heart failure”.

Anversa’s work challenged this dogma by putting forward a bold new theory: the adult heart was highly regenerative, its regeneration was driven by c-kit stem cells, and these cells could be isolated and used to treat injured hearts. All one had to do was harness the regenerative potential of c-kit cells in the bone marrow and the heart, and millions of patients all over the world suffering from heart failure might be cured. Not only did Anversa publish a slew of supportive papers in highly prestigious scientific journals to challenge the dogma of the quiescent heart, he also happened to publish them at a unique time in history which maximized their impact.

In the year 2001, there were few innovative treatments available to treat patients with heart failure. The standard approach was to use medications that would delay the progression of heart failure. But even the best medications could not prevent the gradual decline of heart function. Organ transplants were a cure, but transplantable hearts were rare and only a small fraction of heart failure patients would be fortunate enough to receive a new heart. Hopes for a definitive heart failure cure were buoyed when researchers isolated human embryonic stem cells in 1998. This discovery paved the way for using highly pliable embryonic stem cells to create new heart muscle cells, which might one day be used to restore the heart’s pump function without resorting to a heart transplant.


The dreams of using embryonic stem cells to regenerate human hearts were soon dashed when the Bush administration restricted federal funding for research on newly created human embryonic stem cell lines in 2001, citing ethical concerns. These federal regulations and the lobbying of religious and political groups against human embryonic stem cells were a major blow to research on cardiovascular regeneration. Amidst this looming hiatus in cardiovascular regeneration, Anversa’s papers appeared and showed that one could steer clear of the ethical controversies surrounding embryonic stem cells by using an adult patient’s own stem cells. The Anversa group re-energized the field of cardiovascular stem cell research and cleared the path for the first human stem cell treatments in heart disease.

Instead of having to wait for the US government to reverse its restrictive policy on human embryonic stem cells, one could now initiate clinical trials with adult stem cells, treating heart attack patients with their own cells and without having to worry about an ethical quagmire. Heart failure might soon become a disease of the past. The excitement at all major national and international cardiovascular conferences was palpable whenever the Anversa group, their collaborators or other scientists working on bone marrow and cardiac stem cells presented their dizzyingly successful results. Anversa received numerous accolades for his discoveries and research grants from the NIH (National Institutes of Health) to further develop his research program. He was so successful that some researchers believed Anversa might receive the Nobel Prize for his iconoclastic work which had redefined the regenerative potential of the heart. Many of the world’s top universities were vying to recruit Anversa and his group, and he decided to relocate his research group to Harvard Medical School and Brigham and Women’s Hospital in 2008.

There were naysayers and skeptics who had resisted the adult stem cell euphoria. Some researchers had spent decades studying the heart and found little to no evidence for regeneration in the adult heart. They were having difficulties reconciling their own results with those of the Anversa group. A number of practicing cardiologists who treated heart failure patients were also skeptical because they did not see the near-miraculous regenerative power of the heart in their patients. One Anversa paper went as far as suggesting that the whole heart would completely regenerate itself roughly every 8-9 years, a claim that was at odds with the clinical experience of practicing cardiologists. Other researchers pointed out serious flaws in the Anversa papers. For example, the 2002 paper on stem cells in human heart transplant patients claimed that the hearts were coated with the recipient’s regenerative cells, including cells which contained the stem cell marker Sca-1. Within days of the paper’s publication, many researchers were puzzled by this finding because Sca-1 was a marker of mouse and rat cells – not human cells! If Anversa’s group was finding rat or mouse proteins in human hearts, it was most likely due to an artifact. And if they had mistakenly identified rodent cells in human hearts, these critics surmised, then perhaps other aspects of Anversa’s research were similarly flawed or riddled with artifacts.

At national and international meetings, one could observe heated debates between members of the Anversa camp and their critics. The critics then decided to change their tactics. Instead of just debating Anversa and commenting about errors in the Anversa papers, they invested substantial funds and efforts to replicate Anversa’s findings. One of the most important and rigorous attempts to assess the validity of the Orlic paper was published in 2004 by the research teams of Chuck Murry and Loren Field. Murry and Field found no evidence of bone marrow cells converting into heart muscle cells. This was a major scientific blow to the burgeoning adult stem cell movement, but even this paper could not deter the bone marrow cell champions.

Although the refutation of the Orlic paper was published in 2004, the Orlic paper continues to carry the dubious distinction of being one of the most cited papers in the history of stem cell research. At first, Anversa and his colleagues would shrug off their critics’ findings or publish refutations of refutations, but over time an increasing number of research groups all over the world began to realize that many of the central tenets of Anversa’s work could not be replicated, and the number of critics and skeptics grew. As the signs of irreproducibility and other concerns about Anversa’s work mounted, Harvard and Brigham and Women’s Hospital were forced to initiate an internal investigation, which resulted in the retraction of one Anversa paper and an expression of concern about another major paper. Finally, in May 2014, a research group published a paper using mice in which c-kit cells were genetically labeled so that their fate could be tracked; it found that c-kit cells make a minimal – if any – contribution to the formation of new heart cells: a fraction of a percent!

The skeptics who had doubted Anversa’s claims all along may now feel vindicated, but this is not the time to gloat. Instead, the discipline of cardiovascular stem cell biology is now undergoing a process of soul-searching. How was it possible that some of the most widely read and cited papers were based on heavily flawed observations and assumptions? Why did it take a decade after the first refutation was published in 2004 for scientists to finally accept that the near-magical regenerative power of the heart was a pipe dream?

One reason for this lag time is pretty straightforward: It takes a tremendous amount of time to refute papers. Funding to conduct the experiments is difficult to obtain because grant funding agencies are not easily convinced to invest in studies replicating existing research. For a refutation to be accepted by the scientific community, it has to be at least as rigorous as the original, but in practice, refutations are subject to even greater scrutiny. Scientists trying to disprove another group’s claim may be asked to develop even better research tools and technologies so that their results can be seen as more definitive than those of the original group. Instead of relying on antibodies to identify c-kit cells, the 2014 refutation developed a transgenic mouse in which all c-kit cells could be genetically traced to yield more definitive results – but developing new models and tools can take years.

The scientific peer review process by external researchers is a central pillar of the quality control process in modern scientific research, but one has to be cognizant of its limitations. Peer review of a scientific manuscript is routinely performed by experts for all the major academic journals which publish original scientific results. However, peer review only involves a “review”, i.e. a general evaluation of major strengths and flaws, and peer reviewers do not see the original raw data nor are they provided with the resources to replicate the studies and confirm the veracity of the submitted results. Peer reviewers rely on the honor system, assuming that the scientists are submitting accurate representations of their data and that the data has been thoroughly scrutinized and critiqued by all the involved researchers before it is even submitted to a journal for publication. If peer reviewers were asked to actually wade through all the original data generated by the scientists and even perform confirmatory studies, then the peer review of every single manuscript could take years and one would have to find the money to pay for the replication or confirmation experiments conducted by peer reviewers. Publication of experiments would come to a grinding halt because thousands of manuscripts would be stuck in the purgatory of peer review. Relying on the integrity of the scientists submitting the data and their internal review processes may seem naïve, but it has always been the bedrock of scientific peer review. And it is precisely the internal review process which may have gone awry in the Anversa group.

Just as Pygmalion fell in love with Galatea, researchers fall in love with the hypotheses and theories that they have constructed. To minimize the effects of these personal biases, scientists regularly present their results to colleagues within their own groups at internal lab meetings and seminars or at external institutions and conferences long before they submit their data to a peer-reviewed journal. These preliminary presentations are intended to spark discussions, inviting the audience to challenge the veracity of the hypotheses and the data while the work is still in progress. Sometimes fellow group members are truly skeptical of the results; at other times they take on the devil’s advocate role to see if they can find holes in their group’s own research. The larger a group, the greater the chance of finding colleagues with dissenting views. This type of feedback is a necessary internal review process which provides valuable insights that can steer the direction of the research.

Considering the size of the Anversa group – consisting of 20, 30 or even more PhD students, postdoctoral fellows and senior scientists – it is puzzling why the discussions among the group members did not already internally challenge their hypotheses and findings, especially in light of the fact that they knew extramural scientists were having difficulties replicating the work.

Retraction Watch is one of the most widely read scientific watchdog blogs, tracking scientific misconduct and retractions of published scientific papers. Recently, Retraction Watch published the account of an anonymous whistleblower who had worked as a research fellow in Anversa’s group and provided some unprecedented insights into the inner workings of the group, which explain why the internal review process failed:

“I think that most scientists, perhaps with the exception of the most lucky or most dishonest, have personal experience with failure in science—experiments that are unreproducible, hypotheses that are fundamentally incorrect. Generally, we sigh, we alter hypotheses, we develop new methods, we move on. It is the data that should guide the science.

 In the Anversa group, a model with much less intellectual flexibility was applied. The “Hypothesis” was that c-kit (cd117) positive cells in the heart (or bone marrow if you read their earlier studies) were cardiac progenitors that could: 1) repair a scarred heart post-myocardial infarction, and: 2) supply the cells necessary for cardiomyocyte turnover in the normal heart.

 This central theme was that which supplied the lab with upwards of $50 million worth of public funding over a decade, a number which would be much higher if one considers collaborating labs that worked on related subjects.

 In theory, this hypothesis would be elegant in its simplicity and amenable to testing in current model systems. In practice, all data that did not point to the “truth” of the hypothesis were considered wrong, and experiments which would definitively show if this hypothesis was incorrect were never performed (lineage tracing e.g.).”

Discarding data that might have challenged the central hypothesis appears to have been a central principle.


According to the whistleblower, Anversa’s group did not just discard undesirable data; they actually punished group members who questioned the group’s hypotheses:

In essence, to Dr. Anversa all investigators who questioned the hypothesis were “morons,” a word he used frequently at lab meetings. For one within the group to dare question the central hypothesis, or the methods used to support it, was a quick ticket to dismissal from your position.

The group also created an environment of strict information hierarchy and secrecy which is antithetical to the spirit of science:

“The day to day operation of the lab was conducted under a severe information embargo. The lab had Piero Anversa at the head with group leaders Annarosa Leri, Jan Kajstura and Marcello Rota immediately supervising experimentation. Below that was a group of around 25 instructors, research fellows, graduate students and technicians. Information flowed one way, which was up, and conversation between working groups was generally discouraged and often forbidden.

 Raw data left one’s hands, went to the immediate superior (one of the three named above) and the next time it was seen would be in a manuscript or grant. What happened to that data in the intervening period is unclear.

 A side effect of this information embargo was the limitation of the average worker to determine what was really going on in a research project. It would also effectively limit the ability of an average worker to make allegations regarding specific data/experiments, a requirement for a formal investigation.”

This segregation of information is a powerful method of maintaining authoritarian rule and is more typical of terrorist cells or intelligence agencies than of a scientific lab, but it would definitely explain how the Anversa group was able to mass-produce numerous irreproducible papers without any major dissent from within the group.

In addition to the secrecy and segregation of information, the group also created an atmosphere of fear to ensure obedience:

“Although individually-tailored stated and unstated threats were present for lab members, the plight of many of us who were international fellows was especially harrowing. Many were technically and educationally underqualified compared to what might be considered average research fellows in the United States. Many also originated in Italy where Dr. Anversa continues to wield considerable influence over biomedical research.

 This combination of being undesirable to many other labs should they leave their position due to lack of experience/training, dependent upon employment for U.S. visa status, and under constant threat of career suicide in your home country should you leave, was enough to make many people play along.

 Even so, I witnessed several people question the findings during their time in the lab. These people and working groups were subsequently fired or resigned. I would like to note that this lab is not unique in this type of exploitative practice, but that does not make it ethically sound and certainly does not create an environment for creative, collaborative, or honest science.”

Foreign researchers are particularly dependent on their employment to maintain their visa status, and the prospect of being fired from one’s job can be terrifying for anyone.

This is an anonymous account by a whistleblower and, as such, it is problematic. The use of anonymous sources in science journalism could open the doors for all sorts of unfounded and malicious accusations, which is why the ethics of using anonymous sources was heavily debated at the recent ScienceOnline conference. But the claims of the whistleblower are not made in a vacuum – they have to be evaluated in the context of known facts. The whistleblower’s claim that the Anversa group and their collaborators received more than $50 million to study bone marrow cell and c-kit cell regeneration of the heart can be easily verified at the NIH’s public RePORTER grant funding website. The whistleblower’s claim that many of the Anversa group’s findings could not be replicated is also a verifiable fact. It may seem unfair to condemn Anversa and his group for creating an atmosphere of secrecy and obedience which undermined the scientific enterprise, caused torment among trainees and wasted millions of dollars of taxpayer money simply based on one whistleblower’s account. However, if one looks at the entire picture of the amazing rise and decline of the Anversa group’s foray into cardiac regeneration, then the whistleblower’s description of the atmosphere of secrecy and hierarchy seems very plausible.

Harvard’s investigation into the Anversa group is not open to the public, and it is therefore difficult to know whether the university is primarily investigating scientific errors or whether it is also looking into such claims of egregious scientific misconduct and abuse of scientific trainees. It is unlikely that Anversa’s group is the only group that might have engaged in such forms of misconduct. Threatening dissenting junior researchers with a loss of employment or visa status may be far more common than we think. The gravity of the problem requires that the NIH – the major funding agency for biomedical research in the US – look into the prevalence of such practices in research labs and develop safeguards to prevent the abuse of science and scientists.

Note: An earlier version of this article was first published on 3quarksdaily.com.


To Err Is Human, To Study Errors Is Science

Statins, a family of cholesterol-lowering drugs, are among the most widely prescribed medications for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks, especially in patients who have already been diagnosed with cardiovascular disease. A more contentious issue is the use of statins in individuals who have no history of heart attacks, strokes or blockages in their blood vessels. Instead of waiting for the first major manifestation of cardiovascular disease, should one start statin therapy early on to prevent cardiovascular disease?

If statins were free of charge and had no side effects whatsoever, the answer would be rather straightforward: Go ahead and use them as soon as possible. However, like all medications, statins come at a price. There is the financial cost to the patient or their insurance to pay for the medications, and there is a health cost to the patients who experience potential side effects. The Guideline Panel of the American College of Cardiology (ACC) and the American Heart Association (AHA) therefore recently recommended that the preventive use of statins in individuals without known cardiovascular disease should be based on personalized risk calculations. If the risk of developing disease within the next 10 years is greater than 7.5%, then the benefits of statin therapy outweigh its risks and the treatment should be initiated. The panel also indicated that if the 10-year risk of cardiovascular disease is greater than 5%, then physicians should consider prescribing statins, but should bear in mind that the scientific evidence for this recommendation was not as strong as that for higher-risk individuals.
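
Expressed as a short sketch in code, the panel’s two-threshold logic looks roughly like the following (a simplified illustration of the guideline logic as described above, not an actual clinical tool; the function name, thresholds-as-constants and return strings are my own framing, and a real risk calculator would first estimate the 10-year risk from age, cholesterol levels, blood pressure, smoking status and other factors):

def statin_recommendation(ten_year_risk):
    # Simplified sketch of the ACC/AHA two-threshold logic described above
    if ten_year_risk > 0.075:
        return "statin recommended: benefits outweigh risks"
    elif ten_year_risk > 0.05:
        return "statin may be considered: supporting evidence is weaker"
    else:
        return "no statin recommended on risk grounds alone"

print(statin_recommendation(0.09))  # 9% ten-year risk -> statin recommended
print(statin_recommendation(0.06))  # 6% ten-year risk -> statin may be considered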



Using statins in low-risk patients

The recommendation that individuals with a comparatively low risk of developing future cardiovascular disease (10-year risk lower than 10%) would benefit from statins was met with skepticism by some medical experts. In October 2013, the British Medical Journal (BMJ) published a paper by John Abramson, a lecturer at Harvard Medical School, and his colleagues which re-evaluated the data from a prior study on statin benefits in patients with less than a 10% cardiovascular disease risk over 10 years. Abramson and colleagues concluded that the statin benefits were overstated and that statin therapy should not be expanded to include this group of individuals. To further bolster their case, Abramson and colleagues also cited a 2013 study by Huabing Zhang and colleagues in the Annals of Internal Medicine which (according to Abramson et al.) had reported that 18% of patients discontinued statins due to side effects. Abramson even highlighted this finding from the Zhang study by including it as one of four bullet points summarizing the key take-home messages of his article.

The problem with this characterization of the Zhang study is that it ignored all the caveats that Zhang and colleagues had mentioned when discussing their findings. The Zhang study was based on a retrospective review of patient charts and did not establish a true cause-and-effect relationship between the discontinuation of the statins and actual side effects of statins. Patients may stop taking medications for many reasons, not necessarily because of side effects from the medication. According to the Zhang paper, 17.4% of patients in their observational retrospective study had reported a “statin related incident”, and of those only 59% had stopped the medication. The fraction of patients discontinuing statins due to suspected side effects was therefore at most 9-10%, not the 18% cited by Abramson. Moreover, as Zhang and colleagues pointed out, their study did not include a placebo control group. Trials with placebo groups document similar rates of “side effects” in patients taking statins and those taking placebos, suggesting that only a small minority of perceived side effects are truly caused by the chemical compounds in statin drugs.
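
For readers who want to retrace the arithmetic behind the “at most 9-10%” figure, here is a minimal Python calculation (using the percentages from the Zhang paper as quoted above; the variable names are mine):

# Fraction of patients in the Zhang study who reported a "statin related incident"
reported_incident = 0.174
# Fraction of those patients who then actually stopped the medication
stopped_after_incident = 0.59

# Upper bound on discontinuation due to suspected side effects:
# only patients who both reported an incident and stopped the drug count
discontinued = reported_incident * stopped_after_incident
print(f"{discontinued:.1%}")  # prints 10.3% - roughly 9-10%, not 18%

Even this is an upper bound because, without a placebo control group, one cannot tell how many of these incidents were actually caused by the statin itself.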


Admitting errors is only the first step

Whether 18%, 9% or a far smaller proportion of patients experience significant medication side effects is no small matter, because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue medications that have been shown to significantly reduce the risk of heart attacks in a wide range of patients. On the other hand, severely underestimating statin side effects could result in important symptoms being discounted and patients suffering needlessly. Abramson’s misinterpretation of the statin side effect data was pointed out by readers of the BMJ soon after the article was published, and it prompted an inquiry by the journal. After re-evaluating the data and discussing the issue with Abramson and colleagues, the journal issued a correction in which it clarified the misrepresentation of the Zhang paper.

Fiona Godlee, the editor-in-chief of the BMJ, also wrote an editorial explaining the decision to issue a correction on the question of side effects rather than retract the whole paper: the other point made by Abramson and colleagues – the lack of benefit in low-risk patients – might still hold true. Godlee also recognized the inherent bias of a journal’s editor when it comes to deciding whether or not to retract a paper. Every retraction of a peer-reviewed scholarly paper is somewhat of an embarrassment to the authors of the paper as well as the journal, because it suggests that the peer review process failed to identify one or more major flaws. In a commendable move, the journal appointed a multidisciplinary review panel which includes leading cardiovascular epidemiologists. This panel will review the Abramson paper as well as another BMJ paper which had also cited the inaccurately high frequency of statin side effects, investigate the peer review process that failed to identify the erroneous claims and provide recommendations regarding the ultimate fate of the papers.


Reviewing peer review

Why didn’t the peer reviewers who evaluated Abramson’s article catch the error prior to its publication? We can only speculate. One has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours analyzing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in terms of how they present the data and describe their scientific methodology.

In the case of Abramson and colleagues, the correction issued by the BMJ refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical research papers often cite 30 or 40 studies, and it is unrealistic to expect that peer reviewers read all the cited papers and ensure that they are being properly cited and interpreted. If this were the expectation, few peer reviewers would agree to serve as volunteer reviewers since they would have hardly any time left to conduct their own research. However, in this particular case, most peer reviewers familiar with statins and the controversies surrounding their side effects should have expressed concerns regarding the extraordinarily high figure of 18% cited by Abramson and colleagues. Hopefully, the review panel will identify the reasons for the failure of BMJ’s peer review system and point out ways to improve it.


To err is human, to study errors is science

All researchers make mistakes, simply because they are human. It is impossible to eliminate all errors in any endeavor that involves humans, but we can construct safeguards that help us reduce the occurrence and magnitude of our errors. Overt fraud and misconduct are rare causes of errors in research, but their effects on any given research field can be devastating. One of the most notorious occurrences of research fraud is the case of the Dutch psychologist Diederik Stapel, who published numerous papers based on blatant fabrication of data – showing ‘results’ of experiments on non-existent study subjects. The field of cell therapy in cardiovascular disease recently experienced a major setback when a university review of studies headed by the German cardiologist Bodo Strauer found evidence of scientific misconduct. The significant discrepancies and irregularities in Strauer’s studies have now led to wide-ranging skepticism about the efficacy of using bone marrow cell infusions to treat heart disease.


It is difficult to obtain precise numbers to quantify the actual extent of severe research misconduct and fraud since it may go undetected. Even when such cases are brought to the attention of the academic leadership, the involved committees and administrators may decide to keep their findings confidential and not disclose them to the public. However, most researchers working in academic research environments would probably agree that these are rare occurrences. A far more likely source of errors in research is the cognitive bias of the researchers. Researchers who believe in certain hypotheses and ideas are prone to interpreting data in a manner most likely to support their preconceived notions. For example, it is likely that a researcher opposed to statin usage will interpret data on side effects of statins differently than a researcher who supports statin usage. While Abramson may have been biased in the interpretation of the data generated by Zhang and colleagues, the field of cardiovascular regeneration is currently grappling with what appears to be a case of biased interpretation of one’s own data. An institutional review by Harvard Medical School and Brigham and Women’s Hospital recently determined that the work of Piero Anversa, one of the world’s most widely cited stem cell researchers, was significantly compromised and warranted a retraction. His group had reported that the adult human heart exhibited an amazing regenerative potential, suggesting that the adult human heart replaces its entire complement of beating heart cells roughly every 8 to 9 years (a 7% – 19% yearly turnover of beating heart cells). These findings were in sharp contrast to a prior study which had found only a minimal turnover of beating heart cells (1% or less per year) in adult humans. Anversa’s finding was also at odds with the observations of clinical cardiologists, who rarely observe a near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the huge discrepancy between the prior research and Anversa’s studies was that Anversa and his colleagues had not taken into account the possibility of contaminations that could have falsely elevated the cell regeneration counts.
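
As a back-of-the-envelope check of these numbers, one can see how a constant yearly turnover rate translates into the time needed to replace the heart’s entire complement of cells (a simplified sketch that naively assumes a constant rate and ignores cells being replaced more than once; these simplifying assumptions are mine, not the papers’):

# Naive estimate: years to replace all cells = 1 / yearly turnover rate
for yearly_turnover in (0.01, 0.07, 0.19):
    years = 1.0 / yearly_turnover
    print(f"{yearly_turnover:.0%} per year -> full replacement in ~{years:.0f} years")

# 1% per year (prior study)        -> ~100 years, i.e. minimal turnover in a lifetime
# 7% per year (Anversa, low end)   -> ~14 years
# 19% per year (Anversa, high end) -> ~5 years, bracketing the quoted 8-9 year figure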


Improving the quality of research: peer review and more

The fact that researchers are prone to making errors due to inherent biases does not mean we should simply throw our hands up in the air, say “Mistakes happen!” and let matters rest. High-quality science is characterized by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors early on so that we can limit their detrimental impact. The realization that the lack of reproducibility of peer-reviewed scientific papers is becoming a major problem for many areas of research, such as psychology, stem cell research and cancer biology, has prompted calls for better ways to track reproducibility and errors in science.

One important new paradigm that is being discussed to improve the quality of scholarly papers is post-publication peer evaluation. Instead of viewing the publication of a peer-reviewed research paper as an endpoint, post-publication peer evaluation invites fellow scientists to continue commenting on the quality and accuracy of the published research even after its publication and to engage the authors in this process. Traditional peer review relies on just a handful of reviewers who decide about the fate of a manuscript, but post-publication peer evaluation opens up the debate to hundreds or even thousands of readers who may be able to detect errors that could not be identified by the small number of traditional peer reviewers prior to publication. It is also becoming apparent that science journalists and science writers can play an important role in the post-publication evaluation of published research papers by investigating and communicating research flaws identified in research papers. In addition to helping dismantle the Science Mystique, critical science journalism can help ensure that corrections, retractions or other major concerns about the validity of scientific findings are communicated to a broad non-specialist audience.

In addition to these ongoing efforts to reduce errors in science by improving the evaluation of scientific papers, it may also be useful to consider new pro-active initiatives which focus on how researchers perform and design experiments. As the head of a research group at an American university, I have to take mandatory courses (in some cases on an annual basis) informing me about laboratory hazards, the ethics of animal experimentation or the ethics of how to conduct human studies. However, there are no mandatory courses helping us identify our own research biases or minimize their impact on the interpretation of our data. There is an underlying assumption that if you are no longer a trainee, you probably know how to perform and interpret scientific experiments. I would argue that it does not hurt to remind scientists regularly – no matter how junior or senior – that they can become victims of their biases. We have to learn to continuously re-evaluate how we conduct science and to be humble enough to listen to our colleagues, especially when they disagree with us.


Note: A shorter version of this article was first published at The Conversation with excellent editorial input provided by Jo Adetunji.


Abramson, J., Rosenberg, H., Jewell, N., & Wright, J. (2013). Should people at low risk of cardiovascular disease take a statin? BMJ, 347, f6123. DOI: 10.1136/bmj.f6123

The ENCODE Controversy And Professionalism In Science

The ENCODE (Encyclopedia Of DNA Elements) project received quite a bit of attention when its results were publicized last year. This project involved a very large consortium of scientists with the goal of identifying all the functional elements in the human genome. In September 2012, 30 papers were published in a coordinated release, and their extraordinary claim was that roughly 80% of the human genome was “functional”. This was in direct contrast to the prevailing view among molecular biologists that the bulk of human DNA was just “junk DNA”, i.e. sequences of DNA to which one could not assign any specific function. The ENCODE papers contained huge amounts of data, collating the work of hundreds of scientists who had worked on the project for nearly a decade. But what garnered the most attention among scientists, the media and the public was the “80%” claim and the supposed “death of junk DNA”.

Soon after the discovery of DNA, the primary function ascribed to it was its role as a template from which messenger RNA could be transcribed and then translated into functional proteins. Using this definition of “function”, only 1-2% of human DNA would be functional, because only those sequences actually encode proteins. The term “junk DNA” was coined to describe the 98-99% of non-coding DNA which appeared to primarily represent genetic remnants of our evolutionary past without any specific function in present-day cells.

However, in recent decades, scientists have uncovered more and more functions for the non-coding DNA segments that were previously thought to be merely “junk”. Non-coding DNA can, for example, act as a binding site for regulatory proteins and exert an influence on protein-coding DNA. There has also been an increasing awareness of the presence of various types of non-coding RNA molecules, i.e. RNA molecules which are transcribed from the DNA but not subsequently translated into proteins. Some of these non-coding RNAs have known regulatory functions; others may have none, or their functions have not yet been established.

Despite these discoveries, most scientists were in agreement that only a small fraction of DNA was “functional”, even when all the non-coding pieces of DNA with known functions were included. The bulk of our genome was still thought to be non-functional. The term “junk DNA” was used less frequently by scientists, because it was becoming apparent that we were probably going to discover even more functional elements in the non-coding DNA.

In September 2012, everyone was talking about “junk DNA” again, because the ENCODE scientists claimed their data showed that 80% of the human genome was “functional”. Most scientists had expected that the ENCODE project would uncover some new functions for non-coding DNA, but the 80% figure was way out of proportion to what everyone had expected. The problem was that the ENCODE project used a very low bar for “function”. Mere binding of proteins to the DNA or any kind of chemical DNA modification was already counted as a sign of “function”, without necessarily proving that these pieces of DNA had any significant impact on the function of a cell.

The media hype with the “death of junk DNA” headlines and the lack of discussion about what constitutes function were appropriately criticized by many scientists, but the recent paper by Dan Graur and colleagues, “On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE”, has grabbed everyone’s attention – not so much because it criticizes the claims made by the ENCODE scientists, but because of the sarcastic tone it uses to ridicule ENCODE.

Many other blog posts and articles have either praised or criticized the Graur paper, so I decided to list some of them here:

1. PZ Myers writes “ENCODE gets a public reaming” and seems to generally agree with Graur and colleagues.

2. Ashutosh Jogalekar says Graur’s paper is a “devastating takedown of ENCODE in which they pick apart ENCODE’s claims with the tenacity and aplomb of a vulture picking apart a wildebeest carcass.”

3. Ryan Gregory highlights some of the “zingers” in the Graur paper.

Other scientists, on the other hand, agree with some of the conclusions of the Graur paper and its criticism of how the ENCODE data was presented, but disagree with the sarcastic tone:

1. OpenHelix reminds us that this kind of “spanking” should not distract from all the valuable data that ENCODE has generated.

2. Mick Watson shows how Graur and colleagues could have presented their key critiques in a non-confrontational manner and fostered a constructive debate.

3. Josh Witten points out the irony of Graur accusing ENCODE of seeking hype, even though Graur and his colleagues seem to use sarcasm and ridicule to also increase the visibility of their work. I think Josh’s blog post is an excellent analysis of the problems with ENCODE and the problems associated with Graur’s tone.

On Twitter, I engaged in a debate with Benoit Bruneau, my fellow Scilogs blogger Malcolm Campbell and Jonathan Eisen, and I thought it would be helpful to share the Storify version here. There was a general consensus that even though some of the points mentioned by Graur and colleagues are indeed correct, their sarcastic tone was uncalled for. Scientists can be critical of each other, but they can and should do so in a respectful and professional manner, without resorting to insults or mockery.
View the story “ENCODE controversy and professionalism in scientific debates” on Storify: //storify.com/jalees_rehman/encode-debate

Graur, D., Zheng, Y., Price, N., Azevedo, R.B., Zufall, R.A., & Elhaik, E. (2013). On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE. Genome Biology and Evolution. PMID: 23431001

Armchair Psychiatry and Violence

Following tragic mass shootings such as the one that unfolded in Newtown, Connecticut, it is natural to try to “make sense” of the events. The process of “making sense” and understanding the underlying causes is part of the healing process. It also gives hope to society that if we were able to address the causes of the tragedy, we could prevent future tragedies. It is not unexpected that mental illness is often invoked as a possible reason for mass shootings. After all, the slaying of fellow human beings seems so far removed from what we consider normal human behavior. Since mental illness directly affects human behavior, it seems like the most straightforward explanation for a mass shooting. It is surmised that the mental illness severely impairs the decision-making capacity and perceptions of the afflicted person so that he or she is prone to acting out in a violent manner and causing great harm to others. Once evidence for “mental illness” in a shooter is found, one may also be tempted to stop looking for other factors that may have caused the tragedy. The nebulous expression “mental illness” can appear like a convenient catch-all explanation that requires no further investigation, because the behavior of a “mentally ill” person might be beyond comprehension.

The problem with this convenient explanation is that “mental illness” is not a homogeneous entity. There are many different types of mental illness, and specific psychiatric disorders, such as major depression, anxiety disorder or schizophrenia, each represent a broad spectrum of disease. These illnesses not only vary in severity from patient to patient, but also within a single patient over time. Just because someone carries the diagnosis of schizophrenia does not mean that the patient will continuously have severe manifestations of the disease. Some patients may show signs of withdrawal and introversion, others may act out with aggressive behavior. Making a direct causal link between a person’s diagnosis of mental illness and their violent behavior requires a careful psychiatric examination of that individual patient, as well as of other circumstances, such as recent events in their lives or possible substance abuse.

When shooters kill themselves after the murders they commit, it is impossible to perform such a psychiatric examination, and all that one can go by are prior medical records, but it becomes extremely difficult to retrospectively construct cause-effect relationships. In the case of Adam Lanza, the media and the public do not have access to his medical records. However, soon after the shooting, there was frequent mention in the media that Lanza had been diagnosed with either Asperger syndrome, autism or a personality disorder, and potential links between these diagnoses and the shooting were implied. Without carefully perusing his medical records, it is difficult to assess whether these diagnoses were accurate, how severe his symptoms were and how they were being treated. To make matters worse, some newspapers and websites have resorted to generating narratives about Adam Lanza’s behavior and mental health based on subjective and anecdotal experiences of class-mates, family friends and, in perhaps the most ridiculous case, Lanza’s hair stylist. Snippets of subjective information regarding odd behaviors exhibited by Lanza have been offered to readers and viewers so that they can perform an armchair evaluation of Lanza’s mental health from afar and search for potential clues in his past that might point to why he went on a shooting rampage. Needless to say, this form of armchair analysis is fraught with error.

It is difficult enough to diagnose a patient during a face-to-face evaluation and then try to make causal links between the symptoms and the observed pathology. In the setting of cardiovascular disease, for example, the healthcare professional has access to blood tests which accurately measure cholesterol levels or biomarkers of heart disease, angiograms that generate images of the coronary arteries and even ultrasound images of the heart (echocardiograms) that can rather accurately assess the strength of the heart. Despite all of these objective measurements, it requires a careful and extensive discussion with the patient to understand whether his shortness of breath is truly linked to his heart disease or whether it might be related to other factors. Someone might have mild heart disease by objective testing, but the shortness of breath he experiences when trying to walk up the stairs may be due to months of physical inactivity and not due to his mild heart disease.

In psychiatry, making diagnoses and causally linking symptoms and signs to mental illness is even more difficult, because there are fewer objective tests available. There are, as of now, no CT scans or blood tests that can accurately and consistently diagnose a mental illness such as depression. There are numerous reports of documented abnormalities in brain imaging observed in patients with mental illness, but their reliability and their ability to predict specific outcomes of the respective diseases remain unclear. The mental health professional has to rely primarily on subjective reports from the patient and the patient’s caregivers or family members in order to arrive at a diagnosis. In the case of Adam Lanza, who killed himself as well as his mother, all one can go by are his most recent mental health evaluations, which could provide a diagnosis, but may still not reliably explain his killing spree. Retrospective evaluations of his mental health by former class-mates, hair stylists or family members are of little help. Comments on the past behavior of a mass shooter will invariably present a biased and subjective view of the past, colored by knowledge of the terrible shooting. Incidents of “odd behaviors” will be remembered, without objectively assessing how common these behaviors were in other people who did not go on to become mass shooters.

An article written by Liza Long with the sensationalist title “I Am Adam Lanza’s Mother” was widely circulated after the shooting. Long was obviously not the mother of Adam Lanza, and merely took advantage of the opportunity to describe her frustration with the mental health care system and her heart-wrenching struggles with the mental health of her son, who was prone to violent outbursts. In addition to violating the privacy of her son and making him a likely target of future prejudice and humiliation, Long implied that the violent outbursts she had seen in her son indicated that he might become a mass shooter like Adam Lanza. Long, like the rest of the public, had no access to Lanza’s medical records, did not know whether Lanza had been diagnosed with the same illnesses as her own son and whether Lanza had exhibited the same behaviors. Nevertheless, Long’s emotional story and the sensationalist title of her article caught on, and many readers may have accepted her story as proof of a link between certain forms of mental illness and a predisposition to becoming a mass shooter.

Instead of relying on retrospective analyses and anecdotes, it may be more helpful to review the scientific literature on the purported link between mental illness and violence.


The link between mental illness and violence

There is a widespread notion that mental illness causes violent behavior, but the scientific evidence for this presumed link is not that solid. “Mental illness” is a very heterogeneous term, comprising a wide range of disorders and degrees of severity for each disorder, so many studies that have tried to establish a link between “mental illness” and violence have focused on the more severe manifestations of mental illness. The 1998 landmark study “Violence by People Discharged From Acute Psychiatric Inpatient Facilities and by Others in the Same Neighborhoods” by Henry Steadman and colleagues was published in the highly cited psychiatry journal Archives of General Psychiatry and focused on patients whose mental illness was severe enough to require hospitalization. The study followed patients for one year after they were released from acute psychiatric inpatient units and assessed how likely they were to engage in violence. At one of the sites (Pittsburgh), the researchers also compared the likelihood of the psychiatric patients engaging in violence with that of other residents of the same neighborhoods. Steadman and colleagues found that while there was a higher rate of violence among the discharged psychiatric patients, this was associated with their higher rate of substance abuse. Psychiatric patients without substance abuse had the same rate of violence as other residents of the neighborhood without substance abuse.

The recent large-scale study “The Intricate Link Between Violence and Mental Disorder” was published in the Archives of General Psychiatry by Elbogen and Johnson in 2009 and also found that severe mental illness by itself was not a strong predictor of violence. Instead, future violence was more closely associated with a history of past violence, substance abuse or contextual factors, such as unemployment or a recent divorce. A 2009 meta-analysis by Fazel and colleagues was published in PLOS Medicine and reviewed major studies that had investigated the potential link between schizophrenia and violence. The authors found an increased risk of violence and homicide in patients with schizophrenia, but this was again primarily due to the higher rates of substance abuse in the patient population. The risk of homicide in individuals with schizophrenia was 0.3%, and the risk of homicide was also 0.3% in people with a history of substance abuse. All of the studies noted a great degree of variability in terms of violence, again reminding us that mental illnesses are very heterogeneous diseases. An individual diagnosed with “schizophrenia” is not necessarily at higher risk of engaging in violent behavior. One also has to assess their specific context, their past history of violence, their social circumstances and especially their degree of substance abuse, which can refer to alcohol abuse or alcohol dependence as well as the abuse of illegal substances such as cocaine. The data on Asperger syndrome, one of the conditions that Adam Lanza is said to have been diagnosed with, are far sparser. Stål Bjørkly recently reviewed the studies in this area and found that there has been no systematic research in this field. The hypothesized link between Asperger syndrome and violence is based on just a few studies, mostly case reports of selected incidents.

It is quite noteworthy that multiple large-scale studies investigating the association between mental illness and violence have come up with the same conclusion: Patients with mental illnesses may be at greater risk for engaging in violence, but this appears to be primarily linked to concomitant substance abuse. In the absence of substance abuse, mental illness by itself does not significantly increase the likelihood of engaging in violence. Richard Friedman summarized it best in an article for the New England Journal of Medicine:

The challenge for medical practitioners is to remain aware that some of their psychiatric patients do in fact pose a small risk of violence, while not losing sight of the larger perspective — that most people who are violent are not mentally ill, and most people who are mentally ill are not violent.

Human behavior and mental illness

One rarely encounters armchair diagnoses in cardiovascular disease, neurologic disease or cancer. Journalists do not usually interview relatives or friends of cancer patients to ascertain whether there had been early signs of the cancer that were missed before the definitive diagnosis was made or the patient died of cancer. If medical details about a public figure are disclosed, such as the heart disease of former Vice President Cheney, journalists and TV viewers or readers without medical expertise rarely offer their own opinions on whether the diagnosis of cardiovascular disease was correct and how the patient should be treated. There were no interviews with other cardiovascular patients regarding their own personal history of heart disease, and they were also not asked to comment on how they felt Cheney was being treated. In the case of the 2012 US meningitis outbreak, which resulted in the death of at least 35 people, many questions were raised in the media regarding the underlying causes, and there was understandable concern about how to contain the outbreak and address its underlying causes, but the advice was usually sought from experts in infectious disease.

When it comes to mental illness, on the other hand, nearly everyone with access to the media seems to think they are an expert on mental health, and one finds a multitude of opinions on the efficacy of psychoactive medications, on whether or not psychiatric patients should be institutionalized and on the warning signs that lead up to violent behavior. There are many potential reasons why non-experts feel justified in commenting on mental illness but remain reticent to offer their opinions on cardiovascular disease, cancer or infectious disease. One reason is the subject matter of psychiatry itself. As humans, we often define ourselves by our thoughts, emotions and behaviors – and psychiatry primarily concerns itself with thoughts, emotions and behaviors. Our personal experiences may embolden us to offer our opinions on mental health, even though we have not had any formal training in mental health.

The psychiatric profession itself may have also contributed to this phenomenon by blurring the boundaries between true mental illness and the broad spectrum of human behavior. The criteria for mental illness have been broadened to such an extent that, according to recent studies, nearly half of all Americans will meet the criteria for a mental illness by the time they reach the age of 75. There is considerable debate among psychiatrists about the potential for over-diagnosis of mental illness and what its consequences might be. Labeling mildly “abnormal” behaviors as mental illnesses not only results in the over-prescription of psychoactive medications, but may also divert mental health resources away from patients with truly disabling forms of mental illness. For example, the upcoming edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM), which establishes the diagnostic criteria for each mental disorder, will remove the bereavement exclusion from the diagnosis of depression. This means that people suffering from severe grief after the death of their loved ones, such as the parents of the children who were murdered in Newtown, could conceivably be diagnosed with the mental disorder “Major Depression”.

Romanticizing and vilifying mental illness

The topic of mental illness also lends itself to sensationalism. Occasionally, mental illness is romanticized, as in the idea that it somehow fosters creativity – a notion for which there is little scientific evidence. More often, however, patients with mental illness are vilified. Broad generalizations are made and violent tendencies or criminal behaviors are ascribed to patients, without taking into account the heterogeneity of mental illness. Wayne LaPierre of the National Rifle Association (NRA) recently called for the creation of an “active national database of the mentally ill”, and at a subsequent event he referred to mentally ill patients as “monsters” and “lunatics”. Such sensationalist rants may make for good publicity, but they further undermine an objective discussion about mental health.

The call for a national database of mentally ill people seems especially counterintuitive, since the NRA has often portrayed itself as a defender of personal liberty and privacy. Are organizations such as the NRA aware that nearly half of all Americans will at some point in their lives qualify for a mental illness diagnosis and would have to be registered in such a database? Who would have access to the database? For what purposes would it be used? Would everyone listed in it be barred from buying guns? What about household members living with a patient who has been diagnosed with a mental illness – would they also be barred from buying guns? If all patients with at least one psychiatric diagnosis were registered in a national database, and if they and their household members were barred from owning guns, nearly all US households would probably become gun-free. If one were to follow the NRA’s logic, one might even have to create a national database of people with a history of substance abuse or past violence, since the research discussed above showed that these may be even stronger predictors of future violence than mental illness.

When it comes to reporting about mental illness, it is especially important to avoid the pitfalls of sensationalism. Mental illness should be neither romanticized nor vilified. Potential links between mental illness and behaviors such as violence should always be discussed in the context of the existing medical and scientific literature, and one should avoid generalizations and pronouncements based on occasional anecdotes. Journalists and mental health professionals need to help ensure the accuracy and objectivity of analyses regarding the mental health of individuals as well as specific mental illnesses. It never hurts to have a discussion about mental health. There is clearly a need to improve the mental health infrastructure and to develop better therapies for psychiatric disease, but this discussion should be based on facts, not myths.


Image Credit: Brain MRI scan data for childhood-onset schizophrenia showing areas of brain growth and loss of tissue, via NIMH


Science Journalism and the Inner Swine Dog

A search of the PubMed database, which indexes scholarly biomedical articles, reveals that 997,508 articles were published in the year 2011, which amounts to roughly 2,700 articles per day. Since the database does not include all published biomedical research articles, the actual number of published biomedical papers is probably even higher. Most biomedical researchers work in defined research areas, so perhaps only 1% of the published articles may be relevant to their research. As an example, the major focus of my research is the biology of stem cells, so I narrowed the PubMed search down to articles containing the expression “stem cells”. I found that 14,291 “stem cells” articles were published in 2011, which translates to an average of 39 articles per day (assuming that one reads scientific papers on weekends and during vacations, which is probably true for most scientists). Many researchers also tend to have two or three areas of interest, which further increases the number of articles they need to read.
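For readers who want to reproduce this kind of back-of-the-envelope calculation, here is a minimal sketch in Python that queries the public NCBI E-utilities esearch endpoint and converts yearly article counts into an average daily reading load. The query strings and the divide-by-365 assumption mirror the calculation above; the counts returned today will differ somewhat from the 2011 figures quoted here, because PubMed is continuously updated and re-indexed.

```python
# Back-of-the-envelope sketch: count PubMed articles for a given query
# via the public NCBI E-utilities "esearch" endpoint, then convert the
# yearly total into an average daily reading load.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a query string."""
    params = urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urlopen(f"{ESEARCH}?{params}") as response:
        return int(json.load(response)["esearchresult"]["count"])

# "[pdat]" restricts a PubMed query to a publication date (here: 2011).
queries = {
    "all biomedical articles": "2011[pdat]",
    '"stem cells" articles': '"stem cells" AND 2011[pdat]',
}
for label, term in queries.items():
    count = pubmed_count(term)
    # Dividing by 365 assumes one reads papers on weekends and vacations too.
    print(f"{label}: {count} published in 2011, ~{count / 365:.0f} per day")
```

Requesting retmode=json keeps the parsing trivial; the endpoint otherwise returns XML by default.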


Needless to say, it has become impossible for researchers to read all the articles published in their fields of interest, because if they did, they would not have any time left to conduct experiments of their own. To avoid drowning in this information overload, researchers have developed multiple strategies for surviving and navigating their way through the published data. These strategies include relying on the recommendations of colleagues, focusing on articles published in high-impact journals, perusing only articles that are directly related to one’s own work, or reading only articles that have been cited or featured in major review articles, editorials or commentaries. As a stem cell researcher, I can use these strategies to narrow down the stem cell articles that I ought to read to a manageable number of about three or four a day. However, scientific innovation is fueled by the cross-fertilization of ideas, and the most creative ideas often come from combining seemingly unrelated research questions. The challenge for me is therefore not only to stay informed about important developments in my own areas of interest, but also to know about major developments in other scientific domains such as network theory, botany or neuroscience, because discoveries in such “distant” fields could inspire me to develop innovative approaches in my own work.
To keep up with scientific developments outside of my area of expertise, I have begun to rely on high-quality science journalism, which can be found in selected print and online publications and in science blogs. Good science journalists accurately convey complex scientific concepts in simple language without oversimplifying the actual science. This is easier said than done, because it requires a solid understanding of the science as well as excellent communication skills. Most scientists are not trained to communicate with a general audience, and most journalists have had very limited exposure to actual scientific work. To become good science journalists, either scientists have to be trained in the art of communicating results to non-specialists, or journalists have to acquire the scientific knowledge pertinent to the topics they want to write about. The training of science journalists requires time, resources and good mentors.
Once they have completed their training and start working as science journalists, they still need adequate time, resources and mentors. When writing about an important new scientific development, good science journalists do not just repeat the information provided by the researchers or contained in the press release of the university where the research was conducted. Instead, they perform the fact-checking necessary to ensure that the provided information is indeed correct. They also consult the scientific literature as well as other scientific experts to place the new development in the context of existing research. Importantly, science journalists then analyze the new development, separating the actual scientific data from speculation and pointing out the limitations and implications of the work. Science journalists also write for a very broad audience, which poses its own challenge. Their readership includes members of the general public interested in new scientific findings, politicians and members of private industry who may base political and economic decisions on scientific findings, patients and physicians who want to stay informed about innovative new treatments and, as mentioned above, scientists who want to know about new research outside their areas of expertise.
Unfortunately, I do not think it is widely appreciated how important high-quality science journalism is and how much effort it requires. Limited resources, constraints on a journalist’s time and the pressure to publish sensationalist articles that exaggerate or oversimplify the science in order to attract a larger readership can all compromise the quality of the work. Two recent examples illustrate this: the so-called Jonah Lehrer controversy, in which the highly respected and popular science journalist Jonah Lehrer was found to have fabricated quotes, plagiarized and oversimplified research, and the more recent case in which the Japanese newspaper Yomiuri Shimbun ran a story about the use of induced pluripotent stem cells to treat patients with heart disease, a story that turned out to be based on a researcher’s fraudulent claims. The case of Jonah Lehrer was a big shock for me. I had enjoyed reading a number of his articles and blog posts, and at first it was difficult for me to accept that his work contained so many errors and so much evidence of misconduct. Boris Kachka recently wrote a profound analysis of the Jonah Lehrer controversy in New York Magazine:

Lehrer was the first of the Millennials to follow his elders into the dubious promised land of the convention hall, where the book, blog, TED talk, and article are merely delivery systems for a core commodity, the Insight.

The Insight is less of an idea than a conceit, a bit of alchemy that transforms minor studies into news, data into magic. Once the Insight is in place—Blink, Nudge, Free, The World Is Flat—the data becomes scaffolding. It can go in the book, along with any caveats, but it’s secondary. The purpose is not to substantiate but to enchant.

Kachka’s expression “Insight” describes our desire to believe in simple narratives. Any active scientist knows that scientific findings tend to be more complex and difficult to interpret than we anticipate. There are few simple truths or “Insights” in science, even though part of us wants to seek out these elusive simple truths. The metaphor that comes to mind is the German expression “der innere Schweinehund”, which literally translates to “the inner swine dog”. The expression may evoke the image of a chimeric pig-dog beast created by a mad German scientist in a Hollywood World War II movie, but in Germany it is actually used to describe a metaphorical inner creature that wants us to be lazy, seek out convenience and avoid challenges. In my view, scientific work is an ongoing battle with our “inner swine dog”. We start experiments with simple hypotheses and models, and we are usually quite pleased with results that confirm these anticipated findings, because they allow us to be intellectually lazy. However, good scientists know that more often than not, scientific truths are complex, and we need to force ourselves to continuously challenge our own concepts. Usually this involves performing more experiments, analyzing more data and trying to interpret the data from many different perspectives. Overcoming this intellectual laziness requires work, but most of us who are passionate about science enjoy these challenges and seek out opportunities to battle our “inner swine dog” instead of succumbing to a state of perpetual intellectual laziness.
When I read Kachka’s description of why Lehrer was able to get away with his fabrications and over-simplifications, I realized that it was probably because Lehrer gave us the narratives we wanted to believe. He provided “Insight”, cloaking scientific research in a false certainty and simplicity. Even though many of us enjoy overcoming intellectual laziness in our own work, we may not be used to challenging our “inner swine dog” when we learn about scientific topics outside of our own areas of expertise. This is precisely why we need good science journalists, who challenge us intellectually by avoiding over-simplifications.

A different but equally instructive case of poor science journalism occurred when the widely circulated Japanese newspaper Yomiuri Shimbun reported in early October of 2012 that the Japanese researcher Hisashi Moriguchi had transplanted induced pluripotent stem cells into patients with heart disease. This was quite a sensation, because it would have been the first transplantation of such stem cells into human patients. For those of us in the field of stem cell research, the claim came as a big surprise and did not sound very believable: the story suggested that the work had been performed in the United States, and most of us knew that obtaining approvals for using such stem cells in clinical studies there would have been very challenging. Many people unacquainted with the complexities of using stem cells in patients, however, may well have believed the story. Within days, it became apparent that the researcher’s claims were fraudulent. He had said that he conducted the studies at Harvard, but Harvard stated that he was not currently affiliated with the university, and there was no evidence that any such studies had ever been conducted there. His claims about how he had derived the cells, and about how little time he had supposedly needed to perform the experiments, were also debunked.
This was not the first incident of scientific fraud in the world of stem cell research, and it unfortunately will not be the last. What makes this incident noteworthy is how the Yomiuri Shimbun responded to its reporting of the fraudulent claims. The newspaper removed the original story from its website and issued public apologies for its poor reporting. The English-language version of the newspaper listed the mistakes in an article entitled “iPS REPORTS–WHAT WENT WRONG / Moriguchi reporting left questions unanswered”. The problems included the reporter’s inadequate fact-checking of the researcher’s claims and affiliations and the failure to consult other scientists about whether the findings sounded reasonable. Interestingly, the reporter had identified some red flags and concerns:

–Moriguchi had not published any research on animal experiments.
–The reporter had not been able to contact people who could confirm the iPS cell clinical applications.
–Moriguchi’s affiliation with Harvard University could not be confirmed online.
–It was possible that different cells, instead of iPS cells, had been effective in the treatments.
–It was odd that what appeared to be major world news was appearing only in the form of a poster at a science conference.
–The reporter wondered if it was really possible that transplant operations using iPS cells had been approved at Harvard.
The reporter sent the e-mail to three others, including another news editor in charge of medical science, on the same day, and the reporter’s regular updates on the topic were shared among them.
The science reporter said he felt “at ease” after informing the editors about such dubious points. After receiving explanations from Moriguchi, along with the video clip and other materials, the reporter sought opinions from only one expert and came to believe the doubts had been resolved.

In spite of these red flags, the reporter and the editors decided to run the story. They gave in to their intellectual laziness and to the desire to run a sensational story instead of tediously following up on all the red flags. They had a story about a Japanese researcher making a ground-breaking discovery in a very competitive area of stem cell research, and this was a story their readers would probably love. For this unprofessional conduct, the reporter and the editors received reprimands and penalties. Another article in the newspaper summarizes the punitive measures:

Effective as of next Thursday, The Yomiuri Shimbun will take disciplinary action against the following officials and employees:
–Yoshimitsu Ohashi, senior managing director and managing editor of the company, and Takeshi Mizoguchi, corporate officer and senior deputy managing editor, will each return 30 percent of their remuneration and salary for two months.
–Fumitaka Shibata, a deputy managing editor and editor of the Science News Department, will be replaced and his salary will be reduced.
–Another deputy managing editor in charge of editorial work for the Oct. 11 edition will receive an official reprimand.
–The salaries of two deputy editors of the Science News Department will be cut.
–A reporter in charge of the Oct. 11 series will receive an official reprimand.

I have mixed feelings about these punitive actions. It is commendable that the newspaper apologized without reservations or excuses and listed its mistakes. The reprimands and penalties also show that the newspaper takes its science journalism seriously and recognizes the importance of high professional standards. The penalties were more severe for the editors than for the reporter, which may reflect the fact that the reporter did consult with the editors, and they decided to run the story even though the red flags had been pointed out to them. My concern is that punitive actions alone may not solve the problem, and they leave many questions unanswered. Did the newspaper evaluate whether its science journalists and editors had been appropriately trained? Did the science journalist have the time and resources to conduct his or her research in a conscientious manner? Importantly, will science journalists be given appropriate resources and be protected from the pressures and constraints that encourage unprofessional science journalism? We do not know the answers to these questions, but providing the infrastructure for high-quality science journalism is probably going to be more useful than punitive actions alone. We can also hope that media organizations all over the world learn from this incident, recognize the importance of science journalism and put mechanisms in place to ensure its quality.

Image via Wikimedia Commons/ Norbert Schnitzler: Statue “Mein Innerer Schweinhund” in Bonn