To Err Is Human, To Study Errors Is Science

The family of cholesterol-lowering drugs known as ‘statins’ is among the most widely prescribed medications for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks, especially in patients who have already been diagnosed with cardiovascular disease. A more contentious issue is the use of statins in individuals who have no history of heart attacks, strokes or blockages in their blood vessels. Instead of waiting for the first major manifestation of cardiovascular disease, should one start statin therapy early on to prevent cardiovascular disease?

If statins were free of charge and had no side effects whatsoever, the answer would be rather straightforward: Go ahead and use them as soon as possible. However, like all medications, statins come at a price. There is the financial cost to the patient or their insurance to pay for the medications, and there is a health cost to the patients who experience potential side effects. The Guideline Panel of the American College of Cardiology (ACC) and the American Heart Association (AHA) therefore recently recommended that the preventive use of statins in individuals without known cardiovascular disease should be based on personalized risk calculations. If the risk of developing disease within the next 10 years is greater than 7.5%, then the benefits of statin therapy outweigh its risks and the treatment should be initiated. The panel also indicated that if the 10-year risk of cardiovascular disease is greater than 5%, then physicians should consider prescribing statins, but should bear in mind that the scientific evidence for this recommendation was not as strong as that for higher-risk individuals.
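In code form, the panel’s recommendation amounts to a simple threshold rule. The sketch below only illustrates the two cutoffs described above; the function name and tier wording are my own, not part of any official ACC/AHA risk calculator:

```python
# A minimal sketch of the guideline panel's decision thresholds as described
# above. The function name and wording are illustrative, not taken from any
# official ACC/AHA risk tool.

def statin_recommendation(ten_year_risk):
    """Map an estimated 10-year cardiovascular risk (a fraction between 0 and 1)
    to the recommendation tiers described in the text."""
    if ten_year_risk >= 0.075:
        return "initiate statin therapy (benefits judged to outweigh risks)"
    elif ten_year_risk >= 0.05:
        return "consider statin therapy (weaker supporting evidence)"
    else:
        return "statin therapy not recommended for primary prevention"

print(statin_recommendation(0.08))  # initiate statin therapy ...
print(statin_recommendation(0.06))  # consider statin therapy ...
```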


Oops button – via Shutterstock

Using statins in low-risk patients

The recommendation that individuals with a comparatively low risk of developing future cardiovascular disease (a 10-year risk lower than 10%) would benefit from statins was met with skepticism by some medical experts. In October 2013, the British Medical Journal (BMJ) published a paper by John Abramson, a lecturer at Harvard Medical School, and his colleagues which re-evaluated the data from a prior study on statin benefits in patients with less than 10% cardiovascular disease risk over 10 years. Abramson and colleagues concluded that the statin benefits were overstated and that statin therapy should not be expanded to include this group of individuals. To further bolster their case, Abramson and colleagues also cited a 2013 study by Huabing Zhang and colleagues in the Annals of Internal Medicine which (according to Abramson et al.) had reported that 18% of patients discontinued statins due to side effects. Abramson even highlighted the finding from the Zhang study by including it as one of four bullet points summarizing the key take-home messages of his article.

The problem with this characterization of the Zhang study is that it ignored all the caveats that Zhang and colleagues had mentioned when discussing their findings. The Zhang study was based on a retrospective review of patient charts and did not establish a true cause-and-effect relationship between the discontinuation of statins and actual side effects of statins. Patients may stop taking a medication for many reasons, and discontinuation does not necessarily mean that the medication caused side effects. According to the Zhang paper, 17.4% of patients in their observational retrospective study had reported a “statin related incident” and of those only 59% had stopped the medication. The fraction of patients discontinuing statins due to suspected side effects was therefore at most 9–10%, not the 18% cited by Abramson. And as Zhang and colleagues pointed out, their study did not include a placebo control group. Trials with placebo groups document similar rates of “side effects” in patients taking statins and in those taking placebos, suggesting that only a small minority of perceived side effects are truly caused by the chemical compounds in statin drugs.
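To make the discrepancy concrete, here is the arithmetic implied by the figures quoted above; it is a rough upper bound, since some patients presumably stopped for reasons unrelated to true side effects:

```python
# Reconstructing the ~10% figure from the numbers reported in the Zhang paper:
# 17.4% of patients reported a "statin related incident", and of those,
# only 59% actually stopped the medication.
reported_incident = 0.174
stopped_given_incident = 0.59

discontinued = reported_incident * stopped_given_incident
print(f"{discontinued:.1%}")  # 10.3% -- roughly half of the 18% cited by Abramson
```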


Admitting errors is only the first step

Whether 18%, 9% or a far smaller proportion of patients experience significant medication side effects is no small matter, because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue medications that have been shown to significantly reduce the risk of heart attacks in a wide range of patients. On the other hand, severely underestimating statin side effects could result in the discounting of important symptoms and the suffering of patients. Abramson’s misinterpretation of the statin side effect data was pointed out by readers of the BMJ soon after the article was published, and it prompted an inquiry by the journal. After re-evaluating the data and discussing the issue with Abramson and colleagues, the journal issued a correction in which it clarified the misrepresentation of the Zhang paper.

Fiona Godlee, the editor-in-chief of the BMJ, also wrote an editorial explaining the decision to issue a correction on the question of side effects rather than retract the whole paper: the other main point made by Abramson and colleagues – the lack of benefit in low-risk patients – might still hold true. At the same time, Godlee recognized the inherent bias of a journal’s editor when it comes to deciding whether or not to retract a paper. Every retraction of a peer-reviewed scholarly paper is somewhat of an embarrassment to the authors of the paper as well as the journal, because it suggests that the peer review process failed to identify one or more major flaws. In a commendable move, the journal appointed a multidisciplinary review panel which includes leading cardiovascular epidemiologists. This panel will review the Abramson paper as well as another BMJ paper which had also cited the inaccurately high frequency of statin side effects, investigate the peer review process that failed to identify the erroneous claims and provide recommendations regarding the ultimate fate of the papers.


Reviewing peer review

Why didn’t the peer reviewers who evaluated Abramson’s article catch the error prior to publication? We can only speculate. One has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours analyzing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in terms of how they present the data and describe their scientific methodology.

In the case of Abramson and colleagues, the correction issued by the BMJ refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical research papers often cite 30 or 40 studies, and it is unrealistic to expect peer reviewers to read all the cited papers and ensure that they are being properly cited and interpreted. If this were the expectation, few experts would agree to serve as volunteer reviewers, since they would have hardly any time left to conduct their own research. However, in this particular case, any peer reviewer familiar with statins and the controversies surrounding their side effects should have expressed concerns regarding the extraordinarily high figure of 18% cited by Abramson and colleagues. Hopefully, the review panel will identify the reasons for the failure of the BMJ’s peer review system and point out ways to improve it.


To err is human, to study errors is science

All researchers make mistakes, simply because they are human. It is impossible to eliminate all errors in any endeavor that involves humans, but we can construct safeguards that help us reduce the occurrence and magnitude of our errors. Overt fraud and misconduct are rare causes of errors in research, but their effects on any given research field can be devastating. One of the most notorious occurrences of research fraud is the case of the Dutch psychologist Diederik Stapel, who published numerous papers based on blatant fabrication of data – showing ‘results’ of experiments on non-existent study subjects. The field of cell therapy in cardiovascular disease recently experienced a major setback when a university review of studies headed by the German cardiologist Bodo Strauer found evidence of scientific misconduct. The significant discrepancies and irregularities in Strauer’s studies have now led to wide-ranging skepticism about the efficacy of using bone marrow cell infusions to treat heart disease.


It is difficult to obtain precise numbers to quantify the actual extent of severe research misconduct and fraud, since it may go undetected. Even when such cases are brought to the attention of the academic leadership, the involved committees and administrators may decide to keep their findings confidential and not disclose them to the public. However, most researchers working in academic research environments would probably agree that these are rare occurrences. A far more likely source of errors in research is the cognitive bias of the researchers. Researchers who believe in certain hypotheses and ideas are prone to interpreting data in a manner most likely to support their preconceived notions. For example, it is likely that a researcher opposed to statin usage will interpret data on side effects of statins differently than a researcher who supports statin usage. While Abramson may have been biased in the interpretation of the data generated by Zhang and colleagues, the field of cardiovascular regeneration is currently grappling with what appears to be a case of biased interpretation of one’s own data. An institutional review by Harvard Medical School and Brigham and Women’s Hospital recently determined that the work of Piero Anversa, one of the world’s most widely cited stem cell researchers, was significantly compromised and warranted a retraction. His group had reported that the adult human heart exhibited an amazing regenerative potential, suggesting that roughly every 8 to 9 years the adult human heart replaces its entire collective of beating heart cells (a 7%–19% yearly turnover of beating heart cells). These findings were in sharp contrast to a prior study which had found only a minimal turnover of beating heart cells (1% or less per year) in adult humans. Anversa’s finding was also at odds with the observations of clinical cardiologists, who rarely observe a near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the huge discrepancy between the prior research and Anversa’s studies was that Anversa and his colleagues had not taken into account the possibility of contamination that could have falsely elevated the cell regeneration counts.
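A back-of-the-envelope calculation shows just how far apart these turnover estimates are. The sketch below simplifies by treating turnover as a constant rate, so that replacing the equivalent of all cells takes roughly 1/rate years:

```python
# Years needed to replace (the equivalent of) all beating heart cells,
# assuming a constant yearly turnover rate r: ~1/r years.
estimates = {
    "Anversa, low end": 0.07,   # 7% per year
    "Anversa, high end": 0.19,  # 19% per year
    "prior study": 0.01,        # 1% or less per year
}
for label, r in estimates.items():
    print(f"{label}: {r:.0%}/year -> ~{1 / r:.0f} years for full turnover")
# The "8 to 9 years" claim corresponds to a mid-range rate of ~11-12% per year,
# versus ~100 years at the 1%/year rate reported by the prior study.
```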


Improving the quality of research: peer review and more

The fact that researchers are prone to making errors due to inherent biases does not mean we should simply throw our hands up in the air, say “Mistakes happen!” and let matters rest. High-quality science is characterized by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors early on so that we can limit their detrimental impact. The realization that lack of reproducibility of peer-reviewed scientific papers is becoming a major problem for many areas of research, such as psychology, stem cell research and cancer biology, has prompted calls for better ways to track reproducibility and errors in science.

One important new paradigm that is being discussed to improve the quality of scholarly papers is post-publication peer evaluation. Instead of viewing the publication of a peer-reviewed research paper as an endpoint, post-publication peer evaluation invites fellow scientists to continue commenting on the quality and accuracy of the published research even after its publication and to engage the authors in this process. Traditional peer review relies on just a handful of reviewers who decide the fate of a manuscript, but post-publication peer evaluation opens up the debate to hundreds or even thousands of readers, who may be able to detect errors that the small number of traditional peer reviewers missed prior to publication. It is also becoming apparent that science journalists and science writers can play an important role in the post-publication evaluation of published research papers by investigating and communicating research flaws identified in research papers. In addition to helping dismantle the Science Mystique, critical science journalism can help ensure that corrections, retractions or other major concerns about the validity of scientific findings are communicated to a broad non-specialist audience.

In addition to these ongoing efforts to reduce errors in science by improving the evaluation of scientific papers, it may also be useful to consider new proactive initiatives which focus on how researchers perform and design experiments. As the head of a research group at an American university, I have to take mandatory courses (in some cases on an annual basis) informing me about laboratory hazards, the ethics of animal experimentation or the ethics of how to conduct human studies. However, there are no mandatory courses helping us identify our own research biases or minimize their impact on the interpretation of our data. There is an underlying assumption that if you are no longer a trainee, you probably know how to perform and interpret scientific experiments. I would argue that it does not hurt to remind scientists regularly – no matter how junior or senior – that they can become victims of their biases. We have to learn to continuously re-evaluate how we conduct science and to be humble enough to listen to our colleagues, especially when they disagree with us.


Note: A shorter version of this article was first published at The Conversation with excellent editorial input provided by Jo Adetunji.
Abramson, J., Rosenberg, H., Jewell, N., & Wright, J. (2013). Should people at low risk of cardiovascular disease take a statin? BMJ, 347 (oct22 3) DOI: 10.1136/bmj.f6123

Neutrality, Balance and Anonymous Sources in Science Blogging – #scioStandards

This is Part 2 of a series of blog posts in anticipation of the Upholding standards in scientific blogs (Session 10B, #scioStandards) session which I will be facilitating at noon on Saturday, March 1 at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA). Please read Part 1 here. The goal of these blog posts is to raise questions which readers can ponder and hopefully discuss during the session.


1. Neutrality

Neutrality is prized by scientists and journalists. Scientists are supposed to report and analyze their scientific research in a neutral fashion. Similarly, journalistic professionalism requires a neutral and objective stance when reporting or analyzing news. Nevertheless, scientists and journalists are also aware that there is no perfect neutrality. We are all victims of our conscious and unconscious biases, and how we report data or events is colored by those biases. Not only is it impossible to be truly “neutral”, but one can even question whether “neutrality” should be a universal mandate. Neutrality can make us passive, especially when we see a clear ethical mandate to take action. Should one report in a neutral manner about genocide instead of becoming an advocate for the victims? Should a scientist who observes the destruction of ecosystems report on this in a neutral manner? Is it acceptable, or perhaps even required, for such a scientist to abandon neutrality and become an advocate for protecting the ecosystems?

Science bloggers or science journalists have to struggle to find the right balance between neutrality and advocacy. Political bloggers and journalists who are enthusiastic supporters of a political party will find it difficult to preserve neutrality in their writing, but their target audiences may not necessarily expect them to remain neutral. I am often fascinated and excited by scientific discoveries and concepts that I want to write about, but I also notice how my enthusiasm for science compromises my neutrality. Should science bloggers strive for neutrality and avoid advocacy? Or is it understood that their audiences do not expect neutrality?


2. Balance

One way to increase objectivity and neutrality in science writing is to provide balanced views. When discussing a scientific discovery or concept, one can also cite or reference scientists with opposing views. This underscores that scientific opinion is not a monolith and that most scientific findings can and should be challenged. However, the mandate to provide balance can also lead to “false balance”, when two opposing opinions are presented as equivalent perspectives even though one of the two sides has little to no scientific evidence to back up its claims. More than 99% of climatologists agree about the importance of anthropogenic global warming; it would therefore be “false balance” to give equal space to opposing fringe views. Most science bloggers would also avoid “false balance” when it comes to reporting on the scientific value of homeopathy, since nearly every scientist in the world agrees that homeopathy has no scientific data to back it up.

But how should science bloggers decide what constitutes “necessary balance” versus “false balance” when writing about areas of research where the scientific evidence is more ambivalent? How about a scientific discovery which 80% of scientists think is a landmark finding and 20% of scientists believe is a fluke? How does one find out about the scientific rigor of the various viewpoints, and how should a blog post reflect these differences in opinion? Press releases of universities or research institutions usually only cite the researchers who conducted a scientific study, but how does one find out about other scientists who disagree with the significance of the new study?


3. Anonymous Sources

Most scientific peer review is conducted with anonymous sources. The editors of peer-reviewed scientific journals send out newly submitted manuscripts to expert reviewers in the field, but they try to make sure that the names of the reviewers remain confidential. This helps ensure that the reviewers can comment freely about any potential flaws in the manuscript without having to fear retaliation from authors who might be incensed about the critique. Even in the post-publication phase, anonymous commenters can leave critical comments about a published study at the post-publication peer review website PubPeer. The comments made by anonymous as well as identified commenters at PubPeer played an important role in raising questions about recent controversial stem cell papers. On the other hand, anonymous sources may also use their cover to make baseless accusations and malign researchers. In the case of journals, the responsibility lies with the editors to ensure that their anonymous reviewers are indeed behaving in a professional manner and not abusing their anonymity.

Investigative political journalists also often rely on anonymous sources and whistle-blowers to receive critical information that would have otherwise been impossible to obtain. Journalists are also trained to ensure that their anonymous sources are credible and that they are not abusing their anonymity.

Should science bloggers and science journalists also consider using anonymous sources? Would unnamed scientists provide a more thorough critical appraisal of the quality of scientific research or would this open the door to abuse?


I hope that you leave comments on this post, tweet your thoughts using the #scioStandards hashtag and discuss your views at the Science Online conference.

Is Kindness Key to Happiness and Acceptance for Children?

The study “Kindness Counts: Prompting Prosocial Behavior in Preadolescents Boosts Peer Acceptance and Well-Being”, published by Layous and colleagues in the journal PLOS One on December 26, 2012, was cited by multiple websites as proof of how important it is to teach children to be kind. NPR commented on the study in the blog post “Random Acts Of Kindness Can Make Kids More Popular“, and the study was also discussed by ScienceDaily in “Kindness Key to Happiness and Acceptance for Children“, Fox News in “No bullies: Kind kids are most popular” and the Huffington Post in “Kind Kids Are Happier And More Popular (STUDY)“.

According to most of these news reports, the design of the study was rather straightforward. Schoolchildren ages 9 to 11 in a Vancouver school district were randomly assigned to two groups for a four-week intervention: Half of the children were asked to perform kind acts, while the other half were asked to keep track of pleasant places they visited. Happiness and acceptance by their peers were assessed at the beginning and the end of the four-week intervention period. The children were allowed to choose the “acts of kindness” or the “pleasant places”. The “acts of kindness” group chose acts such as sharing their lunch or giving their mothers a hug. The “pleasant places” group chose to visit places such as the playground or a grandparent’s house.

At the end of the four week intervention, both groups of children showed increased signs of happiness, but the news reports differed in terms of the impact of the intervention on the acceptance of the children.


The NPR blog reported:

… the children who performed acts of kindness were much more likely to be accepting of their peers, naming more classmates as children they’d like to spend time with.

This would mean that the children performing the “acts of kindness” were the ones that became more accepting of others.


The conclusion in the Huffington Post was quite different:


The students were asked to report how happy they were and identify classmates they would like to work with in school activities. After four weeks, both groups said they were happier, but the kids who had performed acts of kindness reported experiencing greater acceptance from their peers – they were chosen most often by other students as children the other students wanted to work with.

The Huffington Post interpretation (a re-post from Livescience) was that the children performing the “acts of kindness” became more accepted by others, i.e. more popular.


Which of the two interpretations was the correct one? Furthermore, how significant were the improvements in happiness and acceptance?


I decided to read the original PLOS One paper and I was quite surprised by what I found:

The manuscript (in its published form, as of December 27, 2012) had no figures and no tables in the “Results” section. The entire “Results” section consisted of just two short paragraphs. The first paragraph described the affect and happiness scores:


Consistent with previous research, overall, students in both the kindness and whereabouts groups showed significant increases in positive affect (γ00 = 0.15, S.E. = 0.04, t(17) = 3.66, p<.001) and marginally significant increases in life satisfaction (γ00 = 0.09, S.E. = 0.05, t(17) = 1.73, p = .08) and happiness (γ00 = 0.11, S.E. = 0.08, t(17) = 1.50, p = .13). No significant differences were detected between the kindness and whereabouts groups on any of these variables (all ps>.18). Results of t-tests mirrored these analyses, with both groups independently demonstrating increases in positive affect, happiness, and life satisfaction (all ts>1.67, all ps<.10).


There are no actual baseline values given, so it is difficult to know how big the changes are. If a starting score is 15, then a change of 1.5 is only a 10% change. On the other hand, if the starting score is 3, then a change of 1.5 represents a 50% change. The Methods section of the paper also does not describe the statistics employed to analyze the data. Just relying on arbitrary p-value thresholds is problematic, but if one were to use the infamous p-value threshold of 0.05 for significance, one can conclude that there was a significant change in the affect or mood of children (p<0.001), a marginally significant trend of increased life satisfaction (p = 0.08) and no significant change in happiness (p = 0.13).

It is surprising that the authors do not show the actual scores for each of the two groups. After all, one of the goals of the study was to test whether performing “acts of kindness” has a bigger impact on happiness and acceptance than visiting “pleasant places” (the “whereabouts” group). There is a generic statement – “No significant differences were detected between the kindness and whereabouts groups on any of these variables (all ps>.18).” – but what were the actual happiness and satisfaction scores for each of the groups? The next sentence is also cryptic: “Results of t-tests mirrored these analyses, with both groups independently demonstrating increases in positive affect, happiness, and life satisfaction (all ts>1.67, all ps<.10).” Does this mean that p<0.1 was the threshold of significance? Do these p-values refer to the post-intervention versus pre-intervention analysis for each tested variable in each of the two groups? If yes, why not show the actual data for both groups?


The second (and final) paragraph of the Results section described acceptance of the children by their peers. Children were asked which classmates they “would like to be in school activities [i.e., spend time] with”:


All students increased in the raw number of peer nominations they received from classmates (γ00 = 0.68, S.E. = 0.27, t(17) = 2.37, p = .02), but those who performed kind acts (M = +1.57; SD = 1.90) increased significantly more than those who visited places (M = +0.71; SD = 2.17), γ01 = 0.83, S.E. = 0.39, t(17) = 2.10, p = .05, gaining an average of 1.5 friends. The model excluded a nonsignificant term controlling for classroom size (p = .12), which did not affect the significance of the kindness term. The effects of changes in life satisfaction, happiness, and positive affect on peer acceptance were tested in subsequent models and all found to be nonsignificant (all ps>.54). When controlling for changes in well-being, the effect of the kindness condition on peer acceptance remained significant. Hence, changes in well-being did not predict changes in peer acceptance, and the effect of performing acts of kindness on peer acceptance was over and above the effect of changes in well-being.


This is again just a summary of the data, and not the actual data itself. Going to “pleasant places” increased the average number of “friends” by 0.71 (I am not sure I would use “friend” to describe someone who nominates me as a potential partner in a school activity), while performing “acts of kindness” increased the average number of friends by 1.57. This did answer the question raised by the conflicting news reports: according to the presented data, the “acts of kindness” kids were more accepted by others, and there was no data on whether they also became more accepting of others. I then looked at the Methods section to understand the statistics and models used for the analysis and found that no details were included in the paper. The Methods section just ended with the following sentences:


Pre-post changes in self-reports and peer nominations were analyzed using multilevel modeling to account for students’ nesting within classrooms. No baseline condition differences were found on any outcome variables. Further details about method and results are available from the first author.
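Although the raw data are unavailable, the reported means and standard deviations of the change in peer nominations do allow a rough standardized effect size to be reconstructed. The sketch below is purely illustrative and assumes equal group sizes, which the paper does not report:

```python
# Rough effect-size estimate from the summary statistics quoted above
# (change in peer nominations). Group sizes are not reported, so the
# pooled SD here assumes equal groups -- an unverifiable assumption.
import math

mean_kindness, sd_kindness = 1.57, 1.90  # "kind acts" group
mean_places, sd_places = 0.71, 2.17      # "whereabouts" group

pooled_sd = math.sqrt((sd_kindness**2 + sd_places**2) / 2)
cohens_d = (mean_kindness - mean_places) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")  # ~0.42, a small-to-moderate effect
```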


Based on reviewing the actual paper, I am quite surprised that PLOS One accepted it for publication. There are minimal data presented in the paper, no actual baseline scores regarding peer acceptance or happiness, incomplete methods and the rather grand title of “Kindness Counts: Prompting Prosocial Behavior in Preadolescents Boosts Peer Acceptance and Well-Being” considering the marginally significant data. One is left with many unanswered questions:

1) What if kids had not been asked to perform additional “acts of kindness” or additional visits to “pleasant places” and had instead merely logged these positive activities that they usually performed as part of their routine? This would have been a very important control group.

2) Why did the authors show only brief summaries of the analyses instead of all the actual affect, happiness, satisfaction and peer acceptance data?

3) Did the kids in both groups also become more accepting of their peers?


It is quite remarkable that going to places one likes, such as a shopping mall, is just as effective as pro-social behavior (performing “acts of kindness”) in terms of improving happiness and well-being. The visits to pleasant places also helped gain peer acceptance, just not quite as much as performing acts of kindness. However, the somewhat selfish-sounding headline “Hanging out at the mall makes kids happier and a bit more popular” is not as attractive as the warm and fuzzy headline “Random acts of kindness can make kids more popular“. This may be the reason why the “prosocial” or “kindness” aspect of this study was emphasized so strongly by the news media.


In summary, the limited data in this published paper suggest that children who are asked to intentionally hang out at places they like and keep track of these visits for four weeks seem to become happier, similar to kids who make an effort to perform additional acts of kindness. Both groups of children gain acceptance by their peers, but the children who perform acts of kindness fare slightly better. There are no clear descriptions of the statistical methods, no actual scores for the two groups (only the changes in scores are shown) and important control groups (such as children who keep track of their positive activities, without increasing them) are missing. Therefore, definitive conclusions cannot be drawn from these limited data. Unfortunately, none of the above-mentioned news reports highlighted these weaknesses, and instead jumped on the bandwagon of interpreting this study as scientific evidence for the importance of kindness. Some of the titles of the news reports even made references to bullying, even though bullying was not assessed in the study at all.

This does not mean that we should discourage our children from being kind. On the contrary, there are many moral reasons to encourage our children to be kind, and there is no need for a scientific justification for kindness. However, if one does invoke science as a reason for kindness, it should be based on scientifically rigorous and comprehensive data.


Armchair Psychiatry and Violence

Following tragic mass shootings such as the one that unfolded in Newtown, Connecticut, it is natural to try to “make sense” of the events. The process of “making sense” and understanding the underlying causes is part of the healing process. It also gives hope to society that if we were able to address the causes of the tragedy, we could prevent future tragedies. It is not unexpected that mental illness is often invoked as a possible reason for mass shootings. After all, the slaying of fellow human beings seems so far removed from what we consider normal human behavior. Since mental illness directly affects human behavior, it seems like the most straightforward explanation for a mass shooting. It is surmised that the mental illness severely impairs the decision-making capacity and perceptions of the afflicted person so that he or she is prone to acting out in a violent manner and causing great harm to others. Once evidence for “mental illness” in a shooter is found, one may also be tempted to stop looking for other factors that may have caused the tragedy. The nebulous expression “mental illness” can appear like a convenient catch-all explanation that requires no further investigation, because the behavior of a “mentally ill” person might be beyond comprehension.

The problem with this convenient explanation is that “mental illness” is not a homogeneous entity. There are many different types of mental illness, and specific psychiatric disorders, such as major depression, anxiety disorder or schizophrenia, represent a broad spectrum of disease. These illnesses not only vary in their severity from patient to patient, but even within a single patient, mental illnesses vary in severity over time. Just because someone carries the diagnosis of schizophrenia does not mean that the patient will continuously have severe manifestations of the disease. Some patients may show signs of withdrawal and introversion, others may act out with aggressive behavior. Making a direct causal link between a person’s diagnosis of mental illness and their violent behavior requires a careful psychiatric examination of that individual patient, as well as consideration of other circumstances, such as recent events in their lives or possible substance abuse.

When shooters kill themselves after the murders they commit, it is impossible to perform such a psychiatric examination; all that one can go by are prior medical records, and it becomes extremely difficult to retrospectively construct cause-effect relationships. In the case of Adam Lanza, the media and the public do not have access to his medical records. However, soon after the shooting, there was frequent mention in the media that Lanza had been diagnosed with either Asperger syndrome, autism or a personality disorder, and potential links between these diagnoses and the shooting were implied. Without carefully perusing his medical records, it is difficult to assess whether these diagnoses were accurate, how severe his symptoms were and how they were being treated. To make matters worse, some newspapers and websites have resorted to generating narratives about Adam Lanza’s behavior and mental health based on subjective and anecdotal accounts from classmates, family friends and, in perhaps the most ridiculous case, Lanza’s hair stylist. Snippets of subjective information regarding odd behaviors exhibited by Lanza have been offered to readers and viewers so that they can perform an armchair evaluation of Lanza’s mental health from afar and search for potential clues in his past that might point to why he went on a shooting rampage. Needless to say, this form of armchair analysis is fraught with error.

It is difficult enough to diagnose a patient during a face-to-face evaluation and then try to make causal links between the symptoms and the observed pathology. In the setting of cardiovascular disease, for example, the healthcare professional has access to blood tests which accurately measure cholesterol levels or biomarkers of heart disease, angiograms that generate images of the coronary arteries and even ultrasound images of the heart (echocardiograms) that can rather accurately assess the strength of the heart. Despite all of these objective measurements, it requires a careful and extensive discussion with the patient to understand whether his shortness of breath is truly linked to his heart disease or whether it might be related to other factors. Someone might have mild heart disease by objective testing, but the shortness of breath he experiences when trying to walk up the stairs may be due to months of physical inactivity and not due to his mild heart disease.

In psychiatry, making diagnoses and causally linking symptoms and signs to mental illness is even more difficult, because there are fewer objective tests available. There are, as of now, no CT scans or blood tests that can accurately and consistently diagnose a mental illness such as depression. There are numerous reports of documented abnormalities in brain imaging observed in patients with mental illness, but their reliability and their ability to predict specific outcomes of the respective diseases remain unclear. The mental health professional has to rely primarily on subjective reports of the patient and the patient’s caregivers or family members in order to arrive at a diagnosis. In the case of Adam Lanza, who killed himself as well as his mother, all one can go by are his most recent mental health evaluations, which could provide a diagnosis, but may still not reliably explain his killing spree. Retrospective evaluations of his mental health by former classmates, hair stylists or family members are of little help. Comments on the past behavior of a mass shooter will invariably present a biased and subjective view of the past, colored by the knowledge of the terrible shooting. Incidents of “odd behaviors” will be remembered, without objectively assessing how common these behaviors were in other people who did not go on to become mass shooters.

An article written by Liza Long with the sensationalist title “I Am Adam Lanza’s Mother” was widely circulated after the shooting. Long was obviously not the mother of Adam Lanza, and merely took advantage of the opportunity to describe her frustration with the mental health care system and her heart-wrenching struggles with the mental health of her son who was prone to violent outbursts. In addition to violating the privacy of her son and making him a likely target of future prejudice and humiliation, Long implied that the observed violent outbursts she had seen in her son indicated that he might become a mass shooter like Adam Lanza. Long, like the rest of the public, had no access to Lanza’s medical records, did not know whether Lanza had been diagnosed with the same illnesses as her own son and whether Lanza had exhibited the same behaviors. Nevertheless, Long’s emotional story and the sensationalist title of her article caught on, and many readers may have accepted her story as a proof of the link between certain forms of mental illness and predisposition to becoming a mass shooter.

Instead of relying on retrospective analyses and anecdotes, it may be more helpful to review the scientific literature on the purported link between mental illness and violence.


The link between mental illness and violence

There is a widespread notion that mental illness causes violent behavior, but the scientific evidence for this presumed link is not that solid. “Mental illness” is a very heterogeneous term, comprising a wide range of disorders and degrees of severity for each disorder, so many studies that have tried to establish a link between “mental illness” and violence have focused on the more severe manifestations of mental illness. The 1998 landmark study “Violence by People Discharged From Acute Psychiatric Inpatient Facilities and by Others in the Same Neighborhoods” by Henry Steadman and colleagues was published in the highly cited psychiatry journal Archives of General Psychiatry and focused on patients whose mental illness was severe enough to require hospitalization. The study followed patients for one year after they were released from the acute psychiatric inpatient units and assessed how likely they were to engage in violence. At one of the sites (Pittsburgh), the researchers also compared the likelihood of the psychiatric patients engaging in violence with that of other residents of the same neighborhood. Steadman and colleagues found that there was a higher rate of violence in psychiatric patients, but this was associated with their higher rate of substance abuse. Psychiatric patients without substance abuse had the same rate of violence as other residents of the neighborhood without substance abuse.

The recent large-scale study “The Intricate Link Between Violence and Mental Disorder” was published in the Archives of General Psychiatry by Elbogen and Johnson in 2009 and also found that severe mental illness by itself was not a strong predictor of violence. Instead, future violence was more closely associated with a history of past violence, substance abuse or contextual factors, such as unemployment or a recent divorce. A 2009 meta-analysis by Fazel and colleagues, published in PLOS Medicine, reviewed major studies that had investigated the potential link between schizophrenia and violence. The authors found an increased risk of violence and homicide in patients with schizophrenia, but this was again primarily due to the higher rates of substance abuse in the patient population. The risk of homicide in individuals with schizophrenia was 0.3%, and the risk of homicide in people with a history of substance abuse was also 0.3%. All of the studies noted a great degree of variability in terms of violence, again reminding us that mental illnesses are very heterogeneous diseases. An individual diagnosed with “schizophrenia” is not necessarily at higher risk for engaging in violent behavior. One also has to assess their specific context, their past history of violence, their social circumstances and especially their degree of substance abuse, which can refer to alcohol abuse or alcohol dependence as well as the abuse of illegal substances such as cocaine. The data on Asperger syndrome, one of the conditions that Adam Lanza is said to have been diagnosed with, and its possible link to violence are far sparser. Stål Bjørkly recently reviewed the studies in this area and found that there has been no systematic research in this field. The hypothesized link between Asperger syndrome and violence is based on just a few studies, mostly case reports of selected incidents.

It is quite noteworthy that multiple large-scale studies investigating the association between mental illness and violence have come up with the same conclusion: Patients with mental illnesses may be at greater risk for engaging in violence, but this appears to be primarily linked to concomitant substance abuse. In the absence of substance abuse, mental illness by itself does not significantly increase the likelihood of engaging in violence. Richard Friedman summarized it best in an article for the New England Journal of Medicine:

The challenge for medical practitioners is to remain aware that some of their psychiatric patients do in fact pose a small risk of violence, while not losing sight of the larger perspective — that most people who are violent are not mentally ill, and most people who are mentally ill are not violent.

Human behavior and mental illness

One rarely encounters armchair diagnoses in cardiovascular disease, neurologic disease or cancer. Journalists do not usually interview relatives or friends of cancer patients to ascertain whether there had been early signs of the cancer that had been missed before the definitive diagnosis was made or the patient died of cancer. If medical details about a public figure are disclosed, such as the heart disease of former Vice President Cheney, journalists and TV viewers or readers without medical expertise rarely offer their own opinion on whether the diagnosis of cardiovascular disease was correct and how the patient should be treated. There were no interviews with other cardiovascular patients regarding their own personal history of heart disease, and they were also not asked to comment on how they felt Cheney was being treated. In the case of the 2012 US meningitis outbreak, which resulted in the death of at least 35 people, many questions were raised in the media regarding the underlying causes, and there was understandable concern about how to contain the outbreak and address those causes, but the advice was usually sought from experts in infectious disease.

When it comes to mental illness, on the other hand, nearly everyone with access to the media seems to think they are an expert on mental health, and one finds a multitude of opinions on the efficacy of psychoactive medications, on whether or not psychiatric patients should be institutionalized and on the warning signs that lead up to violent behavior. There are many potential reasons why non-experts feel justified in commenting on mental illness but remain reticent to offer their opinion on cardiovascular disease, cancer or infectious disease. One reason is the subject matter of psychiatry. As humans, we often define ourselves by our thoughts, emotions and behaviors – and psychiatry primarily concerns itself with thoughts, emotions and behaviors. Our personal experiences may embolden us to offer our opinions on mental health, even though we have not had any formal training in mental health.

The psychiatric profession itself may have also contributed to this phenomenon by blurring the boundaries between true mental illness and the broad spectrum of human behavior. The criteria for mental illness have been broadened to such an extent that, according to recent studies, nearly half of all Americans will meet the criteria for a mental illness by the time they have reached the age of 75. There is considerable debate among psychiatrists about the potential for over-diagnosis of mental illness and what the consequences of such over-diagnoses might be. The labeling of mildly “abnormal” behaviors as mental illnesses not only results in the over-prescription of psychoactive medications, but may also take mental health resources away from patients with truly disabling forms of mental illness. For example, the upcoming edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM – which establishes the diagnostic criteria for each mental disorder) will remove the bereavement exemption for the diagnosis of depression. This means that people suffering from severe grief after the death of their loved ones, such as the parents of the children who were murdered in Newtown, could conceivably be diagnosed with the mental disorder “Major Depression”.

Romanticizing and vilifying mental illness

The topic of mental illness also lends itself to sensationalism. Occasionally, mental illness is romanticized, such as the idea that mental illness somehow fosters creativity, for which there is little scientific evidence. More often, however, patients with mental illness are vilified. Broad generalizations are made and violent tendencies or criminal behaviors are ascribed to patients, without taking into account the heterogeneity of mental illness. Wayne LaPierre of the National Rifle Association (NRA) recently called for the creation of an “active national database of the mentally ill”, and at a subsequent event, LaPierre referred to mentally ill patients as “monsters” and “lunatics”. Such sensationalist rants may make for good publicity, but they also help further undermine an objective discussion about mental health. The call for a national database of mentally ill people seems especially counter-intuitive, since the NRA has often portrayed itself as a defender of personal liberty and privacy. Are organizations such as the NRA aware of the fact that nearly half of all Americans will at some point in their life qualify for a mental illness diagnosis and would have to be registered in such a database? Who would have access to such a database? For what purposes would the database be used? Would everyone listed in the database be barred from buying guns? How about household members living with a patient who has been diagnosed with a mental illness? Would these household members also be barred from buying guns? If indeed all patients with at least one psychiatric diagnosis were registered in a national database and if they and their household members were barred from owning guns, nearly all US households would probably become gun-free. If one were to follow the logic of the NRA, one might even have to generate a national database of people with a history of substance abuse or a past history of violence, since the above-mentioned research showed that substance abuse and a past history of violence may be even stronger predictors of future violence than mental illness.

When it comes to reporting about mental illness, it is especially important to avoid the pitfalls of sensationalism. Mental illness should be neither romanticized nor vilified. Potential links between mental illness and behaviors such as violence should always be made in the context of the existing medical and scientific literature, and one should avoid generalizations and pronouncements based on occasional anecdotes. Journalists and mental health professionals need to help ensure the accuracy and objectivity of analyses regarding the mental health of individuals as well as specific mental illnesses. It never hurts to have a discussion about mental health. There is clearly a need to improve the mental health infrastructure and to develop better therapies for psychiatric disease, but this discussion should be based on facts and not on myths.


Image Credit: Brain of MRI scan data for child onset schizophrenia showing areas of brain growth and loss of tissue via NIMH


Science Journalism and the Inner Swine Dog

A search of the PubMed database, which indexes scholarly biomedical articles, reveals that 997,508 articles were published in the year 2011, which amounts to roughly 2,700 articles per day. Since the database does not include all published biomedical research articles, the actual number of published biomedical papers is probably even higher. Most biomedical researchers work in defined research areas, so perhaps only 1% of the published articles may be relevant for their research. As an example, the major focus of my research is the biology of stem cells, so I narrowed down the PubMed search to articles containing the expression “stem cells”. I found that 14,291 “stem cells” articles were published in 2011, which translates to an average of 39 articles per day (assuming that one reads scientific papers on weekends and during vacations, which is probably true for most scientists). Many researchers also tend to have two or three areas of interest, which further increases the number of articles one needs to read.
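For transparency, here is the simple arithmetic behind those per-day figures:

```python
# Per-day publication rates derived from the 2011 PubMed counts cited above.
articles_2011 = 997_508
stem_cell_articles_2011 = 14_291

print(round(articles_2011 / 365))           # ~2733 articles per day overall
print(round(stem_cell_articles_2011 / 365)) # ~39 "stem cells" articles per day
```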

Needless to say, it has become impossible for researchers to read all the articles published in their fields of interest, because if they did, they would not have any time left to conduct experiments of their own. To avoid drowning in the information overload, researchers have developed multiple strategies to survive and navigate their way through all this published data. These strategies include relying on recommendations of colleagues, focusing on articles published in high-impact journals, only perusing articles that are directly related to one’s own work or only reading articles that have been cited or featured in major review articles, editorials or commentaries. As a stem cell researcher, I can use the above-mentioned strategies to narrow down the stem cell articles that I ought to read to a manageable number of about three or four a day. However, scientific innovation is fueled by the cross-fertilization of ideas, and the most creative ideas are often derived from combining seemingly unrelated research questions. Therefore, the challenge for me is to not only stay informed about important developments in my own areas of interest, but also to know about major developments in other scientific domains such as network theory, botany or neuroscience, because discoveries in such “distant” fields could inspire me to develop innovative approaches in my own work.
In order to keep up with scientific developments outside of my area of expertise, I have begun to rely on high-quality science journalism, which can be found in selected print and online publications or in science blogs. Good science journalists accurately convey complex scientific concepts in simple language, without oversimplifying the actual science. This is easier said than done, because it requires a solid understanding of the science as well as excellent communication skills. Most scientists are not trained to communicate with a general audience, and most journalists have had very limited exposure to actual scientific work. To become good science journalists, either scientists have to be trained in the art of communicating results to non-specialists or journalists have to acquire the scientific knowledge pertinent to the topics they want to write about. The training of science journalists requires time, resources and good mentors.
Once they have completed their training and start working as science journalists, they still need adequate time, resources and mentors. When writing about an important new scientific development, good science journalists do not just repeat the information provided by the researchers or contained in the press release of the university where the research was conducted. Instead, they perform the necessary fact-checking to ensure that the provided information is indeed correct. They also consult the scientific literature as well as other scientific experts to place the new development in the context of the existing research. Importantly, science journalists then analyze the new scientific development, separating the actual scientific data from speculation, and point out the limitations and implications of the work. Science journalists also write for a very broad audience, and this poses an additional challenge. Their readership includes members of the general public interested in new scientific findings, politicians and members of private industry who may base political and economic decisions on scientific findings, patients and physicians who want to stay informed about innovative new treatments and, as mentioned above, scientists who want to know about new scientific research outside of their areas of expertise.
Unfortunately, I do not think that it is widely appreciated how important high-quality science journalism is and how much effort it requires. Limited resources, constraints on a journalist’s time and the pressure to publish sensationalist articles that exaggerate or oversimplify the science in order to attract a larger readership can compromise the quality of the work. Two recent examples illustrate this: the so-called Jonah Lehrer controversy, in which the highly respected and popular science journalist Jonah Lehrer was found to have fabricated quotes, plagiarized and oversimplified research, and the more recent case in which the Japanese newspaper Yomiuri Shimbun ran a story about the use of induced pluripotent stem cells to treat patients with heart disease that turned out to be based on a fraudulent claim by the researcher. The case of Jonah Lehrer was a big shock for me. I had enjoyed reading a number of the articles and blog posts he had written, and, at first, it was difficult for me to accept that his work contained so many errors and so much evidence of misconduct. Boris Kachka has recently written a very profound analysis of the Jonah Lehrer controversy in New York Magazine:

Lehrer was the first of the Millennials to follow his elders into the dubious promised land of the convention hall, where the book, blog, TED talk, and article are merely delivery systems for a core commodity, the Insight.

The Insight is less of an idea than a conceit, a bit of alchemy that transforms minor studies into news, data into magic. Once the Insight is in place—Blink, Nudge, Free, The World Is Flat—the data becomes scaffolding. It can go in the book, along with any caveats, but it’s secondary. The purpose is not to substantiate but to enchant.

Kachka’s expression “Insight” describes our desire to believe in simple narratives. Any active scientist knows that scientific findings tend to be more complex and difficult to interpret than we anticipated. There are few simple truths or “Insights” in science, even though part of us wants to seek out these elusive simple truths. The metaphor that comes to mind is the German expression “der innere Schweinehund”. This literally translates to “the inner swine dog”. The expression may evoke the image of a chimeric pig-dog beast created by a mad German scientist in a Hollywood World War II movie, but in Germany this expression is actually used to describe a metaphorical inner creature that wants us to be lazy, seek out convenience and avoid challenges. In my view, scientific work is an ongoing battle with our “inner swine dog”. We start experiments with simple hypotheses and models, and we are usually quite pleased with results that confirm these anticipated findings because they allow us to be intellectually lazy. However, good scientists know that more often than not, scientific truths are complex and we need to force ourselves to continuously challenge our own scientific concepts. Usually this involves performing more experiments, analyzing more data and trying to interpret data from many different perspectives. Overcoming the intellectual laziness requires work, but most of us who are passionate about science enjoy these challenges and seek out opportunities to battle against our “inner swine dog” instead of succumbing to a state of perpetual intellectual laziness.
When I read Kachka’s description of why Lehrer was able to get away with his fabrications and oversimplifications, I realized that it was probably because Lehrer gave us the narratives we wanted to believe. He provided the “Insight”, cloaking scientific research in a false certainty and simplicity. Even though many of us enjoy overcoming intellectual laziness in our own work, we may not be used to challenging our “inner swine dog” when we learn about scientific topics outside of our own areas of expertise. This is precisely why we need good science journalists, who challenge us intellectually by avoiding oversimplifications.

A different but equally instructive case of poor science journalism occurred when the widely circulated Japanese newspaper Yomiuri Shimbun reported in early October 2012 that the Japanese researcher Hisashi Moriguchi had transplanted induced pluripotent stem (iPS) cells into patients with heart disease. This was quite a sensation, because it would have been the first transplantation of such stem cells into human patients. To those of us in the field of stem cell research, the report came as a big surprise and did not sound very believable: the story suggested that the work had been performed in the United States, and most of us knew that obtaining approval to use such stem cells in clinical studies there would have been very challenging. However, many people unacquainted with the complexities of using stem cells in patients may well have believed the story. Within days, the researcher’s claims were exposed as fraudulent. He had said that he conducted the studies at Harvard, but Harvard stated that he was not currently affiliated with the university and that there was no evidence of any such studies ever having been conducted there. His claims about how he had derived the cells, and the implausibly short time in which he had supposedly performed the experiments, were also debunked.
This was not the first incident of fraud in the world of stem cell research, and it unfortunately will not be the last. What makes this incident noteworthy is how the Yomiuri Shimbun responded to its reporting of the fraudulent claims. The newspaper removed the original story from its website and issued public apologies for its poor reporting. The English-language version of the newspaper listed the mistakes in an article entitled “iPS REPORTS–WHAT WENT WRONG / Moriguchi reporting left questions unanswered”. These mistakes included inadequate fact-checking of the researcher’s claims and affiliations by the reporter, and a failure to consult other scientists about whether the findings sounded plausible. Interestingly, the reporter had identified several red flags and concerns:

–Moriguchi had not published any research on animal experiments.
–The reporter had not been able to contact people who could confirm the iPS cell clinical applications.
–Moriguchi’s affiliation with Harvard University could not be confirmed online.
–It was possible that different cells, instead of iPS cells, had been effective in the treatments.
–It was odd that what appeared to be major world news was appearing only in the form of a poster at a science conference.
–The reporter wondered if it was really possible that transplant operations using iPS cells had been approved at Harvard.
The reporter sent the e-mail to three others, including another news editor in charge of medical science, on the same day, and the reporter’s regular updates on the topic were shared among them.
The science reporter said he felt “at ease” after informing the editors about such dubious points. After receiving explanations from Moriguchi, along with the video clip and other materials, the reporter sought opinions from only one expert and came to believe the doubts had been resolved.

In spite of these red flags, the reporter and the editors decided to run the story. They gave in to their intellectual laziness and to the temptation of a sensational story instead of tediously following up on all the red flags. They had a story about a Japanese researcher making a ground-breaking discovery in a highly competitive area of stem cell research, and this was a story their readers would probably love. This unprofessional conduct is why the reporter and the editors received reprimands and penalties. Another article in the newspaper summarizes the punitive measures:

Effective as of next Thursday, The Yomiuri Shimbun will take disciplinary action against the following officials and employees:
–Yoshimitsu Ohashi, senior managing director and managing editor of the company, and Takeshi Mizoguchi, corporate officer and senior deputy managing editor, will each return 30 percent of their remuneration and salary for two months.
–Fumitaka Shibata, a deputy managing editor and editor of the Science News Department, will be replaced and his salary will be reduced.
–Another deputy managing editor in charge of editorial work for the Oct. 11 edition will receive an official reprimand.
–The salaries of two deputy editors of the Science News Department will be cut.
–A reporter in charge of the Oct. 11 series will receive an official reprimand.

I have mixed feelings about these punitive actions. It is commendable that the newspaper apologized without reservations or excuses and listed its mistakes. The reprimands and penalties also show that the newspaper takes its science journalism very seriously and recognizes the importance of high professional standards. The penalties were more severe for the editors than for the reporter, which may reflect the fact that the reporter did consult the editors, and they decided to run the story even though the red flags had been pointed out to them. My concern is that punitive actions alone will not solve the problem, and they leave many questions unanswered. Did the newspaper evaluate whether its science journalists and editors had been appropriately trained? Did the science journalist have the time and resources to conduct his or her research in a conscientious manner? Importantly, will science journalists be given appropriate resources and be protected from the pressures and constraints that encourage unprofessional science journalism? We do not know the answers to these questions, but providing the infrastructure for high-quality science journalism is probably going to be more useful than punitive actions alone. We can also hope that media organizations around the world learn from this incident, recognize the importance of science journalism and put mechanisms in place to ensure its quality.

Image via Wikimedia Commons/ Norbert Schnitzler: Statue “Mein Innerer Schweinhund” in Bonn