To Err Is Human, To Study Errors Is Science

The family of cholesterol-lowering drugs known as ‘statins’ is among the most widely prescribed classes of medication for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks, especially in patients who have already been diagnosed with cardiovascular disease. A more contentious issue is the use of statins in individuals who have no history of heart attacks, strokes or blockages in their blood vessels. Instead of waiting for the first major manifestation of cardiovascular disease, should one start statin therapy early on to prevent it?

If statins were free of charge and had no side effects whatsoever, the answer would be rather straightforward: go ahead and use them as soon as possible. However, like all medications, statins come at a price. There is the financial cost to the patient or their insurance to pay for the medication, and there is a health cost to patients who experience side effects. The Guideline Panel of the American College of Cardiology (ACC) and the American Heart Association (AHA) therefore recently recommended that the preventive use of statins in individuals without known cardiovascular disease be based on personalized risk calculations. If the risk of developing disease within the next 10 years is greater than 7.5%, then the benefits of statin therapy outweigh its risks and the treatment should be initiated. The panel also indicated that if the 10-year risk of cardiovascular disease is greater than 5%, physicians should consider prescribing statins, but should bear in mind that the scientific evidence for this recommendation is not as strong as that for higher-risk individuals.
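
To make the two thresholds concrete, here is a minimal sketch of the decision logic in Python. The function name and the example risk value are purely illustrative; the actual 10-year risk would come from the guideline panel's risk calculator, which is not reproduced here.

def statin_recommendation(ten_year_risk):
    # ten_year_risk is a fraction, e.g. 0.075 means a 7.5% 10-year risk.
    # Thresholds follow the ACC/AHA guidance as described above.
    if ten_year_risk > 0.075:
        return "benefits outweigh risks: initiate statin therapy"
    elif ten_year_risk > 0.05:
        return "consider statins (weaker supporting evidence)"
    else:
        return "preventive statin therapy not indicated by these thresholds"

# Hypothetical patient with a calculated 10-year risk of 6%:
print(statin_recommendation(0.06))  # -> "consider statins (weaker supporting evidence)"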

 

Oops button – via Shutterstock

Using statins in low-risk patients

The recommendation that individuals with a comparatively low risk of developing future cardiovascular disease (10-year risk lower than 10%) would benefit from statins was met with skepticism by some medical experts. In October 2013, the British Medical Journal (BMJ) published a paper by John Abramson, a lecturer at Harvard Medical School, and his colleagues which re-evaluated the data from a prior study on statin benefits in patients with less than a 10% cardiovascular disease risk over 10 years. Abramson and colleagues concluded that the statin benefits were overstated and that statin therapy should not be expanded to include this group of individuals. To further bolster their case, Abramson and colleagues also cited a 2013 study by Huabing Zhang and colleagues in the Annals of Internal Medicine which (according to Abramson et al.) had reported that 18% of patients discontinued statins due to side effects. Abramson even highlighted this finding from the Zhang study by including it as one of four bullet points summarizing the key take-home messages of his article.

The problem with this characterization of the Zhang study is that it ignored the caveats that Zhang and colleagues had mentioned when discussing their findings. The Zhang study was based on a retrospective review of patient charts and did not establish a true cause-and-effect relationship between the discontinuation of statins and actual side effects of statins. Patients may stop taking medications for many reasons, and discontinuation does not necessarily mean that a medication's side effects are to blame. According to the Zhang paper, 17.4% of patients in their observational retrospective study had reported a “statin related incident”, and of those only 59% had stopped the medication. The fraction of patients discontinuing statins due to suspected side effects was therefore at most 9-10%, not the 18% cited by Abramson. Moreover, as Zhang and colleagues pointed out, their study did not include a placebo control group. Trials with placebo groups document similar rates of “side effects” in patients taking statins and those taking placebos, suggesting that only a small minority of perceived side effects are truly caused by the chemical compounds in statin drugs.
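
The corrected figure is easy to verify with a back-of-the-envelope calculation; the sketch below simply multiplies the two numbers quoted from the Zhang paper above.

# Numbers quoted above from the Zhang study
reported_incident = 0.174   # 17.4% of patients reported a "statin related incident"
stopped_if_incident = 0.59  # 59% of those patients actually stopped the medication

discontinued = reported_incident * stopped_if_incident
print(f"Discontinued due to suspected side effects: {discontinued:.1%}")
# -> about 10.3%, in line with the 9-10% upper bound above and far below the 18% cited by Abramson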

 

Admitting errors is only the first step

Whether 18%, 9% or a far smaller proportion of patients experience significant medication side effects is no small matter, because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue medications that have been shown to significantly reduce the risk of heart attacks in a wide range of patients. On the other hand, severely underestimating statin side effects could result in important symptoms being discounted and patients suffering needlessly. Abramson’s misinterpretation of the statin side effect data was pointed out by readers of the BMJ soon after the article was published, and it prompted an inquiry by the journal. After re-evaluating the data and discussing the issue with Abramson and colleagues, the journal issued a correction in which it clarified the misrepresentation of the Zhang paper.

Fiona Godlee, the editor-in-chief of the BMJ, also wrote an editorial explaining the decision to issue a correction on the question of side effects rather than retract the whole paper: the other main point made by Abramson and colleagues – the lack of benefit in low-risk patients – might still hold true. Godlee also recognized the inherent bias of a journal’s editor when it comes to deciding whether or not to retract a paper. Every retraction of a peer-reviewed scholarly paper is somewhat of an embarrassment to the authors of the paper as well as the journal, because it suggests that the peer review process failed to identify one or more major flaws. In a commendable move, the journal appointed a multidisciplinary review panel that includes leading cardiovascular epidemiologists. This panel will review the Abramson paper as well as another BMJ paper which had also cited the inaccurately high frequency of statin side effects, investigate the peer review process that failed to identify the erroneous claims, and provide recommendations regarding the ultimate fate of the papers.

 

Reviewing peer review

Why didn’t the peer reviewers who evaluated Abramson’s article catch the error prior to publication? We can only speculate. One has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours assessing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in how they present the data and describe their scientific methodology.

In the case of Abramson and colleagues, the correction issued by the BMJ refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical research papers often cite 30 or 40 studies, and it is unrealistic to expect peer reviewers to read all the cited papers and ensure that each one is properly cited and interpreted. If this were the expectation, few experts would agree to serve as volunteer reviewers, since they would have hardly any time left to conduct their own research. In this particular case, however, any peer reviewer familiar with statins and the controversies surrounding their side effects should have expressed concerns about the extraordinarily high figure of 18% cited by Abramson and colleagues. Hopefully, the review panel will identify the reasons for the failure of the BMJ’s peer review system and point out ways to improve it.

 

To err is human, to study errors is science

All researchers make mistakes, simply because they are human. It is impossible to eliminate all errors in any endeavor that involves humans, but we can construct safeguards that help us reduce the occurrence and magnitude of our errors. Overt fraud and misconduct are rare causes of errors in research, but their effects on any given research field can be devastating. One of the most notorious cases of research fraud is that of the Dutch psychologist Diederik Stapel, who published numerous papers based on blatant fabrication of data – showing ‘results’ of experiments on non-existent study subjects. The field of cell therapy in cardiovascular disease recently experienced a major setback when a university review of studies headed by the German cardiologist Bodo Strauer found evidence of scientific misconduct. The significant discrepancies and irregularities in Strauer’s studies have now led to wide-ranging skepticism about the efficacy of using bone marrow cell infusions to treat heart disease.

 

It is difficult to obtain precise numbers quantifying the actual extent of severe research misconduct and fraud, since much of it may go undetected. Even when such cases are brought to the attention of the academic leadership, the committees and administrators involved may decide to keep their findings confidential and not disclose them to the public. However, most researchers working in academic research environments would probably agree that these are rare occurrences. A far more common source of errors in research is the cognitive bias of the researchers themselves. Researchers who believe in certain hypotheses and ideas are prone to interpreting data in a manner most likely to support their preconceived notions. For example, a researcher opposed to statin usage is likely to interpret data on the side effects of statins differently than a researcher who supports statin usage.

While Abramson may have been biased in the interpretation of the data generated by Zhang and colleagues, the field of cardiovascular regeneration is currently grappling with what appears to be a case of biased interpretation of one’s own data. An institutional review by Harvard Medical School and Brigham and Women’s Hospital recently determined that the work of Piero Anversa, one of the world’s most widely cited stem cell researchers, was significantly compromised and warranted a retraction. His group had reported that the adult human heart exhibits an astonishing regenerative potential, with a 7%-19% yearly turnover of beating heart cells, suggesting that the adult human heart replaces its entire collective of beating heart cells roughly every 8 to 9 years. These findings were in sharp contrast to a prior study which had found only a minimal turnover of beating heart cells (1% or less per year) in adult humans. Anversa’s findings were also at odds with the observations of clinical cardiologists, who rarely see a near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the huge discrepancy between the prior research and Anversa’s studies is that Anversa and his colleagues had not taken into account the possibility of contamination that could have falsely elevated the cell regeneration counts.
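
As a rough sanity check on these turnover figures, assume a constant annual turnover rate in which no cell is replaced twice; the time to replace the entire pool is then simply the reciprocal of the rate. The sketch below is only a back-of-the-envelope illustration of the scale of the discrepancy, not a model of the underlying biology.

# Years until cumulative turnover reaches 100%, assuming a constant
# annual rate and no cell being replaced more than once.
def years_to_full_replacement(annual_turnover):
    return 1.0 / annual_turnover

for rate in (0.07, 0.19, 0.01):  # Anversa's range (7-19%) vs. the prior estimate (~1%)
    print(f"{rate:.0%} per year -> ~{years_to_full_replacement(rate):.0f} years")
# 7% -> ~14 years and 19% -> ~5 years bracket the cited 8-9 year figure,
# whereas 1% per year would imply roughly a century for full replacement.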

 

Improving the quality of research: peer review and more

The fact that researchers are prone to errors due to inherent biases does not mean we should simply throw our hands up in the air, say “Mistakes happen!” and let matters rest. High-quality science is characterized by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors early on so that we can limit their detrimental impact. The realization that the lack of reproducibility of peer-reviewed scientific papers is becoming a major problem for many areas of research, such as psychology, stem cell research and cancer biology, has prompted calls for better ways to track reproducibility and errors in science.

One important new paradigm that is being discussed as a way to improve the quality of scholarly papers is post-publication peer evaluation. Instead of viewing the publication of a peer-reviewed research paper as an endpoint, post-publication peer evaluation invites fellow scientists to comment on the quality and accuracy of the published research even after its publication and to engage the authors in this process. Traditional peer review relies on just a handful of reviewers who decide the fate of a manuscript, but post-publication peer evaluation opens up the debate to hundreds or even thousands of readers, who may be able to detect errors that the small number of traditional peer reviewers missed prior to publication. It is also becoming apparent that science journalists and science writers can play an important role in the post-publication evaluation of published research by investigating and communicating flaws identified in research papers. In addition to helping dismantle the Science Mystique, critical science journalism can help ensure that corrections, retractions or other major concerns about the validity of scientific findings are communicated to a broad non-specialist audience.

In addition to these ongoing efforts to reduce errors in science by improving the evaluation of scientific papers, it may also be useful to consider new proactive initiatives which focus on how researchers design and perform experiments. As the head of a research group at an American university, I have to take mandatory courses (in some cases on an annual basis) informing me about laboratory hazards, the ethics of animal experimentation or the ethics of conducting human studies. However, there are no mandatory courses helping us identify our own research biases or minimize their impact on the interpretation of our data. There is an underlying assumption that if you are no longer a trainee, you probably know how to perform and interpret scientific experiments. I would argue that it does not hurt to remind scientists regularly – no matter how junior or senior – that they can become victims of their biases. We have to learn to continuously re-evaluate how we conduct science and to be humble enough to listen to our colleagues, especially when they disagree with us.

 

Note: A shorter version of this article was first published at The Conversation with excellent editorial input provided by Jo Adetunji.

 

Abramson, J., Rosenberg, H., Jewell, N., & Wright, J. (2013). Should people at low risk of cardiovascular disease take a statin? BMJ, 347:f6123. DOI: 10.1136/bmj.f6123

Some Highlights of the Live Chat: “Are We Doing Science the Right Way?”

On February 7, 2013, ScienceNOW organized a Live Chat with the microbiologists Ferric Fang and Arturo Casadevall, moderated by Science staff writer Jennifer Couzin-Frankel, which covered a very broad range of topics related to how we currently conduct science. For those who could not participate in the Live Chat, I will summarize some key comments made by Fang, Casadevall, Couzin-Frankel and other commenters.

 

I have grouped the comments into key themes and also added some of my own thoughts.

 

1. Introduction to the goals of the Live Chat:

Jennifer Couzin-Frankel: …For several years (at least) researchers have worried about where their profession is heading. As much as most of them love working in the lab, they’re also facing sometimes extreme pressure to land grants and publish hot papers. And surveys have shown that a subset are even bending or breaking the rules to accomplish that. …With us today are two guests who are studying the “science of science” together, and considering how to nurture discovery and reduce misconduct…

 

Pressure to publish, the difficulty of obtaining grant funding, scientific misconduct – these are all topics that should be of interest to all of us who are actively engaged in science.

 

2. Science funding:

Ferric Fang: …the way in which science is funded has a profound effect on how and what science is done. Paula Stephan has recently written an excellent book on this subject called “How Economics Shapes Science.”

Ferric Fang: Many are understandably reluctant to ask for more funding given the global recession and halting recovery. But I believe a persuasive economic case can be made for greater investment in R&D paying off in the long run. Paula Stephan notes that the U.S. spends twice as much on beer as on science each year.

 

These are great points. I often get the sense that federal funding for science and education is portrayed as an unnecessary luxury, a charity or a form of waste. We have to remind people that money spent on science and education is an investment with long-term returns.

 

3. Reproducibility and the self-correcting nature of science:

Arturo Casadevall: Is science self-correcting? Yes and No. In areas where there is a lot of interest in a subject, experiments will be repeated and bad science will be ferreted out. However, until someone sets out to repeat an experiment we do not know whether it is reproducible. We do not know what percentage of the literature is right because no one has ever done a systematic study to see what fraction is reproducible.

 

I think that the reproducibility crisis is one of the biggest challenges for contemporary science. Thousands of scientific papers are published every day, and only a tiny fraction of them will ever be tested for reproducibility. There is minimal funding for attempting to replicate published data and also very little incentive for scientists, because even if they are able to replicate the published work, they will have a hard time publishing a confirmatory study. The lack of attempts to replicate scientific data creates a lot of uncertainty, because we do not really know how much of the published data is truly valid.

 

Comment From David R Van Houten: …The absence of these weekly [lab] meetings was the single biggest factor allowing for the data fabrication and falsification that I observed 20 years ago as a PhD student. I pushed to get these meetings organized, and when they did occur, it made it easier to get the offender to stop, and easier to “salvage” original data…

 

I agree that regular lab meetings and more supervision by senior researchers and principal investigators can help contain and prevent data fabrication and falsification. However, overt data fabrication and fraud are probably not as common as “data fudging”, where experiments or data points are conveniently ignored because they do not fit the desired model. This kind of “data fudging” is not just a problem among junior scientists; it also occurs with senior scientists.

 

Ferric Fang: Peer review plays an important role in self-correction of science but as nearly everyone recognizes, it is not perfect. Mechanisms of post-publication review to address the problems are very important – these include errata, retractions, correspondences, follow-up publications, and nowadays, public discussion on blogs and other websites.

 

I am glad that Fang (who is an editor-in-chief of an academic journal) recognizes the importance of post-publication review and mentions blog discussions as one such form of post-publication review.

 

4. Are salaries of scientists too low?

Comment From Shabbir: When a hedge fund manager makes 100 times more than a theoretical physicist, how can we expect the bright minds to go to science?

 

I agree that academic salaries for scientists are on the low side, especially when compared with what one can make in private industry. However, I do not think that the obscene salaries of hedge fund managers are the correct comparison. If the US wants to attract and retain excellent scientists, raising their salaries is definitely important. Scientists are routinely over-worked, balancing their research, teaching, mentoring and administrative duties, and receive very inadequate compensation. I have also observed a near-cynical attitude at many elite universities, which try to portray working as a scientist as an “honor” that should not require much compensation. This kind of abuse really needs to end.

 

5. Communicating science to the public

Arturo Casadevall: … Many scientists cannot explain their work at a dinner party and keep the other guests interested. We are passionate about what we do but we are often terrible in communicating the excitement that we feel. I think this is one area where perhaps better public communicating skills are needed and maybe some attention should be given to mastering these arts in training.

 

I could not agree more. Communicating science should be part of every PhD program and postdoctoral training, and it should remain an ongoing effort once a scientist becomes an independent principal investigator.

 

6. Are we focusing on quantity rather than quality in science?

Ferric Fang: …. There are now in excess of 50,000,000 scientific publications according to one estimate, and we are in danger of creating a Library of Babel in which it is impossible to find the truth buried amidst poor quality or unimportant publications. This is in part a consequence of the “publish or perish” mentality in academia. A focus on quality rather than quantity in promotion decisions might help.

 

It is correct that the amount of scientific data being generated is overwhelming, but I am not sure that there is an easy way to find the “truth”. Scientific “truth” is very dynamic, and it is becoming more and more difficult to publish in high-impact journals. A typical paper in a high-impact journal now has anywhere between 5 and 20 supplemental figures and tables, and that same paper could have been published as two or three separate papers just a few decades ago. We now have many more active scientists all over the world who publish in English, and we all have tools that generate huge amounts of data in a matter of weeks (such as microarrays, proteomics and metabolomics). It is likely that the number of publications will continue to rise in the coming years, and we need to come up with an innovative system to manage scientific information. Hopefully, scientists will realize that managing and evaluating existing scientific information is just as valuable as generating new scientific datasets.

 

This was a great and inspiring discussion and I look forward to other such Live Chat events.