Crowdfunding and Tribefunding in Science

Competition for government grants to fund scientific research remains fierce in the United States. The budget of the National Institutes of Health (NIH), the major source of funding for US biological and medical research, has increased only modestly during the past decade and has not even kept up with inflation. The problem is compounded by the fact that more scientists are applying for grants now than one or two decades ago, forcing the NIH to enforce strict cut-offs and fund only the top 10-20% of all submitted research proposals. Such competition ought to be good for the field because it could theoretically improve the quality of science. Unfortunately, it is nearly impossible to discern differences between excellent research grants. If an institute of the NIH has a cut-off at the 13th percentile, for example, a grant proposal judged to be in the top 10% would receive funding, whereas a proposal in the top 15% would go unfunded. In an era when universities are also scaling back their financial support for research, an unfunded proposal can ultimately lead to the closure of a research laboratory and the dismissal of several members of a research team. Since the prospective assessment of a research proposal’s scientific merit is somewhat subjective, it is quite possible that these budget constraints are creating cemeteries of brilliant ideas and concepts, a world of scientific what-ifs that are forever lost.

Red Panda

How do we scientists deal with these scenarios? Some of us keep soldiering on, writing one grant after another. Others change and broaden the direction of their research, hoping that research proposals in other areas are more likely to receive the elusive scores that qualify for funding. Yet another approach is to submit research proposals to philanthropic foundations or non-profit organizations, but most of these organizations tend to focus on research which directly impacts human health. Receiving a foundation grant to study the fundamental mechanisms by which the internal clocks of plants coordinate external timing cues such as sunlight, food and temperature, for example, would be quite challenging. One alternative source of research funding that is now emerging is “scientific crowdfunding”, in which scientists use web platforms to present their proposed research project to the public and thus attract donations from a large number of supporters. The basic underlying idea is that instead of receiving a $50,000 research grant from one foundation or government agency, researchers may receive smaller donations from 10, 50 or even 100 supporters and thus finance their project.

The website experiment.com is a scientific crowdfunding platform which presents an intriguing array of projects in search of backers, ranging from “Death of a Tyrant: Help us Solve a Late Cretaceous Dinosaur Mystery!” to “Eating tough stuff with floppy jaws – how do freshwater rays eat crabs, insects, and mollusks?” Many of the projects include a video in which the researchers outline the basic goals and significance of their project, and the webpage provides more detailed information regarding how the funds will be used. There is also a “Discussion” section for each proposed project in which researchers answer questions raised by potential backers and, importantly, a “Results” section in which researchers can report emerging results once their project is funded.

How can scientists get involved in scientific crowdfunding? Julien Vachelard and colleagues recently published an excellent overview of scientific crowdfunding. They analyzed the projects funded on experiment.com and found that successfully funded projects tend to have 30-40 backers, with total amounts raised mostly ranging from about $3,000 to $5,000. While these amounts are impressive for crowdfunded science, they are still far lower than a standard foundation or government agency grant in biomedical research. Sums of this size can pay for supplies to expand ongoing projects, but they are not sufficient to carry out standard biomedical research projects, whose budgets must also cover the salaries and stipends of the researchers. The annual stipends for postdoctoral research fellows alone run in the $40,000 – $55,000 range.

Vachelard and colleagues also provide great advice for how scientists can increase the likelihood of funding. Attention spans are limited on the internet, so researchers need to convey the key message of their research proposal in a clear, succinct and engaging manner. It is best to use powerful images and videos, set realistic goals (such as $3,000 to $5,000), articulate what the funds will be used for, participate in discussions to answer questions and update backers with results as they emerge. Presenting research on a crowdfunding platform is also an opportunity to educate the public and thus advance science, because it forces scientists to develop better communication skills. These collateral benefits to the scientific enterprise extend beyond the actual amount of funding that is solicited.

One of the concerns voiced about scientific crowdfunding is that it may only work for “panda bear science”, i.e. research involving popular themes such as cute and cuddly animals or studying life on other planets. However, a study of what actually gets funded in scientific crowdfunding campaigns revealed that the subject matter was not as important as how well the researchers communicated with their audience. A bigger challenge for the long-term success of scientific crowdfunding may be that the amounts raised are limited: they cover the cost of small sub-projects but are neither sufficient to explore exciting new and independent ideas nor to offset salary and personnel costs. Donating $20 or $50 to a project is very different from donating $1,000, because the latter requires not only the necessary financial resources but also represents a major personal investment in the success of the research project. To initiate an exciting new biomedical research project in the $50,000 or $100,000 range, one needs several backers who are willing to donate $1,000 or more.
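
A quick back-of-the-envelope calculation illustrates why the size of individual donations matters so much. Here is a minimal sketch in Python, using the $50,000 figure from the paragraph above; the assumption that every backer gives the same amount is my own simplification:

```python
# How many backers a $50,000 project would need at different typical donation sizes,
# assuming (hypothetically) that every backer gives the same amount.
project_budget = 50_000
for donation in (20, 50, 1_000):
    backers_needed = project_budget / donation
    print(f"${donation:>5,} per backer -> {backers_needed:,.0f} backers needed")

# 2,500 backers at $20 and 1,000 backers at $50, but only 50 backers at $1,000 --
# which is why larger projects hinge on a committed "tribe" of substantial donors.
```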

Perhaps one solution could be to move from a crowdfunding model towards a tribefunding model. Crowds consist of a mass of anonymous people, mostly strangers in a confined space who do not engage with each other. Tribes, on the other hand, are characterized by individuals who experience a sense of belonging and fellowship and who share and take responsibility for one another. The “tribes” in scientific tribefunding would consist of science supporters or enthusiasts who recognize the importance of the scientific work and also actively participate in discussions, not just with the scientists but also with each other. Members of a paleontology tribe could include specialists and non-specialists who are willing to put in the required time to study the scientific background of a proposed paleontology research project, understand how it would advance the field and appreciate how even negative results (which are quite common in science) could be meaningful.

Tribefunding in higher education and science may sound like a novel concept but certain aspects of tribefunding are already common practice in the United States, albeit under different names. When wealthy alumni establish endowments for student scholarships, fellowship programs or research centers at their alma mater, it is in part because they feel a tribe-like loyalty towards the institutions that laid the cornerstones of their future success. The students and scholars who will benefit from these endowments are members of the same academic institution or tribe. The difference between the currently practiced form of philanthropic funding and the proposed tribefunding model is that tribe identity would not be defined by where one graduated from but instead by scientific interests.

Tribefunding could also impact the review process of scientific proposals. Currently, peer reviewers who assess the quality of scientific proposals for government agencies spend a substantial amount of time assessing the strengths and limitations of each proposal, and then convene either in person or via conference calls to arrive at a consensus regarding the merits of a proposal. Researchers often invest months of effort in preparing research proposals, which is why peer reviewers take their work very seriously and devote the required time to review each proposal carefully. Although the peer review system for grant proposals is often criticized because reviewers can make errors when they assess the quality of proposals, there are no established alternatives for how to assess research proposals. Most peer reviewers also realize that they are part of a “tribe” with the common interest of selecting the best science. However, the definition of a “peer” is usually limited to other scientists, most of whom are tenured professors at academic institutions, and input from non-academic science supporters is not really solicited. In a tribefunding model, the definition of a “peer” would be expanded to include professional scientists as well as science supporters for any given area of science. All members of the tribe could participate in the review and selection of the best projects as well as throughout the funding period of the research projects that receive the support.

Merging the grassroots character and public outreach of crowdfunding with the sense of fellowship and active dialogue in a “scientific tribe” could take scientific crowdfunding to the next level. A comment section on a webpage is not sufficient to develop such a “tribe” affiliation but regular face-to-face meetings or conventional telephone/Skype conference calls involving several backers (independent of whether they can donate $50 or $5,000) may be more suitable. Developing a sense of ownership through this kind of communication would mean that every member of the science “tribe” realizes that they are a stakeholder. This sense of project ownership may not only increase donations, but could also create a grassroots synergy between laboratory and tribe, allowing for meaningful education and intellectual exchange.

Reference:

Vachelard J, Gambarra-Soares T, Augustini G, Riul P, Maracaja-Coutinho V (2016) A Guide to Scientific Crowdfunding. PLoS Biol 14(2): e1002373. doi:10.1371/journal.pbio.1002373

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

 


The Dire State of Science in the Muslim World

Universities and the scientific infrastructures in Muslim-majority countries need to undergo radical reforms if they want to avoid falling by the wayside in a world characterized by major scientific and technological innovations. This is the conclusion reached by Nidhal Guessoum and Athar Osama in their recent commentary “Institutions: Revive universities of the Muslim world“, published in the scientific journal Nature. The physics and astronomy professor Guessoum (American University of Sharjah, United Arab Emirates) and Osama, who is the founder of the Muslim World Science Initiative, use the commentary to summarize the key findings of the report “Science at Universities of the Muslim World” (PDF), which was released in October 2015 by a task force of policymakers, academic vice-chancellors, deans, professors and science communicators. This report is one of the most comprehensive analyses of the state of scientific education and research in the 57 countries with a Muslim-majority population, which are members of the Organisation of Islamic Cooperation (OIC).

Map of Saudi Arabia in electronic circuits via Shutterstock (copyright drical)

Here are some of the key findings:

1.    Lower scientific productivity in the Muslim world: The 57 Muslim-majority countries constitute 25% of the world’s population, yet they only generate 6% of the world’s scientific publications and 1.6% of the world’s patents.

2.    Lower scientific impact of papers published in the OIC countries: Not only are Muslim-majority countries severely under-represented in terms of the number of publications, but the papers which do get published are also cited far less than papers from non-Muslim countries. One illustrative example is that of Iran and Switzerland. In the 2014 SCImago ranking of publications by country, Iran was the highest-ranked Muslim-majority country with nearly 40,000 publications, just slightly ahead of Switzerland with 38,000 publications – even though Iran’s population of 77 million is nearly ten times larger than that of Switzerland. However, the average Swiss publication was more than twice as likely to garner a citation by scientific colleagues as an Iranian publication, indicating that the actual scientific impact of research in Switzerland was far greater than that of Iran. (A per-capita sketch of this comparison follows the list of key findings below.)

To correct for economic differences between countries that may account for the quality or impact of the scientific work, the analysis also compared selected OIC countries to matched non-Muslim countries with similar per capita Gross Domestic Product (GDP) values (PDF). The per capita GDP in 2010 was $10,136 for Turkey, $8,754 for Malaysia and only $7,390 for South Africa. However, South Africa still outperformed both Turkey and Malaysia in terms of average citations per scientific paper in the years 2006-2015 (Turkey: 5.6; Malaysia: 5.0; South Africa: 9.7).

3.    Muslim-majority countries make minimal investments in research and development: The world average for investment in research and development is roughly 1.8% of GDP. Advanced developed countries invest up to 2-3% of their GDP, whereas the average for the OIC countries is only 0.5%, less than a third of the world average! One could perhaps understand why poverty-stricken Muslim countries such as Pakistan do not have the funds to invest in research because their more immediate concerns are to provide basic necessities to the population. However, one of the most dismaying findings of the report is the low rate of research investment made by the members of the Gulf Cooperation Council (GCC, the economic union of the six oil-rich Gulf countries Saudi Arabia, Kuwait, Bahrain, Oman, the United Arab Emirates and Qatar, with a mean per capita GDP of over $30,000, which is comparable to that of the European Union). Saudi Arabia and Kuwait, for example, invest less than 0.1% of their GDP in research and development, far below the OIC average of 0.5%.
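
Returning to the comparison of Iran and Switzerland in the second finding above: the gap becomes even starker once publication counts are normalized to population size. A minimal sketch of that arithmetic, using only the figures quoted above (the Swiss population of roughly 8 million is inferred from the statement that Iran’s 77 million is nearly ten times larger):

```python
# Publications per million inhabitants, based on the 2014 SCImago figures quoted above.
countries = {
    "Iran":        {"publications": 40_000, "population_millions": 77},
    "Switzerland": {"publications": 38_000, "population_millions": 8},  # ~1/10 of Iran's population
}

for name, data in countries.items():
    per_million = data["publications"] / data["population_millions"]
    print(f"{name}: ~{per_million:,.0f} publications per million inhabitants")

# Iran: ~519 per million, Switzerland: ~4,750 per million -- roughly a nine-fold gap,
# before even considering that the average Swiss paper is cited more than twice as often.
```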

So how does one go about fixing this dire state of science in the Muslim world? Some fixes are rather obvious, such as increasing the investment in scientific research and education, especially in the OIC countries which have the financial means but are currently lagging far behind in terms of how much funding is made available to improve their scientific infrastructures. Guessoum and Osama also highlight the importance of introducing key metrics to assess scientific productivity and the quality of science education. It is not easy to objectively measure scientific and educational impact, and one can argue about the significance or reliability of any given metric. But without any metrics, it will be very difficult for OIC universities to identify problems and weaknesses, build new research and educational programs and reward excellence in research and teaching. There is also a need to reform the curriculum so that it shifts its focus from lecture-based teaching, which is so prevalent in OIC universities, to inquiry-based teaching in which students learn science hands-on by experimentally testing hypotheses and are encouraged to ask questions.

In addition to these commonsense suggestions, the task force also put forward a rather intriguing proposition to strengthen scientific research and education: place a stronger emphasis on basic liberal arts in science education. I could not agree more because I strongly believe that exposing science students to the arts and humanities plays a key role in fostering the creativity and curiosity required for scientific excellence. Science is a multi-disciplinary enterprise, and scientists can benefit greatly from studying philosophy, history or literature. A course in philosophy, for example, can teach science students to question their basic assumptions about reality and objectivity, encourage them to examine their own biases, challenge authority and understand the importance of doubt and uncertainty, all of which will likely help them become critical thinkers and better scientists.

However, the specific examples provided by Guessoum and Osama do not necessarily indicate support for this kind of broad liberal arts education. They mention the example of the newly founded private Habib University in Karachi, which mandates that all science and engineering students also take classes in the humanities, including a two-semester course in “hikma” or “traditional wisdom”. Upon reviewing the details of this philosophy course on the university’s website, it seems that the course is a history of Islamic philosophy focused on antiquity and pre-modern texts which date back to the “Golden Age” of Islam. The task force also specifically applauds an online course developed by Ahmed Djebbar, an emeritus science historian at the University of Lille in France, which attempts to stimulate scientific curiosity in young pre-university students by relating scientific concepts to great discoveries from the Islamic “Golden Age”. My concern is that this is a rather Islamocentric form of liberal arts education. Do students who have spent all their lives growing up in a Muslim society really need to revel in the glories of a bygone era in order to get excited about science? Does the Habib University philosophy course focus on Islamic philosophy because the university feels that students should be more aware of their cultural heritage, or are there concerns that exposing students to non-Islamic ideas could cause problems with students, parents, university administrators or other members of society who could perceive this as an attack on Islamic values? If the true purpose of a liberal arts education is to expand the minds of students by exposing them to new ideas, wouldn’t it make more sense to focus on non-Islamic philosophy? It is definitely not a good idea to coddle Muslim students by adulating the “Golden Age” of Islam or using kid gloves when discussing philosophy in order to avoid offending them.

This leads us to a question that is not directly addressed by Guessoum and Osama: How “liberal” is a liberal arts education in countries with governments and societies that curtail the free expression of ideas? The Saudi blogger Raif Badawi was sentenced to 1,000 lashes and 10 years in prison because of his liberal views that were perceived as an attack on religion. Faculty members at universities in Saudi Arabia who teach liberal arts courses are probably very aware of these occupational hazards. At first glance, professors who teach in the sciences may not seem to be as susceptible to the wrath of religious zealots and authoritarian governments. However, the above-mentioned interdisciplinary nature of science could easily spell trouble for free-thinking professors or students. Comments about evolutionary biology, the ethics of genome editing or discussing research on sexuality could all be construed as a violation of societal and religious norms.

The 2010 study Faculty perceptions of academic freedom at a GCC university surveyed professors at an anonymous GCC university (most likely Qatar University since roughly 25% of the faculty members were Qatari nationals and the authors of the study were based in Qatar) regarding their views of academic freedom. The vast majority of faculty members (Arab and non-Arab) felt that academic freedom was important to them and that their university upheld academic freedom. However, in interviews with individual faculty members, the researchers found that the professors were engaging in self-censorship in order to avoid untoward repercussions. Here are some examples of the comments from the faculty at this GCC University:

“I am fully aware of our culture. So, when I suggest any topic in class, I don’t need external censorship except mine.”

“Yes. I avoid subjects that are culturally inappropriate.”

“Yes, all the time. I avoid all references to Israel or the Jewish people despite their contributions to world culture. I also avoid any kind of questioning of their religious tradition. I do this out of respect.”

This latter comment is especially painful for me because one of my heroes who inspired me to become a cell biologist was the Italian Jewish scientist Rita Levi-Montalcini. She revolutionized our understanding of how cells communicate with each other using growth factors. She was also forced to secretly conduct her experiments in her bedroom because the Fascists banned all “non-Aryans” from going to the university laboratory. Would faculty members who teach the discovery of growth factors at this GCC University downplay the role of the Nobel laureate Levi-Montalcini because she was Jewish? We do not know how prevalent this form of self-censorship is in other OIC countries because the research on academic freedom in Muslim-majority countries is understandably scant. Few faculty members would be willing to voice their concerns about government or university censorship and admitting to self-censorship is also not easy.

The task force report on science in the universities of Muslim-majority countries is an important first step towards reforming scientific research and education in the Muslim world. Increasing investments in research and development, using and appropriately acting on carefully selected metrics as well as introducing a core liberal arts curriculum for science students will probably all significantly improve the dire state of science in the Muslim world. However, the reform of the research and education programs needs to also include discussions about the importance of academic freedom. If Muslim societies are serious about nurturing scientific innovation, then they will need to also ensure that scientists, educators and students will be provided with the intellectual freedom that is the cornerstone of scientific creativity.

References:

Guessoum, N., & Osama, A. (2015). Institutions: Revive universities of the Muslim world. Nature, 526(7575), 634-6.

Romanowski, M. H., & Nasser, R. (2010). Faculty perceptions of academic freedom at a GCC university. Prospects, 40(4), 481-497.

 

**************************************************************

 Note: An earlier version of this article was first published on the 3Quarksdaily blog.

 


To Err Is Human, To Study Errors Is Science

Cholesterol-lowering drugs known as ‘statins’ are among the most widely prescribed medications for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks, especially in patients who have already been diagnosed with cardiovascular disease. A more contentious issue is the use of statins in individuals who have no history of heart attacks, strokes or blockages in their blood vessels. Instead of waiting for the first major manifestation of cardiovascular disease, should one start statin therapy early on to prevent cardiovascular disease?

If statins were free of charge and had no side effects whatsoever, the answer would be rather straightforward: Go ahead and use them as soon as possible. However, like all medications, statins come at a price. There is the financial cost to the patient or their insurance to pay for the medications, and there is a health cost to the patients who experience potential side effects. The Guideline Panel of the American College of Cardiology (ACC) and the American Heart Association (AHA) therefore recently recommended that the preventive use of statins in individuals without known cardiovascular disease should be based on personalized risk calculations. If the risk of developing disease within the next 10 years is greater than 7.5%, then the benefits of statin therapy outweigh its risks and the treatment should be initiated. The panel also indicated that if the 10-year risk of cardiovascular disease is greater than 5%, then physicians should consider prescribing statins, but should bear in mind that the scientific evidence for this recommendation was not as strong as that for higher-risk individuals.
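
The guideline's decision logic can be summarized in a few lines. In the minimal sketch below, the variable ten_year_risk is just a placeholder for the value produced by a validated risk calculator such as the ACC/AHA risk equations (not reproduced here); only the 7.5% and 5% thresholds described above are encoded, and the example patient is hypothetical:

```python
def statin_recommendation(ten_year_risk: float) -> str:
    """Map a calculated 10-year cardiovascular risk (e.g. 0.08 for 8%) onto the
    ACC/AHA thresholds described above. The risk value itself must come from a
    validated risk calculator; this sketch only encodes the decision thresholds."""
    if ten_year_risk > 0.075:
        return "Benefits outweigh risks: statin therapy should be initiated."
    elif ten_year_risk > 0.05:
        return "Statins may be considered, though the supporting evidence is weaker."
    else:
        return "Preventive statin therapy is not indicated by these thresholds alone."

print(statin_recommendation(0.08))  # hypothetical patient with an 8% 10-year risk
```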

 

Oops button – via Shutterstock

Using statins in low-risk patients

The recommendation that individuals with a comparatively low risk of developing future cardiovascular disease (10-year risk lower than 10%) would benefit from statins was met with skepticism by some medical experts. In October 2013, the British Medical Journal (BMJ) published a paper by John Abramson, a lecturer at Harvard Medical School, and his colleagues which re-evaluated the data from a prior study on statin benefits in patients with less than 10% cardiovascular disease risk over 10 years. Abramson and colleagues concluded that the statin benefits were overstated and that statin therapy should not be expanded to include this group of individuals. To further bolster their case, Abramson and colleagues also cited a 2013 study by Huabing Zhang and colleagues in the Annals of Internal Medicine which (according to Abramson et al.) had reported that 18% of patients discontinued statins due to side effects. Abramson even highlighted the finding from the Zhang study by including it as one of four bullet points summarizing the key take-home messages of his article.

The problem with this characterization of the Zhang study is that it ignored all the caveats that Zhang and colleagues had mentioned when discussing their findings. The Zhang study was based on a retrospective review of patient charts and did not establish a true cause-and-effect relationship between the discontinuation of statins and actual side effects of statins. Patients may stop taking medications for many reasons, but this does not necessarily mean that they do so because of side effects from the medication. According to the Zhang paper, 17.4% of patients in their observational retrospective study had reported a “statin-related incident” and of those only 59% had stopped the medication. The fraction of patients discontinuing statins due to suspected side effects was therefore at most 9-10%, not the 18% cited by Abramson. And as Zhang and colleagues pointed out, their study did not include a placebo control group. Trials with placebo groups document similar rates of “side effects” in patients taking statins and those taking placebos, suggesting that only a small minority of perceived side effects are truly caused by the chemical compounds in statin drugs.
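
The arithmetic behind that upper bound is worth spelling out explicitly; a minimal sketch using only the two percentages reported in the Zhang paper as quoted above:

```python
# Reconstructing the discontinuation estimate from the Zhang et al. figures quoted above.
reported_incident = 0.174      # 17.4% of patients reported a statin-related incident
stopped_after_incident = 0.59  # 59% of those patients then stopped the medication

upper_bound = reported_incident * stopped_after_incident
print(f"Upper bound on side-effect-driven discontinuation: {upper_bound:.1%}")  # ~10.3%

# Roughly 10%, about half of the 18% cited by Abramson -- and even this is an upper
# bound, since without a placebo group some of the reported incidents may not have
# been caused by the statin at all.
```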

 

Admitting errors is only the first step

Whether 18%, 9% or a far smaller proportion of patients experience significant medication side effects is no small matter, because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue medications that have been shown to significantly reduce the risk of heart attacks in a wide range of patients. On the other hand, severely underestimating statin side effects could result in important symptoms being discounted and patients suffering needlessly. Abramson’s misinterpretation of the statin side effect data was pointed out by readers of the BMJ soon after the article was published, and it prompted an inquiry by the journal. After re-evaluating the data and discussing the issue with Abramson and colleagues, the journal issued a correction in which it clarified the misrepresentation of the Zhang paper.

Fiona Godlee, the editor-in-chief of the BMJ, also wrote an editorial explaining the decision to issue a correction regarding the question of side effects and why there was not sufficient cause to retract the whole paper, since the other point made by Abramson and colleagues – the lack of benefit in low-risk patients – might still hold true. At the same time, Godlee acknowledged the inherent bias of a journal’s editor when it comes to deciding whether or not to retract a paper. Every retraction of a peer-reviewed scholarly paper is somewhat of an embarrassment to the authors of the paper as well as the journal, because it suggests that the peer review process failed to identify one or more major flaws. In a commendable move, the journal therefore appointed a multidisciplinary review panel which includes leading cardiovascular epidemiologists. This panel will review the Abramson paper as well as another BMJ paper which had also cited the inaccurately high frequency of statin side effects, investigate the peer review process that failed to identify the erroneous claims and provide recommendations regarding the ultimate fate of the papers.

 

Reviewing peer review

Why didn’t the peer reviewers who evaluated Abramson’s article catch the error prior to its publication? We can only speculate as to why such a major error was not identified by the peer reviewers. One has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours analyzing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in terms of how they present the data and describe their scientific methodology.

In the case of Abramson and colleagues, the correction issued by the BMJ refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical research papers often cite 30 or 40 studies, and it is unrealistic to expect that peer reviewers read all the cited papers and ensure that they are being properly cited and interpreted. If this were the expectation, few peer reviewers would agree to serve as volunteer reviewers since they would have hardly any time left to conduct their own research. However, in this particular case, most peer reviewers familiar with statins and the controversies surrounding their side effects should have expressed concerns regarding the extraordinarily high figure of 18% cited by Abramson and colleagues. Hopefully, the review panel will identify the reasons for the failure of BMJ’s peer review system and point out ways to improve it.

 

To err is human, to study errors is science

All researchers make mistakes, simply because they are human. It is impossible to eliminate all errors in any endeavor that involves humans, but we can construct safeguards that help us reduce the occurrence and magnitude of our errors. Overt fraud and misconduct are rare causes of errors in research, but their effects on any given research field can be devastating. One of the most notorious cases of research fraud is that of the Dutch psychologist Diederik Stapel, who published numerous papers based on blatant fabrication of data – showing ‘results’ of experiments on non-existent study subjects. The field of cell therapy in cardiovascular disease recently experienced a major setback when a university review of studies headed by the German cardiologist Bodo Strauer found evidence of scientific misconduct. The significant discrepancies and irregularities in Strauer’s studies have now led to wide-ranging skepticism about the efficacy of using bone marrow cell infusions to treat heart disease.

 

It is difficult to obtain precise numbers to quantify the actual extent of severe research misconduct and fraud since it may go undetected. Even when such cases are brought to the attention of the academic leadership, the involved committees and administrators may decide to keep their findings confidential and not disclose them to the public. However, most researchers working in academic research environments would probably agree that these are rare occurrences. A far more likely source of errors in research is the cognitive bias of the researchers. Researchers who believe in certain hypotheses and ideas are prone to interpreting data in a manner most likely to support their preconceived notions. For example, it is likely that a researcher opposed to statin usage will interpret data on side effects of statins differently than a researcher who supports statin usage. While Abramson may have been biased in his interpretation of the data generated by Zhang and colleagues, the field of cardiovascular regeneration is currently grappling with what appears to be a case of biased interpretation of one’s own data. An institutional review by Harvard Medical School and Brigham and Women’s Hospital recently determined that the work of Piero Anversa, one of the world’s most widely cited stem cell researchers, was significantly compromised and warranted a retraction. His group had reported that the adult human heart exhibits an amazing regenerative potential, suggesting that roughly every 8 to 9 years the adult human heart replaces its entire collective of beating heart cells (a 7% – 19% yearly turnover of beating heart cells). These findings were in sharp contrast to a prior study which had found only minimal turnover of beating heart cells (1% or less per year) in adult humans. Anversa’s findings were also at odds with the observations of clinical cardiologists, who rarely observe a near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the huge discrepancy between the prior research and Anversa’s studies is that Anversa and his colleagues had not taken into account the possibility of contaminations that could have falsely elevated the cell regeneration counts.
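
To see how a yearly turnover percentage maps onto the claim that the whole heart is replaced every 8 to 9 years, here is a minimal sketch; the assumption of a constant turnover rate, and the 12% midpoint value, are my own simplifications, while the 1% figure corresponds to the prior study mentioned above:

```python
# Years needed to replace the full population of beating heart cells,
# assuming (hypothetically) a constant yearly turnover rate.
for yearly_turnover in (0.01, 0.07, 0.12, 0.19):
    years_to_full_replacement = 1 / yearly_turnover
    print(f"{yearly_turnover:.0%} per year -> ~{years_to_full_replacement:.0f} years for full replacement")

# At the 7%-19% range reported by Anversa's group this works out to roughly 5-14 years,
# with the middle of the range matching the "every 8 to 9 years" claim; the earlier
# estimate of 1% or less per year implies a century or more.
```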

 

Improving the quality of research: peer review and more

The fact that researchers are prone to making errors due to inherent biases does not mean we should simply throw our hands up in the air, say “Mistakes happen!” and let matters rest. High-quality science is characterized by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors early on so that we can limit their detrimental impact. The realization that the lack of reproducibility of peer-reviewed scientific papers is becoming a major problem for many areas of research, such as psychology, stem cell research and cancer biology, has prompted calls for better ways to track reproducibility and errors in science.

One important new paradigm that is being discussed as a way to improve the quality of scholarly papers is post-publication peer evaluation. Instead of viewing the publication of a peer-reviewed research paper as an endpoint, post-publication peer evaluation invites fellow scientists to continue commenting on the quality and accuracy of the published research even after its publication and to engage the authors in this process. Traditional peer review relies on just a handful of reviewers who decide about the fate of a manuscript, but post-publication peer evaluation opens up the debate to hundreds or even thousands of readers who may be able to detect errors that could not be identified by the small number of traditional peer reviewers prior to publication. It is also becoming apparent that science journalists and science writers can play an important role in the post-publication evaluation of published research papers by investigating and communicating research flaws identified in research papers. In addition to helping dismantle the Science Mystique, critical science journalism can help ensure that corrections, retractions or other major concerns about the validity of scientific findings are communicated to a broad non-specialist audience.

In addition to these ongoing efforts to reduce errors in science by improving the evaluation of scientific papers, it may also be useful to consider new proactive initiatives which focus on how researchers perform and design experiments. As the head of a research group at an American university, I have to take mandatory courses (in some cases on an annual basis) informing me about laboratory hazards, the ethics of animal experimentation or the ethics of how to conduct human studies. However, there are no mandatory courses helping us identify our own research biases or minimize their impact on the interpretation of our data. There is an underlying assumption that if you are no longer a trainee, you probably know how to perform and interpret scientific experiments. I would argue that it does not hurt to remind scientists regularly – no matter how junior or senior – that they can become victims of their biases. We have to learn to continuously re-evaluate how we conduct science and to be humble enough to listen to our colleagues, especially when they disagree with us.

 

Note: A shorter version of this article was first published at The Conversation with excellent editorial input provided by Jo Adetunji.

 

Abramson, J., Rosenberg, H., Jewell, N., & Wright, J. (2013). Should people at low risk of cardiovascular disease take a statin? BMJ, 347, f6123. DOI: 10.1136/bmj.f6123

Neutrality, Balance and Anonymous Sources in Science Blogging – #scioStandards

This is Part 2 of a series of blog posts in anticipation of the Upholding standards in scientific blogs (Session 10B, #scioStandards) session which I will be facilitating at noon on Saturday, March 1 at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA). Please read Part 1 here. The goal of these blog posts is to raise questions which readers can ponder and hopefully discuss during the session.


1. Neutrality

Neutrality is prized by scientists and journalists. Scientists are supposed to report and analyze their scientific research in a neutral fashion. Similarly, journalistic professionalism requires a neutral and objective stance when reporting or analyzing news. Nevertheless, scientists and journalists are also aware of the fact that there is no perfect neutrality. We are all victims of our conscious and unconscious biases, and how we report data or events is colored by those biases. Not only is it impossible to be truly “neutral”, but one can even question whether “neutrality” should be a universal mandate. Neutrality can make us passive, especially when we see a clear ethical mandate to take action. Should one report in a neutral manner about genocide instead of becoming an advocate for the victims? Should a scientist who observes the destruction of ecosystems report on this in a neutral manner? Is it acceptable, or perhaps even required, for such a scientist to abandon neutrality and become an advocate to protect the ecosystems?

Science bloggers or science journalists have to struggle to find the right balance between neutrality and advocacy. Political bloggers and journalists who are enthusiastic supporters of a political party will find it difficult to preserve neutrality in their writing, but their target audiences may not necessarily expect them to remain neutral. I am often fascinated and excited by scientific discoveries and concepts that I want to write about, but I also notice how my enthusiasm for science compromises my neutrality. Should science bloggers strive for neutrality and avoid advocacy? Or is it understood that their audiences do not expect neutrality?

 

2. Balance

One way to increase objectivity and neutrality in science writing is to provide balanced views. When discussing a scientific discovery or concept, one can also cite or reference scientists with opposing views. This underscores that scientific opinion is not a monolith and that most scientific findings can and should be challenged. However, the mandate to provide balance can also lead to “false balance”, in which two opposing opinions are presented as equivalent perspectives even though one of the two sides has little to no scientific evidence to back up its claims. More than 99% of all climatologists agree about the importance of anthropogenic global warming, so it would be “false balance” to give equal space to opposing fringe views. Most science bloggers would also avoid “false balance” when it comes to reporting about the scientific value of homeopathy, since nearly every scientist in the world agrees that homeopathy has no scientific data to back it up.

But how should science bloggers decide what constitutes “necessary balance” versus “false balance” when writing about areas of research where the scientific evidence is more ambivalent? How about a scientific discovery which 80% of scientists think is a landmark finding and 20% of scientists believe is a fluke? How does one find out about the scientific rigor of the various viewpoints, and how should a blog post reflect these differences in opinion? Press releases of universities or research institutions usually only cite the researchers who conducted a scientific study, but how does one find out about other scientists who disagree with the significance of the new study?

 

3. Anonymous Sources

Most scientific peer review is conducted with anonymous sources. The editors of peer reviewed scientific journals send out newly submitted manuscripts to expert reviewers in the field but they try to make sure that the names of the reviewers remain confidential. This helps ensure that the reviewers can comment freely about any potential flaws in the manuscript without having to fear retaliation from the authors who might be incensed about the critique. Even in the post-publication phase, anonymous commenters can leave critical comments about a published study at the post-publication peer review website PubPeer. The comments made by anonymous as well as identified commenters at PubPeer played an important role in raising questions about recent controversial stem cell papers. On the other hand, anonymous sources may also use their cover to make baseless accusations and malign researchers. In the case of journals, the responsibility lies with the editors to ensure that their anonymous reviewers are indeed behaving in a professional manner and not abusing their anonymity.

Investigative political journalists also often rely on anonymous sources and whistle-blowers to receive critical information that would have otherwise been impossible to obtain. Journalists are also trained to ensure that their anonymous sources are credible and that they are not abusing their anonymity.

Should science bloggers and science journalists also consider using anonymous sources? Would unnamed scientists provide a more thorough critical appraisal of the quality of scientific research or would this open the door to abuse?

 

I hope that you leave comments on this post, tweet your thoughts using the #scioStandards hashtag and discuss your views at the Science Online conference.

Background Reading in Science Blogging – #scioStandards

There will be so many interesting sessions at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA) that it is going to be difficult to choose which sessions to attend, because one will invariably miss out on concurrent sessions. If you are not too exhausted, please attend one of the last sessions of the conference: Upholding standards in scientific blogs (Session 10B, #scioStandards).


I will be facilitating the discussion at this session, which will take place at noon on Saturday, March 1, just before the final session of the conference. The title of the session is rather vague, and the purpose of the session is for attendees to exchange their views on whether we can agree on certain scientific and journalistic standards for science blogging.

Individual science bloggers have very different professional backgrounds and they also write for a rather diverse audience. Some bloggers are part of larger networks, others host a blog on their own personal website. Some are paid, others write for free. Most bloggers have developed their own personal styles for how they write about scientific studies, the process of scientific discovery, science policy and the lives of people involved in science. Considering the heterogeneity in the science blogging community, is it even feasible to identify “standards” for scientific blogging? Are there some core scientific and journalistic standards that most science bloggers can agree on? Would such “standards” merely serve as informal guidelines or should they be used as measures to assess the quality of science blogging?

These are the kinds of questions that we will try to discuss at the session. I hope that we will have a lively discussion, share our respective viewpoints and see what we can learn from each other. To gauge the interest levels of the attendees, I am going to pitch a few potential discussion topics on this blog and use your feedback to facilitate the discussion. I would welcome all of your responses and comments, independent of whether you intend to attend the conference or the session. I will also post these questions in the Science Online discussion forum.

One of the challenges we face when we blog about specific scientific studies is determining how much background reading is necessary to write a reasonably accurate blog post. Most science bloggers probably read the original research paper they intend to write about, but even this can be challenging at times. Scientific papers aren’t very long. Journals usually restrict the word count of original research papers to somewhere between 2,000 and 8,000 words (depending on each scientific journal’s policy and whether the study is published as a short communication or a full-length article). However, original research papers are also accompanied by four to eight multi-paneled figures with extensive legends.

Nowadays, research papers frequently include additional figures, data-sets and detailed descriptions of scientific methods that are published online and not subject to the word count limit. A 2,000 word short communication with two data figures in the main manuscript may therefore be accompanied by eight “supplemental” online-only figures and an additional 2,000 words of text describing the methods in detail. A single manuscript usually summarizes the results of multiple years of experimental work, which is why this condensed end-product is quite dense. It can take hours to properly study the published research study and understand the intricate details.

Is it enough to merely read the original research paper in order to blog about it? Scientific papers include a brief introduction section, but these tend to be written for colleagues who are well-acquainted with the background and significance of the research. However, unless one happens to blog about a paper that is directly related to one’s own work, most of us probably need additional background reading to fully understand the significance of a newly published study.

An expert on liver stem cells, for example, who wants to blog about the significance of a new paper on lung stem cells will probably need a substantial amount of additional background reading. One may have to read at least one or two older research papers by the authors or their scientific colleagues / competitors to grasp what makes the new study so unique. It may also be helpful to read at least one review paper (e.g. a review article summarizing recent lung stem cell discoveries) to understand the “big picture”. Some research papers are accompanied by scientific editorials which can provide important insights into the strengths and limitations of the paper in question.

All of this reading adds up. If it takes a few hours to understand the main paper that one intends to blog about, and an additional 2-3 hours to read other papers or editorials, a science blogger may end up having to invest 4-5 hours of reading before one has even begun to write the intended blog post.

What strategies have science bloggers developed to manage their time efficiently and make sure they can meet (external or self-imposed) deadlines but still complete the necessary background reading?

Should bloggers provide references and links to the additional papers they consulted?

Should bloggers try to focus on a narrow area of expertise so that over time they develop enough of a background in this niche area so that they do not need so much background reading?

Are there major differences in the expectations of how much background reading is necessary? For example, does an area such as stem cell research or nanotechnology require far more background reading because every day numerous new papers are published and it is so difficult to keep up with the pace of the research?

Is it acceptable to take short-cuts? Could one just read the paper that one wants to blog about and forget about additional background reading, hoping that the background provided in the paper is sufficient and balanced?

Can one avoid reading the supplementary figures or texts of a paper and just stick to the main text of a paper, relying on the fact that the peer reviewers of the published paper would have caught any irregularities in the supplementary data?

Is it possible to primarily rely on a press release or an interview with the researchers of the paper and just skim the results of the paper instead of spending a few hours trying to read the original paper?

Or do such short-cuts compromise the scientific and journalistic quality of science blogs?

Would a discussion about expectations, standards and strategies to manage background reading be helpful for participants of the session?

Growing Skepticism about the Stem Cell Acid Trip

In January 2014, the two papers “Stimulus-triggered fate conversion of somatic cells into pluripotency” and “Bidirectional developmental potential in reprogrammed cells with acquired pluripotency” published in the journal Nature by Haruko Obokata and colleagues took the world of stem cell research by surprise.

Since Shinya Yamanaka’s landmark discovery that adult skin cells could be reprogrammed into embryonic-like induced pluripotent stem cells (iPSCs) by introducing selected embryonic genes into adult cells, laboratories all over the world have been using modifications of the “Yamanaka method” to create their own stem cell lines. The original Yamanaka method published in 2006 used a virus which integrated into the genome of the adult cell to introduce the necessary genes. Any introduction of genetic material into a cell carries the risk of causing genetic aberrancies that could lead to complications, especially if the newly generated stem cells are intended for therapeutic usage in patients.


Researchers have therefore tried to modify the “Yamanaka method” and reduce the risk of genetic aberrations, either by using genetic tools to remove the introduced genes once the cells are fully reprogrammed to a stem cell state, by delivering the genes with non-integrating vectors, or by using complex cocktails of chemicals and growth factors to generate stem cells without introducing any genes into the adult cells.

The papers by Obokata and colleagues at the RIKEN center in Kobe, Japan use a far simpler method to reprogram adult cells. Instead of introducing foreign genes, they suggest that one can expose adult mouse cells to a severe stress such as an acidic solution. The cells which survive this acid-dipping adventure (25 minutes in a solution with pH 5.7) activate their endogenous dormant embryonic genes by an unknown mechanism. The researchers then show that these activated cells take on properties of embryonic stem cells or iPSCs if they are maintained in a stem cell culture medium and treated with the necessary growth factors. Once the cells reach the stem cell state, they can be converted into cells of any desired tissue, both in a culture dish as well as in a developing mouse embryo. Many of the experiments in the papers were performed starting out with adult mouse lymphocytes, but the researchers found that mouse skin fibroblasts and other cells could also be successfully converted into an embryonic-like state using the acid stress.

My first reaction was incredulity. How could such a simple and yet noxious stress such as exposing cells to acid be sufficient to initiate a complex “stemness” program? Research labs have spent years fine-tuning the introduction of the embryonic genes, trying to figure out the optimal combination of genes and timing of when the genes are essential during the reprogramming process. These two papers propose that the whole business of introducing stem cell genes into adult cells was unnecessary – All You Need Is Acid.

 

This sounds too good to be true. The recent history in stem cell research has taught us that we need to be skeptical. Some of the most widely cited stem cell papers cannot be replicated. This problem is not unique to stem cell research, because other biomedical research areas such as cancer biology are also struggling with issues of replicability, but the high scientific impact of burgeoning stem cell research has forced its replicability issues into the limelight. Nowadays, whenever stem cell researchers hear about a ground-breaking new stem cell discovery, they often tend to respond with some degree of skepticism until multiple independent laboratories can confirm the results.

My second reaction was that I really liked the idea. Maybe we had never tried something as straightforward as an acid stress because we were too narrow-minded, always looking for complex ways to create stem cells instead of trying simple approaches. The stress-induction of stem cell behavior may also represent a regenerative mechanism that has been conserved by evolution. When our amphibian cousins regenerate limbs following an injury, adult tissue cells are also reprogrammed to a premature state by the stress of the injury before they start building a new limb.

The idea of stress-induced reprogramming of adult cells to an embryonic-like state also has a powerful poetic appeal, which inspired me to write the following haiku:

 

The old warrior

plunges into an acid lake

to emerge reborn.

 

(Read more about science-related haikus here)

Just because the idea of acid-induced reprogramming is so attractive does not mean that it is scientifically accurate or replicable.

A number of concerns about potential scientific misconduct in the context of the two papers have been raised, and it appears that the RIKEN center is investigating these concerns. Specifically, anonymous bloggers have pointed out irregularities in the figures of the papers and noted that some of the images may be duplicated. We will have to wait for the results of the investigation, but even if image errors or duplications are found, this does not necessarily mean that they reflect intentional misconduct or fraud. Assembling manuscripts with so many images is no easy task and unintentional errors do occur. These errors are probably far more common than we think. High-profile papers undergo much more scrutiny than the average peer-reviewed paper, and this is probably why we tend to uncover such errors more readily in these papers. For example, image duplication errors were discovered in the 2013 Cell paper on human cloning, but many researchers agreed that the errors in the 2013 Cell paper were likely due to sloppiness during the assembly of the submitted manuscript and did not constitute intentional fraud.

Irrespective of the investigation into the irregularities of figures in the two Nature papers, the key question that stem cell researchers have to now address is whether the core findings of the Obokata papers are replicable. Can adult cells – lymphocytes, skin fibroblasts or other cells – be converted into embryonic-like stem cells by an acid stress? If yes, then this will make stem cell generation far easier and it will open up a whole new field of inquiry, leading to many new exciting questions. Do human cells also respond to acid stress in the same manner as the mouse cells? How does acid stress reprogram the adult cells? Is there an acid-stress signal that directly acts on stem cell transcription factors or does the stress merely activate global epigenetic switches? Are other stressors equally effective? Does this kind of reprogramming occur in our bodies in response to an injury such as low oxygen or inflammation because these kinds of injuries can transiently create an acidic environment in our tissues?

Researchers all around the world are currently attempting to test the effect of acid exposure on the activation of stem cell genes. Paul Knoepfler’s stem cell blog is currently soliciting input from researchers trying to replicate the work. Paul makes it very clear that this is an informal exchange of ideas, a way for researchers to learn from each other on a “real-time” basis. It is an opportunity to find out how colleagues are progressing without having to wait 6-12 months for the next big stem cell meeting or for the publication of a paper confirming or refuting acid-induced reprogramming. Posting a summary of results on a blog is not as rigorous as publishing a peer-reviewed paper with all the necessary methodological details, but it can at least provide some clues as to whether some or all of the results in the controversial Obokata papers can be replicated.

If the preliminary findings of multiple labs posted on the blog indicate that lymphocytes or skin cells begin to activate their stem cell gene signature after acid stress, then we at least know that this is a project which merits further investigation and researchers will be more willing to invest valuable time and resources to conduct additional replication experiments. On the other hand, if nearly all the researchers post negative results on the blog, then it is probably not a good investment of resources to spend the next year or so trying to replicate the results.

It does not hurt to have one’s paradigms or ideas challenged by new scientific papers as long as we realize that paradigm-challenging papers need to be replicated. The Nature papers must have undergone rigorous peer review before their publication, but scientific peer review does not involve checking replicability of the results. Peer reviewers focus on assessing the internal logic, experimental design, novelty, significance and validity of the conclusions based on the presented data. The crucial step of replicability testing occurs in the post-publication phase. The post-publication exchange of results on scientific blogs by independent research labs is an opportunity to crowd-source replicability testing and thus accelerate the scientific authentication process. Irrespective of whether or not the attempts to replicate acid-induced reprogramming succeed, the willingness of the stem cell community to engage in a dialogue using scientific blogs and evaluate replicability is an important step forward.

 

ResearchBlogging.org
Obokata H, Wakayama T, Sasai Y, Kojima K, Vacanti MP, Niwa H, Yamato M, & Vacanti CA (2014). Stimulus-triggered fate conversion of somatic cells into pluripotency. Nature, 505 (7485), 641-7 PMID: 24476887

Is It Possible To Have Excess Weight And Still Be Healthy?

Is it possible to be overweight or obese and still be considered healthy? Most physicians advise overweight or obese patients to lose weight because excess weight is a known risk factor for severe chronic diseases such as diabetes, high blood pressure and cardiovascular disease. However, in recent years a controversy has arisen regarding the actual impact of increased weight on an individual’s life expectancy or risk of suffering a heart attack. Some researchers argue that being overweight (body mass index between 25 and 30; calculate your body mass index here) or obese (body mass index greater than 30) primarily affects one’s metabolic health, and that it is the prolonged exposure to metabolic problems which in turn leads to cardiovascular disease or death.
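Since the overweight and obese categories above are defined purely by body mass index, here is a minimal sketch of that arithmetic. The example weight and height, and the 18.5 underweight cut-off, are illustrative assumptions rather than values from the study discussed below.

```python
# Minimal sketch of the BMI calculation and the weight categories
# mentioned above (overweight: BMI 25-30, obese: BMI > 30).
# The example weight, height and the 18.5 cut-off are hypothetical.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def weight_category(bmi_value: float) -> str:
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    return "obese"

value = bmi(weight_kg=88, height_m=1.75)   # roughly 28.7
print(f"BMI = {value:.1f} -> {weight_category(value)}")
```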


According to this view, merely carrying excess weight is not dangerous in itself. It only becomes a major problem if it causes metabolic abnormalities such as high cholesterol, high blood sugar or diabetes, and high blood pressure. This suggests a weight/health spectrum: at one end are overweight or obese individuals with normal metabolic parameters who are not yet significantly affected by the excess weight (“healthy overweight” and “healthy obesity”); at the other end are overweight and obese individuals with significant metabolic abnormalities due to the excess weight, who are at much higher risk of heart disease and death because of those metabolic problems.

Other researchers disagree with this view and propose that all excess weight is harmful, regardless of whether the overweight or obese individuals have normal metabolic parameters. To help resolve this controversy, researchers at Mount Sinai Hospital and the University of Toronto recently performed a meta-analysis, evaluating data from major clinical studies that compared mortality (risk of death) and heart disease (defined by events such as heart attacks) in normal weight, overweight and obese individuals grouped by their metabolic health.

The study was recently published in the Annals of Internal Medicine (2013) as “Are Metabolically Healthy Overweight and Obesity Benign Conditions?: A Systematic Review and Meta-analysis” and provided data on six groups of individuals: 1) metabolically healthy and normal weight, 2) metabolically healthy and overweight, 3) metabolically healthy and obese, 4) metabolically unhealthy and normal weight, 5) metabolically unhealthy and overweight and 6) metabolically unhealthy and obese. The researchers could only include studies which had measured metabolic health (normal blood sugar, blood pressure, cholesterol, etc.) alongside weight.
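To make the meta-analytic reasoning concrete, the sketch below pools relative risks from several studies using inverse-variance weighting on the log scale, a common fixed-effect approach. The three studies and their numbers are invented for illustration; they are not the data analyzed by Kramer and colleagues, whose actual statistical methods are described in the paper.

```python
import math

# Hedged sketch: fixed-effect, inverse-variance pooling of relative risks
# on the log scale. The (rr, ci_low, ci_high) tuples are invented for
# illustration; they are NOT the data from the Kramer et al. study.
hypothetical_studies = [
    (1.10, 0.85, 1.42),
    (1.30, 1.00, 1.69),
    (1.20, 0.90, 1.60),
]

weights, log_rrs = [], []
for rr, lo, hi in hypothetical_studies:
    log_rr = math.log(rr)
    # Standard error recovered from the width of the 95% CI on the log scale.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se ** 2)
    log_rrs.append(log_rr)

pooled_log = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
pooled_rr = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * pooled_se),
      math.exp(pooled_log + 1.96 * pooled_se))

print(f"Pooled RR = {pooled_rr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```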

The first important finding was that metabolically healthy overweight individuals did NOT have a significantly higher risk of death or cardiovascular events than metabolically healthy normal weight individuals. The researchers then analyzed the risk profile of metabolically healthy obese individuals and found that their risk was 1.19-fold higher than that of their normal weight counterparts, but this slight increase was not statistically significant: the confidence interval was 0.98 to 1.38, and for the finding to be statistically significant the lower bound would have needed to be above 1.0 rather than 0.98.

The researchers then decided to exclude studies which did not provide at least 10 years of follow-up data on the enrolled subjects. This exclusion removed studies which had shown no significant impact of obesity on survival. When the researchers re-analyzed the data after these exclusions, they found that metabolically healthy obese individuals did have a statistically significant increase in risk: 1.24-fold higher, with a confidence interval of 1.02 to 1.55. The lower bound of the confidence interval was now just above the 1.0 threshold, and the finding was therefore statistically significant.
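The significance judgments in the two preceding paragraphs boil down to a simple rule: assuming these are the usual 95% confidence intervals, a risk ratio is statistically significant at the conventional 5% level only if its confidence interval excludes 1.0. Here is a tiny sketch of that check using the intervals quoted above.

```python
# Sketch of the significance check described above: a relative risk is
# statistically significant at the 5% level only if its 95% confidence
# interval does not include 1.0.
def excludes_one(ci_low: float, ci_high: float) -> bool:
    return ci_low > 1.0 or ci_high < 1.0

# Intervals quoted in the text for metabolically healthy obese individuals:
print(excludes_one(0.98, 1.38))  # False -> not statistically significant
print(excludes_one(1.02, 1.55))  # True  -> statistically significant
```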

Another important finding was that among metabolically unhealthy individuals, all three groups (normal weight, overweight, obese) had a similar risk profile. Metabolically unhealthy normal weight subjects had a roughly three-fold higher risk than metabolically healthy normal weight individuals, and the metabolically unhealthy overweight and obese groups likewise had a roughly three-fold higher risk than their metabolically healthy counterparts. This means that metabolic parameters are far more important predictors of cardiovascular health than weight alone (compare the rather small 1.24-fold higher risk with the three-fold higher risk).

Unfortunately, the authors of the study did not provide a comprehensive discussion of these findings. Instead, they concluded that there is no “healthy obesity” and suggested that all excess weight is bad, even if one is metabolically healthy. The discussion section of the paper glosses over the important finding that metabolically healthy overweight individuals do not have a higher risk, and it does not emphasize that even the purported effects of obesity in metabolically healthy individuals were only marginally significant. The editorial accompanying the paper is even more biased and carries the definitive title “The Myth of Healthy Obesity”. “Myth” is a rather strong word considering the rather small impact of the individuals’ weight on their overall risk.

Some press reports also went along with the skewed interpretation presented by the study authors and the editorial. A BBC article describing the results stated:

“It has been argued that being overweight does not necessarily imply health risks if individuals remain healthy in other ways. The research, published in Annals of Internal Medicine, contradicts this idea.”

This BBC article conflates the terms overweight and obese, ignoring the fact that the study showed that metabolically healthy overweight individuals actually do not have a higher risk.

The New York Times blog cited a study author:

“The message here is pretty clear,” said the lead author, Dr. Caroline K. Kramer, a researcher at the University of Toronto. “The results are very consistent. It’s not O.K. to be obese. There is no such thing as healthy obesity.”

Suggesting that the message is “pretty clear” is somewhat overreaching. One of the key problems with using this meta-analysis to reach definitive conclusions about “healthy overweight” or “healthy obesity” is that the study authors and the editorial equate increased risk with being unhealthy. Definitions of what constitutes “health” or “disease” should be based on scientific parameters (biomarkers in the blood, functional assessments of cardiovascular health, etc.) and not just on increased risk. Men have a higher risk of dying from cardiovascular disease than women; does this mean that being a healthy man is a myth? Another major weakness of the study is that it included no data on regular exercise. Numerous studies have shown that regular exercise reduces the risk of cardiovascular events, so it is quite possible that the mild increase in cardiovascular risk in the metabolically healthy obese group is due, in part, to lower levels of exercise.

This study does not prove that healthy obesity is a “myth”. Overweight individuals with normal metabolic health do not yet have a significant elevation in their cardiovascular risk. At this stage, one can indeed be “overweight” as defined by body mass index and still be considered “healthy”, as long as all the other metabolic parameters are within the normal ranges and one follows general health recommendations such as avoiding tobacco and exercising regularly. If an overweight person progresses to obesity, he or she may be at slightly higher risk of cardiovascular events even if their metabolic health remains intact. The important take-home message from this study is that while obesity itself can increase the risk of cardiovascular disease, it is far more important to ensure metabolic health, by controlling cholesterol levels and blood pressure, preventing diabetes and encouraging regular exercise, than to focus solely on an individual’s weight.

 

ResearchBlogging.org

Kramer CK, Zinman B, & Retnakaran R (2013). Are metabolically healthy overweight and obesity benign conditions?: A systematic review and meta-analysis. Annals of internal medicine, 159 (11), 758-69 PMID: 24297192