Crowdfunding and Tribefunding in Science

Competition for government grants to fund scientific research remains fierce in the United States. The budget of the National Institutes of Health (NIH), which constitutes the major source of funding for US biological and medical research, has increased only modestly during the past decade and is not even keeping up with inflation. This problem is compounded by the fact that more scientists are applying for grants now than one or two decades ago, forcing the NIH to impose strict cut-offs and fund only the top 10-20% of all submitted research proposals. Such competition ought to be good for the field because it could theoretically improve the quality of science. Unfortunately, it is nearly impossible to discern meaningful differences between excellent research proposals. For example, if an institute of the NIH has a cut-off at the 13th percentile, then a grant proposal judged to be in the top 10% would receive funding but a proposal in the top 15% would end up not being funded. In an era when universities are also scaling back their financial support for research, an unfunded proposal can ultimately lead to the closure of a research laboratory and the dismissal of several members of a research team. Since the prospective assessment of a research proposal's scientific merit is somewhat subjective, it is quite possible that the budget constraints are creating cemeteries of brilliant ideas and concepts, a world of scientific what-ifs that are forever lost.

Red Panda

How do we scientists deal with these scenarios? Some of us keep soldiering on, writing one grant after another. Others change and broaden the direction of their research, hoping that proposals in other areas are more likely to receive the elusive scores that qualify for funding. Yet another approach is to submit research proposals to philanthropic foundations or non-profit organizations, but most of these organizations tend to focus on research which directly impacts human health. Receiving a foundation grant to study the fundamental mechanisms by which the internal clocks of plants coordinate external timing cues such as sunlight, food and temperature, for example, would be quite challenging. One alternative source of research funding that is now emerging is "scientific crowdfunding", in which scientists use web platforms to present their proposed research project to the public and thus attract donations from a large number of supporters. The basic underlying idea is that instead of receiving a $50,000 research grant from one foundation or government agency, researchers may receive smaller donations from 10, 50 or even 100 supporters and thus finance their project.

The website experiment.com is a scientific crowdfunding platform which presents an intriguing array of projects in search of backers, ranging from "Death of a Tyrant: Help us Solve a Late Cretaceous Dinosaur Mystery!" to "Eating tough stuff with floppy jaws – how do freshwater rays eat crabs, insects, and mollusks?" Many of the projects include a video in which the researchers outline the basic goals and significance of their project, and the project webpage provides more detailed information on how the funds will be used. There is also a "Discussion" section for each proposed project in which researchers answer questions raised by potential backers and, importantly, a "Results" section in which researchers can report emerging findings once their project is funded.

How can scientists get involved in scientific crowdfunding? Julien Vachelard and colleagues recently published an excellent overview of scientific crowdfunding. They analyzed the projects funded on experiment.com and found that projects which successfully reached their funding goal tended to have 30-40 backers. The total amount raised for most projects ranged from about $3,000 to $5,000. While these amounts are respectable, they are far lower than a standard foundation or government agency grant in biomedical research. These smaller amounts could pay for materials to expand ongoing projects, but they are not sufficient to carry out standard biomedical research projects, which also have to cover the salaries and stipends of the researchers. The annual stipends for postdoctoral research fellows alone run in the $40,000 – $55,000 range.

Vachelard and colleagues also provide excellent advice on how scientists can increase the likelihood of getting funded. Attention spans are limited on the internet, so researchers need to convey the key message of their research proposal in a clear, succinct and engaging manner. It is best to use powerful images and videos, set realistic goals (such as $3,000 to $5,000), articulate what the funds will be used for, participate in discussions to answer questions and also update backers with results as they emerge. Presenting research on a crowdfunding platform is an opportunity to educate the public and thus advance science, forcing scientists to develop better communication skills. These collateral benefits to the scientific enterprise extend beyond the actual amount of funding that is solicited.

One of the concerns voiced about scientific crowdfunding is that it may only work for "panda bear science", i.e. scientific research involving popular themes such as cute and cuddly animals or the search for life on other planets. However, a study of what actually gets funded in scientific crowdfunding campaigns revealed that the subject matter was not as important as how well the researchers communicated with their audience. A bigger challenge for the long-term success of scientific crowdfunding may be that the limited amounts raised only cover the cost of small sub-projects; they are neither sufficient to explore exciting new and independent ideas nor to offset salary and personnel costs. Donating $20 or $50 to a project is very different from donating $1,000, because the latter requires not only the necessary financial resources but also represents a major personal investment in the success of the research project. To initiate an exciting new biomedical research project in the $50,000 or $100,000 range, one needs several backers who are willing to donate $1,000 or more.
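
To make the arithmetic behind these donation tiers concrete, here is a minimal sketch in Python; the $50,000 target and the average donation sizes are illustrative assumptions based on the figures discussed above, not data from any particular campaign.

```python
# Illustrative arithmetic only: how many backers are needed to reach a funding
# target at different average donation sizes. The target and donation amounts
# are assumptions based on the figures discussed in the text.

def backers_needed(target: int, average_donation: int) -> int:
    """Return the number of backers required to reach the target,
    assuming every backer gives the same average amount."""
    return -(-target // average_donation)  # ceiling division

target = 50_000  # a modest new biomedical project, as discussed above

for avg in (20, 50, 100, 1_000):
    print(f"average donation ${avg:>5}: {backers_needed(target, avg):>5} backers needed")

# Approximate output:
#   average donation $   20:  2500 backers needed
#   average donation $   50:  1000 backers needed
#   average donation $  100:   500 backers needed
#   average donation $ 1000:    50 backers needed
```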

Perhaps one solution could be to move from a crowdfunding towards a tribefunding model. Crowds consist of a mass of anonymous people, mostly strangers in a confined space who do not engage with each other. Tribes, on the other hand, are characterized by individuals who experience a sense of belonging and fellowship; they share with and take responsibility for one another. The "tribes" in scientific tribefunding would consist of science supporters or enthusiasts who recognize the importance of the scientific work and also actively participate in discussions, not just with the scientists but also with each other. Members of a paleontology tribe could include specialists and non-specialists who are willing to put in the required time to study the scientific background of a proposed paleontology research project, understand how it would advance the field and see how even negative results (which are quite common in science) could be meaningful.

Tribefunding in higher education and science may sound like a novel concept but certain aspects of tribefunding are already common practice in the United States, albeit under different names. When wealthy alumni establish endowments for student scholarships, fellowship programs or research centers at their alma mater, it is in part because they feel a tribe-like loyalty towards the institutions that laid the cornerstones of their future success. The students and scholars who will benefit from these endowments are members of the same academic institution or tribe. The difference between the currently practiced form of philanthropic funding and the proposed tribefunding model is that tribe identity would not be defined by where one graduated from but instead by scientific interests.

Tribefunding could also impact the review process of scientific proposals. Currently, peer reviewers who assess the quality of scientific proposals for government agencies spend a substantial amount of time assessing the strengths and limitations of each proposal, and then convene either in person or via conference calls to arrive at a consensus regarding the merits of a proposal. Researchers often invest months of effort in preparing research proposals, which is why peer reviewers take their work very seriously and devote the required time to review each proposal carefully. Although the peer review system for grant proposals is often criticized because reviewers can make errors when they assess the quality of proposals, there are no established alternatives for how to assess research proposals. Most peer reviewers also realize that they are part of a "tribe" with the common interest of selecting the best science. However, the definition of a "peer" is usually limited to other scientists, most of whom are tenured professors at academic institutions, and input from non-academic science supporters is rarely solicited. In a tribefunding model, the definition of a "peer" would be expanded to include professional scientists as well as science supporters for any given area of science. All members of the tribe could participate in the review and selection of the best projects as well as throughout the funding period of the research projects that receive the support.

Merging the grassroots character and public outreach of crowdfunding with the sense of fellowship and active dialogue in a “scientific tribe” could take scientific crowdfunding to the next level. A comment section on a webpage is not sufficient to develop such a “tribe” affiliation but regular face-to-face meetings or conventional telephone/Skype conference calls involving several backers (independent of whether they can donate $50 or $5,000) may be more suitable. Developing a sense of ownership through this kind of communication would mean that every member of the science “tribe” realizes that they are a stakeholder. This sense of project ownership may not only increase donations, but could also create a grassroots synergy between laboratory and tribe, allowing for meaningful education and intellectual exchange.

Reference:

Vachelard J, Gambarra-Soares T, Augustini G, Riul P, Maracaja-Coutinho V (2016) A Guide to Scientific Crowdfunding. PLoS Biol 14(2): e1002373. doi:10.1371/journal.pbio.1002373

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

 


Are American Professors More Responsive to Requests Made by White Male Students?

Less than one fifth of PhD students in the United States will be able to pursue tenure-track academic faculty careers once they graduate from their program. Reduced federal funding for research and dwindling support from institutions for their tenure-track faculty are some of the major reasons why there is such an imbalance between the large numbers of PhD graduates and the limited availability of academic positions. Upon completing the program, PhD graduates have to consider non-academic job opportunities such as positions in industry, government agencies and non-profit foundations, but not every doctoral program is equally well-suited to prepare its graduates for such alternative careers. It is therefore essential for prospective students to carefully assess the doctoral program they want to enroll in and the primary mentor they would work with. The best approach is to proactively contact prospective mentors, meet with them and learn about the research opportunities in their group, but also to discuss how completing the doctoral program would prepare them for their future careers.


The vast majority of professors will gladly meet with a prospective graduate student and discuss research opportunities as well as long-term career options, especially if the student requesting the meeting clarifies its purpose. However, there are cases when students wait in vain for a response. Is it because the email never reached the professor, perhaps lost in the internet ether or a spam folder? Was the professor simply too busy to respond? A research study headed by Katherine Milkman from the University of Pennsylvania suggests that the lack of response from the professor may in part be influenced by the perceived race or gender of the student.


Milkman and her colleagues conducted a field experiment in which 6,548 professors at leading US academic institutions (covering 89 disciplines) were contacted via email to meet with a prospective graduate student. Here is the text of the email that was sent to each professor:

Subject Line: Prospective Doctoral Student (On Campus Next Monday)

Dear Professor [surname of professor inserted here],

I am writing you because I am a prospective doctoral student with considerable interest in your research. My plan is to apply to doctoral programs this coming Fall, and I am eager to learn as much as I can about research opportunities in the meantime.

I will be on campus next Monday, and although I know it is short notice, I was wondering if you might have 10 minutes when you would be willing to meet with me to briefly talk about your work and any possible opportunities for me to get involved in your research. Any time that would be convenient for you would be fine with me, as meeting with you is my first priority during this campus visit.

 Thank you in advance for your consideration.

Sincerely,

[Student’s full name inserted here]

As a professor who frequently receives emails from people who want to work in my laboratory, I feel that the email used in the research study was extremely well-crafted. The student only wants a brief meeting to explore potential opportunities without trying to extract any specific commitment from the professor. The email clearly states the long-term goal – applying to doctoral programs. The tone is very polite and the prospective student expresses a willingness to accommodate the professor's schedule. Each email was also personally addressed with the name of the contacted faculty member.

Milkman’s research team then assessed whether the willingness of the professors to respond depended on the gender or ethnicity of the prospective student.  Since this was an experiment, the emails and student names were all fictional but the researchers generated names which most readers would clearly associate with a specific gender and ethnicity.

Here is a list of the names they used:

White male names:  Brad Anderson, Steven Smith

White female names:  Meredith Roberts, Claire Smith

Black male names: Lamar Washington, Terell Jones

Black female names: Keisha Thomas, Latoya Brown

Hispanic male names: Carlos Lopez, Juan Gonzalez

Hispanic female names: Gabriella Rodriguez, Juanita Martinez

Indian male names: Raj Singh, Deepak Patel

Indian female names: Sonali Desai, Indira Shah

Chinese male names: Chang Huang, Dong Lin

Chinese female names: Mei Chen, Ling Wong

The researchers assessed whether the professors responded at all (either by agreeing to meet or by providing a reason why they could not) or simply ignored the email, and whether the response rate depended on the perceived ethnicity and gender of the student.

The overall response rate of the professors ranged from about 60% to 80%, depending on the research discipline as well as the perceived ethnicity and gender of the prospective student. When the emails were signed with names suggesting a white male background, professors were far less likely to ignore the email than when they were signed with female names or names indicating an ethnic minority background. Professors in the business sciences showed the strongest discrimination in their response rates: they ignored only 18% of emails that appeared to have been written by a white male but ignored 38% of emails signed with names indicating a female gender or ethnic minority background. Professors in the education disciplines ignored 21% of emails with white male names versus 35% with female or minority names. The discrimination gaps in the health sciences (33% vs 43%) and life sciences (32% vs 39%) were smaller but still significant, whereas there was no statistically significant difference in the response rates of humanities professors. Doctoral programs in the fine arts were an interesting exception, in which emails apparently from white male students were more likely to be ignored (26%) than those from female or minority candidates (only 10%).

The discrimination primarily occurred at the initial response stage. When professors did respond, there was no difference in terms of whether they were able to make time for the student. The researchers also noted that responsiveness discrimination in any discipline was not restricted to one gender or ethnicity. In business doctoral programs, for example, professors were most likely to ignore emails with black female names and Indian male names. Significant discrimination against white female names (when compared to white male names) predicted increased discrimination against other ethnic minorities. Surprisingly, the researchers found that a higher representation of female and minority faculty at an institution did not necessarily improve responsiveness towards requests from potential female or minority students.

This carefully designed study with a large sample size of over 6,500 professors reveals the prevalence of bias against women and ethnic minorities at top US institutions. This bias may be so entrenched and subconscious that it cannot be remedied simply by increasing the percentage of female or ethnic minority professors in academia. Instead, it is important that professors understand that they may harbor these biases even if they are not aware of them. Something as simple as deleting an email from a prospective student because we think that we are too busy to respond may be indicative of an insidious gender or racial bias that we need to understand and confront. Increased awareness and introspection as well as targeted measures by institutions are the important first steps to ensure that students receive the guidance and mentorship they need, independent of their gender or ethnic background.

Reference:

Milkman KL, Akinola M, Chugh D. (2015). What Happens Before? A Field Experiment Exploring How Pay and Representation Differentially Shape Bias on the Pathway Into Organizations. Journal of Applied Psychology, 100(6), 1678–1712.

Note: An earlier version of this post was first published on the 3Quarksdaily Blog.


The Dire State of Science in the Muslim World

Universities and the scientific infrastructures in Muslim-majority countries need to undergo radical reforms if they want to avoid falling by the wayside in a world characterized by major scientific and technological innovations. This is the conclusion reached by Nidhal Guessoum and Athar Osama in their recent commentary “Institutions: Revive universities of the Muslim world“, published in the scientific journal Nature. The physics and astronomy professor Guessoum (American University of Sharjah, United Arab Emirates) and Osama, who is the founder of the Muslim World Science Initiative, use the commentary to summarize the key findings of the report “Science at Universities of the Muslim World” (PDF), which was released in October 2015 by a task force of policymakers, academic vice-chancellors, deans, professors and science communicators. This report is one of the most comprehensive analyses of the state of scientific education and research in the 57 countries with a Muslim-majority population, which are members of the Organisation of Islamic Cooperation (OIC).

Map of Saudi Arabia using electronic circuits via Shutterstock (copyright drical)

Here are some of the key findings:

1.    Lower scientific productivity in the Muslim world: The 57 Muslim-majority countries constitute 25% of the world’s population, yet they only generate 6% of the world’s scientific publications and 1.6% of the world’s patents.

2.    Lower scientific impact of papers published in the OIC countries: Not only are Muslim-majority countries severely under-represented in terms of the number of publications, but the papers which do get published are also cited far less than papers from non-Muslim-majority countries. One illustrative example is that of Iran and Switzerland. In the 2014 SCImago ranking of publications by country, Iran was the highest-ranked Muslim-majority country with nearly 40,000 publications, just slightly ahead of Switzerland with 38,000 publications – even though Iran's population of 77 million is nearly ten times larger than that of Switzerland. However, the average Swiss publication was more than twice as likely to garner a citation from scientific colleagues as an Iranian publication, indicating that the actual scientific impact of research in Switzerland was far greater than that of Iran.
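
Put on a per-capita basis, the gap in output looks quite different from the raw counts. Here is a minimal back-of-the-envelope sketch in Python; the publication counts and Iran's population come from the comparison above, while the Swiss population of roughly 8 million is an assumption consistent with the ten-fold ratio mentioned in the text.

```python
# Back-of-the-envelope per-capita comparison of the 2014 publication counts
# cited above. Population figures are approximate: Iran ~77 million (from the
# text), Switzerland ~8 million (assumed, consistent with the ~10x ratio).

countries = {
    "Iran":        {"publications": 40_000, "population_millions": 77},
    "Switzerland": {"publications": 38_000, "population_millions": 8},
}

for name, d in countries.items():
    per_million = d["publications"] / d["population_millions"]
    print(f"{name}: ~{per_million:,.0f} publications per million inhabitants")

# Approximate output:
#   Iran: ~519 publications per million inhabitants
#   Switzerland: ~4,750 publications per million inhabitants
```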

To correct for economic differences between countries that may account for the quality or impact of the scientific work, the analysis also compared selected OIC countries to matched non-Muslim countries with similar per capita Gross Domestic Product (GDP) values (PDF). The per capita GDP in 2010 was $10,136 for Turkey, $8,754 for Malaysia and only $7,390 for South Africa. However, South Africa still outperformed both Turkey and Malaysia in terms of average citations per scientific paper in the years 2006-2015 (Turkey: 5.6; Malaysia: 5.0; South Africa: 9.7).

3.    Muslim-majority countries make minimal investments in research and development: The world average for investment in research and development is roughly 1.8% of GDP. Advanced developed countries invest 2-3% of their GDP, whereas the average for the OIC countries is only 0.5%, less than a third of the world average! One could perhaps understand why poverty-stricken Muslim countries such as Pakistan do not have the funds to invest in research because their more immediate concern is to provide basic necessities to the population. However, one of the most dismaying findings of the report is the dismally low rate of research investment made by the members of the Gulf Cooperation Council (GCC, the economic union of the six oil-rich Gulf countries Saudi Arabia, Kuwait, Bahrain, Oman, the United Arab Emirates and Qatar, with a mean per capita GDP of over $30,000, which is comparable to that of the European Union). Saudi Arabia and Kuwait, for example, invest less than 0.1% of their GDP in research and development, far below the OIC average of 0.5%.

So how does one go about fixing this dire state of science in the Muslim world? Some fixes are rather obvious, such as increasing the investment in scientific research and education, especially in those OIC countries which have the financial means but are currently lagging far behind in terms of how much funding is made available to improve their scientific infrastructures. Guessoum and Osama also highlight the importance of introducing key metrics to assess scientific productivity and the quality of science education. It is not easy to objectively measure scientific and educational impact, and one can argue about the significance or reliability of any given metric. But without any metrics, it will be very difficult for OIC universities to identify problems and weaknesses, build new research and educational programs and reward excellence in research and teaching. There is also a need for reforming the curriculum so that it shifts its focus from lecture-based teaching, which is so prevalent in OIC universities, to inquiry-based teaching in which students learn science hands-on by experimentally testing hypotheses and are encouraged to ask questions.

In addition to these commonsense suggestions, the task force also put forward a rather intriguing proposition to strengthen scientific research and education: place a stronger emphasis on basic liberal arts in science education. I could not agree more because I strongly believe that exposing science students to the arts and humanities plays a key role in fostering the creativity and curiosity required for scientific excellence. Science is a multi-disciplinary enterprise, and scientists can benefit greatly from studying philosophy, history or literature. A course in philosophy, for example, can teach science students to question their basic assumptions about reality and objectivity, encourage them to examine their own biases, challenge authority and understand the importance of doubt and uncertainty, all of which will likely help them become critical thinkers and better scientists.

However, the specific examples provided by Guessoum and Osama do not necessarily indicate support for this kind of broad liberal arts education. They mention the example of the newly founded private Habib University in Karachi, which mandates that all science and engineering students also take classes in the humanities, including a two-semester course in "hikma" or "traditional wisdom". Upon reviewing the details of this philosophy course on the university's website, it seems that the course is a history of Islamic philosophy focused on antiquity and pre-modern texts which date back to the "Golden Age" of Islam. The task force also specifically applauds an online course developed by Ahmed Djebbar, an emeritus science historian at the University of Lille in France, which attempts to stimulate scientific curiosity in young pre-university students by relating scientific concepts to great discoveries from the Islamic "Golden Age". My concern is that this is a rather Islamocentric form of liberal arts education. Do students who have spent all their lives growing up in a Muslim society really need to revel in the glories of a bygone era in order to get excited about science? Does the Habib University philosophy course focus on Islamic philosophy because the university feels that students should be more aware of their cultural heritage, or are there concerns that exposing students to non-Islamic ideas could cause problems with students, parents, university administrators or other members of society who could perceive this as an attack on Islamic values? If the true purpose of a liberal arts education is to expand the minds of students by exposing them to new ideas, wouldn't it make more sense to focus on non-Islamic philosophy? It is definitely not a good idea to coddle Muslim students by adulating the "Golden Age" of Islam or using kid gloves when discussing philosophy in order to avoid offending them.

This leads us to a question that is not directly addressed by Guessoum and Osama: How “liberal” is a liberal arts education in countries with governments and societies that curtail the free expression of ideas? The Saudi blogger Raif Badawi was sentenced to 1,000 lashes and 10 years in prison because of his liberal views that were perceived as an attack on religion. Faculty members at universities in Saudi Arabia who teach liberal arts courses are probably very aware of these occupational hazards. At first glance, professors who teach in the sciences may not seem to be as susceptible to the wrath of religious zealots and authoritarian governments. However, the above-mentioned interdisciplinary nature of science could easily spell trouble for free-thinking professors or students. Comments about evolutionary biology, the ethics of genome editing or discussing research on sexuality could all be construed as a violation of societal and religious norms.

The 2010 study Faculty perceptions of academic freedom at a GCC university surveyed professors at an anonymous GCC university (most likely Qatar University since roughly 25% of the faculty members were Qatari nationals and the authors of the study were based in Qatar) regarding their views of academic freedom. The vast majority of faculty members (Arab and non-Arab) felt that academic freedom was important to them and that their university upheld academic freedom. However, in interviews with individual faculty members, the researchers found that the professors were engaging in self-censorship in order to avoid untoward repercussions. Here are some examples of the comments from the faculty at this GCC University:

“I am fully aware of our culture. So, when I suggest any topic in class, I don’t need external censorship except mine.”

“Yes. I avoid subjects that are culturally inappropriate.”

“Yes, all the time. I avoid all references to Israel or the Jewish people despite their contributions to world culture. I also avoid any kind of questioning of their religious tradition. I do this out of respect.”

This latter comment is especially painful for me because one of my heroes who inspired me to become a cell biologist was the Italian Jewish scientist Rita Levi-Montalcini. She revolutionized our understanding of how cells communicate with each other using growth factors. She was also forced to secretly conduct her experiments in her bedroom because the Fascists banned all “non-Aryans” from going to the university laboratory. Would faculty members who teach the discovery of growth factors at this GCC University downplay the role of the Nobel laureate Levi-Montalcini because she was Jewish? We do not know how prevalent this form of self-censorship is in other OIC countries because the research on academic freedom in Muslim-majority countries is understandably scant. Few faculty members would be willing to voice their concerns about government or university censorship and admitting to self-censorship is also not easy.

The task force report on science in the universities of Muslim-majority countries is an important first step towards reforming scientific research and education in the Muslim world. Increasing investments in research and development, using and appropriately acting on carefully selected metrics as well as introducing a core liberal arts curriculum for science students will probably all significantly improve the dire state of science in the Muslim world. However, the reform of the research and education programs needs to also include discussions about the importance of academic freedom. If Muslim societies are serious about nurturing scientific innovation, then they will need to also ensure that scientists, educators and students will be provided with the intellectual freedom that is the cornerstone of scientific creativity.

References:

Guessoum, N., & Osama, A. (2015). Institutions: Revive universities of the Muslim world. Nature, 526(7575), 634-6.

Romanowski, M. H., & Nasser, R. (2010). Faculty perceptions of academic freedom at a GCC university. Prospects, 40(4), 481-497.

 


 Note: An earlier version of this article was first published on the 3Quarksdaily blog.

 


Blissful Ignorance: How Environmental Activists Shut Down Molecular Biology Labs in High Schools

Hearing about the HannoverGEN project made me feel envious and excited. Envious, because I wish my high school had offered the kind of hands-on molecular biology training provided to high school students in Hannover, the capital of the German state of Niedersachsen. Excited, because it reminded me of the joy I felt when I first isolated DNA and ran gels after restriction enzyme digests during my first year of university in Munich. I knew that many of the students at the HannoverGEN high schools would be similarly thrilled by their laboratory experience and perhaps even pursue careers as biologists or biochemists.

What did HannoverGEN entail? It was an optional pilot program initiated and funded by the state government of Niedersachsen at four high schools in the Hannover area. Students enrolled in the HannoverGEN classes would learn to use molecular biology tools typically reserved for college-level or graduate school courses in order to study plant genetics. Some of the basic experiments involved isolating DNA from cabbage or learning how bacteria transfer genes to plants; more advanced experiments enabled the students to analyze whether or not the genome of a provided maize sample had been genetically modified. Each experimental unit was accompanied by relevant theoretical instruction on the molecular mechanisms of gene expression and biotechnology as well as ethical discussions regarding the benefits and risks of generating genetically modified organisms ("GMOs"). The details of the HannoverGEN program are now only accessible through the Wayback Machine Internet archive because the award-winning educational program and the associated website were shut down in 2013 at the behest of German anti-GMO activist groups, environmental activists, Greenpeace, the Niedersachsen Green Party and the German organic food industry.

Why did these activists and organic food industry lobbyists oppose a government-funded educational program which improved the molecular biology knowledge and expertise of high school students? A 2012 press release entitled "Keine Akzeptanzbeschaffung für Agro-Gentechnik an Schulen!" ("No Acceptance for Agricultural Gene Technology at Schools"), issued by an alliance representing "organic" or "natural food" farmers and accompanied by the publication of a critical "study" with the same title (PDF), funded by this alliance as well as its anti-GMO partners, gives us some clues. They feared that the high school students might become too accepting of biotechnology in agriculture and that the curriculum did not sufficiently highlight all the potential dangers of GMOs. By allowing the ethical discussions to not only cover the risks but also mention the benefits of genetically modifying crops, students might walk away with the idea that GMOs could be beneficial for humankind. The group believed that taxpayer money should not be used to foster special interests such as those of the agricultural industry, which may want to use GMOs.

A response by the University of Hannover (PDF), which had helped develop the curriculum and coordinated the classes for the high school students, carefully analyzed the complaints of the anti-GMO activists. The author of the anti-HannoverGEN "study" had not visited the HannoverGEN laboratories, nor had he interviewed the biology teachers or students enrolled in the classes. In fact, his critique was based on weblinks that were not even used in the curriculum by the HannoverGEN teachers or students. His analysis ignored the balanced presentation of biotechnology that formed the basis of the HannoverGEN curriculum and the fact that discussing the potential risks of genetic modification was a core topic in all the classes.

Unfortunately, this shoddily prepared "study" had a significant impact, in part because it was widely promoted by partner organizations. Its release in the autumn of 2012 came at an opportune time for political activists because Niedersachsen was about to have an election. Campaigning against GMOs seemed like a perfect cause for the Green Party, and a program which taught biotechnology to high school students became a convenient lightning rod. When the Social Democrats and the Green Party formed a coalition after winning the election in early 2013, nixing the HannoverGEN high school program was formally included in the so-called coalition contract. This is a document in which coalition partners outline the key goals for the upcoming four-year period. When one considers how many major issues and problems the government of a large German state has to face, such as healthcare, education, unemployment or immigration, it is mind-boggling that de-funding a program involving only four high schools received so much attention that it needed to be anchored in the coalition contract. In fact, it is a testimony to the influence and zeal of the anti-GMO lobby.

Once the cancellation of HannoverGEN was announced, the Hannover branch of Greenpeace also took credit for campaigning against this high school program and celebrated its victory. The Greenpeace anti-GMO activist David Petersen said that the program was too cost-intensive because equipping high school laboratories with state-of-the-art molecular biology equipment had already cost more than 1 million euros. The previous center-right government which had initiated the HannoverGEN project had been planning to expand the program to even more high schools because of its success and national recognition for innovative teaching. According to Petersen, this would have wasted even more taxpayer money without adequately conveying the dangers of using GMOs in agriculture.

The scientific community was shaken up by the decision of the new Social Democrat-Green Party coalition government in Niedersachsen. This was an attack on the academic freedom of schools under the guise of accusing them of promoting special interests, while ignoring that the anti-GMO activists were representing their own special interests: the "study" attacking HannoverGEN was funded by the lucrative "organic" or "natural food" industry! Scientists and science writers such as Martin Ballaschk or Lars Fischer wrote excellent critical articles stating that squashing high-quality, hands-on science programs could not lead to better decision-making. How could ignorant students have a better grasp of GMO risks and benefits than those who receive relevant formal science education and can thus make truly informed decisions? Sadly, this outcry by scientists and science writers did not make much of a difference. It did not seem that the media felt this was much of a cause to fight for. I wonder if the media response would have been just as lackluster if the government had de-funded a hands-on science lab to study the effects of climate change.

In 2014, the government of Niedersachsen announced that it would resurrect an advanced biology laboratory program for high schools under the generic and vague title "Life Science Lab". By removing the word "Gen" from the title (a word which seems to trigger visceral antipathy among anti-GMO activists), de-emphasizing genome science and removing any discussion of GMOs from the curriculum, this new program would leave students in the dark about GMOs. Ignorance is bliss from an anti-GMO activist perspective because the void of scientific ignorance can be filled with fear.

From the very first day that I could vote in Germany during the federal election of 1990, I always viewed the Green Party as a party that represented my generation. A party of progressive ideas, concerned about our environment and social causes. However, the HannoverGEN incident is just one example of how the Green Party is caving in to ideologies, thus losing its open-mindedness and progressive nature. In the United States, the anti-science movement, which attacks teaching climate change science or evolutionary biology at schools, tends to be rooted in the right wing political spectrum. Right wingers or libertarians are the ones who always complain about taxpayer dollars being wasted and used to promote agendas in schools and universities. But we should not forget that there is also a different anti-science movement rooted in the leftist and pro-environmental political spectrum – not just in Germany. As a scientist, I feel that it is becoming increasingly difficult to support the Green Party because of its anti-science stance.

I worry about all anti-science movements, especially those which attack science education. There is nothing wrong with questioning special interests and ensuring that school and university science curricula are truly balanced. But the balance needs to be rooted in scientific principles, not political ideologies. Science education has a natural bias – it is biased towards knowledge that is backed up by scientific evidence. We can hypothetically discuss dangers of GMOs but the science behind the dangers of GMO crops is very questionable. Just like environmental activists and leftists agree with us scientists that we do not need to give climate change deniers and creationists “balanced” treatment in our science curricula, they should also accept that much of the “anti-GMO science” is currently more based on ideology than on actual scientific data. Our job is to provide excellent science education so that our students can critically analyze and understand scientific research, independent of whether or not it supports our personal ideologies.

 

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Murder Your Darling Hypotheses But Do Not Bury Them

“Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.”

Sir Arthur Quiller-Couch (1863–1944). On the Art of Writing. 1916

 

Murder your darlings. The British writer Sir Arthur Quiller-Couch shared this piece of writerly wisdom when he gave his inaugural lecture series at Cambridge, asking writers to consider deleting words, phrases or even paragraphs that are especially dear to them. The minute writers fall in love with what they write, they are bound to lose their objectivity and may not be able to judge how their choice of words will be perceived by the reader. But writers aren't the only ones who can fall prey to the Pygmalion syndrome. Scientists often find themselves in a similar situation when they develop "pet" or "darling" hypotheses.

Hypothesis via Shutterstock

How do scientists decide when it is time to murder their darling hypotheses? The simple answer is that scientists ought to give up scientific hypotheses once the experimental data is unable to support them, no matter how “darling” they are. However, the problem with scientific hypotheses is that they aren’t just generated based on subjective whims. A scientific hypothesis is usually put forward after analyzing substantial amounts of experimental data. The better a hypothesis is at explaining the existing data, the more “darling” it becomes. Therefore, scientists are reluctant to discard a hypothesis because of just one piece of experimental data that contradicts it.

In addition to experimental data, a number of additional factors can play a major role in determining whether scientists will discard or uphold their darling scientific hypotheses. Some scientific careers are built on specific scientific hypotheses which set certain scientists apart from competing rival groups. Research grants, which are essential to the survival of a scientific laboratory because they provide salary funds for the senior researchers as well as the junior trainees and research staff, are written in a hypothesis-focused manner, outlining experiments that will lead to the acceptance or rejection of selected scientific hypotheses. Well-written research grants always consider the possibility that the core hypothesis may be rejected based on the future experimental data. But if the hypothesis has to be rejected, then the scientist has to explain the discrepancies between the preferred hypothesis that is now falling into disrepute and all the preliminary data that had led her to formulate the initial hypothesis. Such discrepancies could endanger the renewal of the grant funding and the future of the laboratory. Last but not least, it is very difficult to publish a scholarly paper describing a rejected scientific hypothesis without providing an in-depth mechanistic explanation for why the hypothesis was wrong and proposing alternate hypotheses.

For example, it is quite reasonable for a cell biologist to formulate the hypothesis that protein A improves the survival of neurons by activating pathway X, based on prior scientific studies which have shown that protein A is an activator of pathway X in neurons and other studies which prove that pathway X improves cell survival in skin cells. If the data supports the hypothesis, publishing this result is fairly straightforward because it conforms to the general expectations. However, if the data does not support this hypothesis, then the scientist has to explain why. Is it because protein A did not activate pathway X in her experiments? Is it because pathway X functions differently in neurons than in skin cells? Is it because neurons and skin cells have a different threshold for survival? Experimental results that do not conform to the predictions have the potential to uncover exciting new scientific mechanisms, but chasing down these alternate explanations requires a lot of time and resources which are becoming increasingly scarce. Therefore, it shouldn't come as a surprise that some scientists may consciously or subconsciously ignore selected pieces of experimental data which contradict their darling hypotheses.

Let us move from these hypothetical situations to the real world of laboratories. There is surprisingly little data on how and when scientists reject hypotheses, but John Fugelsang and Kevin Dunbar at Dartmouth conducted a rather unique study, "Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory", in 2004 in which they researched researchers. They sat in on the laboratory meetings of three renowned molecular biology laboratories and carefully recorded how scientists presented their laboratory data and how they handled results which contradicted the predictions based on their hypotheses and models.

In their final analysis, Fugelsang and Dunbar included 417 scientific results that were presented at the meetings, of which roughly half (223 out of 417) were not consistent with the predictions. Only 12% of these inconsistencies led to a change of the scientific model (and thus a revision of hypotheses). In the vast majority of cases, the laboratories decided to follow up the studies by repeating and modifying the experimental protocols, thinking that the fault did not lie with the hypotheses but instead with the manner in which the experiments were conducted. In the follow-up experiments, 84 of the inconsistent findings could be replicated, and this in turn resulted in a gradual modification of the underlying models and hypotheses in the majority of cases. However, even when the inconsistent results were replicated, only 61% of the models were revised, which means that 39% of the cases did not lead to any significant changes.
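
The proportions quoted above can be reconstructed with a few lines of arithmetic. The following is only a sketch based on the summary figures reported here, not on the original dataset of Fugelsang and Dunbar.

```python
# Reconstructing the approximate proportions from the Fugelsang & Dunbar
# figures quoted above; these are the article's summary numbers, not the
# original dataset.

total_results      = 417
inconsistent       = 223   # results that did not match predictions
immediate_revision = 0.12  # share of inconsistencies that changed the model right away
replicated         = 84    # inconsistent findings reproduced in follow-up experiments
revised_after_repl = 0.61  # share of replicated inconsistencies that led to model revision

print(f"Inconsistent results: {inconsistent / total_results:.0%} of all presented results")
print(f"Immediate model revisions: ~{inconsistent * immediate_revision:.0f} cases")
print(f"Model revisions after replication: ~{replicated * revised_after_repl:.0f} cases")

# Approximate output:
#   Inconsistent results: 53% of all presented results
#   Immediate model revisions: ~27 cases
#   Model revisions after replication: ~51 cases
```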

The study did not provide much information on the long-term fate of the hypotheses and models and we obviously cannot generalize the results of three molecular biology laboratory meetings at one university to the whole scientific enterprise. Also, Fugelsang and Dunbar’s study did not have a large enough sample size to clearly identify the reasons why some scientists were willing to revise their models and others weren’t. Was it because of varying complexity of experiments and models? Was it because of the approach of the individuals who conducted the experiments or the laboratory heads? I wish there were more studies like this because it would help us understand the scientific process better and maybe improve the quality of scientific research if we learned how different scientists handle inconsistent results.

In my own experience, I have also struggled with results which defied my scientific hypotheses. In 2002, we found that stem cells in human fat tissue could help grow new blood vessels. Yes, you could obtain fat from a liposuction performed by a plastic surgeon and inject these fat-derived stem cells into animal models of low blood flow in the legs. Within a week or two, the injected cells helped restore the blood flow to near normal levels! The simplest hypothesis was that the stem cells converted into endothelial cells, the cell type which forms the lining of blood vessels. However, after several months of experiments, I found no consistent evidence of fat-derived stem cells transforming into endothelial cells. We ended up publishing a paper which proposed an alternative explanation that the stem cells were releasing growth factors that helped grow blood vessels. But this explanation was not as satisfying as I had hoped. It did not account for the fact that the stem cells had aligned themselves alongside blood vessel structures and behaved like blood vessel cells.

Even though I "murdered" my darling hypothesis of fat-derived stem cells converting into blood vessel endothelial cells at the time, I did not "bury" the hypothesis. It kept simmering in the back of my mind until roughly one decade later, when we were again studying how stem cells improve blood vessel growth. The difference was that this time, I had access to a live-imaging confocal laser microscope which allowed us to take images of cells labeled with red and green fluorescent dyes over long periods of time. Below, you can see a video of human bone marrow mesenchymal stem cells (labeled green) and human endothelial cells (labeled red) observed with the microscope overnight. The short movie compresses images obtained throughout the night and shows that the stem cells indeed do not convert into endothelial cells. Instead, they form a scaffold and guide the endothelial cells (red) by allowing them to move alongside the green scaffold and thus construct their network. This work was published in 2013 in the Journal of Molecular and Cellular Cardiology, roughly a decade after I had been forced to give up on the initial hypothesis. Back in 2002, I had assumed that the stem cells were turning into blood vessel endothelial cells because they aligned themselves in blood-vessel-like structures. I had never considered the possibility that they were serving as a scaffold for the endothelial cells.

This and other similar experiences have led me to reformulate the "murder your darlings" commandment to "murder your darling hypotheses but do not bury them". Instead of repeatedly trying to defend scientific hypotheses that cannot be supported by emerging experimental data, it is better to give up on them. But this does not mean that we should forget and bury those initial hypotheses. With newer technologies, resources or collaborations, we may find ways to explain inconsistent results years later that were not previously available to us. This is why I regularly peruse my cemetery of dead hypotheses on my hard drive to see if there are ways of resurrecting them, not in their original form but in a modification that I am now able to test.

 

Reference:


Fugelsang, J., Stein, C., Green, A., & Dunbar, K. (2004). Theory and data interactions of the scientific mind: Evidence from the molecular and the cognitive laboratory. Canadian Journal of Experimental Psychology, 58(2), 86-95. doi:10.1037/h0085799

 

Note: An earlier version of this article first appeared on 3Quarksdaily.

Literature and Philosophy in the Laboratory Meeting

Research institutions in the life sciences engage in two types of regular scientific meet-ups: scientific seminars and lab meetings. The structure of scientific seminars is fairly standard. Speakers give Powerpoint presentations (typically 45 to 55 minutes long) which provide the necessary scientific background, summarize their group’s recent published scientific work and then (hopefully) present newer, unpublished data. Lab meetings are a rather different affair. The purpose of a lab meeting is to share the scientific work-in-progress with one’s peers within a research group and also to update the laboratory heads. Lab meetings are usually less formal than seminars, and all members of a research group are encouraged to critique the presented scientific data and work-in-progress. There is no need to provide much background information because the audience of peers is already well-acquainted with the subject and it is not uncommon to show raw, unprocessed data and images in order to solicit constructive criticism and guidance from lab members and mentors on how to interpret the data. This enables peer review in real-time, so that, hopefully, major errors and flaws can be averted and newer ideas incorporated into the ongoing experiments.

Books

During the past two decades that I have actively participated in biological, psychological and medical research, I have observed very different styles of lab meetings. Some involve brief 5-10 minute updates from each group member; others develop a rotation system in which one lab member has to present the progress of their ongoing work in a seminar-like, polished format with publication-quality images. Some labs have two-hour meetings twice a week, while other labs meet only every two weeks for an hour. Some groups bring snacks or coffee to lab meetings; others spend a lot of time discussing logistics such as obtaining and sharing biological reagents or establishing timelines for submitting manuscripts and grants. During the first decade of my work as a researcher, I was a trainee and followed the format of whatever group I belonged to. During the past decade, I have been heading my own research group and it has become my responsibility to structure our lab meetings. I do not know which format works best, so I approach lab meetings like our experiments. Developing a good lab meeting structure is a work-in-progress which requires continuous exploration and testing of new approaches. During the current academic year, I decided to try out a new twist: incorporating literature and philosophy into the weekly lab meetings.

My research group studies stem cells and tissue engineering, cellular metabolism in cancer cells and stem cells, and the inflammation of blood vessels. Most of our work focuses on identifying molecular and cellular pathways in cells, and we then test our findings in animal models. Over the years, I have noticed that the increasing complexity of the molecular and cellular signaling pathways and of the technologies we employ makes it easy to forget the "big picture" of why we are even conducting the experiments. Determining whether protein A is required for phenomenon X and whether protein B is a necessary co-activator which acts in concert with protein A becomes such a central focus of our work that we may not always remember what it is that compels us to study phenomenon X in the first place. Some of our research has direct medical relevance, but at other times we primarily want to unravel the awe-inspiring complexity of cellular processes. But the question of whether our work is establishing a definitive cause-effect relationship or whether we are uncovering yet another mechanism within an intricate web of causes and effects sometimes falls by the wayside. When asked to explain the purpose or goals of our research, we have become so used to directing a laser pointer onto a slide of a cellular model that it becomes challenging to explain the nature of our work without visual aids.

This fall, I introduced a new component into our weekly lab meetings. After our usual round-up of new experimental data and progress, I suggested that each week one lab member should give a brief 15 minute overview about a book they had recently finished or were still reading. The overview was meant to be a “teaser” without spoilers, explaining why they had started reading the book, what they liked about it, and whether they would recommend it to others. One major condition was to speak about the book without any Powerpoint slides! But there weren’t any major restrictions when it came to the book; it could be fiction or non-fiction and published in any language of the world (but ideally also available in an English translation). If lab members were interested and wanted to talk more about the book, then we would continue to discuss it, otherwise we would disband and return to our usual work. If nobody in my lab wanted to talk about a book then I would give an impromptu mini-talk (without Powerpoint) about a topic relating to the philosophy or culture of science. I use the term “culture of science” broadly to encompass topics such as the peer review process and post-publication peer review, the question of reproducibility of scientific findings, retractions of scientific papers, science communication and science policy – topics which have not been traditionally considered philosophy of science issues but still relate to the process of scientific discovery and the dissemination of scientific findings.

One member of our group introduced us to “For Whom the Bell Tolls” by Ernest Hemingway. He had recently lived in Spain as a postdoctoral research fellow and shared some of his own experiences of how his Spanish friends and colleagues talked about the Spanish Civil War. At another lab meeting, we heard about “Sycamore Row” by John Grisham, and the ensuing discussion revolved around race relations in Mississippi. I spoke about “A Tale for the Time Being” by Ruth Ozeki and the difficulties that the book’s protagonist faced as an outsider when her family returned to Japan after living in Silicon Valley. I think that the book which got nearly everyone in the group talking was “Far From the Tree: Parents, Children and the Search for Identity” by Andrew Solomon. The book describes how families grapple with profound physical or cognitive differences between parents and children. The PhD student who discussed the book focused on the “Deafness” chapter of this nearly 1000-page tome, but she also placed it in the broader context of parenting, love and the stigma of disability. We stayed in the conference room long after the planned 15 minutes, talking about being “disabled” or being “differently abled” and the challenges that parents and children face.

In the weeks when nobody had a book they wanted to present, we used the time to touch on the cultural and philosophical aspects of science, such as Thomas Kuhn’s concept of paradigm shifts in “The Structure of Scientific Revolutions”, Karl Popper’s principles of falsifiability of scientific statements, the challenge of reproducibility of scientific results in stem cell biology and cancer research, or the emergence of PubPeer as a post-publication peer review website. Some of the lab members had heard of Thomas Kuhn’s or Karl Popper’s ideas before, but by coupling them to a lab meeting, we were able to illustrate these ideas using our own work. A lot of 20th-century philosophy of science arose from ideas rooted in physics. When undergraduate or graduate students take courses on the philosophy of science, it isn’t always easy for them to apply these abstract principles to their own lab work, especially if they pursue a research career in the life sciences. Thomas Kuhn saw Newtonian and Einsteinian theories as distinct paradigms, but what constitutes a paradigm shift in stem cell biology? Is the ability to generate induced pluripotent stem cells from mature adult cells a paradigm shift or “just” a technological advance?

It is difficult for me to know whether the members of my research group enjoy or benefit from these humanities blurbs at the end of our lab meetings. Perhaps they are just tolerating them as eccentricities of the management and maybe they will tire of them. I personally find these sessions valuable because I believe they help ground us in reality. They remind us that it is important to think and read outside of the box. As scientists, we all read numerous scientific articles every week just to stay up-to-date in our area(s) of expertise, but that does not exempt us from also thinking and reading about important issues facing society and the world we live in. I do not know whether discussing literature and philosophy makes us better scientists but I hope that it makes us better people.


Note: An earlier version of this article was first published on the 3Quarksdaily blog.


Thomas Kuhn (2012). The Structure of Scientific Revolutions. University of Chicago Press. DOI: 10.7208/chicago/9780226458106.001.0001

To Err Is Human, To Study Errors Is Science

The family of cholesterol-lowering drugs known as ‘statins’ is among the most widely prescribed medications for patients with cardiovascular disease. Large-scale clinical studies have repeatedly shown that statins can significantly lower cholesterol levels and the risk of future heart attacks, especially in patients who have already been diagnosed with cardiovascular disease. A more contentious issue is the use of statins in individuals who have no history of heart attacks, strokes or blockages in their blood vessels. Instead of waiting for the first major manifestation of cardiovascular disease, should one start statin therapy early on to prevent cardiovascular disease?

If statins were free of charge and had no side effects whatsoever, the answer would be rather straightforward: Go ahead and use them as soon as possible. However, like all medications, statins come at a price. There is the financial cost to the patient or their insurance to pay for the medications, and there is a health cost to the patients who experience potential side effects. The Guideline Panel of the American College of Cardiology (ACC) and the American Heart Association (AHA) therefore recently recommended that the preventive use of statins in individuals without known cardiovascular disease should be based on personalized risk calculations. If the risk of developing disease within the next 10 years is greater than 7.5%, then the benefits of statin therapy outweigh its risks and the treatment should be initiated. The panel also indicated that if the 10-year risk of cardiovascular disease is greater than 5%, then physicians should consider prescribing statins, but should bear in mind that the scientific evidence for this recommendation was not as strong as that for higher-risk individuals.
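To make the guideline's threshold logic concrete, here is a minimal sketch in Python; the function name, return strings and overall structure are my own illustration of the two cut-offs described above, not code or wording from the ACC/AHA.

```python
def statin_recommendation(ten_year_risk):
    """Illustrative sketch of the threshold logic described above.

    ten_year_risk: estimated 10-year risk of cardiovascular disease,
    expressed as a fraction (e.g. 0.075 for 7.5%).
    """
    if ten_year_risk > 0.075:
        # Benefits are judged to outweigh the risks; therapy should be initiated.
        return "statin therapy recommended"
    elif ten_year_risk > 0.05:
        # Weaker evidence: physicians should consider statins.
        return "consider statin therapy"
    else:
        return "no preventive statin therapy recommended"


# Example: a hypothetical patient with an estimated 6% ten-year risk
print(statin_recommendation(0.06))  # -> consider statin therapy
```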


Oops button – via Shutterstock

Using statins in low-risk patients

The recommendation that individuals with a comparatively low risk of developing future cardiovascular disease (10-year risk lower than 10%) would benefit from statins was met with skepticism by some medical experts. In October 2013, the British Medical Journal (BMJ) published a paper by John Abramson, a lecturer at Harvard Medical School, and his colleagues which re-evaluated the data from a prior study on statin benefits in patients with less than 10% cardiovascular disease risk over 10 years. Abramson and colleagues concluded that the statin benefits were overstated and that statin therapy should not be expanded to include this group of individuals. To further bolster their case, Abramson and colleagues also cited a 2013 study by Huabing Zhang and colleagues in the Annals of Internal Medicine which (according to Abramson et al.) had reported that 18% of patients discontinued statins due to side effects. Abramson even highlighted this finding from the Zhang study by including it as one of four bullet points summarizing the key take-home messages of his article.

The problem with this characterization of the Zhang study is that it ignored all the caveats that Zhang and colleagues had mentioned when discussing their findings. The Zhang study was based on the retrospective review of patient charts and did not establish a true cause-and-effect relationship between the discontinuation of the statins and actual side effects of statins. Patients may stop taking medications for many reasons, but this does not necessarily mean that it is due to side effects from the medication. According to the Zhang paper, 17.4% of patients in their observational retrospective study had reported a “statin related incident” and of those only 59% had stopped the medication. The fraction of patients discontinuing statins due to suspected side effects was at most 9-10% instead of the 18% cited by Abramson. But as Zhang pointed out, their study did not include a placebo control group. Trials with placebo groups document similar rates of “side effects” in patients taking statins and those taking placebos, suggesting that only a small minority of perceived side effects are truly caused by the chemical compounds in statin drugs.
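The arithmetic behind this corrected figure is simple; the short calculation below merely combines the two percentages quoted from the Zhang paper.

```python
reported_incident = 0.174      # patients reporting a "statin related incident" (17.4%)
stopped_if_incident = 0.59     # fraction of those patients who actually stopped the statin

discontinued_for_suspected_side_effects = reported_incident * stopped_if_incident
print(f"{discontinued_for_suspected_side_effects:.1%}")  # ~10.3%, not the 18% cited by Abramson
```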


Admitting errors is only the first step

Whether 18%, 9% or a far smaller proportion of patients experience significant medication side effects is no small matter, because the analysis could affect millions of patients currently being treated with statins. A gross overestimation of statin side effects could prompt physicians to prematurely discontinue medications that have been shown to significantly reduce the risk of heart attacks in a wide range of patients. On the other hand, severely underestimating statin side effects could result in the discounting of important symptoms and the suffering of patients. Abramson’s misinterpretation of the statin side effect data was pointed out by readers of the BMJ soon after the article was published, and it prompted an inquiry by the journal. After re-evaluating the data and discussing the issue with Abramson and colleagues, the journal issued a correction in which it clarified the misrepresentation of the Zhang paper.

Fiona Godlee, the editor-in-chief of the BMJ, also wrote an editorial explaining the decision to issue a correction regarding the question of side effects, and why there was not sufficient cause to retract the whole paper: the other point made by Abramson and colleagues – the lack of benefit in low-risk patients – might still hold true. At the same time, Godlee recognized the inherent bias of a journal’s editor when it comes to deciding whether or not to retract a paper. Every retraction of a peer-reviewed scholarly paper is somewhat of an embarrassment to the authors of the paper as well as the journal because it suggests that the peer review process failed to identify one or more major flaws. In a commendable move, the journal appointed a multidisciplinary review panel which includes leading cardiovascular epidemiologists. This panel will review the Abramson paper as well as another BMJ paper which had also cited the inaccurately high frequency of statin side effects, investigate the peer review process that failed to identify the erroneous claims and provide recommendations regarding the ultimate fate of the papers.


Reviewing peer review

Why didn’t the peer reviewers who evaluated Abramson’s article catch the error prior to its publication? We can only speculate as to why such a major error was not identified by the peer reviewers. One has to bear in mind that “peer review” for academic research journals is just that – a review. In most cases, peer reviewers do not have access to the original data and cannot check the veracity or replicability of analyses and experiments. For most journals, peer review is conducted on a voluntary (unpaid) basis by two to four expert reviewers who routinely spend multiple hours analyzing the appropriateness of the experimental design, methods, presentation of results and conclusions of a submitted manuscript. The reviewers operate under the assumption that the authors of the manuscript are professional and honest in terms of how they present the data and describe their scientific methodology.

In the case of Abramson and colleagues, the correction issued by the BMJ refers not to Abramson’s own analysis but to the misreading of another group’s research. Biomedical research papers often cite 30 or 40 studies, and it is unrealistic to expect that peer reviewers read all the cited papers and ensure that they are being properly cited and interpreted. If this were the expectation, few peer reviewers would agree to serve as volunteer reviewers since they would have hardly any time left to conduct their own research. However, in this particular case, most peer reviewers familiar with statins and the controversies surrounding their side effects should have expressed concerns regarding the extraordinarily high figure of 18% cited by Abramson and colleagues. Hopefully, the review panel will identify the reasons for the failure of BMJ’s peer review system and point out ways to improve it.


To err is human, to study errors is science

All researchers make mistakes, simply because they are human. It is impossible to eliminate all errors in any endeavor that involves humans, but we can construct safeguards that help us reduce the occurrence and magnitude of our errors. Overt fraud and misconduct are rare causes of errors in research, but their effects on any given research field can be devastating. One of the most notorious occurrences of research fraud is the case of the Dutch psychologist Diederik Stapel, who published numerous papers based on blatant fabrication of data – showing ‘results’ of experiments on non-existent study subjects. The field of cell therapy in cardiovascular disease recently experienced a major setback when a university review of studies headed by the German cardiologist Bodo Strauer found evidence of scientific misconduct. The significant discrepancies and irregularities in Strauer’s studies have now led to wide-ranging skepticism about the efficacy of using bone marrow cell infusions to treat heart disease.


It is difficult to obtain precise numbers to quantify the actual extent of severe research misconduct and fraud since it may go undetected. Even when such cases are brought to the attention of the academic leadership, the involved committees and administrators may decide to keep their findings confidential and not disclose them to the public. However, most researchers working in academic research environments would probably agree that these are rare occurrences. A far more likely source of errors in research is the cognitive bias of the researchers. Researchers who believe in certain hypotheses and ideas are prone to interpreting data in a manner most likely to support their preconceived notions. For example, it is likely that a researcher opposed to statin usage will interpret data on side effects of statins differently than a researcher who supports statin usage. While Abramson may have been biased in the interpretation of the data generated by Zhang and colleagues, the field of cardiovascular regeneration is currently grappling with what appears to be a case of biased interpretation of one’s own data. An institutional review by Harvard Medical School and Brigham and Women’s Hospital recently determined that the work of Piero Anversa, one of the world’s most widely cited stem cell researchers, was significantly compromised and warranted a retraction. His group had reported that the adult human heart exhibited an amazing regenerative potential, suggesting that roughly every 8 to 9 years the adult human heart replaces its entire collective of beating heart cells (a 7% – 19% yearly turnover of beating heart cells). These findings were in sharp contrast to a prior study which had found only a minimal turnover of beating heart cells (1% or less per year) in adult humans. Anversa’s finding was also at odds with the observations of clinical cardiologists who rarely observe a near-miraculous recovery of heart function in patients with severe heart disease. One possible explanation for the huge discrepancy between the prior research and Anversa’s studies was that Anversa and his colleagues had not taken into account the possibility of contaminations that could have falsely elevated the cell regeneration counts.
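To relate the reported turnover rates to the quoted renewal times, a rough back-of-the-envelope conversion can help: at a constant annual turnover rate r, replacing the entire cell population takes on the order of 1/r years (ignoring that some cells may be replaced more than once). The sketch below is only this conversion, not a reanalysis of either study.

```python
def rough_years_to_full_turnover(annual_turnover_rate):
    """Rough time to replace the whole cell population at a constant annual rate."""
    return 1.0 / annual_turnover_rate

# Range reported by Anversa's group (7% - 19% of beating heart cells per year)
print(round(rough_years_to_full_turnover(0.19), 1))  # ~5.3 years at the high end
print(round(rough_years_to_full_turnover(0.07), 1))  # ~14.3 years at the low end
# Mid-range rates of roughly 11-12% per year correspond to the "8 to 9 years" quoted above.

# Prior study's estimate of at most ~1% per year
print(round(rough_years_to_full_turnover(0.01)))     # on the order of 100 years
```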


Improving the quality of research: peer review and more

The fact that researchers are prone to making errors due to inherent biases does not mean we should simply throw our hands up in the air, say “Mistakes happen!” and let matters rest. High-quality science is characterized by its willingness to correct itself, and this includes improving methods to detect and correct scientific errors early on so that we can limit their detrimental impact. The realization that the lack of reproducibility of peer-reviewed scientific papers is becoming a major problem for many areas of research such as psychology, stem cell research and cancer biology has prompted calls for better ways to track reproducibility and errors in science.

One important new paradigm that is being discussed to improve the quality of scholarly papers is post-publication peer evaluation. Instead of viewing the publication of a peer-reviewed research paper as an endpoint, post-publication peer evaluation invites fellow scientists to continue commenting on the quality and accuracy of the published research even after its publication and to engage the authors in this process. Traditional peer review relies on just a handful of reviewers who decide the fate of a manuscript, but post-publication peer evaluation opens up the debate to hundreds or even thousands of readers who may be able to detect errors that could not be identified by the small number of traditional peer reviewers prior to publication. It is also becoming apparent that science journalists and science writers can play an important role in the post-publication evaluation of published research papers by investigating and communicating flaws identified in them. In addition to helping dismantle the Science Mystique, critical science journalism can help ensure that corrections, retractions or other major concerns about the validity of scientific findings are communicated to a broad non-specialist audience.

In addition to these ongoing efforts to reduce errors in science by improving the evaluation of scientific papers, it may also be useful to consider new proactive initiatives which focus on how researchers design and perform experiments. As the head of a research group at an American university, I have to take mandatory courses (in some cases on an annual basis) informing me about laboratory hazards, the ethics of animal experimentation or the ethics of how to conduct human studies. However, there are no mandatory courses helping us identify our own research biases or minimize their impact on the interpretation of our data. There is an underlying assumption that if you are no longer a trainee, you probably know how to perform and interpret scientific experiments. I would argue that it does not hurt to remind scientists regularly – no matter how junior or senior – that they can become victims of their own biases. We have to learn to continuously re-evaluate how we conduct science and to be humble enough to listen to our colleagues, especially when they disagree with us.


Note: A shorter version of this article was first published at The Conversation with excellent editorial input provided by Jo Adetunji.


Abramson, J., Rosenberg, H., Jewell, N., & Wright, J. (2013). Should people at low risk of cardiovascular disease take a statin? BMJ, 347: f6123. DOI: 10.1136/bmj.f6123

Blind Peers: A Path To Equality In Scientific Peer Review?

There is a fundamental asymmetry that exists in contemporary peer review of scientific papers. Most scientific journals do not hide the identity of the authors of a submitted manuscript. The scientific reviewers, on the other hand, remain anonymous. Their identities are only known to the editors, who use the assessments of these scientific reviewers to help decide whether or not to accept a scientific manuscript. Even though the comments of the reviewers are usually passed along to the authors of the manuscript, the names of the reviewers are not. There is a good reason for that. Critical comments of peer reviewers can lead to a rejection of a manuscript, or cause substantial delays in its publication, sometimes requiring many months of additional work that needs to be performed by the scientists who authored the manuscript. Scientists who receive such criticisms are understandably disappointed, but in some cases this disappointment can turn into anger and could potentially even lead to retributions against the peer reviewers, if their identities were ever disclosed. The cloak of anonymity thus makes it much easier for peer reviewers to offer honest and critical assessments of the submitted manuscript.

Unfortunately, this asymmetry – the peer reviewers knowing the names of the authors but the authors not knowing the names of the peer reviewers – can create problems. Some peer reviewers may be biased either against or in favor of a manuscript merely because they recognize the names of the authors or the institutions at which the authors work. There is an expectation that peer reviewers judge a paper only based on its scientific merit, but knowledge of the authors could still consciously or subconsciously influence the assessments made by the peer reviewers. Scientific peer reviewers may be much more lenient towards manuscripts of colleagues whom they have known for many years and consider to be their friends. The reviewers may be more critical of manuscripts submitted by rival groups with whom they have had hostile exchanges in the past or by institutions that they do not trust. A recent study observed that scientists who review applications of students exhibit a subtle gender bias that favors male students, and similar biases may exist in the peer review evaluation of manuscripts.

The journals Nature Geoscience and Nature Climate Change of the Nature Publishing Group have recently announced a new “double-blind peer review” approach to correct this asymmetry. The journals will allow authors to remain anonymous during the peer review process. The hope is that hiding the identities of the authors could reduce bias among peer reviewers. The journals decided to implement this approach on a trial basis following a survey in which three-quarters of respondents were supportive of a double-blind peer review. As the announcement correctly points out, this will only work if the authors are willing to phrase their paper in a manner that does not give away their identity. Instead of writing “as we have previously described”, authors would write “as has been previously described” when citing their own prior publications.

The editors of Nature Geoscience state:

From our experience, authors who try to guess the identity of a referee are very often wrong. It seems unlikely that referees will be any more successful when guessing the identity of authors.

I respectfully disagree with this statement. Reviewers can remain anonymous because they rarely make direct references to their own work in the review process. Authors of a scientific manuscript, on the other hand, often publish a paper in the context of their own prior work. Even if the names and addresses of the authors were hidden on the title page and even if the usage of first-person pronouns in the context of prior publications was omitted, the manuscript would likely still contain multiple references to a group’s prior work. These references as well as any mentions of an institution’s facilities or administrative committees that approve animal and human studies could potentially give away the identity of the authors. It would be much easier for reviewers to guess the identity of some of the authors than for authors to guess the identity of the reviewers.


The editors go on to argue:

But even if referees correctly identify the research group that a paper is coming from, they are much less likely to guess who the first author is. One of our motivations for setting up a double-blind trial is the possibility that female authors are subjected to tougher peer review than their male colleagues — a distinct possibility in view of evidence that subtle gender biases affect assessments of competence, appropriate salaries and other aspects of academic life (Proc. Natl Acad. Sci. USA 109, 16474–16479; 2012). If the first author is unknown, this bias will be largely removed.


The double-blind peer review system would definitely make it harder to guess the identity of the first author and would remove biases of reviewers associated with knowing the identity of first authors. The references to prior work would enable a reviewer to infer that the submitted manuscript was authored by the research group of the senior scientist X at the University Y, but it would be nearly impossible for the reviewer to ascertain the identity of the first authors (often postdoctoral fellows, graduate students or junior faculty members). However, based on my discussions with fellow peer reviewers, I think that it is rather rare for reviewers to have a strong bias against or in favor of first authors. The biases are usually associated with knowing the identity of the senior or lead authors.

Many scientists would agree that there is a need for reforming the peer review process and that we need to reduce biased assessments of submitted manuscripts. However, I am not convinced that increasing blindness is necessarily the best approach. In addition to the asymmetry of anonymity in contemporary peer review, there is another form of asymmetry that should be addressed: manuscripts are eventually made public, but the comments of peer reviewers usually are not.

This asymmetry allows some peer reviewers to be sloppy in their assessments of manuscripts. While some peer reviewers provide thoughtful and constructive criticism, others just make offhanded comments, either dismissing a manuscript for no good reason or sometimes accepting it without carefully evaluating all its strengths and weaknesses. The solution to this problem is not increasing “blindness”, but instead increasing transparency of the peer review process. The open access journal F1000Research has a post-publication review process for scientific manuscripts, in which a paper is first published and the names and assessments of the referees are openly disclosed.  The open access journal PeerJ offers an alternate approach, in which peer reviewers can choose to either disclose their names or to stay anonymous and authors can choose to disclose the comments they received during the peer review process. This “pro-choice” model would allow reviewers to remain anonymous even if the authors choose to publicly disclose the reviewer comments.

Scientific peer review can play an important role in ensuring the quality of science, if it is conducted appropriately and provides reasonably objective and constructive critiques. Constructive criticism is essential for the growth of scientific knowledge. It is important that we foster a culture of respect for criticism in science, whether it occurs during the peer review process or when science writers analyze published studies. “Double blind” is an excellent way to collect experimental data, because it reduces the bias of the experimenter, but it may not be the best way to improve peer review. When it comes to peer review and scientific criticism, we should strive for more transparency and a culture of mutual respect and dialogue.

Breakthrough Prize in Life Sciences: Hopefully Not Just A Nobel Prize in Medicine 2.0

The recent announcement of the Breakthrough Prize in Life Sciences and its inaugural 11 recipients is causing quite a bit of buzz in the research community. The Silicon Valley celebrities Art Levinson, Sergey Brin, Anne Wojcicki, Mark Zuckerberg and Priscilla Chan, and Yuri Milner have established the Breakthrough Prize in Life Sciences Foundation, which intends to award five annual prizes in the amount of $3 million each to honor “extraordinary achievements of the outstanding minds in the field of life sciences, enhance medical innovation, and ultimately become a platform for recognizing future discoveries”.


The inaugural recipients are:

1. Cornelia I. Bargmann: For the genetics of neural circuits and behavior, and synaptic guidepost molecules.

2. David Botstein: For linkage mapping of Mendelian disease in humans using DNA polymorphisms.

3. Lewis C. Cantley: For the discovery of PI 3-Kinase and its role in cancer metabolism.

4. Hans Clevers: For describing the role of Wnt signaling in tissue stem cells and cancer.

5. Titia de Lange: For research on telomeres, illuminating how they protect chromosome ends and their role in genome instability in cancer.

6. Napoleone Ferrara: For discoveries in the mechanisms of angiogenesis that led to therapies for cancer and eye diseases.

7. Eric S. Lander: For the discovery of general principles for identifying human disease genes, and enabling their application to medicine through the creation and analysis of genetic, physical and sequence maps of the human genome.

8. Charles L. Sawyers: For cancer genes and targeted therapy.

9. Bert Vogelstein: For cancer genomics and tumor suppressor genes.

10. Robert A. Weinberg: For characterization of human cancer genes.

11. Shinya Yamanaka: For induced pluripotent stem cells.


Anyone familiar with cell biology or molecular biology will recognize most, if not all of these names, because this list consists of many important leaders in these areas. As a stem cell biologist, I am happy to see at least two other stem cell researchers on the list: 1) Shinya Yamanaka (who received the 2012 Nobel Prize in Physiology or Medicine) for discovering that adult skin cells could be converted into pluripotent stem cells by just introducing four genes into the cells and 2) Hans Clevers, who is one of the world’s leading researchers in the field of adult stem cell biology and has been instrumental in characterizing stem cells in the intestinal tissue and defining the role of the Wnt signaling pathway, which regulates both proliferation and differentiation of adult stem cells.

The amount awarded to the recipients seems staggeringly high – $3 million is nearly triple the size of the Nobel Prize. However, one also needs to keep in mind that the Breakthrough Prize not only honors past achievements, but also has “the aim of providing the recipients with more freedom and opportunity to pursue even greater future accomplishments.” This means that the laureates are expected (but not necessarily required) to use some of the funds to pursue new directions of research. Biomedical research is expensive. A typical NIH R01 grant, which is the lifeblood of most federally funded biomedical research labs in the United States, has a budget of $250,000 per year and $1,250,000 over a five-year period, with funds usually requested for a single project. The annual amount of $250,000 has to cover the salaries of the employees working in the laboratory, employee benefits such as health insurance and maintenance contracts to keep up existing equipment, and thus leaves very little money to buy the actual materials and equipment needed to conduct the experiments. This relatively small amount of money forces many scientists to be rather conservative in their work. They do not want to invest money in innovative and high-risk projects, because these do not always yield definitive results, and inconclusive results could jeopardize future grant funding and put the jobs of one’s employees or trainees at risk.

The $3 million of the Breakthrough Prize, on the other hand, gives the researchers the freedom to try out exciting and high-risk ideas without having to spend months writing grant proposals. The $3 million is enough to fund two high-risk, R01-sized projects for five years – that is, if the laureates choose to use all their award money for their research instead of buying a luxury yacht.
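To put the award in grant-sized terms, here is a quick calculation using the R01 figures mentioned above; the numbers are the approximate ones stated in this article, not official NIH budget data.

```python
r01_annual_budget = 250_000             # approximate yearly budget of a typical R01
r01_project_years = 5
r01_total = r01_annual_budget * r01_project_years
print(r01_total)                        # -> 1250000, i.e. $1.25 million per five-year project

breakthrough_prize = 3_000_000
print(breakthrough_prize // r01_total)  # -> 2, roughly two R01-sized five-year projects
```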

Even though all the laureates above are established and internationally renowned scientists, they are at different stages in their research career. David Botstein, for example, is 70 years old and was already a molecular biology legend when I was a grad student in the 1990s. On the other hand, Shinya Yamanaka is only 50 years old and is in the prime of his research career, with hopefully many more decades of research ahead of him. I also like the fact that the foundation will accept nominations from the public, and I hope that its selection process will be more transparent than the closed door policy involved in the selection of Nobel prize laureates.

Despite all my enthusiasm for the new Breakthrough Prize and my hope that it will help re-energize research in the life sciences, I am concerned by the medical focus of the Foundation’s aims. The title of the prize is “Breakthrough Prize in Life Sciences”, but the aims are to recognize excellence in research aimed at “curing intractable diseases and extending human life.” Why is there such a focus on human life and disease? The field of “life sciences” comprises much more than just human life. It includes areas as diverse as ecology, evolutionary biology and botany, even if they do not have any direct implications for human disease. All of the announced laureates worked on areas that are more or less directly connected to human diseases such as cancer or human physiology. In this sense, this new prize is not too different from the Nobel Prize in Physiology or Medicine, merely larger in size, a 2.0 version of the current Nobel Prize in Physiology or Medicine. I have previously written about the lack of a Nobel Prize equivalent that honors efforts in non-medical life sciences. I hope that the Breakthrough Prize foundation reconsiders the medical focus of the prize and that future awards will also be made to life scientists who do not work in areas that directly relate to human life and human disease.


UPDATE: I would like to thank some of the readers for their comments, including those who commented on Twitter and I thought it might be helpful to respond to them in this update. One important point raised by some readers is that it should not be our place to tell philanthropists what to do with their money. It is their money and they get to choose what kind of prizes and charitable foundations they establish. In this particular case, some of the founders of the Breakthrough Prize in Life Sciences may have been influenced by personal experiences of their family members or friends with certain illnesses. This could explain the medical or biomedical focus of the prize.

I completely agree that philanthropists should decide what the goals of an established foundation are, but I still think that it is not wrong to engage in a debate. Especially in the case of the Breakthrough Prize in Life Sciences, I think there are at least three good reasons why this debate is necessary and helpful.

1. The foundation website indicates that it will soon accept online nominations for future awards from the public. This suggests that the philanthropists are open to outside suggestions and perhaps this openness can be extended to engaging in a dialogue about the actual aims of the prize itself. The philanthropists do not have to listen to what scientists say about including awards in the non-medical life sciences, but we scientists should at the very least voice our concerns.

2. The name of the prize is “Breakthrough Prize in Life Sciences”, but the explicit aims are very much focused on human disease and extending human life. This is a bit of a disconnect, because the broadly phrased title “life sciences” encompasses far more than just medical research.

3. There are already numerous honors and prizes available for outstanding achievements in medical research or biological research with direct medical impact. What we lack is a Nobel Prize equivalent in the non-medical life sciences. This is not a big surprise, because the human suffering associated with illness probably motivates many philanthropists. It is thus understandable that many philanthropic foundations might gravitate towards valuing research with medical implications more than non-medical research. However, as scientists, we need to remind philanthropists that in the 21st century, we recognize the importance of biodiversity. We want to understand the biology of plants and the wonderful multitude of animal species. We need to work together to preserve the biodiversity on our planet, even if there is no direct link between this type of research and specific human diseases.

Charles Darwin was one of the most brilliant life scientists in the past two centuries. His work has revolutionized how we think in biology. Would Charles Darwin receive a Breakthrough Prize in Life Sciences? His work was not necessarily directed at extending human life or treating specific human diseases, but the revolution in biological thought that he initiated ultimately did have a major impact on medical sciences, too. I think we should try our best to establish a prize that honors and supports excellence in the life sciences without obvious or direct medical applications. Such prizes should be awarded to the contemporary Charles Darwins in our midst, without requiring them to prove or justify the medical relevance of their work.

New Directions In Scientific Peer Review

Most scientists have a need-hate relationship with scientific peer review. We know that we need some form of peer review, because it is an important quality control measure that is supposed to help prevent the publication of scientifically invalid results. However, we also tend to hate scientific peer review in its current form, because we have had many frustrating experiences with it.

We recently submitted a manuscript to a journal, where it was stuck for more than one year, undergoing multiple rounds of revisions in response to requests by the editors and the reviewers, after which they finally rejected it. The reviewers did not necessarily question the validity of our results, but they wanted us to test additional cell lines, confirm many of the findings with multiple methods and identify additional mechanisms that might explain our findings, so that the paper started ballooning in size. I was frustrated because I felt that there was no end in sight. There are always novel mechanisms that one has not investigated. A scientific paper is not meant to investigate every possible explanation for a phenomenon, because that would turn the paper into a never-ending saga – every new finding usually raises even more questions.

We received a definitive rejection after multiple rounds of revisions (taking more than a year), but I was actually relieved because the demands of the reviewers were becoming quite excessive. We resubmitted the manuscript to a different journal, for which we had to scale back the manuscript. The new journal had different size restrictions and some of the revisions only made sense in the context of those specific reviewer requests and did not necessarily belong in the manuscript. This new set of reviewers also made some requests for revisions, but once we had made those revisions, the manuscript was published within a matter of months.

I have also had frustrating experiences as a scientific peer reviewer. Some authors completely disregard suggestions for improving the manuscript, and it is really up to the individual editors to decide who they side with. Scientific peer review in its current form also does not involve testing for reproducibility. As reviewers, we have to accept the authors’ claims that they have conducted sufficient experiments to test the reproducibility and validity of their data. Reviewers do not check whether their own laboratory or other laboratories can replicate the results described in the manuscript. Scientific peer reviewers have to rely on the scientific integrity of the authors, even if their gut instinct tells them that these results may not be reproducible by other laboratories.

Due to these experiences, many scientists like to say that the current peer review system is “broken”, and we know that we need radical changes to make the peer review process more reliable and fair. There are two new developments in scientific peer review that sound very interesting: Portable peer review and open peer review.

Richard Van Noorden describes the concept of portable peer review that will soon be offered by a new company called Rubriq, which will conduct the scientific peer review and provide the results for a fee to the editors of the journal. Interestingly, Rubriq will also pay peer reviewers, something which is quite unusual in the current peer review system, which relies on scientists volunteering their time as peer reviewers. The basic idea is that if a journal rejects a paper after the peer review conducted by Rubriq, the comments of the reviewers would still be used by the editors of the new journal, as long as it also subscribes to the Rubriq service. This would cut down on the review time at the new journal, because the editors could base their decision to accept or reject on the existing reviews instead of sending out the paper for another new, time-consuming review. I like this idea, because it “recycles” the efforts of the first round of review and will likely streamline the review process. My only concern is that reviewers currently use different review criteria, depending on what journal they review for. When reviewing for a “high prestige” journal, reviewers tend to set a high bar for novelty and impact, and their comments likely reflect this. It may not be very easy for editors to use these reviews for a very different journal. Furthermore, editors get to know their reviewers over time and pick certain reviewers that they believe will give the most appropriate reviews for a submitted manuscript. I am not sure that editors of journals would be that pleased by “farming out” this process to a third party.

The second new development is the concept of open peer review, as proposed by the new open access scientific journal PeerJ. I briefly touched on this when discussing a paper on the emotional impact of genetic testing, but I would like to expand on this, because I am very intrigued by the idea of open peer review. In this new peer review system, the scientific peer reviewers can choose to either remain anonymous or disclose their names. One would think that peer reviewers should be able to stand by their honest, constructive peer reviews so there should be no need for anonymity. On the other hand, some scientists might worry about (un)professional repercussions because some authors may be offended by the critiques. Therefore, I think it is quite reasonable that PeerJ permits anonymity of the reviewers.

The true novelty of the open review system is that the authors can choose to disclose the peer review correspondence, which includes the initial comments by the reviewers as well as their own rebuttal and revisions. I think that this is a very important and exciting development in peer review. It forces the peer reviewers to remain civil and reasonable in their comments. Even if a reviewer chooses to remain anonymous, they are probably still going to be more thoughtful in their reviews of the manuscript if they realize that potentially hundreds or thousands of other scientists could have a peek at their comments. Open peer review allows the public and the scientific community to peek behind the usually closed doors of scientific peer review. This provides a certain form of public accountability for the editors. They cannot just arbitrarily accept or reject manuscripts without good reasons, because by opening up the review process to the public they may have to justify their decisions based on the reviews they solicited. One good example of the civil tone and reasonable review requests and responses can be found in the review of the BRCA gene testing paper. The reviewers (one of whom chose to remain anonymous) asked many excellent questions, including questions about the demographics and educational status of the participants. The authors’ rebuttal to some of the questions was that they had not collected the data and could not include it in the manuscript, but they also expanded some of the presented data and mentioned caveats of their study in the revised discussion. The openness of the review process now permits the general reader to take advantage of the insights of the reviewers, such as the missing information about the educational status of the participants.

The open review system is one of the most important new advances in scientific peer review and I hope that other journals (even the more conservative, traditional and non-open access journals) will implement a similar open peer review system. This will increase accountability of reviewers and editors, and hopefully improve the efficiency and quality of scientific peer review.