“Hype” or Uncertainty: The Reporting of Initial Scientific Findings in Newspapers

One of the cornerstones of scientific research is the reproducibility of findings. Novel scientific observations need to be validated by subsequent studies in order to be considered robust. This has proven to be somewhat of a challenge for many biomedical research areas, including high-impact studies in cancer research and stem cell research. The fact that an initial scientific finding of a research group cannot be confirmed by other researchers does not mean that the initial finding was wrong or that there was any foul play involved. The most likely explanation in biomedical research is tremendous biological variability. Human subjects and patients examined in one research study may differ substantially from those in follow-up studies. Biological cell lines and tools used in basic science studies can vary widely, depending on details such as the medium in which cells are kept in a culture dish. The variability in findings is not a weakness of biomedical research; in fact, it is a testimony to the complexity of biological systems. Initial findings therefore always need to be treated with caution and presented with their inherent uncertainty. Once subsequent studies – often with larger sample sizes – confirm the initial observations, they are viewed as being more robust and gradually become accepted by the wider scientific community.

Even though most scientists become aware of the scientific uncertainty associated with an initial observation as their careers progress, non-scientists may be puzzled by shifting scientific narratives. People often complain that “scientists cannot make up their minds” – citing examples of newspaper reports which state that drinking coffee may be harmful, only to be subsequently contradicted by reports which laud the beneficial health effects of coffee drinking. Accurately communicating scientific findings as well as the inherent uncertainty of such initial findings is a hallmark of critical science journalism.

A group of researchers led by Dr. Estelle Dumas-Mallet at the University of Bordeaux studied the extent of uncertainty communicated to the public by newspapers reporting initial medical research findings in the recently published paper “Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings”. Dumas-Mallet and her colleagues examined 426 English-language newspaper articles published between 1988 and 2009 which described 40 initial biomedical research studies. They focused on scientific studies in which a risk factor such as smoking or old age had been newly associated with a disease such as schizophrenia, autism, Alzheimer’s disease or breast cancer (12 diseases in total). The researchers only included scientific studies which had subsequently been re-evaluated by follow-up research studies, and found that less than one third of the initial studies had been confirmed by subsequent research. Dumas-Mallet and her colleagues were therefore interested in whether the newspaper articles, which were published shortly after the release of the initial research paper, adequately conveyed the uncertainty surrounding the initial findings and thus prepared their readers for subsequent research that might confirm or invalidate the initial work.

The University of Bordeaux researchers specifically examined whether headlines of the newspaper articles were “hyped” or “factual”, whether the articles mentioned that this was an initial study, and whether they clearly indicated the need for replication or validation by subsequent studies. Roughly 35% of the headlines were “hyped”. One example of a “hyped” headline was “Magic key to breast cancer fight”, in contrast to a more factual headline such as “Scientists pinpoint genes that raise your breast cancer risk”. Dumas-Mallet and her colleagues found that even though 57% of the newspaper articles mentioned that these medical research studies were initial findings, only 21% included explicit “replication statements” such as “Tests on larger populations of adults must be performed” or “More work is needed to confirm the findings”.

The researchers next examined the key characteristics of the newspaper articles which were more likely to convey the uncertainty or preliminary nature of the initial scientific findings. Newspaper articles with “hyped” headlines were less likely to mention the need for replicating and validating the results in subsequent studies. On the other hand, newspaper articles which included a direct quote from one of the research study authors were three times more likely to include a replication statement. In fact, approximately half of all the replication statements mentioned in the newspaper articles were found in author quotes, suggesting that many scientists who conducted the research readily emphasize the preliminary nature of their work. Another interesting finding was the gradual shift over time in conveying scientific uncertainty. “Hyped” headlines were rare before 2000 (only 15%) and became more frequent during the 2000s (43%). On the other hand, replication statements were more common before 2000 (35%) than after 2000 (16%). This suggests that there was a trend towards conveying less uncertainty after 2000, which is surprising because the debate about scientific replicability in the biomedical research community seems to have become much more widespread in the past decade.

As with all scientific studies, we need to be aware of the limitations of the analysis performed by Dumas-Mallet and her colleagues. They focused on a very narrow area of biomedical research – newly identified risk factors for selected diseases. It remains to be seen whether other areas of biomedical research, such as the treatment of diseases or basic science discoveries of new molecular pathways, are also reported with “hyped” headlines and without replication statements. In other words – this research on “replication statements” in newspaper articles also needs to be replicated. It is also not clear whether the worrisome trend of over-selling the robustness of initial research findings after the year 2000 still persists, since Dumas-Mallet and colleagues did not analyze studies published after 2009. One would hope that the recent discussions about replicability issues among scientists would reverse this trend. Even though the findings of the University of Bordeaux researchers need to be replicated by others, science journalists and readers of newspapers can glean some important information from this study: one needs to be wary of “hyped” headlines, and it can be very useful to interview the authors of scientific studies when reporting about new research, especially asking them about the limitations of their work. “Hyped” newspaper headlines and an exaggerated sense of certainty in initial scientific findings may erode the long-term trust of the public in scientific research, especially if subsequent studies fail to replicate the initial results. Critical and comprehensive reporting of biomedical research studies – including their limitations and uncertainty – by science journalists is therefore a very important service to society which contributes to science literacy and science-based decision making.

Reference

Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2018). Scientific uncertainty in the press: How newspapers describe initial biomedical findings. Science Communication, 40(1), 124-141.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.


Blissful Ignorance: How Environmental Activists Shut Down Molecular Biology Labs in High Schools

Hearing about the HannoverGEN project made me feel envious and excited. Envious, because I wish my high school had offered the kind of hands-on molecular biology training provided to high school students in Hannover, the capital of the German state of Niedersachsen. Excited, because it reminded me of the joy I felt when I first isolated DNA and ran gels after restriction enzyme digests during my first year of university in Munich. I knew that many of the students at the HannoverGEN high schools would be similarly thrilled by their laboratory experience and perhaps even pursue careers as biologists or biochemists.

What did HannoverGEN entail? It was an optional pilot program initiated and funded by the state government of Niedersachsen at four high schools in the Hannover area. Students enrolled in the HannoverGEN classes would learn to use molecular biology tools typically reserved for college-level or graduate school courses in order to study plant genetics. Some of the basic experiments involved isolating DNA from cabbage or learning how bacteria transfer genes to plants; more advanced experiments enabled the students to analyze whether or not the genome of a provided maize sample had been genetically modified. Each experimental unit was accompanied by relevant theoretical instruction on the molecular mechanisms of gene expression and biotechnology as well as ethical discussions regarding the benefits and risks of generating genetically modified organisms (“GMOs”). The details of the HannoverGEN program are only accessible through the Wayback Machine Internet archive because the award-winning educational program and the associated website were shut down in 2013 at the behest of German anti-GMO activist groups, environmental activists, Greenpeace, the Niedersachsen Green Party and the German organic food industry.

Why did these activists and organic food industry lobbyists oppose a government-funded educational program which improved the molecular biology knowledge and expertise of high school students? A 2012 press release entitled “Keine Akzeptanzbeschaffung für Agro-Gentechnik an Schulen!” (“No Acceptance for Agricultural Gene Technology at Schools”) by an alliance representing “organic” or “natural food” farmers, accompanied by the publication of a critical “study” with the same title (PDF) funded by this alliance and its anti-GMO partners, gives us some clues. They feared that the high school students might become too accepting of biotechnology in agriculture and that the curriculum did not sufficiently highlight all the potential dangers of GMOs. By allowing the ethical discussions to mention not only the risks but also the benefits of genetically modifying crops, students might walk away with the idea that GMOs could be beneficial for humankind. The group believed that taxpayer money should not be used to foster special interests such as those of the agricultural industry, which may want to use GMOs.

A response by the University of Hannover (PDF), which had helped develop the curriculum and coordinated the classes for the high school students, carefully analyzed the complaints of the anti-GMO activists. The author of the anti-HannoverGEN “study” had not visited the HannoverGEN laboratories, nor had he interviewed the biology teachers or students enrolled in the classes. In fact, his critique was based on weblinks that were not even used in the curriculum by the HannoverGEN teachers or students. His analysis ignored the balanced presentation of biotechnology that formed the basis of the HannoverGEN curriculum and the fact that discussing potential risks of genetic modification was a core topic in all the classes.

Unfortunately, this shoddily prepared “study” had a significant impact, in part because it was widely promoted by partner organizations. Its release in the autumn of 2012 came at an opportune time for political activists because Niedersachsen was about to have an election. Campaigning against GMOs seemed like a perfect cause for the Green Party and a high school program which taught the use of biotechnology to high school students became a convenient lightning rod. When the Social Democrats and the Green Party formed a coalition after winning the election in early 2013, nixing the HannoverGEN high school program was formally included in the so-called coalition contract. This is a document in which coalition partners outline the key goals for the upcoming four year period. When one considers how many major issues and problems the government of a large German state has to face, such as healthcare, education, unemployment or immigration, it is mind-boggling that de-funding a program involving only four high schools received so much attention that it needed to be anchored in the coalition contract. In fact, it is a testimony to the influence and zeal of the anti-GMO lobby.

Once the cancellation of HannoverGEN was announced, the Hannover branch of Greenpeace also took credit for campaigning against this high school program and celebrated its victory. The Greenpeace anti-GMO activist David Petersen said that the program was too cost-intensive because equipping high school laboratories with state-of-the-art molecular biology equipment had already cost more than 1 million Euros. The previous center-right government, which had initiated the HannoverGEN project, had been planning to expand the program to even more high schools because of its success and national recognition for innovative teaching. According to Petersen, this would have wasted even more taxpayer money without adequately conveying the dangers of using GMOs in agriculture.

The scientific community was shaken up by the decision of the new Social Democrat-Green Party coalition government in Niedersachsen. This was an attack on the academic freedom of schools under the guise of accusing them of promoting special interests, while ignoring that the anti-GMO activists were representing their own special interests. The “study” attacking HannoverGEN was funded by the lucrative “organic” or “natural food” industry! Scientists and science writers such as Martin Ballaschk or Lars Fischer wrote excellent critical articles stating that squashing high-quality, hands-on science programs could not lead to better decision-making. How could ignorant students have a better grasp of GMO risks and benefits than those who receive relevant formal science education and can thus make truly informed decisions? Sadly, this outcry by scientists and science writers did not make much of a difference. It did not seem that the media felt this was much of a cause to fight for. I wonder if the media response would have been just as lackluster if the government had de-funded a hands-on science lab to study the effects of climate change.

In 2014, the government of Niedersachsen announced that it would resurrect an advanced biology laboratory program for high schools under the generic and vague title “Life Science Lab”. By removing from the title the word “Gen”, which seems to trigger visceral antipathy among anti-GMO activists, by de-emphasizing genome science, and by removing any discussion of GMOs from the curriculum, this new program would leave students in the dark about GMOs. Ignorance is bliss from an anti-GMO activist perspective, because the void of scientific ignorance can be filled with fear.

From the very first day that I could vote in Germany during the federal election of 1990, I always viewed the Green Party as a party that represented my generation. A party of progressive ideas, concerned about our environment and social causes. However, the HannoverGEN incident is just one example of how the Green Party is caving in to ideologies, thus losing its open-mindedness and progressive nature. In the United States, the anti-science movement, which attacks teaching climate change science or evolutionary biology at schools, tends to be rooted in the right wing political spectrum. Right wingers or libertarians are the ones who always complain about taxpayer dollars being wasted and used to promote agendas in schools and universities. But we should not forget that there is also a different anti-science movement rooted in the leftist and pro-environmental political spectrum – not just in Germany. As a scientist, I feel that it is becoming increasingly difficult to support the Green Party because of its anti-science stance.

I worry about all anti-science movements, especially those which attack science education. There is nothing wrong with questioning special interests and ensuring that school and university science curricula are truly balanced. But the balance needs to be rooted in scientific principles, not political ideologies. Science education has a natural bias – it is biased towards knowledge that is backed up by scientific evidence. We can hypothetically discuss dangers of GMOs but the science behind the dangers of GMO crops is very questionable. Just like environmental activists and leftists agree with us scientists that we do not need to give climate change deniers and creationists “balanced” treatment in our science curricula, they should also accept that much of the “anti-GMO science” is currently more based on ideology than on actual scientific data. Our job is to provide excellent science education so that our students can critically analyze and understand scientific research, independent of whether or not it supports our personal ideologies.

 

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

STEM Education Promotes Critical Thinking and Creativity: A Response to Fareed Zakaria

Fareed Zakaria recently wrote an article in the Washington Post lamenting the loss of liberal arts education in the United States. However, instead of making a case for a balanced education, which integrates the various forms of creativity and critical thinking promoted by STEM (science, technology, engineering and mathematics) and by a liberal arts education, Zakaria misrepresents STEM education as primarily teaching technical skills and also throws in a few clichés about Asians. You can read my response to his article at 3Quarksdaily.

 


Literature and Philosophy in the Laboratory Meeting

Research institutions in the life sciences engage in two types of regular scientific meet-ups: scientific seminars and lab meetings. The structure of scientific seminars is fairly standard. Speakers give Powerpoint presentations (typically 45 to 55 minutes long) which provide the necessary scientific background, summarize their group’s recent published scientific work and then (hopefully) present newer, unpublished data. Lab meetings are a rather different affair. The purpose of a lab meeting is to share the scientific work-in-progress with one’s peers within a research group and also to update the laboratory heads. Lab meetings are usually less formal than seminars, and all members of a research group are encouraged to critique the presented scientific data and work-in-progress. There is no need to provide much background information because the audience of peers is already well-acquainted with the subject and it is not uncommon to show raw, unprocessed data and images in order to solicit constructive criticism and guidance from lab members and mentors on how to interpret the data. This enables peer review in real-time, so that, hopefully, major errors and flaws can be averted and newer ideas incorporated into the ongoing experiments.


During the past two decades that I have actively participated in biological, psychological and medical research, I have observed very different styles of lab meetings. Some involve brief 5-10 minute updates from each group member; others develop a rotation system in which one lab member has to present the progress of their ongoing work in a seminar-like, polished format with publication-quality images. Some labs have two hour meetings twice a week, other labs meet only every two weeks for an hour. Some groups bring snacks or coffee to lab meetings, others spend a lot of time discussing logistics such as obtaining and sharing biological reagents or establishing timelines for submitting manuscripts and grants. During the first decade of my work as a researcher, I was a trainee and followed the format of whatever group I belonged to. During the past decade, I have been heading my own research group and it has become my responsibility to structure our lab meetings. I do not know which format works best, so I approach lab meetings like our experiments. Developing a good lab meeting structure is a work-in-progress which requires continuous exploration and testing of new approaches. During the current academic year, I decided to try out a new twist: incorporating literature and philosophy into the weekly lab meetings.

My research group studies stem cells and tissue engineering, cellular metabolism in cancer cells and stem cells, and the inflammation of blood vessels. Most of our work focuses on identifying molecular and cellular pathways in cells, and we then test our findings in animal models. Over the years, I have noticed that the increasing complexity of the molecular and cellular signaling pathways and the technologies we employ makes it easy to forget the “big picture” of why we are even conducting the experiments. Determining whether protein A is required for phenomenon X and whether protein B is a necessary co-activator which acts in concert with protein A becomes such a central focus of our work that we may not always remember what it is that compels us to study phenomenon X in the first place. Some of our research has direct medical relevance, but at other times we primarily want to unravel the awe-inspiring complexity of cellular processes. But the question of whether our work is establishing a definitive cause-effect relationship or whether we are uncovering yet another mechanism within an intricate web of causes and effects sometimes falls by the wayside. When asked to explain the purpose or goals of our research, we have become so used to directing a laser pointer onto a slide of a cellular model that it becomes challenging to explain the nature of our work without visual aids.

This fall, I introduced a new component into our weekly lab meetings. After our usual round-up of new experimental data and progress, I suggested that each week one lab member should give a brief 15 minute overview about a book they had recently finished or were still reading. The overview was meant to be a “teaser” without spoilers, explaining why they had started reading the book, what they liked about it, and whether they would recommend it to others. One major condition was to speak about the book without any Powerpoint slides! But there weren’t any major restrictions when it came to the book; it could be fiction or non-fiction and published in any language of the world (but ideally also available in an English translation). If lab members were interested and wanted to talk more about the book, then we would continue to discuss it, otherwise we would disband and return to our usual work. If nobody in my lab wanted to talk about a book then I would give an impromptu mini-talk (without Powerpoint) about a topic relating to the philosophy or culture of science. I use the term “culture of science” broadly to encompass topics such as the peer review process and post-publication peer review, the question of reproducibility of scientific findings, retractions of scientific papers, science communication and science policy – topics which have not been traditionally considered philosophy of science issues but still relate to the process of scientific discovery and the dissemination of scientific findings.

One member of our group introduced us to “For Whom the Bell Tolls” by Ernest Hemingway. He had also recently lived in Spain as a postdoctoral research fellow and shared some of his own personal experiences about how his Spanish friends and colleagues talked about the Spanish Civil War. At another lab meeting, we heard about “Sycamore Row” by John Grisham and the ensuing discussion revolved around race relations in Mississippi. I spoke about “A Tale for the Time Being” by Ruth Ozeki and the difficulties that the book’s protagonist faced as an outsider when her family returned to Japan after living in Silicon Valley. I think that the book which got nearly everyone in the group talking was “Far From the Tree: Parents, Children and the Search for Identity” by Andrew Solomon. The book describes how families grapple with profound physical or cognitive differences between parents and children. The PhD student who discussed the book focused on the “Deafness” chapter of this nearly 1000-page tome, but she also placed it in the broader context of parenting, love and the stigma of disability. We stayed in the conference room long after the planned 15 minutes, talking about being “disabled” or being “differently abled” and the challenges that parents and children face.

In the weeks when nobody had a book they wanted to present, we used the time to touch on the cultural and philosophical aspects of science, such as Thomas Kuhn’s concept of paradigm shifts in “The Structure of Scientific Revolutions”, Karl Popper’s principles of falsifiability of scientific statements, the challenge of reproducibility of scientific results in stem cell biology and cancer research, or the emergence of PubPeer as a post-publication peer review website. Some of the lab members had heard of Thomas Kuhn’s or Karl Popper’s ideas before, but by coupling them to a lab meeting, we were able to illustrate these ideas using our own work. A lot of 20th century philosophy of science arose from ideas rooted in physics. When undergraduate or graduate students take courses on the philosophy of science, it isn’t always easy for them to apply these abstract principles to their own lab work, especially if they pursue a research career in the life sciences. Thomas Kuhn saw Newtonian and Einsteinian theories as distinct paradigms, but what constitutes a paradigm shift in stem cell biology? Is the ability to generate induced pluripotent stem cells from mature adult cells a paradigm shift or “just” a technological advance?

It is difficult for me to know whether the members of my research group enjoy or benefit from these humanities blurbs at the end of our lab meetings. Perhaps they are just tolerating them as eccentricities of the management and maybe they will tire of them. I personally find these sessions valuable because I believe they help ground us in reality. They remind us that it is important to think and read outside of the box. As scientists, we all read numerous scientific articles every week just to stay up-to-date in our area(s) of expertise, but that does not exempt us from also thinking and reading about important issues facing society and the world we live in. I do not know whether discussing literature and philosophy makes us better scientists but I hope that it makes us better people.

 

Note: An earlier version of this article was first published on the 3Quarksdaily blog.


Kuhn, T. (2012). The Structure of Scientific Revolutions. University of Chicago Press. DOI: 10.7208/chicago/9780226458106.001.0001

How Often Do Books Mention Scientists and Researchers?

Here is a graphic showing the usage of the words “scientists”, “researchers” and “soldiers” in English-language books published between 1900 and 2008. The graphic was generated using the Google N-gram Viewer, which scours all digitized books in the Google database for selected words and assesses their relative usage frequencies.


 


It is depressing that soldiers are mentioned more frequently than scientists or researchers (even when the word frequencies of “scientists” and “researchers” are combined) in English-language books, even though the numbers of researchers in the countries which produce most English-language books are comparable to or higher than the numbers of soldiers.

Here are the numbers of researchers (data from the 2010 UNESCO Science report, numbers are reported for the year 2007, PDF) in selected English-language countries and the corresponding numbers of armed forces personnel (data from the World Bank, numbers reported for 2012):

United States: 1.4 million researchers vs. 1.5 million armed forces personnel
United Kingdom: 255,000 researchers vs. 169,000 armed forces personnel
Canada: 139,000 researchers vs. 66,000 armed forces personnel
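To make the comparison concrete, the researcher-to-soldier ratios implied by these figures can be computed in a few lines of Python (a quick sketch using only the numbers quoted above):

```python
# Researcher counts (UNESCO 2010 Science report, data for 2007) vs.
# armed forces personnel (World Bank, data for 2012), as quoted above.
counts = {
    "United States": {"researchers": 1_400_000, "soldiers": 1_500_000},
    "United Kingdom": {"researchers": 255_000, "soldiers": 169_000},
    "Canada": {"researchers": 139_000, "soldiers": 66_000},
}

for country, c in counts.items():
    ratio = c["researchers"] / c["soldiers"]
    print(f"{country}: {ratio:.2f} researchers per member of the armed forces")
```

Only in the United States does the ratio dip slightly below one; the United Kingdom and Canada have roughly 1.5 and 2 times as many researchers as armed forces personnel, respectively.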

I find it disturbing that our books – arguably one of our main cultural legacies – give disproportionately greater space to discussing or describing the military than to our scientific and scholarly endeavors. But I am even more worried about the recent trends. The N-gram Viewer evaluates word usage up until 2008, and the usage of “soldiers” has been steadily increasing since the 1990s, whereas the usage of “scientists” and “researchers” has reached a plateau and is now decreasing. I do not want to over-interpret the importance of relative word frequencies as indicators of society’s priorities, but the previous two surges of “soldiers” usage occurred during the two World Wars, and in 2008 “soldiers” was used as frequently as during the first years of World War II.

It is mind-boggling for us scientists that we have to struggle to get funding for research which has the potential to transform society by providing important new insights into the nature of our universe, life on this planet, our environment and health, whereas the military receives substantially higher amounts of government funding (at least in the USA) for its destructive goals. Perhaps one reason for this discrepancy is that voters hear, see and read much more about wars and soldiers than about science and research. Depictions of heroic soldiers fighting evil make it much easier for voters to go along with allocation of resources to the military. Most of my non-scientist friends can easily name books or movies about soldiers, but they would have a hard time coming up with books and movies about science and scientists. My take-home message from the N-gram Viewer results is that scientists have an obligation to reach out to the public and communicate the importance of science in an understandable manner if they want to avoid the marginalization of science.

Neutrality, Balance and Anonymous Sources in Science Blogging – #scioStandards

This is Part 2 of a series of blog posts in anticipation of the Upholding standards in scientific blogs (Session 10B, #scioStandards) session which I will be facilitating at noon on Saturday, March 1 at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA). Please read Part 1 here. The goal of these blog posts is to raise questions which readers can ponder and hopefully discuss during the session.


1. Neutrality

Neutrality is prized by scientists and journalists. Scientists are supposed to report and analyze their research in a neutral fashion. Similarly, journalistic professionalism requires a neutral and objective stance when reporting or analyzing the news. Nevertheless, scientists and journalists are also aware that there is no perfect neutrality. We are all subject to conscious and unconscious biases, and how we report data or events is colored by them. Not only is it impossible to be truly “neutral”, but one can even question whether “neutrality” should be a universal mandate. Neutrality can make us passive, especially when we see a clear ethical mandate to take action. Should one report in a neutral manner about genocide instead of becoming an advocate for the victims? Should a scientist who observes the destruction of ecosystems report on it in a neutral manner? Is it acceptable, or perhaps even required, for such a scientist to abandon neutrality and become an advocate for protecting the ecosystems?

Science bloggers or science journalists have to struggle to find the right balance between neutrality and advocacy. Political bloggers and journalists who are enthusiastic supporters of a political party will find it difficult to preserve neutrality in their writing, but their target audiences may not necessarily expect them to remain neutral. I am often fascinated and excited by scientific discoveries and concepts that I want to write about, but I also notice how my enthusiasm for science compromises my neutrality. Should science bloggers strive for neutrality and avoid advocacy? Or is it understood that their audiences do not expect neutrality?

 

2. Balance

One way to increase objectivity and neutrality in science writing is to provide balanced views. When discussing a scientific discovery or concept, one can also cite or reference scientists with opposing views. This underscores that scientific opinion is not a monolith and that most scientific findings can and should be challenged. However, the mandate to provide balance can also lead to “false balance” when two opposing opinions are presented as equivalent perspectives, even though one side has little to no scientific evidence to back up its claims. The overwhelming majority of climatologists agree on the importance of anthropogenic global warming, so it would be “false balance” to give equal space to opposing fringe views. Most science bloggers would likewise avoid “false balance” when reporting on the scientific value of homeopathy, since nearly every scientist in the world agrees that homeopathy has no scientific data to back it up.

But how should science bloggers decide what constitutes “necessary balance” versus “false balance” when writing about areas of research where the scientific evidence is more ambivalent? What about a scientific discovery that 80% of scientists consider a landmark finding and 20% believe is a fluke? How does one assess the scientific rigor of the various viewpoints, and how should a blog post reflect these differences in opinion? Press releases from universities or research institutions usually cite only the researchers who conducted a scientific study, but how does one find out about other scientists who disagree with the significance of the new study?

 

3. Anonymous Sources

Most scientific peer review is conducted with anonymous sources. The editors of peer-reviewed scientific journals send out newly submitted manuscripts to expert reviewers in the field, but they try to make sure that the names of the reviewers remain confidential. This helps ensure that the reviewers can comment freely about any potential flaws in the manuscript without having to fear retaliation from authors who might be incensed by the critique. Even in the post-publication phase, anonymous commenters can leave critical comments about a published study at the post-publication peer review website PubPeer. The comments made by anonymous as well as identified commenters at PubPeer played an important role in raising questions about recent controversial stem cell papers. On the other hand, anonymous sources may also use their cover to make baseless accusations and malign researchers. In the case of journals, the responsibility lies with the editors to ensure that their anonymous reviewers are indeed behaving in a professional manner and not abusing their anonymity.

Investigative political journalists also often rely on anonymous sources and whistle-blowers to receive critical information that would have otherwise been impossible to obtain. Journalists are also trained to ensure that their anonymous sources are credible and that they are not abusing their anonymity.

Should science bloggers and science journalists also consider using anonymous sources? Would unnamed scientists provide a more thorough critical appraisal of the quality of scientific research or would this open the door to abuse?

 

I hope that you leave comments on this post, tweet your thoughts using the #scioStandards hashtag and discuss your views at the Science Online conference.

Background Reading in Science Blogging – #scioStandards

There will be so many interesting sessions at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA) that it is going to be difficult to choose which sessions to attend, because one will invariably miss out on concurrent sessions. If you are not too exhausted, please attend one of the last sessions of the conference: Upholding standards in scientific blogs (Session 10B, #scioStandards).


I will be facilitating the discussion at this session, which will take place at noon on Saturday, March 1, just before the final session of the conference. The title of the session is rather vague, and the purpose of the session is for attendees to exchange their views on whether we can agree on certain scientific and journalistic standards for science blogging.

Individual science bloggers have very different professional backgrounds and they also write for a rather diverse audience. Some bloggers are part of larger networks, others host a blog on their own personal website. Some are paid, others write for free. Most bloggers have developed their own personal styles for how they write about scientific studies, the process of scientific discovery, science policy and the lives of people involved in science. Considering the heterogeneity in the science blogging community, is it even feasible to identify “standards” for scientific blogging? Are there some core scientific and journalistic standards that most science bloggers can agree on? Would such “standards” merely serve as informal guidelines or should they be used as measures to assess the quality of science blogging?

These are the kinds of questions that we will try to discuss at the session. I hope that we will have a lively discussion, share our respective viewpoints and see what we can learn from each other. To gauge the interest levels of the attendees, I am going to pitch a few potential discussion topics on this blog and use your feedback to facilitate the discussion. I would welcome all of your responses and comments, independent of whether you intend to attend the conference or the session. I will also post these questions in the Science Online discussion forum.

One of the challenges we face when we blog about specific scientific studies is determining how much background reading is necessary to write a reasonably accurate blog post. Most science bloggers probably read the original research paper they intend to write about, but even this can be challenging at times. Scientific papers are not necessarily long in terms of word count: journals usually restrict original research papers to somewhere between 2,000 and 8,000 words (depending on each scientific journal’s policy and on whether the study is published as a short communication or a full-length article). However, original research papers are also accompanied by four to eight multi-paneled figures with extensive legends.

Nowadays, research papers frequently include additional figures, data-sets and detailed descriptions of scientific methods that are published online and not subject to the word count limit. A 2,000 word short communication with two data figures in the main manuscript may therefore be accompanied by eight “supplemental” online-only figures and an additional 2,000 words of text describing the methods in detail. A single manuscript usually summarizes the results of multiple years of experimental work, which is why this condensed end-product is quite dense. It can take hours to properly study the published research study and understand the intricate details.

Is it enough to merely read the original research paper in order to blog about it? Scientific papers include a brief introduction section, but these tend to be written for colleagues who are well-acquainted with the background and significance of the research. However, unless one happens to blog about a paper that is directly related to one’s own work, most of us probably need additional background reading to fully understand the significance of a newly published study.

An expert on liver stem cells, for example, who wants to blog about the significance of a new paper on lung stem cells will probably need a substantial amount of additional background reading. One may have to read at least one or two older research papers by the authors or their scientific colleagues / competitors to grasp what makes the new study so unique. It may also be helpful to read at least one review paper (e.g. a review article summarizing recent lung stem cell discoveries) to understand the “big picture”. Some research papers are accompanied by scientific editorials which can provide important insights into the strengths and limitations of the paper in question.

All of this reading adds up. If it takes a few hours to understand the main paper that one intends to blog about, and an additional 2-3 hours to read other papers or editorials, a science blogger may end up having to invest 4-5 hours of reading before one has even begun to write the intended blog post.

What strategies have science bloggers developed to manage their time efficiently and make sure they can meet (external or self-imposed) deadlines but still complete the necessary background reading?

Should bloggers provide references and links to the additional papers they consulted?

Should bloggers focus on a narrow area of expertise so that, over time, they develop enough background in that niche to require less additional reading?

Are there major differences in the expectations of how much background reading is necessary? For example, does an area such as stem cell research or nanotechnology require far more background reading because every day numerous new papers are published and it is so difficult to keep up with the pace of the research?

Is it acceptable to take short-cuts? Could one just read the paper that one wants to blog about and forget about additional background reading, hoping that the background provided in the paper is sufficient and balanced?

Can one avoid reading the supplementary figures or texts of a paper and just stick to the main text of a paper, relying on the fact that the peer reviewers of the published paper would have caught any irregularities in the supplementary data?

Is it possible to primarily rely on a press release or an interview with the researchers of the paper and just skim the results of the paper instead of spending a few hours trying to read the original paper?

Or do such short-cuts compromise the scientific and journalistic quality of science blogs?

Would a discussion about expectations, standards and strategies to manage background reading be helpful for participants of the session?