Neutrality, Balance and Anonymous Sources in Science Blogging – #scioStandards

This is Part 2 of a series of blog posts in anticipation of the Upholding standards in scientific blogs (Session 10B, #scioStandards) session which I will be facilitating at noon on Saturday, March 1 at the upcoming ScienceOnline conference (February 27 – March 1, 2014 in Raleigh, NC – USA). Please read Part 1 here. The goal of these blog posts is to raise questions which readers can ponder and hopefully discuss during the session.


1. Neutrality

Neutrality is prized by scientists and journalists. Scientists are supposed to report and analyze their research in a neutral fashion, and journalistic professionalism likewise requires a neutral, objective stance when reporting or analyzing the news. Nevertheless, scientists and journalists are also aware that there is no perfect neutrality. We are all subject to our conscious and unconscious biases, and how we report data or events is colored by them. Not only is it impossible to be truly “neutral”, but one can even question whether “neutrality” should be a universal mandate. Neutrality can make us passive, especially when we see a clear ethical mandate to take action. Should one report in a neutral manner about genocide instead of becoming an advocate for the victims? Should a scientist who observes the destruction of ecosystems report on it in a neutral manner? Is it acceptable, or perhaps even required, for such a scientist to abandon neutrality and become an advocate for protecting the ecosystems?

Science bloggers or science journalists have to struggle to find the right balance between neutrality and advocacy. Political bloggers and journalists who are enthusiastic supporters of a political party will find it difficult to preserve neutrality in their writing, but their target audiences may not necessarily expect them to remain neutral. I am often fascinated and excited by scientific discoveries and concepts that I want to write about, but I also notice how my enthusiasm for science compromises my neutrality. Should science bloggers strive for neutrality and avoid advocacy? Or is it understood that their audiences do not expect neutrality?

 

2. Balance

One way to increase objectivity and neutrality in science writing is to provide balanced views. When discussing a scientific discovery or concept, one can also cite or reference scientists with opposing views. This underscores that scientific opinion is not a monolith and that most scientific findings can and should be challenged. However, the mandate to provide balance can also lead to “false balance”, in which two opposing opinions are presented as equivalent perspectives even though one side has little to no scientific evidence to back up its claims. More than 99% of climatologists agree on the importance of anthropogenic global warming; it would therefore be “false balance” to give equal space to opposing fringe views. Most science bloggers would likewise avoid “false balance” when reporting on the scientific value of homeopathy, since nearly every scientist in the world agrees that homeopathy has no scientific data to back it up.

But how should science bloggers decide what constitutes “necessary balance” versus “false balance” when writing about areas of research where the scientific evidence is more ambivalent? What about a scientific discovery which 80% of scientists consider a landmark finding and 20% believe is a fluke? How does one find out about the scientific rigor of the various viewpoints, and how should a blog post reflect these differences in opinion? Press releases of universities or research institutions usually cite only the researchers who conducted a study, so how does one find the other scientists who disagree with the significance of the new study?

 

3. Anonymous Sources

Most scientific peer review relies on anonymous reviewers. The editors of peer-reviewed scientific journals send newly submitted manuscripts to expert reviewers in the field, but they try to make sure that the names of the reviewers remain confidential. This helps ensure that the reviewers can comment freely on any potential flaws in the manuscript without having to fear retaliation from authors who might be incensed by the critique. Even in the post-publication phase, anonymous commenters can leave critical comments about a published study at the post-publication peer review website PubPeer. The comments made by anonymous as well as identified commenters at PubPeer played an important role in raising questions about recent controversial stem cell papers. On the other hand, anonymous sources may also use their cover to make baseless accusations and malign researchers. In the case of journals, the responsibility lies with the editors to ensure that their anonymous reviewers are behaving in a professional manner and not abusing their anonymity.

Investigative political journalists also often rely on anonymous sources and whistle-blowers to receive critical information that would have otherwise been impossible to obtain. Journalists are also trained to ensure that their anonymous sources are credible and that they are not abusing their anonymity.

Should science bloggers and science journalists also consider using anonymous sources? Would unnamed scientists provide a more thorough critical appraisal of the quality of scientific research or would this open the door to abuse?

 

I hope that you leave comments on this post, tweet your thoughts using the #scioStandards hashtag and discuss your views at the ScienceOnline conference.


Critical Science Writing: A Checklist for the Life Sciences

One major obstacle in the “infotainment versus critical science writing” debate is that there is no universal definition of what constitutes “critical analysis” in science writing. How can we decide whether critical science writing is adequately represented in contemporary science writing or science journalism if we do not have a standardized method of assessing it? For this purpose, I would like to propose the following checklist of points that can be addressed in news articles or blog-posts which focus on the critical analysis of published scientific research. This checklist is intended for the life sciences – biological and medical research – but it can easily be modified and applied to critical science writing in other areas of research. Each category contains examples of questions which science writers can direct at members of the research team or institutional representatives, or address by performing an independent review of the published scientific data. These questions will have to be modified according to the specific context of a research study.

 

1. Novelty of the scientific research:

Most researchers routinely claim that their findings are novel, but are the claims of novelty appropriate? Is the research pointing towards a fundamentally new biological mechanism or introducing a completely new scientific tool? Or does it just represent a minor incremental advance in our understanding of a biological problem?

 

2. Significance of the research:

How does the significance of the research compare to that of other studies in the field? A biological study might uncover new regulators of cell death or cell growth, but how many other such regulators have been discovered in recent years? How does the magnitude of the effect in the study compare to the magnitude of effects in other research studies? Suppressing a gene might prolong the survival of a cell or increase the regeneration of an organ, but have other research groups published similar effects in studies which target other genes? Some research studies report effects that are statistically significant, but are they also biologically significant?

 

3. Replicability:

Have the findings of the scientific study been replicated by other research groups? Does the research study attempt to partially or fully replicate prior research? If the discussed study has not yet been replicated, is there any information available on the general replicability success rate in this area of research?

 

4. Experimental design:

Did the researchers use an appropriate experimental design for the current study by ensuring that they included adequate control groups and addressed potential confounding factors? Were the experimental models appropriate for the questions they asked and for the conclusions they are drawing? Did the researchers study the effects they observed at multiple time points or just at one single time point? Did they report the results of all the time points or did they just pick the time points they were interested in?

Examples of issues: 1) Stem cell studies in which human stem cells are transplanted into injured or diseased mice are often conducted with immune deficient mice to avoid rejection of the human cells. Some studies do not assess whether the immune deficiency itself impacted the injury or disease, which could be a confounding factor when interpreting the results. 2) Studies which investigate the impact of the 24-hour internal biological clock on the expression of genes sometimes perform the studies in humans and animals who maintain a regular sleep-wake schedule. This obscures the cause-effect relationship because one is unable to ascertain whether the observed effects are truly regulated by an internal biological clock or whether they merely reflect changes associated with being awake versus asleep.

 

5. Experimental methods:

Are the methods used in the research study accepted by other researchers? If the methods are completely novel, have they been appropriately validated? Are there any potential artifacts that could explain the findings? How do the findings in a dish (“in vitro”) compare to the findings in an animal experiment (“in vivo”)? If new genes were introduced into cells or into animals, was the level of activity comparable to levels found in nature, or were the gene expression levels 10-, 100- or even 1000-fold higher than physiologic levels?

Examples of issues: In stem cell research, a major problem faced by researchers is how stem cells are defined, what constitutes cell differentiation and how the fate of stem cells is tracked. One common problem that has plagued peer-reviewed studies published in high-profile journals is the inadequate characterization of stem cells and function of mature cells derived from the stem cells. Another problem in the stem cell literature is the fact that stem cells are routinely labeled with fluorescent markers to help track their fate, but it is increasingly becoming apparent that unlabeled cells (i.e. non-stem cells) can emit a non-specific fluorescence that is quite similar to that of the labeled stem cells. If a study does not address such problems, some of its key conclusions may be flawed.

 

6. Statistical analysis:

Did the researchers use appropriate statistical tests to assess the validity of their results? Were the experiments adequately powered (i.e. did they have a sufficient sample size) to draw valid conclusions? Did the researchers pre-specify the number of repeat experiments, animals or human subjects in their experimental groups prior to conducting the studies? Did they modify the number of animals or human subjects in the experimental groups during the course of the study?
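To make the notion of “adequately powered” concrete for readers, here is a minimal sketch of the standard normal-approximation sample-size calculation for a two-sided, two-group comparison of means. The function name and the example effect sizes are my own illustrative choices, not something from any particular study:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate sample size per group for a two-sided two-sample
    comparison of means, with the effect size in Cohen's d units
    (normal approximation; exact t-test answers are slightly larger)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z(power)           # quantile needed to reach the desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Even a "large" effect (d = 0.8) needs roughly 25 animals or subjects
# per group; a "medium" effect (d = 0.5) needs roughly 63.
print(sample_size_per_group(0.8))  # 25
print(sample_size_per_group(0.5))  # 63
```

A study reporting a modest effect with only five or six animals per group would fall far short of these numbers, which is exactly the kind of red flag this checklist item is meant to surface.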

 

7. Consensus or dissent among scientists:

What do other scientists think about the published research? Do they agree with the novelty, significance and validity of the scientific findings as claimed by the authors of the published paper or do they have specific concerns in this regard?

 

8. Peer review process:

What were the major issues raised during the peer review process? How did the researchers address the concerns of the reviewers? Did any journals previously reject the study before it was accepted for publication?

 

9. Financial interests:

How was the study funded? Did the organization or corporation which funded the study have any say in how the study was designed, how the data was analyzed and what data was included in the publication? Do the researchers hold any relevant patents, own stock or receive other financial incentives from institutions or corporations that could benefit from this research?

 

10. Scientific misconduct, fraud or breach of ethics:

Are there any allegations or concerns about scientific misconduct, fraud or breach of ethics in the context of the research study? If such concerns exist, what are the specific measures taken by the researchers, institutions or scientific journals to resolve the issues? Have members of the research team been previously investigated for scientific misconduct or fraud? Are there concerns about how informed consent was obtained from the human subjects?

 

This is just a preliminary list and I would welcome any feedback on how to improve this list in order to develop tools for assessing the critical analysis content in science writing. It may not always be possible to obtain the pertinent information. For example, since the peer review process is usually anonymous, it may be impossible for a science writer to find out details about what occurred during the peer review process if the researchers themselves refuse to comment on it.

One could assign a point value to each of the categories in this checklist and then score individual science news articles or science blog-posts that discuss specific research studies. A greater in-depth discussion of any issue should result in a greater point score for that category.

Points would not only be based on the number of issues raised but also on the quality of analysis provided in each category. Listing all the funding sources is not as helpful as providing an analysis of how the funding could have impacted the data interpretation. Similarly, if the science writer notices errors in the experimental design, it would be very helpful for the readers to understand whether these errors invalidate all major conclusions of the study or just some of its conclusions. Adding up all the points would then generate a comprehensive score that could become a quantifiable indicator of the degree of critical analysis contained in a science news article or blog-post.
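The scoring idea above can be sketched in a few lines of code. This is only an illustration of the arithmetic: the category names mirror the checklist, but the 0-3 depth ratings and the example article are hypothetical, since the post deliberately leaves the weighting scheme open:

```python
# Checklist categories from the post; ratings are hypothetical depth
# scores (0 = not addressed, 3 = in-depth, quality-weighted analysis).
CATEGORIES = [
    "novelty", "significance", "replicability", "experimental design",
    "experimental methods", "statistics", "consensus or dissent",
    "peer review", "financial interests", "misconduct or ethics",
]

def critical_analysis_score(ratings):
    """Sum per-category depth ratings into one overall checklist score;
    categories the article never touches contribute zero."""
    return sum(ratings.get(category, 0) for category in CATEGORIES)

# A hypothetical blog-post that discusses four of the ten categories:
article_ratings = {"novelty": 2, "significance": 1,
                   "experimental design": 2, "statistics": 3}
print(critical_analysis_score(article_ratings))  # 8 out of a possible 30
```

The interesting design question, as the post notes, is how to assign the per-category ratings so that they reward quality of analysis rather than mere mention of an issue.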

 

********************

EDIT: The checklist now includes a new category – scientific misconduct, fraud or breach of ethics.

‘Infotainment’ and Critical Science Journalism

I recently wrote an op-ed piece for the Guardian in which I suggested that there is too much emphasis on ‘infotainment’ in contemporary science journalism and too little critical science journalism. The response to the article was unexpectedly strong, provoking some hostile comments on Twitter, and some of the angriest comments seemed to indicate a misunderstanding of the core message.

One of the themes that emerged in response to the article was the Us-vs.-Them perception that “scientists” were attacking “journalists”. This was surprising because, as a science blogger, I assumed that I, too, was a science journalist. My definitions of scientist and journalist tend to be rather broad and inclusive. I think of scientists with a special interest and expertise in communicating science to a broad readership as science journalists. I also consider journalists with a significant interest and expertise in science to be scientists. My inclusive definitions of scientists and journalists have been influenced in part by an article written by Bora Zivkovic, an outstanding science journalist and scientist and the person who inspired me to become a science blogger. As Bora Zivkovic reminds us, scientists and journalists have a lot in common: they are supposed to be critical and skeptical, they obtain and analyze data, and they communicate their findings to an audience after carefully evaluating their data. However, it is apparent that some scientists and journalists are protective of their respective domains. Some scientists may not accept science journalists as fellow scientists unless they are part of an active science laboratory. Conversely, some journalists may not accept scientists as fellow journalists unless their primary employer is a media organization. For the purpose of this discussion, I will therefore use the more generic term “science writing” instead of “science journalism”.

Are infotainment science writing and critical science writing opposites? This was one of the major questions that arose in the Twitter discussion. The schematic below illustrates infotainment and critical science writing.

[Figure: triangle schematic with information, entertainment and critical analysis as its corners]

Although this schematic of a triangle might seem oversimplified, it is a tool that I use to help me in my own science writing. “Critical science writing” (base of the triangle) tends to provide information and critical analysis of scientific research to the readers. Infotainment science writing minimizes the critical analysis of the research and instead focuses on presenting content about scientific research in an entertaining style. Scientific satire as a combination of entertainment and critical analysis was not discussed in the Guardian article, but I think that this too is a form of science writing that should be encouraged.

Articles or blog-posts can fall anywhere within this triangle, which is why infotainment and critical science writing are not true dichotomies; they simply have distinct emphases. Infotainment science writing can include some degree of critical analysis, and critical science writing can be somewhat entertaining. However, it is rare for science writing (or other forms of writing) to strike a balance that includes accurate scientific information, entertainment, and a profound critical analysis that challenges the scientific methodology or scientific establishment, all in one article. In American political journalism, Jon Stewart and the Daily Show are perhaps one example of how one can inform, entertain and be critical, all in one succinct package. Contemporary science writing which is informative and entertaining (‘infotainment’) rarely challenges the scientific establishment the way Jon Stewart challenges the political establishment.

Is ‘infotainment’ a derogatory term? Some readers of the Guardian article assumed that I was not only claiming that all science journalism is ‘infotainment’, but also putting down ‘infotainment’ science journalism. There is nothing wrong with writing about science in an informative and entertaining manner, so ‘infotainment’ science writing should not be construed as a derogatory term. There are differences between good and sloppy infotainment science writing. Good infotainment science writing is accurate in terms of the scientific information it conveys, whereas sloppy infotainment science writing discards scientific accuracy to maximize hype and entertainment value. Similarly, there is good and sloppy critical science writing. Good critical science writing is painstakingly careful in its analysis of the scientific data and its context, reviewing numerous related studies in the field and putting the scientific work in perspective. Sloppy critical science writing, on the other hand, might single out one scientific study and attempt to discredit a whole area of research without examining context. Examples of sloppy critical science writing can be found in the anti-global warming literature, which homes in on a few minor scientific discrepancies but ignores the fact that 98-99% of climate scientists agree that humans are the primary cause of global warming.

Instead of just discussing these distinctions in abstract terms, I will use some of my prior blog-posts to illustrate differences between different types of science writing, such as infotainment, critical science writing or scientific satire. I find it easier to critique my own science writing than that of other science writers, probably because I am plagued by the same self-doubts that most writers struggle with. The following analysis may be helpful for other science writers who want to see where their articles and blog-posts fall on the information – critical analysis – entertainment spectrum.

 

A. Infotainment science writing

Infotainment science writing allows me to write about exciting or unusual new discoveries in a fairly manageable amount of time, without having to extensively review the literature in the field or perform an in-depth analysis of the statistics and every figure in the study under discussion. After providing some background for the non-specialist reader, one can focus on faithfully reporting the data in the paper and the implications of the work without discussing all the major caveats and pitfalls of the published paper. This writing provides a bit of escapist pleasure for me, because so much of my time as a scientist is spent critically analyzing the experimental data acquired in my own laboratory or performing in-depth reviews of scientific manuscripts and grants, either for collaborators or as a peer reviewer. Infotainment science writing is a reminder of the big picture, excitement and promise of science, even though it might gloss over important experimental flaws and caveats of scientific studies.

Infotainment Science Writing Example 1: Using Viagra To Burn Fat

This blog-post discusses a paper published in the FASEB Journal which suggested that white (“bad”) fat cells could be converted into brown (“good”) fat cells using Viagra. The study reminded me of a collision between two groups of spam emails: weight loss meets Viagra. The blog-post provides background on white and brown adipose tissue and then describes the key findings of the paper. A few limitations of the study are mentioned, such as the fact that the researchers never document weight loss in the mice they treated, and the fact that the paper ignores the long-term consequences of chronic Viagra treatment. The reason I consider this piece an infotainment style of science writing is that there were numerous further criticisms of the research study that could have been brought to the attention of the readers. The researchers concluded that the fat cells were being converted into brown fat based only on indirect measures, without adequately measuring metabolic activity and energy expenditure. It is also not clear why the researchers did not extend the duration of the animal studies to show that the Viagra treatment could induce weight loss. If all of these criticisms had been included in the blog-post, the fun Viagra-weight loss idea would have been drowned in a whirlpool of details.

Infotainment Science Writing Example 2: The Healing Power of Sweat Glands

The idea of “icky” sweat glands promoting wound healing was the main hook. The background of this blog-post defines smelly apocrine sweat glands versus eccrine sweat glands, and the findings of the paper published in the American Journal of Pathology are summarized. Limitations of the study included the limited investigation of the mechanism of regeneration (do the cells primarily proliferate or differentiate to promote wound healing?) and an important question: does sweating itself affect the regenerative capacity of the sweat glands? Although these limitations are briefly mentioned in the blog-post, they are not discussed in depth, and no comparison is made between the observed wound healing effects of sweat gland cells and the wound healing capacity of other cells. This blog-post is heavy on the “information” end, and it provides little entertainment other than evoking the image of a healing sweat gland.

 

B. Critical science writing

Critical science writing is exceedingly difficult because it is time-consuming and challenging to present critiques of scientific studies in a jargon-free manner. An infotainment science blog-post can be written in a matter of a few hours. A critical science writing piece, on the other hand, requires an in-depth review of multiple studies in the field to better understand the limitations and strengths of each report.

Critical Science Writing Example 1: Bone Marrow Cell Infusions Do NOT Improve Cardiac Function After Heart Attack

This blog-post describes an important negative study conducted in Switzerland. Bone marrow cells were injected into the hearts of patients in one of the largest randomized cardiovascular cell therapy trials performed to date. The researchers found no benefit of the cell injections on cardiac function. This research has important implications because it could stave off quack medicine. Clinics in some countries offer “miracle cures” to cardiovascular patients, claiming that the stem cells in the bone marrow will heal their diseased hearts. Desperate patients who fall for these scams fly to other countries, undergo risky procedures and end up spending $20,000 or $40,000 out of pocket for treatments that simply do not work. This blog-post is in the critical science writing category because it not only mentions some limitations of the Swiss study, but also puts the clinical trial into the context of the problems associated with unproven therapies. It does not specifically discuss other bone marrow injection studies, but it provides a link to an editorial I wrote for an academic journal which contains all the pertinent references. A number of readers of the Guardian article raised the question of whether one can make such critical science writing entertaining, but I am not sure how to incorporate entertainment into this type of analysis.

Critical Science Writing Example 2: Cellular Alchemy: Converting Fibroblasts Into Heart Cells

This blog-post was a review of multiple distinct studies on converting fibroblasts – found either in the skin or the heart – into beating heart cells. The various research groups described the outcomes of their research, but the studies were not perfect replications of each other. For example, one study that reported a very low efficiency of fibroblast conversion not only used cells derived from older animals but also used a different virus to introduce the genes. The challenge for a critical science writer is to decide which of these differences need to be highlighted, because obviously not all differences and discrepancies can be adequately accommodated in a single article or blog-post. I decided to highlight the electrical heterogeneity of the generated cells as the major limitation of the research because this seemed like the most likely problem when trying to move this work forward into clinical therapies. Regenerating a damaged heart following a heart attack would be the ultimate goal, but do we really want to create islands of heart cells that have distinct electrical properties and could give rise to heart rhythm problems?

 

C. Science satire

In closing, I just want to briefly mention scientific satire – satirical or humorous descriptions of real-life science. One of the best science satire websites is PhD Comics, because the comics do a brilliant job of portraying real world science issues, such as the misery of PhD students and the vicious cycle of not having enough research funding to apply for research funding. My own attempts at scientific satire take the form of spoof news articles such as “Professor Hands Out “Erase Undesirable Data Points” Coupons To PhD Students” or “Academic Publisher Unveils New Journal Which Prevents All Access To Its Content”. Science satire is usually not informative, but it can provide entertainment and some critical introspection. This kind of satire is best suited for people with experiences that allow them to understand inside jokes. I hope that we will see more writing that satirizes the working world of how scientists interpret data, compete for tenure and grants or interact with graduate students.

 

[View the story “Reactions to the “Critical Science Journalism” piece in The Guardian” on Storify]

The ENCODE Controversy And Professionalism In Science

The ENCODE (Encyclopedia Of DNA Elements) project received quite a bit of attention when its results were publicized last year. This project involved a very large consortium of scientists with the goal of identifying all the functional elements in the human genome. In September 2012, 30 papers were published in a coordinated release, and their extraordinary claim was that roughly 80% of the human genome was “functional”. This was in direct contrast to the prevailing view among molecular biologists that the bulk of human DNA was just “junk DNA”, i.e. sequences of DNA to which one could not assign any specific function. The ENCODE papers contained huge amounts of data, collating the work of hundreds of scientists who had worked on this for nearly a decade. But what garnered the most attention among scientists, the media and the public was the “80%” claim and the supposed “death of junk DNA“.

Soon after the discovery of DNA, the primary function ascribed to it was its role as a template from which messenger RNA could be transcribed and then translated into functional proteins. Using this definition of “function”, only 1-2% of human DNA would be functional, because only that fraction actually encodes proteins. The term “junk DNA” was coined to describe the 98-99% of non-coding DNA which appeared to primarily represent genetic remnants of our evolutionary past without any specific function in present-day cells.

However, in the past decades, scientists have uncovered more and more functions for the non-coding DNA segments that were previously thought to be merely “junk”. Non-coding DNA can, for example, act as a binding site for regulatory proteins and exert an influence on protein-coding DNA. There has also been an increasing awareness of the presence of various types of non-coding RNA molecules, i.e. RNA molecules which are transcribed from the DNA but not subsequently translated into proteins. Some of these non-coding RNAs have known regulatory functions, others may not have any or their functions have not yet been established.

Despite these discoveries, most scientists were in agreement that only a small fraction of DNA was “functional”, even when all the non-coding pieces of DNA with known functions were included. The bulk of our genome was still thought to be non-functional. The term “junk DNA” was used less frequently by scientists, because it was becoming apparent that we were probably going to discover even more functional elements in the non-coding DNA.

In September 2012, everyone was talking about “junk DNA” again, because the ENCODE scientists claimed their data showed that 80% of the human genome was “functional”. Most scientists had expected that the ENCODE project would uncover some new functions for non-coding DNA, but the 80% figure was way out of proportion to what everyone had expected. The problem was that the ENCODE project used a very low bar for “function”. Binding to the DNA or any kind of chemical DNA modification was already seen as a sign of “function”, without necessarily proving that these pieces of DNA had any significant impact on the function of a cell.

The media hype with the “death of junk DNA” headlines and the lack of discussion about what constitutes function were appropriately criticized by many scientists, but the recent paper by Dan Graur and colleagues, “On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE”, has grabbed everyone’s attention, not so much because it criticizes the claims made by the ENCODE scientists, but because of the sarcastic tone it uses to ridicule ENCODE.

There have been so many other blog posts and articles that either praise or criticize the Graur paper, so I decided to list some of them here:

1. PZ Myers writes “ENCODE gets a public reaming” and seems to generally agree with Graur and colleagues.

2. Ashutosh Jogalekar says Graur’s paper is a “devastating takedown of ENCODE in which they pick apart ENCODE’s claims with the tenacity and aplomb of a vulture picking apart a wildebeest carcass.”

3. Ryan Gregory highlights some of the “zingers” in the Graur paper.

Other scientists, on the other hand, agree with some of the conclusions of the Graur paper and its criticism of how the ENCODE data was presented, but disagree with the sarcastic tone:

1. OpenHelix reminds us that this kind of “spanking” should not distract from all the valuable data that ENCODE has generated.

2. Mick Watson shows how Graur and colleagues could have presented their key critiques in a non-confrontational manner and fostered a constructive debate.

3. Josh Witten points out the irony of Graur accusing ENCODE of seeking hype, even though Graur and his colleagues seem to use sarcasm and ridicule to increase the visibility of their own work. I think Josh’s blog post is an excellent analysis of the problems with ENCODE and of the problems associated with Graur’s tone.

On Twitter, I engaged in a debate with Benoit Bruneau, my fellow Scilogs blogger Malcolm Campbell and Jonathan Eisen, and I thought it would be helpful to share the Storify version here. There was a general consensus that even though some of the points made by Graur and colleagues are indeed correct, their sarcastic tone was uncalled for. Scientists can be critical of each other, but they can and should voice that criticism in a respectful and professional manner, without resorting to insults or mockery.

[<a href="//storify.com/jalees_rehman/encode-debate" target="_blank">View the story “ENCODE controversy and professionalism in scientific debates” on Storify</a>]

Graur D, Zheng Y, Price N, Azevedo RB, Zufall RA, & Elhaik E (2013). On the immortality of television sets: “function” in the human genome according to the evolution-free gospel of ENCODE. Genome Biology and Evolution. PMID: 23431001

Is Kindness Key to Happiness and Acceptance for Children?

The study “Kindness Counts: Prompting Prosocial Behavior in Preadolescents Boosts Peer Acceptance and Well-Being” published by Layous and colleagues in the journal PLOS One on December 26, 2012 was cited by multiple websites as proof of how important it is to teach children to be kind. NPR commented on the study in the blog post “Random Acts Of Kindness Can Make Kids More Popular”, and the study was also discussed in ScienceDaily in “Kindness Key to Happiness and Acceptance for Children”, Fox News in “No bullies: Kind kids are most popular” and the Huffington Post in “Kind Kids Are Happier And More Popular (STUDY)”.

According to most of these news reports, the design of the study was rather straightforward. Schoolchildren ages 9 to 11 in a Vancouver school district were randomly assigned to two groups for a four-week intervention: half of the children were asked to perform kind acts, while the other half were asked to keep track of pleasant places they visited. Happiness and acceptance by peers were assessed at the beginning and the end of the four-week intervention period. The children were allowed to choose the “acts of kindness” or the “pleasant places”. The “acts of kindness” group chose acts such as sharing their lunch or giving their mothers a hug. The “pleasant places” group chose to visit places such as the playground or a grandparent’s house.

At the end of the four-week intervention, both groups of children showed increased signs of happiness, but the news reports differed on the intervention’s impact on the children’s acceptance.

 

The NPR blog reported:

… the children who performed acts of kindness were much more likely to be accepting of their peers, naming more classmates as children they’d like to spend time with.

This would mean that the children performing the “acts of kindness” were the ones that became more accepting of others.

 

The conclusion in the Huffington Post was quite different:

 

The students were asked to report how happy they were and identify classmates they would like to work with in school activities. After four weeks, both groups said they were happier, but the kids who had performed acts of kindness reported experiencing greater acceptance from their peers – they were chosen most often by other students as children the other students wanted to work with.

The Huffington Post interpretation (a re-post from Livescience) was that the children performing the “acts of kindness” became more accepted by others, i.e. more popular.

 

Which of the two interpretations was the correct one? Furthermore, how significant were the improvements in happiness and acceptance?

 

I decided to read the original PLOS One paper and I was quite surprised by what I found:

The manuscript (in its published form, as of December 27, 2012) had no figures and no tables in the “Results” section. The entire “Results” section consisted of just two short paragraphs. The first paragraph described the affect and happiness scores:

 

Consistent with previous research, overall, students in both the kindness and whereabouts groups showed significant increases in positive affect (γ00 = 0.15, S.E. = 0.04, t(17) = 3.66, p<.001) and marginally significant increases in life satisfaction (γ00 = 0.09, S.E. = 0.05, t(17) = 1.73, p = .08) and happiness (γ00 = 0.11, S.E. = 0.08, t(17) = 1.50, p = .13). No significant differences were detected between the kindness and whereabouts groups on any of these variables (all ps>.18). Results of t-tests mirrored these analyses, with both groups independently demonstrating increases in positive affect, happiness, and life satisfaction (all ts>1.67, all ps<.10).

 

There are no actual values given, so it is difficult to know how big the changes are. If a starting score is 15, then a change of 1.5 is only a 10% change; if the starting score is 3, the same change of 1.5 represents a 50% change. The Methods section of the paper also does not describe the statistics employed to analyze the data. Relying on arbitrary p-value thresholds is problematic, but if one were to use the infamous threshold of 0.05 for significance, one would conclude that there was a significant change in the affect or mood of the children (p<0.001), a marginally significant trend of increased life satisfaction (p=0.08) and no significant change in happiness (p=0.13).
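The arithmetic behind this caveat is simple enough to sketch; the baseline scores of 15 and 3 below are hypothetical, since the paper reports none:

```python
def relative_change(change, baseline):
    """Express a raw score change as a fraction of the (unreported) baseline."""
    return change / baseline

# The same raw change of 1.5 points reads very differently
# depending on the starting score:
print(relative_change(1.5, 15))  # 0.1 -> a 10% change on a baseline of 15
print(relative_change(1.5, 3))   # 0.5 -> a 50% change on a baseline of 3
```

Without the baseline means, readers simply cannot place the reported coefficients anywhere on this scale.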

It is surprising that the authors do not show the actual scores for each of the two groups. After all, one of the goals of the study was to test whether performing “acts of kindness” has a bigger impact on happiness and acceptance than visiting “pleasant places” (the “whereabouts” group). There is a generic statement, “No significant differences were detected between the kindness and whereabouts groups on any of these variables (all ps>.18)”, but what were the actual happiness and satisfaction scores for each group? The next sentence is also cryptic: “Results of t-tests mirrored these analyses, with both groups independently demonstrating increases in positive affect, happiness, and life satisfaction (all ts>1.67, all ps<.10).” Does this mean that p<0.1 was the threshold of significance? Do these p-values refer to the post-intervention versus pre-intervention analysis for each tested variable in each of the two groups? If so, why not show the actual data for both groups?

 

The second (and final) paragraph of the Results section described acceptance of the children by their peers. Children were asked which classmates they “would like to be in school activities [i.e., spend time] with”:

 

All students increased in the raw number of peer nominations they received from classmates (γ00 = 0.68, S.E. = 0.27, t(17) = 2.37, p = .02), but those who performed kind acts (M = +1.57; SD = 1.90) increased significantly more than those who visited places (M = +0.71; SD = 2.17), γ01 = 0.83, S.E. = 0.39, t(17) = 2.10, p = .05, gaining an average of 1.5 friends. The model excluded a nonsignificant term controlling for classroom size (p = .12), which did not affect the significance of the kindness term. The effects of changes in life satisfaction, happiness, and positive affect on peer acceptance were tested in subsequent models and all found to be nonsignificant (all ps>.54). When controlling for changes in well-being, the effect of the kindness condition on peer acceptance remained significant. Hence, changes in well-being did not predict changes in peer acceptance, and the effect of performing acts of kindness on peer acceptance was over and above the effect of changes in well-being.

 

This is again just a summary of the data, not the actual data itself. Going to “pleasant places” increased the average number of “friends” (I am not sure I would use “friend” to describe someone who nominates me as a potential partner in a school activity) by 0.71, whereas performing “acts of kindness” increased it by 1.57. This did answer the question raised by the conflicting news reports: according to the presented data, the “acts of kindness” kids became more accepted by others, and there was no data on whether they also became more accepting of others. I then looked at the Methods section to understand the statistics and models used for the analysis and found that no details were included in the paper. The Methods section just ended with the following sentences:

 

Pre-post changes in self-reports and peer nominations were analyzed using multilevel modeling to account for students’ nesting within classrooms. No baseline condition differences were found on any outcome variables. Further details about method and results are available from the first author.

 

Based on reviewing the actual paper, I am quite surprised that PLOS One accepted it for publication. The paper presents minimal data, no actual baseline scores for peer acceptance or happiness, and incomplete methods, yet carries the rather grand title “Kindness Counts: Prompting Prosocial Behavior in Preadolescents Boosts Peer Acceptance and Well-Being” despite the marginally significant results. One is left with many unanswered questions:

1) What if kids had not been asked to perform additional “acts of kindness” or additional visits to “pleasant places” and had instead merely logged these positive activities that they usually performed as part of their routine? This would have been a very important control group.

2) Why did the authors show only brief summaries of the analyses and omit the actual affect, happiness, satisfaction and peer acceptance data?

3) Did the kids in both groups also become more accepting of their peers?
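One thing the reported summary statistics do permit is a rough standardized effect size for the peer-nomination result (mean change +1.57, SD 1.90 for the kindness group versus +0.71, SD 2.17 for the whereabouts group). A minimal sketch, assuming equal group sizes since the paper does not report them:

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d using a pooled SD; assumes equal group sizes."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# Change in peer nominations: kindness group vs. whereabouts group
d = cohens_d(1.57, 1.90, 0.71, 2.17)
print(round(d, 2))  # 0.42
```

By conventional rules of thumb, this would be a small-to-medium effect, which puts the title’s “Boosts Peer Acceptance” into perspective; a proper calculation would of course require the group sizes and raw data that the paper does not provide.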

 

It is quite remarkable that going to places one likes, such as a shopping mall, is just as effective as pro-social behavior (performing “acts of kindness”) in terms of improving happiness and well-being. The visits to pleasant places also helped gain peer acceptance, just not quite as much as performing acts of kindness. However, the somewhat selfish-sounding headline “Hanging out at the mall makes kids happier and a bit more popular” is not as attractive as the warm and fuzzy headline “Random acts of kindness can make kids more popular”. This may be the reason why the “prosocial” or “kindness” aspect of this study was emphasized so strongly by the news media.

 

In summary, the limited data in this published paper suggest that children who are asked to intentionally visit places they like and keep track of these visits for four weeks become happier, much like kids who make an effort to perform additional acts of kindness. Both groups of children gained acceptance by their peers, but the children who performed acts of kindness fared slightly better. There are no clear descriptions of the statistical methods, no actual scores for the two groups (only the changes in scores are shown) and important control groups (such as children who keep track of their usual positive activities without increasing them) are missing. Therefore, definitive conclusions cannot be drawn from these limited data. Unfortunately, none of the above-mentioned news reports highlighted these weaknesses; instead, they jumped on the bandwagon of interpreting the study as scientific evidence for the importance of kindness. Some of the titles of the news reports even made references to bullying, even though bullying was not assessed in the study at all.

This does not mean that we should discourage our children from being kind. On the contrary, there are many moral reasons to encourage our children to be kind, and there is no need for a scientific justification for kindness. However, if one does invoke science as a reason for kindness, it should be based on scientifically rigorous and comprehensive data.

 

The PhD Route To Becoming a Science Writer

If you know that you want to become a science writer, should you even bother with obtaining a PhD in science? There is no easy answer to this question. Any answer is bound to reflect the personal biases and experiences of the person answering the question. The science writer Akshat Rathi recently made a good case for why an aspiring science writer should not pursue a PhD. I would like to offer a different perspective, which is primarily based on my work in the life sciences and may not necessarily apply to other scientific disciplines.

I think that obtaining a PhD in science is a very reasonable path for an aspiring science writer, and I will list some of the “Pros” as well as the “Cons” of going the PhD route. Each aspiring science writer has to weigh the “Pros” and “Cons” carefully and reach a decision that is based on their individual circumstances and goals.

Pros: The benefits of obtaining a science PhD

 

1. Actively engaging in research gives you a first-hand experience of science

A PhD student works closely with a mentor to develop and test hypotheses, learn how to perform experiments, analyze data and reach conclusions based on the data. Scientific findings are rarely clear-cut. A significant amount of research effort is devoted to defining proper control groups, dealing with outliers and trouble-shooting experiments that have failed. Exciting findings are not always easy to replicate. A science writer who has had to actively deal with these issues may be in a better position to appreciate these intricacies and pitfalls of scientific research than someone without this first-hand experience.

 

2. PhD students are exposed to writing opportunities

All graduate students are expected to write their own PhD thesis. Many PhD programs also require that the students write academic research articles, abstracts for conferences or applications for pre-doctoral research grants. When writing these articles, PhD students usually work closely with their faculty mentors. Most articles or grant applications undergo multiple revisions until they are deemed to be ready for submission. The process of writing an initial draft and then making subsequent revisions is an excellent opportunity to improve one’s writing skills.

Most of us are not born with an innate talent for writing. To develop writing skills, aspiring writers need to practice and learn from the critiques of their peers. The PhD mentor, the members of the thesis committee and other graduate students or postdoctoral fellows can provide valuable critiques during graduate school. Even though most of this feedback will likely focus on the science and not the writing, it can reveal whether or not the readers were able to clearly understand the core ideas that the student was trying to convey.

 

3. Presentation of one’s work

Most PhD programs require that students present their work at departmental seminars and at national or international conferences. Oral presentations for conferences need to be carefully crafted so that the audience learns about the background of the work, the novel findings and the implications of the research – all within the tight time constraint of a 15-20 minute time slot. A good mentor will work with PhD students to teach them how to communicate the research findings in a concise and accurate manner. Some presentations at conferences take the form of a poster, but the challenge of designing a first-rate poster is quite similar to that of a short oral presentation. One has to condense months or years of research data into a very limited space. Oral presentations as well as poster presentations are excellent opportunities to improve one’s communication skills, which are a valuable asset for any future science writer.

 

4. Peer review

Learning to perform an in-depth critical review of scientific work is an important pre-requisite for an aspiring science writer. When PhD students give presentations at departmental seminars or at conferences, they interact with a broad range of researchers, who can offer novel perspectives on the work that are distinct from what the students may have encountered in their own laboratory. Such scientific dialogue helps PhD students learn how to critically evaluate their own scientific results and realize that there can be many distinct interpretations of their data. Manuscripts or grant applications submitted by the PhD student undergo peer review by anonymous experts in the field. The reviews can be quite harsh and depressing, but they also help PhD students and their mentors identify potential flaws in their scientific work. The ability to critically evaluate scientific findings is further enhanced when PhD students participate in journal clubs to discuss published papers or when they assist their mentors in the peer review of manuscripts.

 

5. Job opportunities

Very few writers derive enough income from their writing to cover their basic needs. This is true not only for science writers but for writers in general, and it forces writers to take on jobs that help pay the bills. A PhD degree provides the aspiring science writer with a broad range of professional opportunities in academia, industry or government. After completing the PhD program, the science writer can take on such a salaried job while building a writing portfolio and seeking out a paid position as a science writer.

 

6. Developing a scientific niche

It is not easy to be a generalist when it comes to science writing. Most successful science writers acquire in-depth knowledge in selected areas of science. This enables them to understand the technical jargon and methodologies used in that area of research and read the original scientific papers so that they do not have to rely on secondary sources for their science writing. Conducting research, writing and reviewing academic papers and attending conferences during graduate school all contribute to the development of such a scientific niche. Having such a niche is especially important when one starts out as a science writer, because it helps define the initial focus of the writing and it also provides “credentials” in the eyes of prospective employers. This does not mean that one is forever tied to this scientific niche. Science writers and scientists routinely branch out into other disciplines, once they have established themselves.

 

Cons: The disadvantages of obtaining a science PhD

 

1. Some PhD mentors abuse their graduate students

It is no secret that there are a number of PhD mentors who treat graduate students as if they were merely an additional pair of hands. Instead of being given opportunities to develop thinking and writing skills, students are sometimes forced to just produce large amounts of experimental data.

 

2. Some of the best science writers did not obtain PhDs in science

Even though I believe that obtaining a PhD in science is a good path to becoming a science writer, I am also aware that many excellent science writers did not take this route. Instead, they focused on developing their writing skills in other venues. One such example is Steve Silberman, a highly regarded science writer who has written many outstanding feature articles for magazines and blog posts for his superb PLOS blog Neurotribes. Steve writes about a diverse array of topics related to neuroscience and psychology, but has also developed certain niche areas of expertise, such as autism research.

 

3. Science writer is not a career that garners much respect among academics

PhD degrees are usually obtained under the tutelage of tenure-track or tenured academics. Their natural bias is to assume that “successful” students should follow a similar career path, i.e. obtain a PhD, engage in postdoctoral research and pursue a tenure-track academic career. Unfortunately, alternate career paths, such as becoming a science writer, are not seen in a very positive light. The mentor’s narcissistic pleasure of seeing a trainee follow in their footsteps is not the only reason for this. Current academic culture is characterized by a certain degree of snobbery that elevates academic research careers and looks down on alternate careers. This lack of respect for alternate careers can be very disheartening for the student. Some PhD mentors or programs may not even take on a student who discloses that their ultimate goal is to become a science writer instead of pursuing a tenure-track academic career.

 

4. A day only has 24 hours

Obtaining a PhD is a full-time job. Conducting experiments, analyzing and presenting data, reading journal articles, writing chapters for the thesis and manuscripts – all of these activities are very time-consuming. It is not easy to carve out time for science writing on the side, especially if the planned science writing is not directly related to the PhD research.

 

Choosing the right environment

 

The caveats mentioned above highlight that a future science writer has to carefully choose a PhD program. The labs/mentors that publish the most papers in high-impact journals or that happen to be located in one’s favorite city may not necessarily be the ones that are best suited to prepare the student for a future career as a science writer. On the other hand, a lab that has its own research blog indicates an interest in science communication and writing. A frank discussion with a prospective mentor about the career goal of becoming a science writer will also reveal how the mentor feels about science writing and whether the mentor would be supportive of such an endeavor. The most important take home message is that the criteria one uses for choosing a PhD program have to be tailored to the career goal of becoming a science writer.

 

Image via Wikimedia Commons(Public Domain): Portrait of Dmitry Ivanovich Mendeleev wearing the Edinburgh University professor robe by Ilya Repin.