Shared Responsibilities for Climate Change Mitigation

The dangers of climate change pose a threat to all of humankind and to ecosystems all over the world. Does this mean that all humans need to equally shoulder the responsibility of mitigating climate change and its effects? The concept of CBDR (common but differentiated responsibilities) is routinely discussed at international negotiations about climate change mitigation. The underlying principle of CBDR in the context of climate change is that highly developed countries have historically contributed far more to climate change and therefore have a greater responsibility to reduce their carbon footprint than less developed countries. Vehicle ownership in the United States stands at approximately 90 cars per 100 people, whereas in India it is about 5 cars per 100 people. The total per capita carbon footprint includes a plethora of factors such as carbon emissions derived from industry, air travel and the electricity consumption of individual households. As of 2015, the per capita carbon footprint in the United States was ten times higher than that of India, and the discrepancy in historical per capita carbon footprints is greater still.

CBDR recognizes that while mitigating future carbon emissions is a shared responsibility for all countries, highly developed countries, which have contributed substantially to global carbon emissions and climate change for more than a century, have a greater responsibility to rein in emissions going forward than less developed countries. However, the idea of “differentiated” responsibilities has emerged as a rather contentious issue. Some representatives of developed countries resent asking their populations to steeply curb the use of carbon fuels and meet strict emission targets while people living in less developed countries face fewer restrictions merely because they are “late developers”. Representatives of less developed countries, on the other hand, may reject universal standards on carbon emissions that ignore their historical carbon frugality, perceiving such standards as attempts to curtail their industrial and economic development.

Are citizens of industrialized countries willing to recognize their privileged status and thus contribute more towards climate change mitigation? A team of researchers led by Reuben Kline at Stony Brook University recently designed a behavioral study, published in the journal Nature Human Behaviour, with volunteer college students from the United States and China to address this question. The students participated in a version of an “economic game” to ascertain how economic advantage would affect their choices. The study consisted of two phases. In the initial “Economic Development Game”, participants were divided into groups of six players, and in each round a participant could remove $0, $1, $2, $3 or $4 from a shared pool of money ($180) belonging to the group. There were a total of 10 rounds, so the maximum one individual could extract was $40. The clever twist in the experimental design was that half the participants were not allowed to extract any money during the first five rounds, so the most they could have extracted was $20. This second group thus emulated “late developers” in terms of industrialization and economic growth, who merely watched as “early developers” accumulated wealth during the first five rounds.

The second phase of the experiment consisted of the “Climate Game”, in which all the participants of a group were asked to return money to the common pool (the “climate account”). The amount of money that had to be replenished in each group was 53% of what the group had removed from the common pool of $180 during the “Economic Development Game”. For example, if the combined sum of money removed by all six players in a group was $100, then the group as a whole had to return $53 during the “Climate Game”. If the group did not meet the 53% target, it risked a “climate catastrophe” in which all players of the group would lose their earnings. The probability of a catastrophic loss depended on the amount of money extracted during the “Economic Development Game”. If, for example, players in a group depleted $150 during Phase 1 and did not meet the threshold of returning $80 (53% of $150) during Phase 2, there was a 92% chance of a “climate catastrophe” in which all players of the group would lose all earnings. This discouraged greed by individual players and instead encouraged judicious extraction of funds during Phase 1 as well as active replenishment during Phase 2 to meet the 53% target.
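The mechanics of the two phases can be sketched in a few lines of code. This is purely an illustrative simulation, not the authors' experimental software: the extraction choices here are random rather than strategic, and the exact catastrophe-probability function is not specified in the article (it only gives the single $150/92% data point), so it is omitted.

```python
import random

POOL = 180           # shared group pool in the "Economic Development Game"
ROUNDS = 10          # total extraction rounds
TARGET_SHARE = 0.53  # share of extracted funds the group must return in Phase 2

def climate_target(total_extracted):
    """Amount the group must return to the climate account ("Climate Game")."""
    return TARGET_SHARE * total_extracted

def play_group(seed=0):
    """Simulate one six-player group with random extraction choices.
    Players 0-2 are "early developers"; players 3-5 ("late developers")
    sit out the first five rounds, capping their possible take at $20."""
    rng = random.Random(seed)
    extracted = [0] * 6
    pool = POOL
    for rnd in range(1, ROUNDS + 1):
        for player in range(6):
            if player >= 3 and rnd <= 5:
                continue  # late developers cannot extract yet
            take = min(rng.choice([0, 1, 2, 3, 4]), pool)
            extracted[player] += take
            pool -= take
    return extracted

extracted = play_group(seed=1)
total = sum(extracted)
print(f"Per-player extraction: {extracted}")
print(f"Group total: ${total}, Phase 2 target: ${climate_target(total):.2f}")
```

Running the sketch makes the built-in asymmetry obvious: early developers can take up to $40 each while late developers are capped at $20, which is exactly the advantage whose behavioral consequences the study set out to measure.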

The fundamental goal of the study was to understand how “early developers” would act, given the additional time they had to accumulate wealth during the first five rounds of Phase 1, and whether this advantage would affect their willingness to donate funds into the climate account during Phase 2. The results were quite remarkable and give reason for hope with regard to how recognizing advantage affects social behavior. “Early developers” initially accumulated funds but then chose to extract less money during the later rounds once the “late developers” entered the game. Furthermore, early developers who had accumulated more funds were also more willing to donate money in order to replenish the “climate account” and help stave off the “climate catastrophe”.

Importantly, these experiments were performed in the United States and China, with similar results in both student populations. Interestingly, a representative quote by a “late developer” participant also explains why “late developers” had lower rates of donations in Phase 2: “I decided not to contribute any because I felt that the individuals who were able to [appropriate] more money in the first round (early developers) should contribute more because I started with a disadvantage.”

The researchers interpret their data in the context of climate change mitigation behavior and suggest that recognizing one’s privileged status does indeed motivate individuals to greater sacrifice for the common good. The strengths of the study are the elegant two-phase design, the replication of findings in two different countries, and the inclusion of control groups in which all players were given equal opportunity to extract funds (without subdividing groups into “early” and “late developers”). Reuben Kline and his colleagues recognize the limitations of using a highly stylized economic game, played in a laboratory by young, educated college students, to infer real-world acceptance of carbon frugality by broader groups of citizens and political leaders in developed countries.

However, there is one fundamental issue which is not addressed in the context of this study. The “early” and “late developers” represented highly developed and less developed countries. Yet the two countries the researchers chose – the United States and China – are marred by a tremendous amount of socio-economic inequality. Fifteen percent of Americans live in poverty even though the United States is often touted as the wealthiest country in the world. CBDR and the results of the experiment detailed above are predicated on the idea that members of highly developed groups recognize themselves as being advantaged. But if there is such a discrepancy between rich and poor in a highly developed country, how likely is it that its socio-economically disadvantaged members will accept being labeled as advantaged? Populist political leaders in developed countries appeal to voters who are struggling to pay their bills, and these voters often perceive themselves as marginalized victims. Their income and quality of life may be far higher than those of their counterparts in less developed countries, but it is not clear that they would recognize this as an advantage in the same sense that the “early developer” college students recognized it in the experiment.

The research study by Kline and colleagues indeed provides reason for hope when it comes to climate mitigation behavior as well as perhaps other forms of prosocial behavior. It suggests that recognizing privilege can motivate greater sacrifice for the greater good. However, future studies may need to include a more complex experimental design in which the heterogeneity of “early developers” is addressed and we can derive more insights about how individuals recognize their advantage and privilege.

References

Kline, R., Seltzer, N., Lukinova, E., & Bynum, A. (2018). Differentiated responsibilities and prosocial behaviour in climate change mitigation. Nature Human Behaviour, 2: 653-661.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.


Escape The Tyranny Of Algorithms By Leading A Life Of Poiesis

“Accused not of crimes they have committed, but of crimes they will commit. It is asserted that these men, if allowed to remain free, will at some future time commit felonies.”

From “The Minority Report” by Philip K. Dick

 

In the science fiction short story “The Minority Report” by Philip K. Dick, mutant “precogs” are able to see one to two weeks into the future. Their precognitive prophecies are decoded and analyzed by a computer and used by the Precrime police unit to pre-emptively arrest would-be perpetrators before they commit crimes. The story proposes the existence of multiple time-lines and futures, which explains why crimes can indeed be averted: the pre-emptive arrest shifts the time path towards an alternate future in which the crime does not take place. But the story raises the fundamental question of how a person can be arrested and imprisoned for a crime that was not committed, if indeed the alternate future begins upon his arrest. The dilemma of pre-emptive arrests is one of the many questions pondered by the Austrian philosopher Armen Avanessian in his most recent book “Miamification”.

“Miamification” is essentially a journal written during Avanessian’s two-week stint as an artist-in-residence in Miami during the fall of 2016, just weeks before the election of Donald Trump as president of the United States. Each chapter of the book represents one day of his stay, containing musings on so many topics that it feels more like a bricolage than a collection of traditional philosophical essays. The stream-of-consciousness writing is filled with digressions and side-notes. This reflects the journal-like nature of the book, but it perhaps also mirrors how we peruse online texts, following links through various levels of webpages, as well as the snappy phrases and soundbites we encounter in social media conversations. The book cover of the German edition lists several of the topics Avanessian ruminates on: Trump, Big Data, Beach, Pre-emptive Personality, Make America Great Again, Immigration, Climate Change, Time Complex, Post-Capitalism, Post-Internet, Recursion, Déjà-vus, Algorithms – just to name a few.

Obviously, none of these topics are exhaustively discussed in this short book, and some readers may struggle with the Ideenüberflutung (idea flooding) in each chapter. But each short chapter provides the reader with the lingering pleasure of having continuous food for thought and questions to ponder for weeks to come. Even though the chapters are not thematically structured, common themes do emerge. “The Disappearance of the Subject” is one such theme that was recently discussed in a brilliant essay by Adrian Nathan West. Another central theme is that of temporal discordance.

“Miamification” begins with physical and biological manifestations of temporal discordance that many who have traveled across time zones can easily relate to. Avanessian experiences jet lag after flying from Berlin to Miami, but his jet lag is not limited to difficulties sleeping or waking up early. When reading his emails, he feels that he is continuously lagging behind: the work day in Europe is nearly over while his day in Miami is just getting started, and people in Europe expect responses in real time. This disconnect between expectations and reality occurs not only across time zones but even in our daily routines. For example, when tackling complex ideas, we know that we need time to analyze and ponder several concepts in depth, but the reality of being perpetually connected to the world by our smartphones exposes us to continuous emails and social media pings which distract us and prevent us from devoting the necessary time. Avanessian also observes other absurd examples of temporal discordance in Miami. Instead of enjoying a swim in the warm water, many tourists appear more obsessed with taking selfies while standing in the water so that they may capture the moment for posterity – delaying gratification in order to enjoy, at some future time, the memory of a day at the beach on which they chose to forgo the pleasure of swimming.

After watching the movie “Minority Report” (loosely based on the Philip K. Dick short story) on his third day in Miami, Avanessian broadens his inquiry into our relationship with time. Even though contemporary police forces do not use mutant precogs to prophesy the future, we are surrounded by computational algorithms which aim to predict behavior. Law enforcement agencies increasingly rely on predictive algorithms to identify individuals who are at risk of committing terrorist acts in the near or distant future; in fact, “neuroprediction” of criminal behavior is establishing itself as a scientific discipline. Corporations such as Amazon prompt us with products we could purchase based on algorithms that analyze our past purchases. At what point do these algorithms become self-fulfilling prophecies? Are individuals who are continuously monitored and questioned by law enforcement perhaps more likely to radicalize and commit crimes? At what point do online “suggestions” by algorithms become a subconscious mandate to buy consumables in order to remain true to our past selves?

The temporal assault occurs on several fronts. Surveillance agencies and corporations use predictive algorithms about our future behavior to define and create present behavior. But these algorithms are rooted in past behaviors – thus in some ways chaining us to the past and limiting our ability to change, especially once the predictive algorithms begin influencing our present behavior. At the same time, we are being bombarded with clickbait, social media posts and sensationalist news – all of which appear to glorify and obsess over the present. Their rapidity often does not allow us to analyze them in the context of the past or the future. Lastly, we are seeing the rise of reactionary forces in many countries of the world who conjure up bizarre images of a glorious past that we ought to be striving towards. Avanessian specifically mentions Donald Trump and his supporters in their Make America Great Again fervor as an example – weeks before the 2016 presidential election in the USA.

How do we best handle this dysfunctional relationship with the Past (reactionary and revisionist glorification of the past), Present (a barrage of mindless and often meaningless information) and Future (predictive algorithms which predetermine our future instead of allowing us to define it ourselves)? Lead a poetic life. Avanessian uses the word poetic in the original Greek sense: Poiesis – to create and produce. Poiesis requires that we prevent algorithms from dictating our behavior. Corporations prompting us to buy certain products as well as political extremists goad us into algorithmic behavior. For example, a common contemporary phenomenon in politics has been the frequent use of racist, misogynist and other offensive social media posts by far right politicians and leaders. Their scandalous and sensationalist tweets elicit a predictable backlash from those opposed to racism, misogyny and other forms of prejudice. Even though it is absolutely necessary for those of us opposed to hatred and prejudice to voice opposition and resistance, far right activists and politicians use our predicted reactions to further embolden their political base and mock liberal-progressive citizens, and then begin their next cycle of hateful statements. This recursive cycle ends up consuming our attention and undermining our ability to be creative and escape the algorithmic life.

Poiesis, on the other hand, creates the unexpected and unpredictable and thus generates a reality that eludes predictive algorithms. Art, music, literature, philosophy and science provide poietic paths, but the challenge for us is to learn how we can integrate these paths into our social, economic and political lives. Political poiesis may be especially important in our current time to counter the rise of far right political movements. One reason for their success is that they conjure up images of a glorious past as well as the supposed danger of a bleak future unless society returns to the status quo of that glorious past. But progressive movements now have the opportunity to offer a poietic vision of the future.

One such poietic success in the United States during the past decade has been the revolution in the acceptance of universal access to healthcare as a human right. In most countries of the developed world, all members of society have enjoyed access to universal healthcare for decades. However, up until approximately 10 years ago, Americans accepted the fact that they might face financial bankruptcy and denial of health insurance coverage if they were afflicted by a devastating disease such as cancer. Through the joint efforts of patients, healthcare professionals, community organizers, politicians and – most importantly – citizens from all socioeconomic backgrounds, American society began to recognize access to healthcare, even for those with pre-existing medical conditions, as a human right.

Townhall meetings, marches, door-to-door engagement, medical journal articles and new collaborations across communities and professions were all needed to bring about this change. The sheer scale of the efforts and the creativity of the proponents took right-wing opponents by surprise, as they had assumed that the American public would stick to its traditional distaste for anything resembling the universal healthcare systems common in other industrialized countries with strong social welfare systems. Conservative and far right politicians in the United States were confident they could repeal the laws implemented during President Barack Obama’s administration which guaranteed health insurance for all – even patients with severe prior illnesses. All subsequent efforts by right wing politicians to abolish the fundamental achievement of the universal healthcare movement – enshrining the right to obtain medical insurance despite pre-existing medical conditions – have failed thus far.

The success of the US healthcare movement could serve as an inspiration for all who struggle under the yoke of algorithmic and reactive behavior. Our willingness to dream and create can allow us to break the algorithmic mold. Considering the challenges we face in our world – which include the growing socio-economic divide, the rise of nativism and racism, and the devastating impact of climate change – we need to foster poietic creativity and imagination to overcome these challenges.

Reference

Avanessian, A. (2017). Miamification. Merve Verlag.

This book is also available in an English translation published by Sternberg Press.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

The Psychology of Collective Memory

Do you still remember the first day of school when you started first grade? If you were fortunate enough (or in some cases, unfortunate enough) to run into your classmates from way back when, you might sit down and exchange stories about that first day in school. There is a good chance that your narratives will differ, especially regarding some details, but you are bound to also find many common memories. This phenomenon is an example of “collective memory”, a term used to describe the shared memories of a group, which can be as small as a family or a class of students and as large as a nation. The collective memory of your first day in school refers to a time that you personally experienced, but the collective memory of a group can also include vicarious memories consisting of narratives that present-day group members may not have lived through. For example, the collective memory of a family could contain harrowing details of suffering experienced by ancestors who were persecuted and had to abandon their homes. These stories are then passed down from generation to generation and become part of a family’s defining shared narrative. This especially holds true for larger groups such as nations. In Germany, the collective memory of the horrors of the Holocaust and the Third Reich has a profound impact on how Germans perceive themselves and their identity, even if they were born after 1945.

The German scholar Aleida Assmann is an expert on how collective and cultural memory influences society and recently wrote about the importance of collective memory in her essay “Transformation of the Modern Time Regime” (PDF):

All cultures depend upon an ability to bring their past into the present through acts of remembering and remembrancing in order to recover not only acquired experience and valuable knowledge, exemplary models and unsurpassable achievements, but also negative events and a sense of accountability. Without the past there can be no identity, no responsibility, no orientation. In its multiple applications cultural memory greatly enlarges the stock of the creative imagination of a society.

Assmann uses the German word Erinnerungskultur (culture of remembrance) to describe how the collective memory of a society is kept alive and what impact the act of remembrance has on our lives. The Erinnerungskultur widely differs among nations and even in a given nation or society, it may vary over time. It is quite possible that the memories of the British Empire may evoke nostalgia and romanticized images of a benevolent empire in older British citizens whereas younger Brits may be more likely to focus on the atrocities committed by British troops against colonial subjects or the devastating famines in India under British rule.

Much of the research on collective memory has been rooted in the humanities. Historians and sociologists have studied how historical events enter the collective memory and how the Erinnerungskultur then preserves and publicly interacts with them. More recently, however, cognitive scientists and psychologists have begun exploring the cognitive mechanisms that govern the formation of collective memory. The cognitive sciences have made substantial advances in researching individual memory – such as how we remember, mis-remember or forget events – but much less is known about how these processes apply to collective memory. The cognitive scientists William Hirst, Jeremy Yamashiro and Alin Coman recently reviewed the present psychological approaches to studying how collective memories are formed and retained, dividing the research into two broad categories: top-down research and bottom-up research.

Top-down research identifies historical or cultural memories that persist in a society and tries to understand the underlying principles. Why do some historical events become part of the collective memory whereas others do not? Why do some societies update their collective memories based on new data whereas others do not? Hirst and his colleagues cite a study which examined how people updated their beliefs following retractions and corrections issued by the media after the 2003 Iraq war. The claims that Iraqi forces executed coalition prisoners of war after they surrendered and the initial reports about the discovery of weapons of mass destruction were both retracted, but Americans were less likely than Germans to remember the retractions and the corrected version of the information.

Bottom-up research of collective memory, on the other hand, focuses on how individuals perceive events and then communicate them to their peers so that they become part of a shared memory canon. Researchers using this approach focus on the transmission of memory from individuals to a larger group network and how this transmission or communication is affected by the environment. In a fascinating study of autobiographical memory, researchers examined how individuals from various nations dated autobiographical events. Turks who had experienced the 1999 earthquake frequently referenced it, as did Bosnians who used the civil war to date personal events. However, Americans rarely referenced the September 11, 2001 attacks to date personal events. This research suggested that even though some events such as the September 11, 2001 attacks had great historical and political significance, they may not have had as profound a personal impact on the individual lives of Americans as did the civil war in Bosnia.

Hirst and his colleagues point out that cognitive research of collective memory is still in its infancy but the questions raised at the interface of psychology, neuroscience, history and sociology are so fascinating that this area will likely blossom in the decades to come. The many research questions that will emerge in the near future will not only integrate cutting-edge cognitive research but will likely also address the important phenomenon of the increased flow of information – both by migration of individuals as well as by digital connectedness. This research could have a profound impact on how we define ourselves and what we learn from our past to shape our future.

Reference

Hirst, W., Yamashiro, J. K., & Coman, A. (2018). Collective memory from a psychological perspective. Trends in Cognitive Sciences, 22 (5): 438-451.

 


Note: An earlier version of this article was first published on the 3Quarksdaily blog.

The Science of Tomato Flavors

Don’t judge a tomato by its appearance. You may salivate when thinking about the luscious large red tomatoes you just purchased in your grocery store, only to find out that they are extremely bland and lack flavor once you actually bite into them after preparing the salad you had been looking forward to all day. You are not alone. Many consumers complain about the growing blandness of fruits. Up until a few decades ago, it was rather challenging to understand the scientific basis of fruit flavors. Recent biochemical and molecular studies of fruits now provide a window into fruit flavors and allow us to understand the rise of blandness.

In a recent article, the scientists Harry Klee and Denise Tieman at the University of Florida summarize some of the most important recent research on the molecular biology of fruit flavors, with a special emphasis on tomatoes. Our perception of “flavor” relies primarily on two senses – taste and smell. Taste is perceived by taste receptors in our mouth, primarily located on the tongue, which discriminate between sweet, sour, salty, bitter and savory. The sensation of smell (also referred to as “olfaction”), on the other hand, has a much broader catalog of perceptions. There are at least 400 different olfactory receptors present in the olfactory epithelium – the cells in the nasal passages which perceive smells – and the combined activation of various receptors may allow humans to distinguish up to 1 trillion smells. These receptors are activated by so-called volatile organic compounds, or volatiles, a term which refers to organic molecules that vaporize in the mouth when we chew food and enter our nasal passages to activate the olfactory epithelium. The tremendous diversity of the olfactory receptors thus allows us to perceive a wide range of flavors. Anybody who eats while having a cold and a stuffy nose will notice how bland food becomes, even though the taste receptors on the tongue remain fully functional.

When it comes to tomato flavors, research has shown that consumers clearly prefer “sweetness”. One obvious determinant of sweetness is the presence of sugars such as glucose or fructose in tomatoes which are sensed by the taste receptors in the mouth. But it turns out that several volatiles are critical for the perception of “sweetness” even though they are not sugars but instead activate the smell receptors in the olfactory epithelium. 6-Methyl-5-hepten-2-one, 1-Nitro-2-phenylethane, Benzaldehyde and 2-Phenylethanol are examples of volatiles that enhance the positive flavor perceived by consumers, whereas volatiles such as Eugenol and Isobutyl acetate are perceived to contribute negatively towards flavor. Interestingly, the same volatiles can have no effect or even the opposite effect on flavor perception when present in other fruits. Therefore, it appears that for each fruit, the sweetness flavor is created by the basic taste receptors which sense sugar levels as well as a symphony of smell sensations activated by a unique pattern of volatiles. But just like instruments play defined yet interacting roles in an orchestra, the effect of volatiles on flavor depends on the presence of other volatiles.

This complexity of flavor perception explains why it is so difficult to define flavor. The story becomes even more complicated because individuals have different thresholds for olfactory receptor activation. Furthermore, even the volatiles linked with a positive flavor perception – either by enhancing flavor intensity or by letting the consumer sense a greater “sweetness” than is actually present based on sugar levels – may have varying effects when they reach higher levels. Thus, it is very difficult to breed the ideal tomato that will satisfy all consumers. But why is there this growing sense that fruits such as tomatoes are becoming blander? Have we simply not tried enough tomato cultivars? A cultivar is a plant variety that has been bred over time to create specific characteristics, and one could surmise that with hundreds or even thousands of tomato cultivars available, each of us might identify a distinct cultivar that we find most flavorful. The volatiles are generated by metabolic enzymes encoded by genes, and differences between the flavors of distinct cultivars likely reflect differences in the expression of genes for the enzymes that regulate sugar metabolism or volatile generation.

The problem, according to Klee and Tieman, is that the customers of tomato breeders are tomato growers, not the consumers who garnish their salads or create tomato-based masalas. The goal of growers is to maximize shelf-life, appearance, disease-resistance, yield and uniformity, and breeders focus on genetically manipulating tomato strains to maximize these characteristics. The expression GMO (genetically modified organism) describes the use of modern genetic technology to modify individual genes in crops and often provokes a litany of attacks and criticisms from anti-GMO activists who fear the potential risks of such genetic interventions. However, the genetic breeding and manipulation of cultivars has been occurring for centuries or even millennia using traditional low-tech methods, yet these do not seem to provoke much criticism from anti-GMO activists. Even though there is a theoretical risk that modern genetic engineering tools could pose a health risk, there is no scientific evidence that this is actually the case. Instead, one could argue that targeted genetic intervention may be more precise using modern technologies than the low-tech breeding manipulations that have led to the creation of numerous cultivars, many of which carry the “organic, non-GMO” label.

Klee and Tieman argue that consumers prefer flavor, variety and nutrition over the traditional priorities of growers. The genetic and biochemical analysis of tomato cultivars now offers us a unique insight into the molecular components of flavor and nutrition. Scientists can now analyze each cultivar that has been generated over the past centuries using the low-tech genetic manipulation of selective breeding and inform consumers about its flavor footprint. Alternatively, one could use modern genetic tools such as genome editing to specifically modify flavor components while maintaining the disease resistance and high nutritional value of crops such as tomatoes. The key to making informed, rational decisions is to provide consumers with comprehensive information, based on scientific evidence, about the nutritional value and flavor of fruits, as well as the actual risks of genetically modifying crops using traditional low-tech methods such as selective breeding and grafting or newer methods which involve genome editing.

Reference

Klee, H. J., & Tieman, D. M. (2018). The genetics of fruit flavour preferences. Nature Reviews Genetics (published online March 2018).

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

“Hype” or Uncertainty: The Reporting of Initial Scientific Findings in Newspapers

One of the cornerstones of scientific research is the reproducibility of findings. Novel scientific observations need to be validated by subsequent studies in order to be considered robust. This has proven to be somewhat of a challenge for many biomedical research areas, including high-impact studies in cancer research and stem cell research. The fact that an initial scientific finding of a research group cannot be confirmed by other researchers does not mean that the initial finding was wrong or that there was any foul play involved. The most likely explanation in biomedical research is tremendous biological variability. Human subjects and patients examined in one research study may differ substantially from those in follow-up studies. Biological cell lines and tools used in basic science studies can vary widely, depending on many details such as the medium in which cells are kept in a culture dish. The variability in findings is not a weakness of biomedical research; in fact, it is a testimony to the complexity of biological systems. Therefore, initial findings always need to be treated with caution and presented with their inherent uncertainty. Once subsequent studies – often with larger sample sizes – confirm the initial observations, they are viewed as more robust and gradually become accepted by the wider scientific community.

Even though most scientists become aware of the scientific uncertainty associated with initial observations as their careers progress, non-scientists may be puzzled by shifting scientific narratives. People often complain that “scientists cannot make up their minds” – citing, for example, newspaper reports which state that drinking coffee may be harmful, only to be subsequently contradicted by reports which laud the beneficial health effects of coffee drinking. Accurately communicating scientific findings as well as the inherent uncertainty of such initial findings is a hallmark of critical science journalism.

A group of researchers led by Dr. Estelle Dumas-Mallet at the University of Bordeaux recently studied the extent of uncertainty communicated to the public by newspapers when reporting initial medical research findings in their paper “Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings”. Dumas-Mallet and her colleagues examined 426 English-language newspaper articles published between 1988 and 2009 which described 40 initial biomedical research studies. They focused on scientific studies in which a new risk factor such as smoking or old age had been newly associated with a disease such as schizophrenia, autism, Alzheimer’s disease or breast cancer (12 diseases in total). The researchers only included scientific studies which had subsequently been re-evaluated by follow-up research, and found that less than one third of the scientific studies had been confirmed by subsequent research. Dumas-Mallet and her colleagues were therefore interested in whether the newspaper articles, which were published shortly after the release of the initial research paper, adequately conveyed the uncertainty surrounding the initial findings and thus adequately prepared their readers for subsequent research that might confirm or invalidate the initial work.

The University of Bordeaux researchers specifically examined whether headlines of the newspaper articles were “hyped” or “factual”, whether the articles mentioned that this was an initial study, and whether they clearly indicated the need for replication or validation by subsequent studies. Roughly 35% of the headlines were “hyped”. One example of a “hyped” headline was “Magic key to breast cancer fight” instead of a more factual headline such as “Scientists pinpoint genes that raise your breast cancer risk”. Dumas-Mallet and her colleagues found that even though 57% of the newspaper articles mentioned that these medical research studies were initial findings, only 21% of newspaper articles included explicit “replication statements” such as “Tests on larger populations of adults must be performed” or “More work is needed to confirm the findings”.

The researchers next examined the key characteristics of the newspaper articles which were more likely to convey the uncertainty or preliminary nature of the initial scientific findings. Newspaper articles with “hyped” headlines were less likely to mention the need for replicating and validating the results in subsequent studies. On the other hand, newspaper articles which included a direct quote from one of the research study authors were three times more likely to include a replication statement. In fact, approximately half of all the replication statements mentioned in the newspaper articles were found in author quotes, suggesting that many scientists who conducted the research readily emphasize the preliminary nature of their work. Another interesting finding was the gradual shift over time in conveying scientific uncertainty. “Hyped” headlines were rare before 2000 (only 15%) and became more frequent during the 2000s (43%). On the other hand, replication statements were more common before 2000 (35%) than after 2000 (16%). This suggests that there was a trend towards conveying less uncertainty after 2000, which is surprising because debate about scientific replicability in the biomedical research community seems to have become much more widespread in the past decade.

As with all scientific studies, we need to be aware of the limitations of the analysis performed by Dumas-Mallet and her colleagues. They focused on a very narrow area of biomedical research – newly identified risk factors for selected diseases. It remains to be seen whether other areas of biomedical research, such as treatments of diseases or basic science discoveries of new molecular pathways, are also reported with “hyped” headlines and without replication statements. In other words, this research on “replication statements” in newspaper articles also needs to be replicated. It is also not clear whether the worrisome trend of over-selling the robustness of initial research findings after the year 2000 still persists, since Dumas-Mallet and colleagues did not analyze studies published after 2009. One would hope that the recent discussions about replicability issues among scientists would reverse this trend. Even though the findings of the University of Bordeaux researchers need to be replicated by others, science journalists and readers of newspapers can glean some important information from this study: one needs to be wary of “hyped” headlines, and it can be very useful to interview authors of scientific studies when reporting about new research, especially asking them about the limitations of their work. “Hyped” newspaper headlines and an exaggerated sense of certainty in initial scientific findings may erode the long-term trust of the public in scientific research, especially if subsequent studies fail to replicate the initial results. Critical and comprehensive reporting of biomedical research studies – including their limitations and uncertainty – by science journalists is therefore a very important service to society which contributes to science literacy and science-based decision making.

Reference

Dumas-Mallet, E., Smith, A., Boraud, T., & Gonon, F. (2018). Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings. Science Communication, 40(1), 124-141.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

The Anatomy of Friendship in a “Digital Age”

Why is the number of friendships that we can actively maintain limited to 150? The evolutionary psychologist and anthropologist Robin Dunbar at the University of Oxford is a pioneer in the study of friendship. Over several decades, he and his colleagues have investigated the nature of friendship and social relationships in non-human primates and humans. His research papers and monographs on social networks, grooming, gossip and friendship have accumulated tens of thousands of academic citations, but he may be best known in popular culture for “Dunbar’s number”, the limit to the number of people with whom an individual can maintain stable social relationships. For humans, this number is approximately 150, although there are of course variations between individuals and also across one’s lifetime. The expression “stable social relationships” refers to the friends and family members with whom we regularly interact. Most of us may know far more people, but they likely fall into the category of “acquaintances” rather than “friends”. Acquaintances, for example, are fellow students and colleagues whom we occasionally meet at work but do not regularly invite over to share meals or swap anecdotes as we would do with our friends.

Dunbar recently reviewed more than two decades of research on humans and non-human primates in the article “The Anatomy of Friendship”, in which he outlines two fundamental constraints: time and our brain. In order to maintain friendships, we have to invest time. As most of us intuitively know, friendship is subject to hierarchies. Dunbar and other researchers have been able to study these hierarchies scientifically and found remarkable consistency in the structure of the friendship hierarchy across networks and cultures. This hierarchy can be best visualized as concentric circles of friendship. The innermost core circle consists of 1-2 friends, often the romantic partner and/or the closest family member. The next circle contains approximately 5 very close friends, then progressively wider circles until we reach the maximum of about 150. The wider the circle becomes, the less time we invest in “grooming” or communicating with our friends. The social time we invest also mirrors the emotional closeness we feel. It appears that up to 40% of our social time is invested in the inner circle of our 5 closest friends, 20% in our circle of 15 friends, and progressively less in the wider circles. The overall social time available to “invest” in friendships on any given day is limited by our need to sleep and work, which in turn limits the number of friends in each circle as well as the total number of friendships.
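The layered structure described above can be sketched in a few lines of code. This is only an illustrative model using the approximate figures cited in the text (circles of 5, 15, 50 and 150, with roughly 40% and 20% of social time going to the two inner layers); the even split of the remaining time across the outer layers is my own simplifying assumption, since Dunbar reports a progressive decline rather than an exact formula.

```python
# Illustrative model of Dunbar's friendship circles: cumulative layer
# sizes and the approximate share of total social time invested in each.
FRIENDSHIP_CIRCLES = [
    # (cumulative circle size, approx. share of social time; None = unspecified)
    (5, 0.40),    # closest friends
    (15, 0.20),   # close friends (circle includes the inner 5)
    (50, None),   # good friends
    (150, None),  # "Dunbar's number": all stable relationships
]

def time_per_person(circles):
    """Average share of social time per person in each new layer.

    Splits whatever time is left over evenly across the layers whose
    share is unspecified - a simplification for illustration only.
    """
    result = {}
    prev_size, remaining = 0, 1.0
    unspecified = [c for c in circles if c[1] is None]
    for size, share in circles:
        layer = size - prev_size  # people added by this circle
        if share is None:
            share = remaining / len(unspecified)
        else:
            remaining -= share
        result[size] = share / layer
        prev_size = size
    return result
```

Running `time_per_person(FRIENDSHIP_CIRCLES)` shows the point of the model: time per person drops steeply with each successive circle, mirroring the decline in emotional closeness.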

The Circles of Friendship – modified from R Dunbar, The Anatomy of Friendship (2018)

The second constraint which limits the number of friendships we can maintain is our cognitive capacity. According to Dunbar, there are at least two fundamental cognitive processes at play in forming friendships. First, there needs to be some basis of trust in a friendship, because it represents an implicit social contract, such as a promise of future support if needed and an underlying promise of reciprocity – “If you are here for me now, I will be there for you when you need me.” For a stable friendship between two individuals, both need to be aware of how certain actions could undermine this implicit contract. For example, friends who continuously borrow my books and seem to think that they are allowed to keep them indefinitely will find that they are gradually nudged to the outer circles of friendship and eventually cross into acquaintance territory. This is not only because I feel I am being taken advantage of and the implicit social contract is being violated, but also because they do not appear to put in the mental effort to realize how much I value my books and how their unilateral “borrowing” may affect me. This brings us to “mentalizing”, the second cognitive component that, according to Dunbar, is critical for stable friendships. Mentalizing refers to the ability to read or understand someone else’s state of mind. Engaging in an active dialogue with friends requires not only being able to read their state of mind but also inferring the state of mind of the people they are talking about. These levels of mentalizing (“I think that you feel that she was correct in …”) appear to hit a limit at around four or five.
Dunbar cites the example of a gathering: up to four people can have an active conversation in which each person closely follows what everyone else is saying, but once a fifth person joins (the fifth wheel!), the conversation is likely to split into two separate conversations. The same is true for many TV shows and plays, in which scenes rarely depict more than four characters actively participating in a conversation.

Has the digital age changed the number of friends we can have? The prior research by Dunbar and his colleagues relied on traditional means of communication between friends such as face-to-face interactions and phone calls but do these findings still apply today when social media such as Facebook and Twitter allow us to have several hundred or even thousands of “friends” and “followers”? The surprising finding is that online social networks are quite similar to traditional networks! In a study of Facebook and Twitter social media networks, Dunbar and his colleagues found that social media networks exhibit a hierarchy of friendship and numbers of friends that were extremely similar to “offline” networks. Even though it is possible to have more than a thousand “friends” on Facebook, it turns out that most of the bidirectional interactions with individuals are again concentrated in very narrow circles of approximately 5, 15 and 50 individuals. Social media make it much easier to broadcast information to a broad group of individuals but this sharing of information is very different from the “grooming” of friendships which appears to be based on reciprocity in terms of building trust and mentalizing.

There is a tendency to believe that the Internet has revolutionized all forms of human communication, a belief which falls under the rubric of “internet-centrism” (see the article “Is Internet-Centrism a Religion”) according to the social researcher Evgeny Morozov. Dunbar’s research is an important reminder that core biological and psychological principles such as the anatomy of friendship in humans have evolved over hundreds of thousands of years and will not be fundamentally upstaged by technological improvements in communication. Friendship and its traditional limits are here to stay.

Reference

Dunbar, R. I. M. (2018). The Anatomy of Friendship. Trends in Cognitive Sciences, 22(1), 32-51.


Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Novelty in science – real necessity or distracting obsession?

It may take time for a tiny step forward to show its worth.
ellissharp/Shutterstock.com

Jalees Rehman, University of Illinois at Chicago

In a recent survey of over 1,500 scientists, more than 70 percent of them reported having been unable to reproduce other scientists’ findings at least once. Roughly half of the surveyed scientists ran into problems trying to reproduce their own results. No wonder people are talking about a “reproducibility crisis” in scientific research – an epidemic of studies that don’t hold up when run a second time.

Reproducibility of findings is a core foundation of science. If scientific results only hold true in some labs but not in others, then how can researchers feel confident about their discoveries? How can society put evidence-based policies into place if the evidence is unreliable?

Recognition of this “crisis” has prompted calls for reform. Researchers are feeling their way, experimenting with different practices meant to help distinguish solid science from irreproducible results. Some people are even starting to reevaluate how choices are made about what research actually gets tackled. Breaking innovative new ground is flashier than revisiting already published research. Could science’s emphasis on novelty be part of the problem?

Incentivizing the wrong thing?

One solution to the reproducibility crisis could be simply to conduct lots of replication studies. For instance, the scientific journal eLife is participating in an initiative to validate and reproduce important recent findings in the field of cancer research. The first set of these “rerun” studies was recently released and yielded mixed results: two out of five research studies were reproducible, one was not, and two did not provide definitive answers.

There’s no need to restrict this sort of rerun study to cancer research – reproducibility issues can be spotted across various fields of scientific research.

Researchers should be rewarded for carefully shoring up the foundations of the field.
Alexander Raths/Shutterstock.com

But there’s at least one major obstacle to investing time and effort in this endeavor: the quest for novelty. The prestige of an academic journal depends at least partly on how often the research articles it publishes are cited. Thus, research journals often want to publish novel scientific findings which are more likely to be cited, not necessarily the results of newly rerun older research.

A study of clinical trials published in medical journals found the most prestigious journals prefer publishing studies considered highly novel and not necessarily those that have the most solid numbers backing up the claims. Funding agencies such as the National Institutes of Health ask scientists who review research grant applications to provide an “innovation” score in order to prioritize funding for the most innovative work. And scientists of course notice these tendencies – one study found the use of positive words like “novel,” “amazing,” “innovative” and “unprecedented” in paper abstracts and titles increased almost ninefold between 1974 and 2014.

Genetics researcher Barak Cohen at Washington University in St. Louis recently published a commentary analyzing this growing push for novelty. He suggests that progress in science depends on a delicate balance between novelty and checking the work of other scientists. When rewards such as funding of grants or publication in prestigious journals emphasize novelty at the expense of testing previously published results, science risks developing cracks in its foundation.

Houses of brick, mansions of straw

Cancer researcher William Kaelin Jr., a recipient of the 2016 Albert Lasker Award for Basic Medical Research, recently argued for fewer “mansions of straw” and more “houses of brick” in scientific publications.

One of his main concerns is that scientific papers now inflate their claims in order to emphasize their novelty and the relevance of biomedical research for clinical applications. By exchanging depth of research for breadth of claims, researchers risk compromising the robustness of their work. And by claiming excessive novelty and impact, researchers may undermine the work’s actual significance, because they may fail to provide solid evidence for each claim.

Kaelin even suggests that some of his own work from the 1990s, which transformed cell biology research by discovering how cells can sense oxygen, may have struggled to get published today.

Prestigious journals often now demand complete scientific stories, from basic molecular mechanisms to proving their relevance in various animal models. Unexplained results or unanswered questions are seen as weaknesses. Instead of publishing one exciting novel finding that is robust, and which could spawn a new direction of research conducted by other groups, researchers now spend years gathering a whole string of findings with broad claims about novelty and impact.

There should be more than one path to a valuable journal publication.
Mehaniq/Shutterstock.com

Balancing fresh findings and robustness

A challenge for editors and reviewers of scientific manuscripts is judging the novelty and likely long-term impact of the work in front of them. The eventual importance of a new, unique scientific idea is sometimes difficult to recognize even by peers who are grounded in existing knowledge. Many basic research studies form the basis of future practical applications. One recent study found that of basic research articles that received at least one citation, 80 percent were eventually cited by a patent application. But it takes time for practical significance to come to light.

A collaborative team of economics researchers recently developed an unusual measure of scientific novelty by carefully studying the references of a paper. They ranked a scientific paper as more novel if it cited a diverse combination of journals. For example, a scientific article citing a botany journal, an economics journal and a physics journal would be considered very novel if no other article had cited this combination of varied references before.

This measure of novelty allowed them to identify papers which were more likely to be cited in the long run. But it took roughly four years for these novel papers to start showing their greater impact. One may disagree with this particular indicator of novelty, but the study makes an important point: It takes time to recognize the full impact of novel findings.
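The core of this novelty measure can be illustrated with a short sketch: a paper scores as more novel when it cites pairs of journals that no earlier paper has cited together. Note that this is a simplified illustration of the general idea described above, not the economists' actual method; the journal names and the scoring formula (fraction of never-before-seen journal pairs) are my own assumptions.

```python
# Sketch of a citation-combination novelty score: a paper is novel to the
# extent that its reference list pairs up journals never co-cited before.
from itertools import combinations

def novelty_score(cited_journals, seen_pairs):
    """Return the fraction of this paper's journal pairs that are new.

    cited_journals: journals appearing in the paper's reference list
    seen_pairs: set of frozensets recording journal pairs co-cited by
        earlier papers; updated in place to include this paper's pairs
    """
    pairs = {frozenset(p) for p in combinations(sorted(set(cited_journals)), 2)}
    if not pairs:
        return 0.0
    new_pairs = pairs - seen_pairs
    score = len(new_pairs) / len(pairs)
    seen_pairs |= pairs  # later papers citing the same combination score lower
    return score
```

For example, a paper citing a botany journal, an economics journal and a physics journal scores 1.0 if no earlier paper combined any of those journals, and progressively less as the combinations become commonplace.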

Realizing how difficult it is to assess novelty should give funding agencies, journal editors and scientists pause. Progress in science depends on new discoveries and following unexplored paths – but solid, reproducible research requires an equal emphasis on the robustness of the work. By restoring the balance between demands and rewards for novelty and robustness, science will achieve even greater progress.

Jalees Rehman, Associate Professor of Medicine and Pharmacology, University of Illinois at Chicago

This article was originally published on The Conversation. Read the original article.