The Psychology of Collective Memory

Do you still remember the first day of school when you started first grade? If you were fortunate enough (or, in some cases, unfortunate enough) to run into your classmates from way back when, you might sit down and exchange stories about that first day in school. There is a good chance that your narratives will differ, especially in some of the details, but you are also bound to find many common memories. This phenomenon is an example of “collective memory”, a term used to describe the shared memories of a group, which can be as small as a family or a class of students and as large as a nation. The collective memory of your first day in school refers to a time that you personally experienced, but the collective memory of a group can also include vicarious memories consisting of narratives that present-day group members never lived through. For example, the collective memory of a family could contain harrowing details of suffering experienced by ancestors who were persecuted and had to abandon their homes. These stories are passed down from generation to generation and become part of a family’s defining shared narrative. This holds especially true for larger groups such as nations. In Germany, the collective memory of the horrors of the Holocaust and the Third Reich has a profound impact on how Germans perceive themselves and their identity, even if they were born after 1945.

The German scholar Aleida Assmann is an expert on how collective and cultural memory influence society. She recently wrote about the importance of collective memory in her essay “Transformation of the Modern Time Regime”:

All cultures depend upon an ability to bring their past into the present through acts of remembering and remembrancing in order to recover not only acquired experience and valuable knowledge, exemplary models and unsurpassable achievements, but also negative events and a sense of accountability. Without the past there can be no identity, no responsibility, no orientation. In its multiple applications cultural memory greatly enlarges the stock of the creative imagination of a society.

Assmann uses the German word Erinnerungskultur (culture of remembrance) to describe how the collective memory of a society is kept alive and what impact the act of remembrance has on our lives. The Erinnerungskultur differs widely among nations, and even within a given nation or society it may vary over time. It is quite possible that memories of the British Empire evoke nostalgia and romanticized images of a benevolent empire in older British citizens, whereas younger Brits may be more likely to focus on the atrocities committed by British troops against colonial subjects or the devastating famines in India under British rule.

Much of the research on collective memory has been rooted in the humanities. Historians and sociologists have studied how historical events enter the collective memory and how the Erinnerungskultur then preserves and publicly interacts with them. More recently, however, cognitive scientists and psychologists have begun exploring the cognitive mechanisms that govern the formation of collective memory. The cognitive sciences have made substantial advances in researching individual memory – such as how we remember, mis-remember or forget events – but much less is known about how these processes apply to collective memory. The cognitive scientists William Hirst, Jeremy Yamashiro and Alin Coman recently reviewed the current psychological approaches to studying how collective memories are formed and retained, and they divided the research into two broad categories: top-down research and bottom-up research.

Top-down research identifies historical or cultural memories that persist in a society and tries to understand the underlying principles. Why do some historical events become part of the collective memory whereas others do not? Why do some societies update their collective memories based on new data whereas others do not? Hirst and his colleagues cite a study of how people updated their beliefs following retractions and corrections issued by the media after the 2003 Iraq war. Claims that Iraqi forces had executed coalition prisoners of war after they surrendered, as well as initial reports about the discovery of weapons of mass destruction, were both retracted, but Americans were less likely to remember the retractions, whereas Germans were more likely to remember both the retractions and the corrected version of the information.

Bottom-up research of collective memory, on the other hand, focuses on how individuals perceive events and then communicate them to their peers so that they become part of a shared memory canon. Researchers using this approach examine how memories are transmitted from individuals to a larger group network and how the transmission or communication between individuals is affected by the environment. In one fascinating study of autobiographical memory, researchers examined how individuals from various nations dated autobiographical events. Turks who had experienced the 1999 earthquake frequently referenced it, as did Bosnians who used the civil war to date personal events. However, Americans rarely referenced the September 11, 2001 attacks to date personal events. This research suggested that even though some events, such as the September 11, 2001 attacks, had great historical and political significance, they may not have had as profound a personal impact on the individual lives of Americans as the civil war had on Bosnians.

Hirst and his colleagues point out that cognitive research on collective memory is still in its infancy, but the questions raised at the interface of psychology, neuroscience, history and sociology are so fascinating that this area will likely blossom in the decades to come. The research questions that emerge in the near future will not only integrate cutting-edge cognitive research but will likely also address the important phenomenon of the increased flow of information – both through the migration of individuals and through digital connectedness. This research could have a profound impact on how we define ourselves and what we learn from our past to shape our future.

Reference

Hirst W, Yamashiro J & Coman A (2018). “Collective Memory from a Psychological Perspective.” Trends in Cognitive Sciences, 22(5): 438-451.

 


Note: An earlier version of this article was first published on the 3Quarksdaily blog.


The Science of Tomato Flavors

Don’t judge a tomato by its appearance. You may salivate when thinking about the luscious large red tomatoes you just purchased in your grocery store, only to find out that they are extremely bland and lack flavor once you actually bite into them after preparing the salad you had been looking forward to all day. You are not alone. Many consumers complain about the growing blandness of fruits. Up until a few decades ago, it was rather challenging to understand the scientific basis of fruit flavors. Recent biochemical and molecular studies of fruits now provide a window into fruit flavors and allow us to understand the rise of blandness.

In a recent article, the scientists Harry Klee and Denise Tieman at the University of Florida summarize some of the most important recent research on the molecular biology of fruit flavors, with a special emphasis on tomatoes. Our perception of “flavor” relies primarily on two senses – taste and smell. Taste is perceived by taste receptors in our mouth, located primarily on the tongue, which discriminate between sweet, sour, salty, bitter and savory. The sensation of smell (also referred to as “olfaction”), on the other hand, has a much broader catalog of perceptions. There are at least 400 different olfactory receptors present in the olfactory epithelium – the cells in the nasal passages which perceive smells – and the combined activation of various receptors may allow humans to distinguish up to 1 trillion smells. These receptors are activated by so-called volatile organic compounds, or volatiles, a term which refers to organic molecules that vaporize in the mouth when we chew food and enter our nasal passages to activate the olfactory epithelium. The tremendous diversity of the olfactory receptors thus allows us to perceive a wide range of flavors. Anybody who eats food while having a cold and a stuffy nose will notice how bland food becomes, even though the taste receptors on the tongue remain fully functional.

When it comes to tomato flavors, research has shown that consumers clearly prefer “sweetness”. One obvious determinant of sweetness is the presence of sugars such as glucose or fructose in tomatoes, which are sensed by the taste receptors in the mouth. But it turns out that several volatiles are critical for the perception of “sweetness” even though they are not sugars but instead activate the smell receptors in the olfactory epithelium. 6-Methyl-5-hepten-2-one, 1-nitro-2-phenylethane, benzaldehyde and 2-phenylethanol are examples of volatiles that enhance the positive flavor perceived by consumers, whereas volatiles such as eugenol and isobutyl acetate are perceived to contribute negatively to flavor. Interestingly, the same volatiles can have no effect or even the opposite effect on flavor perception when present in other fruits. Therefore, it appears that for each fruit, the sweetness flavor is created by the basic taste receptors which sense sugar levels as well as a symphony of smell sensations activated by a unique pattern of volatiles. But just as instruments play defined yet interacting roles in an orchestra, the effect of volatiles on flavor depends on the presence of other volatiles.

This complexity of flavor perception explains why it is so difficult to define flavor. The story becomes even more complicated because individuals have different thresholds for olfactory receptor activation. Furthermore, even the volatiles linked with a positive flavor perception – either by enhancing flavor intensity or by letting the consumer sense a greater “sweetness” than actually present based on sugar levels – may have varying effects when they reach higher levels. Thus, it is very difficult to breed the ideal tomato that will satisfy all consumers. But why is there this growing sense that fruits such as tomatoes are becoming blander? Have we simply not tried enough tomato cultivars? A cultivar is a plant variety that has been bred over time to create specific characteristics, and one could surmise that, with hundreds or even thousands of tomato cultivars available, each of us might identify a distinct cultivar that we find most flavorful. The volatiles are generated by metabolic enzymes encoded by genes, and differences between the flavors of distinct cultivars likely reflect differences in the expression of genes for the enzymes that regulate sugar metabolism or volatile generation.

The problem, according to Klee and Tieman, is that the customers of tomato breeders are tomato growers, not the consumers who garnish their salads or create tomato-based masalas. The goal of growers is to maximize shelf-life, appearance, disease-resistance, yield and uniformity, and breeders focus on genetically manipulating tomato strains to maximize these characteristics. The expression GMO (genetically modified organism) describes the use of modern genetic technology to modify individual genes in crops and often provokes a litany of attacks and criticisms by anti-GMO activists who fear potential risks of such genetic interventions. However, the genetic breeding and manipulation of cultivars has been occurring for centuries or even millennia using traditional low-tech methods, yet these do not seem to provoke much criticism from anti-GMO activists. Even though there is a theoretical risk that modern genetic engineering tools could pose a health risk, there is no scientific evidence that this is actually the case. Instead, one could argue that targeted genetic intervention using modern technologies may be more precise than the low-tech breeding manipulations that have led to the creation of numerous cultivars, many of which carry the “organic, non-GMO” label.

Klee and Tieman argue that consumers prefer flavor, variety and nutrition over the traditional goals of growers. The genetic and biochemical analysis of tomato cultivars now offers us a unique insight into the molecular components of flavor and nutrition. Scientists can now analyze each cultivar that has been generated over the past centuries through the low-tech genetic manipulation of selective breeding and inform consumers about its flavor footprint. Alternatively, one could use modern genetic tools such as genome editing to specifically modify flavor components while maintaining the disease-resistance and high nutritional value of crops such as tomatoes. The key to making informed, rational decisions is to provide consumers with comprehensive, evidence-based information about the nutritional value and flavor of fruits, as well as the actual risks of genetically modifying crops using traditional low-tech methods such as selective breeding and grafting or newer methods that involve genome editing.

Reference

Klee HJ & Tieman DM (2018). “The genetics of fruit flavour preferences.” Nature Reviews Genetics (published online March 2018).

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

“Hype” or Uncertainty: The Reporting of Initial Scientific Findings in Newspapers

One of the cornerstones of scientific research is the reproducibility of findings. Novel scientific observations need to be validated by subsequent studies in order to be considered robust. This has proven to be somewhat of a challenge for many biomedical research areas, including high-impact studies in cancer research and stem cell research. The fact that an initial scientific finding of a research group cannot be confirmed by other researchers does not mean that the initial finding was wrong or that there was any foul play involved. The most likely explanation in biomedical research is that there is tremendous biological variability. Human subjects and patients examined in one research study may differ substantially from those in follow-up studies. Biological cell lines and tools used in basic science studies can vary widely, depending on many details such as the medium in which cells are kept in a culture dish. The variability in findings is not a weakness of biomedical research; in fact, it is a testimony to the complexity of biological systems. Therefore, initial findings always need to be treated with caution and presented with their inherent uncertainty. Once subsequent studies – often with larger sample sizes – confirm the initial observations, they are viewed as more robust and gradually become accepted by the wider scientific community.

Even though most scientists become aware of the uncertainty associated with an initial observation as their careers progress, non-scientists may be puzzled by shifting scientific narratives. People often complain that “scientists cannot make up their minds” – citing, for example, newspaper reports stating that drinking coffee may be harmful, only to be subsequently contradicted by reports lauding the beneficial health effects of coffee drinking. Accurately communicating scientific findings as well as the inherent uncertainty of such initial findings is a hallmark of critical science journalism.

A group of researchers led by Dr. Estelle Dumas-Mallet at the University of Bordeaux recently examined the extent of uncertainty that newspapers communicate to the public when reporting initial medical research findings, in their paper “Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings”. Dumas-Mallet and her colleagues examined 426 English-language newspaper articles published between 1988 and 2009 which described 40 initial biomedical research studies. They focused on scientific studies in which a risk factor such as smoking or old age had been newly associated with a disease such as schizophrenia, autism, Alzheimer’s disease or breast cancer (12 diseases in total). The researchers only included scientific studies which had subsequently been re-evaluated by follow-up research, and found that less than one third of the initial studies had been confirmed by subsequent research. Dumas-Mallet and her colleagues were therefore interested in whether the newspaper articles, which were published shortly after the release of the initial research papers, adequately conveyed the uncertainty surrounding the initial findings and thus prepared their readers for subsequent research that might confirm or invalidate the initial work.

The University of Bordeaux researchers specifically examined whether the headlines of the newspaper articles were “hyped” or “factual”, whether the articles mentioned that this was an initial study, and whether they clearly indicated the need for replication or validation by subsequent studies. Roughly 35% of the headlines were “hyped”. One example of a “hyped” headline was “Magic key to breast cancer fight” instead of a more factual headline such as “Scientists pinpoint genes that raise your breast cancer risk”. Dumas-Mallet and her colleagues found that even though 57% of the newspaper articles mentioned that these medical research studies were initial findings, only 21% of newspaper articles included explicit “replication statements” such as “Tests on larger populations of adults must be performed” or “More work is needed to confirm the findings”.

The researchers next examined the key characteristics of the newspaper articles which were more likely to convey the uncertainty or preliminary nature of the initial scientific findings. Newspaper articles with “hyped” headlines were less likely to mention the need for replicating and validating the results in subsequent studies. On the other hand, newspaper articles which included a direct quote from one of the research study authors were three times more likely to include a replication statement. In fact, approximately half of all the replication statements mentioned in the newspaper articles were found in author quotes, suggesting that many scientists who conducted the research readily emphasize the preliminary nature of their work. Another interesting finding was the gradual shift over time in conveying scientific uncertainty. “Hyped” headlines were rare before 2000 (only 15%) and became more frequent during the 2000s (43%). On the other hand, replication statements were more common before 2000 (35%) than after 2000 (16%). This suggests that there was a trend towards conveying less uncertainty after 2000, which is surprising because debate about scientific replicability in the biomedical research community seems to have become much more widespread in the past decade.

As with all scientific studies, we need to be aware of the limitations of the analysis performed by Dumas-Mallet and her colleagues. They focused on a very narrow area of biomedical research – newly identified risk factors for selected diseases. It remains to be seen whether other areas of biomedical research, such as the treatment of diseases or basic science discoveries of new molecular pathways, are also reported with “hyped” headlines and without replication statements. In other words, this research on “replication statements” in newspaper articles also needs to be replicated. It is also not clear whether the worrisome trend of over-selling the robustness of initial research findings after the year 2000 still persists, since the work by Dumas-Mallet and colleagues did not include studies published after 2009. One would hope that the recent discussions about replicability among scientists would reverse this trend. Even though the findings of the University of Bordeaux researchers need to be replicated by others, science journalists and readers of newspapers can glean some important information from this study: one needs to be wary of “hyped” headlines, and it can be very useful to interview the authors of scientific studies when reporting about new research, especially asking them about the limitations of their work. “Hyped” newspaper headlines and an exaggerated sense of certainty in initial scientific findings may erode the public’s long-term trust in scientific research, especially if subsequent studies fail to replicate the initial results. Critical and comprehensive reporting of biomedical research studies – including their limitations and uncertainty – by science journalists is therefore a very important service to society which contributes to science literacy and science-based decision making.

Reference

Dumas-Mallet E, Smith A, Boraud T & Gonon F (2018). “Scientific Uncertainty in the Press: How Newspapers Describe Initial Biomedical Findings.” Science Communication, 40(1): 124-141.

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

The Anatomy of Friendship in a “Digital Age”

Why is the number of friendships that we can actively maintain limited to 150? The evolutionary psychologist and anthropologist Robin Dunbar at the University of Oxford is a pioneer in the study of friendship. Over several decades, he and his colleagues have investigated the nature of friendship and social relationships in non-human primates and humans. His research papers and monographs on social networks, grooming, gossip and friendship have accumulated tens of thousands of academic citations, but he may be best known in popular culture for “Dunbar’s number”, the limit to the number of people with whom an individual can maintain stable social relationships. For humans, this number is approximately 150, although there are of course variations between individuals and across one’s lifetime. The expression “stable social relationships” refers to what we would call friends and family members with whom we regularly interact. Most of us may know far more people, but they likely fall into a category of “acquaintances” rather than “friends”. Acquaintances, for example, are fellow students and colleagues whom we occasionally meet at work but do not regularly invite over to share meals or swap anecdotes, as we would do with our friends.

Dunbar recently reviewed more than two decades of research on humans and non-human primates in the article “The Anatomy of Friendship” and outlines two fundamental constraints: time and our brain. In order to maintain friendships, we have to invest time. As most of us intuitively know, friendship is subject to hierarchies. Dunbar and other researchers have been able to study these hierarchies scientifically and found remarkable consistency in the structure of the friendship hierarchy across networks and cultures. This hierarchy can best be visualized as concentric circles of friendship. The innermost core circle consists of 1-2 friends, often the romantic partner and/or the closest family member. The next circle contains approximately 5 very close friends, followed by progressively wider circles until we reach the maximum of about 150. The wider the circle becomes, the less time we invest in “grooming” or communicating with those friends. The social time we invest also mirrors the emotional closeness we feel. It appears that up to 40% of our social time is devoted to the inner circle of our 5 closest friends, about 20% to the circle of 15 friends, and progressively less to the wider circles. The overall social time available to “invest” in friendships on any given day is limited by our need to sleep and work, which in turn limits the number of friends in each circle as well as the total number of friendships.

The Circles of Friendship – modified from R Dunbar, The Anatomy of Friendship (2018)
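For readers who like to see the arithmetic, here is a minimal sketch of the time-budget logic described above. The 40% and 20% figures for the two innermost circles are taken from the article; the shares assumed for the wider circles of 50 and 150 are hypothetical placeholders, chosen only to make the per-person comparison concrete.

```python
# Minimal sketch of the friendship-circle time budget (illustrative only).
# Shares for the 50- and 150-person circles are assumptions, not figures from Dunbar.

circles = [
    # (cumulative circle size, share of total social time devoted to this layer)
    (5,   0.40),  # 5 closest friends (share cited in the article)
    (15,  0.20),  # next layer, up to 15 close friends (share cited in the article)
    (50,  0.15),  # assumed share for illustration
    (150, 0.10),  # assumed share for illustration
]

previous_size = 0
for size, share in circles:
    layer_members = size - previous_size      # people added at this layer
    per_person = share / layer_members        # share of social time per person in the layer
    print(f"circle of {size:>3}: {share:.0%} of social time, ~{per_person:.1%} per person")
    previous_size = size
```

The point of this toy calculation is simply that the per-person investment drops sharply – from roughly 8% of our social time for each of the five closest friends to about 2% for each person in the next layer – mirroring the gradient of emotional closeness that Dunbar describes.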

The second constraint which limits the number of friendships we can maintain is our cognitive capacity. According to Dunbar, there are at least two fundamental cognitive processes at play in forming friendships. First, there needs to be some basis of trust in a friendship because it represents an implicit social contract, such as a promise of future support if needed and an underlying promise of reciprocity – “If you are here for me now, I will be there for you when you need me.” For a stable friendship between two individuals, both need to be aware of how certain actions could undermine this implicit contract. For example, friends who continually borrow my books and seem to think that they are allowed to keep them indefinitely will find that they are gradually nudged to the outer circles of friendship and eventually cross into acquaintance territory. This is not only because I feel I am being taken advantage of and the implicit social contract is being violated, but also because they do not appear to put in the mental effort to realize how much I value my books and how their unilateral “borrowing” may affect me. This brings us to “mentalizing”, the second cognitive component that, according to Dunbar, is critical for stable friendships. Mentalizing refers to the ability to read or understand someone else’s state of mind. To engage in an active dialogue with friends requires not only being able to read their states of mind but also to infer the states of mind of the people they are talking about. These levels of mentalizing (“I think that you feel that she was correct in…”) appear to hit a limit at around four or five. Dunbar cites the example of a gathering at which up to four people can have an active conversation in which each person closely follows what everyone else is saying, but once a fifth person joins (the fifth wheel!), the conversation is likely to split into two. The same is true for many TV shows and plays, in which scenes rarely depict more than four characters actively participating in a conversation.

Has the digital age changed the number of friends we can have? The prior research by Dunbar and his colleagues relied on traditional means of communication between friends, such as face-to-face interactions and phone calls, but do these findings still apply today, when social media such as Facebook and Twitter allow us to have several hundred or even thousands of “friends” and “followers”? The surprising finding is that online social networks are quite similar to traditional networks! In a study of Facebook and Twitter social media networks, Dunbar and his colleagues found that these networks exhibit a hierarchy of friendship and numbers of friends that are extremely similar to “offline” networks. Even though it is possible to have more than a thousand “friends” on Facebook, it turns out that most bidirectional interactions with individuals are again concentrated in very narrow circles of approximately 5, 15 and 50 individuals. Social media make it much easier to broadcast information to a broad group of individuals, but this sharing of information is very different from the “grooming” of friendships, which appears to be based on reciprocity in terms of building trust and mentalizing.

There is a tendency to believe that the Internet has revolutionized all forms of human communication, a belief which falls under the rubric of “internet-centrism” (See the article “Is Internet-Centrism a Religion“) according to the social researcher Evgeny Morozov. Dunbar’s research is an important reminder that core biological and psychological principles such as the anatomy of friendship in humans have evolved over hundreds of thousands of years and will not be fundamentally upstaged by technological improvements in communication. Friendship and its traditional limits are here to stay.

Reference

Dunbar RIM (2018). “The Anatomy of Friendship.” Trends in Cognitive Sciences, 22(1): 32-51.

 

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Novelty in science – real necessity or distracting obsession?

It may take time for a tiny step forward to show its worth.
ellissharp/Shutterstock.com

Jalees Rehman, University of Illinois at Chicago

In a recent survey of over 1,500 scientists, more than 70 percent of them reported having been unable to reproduce other scientists’ findings at least once. Roughly half of the surveyed scientists ran into problems trying to reproduce their own results. No wonder people are talking about a “reproducibility crisis” in scientific research – an epidemic of studies that don’t hold up when run a second time.

Reproducibility of findings is a core foundation of science. If scientific results only hold true in some labs but not in others, then how can researchers feel confident about their discoveries? How can society put evidence-based policies into place if the evidence is unreliable?

Recognition of this “crisis” has prompted calls for reform. Researchers are feeling their way, experimenting with different practices meant to help distinguish solid science from irreproducible results. Some people are even starting to reevaluate how choices are made about what research actually gets tackled. Breaking innovative new ground is flashier than revisiting already published research. Does prioritizing novelty naturally lead to this point?

Incentivizing the wrong thing?

One solution to the reproducibility crisis could be simply to conduct lots of replication studies. For instance, the scientific journal eLife is participating in an initiative to validate and reproduce important recent findings in the field of cancer research. The first set of these “rerun” studies was recently released and yielded mixed results: the results of 2 out of 5 research studies were reproducible, one was not, and two additional studies did not provide definitive answers.

There’s no need to restrict these sort of rerun studies to cancer research – reproducibility issues can be spotted across various fields of scientific research.

Researchers should be rewarded for carefully shoring up the foundations of the field.
Alexander Raths/Shutterstock.com

But there’s at least one major obstacle to investing time and effort in this endeavor: the quest for novelty. The prestige of an academic journal depends at least partly on how often the research articles it publishes are cited. Thus, research journals often want to publish novel scientific findings which are more likely to be cited, not necessarily the results of newly rerun older research.

A study of clinical trials published in medical journals found the most prestigious journals prefer publishing studies considered highly novel and not necessarily those that have the most solid numbers backing up the claims. Funding agencies such as the National Institutes of Health ask scientists who review research grant applications to provide an “innovation” score in order to prioritize funding for the most innovative work. And scientists of course notice these tendencies – one study found the use of positive words like “novel,” “amazing,” “innovative” and “unprecedented” in paper abstracts and titles increased almost ninefold between 1974 and 2014.

Genetics researcher Barak Cohen at Washington University in St. Louis recently published a commentary analyzing this growing push for novelty. He suggests that progress in science depends on a delicate balance between novelty and checking the work of other scientists. When rewards such as funding of grants or publication in prestigious journals emphasize novelty at the expense of testing previously published results, science risks developing cracks in its foundation.

Houses of brick, mansions of straw

Cancer researcher William Kaelin Jr., a recipient of the 2016 Albert Lasker Award for Basic Medical Research, recently argued for fewer “mansions of straw” and more “houses of brick” in scientific publications.

One of his main concerns is that scientific papers now inflate their claims in order to emphasize their novelty and the relevance of biomedical research for clinical applications. By exchanging depth of research for breadth of claims, researchers risk compromising the robustness of the work. And by claiming excessive novelty and impact, researchers may undermine a paper’s actual significance because they may fail to provide solid evidence for each claim.

Kaelin even suggests that some of his own work from the 1990s, which transformed cell biology research by discovering how cells can sense oxygen, may have struggled to get published today.

Prestigious journals often now demand complete scientific stories, from basic molecular mechanisms to proving their relevance in various animal models. Unexplained results or unanswered questions are seen as weaknesses. Instead of publishing one exciting novel finding that is robust, and which could spawn a new direction of research conducted by other groups, researchers now spend years gathering a whole string of findings with broad claims about novelty and impact.

There should be more than one path to a valuable journal publication.
Mehaniq/Shutterstock.com

Balancing fresh findings and robustness

A challenge for editors and reviewers of scientific manuscripts is judging the novelty and likely long-term impact of the work they’re assessing. The eventual importance of a new, unique scientific idea is sometimes difficult to recognize even by peers who are grounded in existing knowledge. Many basic research studies form the basis of future practical applications. One recent study found that, of basic research articles that received at least one citation, 80 percent were eventually cited by a patent application. But it takes time for practical significance to come to light.

A collaborative team of economics researchers recently developed an unusual measure of scientific novelty by carefully studying the references of a paper. They ranked a scientific paper as more novel if it cited a diverse combination of journals. For example, a scientific article citing a botany journal, an economics journal and a physics journal would be considered very novel if no other article had cited this combination of varied references before.

This measure of novelty allowed them to identify papers which were more likely to be cited in the long run. But it took roughly four years for these novel papers to start showing their greater impact. One may disagree with this particular indicator of novelty, but the study makes an important point: It takes time to recognize the full impact of novel findings.
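As a thought experiment, the combination-of-journals rule described above can be sketched in a few lines of code. This is a simplified illustration, not the economists’ actual method (which involves additional statistical weighting); it simply flags the journal pairings in a paper’s reference list that no earlier paper in a toy corpus has cited together. All journal names below are hypothetical examples.

```python
from itertools import combinations


def novel_journal_pairs(candidate_refs, earlier_papers_refs):
    """Return the journal pairs cited by the candidate paper that have never
    co-occurred in any earlier paper's reference list (simplified illustration)."""
    # Collect every journal pair that has already appeared together in earlier papers.
    seen_pairs = set()
    for refs in earlier_papers_refs:
        for pair in combinations(sorted(set(refs)), 2):
            seen_pairs.add(pair)

    # Pairs cited by the candidate paper that are new to the corpus.
    candidate_pairs = set(combinations(sorted(set(candidate_refs)), 2))
    return candidate_pairs - seen_pairs


# Hypothetical toy corpus: each list holds the journals cited by one earlier paper.
earlier = [
    ["Journal of Botany", "Plant Cell"],
    ["Econometrica", "Journal of Political Economy"],
    ["Physical Review Letters", "Nature Physics"],
]

# A candidate paper citing a botany, an economics and a physics journal together.
candidate = ["Journal of Botany", "Econometrica", "Physical Review Letters"]

print(sorted(novel_journal_pairs(candidate, earlier)))
# All three cross-field pairings are new here, so the paper would score as highly novel.
```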

Realizing how difficult it is to assess novelty should give funding agencies, journal editors and scientists pause. Progress in science depends on new discoveries and following unexplored paths – but solid, reproducible research requires an equal emphasis on the robustness of the work. By restoring the balance between demands and rewards for novelty and robustness, science will achieve even greater progress.

Jalees Rehman, Associate Professor of Medicine and Pharmacology, University of Illinois at Chicago

This article was originally published on The Conversation. Read the original article.

Neuroprediction: Using Neuroscience to Predict Violent Criminal Behavior

Can neuroscience help identify individuals who are most prone to engaging in violent criminal behavior? Will it help the legal system make decisions about sentencing, probation, parole or even court-mandated treatments? A panel of researchers led by Dr. Russell Poldrack from Stanford University recently reviewed the current state of research and outlined the challenges that need to be addressed for “neuroprediction” to gain traction. The use of scientific knowledge to predict violent behavior is not new. Social factors such as poverty and unemployment increase the risk of engaging in violent behavior. Twin and family studies suggest that genetic factors also significantly contribute to antisocial and violent behavior, but the precise genetic mechanisms remain unclear. A substantial amount of research has focused on variants of the MAOA gene (monoamine oxidase A, an enzyme involved in the metabolism of neurotransmitters). Certain MAOA variants have been linked to increased violent behavior, but they are quite common – up to 40% of the US population may carry such a variant! As pointed out by John Horgan in Scientific American, it is impossible to derive meaningful predictions of individual behavior based on the presence of such common gene variants.

One fundamental problem of using social and genetic predictors of violent criminal behavior in the legal setting is the group-to-individual problem. Carrying a gene variant or having been exposed to poverty as a child may increase the group risk for future criminal behavior, but it tells us little about any individual who is part of the group. Most people who grow up in poverty or carry the above-mentioned MAOA gene variant do not engage in violent criminal behavior. Since the legal system is concerned with an individual’s guilt and his or her likelihood of committing future violent crimes, group characteristics are of little help. This is where brain imaging may represent an advance because it can assess individual brains. Imaging individual brains might provide much better insights into a person’s brain function and potential for violent crime than more generic assessments of behavior or genetic risk factors.

Poldrack and colleagues cite a landmark study published in 2013 by Eyal Aharoni and colleagues in which 96 adult offenders underwent brain imaging with a mobile MRI scanner before being released from one of two New Mexico state correctional facilities. The prisoners were followed for up to four years after their release and the rate of being arrested again was monitored.

This study found that lower activity in the anterior cingulate cortex (ACC – an area of the brain involved in impulse control) was associated with a higher rate of being arrested again (60% in participants with lower ACC activity, 46% in those with higher ACC activity). The sample size and the rate of re-arrest were too small to determine the predictive accuracy for violent-crime re-arrests (as opposed to all re-arrests). Poldrack and colleagues lauded the study for dealing with the logistics of performing such complex brain imaging studies by using a mobile MRI scanner at the correctional facilities and for prospectively monitoring the re-arrest rate. However, they also pointed out some limitations of the study in terms of the analysis and the need to validate the results in other groups of subjects.

Brain imaging is also fraught with the group-to-individual problem. Crude measures such as ACC activity may provide statistically significant correlations for differences between groups, but they do not tell us much about how any one individual is likely to behave in the future. The differences in the re-arrest rates between the high and low ACC activity groups are not that profound, and it is unlikely that they would be of much use in the legal system. So is there a future for “neuroprediction” when it comes to deciding about the sentencing or parole of individuals?
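Before turning to that question, a rough back-of-the-envelope calculation helps make the group-to-individual problem concrete. It uses only the re-arrest rates quoted above; the equal group sizes are an assumption made purely for illustration, not a figure from the Aharoni study.

```python
# Back-of-the-envelope sketch of why a group-level difference (60% vs. 46% re-arrest)
# translates into weak individual-level prediction.
# Assumption for illustration only: the low- and high-ACC groups are equal in size.

n_low, n_high = 100, 100                    # hypothetical group sizes (assumption)
rearrest_low, rearrest_high = 0.60, 0.46    # re-arrest rates reported in the article

# A simple "classifier": predict re-arrest for everyone with low ACC activity
# and no re-arrest for everyone with high ACC activity.
true_positives  = rearrest_low * n_low            # low ACC, re-arrested
true_negatives  = (1 - rearrest_high) * n_high    # high ACC, not re-arrested

accuracy = (true_positives + true_negatives) / (n_low + n_high)
print(f"accuracy of the ACC-based prediction: {accuracy:.0%}")          # 57%

# Compare with the trivial strategy of predicting re-arrest for everyone.
baseline = (rearrest_low * n_low + rearrest_high * n_high) / (n_low + n_high)
print(f"accuracy of predicting re-arrest for everyone: {baseline:.0%}")  # 53%
```

Under these illustrative assumptions, an ACC-based rule is only a few percentage points better than the trivial strategy, which is why such group-level differences are of little help for decisions about any single individual.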

Poldrack and colleagues outline some of the challenges of brain imaging for neuroprediction. One major challenge is the issue of selecting subjects. Many people may refuse to undergo brain imaging, and those who struggle with impulse control and discipline may be especially likely to refuse brain scanning or to move during the scanning process and thus distort the images. This could skew the results because those with the poorest impulse control may never be part of the brain imaging studies. Other major challenges include using large enough and representative sample sizes, replicating studies, eliminating biases in the analyses and developing a consensus on the best analytical methods. Addressing these challenges would advance the field.

It does not appear that neuroprediction will become relevant for court cases in the near future. The points outlined by the experts remind us that we need to be cautious when interpreting brain imaging data and that solid science is required before rushing to premature speculation and hype about using brain scanners in courtrooms.

Reference

Poldrack RA et al. (2017). “Predicting Violent Behavior: What Can Neuroscience Add?” Trends in Cognitive Sciences (in press).

Note: An earlier version of this article was first published on the 3Quarksdaily blog.

Do We Value Physical Books More Than Digital Books?

Just a few years ago, the onslaught of digital books seemed unstoppable. Sales of electronic books (E-books) were surging, and people were extolling the convenience of carrying around a whole library of thousands of books on a tablet, phone or E-book reader such as the Amazon Kindle. In addition to portability, E-books allow for highlighting and annotating key sections, searching for keywords and names of characters, and even looking up unknown vocabulary with a single touch. It seemed like only a matter of time until E-books would more or less wholly replace old-fashioned physical books. But recent data seem to challenge this notion. A Pew survey on the reading habits of Americans released in 2016 shows that E-book reading may have reached a plateau in recent years, and there is no evidence pointing towards the anticipated extinction of physical books.

The researchers Ozgun Atasoy and Carey Morewedge from Boston University recently conducted a study which suggests that one reason for the stalled growth of the E-book market share may be that consumers simply value physical goods more than digital goods. In a series of experiments, they tested how much consumers value equivalent physical and digital items, such as physical versus digital photographs or physical versus digital books. They also asked participants questions which allowed them to infer some of the psychological motivations that would explain the differences in value.

In one experiment, a research assistant dressed up in a Paul Revere costume asked tourists visiting the Old North Church in Boston whether they would like to have their photo taken with the Paul Revere impersonator and keep the photo as a souvenir of their visit. Eighty-six tourists (average age 40 years) volunteered and were informed that they would be asked to donate money to a foundation maintaining the building. The donation could be as low as $0, and the volunteers were randomly assigned to receive either a physical photo or a digital photo. Participants in both groups received their photo within minutes of it being taken, either as an instant-printed photograph or as an emailed digital photograph. It turned out that the participants randomly assigned to the digital photo group donated significantly less money than those in the physical photo group (a median of $1 in the digital group versus $3 in the physical group).

In fact, approximately half the participants in the digital group decided to donate no money at all. Interestingly, the researchers also asked the participants to estimate the cost of making the photo (such as the cost of the Paul Revere costume and other materials as well as paying the photographer). Both groups estimated the cost at around $3 per photo, but despite this estimate, the group receiving digital photos was much less likely to donate money, suggesting that they valued their digital souvenir less.

In a different experiment, the researchers recruited volunteer subjects (100 subjects, mean age 33) online using a web-based survey in which they asked participants how much they would be willing to pay for a physical or digital copy of either a book such as Harry Potter and the Sorcerer’s Stone (print-version or the Kindle E-book version) or a movie such as The Dark Knight (DVD or the iTunes digital version). Participants were also asked how much “personal ownership” they would feel for the digital versus the corresponding physical items by completing a questionnaire scored with responses ranging from “strongly agree” to “strongly disagree” to statements such as “feel like it is mine”.  In addition to these ownership questions, they also indicated how much they thought they would enjoy the digital and physical versions.

The participants were willing to pay significantly more for the physical book and physical DVD than for the digital counterparts even though they estimated that the enjoyment of either version would be similar. It turned out that participants also felt a significantly stronger sense of personal ownership when it came to the physical items and that the extent of personal ownership correlated nicely with the amount they were willing to pay.

To assess whether a greater sense of personal ownership and control over physical goods was a central factor in explaining the higher value, the researchers then conducted another experiment in which participants (275 undergraduate students, mean age of 20) were given a hypothetical scenario and asked how much they would be willing to pay for either purchasing or renting textbooks in digital and print formats. The researchers surmised that if ownership of a physical item was a key factor in explaining the higher value, then there should not be much of a difference between the estimated values of physical and digital textbook rentals. You do not “own” or “control” a book if you are merely renting it, because you will have to give it up at the end of the rental period anyway. The data confirmed the hypothesis. For digital textbooks, participants were willing to pay the same price for a rental or a purchase (roughly $45), whereas they would pay nearly twice that for purchasing a physical textbook ($88). Renting a physical textbook was valued at around $59, much closer to the amount the participants would have paid for the digital versions.

This research study raises important questions for the digital economy by establishing that consumers likely place a higher value on physical items and by providing some insights into the underlying psychology. Sure, some of us may like physical books because of the tactile sensation of thumbing through pages or being able to elegantly display our books on a bookshelf. But the question of ownership and control is also an important point. If you purchase an E-book through the Amazon Kindle system, you cannot give it away as a present or sell it once you are done, and the rules for how to lend it to others are dictated by the Kindle platform. Even concerns about truly “owning” an E-book are not unfounded, as became apparent during the infamous “1984” E-book scandal, when Amazon deleted purchased copies of the book – ironically, George Orwell’s classic which decries Big Brother controlling information – from the E-book readers of its customers because of copyright infringement issues. Even though the digital copies of 1984 had been purchased, Amazon still controlled access to the books.

Digital goods have made life more convenient and also bring with them collateral benefits such as an environmentally friendly reduction in paper consumption. However, some of the issues of control and ownership associated with digital goods need to be addressed to build more trust among consumers and achieve more widespread adoption.

Reference

Atasoy O & Morewedge CK (2017). “Digital Goods Are Valued Less Than Physical Goods.” Journal of Consumer Research (in press).

Note: An earlier version of this article was first published on the 3Quarksdaily blog.