Researchers at Facebook, Inc., the University of California, San Francisco (UCSF), and Cornell University teamed up to study whether manipulating the News Feeds of Facebook users would affect the emotional content of the users’ own status updates. They recently published their findings in the PNAS paper “Experimental evidence of massive-scale emotional contagion through social networks” and suggest that they have found evidence of “emotional contagion”, i.e., the idea that emotions can spread from user to user via Facebook.
The size of the study is quite impressive: The researchers analyzed the postings of 689,003 Facebook users (randomly selected based on their user ID) during the week of January 11-18, 2012! This probably makes it the largest study of its kind in which the social media feeds of individual users were manipulated. Other large-scale social media studies have relied on observing correlations but have not intervened on such a massive scale. The users’ postings (over three million of them) were analyzed by software that evaluated the emotional content of each posting. The researchers never saw the actual postings, which is why they felt that the research was covered by Facebook’s Data Use Policy and did not require individual informed consent. This means that the individual Facebook users were probably unaware that their News Feeds were being manipulated and their postings analyzed for emotional content.
The researchers selectively removed items with either “positive” or “negative” emotional content from the News Feeds of individual users. The emotional content of News Feed items was categorized using the LIWC software, which classifies words such as “ugly” or “hurt” as negative and “nice” or “sweet” as positive. Each emotional post had a 10%-90% chance (assigned based on the user’s ID) of being removed from the News Feed. Since removing News Feed items could have a non-specific, general effect simply by exposing users to fewer updates, the researchers also studied control groups in whom the same number of News Feed items were randomly removed, independent of their emotional content.
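To make the design concrete, here is a minimal Python sketch of the filtering step. The word lists are illustrative stand-ins (the real LIWC dictionaries are far larger), and the hashing scheme for deriving a per-user removal probability is an assumption; the paper states only that the probability was assigned based on the user ID.

```python
import hashlib
import random

# Illustrative stand-ins for the LIWC dictionaries (assumption:
# the actual word lists are much larger and are not reproduced here).
POSITIVE_WORDS = {"nice", "sweet", "happy", "love"}
NEGATIVE_WORDS = {"ugly", "hurt", "sad", "angry"}

def emotional_labels(post_text):
    """Label a post 'positive'/'negative' if it contains at least one
    matching word (a post can carry both labels). Tokenization here is
    deliberately simplistic."""
    words = set(post_text.lower().split())
    labels = set()
    if words & POSITIVE_WORDS:
        labels.add("positive")
    if words & NEGATIVE_WORDS:
        labels.add("negative")
    return labels

def omission_probability(user_id):
    """Deterministically map a user ID to a removal chance between
    10% and 90%. This hashing scheme is a hypothetical illustration."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    return 0.10 + (int(digest, 16) % 81) / 100.0  # 0.10 .. 0.90

def show_in_feed(user_id, post_text, condition):
    """Decide whether one candidate News Feed item is shown to a user
    in the positivity- or negativity-reduced condition."""
    if condition not in emotional_labels(post_text):
        return True  # posts without the targeted emotion are never filtered
    return random.random() >= omission_probability(user_id)
```

Note that this filtering only affected whether a post appeared in a given News Feed view; as discussed below, the post itself remained visible on the poster’s own timeline.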
Importantly, 22.4% of posts contained “negative” words, whereas 46.8% of posts contained “positive” words, suggesting that there is roughly a 2:1 ratio of “positive” to “negative” posts on Facebook. This bias towards positivity is compatible with prior research which has shown that sharing of “negative” emotions via Facebook is not always welcome. The difference in total number of “positive” and “negative” posts forced the researchers to use two distinct control groups. For example, users for whom 20% of News Feed posts containing “positive” content were removed required a control group in which 20% of 46.8% (i.e., 9.36%) of News Feed items were randomly removed (regardless of the emotional content). On the other hand, users for whom 20% of News Feed items containing “negative” content were removed had to be matched with control groups in which 20% of 22.4% (i.e., 4.48%) of posts were randomly removed. The researchers only manipulated the News Feeds but did not remove any posts from the timeline or “wall” of any Facebook user.
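The control-group matching is simple arithmetic: the random-removal rate in a control arm equals the experimental removal fraction multiplied by the base rate of the targeted emotion category. A short sketch, using the base rates reported in the paper, reproduces the 9.36% and 4.48% figures:

```python
# Base rates of emotional posts from the paper: 46.8% of posts
# contained positive words, 22.4% contained negative words.
BASE_RATE = {"positive": 0.468, "negative": 0.224}

def matched_control_rate(condition, removal_fraction):
    """Fraction of ALL News Feed items removed entirely at random in
    the control group matched to a given experimental arm."""
    return removal_fraction * BASE_RATE[condition]

print(matched_control_rate("positive", 0.20))  # 0.0936 -> 9.36%
print(matched_control_rate("negative", 0.20))  # 0.0448 -> 4.48%
```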
The tweaking of the users’ News Feeds had a statistically significant impact on what the users posted. Removing “positive” items from the News Feed decreased the “positive” word usage in the users’ own postings from roughly 5.25% to 5.1%. Similarly, removal of “negative” News Feed items resulted in a reduction of “negative” word usage in the posts of the negativity-deprived users. The overall effects were statistically significant but still minuscule (changes of merely 0.05 to 0.15 percentage points across the various groups). However, one has to bear in mind that the interventions were also rather subtle: Some of the positivity- or negativity-deprived subjects had only 10% of their “positive” or “negative” News Feed items removed. Perhaps the results would have been more impressive if the researchers had focused on severe deprivation of “positivity” or “negativity” (i.e., 90% or even 100% removal of “negative”/“positive” items).
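Why can such tiny changes still be statistically significant? With word counts on the scale of this study, even a 0.15-percentage-point difference produces an enormous test statistic. The sketch below illustrates this with a simple two-proportion z-test on hypothetical counts; it is not the paper’s actual analysis (the authors used weighted regression models), and the sample sizes are invented for illustration only.

```python
import math

# Hypothetical counts chosen to match the blog's rough numbers:
# ~5.25% vs ~5.10% positive words, tens of millions of words per arm.
n1, p1 = 10_000_000, 0.0525   # control arm: words analyzed, share positive
n2, p2 = 10_000_000, 0.0510   # positivity-reduced arm

# Standard two-proportion z-test with a pooled proportion.
pooled = (n1 * p1 + n2 * p2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"difference = {(p1 - p2) * 100:.2f} percentage points, z = {z:.1f}")
# z comes out around 15: overwhelmingly "significant", yet the
# practical effect remains a fraction of a percentage point.
```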
The study shows that emotions expressed by others on Facebook can indeed influence our own emotions. However, in light of the small effect sizes, it is probably premature to call the observed effect a “massive-scale emotional contagion”, as the title of the PNAS paper claims. The study also raises important questions about the ethics of manipulating News Feeds and analyzing postings on such a large scale without informing individual users and obtaining their consent. The fact that the researchers relied on the general Facebook Data Use Policy as sufficient permission for this research should serve as a reminder that when we sign up for “free” accounts with Facebook or other social media platforms, we grant corporate social media providers access to highly personal data.
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788-8790. DOI: 10.1073/pnas.1320040111