By now, you’ve probably heard about the controversy surrounding the massive study conducted by Facebook, titled “Experimental evidence of massive-scale emotional contagion through social networks” and published in the June Proceedings of the National Academy of Sciences. In the study, 689,003 unknowing participants were exposed to more positive or more negative posts on the service. The study then looked to see whether people who saw more negative posts in turn wrote more negative posts. (As a side note, if you aren’t familiar with the idea of emotional contagion, a few semesters back I used an interesting book called “The Dragonfly Effect” in my Social Media and Social Change class. The book discusses the concept. Essentially, it argues that emotion has a viral quality that can spread, and that telling the emotional aspect of your cause is therefore critical for spreading support for a social cause.)
I’m not here to comment on whether the study was ethical (certainly it wouldn’t have passed any IRB I’ve ever heard of). No matter your opinion, this case presents a very interesting situation that will make for a great discussion in a research class when it comes time to talk ethics and IRB.
In my experience teaching research methods, students tend to be uninterested (see: blank stares, checking smartphones) in discussions of the ethical obligations of researchers and the IRB. And I understand. Unless a student is going to graduate school, the likelihood that she will have to deal with the IRB and research ethics outside of our class seems fairly low (unless, of course, she participates in studies). But this case is an important reminder to students that research does not have to be confined to the academic setting. While this study was published in an academic journal, the Washington Post reported that employee and co-author Adam D.I. Kramer stated in posts on his Facebook page: “… we were concerned that exposure to friends’ negativity might lead people to avoid visiting Facebook.”
And what better way to find out than through a social scientific study? Facebook has the access and, through its algorithm, clearly the ability to manipulate what you see. The fact that they are doing this doesn’t surprise me (they manipulate users’ news feeds in an attempt to optimize their service). And the anxiety and controversy it is causing doesn’t surprise me either (it raises questions in the minds of many: How else are they manipulating me? And that breeds distrust). What I find interesting is that they made their work public and used it to contribute to scientific understanding. And, in a way, I’m glad they did, because it creates teaching and learning opportunities for all of us.
Great Discussion Topic For Class #1: Informed Consent: What is it? What constitutes consent? Did users consent in this case? And what are the ramifications of not having informed consent?
I tell my students that informed consent means exactly that: you are INFORMED as to what the study is about, and you CONSENT to participate. There was no overt informed consent here. No one actively stated their agreement to participate in the study, and users were not informed about the study and what it entailed.
The obvious argument is that Facebook should have disclosed this information. From a professional communication perspective, this works. Clear. Transparent. Don’t violate the trust of your users.
If Facebook was concerned that negative content might lead people to avoid visiting Facebook, was their answer to enlist users in an experiment without their consent? Which is more likely to keep people from visiting Facebook?
I always tell my students that one of the ramifications of unethical or deceptive scholarly research is the threat to public trust in scholarship and scholarly institutions. The public has long heard about the Milgram studies and other ethically dubious research, and people are nervous about participating in academic research for those reasons. People don’t like to be duped, and the common perception is that scholarly research involves deception or trickery – that some hidden hand is manipulating them in an uneven interaction where the researcher is supposed to be blindly trusted. This case with Facebook may produce the same wariness in the public psyche as those famous studies we all learn about in school. Indeed, it seems more and more people are concerned about privacy and what they’re giving up about their lives in exchange for free services (e.g., concerns about Google and its Google Glass).
Now, Facebook argues that your consent to participate is implied by your agreement to the Facebook terms of service. And it can be argued that if people had known they were being manipulated, the Hawthorne Effect would likely have taken place, and the experiment would not have been effective – thus, in the eyes of some, justifying the use of deception.
I’m sure that asking my students, the next time I teach my communication research class, “What if you were one of the participants? How would you feel?” will produce a lively response. I’ll be sure to remind them that it is possible they were, and that they’ll never know it. I’m interested to know their informed opinion after we discuss these topics: What responsibility does Facebook have to disclose this information?
Great Discussion Topic For Class #2: IRB – what is it? Why is it important? And what needs to be reviewed and what can be exempted?
Who is responsible for reviewing a study such as this and making sure it passes appropriate guidelines to ensure participant rights are considered and protected?
Of course, at a university we have the Institutional Review Board. We’ve all dealt with them at one time or another. But when a corporation does a study, shouldn’t that also fall under review of a governing board?
Interestingly, this study was done in part with researchers at Cornell. And a quick read of this Washington Post article gives the casual reader the impression that Cornell’s IRB reviewed and approved this research. However, you’ll note that it states the board approved the use of pre-existing data from Facebook. That is something many of us have done in conducting our research on Facebook or Twitter: the data already exists, and we’re going to analyze it. But that wasn’t really the case in this study. Facebook conducted the experiment, and researchers from Cornell then analyzed the results. It seems there may have been some confusion about the fact that Facebook conducted the experiment, and clearly there was some confusion in how the media reported it or how the public interpreted it, because that article was updated to note that Cornell did not approve the manipulation in the study. A follow-up article discusses Cornell’s IRB’s effort to clarify that the study was conducted before its IRB was consulted.
Because the distinction between conducting an experiment and using pre-existing data may seem confusing to someone unfamiliar with it, this is a great opportunity to explicate that distinction for students and help them understand the notions of pre-existing data and public data, what can be exempt and what can’t be (and that even if something is going to be exempt, it still needs to go through the IRB!).
Altogether, I’m sure this will really help students see why IRBs are in place and the importance of their role – and, while it is a lot of work to go through the process, why it matters for ensuring public safety and trust.
These are just some thoughts and starting points. And this post is getting rather long, so I’ll leave it there. I’m looking forward to sharing this with my students and to a more relatable and lively discussion when it comes time to talk ethics and IRB next time around. :)
What do you think? Was the study ethical? What other articles or questions would make for a great class discussion on this subject?