Recently, I was reading a paper in the AI area, Tracking Fluctuations in Psychological States Using Social Media Language: A Case Study of Weekly Emotion, which takes data from Facebook users and attempts to track their emotions over the course of a week, to identify when you are likely to be in a certain mood. It was conducted by Johannes Eichstaedt (Stanford) and Aaron C. Weidman (University of Michigan); to give context to their area of expertise, they are psychologists, not AI developers or technology ethicists, according to what I can find on their PhD work.
Here is the abstract for those of you who are not going to follow the link.
And I take some ethical issue with a study like this one. It has major problems in the ethics department that need to be sorted out, not just for this study, but for any research of this kind.
The Problematic Nature of Using Social Media to Measure Human Moods
Now, I’m not going to get into the issues with the user base of Facebook, and how its demographics are only representative of a certain (older, higher-earning, more conservative-leaning, less digitally literate) part of the population, meaning it should not be used as a representative sample of the general population. We know this because it has been studied, more than once. (Okay, I got into it a little bit.)
But I am going to get into the issue of performance. Posting on social media is not a direct representation of how we feel. It is, in its own way, a performance: it is what we choose to show to the world. Taking these posts at face value raises some concerns. The observer effect and the performative nature of social media are increasingly well-studied areas of human behavior.
How Will This Be Applied In The Future?
Collecting data for one use and then using it for another… that is happening all around us in the modern world. Did you intend to have your usage data analyzed with AI every time you use your credit card? How about when you go to a restaurant? Or just visit a store to shop for the things that you need?
When we gave away that data, we had no idea how it would be used. People who made those purchases (say, in the 1990s and early 2000s) had no idea that these technologies would exist. We don’t know what will exist or how this data can be used in the future. And far more importantly, we do not know how this piece of research will impact the technology of the future. Once you know what people’s moods are (or an approximation of what they are willing to perform in public), who is to say that this limited research will not be used as a baseline… a guide to how people should feel?
Who is to say that social media and other AI applications won’t, in the future, try to guide people toward that range of what they perceive as “normal” feelings? This is why a new research ethics needs to be created. Researchers need to be careful about what input is and is not used for other projects, and to make sure that a body of knowledge is being based on something more solid than Facebook posts. Given psychology’s truly massive replicability crisis, this is doubly concerning.
Can You Get Real Research Consent? Do EULAs Count?
Finally, let’s talk more about that consent. Researchers who work with humans have to get their consent. Even in studies involving subterfuge, you still need to get consent and debrief people afterward to make sure you have not done any harm to them. Universities, in fact, have boards whose job it is to make sure this is done: the Institutional Review Board (IRB).
I have faced them for my own research, and it is not an easy thing to do. They are rightfully picky people, and it is not easy to pass an IRB review if you are not careful with what you are doing. (Again, a problem psychology has a historic issue with.) Even in more modern research, psychologists have gotten in trouble for trying to use EULAs (the thing you click “agree” on when you sign up for software) as research consent instead of real informed consent.
But Facebook does not let you have that. You are, in fact, already a research subject if you are a user of that platform. And little of it is out in the open; it’s nebulous even to the industry.
So yeah, it’s time for some rethinking when it comes to research ethics for the modern era.