Reddit and the Ethics of AI in Online Communities

The subreddit r/ChangeMyView (CMV) has more than 3.8 million members, placing it in the top 1% of subreddits by size. It’s a place where users post views “they accept may be flawed” to have a conversation with those who hold a different perspective.
Despite its popularity and the contentious topics it hosts, the subreddit has remained largely non-controversial and out of the public spotlight. That is, until Saturday.
That was when the moderators of the subreddit posted a thread revealing that CMV had been the target of an unauthorized experiment by researchers at the University of Zurich. Though the experiment was no fault of CMV or its mods, the announcement sent the community into an uproar.
The story quickly attracted mainstream media attention, including coverage from The Verge, Engadget, and New Scientist. The response has been almost universally negative, decrying the various ethical concerns the study raises.
In a response sent to the CMV mods, the University of Zurich said that it had already investigated the matter and issued a formal warning to the principal investigator. The school said it felt the risk of harm was minimal but that it would “adopt stricter scrutiny” in the future.
The researchers have not been identified, but they have said they will not seek publication of the research. Meanwhile, Reddit’s Chief Legal Officer has said the company is reaching out to the school with “formal legal demands.”
So why are people so upset? It comes down to two words: consent and transparency.
The Story So Far
Note: A draft of the paper was originally available but is no longer accessible. As such, I’ll be reporting from the Reddit post and other public coverage.
According to the moderators of CMV, last month they received a mod mail from the researchers, who said that they were completing a disclosure step in a study they had conducted. The researchers said that, “over the past few months,” they had used multiple AI systems to generate comments on CMV posts.
Their goal was to see if AI bots were as effective as humans at changing people’s minds. To do this, they had the AI bots analyze the original poster’s history, up to 100 comments, and use that history to craft a response tailored to that user.
The researchers did not disclose that these posts were AI-generated, and the AI bots adopted human personas. Those included an AI “pretending to be a victim of rape” and one posing as a “trauma counselor specializing in abuse.”
After receiving the notice, the mod team filed a formal ethics complaint with the University of Zurich. The school recently responded by saying that a “careful investigation had taken place” and that it had issued a formal warning to the principal investigator.
Once the school responded, the mods drafted their post. When it went live, it also went viral. Users upset about the manipulation pointed out various ethical issues with the project. Now, the school is facing tough questions, not just from Reddit, but from academia.
The Ethical Issues
The ethical issues with this study are almost too many to count. However, the biggest and most obvious problem is that the subjects of the study, the posters on CMV, were not aware of the experiment and did not give consent.
As u/Eskebert noted in a comment, the preregistered OSF study shows that one of the prompts the researchers used read, in part, “The users participating in this study have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns.”
That is an outright falsehood; the researchers were lying in their prompts in an attempt to bypass the AI systems’ guardrails.
In those responses, the AIs presented themselves as knowledgeable sources, including claiming to be a rape victim, a trauma counselor, and members of various religious, ethnic, and racial groups.
Despite these concerns, the school and the researchers contend that they did little harm. However, they have no way of knowing that. Since the subjects of the study never consented or signed up, the researchers have no way to follow up on any harm they caused.
Even if they could, the harm may not show up for a long time. They deliberately manipulated users’ emotions, and those consequences can take a lifetime to suss out.
To make matters worse, the findings of the study are tainted. The researchers claimed that AI systems were 3-6 times more likely to change someone’s mind than a human. However, as u/Sundalius noted in their comment, bots are a well-known problem on Reddit. How do the researchers know they didn’t change the “minds” of other bots?
With such poor controls, the data is almost meaningless. While meaningful results never justify unethical behavior, the lack of them solidifies this study as a complete waste.
When Academia Is Worse Than Reddit and AI Companies
To put it mildly, the researchers in this case acted worse than the AI companies whose tools they used. The proof is that they had to lie to the AI systems to disable guardrails around research ethics. That is a sad moment indeed.
It’s sad because academia is supposed to lead the way in research ethics. Academia is facing more and more political challenges, and one of the ways it can guard itself is by adhering to strong, unimpeachable ethical standards.
While there are many facets to that, two of the core tenets are transparency and informed consent. The researchers breached those tenets.
However, transparency and consent aren’t just tenets of academic research; they’re also tenets of the responsible use of AI. As the State Bar of California found out last week, using AI without transparency leaves a large number of people feeling duped and angry.
To be clear, this has nothing to do with bots themselves. Reddit, in particular, openly embraces bots for various functions. However, those bots have to meet transparency standards and comply with Reddit’s other rules. What the researchers did went beyond both Reddit’s rules and the community’s norms.
The researchers overstepped just about every boundary they could, and they have nothing to show for it. It’s a sad, frustrating tale that could have been easily avoided.
Bottom Line
To be clear, the researchers do have a point. It’s virtually a guarantee that bad actors are using AI to manipulate people for various ends. It is important to understand how effective AI systems are at manipulating human opinions.
However, the importance of this research is not a justification for ignoring ethical boundaries. If anything, it makes it all the more important that the research be done well. This was not work done well.
This was work done with reckless disregard for the subjects of the experiment, the standards of care researchers must uphold when conducting experiments, and the proper controls needed to ensure valid results.
It fails in just about every way an experiment can fail.
Doing ethical and meaningful research is always difficult, and it’s even more difficult with AI. But the stakes of these issues make it essential to put that work in. We need ethical and valid data.
Researchers can’t be tempted to follow in the footsteps of AI companies and take dubious shortcuts. Doing so not only invalidates the work that was done but also blazes a trail that others can follow for worse intentions.
Special Thanks and Additional Reading: I want to give a big thanks to Patrick O’Keefe, whose post on LinkedIn introduced me to the story. His post looks at it from the perspective of an online community manager and is well worth a read if you’re interested in this topic.
Want to Reuse or Republish this Content?
If you want to feature this article on your site, in your classroom, or elsewhere, just let us know! We usually grant permission within 24 hours.