Presented by Modulate


This article is part of GamesBeat’s special issue, Gaming communities: Making connections and fighting toxicity.

Over a dozen new and pending internet trust and safety regulations are slated to seriously impact game developers in the near future, from the United States and the EU to Australia, Ireland, the U.K. and Singapore. The regulations target the rise of hate speech, harassment and misinformation driven by major world events, including COVID-related misinformation, potential election influence and the rise of white supremacist extremism. On top of that, privacy laws are being revisited, such as California's Age-Appropriate Design Code Act, modeled on the U.K.'s Children's Code.

And as the EU's Digital Services Act (DSA) and other regulations kick into force in 2024, experts expect enforcement only to become more common. Unfortunately, "No one reported it! We didn't know there was illegal content!" won't cut it anymore. Today, regulators and consumers are looking for evidence that studios are taking the problem seriously, with a focus on harm reduction. In other words, game studios must now proactively minimize harm on their platforms, since they can be liable for those harms even if users never report them.

The major requirements for compliance

Compliance has eight major components, which may sound daunting at the outset. They include writing a clear code of conduct, updating terms of service and producing regular transparency reports, with the help of internal teams who can work with regulators as needed.

On the platform, developers need to find ways to minimize harmful content at scale, especially terrorism, child sexual abuse material (CSAM) and grooming, which means building out new moderation and monitoring tools. A user report portal and an appeals portal, to which staff respond in a timely manner, are crucial. Finally, developers should conduct regular risk assessments, implement UI changes that increase privacy by design and configure privacy by default for all children.

Before anything, however, because of the complexity involved, it’s critical to consult with legal and regulatory experts to ensure the appropriate steps are in place to comply with global regulations.

Here’s a look at each of those steps, and how developers can prepare.

1. Writing a clear code of conduct

While an in-depth code of conduct is now a regulatory requirement, it’s also good sense. The vast majority of misbehavior by players is due to unclear guidance on what’s permissible, and much of the trust lost between studios and their users comes from “black box” reporting, appeals or actioning processes.

A code of conduct should explain precisely which types of behavior are harmful and could result in an action. It should also identify clearly what types of actions can be taken, and when. Finally, the code of conduct should explain what recourse players have if they experience harmful content, feel they've been wrongfully actioned or want to limit the use of their personal data.

If you’re looking for a place to start, check out other studio codes, or consult with a regulatory expert.

2. Producing regular transparency reports

Transparency reports are meant to fill a gap: regulators and consumers feel platforms have been insufficiently open about the severity and prevalence of harmful content, and about the measures being taken to resolve these issues. The most efficient way to integrate reports into your strategy is to adopt a technology solution powered by machine learning.

Today, innovative moderation platforms like Modulate's AI-powered ToxMod can automatically track action rates, how frequently appeals result in overturned decisions and the accuracy of player reports. It can also proactively provide insight into the total number of harmful behaviors across the platform, and even the number of individuals exposed to illegal content – both crucial components of an effective transparency report that most studios currently lack the tools to measure.
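To make that concrete, here is a minimal sketch of how a studio might roll its own moderation logs up into transparency-report metrics. The ModerationEvent fields and metric names are illustrative assumptions, not ToxMod's actual data model or output.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationEvent:
    actioned: bool                  # did the event result in a penalty?
    appealed: bool                  # did the affected player appeal?
    overturned: bool                # was the penalty reversed on appeal?
    illegal_content: bool           # e.g. terrorism or CSAM
    exposed_user_ids: set = field(default_factory=set)  # users who encountered the content

def transparency_metrics(events: list[ModerationEvent]) -> dict:
    """Roll raw moderation events up into the headline numbers a report needs."""
    total = len(events)
    actioned = sum(e.actioned for e in events)
    appeals = [e for e in events if e.appealed]
    exposed: set = set()
    for e in events:
        if e.illegal_content:
            exposed |= e.exposed_user_ids
    return {
        "total_events": total,
        "action_rate": actioned / total if total else 0.0,
        "appeal_overturn_rate": sum(e.overturned for e in appeals) / len(appeals) if appeals else 0.0,
        "users_exposed_to_illegal_content": len(exposed),
    }
```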

3. Minimizing harmful content

Most platforms today rely primarily on user reports to identify harmful content, including terrorism, CSAM and grooming content, but that has never been sufficient protection. Again, an AI-powered moderation tool can identify, log and escalate issues, automating the elimination of harmful content.

How studios handle issues like harassment and cyberbullying will also be scrutinized. With regulators and enforcement agencies shifting towards a “harm standard,” sufficiently bad outcomes for users can create liability for a studio, even if they never received a user report.

An AI tool like ToxMod can proactively identify toxic voice chat across your ecosystem, categorize it and hand you a prioritized queue of the very worst behavior your users are engaging in. And it's smart enough to distinguish playful trash talk, reclaimed slurs and villainous roleplay from true harassment and bullying.
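As a rough illustration of the prioritization idea, the sketch below ranks flagged voice-chat incidents by severity and skips non-actionable categories. The category labels, severity scores and Incident fields are placeholders, not ToxMod's actual taxonomy or API.

```python
import heapq
from dataclasses import dataclass, field

# Labels a detector might assign; the "playful" ones are never escalated.
NON_ACTIONABLE = {"friendly_trash_talk", "reclaimed_slur", "villain_roleplay"}
SEVERITY = {"violent_threat": 100, "hate_speech": 90, "bullying": 70, "harassment": 60}

@dataclass(order=True)
class Incident:
    priority: int                         # negative severity, so the worst pops first
    clip_id: str = field(compare=False)
    category: str = field(compare=False)

def build_queue(detections: list[dict]) -> list[Incident]:
    """Turn raw detections into a worst-first queue for human moderators."""
    heap: list[Incident] = []
    for d in detections:
        if d["category"] in NON_ACTIONABLE:
            continue  # banter, reclaimed slurs and roleplay are filtered out
        severity = SEVERITY.get(d["category"], 10)
        heapq.heappush(heap, Incident(-severity, d["clip_id"], d["category"]))
    return heap

# heapq.heappop(queue) then hands moderators the most severe open incident.
```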

4. User report portals, appeals portals and timely responses

Most platforms already offer these portals, but some of the new regulations also require that platforms respond to every single report in a timely way and include context about what decision was made and why. Assessing player reports can be quite costly, however, especially given that many users submit false reports out of malice or mischief.

While human moderators can never be replaced, moderation solutions can make them more efficient by automating some of the busywork: assessing whether a ticket needs action, and closing tickets and issuing reports when there's no evidence of a violation. Genuine violations are escalated to the studio's moderation team with enough evidence to make an accurate call. When moderators action a user, they can explain exactly which part of the code of conduct was violated and provide clear justification for the punishment, which not only ensures compliance but also, according to EA, can massively reduce repeat offenses from players.
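The sketch below shows one way that triage could be wired up, assuming hypothetical has_evidence, notify and escalate helpers and an illustrative code-of-conduct mapping; it is not any particular vendor's workflow.

```python
from datetime import datetime, timezone

# Illustrative mapping from report categories to code-of-conduct clauses.
CODE_OF_CONDUCT = {
    "harassment": "Section 2.1: Targeted harassment",
    "hate_speech": "Section 2.3: Hate speech and slurs",
}

def triage_report(report: dict, has_evidence, notify, escalate) -> None:
    """Respond to every report: auto-close unsupported ones, escalate the rest."""
    responded_at = datetime.now(timezone.utc).isoformat()
    if not has_evidence(report):
        # No supporting evidence: close the ticket and tell the reporter why.
        notify(report["reporter_id"],
               f"Your report was reviewed on {responded_at}; no violation was found.")
        return
    # Attach the relevant policy clause so any action can cite exactly what was violated.
    clause = CODE_OF_CONDUCT.get(report["category"], "General conduct rules")
    escalate({**report, "policy_clause": clause, "responded_at": responded_at})
    notify(report["reporter_id"],
           f"Your report was reviewed on {responded_at} and escalated to our moderation team.")
```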

5. Regular risk assessments

Again, automated moderation platforms are your best bet here to minimize risks and offer comprehensive protections to players while complying with privacy regulations. It’s vital to use a platform that has documented high accuracy across major types of harms and has been battle-tested by top games.

A solution can also provide insight into player behavior, giving studios a view into the greatest risks to player safety and experience, and can suggest design improvements and moderation strategies that attack the problem at its source.

6. Configuring privacy by default for all kids

California's Age-Appropriate Design Code Act includes a potent requirement that platforms ensure children start with the strictest possible privacy protections enabled. While this is ultimately just a UI update for platforms, it does raise an important question: how do you know which users are children?

It's essential to incorporate age assurance measures like ID or payment checks early in the onboarding process, but these aren't foolproof, as the recent Epic Games / FTC case shows. If a developer knows there are children on their platform, it's even more important to go the extra mile to identify and protect them. Tools like voice-based analysis can identify underage users – Modulate's own system has reached over 98% accuracy and counting.
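Here's a minimal sketch of what privacy by default for likely minors could look like in practice. The setting names and the is_likely_minor signal (which might come from age assurance or voice-based analysis) are hypothetical placeholders, not any platform's real configuration.

```python
# Hypothetical setting names; the real keys depend on your platform.
STRICTEST_PRIVACY_DEFAULTS = {
    "voice_chat": "friends_only",
    "profile_visibility": "private",
    "data_sharing_with_partners": False,
    "personalized_ads": False,
    "location_sharing": False,
}

def apply_default_privacy(account: dict, is_likely_minor: bool) -> dict:
    """Start likely-minor accounts at the strictest settings by default."""
    if is_likely_minor:
        account["privacy"] = dict(STRICTEST_PRIVACY_DEFAULTS)
        # Settings may only be relaxed later through verified parental consent.
        account["requires_parental_consent_to_relax"] = True
    return account
```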

Staying ahead of the game

Proactive steps like these are essential to ensure compliance, but partnering with a safety and privacy expert can help take on some of the burden. Such partners can provide significant relief to internal teams, with everything from technology solutions that minimize harmful content at scale to support with risk assessments, transparency reports and more. In the end, you're not only meeting regulatory standards but also creating a safer and more positive online experience for users.

Written in collaboration with Tess Lynch, Privacy Associate at Premack Rogers.

Dig deeper: Go here for more info on how game studios can keep tabs on the changing regulatory landscape, take proactive steps and incorporate sophisticated technology to scale privacy and safety efforts.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.