Gaming giants Ubisoft and Riot Games are collaborating on a research project to combat “toxicity” in in-game chats.
The first phase of the “Zero Harm in Comms” project will develop a framework that lets the two platforms collect, share, and tag chat data without compromising user privacy, meaning without retaining personally identifiable information. If that cannot be done, “the project stops,” according to Ubisoft executive director Yves Jacquier.
If the first phase is successful, the second will involve developing AI-based tools to detect and mitigate so-called “disruptive behaviors.”
Traditionally, content moderation systems have relied on “dictionary-based technologies”: lists of words, along with their spelling variants, used to flag messages that might violate community standards. The research project instead aims to build a system that uses natural language processing to determine the overall meaning of a statement, taking its context into account, Jacquier explained.
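As an illustration of the distinction Jacquier describes, the sketch below contrasts a simple dictionary-style filter with a context-aware classifier. It is a minimal, hypothetical example: the word list, spelling-variant patterns, model name (a publicly available toxicity classifier), and threshold are all assumptions, not anything the companies have published.

```python
# Illustrative sketch only: contrasts a classic dictionary-based filter with a
# context-aware classifier. Neither represents the actual Zero Harm in Comms
# tooling; the blocklist, patterns, model, and threshold are assumptions.

import re

# Dictionary-based approach: flag messages containing listed words or variants.
BLOCKLIST = {
    "idiot": r"[i1]d[i1][o0]t",   # crude spelling-variant pattern (hypothetical)
    "loser": r"l[o0]s[e3]r",
}
BLOCKLIST_RE = re.compile("|".join(BLOCKLIST.values()), re.IGNORECASE)

def dictionary_flag(message: str) -> bool:
    """Return True if the message matches any blocklisted word variant."""
    return BLOCKLIST_RE.search(message) is not None

# Context-aware approach: score the whole message with an NLP model.
# Shown here with a publicly available toxicity model via Hugging Face
# transformers as a stand-in for whatever the project might build.
def contextual_flag(message: str, threshold: float = 0.8) -> bool:
    from transformers import pipeline  # requires `pip install transformers`
    classifier = pipeline("text-classification", model="unitary/toxic-bert")
    result = classifier(message)[0]
    return result["label"] == "toxic" and result["score"] >= threshold

if __name__ == "__main__":
    samples = [
        "gg everyone, well played",
        "you are such an 1di0t",                        # variant spelling caught by the regex
        "nice try, that play was hopeless but funny",   # needs context to judge
    ]
    for msg in samples:
        print(f"{msg!r}: dictionary_flag={dictionary_flag(msg)}")
```

The dictionary filter catches obfuscated spellings it already knows about but has no notion of intent, while the model-based scorer evaluates the full sentence; the trade-off sketched here is the one Jacquier points to, though the project's actual system remains unspecified.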
Jacquier noted that the project is research and not necessarily “a project that will be delivered at some point… it’s way more complex than that.”
The two gaming giants will publish “the learnings of the initial phase of the experiment” next year regardless of the outcome, according to a press release announcing the project.