Gaming giants Riot Games and Ubisoft have announced that they have joined forces on a new collaborative research project to improve the detection of “harmful content” in in-game chats.
The Zero Harm in Comms project, as Riot head of technology Wesley Kerr and Ubisoft La Forge executive director Yves Jacquier tell The Loadout, aims to test the reliability of more scalable AI solutions that can be trained to understand complex semantics and infer intent within text-based comms.
The initiative stemmed, as most great academic projects do, from a beer – more specifically, a missed opportunity for one. The two directors were originally set to meet for the first time at the 2020 Game Developers Conference (GDC), but the arrival of COVID scuppered their encounter. Regardless, the pair stayed in touch, and two years later discussions held over calls began to materialise into a project.
“From a technical and R&D perspective, we were facing the same issue, which, namely, is that it [training AI] is a complex problem,” Jacquier notes. “For technical reasons, it requires a lot of data to train AI to be able to target harmful content.
“So we were discussing that, and then had this crazy idea to start an R&D project together with two legs on it: the first one being to try and find a safe way to share data […] the second, is the blueprint for the data – how do you create algorithms that are reliable enough to spot any type of toxic content?”
Kerr adds that Riot has been looking to grow its investment in the tech research space, and teaming up with Jacquier and Ubisoft was a “great collaborative opportunity given both Ubisoft and Riot’s desire to improve the player experience, and protect our players.”
It’s certainly a surprising collaboration, especially when there’s so much valuable data involved. But despite this, Jacquier notes that the alignment between the two gaming monoliths – which are both part of the Fair Play Alliance – was so strong that it was “extremely easy to reach out and get the ball rolling.”
Both studios have recently stepped up anti-toxicity efforts, exemplified by the recent auto-mute features added to League of Legends and Valorant in Riot’s case, and the integration of various systems in Year 7 of Ubisoft’s Rainbow Six Siege.
However, current solutions to safeguarding players from text-based toxicity rely on a static pool of words and variations thereof, taken from dictionaries and in-game data and fed into a system that struggles with nuance. To use Jacquier’s example, is “I’m coming to take you out” a serious threat or just harmless in-game parlance?
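The limitation Jacquier describes can be sketched in a few lines. The snippet below is a hypothetical, deliberately simplified word-list filter – not Riot’s or Ubisoft’s actual system – and the blocklist contents are invented for illustration. It shows why a static dictionary catches exact matches but has no way to judge the intent behind a phrase like Jacquier’s example:

```python
# Hypothetical sketch of a dictionary-based chat filter: it matches
# against a static word pool and cannot weigh context or intent.
BLOCKLIST = {"noob", "trash"}  # illustrative static word pool

def flag_message(message: str) -> bool:
    """Return True if any blocklisted word appears in the message."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

print(flag_message("you absolute noob"))           # True: exact word match
print(flag_message("I'm coming to take you out"))  # False: no listed word,
                                                   # whether threat or banter
```

A filter like this treats a genuine threat and harmless in-game parlance identically, which is exactly the nuance gap the project’s AI models are meant to close.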
“We’re fortunately at a time in machine learning and AI where we’re seeing an improvement in these large language models and their ability to better understand the context and nuance that goes into language”, Kerr says. “If we can gather these datasets, and put that together on top of these models, there’s a really good opportunity for us to capture way more than we could before.”
Of course, there’s nothing wrong with the traditional method, and Jacquier notes that both approaches have their merits. For example, while a dictionary-based model doesn’t capture the intricacies of an interaction, it is much easier to control what you put into it than a large-scale, AI-based one. “Both methods are totally valid, and will be in our respective toolboxes”, he notes.
Throughout our conversation, Jacquier repeatedly stresses the importance of data ethics through “preserving the privacy of the players, and being compliant with the most stringent regulations.” When quizzed on what sort of player data will be shared between the companies, the directors assert that only the bare minimum required will be utilised, with none of it being personal information.
“We limit the data to chat logs,” Jacquier says. “So everything that is said in [text] chat, and some contextual information of what happens in the game – the minimum.” Additionally, Kerr states that both studios are “working closely” with their respective privacy and legal teams to ensure everything is being done both by the book, and with the utmost transparency.
Although text-based communication is the only area being targeted by the Zero Harm in Comms project at this point, Jacquier reckons voice-based comms could be a next step, provided the current phase is “successful enough”.
As it stands, the project is currently “almost in the middle” of its lifecycle, Jacquier says. “I would say we’re still working on the blueprint aspect of things, but it started well [in July] and has been going well so far since”, he says.
Long-term, the pair hope to expand on their current partnership, bringing more organisations from across the industry into the fold. However, as with everything, this depends on the quality of the current project’s outcome.
“If other people want to join, and if we feel that we have something that is solid enough to include other partners, then why not do that?”, Jacquier says. “We totally feel that it’s an industry-wide problem, so what we’re doing here is – beyond goodwill and beyond everything that each company is doing – trying to say that ‘we can do more’, and we will try to prove it and share our learnings.”
The learnings from the project are set to be revealed to the entire industry – regardless of the outcomes – next year. To find out more about the Zero Harm in Comms research project, check out Riot and Ubisoft’s respective reports on the news.