FACEIT’s new AI banned 20,000 people for toxicity in September | The Loadout



Toxicity is a huge problem online and can often have a detrimental impact on competitive games. To counter this growing problem, FACEIT teamed up with Google Cloud and Jigsaw to create its own admin-like Artificial Intelligence to flag and punish users for toxic behaviour.

The tool, dubbed Minerva, is now live and has been working hard over the last couple of months with an impressive strike rate. In its first month and a half of activity, Minerva issued 90,000 warnings and handed out 20,000 bans for verbal abuse and spam. As a result, the volume of toxic messages fell by 20.13% in September alone.

The AI, which is trained through machine learning, has been in the works for close to a year, as part of FACEIT’s commitment to making its platform a better experience for all. Instead of opting for a “half-baked” solution, like a karma-based system, FACEIT decided to invest in a long-term solution that wouldn’t require manual review, and thus Minerva was born.

“It became clear that toxicity expresses itself in many different ways and it’s no easy task to detect and solve the issue,” FACEIT says in its latest blog post. “The demand by a big part of the community was to have an impartial admin observing and judging every match that is happening on FACEIT, which at the current volume of matches is not feasible.

“We could have built a half-baked solution but this would not have achieved the desired results in the long term as it would not identify those behaviours accurately and quickly enough to take precise and immediate action on them. Therefore we decided to embark on a long term investment: building an admin-like AI powered by machine learning.”

If Minerva detects that a chat message may be toxic, it can act within seconds: a warning is sent to the sender and, depending on the severity of the message, a cooldown may be issued. Repeat offenders receive harsher punishments each time they reoffend within a short period.
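The escalation logic described above can be sketched as a simple tiered-penalty policy. This is purely illustrative, assuming a per-player offence counter and escalating cooldown tiers; the names, thresholds, and severity scale are hypothetical and not FACEIT's actual implementation.

```python
from dataclasses import dataclass

# Escalating cooldown lengths (minutes) per repeat offence within the
# recent window. Values are illustrative, not FACEIT's real tiers.
COOLDOWN_MINUTES = [0, 30, 120, 1440]

@dataclass
class Offender:
    """Tracks a player's offences within a short, recent window."""
    offences: int = 0

def sanction(offender: Offender, severity: float) -> str:
    """Return the action for a flagged message.

    `severity` is a hypothetical toxicity score in [0, 1]: mild
    messages draw only a warning, while harsher ones draw a cooldown
    that escalates with each repeat offence.
    """
    if severity < 0.5:
        return "warning"
    offender.offences += 1
    tier = min(offender.offences, len(COOLDOWN_MINUTES) - 1)
    return f"cooldown:{COOLDOWN_MINUTES[tier]}min"

player = Offender()
print(sanction(player, 0.3))  # warning
print(sanction(player, 0.8))  # cooldown:30min
print(sanction(player, 0.9))  # cooldown:120min (repeat offence escalates)
```

The key design point, mirroring the article, is that the penalty depends on both the message's severity and the sender's recent history, so a first mild message costs only a warning while repeat offenders climb the cooldown tiers.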

FACEIT is confident that Minerva is delivering real results too, having spent months feeding the machine false positives to make sure that each warning and ban is issued correctly. And while toxicity on FACEIT is now on the decline, Minerva flagged over seven million toxic messages in Counter-Strike: Global Offensive in its first couple of months.

Minerva is also helping tackle the issue of account boosters and smurfs on the platform by flagging suspicious accounts and asking the users to verify their phone number before logging in to play. In just two months, Minerva has flagged 250,000 accounts, with 50,000 of those being stopped from playing.

It’s an impressive feat for a machine that’s been a year in the making, and it looks like it’s already having a positive impact on the world of competitive matchmaking. It’ll be interesting to see where FACEIT take Minerva in the future.