
The world of online games holds a fascinating dynamic in which players from all over the world can explore virtual worlds and interact with each other. But behind the scenes of these digital realities, developers and publishers face a critical task: moderating player behavior and ensuring fairness in competition. Online games that rely on cooperation and competition face the challenge of maintaining a positive gaming environment and guarding against cheating and hacking.

Below, we show how artificial intelligence can help address these problems and what must be considered to mitigate the associated legal risks.

1. AI Moderation

Moderation of online games is indispensable for enforcing a game’s own code of conduct. Games based on player cooperation depend on promoting a positive atmosphere and friendly interactions. If a game is overrun by a hostile community and developers or publishers exert no control, players can quickly label the game “toxic” and start avoiding it. The player count then dwindles, giving the impression that the game is “dead”.

However, moderating online games with a large following can quickly overwhelm human customer support. Well-known games already use automated moderation that filters certain terms and issues chat blocks for particularly “toxic” players. Generative AI models for monitoring and moderating in-game text and voice chat can offer further relief. A well-trained, adaptive AI can detect violations of the terms of service and particularly “toxic” behavior such as insults, bullying, and racism, and impose tailored punishments ranging from mutes to permanent bans (account suspensions). Such an AI can also learn the tricks players use to evade simpler automated systems, such as writing words with spaces between the letters, omitting vowels or other letters, or replacing them with special symbols.
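As a purely illustrative sketch of how such evasion tricks can be undone before a message is matched against a blocklist, consider the following Python snippet. The blocklist, the character mappings, and the thresholds are placeholders, not a production filter:

```python
import re

# Placeholder mapping of common "leetspeak" substitutions back to letters.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})
BLOCKED_TERMS = {"noob"}  # placeholder blocklist for demonstration

def normalize(message: str) -> str:
    """Undo common filter-evasion tricks before matching."""
    text = message.lower().translate(LEET_MAP)
    # Drop spaces and punctuation inserted between letters ("n o o b", "n.o.o.b").
    text = re.sub(r"[^a-z]", "", text)
    # Collapse runs of three or more repeated letters ("noooooob") down to two.
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)
    return text

def contains_blocked_term(message: str) -> bool:
    """Check the normalized message, including a vowel-stripped variant."""
    text = normalize(message)
    devoweled = re.sub(r"[aeiou]", "", text)
    for term in BLOCKED_TERMS:
        if term in text or re.sub(r"[aeiou]", "", term) in devoweled:
            return True
    return False
```

A real system would combine such normalization with a learned classifier rather than a fixed list, so that newly invented spellings can be picked up over time.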

2. AI Anti-Cheat

The fight against cheaters and hackers also plays a particularly important role in maintaining game integrity. Especially in competitive games, some players try to gain an unfair advantage over others, for example by using programs that let them see through walls or aim automatically. In addition, long-time players who have reached a high rank or level often create new accounts, either to play against beginners and defeat them by huge margins (so-called “smurfing”) or to help beginners or weaker players reach ranks and levels that do not correspond to their abilities (so-called “boosting”). Furthermore, players use so-called bots to bypass time-consuming, repetitive tasks such as collecting materials. While helpful to those players, this can have a negative impact on an in-game marketplace and the game’s economy. If cheating and hacking are not combated, a game can quickly lose popularity and, consequently, active players.

Anti-cheat provider Anybrain, in cooperation with esports platform provider G-Loot, has demonstrated in a pilot project in Riot Games’ first-person shooter Valorant how AI-based anti-cheat software can draw conclusions about hacking and cheating from user behavior by evaluating keyboard and mouse inputs in the game. Anybrain’s reported success in detecting hackers, smurfs, and boosters is impressive and points to the future of cheat prevention. Players are either detected and reported automatically by the system or reported by other players, and their data can then be analyzed with the help of AI.
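Anybrain’s actual models are proprietary and not publicly documented. Purely to illustrate the general idea of behavioral analysis of keyboard and mouse input, the following sketch applies two simple heuristics to input timing; the thresholds are invented for demonstration and are far cruder than what a trained model would use:

```python
import statistics

# Hypothetical thresholds; a real system would learn these from player data.
MIN_HUMAN_REACTION_MS = 120.0   # average reactions faster than this look superhuman
MIN_INTERVAL_STDEV_MS = 5.0     # human click timing shows natural jitter

def is_suspicious(reaction_times_ms, click_intervals_ms):
    """Flag an input trace that looks machine-generated.

    Heuristic 1: an implausibly fast average reaction time.
    Heuristic 2: click intervals too regular to be produced by a human hand.
    """
    avg_reaction = statistics.mean(reaction_times_ms)
    interval_jitter = statistics.stdev(click_intervals_ms)
    return (avg_reaction < MIN_HUMAN_REACTION_MS
            or interval_jitter < MIN_INTERVAL_STDEV_MS)

# Example traces (fabricated): a bot clicks with near-perfect regularity,
# while a human shows slower reactions and noisy spacing between clicks.
bot_flagged = is_suspicious([90, 95, 100], [100.0, 100.5, 100.2, 100.1])
human_flagged = is_suspicious([250, 310, 280], [150, 230, 180, 205])
```

In practice, such hand-written rules would serve at most as a first filter; production systems train classifiers on large volumes of labeled input data precisely because skilled humans and sophisticated cheats both defeat simple thresholds.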

The question will become more pressing once the market is flooded with cheats and hacks based on modern AI algorithms. Since such programs can conceivably simulate the inputs and behavior of real players realistically, this may lead to an arms race, a cat-and-mouse game between hack developers and game developers in which the AI systems on each side try to detect or outsmart one another.

3. Risks

However, the use of such AI moderation and anti-cheat systems also carries the risk of players being sanctioned unjustifiably. If the AI is designed to learn from player input (e.g., manual reports from players), it may incorrectly classify innocuous communication as a violation, or legitimate gaming behavior as cheating or hacking. If the maximum penalty (a permanent ban) is imposed, gamers may permanently lose access to their accounts. Gamers who have purchased the game and/or made in-game purchases would thereby lose all ability to use purchased and earned content.

This is a significant problem, as game accounts can be worth a considerable amount depending on the gamers’ investments. In cases of unjustified blocking, affected gamers may assert claims for reinstatement of their accounts as well as claims for damages. Such legal disputes not only involve considerable costs for game operators; they also attract negative media coverage, which can damage the game’s reputation and lead to a decline in player numbers.

The following should be noted in practice:

  • The use of generative AI to moderate online games and protect against cheating can bring considerable relief for developers and publishers, but it also entails specific legal risks.
  • To mitigate these risks, it is advisable to implement a model in which serious penalties (such as permanent bans and/or deletions of game accounts) are reviewed by a human employee.
  • It is also advisable to implement an appeals system for longer-term bans and deletions, in which cases are likewise reviewed manually by employees to avoid unjustified penalties.
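The human-in-the-loop model described in these recommendations can be sketched as follows; the penalty levels and the escalation policy are hypothetical, chosen only to show the pattern of routing severe sanctions to a manual review queue instead of applying them automatically:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Penalty(Enum):
    WARNING = auto()
    TEMP_MUTE = auto()
    TEMP_BAN = auto()
    PERMANENT_BAN = auto()

# Hypothetical policy: penalties in this set always require a human decision.
SEVERE = {Penalty.PERMANENT_BAN}

@dataclass
class ModerationPipeline:
    review_queue: list = field(default_factory=list)
    applied: list = field(default_factory=list)

    def handle_ai_verdict(self, player_id: str, penalty: Penalty) -> str:
        """Apply mild penalties automatically; escalate severe ones."""
        if penalty in SEVERE:
            # Never auto-apply: hand the case to a human moderator.
            self.review_queue.append((player_id, penalty))
            return "queued_for_human_review"
        self.applied.append((player_id, penalty))
        return "applied_automatically"
```

Coupling such a gate with an appeals workflow, so that even auto-applied penalties can be re-examined by a human on request, addresses both recommendations at once.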