ToxMod assesses the “tone, timbre, emotion, and context” of a phrase or conversation to determine whether a player is behaving in a harmful way.
Given that neural networks are considered black boxes, are there any specific counter-measures against false positives? And what about the running costs of the model? Are they sustainable for the game company?
All of the model's flags get reviewed through the same process as player reports.
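A minimal sketch of what that reply implies: automated flags are triaged into the same human-review queue as player reports rather than triggering enforcement directly, which is the standard counter-measure against false positives. This is purely illustrative; the names (`ReviewQueue`, `ReviewItem`, the 0.7 threshold) are assumptions, not ToxMod's actual API or parameters.

```python
# Hypothetical unified moderation queue, assuming (as the reply suggests)
# that model flags and player reports share one human-review pipeline.
from dataclasses import dataclass, field
from enum import Enum
import heapq


class Source(Enum):
    PLAYER_REPORT = "player_report"
    MODEL_FLAG = "model_flag"


@dataclass(order=True)
class ReviewItem:
    priority: float                    # lower value = reviewed sooner
    clip_id: str = field(compare=False)
    source: Source = field(compare=False)
    model_confidence: float | None = field(default=None, compare=False)


class ReviewQueue:
    """One queue for both sources: model flags are triaged, never
    auto-enforced, so a human moderator always makes the final call."""

    def __init__(self, min_confidence: float = 0.7):
        self.min_confidence = min_confidence   # assumed threshold
        self._heap: list[ReviewItem] = []

    def add_report(self, clip_id: str) -> None:
        # Player reports go straight to the front of the queue.
        heapq.heappush(self._heap, ReviewItem(0.0, clip_id, Source.PLAYER_REPORT))

    def add_model_flag(self, clip_id: str, confidence: float) -> None:
        if confidence < self.min_confidence:
            return                             # drop weak flags outright
        # Higher-confidence flags surface earlier, but a human still decides.
        heapq.heappush(
            self._heap,
            ReviewItem(1.0 - confidence, clip_id, Source.MODEL_FLAG, confidence),
        )

    def next_for_human_review(self) -> ReviewItem | None:
        return heapq.heappop(self._heap) if self._heap else None


if __name__ == "__main__":
    q = ReviewQueue()
    q.add_report("clip-001")
    q.add_model_flag("clip-002", confidence=0.92)
    q.add_model_flag("clip-003", confidence=0.40)   # filtered out before review
    while (item := q.next_for_human_review()):
        print(item.source.value, item.clip_id)
```

A confidence gate like this would also bear on the running-cost question: discarding low-confidence flags before review keeps the human moderation load bounded, though the actual thresholds and costs are not disclosed in the thread.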