While bad behavior wasn’t born with the internet, the online environment certainly provides fertile ground for a range of antisocial activity. The tendency for people to behave in the virtual world in ways they never would in their everyday lives is familiar to anyone who has ever stumbled into the middle of a Twitter spat.
Cyberbullying and harassment aren’t just a feature of social media; they’re also a big problem in online gaming. In fact, in a recent poll, 86% of US gamers said they had encountered harassment while playing video games online, with 77% saying they had been severely harassed. What’s more, the problem is on the increase: 12% more gamers reported being harassed online than in 2019, with some forms of harassment up 100%.
While it might be argued that perceptions of some forms of bullying, such as trolling or griefing, can be subjective, the fact that more than half of those surveyed had experienced physical threats is deeply concerning. Worryingly, forms of harassment that crossed over from the virtual to the physical world were also common: more than one in six gamers had experienced doxing, and 12% even reported being exposed to swatting (whereby a bully makes hoax calls to emergency services to direct a police response to a rival player’s address).
Policing your gaming environments can be challenging, but it needs to be treated as an integral element of providing the best possible user experience and protecting your brand reputation. Ultimately, letting antisocial behavior go unchecked on your gaming platform is more than a moral issue; it’s also a financial risk.
Users who experience repeated bullying are likely to take the only course of action open to them and leave your platform, taking their spending power with them. That was the conclusion of another recent survey of nearly 1,400 gamers, which found that 70% had considered quitting online gaming environments due to negative experiences.
Dropout driven by bullying can also have a knock-on effect, contributing to a downward spiral in traffic on your platform. The effect is similar to the exodus of people from crime-blighted districts in the physical world: as law-abiding players leave, bad actors increasingly take over, making your platform even less appealing to the kind of legitimate users who contribute strongly to your revenue streams with their in-game spending.
The first step in addressing the issue is to suspend or, in the worst cases, ban players who demonstrate bullying behavior. However, it’s important to ensure malicious actors can’t simply create a new account with different basic information and use it to continue harassing and antagonizing legitimate users.
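To make this concrete, the sketch below shows one way a platform might check new signups against signals previously seen on banned accounts, rather than relying on self-declared details like an email address. It is a minimal illustration only, not Veriff’s implementation: the SignupSignals fields, the BanRegistry class, and the choice of signals are all hypothetical, and a production system would use far richer data.

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class SignupSignals:
    """Signals collected at registration, beyond self-declared details."""
    email: str
    device_id: str   # e.g. a hardware/browser fingerprint (hypothetical)
    ip_subnet: str   # e.g. the /24 network the request came from


class BanRegistry:
    """Tracks signals seen on banned accounts, so a fresh email address
    alone is not enough to slip back onto the platform."""

    def __init__(self) -> None:
        self._banned: set[str] = set()

    @staticmethod
    def _keys(signals: SignupSignals) -> list[str]:
        # Hash each signal so raw identifiers aren't stored directly.
        values = (signals.email.lower(), signals.device_id, signals.ip_subnet)
        return [hashlib.sha256(v.encode()).hexdigest() for v in values]

    def record_ban(self, signals: SignupSignals) -> None:
        """Remember every signal attached to a banned account."""
        self._banned.update(self._keys(signals))

    def flag_signup(self, signals: SignupSignals) -> bool:
        """Return True if any signal matches a previously banned account."""
        return any(key in self._banned for key in self._keys(signals))
```

Even in this toy form, the design choice matters: because the check keys on device and network signals as well as the email address, a banned player who simply registers a new email still collides with their own history.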
An effective solution will combine multiple technologies to make circumventing a suspension or ban as difficult as possible, significantly reducing bullies’ motivation. For example, Veriff’s velocity abuse service provides effective multi-accounting prevention, combining proprietary device and network fingerprinting with sophisticated cross-linking technology to create a powerful deterrent.
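Veriff doesn’t publish the internals of its velocity abuse service, but as a rough illustration of the underlying idea, the hypothetical VelocityMonitor below flags a device or network fingerprint that spawns an unusual number of accounts within a rolling time window. The class name, threshold, and window size are assumptions made for this sketch.

```python
import time
from collections import defaultdict


class VelocityMonitor:
    """Flags a device or network fingerprint that spawns too many new
    accounts within a rolling time window (illustrative only)."""

    def __init__(self, max_accounts: int = 3, window_secs: int = 86_400):
        self.max_accounts = max_accounts   # assumed threshold
        self.window_secs = window_secs     # assumed 24-hour window
        self._history: defaultdict[str, list[float]] = defaultdict(list)

    def register(self, fingerprint: str, now: float | None = None) -> bool:
        """Record a new account against this fingerprint and return True
        if the signup exceeds the velocity threshold."""
        now = time.time() if now is None else now
        timestamps = self._history[fingerprint]
        # Discard registrations that have aged out of the window.
        timestamps[:] = [t for t in timestamps if now - t < self.window_secs]
        timestamps.append(now)
        return len(timestamps) > self.max_accounts
```

In practice, cross-linking would combine and weight many such signals, but even this simple threshold shows why spinning up fresh accounts from the same device quickly stops working.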
Partnering with us gives you access to the latest identity verification (IDV) technology with minimal setup lead times, at a far lower cost than building an in-house solution. To find out more about how Veriff can help your business minimize the financial risk from a range of identity-related issues, read our gaming guide.