Acting proactively, not reactively, against fraudulent content and repeat offenders
As a community, Couchsurfing operates on a currency of trust. Fake accounts and spammy content not only degrade the experience for legitimate users of the platform; they can also involve phishing, malicious comments, or greater risk when linked to offline schemes. To combat this abuse, Couchsurfing built in-house tools, but they were clunky, demanded heavy manual review, and quickly became outdated. The team needed a solution that could address a variety of bad content, malicious users, and banned users returning under hidden identities.
Furthermore, there was no way to proactively improve safety for the community by stopping these bad actors before they created fraudulent content. Couchsurfing needed to determine who the bad users were and ensure they couldn’t come back—even if disguised under a different name, email, IP address, or device. They required a solution that could help them proactively fight and prevent content abuse, fake accounts, and other forms of fraud on their platform.
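The source does not describe how this linking works under the hood; as an illustration only, detecting a banned user returning under a new name can be sketched as clustering accounts that share identity signals (hashed email, IP, device fingerprint) with a union-find structure, so a ban on one account propagates to everything linked to it. All class and account names below are hypothetical.

```python
class IdentityLinker:
    """Cluster accounts that share signals (e.g. email, IP, device ID)
    so a ban on one identity extends to all linked identities."""

    def __init__(self):
        self.parent = {}           # account -> union-find parent
        self.signal_owner = {}     # signal value -> first account seen with it
        self.banned_roots = set()  # cluster roots containing a banned account

    def _find(self, account):
        """Return the root of the account's cluster (path-halving union-find)."""
        self.parent.setdefault(account, account)
        while self.parent[account] != account:
            self.parent[account] = self.parent[self.parent[account]]
            account = self.parent[account]
        return account

    def _union(self, a, b):
        """Merge two clusters, carrying any ban flag to the new root."""
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return
        self.parent[rb] = ra
        if rb in self.banned_roots:
            self.banned_roots.discard(rb)
            self.banned_roots.add(ra)

    def observe(self, account, signals):
        """Record signals for an account; any shared signal links the accounts."""
        self._find(account)
        for s in signals:
            if s in self.signal_owner:
                self._union(self.signal_owner[s], account)
            else:
                self.signal_owner[s] = account

    def ban(self, account):
        self.banned_roots.add(self._find(account))

    def is_linked_to_ban(self, account):
        """True if this account shares any signal chain with a banned account."""
        return self._find(account) in self.banned_roots
```

For example, if a banned account and a fresh signup share a device fingerprint, `is_linked_to_ban` flags the new account even though its name and email are different. A production system would additionally score fuzzy matches (similar names, nearby IP ranges) rather than relying on exact signal reuse.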