User-generated content is the lifeblood of marketplaces and communities -- which is why content abuse is such a painful problem. Content abuse is any fake or malicious content posted on a site, usually to defraud the business or another user. And it's a rapidly growing problem. By 2022, people will be exposed to more fake information than real information online, according to a Gartner study.
To get a handle on this rising threat, we're sitting down with two experts. Oggie Nikolic is a Google engineer with an extensive background in risk management. Kevin Lee, a familiar face on the podcast, is the trust & safety architect at Sift. Oggie and Kevin explain why content abuse is so hard to fight, and how online marketplaces and communities should approach this challenge.
Evan: Welcome to Trust and Safety in Numbers, presented by Sift Science. I’m your host, Evan Ramzipoor. We rely on the internet for information. That’s become such a fact of life that it feels strange even to articulate. But what if, when you went online, you were exposed to more fake information than real information? According to a Gartner study, by 2022 that might be our reality. That’s how pervasive and insidious content fraud has become.
To get a handle on content fraud and abuse, I’m sitting down with two experts: Kevin Lee, our Trust and Safety Architect here at Sift Science, and Oggie Nikolic, a machine learning engineer at Google with years of experience in risk management. But first, let’s warm up with a quick fraud fact.
Did you know that about 30% of online reviews are fake? For more information, check out “10 Things You Need to Know About Fake Reviews” on the Sift Science blog. Now, on to the interview. So, Oggie, tell me a little bit about yourself and your background.
Oggie: Hi, I’m Oggie Nikolic. I currently work on Google Assistant, where I lead some of the quality and machine-learning efforts. Prior to that, for seven and a half years, I led the risk engineering group for Google AdWords, protecting Google’s ad network against payment fraud, account takeover, policy and content violations, and bad ads.
Evan: And some of our listeners will already be familiar with you, Kevin. For those of you who don’t know, Kevin was in charge of Facebook’s global spam ops team, dealing with account takeover and malware. Can you remind us who you are?
Kevin: I’m the Trust and Safety Architect here at Sift Science, and I dealt with more financial fraud at Square, where I headed up the risk, chargebacks, and collections teams globally. And actually, Oggie and I met way back when I also worked at Google, in the payments space.
Evan: When we think of fraud, we typically think of payment fraud like chargebacks or credit card fraud but content abuse seems radically different from these more traditional forms of fraud. In what ways is that the case?
Oggie: Content abuse is an attack on the users of the business or the platform; it erodes trust. If you have a platform, whether it’s a dating site, a ride-sharing service, a vacation rental site, or a marketplace of buyers and sellers, the platform is essentially the broker of trust between the two parties. Content abuse, which comes in a wide spectrum of scams, usually involves one of the parties misrepresenting themselves, misleading the other, or trying to gain an advantage, and it compromises trust in the overall platform, in turn creating a brand reputation problem and potentially even a PR problem. And I see this abuse as far more damaging in the long run if left unchecked than payment fraud. Payment fraud definitely harms the business, but it does so privately, eating away at the profit margins. Content abuse can be a lot more public: reporters start writing news articles about a platform, and the mindshare of good users starts being eroded away.
Kevin: One point I think Oggie is getting at (and Oggie, feel free to disagree) is that content abuse really hits the top line of the business when it impacts your legitimate users. Eventually there may be some drop-off in lifetime value, or maybe it leads to payment fraud, which can then hit your bottom line, but it’s not necessarily a one-to-one relation. If I’m going to defraud a company as a fraudster, the bottom line is going to get hit pretty immediately: it takes maybe 30 or 60 days for a chargeback to come in, but it will eventually come in. Whereas with something like content abuse, let’s say it’s a dating site and I’m spamming legitimate people looking for a relationship with all these fake profiles, I’m probably going to drop off pretty quickly if all I see are fake profiles asking to get together. I’m not going to want to use that platform anymore. My lifetime value drops, and the potential of me telling my friends, “Oh, don’t use that dating app, it’s just a bunch of garbage,” can spread.
Evan: Is there a difference between the types of teams or personnel who would handle a case of let’s say payment fraud versus a case of content abuse?
Oggie: I think it varies from company to company, but the complaints can come in through the customer support teams, through sales channels or marketing, sometimes through PR or legal. And quite often, because there’s a broad spectrum of abuses, whether it’s misrepresentation, identity theft, scam posts, or scam links, it might not be categorized correctly, or it might not all be aggregated in a way that lets you appreciate the scale and scope of the issue initially.
As Kevin was mentioning, with payment fraud the chargebacks do come in. They might take a while, but they do come in, and it’s a fairly strong feedback loop. With content abuse, a lot of the victims might not complain. They might not even be aware that they were exposed to a scam. So it becomes a lot harder to fully measure and appreciate the scale of content abuse on any platform.
Evan: One subset of content abuse that’s really blown up in recent years is ad fraud. Put simply, ad fraud is a scam in which a fraudster fools advertisers into paying for fake content, fake traffic, fake ad placements, leads, clicks, and so on. It’s a serious problem, costing the industry about $8.2 billion a year. Why is ad fraud in particular posing such a challenge for ordinary users and businesses alike?
Oggie: An ads platform, just like any other platform out there, is a broker between two parties, in this case the advertiser and the user. It’s brokering trust, a certain exchange or agreement, an understanding that both parties have been vetted. Abuse can go in both directions. The users can be fictitious, via ad spam and bots that generate ad clicks, and from the other side, the advertisements themselves might be misleading or inappropriate and lead to bad outcomes. Each platform also has some unique abuses.
For ride-sharing: did the driver actually take the shortest path, or did they overcharge the user? So you’ll have some of those unique abuses, and then you’ll have a set of content abuses that are pervasive across all platforms, usually a lot of the spam, the gambling, and so on. In that sense the ad systems are no different from any other platform, including Google’s.
Evan: We’re going to close with a question that seems to be growing more urgent by the day. Users must now constantly parse out what is real and what is fake online. How is that changing the way we think about trust and safety on the internet?
Oggie: Yes, one of the costs of these attacks and abuses is similar to the airport syndrome of everyone having to go through the security line. Users have to fill out a lot more CAPTCHAs at account signup. There will be a lot more transactions and a lot more activity that’s held back or flagged and reviewed. It’s a cost that good users have to pay to ensure there’s some semblance of enforcement and protection on their behalf.
But users still need to be absolutely vigilant and cautious, and I think we need to keep educating users on all of these platforms. Obviously, there’s a great deal of work done to protect them and to make sure the platform is safe, but fraud and abuse will never be zero.
Kevin: For better or worse, the various types of content abuse bubbling up are just a reflection of technology and of people. Ten years ago, maybe we would buy things online and that was the extent of it. But now people are moving their entire identities online, and there’s so much more at stake. So when there is content abuse out there, whether it’s hate speech, bullying, trolling, spam, other types of scams, or fake profiles, there’s much more awareness around it. At a previous company, I would try my best to educate users about different scams and ways to protect themselves. Initially there was always some pushback from the product team or the growth team, who didn’t want to scare users, but those conversations have become significantly easier because they’ve seen what content abuse or account takeover can do to a business in eroding user trust. So education is critical, and so is building a product that is anti-content-abuse or anti-account-takeover by design. One thing we got better at at Facebook, for example, was allowing the community to flag more bad behavior and then using that as a signal. Having the community step up has been really, really beneficial.
Oggie: And just to add to Kevin’s point, when we talk about user education, part of it is detecting the abuse: being more vigilant about the other parties and the activities they engage with. The other part is knowing where and how to report that abuse so that action can be taken.
Evan: Thanks for joining me on Trust and Safety in Numbers. Until next time, stay vigilant fraud fighters!