Podcast

07/3/2018 | Episode 24

Content Moderation: Best Practices

User-generated content (UGC) is the lifeblood of online marketplaces and communities. When someone posts fraudulent, abusive, or spammy content on your site, it doesn’t just look bad: it’s a visible, public indication that your business isn’t safe. Content abuse hurts your brand and hits your bottom line.

Content moderators are often the first line of defense against fraud and abuse. But how can they keep up with the rapidly changing face of content abuse? What happens when fraudsters switch languages or invent codes? And what should content moderators do when they encounter gray areas? We sit down with Kevin Lee and Jeff Sakasegawa to find out. Drawing on their colorful experience at companies like Google, Square, and Facebook, Kevin and Jeff share tips, tricks, and insights.

Featuring

Roxanna “Evan” Ramzipoor is a content marketing manager at Sift. Her debut novel The Ventriloquists will be released in 2019.


Transcript

Evan: Welcome to Trust and Safety in Numbers, presented by Sift Science. I’m your host, Evan Ramzipoor. Many online marketplaces and communities rely on user-generated content, and a lot of them have content moderators who sort through all of that content to adjudicate what does and does not comply with their terms of service. Content moderators have a tough job. They have to be quick but thorough, decisive but impartial. To get the scoop on best practices for content moderation, I’m sitting down with Sift’s trust and safety architects, Kevin Lee and Jeff Sakasegawa. Both of them have a long history of working in risk management and fraud at places like Square and Google. But first, let’s warm up with a quick fraud fact.

Did you know that fraudsters are increasingly targeting travel loyalty programs? Of all card-not-present fraud that occurred in 2016, 4% involved loyalty and rewards point accounts. That figure jumped to 11% the following year. To learn more, check out loyalty program fraud on the Sift Science blog. Now on to the interview. So let’s start with the basics. What is content moderation?

Kevin: I can go first. Content moderation is reviewing any sort of user-generated content on your platform. For example, if you have a dot-com, that means reviewing the ratings and reviews there; if you’re a Facebook, it means reviewing what’s being posted in a public fashion.

Jeff: Yeah. And we’ll get into more specifics later, but to plus-one what Kevin said: it’s also reviewing according to your policies and procedures. That will be an increasingly common theme with respect to content moderation.

Kevin: Things like your terms of service, again. You can absolutely sell guns online, but many companies don’t allow it. It is legal to do; you just have to adhere to a lot of different policies.

Evan: What kinds of websites, marketplaces, or communities can benefit from a content moderation team?

Jeff: I don’t want to be glib, but I would say any. The reason is that businesses with user-generated content are often very sensitive to it: from a community perspective, you want to know what’s being communicated, how your customers experience your business, and what conversations are going on. You also want to create that level of trust and a safe space so that, whether you’re a website, a marketplace, or a community, those exchanges can happen with positive intent and good actions coming out of them, and not the converse.

Evan: How much of content moderation is proactive and how much of it involves reacting to problems as they arise in real time?

Kevin: Realistically speaking, for most businesses, all of it is reactive. Most companies don’t have the infrastructure or the tools in place to proactively seek out abusive content, and that’s just because they didn’t necessarily think of it when they created the ability for users to talk to other users or to post content with minimal curation. As a result, if someone is using that platform incorrectly or not as intended, you’re essentially caught on your back foot and need to respond reactively. Now, bigger companies, say a Facebook or a YouTube or Yelp or Google, have taken a more proactive stance, specifically via automation and machine learning, and by enabling users to flag other people’s content. So it is somewhat reactive in the sense that, yes, some user out there needs to flag something as inappropriate or spammy, but it can become proactive in the sense of using that signal to then fan out into the rest of the ecosystem and look for similar bad content.
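A minimal sketch of the “fan out” idea Kevin describes: taking a single user flag as a seed and searching for related content to queue for review. This is not Sift’s implementation; the Post fields, similarity measure, and thresholds are illustrative assumptions using only the Python standard library.

```python
# Fan out from one flagged post to similar or related posts worth reviewing.
import re
from dataclasses import dataclass
from difflib import SequenceMatcher

@dataclass
class Post:
    post_id: str
    author_id: str
    text: str

def normalize(text: str) -> str:
    # Lowercase and collapse whitespace so trivial edits don't hide near-duplicates.
    return re.sub(r"\s+", " ", text.lower()).strip()

def fan_out(flagged: Post, corpus: list[Post], sim_threshold: float = 0.85) -> list[Post]:
    """Return other posts to enqueue for review after one post is flagged."""
    flagged_text = normalize(flagged.text)
    suspects = []
    for post in corpus:
        if post.post_id == flagged.post_id:
            continue
        same_author = post.author_id == flagged.author_id
        similarity = SequenceMatcher(None, flagged_text, normalize(post.text)).ratio()
        if same_author or similarity >= sim_threshold:
            suspects.append(post)
    return suspects
```

In practice the same pattern extends to other linkages (shared links, devices, payment instruments), but the core idea is the one above: one reactive flag seeds a proactive sweep.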

Jeff: Yeah, and to add on to that, I think there’s kind of an axiom that a troll will usually get one trolly comment out. Even if there are linkages to preexisting bad actors or, like Kevin said, proactive signals, some companies’ tolerances require them to confirm that suspicion based on the content that actually comes out. So in that sense, it tends to be more reactive.

Evan: How can fraud and risk teams incorporate content moderation into a coherent strategy for fostering trust and safety on their online marketplace or community?

Kevin: Baking in the ability to moderate content at the product level is paramount, whether that’s on the engineering roadmap, in terms of being able to detect this content, or from a user perspective, being able to manually flag content as inappropriate or not according to the terms of service. Case in point: Facebook. They went a long time without the ability to mark something as spammy or inappropriate, and that essentially turns off one of your biggest moderators out there. Content moderation doesn’t have to be just an internal team; it can be an external community as well. So just as anyone in the community can post something, anyone in the community should also be able to report something. For a long time, Facebook did not have a very good feedback mechanism in that regard. Now they’ve built that kind of reporting into the product, which is better overall and leads to a healthier ecosystem.
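A minimal sketch of the product capability Kevin describes: letting anyone in the community report content and folding those reports into a review queue. The storage, field names, and escalation threshold below are assumptions for illustration only.

```python
# Record community reports and escalate content once enough independent reports arrive.
from collections import defaultdict

reports_by_content: dict[str, list[str]] = defaultdict(list)
review_queue: list[str] = []
REVIEW_THRESHOLD = 3  # illustrative: escalate after this many reports

def report_content(content_id: str, reporter_id: str, reason: str) -> None:
    """Store a community report and push the content into the review queue when warranted."""
    reports_by_content[content_id].append(f"{reporter_id}:{reason}")
    if len(reports_by_content[content_id]) >= REVIEW_THRESHOLD and content_id not in review_queue:
        review_queue.append(content_id)

# Example: three independent reports put the post in front of a moderator.
for reporter in ("user_1", "user_2", "user_3"):
    report_content("post_123", reporter, "spam")
print(review_queue)  # ['post_123']
```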

Jeff: Yeah, and it doesn’t seem like it on its face, but you actually can scale your teams quite well if you put this into your strategy up front. I think where teams often fall into pitfalls is, to Kevin’s point, that you may not have this as a consideration from the get-go, and then you unearth these horrible interactions your customers are having. Think about help content, or support content in other words: nowadays a lot of it is self-serve, a lot of it is your customers helping other customers, and you have to moderate that, because you can’t staff 100,000 people to have a one-to-one relationship with your user base, right? So being able to proactively use machine learning in content moderation can be really powerful for a lot of teams as you build them out.

Evan: So content moderators have to walk kind of a tightrope. They have to be thorough but also impartial. How do content moderators approach the diverse and sometimes controversial content they encounter online while also doing their job effectively?

Jeff: This is a very important question, and I think if you could talk to content moderators off the record, maybe over their beverage of choice, it’s probably the thing that gives them a lot of heartburn. Probably the best anecdote I’ve heard on this, from someone in the industry, was that you can have a hateful person use your product, but there is a line where you permit hate or allow hateful content to exist in your ecosystem. Drawing that line can be very difficult, especially since a lot of fraud and risk teams now will find third-party information about who the original publisher is and try to make inferences about their intent or who they are. Sometimes those moderators can’t unsee that, and it’s very hard for them to stick to what the actual policy and procedure says when they just know that this person is hateful.

Evan: Can you give some examples of grey-area cases that might come up in content moderation?

Kevin: So let’s say you don’t allow hate speech on your platform. Hate speech has several shades of grey. Take Twitter, for example. It is not okay to single out a particular race or religion. It’s similar to hiring someone: there are just no-no questions. If someone is interviewing for a particular job, you cannot ask them their religious preference. Those are third-rail questions that you cannot touch. But there are other areas that come into play, where it’s okay to say on Twitter, “I hate all Americans,” but it’s not okay to say, “I hate white people.”

Jeff: Yeah. You can kind of think of it as, and this is a very lawyer-y answer, and I acknowledge I am not a lawyer, but protected classes is generally how…

Kevin: Thank you for that word.

Jeff: One thing I really liked about Kevin’s example is that I think there’s also a difference between expressing an opinion and having that opinion be a precursor to action. It’s one thing to say, “I hate all people with Kevin’s haircut.” It’s a different thing to say, “I think everyone with Kevin’s haircut should get punched in the shoulder.” Both take a dim view of Kevin’s haircut, but one is actually encouraging violence. So grey areas can be difficult there too. Another thing I think about is that just as there are positive communities on the web and in your life, there are also those that are not so positive. A lot of times they have their own vernacular and code words, for lack of a better term, to describe who they’re talking about. So if I know that Twitter or some social media platform will prevent me from complaining about people who are Jewish, I won’t say the word Jewish. I’ll call them butterfingers or something, but people in my community will know that if I say butterfingers, this is how to translate it, and that lets me get out content that’s counter to your policy. I find that comes up a lot in grey areas: once people are aware of what gets taken down, they try to circumvent it.
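A minimal sketch of screening for the kind of community code words Jeff describes (“butterfingers” standing in for a slur). The lexicon below is purely illustrative; in practice teams update it continuously and route hits to human review rather than auto-removing, since these are exactly the grey areas that need judgment.

```python
# Match known code words against post text and route hits to manual review.
CODE_WORD_LEXICON = {
    # observed code word -> the policy category it is being used to evade
    "butterfingers": "hate_speech",
}

def screen_for_code_words(text: str, lexicon: dict[str, str] = CODE_WORD_LEXICON) -> list[tuple[str, str]]:
    """Return (code_word, policy_category) hits found in the text."""
    tokens = {token.strip(".,!?\"'").lower() for token in text.split()}
    return [(word, category) for word, category in lexicon.items() if word in tokens]

# Example: surface the post for a moderator instead of auto-removing it.
hits = screen_for_code_words("Those butterfingers are at it again")
if hits:
    print("needs manual review:", hits)
```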

Evan: How do content moderators create clear guidelines for what is and isn’t acceptable in an online marketplace or community?

Kevin: The easiest way is to come up with examples, to get a little bit out of the theoretical. You can have theoretical guidelines, but applying those guidelines can be tricky, so come up with specific examples: these are examples where it’s definitely not okay, these are examples where it is okay, and then here is the large encyclopedia of grey areas, with where each one landed on the spectrum and why.

Jeff: To borrow, or kind of distort, a payments fraud anecdote: one thing that is very valuable to a risk and fraud team working on payments is chargebacks, or disputes. Even though it’s unfortunate those things happen, they are incredibly valuable feedback that informs what you should prevent in the future. With respect to content moderation, as Kevin talked about, you’re getting that same feedback from your own community. Digging through those reports should be equally informative for how you want to create guidelines, right? If your community is constantly reporting a particular instance or a particular type of event, you can probably reduce those negative interactions just by including that in the policies and procedures for your internal team, or even externally facing ones. So again, it’s not great that it’s happening, but it’s worse not to act on it.
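A minimal sketch of Jeff’s chargeback analogy: aggregating community reports so recurring abuse patterns surface as candidates for explicit policy examples. The report fields here are illustrative assumptions.

```python
# Count recurring (reason, target_type) pairs across community reports.
from collections import Counter

reports = [
    {"reason": "spam", "target_type": "review"},
    {"reason": "harassment", "target_type": "comment"},
    {"reason": "spam", "target_type": "review"},
]

def top_report_patterns(reports: list[dict], n: int = 5) -> list[tuple[tuple[str, str], int]]:
    """Return the most frequent report patterns, most common first."""
    counts = Counter((r["reason"], r["target_type"]) for r in reports)
    return counts.most_common(n)

# Patterns that keep recurring become candidates for concrete examples
# in internal (or even external) policies and procedures.
print(top_report_patterns(reports))
```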

Evan: Can you tell us about some of the limits and downsides of manual content moderation?

Kevin: I’ll sum it up in three terms: scale, flexibility, and response time. Scale: if your platform takes off, it is very difficult to hire that many people, train that many people, or even train models in that regard, quickly enough. Flexibility: let’s say you want to launch in a new country with a new language. That raises the bar even higher if you’re hiring people for this work; it can no longer just be English, it needs to be Bulgarian or Romanian, languages that may be tough for you to find and staff against. And then response time: once someone can post content, they can post it at any time of day. Does that mean you need someone on the other side reviewing it in real time, or close to real time? That again makes it difficult to manually scale up to that kind of organization.

Jeff: While the content is live, even if you eventually take it down, that’s time in which it could be screenshotted or shared around, or it can get caught up in a negative press cycle, unfortunately, given the times. And when you think about your own team, even if you have the luxury of thousands of people to moderate, those people need to sleep. There are limits to their human capability to review these things and…

Kevin: That’s why you need machine learning.

Jeff: Yes, that’s why you need machine learning. And to continue to tease that out, if I worked on Kevin’s content moderation team, I would surmise that my first review in the morning, when I have my cup of coffee, might be a little stronger than my 500th review at the end of the day, when I’m mentally fatigued. Machines don’t have that problem.
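A minimal sketch of the kind of machine-learning triage Kevin and Jeff allude to, assuming scikit-learn is available: past reviewer decisions become labels, a simple text classifier scores new content, and only the uncertain middle goes to human moderators. The toy data and thresholds are illustrative, not Sift’s actual models.

```python
# Train a toy text classifier on past moderation decisions and triage new content by score.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great seller, fast shipping", "buy followers cheap click here",
         "item arrived as described", "click here for free gift cards"]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = policy violation, from past human reviews

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

score = model.predict_proba(["limited time!! click here now"])[0][1]
# Route by confidence: auto-action the clear cases, queue the rest for humans.
if score > 0.9:
    action = "auto-remove"
elif score < 0.1:
    action = "auto-approve"
else:
    action = "send to manual review"
print(round(score, 2), action)
```

The point of the thresholds is exactly the consistency Jeff describes: the model applies the same judgment to review number 500 as it did to review number one, while humans handle the grey areas.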

Evan: Thanks for joining me on Trust and Safety in Numbers. Until next time, stay vigilant, fraud fighters!
