08/08/2017 | Episode 3
Kevin Lee is a Trust & Safety Architect at Sift Science.
Paul Rockwell is Head of Trust & Safety at LinkedIn.
Evan: Welcome to “Trust & Safety in Numbers,” presented by Sift Science. I’m your host, Evan Ramzipoor. Before we get to our episode, let’s warm up with the fast fraud fact. Did you know that the average hacker can commit identity theft using a computer or mobile device in under 60 seconds? For more awesome fraud facts, check out the Sift Science blog, particularly the blog post, “Four Surprising Lessons From Former Cyber Criminals.” Now onto our episode. I’m here today with Kevin Lee, Trust & Safety architect at Sift Science, and Paul Rockwell, head of Trust & Safety at LinkedIn. Thanks for joining me, guys. Today, we’ll be talking about building a team to counter online fraud and abuse, which is something that both of you have a lot of experience with. So let’s start at the beginning. When you’re building your fraud team, what’s your ultimate goal? What’s at the forefront of your mind? Let’s start with you, Paul.
Paul: Sure. So really, it starts with understanding what types of fraud and abuse we’re dealing with, because the different types of fraud have different solutions that may or may not piece together, but also tie in to the talent that you need in order to address those issues. So it’s really starting with fraud and then going from there.
Kevin: I definitely agree with those points. I’d also say it’s important to check in on what the company’s goals are overall. Oftentimes, on a risk or fraud team, goal number one is to reduce losses or reduce X type of abuse on your platform. But also really take a look at, “How much do you want to reduce it? How tolerant are you of this type of bad behavior? Is it zero tolerance, where you would want to get to zero, which may or may not be an attainable goal? Or do you want to keep it within a certain threshold, knowing that, ‘Hey, it’s still gonna exist on your system a little bit,’ but having that tolerance then enables you to open the top of the funnel a bit more and allow more users onto the platform?”
Evan: Right. That makes sense. So let’s talk a little bit about what you’re calling this. So we’re calling this a fraud team right now. Is that the same thing as a Trust & Safety team or are these two different things entirely? That’s a question you get a lot, Kevin.
Kevin: So on my end, I’d say when I first started working in the industry over 10 years ago, it was predominantly either a fraud team or a risk team. And your job was very transactional, where, let’s say, it’s e-commerce focused, so you want to stop bad orders from going through the system. As companies have grown (previously I was at Facebook and at Square), what originally started as something e-commerce-based or just community-based, where people are posting on each other’s walls, sharing comments and stories, quickly grew from there in terms of product features, new products, and buying other companies that have their own suite of products.
And so the risk team or the abuse team had actually started morphing into much, much more just because we weren’t just talking about one type of abuse, like payment abuse, for example. There’s a lot more involved with it. And then since then, at least in tech, I’d say, we’ve kind of started moving to more of a Trust & Safety umbrella where teams like the risk team or the fraud team go under there. But you can even look at companies like Airbnb that have a physical safety team, or Facebook has a spam team, an account takeover team, that type of thing.
Evan: So given how much fraud has changed over the past three to five years like with the increase in account takeover attacks and the importance or the prevalence now of mobile fraud, how have your fraud teams changed over the past, let’s say, three to five years? We could start with you, Paul.
Paul: Sure. So, paying attention. We talked about the lines of communication with those ops teams, and I think that’s vital to our ability to adapt to these changes. Because as the frontline teams are seeing a shift, that’s communicated. We’ve got a regular rhythm of communications between the teams so that as these things are starting to happen, we’re updating dashboards, we’re making sure we’re focused on some of the new trends, which means we can then allocate resources accordingly to deal with those things. Paying attention to what we’re seeing on the front lines really is helpful. We’ve also started to expand what we’re doing beyond just taking stuff reactively. We’re looking proactively for potential attacks: what are some of the things being discussed in hacker forums, or things of that nature, that we need to be tuned into because they could be a problem? Additionally, we’re trying to prevent problems from happening in the first place by working with every single product and engineering team on reviewing their product for risk before it’s launched into production.
Kevin: On my end, I’d say one transition I’ve seen over the past three to five years is that we’ve gotten a lot better at moving onto that more proactive front. Let’s say five years ago, the risk team or the fraud team was really dealing with things on a reactive basis. You think about something like a chargeback: that thing is already 30 days old and you’re dealing with it now? We’ve essentially become better as an industry where, yes, we’re still dealing with that kind of stuff, but we’ve begun to shift more towards a proactive sense, where we can spot these things coming much, much sooner. So Paul was talking about meeting with the product team, giving suggestions and insights prior to product launch. That’s something that five or 10 years ago was much, much harder to do. Now, I think more and more companies see the value in doing it sooner, prior to launch.
We’re also starting to think about things like user experience, where maybe traditionally that wasn’t the case. When you think about the entire user experience, what does it mean to stop a particular order or block someone? What does the recovery process look like for that user? It’s not just about bringing down the hammer on these bad accounts. We’re not perfect. We’re gonna make mistakes. So what does that user experience look like when it comes to re-enabling that user, or if we did make a mistake, how do we make it right?
Evan: Let’s talk a little about getting buy-in for resources for your Trust & Safety team. So generally, what are some challenges that you’ve run into while trying to build up your team, from a buy-in perspective and from a resource-gathering perspective? Kevin, if you’d like to go first.
Kevin: So one thing that always comes into play is certainly budget and resources. Oftentimes, depending on the company, how much risk and fraud is woven into the DNA, and how much the company focuses on it, can be a barrier in itself. But I always try to tie this back to the metrics, where I’d say, unlike teams like marketing or, in some cases, engineering, there often can be a clear ground truth in terms of how much impact we’re having on the company.
Paul: You can absolutely bring this back around to metrics: how you’re protecting revenue and protecting a positive member experience. Showing what happens if a good member has a negative experience, and showing that engagement drop over time, is a very powerful tool in the toolkit to really make the case for additional resources.
Evan: So to close, I’d like to hear a story from each of you about something that happened while you were on the fraud and abuse frontlines. It could be memorable because it was a story in which everybody’s goals were on the same page, good things happened, and everyone worked together as a team, or because things didn’t go as well and you learned something from it. Just something that stands out to you as a highlight, or perhaps a lowlight, of your fraud and abuse career. Would you like to go first, Paul?
Paul: Wow, that’s a good one. Let’s see. So I’d say this one isn’t necessarily me on the front line, but in the early days of the LinkedIn Trust & Safety team, when the team was relatively small, we were noticing that we really didn’t have a sufficient blocking mechanism, member-to-member blocking. We sat down with the right teams, aligned on why we needed this, and started to work through what the challenges were and all the different areas where we needed to provide this coverage. It was a very heavily invested project for a number of different teams, but it got buy-off all the way up the chain. And it was really incredible to see the team rally around something that we felt this passionately about and were able to deliver it. This is still one of the features that I think we’re very proud of.
Kevin: I’d say I’ll give an example from when I was at Square, and this will be a case of what not to do, but then what eventually to do. So Square is a payments processor. Many transactions are card-present, where you either swipe that credit card or dip it into the reader. But at the time, and this was several years ago, we allowed users to key in transactions as well. So let’s say you were taking an order over the phone and the end user just gives the credit card number, and then you charge the $100 or whatever. We actually had a monthly limit there where you could only get disbursed, I believe it was, $3,000 in card-not-present transactions per month. And then after 30 days, we’d release those funds. We wanted to hold some money, almost like a reserve, to cover losses just in case.
This was, in hindsight, a really bad user experience. When I joined the company, that was definitely the case and it was in play. But I didn’t really realize how bad it was until we started looking at how much money we were holding and how much headache it was actually causing the customer operations team, where they were getting phone calls and emails asking, “Where are my funds?” And to any small business, or really any business in general, if you’re holding funds for no good reason, or even if there is a good reason for it, they’re not gonna be happy about it. There was a pretty sizable effort between risk ops, the customer operations team, data science, and engineering to essentially flip it, because we didn’t wanna keep this 30-day rule anymore. Over time, we were actually able to lift that, so merchants that did take a lot of card-not-present transactions were able to get paid out in full, because the risk team loosened up, but in a more strategic way.
Using the data science team and the engineering team, we definitely still held money where we thought there was a high potential for fraud or risk. But really, I’d say, 90-plus percent of the time, we were actually able to release those funds, and that led to a better user experience. Our peers and counterparts on the customer support team also began to show a lot more appreciation, because they didn’t have to take as many of those types of phone calls or emails. And that freed them up to do other things to better the user experience.
Evan: That was Kevin Lee, Trust & Safety architect at Sift Science, and Paul Rockwell, head of Trust & Safety at LinkedIn. This is part one of a two-part episode about building your fraud and risk team. Stay tuned for part two. Until then, stay vigilant, fraud fighters.