05/8/2018 | Episode 22

How Two Trust & Safety Experts Fight ATO

Fighting account takeover (ATO) isn't as simple as banning a fraudster. In committing ATO, fraudsters can imitate an honest user's behavior, making the attack difficult to detect. And once you have detected it and stopped the fraudster, you then have to regain the affected user's trust. ATO is a major pain -- and it can seriously damage your brand and bottom line. How do you fight such a unique and pervasive threat?

Enter two trust & safety experts! Oggie Nicolic (who taught us how to combat content abuse on a previous podcast) is a Google engineer with a background in risk management. Kevin Lee is a trust & safety architect at Sift. On this podcast, we're sitting down with Oggie and Kevin to learn how to fight ATO. Is there a difference between the types of fraud teams that would handle a case of ATO versus a case of payment fraud? What makes ATO so challenging to detect -- and how do you mitigate those challenges? And why is ATO just like spraining your ankle?

Oggie Nicolic is a Google engineer with a background in risk management.

Kevin Lee is a Trust & Safety Architect at Sift Science.


Hosted By

Roxanna “Evan” Ramzipoor is a content marketing manager at Sift Science. Her debut novel The Ventriloquists will be released in 2019.


Transcript

Evan: Welcome to “Trust & Safety in Numbers,” presented by Sift Science. I’m your host, Evan Ramzipoor. In our last episode, we looked at account takeover from the perspective of a fraud fighter and a fraudster, and this time, we’re diving even deeper from the perspective of a trust and safety expert.

Oggie: Hi, I’m Oggie Nicolic. I currently work on Google Assistant where I lead some of the quality and machine learning efforts. Prior to that, for the last seven and a half years, I led the risk engineering group for Google AdWords, so protecting Google’s ad network against payment fraud, account takeover, and policy and content violations and bad ads.

Evan: You might remember Oggie from our episode on content abuse. And if you don’t, you should check out our episode on content abuse. And our other guest is a familiar face on the podcast. Well, maybe not, because you can’t see him, but he’s a familiar voice on the podcast. It’s the incomparable Kevin Lee.

Kevin: I’m the trust and safety architect here at Sift Science. And actually, Oggie and I met way back when I also worked at Google in the payment space.

Evan: I’m sitting down with Oggie and Kevin to learn how trust and safety teams approach this complicated problem. Of course, account takeover is made all the more complex by the uniqueness of those impacted. We’ll also learn how businesses can go about rebuilding customers’ trust after an ATO attack. But first, let’s warm up with a quick fraud fact. Did you know that in 2017 account takeover resulted in over $5 billion in losses? To learn more, check out a snapshot of account takeover on the Sift Science blog. Now on to the interview.

When we think of fraud, we typically think of payment fraud, like chargebacks, or credit card fraud, but account takeover has come to dominate the news recently. Oggie, based on what you’ve seen, how is ATO different or more insidious than simple payment fraud?

Oggie: One difference, obviously, is that it is not just an attack on the business. With payment fraud, the loss is eating away at the business’s margins and shows up on the balance sheet.

ATO is also an attack on the good customers, right? And it is also a sign of more sophisticated attackers, potentially ones that were able to gain control of or infect the victim’s computer with malware. It might be an indication not just that that specific account has been compromised, but that the victim’s entire identity or many other accounts have been, especially if they reuse passwords and logins.

Kevin: Account takeover, albeit perhaps not as prominent as payment fraud, is definitely much more severe. So if, for example, someone stole my credit card and fraudulently purchased a $100 gift card from Amazon or something, I would dispute it. Most likely, I’d get my money back. Amazon would kind of foot the bill at the end of the day. But when it comes to account takeover, if someone, let’s say, hijacked my Facebook account and started deleting content or messaging all my friends, that is a pretty personal piece of information or part of my identity that’s now being exposed, and so the reaction I have to that type of abuse is much, much more personal than, let’s say, traditional payment fraud.

Evan: On our last podcast, we spoke to Karisse Hendrick, who’s a veteran fraud fighter with a lot of great insights to share. But unlike you two, she’s never managed a fraud team. So, let’s dig into the methods through which trust and safety teams approach account takeover. First of all, is there a difference between the types of fraud teams that would handle a case of payment fraud versus a case of account takeover? And if so, how does that impact the way ATO is diagnosed and treated?

Oggie: Account takeover is often deemed a security risk or a compromise of the network, and dealt with by the chief information officer and their staff, whereas on the payments side it becomes more of an accounting, finance, and chargeback dispute and resolution problem. I think there’s a huge amount of overlap in terms of the tooling, the capabilities, and the opportunities around manual reviews, the kind of data that needs to be collected, and the kind of systems that need to be built. Hopefully, over time, the two converge as these opportunities to collaborate become more clear.

Kevin: I think, at least in the tech industry, there’s been a shift towards developing these overarching trust and safety teams, as opposed to just risk teams, to align company goals, incentives, and organizations around that. Trust and safety is a pretty broad term, but it can encapsulate things like payment risk, compliance, and security concerns.

If you’re a company like Airbnb, for example, you have to worry about payment fraud, of course, but then you also have things like physical safety where you have your customers interacting in the real world for the first time together and having to think about those types of interactions. And so, I think more companies now are adopting kind of a trust and safety umbrella to kind of look at all these risks that are facing your complex product.

Evan: I know we touched on this a bit earlier, but let’s go into more detail. How are the victims of ATO different from those impacted by payment fraud?

Oggie: With payment fraud, we put a lot of emphasis on detection, with the idea that once we detect the attack, it’s just a matter of shutting down that attempt or defending the individual, their account, and the credit card being used. With ATO, detecting it is just the beginning: you then have to reestablish contact with the legitimate user, take them through the recovery process for their account, and take them through a vetting to determine the scope of the damage on their side in terms of malware or virus infections, or the likelihood of other accounts being compromised or the same account being compromised again. [inaudible 00:06:31] there is to reinstate the good customers, because every single ATO is also a potentially lost customer, a legitimate customer, the victim, who presumably would otherwise continue using that platform.

Kevin: It’s, kind of like maybe spraining an ankle. Like, if you sprain your ankle one time very severely, you are definitely more prone to spraining that ankle again. The same could be said for getting a concussion or something. But oftentimes, when I was at Facebook, when we would remediate someone and they would get access back to their account, sometimes within minutes or hours, that account would be compromised again and they’d start sending spammy messages and posts just because they had not cleaned up their machine and gotten rid of the piece of malware or virus that had infected the machine.

Let’s say your Facebook account got taken over and a bunch of spam got posted. We would try and clean it up for you, but oftentimes attacks don’t happen all in one go. So if someone compromises your account, maybe they post something, and then I, as the legitimate user, go back on there and start posting or liking things, and then the spammer goes at it again. It can be difficult to figure out which activity is legit and which is not, and so there are some errors that can happen there.

Evan: Part of the reason ATO is so challenging is that it’s hard to detect. Payment fraud can be pretty cut and dry. But on the surface, someone who’s committing account takeover may look like an honest user, since they’ve taken over an honest user’s account. How does that impact the way we go about fighting ATO?

Oggie: ATO also comes in two different flavors. There’s ATO where the user’s credentials have been compromised, and the attacker is actually coming from a new location, IP, or machine. So, some of the signals that we also use in payment fraud, around machine fingerprinting and location-based signals, would help there.

There’s a second flavor of ATO where fraudsters come through and actually direct, or proxy, their traffic through the victim’s machine. There we have to start building a whole suite of specialized signals and telltale signs. So, you know, as Kevin was suggesting around Facebook, every time an account is taken over on any platform, there are specific actions or specific ways that the fraudster is looking to monetize that activity and leverage the victim’s good reputation, history, and good standing on that platform.

Obviously, we can also add a lot of heuristics. So, if we see a login from a brand-new IP or a new machine, and somebody is trying to make a post or a very large purchase, those can also be very suspicious telltale signs.
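To make that heuristic concrete, here is a minimal sketch of the kind of rule Oggie describes: flag a high-risk action when the session comes from an IP or device the account has never used before. The field names, action list, and purchase threshold are illustrative assumptions, not any platform’s actual rules.

```python
# Hypothetical heuristic: flag risky actions coming from an unfamiliar IP or device.
# Field names, the action list, and the $500 threshold are illustrative only.

HIGH_RISK_ACTIONS = {"password_change", "payout_update", "bulk_post"}
LARGE_PURCHASE_THRESHOLD = 500.00

def is_suspicious(event, account_history):
    """Return True if this event deserves a closer look.

    event: dict with keys like "action", "ip", "device_id", "amount"
    account_history: dict with sets "known_ips" and "known_devices"
    """
    new_ip = event["ip"] not in account_history["known_ips"]
    new_device = event["device_id"] not in account_history["known_devices"]

    risky_action = (
        event["action"] in HIGH_RISK_ACTIONS
        or event.get("amount", 0) >= LARGE_PURCHASE_THRESHOLD
    )

    # A login from a brand-new IP or machine, followed by a post or a very
    # large purchase, is the telltale combination mentioned in the interview.
    return (new_ip or new_device) and risky_action
```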

Kevin: And Oggie, you mentioned heuristic rules and also models and really looking at the behavior of what’s going on in a particular account. What is your recommendation or experience around how much of this stuff can be fought or mitigated against purely by heuristics or rules? Or how much does it have to be driven by machine learning?

Oggie: Yeah, that is an excellent question. And I think that is a trade-off that we have to make for every single type of abuse we fight. Machine learning, by the way, is not the end-all solution for everything. There are specific problems that it lends itself to well, and there are also vastly different machine learning techniques; not all of them are well-suited for every case. In my experience, you can get pretty far with rules and heuristics. The telltale sign that rules and heuristics are no longer working is when, with each new rule you add, the adversary can simply change their behavior, or the cost of changing their behavior to circumvent that new rule is very low.

Evan: So, basically what you’re saying, if I’m understanding correctly, is that fraudsters will continue to do their bad deeds unless we make fraud a more costly or more difficult investment of time and resources. Can you give an example that makes that more concrete?

Oggie: A good example of that would be if the hijackers were posting offensive language, and so we start adding a lot of rules around specific words in the posts, whether they’re bad or inflammatory. But that can be very easily circumvented. Very quickly, it will start breaking down and devolve into a whack-a-mole game where the adversary can either use a thesaurus and find all the synonyms and acronyms, or start doing text manipulation. And so, once we see that the fraud fighting has progressed to that level, we have to think about more sophisticated techniques: how to build proper feedback loops, and how to anticipate that evolution, that chess game, essentially, where you have to play it out several steps in advance.

Kevin: Yeah. I can imagine the bad words list becoming really not scalable when your company operates in different languages. You might have a great bad words list for English, but if your company wants to expand into Indonesia or parts of Europe, it can be a pretty daunting task to maintain and build out these blacklists for every single language out there and then actually keep them updated.
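To make the whack-a-mole concrete, here is a small, purely illustrative sketch of why an exact-match word list breaks down: a one-character substitution slips past it, and adding a normalization step only covers the tricks you already know about. The banned words and leetspeak map are made up for the example.

```python
import re

BANNED_WORDS = {"freemoney", "giveaway"}  # hypothetical blacklist entries

# A trivial leetspeak map; real obfuscation evolves far faster than this.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t"})

def naive_match(text: str) -> bool:
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return any(token in BANNED_WORDS for token in tokens)

def normalized_match(text: str) -> bool:
    tokens = re.findall(r"[a-z0-9]+", text.lower().translate(LEET_MAP))
    return any(token in BANNED_WORDS for token in tokens)

print(naive_match("claim your freemoney now"))       # True: exact hit
print(naive_match("claim your fr33m0ney now"))       # False: obfuscation slips past
print(normalized_match("claim your fr33m0ney now"))  # True, but only for this one trick
```

Multiply that by every language and every new spelling trick, and the rule set quickly stops scaling, which is the point at which teams typically reach for behavioral signals and machine learning.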

Evan: One other thing that’s different about ATO is that your job isn’t done after you’ve stopped a fraudster. Victims of account takeover can often find the experience a little traumatizing, to be honest. So, the objective of ATO fraud-fighting isn’t just to detect the fraud, you’re also working to rebuild the affected user’s trust once the fraud has happened. How do you go about doing that?

Oggie: The cleanup and the reinstatement process is very manual, because account takeovers and the actions taken within an account can be vast and different; almost every single ATO has a different fingerprint, so to speak. But it helps to have general tooling around the change history, or versioning of the changes and their sequence, the ability to roll back actions, or the ability to identify the changes en masse. A good example of that would be a tool that lets you see all activity from a given IP or a given machine across all accounts within a particular day, and see if there are actually bigger trends. It’s usually not a single user that’s compromised; you’re most likely dealing with entire clusters. It strengthens the case, and I think it reassures you that you’re on the correct path, when you see that it’s highly unlikely that 1,000 different users all logged in from the same place and performed very similar, eerily similar activity.

Kevin: Yeah, definitely. Plus one to the clustering, in terms of when an agent is doing their analysis to figure out how many accounts are potentially exposed. Being able to cluster based on machine ID, IP, or other characteristics is super crucial. That’s half the battle, just in terms of figuring out who might be affected. In the best-case scenario, hopefully you can find an infected machine or an account that has not yet spread spam or malware or begun exhibiting those traits, and then you can put the legitimate user through some sort of two-factor authentication or other preventative measures. Depending on how much that account is worth to you, it can, and usually does, require a lot of human intervention to make sure that the account is put back into a state that is familiar to the user.
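A rough sketch of the clustering idea both guests describe: group a day’s logins by shared IP and device fingerprint, and surface fingerprints that touch an implausible number of distinct accounts. The field names and the cluster-size threshold are assumptions for illustration only.

```python
from collections import defaultdict

SUSPICIOUS_CLUSTER_SIZE = 50  # hypothetical threshold; tune to your own traffic

def suspicious_clusters(login_events):
    """Group one day's logins by (ip, device_id) and return oversized clusters.

    login_events: iterable of dicts with "ip", "device_id", and "account_id"
    """
    accounts_by_fingerprint = defaultdict(set)
    for event in login_events:
        fingerprint = (event["ip"], event["device_id"])
        accounts_by_fingerprint[fingerprint].add(event["account_id"])

    # 1,000 different users rarely all log in from the same place and behave
    # in eerily similar ways; oversized clusters go to manual review.
    return {
        fingerprint: accounts
        for fingerprint, accounts in accounts_by_fingerprint.items()
        if len(accounts) >= SUSPICIOUS_CLUSTER_SIZE
    }
```

Each flagged cluster gives an analyst a concrete list of accounts to step through remediation, two-factor re-verification, and cleanup.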

Evan: Thanks for joining me on, “Trust & Safety in Numbers.” Until next time, stay vigilant, fraud fighters.

Related Content

Not all Machine Learning Systems are Created Equal

Learn more about what sets Sift Science’s machine learning apart.


Complete Guide to Account Takeover

With billions of compromised credentials already in criminals’ hands, how do you protect your users’ accounts, your brand, and your bottom line?

