White hat hackers roam the internet in search of security vulnerabilities. Unlike their black hat counterparts, their goal isn’t to penetrate those vulnerabilities: it’s to make website and business owners aware of these dangerous loopholes. But Alex Rice, co-founder of bug bounty organization HackerOne, says that white hat hackers are playing a dangerous game.
Without a culture of vulnerability disclosure, white hat hackers can’t do their job without risking severe legal penalties. Fearful of the repercussions, many hackers aren't coming forward when they discover bugs. Alex Rice sits down with us to chat about how our pursuit of online safety might be causing harm as well as good.
Interviewer: Welcome to Trust and Safety in Numbers, presented by Sift Science. I am your host Evan Ramzipoor. When you think of a hacker, what comes to mind? Eighties movies? Guy Fawkes masks? You probably don’t think of an organized international group of security professionals working with companies and government agencies to uncover security vulnerabilities. And yet, that’s exactly what HackerOne is.
Interviewee: We’re a bug bounty platform that helps companies find a friendly path towards working with hackers.
Interviewer: HackerOne was founded by ethical hackers who quite simply wanted to make the internet safer both for businesses with security vulnerabilities and white hat hackers who want to report these vulnerabilities. I’m here today with co-founder Alex Rice to talk about white hat hacking and why the culture surrounding the disclosure of security vulnerabilities might be working against us.
But first, let’s warm up with a quick fraud fact. Did you know that last year, 33 million Americans had at least one honest transaction blocked on an e-commerce site? That’s 15% of all cardholders in the U.S. For more on false positives, check out 10 things you need to know about blocking good customers on the Sift Science blog. Now, onto the interview. So why is a platform like HackerOne necessary? To find out, you have to dig into the legal and social norms surrounding hackers.
Interviewee: The legal environment in the U.S. is… The main cyber law that we deal with is the CFAA, which was drafted back in the ’80s and has a pretty broad reach in terms of what counts as criminal activity online.
Interviewer: That’s the Computer Fraud and Abuse Act, which prohibits anyone from accessing a computer without proper authorization.
Interviewee: That legal uncertainty has led to a lot of anxiety among hackers in the past. If they become aware of a potential problem in a website, they don’t have clear protections under U.S. law to actually come forward and tell somebody about that problem. It doesn’t matter how they found out about it. If the company doesn’t proactively welcome this type of security report, the individual hacker is actually placing themselves at significant risk by telling you about a problem.
Interviewer: But, of course, some organizations are able to get around that with something called a Vulnerability Disclosure Policy.
Interviewee: Often referred to as responsible disclosure, it carves out an exception from the CFAA that essentially says, “If you see something wrong with our site, we would love to hear about it, and as long as you’re following a simple set of rules, like do no harm and respect the privacy of other individuals, we won’t initiate legal action against you.”
And so, if you, as a company, haven’t taken that proactive step to say, “Okay. If you point out a problem in our website, we’re not going to take vindictive action against you, we’re not going to send you to law enforcement, we’re not going to send you a cease and desist letter,” it’s very unlikely that a hacker, even someone who’s trying to help you out, is going to proactively reach out to you. And if that doesn’t happen, we’re all really missing out on some very valuable information.
Interviewer: But how should we go about sharing information that businesses or other entities might find valuable without compromising their security or their users’ security?
Interviewee: The default status quo is not telling people explicitly what they should do if they come across compromised credentials, a vulnerability, or some other thing that could potentially harm end users. If we leave it up to chance and up to individual interpretation, it’s bound to go wrong sooner or later.
Interviewer: And so, the first step is simply having a policy that tells white hat hackers what they should do if they’re interested in coming forward with a vulnerability they’ve found.
Interviewee: And that can be some very simple language around if you have identified a problem in our site, or you spot abuse, or you spot a vulnerability, here’s explicitly what you should go do with it next, and that open line of communication is, unfortunately, far too rare for most organizations out there. They wait until something…an actual problem has occurred before welcoming input from the outside.
And I think it’s worth spending a lot of time being proactive. Just let the friendly community out there on the internet know: if you have any valuable information that can help safety, that can help build trust, we, as a company, want to hear about it. That simple step goes a long way.
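One lightweight way to publish exactly this kind of open line of communication, not mentioned in the interview but widely adopted since, is a `security.txt` file (standardized as RFC 9116) served from a site’s `/.well-known/` path. A minimal sketch, with placeholder addresses and URLs:

```
# Served at https://example.com/.well-known/security.txt
# All contact details below are hypothetical placeholders.
Contact: mailto:security@example.com
Policy: https://example.com/vulnerability-disclosure-policy
Preferred-Languages: en
Expires: 2026-01-01T00:00:00.000Z
```

The `Contact` line tells a researcher exactly where a report should go, and the `Policy` line points at the disclosure terms, the “we won’t initiate legal action if you follow these rules” language discussed above.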
Interviewer: After that policy is in place, it’s important that the information provided by these white hat hackers, information about security vulnerabilities, goes to teams with the expertise necessary to deal with it.
Interviewee: In the case of vulnerabilities, that means having properly trained analysts and security engineers who can assess the impact of the vulnerability, escalate it appropriately, and get it remediated and fixed in proper time. For abuse and phishing, it means having experts who are prepared to respond to those types of incidents. And so, the challenge for an organization is both making it clear that you’re willing to receive that type of input and feedback, and then, secondly, making sure that you’re operationally equipped to act upon it. That’s where the real benefit to users comes, but it’s also where the real business challenges are introduced.
Interviewer: Is it at all possible to make progress in diagnosing and treating security vulnerabilities if we don’t have a culture of disclosure?
Interviewee: Absolutely not. If you leave it up to chance how you’re going to find out about a security vulnerability, it’s guaranteed to go wrong, and we see this happening time and time again. If you don’t have a clear process for where people go to report security vulnerabilities, reports are guaranteed to come in an unstructured manner.
We talked a little bit at the beginning about how hackers are placing themselves at risk even by merely reaching out and contacting you. And so, if you don’t have that policy, the most common path is just going to be non-disclosure.
You’re not going to find out about it, and that risk is going to remain for you and your users. But sometimes somebody does take that proactive step of saying, “You know, I’m going to chance it. This company hasn’t promised not to initiate legal action against me, but I’m going to try to contact them anyway.”
Interviewer: The problem then is that companies no longer have control over who hears about these vulnerabilities. If there aren’t systems in place by which a white hat hacker can deliver their findings to the proper authorities, and do so safely, then there’s no telling where that information might wind up.
Interviewee: That vulnerability report might go to your social media team, it might go to your customer support team. We’ve seen CEOs send direct messages on Twitter. We’ve seen vulnerabilities publicly posted on Twitter. Sometimes, folks will just try at random email addresses at the company.
And all of that means that sensitive vulnerability information is potentially going to people who aren’t prepared to deal with it properly. The ways that could go wrong are just innumerable: somebody ignoring it, the report getting caught in a spam filter, someone responding with incorrect information.
And then, the secondary effect is that it significantly increases the length of time it takes to actually diagnose and remediate. If a vulnerability exists, the most important priority is to get it triaged effectively and into the hands of a security engineer who can fix the issue properly before it can cause any harm, or any additional harm.
Interviewer: How does HackerOne go about fostering a culture of disclosure without putting itself in a vulnerable legal position?
Interviewee: To be honest, we intentionally entered a murky legal area when we started HackerOne. The law has not caught up with the norms of the internet today. The reality is that users on the internet, security researchers, technology professionals, and hackers find vulnerabilities all the time, even when they’re not looking for them.
And we don’t think that’s fair to them, and we think we can contribute greatly to the impact they can have on the world by helping establish some of those norms, and by being advocates for them when things go wrong.
Interviewer: So HackerOne offers something called disclosure assistance. If you’re a white hat hacker who discovered a security vulnerability, but you haven’t been able to contact someone at the website or business in question, HackerOne will work with you to get in touch with them in the legally safest way possible.
Interviewee: We are exposing ourselves to some non-negligible amount of legal risk in the process of doing that, but we actually think it’s the right thing to do. If a vulnerability exists, pretending that it doesn’t exist or trying to enforce silence around it is just not the right path forward anymore. We have to operate with a mindset that these vulnerabilities already exist.
The hacker who found it isn’t the bad guy, they’re trying to help, and the only action that really matters is making sure that it gets remediated as expeditiously as possible.
Interviewer: Do you think this culture of disclosure will become more pervasive as we’re faced with increasingly devastating data breaches?
Interviewee: I think so. These programs were almost unheard of even five or ten years ago; it was an exception to see them. But within the last few years, they’ve started to really crop up in organizations that were very, very traditional and risk-averse in the past. Everyone from General Motors, to Tesla, to airlines is now publishing vulnerability disclosure policies.
One of the most significant ones was just over a year ago. The U.S. Department of Defense published their vulnerability disclosure program.
Interviewer: Secretary of Defense Ash Carter basically called it a “see something, say something” policy, and since that policy took effect, the DoD has been notified of about 3,000 security vulnerabilities. Vulnerabilities that previously would have gone unnoticed until, of course, the wrong people noticed them.
Interviewee: And we’re all measurably safer with a program like that in place. If an organization like the Department of Defense, which really architected the original hacking laws on the internet, is now coming forward and saying, “We think everybody should have a vulnerability disclosure policy. Ignoring vulnerabilities is not an appropriate course of action anymore,” I do think we’re on a path towards that becoming the norm across the board.
Interviewer: Thanks for joining us on Trust and Safety in Numbers. We’re taking a quick break for the holidays and will be back on January 9th. Until then, stay vigilant, fraud fighters.