This week, I've been writing up my 5-part guide on "Fixing Data Breaches". On Monday I talked about the value of education; let's try and stop the breach from happening in the first place. Then yesterday it was all about reducing the impact of a breach, namely by collecting a lot less data in the first place, then recognising that it belongs to the person who provided it and treating it with the appropriate respect.
Today, I want to focus on the ease of disclosure. What I'm talking about here is ensuring that when someone wants to report something of a security nature - and that could be anything from a minor vulnerability through to a major data breach - channels exist to easily communicate the issue to the organisation involved. That may sound both obvious and simple, but it's frequently a tedious, time-consuming process which results in many serious incidents going unreported.
Let's begin by understanding the current state of affairs.
What's the Problem We're Fixing?
I'd like to share a comment that came through as I was writing this series. I had a guy reach out to me looking for advice after finding a vulnerability in a site which exposed customer telephone numbers along with email and IP addresses. He needed advice because he was worried; here's what he told me:
I don't want to contact the company directly as I know that some companies can get very defensive when this happens
I don't know how the guy found the vulnerability, nor do I know what it actually was (SQL injection, direct object references, etc). All I know at this point is that a website is leaking customer data that puts both the customers and the site owners themselves at risk. Now think about this for a moment - what's the outcome we all want? What do the site, the reporter and indeed the community as a whole want? Well, it's obvious: we all want the vulnerability fixed - we all want the exact same outcome - yet here we are with a situation I see played out over and over again where the person who has found the vulnerability is worried about the reaction of the company involved.
How organisations respond to reports like this can vary wildly. For example, when I reported the Red Cross Blood Service leak last year, they set a gold standard in terms of how they dealt with the incident. Same again with Disqus just a couple of months ago. But compare that to these news headlines:
This is precisely what scares people like the guy who emailed me earlier - he doesn't want to get arrested! He sees stories like these and, out of pure self-preservation, is reluctant to report his findings. We must acknowledge that this is not a healthy situation for anyone.
But I also want to look closer at what exactly went on behind the scenes of these two incidents. As much as we might get up in arms when we hear stories like this, there's often more to them than what immediately meets the eye.
Responsibility Starts with Us
Firstly, I'm conscious that there's debate about whether we should say "responsible" or "coordinated" disclosure. I appreciate the arguments for the latter over the former and that the word "responsible" implies certain things that are very open to interpretation. So, let me focus instead on irresponsible disclosure for a moment because that's easier to get consensus on.
I want to begin by sharing a talk I did at the AusCERT conference earlier this year. If you have a moment, just watch the first 11 minutes of this video:
Make sense? Just in case you didn't have the time to watch it, the tl;dr is that in both these cases the guys involved performed actions and accessed volumes of data that went well beyond merely demonstrating a risk existed in the system involved. Law enforcement only got involved after they went well beyond the scope of "responsible" or "coordinated" (or however you choose to describe it) disclosure.
Now in fairness, there are cases where honest people doing good work within the bounds of what most of us would consider "reasonable" still find themselves in hot water. That's one of the things that needs to change in this industry. However, we also need to acknowledge that it's incumbent upon those of us identifying and reporting these issues to proceed with a level of professionalism that ensures that if an organisation ever does get cranky, they don't have ammunition of any substance that might lead to the sorts of outcomes we saw in those earlier headlines. Shaking a company down for money Uber-style, for example, is both unhealthy for the industry and likely to land you in serious legal trouble.
Let me share some of my own experiences that demonstrate just how hard it can be to get through to an organisation in an ethical fashion.
Doing the Right Thing is (Often) Hard
Let's begin with my attempts to contact 000webhost a couple of years ago. Here I had 13m of their customer records (including plain text passwords, thank you very much) that someone had sent me. I tried finding a contact via WHOIS, which was useless as they'd hidden their contact info (not unusual). I tried the "report abuse" feature (the closest thing I could find to a contact form), except that errored out because I allegedly had an account with them and wasn't logged on (I later learned that someone else had created an account using my email address). I looked at their Twitter account and there'd been no action for years, so I wasn't going to get any traction there. I eventually found a reference to a parent company and, after an hour of effort already, filled out that entity's contact form. Things went around and around without anything happening, so the next day I tried again. I couldn't get in touch with a security person and I was reluctant to provide sensitive information to a helpdesk minion, but I tried anyway and then had trouble responding to the original ticket! I'll save you the pain of reading through all the gory details again here, but the bottom line was that after many hours of effort spread over days of getting nowhere, I loaded the data into Have I Been Pwned (HIBP), which finally got their attention. 8 days after I first tried to reach them, they finally acknowledged the breach and reported it to their customers.
Or how about CloudPets, who exposed a MongoDB of data collected from teddy bears with microphones in them (yes, you read that right). Multiple parties discovered this data in December and attempted to contact CloudPets without success. Then their database was wiped and ransomed by 3 different parties earlier this year. The data was sent to me and I worked with a reporter to try and get in touch with them but, like those who had tried earlier, we were unsuccessful. (Oh - and Context Security also discovered that due to a lack of auth on the Bluetooth, you could remotely connect to the unicorn and make it talk like a Dalek whilst it was sitting in some little kid's bed.) The full details of all the attempts to contact them are in that blog post, but the comment that made a lasting impression on me with regards to the disclosure process was this one the CEO provided to a reporter:
"We did have a reporter, try to contact us multiple times last week, you don't respond to some random person about a data breach.— Michael Kan (@Michael_Kan) February 28, 2017
Yes you do! It's "random people" who found your data exposed to the world!!!
One more just to emphasise how hard it can be for those of us trying to do the right thing: I had a little run-in with Kids Pass around the middle of the year, which all began when someone found a direct object reference risk within their site and attempted (unsuccessfully) to get in touch with them. I chatted with the guy, saw how serious it was, then tried to seek out someone who could help me connect with them:
Unfortunately, they decided that the best way to deal with bad news was to block it out entirely:
Naturally, I wasn't very impressed by this, and the subsequent airtime their approach received led to them doing the right thing, getting in touch and ultimately fixing the problem. This result was only achieved because I was persistent and have a healthy Twitter following I could lean on. But think about what it's like for so many other people - good people trying to do the right thing but perhaps not wanting to continue throwing their own personal time at an issue like this, or not having a social profile they can leverage. That's going to be most people, and when companies respond (or don't respond) like the 3 examples above, it's no wonder serious security problems so frequently go unreported.
Many well-intentioned people simply give up and don't report serious security incidents when the effort is too high or the risk is too great. That has to change.
With that out of the way, let's start with something simple that any organisation can do for free.
Have a Security Vulnerability Reporting Policy
I want to focus on some simple practices that can fundamentally change the way disclosure takes place. Let's start with one of the most basic - some text on a page. More specifically, having a security vulnerability reporting policy.
A security vulnerability reporting policy is an acknowledgement that it's possible someone may find something in your online assets that needs to be reported. That much alone - simply acknowledging that people may want to report a security thing - is a massive step in the right direction. Take a look at Tesla's policy which they place on their legal page and let me highlight some significant points:
- They acknowledge that the work of security researchers who may find vulnerabilities in their assets is valuable to them
- They encourage people to report these vulnerabilities responsibly
- They provide a dedicated email address to send these reports to, and it's clearly an address specifically set up for this purpose
- They provide their PGP key should you wish to encrypt your communications (incidentally, I'm always concerned about some generic outsourced helpdesk person seeing sensitive details about a vulnerability so this addresses that too)
- They make a commitment to investigate legitimate reports and correct any vulnerabilities
- They commit not to take legal action against you or ask law enforcement to investigate if you adhere to their guidelines (which incidentally, are very reasonable)
- They give you a timeframe within which they'll respond if you send them a vulnerability
This is awesome. It's awesome not just because it gives people wanting to report in an ethical fashion a good framework in which to do so, but also because it's the simplest thing in the world to do. It's text on a page. That is all. Any organisation can do this and frankly, the biggest hurdle they'll face is satisfying the legal folks. But that's an effort worth investing in because think about how fundamentally different that makes the disclosure process; would the guy I quoted at the beginning of this post be too scared to contact an organisation who was as receptive as Tesla? Almost certainly not because he'd have confidence in their desire to receive his report.
And in case you're wondering, no, this isn't a bug bounty. I'm going to come back to those later in this series. For now, this is just text on a page, that is all.
Add a security.txt File to Your Website
There's a great piece of work that's been done by a security researcher named Ed Foudil. He's come up with a super smart way to help folks finding vulnerabilities in your web things get in touch, and it couldn't be simpler: you put a file called security.txt on your website with some contact info. That is all.
But this isn't a trivial, spur-of-the-moment thing either; he actually has a draft spec up with the IETF which explains the initiative as follows:
When security risks in web services are discovered by independent security researchers who understand the severity of the risk, they often lack the channels to properly disclose them. As a result, security issues may be left unreported. security.txt defines a standard to help organizations define the process for security researchers to securely disclose security vulnerabilities.
Sound familiar? Ed also has a website at securitytxt.org which explains what it's all about and how to get started:
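To give a sense of just how simple this is, here's roughly what a minimal file might look like. The exact fields come from the draft spec (which is still evolving), and the example.com addresses here are placeholders, not anyone's real contact details:

```
# security.txt - tells researchers how to reach us about security issues
Contact: security@example.com
Encryption: https://example.com/pgp-key.txt
```

That really is the whole thing: a couple of lines pointing people at a mailbox that's actually monitored, plus a PGP key so sensitive details don't have to travel in the clear.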
You can see Ed's own security.txt file on his site. Bitly has one too and as of today, you can also find a security.txt file on HIBP. It's early days and adoption isn't exactly widespread right now, but this is the easiest thing in the world to do so why wouldn't you? Support this initiative, support Ed and support those who genuinely want to report security issues. Go and generate your security.txt file now!
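As a rough illustration of how discoverable this makes things (this is my own sketch, not part of Ed's tooling), a few lines of Python are enough to probe a site for the file and pull out its contact details. The two paths tried below are the locations that have appeared in drafts of the spec, and the field names follow the simple `Name: value` format shown on securitytxt.org:

```python
# Sketch: find a site's security.txt and extract its fields.
# Assumes the draft-spec locations and the simple "Name: value" line format.
from urllib.request import urlopen
from urllib.error import URLError


def parse_security_txt(body):
    """Collect field/value pairs from a security.txt body.

    Fields may repeat (e.g. multiple Contact lines), so values are
    gathered into lists keyed by the lower-cased field name.
    """
    fields = {}
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        if ":" not in line:
            continue
        name, _, value = line.partition(":")
        fields.setdefault(name.strip().lower(), []).append(value.strip())
    return fields


def fetch_security_txt(site):
    """Try the candidate locations and parse the first one that responds."""
    for path in ("/.well-known/security.txt", "/security.txt"):
        try:
            with urlopen("https://" + site + path, timeout=10) as resp:
                return parse_security_txt(resp.read().decode("utf-8"))
        except (URLError, OSError):
            continue
    return None  # no file found - back to hunting for a contact the hard way


if __name__ == "__main__":
    sample = (
        "# Our security policy\n"
        "Contact: security@example.com\n"
        "Encryption: https://example.com/pgp-key.txt\n"
    )
    print(parse_security_txt(sample)["contact"])
```

Compare that one HTTP request to the days of WHOIS lookups, contact forms and helpdesk tickets described above and the value proposition is pretty clear.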
If I'm honest, it was frustrating writing this post because it shouldn't be something I need to write. That there are companies out there who aren't just hard to reach but actually go out of their way to reject the disclosure of serious security issues is sheer recklessness. But here we are.
A fundamental part of fixing data breaches is that we need to collectively strive to do better. We must all acknowledge that none of us are immune to security vulnerabilities and it must be one of our highest priorities to engage with those wanting to bring them to our attention. I hope this post encourages organisations to reflect on these simple questions:
How likely is someone to disclose a security vulnerability if they identify one in our service? And how easy are we making it for them to do so responsibly?
Consider the answers, then go and fix the problem!