Remember when web security was all about looking for padlocks? I mean in terms of the advice we gave everyday people, that's what it boiled down to - "look for the padlock before entering passwords or credit card info into a website". Back in the day, this was pretty solid advice too as it gave you confidence not just in the usual confidentiality, integrity and authenticity of the web traffic, but in the legitimacy of the site as well. If it had a padlock, you could trust it and there weren't a lot of exceptions to that.
But as time has gone by and HTTPS has become increasingly ubiquitous and obtainable by all, the veracity of that advice has taken a serious hit. So much so that Barclays Bank have found themselves in hot water over this ad:
See the problem? It's this quote:
Look, in there, you need a padlock when you pay for stuff. If there isn't one, the website could be fake.
The implication here is that this site could be fake:
Whilst this site is legitimate:
This led to the Advertising Standards Authority (ASA) in the UK classifying the ad as "misleading" and nuking it off the air. Barclays responded by saying they'd merely intended to advise consumers to look for the padlock before paying (which is reasonable) and to "always check the seller's genuine" (which is much harder). But clearly, when you watch that ad and consider what everyday people will take away from it, it is indeed misleading. The ASA upheld their assessment and countered with the following very reasonable statement:
consumers were generally unlikely to have a detailed understanding of the website padlock symbol and the general steps required to ensure a website was safe
And let's be fair to Barclays - it's not just them offering outdated and inaccurate advice about the true meaning of the padlock:
Taking a mandatory Cyber Awareness Course. The correct answer to this question is: The traffic between the browser and the webshop is encrypted. But the option does not exist. cc @troyhunt pic.twitter.com/bRM5BnVC6l
— Tim Skauge (@tims) May 3, 2018
Now I'm going to work on the assumption that readers here generally have a good grasp of why this no longer makes any sense, but just as a really quick recap: HTTPS (and consequently the padlock in the browser) is rapidly becoming ubiquitous as the barriers to adoption fall. Most notably, certificates are now free through services like Let's Encrypt and Cloudflare, and they're dead easy to set up, so there goes another barrier too. As of a few months ago, 38% of the world's top 1 million websites were served over HTTPS and that figure was up by a third in only 6 months, a trend that's continued for some time now. But the presence of HTTPS is in no way a judgement call on the trustworthiness of the site:
HTTPS & SSL doesn't mean "trust this." It means "this is private." You may be having a private conversation with Satan.
— Scott Hanselman (@shanselman) April 4, 2012
As with other forms of encryption, HTTPS is morally neutral; it could be a good site, it could be a bad site - the padlock icon doesn't have anything to do with that. Which brings me to the title of this blog post: the positive visual indicators we've become so accustomed to are increasingly useless and instead, we need to be focusing more on the negative ones. Let me explain:
The Uselessness of Positive Indicators
Last year, I wrote a long piece on certs and phishing which I'll come back to and talk about more a little later on. One of the images in that piece was this one:
Should we trust this site? It has a padlock! No, of course we shouldn't trust it and it's a perfect example of where a positive visual indicator is, in fact, misleading. This particular cert was issued by Comodo when the owner of the site put it behind Cloudflare and per my earlier comments on the trustworthiness of sites served over HTTPS, there's no reason why that site shouldn't have a cert and subsequently, a padlock. (Amusingly, this sort of thing hasn't stopped sellers of commercial Comodo certificates berating Let's Encrypt for issuing them to phishing sites, but you don't have to look far to understand why they're upset.)
I saw another perfect example of this just the other day, this time by way of a Spotify phish:
Ouch, can think of a lot of people who would fall for this... and a green padlock - must mean it's secure! @Spotify cc: @troyhunt pic.twitter.com/WZ1gaTCFSc
— George McCarron (@george_mccarron) April 29, 2018
And George is totally right - a lot of people would fall for this, particularly if they were following Barclays' advice. Clearly, the mere presence of a positive visual indicator is an insufficient means of making a call on the legitimacy of a website. And yes, we all know the padlock never meant the site wasn't going to be nasty, but we also know the history of how the masses have been educated about it and the assumptions they consequently draw. Which brings me to the next point - let's talk about negative visual indicators.
The Value of Negative Indicators
The Spotify example above is going to serve multiple purposes in this blog post and the first one is that it shows how misleading the padlock icon can be. But the second purpose it serves is to show what a negative visual indicator looks like, which is exactly what you'll see if you go to membership[.]spotifyuser[.]validationsupport[.]userbilling-secure.com today:
Not real subtle, right? Whilst George was spot on about people falling for the site due to the presence of the positive indicator, nobody is falling for it with the negative indicator! There's a simple and obvious reason why:
Positive security indicators are readily obtainable or spoofable, but nobody ever wants to show a negative indicator on a legitimate site!
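That deceptive site warning above comes courtesy of Google Safe Browsing, and you can query the same blocklist programmatically via the Safe Browsing Lookup API. Here's a minimal sketch in Python - the API key is a placeholder you'd obtain from Google yourself, and the URL being checked is purely illustrative:

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder - issued via the Google Cloud console

body = {
    "client": {"clientId": "padlock-demo", "clientVersion": "1.0"},
    "threatInfo": {
        "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
        "platformTypes": ["ANY_PLATFORM"],
        "threatEntryTypes": ["URL"],
        # Hypothetical URL for illustration - swap in the one you want to check
        "threatEntries": [{"url": "http://example-phish.invalid/"}],
    },
}

resp = requests.post(
    "https://safebrowsing.googleapis.com/v4/threatMatches:find",
    params={"key": API_KEY},
    json=body,
)
resp.raise_for_status()

# An empty object means no match; a flagged URL comes back under "matches"
print(resp.json().get("matches", "No matches - not currently flagged"))
```

The point being that the negative indicator isn't magic; it's a lookup against a curated list of known-bad sites, which is exactly why it's so hard for a legitimate site to end up wearing one.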
Now, I also said "spoofable" because we have situations like this:
That's from my post of many years ago on why I'm the world's greatest lover, a time well before ubiquitous HTTPS but that didn't stop websites proclaiming their security prowess by way of images on the page.
Back to George's Spotify phishing site for a moment: his tweet came through during the night for me and the site was already flagged as deceptive by the time I woke up and saw it. The certificate transparency logs suggest the cert was only obtained 10 hours before George's tweet; based on the time I saw Chrome's warning, there was a maximum of about an 18-hour window between the phisher getting the cert and users of Google's browser seeing a massive negative visual indicator. So, the other purpose this example serves is to illustrate that even in the presence of HTTPS, we have very effective controls available to mitigate phishing attacks. For all the commercial CA people decrying Let's Encrypt issuing certs to phishing sites, let's not forget this control which, especially in light of revocation being broken anyway, is enormously powerful.
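Incidentally, checking issuance times like this is easy to do yourself because the CT logs are public. A quick sketch in Python against crt.sh's JSON endpoint (an unofficial interface, so treat the exact field names and its availability as assumptions on my part):

```python
import requests

# "%." is a wildcard prefix, so this also matches certs issued to subdomains
domain = "userbilling-secure.com"
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{domain}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

for entry in resp.json():
    # not_before marks when the cert became valid - a rough proxy for issuance
    print(entry["not_before"], entry["common_name"])
```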
As a more general theme beyond just phishing, negative visual indicators can be enormously effective in other scenarios too:
Want to go to the Daily Mail over the secure scheme? You're going to get a great big warning before you need to drill down into the advanced section and proceed to what's clearly then marked as an unsafe link. And again, we come back to the point about training people to look for negative indicators and act on those rather than to simply assume everything is fine in the presence of a positive one.
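That interstitial is just the browser doing what any well-behaved TLS client does: refusing to proceed when certificate validation fails. You can reproduce the underlying behaviour trivially - a sketch below, noting the Daily Mail was merely the example of the day and may well serve valid HTTPS by the time you read this:

```python
import requests

# Certificate validation is on by default; a host that can't present a valid
# cert for the requested name raises an error instead of silently loading
try:
    requests.get("https://www.dailymail.co.uk/", timeout=10)
    print("Valid HTTPS - no warning required")
except requests.exceptions.SSLError as error:
    print(f"Negative indicator territory: {error}")
```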
The Futility of Neutral, User-Interpreted Indicators (URLs)
Last month, I hopped over to Hawaii for the inaugural Loco Moco Sec conference. One of the most highly anticipated talks for me was Emily Schechter's; she's a product manager on the Chrome team. Emily did a talk titled "The Trouble with URLs, and how Humans (Don’t) Understand Site Identity" and if you have any interest in the topic at all, it's essential viewing:
When it comes to the topic of how humans interact with browsers and how they make trust decisions, few people are better equipped to comment than Emily, not least because Google invests a heap of effort into focus groups and other means of measuring how people actually behave. As Emily spoke, I snapped pics from the front row and tweeted a few of them:
Chrome will hide the “https” scheme prefix in the future as it’s redundant with the padlock and “secure” text pic.twitter.com/lYw84gNIYx
— Troy Hunt (@troyhunt) April 6, 2018
Emily talks about why Google is intending to hide the HTTPS scheme at about the 5-minute mark in that video and it's worth a watch. It makes sense to me, but my tweet did result in some rather "enthusiastic" feedback from the Twitters, for example:
Here’s a slide @emschec showed - which one is the correct site? How does a user know? People making security decisions based on the URL alone is fraught with problems. pic.twitter.com/oUDTYi7b9V
— Troy Hunt (@troyhunt) April 7, 2018
I've included my reply in there because Emily's subsequent slide explains the problem perfectly. Humans are lousy at interpreting URLs, which is precisely why the Netflix phishing site shown earlier works, and the same again for the Spotify one. Here's another great example:
Email from Twitter:
— Daniel Crabtree (@DanielCrabtree) May 6, 2018
How do I know an email is from Twitter?
Links in this email will start with “https://” and contain “https://t.co/FjKaCAmjKd.”
So https://twitter.com.evil-example.com is safe?
Don't think so. @troyhunt
This tweet is bang on and again, it illustrates why the URL alone - which is frequently only partially displayed anyway - is an absolutely lousy indicator of trustworthiness.
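Daniel's point is easy to demonstrate: a naive "starts with" check on a link passes for an attacker-controlled domain, whereas actually parsing out the hostname does not. A quick sketch (evil-example.com being, of course, hypothetical):

```python
from urllib.parse import urlparse

url = "https://twitter.com.evil-example.com/login"

# The naive check implied by 'links will start with "https://twitter.com"'
print(url.startswith("https://twitter.com"))  # True - and completely wrong

# Parse the URL and compare the actual hostname instead
host = urlparse(url).hostname
print(host)  # twitter.com.evil-example.com
print(host == "twitter.com" or host.endswith(".twitter.com"))  # False
```

Still don't believe me? How about this site: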
You can read the full blog post on what's happening here, but the crux of it is that by using Punycode in domain names, Firefox will render xn--80ak6aa92e.com precisely as you see it above, which is actually "аррӏе.com" (note that the "l" is actually the Cyrillic letter "ӏ" - the two look identical here!)
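You can verify the sleight of hand for yourself; Python's standard library ships a punycode codec that decodes the encoded label back to its Unicode form:

```python
# Decode the ACE label (the bit after "xn--") back to Unicode
label = "xn--80ak6aa92e"
decoded = label[len("xn--"):].encode("ascii").decode("punycode")

print(decoded)              # аррӏе - renders just like "apple"
print(decoded == "apple")   # False - not a single character actually matches
print([hex(ord(c)) for c in decoded])  # all Cyrillic code points
```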
Certificate Authorities and Visual Indicators
I reckon we're overdue a rethink on the efficacy of visual indicators in browsers. Here's another great example of this - EV certificates. I mentioned I'd come back to this when I linked to my post on the perceived value of them and I'll refer to a slide from a recent talk I've been doing (link to the relevant point):
Because this is precisely what EV is - it only works if people change behaviour when they don't see it! The commercial CAs will tell you that you need EV to increase confidence and differentiate yourself from the phishing sites, but it simply doesn't work that way:
“EV isn’t a good defense against phishing attacks” and other wisdom from @emschec pic.twitter.com/dfTkb4FOFI
— Troy Hunt (@troyhunt) April 6, 2018
In fact, Google is already signalling that they're looking at removing the EV indicator from Chrome and if you want to get a sense of what that might look like, give this a go:
Want to see if losing the EV UI will make a difference to browsing habits? There's now a flag in Chrome to disable the EV indicator! chrome://flags/#simplify-https-indicator pic.twitter.com/Hv175WrFOP
— Scott Helme (@Scott_Helme) April 27, 2018
Frankly, the CAs are struggling to find any meaningful role to play as it relates to phishing. Phishing sites have certs, revocation is broken and EV is useless not only due to the points mentioned above, but because even the commercial CAs aren't sure who should have EV certs! For example, from that talk of mine:
That's stripe.ian.sh with an EV cert that shows the name of the company he registered (Stripe Inc) in Safari on iOS but just the domain name itself in Chrome on iOS (it's entirely up to the client how they choose to display the presence of EV, if they display it at all). However, as of today, every browser just displays the domain name because Comodo revoked his EV cert. So he went and got one from GoDaddy and... that one was revoked too! Why? Well apparently, there were a couple of risk factors and whilst they were never clearly defined, it's pretty obvious what they were:
There were risk factors for the EV business model.
— CopperheadOS (@CopperheadOS) April 3, 2018
And then came the totally unforeseen twist in the saga just last week - Comodo admitted to wrongdoing in cancelling the cert, apologised and offered Ian a new one. Kudos to them, obviously, but it shows just how much of a mess the whole thing is; at least as it relates to the trustworthiness of positive visual indicators, the whole value proposition is increasingly questionable anyway.
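If you're wondering what a client actually has to work with when deciding whether to show that EV treatment, the organisation details come straight out of the certificate's subject. A minimal sketch using Python's standard library (whether stripe.ian.sh is still serving a cert by the time you try this is anyone's guess):

```python
import socket
import ssl

def cert_subject(host: str, port: int = 443) -> dict:
    """Fetch the validated certificate's subject fields for a host."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # subject is a tuple of RDNs, each a tuple of (name, value) pairs
    return {name: value for rdn in cert["subject"] for name, value in rdn}

# EV certs carry fields like organizationName and jurisdiction details;
# DV certs typically show little more than the commonName
print(cert_subject("stripe.ian.sh"))
```

Whether any of that subject information makes it into the UI is, as above, entirely the client's call.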
Summary
When I first saw that Barclays ad appear back in November, I went and registered totally-trustworthy-site-because-it-has-a-padlock.com, fully intending to make a bit of a thing out of how misleading the whole "look for the padlock" message was. That was until the ASA beat me to it! But the passage of time has also provided many of the other examples mentioned above, all just since last year, not least of which are Google's proposed changes to visual indicators in the browser.
On those changes, there will likely come a time when the positive visual indicator that is the padlock can be removed entirely. Think about it - when (almost) every site is HTTPS anyway, why have it? You could instead fall back to ever more negative visual indicators when sites aren't served over HTTPS, and we're only a couple of months out from seeing the beginning of that. Wouldn't it be great if we could kill off the padlock and the indication of the HTTPS scheme altogether and just flag the exceptions? We're getting there.
So what can we conclude from all of this? Pretty much per the title: the education needs to change from looking for positive visual indicators as a sign of trustworthiness to looking at the negative ones as a sign of a site to approach with caution. The browsers are increasingly helping us do this and Chrome in particular has led the charge, first putting warnings on insecure login pages, then on insecure input pages of any kind and, in the very near future, on all insecure pages regardless of what they do. So let's focus on those, drive awareness of that and accept that padlock icons are rapidly becoming a sign of a bygone era.