Thursday, 30 May 2013

Understanding XSS – input sanitisation semantics and output encoding contexts

Cross site scripting (henceforth referred to as XSS) is one of those attacks that’s both extremely prevalent (remember, it’s number 2 on the OWASP Top 10) and frequently misunderstood. You’ll very often see some attempt at mitigating the risk but then find it’s easily circumvented because the developers weren’t fully aware of the attack vectors.

Last week someone flicked me over a great example of this after having read my previous post Here’s why we keep getting hacked – clear and present Billabong failures. In that post I pointed out the ease with which you could decorate Billabong’s registration page with the beautiful Miranda Kerr and a slightly stoned looking Bugs Bunny. In this post, the ramification of getting XSS wrong is someone’s session being stolen and their personal details pulled out, all because of this:

Stealing Billabong cookies

I’ll come back to that; let’s first go back to the title and focus on input sanitisation and output encoding contexts. If XSS is an entirely new concept to you, start by taking a look at my post on it here, then come back to this one.
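
To set the scene on what “output encoding contexts” actually means, here’s a minimal sketch of my own (it’s not from the post) using the HttpUtility encoders that ship with ASP.NET. The input string is a hypothetical registration-form value:

    using System;
    using System.Web; // needs a reference to the System.Web assembly for HttpUtility

    class OutputEncodingContexts
    {
        static void Main()
        {
            // Hypothetical untrusted input, e.g. a name submitted on a registration form
            var untrusted = "<script>alert(document.cookie)</script>";

            // HTML body context, e.g. <p>Hello, NAME</p>
            Console.WriteLine(HttpUtility.HtmlEncode(untrusted));

            // HTML attribute context, e.g. <input value="NAME">
            Console.WriteLine(HttpUtility.HtmlAttributeEncode(untrusted));

            // JavaScript string context, e.g. var name = 'NAME';
            Console.WriteLine(HttpUtility.JavaScriptStringEncode(untrusted));
        }
    }

The reason there are three different calls is that each output context has its own escaping rules; HTML-encoding a value and then dropping it into a JavaScript string, for example, still leaves the door open, and that mismatch of contexts is exactly the sort of gap this post is about.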

Read more

Wednesday, 29 May 2013

The responsibility of public disclosure

There’s this debate that goes round and round about a process commonly known as responsible disclosure, or in other words, notifying the owner of a system that their security sucks and giving them the opportunity to fix it rather than telling the great unwashed masses and letting them have at a vulnerable system. The theory goes that responsible disclosure is the ethical thing to do whilst airing a website’s security dirty laundry publicly makes you an irresponsible cowboy, or something to that effect.

But of course these things are not black and white. On a daily basis I’ll see tweets about how a website is storing credentials in plain text: “Hey [insert name here], what’s the go with emailing my password, aren’t you protecting it?”. Is this “responsible”? I think it’s fair to say it’s not irresponsible; I mean, the risk is obvious, right?

How about security risks such as an XSS flaw, which might be a little more grey but is still shared frequently in public? It’s not exactly going to land you the mother lode of sensitive data in a couple of clicks, but it could be used as part of a more sophisticated attack. Or move right on to the other end of the scale where serious flaws lead to serious breaches. It might be default credentials on an admin system or publicly accessible backups; the point is it’s in another realm of risk and impact altogether.

Anyway, the reason for this post is that a number of events in recent times have given me pause to think about what I consider responsible disclosure. These events are numerous and include incidents where I’ve ticked every responsibility box in the book and incidents where I’ve been accused of being, well, a cowboy. I wanted to capture – with as much cohesion as possible – what I consider to be responsible disclosure because, sure enough, I’ll be called on this again in the future and I want to have a clear point of view that simply won’t fit into 140 characters.

I’d like to start with two disclosure stories that took very different tacks with different outcomes for both the sites and me personally. Here’s what happened:

Read more

Tuesday, 28 May 2013

Security is hard, insecurity is easy – demonstrating a simple misconfiguration risk

One could argue that security is hard. Not all aspects of it, mind you, but the prevalence of website hacks would seem to indicate that plenty of people are struggling to get it right.

On the other hand, insecurity can be very easy. What I mean by this is that sometimes it can be the smallest change to a website that blows the security wide open.

Last week someone passed me a private note about Black and Decker, or more to the point, they passed me a link to an unsecured ELMAH log. For the uninitiated, ELMAH logs and their discovery via Google are something I’ve written about before. In fact in this very case it was someone simply searching for some info on ELMAH that led to this discovery – it’s that easy.
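
If you’re running ELMAH yourself, the fix is a couple of lines of configuration rather than code. This is a minimal web.config sketch of my own (not taken from the post), and the “Admin” role is just a hypothetical placeholder:

    <!-- web.config fragment: keep remote access to elmah.axd switched off, and if
         you really must expose it remotely, lock it down to an authenticated role -->
    <elmah>
      <security allowRemoteAccess="false" />
    </elmah>
    <location path="elmah.axd">
      <system.web>
        <authorization>
          <allow roles="Admin" />  <!-- hypothetical role name -->
          <deny users="*" />
        </authorization>
      </system.web>
    </location>

With allowRemoteAccess left at false, elmah.axd only answers requests from the local machine, which on its own closes off the Google-discoverable scenario described above.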

Now I want to stress that this is not intended to be all about Black and Decker; before posting this I privately contacted them and they have now correctly secured the logs. In fact there are tens of thousands of Google results for publicly exposed ELMAH logs, so clearly this is a very prevalent risk. Let me first share the video on what you can do with exposed ELMAH logs, then I’ll come back around to the point of this post:

Read more

Monday, 27 May 2013

Talking with Scott Hanselman on honeypots, pineapples and SSL

For many of you, Scott Hanselman will need no introduction and is a very familiar face, voice and writer. Among the many good things Scott does to support the web development community (and that’s not just the Microsoft folks either), he’s the man behind the Hanselminutes podcast, which I was very happy to join recently. In fact this remains one of the very few podcasts where I actually listen to every episode – regardless of the direct relevance to me – simply because it’s delivered in such a professional manner and I know I’m going to learn something each time.

The podcast has gone out under the title Are you secure? WiFi Honeypots, Pineapples and SSL with Troy Hunt which is pretty self-explanatory. As per the title, we mostly discuss the risks presented by using public wifi plus the importance of HTTPS for those of us who are building web apps. Let me share some supplementary material which I’ve either touched on in that talk or will be of relevance to interested listeners:

  1. SSL is not about encryption
  2. OWASP Top 10 for .NET developers part 9: Insufficient Transport Layer Protection
  3. 5 ways to implement HTTPS in an insufficient manner (and leak sensitive data)
  4. Your login form posts to HTTPS, but you blew it when you loaded it over HTTP
  5. The beginners guide to breaking website security with nothing more than a Pineapple
  6. Pineapple Surprise! Mixing trusting devices with sneaky Wi-Fi at #wdc13

There’s a lot more related content beneath those but that’s a good starting point. I hope you enjoy the podcast!

Monday, 20 May 2013

Your login form posts to HTTPS, but you blew it when you loaded it over HTTP

Here’s an often held conversation between concerned website user and site owner:

User: “Hey mate, your website isn’t using SSL when I enter my password, what gives?!”

Owner: “Ah, but it posts to HTTPS so your password is secure! We take security seriously. Our measures are robust.” (and other random, unquantifiable claims)

Loading login forms over HTTP renders any downstream transport layer security almost entirely useless. Rather than just tell you what’s wrong with this, let me show you precisely why, using a site that implements this pattern:
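
Before getting into the demonstration, it’s worth showing how cheap the fix is in ASP.NET MVC. This is a sketch of my own rather than anything from the post, and the controller and model names are hypothetical:

    using System.Web.Mvc;

    public class LoginModel  // hypothetical view model
    {
        public string Email { get; set; }
        public string Password { get; set; }
    }

    public class AccountController : Controller
    {
        // RequireHttps means the form itself arrives over TLS: a GET over plain
        // HTTP is redirected to HTTPS before the user ever sees the login page
        [RequireHttps]
        public ActionResult Login()
        {
            return View();
        }

        [RequireHttps]
        [HttpPost]
        public ActionResult Login(LoginModel model)
        {
            // ...validate the credentials that now travelled over a trustworthy channel...
            return RedirectToAction("Index", "Home");
        }
    }

The whole point of the post is that posting to HTTPS isn’t enough when the form was loaded over HTTP; serving the form itself securely is what closes that gap.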

Read more

Thursday, 16 May 2013

Hack yourself first – how to go on the offence before online attackers do

The unfortunate reality of the web today is that you’re going to get hacked. Statistically speaking at least, the odds of you having a website without a serious security risk are very low – 14% according to WhiteHat’s State of Web Security report from a couple of weeks ago. Have enough websites for long enough (as many organisations do), and the chances of you getting out unscathed aren’t real good.

There’s this great TEDx talk by Jeremiah Grossman titled Hack Yourself First where he talks about the importance of actively seeking out vulnerabilities in your own software before the evildoers do it for you. In Jeremiah’s post about the talk, he makes a very salient point:

Hack Yourself First advocates building up our cyber-offense skills, and focusing these skills inward at ourselves, to find and fix security issues before the bad guys find and exploit them.

I love this angle – the angle that empowers the individual to go out and seek out risks in their own assets – as it’s a far more proactive, constructive approach than the one we so often see today, which is the “after it breaks, I’ll fix it” approach. Perhaps that’s not always a conscious decision but it all too often turns out to be the case. It also advocates for the folks writing our apps to develop the skills required to break them, which is a big part of what I’ve been advocating for some time now and features heavily in many posts on this blog as well as throughout the Pluralsight training I recently released. If developers do not understand the risk – I mean really understand it to the point where they know how to exploit it – then you’re fighting an uphill battle in terms of getting them to understand the value of secure coding.

It’s not just the dedicated security folks talking about hacking yourself first. The other day I was listening to Scott Hanselman talking about WordPress security on his podcast and he made the following point:

I know when I’m writing code I’m not thinking about evil, I’m just trying to think about functionality.

Which of course is perfectly natural for most developers – we build stuff. Other people break stuff! But he goes on to say:

When was the last time I sat down and spent a day or a week trying to break my site?

And we’re back to hacking yourself first, or in other words, making a concerted attempt to find vulnerabilities in your own code before someone else does. As Jeremiah referred to it, building up cyber-offense skills for developers. Developing the ability to detect these risks is easy once you know what to look for; in fact many of them are staring you right in the face when you browse a website, and that’s what I want to talk about here today.

Let me share my top picks of website security fundamentals that you can check on any site right now without doing anything that a reasonable person would consider “hacking”. I make this point for two reasons: firstly, you really don’t want to go messing things up in your own live site, and testing for risks such as SQL injection has every chance of doing just that if a risk is present. The other reason is that by picking non-invasive risks you can assess them on other people’s sites. I’ll come back to why I’m saying this and the context it can be used in at the end of this post; the point is that these are by no means malicious tests – think of them as the gateway drug to identifying more serious risks.

This is going to be a lengthy one so let me give you a little index to get you started:

  1. Lack of transport layer protection for sensitive data
  2. Loading login forms over an insecure channel
  3. Secure cookies
  4. Mixed mode HTTP and HTTPS
  5. Cross Site Scripting (XSS)
  6. Password reminders via email
  7. Insecure password storage
  8. Poor password entropy rules
  9. Denial of service via password reset
  10. HTTP only cookies
  11. Internal server error messages
  12. Path disclosure via robots.txt
  13. Sensitive data leakage via HTML source
  14. Parameter tampering
  15. Clickjacking and the X-Frame-Options header
  16. Cross Site Request Forgery (CSRF)

Remember, every one of these is remotely detectable and you can find them on any website with nothing more than a browser. They’re also web platform agnostic, so everything you read here is as relevant to ASP.NET as it is to PHP as it is to Java – there are no favourites here! I’m going to draw on lots of examples from previous posts and live websites to bring this back down to earth and avoid focussing on theory alone. Let’s get into it.
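
To give a sense of just how remotely detectable some of these are, here’s a rough sketch of my own (not from the post) that checks a few of the header-based items – the X-Frame-Options header from item 15 and the cookie flags from items 3 and 10 – against a hypothetical URL. Everything it prints is equally visible in the browser’s developer tools:

    using System;
    using System.Net.Http;

    class RemoteHeaderCheck
    {
        static void Main()
        {
            // Hypothetical target - only point this at sites you're authorised to assess
            var url = "https://www.example.com/";

            using (var client = new HttpClient())
            {
                var response = client.GetAsync(url).Result;

                // Item 15: is the clickjacking defence header there at all?
                Console.WriteLine("X-Frame-Options present: " +
                    response.Headers.Contains("X-Frame-Options"));

                // Items 3 and 10: do the cookies carry the secure and HttpOnly flags?
                if (response.Headers.TryGetValues("Set-Cookie", out var setCookies))
                {
                    foreach (var cookie in setCookies)
                    {
                        var secure = cookie.IndexOf("secure", StringComparison.OrdinalIgnoreCase) >= 0;
                        var httpOnly = cookie.IndexOf("httponly", StringComparison.OrdinalIgnoreCase) >= 0;
                        Console.WriteLine(cookie + " [secure: " + secure + ", httponly: " + httpOnly + "]");
                    }
                }
            }
        }
    }

It does nothing a browser wouldn’t do by simply loading the page; it just reads back what the server already volunteers in its response headers.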

Read more

Monday, 13 May 2013

Clickjack attack – the hidden threat right in front of you

XSS protection: check!

No SQL injection: check!

Proper use of HTTPS: check!

Clickjacking defences: uh, click what now?!

This is one of those risks which doesn’t tend to get a lot of coverage but it can be a malicious little bugger when exploited by an attacker. Originally described by Jeremiah Grossman of WhiteHat Security fame back in 2008, a clickjacking attack relies on creating a veneer of authenticity under which lies a more sinister objective.

Imagine you visit a website and see the following:

Win an iPad website

Free stuff is always good so you click on the big button and WAMMO! You’ve just been clickjacked. You see, whilst you think you just clicked a “WIN” link, in reality you just clicked this instead:

Banking website

This, of course, is your bank. You are logged in and your bank provides a handy option to transfer all your money with a single click. But of course you don’t know you’ve just given Mr Dotcom all your money because you never even saw the link. This is a very simple example of a clickjacking attack; let’s take a look at the mechanism underneath and then talk about defences.
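
To foreshadow where the defences land, the main browser-level control is the X-Frame-Options response header. Here’s a minimal ASP.NET sketch of my own (not from the post) that emits it on every response from the Global.asax code-behind:

    using System;
    using System.Web;

    // Global.asax.cs
    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Tell the browser this site must never be rendered inside someone else's
            // frame, which is exactly what the invisible banking page above depends on.
            // Use SAMEORIGIN instead of DENY if the site legitimately frames itself.
            Response.AppendHeader("X-Frame-Options", "DENY");
        }
    }
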

Read more

Wednesday, 8 May 2013

Here’s why you can’t trust SSL logos on HTTP pages (even from SSL vendors)

A couple of days ago I wrote about Why I am the world’s greatest lover (and other worthless security claims) and it really seemed to resonate with people. In short, whacking a seal on your website that talks about security awesomeness in no way causes security awesomeness. Andy Gambles gets that and shared this tweet with me:

@troyhunt so when an SSL vendor is saying stuff like this http://st.cm/18WVy6O  does that give us any hope?

So let’s check out exactly what’s going on here – and you really need the video to understand the fatal flaw in the logic of SSL logos coming down over HTTP:

So there you go – it can be that simple. How I MitM’d the page so easily is not really the point; the point is that an SSL logo on an unprotected page is as good as worthless (and frankly they’re not much good on protected pages either).

Monday, 6 May 2013

Why I am the world’s greatest lover (and other worthless security claims)

I’ve been considering purchasing one of these t-shirts:

 

The World's Greatest Lover T-shirt

This shirt would announce to everyone who crosses my path that I am, in fact, the world’s greatest lover. They would know this because I have a t-shirt that tells them so and it would give them enormous confidence in my sexual prowess.

If ever I was challenged on the claim, I could quite rightly say that nobody has ever demonstrated that this is not the case and there are no proven incidents that disprove it.

Sound ridiculous? Of course it is but somehow we’ve come to accept this practice – or at least tolerate it – by virtue of images like these:

Norton Secured - Powered by VeriSign

Read more

Friday, 3 May 2013

Pineapple Surprise! Mixing trusting devices with sneaky Wi-Fi at #wdc13

I’m pushing the “Publish” button on this just before I go on stage at Web Directions Code because, all things going well, what I’m going to talk about in this post will form part of my demo on securing web services.

Web Directions Code stage

I’m making some (admittedly very simple) code available and providing some resources that will hopefully help everything I talk about with regard to unprotected wireless traffic make sense. I’d like to begin by introducing you to Pineapple Surprise!

Stack Overflow dnsspoof

Wait – what?! Where’s my Stack Overflow?! I mean I’m seeing stackoverflow.com in the address bar, what’s going on here?! It gets worse:


That little usr cookie down the bottom – that’s the money shot. Create a cookie in the browser with that name and value while the session is active (yes, it has since expired, just in case you were wondering) and wammo! You’re now me on Stack Overflow. You can go and respond to every security question about encryption and tell them to use ROT13, you can abuse Jon Skeet for not knowing his covariance from his contravariance and you can respond to any question about “How do I use ASP.NET to…” by telling them to use SharePoint. Except it’s not you saying that, it’s me, and I’ll cop the abuse for it.
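
To make the “you’re now me” point concrete: a session cookie is nothing more than a bearer token, so whoever holds the value holds the session. A hypothetical sketch of my own (not from the post) with an obviously placeholder value:

    using System;
    using System.Net;
    using System.Net.Http;

    class CookieReplay
    {
        static void Main()
        {
            // Placeholder for a value sniffed off open Wi-Fi; whoever presents it
            // is, as far as the website is concerned, the victim
            var capturedUsrValue = "CAPTURED-VALUE-GOES-HERE";

            var cookies = new CookieContainer();
            cookies.Add(new Uri("http://stackoverflow.com"), new Cookie("usr", capturedUsrValue));

            var handler = new HttpClientHandler { CookieContainer = cookies };
            using (var client = new HttpClient(handler))
            {
                // Every request now rides on the victim's authenticated session
                var html = client.GetStringAsync("http://stackoverflow.com/").Result;
                Console.WriteLine("Fetched " + html.Length + " characters as the cookie's owner");
            }
        }
    }

No passwords are involved at any point, which is why letting cookies like this travel over unencrypted Wi-Fi is such a problem.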

Let me explain what’s happening here.

Read more

Wednesday, 1 May 2013

Introducing the OWASP Top 10 Web Application Security Risks for ASP.NET on Pluralsight

I’ve been a little bit busy the last few months and here’s why – my first Pluralsight course, the OWASP Top 10 Web Application Security Risks for ASP.NET. Actually, if I’m honest, it’s been a lot longer than that in the making, as my writing about the OWASP Top 10 goes all the way back to right on three years ago now. It began with the blog series, followed by the free eBook, then last year the instructor-led training for QA and now, finally, a complete online video course via Pluralsight.

Pluralsight - hardcore developer training

For the uninitiated, Pluralsight is what they call “hardcore developer training” and it’s predominantly produced by fellow MVPs, community leaders and other subject matter experts who many of you would be very familiar with (there’s a great little article on TechCrunch with a very glowing overview of Pluralsight here). Pluralsight’s authors really are top notch and it’s an honour to be able to contribute to my own little corner of expertise. The content is subscription based and starts at only $29 per month (and there’s a free 10 day, 200 minute trial if you’re not sure). You can watch it online via the browser or native clients for a variety of devices and, depending on your subscription level, even save the courses offline.

Pluralsight as a training service is an absolutely fantastic resource and one I have used on many, many different occasions now. The breadth of content is huge and includes everything from the latest enhancements to ASP.NET 4.5, nitty gritty detail about LINQ, lots of fancy tricks in Entity Framework and a heap more in the ASP.NET space. Then of course you’ve got everything you could want to know about client technologies such as jQuery, CSS, HTML and on and on. Take a spin through the top courses and you start to get a sense of the breadth of content.

One thing I’ve learnt working with a bunch of different people over the years is that everyone has their own style of learning. We all like absorbing information in different ways and what I like about the structure of Pluralsight is that you have the choice to either sit through an entire course and go through in a very structured fashion, pick just one module on a discrete topic and learn everything about it or drill down even further and just pick one clip from within the module.

I’ve used each of these approaches in the past, most recently when I really wanted to get up to speed on Entity Framework 5 enum support. There’s plenty of info out there on this but I knew that Julie Lerman is the person to go to when it comes to EF, so I spent 6 minutes watching that topic in her Getting Started with Entity Framework 5 course and had exactly what I needed to, well, get started!

Let me tell you a little more about what I’ve created.

Read more