Ben Summers’ blog

Strategies for implementing Content Security Policy

Content Security Policy (CSP) is a feature of modern web browsers which helps mitigate some content injection vulnerabilities in web applications. While it’s no substitute for writing a secure application, it’s useful in minimising the effect of these vulnerabilities.

I recently implemented a strict CSP in a reasonably old web application. As coding started in 2006, it used a few techniques which are a little out of date, and needed some work to cope with a CSP which was strict enough to be worth using.

I found I only needed to use a few strategies in making the required changes, and once I had the strategies in mind, converting old code to be CSP-compatible was pretty much a mechanical task. While a little dull, it was a welcome opportunity to review my old code and improve it.

The required Content Security Policy

My aim was to implement this CSP:

default-src 'self'; style-src 'self' 'unsafe-inline'

The effect of this policy is to

  • only allow resources (images, stylesheets and scripts) to be loaded from the same host, protocol and port as the page.
  • ensure all resources are served via encrypted connections, as the page itself is served over https. (a policy of default-src https: would explicitly only allow encrypted resources, but allow them from any host.)
  • disable eval() and other hidden eval-like constructs in JavaScript.
  • disable inline JavaScript, both in event handlers and <script> elements.
  • allow inline styles using the style attribute, as an exception to the general default-src rule.

I allowed inline styles as there were a number of places which used attributes like style="display:none" to temporarily hide parts of the page. While unsafe-inline sounds pretty ominous, there’s nothing you can do with an inline style that you couldn’t do with an underlying HTML content injection vulnerability, so little is lost.
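
The policy itself is just a value sent in an HTTP response header. The browsers I tested use vendor-prefixed header names (see “Supporting browsers” below), so depending on the browser the response carries one of these:

  X-Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'
  X-WebKit-CSP: default-src 'self'; style-src 'self' 'unsafe-inline'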

Strict policies require good code

Less restrictive policies can be useful, but unless the only JavaScript which can be executed is code served as a separate file from the same server, most of the benefits of CSP are lost.

This uncompromising policy meant I had to revise a lot of the older code. However, I was pleasantly surprised to find that it just enforces good coding style, such as Unobtrusive JavaScript. I also found that I ended up with a more efficient application that sent less data to the browser.

While it’s tempting to do a partial job and only send the CSP policy header for conforming pages, just having one unprotected page could negate the benefits for the entire application. An attacker only needs to find one hole, but a defender needs to close them all.

I found that there were a few classes of changes which needed to be made. I’ve used jQuery in the examples, but the techniques will apply to any other client side framework.

eval() and JSON

Some of my early code, written when JSON and Ajax were still pretty new, used eval('('+json+')'); for parsing JSON responses from the server. While this is relatively safe, it’s disallowed by the CSP. Replacing it with jQuery’s $.parseJSON() solves this problem, as it uses the browser’s built in JSON parser if available. All CSP-supporting browsers have native JSON parsers, so eval() is no longer needed.
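
The change is essentially a one-line substitution wherever a response is parsed; a rough sketch, assuming the response text is held in a variable called json:

  // Old style: relies on eval(), so it's blocked by the CSP
  var data = eval('(' + json + ')');

  // CSP-friendly: jQuery uses the browser's native JSON parser where available
  var data = $.parseJSON(json);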

I did find I had to modify some of the server-side code to output strict JSON. eval() is much more forgiving than a proper JSON parser: literals like {key:"value"} are valid JavaScript, but cause a JSON parser to throw an exception, so the output needed to be brought in line with the JSON standard.
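
For example, unquoted keys, single-quoted strings and trailing commas are all accepted by eval() but rejected by a strict JSON parser:

  // Accepted by eval(), but not valid JSON
  {key: 'value', tags: ['a', 'b',]}

  // Strict JSON: double-quoted keys and strings, no trailing commas
  {"key": "value", "tags": ["a", "b"]}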

Inline JavaScript – passing data to scripts

Sometimes it’s handy to pass values to your JavaScript. You might do something like this

  <script>var itemID = 983633;</script>

to pass the ID of the item represented by the current page to your scripts. While this is convenient, the CSP prevents it from executing.

Instead, you can use HTML5 custom data attributes. The HTML5 specification formalises them, but they can actually be used in pretty much any browser. You can make up any attribute you want, as long as the name begins with data-, and then read it with the getAttribute() DOM method.

So you could pass the item ID like this

  <body data-item-id="983633">
    ...
  </body>

and read it within your scripts like this

  var itemID = document.body.getAttribute("data-item-id");

I tried to add the data- attributes to the HTML elements which represented those items, where it made sense. In many cases, I managed it, but occasionally I had to resort to adding <div> elements as a place to put the data.

jQuery has a data() function which could be used to read these attributes, but I prefer not to use it as it’s meant for something completely different, and just happens to read an initial value from these attributes.
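
For reference, the equivalent call would look something like this (jQuery maps data-item-id to the camel-cased key itemId, and caches the value internally after the first read):

  var itemID = $(document.body).data("itemId");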

You can pass HTML-encoded JSON structures in data- attributes. While it’s not terribly efficient, as characters like quotes are encoded as HTML entities, gzipping means there’s little difference over the inline JavaScript. There are few limitations in the size of the JSON structure. Browsers support attributes of at least a few MB in size, so anything reasonable will be handled easily.
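
As a sketch of what this might look like, using a hypothetical data-item-info attribute (the quotes inside the JSON are HTML-encoded, and getAttribute() returns them decoded, ready for $.parseJSON()):

  <div id="item" data-item-info="{&quot;name&quot;:&quot;Flowerpot&quot;,&quot;price&quot;:350}"></div>

  var info = $.parseJSON(document.getElementById("item").getAttribute("data-item-info"));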

Inline JavaScript – larger or infrequently changing data

There were a couple of cases where I was passing relatively rarely changing data using inline scripts. For example, if you have a basket in an e-commerce application, you might write

  <script>var basket = [[19283, "Book"], [72522, "Flowerpot"]];</script>

to pass, say, the contents to your scripts. But it doesn’t change on every page view, so you’re not being as efficient as you could be. Instead, it can be loaded with a script tag:

  <script src="/example/basket/123456789"></script>

When this resource is requested, the application generates the equivalent JavaScript. You have to be a little careful to get the caching directives right so the browser reloads the resource every time the data changes.

I’ve done this by setting caching directives so it explicitly expires a few hours in the future, and using a serial number in the URL which is updated every time the contents change. As a bonus, you get some efficiency gains because the data isn’t generated and transferred on every page view.
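
The response to that request is just ordinary JavaScript generated by the application; a rough sketch, with illustrative headers and an expiry a few hours ahead:

  // GET /example/basket/123456789
  //   Content-Type: application/javascript
  //   Cache-Control: private, max-age=14400
  var basket = [[19283, "Book"], [72522, "Flowerpot"]];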

Inline JavaScript – static scripts

To avoid having too many JavaScript files, in some of the lesser used code I embedded the JavaScript in the HTML itself. While not an elegant technique, it was acceptable, until the CSP was applied. This was pretty simple to fix by moving the JavaScript to external files, and using naming conventions to make it easy to associate the script with where it is used.

Some of these scripts had some data inserted by the templates, which had to be modified to use the data- attributes technique.

Inline JavaScript – element event handlers

There were only a few cases in my really early code which used inline event handlers like this

  <a href="#" onclick="clickHandler()">Do something</a>

rather than binding to events in the script. These were pretty easy to sort out.
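
For example, the anchor above can be given an id (do-something here is purely illustrative) and the handler bound from the external script:

  <a href="#" id="do-something">Do something</a>

  // In the external script file, where clickHandler() is already defined
  $(document).ready(function() {
    $("#do-something").click(clickHandler);
  });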

However, a complex JavaScript widget which implemented a hierarchical tree needed a different technique. The contents of the widget were constantly being rewritten as the user navigated, so for ease of coding, the generated HTML contained elements like this

  <a href="#" onclick="treeClick(1,5,2)">Item 4</a>

This avoided binding lots of event handlers to individual elements as the contents changed, or complex event handling. However, using a combination of data- attributes and jQuery’s new on() function, it becomes rather easy to code. The elements now look like this

  <a href="#" class="tree_leaf” data-tree-leaf="1,5,2”>Item 4</a>

and an event handler is placed on the container to match clicks on any of these <a> elements, regardless of whether it was created before or after the event handler was registered

  // Register event handler once, just after the page has loaded.
  $(document).ready(function() {
    $("tree_container”).on("click”, “.tree_leaf”, function() {
      var leafData = this.getAttribute("data-tree-leaf”);
      // Handle the click!
    });
  });

This is much cleaner, and performs just as well as the inline event handler.

document.write()

I had used document.write() in a couple of places. An inline script would call a function defined in an external JavaScript file, passing in the data it needed to generate the HTML. This was used in places where the JavaScript had to be able to regenerate any of the HTML, so it made sense to implement it entirely client side rather than reimplementing the HTML generation code on the server. For example, we have quite a complex item editor which supports multiple values in all fields, so it might as well generate all the HTML client side from a simple JSON structure.

This was fixed by using a container to mark where the structure should go, with a data- attribute for the JSON. Unlike the document.write() approach, there could potentially be a short delay between the page loading and the contents rendering, but in practice it wasn’t noticeable. Your application may be different.
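
A minimal sketch of the approach, assuming a hypothetical renderEditor() function in the external script which builds the HTML from the JSON structure:

  <div id="item-editor" data-editor-json="{&quot;title&quot;:&quot;Example item&quot;}"></div>

  $(document).ready(function() {
    var container = $("#item-editor");
    var spec = $.parseJSON(container.get(0).getAttribute("data-editor-json"));
    container.html(renderEditor(spec));  // renderEditor() is hypothetical
  });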

Supporting browsers

Only Firefox and Chrome have good CSP implementations, using the X-Content-Security-Policy and X-WebKit-CSP headers respectively. The latest Safari has an implementation, but it doesn’t appear to work that well. IE10 should have one, but hasn’t been released yet.

The server looks at the user agent, and sends the appropriate header only if it’s a browser I’ve tested. Because of the potential to prevent the application working correctly, it feels safest to only send the header when it’s definitely going to work. We advise our clients to use Chrome, but pretty much all of them (who have the choice) use it anyway.
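
The application isn’t written in Node, but as a rough sketch of the logic, here it is expressed as Express-style middleware with deliberately crude, purely illustrative browser detection:

  var express = require('express');
  var app = express();

  var CSP_POLICY = "default-src 'self'; style-src 'self' 'unsafe-inline'";

  app.use(function(req, res, next) {
    var ua = req.headers['user-agent'] || '';
    // Only send the header to browsers the policy has been tested against
    if (/Firefox\//.test(ua)) {
      res.setHeader('X-Content-Security-Policy', CSP_POLICY);
    } else if (/Chrome\//.test(ua)) {
      res.setHeader('X-WebKit-CSP', CSP_POLICY);
    }
    next();
  });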

Is it worth it?

If your code is perfect, there’s no point in implementing CSP. But even the best developer cannot claim their code is perfect.

I decided it was worth implementing, as our service is aimed at storing sensitive data and we do everything we can to protect it. CSP adds a “belt-and-braces” layer on top of the usual care and attention to the details of security, and gives our users the choice of a little bit more protection.

Services which have less of a focus on security may decide it’s not worthwhile to retro-fit onto an existing application. In that case, it may be worth simply adding a CSP which prohibits use of unencrypted resources, along with your HSTS header. This makes sure that you can’t accidentally load a resource which can be tampered with in transit, or transmit any credentials which could be sniffed on the network.
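
Those two headers might look something like this (the max-age value is just an example, and the prefixed CSP header names described above apply here too):

  X-Content-Security-Policy: default-src https:
  Strict-Transport-Security: max-age=31536000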

The only potential problem is embedding other services, such as social media buttons and fonts. Hopefully they all have HTTPS versions, so you can at least enforce encryption, but they’re unlikely to conform to all the necessary coding standards. But then, if you include resources from third parties on your site, you have bigger security problems anyway, as your security can be trivially broken by that third party.

I’d suggest that any new web application should implement a strict CSP. As well as giving a little extra reassurance on the security side, it enforces good coding practices.

It’s hard to imagine a more inappropriate client for applications than a web browser. But it’s good to see that the browser vendors are making sensible steps to incrementally improve the only universal platform we have.

 
