What to do with HTTP/2


Stijn Co-CTO

21 Feb 2016  ·  6 min read


While we like our shiny gadgets and tricks here, we also keep an eye on the basics that power our digital business. And when we’re talking basics, HTTP certainly comes to mind. The protocol is getting an update, and that update will have a very real impact on the way we build for the web.

But let’s not get ahead of ourselves: what exactly is HTTP?

HTTP is the most low-level protocol all of our developers interact with — and that anyone using the web encounters on a daily basis. It is a request-response protocol: the client, usually the web browser, submits an HTTP request to the server, which then delivers documents such as HTML files to create what we see as a web page.

It is, effectively, the foundation on which the internet is built.

It’s also very far from flawless — which, of course, is where HTTP/2 comes in.


In order to understand the significance of the HTTP/2 update, a little history is required. The very first version of the HTTP protocol was created by Tim Berners-Lee way back in 1989, twenty years after the inception of ARPANET, and marked the birth of the World Wide Web. Its first documentation was completed two years later, and after several additions and alterations, the current version, HTTP/1.1, was completed in 1999. It has, in other words, been part of the basis of the internet for the past 16 (or 26) years.

Taking into account that the age of technology can best be measured in dog years, that’s a long time.

A few examples. When HTTP/1.1 was completed in '99, the internet consisted mainly of plain HTML pages, sprinkled with 75 x 75-pixel images. Today, real-time collaborative office applications and video chat platforms running entirely in the browser have become a normal part of digital life. In 1996, the average website contained seven assets per page; in 2015, we're averaging 220 assets per page.

In other words, things change, and HTTP, though still functional, was starting to show its age. This is why the IETF's HTTP working group set out to create its successor in the form of HTTP/2 — a successor that was approved as a Proposed Standard by the IESG in February 2015.

Google’s SPDY project

It’s important to note that HTTP/2 has a predecessor. Before work officially started on the second version of the protocol, Google was building SPDY. The search giant started working on SPDY in 2009, in an attempt to reduce page load times and improve the security of data transfers online. The protocol applies techniques like compression, multiplexing and prioritisation to the existing HTTP layer — exactly the sort of features that are now baked into HTTP/2. As SPDY was a Google project, full support for it was soon added to the Chrome and Chromium browsers.

Unsurprisingly, the promise of a faster web gained traction very quickly, and other browsers started to add implementations of the SPDY protocol, including Mozilla Firefox, Opera, Internet Explorer and Safari.

In February 2015, Google announced that it would be pulling the plug on SPDY in favour of HTTP/2, with support removed completely in 2016. This was not exactly a surprise: the main developers of SPDY were also involved in the creation of HTTP/2. After all, the web is complicated enough as it is: Google’s support for universal, open standards is a welcome move.

The how & why

There are three pain points with the current version of the HTTP protocol that HTTP/2 hopes to eradicate or improve. The new protocol will reduce request overhead, ensure fewer connections are necessary, and purge the majority of hacks or so-called “best practices” – which are usually roundabout ways to achieve the first two goals.

Fewer requests, better results

Since a web browser receives content from a server by sending it HTTP requests, fewer and cheaper requests evidently lead to faster load times. HTTP/2 uses two main strategies to cut back on the cost of the requests a website needs to load.
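Some back-of-the-envelope arithmetic illustrates why this matters. The round-trip time, connection count and idealised HTTP/2 behaviour below are assumptions for the sake of the sums, not measurements:

```python
# Rough load-time arithmetic (illustrative numbers, not benchmarks).
RTT_MS = 100     # assumed round-trip time to the server
ASSETS = 220     # assets per page, per the 2015 average quoted above
CONNECTIONS = 6  # typical HTTP/1.1 parallel connections per origin

# HTTP/1.1: one request in flight per connection, so assets are
# fetched in sequential "waves" across the connection pool.
http1_waves = -(-ASSETS // CONNECTIONS)  # ceiling division
http1_ms = http1_waves * RTT_MS

# Idealised HTTP/2: all requests multiplexed onto one connection,
# sharing a single round trip (bandwidth permitting).
http2_ms = 1 * RTT_MS

print(f"HTTP/1.1: ~{http1_waves} waves, ~{http1_ms} ms of request latency")
print(f"HTTP/2:   ~{http2_ms} ms of request latency")
```

Real pages overlap transfers with parsing and rendering, so treat this purely as an intuition for why the per-request round trips dominate.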

Firstly and perhaps most importantly, the HTTP/2 protocol is binary, whereas the original HTTP is textual. It’s a switch that yields great results; a binary protocol is easier and more efficient to parse, takes up less “space” on the wire, and is much less prone to errors.

Secondly, the HTTP/2 creators focused on header compression, using a new compression scheme called HPACK. On an average HTML page, the headers are small enough to be considered negligible, but on a global scale, the sheer volume of headers seriously affects the average load time of each page, meaning that efficient compression has a big impact – especially on mobile clients, where round-trip latency is much higher.
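To make the idea concrete, here is a toy sketch of HPACK's static-table indexing. The table indices below follow RFC 7541's static table, but everything else (the one-byte indexed encoding, the length-prefixed literals) is heavily simplified; real HPACK adds a dynamic table and Huffman coding on top.

```python
# Toy sketch of HPACK's core idea: common header fields are replaced by a
# one-byte index into a shared static table instead of being re-sent as text.
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
    ("accept-encoding", "gzip, deflate"): 16,
}

def encode(headers):
    out = bytearray()
    for name, value in headers:
        idx = STATIC_TABLE.get((name, value))
        if idx is not None:
            out.append(0x80 | idx)        # indexed field: a single byte
        else:
            literal = f"{name}: {value}".encode()
            out.append(len(literal))      # simplified length-prefixed literal
            out += literal
    return bytes(out)

headers = [(":method", "GET"), (":path", "/"), (":scheme", "https"),
           ("accept-encoding", "gzip, deflate")]
plain = "\r\n".join(f"{n}: {v}" for n, v in headers).encode()
packed = encode(headers)
print(len(plain), "bytes as text ->", len(packed), "bytes indexed")
```

Four headers that cost 70 bytes as text shrink to four bytes of indices, and that saving is paid on every single request.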

Connections cut-back

HTTP has another big roadblock: it supports only one outstanding request per connection at a time. Given the number of resources – and therefore the number of requests – the average modern web page needs, this slows down page loads immensely. It also forces clients to try to “guess” which requests are the most important, with unsurprisingly varying degrees of success.

In order to solve this problem, HTTP/2 is multiplexed. This means that it allows multiple requests and response messages to be in flight at the same time, and that parts of different messages can intermingle when they move on the wire.

Because of this, a client can now use a single connection per origin when loading a page, instead of the four to eight connections per origin most browsers use under HTTP/1.1, each loading a single file at a time. Currently, sites try to stay ahead through domain sharding: spreading assets across several hostnames that usually point to the same server, just to trick the browser into opening more parallel connections.
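A minimal sketch of what multiplexing looks like on the wire, with made-up frame sizes and payloads: each response is chopped into frames tagged with a stream ID, and frames from different streams interleave on one connection.

```python
# Minimal sketch of HTTP/2 multiplexing: responses are split into frames,
# each tagged with a stream ID, and interleaved on a single connection.
from itertools import zip_longest

def frames(stream_id, body, size=4):
    """Split a response body into (stream_id, chunk) frames."""
    return [(stream_id, body[i:i + size]) for i in range(0, len(body), size)]

# Two responses "in flight" at the same time on the same connection.
css = frames(1, "body{margin:0}")
js = frames(3, "console.log(1)")

# Interleave the two streams frame by frame, as a server might.
wire = [f for pair in zip_longest(css, js) for f in pair if f]
for stream_id, chunk in wire:
    print(f"stream {stream_id}: {chunk!r}")
```

The receiver reassembles each stream by its ID, so neither response has to wait for the other to finish, which is exactly what a single HTTP/1.1 connection cannot do.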

TCP, due to its origins, is optimised for long-lived transfers of large files, whereas developers today usually work with large sets of small files. A good example is TCP slow start, a throttling mechanism that tries to avoid network congestion – unfortunately, it only reaches full speed on long-lasting connections, and much less so with the small files developers prefer. HTTP/2’s multiplexing finds the middle ground: the browser still downloads loads of small files, but the network sees them as a single, long-lived transfer, which it handles much more effectively. Because the issues with TCP slow start are even more pronounced on cellular networks, mobile users in particular will benefit enormously from this change.
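A crude model of slow start shows why one warm, multiplexed connection beats many cold ones. The segment counts and the classic initial window of one segment are assumptions chosen for illustration:

```python
# Crude model of TCP slow start: the congestion window (in segments)
# doubles each round trip, so short transfers finish before the
# connection ever reaches full speed.
def round_trips(total_segments, initial_cwnd=1):
    cwnd, sent, rtts = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd  # segments delivered this round trip
        cwnd *= 2     # window doubles (packet loss ignored here)
        rtts += 1
    return rtts

per_file = round_trips(3)                    # one small asset (~3 segments)
cold_connections = 220 * per_file            # each file restarts slow start
one_warm_connection = round_trips(220 * 3)   # all files share one ramp-up
print(cold_connections, "vs", one_warm_connection, "round trips")
```

In this toy model, 220 small files on cold connections cost 440 round trips of ramp-up, while a single multiplexed connection amortises slow start across all of them in 10.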

Bad best practices

Because HTTP/2 tackles the biggest problems with HTTP, many so-called best practices will become obsolete. Things like sprites, concatenation, domain sharding, data URLs and CSS inlining are usually workarounds for those same latency problems, but they make the web more complex and more prone to errors; HTTP/2 tackles these latency issues at the root. It will also let REST APIs be implemented properly: hypermedia APIs won’t suffer from enormous overhead when returning collections of URLs, and will no longer need to inline entities in those collections.

So, is everyone on board?

So far, most big players have accepted HTTP/2 as the next big standard. Google, Twitter, Mozilla, cURL, F5, Varnish and Apple’s iOS 9 are all on board with the new protocol. Notably absent from the list of early adopters is Amazon Web Services. Developers for Android, iOS and the web, however, can start implementing and optimising for HTTP/2 right now.

Let’s get started!