January 20, 2014 | Posted by Rachel Gillevet | SEO

What’s So Great About HTTP 2.0?

The way we use the web has changed radically over the last few years. Where once a web page consisted mainly of a few static resources loaded from a small number of servers, many pages are now complex combinations of resources pulled from multiple servers, with extensive interactive components.

HTTP, the Hypertext Transfer Protocol, was designed for an older web and was not built with today’s complex interactive sites in mind. While HTTP 1.1, the current version, has served us well, it’s time for something new. HTTP’s old-fashioned assumptions about the nature of the web make sites slower and less responsive than they should be, forcing browser makers and web developers to implement hacky workarounds for its deficiencies.

The creation of the next version of HTTP, HTTP 2.0, is well underway, and even if the protocol doesn’t see widespread adoption in 2014, it’ll certainly be on the minds of many developers and web hosting companies.

HTTP 2.0 is based on Google’s SPDY protocol, which was designed to mitigate some of HTTP 1.1’s performance limitations.

So, what can we expect from HTTP 2.0?

Multiplexing

One of the reasons HTTP 1.1 is slow and resource-intensive is the way it handles streams. As we’ve already said, web pages used to be much simpler. HTTP 1.1 was not designed to handle the multiple streams of data that modern sites require. HTTP 2.0 includes the ability to multiplex streams, which means that servers and clients no longer have to open multiple TCP connections. Multiple requests can be sent over the same connection at the same time, and responses can be returned in an intelligent order (prioritization), rather than strictly in the order the requests arrived. The result should be lower resource usage on both ends of the connection and significantly reduced latency.
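To make that concrete, here’s a rough sketch in Go of a client fetching several resources concurrently. When the server speaks HTTP 2.0 over TLS, Go’s standard net/http client sends all of these requests as streams over a single connection rather than opening one connection per resource. The example.com host and the paths are placeholders, not a real site.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// With an HTTP/2-capable server, these requests are multiplexed as
	// separate streams over one TCP connection instead of each opening
	// its own connection.
	paths := []string{"/index.html", "/style.css", "/app.js"}

	var wg sync.WaitGroup
	for _, p := range paths {
		wg.Add(1)
		go func(path string) {
			defer wg.Done()
			// example.com is a placeholder host; point this at a real
			// HTTP/2 server to observe connection reuse.
			resp, err := http.Get("https://example.com" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				return
			}
			resp.Body.Close()
			// resp.Proto reports which protocol version was negotiated.
			fmt.Println(path, resp.Proto, resp.StatusCode)
		}(p)
	}
	wg.Wait()
}
```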

Header Compression

Previously, only the body of an HTTP message could be compressed; the headers were always sent as plain text. In HTTP 2.0, the headers, which carry various bits of metadata with every request and response, can also be compressed, reducing the amount of data that needs to be sent.
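As a rough illustration, the sketch below uses the golang.org/x/net/http2/hpack package (HPACK is the header compression scheme developed alongside HTTP 2.0) to encode a small set of headers and compare the result with their plain-text size. The header values here are made up for the example.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	headers := []hpack.HeaderField{
		{Name: ":method", Value: "GET"},
		{Name: ":path", Value: "/index.html"},
		{Name: ":scheme", Value: "https"},
		{Name: "user-agent", Value: "example-client/1.0"},
	}

	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	plain := 0
	for _, h := range headers {
		// Rough size of the header as it would appear in plain text.
		plain += len(h.Name) + len(h.Value) + len(": \r\n")
		// WriteField appends the HPACK encoding of the header to buf,
		// using indexed tables so common and repeated headers cost
		// only a byte or two.
		if err := enc.WriteField(h); err != nil {
			panic(err)
		}
	}

	fmt.Printf("plain text: ~%d bytes, HPACK: %d bytes\n", plain, buf.Len())
}
```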

Server Push

Server push allows the server to send multiple parallel responses to a single client request.

Consider how a web server currently handles loading a web page. It receives a separate request, and often a separate connection, for each resource: HTML, CSS, JavaScript, and so on. Each of those requests uses server resources and time. One way around this is inlining, where all of those resources are embedded in a single page that can be sent over one connection. That works, but the drawback is that inlined resources cannot then be cached and reused across multiple pages, which somewhat negates the value of inlining.

With HTTP 2.0, all those resources can be sent on the same connection without the problems caused by inlining, which will reduce latency and resource use.
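Here’s a rough sketch of what that could look like in a Go HTTP server: when the connection is HTTP 2.0, the handler pushes the stylesheet alongside the HTML response instead of waiting for the browser to request it. The certificate file names and the /style.css path are placeholders for this example.

```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Over HTTP/2, the ResponseWriter also implements http.Pusher,
	// letting us send /style.css before the browser asks for it.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/style.css", nil); err != nil {
			log.Println("push failed:", err)
		}
	}
	w.Write([]byte(`<html><head><link rel="stylesheet" href="/style.css"></head><body>Hello</body></html>`))
}

func main() {
	http.HandleFunc("/style.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		w.Write([]byte("body { margin: 0; }"))
	})
	http.HandleFunc("/", handler)
	// cert.pem and key.pem are placeholder file names; HTTP/2 (and
	// therefore push) is only negotiated over a TLS connection.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}
```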

Mandatory Encryption

In response to last year’s controversy about security and privacy online, the HTTPbis working group has decided that HTTP 2.0 will only work over HTTPS connections. Unless a connection uses TLS (a.k.a. SSL), it will not be able to use HTTP 2.0. The goal appears to be to use the other benefits of HTTP 2.0 as an incentive to encourage wider adoption of encryption. It’s a somewhat controversial decision, so we’ll have to keep an eye on how it plays out over the next year.
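For the curious, the sketch below shows where encryption and HTTP 2.0 meet at the protocol level: the client advertises HTTP 2.0 support (“h2”) during the TLS handshake, and the server picks the protocol there; without TLS, that negotiation step simply doesn’t happen. The example.com host is a placeholder.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// The NextProtos list is sent during the TLS handshake (ALPN) to
	// advertise which protocols the client is willing to speak.
	conf := &tls.Config{NextProtos: []string{"h2", "http/1.1"}}

	// example.com is a placeholder host.
	conn, err := tls.Dial("tcp", "example.com:443", conf)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	// If the server supports HTTP/2, this prints "h2"; otherwise the
	// connection falls back to "http/1.1".
	fmt.Println("negotiated protocol:", conn.ConnectionState().NegotiatedProtocol)
}
```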

Those are the main points that developers and web hosts need to know, but if you’d like more detailed information, take a look at the complete HTTP 2.0 specification.
