AMA LIVE
Ask me anything on HTTP/2

HTTP/2 is here and it’s changing the web. In this recording, the foremost experts on HTTP/2 explore the new protocol, its features, and its implications for how you design, deploy, and deliver your applications.


Who's on the panel?

Ilya Grigorik

Web Performance Engineer at Google

Tim Kadlec

Web Technology Advocate at Akamai

Suzanne Aldrich

Solutions Engineer at CloudFlare

Andrew Smirnov

Performance Engineer at Catchpoint

AMA Transcript

Andrew:

The topic today is HTTP/2. For over 15 years, we've used HTTP/1.1 for delivering content over the web, and as the web continued to grow, rather than creating a new protocol, we've essentially created a bunch of optimizations, or hacks, to push the boundaries of web performance. Unfortunately, HTTP/1.1 wasn't really designed to handle the modern web per se, and so we needed a new spec. Today with our panelists, we're going to go over some of the questions around it.

With that said, let's go ahead and introduce the panelists that are on the call today. First, we have Ilya Grigorik, who is a web performance engineer at Google, co-chair of the W3C Web Performance Working Group, and author of the O'Reilly book "High Performance Browser Networking." If you haven't had a chance to read that book, it's amazing stuff and a great primer on HTTP/2. In short, he is an internet plumber.

Next, we have Tim Kadlec, who is a web technology advocate pushing the web faster at Akamai. That's a great job title. He's also the author of "Implementing Responsive Design: Building Sites for an Anywhere, Everywhere Web," and he was a contributing author to "Smashing Book #4: New Perspectives on Web Design" and "Web Performance Daybook Volume 2." He also writes a blog. Check out timkadlec.com.

Lastly, we have Suzanne Aldrich, who's a solutions engineer working at CloudFlare with some of their enterprise customers to achieve the highest level of security and performance for their websites. She's super excited about the rapid adoption of HTTP/2 at CloudFlare.

Does optimizing for HTTP/2 automatically imply a poorer experience on HTTP 1.1?

Ilya:

I think that's a pretty simple one. The short answer is no. Going back to what you said in the introduction, the reason we added HTTP/2 is we found a collection of flaws, if you will, workarounds that we have to do when using HTTP/1. HTTP/2 is effectively the same HTTP that you would use on the web. All the verbs, all the capabilities are still there. Any application that has been delivered over HTTP/1 still works on HTTP/2 as is. If you happen to be running on, say, CloudFlare or Akamai, both of which support HTTP/2, you can just enable that feature, and your site continues to run. Nothing has changed.

From there it just becomes a question of, are there things that I can take advantage of in HTTP/2 to make my site even faster? Just enabling HTTP/2, say you already have an HTTPS site and then you enable HTTP/2, you're no worse off. Chances are you may be a little bit better off, but it really depends on your application. Then it becomes a question of, what can I do to optimize? Depending on how aggressive you want to be, you may want to change some of your best practices. Maybe you stop doing some of those things, and I think we'll get into the details of some of those a little bit later.
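[Editor's note: for concreteness, "just enabling it" on a self-hosted server can be as small as one directive. A minimal sketch for NGINX, assuming version 1.9.5 or later built with the http2 module; the domain and certificate paths are placeholders:

    server {
        # Browsers only speak HTTP/2 over TLS, so h2 is enabled on the TLS listener
        listen 443 ssl http2;
        server_name example.com;

        ssl_certificate     /etc/ssl/certs/example.com.pem;
        ssl_certificate_key /etc/ssl/private/example.com.key;
    }

HTTP/1.1 clients are unaffected; the protocol is negotiated per connection via ALPN.]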


Andrew:

With HTTP/2, is serving all assets (CSS, images, JS) independently always a better option instead of creating bundles/grouping? Which performance hacks are no longer needed?


Tim:

Yeah, I think it's similar to Ilya's question in that the word “always” is in there, which is always a sign that something's not quite spot on. There are going to be times where it doesn't work out that way. The most famous example of that, that we've seen, is Khan Academy's blog post where they were talking about bundling JavaScript. They went from something like 25 different JavaScript packages to 300 or so, and saw a degradation in performance and a degradation in compression. The takeaway there isn't necessarily that you shouldn't break these things up into individual files; the takeaway is that there is some point where packaging still makes sense to an extent.

There's a line in there somewhere where breaking it up ceases to be beneficial. That's where I think the interesting discussion still has to happen. We've got a lot of these best practices that we've had established for a long time. The sharding, the inlining of resources, concatenating files, things we've accepted as best practice over time. We also recognize that inside of H2, some of these things on paper make a lot less sense. What we need now is real world experimentation and data to help back that up, and determine exactly when it makes sense to do and when it doesn't. There's a challenge there too, in that everything about H2 is young. It is young. The implementations in the browsers are young, the servers are young, the CDNs are young.

A lot of it is: is the problem with the protocol, with the browser implementation, or with the maturity of the server implementation? There's just so much variability. This is definitely day zero in terms of H2 and establishing what these practices are. It's going to take a lot of experimentation before we can firmly cement what makes sense to do in this world.


Ilya:

Just to add to what Tim was saying. The Khan Academy example is really interesting, because as Tim said, they went from 25 files to 300. I'd actually say that 25 files is already a performance problem with HTTP/1, because you would need a lot of different origins to fetch all of those in parallel. Chances are, if you're at 25 files in HTTP/1, you're already thinking about how you can collapse this to 5. With HTTP/2 that's not a problem. You can easily ship those 25. Then the question becomes, should I unpack the 25 into 300? Then you get into more nuanced conversations: well, what's the overhead of each request, and all the rest. This is the space where you really have to experiment. As Tim said, measure it. Measure it in your own application.


Andrew:

Do you feel like there's a risk of creating two Internets, a slow one and a fast one due to the possible confusion of having different protocols out there at the same time?


Suzanne:

I think that there already is a little bit of a problem with the slow web and the fast web with regard to delivery over TLS. With the introduction of HTTP/2, since all the browsers have implemented it TLS-only, it is necessary to deploy it over TLS. One of the advantages of using TLS is that you can then utilize HTTP/2. So, in fact, it's going to even out the playing field a bit in that way, and reduce the TCP handshake overhead that we see, which is one of the reasons people are sometimes a little averse to utilizing TLS.

However, and this is harkening back a little bit to the point that Ilya made, these can certainly coexist. You can still utilize techniques such as domain sharding when you're utilizing HTTP/2. If you're using the same IP address among the hosts, and a certificate that covers them, then it is possible for the same TCP/IP connection to be used for that particular set of requests. You can still use all the multiplexing, and take advantage of that pipeline, without degrading the performance for HTTP/1 clients.

In addition to that, because there's this transition point between SPDY and HTTP/2, there's an additional concern there. People don't necessarily want to jump into the deep water yet, so how do you enable developers to go ahead and produce applications without the fear that they're cutting out a large portion of browser market share? An interesting technique that we utilized at CloudFlare was to essentially fork our implementation of NGINX to allow for lazy loading of SPDY if the browser supports that. If it supports HTTP/2, then we'll go ahead and make the connection for them in that mode. That way I think that we've really addressed that concern in particular.


Andrew:

When we talk about HTTP/2, we're thinking about delivering over that one TCP connection. Is there a performance benefit of HTTP/2 for JSON / XML web-services / REST interfaces that are typically single-resource responses?


Ilya:

Yes, there is. Historically, when we developed SPDY and HTTP/2, a lot of the focus was on web browsing. Which is understandable, because web browsing is one of the primary, if not the primary, use cases. Although some people will disagree, because HTTP is actually pervasive everywhere. Count all the things that are powered by HTTP behind the scenes and you'll be amazed. It's anything from toasters to fridges, to all kinds of things. I think this is an area that's been somewhat underexplored at the moment, where HTTP/2 has a lot of benefits. All the same things that we've been advocating for web browsing: the header compression, prioritization, server push. You open up a lot of new interesting capabilities for API use cases.

In fact, we do have some really good case studies that came out even while we were developing SPDY, from Facebook, Twitter, and a bunch of Google products that were leveraging SPDY on the back end. This is not for web browsing; this is for when you have a native app and it talks to your servers. Twitter, really early on, published a really interesting case study showing a huge win in the latency of their requests, because they were able to take advantage of these features.

There's a lot to explore here. I think there are huge savings to be had. You can eliminate some of the use cases where you've used, for example, web sockets before, or considered using web sockets, because now you have server push. You can respond with smaller responses, you can prioritize them. It's a whole new kind of field for APIs. One example that I think is good to investigate, just to look at how it's working, is gRPC. You can read about it at grpc.io. It's a new RPC library that was developed at Google, and it now powers all of the Cloud APIs. If you're using any of the Google APIs, you're using gRPC under the hood. That thing is built directly on top of HTTP/2. It was built from the ground up to take advantage of HTTP/2. There's a lot of stuff in there that just assumes that you have the advantage of HTTP/2.
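[Editor's note: as a small illustration of how transparent this is for API clients, Go's standard net/http (Go 1.6 or later) negotiates HTTP/2 automatically over TLS. A hypothetical sketch; the URL is just an example:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // The default transport negotiates HTTP/2 via ALPN when the server
        // supports it; no HTTP/2-specific client code is required.
        resp, err := http.Get("https://www.google.com/")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        // Prints "HTTP/2.0" when HTTP/2 was negotiated.
        fmt.Println(resp.Proto)
    }
]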


Tim:

That's a really solid point. When we talk about H2 a lot, especially if we're focused on our traditional web application, web browser settings, we really quickly seem to hone in on the multiplexing feature. That's only one small piece of what H2 is actually bringing to the table. I think we zero in on it because it's the easiest to understand, but it's the prioritization and dependencies, and the server push, in particular, those two areas where there's a lot of room for some really interesting applications. It brings much more to the table than the multiplexing, even though that's where we tend to focus a lot of the conversation initially.


Andrew:

I saw that currently 76% of users in the US use a browser that supports HTTP/2. When would be the right time for retailers to migrate to HTTP/2 completely, without causing any revenue or user experience impact?


Tim:

I think if we're talking about migrating in terms of when you should be turning on or enabling H2, that's pretty much yesterday. It's in every major browser. It's been in most of the major browsers for a few versions now. As Ilya mentioned before, if it hits something where H2's not supported, it's going to fall back to the H1 connection. From that perspective, in terms of enabling H2, you should be doing that. Now would be the time. There are some optimizations that make sense on H1 that won't on H2, so if we're talking about migrating in terms of your build process, when do you stop making those optimizations for the H1 protocol? The 76% is a broad indication, but that's definitely where you're going to want to look at your own internal analytics, figure out where you're sitting, and make that call. It's equivalent in some ways to when we talk about sunsetting support inside of an organization for IE6 or IE7 or IE8.

At some point you're going to do it for H1. It's just that when we talk about dropping the separate build process for H1, it's not actually killing the site for browsers still on H1; it just means it's going to slow down for them in some way. That's going to be something you'll have to assess based on the organization's analytics. In terms of enabling it, starting to make those baby steps, and doing the optimizations that make sense, like Suzanne was talking about with the sharding … as far as doing that stuff, it's now. Now is the time to be doing that.


Andrew:

How can HTTP/2 help mobile apps, html5 mobile websites or mobile responsive desktop sites? How do we make it happen as system architects? Ideally the answer will include mainstream hosting companies like Bluehost and Hostgator as well as more customized or enterprise-level cloud hosting like RackSpace and Amazon. Maybe specific mods for Linux or Apache?


Suzanne:

That's a great question. Also, just to add on to what Tim was saying about market share, it is definitely time to start leaping into the water. Currently, CloudFlare is supporting about 70,000 websites, domains that have true HTTP/2 support. I hope that encourages everyone to, at the very least, start experimenting. It won't break your websites. It's definitely something that you can start looking into now, and essentially keep up with the Joneses, because this adoption is very rapid. It's much faster than something like IPv6, and very much easier to get going with, especially considering the fact that the browsers have been implementing support. If you want to keep track of that, I recommend the website caniuse.com; hone in on the HTTP/2 support section to investigate the current market share for browsers, including the mobile browsers.

To get into the specific question. One of the reasons I'm most excited about the adoption of HTTP/2 is because I really see this as being, for the web, what going from serial to parallel processing was for computing. That's really what we're going to be able to allow for end users on mobile devices.

One of the biggest problems is delivering a very good experience considering all the latencies that exist. The number one problem that HTTP/2 addresses is latency. Another problem it addresses, through header compression, is payload overhead. Anything you can do to make the payload smaller is a good thing for mobile experiences, and anything you can do to take that precarious TCP connection on mobile devices and optimize it as much as possible is going to produce a much better user experience. We're going to get mobile applications to act more like desktop applications. We're going to be able to see desktop applications and mobile applications that remind us of what it used to be like when you'd install software from a box, if you remember back that far. That's one of the reasons that I think this is going to particularly help out the experience for these folks.

On the side of system architects, how do we do that? How do we start transitioning? I think it's certainly going to include making that demand of the mainstream hosting companies to actually support these services. Apache and NGINX both have versions that support HTTP/2. Your major servers are already supporting it, your major browsers are already supporting it. As far as any kind of cutover plan that you might have, to harken back to the SPDY-to-HTTP/2 cutover, companies like CloudFlare can help provide a proxy that can serve as a bridge for people going from A to B at this point, I believe.
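[Editor's note: a minimal sketch of the Apache side, assuming Apache 2.4.17 or later with mod_http2; names and paths are placeholders:

    # Load the HTTP/2 module (Apache 2.4.17+)
    LoadModule http2_module modules/mod_http2.so

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/example.com.pem
        SSLCertificateKeyFile /etc/ssl/private/example.com.key
        # Offer h2 first during ALPN negotiation, falling back to HTTP/1.1
        Protocols h2 http/1.1
    </VirtualHost>

The equivalent NGINX change is the http2 flag on the TLS listener, shown earlier in this transcript.]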


Ilya:

Let me just jump in here and mention something else, coming back also to earlier points that Tim mentioned: we're still at the very early stages. The servers, the clients, and all the rest, and I just want to talk about the clients in particular, the browsers. We just talked about how we can make applications fast, right? Like, if I have an application, what can I do to make it faster? The truth is, as we kind of hinted before, even on the clients and the browsers there's still a lot that we can do moving forward.

The way I think about it is, with HTTP/1, we, and when I say we, I mean the browser engineers, ran into a wall. We optimized everything we could, and we were like, look, there are just fundamentally things that we cannot do anymore. Let's go work on this other thing, which at that time was SPDY, which evolved into HTTP/2. Now we are at a point where we have this HTTP/2 thing, and we can go back into our rendering engine and, for example, take much better advantage of prioritization. In HTTP/1 we were very limited in what information we could communicate down to the server. Well, frankly, we actually couldn't; we could just send a request.

Now we can actually go back and say things like, well, let's understand this application a little more deeply. Which resources are most important? Let's send that information to the server. That means the server needs to be smart and take that into account. I see a future, and this will take us years. Some of these things are on the order of months, some are on the order of years. But we, the browser manufacturers, the people that work on servers, and the individuals working on the websites, all have to work together to make the best of the new capabilities that we have.

So it's not like we shipped this thing and now we're done, and it’s over.


Suzanne:

Yeah, I agree, just one more thing. I'd say that tuning the defaults is always kind of a black art. And on prioritization, that seems to be the least developed portion of how people are expected to communicate to the servers what exactly needs to get downloaded at a given time. That needs some work, I believe.


Tim:

Think about how long we've had HTTP/1.1: since what, 1999, which is the same year we were given Jar Jar Binks, by the way. I love to remind people of that. Great year, banner year for us. Since 1999, and we were still finding ways to optimize our sites and applications for that network stack. Even within the last few years we've seen pretty recent improvements to the way we did this, a full 16, 17 years later. When we're talking about this stuff with H2, there's going to be a long haul in terms of optimizing it.

The good news is that even out of the box, I think everybody can agree that we're seeing benefits, even right now, at the default level, without all of the intelligence, experimentation, and improvements that are going to come from the CDNs and from the clients and from everybody else. Internally, we're not seeing degradation; we're seeing up to 20% improvements. That's at day one, right? So imagine what this is going to be like in 5 or 6 years, when we actually get to dig in and figure out all of the things that don't work, or could work even better.


Ilya:

Yeah, I think we're at a stage where we got the basics right. We all wrote our framing code and got the first version deployed, and then we were like, okay, it's working. From here on we have to start optimizing. This applies on all ends: the clients, the servers, and the actual applications.


Andrew:

Switching from non-secure H1 to secure H2 can result in multiple large critical path resources that had separate TCP connections having to vie for a single TCP connection. How do you feel about this loss of initial burstability? How do you advise compensating for or dealing with it?


Ilya:

That's a big one. Let's try and attack this. For those that are not familiar with the basics of this, when you open a new connection, the server is only allowed to send a certain amount of data. That's what we call the initial congestion window. The current standard is 10 packets, and that's per connection. With HTTP/1, if you were to open multiple connections, in theory you could send N times that many packets, proportional to how many connections you have.

I think that's what this question's getting at, right? Because HTTP/2 actually opens only one connection. All browsers do this, and we do this intentionally, because if you don't do this, then you actually forfeit a lot of the benefits that HTTP/2 gives you. You can't do prioritization across multiple connections, you can't do effective flow control across multiple connections. Header compression is not as effective, and all the rest. There are tradeoffs here, and we ran these experiments when we were working on SPDY and HTTP/2 in production. We learned that on balance it still made sense to go with one connection, because of these other benefits that we get.

That does not mean there aren't cases where a bigger window would help. If you have, say, a user on a very fast connection, like you're sitting on a fiber connection connected to a website, would you benefit from a larger congestion window? Yes, you could. In fact, there have been experiments in various places on whether we should raise the default from 10 to 32 or 64, whatever that may be. At least the results that I've seen so far show that there are cases where that's certainly beneficial. I think some of the CDNs actually do this already. They set their own limits higher than 10, which is what the IETF recommends.
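[Editor's note: on Linux, the initial congestion window is a per-route setting that can be inspected and, for experiments, overridden with iproute2. The gateway and device below are placeholders, and, as Ilya cautions shortly, raising it blindly is not advised:

    # Show the default route and any per-route TCP settings
    ip route show default

    # Experimentally raise the initial congestion/receive windows on that route
    ip route change default via 192.0.2.1 dev eth0 initcwnd 32 initrwnd 32
]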

I've also seen data that shows that oftentimes that hurts mobile clients, and especially mobile clients on slow connections. In general, with the domain sharding we use today, I think we use it far too aggressively, and there are some good case studies showing that for users on slow connections, we're actually hurting them doubly so, because not only are they slow, but we're also sending a lot of unnecessary data.

I'm not sure if I'm actually answering the question there. It's a tradeoff, and you have to balance it, and you have to understand where the users are, the geography, and all the rest. Then work out and back out a strategy, where maybe you increase your own limits. There are other experiments, you mentioned QUIC, which are exploring this space as well, and that's where we get into more interesting conversations around burstability versus packet pacing. There's a whole big variety of techniques that make this space more efficient.

Maybe just to finish this thought: simply increasing your congestion window size, I would advise against that. If you're thinking, hey, I'm now on a single connection so I need to increase my congestion window to 100, that's probably a bad idea.


Andrew:

With technologies like QUIC being developed that rely on UDP instead of TCP, do you expect there to be an HTTP vNext to follow soon after HTTP/2?


Tim:

First off, at the moment there is no HTTP vNext. It doesn't exist; there's no working group or anything like that. I would not rule out at some point another major overhaul to the protocol. It's going to be the same thing as H1.1, where we built it for what we're trying to anticipate, to the best of our knowledge. At some point, the web might scale to a point where we need to change it up significantly.

However, I think what you will see in the much more short term is important changes to the network stack that aren't a full blown brand new version of the HTTP protocol. That's where QUIC comes into play, which Ilya talked about: Quick UDP Internet Connections. Basically, taking the TCP, TLS, H2 stack and throwing it over the user datagram protocol, UDP, which has all sorts of advantages. One of them being that TCP is implemented at the kernel and firmware level, while QUIC over UDP lives in user space. It's going to be a lot easier for us to make significant changes, and experiment and iterate on it, than it was for TCP, which was a very long, painful, drawn out process.

Because it's looking at all the things that we've learned over the last 15 years on these different protocols, it has the advantage of that insight into what has worked on these stacks and what hasn't. It's working around things like congestion. It has this concept of a zero round trip time connection: if the client has made a connection to the server before, you can cache those credentials for the next time. Basically, it's zero round trips to get things started, which is huge.

QUIC is even ... I don't think people realize how close it is in terms of actually already being used in some places. Google, what was it, April of last year, had that post that said over 50% of Chrome traffic was using QUIC. I assume, Ilya, you can jump in on this in a minute, but I'm sure it's significantly higher now. It's there, Chrome can handle it. I know at Akamai we're working on an implementation. It's not that far out from at least being used by a subset of traffic. Then in addition, you have things like TLS 1.3, which is also working on reducing the cost of the SSL negotiation.

There are all sorts of improvements that are going to happen to the various layers of the stack. HTTP/3 or HTTP vNext? I don't know, but there are certainly going to be iterative improvements to the network stack along the way. It's not going to be another 17 years of nothing.


Andrew:

Encryption is a big topic in the whole community today. With most of the browsers requiring TLS, HTTPS, what's happening with the certificate management part of it? Is there a way to make that easier or more affordable, since that seems to be a requirement at this point?


Suzanne:

I feel like the person who asked this question wasn't aware of the universal SSL that's available for free for everybody at CloudFlare. I would highly encourage anybody who wants to start testing out HTTP/2 to just go ahead and sign up. We'll issue you a certificate pretty much automatically, and you can get started on this within about 15 minutes. That's kind of an easy question.

As far as the overall web, and the choices that are being made these days to start standardizing on TLS: I've heard it discussed that no web standard should ever be released again that doesn't have encryption built in. There have been so many problems over the years with having cleartext standards. People are kind of poor at protecting themselves. I think it's incumbent on us to be responsible and start making sure to be ethical with our users' data by encouraging the usage of encryption.

There are also other great projects besides what CloudFlare is doing, for example the Let's Encrypt project, and I'm sure there are going to be others with the HTTPS Everywhere movement. I encourage everybody to get going right away with that. As far as the marketplace for TLS, I believe that we might have made a little dent in it because of our universal SSL. You can probably expect certificates to be cheaper overall for the market.


Tim:

SSL traditionally has been expensive and very cumbersome. Thankfully, that's been slowly going away thanks to things like this. If you haven't played with Let's Encrypt, for example, Let's Encrypt is amazing, it's fantastic. I guarantee you, the first time you get yourself a Let's Encrypt cert and get it running, it's sort of a ‘holy cow’ kind of moment. The first time I used it, I think within like 5 minutes I had a cert on 2 different sites. The tooling is great, there's no cost there. There are improvements being made, absolutely, to reduce how painful that process is.
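[Editor's note: a sketch of that flow using the certbot client, assuming certbot with the NGINX plugin is installed; the domains are placeholders:

    # Obtain a certificate and configure NGINX in one step
    sudo certbot --nginx -d example.com -d www.example.com

    # Let's Encrypt certificates expire after 90 days; verify renewal works
    sudo certbot renew --dry-run
]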


Andrew:

How does HTTP/2 help with XMPP or web sockets?


Ilya:

That's a fun question. The short answer is it doesn't, because there's no defined mapping of web sockets over HTTP/2. There have been attempts and discussions; if you're curious about this and you want to learn about it, go to the IETF mailing list, where there are a number of threads discussing the various proposals. They didn't really get anywhere beyond a couple of draft specs. The thought was, to a large degree, that the reason we invented web sockets back in the day was the limitations of HTTP/1. Now that we have HTTP/2, it actually addresses a lot of those things. Do we even need to define the mapping? We could, but is it even meaningful in that sense? Personally, I think you could polyfill web sockets on top of HTTP/2 and get the same or better performance, better characteristics, because you're actually leveraging a well-defined protocol that is understood by all the proxies. You can have prioritization, you can have metadata. You see lots of people hacking things on top of web sockets, like wanting to send headers and other things. HTTP/2 gives you all of that.

I think, that's my theory, that we'll actually see web sockets fade. I don't think they'll disappear, but they'll fade in their importance. I think we'll start seeing replacements. To be clear, in the browser, there's still some missing bits that would need to happen to make that work. We would need to expose server push in a better way. I think the fundamentals are there.


Suzanne:

Speaking of server push, I'm not sure if you're aware, but we actually announced today that CloudFlare is enabling HTTP/2 server push for all of our customers as of today. You can go ahead and check out our blog post about that. I completely agree with Ilya that web sockets were made for a time when we didn't have these capabilities in HTTP/1. With server push and the other technologies, the multiplexing, web sockets become a little bit less necessary in the future web.
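[Editor's note: per CloudFlare's announcement, push is driven by the origin's response headers: a Link header with rel=preload is turned into an HTTP/2 push promise at the edge. A sketch of such an origin response; the paths are illustrative:

    HTTP/1.1 200 OK
    Content-Type: text/html
    Link: </css/site.css>; rel=preload; as=style
    Link: </js/app.js>; rel=preload; as=script
]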


Tim:

That's just going to be a fun one to watch. The story, famously, from anybody who was involved in the H2 stuff, is that every feature of H2, every component, needed to have some sort of hard, real world use case. [Server] push was the thing that everybody knew was going to be a good idea, but it didn't necessarily have the real world use case. It's sort of been rolling along this entire time, waiting for implementations so that people can play around with it.

Now that CloudFlare's got it, now that Akamai's got it, now that it's getting into these places, now that Canary's working on the dev tools to better expose this information, now it's going to be interesting, the next few months, now that people can play with this and actually experiment with it. What can we pull off here?

It's also going to be interesting to find out all the ways that I'm sure we're screwing it up. One of the things that we talked about with server push for the longest time was the idea of critical CSS: where you would inline the CSS, you could push it down instead. If you look at current implementations, that actually doesn't work so well. The timing is not that great.
 
Again, this goes back to what we've talked about: everything is just so early, and it's going to take a lot of time to really iron this out. I'm excited, over the next few months, to see what happens around push and what experiments people start coming up with now that they have access to it. It's going to be a big deal.
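[Editor's note: if you would rather experiment with push on your own server than at a CDN, here is a minimal hypothetical sketch using Go's net/http, whose http.Pusher interface landed in Go 1.8; the paths and certificate files are placeholders:

    package main

    import (
        "log"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // On an HTTP/2 connection the ResponseWriter implements http.Pusher;
        // push the stylesheet before writing the HTML that references it.
        if pusher, ok := w.(http.Pusher); ok {
            if err := pusher.Push("/static/site.css", nil); err != nil {
                log.Printf("push failed: %v", err)
            }
        }
        w.Header().Set("Content-Type", "text/html")
        w.Write([]byte(`<html><head><link rel="stylesheet" href="/static/site.css"></head><body>hello</body></html>`))
    }

    func main() {
        http.HandleFunc("/", handler)
        http.Handle("/static/", http.FileServer(http.Dir(".")))
        // ListenAndServeTLS negotiates HTTP/2 automatically via ALPN.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
    }
]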


Suzanne:

It is, it's a really big deal. There is currently a very small WordPress plugin that you can utilize to implement server push on your blog or your WordPress website. I actually have it enabled on my website, with a little bug fix. Hopefully, Mr. Dave Ross is listening and will accept my pull request to fix this plugin. I've heard that WordPress is considering adding HTTP server push as a core option, and I know that the Ghost blogging platform is also currently working on this.

This harkens back to what developers can do. Besides the server layer, they can also start working at the CMS layer. I'd love to see a Drupal push for adding server push capabilities, preferably in Core. It would be awesome if people could start thinking about and testing these different use cases. This is an opportunity to learn how to tune our web applications and actually only push the critical things. Someday we'll get to the point where the application can have a lot of the intelligence about which assets to actually push.

The other thing that will also make this a lot better is cacheability. It's difficult to know what the browser has cached or not: should I push it, shouldn't I? There are some very interesting technologies in the works to help make cacheability a little bit more clear for the server push side.


Ilya:

Just to add to that, I guess two comments. The CloudFlare post also mentioned this, but if you're playing with server push, Chrome Canary recently added a new, much improved visualization for server push. If you want to understand how that works, definitely boot that up and play with it, because it's significantly better than it was. You couldn't tell what was happening before; now you can.

Also, just watching the chat as it flies by: as we're talking about push, another use case that I think is really interesting, which I've seen a few people experiment with but have not seen put into production, is dependency trees. We have JavaScript modules and other things, these complicated graphs of dependencies. Server push can do a lot of interesting things there; if you know these things up front, you can push the required modules.

I expect to see some pretty interesting experimentation, now that this stuff is out there. We're starting to see it, so it's exciting.


Andrew:

When an HTML page is requested along with server push, if there are some external files like JavaScript stored on another third party server, does that slow down the overall TCP response, or does it split the connection into two so it serves the HTML along with other resources from the one server, and then wait for a response from another server?


Suzanne:

Yeah, I'll make a comment about this, at least from CloudFlare's side. The way we implemented it, we're only pushing assets that are actually coming from the server itself. We're not pushing third party assets. If you are including a lot of those, that might still be an issue, and we always encourage looking for blocking assets and making sure that things you're including from third parties are actually being delivered without hindering the loading of the other assets.


Ilya:

That behavior's actually per spec. You're only allowed to push resources that you own, because it'd be kind of weird, if you think about it, if my example.com site started pushing resources that belong to Google; only Google should be able to say what resources are loaded from Google. That's per spec. If you want, you could vendor some of those things. If you depend on something that loads from somewhere else, maybe it makes sense to move it to your domain. This comes back to our point about domain sharding. The more connections you have, to some degree, the less you're going to get out of HTTP/2. You do want to consolidate more and more of your resources under fewer origins. That's something you'll have to negotiate and figure out how to make work in your particular app.


Tim:

Yeah, because third party resources... we could have a whole other AMA on third party resources. We probably should at some point. Especially in H2 land, when we're talking about the multiplexing and the streaming: a third party requires an entirely different connection. The more third party assets you have, the less you're going to get out of the multiplexing feature of HTTP/2. Plus there's the whole requirement of TLS, and there can be some slowness there as well. Third parties are going to be a beast. They have been for a long time, but it's certainly an area that's, well, lots of fun.


Andrew:

I heard that the HTTP/2 TTFB (Time to First Byte), which is a measured metric in SEO and FEO, is sometimes seen to be higher than with HTTP/1.1. What can be done to have the TTFB measure be on par with HTTP/1.1 again?


Ilya:

The rumors you heard are not true. There is no reason why the HTTP/2 time to first byte would be slower. Look, if you put an HTTP/1 server and an HTTP/2 server side by side, why would the HTTP/2 time to first byte be slower? There's just no practical reason, unless you have a poorly implemented server. My guess is that where this is coming from is comparing HTTP/1 unencrypted versus HTTP/2 with TLS. Here's where we get into a whole other discussion around optimizing TLS. There's a lot that you can do. Unfortunately, speaking of getting the defaults right, a lot of the current servers that you just deploy don't have the defaults right. They actually give you pretty poor performance, and you need to dig in, use some tools, diagnose them, and tweak them. If you do it right, you will see at most one extra round trip. That in itself can be the difference.
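[Editor's note: to make "the defaults are not right" concrete, these are the kinds of TLS tunings commonly discussed for NGINX; a hedged sketch, to be verified against current guidance before deploying:

    # Reuse TLS sessions so repeat visitors skip a full handshake
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1h;

    # Staple OCSP responses to save the client an extra round trip
    ssl_stapling        on;
    ssl_stapling_verify on;

    # Smaller TLS records let the browser start parsing sooner
    ssl_buffer_size     4k;
]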

I would say that even if you're on HTTP/1, you should be moving to HTTPS anyway, because if you want access to powerful features in the browser, if you want security and privacy for your users, you should be doing that anyway. Once you're comparing HTTPS over HTTP/1.1 with HTTP/2, there's no reason why the HTTP/2 time to first byte would be slower. None.


Suzanne:

I'll mention another condition in which time to first byte ends up showing up really poorly: when you're doing compression, since you have to finish compressing before you know the actual content length. We actually tend to tell our users not to completely ignore time to first byte, but at the end of the day, the most important thing is how fast the stuff looks for end users. You can look at all the scores you want on a web page test; the most important thing is the response time the user experiences. I would focus on that, and not let TTFB get too much in the way of implementing something that makes the experience much better overall for the end users.


Ilya:

I think that's a very good and important point. TTFB is important as a metric. If you can make it faster, do so. That's just a good thing to optimize. You're right in that just watching the TTFB is not indicative of when the content is painted to the screen, which is ultimately what the user cares about. Not when they receive the first byte, but when is the text showing up on the screen? I can show you plenty of traces where I can see that, even if I compare the unencrypted version with encrypted over HTTP/2, the time to first byte may be slower, but the page renders faster, because we're able to leverage other features in HTTP/2 to fetch other things faster, maybe using server push, so we don't have to do extra round trips. One metric regresses, but the metric that you care about actually improves.


Andrew:

We talked a little bit about the thought of a vNext, a new spec following HTTP/2. Is there a roadmap for HTTP/3? Or do we see that we potentially might be working with HTTP/2 for a long while, similar to how long it took to go from 1.1 to HTTP/2?


Ilya:

I think the answer is, we don't know. As Tim mentioned, there are experiments. At the IETF there have been a couple of birds-of-a-feather sessions discussing potential next steps. There's nothing official in that regard, though. As Tim mentioned, there's a team at Google that has been experimenting with QUIC, which is effectively HTTP/2 over UDP. We've seen some pretty interesting results, enough to continue working on it, and we're pushing this data back to the IETF. I think it's to be determined. There's an upcoming IETF in Berlin in July, where we'll have a session. Maybe something will come of that; we'll see.


Andrew:

With HTTP/2 multiplexing, binary encoding, and compression, what impact do you think it will have on next gen firewalls that are used by enterprises to safely enable applications?


Ilya:

They'll have to be updated to understand it, that's what.


Tim:

Other than that I'm not sure either.


Suzanne:

I mean, I just know that from our perspective, for our web application firewall to work we have to decrypt at the edge and then re-encrypt to provide the ability to inspect things like headers and other layer 7 aspects. Those are the signatures that generally [inaudible] work off.


Andrew:

Some sites declare that they'll use HTTP/2 if the browser supports it. Is there any benefit in asking all servers to move to HTTP/2 by a specific date? Could we see a transition over the years, like the phasing out of Flash?


Ilya:

I think practically speaking, we'll see HTTP/1 stay here for decades. Frankly, HTTP 0.9 is still supported, you can fetch google.com via HTTP 0.9. Now, I'm not suggesting that you do that, or that's a good idea, but it's there. Just like HTTP/1 is there, I think HTTP/1 will continue to be there. HTTP has been incredibly successful, it has found its way into all kinds of nooks and crannies. It's built into firmware that is not updateable, sadly, so it'll be there, it'll stay there. I expect new services to take advantage of it, and clients will support both for a long time to come.


Suzanne:

Just speaking as somebody who remembers still back in the day when I first started learning about HTTP/1, one of the nice things about it is that it's so easy to work with, it's all plain text, it's easy to type into a terminal. Even for that reason, there's going to be interest in HTTP/1 for a while yet, besides all the baked in applications.


Ilya:

Actually, yeah, to that point: that has frequently come up as a contentious point, where it's like, well, before I could open Telnet and just type things in. As someone who has written HTTP libraries in the past, yes, it's true, it's very convenient. It definitely takes more effort to get bootstrapped, to understand binary framing and all that. But if, as an individual, you just want to play with HTTP/2 and understand it, we now have tools that make this very simple. If you have Go installed, there are a couple of tools that ship right with it, where you can type in the same text and it'll create the frames for you, so you can interact with it just as you've done with telnet, and get the same experience. There are plugins for Wireshark and all the rest. At this point, I'm actually very confident that as a developer, even if you're not knee deep in binary framing, you can make sense of this stuff at a very low level pretty quickly and with good success.
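[Editor's note: the Go tool Ilya most likely has in mind is h2i, an interactive HTTP/2 console in the golang.org/x/net repository. A sketch of a session; the host is illustrative:

    $ go get golang.org/x/net/http2/h2i
    $ h2i www.cloudflare.com
    # At the h2i> prompt, type "headers" and then an HTTP/1.1-style request;
    # h2i HPACK-encodes it into HEADERS frames and prints the raw frames
    # (SETTINGS, HEADERS, DATA) as they arrive.
]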


Tim:

Frankly, that's where the burden should fall: on the developer tooling. The benefits should go to the user. The user should not suffer so that we get slightly better visualization, or so that it's a little easier on us as developers to get insight into this. The burden falling on better tooling is exactly where it should lie.


Andrew:

What challenges will developers face in troubleshooting HTTP/2 pages and dev tools and how can they overcome these challenges?


Ilya:

Honestly, in the dev tools I don't think there's much you really need to care about. You can tell that a resource is being fetched over HTTP/2. I mentioned server push, so that's an area where we know we were lacking in the past; at least in Chrome Canary, now that's landed, it's much improved. I expect to see the same improvements in other browsers as well over time. We could probably surface a bit more around prioritization in the future, but that also requires that the browser itself does a better job of prioritization, so we need to do that work first.
 
I think in developer tools today, I'm pretty confident that there's really nothing that we need to worry about there. The browser takes care of most everything for you.


Tim:

One shout out I'll give for inside of Chrome, too, for the dev tools: if you go to chrome://net-internals/, you can get a view of an actual net log of what's going on with HTTP/2. That can be really insightful if you're getting knee deep into it and looking at how the [inaudible] frame is being sent, and prioritizations and stuff; it gives you some good insight into that. It's a clunky format, and I know that Rebecca Murphey has written a tool that's up on GitHub that lets you visualize that, which is pretty nice. If you want to get knee deep in something inside of the browser, that's an option.


Ilya:

That's a good point. I think of dev tools as a request response viewer. As a request and response, there's nothing different between HTTP/1 and HTTP/2 when you visualize it. It's like, I sent the thing with a bunch of headers and I got a response, also with a bunch of headers. Maybe I can tell the difference, that one was over H1 or H2. Net internals is awesome, but that is the deep dive into, okay, so how did the frames get interleaved on an HTTP/2 connection? If you're into that sort of thing and you want to analyze it, that's a great tool.

Then, if you want to go even deeper, maybe you're capturing a TCP trace. That's where you'll have to open up Wireshark and start doing your analysis there.


Suzanne:

Yeah, I agree. I've pretty much been able to just use the traditional developer tools for most of my debugging. I've only really had to resort to Chrome's net internals when we were working on server push, just to make sure that the push promises were visible and that we weren't having interruptions in streams.


Andrew:

Let's say I'm a company that's currently using HTTP 1.1 and I'm really interested in HTTP/2. What do I need to do to utilize HTTP/2 today, and make it as easy as possible for all the stakeholders of my organization?


Ilya:

I'm looking at Tim.


Tim:

As a representative of a CDN vendor, I suppose the answer is to use a CDN and press the little button, and you've got HTTP/2 on. If you've got a CDN, it really is a turnkey, push-button solution to at least get it enabled and get the process going. That's certainly a lot easier than the alternative: otherwise, you're probably going to get your server people involved and roll up to the latest version that supports it, or pull in the appropriate patches. From a technical perspective, it's really just about enabling it.

In fact, the way I typically recommend it is: get it enabled, whatever that process happens to be for you, and don't do anything else yet. This is the nerd in me who wants to see the impact of this stuff. Turn it on, see what that does. Have performance monitoring in place so that you're watching what happens once you've turned this on and what the impact is. Now start tweaking. Ilya's done a fantastic job in his presentations and in his book of outlining some of the optimizations that make sense here. You start with those, you start playing with those, and you slowly continue to iterate and find that ideal, beautiful world of H2 performance for your site or application.

Don't feel like you have to rush in and optimize all the things right off the bat, because I think there's a risk in doing things at this point where you don't know what value you're going to get, or where you may actually see a degradation in certain situations. It's an iterative approach, technically.
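[Editor's note: once it's enabled, a quick way to confirm the negotiation from the command line, assuming curl 7.50 or later built with HTTP/2 support:

    # Prints "2" if the request was served over HTTP/2
    curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/
]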


Suzanne:

Yeah, I agree, and to add on to that: in addition to your regular webpage performance, one should also be testing the invalidation of assets and how that affects performance on HTTP/1 versus HTTP/2. If you start optimizing for HTTP/2, you'll have smaller chunks that you need to invalidate, so theoretically you should see much better performance when you're doing agile development on your web applications.


Ilya:

Yep, and just to complete our round table on HTTPS: if you're not on HTTPS already, I think that's the first step you need to think about. Because there's a whole lot that comes along with HTTPS, right? You have third parties to think through: are they HTTPS capable, can I migrate all of my existing content, mixed content warnings; you have to resolve all of that. That's probably the biggest practical hurdle for a lot of people enabling this stuff. My claim is you have to do that anyway, because we're migrating the web toward HTTPS.

Once you have that resolved, enabling HTTP/2 is a flag. Either you log into your CDN and toggle a thing, or something similar.


Tim:

That's a solid point. The SSL is actually the bigger obstacle at the moment. As Ilya's been hammering home all the way through this, it's not really an optional thing anymore; it has to happen. Whether you're looking at H2, or you just want to use geolocation, or service workers, whatever it is. It's not an optional thing anymore. That's the biggest hurdle, and that's the one with probably the biggest [inaudible]. Everything after that is all downhill skiing.


Ilya:

Yeah. You know how today, if you go to a broken HTTPS site with a bad cert, it has that red lock? I see a future, I don't know how close it is, but I see a future where the plain HTTP site, because it's not encrypted, will have the red lock. That's the world that we actually want to get to, and we will get there. You might as well get started on that now.


Suzanne:

I actually already enabled that option in Chrome, now I can't load anything without getting a red bar if it's only HTTP.


Ilya:

That's a good point, you can actually experience this. If you go to Chrome flags, you can toggle a flag that will do exactly this. I believe it's called something like "mark non-secure origins as non-secure." You can experience it today, and the only question is when that will become the new default, not if. So: move to HTTPS, and enable HTTP/2.


Andrew:

Awesome. I think, unfortunately, we are all out of time today. Everyone, thank you so much for joining the discussion today. We had so many fantastic user generated questions. A special thank you to Ilya, Tim, and Suzanne; amazing stuff. Thank you guys for joining us. Feel free to check out the HTTP/2 eBook on the Catchpoint site, under Resources, eBooks.