
So much negativity in this thread.

Personally I am _very_ excited by HTTP/3 (and QUIC), it feels like the building block for Internet 2.0, with connection migration over different IPs, mandatory encryption, bidirectional streams and it being a user-space library – sure, more bloat, but from now on we won't have to wait for your kernel to support feature X, or even worse, your ISP-provided router or decade old middleware router on the Internet.

I haven't had the chance to read the actual spec yet, but it's obvious that while the current tech (HTTP/2) is an improvement over what we had before, HTTP/3 is a good base to make the web even faster and more secure.

HTTP/3 won't be IPv6: it only requires support from the two parties that benefit from it the most: browser vendors and web server vendors. We won't have to wait on the whole internet to upgrade their hardware.



I'm worried, not because of the standard itself, which seems well thought out, even if rushed.

I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players---either because they have thousands of open source developers or are backed by a corporation---have it easy. Smaller players? Not so much.

Also, note that the exact problem HTTP/3 tries to solve was known during the design of HTTP/2, and some people even noted that having multiple flow-control schemes at multiple layers would become a problem. We are letting the same people design the next layer, and probably too fast, in the name of time to market.

This should definitely live in a form people can make use of easily, with an API highly amenable to binding. If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth. This kills operating system diversity as well, or runs the risk of doing so.

OTOH, I see the lure: SCTP never caught on for a reason, and much of this is the opposite of my above worries.


> some people even noted having multiple flow control schemes at multiple layers would become a problem

It could, but it didn't in reality. HTTP/2 has two levels of flow control, stream-level and connection-level. You use 1 connection per site and as many streams as you want multiplexed inside that connection, thus stream-level flow control is necessary to avoid stream head-of-line blocking.

The actual layering violation is connection-level flow control, which seems to duplicate TCP flow control. But it's not mandatory: as you can see, most if not all open-source implementations simply set a very large connection-level window size, handing flow control at this level off to TCP.

There is a good reason for this to exist: competing for bandwidth with the HTTP/1.1 domain-sharding technique, which uses N connections per "site", effectively getting N times the Initial Congestion Window (IW) that HTTP/2 can have over a single connection. IW was a huge factor in connection startup latency, and after managing to convince Linux netdev to raise it to 10, Google couldn't get them to let applications customize its value any further. The only remaining option for Google was to add flow-control information to HTTP/2 and couple it with TCP flow control to improve on IW. So in reality only one flow-control scheme is active at any time, contrary to the common perception of a TCP-over-TCP-style meltdown. Anyone else can simply skip connection-level flow control in HTTP/2 and nothing of value is lost.
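To make the "very large window" trick concrete, here is a sketch (a hypothetical helper of my own, not taken from any particular implementation) of the WINDOW_UPDATE frame from RFC 7540, section 6.9. Sent on stream 0 with the maximum legal increment, it lifts the connection-level window from the default 65,535 bytes to 2^31 - 1, effectively deferring to TCP:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: serialize an HTTP/2 WINDOW_UPDATE frame
 * (RFC 7540, section 6.9). With stream_id 0 it targets the connection
 * window; an increment of 2^31 - 1 - 65535 lifts the default
 * 65,535-byte window to the protocol maximum. */
static size_t build_window_update(uint8_t out[13], uint32_t stream_id,
                                  uint32_t increment) {
    out[0] = 0; out[1] = 0; out[2] = 4;   /* 24-bit payload length = 4 */
    out[3] = 0x08;                        /* frame type: WINDOW_UPDATE */
    out[4] = 0;                           /* no flags defined */
    out[5] = (stream_id >> 24) & 0x7F;    /* 31-bit stream id, high bit reserved */
    out[6] = (stream_id >> 16) & 0xFF;
    out[7] = (stream_id >> 8) & 0xFF;
    out[8] = stream_id & 0xFF;
    out[9]  = (increment >> 24) & 0x7F;   /* 31-bit window increment */
    out[10] = (increment >> 16) & 0xFF;
    out[11] = (increment >> 8) & 0xFF;
    out[12] = increment & 0xFF;
    return 13;
}
```

Sending this once at connection setup is all it takes; after that, only the stream-level windows and TCP's own flow control do any real work.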


The TCP state machine sucks and all of its timing parameters are outdated and unsuitable for modern networks. QUIC frees us from the tyranny of the kernel. Being in userspace is a feature.


The rallying cry of everybody who later comes to the realization that they have re-implemented TCP.


So what if we use our experiences and in-depth knowledge of a past protocol, take into account the flaws, and build something better? You say "re-implemented TCP" as if it's the only possible way to build a reliable packet protocol, and that it has no flaws, and we can't make any improvements to it.

TCP isn't alien technology we don't understand. We do understand it, and its limits, and its constraints, and that means we can build a better one next time.


The problem with coming “to the realization that they have re-implemented TCP” is that it's usually done ad hoc. In this situation, the re-implementation was done by people very familiar with TCP (its strengths, weaknesses, and assumptions) who very deliberately set out to “re-implement” TCP to work better with how our networks are actually configured.


Maybe re-implemented SCTP, but this time it's usable.


How does it work with a debugger? With TCP the connection didn't die just because you paused the program. But when everything is in userspace then that can't happen anymore?


We aren’t talking about raw networking; generally a QUIC implementation uses the kernel UDP stack, which buffers packets until read.


No, but TCP sends keep-alive packets or something in the kernel, right? If you can't send anything by the timeout then the connection should drop?


Only if you set the keepalive option, such as:

  int yes = 1;
  setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &yes, sizeof(yes));
Most servers (Xserver, imapd, sshd) set it.
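And the Linux kernel defaults (7200 s idle, 75 s between probes, 9 probes) mean a dead peer can linger for over two hours, so servers that care usually also tighten the timers. A hypothetical helper, using the Linux-specific TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options:

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Hypothetical helper: enable keepalive and (Linux-specific options)
 * tighten its timers -- start probing after 60s idle, probe every 10s,
 * give up after 5 unanswered probes. */
int enable_keepalive(int s) {
    int yes = 1, idle = 60, intvl = 10, cnt = 5;
    if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &yes, sizeof(yes)) < 0)
        return -1;
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
        setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
        setsockopt(s, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
        return -1;
    return 0;
}
```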


Vanilla TCP does not do this. Sockets remain established forever in the absence of traffic.


This is a core feature in TCP/IP. Only the endpoints actually involved in the connection care about what a "connection" really is. If they share a connection, it should be nobody else's business that they do.


This is definitely not true in this world which is filled with NATs everywhere. The intermediate routers very much care and must care about what connections exist.


> UDP stack, which buffers packets until read

Not really, or only up to a point. Then it will drop them into the bit bucket without telling either the sender or the receiver. With TCP the sender will "find out" eventually that the receiver isn't getting the data.

The point is that with streams on top of UDP all that has to happen in the application layer.
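For instance, you can ask the kernel exactly how much it will queue for a UDP socket before it starts dropping silently (a sketch; the helper name is mine):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Ask the kernel how many bytes it will queue for a UDP socket before
 * silently dropping further datagrams. (On Linux the value reported by
 * getsockopt is double what was requested, to cover bookkeeping.) */
int udp_rcvbuf_bytes(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) return -1;
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &size, &len) < 0)
        size = -1;
    close(s);
    return size;
}
```

A few hundred kilobytes by default on a typical Linux box; anything beyond that, while the application is paused in a debugger, is gone.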


Until someone writes it into the kernel at which point that won't be a feature anymore.


Except you don't have to use it if you don't like the kernel option, which a normal application can't do for TCP.


That's true, I didn't think of it like that.


Well, HTTP/2 makes a lot of timing assumptions as well; it's just that they happen in user space.


> Being in userspace is a feature.

This.


Would we be better served by Google reimplementing TCP on top of UDP, or by fixing TCP in Android, and on their servers, and telling us how they did it?

If it's better for TCP to be handled in userspace, fine -- they should build the APIs for that on the OSes they control; and agitate for it in the OSes they don't.

And, maybe, just maybe, they could turn on path MTU blackhole detection, please please please please please; it's only been in the Linux kernel for all versions of Android, but not turned on.


> I'm worried because you have a protocol implemented in userland for a few mainstream languages. It seems everyone now has to pay the price of a protocol implementation on top of a protocol implementation on top of a protocol implementation. Big players---either because they have thousands of open source developers or are backed by a corporation---have it easy. Smaller players? Not so much.

This can be partially mitigated in the same way it has been worked-around before: Through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.

E.g. on the server side it might be good enough to have an API gateway, load balancer or CDN which understands HTTP/3 and forwards requests as boring HTTP/1.1 to internal services. That's not very different from terminating TLS somewhere before the actual service implementation. Actually, service implementations don't even have to speak HTTP: they can also talk via stdin/out to an HTTP/3 server in another language, which brings us back to CGI.

On the client side, we could deploy a client-side proxy server which translates localhost HTTP/1.1 requests into remote HTTP/3 requests. If that thing is part of the OS distribution, it's actually not that much different from a TCP/IP stack which is delivered as part of the kernel. However if it's not part of the OS it might cause some trust issues. And apart from that it might be a bit inconvenient for users, since now applications need to be changed to make use of the proxy.


> This can be partially mitigated in the same way it has been worked-around before: Through proxies. The fact that HTTP/3 is still only HTTP makes it even easier.

But if we’re doing that we get none of the so called benefits Google-HTTP 2.0 and Google-HTTP 3.0 brings, so what’s the point of using them in the first place?

That’s completely ignoring Google-HTTP 4.0, 5.0 and 6.0 probably coming next year, and the issue of when Google thinks it is “reasonable” to break compatibility with the real HTTP, ie HTTP 1.1.


You still get some of the benefits for the connection to the client, assuming your use case fits. Many typical small setups serve static resources through the "proxy" (i.e. nginx for static assets and distributing requests to backends), benefiting there almost automatically. Similarly CDNs, which nowadays are used even by tiny projects.

(also, if you want your concerns to be taken seriously, I'd tone it down a bit. "so called benefits", "Google HTTP", and "probably coming next year" when QUIC has been in development and testing for over 5 years all don't really give the impression you actually care about the details)


> Big players---either because they have thousands of open source developers, or is backed by a corporation---they have it easy. Smaller players? Not so much.

If you think that's bad, try building a browser from scratch these days!

Then, make it adhere 100% to the HTML5 and CSS3 specs! (W3C versions; I know WHATWG uses living docs.)


> SCTP never caught on for a reason,

What is that reason, exactly?

I know why I never use it. The use cases where it really shines aren't that common, and it's a very heavy, telco-style, protocol.

However those things are (mostly) true for QUIC as well.


From the article:

Why not SCTP-over-UDP

SCTP is a reliable transport protocol with streams, and for WebRTC there are even existing implementations using it over UDP.

This was not deemed good enough as a QUIC alternative due to several reasons, including:

    SCTP does not fix the head-of-line-blocking problem for streams
    SCTP requires the number of streams to be decided at connection setup
    SCTP does not have a solid TLS/security story
    SCTP has a 4-way handshake, QUIC offers 0-RTT
    QUIC is a bytestream like TCP, SCTP is message-based
    QUIC connections can migrate between IP addresses but SCTP cannot


SCTP does not work through NAT and Windows does not support it.


SCTP has port numbers, so NAT is not a problem. But because of the second point, why should someone implement it? (iptables can do it of course)


I'm assuming the parent meant that NAT, as implemented in SOHO routers, does not support SCTP. They could implement it, but don't, and thus, NAT breaks it.

> But because of the second point, why should someone implement it?

I'm reading this as "SCTP has ports, why should someone implement it?" There is way more to SCTP than ports. For example, SCTP can deliver data on multiple independent streams, something HTTP/2 in many ways reinvents.


Ah sorry. Second point was Windows.


Windows


> If it gains traction, we need a new UDP interface to the kernel as well, for batching packets back and forth.

Those already exist.

http://man7.org/linux/man-pages/man2/sendmmsg.2.html http://man7.org/linux/man-pages/man2/recvmmsg.2.html
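A minimal sketch of using them together (the helper name is mine; Linux-only, since sendmmsg/recvmmsg are Linux extensions):

```c
#define _GNU_SOURCE
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Batch three UDP datagrams into one sendmmsg() syscall and drain them
 * with one recvmmsg() -- the kind of syscall batching a userspace QUIC
 * stack needs to keep per-packet overhead down. Returns the number of
 * datagrams received, or -1 on error. */
int udp_batch_roundtrip(void) {
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    if (tx < 0 || rx < 0) return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                 /* let the kernel pick a free port */
    if (bind(rx, (struct sockaddr *)&addr, sizeof(addr)) < 0) return -1;
    socklen_t alen = sizeof(addr);
    getsockname(rx, (struct sockaddr *)&addr, &alen);

    char payloads[3][8] = { "one", "two", "three" };
    struct mmsghdr tx_msgs[3], rx_msgs[3];
    struct iovec tx_iov[3], rx_iov[3];
    char rx_buf[3][16];
    memset(tx_msgs, 0, sizeof(tx_msgs));
    memset(rx_msgs, 0, sizeof(rx_msgs));
    for (int i = 0; i < 3; i++) {
        tx_iov[i].iov_base = payloads[i];
        tx_iov[i].iov_len  = strlen(payloads[i]);
        tx_msgs[i].msg_hdr.msg_iov     = &tx_iov[i];
        tx_msgs[i].msg_hdr.msg_iovlen  = 1;
        tx_msgs[i].msg_hdr.msg_name    = &addr;
        tx_msgs[i].msg_hdr.msg_namelen = sizeof(addr);
        rx_iov[i].iov_base = rx_buf[i];
        rx_iov[i].iov_len  = sizeof(rx_buf[i]);
        rx_msgs[i].msg_hdr.msg_iov    = &rx_iov[i];
        rx_msgs[i].msg_hdr.msg_iovlen = 1;
    }

    if (sendmmsg(tx, tx_msgs, 3, 0) != 3) return -1;  /* one syscall, 3 packets */
    int n = recvmmsg(rx, rx_msgs, 3, MSG_WAITFORONE, NULL);
    close(tx);
    close(rx);
    return n;
}
```

Over loopback all three datagrams are queued by the time sendmmsg() returns, so the single recvmmsg() drains them in one go.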


Irrespective of the protocol, my optimism for the future of the web has been curtailed by developments like extensions having less and less power over time (recent example is Google's plans to intentionally cripple ad blockers), plugins going away, hobbyist websites becoming more burdensome to set up and maintain if insecure http is deprecated, browsers planning to disable autoplay, etc. It feels like the golden age of the creative and vibrant web peaked during the brief window where all the new HTML5 stuff was around, Firefox used the old extension system, and Flash and Java applets were still common.

After that point it's been becoming more and more sterilized. My web apps that automatically played some sound aren't going to work anymore without some obnoxious "click here to begin" screen that doesn't fit in with the content. No more plugins letting us extend our browsers in new ways (what a convenient "coincidence" for Google that this gives them more control over what the user gets to do and makes tracking what goes on easier). I have to give Reddit Enhancement Suite permission every single time it tries to show a preview from a domain it hasn't previously done so from before. It's all suffocating. HTML5 makes up for some of the lost capability but it's not enough and what parts of HTML5 are going to actually work are basically at the whim of Google now.

But at least HTTP/3 will let us load buzzfeed listicles a few milliseconds faster, so there's that.


This is a special case of production values going up, as also seen in movies, video games, and many other products. User expectations gradually rise until only large organizations of professionals can meet them.

On the other hand, we already live in this world. When was the last time you used a homemade CPU or graphics chip?

It's still possible for an indie scene to arise that values hand-crafted stuff, possibly at a different layer.


Yep. Sadly, the Web is a victim of its own success. There aren't just a few bad actors anymore. There are legions more than willing to write an endless number of malware extensions with randomly permuted uuids. There's an ad industry that long ago went off the deep end and are hoping people don't notice just a little longer. The price list for exploits is well known and buyers are easy to find.

Then again, it is still a massive cross platform content publishing and distribution system that works, despite the hostile ecosystem it inhabits. And it even includes the first truly successful cross platform programming environment.

So there's that.


At least Let's Encrypt has made certificates easier than ever to add and update. Installation of a self-updating certificate takes less than 10 minutes on many server setups.


The sheer fact that you need to involve a third party for encryption shows that the web is fundamentally, conceptually broken and no longer lives up to its original design goals.


A few things:

1. You don't need Web PKI certificates for encryption. Indeed in TLS 1.3 this is very obvious because the encryption switches on before any certificates are even involved. You need certificates to... certify identity. And this isn't some oddity of "the web" which might show it's "broken" but simply a mathematical fact about what identity is. If you don't want certificates, you have to just magically know every identity somehow. Works for ten PCs in your office, doesn't scale for tens of millions of web sites.

2. Tim's "Original design goals" are for a system that runs at CERN in Switzerland and is modelled on an earlier system he'd worked with in the 1980s. Tim's system has no encryption, nor does it have most other features you'd expect.


You don't need a third party. You can `openssl req` a self-signed certificate, and as long as whatever device you want to talk to accepts it, you get secure communication.

The other comment sums it up, a third party is a good line between convenience and security.


>You can `openssl req` a self-signed certificate, and as long as whatever device you want to talk to accepts it

Device? We're talking about browsers. Browsers are getting increasingly hostile towards self-signed certs. Ironically, Google doesn't trust third-party root CAs, so they became one themselves. It's good to be the exception to the rules you push on others.


The public internet is not a sandbox for hobbyists any more, like it was in 1993. Now there are incentives to crack you, impersonate you, tamper with the information you're serving. The web had to adapt or perish.


I agree. It may only theoretically be a problem that a set of trusted CAs can dictate who can communicate with each other, but theoretical problems have nasty ways of eventually becoming concrete.

It's definitely worth having the encryption that prevents a lot of problems today, but I'm worried that QUIC has no unencrypted variant at all. That's almost certainly safer for the user, but it means that if a government blacklisted you out of a certificate, you're screwed.


wat

I'm trying to interpret your stance in the most favorable possible manner, but... dude. If you think hobbyist websites are increasingly burdensome to set up, you haven't been paying any attention at all.


The environment became more restrictive with the loss of Flash/Java and now things like breaking autoplay, and more burdensome in some ways like with the https issue, even if it's faster to spin up a cloud instance and JS libraries are more streamlined now.


HTTPS (cert creation and auto-renewal) is trivial thanks to LetsEncrypt.

Flash/Java (applets, presumably) were never easier to deploy than HTML... and deploying static sites continues to get easier and easier. See eg Netlify or Zeit/Now.

Autoplay is abused by advertisers and is a terrible UX. I get that you have a particular, outdated workflow and you'd prefer that nothing change, but really that ship sailed a long time ago.


It's all well and good to pine for the days of old, but when you consider the real-world implications that led to the removal of Flash/Java from the ecosystem, I'd gladly give up the opportunity to experience your art installation without a clickthrough to keep our systems secure.


Autoplay was always terrible and I'm happy to see it die. I don't think I'm alone in this.


This off-topic post is akin to "we have homeless people, so no resources should be allocated to space flight".

The energy invested in developing HTTP successor protocols is not being taken away from efforts to stop Google from ruining the concept of the web browser as a _user_ agent.


It's a misconception that you have to wait to adopt IPv6.

If you're a large organisation you can move to IPv6 "today". What you do is, internally you cease buying IPv4-only gear and using IPv4 addressing etcetera. Everything inside is purely IPv6. A lot of your networking gets simpler when you do this, and debugging is a LOT smoother because there's no more "Huh 10.0.0.1, could be _anything_" everything has globally unique addresses because it's not crammed into this tiny 32-bit space.

At the edge, you have protocol translators to get from IPv6 (which all your internal stuff uses) to IPv4 (which some things on the Internet use), but you probably already had a bunch of gear at the edge anyway, to implement corporate policies like "No surfing for porn at work" and "Nobody from outside should be connecting to port 22 on our machines!".

This isn't really practical for "One man and an AWS account" type businesses where your "Internet access" is a Comcast account and an iPhone, but if you're big enough to actually have an IT department, suggest they look into it. It may be cheaper and simpler than they'd realised.


> What you do is, internally you cease buying IPv4-only gear and using IPv4 addressing etcetera. Everything inside is purely IPv6.

"Throw everything away and start from scratch." uh yeah, that's totally gonna work for a large organization. They'll be done in an afternoon! That includes rewriting all your legacy apps that only support ipv4, including the ones you bought from 3rd parties where you don't even have the source code.


> It's a misconception that you have to wait to adopt IPv6.

Yes and no. As I stated at the end of my comment, the problem w/ IPv6 is that it's not clear who benefits the most. I am interested in it, as a power user. Average Joe doesn't care. App developers don't care (no killer IPv6 apps yet). Large ISPs with extensive CG-NAT deployments don't care (not worth the money yet; see IPv6 adoption in the UK).

Who cares about HTTP/3? Average Joe — Not really. Mozilla/Google — Hell yeah they do. It'll be in Chrome before anyone else (if it isn't already). Same with nginx/Apache/any other webserver, Joe Blog with his own VPS will want to enable it. And that's all you need.


> (no killer IPv6 apps yet).

If it helps, Apple now requires apps support IPv6-only networking.

https://developer.apple.com/support/ipv6/


This is just client side, any tunneling or NAT64 is still allowed.


Running any kind of p2p client is more feasible with ipv6 if your v4 is behind a CGNAT.


It may be my embedded-developer bias, but I don't actually consider moving things out of the kernel to be necessarily a good thing. Standard kernel interfaces are (usually) a guarantee of stability and good isolation, are generally easier to probe using standard tools, easier to accelerate in hardware, etc. Not everything should be in the kernel of course, but low-level network protocols should be, IMO, because they're good targets for potential hardware acceleration (I'm convinced that it would make sense to handle SSL in-kernel for instance, with a userland daemon handling certificate validation, but that's a story for another day).

I mean, if you can easily update whatever userland library you're using, why can't you upgrade your OS? If the library is easy to upgrade it means that it uses a well defined and backward-compatible interface. What do you get by shifting everything one layer up? In the end it's just software, there's not really any reason why upgrading a kernel driver should be any harder than upgrading a .so/.dll.

So the logic is "kernels are too slow to update and integrate the last new standards, so let's just move everything one step up because browsers auto-update"? Except that there's no technical reason for that, on my Linux box my browser and my kernel are updated at the same time when I run "apt-get upgrade" or "pacman -Syu" or whatever applies. The kernel I'm using at the moment has been built less than a week ago.

So if the problem is that Windows sucks balls and as a result people end up effectively re-creating an operating system on top of it to work around that, then yeah, from a practical standpoint I get it but I'm definitely not "_very_ excited" about it. It's a rather ugly hack.

If, in general, the question is "who do you trust more to select and implement new internet standards, kernel developers or web developers?" then I take a side-glance at the few GBs used by my web browser to display a handful of static pages at the moment, and I know the answer as far as I'm concerned...

So yeah, it might make sense, but I still think it just goes to show what a shitshow modern software development has become. Instead of fixing things we just add a new layer on top and we rationalize that it's better that way.


The problem isn't your kernel that needs updating. That's the least of your problems.

The problem is the network appliances that are sitting between you and the server, i.e. the whole internet. To support feature X, everything between you and the server will need to support it (unless it's backwards compatible, but that's not always the case, as described in the article).

Decade-long adoption would mostly solve this problem, but even then, one day your packet gets routed through some router running Linux v2.5 and your connection silently fails.

This isn't good enough to build a faster (and more reliable) internet on, whereas UDP is a 40-year-old standard, and we can assume everybody supports it, even Linux v2.5.


I agree about the use of UDP to be able to reuse network gear, my comment was prompted by this part of the parent's comment:

>from now on we won't have to wait for your kernel to support feature X

This is orthogonal to the issue you're discussing (for instance as a thought experiment you could design a new protocol on top of ethernet in userland using raw sockets and it won't be supported by anybody, or you could implement something on top of TCP in the kernel and it'll work everywhere).

I just wanted to point out that outdated kernels aren't inevitable; they're a consequence of bad industry practices (in particular, although not uniquely, by Microsoft with its Windows OS). On Linux everything is updated together and the kernel is mostly just another package, so it's a non-issue. It also means that applications don't have to ship a custom updater (and all related infrastructure) themselves.


Supporting a new protocol on top of IP using raw sockets can be supported by endpoints that are running your software. Except, of course, if you go through NAT or any of these "middleware boxes" that litter the Internet.


> On Linux everything is updated together and the kernel is mostly just another package, so it's a non-issue

Except on my Linux (Ubuntu): yes, the kernel is patched, but the version doesn't increase very often at all, sadly. I decided to run the mainline kernels since I'm on a laptop and I find that beneficial, but I believe that's not the default for most Linux installations.


I just now realized that the telephony mindset has won: the "dumb network" (slinging packets to an intelligent edge) that was the Internet back in 1992 is now dead and buried. All hail the "smart network!"


Why is that? What’s dumber than UDP (from the network perspective)? From the network perspective it’s a lot dumber than TCP.


Indeed. But (not sure if that's what GP meant just guessing) it's sad that tons of middleboxes that peeked too much inside packets effectively broke "end-to-end" goals, so now it's impractical to deploy any new protocols (such as SCTP) alongside existing UDP and TCP. Most practical progress is made by layering more and more on top of existing protocols :-(

Actually I'm kinda relieved QUIC succeeded at all with much less layering on top of existing stuff than usual. (Compared to, say, WebSockets-over-HTTPS-over-TLS-over-TCP-over-someIPv6-over-IPv4-tunnel...). If it's feasible to deploy a major new protocol over just UDP, that's practically as good as directly over IP!

P.S. I think encryption is the main force that held back the (economically almost inevitable) desire of middleboxes to "add value" by manipulating inner layers.


> UDP ... we can assume everybody supports it, even Linux v2.5

If you can actually remember the days of Linux 2.5 (development branch which became 2.6) this is a hilarious analogy. I guess that's what the kids are calling ancient these days, eh? Linux 2.5, when dinosaurs roamed the earth! It even did UDP, can you believe it?!


I generally agree with that, but what about mobile? AFAIK Android kernel updates are really slow to reach users, when they reach them at all (depending on support from vendors, which varies a lot).


Linux has TLS in kernel right now, with userland cert handling. Barely anyone is using it though.

On the other hand there are some really nice QUIC implementations in Rust, and running in userspace has security advantages.


Honestly, I'd feel much better if people were standardizing QUIC and we simply run HTTP/1.1 over it.

Instead we now have transport layers that are application specific and 3 completely different web protocols with none of them being considered legacy, 2 of them being complex enough that people aren't very willing to move.

That does not look like a good foundation for anything.


Part of the process of moving QUIC through the IETF was separating it from the HTTP layer, which is why HTTP/3 and QUIC are now different things. Given the history and the players involved, the HTTP use case was a priority, but other companies are looking to use QUIC underneath other protocols. From my understanding, compared to HTTP/2, the stream concept has moved out of the HTTP spec into the underlying layer, where it hopefully can benefit other use cases too.


Indeed, and the encryption is split too. Overall it is starting to look nice to use, although I need to read the spec in more detail to understand a bunch of the details.


http/2 usage has gone from 24% to 33% in the last year, which is pretty good growth.

https://w3techs.com/technologies/details/ce-http2/all/all


Googlenet is not internet 2.0 and barely anyone in the world beyond a couple of megacorps can benefit from HTTP/2, HTTP/3, HTTP/4, etc. It feels more like the web is dead, completely captured by megacorps.


So, presumably organisations like the IRS and Wikipedia are "barely anyone" and all of the big technology companies are "a couple of megacorps" but can you explain why you believe the _users_ don't benefit?

[About a third of all "popular" (ie top 10 million) web sites are HTTP/2 today]

Or did you just mean "I don't care about the facts, I'm angry and the world changes which I don't really understand, so I just make things up and call that truth because it's easier" ?


>[About a third of all "popular" (ie top 10 million) web sites are HTTP/2 today]

Don't forget that a huge chunk of them are hosted on megacorp cloud platforms.

Everything became so "simple" and "streamlined" that companies are forced to outsource all their hardware and platform management and then hire a small army of AWS certified devops.


> companies are forced to outsource all their hardware and platform management

Nothing is being forced. You can still set up a server in your basement, or rent/build a data center and run nginx to get all of the benefits of H2, TLS1.3, etc. You can even get "megacorp-quality platform management" with things like Outposts, GKE on-prem, Azure stack, etc.


>Nothing is being forced.

Not directly, but it is by complexity of dominant technology stacks, protocols and standards that are influenced by ubercorps.


The web is captured by megacorps but it's not captured because of HTTP/2.0. It's captured because of network effects or whatever. And you are wrong, there is a benefit from using HTTP/2.0 on any website that has more than 1 resource to download.


QUIC is developed by an IETF working group where anyone can participate, and there are definitely some productive participants who don't work for Google (or any of the other big companies).


Like they listened to the Varnish author about making actually useful changes? For example, implementing proper sessions, so cookies could go away.


Just because they didn't agree with his suggestion to make major changes to the protocol semantics doesn't mean they didn't listen to him.


TLS Token Binding provides exactly that.


Yeah, and who is using it? Why are cookies still there?


Cookies can do more than sessions.


Sure they can, but sessions are pretty much the only reason why end users have cookies enabled.


That's not really true. Independent participants don't have any power to affect any of it.


Running nginx as a reverse proxy on an internal system. HTTP/2 happens automagically if a client requests it.

It definitely has an impact on our system which requires sub 50ms response times on 2000+ concurrent requests.

It's a PITA if you want to debug the streams, because they're not plain text; but given that we're over TLS, that's not really possible anyway.

In testing, we use ye-olde HTTP/1.1 and no TLS, but even over HTTP/2 and TLS, the browser will still display a JSON request/response happily. Rare that we have to go lower in the stack.


About the mandatory encryption and the performance: this will prevent ISP caching of static content. That would be bad news for, say, Steam and Debian, who use HTTP (not HTTPS [0][1]) to distribute content. (They verify integrity with secure hashing, of course.) I presume they'll decline to adopt HTTP/3.

[0] https://developer.valvesoftware.com/wiki/SteamPipe#LAN_Cachi...

[1] https://whydoesaptnotusehttps.com/


I can't believe you wrote "Internet 2.0". Not sure why.


Bit late to the party. I misunderstood HTTP/3, but I am very excited for QUIC, and _not_ because of Google's illuminati spec but because I hope people will be interested in secure UDP.

I like datagrams so much more than an accept-listen-keepalive-blocking-foreverloop-callback-async-threaded-future hell that is TCP.

I could be wrong of course I need to read the spec too. Anything UDP makes me giddy.


> or even worse, your ISP-provided router or decade old middleware router on the Internet.

I can guarantee you that middleware will continue to exist. If they need to they'll force QUIC connections to terminate and switch to TLS 1.3. There's no way that companies will allow encrypted communications leaving their companies en-masse without being able to decrypt the content. Even more so for any totalitarian state governments that need to spy on their citizens..


> There's no way that companies will allow encrypted communications leaving their companies en-masse without being able to decrypt the content.

Then they'll install MITM certificates on the individual endpoints that they already control. The ability to intercept connections between endpoints is inexorably going away.



