Do you think that HTTP is antiquated and should be replaced?

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
Hi,

Just taking a poll here of experienced network / web guys.

Things like WebSocket, WebRTC, Ajax, long polling, automatic refresh, etc. are all ways to get around the inherent nature of the HTTP protocol: the client initiates contact with the server, gets a page, and the connection is severed.

Any new contact is brand new, with no connection at all to the last one or the next.

But if you have to keep using all these techniques to get around HTTP, why not just scrap HTTP altogether and come up with a truly full-duplex protocol for the modern two-way internet?
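
For anyone who hasn't seen one of these workarounds up close, here's a rough long-polling sketch in TypeScript (the /events endpoint and the JSON payload are made up) showing how "server push" gets faked on top of plain request/response HTTP:

```typescript
// Rough long-polling sketch (browser TypeScript). The /events endpoint
// and the JSON payload are made up. "Server push" is faked by the client
// asking again and again, because each HTTP exchange ends with its response.
async function pollForever(url: string, onEvent: (data: unknown) => void): Promise<void> {
  while (true) {
    try {
      // The server holds this request open until it has something to say,
      // then answers, and that exchange is over.
      const res = await fetch(url);
      if (res.ok) {
        onEvent(await res.json());
      }
    } catch {
      // Network hiccup: back off briefly before asking again.
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
    // Immediately start the next request; the "session" only exists
    // because the client keeps re-initiating it.
  }
}

pollForever("/events", (data) => console.log("server said:", data));
```

All of that re-requesting exists only because the protocol itself has no way to push.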

I think that we should.

But, I'm just a novice.

What do you think?
 

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,648
4,590
75
I think if you want to replace HTTP, you need to go a few steps further. One problem with the server connecting to the client is that it allows the client to get DDoSed: each TCP socket has to remain open for some period of time, and that can let all of the ports get filled.

Google wants to replace the TCP layer with QUIC over UDP. That shouldn't cause any blocking of ports, so something over UDP like this might allow the kind of full-duplex protocol you describe.

On the other hand, look at FTP. Originally it used a full-duplex "active" mode, with sockets going each way. But because of firewalls, the passive, client-initiated mode (where the client opens both connections) is now more commonly used.
 

KB

Diamond Member
Nov 8, 1999
5,406
389
126
Ajax is still HTTP. HTTP is GET, POST, DELETE, etc. over TCP/IP, and an Ajax call is just one of those requests made from script.
WebSockets are pretty neat, but I would rather not write my own protocol when so many already exist that do a fine job. For example, SPDY is a newer protocol (essentially what became HTTP/2) and it fixes many of the problems of HTTP.
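
To make that first point concrete, here's a rough sketch (the /api/save URL and the payload are made up); on the wire it's an ordinary HTTP POST:

```typescript
// An "Ajax" call is just an HTTP request issued from JavaScript.
// The /api/save URL and the payload are made up; on the wire this is
// an ordinary HTTP POST, same as any other.
async function save(note: string): Promise<void> {
  const res = await fetch("/api/save", {
    method: "POST", // plain HTTP verb
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ note }),
  });
  if (!res.ok) {
    throw new Error(`HTTP ${res.status}`);
  }
}

save("hello").catch(console.error);
```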
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
What do you think?
That HTTP 2.0 takes care of most of the important things for web-related stuff, and that most of its weaknesses are minor, anyway.

This modern two-way internet isn't. My PC will never see any attempt at an incoming connection from the outside world. End of story. The modem will pass it, and then it will be either ignored or logged. The client requests something of the server, and the server responds. That is a must. It can be made more efficient, and that is being done, but attempts to do it differently than that have all failed, and hopefully will keep on failing.

WebRTC exists to bridge video and audio over IP with web protocols, and while it looks like a monster, so does SIP. HTTP cannot do anything like real-time communication, so the idea, which is a good one, is to use HTTP(S) for control, and a sideband set of protocols for the actual communication.
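
Roughly, the split looks like this (the /signal endpoint and its JSON shape are made up; the RTCPeerConnection and getUserMedia calls are the standard browser API):

```typescript
// Rough sketch of the split: HTTP(S) carries the signaling/control
// messages, while the audio itself flows over WebRTC's own transport,
// not over HTTP. The /signal endpoint and its JSON shape are made up.
async function callSomeone(): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection();

  // Media rides the sideband transport that WebRTC negotiates.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Control plane: the offer/answer exchange goes over plain HTTPS.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const res = await fetch("/signal", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sdp: pc.localDescription }),
  });
  const { sdp: answer } = await res.json();
  await pc.setRemoteDescription(answer);

  return pc;
}
```

The fetch is the only HTTP in the picture; the media never touches it.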
 

Rakehellion

Lifer
Jan 15, 2013
12,181
35
91
That HTTP 2.0 takes care of most of the important things for web-related stuff, and that most of its weaknesses are minor, anyway.

This modern two-way internet isn't. My PC will never see any attempt at an incoming connection from the outside world. End of story. The modem will pass it, and then it will be either ignored or logged. The client requests something of the server, and the server responds. That is a must. It can be made more efficient, and that is being done, but attempts to do it differently than that have all failed, and hopefully will keep on failing.

WebRTC exists to bridge video and audio over IP with web protocols, and while it looks like a monster, so does SIP. HTTP cannot do anything like real-time communication, so the idea, which is a good one, is to use HTTP(S) for control, and a sideband set of protocols for the actual communication.

BitTorrent's creator is trying to use their peer-to-peer protocol to replace everything: video streaming, chat, cloud storage. Maybe websites in the future?

I kind of wanted to try their Dropbox clone, but the idea of having my files on some random guy's PC is unsettling. Not that Dropbox is much better.
 

greatnoob

Senior member
Jan 6, 2014
968
395
136
So you want a persistent protocol that doesn't fetch->upload->close like HTTP, but rather open->process->process->process->close like most game servers do? That would take much more processing power for barely any benefit. There's nothing wrong with on-demand connections except the overhead generated by socket accepts on bloated web servers, which could be used for DDoS attacks... but even now it's pretty hard to DDoS any 'large' site.

I have a very strong feeling that at some point in the next 40 years the web will reverse its direction - rather than doing everything in a half-assed manner it'll do one thing well. It might become a cloud service where even the rendering happens on the server and only input is handled by thin client devices.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I don't see HTTP/2 gaining traction for 5-10 years, if not more.
There are just so many legacy systems out there.

Heck, even IPv6 isn't widespread yet, it is still a novelty for most ISPs, even though IPv4 addresses have been depleted for the most part.
HTTP/2 doesn't need to replace everything. New browser support and new server support should be enough. Fallbacks will still work fine, to my knowledge; it's just that new clients talking to new servers will get better performance.
 

Red Squirrel

No Lifer
May 24, 2003
70,294
13,646
126
www.anyf.ca
HTTP is fine as it is; no need to add more complexity for nothing. For stuff that needs continual updating, it's up to the web page coder to add that feature using JavaScript, via some kind of polling.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
I don't see HTTP/2 gaining traction for 5-10 years, if not more.
There are just so many legacy systems out there.

Heck, even IPv6 isn't widespread yet, it is still a novelty for most ISPs, even though IPv4 addresses have been depleted for the most part.

IPv6 requires that your ISP and their upstream providers have router support. HTTP/2 only requires that your browser supports it and that the server you are connecting to supports it.

Since Chrome and Firefox already support it, I think HTTP/2 will be widely available pretty soon.

That being said, many people dislike various aspects of HTTP/2, so no doubt there will be websites that cling to HTTP/1.1.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
HTTP is fine as it is; no need to add more complexity for nothing. For stuff that needs continual updating, it's up to the web page coder to add that feature using JavaScript, via some kind of polling.

If you really need continuous updates, then WebSockets with an Ajax fallback is the way to go.
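
Something like this, roughly (the /live and /poll endpoints are made up for illustration):

```typescript
// Rough sketch: try a WebSocket first, and if it can't be opened, fall
// back to long polling over plain HTTP. The /live and /poll endpoints
// are made up.
function subscribe(onMessage: (data: string) => void): void {
  const ws = new WebSocket(`wss://${location.host}/live`);
  ws.onmessage = (ev) => onMessage(String(ev.data));

  ws.onerror = () => {
    // Fallback path: keep issuing ordinary HTTP requests.
    const poll = async (): Promise<void> => {
      while (true) {
        const res = await fetch("/poll");
        if (res.ok) {
          onMessage(await res.text());
        } else {
          await new Promise((resolve) => setTimeout(resolve, 1000));
        }
      }
    };
    poll().catch(console.error);
  };
}
```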
 

Leros

Lifer
Jul 11, 2004
21,867
7
81
HTTP has its place. I don't think it should go away, but maybe browsers should steer you towards HTTPS when browsing.

I can't find it, but there was a long rant about how forcing HTTPS would hurt academics. Apparently people at universities and research facilities often download very large (multi-TB or even PB) datasets over HTTP. They don't have the budget for hardware to serve HTTPS, and they don't have the budget for the increased bandwidth caused by the loss of caching under HTTPS. These sorts of places have massive caching proxies to reduce the bandwidth cost of multiple people downloading the same massive dataset.
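
The caching part is just plain HTTP semantics. A rough Node sketch (the dataset path is made up) of a response a shared proxy is allowed to keep a copy of; over HTTPS an intermediary can't see, let alone cache, any of this:

```typescript
import * as http from "node:http";

// A plain-HTTP response marked "public" can be stored by an intermediary
// proxy, so ten people behind the same campus proxy only hit the origin
// once. The dataset path below is made up.
http.createServer((req, res) => {
  if (req.url === "/datasets/run42.bin") {
    res.writeHead(200, {
      "Content-Type": "application/octet-stream",
      // "public" lets shared caches, not just the browser, keep a copy.
      "Cache-Control": "public, max-age=86400",
    });
    res.end("...dataset bytes would stream here...");
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```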
 

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
So you want a persistent protocol that doesn't fetch->upload->close like HTTP, but rather open->process->process->process->close like most game servers do? That would take much more processing power for barely any benefit. There's nothing wrong with on-demand connections except the overhead generated by socket accepts on bloated web servers, which could be used for DDoS attacks... but even now it's pretty hard to DDoS any 'large' site.

Well, I'm new to computers in general and very new to web / network stuff.

HTTP seems like it was an amazing accomplishment back in the day, but today, with the super-interactive nature of the modern internet (does anyone write static sites anymore?), I just think that two computers should be linked together the way two phones are.

It just seems more logical to me.

I have a very strong feeling that at some point in the next 40 years the web will reverse its direction - rather than doing everything in a half-assed manner it'll do one thing well. It might become a cloud service where even the rendering happens on the server and only input is handled by thin client devices.

What makes you say this?

Isn't the trend right now the exact opposite of what you're predicting?

It seems that more and more is being pushed onto the browser.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
HTTP has its place. I don't think it should go away, but maybe browsers should steer you towards HTTPS when browsing.

I can't find it, but there was a long rant about how forcing HTTPS would hurt academics. Apparently people at universities and research facilities often download very large (multi-TB or even PB) datasets over HTTP. They don't have the budget for hardware to serve HTTPS, and they don't have the budget for the increased bandwidth caused by the loss of caching under HTTPS. These sorts of places have massive caching proxies to reduce the bandwidth cost of multiple people downloading the same massive dataset.

Yes, the crew at CERN was the one making the comments IIRC.

There's just no way for them to reasonably share their incredibly massive datasets without layers of caching.

Granted, I don't think they should really be using HTTP to distribute their datasets, but it certainly does pose a problem.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
So you want a persistent protocol that doesn't fetch->upload->close like HTTP, but rather open->process->process->process->close like most game servers do? That would take much more processing power for barely any benefit. There's nothing wrong with on-demand connections except the overhead generated by socket accepts on bloated web servers, which could be used for DDoS attacks... but even now it's pretty hard to DDoS any 'large' site.

I have a very strong feeling that at some point in the next 40 years the web will reverse its direction - rather than doing everything in a half-assed manner it'll do one thing well. It might become a cloud service where even the rendering happens on the server and only input is handled by thin client devices.

That is already happening. With frameworks like ReactJS you can pre-render your site on your server using the exact same code that the client would download and run in their local browser. Combined with React Native, you can now use server-side rendering and client-side display for native iOS and Android apps.
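
A minimal sketch of the server half (the Hello component is made up; renderToString is the real react-dom API):

```typescript
import { createElement } from "react";
import { renderToString } from "react-dom/server";

// The same component code runs on the server to produce HTML, and the
// client can later hydrate it and take over. The Hello component is
// made up for illustration.
function Hello(props: { name: string }) {
  return createElement("h1", null, `Hello, ${props.name}`);
}

// On the server: render to a plain HTML string and put it in the response.
const html = renderToString(createElement(Hello, { name: "world" }));
console.log(html); // e.g. "<h1>Hello, world</h1>"
```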
 
Feb 25, 2011
16,992
1,621
126
It may not be perfect, but most of the problems are solved problems, and there are valid security concerns with making it a more two-way protocol.

So it's probably not going anywhere for a loooong time.
 

Gryz

Golden Member
Aug 28, 2010
1,551
204
106
You want to make servers (in any protocol) as stateless as possible. The more state a server needs to keep, the less it scales. You also get more stability/reliability problems the longer you keep state around. There are reasons that many protocols, like NFS, DNS, etc., are stateless. Experience has shown it is the better design. Unfortunately, after 20-25 years, it seems people have forgotten the lessons learned from the past, and everybody wants to make the same mistakes all over again.
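
Roughly what "stateless" buys you, as a sketch (the token format and handler are made up; a real system would sign and verify the token cryptographically instead of just base64-decoding it):

```typescript
// The server keeps nothing between requests; whatever state exists
// travels with each request (here, a token the client sends back), so
// any server instance can answer and a crashed one loses nothing.
interface Session {
  userId: string;
  cartItems: string[];
}

// Stand-in for real token verification.
function verifyToken(token: string): Session | null {
  try {
    return JSON.parse(Buffer.from(token, "base64").toString("utf8"));
  } catch {
    return null;
  }
}

function handleRequest(headers: Record<string, string>): string {
  // Everything needed to answer arrives with the request itself...
  const session = verifyToken(headers["authorization"] ?? "");
  if (!session) {
    return "401 Unauthorized";
  }
  // ...so the server remembers nothing once the response is sent.
  return `200 OK: ${session.cartItems.length} items in cart for ${session.userId}`;
}

// The client carries its own state in the token it presents.
const token = Buffer.from(
  JSON.stringify({ userId: "alice", cartItems: ["book"] }),
).toString("base64");
console.log(handleRequest({ authorization: token }));
```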

And what about servers rendering everything and clients only displaying? Right now an average webpage is about 1-2 MB, if I'm not mistaken? If you had to send an uncompressed 1920x1080 image, that would be 1920 x 1080 x 3 bytes ≈ 6 MB. Of course you can compress the picture, but then you require processing on the client again. Also, every time the webpage changes a little bit, the server needs to send the full picture again. And as resolutions on phones, tablets, and computers go up, the pictures a server needs to send only get larger. Not a good idea.

Client-side rendering was an excellent idea, just like the request/reply design of web servers. Of course there is always room for improvement, but when the design is right, you don't want to just throw it out.
 

h4rm0ny

Member
Apr 23, 2015
32
0
0
Ah, chrstrbrts's weekly essay question. I do wish you would at least tell us what marks we're getting from your tutor when you hand these threads in!

HTTP is going to be replaced... with HTTPS.

I'm guessing there's a good chance you're joking, as we're on the AT forums, but I don't know you and the OP might not pick up on this, so... HTTPS and HTTP are the same thing; one simply runs inside an encrypted tunnel and the other does not. In essence they're identical, much like putting a letter in an envelope or in a steel box: you're still using the same words, letters, and grammar.
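
You can even see it from Node if you're curious: the same raw request text goes over a plain socket for HTTP and into a TLS tunnel for HTTPS (example.com here is just a stand-in host):

```typescript
import * as net from "node:net";
import * as tls from "node:tls";

// The bytes of the request are identical HTTP either way; only the
// "envelope" differs.
const rawRequest =
  "GET / HTTP/1.1\r\n" +
  "Host: example.com\r\n" +
  "Connection: close\r\n" +
  "\r\n";

// Plain HTTP: the request text goes over an ordinary TCP socket.
const plain = net.connect(80, "example.com", () => plain.write(rawRequest));
plain.on("data", (chunk) => process.stdout.write(chunk));

// HTTPS: the exact same request text, written into a TLS tunnel instead.
const secure = tls.connect(443, "example.com", {}, () => secure.write(rawRequest));
secure.on("data", (chunk) => process.stdout.write(chunk));
```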

Well, I'm new to computers in general and very new to web / network stuff.

HTTP seems like it was an amazing accomplishment back in the day, but today, with the super-interactive nature of the modern internet (does anyone write static sites anymore?), I just think that two computers should be linked together the way two phones are.

It just seems more logical to me.

You want to make servers (in any protocol) as stateless as possible. The more state a server needs to keep, the less it scales. You also get more stability/reliability problems the longer you keep state around. There are reasons that many protocols, like NFS, DNS, etc., are stateless. Experience has shown it is the better design. Unfortunately, after 20-25 years, it seems people have forgotten the lessons learned from the past, and everybody wants to make the same mistakes all over again.

I could not have put this better myself; you've got right to the heart of the matter. Let the client preserve its state, let the server preserve state to a lesser extent, and let the protocol have no concept of state whatsoever.