
Bandwidth as a function of latency

pinion9

Maximum theoretical bandwidth is a function of latency. Does anyone have a formula for this? I am trying to settle an argument at work. For example: server A has a maximum bandwidth output of 3 Mbps. Client X has a latency of 60 ms round trip to the server, and Client Z has a latency of 500 ms round trip to the server. What is the maximum theoretical bandwidth each of these clients will receive? A formula would be great... Assume standard TCP/IP protocol.
 
It isn't that easy.

In practice both of them will get very similar speeds, but one will finish the download faster: the one with the shorter latency.

We'd have to know the TCP window size to do the full analysis.

The only real difference in speed will come from the one way time for an acknowledgement. So with a nice juicy TCP window of 65 Kbytes you can figure out how many ACKs will be sent for a given conversation/transfer and then multiply by one way delay and add this to the total time for the conversation.
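The window effect described above can be made concrete: with a fixed TCP window, throughput can never exceed the window size divided by the round-trip time, no matter how fast the link is. A minimal sketch in Python, using the 65535-byte window mentioned above and the round-trip times from the original post (these numbers are the only assumptions):

```python
# TCP throughput with a fixed window is capped at window_size / RTT,
# regardless of the raw link speed underneath.

def window_limited_bps(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput for a fixed window, in bits per second."""
    return window_bytes * 8 / rtt_seconds

WINDOW = 65535  # classic maximum TCP window without window scaling

for name, rtt in [("Client X", 0.060), ("Client Z", 0.500)]:
    cap = window_limited_bps(WINDOW, rtt)
    print(f"{name}: window-limited cap ~ {cap / 1e6:.2f} Mbps")
```

On these assumptions Client X's cap (about 8.7 Mbps) comfortably exceeds the server's 3 Mbps output, but Client Z's cap (about 1.05 Mbps) does not, so Client Z cannot fill the pipe on a single connection.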

In all reality bandwidth is really a function of bandwidth. Latency does have an effect on how long it takes to move something given equal bandwidth however. Bandwidth is just "how fast can I get a bit onto the wire"

What kind of asinine, off-the-wall theories do the work buddies have?

 
But it is true that if the latency is too high, even for a short period of time, the pipe will not fill completely. One provider has about 10% of the latency to a particular server that another provider has. The provider with only 10% of the latency actually sees speeds 3x higher than the other provider. Someone is trying to tell me that latency has no effect. I say BS.
 
Latency has very, very little effect on throughput unless it gets very high (seconds) and the TCP window starts to close.

Your buddy is right.

It is not true that if latency is very high the pipe will not fill.

Now where latency comes into effect is when there is a lot of "ping-pong" - a lot of packets going back and forth. That's when you won't be able to fill a pipe, because the delay is too great. Something like telnet is an example of ping-pong - single packets back and forth. Same with thin clients and other poorly designed apps.

But for large transfers (generally what people mean by bandwidth/capacity) a very large number of packets goes directly to the client without waiting for an ACK, and in that case the pipe fills very easily in one direction.

-edit-

I forgot about high-bandwidth, high-delay networks, where it "could" happen that you can't fill the pipe. But normally a 10+ Mb/s link doesn't have high delay, so it's very rare.
 
pinion9, see:

http://www.psc.edu/networking/papers/model_abstract.html

in particular,

Throughput <= ~0.7 * MSS / (rtt * sqrt(packet_loss))

remember that rtt = 2*latency. Graph that curve and show your co-workers. (note this says max throughput is proportional to the right hand side of that equation)

spidey07, real TCPs (e.g., with finite storage and finite windows) do more or less reflect this theoretical result in my experience. On a slow link, the link bandwidth (and serialization delay) ends up providing your max. On faster links (multimegabit on up) you really do have to watch the latency. In most real applications we can't go jumbo over the WAN, and we pretty much already have very low packet loss, so latency is the variable that counts most.
 
Originally posted by: cmetz
pinion9, see:

http://www.psc.edu/networking/papers/model_abstract.html

in particular,

Throughput <= ~0.7 * MSS / (rtt * sqrt(packet_loss))

remember that rtt = 2*latency. Graph that curve and show your co-workers. (note this says max throughput is proportional to the right hand side of that equation)

spidey07, real TCPs (e.g., with finite storage and finite windows) do more or less reflect this theoretical result in my experience. On a slow link, the link bandwidth (and serialization delay) ends up providing your max. On faster links (multimegabit on up) you really do have to watch the latency. In most real applications we can't go jumbo over the WAN, and we pretty much already have very low packet loss, so latency is the variable that counts most.

Yeah, I'll go along with that. You're right. But I'm not used to seeing 500+ latency on a high speed link. Something would have to be very wrong.

That equation is the whole basis for TCP performance. But I didn't think 3 megabit would be a "fast link". I don't even know what trans-US latency is nowadays... 70 ms one way? Trans-Pacific/Atlantic... 150 ms. Don't recall the going numbers these days. Must be getting old/foggy.

Somebody run the math and let's see what it says for this hypothetical scenario. MSS in this case should be what - 1460? And in modern WANs packet loss should pretty much be zero.
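Taking up the "run the math" request: a sketch evaluating the Mathis et al. bound quoted above, with MSS = 1460 bytes and an assumed (not measured) packet loss rate of 0.01%, since the formula is undefined at exactly zero loss:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_rate):
    """Approximate TCP throughput bound: ~0.7 * MSS / (rtt * sqrt(p)), in bits/sec."""
    return 0.7 * mss_bytes * 8 / (rtt_seconds * math.sqrt(loss_rate))

MSS = 1460    # typical Ethernet-derived maximum segment size, in bytes
LOSS = 1e-4   # assumed 0.01% loss; the bound diverges as loss approaches zero

for name, rtt in [("Client X (60 ms)", 0.060), ("Client Z (500 ms)", 0.500)]:
    bound = mathis_throughput_bps(MSS, rtt, LOSS)
    print(f"{name}: <= {bound / 1e6:.1f} Mbps")
```

With these assumed numbers, Client X's bound (about 13.6 Mbps) is well above the 3 Mbps server cap, while Client Z's bound (about 1.6 Mbps) is below it - latency alone holds Z under the server's output.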

-edit- looks like a great link, but I can't get to it. The heady stuff about TCP is always fun.

Here's another link I just googled...

http://www.acm.org/sigs/sigcomm/sigcomm98/tp/paper25.pdf

This should be fun... if packet loss is zero and the window holds steady, what does the graph look like? From what I remember the window is not just a function of loss but also of delay (by algorithm)... at what point does delay impact throughput, and is it linear?
 
spidey07, I've seen some badly designed carrier networks that built their backbone with a bunch of short point-to-point links. They might have plenty of bandwidth, but those trips in and out of routers start really adding up in latency. This is one of the reasons why real carriers use color-to-color DWDM interconnects, or at least fast L2 switching and virtual circuits. Those router trips through the output buffer do add up.

Of course, a well designed network gets packet loss as close to zero as possible and minimizes latency, too. Why? Because those two things affect perceived web performance hugely, and most customers care a lot about their perceived web performance 😉

Another possibly relevant example is some satellite links, I've seen fat pipes with huge latencies in that application. TCP's performance limitations wrt latency (and the HTTP/web limitations) end up driving proprietary "web enhancement" and "TCP enhancement" layers.

And remember that edge bandwidth is starting to get interesting. Like with Verizon's fiber to the premises, that supports 30Mb/s today and 155Mb/s after a forklift upgrade to GPON. If the rest of the network could also move that kind of bandwidth, and latency stays about the same, you could hit the limit.
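The bandwidth-delay product makes that last limit concrete: to keep a pipe full, a TCP sender needs at least bandwidth x RTT worth of data in flight. A sketch using the 30 Mb/s figure above and an assumed 70 ms round trip (a hypothetical cross-country path, not a measurement):

```python
def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps * rtt_seconds / 8

# 30 Mb/s access link, assumed 70 ms round trip
needed = bdp_bytes(30e6, 0.070)
print(f"BDP ~ {needed / 1024:.0f} KB in flight to fill the pipe")
print(f"A classic 64 KB window covers only {65535 / needed:.0%} of that")
```

So without TCP window scaling, a single connection over such a path would reach only about a quarter of the 30 Mb/s link rate - exactly the "can't fill the pipe" case described earlier in the thread.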
 
Originally posted by: cmetz
spidey07, I've seen some badly designed carrier networks that built their backbone with a bunch of short point-to-point links. They might have plenty of bandwidth, but those trips in and out of routers start really adding up in latency. This is one of the reasons why real carriers use color-to-color DWDM interconnects, or at least fast L2 switching and virtual circuits. Those router trips through the output buffer do add up.

Of course, a well designed network gets packet loss as close to zero as possible and minimizes latency, too. Why? Because those two things affect perceived web performance hugely, and most customers care a lot about their perceived web performance 😉

Another possibly relevant example is some satellite links, I've seen fat pipes with huge latencies in that application. TCP's performance limitations wrt latency (and the HTTP/web limitations) end up driving proprietary "web enhancement" and "TCP enhancement" layers.

And remember that edge bandwidth is starting to get interesting. Like with Verizon's fiber to the premises, that supports 30Mb/s today and 155Mb/s after a forklift upgrade to GPON. If the rest of the network could also move that kind of bandwidth, and latency stays about the same, you could hit the limit.

heh, I'm sure you and I have both designed our fair share of transport networks. You're right again though - I have seen some really dumb moves. Damn layer 8.

-edit- why do you think the mega routers are coming out? They have to.
 