
Bandwidth as a function of latency

pinion9

Banned
Maximum theoretical bandwidth is a function of latency. Does anyone have a formula for this? I am trying to settle an argument at work. For example: server A has a maximum bandwidth output of 3 Mbps, Client X has a round-trip latency of 60 ms to the server, and Client Z has a round-trip latency of 500 ms to the server. What is the maximum theoretical bandwidth each of these clients will receive? A formula would be great....
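For a concrete formula: in standard TCP, a sender can have at most one receive window of data in flight per round trip, so throughput is roughly capped at min(link rate, window size / RTT). A minimal sketch in Python, assuming the classic 64 KB window with no window scaling and the 3 Mbps server cap from the question (the function name and window size are my assumptions, not from the thread):

```python
WINDOW_BYTES = 64 * 1024   # assumed: classic TCP window, no window scaling
LINK_BPS = 3_000_000       # server A's 3 Mbps output cap

def max_tcp_throughput_bps(rtt_seconds, window_bytes=WINDOW_BYTES, link_bps=LINK_BPS):
    """Window-limited TCP throughput, capped by the link rate."""
    return min(link_bps, window_bytes * 8 / rtt_seconds)

print(max_tcp_throughput_bps(0.060))  # Client X, 60 ms RTT  -> 3000000 (link-limited)
print(max_tcp_throughput_bps(0.500))  # Client Z, 500 ms RTT -> 1048576.0 (window-limited)
```

Under those assumptions the 60 ms client is link-limited (the window alone would allow ~8.7 Mbps, so it sees the full 3 Mbps), while the 500 ms client is window-limited to ~1.05 Mbps.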
 
Originally posted by: Peter
That depends on how large the data burst is that follows the initial latency.

Next!

What a terrible answer. This is a real-world problem using the standard TCP/IP protocol.
 
So you get x amount of delay (latency) in front of each 4-KByte packet, unless you are transmitting full duplex, in which case the latency is only incurred once, because request packets flow out while data packets flow back at the same time.

The rest is simple calculations.
 
Huh? I don't think latency is relevant here at all - TCP allows you to have multiple packets in flight at the same time. As long as your latency isn't 64000 packet-transmission times long (assuming a 64k-packet window), it shouldn't matter, should it? If latency really affected bandwidth like that with TCP, satellite internet (>300 ms latencies) wouldn't be able to offer high bandwidth at all, while in reality it can offer broadband-level bandwidth.
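The window-size caveat above can be made concrete with the bandwidth-delay product: the window must hold at least link rate × RTT bytes to keep the pipe full, which is why high-latency satellite links need large (scaled) windows to deliver high bandwidth. A sketch with assumed example numbers (10 Mbps link, 600 ms satellite RTT); the function name is mine:

```python
def window_needed_bytes(link_bps, rtt_seconds):
    """Bandwidth-delay product: bytes that must be in flight to keep the pipe full."""
    return link_bps * rtt_seconds / 8

# A 10 Mbps satellite link with 600 ms RTT needs ~750 KB in flight,
# far more than a classic unscaled 64 KB TCP window.
print(window_needed_bytes(10_000_000, 0.600))  # ≈ 750000 bytes
```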
 
Latency isn't relevant here.

You can have a large pipe with high latency or a small pipe with low latency.
For example, server A has a maximum bandwidth output of 3 Mbps. Client X has a round-trip latency of 60 ms to the server, and Client Z has a round-trip latency of 500 ms to the server. What is the maximum theoretical bandwidth these clients will receive?
Initially (from 60 ms to 500 ms) client X would get all 3 Mbps of your bandwidth; after you hit the 500 ms mark, the bandwidth would be allocated (more or less) equally. It would take a little time for the protocol/application to level out, but if it's a large transfer (e.g. a large FTP transfer), they would both eventually get roughly equal amounts of bandwidth.

Alternate scenario:
You have an application that requires every packet to be delivered in order, with a response coming back from the client to the server before the next packet is sent. Then the MTU would make a big difference in your total throughput.
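In that strict request/response (stop-and-wait) case, only one MTU-sized packet is in flight per round trip, so throughput ≈ MTU / RTT. A sketch assuming a typical 1500-byte Ethernet MTU and the RTTs from the original question (the function name is mine):

```python
def stop_and_wait_bps(mtu_bytes, rtt_seconds):
    """Stop-and-wait: one MTU-sized packet delivered per round trip."""
    return mtu_bytes * 8 / rtt_seconds

print(stop_and_wait_bps(1500, 0.060))  # Client X, 60 ms RTT:  ≈ 200 kbps
print(stop_and_wait_bps(1500, 0.500))  # Client Z, 500 ms RTT: ≈ 24 kbps
```

Neither client comes anywhere near the 3 Mbps link rate in this mode, which is why MTU and latency dominate here but not for pipelined transfers.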

But *most* applications do not function this way.

-Erik
 