
Latency/Bandwidth considerations/questions

Goi

Diamond Member
Hi,
Just a pretty random thought. Let's say I have a gigabit Ethernet link between A and B. That's 125MB/s. Wouldn't that mean that if I had a packet size of, say, 1KB, it would have to go from A to B in 8.2us (1KB/(125MB/s)), i.e. the latency is 8.2us? If this wasn't the case, and 8.2us later A wants to send the next packet but the first packet is still halfway along to B, wouldn't A get a collision and hence have to wait for a random amount of time? By extension, if this happened, wouldn't that mean that we won't even get close to 125MB/s?

Following my example above, even basic 10Mbps Ethernet would require a latency of 0.8ms, which is an order of magnitude lower than the ping times of machines in a basic LAN. Granted, the ping times are round-trip times, but even when halved, they're nowhere near 0.8ms, or are they?

I'm probably understanding this wrong, so please enlighten me.
 
you're getting latency and serialization delay mixed up.

Latency is the time elapsed from when the first bit enters a system until the last bit exits. That system could be an internetwork as well, leading to the 50-200 ms latencies you see.

Serialization is how long it takes a transmitter to "serialize" a frame onto the wire. What you're describing is serialization delay. The slower the clock, the longer the serialization delay.

-edit- modern networks are all full duplex so there are no collisions. There are separate transmit and receive paths.
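To make the distinction concrete, here's a quick Python sketch (my own illustration, not from the thread) of serialization delay: it depends only on frame size and link speed, never on cable length.

```python
# Serialization delay: time to clock a frame's bits onto the wire.
# Depends only on frame size and link bit rate, not on distance.

def serialization_delay(frame_bytes: int, link_bps: float) -> float:
    """Seconds needed to serialize a frame onto a link of the given bit rate."""
    return frame_bytes * 8 / link_bps

FRAME = 1024  # the 1 KB packet from the example above

for name, bps in [("10 Mbps Ethernet", 10e6),
                  ("100 Mbps Fast Ethernet", 100e6),
                  ("1 Gbps Ethernet", 1e9)]:
    print(f"{name}: {serialization_delay(FRAME, bps) * 1e6:.2f} us")
```

A 1 KB frame serializes in about 8.2 us on gigabit and about 820 us on 10 Mbps Ethernet, which is where the numbers in the original question come from.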
 
So, the time taken from when machine A sends a packet to when machine B receives it is called what?
 
Originally posted by: Goi
So, the time taken from when machine A sends a packet to when machine B receives it is called what?

latency, not counting serialization delay.

serialization delay plus latency (latency often includes things like speed-of-light propagation as well) = transmission time.
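That sum can be sketched in a few lines of Python. The signal velocity of ~2e8 m/s (roughly 2/3 the speed of light in copper or fiber) is my illustrative assumption, not a figure from the thread:

```python
# Transmission time = serialization delay + propagation delay.
# Assumed signal velocity: ~2e8 m/s, roughly 2/3 c in copper/fiber.

def transmission_time(frame_bytes: int, link_bps: float, distance_m: float,
                      velocity: float = 2e8) -> float:
    serialization = frame_bytes * 8 / link_bps   # clocking the bits onto the wire
    propagation = distance_m / velocity          # bits travelling down the wire
    return serialization + propagation

# 1 KB frame on gigabit Ethernet: a 100 m LAN run vs a 4000 km long-haul path
print(f"LAN: {transmission_time(1024, 1e9, 100) * 1e6:.2f} us")        # serialization dominates
print(f"WAN: {transmission_time(1024, 1e9, 4_000_000) * 1e3:.2f} ms")  # propagation dominates
```

On the 100 m LAN run, propagation adds only ~0.5 us to the ~8.2 us serialization delay; over 4000 km, propagation alone is ~20 ms and serialization becomes negligible.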
 
So transmission time is the time taken for the sender to serialize a packet onto the wire (serialization time?) plus the time for the packet to travel to the receiver (latency), until the receiver gets it? Is the time taken for the receiver to extract the data then network dependent, or receiver CPU/memory dependent? Seems to me like it should be the latter.

Also, back to my original example, does that mean that while the first packet is halfway along the cable and 8.2us later another packet is ready to be sent, the sender is still able to send it, i.e. the cable won't be "busy"?
 
Latency is generally defined as the time from the first bit out at the sender to the first bit in at the receiver. Everything else is serialization.

In today's full duplex networks the cable will only be busy while the transmitter is actually sending a frame (I'm talking layer 2 here).

But when you're moving data across the US or around the world, that latency adds up. On LANs we don't really care, as it should be "instantaneous" - sub-millisecond latency.
 
Thanks, I was referring to the LAN example. In fact, I was wondering about a point-to-point GbE link where the serialization time is around 8us. From your explanation it seems that the cable won't have a problem transferring 1KB every 8us even though the latency is sub-ms.
 
Originally posted by: Goi
Thanks, I was referring to the LAN example. In fact, I was wondering about a point-to-point GbE link where the serialization time is around 8us. From your explanation it seems that the cable won't have a problem transferring 1KB every 8us even though the latency is sub-ms.

nope, no problem. the transmitter will pump frames out as fast as it can at 1000 megabits per second, minus the interframe gap dictated by the Ethernet protocol.
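As a rough check of what that interframe overhead costs, here's a back-of-envelope sketch (my numbers, not the poster's; it treats 1 KB as the on-wire frame size for simplicity, and uses Ethernet's 8-byte preamble+SFD and 12-byte-time minimum interframe gap):

```python
# Back-of-envelope: how many 1 KB frames per second can gigabit Ethernet
# carry once the mandatory preamble and interframe gap are counted?

LINK_BPS = 1e9
FRAME = 1024        # bytes; treating 1 KB as the whole on-wire frame
PREAMBLE = 8        # preamble + start-of-frame delimiter, bytes
IFG = 12            # minimum interframe gap, 96 bit times = 12 byte times

bits_per_frame = (FRAME + PREAMBLE + IFG) * 8
frames_per_sec = LINK_BPS / bits_per_frame
print(f"{frames_per_sec:,.0f} frames/s, "
      f"{frames_per_sec * FRAME / 1e6:.1f} MB/s of frame data")
```

So back-to-back 1 KB frames still deliver roughly 122-123 MB/s of frame data on the wire; the overhead only shaves off a couple of percent at this frame size.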
 
Thanks. That's what I needed to know 🙂

Another thing... assuming a direct point-to-point connection between A and B, and a UDP data transfer from A to B, can it be assumed that there will be no collisions, out-of-order packets or duplicate packets, assuming A sends one packet at a time in sequential order as fast as it can? Packets would only arrive in ascending order, i.e. a packet with a higher sequence number can never arrive earlier than one with a lower sequence number (ignoring wraparound), and a jump in sequence numbers from n to m would indicate (m-n-1) dropped packets?
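Under exactly those assumptions (in-order delivery, no duplicates), the drop-counting logic described above can be sketched like this. This is just an illustration of the arithmetic, not code from the thread:

```python
# Sketch: inferring dropped packets from sequence numbers, assuming the
# point-to-point link delivers packets in order and never duplicates them.
# Wraparound is ignored, as in the question above.

def count_drops(seqs):
    """Count packets missing from a strictly increasing sequence stream."""
    drops = 0
    prev = None
    for n in seqs:
        if prev is not None and n > prev + 1:
            drops += n - prev - 1   # jump from prev to n skips (n - prev - 1) packets
        prev = n
    return drops

print(count_drops([0, 1, 2, 5, 6, 9]))  # seqs 3, 4, 7, 8 missing -> 4 drops
```

On a shared or routed network neither assumption holds, which is why protocols like TCP need real reordering and duplicate handling; on a dedicated full-duplex point-to-point link the assumption is much more defensible.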
 