Hi,
Just a pretty random thought. Say I have a gigabit Ethernet link between A and B; that's 125 MB/s. Wouldn't that mean that a packet of, say, 1 KB has to go from A to B in about 8.2 us (1 KB / (125 MB/s)), i.e. the latency is 8.2 us? If that weren't the case, and 8.2 us later A wants to send the next packet while the first one is still halfway to B, wouldn't A get a collision and have to back off for a random amount of time? And by extension, if that kept happening, wouldn't it mean we can't even get close to 125 MB/s?
Following my example above, even basic 10 Mb/s Ethernet would require a latency of about 0.8 ms, which is an order of magnitude lower than the ping times of machines on a basic LAN. Granted, ping times are round-trip times, but even halved they're nowhere near 0.8 ms, or are they?
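For what it's worth, here's the back-of-envelope arithmetic I'm doing, as a small sketch. It separates the time to clock the bits onto the wire (which is what the link speed determines) from the time a signal takes to travel the cable (which depends on distance, not bit rate). The `serialization_delay` helper and the ~2e8 m/s signal speed in copper are my own assumptions for illustration:

```python
# Sketch: serialization (transmission) delay vs. propagation delay.
# Uses the networking convention that 1 Gb/s = 1e9 bits/s.

def serialization_delay(frame_bytes, link_bits_per_s):
    """Time to clock every bit of one frame onto the wire."""
    return frame_bytes * 8 / link_bits_per_s

# 1 KB frame (1024 bytes = 8192 bits) on gigabit Ethernet: ~8.2 us
gige = serialization_delay(1024, 1e9)

# Same frame on 10 Mb/s Ethernet: ~0.82 ms
ten_meg = serialization_delay(1024, 10e6)

# Propagation is a separate term: it depends on cable length only.
# Assuming signals travel at roughly 2e8 m/s in copper,
# a 100 m run adds only about 0.5 us each way.
prop_100m = 100 / 2e8

print(f"1 Gb/s serialization:  {gige * 1e6:.2f} us")
print(f"10 Mb/s serialization: {ten_meg * 1e3:.2f} ms")
print(f"100 m propagation:     {prop_100m * 1e6:.2f} us")
```

So the figure I computed is how long A's transmitter is busy per frame, not how long a bit takes to arrive at B; the two are different quantities, which may be where my confusion lies.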
I'm probably understanding this wrong, so please enlighten me.