Originally posted by: ScottMac
<snip>
...and the callee can then answer them all in turn, without delay, and without the callers having to randomly call back...
There is no such thing as a buffer with "no delay": that is the whole reason for QoS on switches (and routers), so you can minimize delay for priority traffic.
Alright, I should have been more explicit - "without having to wait for the random backoff/re-transmit ('redial') delay".
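For reference, that "redial" delay is Ethernet's truncated binary exponential backoff. Here's a minimal sketch of it in Python (the slot time and attempt handling are simplified from what IEEE 802.3 actually specifies):

```python
import random

SLOT_TIME_US = 51.2  # 512 bit times at 10 Mb/s (5.12 us at 100 Mb/s)

def backoff_delay_us(attempt: int) -> float:
    """Truncated binary exponential backoff.

    After the n-th consecutive collision, a station waits a random
    number of slot times k, with k drawn from 0 .. 2**min(n, 10) - 1.
    """
    k = random.randint(0, 2 ** min(attempt, 10) - 1)
    return k * SLOT_TIME_US

# The expected wait grows exponentially with consecutive collisions:
for attempt in (1, 3, 5, 10):
    expected = (2 ** min(attempt, 10) - 1) / 2 * SLOT_TIME_US
    print(f"after collision {attempt}: expected wait ~{expected:.1f} us")
```

Every collision a frame suffers tacks another one of these random waits onto its delivery time, which is the delay I'm talking about.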
Originally posted by: ScottMac
<snip>
Hubs == collisions == latency. No way around it. Hubs are not faster than a switch, when you connect multiple stations to the network.
<snip>
In the presence of collisions, hubs are most certainly not "latency-free".
Wrong. Latency is a transit property. Hubs are repeaters; as the bits are clocked in they are clocked out, with a delay so small as to be non-existent. Collisions are an Ethernet traffic control mechanism. The frames don't hit the wire until the wire is ready for them. Collisions happen, but are generally negligible in number in a well-designed network.
Would that "well-designed network", happen to be using switches, instead of hubs, by any chance? (And if not, then I would like to know how collisions are "negligible", on a network that uses hubs, since all stations are effectively on the same wire, and therefore the same collision domain.)
The way I measure latency is the delay of getting data from point A to point B on the network. Whether that delay is caused by waiting in a switch's packet buffer, or waiting in a NIC's transmit buffer due to collisions on the wire, it's still waiting. But one can streamline the "wait-queue" by reducing unnecessary traffic, and intelligently queueing that traffic, in a switch. In the real world, a switch is more efficient.
Yes, in the total absence of collisions, a hub does offer slightly lower latency, since the latency is no more than that of the wire, theoretically. But when using a hub, there is no such thing as a real-world network without collisions.
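To put rough numbers on that trade-off, here's a back-of-envelope comparison (the figures are illustrative assumptions, not measurements from any particular device):

```python
# Latency floor of a store-and-forward switch vs. a hub (repeater),
# using illustrative Fast Ethernet figures.

FRAME_BITS = 1518 * 8        # maximum Ethernet frame, including FCS
LINK_BPS = 100_000_000       # Fast Ethernet

# A store-and-forward switch must clock the entire frame in before it
# can begin clocking it out, so it adds at least one serialization time:
store_and_forward_us = FRAME_BITS / LINK_BPS * 1e6
print(f"switch store-and-forward floor: {store_and_forward_us:.2f} us")

# A hub regenerates bits as they arrive; its added delay is on the order
# of a few bit times (the "8 bit times" here is an assumed figure):
hub_delay_us = 8 / LINK_BPS * 1e6
print(f"hub repeater delay: {hub_delay_us:.3f} us")
```

So per frame, the switch's floor is on the order of 100 microseconds while the hub's is fractions of one, and the whole argument is over whether collision backoffs on a busy hub eat up that difference and then some.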
Originally posted by: ScottMac
<snip>
What misconception?
Most of your posts are full of 'em.
Theoretical, or real-world?
Originally posted by: ScottMac
<snip>
Considering that no-one seems to sell just SOHO hubs anymore, .... <at reasonable prices>
Best Buy, Circuit City, CompUSA all have hubs on the shelf (and other retailers as well, I'd bet). "Reasonable Price" is a subjective term. If you need one, you need one; price becomes less relevant.
Have you actually looked? I happened to, at a CompUSA a few months ago while looking for cheap wireless gear, and the thing that struck me as funny was that there were *no* hubs for sale. None. I explicitly noticed this; that's why I remembered it. A few years back, hubs were common, and switches of the same port count sold for a premium. How things have changed...
Originally posted by: ScottMac
<snip>
But the installations that you seem to be talking about, would be just as easily, if not more efficiently, implemented using a crossover cable ...
I'd like to see you connect dozens-to-hundreds of computers with just a crossover cable.
But that was the point - in order to implement a network that is both low-latency and collision-free, you would have to use crossover cables. Likewise, I'd like to see you connect dozens-to-hundreds of computers using only hubs, and then tell me that in the presence of heavy collisions they still offer a lower-latency solution in the real world.
Originally posted by: ScottMac
To get back to my original point; for the vast majority of networks discussed on this forum, there would be no appreciable difference between the use of a switch or a hub.
Well, FWIW, in a network with 4-6 machines (a LAN party), there was a noticeable difference between a hub and a switch when playing FPS games. Granted, the hub was half-duplex and the switch was FD, but that's kind of the point - the high number of collisions caused worse latency/lag for playing the game.
Originally posted by: ScottMac
<snip>
You don't feel that a switch offers significant advantage, by effectively eliminating collisions?
Don't put words in my mouth (or post). I explicitly stated that full duplex (if enabled) is the one consistent advantage over a hub (if the endpoint devices support concurrent bi-directional traffic, and most NICs do not).
Yes, hubs offer lower latency, working full-duplex fast ethernet NICs are a pie-in-the-sky fantasy, and switches offer no real-world benefits. Yes, sir. Forgive me for transgressing. Continuing on...
Originally posted by: ScottMac
A switch will ALWAYS add a delay to the transmission.
And so will a collision. Eliminating collisions reduces the (aggregate) delay factor for traffic on that segment.
Originally posted by: ScottMac
Even with a "wire speed" switch, the traffic flows in at speed, the traffic flows out at speed ... but not at the same time (the intrinsic latency of the switch + buffer delay).
Ok, yes, certainly. I'm not trying to claim that it doesn't - only that in real-world traffic scenarios, that small buffering delay is still less than the delay caused by regular collisions on the wire, so the small theoretical disadvantage actually works out to be a decided real-world advantage. (Hence the existence of telephone call queues, rather than CSMA/CD mechanisms for answering phones.)
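As a sketch of why, here's the expected extra delay per frame under a simple (admittedly crude) model where every transmission attempt collides independently with probability p and the station then backs off per truncated binary exponential backoff:

```python
# Expected collision-induced delay per frame on a shared segment.
# Assumes each attempt collides independently with probability p,
# which is a simplification of real CSMA/CD behavior.

SLOT_US = 5.12  # slot time (512 bit times) at 100 Mb/s

def expected_collision_delay_us(p: float, max_attempts: int = 16) -> float:
    delay, p_reach = 0.0, 1.0
    for attempt in range(1, max_attempts + 1):
        p_reach *= p  # probability of suffering `attempt` collisions in a row
        mean_backoff = (2 ** min(attempt, 10) - 1) / 2 * SLOT_US
        delay += p_reach * mean_backoff
    return delay

for p in (0.05, 0.2, 0.5):
    print(f"p={p}: ~{expected_collision_delay_us(p):.2f} us extra per frame")
```

On a quiet segment the expected penalty is tiny, but it climbs quickly as the collision rate does - which is exactly the regime where the switch's fixed buffering cost starts to win.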
Originally posted by: ScottMac
Don't get all frothy about this, it's not a religious issue. Hubs get a bad rap and most people have no idea why a "switch is always better than a hub" (aside that it's an incorrect statement, and a poorly placed and implemented switch will perform worse than a hub).
We're just having fun here, remain calm.
FWIW
Scott
Nah, I think you just miss the "old style" of Ethernet, where everything was all on the same effective wire, and you had to actually drill into the cable to run a tap to some new workstation, that sort of thing. (Ok, that was slightly before my time, but the admin at my school explained it to me back then; they still used thick Ethernet for some runs.)
I guess perhaps I've just been arguing the POV relative to the aggregate traffic latency. However, if you're looking at it from the perspective of only a single frame trying to negotiate the network, then yes, multiple layers of store-and-forward switches might add some latency to that one packet, that wouldn't be encountered if it were the lone packet on the wire, and it was clear sailing, at wire-speed, over some hubs all the way to the other edge of the network. (Strangely, I have the urge to go watch TRON again, right at this moment.)