
Switch or a Hub?? What's the difference

Vanman

Member
I've been tasked with finding a 24-port hub/switch(??) to upgrade our company network to accommodate 6 more people starting soon. Currently we have an 8-port Linksys router and an 8-port Linksys switch linked together to support the current network. In looking around online I've found 2 items that "seem" to fit our needs within the current budget.

1) EtherFast 4124 24-Port 10/100 Ethernet Switch

Versus this

2) EtherFast II 24-Port 10/100 Auto-Sensing Hub

My question comes down to what is the main difference between a switch and a hub, and will either of these products fulfill our needs? Need to have this up and running by mid next week. Any information and advice would be appreciated.

Thanks in advance!

Dirk
 
Basically a hub is like putting a whole bunch of people in a room and having them all shout at each other. Nobody can hear anybody else, so the only way to effectively communicate is to talk one at a time. Say you've got 10 people in the room; that's 10 mouths that could be speaking at once, but if you all talk at the same time, nobody knows what's going on, so only 1 person can talk at a time.

That's a hub: all ports hear all the traffic that is put onto the network.

A switch is like putting everybody in their own room and having a phone in each one. Anybody from any of the other rooms can dial anybody else. And all the people can talk to different people at the same time without there being any communication problems.

That's what switches do: they segment traffic, directing each frame to where it is supposed to go based on its destination MAC address.

A hub gets a frame in on one port and just broadcasts that frame out all the other ports it has; everybody hears it. A switch is a little smarter: it gets that same frame, looks at where it's supposed to go, checks a table it keeps of where everybody is, and sends the frame only to the port of the computer that's supposed to get it. Because of this, everybody can talk at the same time without there being a problem.
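To make that concrete, here's a rough Python sketch of the two behaviors. All the port numbers and MAC strings are made up for illustration, and real switches do this in hardware, but the logic of "learn the source, look up the destination, flood if unknown" is the same:

```python
# Toy model of hub vs. switch forwarding.
# All addresses and port counts here are invented for illustration.

def hub_forward(in_port, num_ports):
    """A hub repeats every frame out every port except the one it came in on."""
    return [p for p in range(num_ports) if p != in_port]

class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # learned mapping: MAC address -> port

    def forward(self, in_port, frame):
        src, dst = frame["src"], frame["dst"]
        # Learn: remember which port the source MAC lives on.
        self.mac_table[src] = in_port
        # If we know where the destination is, send only there...
        if dst in self.mac_table:
            return [self.mac_table[dst]]
        # ...otherwise flood out every port except the source port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(num_ports=4)
print(sw.forward(0, {"src": "aa:aa", "dst": "bb:bb"}))  # unknown dst: floods [1, 2, 3]
sw.forward(1, {"src": "bb:bb", "dst": "aa:aa"})         # learns bb:bb is on port 1
print(sw.forward(0, {"src": "aa:aa", "dst": "bb:bb"}))  # now forwards only to [1]
```

Once both stations have spoken once, every frame between them goes out exactly one port, which is why other pairs can talk at the same time.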
 
A hub is a "wire emulator." A hub should behave exactly as a piece of cabling behaves (with all stations attached to, and listening to, the cabling). Hubs do not broadcast all traffic out all ports - they repeat all traffic out all ports - there's a big difference.

With a hub, the Ethernet protocol controls who is allowed to talk. All stations must listen while traffic is on the wire. They must wait for a quiet wire before they can attempt to transmit. In the event that two (or more) stations attempt to transmit at the same time, the Ethernet protocol has provisions for those stations to recognize that a collision has occurred, and that they must wait for a (pseudo) random time before beginning the process over again (listen, then talk when quiet on the wire).
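That listen / collide / back-off cycle is the truncated binary exponential backoff from 802.3. Here's a rough Python sketch of just the backoff step (real hardware counts slot times, 51.2 µs on 10 Mbit Ethernet; the function below only models the slot-count rule):

```python
import random

SLOT_TIME_US = 51.2  # slot time on 10 Mbit Ethernet, in microseconds

def backoff_slots(attempt):
    """After the nth consecutive collision, wait a random number of slot
    times in [0, 2^min(n, 10) - 1]; give up after 16 attempts (per 802.3)."""
    if attempt > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1)

# After the first collision a station waits 0 or 1 slots;
# after the third, anywhere from 0 to 7 slots, and so on.
waits_us = [backoff_slots(n) * SLOT_TIME_US for n in (1, 2, 3)]
print(waits_us)
```

The doubling range is why a busy hub degrades gracefully at first and then falls off a cliff: each repeated collision spreads the retries over a wider window.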

A switch is an advanced multi-port bridge. It permits multiple pairs of stations to talk to each other at the same time (each pair must be discrete). If multiple stations attempt to talk to the same endpoint station, their traffic is buffered and delivered in the order received (generally worse performance than a hub, due to the buffering ... in this scenario, hubs are latency-free, switches delay the transmission for the time-in-buffer).

All rules of Ethernet are still followed when using a switch: the station must wait for a quiet wire, then transmit.

Because a switch IS a bridge (in operation), anytime the switch receives traffic to an unknown (layer two / MAC) destination, it will flood (NOT broadcast) the frame out all ports except the source port.

Switches (and hubs) also propagate broadcasts and multicasts.

The one consistent advantage of the switch is "full duplex" communication (if enabled). The ports can send traffic in both directions concurrently, not necessarily from/to the same source/destination (listen to A, talk to B).

Unless properly implemented, a switch has no advantage over a hub (except full duplex, if enabled).
Improperly implemented, a switch can slow down or kill a network (rare these days, but very possible).

This has been discussed at length in the past, also try a search in this forum for additional input.

FWIW

Scott
 
Sorry, but I gotta disagree.

A switch has huge advantages over a hub once you get above 2 workstations. The fact that you can have simultaneous communication throughout the network increases concurrent throughput between stations.

While not true for most small networks, hubs are subject to the 3-4-5 rule of Ethernet. You can have 3 populated segments, 4 hubs and 5 total segments. Switches and above (routers) are not subject to this rule.

Depending on how your switch is configured, you can have "cut-through" processing, which reads only the header of the frame and directs it to its destination without checking the payload; this can lead to corrupted data being forwarded. Or you can have "store-and-forward" switches that read the entire frame, header + data, then calculate a CRC on it and compare it to the one the frame carries. If the two don't match, there is an error and the frame is dropped.
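The store-and-forward check amounts to recomputing the frame's CRC-32 and comparing it against the trailing FCS. Here's a toy Python illustration, using zlib.crc32 as a stand-in (real Ethernet hardware handles bit ordering and complementing differently, and frames here are just byte strings):

```python
import zlib

def with_fcs(payload: bytes) -> bytes:
    """Append a 4-byte checksum, standing in for the Ethernet FCS."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def store_and_forward(frame: bytes):
    """Buffer the whole frame, verify the checksum, drop it on mismatch."""
    payload, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "little") != fcs:
        return None  # corrupted: frame dropped
    return frame     # clean: forward it

def cut_through(frame: bytes):
    """Forward as soon as the destination is read; never checks the FCS,
    so a corrupted frame gets passed along anyway."""
    return frame

good = with_fcs(b"hello")
bad = bytearray(good)
bad[0] ^= 0xFF  # flip a bit to simulate wire damage

print(store_and_forward(good) is not None)    # forwarded
print(store_and_forward(bytes(bad)) is None)  # dropped
print(cut_through(bytes(bad)) == bytes(bad))  # corruption forwarded
```

The trade-off is exactly the one described above: store-and-forward buys error checking at the cost of buffering the entire frame before forwarding begins.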

Hubs don't do either of those since they are basically stupid and will always forward flawed frames.

Oh, and when you are using full duplex with a switch, CSMA/CD, the line contention mechanism, is turned off, because you are dealing with only one node on the segment, so there can be no collisions. So when dealing with a switch, a node never has to "wait for quiet."

Unless you are overloading a switch with traffic there is no slowdown when using cut-through processing; or there is, but nothing that you are ever going to worry about or that will affect you. The speed of being able to have multiple workstations talk at the same time will make up for this many, many times over. Even store-and-forward isn't slow.
 
The apparent bandwidth multiplication only applies between concurrent pairs (a<-->b, c<-->d, e<-->f, etc.). In a many-to-one situation (common to most SOHO installations and many poorly designed commercial installations) a switch is at the disadvantage UNLESS the egress path is equal to (or higher than) the aggregated ingress paths (some oversubscription is accepted as a reasonable practice).

To use your analogy: with everybody in their own room with a phone, they can only call and connect to a phone that is not already in-use. If more than one person calls the same destination number, the following calls must wait until that conversation is over and the circuit is cleared (many-to-one, same bandwidth on all ports). Unless the destination has multiple numbers to call (i.e., a fatter pipe), all others must wait (buffer / queue) .... just like a hub.

The issue of cut-through passing bad frames is a non-issue. It was a marketing effort by the vendors that didn't have / couldn't get / wouldn't pay for the technology when Kalpana brought it out. Cisco was the loudest screamer of "Store & Forward is the best" .... until they bought Kalpana and owned the technology that everyone else had to pay THEM for (and you forgot to mention fragment-free - the first 64 bytes are buffered, then forwarded: "at least you won't get runts").

While in theory it would be true that no collision domain exists so there is no wire contention, in reality, most NIC drivers do not "do" full duplex (or do it so poorly as to not do it at all). Between active infrastructure devices, perhaps ... between a PC/workstation/ server with any but the best NICs ... no. Doesn't happen. While they are receiving any of the possible multicasts or broadcasts, they are not transmitting. So, while the CSMA/CD mechanism may not be active, the practical operational differences are negligible.

With extremely rare exception, current common switches don't do cut-through. Those that do default to store & forward and generally stay that way by administrator's ignorance or choice.

With regard to the multiple workstations talking at the same time .... if they are trying to talk to the same resources at the same time, their traffic is delayed, buffered, enqueued .... latency any way you cut it. Hubs don't have latency, and they don't have packet drop, there are no head-of-the-line issues (three common specs for a switch).

Also note that this discussion surfaces a couple times a year. I bring up some points to kill some common misconceptions about hubs. Given the choice, 99% of the time I'd put in a switch ... but I also understand the design limitations of a switch and enforce proper implementation. I do this mainly because in most of the circumstances presented on this forum (SOHO, games, a couple PCs to an Internet gateway ...) there would be absolutely no perceptible difference between using a hub and a switch ... and a hub would probably be the better choice. As far as 3-4-5 goes, it is not a design limitation to a properly designed and implemented network.

A switch is not automatically "better" than a hub. It would only be "better" if there was a specific advantage and, functionally, in most of the installations discussed here, there isn't.

Done properly, switches are a wunnerful thing. I like 'em. But I like to stir things up a little and stick up for hubs now & then too....

FWIW

Scott

 
BAH!!! DOWN WITH HUBS!!! 🙂

The only thing I like them for is setting up a quick little network where I can capture traffic without having to mirror a port. I guess they still have their uses.

I'll never go back to hubs after we migrated years ago at a site. We were doing an upgrade of a lab that still had hubs, imaging the lab (Macs) using unicasts. With a 10/100 24-port hub it took 20 minutes to image the lab; when the switch came in, a 40-port HP, it dropped to 7 minutes.
 
Originally posted by: ScottMac
A switch is an advanced multi-port bridge. It permits multiple pairs of stations to talk to each other at the same time (each pair must be discrete). If multiple stations attempt to talk to the same endpoint station, their traffic is buffered and delivered in the order received (generally worse performance than a hub, due to the buffering ... in this scenario, hubs are latency-free, switches delay the transmission for the time-in-buffer).

Err, I rather disagree. If two stations try to talk to a third at the same time, with a hub they both have to "backoff" for some time and then attempt to re-transmit. A switch avoids this collision and retransmission delay. Your statement seems to imply that a hub would allow them both to talk to the third station at the same instant, which I know you must not mean. In the presence of collisions, hubs are most certainly not "latency-free". If you meant a switch in comparison with two point-to-point links (in which case there are no collisions, nor any sort of store-and-forward delay), then I would agree. But most people don't set up their network with dedicated point-to-point links everywhere; that's really not the "style" of Ethernet.

Originally posted by: ScottMac
The one consistent advantage of the switch is "full duplex" communication (if enabled). The ports can send traffic in both directions concurrently, not necessarily from/to the same source/destination (listen to A, talk to B). Unless properly implemented, a switch has no advantage over a hub (except full duplex, if enabled).
Improperly implemented, a switch can slow down or kill a network (rare these days, but very possible).
You don't feel that a switch offers significant advantage, by effectively eliminating collisions? (Eliminate in full-duplex, greatly reduce in half-duplex.)

The only reason that I could see for using a hub these days, is to allow passive monitoring of a physical ethernet segment, for IDS purposes. Of course, if you own a slightly higher-end switch, that allows "port mirroring", then that would work as well. Otherwise, hubs are pretty-much obsolete, especially given the limitation on the number of repeaters allowed on a fast ethernet network physical segment. (I don't think I've ever seen a "gigabit hub" either, only switches, which makes perfect sense to me.)
 
Originally posted by: ScottMac
The apparent bandwidth multiplication only applies between concurrent pairs (a<-->b, c<-->d, e<-->f, etc.). In a many-to-one situation (common to most SOHO installations and many poorly designed commercial installations) a switch is at the disadvantage UNLESS the egress path is equal to (or higher than) the aggregated ingress paths (some oversubscription is accepted as a reasonable practice).
I still fail to see how they are at a disadvantage, relative to a hub in the same scenario, simply because the number of collisions in that many-to-one scenario is greatly reduced with a switch, thus increasing the effective utilization of the theoretical bandwidth of that link.

Originally posted by: ScottMac
To use your analogy: with everybody in their own room with a phone, they can only call and connect to a phone that is not already in-use. If more than one person calls the same destination number, the following calls must wait until that conversation is over and the circuit is cleared (many-to-one, same bandwidth on all ports). Unless the destination has multiple numbers to call (i.e., a fatter pipe), all others must wait (buffer / queue) .... just like a hub.
Not true. The better analogy would be between the callee having only a single line with a single phone number, vs. someone who has multiple internal lines, still with only a single external phone number, but with an internal call-queueing system, where all subsequent callers get put automatically on hold and the callee can then answer them all in turn, without delay, and without the callers having to randomly call back, hoping to get through. Yes, there is still only one person dealing with the incoming phone calls, and their bandwidth is limited - but they can be much more efficient in dealing with the callers. The real-world existence of phone queue systems at businesses bears this out.

Originally posted by: ScottMac
While in theory it would be true that no collision domain exists so there is no wire contention, in reality, most NIC drivers do not "do" full duplex (or do it so poorly as to not do it at all). Between active infrastructure devices, perhaps ... between a PC/workstation/ server with any but the best NICs ... no. Doesn't happen. While they are receiving any of the possible multicasts or broadcasts, they are not transmitting. So, while the CSMA/CD mechanism may not be active, the practical operational differences are negligible.
So you are saying, that most network gear mfgs and vendors, have been committing fraud all of these years, selling devices that simply don't operate to their theoretical specifications?

I think I'm going to have to dig up some full-duplex network benchmarking tests to see if your theory holds any water. Somehow, I doubt it. No offense, but that sounds like outdated information from the days of NE2000 ISA cards and DOS/Win3.1/Win9x OSes. (Granted, the PCI bus is not bi-directional either, but cards nowadays usually have 64KB or more of packet buffer memory, and can easily overlap transmitting and receiving of frames.) Btw, wasn't the whole point of 3Com's "parallel tasking," which they print on their cards, to enable exactly that sort of capability?

Originally posted by: ScottMac
With regard to the multiple workstations talking at the same time .... if they are trying to talk to the same resources at the same time, their traffic is delayed, buffered, enqueued .... latency any way you cut it. Hubs don't have latency, and they don't have packet drop, there are no head-of-the-line issues (three common specs for a switch).
Hubs == collisions == latency. No way around it. Hubs are not faster than a switch, when you connect multiple stations to the network.

Originally posted by: ScottMac
Also note that this discussion surfaces a couple times a year. I bring up some points to kill some common misconceptions about hubs.
What misconception? Hubs are as obsolete these days as thin-wire coax ethernet is, and the handful of accompanying terminators that always end up on some admins' desk to play with. 😛 (I had a nice collection at one point.)

Originally posted by: ScottMac
Given the choice, 99% of the time I'd put in a switch ... but I also understand the design limitations of a switch and enforce proper implementation. I do this mainly because in most of the circumstances presented on this forum (SOHO, games, a couple PCs to an Internet gateway ...) there would be absolutely no perceptible difference between using a hub and a switch ... and a hub would probably be the better choice.
Considering that no-one seems to sell just SOHO hubs anymore, at least not at reasonable prices, I'm not so sure about that assertion, strictly on a purchase-price basis.

Originally posted by: ScottMac
A switch is not automatically "better" than a hub. It would only be "better" if there was a specific advantage and, functionally, in most of the installations discussed here, there isn't.
But the installations that you seem to be talking about, would be just as easily, if not more efficiently, implemented using a crossover cable.

I think that your original point was that if you are bandwidth-limited, because you are trying to wire up a many-to-one connection scenario, using a switch instead of a hub will not magically give you more bandwidth to that "one" node. That is true. However, a switch still allows more efficient and effective use of that limited bandwidth, so in real-world scenarios it should still be faster, and in cases of heavy traffic contention (collisions), markedly so.

The fact that fast ethernet limits the number of repeaters (hubs) on a segment, and gigabit pretty-much prohibits them altogether in favor of switches, is the final nail in the coffin for hubs, as far as I'm concerned.

I still keep an old 10Mbit hub around though, with a coax "uplink" port, just in case I might need to wire together some legacy thinnet segment to a more modern switched fast ethernet segment or something. Haven't actually used it in a long time, thankfully. 🙂
 
I have a fileserver on a switch, is it possible to bond 2 NICs on the fileserver to give it more bandwidth? I'm not having speed problems (I download constantly at ~300KBps and stream movies and ISOs without any problems) but just wondering since you guys went into such detail above. (using XP pro)
 
Originally posted by: VirtualLarry
The only reason that I could see for using a hub these days, is to allow passive monitoring of a physical ethernet segment, for IDS purposes. Of course, if you own a slightly higher-end switch, that allows "port mirroring", then that would work as well.

Hubs and taps are generally better for this, IMO. That's basically the only reason I keep hubs around. 😉
 
<snip>
...and the callee can then answer them all in turn, without delay, and without the callers having to randomly call back...

There is no such thing as a buffer with "no delay:" that is the whole reason for QOS on switches (and routers), so you can minimize delay for priority traffic.

<snip>
So you are saying, that most network gear mfgs and vendors, have been committing fraud all of these years, selling devices that simply don't operate to their theoretical specifications?

NO, they're not lying, they're marketing (and most folks are too ignorant to know better - that's the basis for most marketing and most political speeches).

<snip>
Hubs == collisions == latency. No way around it. Hubs are not faster than a switch, when you connect multiple stations to the network.
<snip>
In the presence of collisions, hubs are most certainly not "latency-free".

Wrong. Latency is a transit property. Hubs are repeaters; as the bits are clocked in they are clocked out with a delay so small as to be non-existent. Collisions are an Ethernet traffic control mechanism. The frames don't hit the wire until the wire is ready to get them. Collisions happen, but are generally negligible in number in a well-designed network.

<snip>
What misconception?
Most of your posts are full of 'em.

<snip>
Considering that no-one seems to sell just SOHO hubs anymore, .... <at reasonable prices>

Best Buy, Circuit City, CompUSA all have hubs on the shelf (and other retailers as well, I'd bet). "Reasonable Price" is a subjective term. If you need one, you need one; price becomes less relevant.

<snip>
But the installations that you seem to be talking about, would be just as easily, if not more efficiently, implemented using a crossover cable ...

I'd like to see you connect dozens-to-hundreds of computers with just a crossover cable. Before Kalpana, this was common in many companies. To get back to my original point: for the vast majority of networks discussed on this forum, there would be no appreciable difference between the use of a switch or a hub.

<snip>
You don't feel that a switch offers significant advantage, by effectively eliminating collisions?

Don't put words in my mouth (or post). I explicitly stated that full duplex (if enabled) is the one consistent advantage over a hub (if the endpoint devices support concurrent bi-directional traffic, and most NICs do not).

A switch in half-duplex will still wait for a quiet wire before it transmits each frame, then there's the latency induced in the buffer (in the case of multiple ports contending for the same egress port, at least one other frame's delay + buffering time (store, then forward = time = latency)). A switch will ALWAYS add a delay to the transmission. Even with a "wire speed" switch, the traffic flows in at speed, the traffic flows out at speed ... but not at the same time (the intrinsic latency of the switch + buffer delay).


Don't get all frothy about this, it's not a religious issue. Hubs get a bad rap, and most people have no idea why a "switch is always better than a hub" (aside from it being an incorrect statement, and a poorly placed and implemented switch will perform worse than a hub).

We're just having fun here, remain calm.

FWIW

Scott
 
Originally posted by: ScottMac
<snip>
...and the callee can then answer them all in turn, without delay, and without the callers having to randomly call back...

There is no such thing as a buffer with "no delay:" that is the whole reason for QOS on switches (and routers), so you can minimize delay for priority traffic.
Alright, I should have been more explicit - "without having to wait for the random backoff/re-transmit ('redial') delay".

Originally posted by: ScottMac
<snip>
Hubs == collisions == latency. No way around it. Hubs are not faster than a switch, when you connect multiple stations to the network.
<snip>
In the presence of collisions, hubs are most certainly not "latency-free".

Wrong. Latency is a transit property. Hubs are repeaters; as the bits are clocked in they are clocked out with a delay so small as to be non-existent. Collisions are an Ethernet traffic control mechanism. The frames don't hit the wire until the wire is ready to get them. Collisions happen, but are generally negligible in number in a well-designed network.
Would that "well-designed network", happen to be using switches, instead of hubs, by any chance? (And if not, then I would like to know how collisions are "negligible", on a network that uses hubs, since all stations are effectively on the same wire, and therefore the same collision domain.)

The way I measure latency is the delay of getting data from point A to point B on the network. Whether that delay is caused by waiting in a switch's packet buffer, or waiting in a NIC's transmit buffer due to collisions on the wire, it's still waiting. But one can streamline the "wait-queue" by reducing unnecessary traffic, and intelligently queueing that traffic, in a switch. In the real world, a switch is more efficient.

Yes, in the total absence of collisions, a hub does offer slightly lower latency, since the latency is no more than that of the wire, theoretically. But when using a hub, there is no such thing as a real-world network without collisions.

Originally posted by: ScottMac
<snip>
What misconception?
Most of your posts are full of 'em.

Theoretical, or real-world?

Originally posted by: ScottMac
<snip>
Considering that no-one seems to sell just SOHO hubs anymore, .... <at reasonable prices>
Best Buy, Circuit City, CompUSA all have hubs on the shelf (and other retailers as well, I'd bet). "Reasonable Price" is a subjective term. If you need one, you need one; price becomes less relevant.
Have you actually looked? I happened to, at a CompUSA a few months ago when looking for cheap wireless gear, and the thing that struck me as funny was that there were *no* hubs for sale. None. I explicitly noticed this; that's why I remembered it. A few years back, hubs were common, and switches of the same port count sold for a premium. How things have changed...

Originally posted by: ScottMac
<snip>
But the installations that you seem to be talking about, would be just as easily, if not more efficiently, implemented using a crossover cable ...

I'd like to see you connect dozens-to-hundreds of computers with just a crossover cable.
But that was the point - in order to implement a low-latency network, also without collisions - you would have to use cross-over cables. Likewise, I'd like to see you connect dozens-to-hundreds of computers, using only hubs, and then tell me that in the presence of heavy collisions, that they still offer a lower-latency solution, in the real world.

Originally posted by: ScottMac
To get back to my original point; for the vast majority of of networks discussed on this forum, there would be no appreciable difference between the use of a switch or a hub.

Well, FWIW, in a network with 4-6 machines (LAN party), there was a noticeable difference between a hub and a switch for playing FPS games. Granted, the hub was half-duplex and the switch was FD, but that's kind of the point - the high number of collisions caused worse latency/lag for playing the game.

Originally posted by: ScottMac
<snip>
You don't feel that a switch offers significant advantage, by effectively eliminating collisions?

Don't put words in my mouth (or post). I explicitly stated that full duplex (if enabled) is the one consistent advantage over a hub (if the endpoint devices support concurrent bi-directional traffic, and most NICs do not).

Yes, hubs offer lower latency, working full-duplex fast ethernet NICs are a pie-in-the-sky fantasy, and switches offer no real-world benefits. Yes, sir. Forgive me for transgressing. Continuing on...

Originally posted by: ScottMac
A switch will ALWAYS add a delay to the transmission.
And so will a collision. Eliminating collisions reduces the (aggregate) delay factor for traffic on that segment.

Originally posted by: ScottMac
Even with a "wire speed" switch, the traffic flows in at speed, the traffic flows out at speed ... but not at the same time (the intrinsic latency of the switch + buffer delay).
Ok, yes, certainly. I'm not trying to claim that it doesn't, but only that in real-world traffic scenarios, that small buffering delay is still less than the delay encountered by regular collisions on the wire, so that small theoretical disadvantage actually works out to be a decided real-world advantage. (Hence the existence of telephone call queues, rather than using CSMA/CD mechanisms to answer phones.)

Originally posted by: ScottMac
Don't get all frothy about this, it's not a religious issue. Hubs get a bad rap, and most people have no idea why a "switch is always better than a hub" (aside from it being an incorrect statement, and a poorly placed and implemented switch will perform worse than a hub).

We're just having fun here, remain calm.

FWIW

Scott

Nah, I think you just miss the "old style" of ethernet, where everything was all on the same effective wire, and you had to actually drill into the cable to run a tap to some new workstation, that sort of thing. (Ok, that was slightly before my time, but the admin at my school at the time explained it to me, they still used thick ethernet for some runs.)

I guess perhaps I've just been arguing the POV relative to the aggregate traffic latency. However, if you're looking at it from the perspective of only a single frame trying to negotiate the network, then yes, multiple layers of store-and-forward switches might add some latency to that one packet, that wouldn't be encountered if it were the lone packet on the wire, and it was clear sailing, at wire-speed, over some hubs all the way to the other edge of the network. (Strangely, I have the urge to go watch TRON again, right at this moment.)
 
Actually, I think I get what you were trying to point out originally. Hubs don't add any appreciable latency to the packet transmission, over and above what would normally be the case with Ethernet, while switches do. Ok, true. But on the other hand, switches reduce collisions, and therefore reduce or remove the latency that would result from those collisions and the necessary re-transmits, so the small "latency price" paid up-front for having a switch generally gives a greater real-world dividend in the end, for aggregate traffic.

I do believe, however, that it should be measurable that a switch can give more throughput than a hub, at the cost of a tiny, fixed, nearly unnoticeable additional latency, especially in the case you originally cited: a many-to-one communication pattern, in which collisions would be especially acute with a hub and would effectively reduce the available bandwidth by a small factor.

PS. Please forgive me for being so verbose at times. I know not how fast I type. 😛

 
Hubs are obsolete. Most everything these days is a switch. Way back when LAN parties were novel, hub and switch prices differed enough that many people had hubs. A LAN party was a nightmare if someone wanted to transfer a file... it bogged down everything.
 
I absolutely DO NOT miss "the old way" - I've done enough vampire taps and strung up enough of the old-style networks to appreciate the current state-of-the-art.

Again, my point is that hubs are not nearly as evil as most of the "new generation" network folks seem to believe. It was great for its time, not nearly so great now, but it's just not as bad as most posters seem to believe it is.

Companies really did have dozens to hundreds of nodes per segment -on hubs- (depending mostly on the applications in use and the knowledge level of the implementers). They didn't have anywhere near the problems with dozens to hundreds of nodes that most home users express using three or four.

Certainly the bandwidth requirements and node capabilities have grown, technology has improved to accommodate. Absolutely no argument there. I like new tech (most of the time).

Collisions are not a bad thing ... excessive collisions are. The threshold used to be in the neighborhood of one part per million. Switches, routers, and environmental factors for the media can produce some mutilated frames too, especially through a marginally configured WAN .... sometimes at levels approaching or surpassing the collision levels on a hub (or coax) based LAN.

(my impression) For the average Anandtech Networking Forum poster, they are putting a few machines to a (hub / switch or router with an incorporated hub / switch) which is connecting to a router / modem which is connected to either cable or DSL ... operating at 1.5 to 3 megabits down, 128 to 384 up.

Most of the traffic is headed to/from the Internet, some of the traffic occasionally goes to a printer (printer server), some occasionally goes to another machine for file or media sharing.

In the above scenario, a hub would work every bit as well as a switch. Collisions (should be) would be negligible. It isn't much of a stretch to imagine that it could be better to have the frame wait until the wire is ready than to send the frame and have it buffered for gawd-knows-how-long waiting to get into the little bitty pipe heading for the Internet (10/100/1000 meg down to 128-384K). There is a time-based lifespan on packets; if it waits too long, it'll expire and need to be re-transmitted (which takes MUCH MUCH longer than a fall-back from a collision).
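To see why a collision fall-back is so much cheaper than a retransmission: Ethernet's truncated binary exponential backoff waits a random number of slot times (51.2 µs each at 10 Mb/s), while a typical TCP retransmission timeout is on the order of hundreds of milliseconds. A sketch of the backoff rule (the slot-time constant here assumes 10 Mb/s Ethernet):

```python
import random

SLOT_TIME_US = 51.2  # one slot time at 10 Mb/s: 512 bit times

def backoff_delay_us(nth_collision, rng=random):
    """Truncated binary exponential backoff: after the nth consecutive
    collision, wait a random count of slot times in [0, 2^min(n,10) - 1]."""
    k = rng.randint(0, 2 ** min(nth_collision, 10) - 1)
    return k * SLOT_TIME_US

# After the first collision the worst case is a single slot: 51.2 µs.
# A 200 ms retransmission timeout is roughly 4,000 slot times --
# which is the "MUCH MUCH longer" above.
```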

If you bring in the game players and they are playing peer-to-peer, then a switch would certainly offer the best scenario (multiple pairs in concurrent sessions). If the gamers are playing to a server, then we are back to the many-to-one scenario (many players to the server, which relays the updates to the other players) ... which is worst-case performance for any switch (given all ports at the same bandwidth/speed).

In that case, to use a switch most effectively, the players would force their connections to 10 meg, with the server at 100 meg, to reduce the bottleneck at the egress port (which, at that point, is shared bandwidth, just like a hub). Of course you can kick that up an order of magnitude and do better still - 100 meg players to a one-gig server connection (properly tuned, assuming the server can handle the traffic and bandwidth).
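The arithmetic behind that tuning is simple oversubscription of the server's egress port. A sketch with hypothetical player counts:

```python
def egress_oversubscription(clients, client_mbps, server_mbps):
    """Offered load versus the server port's capacity; a ratio above
    1.0 means frames queue (or drop) at the egress port."""
    return clients * client_mbps / server_mbps

# 10 players at 100 Mb/s all hitting a 100 Mb/s server port:
print(egress_oversubscription(10, 100, 100))  # 10.0 -- badly oversubscribed
# Force the players down to 10 Mb/s, keep the server at 100 Mb/s:
print(egress_oversubscription(10, 10, 100))   # 1.0 -- balanced
```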

Also, lighten up on the "real world" versus theoretical stuff. I've been networking for twenty-something years, I've seen "Real World" versus theoretical, I've had to deal with elevated customer expectations due to "marketing," I've worked in an Interoperability Lab of one sort or another for over a decade; we test this stuff for a living (and money!). I'm even a "CEE" Kalpana Certified EtherSwitch Engineer (since 1994). I understand the capabilities of the current technology jus' fine (Cisco, Nortel, Extreme, Foundry ...).

I'm arguing for the sake of discussion and to perhaps get some of the new folks a different perspective. Switches are not the be-all, end-all. They're a wunnerful thing, no doubt, but for the average SOHO-kinda-person .... hubs would do at least as good a job. BTW: The BB, CC, Fry's, and CompUSA in my neighborhood all have hubs in stock if you want some 😀.

We did do some Lab tests on Hub Versus Switch a couple years ago. The results are still posted on my fluff site (ScottMac.Net). You'll see that even with two or three hosts aimed at a single endpoint, the bandwidth gets divided and produces results a little better than a hub, but not nearly at the level one might suppose. I cannot post the names of the vendors for legal reasons.

Check it out.

FWIW

Scott
 
A bridge/switch is an L2 device; a hub is an L1 device. If you have a bad or flaky NIC, or bad or flaky cables, those problems can propagate to other hub ports, while they are a lot less likely to propagate across switch ports. That alone makes a switch a lot more valuable, in my opinion, in both home and corporate environments. I have several times gone into networks with odd performance problems and replaced a hub with a switch, and all of a sudden things were "fixed" because of that separation. When the switch was managed, I could then figure out very easily which station was the problem (though a managed hub should be able to produce the same stats).
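The L2 behavior boils down to MAC learning: a switch records which port each source address arrived on, forwards frames for known destinations out a single port, and floods like a hub only until it has learned. A minimal sketch (class and method names are mine, purely for illustration):

```python
class Hub:
    """Layer-1 repeater: every frame is repeated out every port
    except the one it arrived on."""
    def forward(self, ingress, ports, src, dst):
        return [p for p in ports if p != ingress]

class Switch:
    """Layer-2 bridge: learns source MACs per port, forwards known
    destinations out one port, floods unknown ones like a hub."""
    def __init__(self):
        self.mac_table = {}

    def forward(self, ingress, ports, src, dst):
        self.mac_table[src] = ingress                   # learn the source
        if dst in self.mac_table:
            return [self.mac_table[dst]]                # known: one port
        return [p for p in ports if p != ingress]       # unknown: flood
```

The first frame from "aa" to "bb" floods out every other port, hub-style; once "bb" has spoken, everything addressed to it rides a single port, which is the traffic segmentation described earlier in the thread.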

There are very specific situations where a hub would be advantageous over a switch (ref the previous latency discussion), but for most people under most circumstances, a switch is advantageous over a hub. I always recommend switches for new purchases, but only replace hubs with switches when there's a problem it would solve.
 
Originally posted by: ScottMac
Companies really did have dozens to hundreds of nodes per segment -on hubs- (varying mostly with the applications in use and the knowledge level of the implementers). They didn't have anywhere near the problems with dozens-to-hundreds of nodes that most home users report with three or four.
The quality of the hardware back then as used for businesses was probably slightly better (and of course much more expensive) than the consumer gear sold in stores these days.

Originally posted by: ScottMac
(my impression) For the average Anandtech Networking Forum poster, they are putting a few machines to a (hub / switch or router with an incorporated hub / switch) which is connecting to a router / modem which is connected to either cable or DSL ... operating at 1.5 to 3 megabits down, 128 to 384 up.

Most of the traffic is headed to/from the Internet, some of the traffic occasionally goes to a printer (printer server), some occasionally goes to another machine for file or media sharing.

In the above scenario, a hub would work every bit as well as a switch. Collisions (should be) would be negligible. It isn't much of a stretch to imagine that it could be better to have the frame wait until the wire is ready than to send the frame and have it buffered for gawd-knows-how-long waiting to get into the little bitty pipe heading for the Internet (10/100/1000 meg down to 128-384K).

Alright, given that scenario, I can see where you're coming from. Most of the posts to this forum, though, are requests for help from those less knowledgeable. (I include myself in that category too, regarding WiFi.) If you are only talking about sharing a base-level broadband connection or printers or something, then using a hub is quite reasonable.

But if you include the collective AT forums, I see a lot more comments from posters in GH, Video, and HD, about implementing gigabit in the home, and people doing massive media-sharing, home-network video-servers, TiVos, etc. (Stuff I wish I had the funds to mess around with. 😛)

Originally posted by: ScottMac
There is a time-based lifespan on packets; if it waits too long, it'll expire and need to be re-transmitted (which takes MUCH MUCH longer than a fall-back from a collision).
That I was only semi-aware of. It makes sense that a switch's packet-buffering mechanism would have some sort of aging time-out (as do the MAC-port mapping tables), but I've never seen any appreciable evidence that such a thing happens regularly in real-world switch implementations. So I wasn't aware it was even a problem or a consideration for implementation.

Originally posted by: ScottMac
If you bring in the game players and they are playing peer-to-peer, then a switch would certainly offer the best scenario (multiple pairs in concurrent sessions). If the gamers are playing to a server, then we are back to the many-to-one scenario (many players to the server, which relays the updates to the other players) ... which is worst-case performance for any switch (given all ports at the same bandwidth/speed).
But generally, the amount of traffic generated doesn't fill the pipe; it is fairly latency-sensitive. Lots of smaller packets on the wire rather than a full stream of bigger ones.

Originally posted by: ScottMac
In that case, to use a switch most effectively, the players would force their connection to 10 meg, with the server at 100meg to reduce the bottleneck at the egress port (which, at that point, is shared bandwidth, just like a hub). Of course you can kick that up an order of magnitude and do better still - 100 meg players to a one gig server connection (properly tuned, assuming the server can handle the traffic and bandwidth).
Well, those would be my recommendations too for a file server: transfers tend to fill each client's pipe as much as possible, and at the same time you want to avoid overloading the server's pipe and starving the other clients of bandwidth. That's one reason for "trunking" multiple ports of a managed switch that allows it to multiple cards on a server, or for using a gigabit uplink from the switch to the server for fast-ethernet clients. All very sound advice, of course.

But in the case of gaming, I don't think they generate enough traffic for total bandwidth to really be an issue. By the same token, the latency added by the store-and-forward mechanism of most switches, while non-zero, is at least generally constant, and basically allows "pipelining" the packets without worrying about the non-deterministic latency caused by collisions. I still personally feel that a switch is a better choice in that application, but I'll admit my experience is limited to the equipment that my friends and I have used over the years for LAN parties, which was mostly unmanaged consumer-level gear.

Originally posted by: ScottMac
Also, lighten up on the "real world" versus theoretical stuff. I've been networking for twenty-something years, I've seen "Real World" versus theoretical, I've had to deal with elevated customer expectations due to "marketing," I've worked in an Interoperability Lab of one sort or another for over a decade; we test this stuff for a living (and money!). I'm even a "CEE" Kalpana Certified EtherSwitch Engineer (since 1994). I understand the capabilities of the current technology jus' fine (Cisco, Nortel, Extreme, Foundry ...).
I'm arguing for the sake of discussion and to perhaps get some of the new folks a different perspective.

Point taken. My first comment about the old-style of ethernet was more of an homage to your apparent experience with this field, as anyone who remembers those, and even used them, obviously has been doing this for some time. 🙂

I just thought that you were mostly arguing a theoretical point, when that factor didn't have quite the real-world impact on things. I did do some reading on fast-ethernet full-duplex support though, thanks to this discussion, and learned some things. Wildpackets.com seems to have a good reference that goes into the nitty-gritty physical-layer details that most other reference sites tend to gloss over. (For example, I had no idea that 100BASE-T4 is so convoluted at the physical layer, transmitting data over three twisted pairs at once.)

Originally posted by: ScottMac
Switches are not the be-all, end-all. They're a wunnerful thing, no doubt, but for the average SOHO-kinda-person .... hubs would do at least as good of a job. BTW: The BB, CC, Fryes, and Comp-USA in my neighborhood are all in-stock on hubs if you want some 😀.

Interesting. I always did suspect that my local CompUSA store was a bit sub-par sometimes, although the guy at the tech-counter/security window is at least far more knowledgeable than most of the computer staff at my local Best Buy, who look more like an array of out-of-work teen models than anything else.

Slightly OT, but remember back in the day when Radio Shack used to be staffed by actual tech geeks, micro-computer enthusiasts, ham operators, etc.? Those were the days... Now it's more like, "You've got questions?" "We've got cheap impulse-buy toys, cellular phones, and ripoff extended-warranty plans! - would you like one of each to go?" "No thanks." (leaves store).

Originally posted by: ScottMac
We did do some Lab tests on Hub Versus Switch a couple years ago. The results are still posted on my fluff site (ScottMac.Net). You'll see that even with two or three hosts aimed at a single endpoint, the bandwidth gets divided and produces results a little better than a hub, but not nearly at the level one might suppose. I cannot post the names of the vendors for legal reasons.

Check it out.
Thanks, I will.

Edit: quote nesting fix
 