
Best performance - 2 x 24 switches or 3 x 16


I'm not sure whether the heavier traffic between the linked switches would cancel out the benefit of having only 16 nodes per switch.
 
For anyone to make an educated guess, more info is needed:

For what use?
Is there a router somewhere?
Are there servers involved?

Or are we just connecting a bunch of PCs together to form a workgroup?
 
Originally posted by: yruffostsif
For anyone to make an educated guess, more info is needed:

For what use?
Is there a router somewhere?
Are there servers involved?

Or are we just connecting a bunch of PCs together to form a workgroup?

Just to connect about 30 workstations and a half dozen servers. The apps will use SQL Server as a backend, and at least half the users will be on it concurrently. Also, one server will be used as a web server. The only router involved is the gateway.
 
Okay, still trying to wrap my mind around this one.

Can you hack up a quick Paint diagram of what it looks like?

What I've got in my mind is 2x24, three servers and 15 clients on each one, and the obvious bottleneck being whenever several streams of data have to go through the 100MBit crossover.

Maybe some 100MBit with GBit uplink should be on your menu. IIRC, Linksys has a 16-port 10/100 with a single 1000 uplink.

- M4H
 
to answer:

2 x 24, but you're screwing everyone on the second switch into sharing 100Mb of bandwidth.

Go for FS526T so you can at least get 1000Mb of bandwidth between switches.

Edit: Plus you'll get two spare GB ports to plug servers in if you've got adapters.
 
I don't think the gigabit crossover is in the budget. 🙁

Right now, everything is a mess of hubs, so even the 100MB crossover wouldn't be too bad comparatively.

I was just wondering if using 3 16s would mean less traffic would be sent over the crossovers. Is the switch going to remember which segment the node is on and only send it that way or will it have to discover each time? I guess that would answer it.
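
To the question above: a switch does remember which port each node is on. It learns source MAC addresses as frames arrive and only floods when a destination is still unknown. A rough sketch of that behavior, in illustrative Python (not any real switch firmware):

```python
class LearningSwitch:
    """Minimal sketch of transparent-bridge MAC learning (802.1D style)."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # source MAC -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: remember which port the source lives on.
        self.mac_table[src_mac] = in_port
        # Forward: a known destination goes out one port; unknown floods.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.receive(0, "aa", "bb"))  # "bb" unknown yet: flood -> [1, 2, 3]
print(sw.receive(1, "bb", "aa"))  # "aa" already learned on port 0 -> [0]
```

So after the first exchange, cross-switch traffic only crosses the uplink when the destination actually lives on the other switch; it doesn't rediscover every time (entries do age out after a few minutes of silence).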
 
Originally posted by: yruffostsif
to answer:

2 x 24, but you're screwing everyone on the second switch into sharing 100Mb of bandwidth.

Go for FS526T so you can at least get 1000Mb of bandwidth between switches.

Edit: Plus you'll get two spare GB ports to plug servers in if you've got adapters.

That's exactly right. Unless they can be configured for EtherChannel like Cisco switches, you're limiting everyone else on one switch to sharing a 100Mb connection.
 
Originally posted by: HeroOfPellinor
I don't think the gigabit crossover is in the budget. 🙁

Right now, everything is a mess of hubs, so even the 100MB crossover wouldn't be too bad comparatively.

I was just wondering if using 3 16s would mean less traffic would be sent over the crossovers. Is the switch going to remember which segment the node is on and only send it that way or will it have to discover each time? I guess that would answer it.

I've got a couple 24-port 3Com SuperStack III's I'll sell ya for $200 each. That'd get rid of the 100mb crossover.


 
Originally posted by: HeroOfPellinor
I don't think the gigabit crossover is in the budget. 🙁

Right now, everything is a mess of hubs, so even the 100MB crossover wouldn't be too bad comparatively.

I was just wondering if using 3 16s would mean less traffic would be sent over the crossovers. Is the switch going to remember which segment the node is on and only send it that way or will it have to discover each time? I guess that would answer it.

You still have to look at "worst-case scenario" - a client on the "first" switch having to hop the crossover to the second and then again to the third switch before hitting the server it wants.

- M4H
 
Originally posted by: MercenaryForHire
Originally posted by: HeroOfPellinor
I don't think the gigabit crossover is in the budget. 🙁

Right now, everything is a mess of hubs, so even the 100MB crossover wouldn't be too bad comparatively.

I was just wondering if using 3 16s would mean less traffic would be sent over the crossovers. Is the switch going to remember which segment the node is on and only send it that way or will it have to discover each time? I guess that would answer it.

You still have to look at "worst-case scenario" - a client on the "first" switch having to hop the crossover to the second and then again to the third switch before hitting the server it wants.

- M4H

I'd have two cables going out of each switch, one to each of the others. Of course, that only helps if the switches remember which other switch each node is on.


A--------B
 \      /
  \    /
   \  /
    \/
    C

Looking at it...I could maybe swing a 48-port. Would one giant 48 perform better than 2 x 24 with a 1000Mb crossover? Seems like a stupid question, but isn't each channel internally limited to 200Mb or something?
 
Um, unless all your users are Kazaa users downloading warez and pr0n all day, the 48-port is going to work out better than the 2 x 24, even with the GB uplink.

The only benefit you would get with separate switches is keeping users on the same segment as a server. Once they need the gateway, performance goes south and everyone on the first switch sees latency.
 
Originally posted by: HeroOfPellinor
Looking at it...I could maybe swing a 48-port. Would one giant 48 perform better than 2 x 24 with a 1000Mb crossover? Seems like a stupid question, but isn't each channel internally limited to 200Mb or something?

Yes! A single high-quality 48-port switch would be the best solution. Speed limit @ 200MB each channel, sure, but the point behind a switch is that any pair of ports has its "own channel" instead of being forced to share bandwidth. That way there's no need to worry about client-server datastreams choking the crossover link.

Multiple client access to a single server is still going to be a bitch though, since a single client can (theoretically) snarf up all the bandwidth from a server. Your situation is pretty much the Poster-Network for "Needs Gigabit." 🙂

- M4H
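
The bottleneck comparison above can be put into back-of-envelope numbers. A quick sketch (the client counts and link speeds are illustrative, not from any datasheet):

```python
def per_client_mbps(clients, server_link_mbps, uplink_mbps=None):
    """Fair-share throughput per client when all of them hit one server.

    The limit is the server's own link, further capped by a shared
    inter-switch uplink if the clients sit on the other switch.
    """
    bottleneck = server_link_mbps
    if uplink_mbps is not None:
        bottleneck = min(bottleneck, uplink_mbps)
    return bottleneck / clients

# 15 concurrent clients crossing a 100Mb crossover to a 100Mb server:
print(per_client_mbps(15, 100, uplink_mbps=100))  # ~6.7 Mb/s each

# Same 15 clients on one big switch, server on a gigabit port:
print(per_client_mbps(15, 1000))                  # ~66.7 Mb/s each
```

Either way the server's port is the choke point, which is exactly the "Needs Gigabit" situation: putting only the server on a gigabit port raises everyone's share without touching the client links.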
 
Originally posted by: MercenaryForHire
Originally posted by: HeroOfPellinor
Looking at it...I could maybe swing a 48-port. Would one giant 48 perform better than 2 x 24 with a 1000Mb crossover? Seems like a stupid question, but isn't each channel internally limited to 200Mb or something?

Yes! A single high-quality 48-port switch would be the best solution. Speed limit @ 200MB each channel, sure, but the point behind a switch is that any pair of ports has its "own channel" instead of being forced to share bandwidth. That way there's no need to worry about client-server datastreams choking the crossover link.

Multiple client access to a single server is still going to be a bitch though, since a single client can (theoretically) snarf up all the bandwidth from a server. Your situation is pretty much the Poster-Network for "Needs Gigabit." 🙂

- M4H

Well, the FS750AT has 2 GB ports and the SQL server has a gigabit NIC, so that would take care of that. I've always wondered, though: what happens within a switch when you have multiple nodes all connecting to the same server at the same time? Is the only limit the server's connection?
 
Originally posted by: HeroOfPellinor
Originally posted by: MercenaryForHire
Originally posted by: HeroOfPellinor
Looking at it...I could maybe swing a 48-port. Would one giant 48 perform better than 2 x 24 with a 1000Mb crossover? Seems like a stupid question, but isn't each channel internally limited to 200Mb or something?

Yes! A single high-quality 48-port switch would be the best solution. Speed limit @ 200MB each channel, sure, but the point behind a switch is that any pair of ports has its "own channel" instead of being forced to share bandwidth. That way there's no need to worry about client-server datastreams choking the crossover link.

Multiple client access to a single server is still going to be a bitch though, since a single client can (theoretically) snarf up all the bandwidth from a server. Your situation is pretty much the Poster-Network for "Needs Gigabit." 🙂

- M4H

Well, the FS750AT has 2 GB ports and the SQL server has a gigabit NIC, so that would take care of that. I've always wondered, though: what happens within a switch when you have multiple nodes all connecting to the same server at the same time? Is the only limit the server's connection?


The server's connection, up to the switch's internal switching fabric limit. Since these are all unmanaged switches, it will be the server's connection @ 1000Mb/s. Managed switches take the cake with server connectivity... Fast EtherChannel is worth the money by itself.

The FS750AT has slots; you still have to buy the modules. Link
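
The EtherChannel point above is just multiplication: bundling several full-duplex links into one logical trunk. A sketch with illustrative numbers (real bundles also depend on how flows hash across the member links):

```python
def etherchannel_mbps(links, link_speed_mbps=100, full_duplex=True):
    """Aggregate capacity of a link bundle.

    A full-duplex 100Mb link carries 100Mb each way, so it is often
    quoted as 200Mb aggregate; a bundle multiplies that by link count.
    """
    per_link = link_speed_mbps * (2 if full_duplex else 1)
    return links * per_link

print(etherchannel_mbps(4))  # 4-link Fast EtherChannel bundle -> 800 Mb/s
```

Note a single client-server flow still rides one member link at a time; the bundle helps when many flows share the trunk, which is the concurrent-user case here.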


 