
Help with choosing switch (1Gbit + 1-2*10GbE)

TehNetworker

Junior Member
Hi, I have a small network, but I want to upgrade my file server to 8x 1TB 7.2k RPM HDDs in RAID 10. This should give me a big speed boost, so I think I need 10GbE. Is there any good-value switch option? I looked at HP ProCurve etc., but anything with 10GbE support is too expensive. Maybe I should try eBay?
 
...and why do you think you need 10Gb?

consumer + small network + no budget != 10Gb
 
A better bet for a small network is multiple gigabit connections in an etherchannel (link aggregation). It takes serious hardware on the server side to actually push 10 gig: think a 32-core box running 200 virtual machines.
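One caveat with link aggregation worth knowing: the switch balances traffic per flow, so a single client-to-server transfer still tops out at one link's 1Gb/s. A rough sketch of the idea, assuming something like Cisco's default src-dst-MAC load balancing (the hash function here is a toy stand-in, not the actual switch algorithm):

```python
# Toy illustration of etherchannel per-flow load balancing.
# ASSUMPTION: link chosen by hashing src/dst MAC, as in Cisco's
# default src-dst-mac mode; real switches use their own hash.
def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Pick the physical link a frame between two MACs will use."""
    s = int(src_mac.replace(":", ""), 16)
    d = int(dst_mac.replace(":", ""), 16)
    return (s ^ d) % n_links

# One client talking to the server always hashes to the same link,
# so a single large file copy never exceeds one link's bandwidth.
first = choose_link("00:11:22:33:44:55", "aa:bb:cc:dd:ee:ff", 2)
again = choose_link("00:11:22:33:44:55", "aa:bb:cc:dd:ee:ff", 2)
assert first == again
```

The upshot: a 2-link etherchannel gives you 2Gb/s of aggregate bandwidth across multiple clients, not 2Gb/s to any single machine.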
 
Unless you are spending serious money on the RAID controller, I doubt you will get much past an etherchannel of maybe 2 links. 7.2k SATA drives do not have the IOPS to push much more than that; they typically land in the 90-130 IOPS range. Even when I see, say, 600 IOPS off a LUN in my SAN, it is rarely moving more than 1Gb/s of actual data. If I run a benchmark on the LUN, it maxes out at about 190MB/s sustained (5x 15k disks in this case), because adding disks does not scale linearly, but that is enough to start pushing the etherchannel connection.
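A quick back-of-envelope check of the proposed array. The per-disk figures below are assumed typical values for 7.2k SATA (roughly 110 IOPS and 120MB/s sequential per spindle, 64KiB average IO), not measurements of any specific drive:

```python
# Rough throughput estimate for 8x 7.2k SATA in RAID 10.
# ASSUMPTIONS: ~110 IOPS and ~120 MB/s sequential per disk,
# 64 KiB average IO size, ~95% usable gigabit link efficiency.
DISKS = 8
IOPS_PER_DISK = 110
SEQ_MBPS_PER_DISK = 120
IO_SIZE_KIB = 64

# RAID 10: reads can hit all spindles, writes only half (mirrored pairs).
read_iops = DISKS * IOPS_PER_DISK           # 880 random read IOPS
write_iops = (DISKS // 2) * IOPS_PER_DISK   # 440 random write IOPS

random_read_mbps = read_iops * IO_SIZE_KIB / 1024   # random-read MB/s
seq_read_mbps = DISKS * SEQ_MBPS_PER_DISK           # ideal sequential MB/s

usable_gig_mbps = 1000 / 8 * 0.95  # ~118 MB/s usable per 1GbE link

print(f"Random read: ~{random_read_mbps:.0f} MB/s "
      f"(~{random_read_mbps / usable_gig_mbps:.1f} gigabit links)")
print(f"Sequential read, best case: ~{seq_read_mbps} MB/s "
      f"(~{seq_read_mbps / usable_gig_mbps:.1f} gigabit links)")
```

Under these assumptions, random IO does not even fill a single gigabit link; only an ideal large sequential stream could outrun a 2-link etherchannel, and real-world file-server traffic sits far below that ceiling.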
 
You would also be spending over $7K just on the networking hardware, and you will probably not find 10Gig gear on eBay for a long time yet.

In short, if you can't afford 10GigE, your equipment probably isn't capable of pushing it anyway.
 
I've hit around 350MB/s running Iometer with a ton of IOPS on a Promise drive array using 5- and 6-drive RAID 5 sets. The issue is you will never see that sort of load under normal file-server usage, and if you do, you need more hardware. That test wasn't even going through the Windows network stack, which would need a lot of tweaking to come anywhere close to the Iometer numbers. Then you still have the network equipment to deal with. Stick with a 2-port gigabit etherchannel on a layer 2 Cisco switch and save yourself $20k.
 