InfiniBand or 10Gb Ethernet => storage server/NAS?

smangular

Senior member
Nov 11, 2010
In a home environment, I'm wondering if I can do a direct-connect (cross-over?) InfiniBand link between my main PC and my ZFS storage server (Ethernet is fine to the other PCs), or if it's a better idea to do 10Gb Ethernet. Currently running the latest Community Nexenta and Win8 Pro RTM. I'm looking to do something faster than 1GbE and was hoping to avoid LACP. Workload is VMs/video/iSCSI.

Seems like this will be an eBay project, since Newegg's cheapest 10GbE card is $354. http://www.newegg.com/Product/Produc...=1&name=10Gbps

Looking for PCIe on both sides. Tons of old equipment is available. Is this a good card, or are there better suggestions? http://www.ebay.com/itm/QLOGIC-QLE7...sk_Controllers_RAID_Cards&hash=item1e6f5401c4


P.S. The storage server will have a ZFS SSD cache soon as well.
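
Whichever way it goes, a raw TCP push between the two boxes is an easy way to sanity-check that the new link is actually doing better than gigabit. A rough sketch (the port and the 4 GiB transfer size are arbitrary picks; run the server side on the storage box and the client side on the main PC):

Code:
import socket, sys, time

PORT = 5201            # arbitrary test port
CHUNK = 1 << 20        # 1 MiB per send
TOTAL = 4 << 30        # 4 GiB, big enough that startup/buffering effects wash out

def server():
    # Sink side: accept one connection and count what arrives.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    print("received %.1f GiB" % (received / float(1 << 30)))

def client(host):
    # Source side: blast zero-filled buffers and time the transfer.
    sock = socket.create_connection((host, PORT))
    payload = b"\0" * CHUNK
    sent = 0
    start = time.time()
    while sent < TOTAL:
        sock.sendall(payload)
        sent += CHUNK
    sock.close()
    elapsed = time.time() - start
    print("%.2f Gbit/s" % (sent * 8 / elapsed / 1e9))

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])

Save it as a script, run it with "server" on the storage box and "client <link-ip>" on the PC. A single TCP stream may not fill a 10Gb pipe on its own, so treat the result as a floor, not a ceiling.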

 

MarkLuvsCS

Senior member
Jun 13, 2004
Personally I'd prefer 10GbE over InfiniBand, because it's easier to have a single type of cable to worry about. Unfortunately the cost of 10GbE is still pretty high right now, so the interface cards are pricey. The cables should be significantly cheaper for 10GbE (Cat 6a or Cat 7) than for InfiniBand, and 10GbE will also be easier to fit into future hardware (I'm assuming RJ-45 isn't going anywhere anytime soon). 10GbE is an expensive upgrade for insane bandwidth between your main PC and server, but in a few years it'll be cost-effective enough that all your equipment can use it, and it'll be easy to expand your setup.
 

smangular

Senior member
Nov 11, 2010
Yeah, InfiniBand may have more complexity as well.
It seems like 10GbE is crazy cheap, unless I'm missing something.

An Intel NE020 with a copper CX4 cable is about $50/card, plus ~$32 shipped for the cable (15m is the max distance for copper CX4).

It even seems to have Win8 beta drivers. The *nix drivers may be a much bigger pain... It really can't be this cheap, can it?

http://www.intel.com/content/www/us...apters/neteffect-ethernet-server-cluster.html

10GBE:
http://www.ebay.com/sch/i.html?_odk....m570.l1313&_nkw=NE020&_sacat=20318&_from=R40

CX4 Cable:
http://www.ebay.com/itm/10GB-Ethern...242?pt=LH_DefaultDomain_0&hash=item2c68174412

The eBay listing mentions that no special crossover cable is needed for CX4.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
Nexenta will support InfiniBand SRP targets, which gets you far faster storage with lower latency than 10GbE. If you are using SSD caches, this can make a huge difference in storage performance.

For full support you need to stick to Mellanox InfiniBand cards. And if you're using VMware, you're limited to ESXi 4.1 for SRP support (ESXi 5's Mellanox driver only supports about half the speed and IPoIB, which is far slower and consumes far more CPU resources).

Mellanox is beta-testing its ESXi 5.x drivers with SRP support, and signs point to a release before the end of the year.

Anyway, it does take a bit more configuring, but the benefits are definitely there when it comes to CPU consumption and latency.

As always, it depends on your workload and how much bandwidth and IOPS your current SAN is pushing. A simple array that moves maybe a gigabit's worth of bandwidth obviously wouldn't be worth an InfiniBand or 10GbE effort, but if you have something that can put out a sustained 5Gbit or more of reads and writes, it's worth looking into.
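
If you want a quick read on whether the pool can even outrun gigabit before buying cards, a rough local sequential-read check does the job. Sketch below; the file path is just a placeholder, and it should point at something on the pool that's bigger than RAM so the ARC doesn't skew the numbers:

Code:
import time

TEST_FILE = "/tank/media/big_test_file.mkv"   # placeholder path on the pool
CHUNK = 8 << 20                                # 8 MiB sequential reads

total = 0
start = time.time()
with open(TEST_FILE, "rb") as f:
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        total += len(block)
elapsed = time.time() - start
print("read %.1f GiB at %.2f Gbit/s" %
      (total / float(1 << 30), total * 8 / elapsed / 1e9))

If that lands well above ~1 Gbit/s sustained, the network is the bottleneck and the upgrade starts to make sense.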
 