Are there any quad port NICs faster than 10G?

boed

Senior member
Nov 19, 2009
540
14
81
I have no idea what to get at this point. I currently have 10G at home but that isn't enough - my RAID controllers are pushing my 10G links to the limit when copying between systems (I don't have a switch - I have a single quad-port NIC connected to 3 other systems). When I look at throughput on the ports, they are maxed out. I was considering 40G, but there are no quad-port NICs and 40G switches are too expensive for me. I also looked at 25G, 50G, and 100G NICs but couldn't find the right option, which was simple with 10G. Does anyone know of any cheap 25/50/100G switches, or any 25/50/100G quad-port NICs? It isn't a crisis or anything, but I'd like to go to the next level on my network. I often copy several TB at a time (up to 50TB), and it would be nice if I could do it more quickly.

Thanks in advance for your help. (I'm sure if I had unlimited funds I could easily do this, but I'm hoping to connect 4 PCs at 25G or faster for under $2,000 total.)
 

mv2devnull

Golden Member
Apr 13, 2010
1,521
154
106
Mellanox cards (PCIe x16) seem to have one or two 100Gbps QSFP28 ports, and you can get a breakout cable from 100G to 4x25G. In other words, that one 100G port can act as 1x100, 2x50, or 4x25 via cabling. Never seen the prices, though.
 

boed

Senior member
Nov 19, 2009
540
14
81
Thanks for the help!

I think I nearly have a solution if anyone has any further suggestions in case I've gone wrong somewhere...

4x MCX456A-ECAT network cards $350x4= $1,400
2x AMQ28-SR4-M1 100g transceiver $90x2= $180
1x Karono MPO Female to MPO Female Patch Cord, 12-core Fibers, TYPE B, 16.5 ft (5M), OM3 Multi-mode Fiber Optic $50x1 = $50

I was thinking about a 5m DAC splitter, but I can't find a 5m or even a 4m one. Or something like this - http://www.fiberon.com/product/100g-qsfp28-4x-25g-sfp28-aoc/
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
I google searched. I don't have optical experience.

Somebody here might be able to help.
 
  • Like
Reactions: boed

Genx87

Lifer
Apr 8, 2002
41,091
513
126
Wow, that is crazy throughput. What OS? I would normally say get a switch, but with the throughput you are pushing, direct connect is probably better. What if you doubled up the quad-port 10GbE NIC on your main box, added a second single-port or dual-port 10GbE NIC on the clients, and used SMB 3.0, with each direct connection on its own network segment? MPIO iSCSI could also work, but I have found SMB 3.0 to be faster, at least on Windows boxes.
 

boed

Senior member
Nov 19, 2009
540
14
81
Thanks - I'm using Windows, not FreeNAS. I may top out at 40G due to Windows limitations (even if I eventually go all SSD), but it seems like 100G would actually be no more expensive.

I tried SMB 3.0 but the best I could get was 16Gbps - that may be as fast as I can go, no idea (I'm hoping RDMA helps). I've never tried it with a completely clean OS on both systems, which will be my next project once I get the NICs.
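For anyone trying to work out whether the wire or the disks are the limit in a case like this, one crude check is to push synthetic traffic between the two boxes; iperf3 is the usual tool, but purely as an illustrative sketch (hypothetical port number, and note that a single Python stream will usually bottleneck on CPU well before these speeds, so treat the result as a rough floor, not the link's capability):

```python
# Minimal point-to-point TCP throughput check (illustrative sketch only).
# Assumptions: Python 3.8+, both hosts reachable on an arbitrary free port.
# A single Python stream is usually CPU-bound well below 10 Gbps, so use a
# real tool such as iperf3 with parallel streams for serious testing.
import socket
import sys
import time

PORT = 5201                 # hypothetical port, pick anything free
CHUNK = 4 * 1024 * 1024     # 4 MiB per send
TOTAL = 8 * 1024 ** 3       # send 8 GiB in total

def server():
    # Listen, accept one connection, and count received bytes until close.
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"received {received / 1e9:.1f} GB in {secs:.1f} s "
              f"= {received * 8 / secs / 1e9:.2f} Gbit/s from {addr[0]}")

def client(host):
    # Blast a fixed amount of zero bytes at the server and time it.
    buf = b"\0" * CHUNK
    sent = 0
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        while sent < TOTAL:
            s.sendall(buf)
            sent += len(buf)
    secs = time.time() - start
    print(f"sent {sent / 1e9:.1f} GB in {secs:.1f} s "
          f"= {sent * 8 / secs / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    # usage: python tptest.py server    |    python tptest.py client <host-ip>
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

Run the server side on the destination box and the client with the destination's IP; if the raw TCP number comes out well above what the file copy achieves, the drives or the SMB stack are the limit rather than the wire.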
 
Last edited:

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
Windows Server 2012/2016 and Windows 10 Pro & Enterprise (not the Home edition) do support SMB Direct - just make sure it's enabled.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,539
418
126
Out of curiosity, what is the real functional/technical need that that speed addresses for you?


:cool:
 

boed

Senior member
Nov 19, 2009
540
14
81
Needs - 10G, which I have. Wants - 200G :) which I couldn't use on a system-to-system copy even if I had it, because my drives/controllers would never keep up. At the moment 25G would probably be about as fast as my drives could keep up with, although I'd imagine at some point the drives will all be SSD. Of course by then the tech could change. 40G would probably be more than I could feed, as my system seems to max out at about 33Gbps on internal drive-to-drive transfers. I'm just playing around with ideas at the moment and trying to become more knowledgeable about faster-than-10G tech.

At the end of each week I typically do a copy that takes up to 30 minutes - not the end of the world for me, but for some inane reason I want it to go faster. When I have to do a full backup it takes about 24 hours (about 85TB of data on a 165TB drive).
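For anyone sanity-checking the numbers in this thread, the back-of-the-envelope math looks roughly like this (a sketch assuming decimal TB, ~90% link efficiency, and drives that can keep up, which as noted above they often can't):

```python
# Back-of-the-envelope transfer times from the figures quoted in the thread.
def hours(tb, gbps, efficiency=0.9):
    """Hours to move `tb` terabytes over a `gbps` link at a given efficiency."""
    bits = tb * 1e12 * 8
    return bits / (gbps * 1e9 * efficiency) / 3600

for tb in (50, 85):
    for gbps in (10, 25, 40, 100):
        print(f"{tb} TB over {gbps:>3} Gbit/s: ~{hours(tb, gbps):.1f} h")

# The quoted full backup (85 TB in ~24 h) works out to roughly
# 85e12 * 8 / (24 * 3600) ≈ 7.9 Gbit/s sustained, i.e. close to 10 GbE
# line rate - so at those sizes the link, not only the disks, is in play.
```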
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
Why not use incremental backups or folder-syncing software instead of doing a full backup every day?
 

boed

Senior member
Nov 19, 2009
540
14
81
The incremental sync is the 30 minute job. Sometimes you need a full backup though when starting from scratch.
 

boed

Senior member
Nov 19, 2009
540
14
81
Thanks everyone for your help. I really appreciate it. I think with your advice I'm going to try something different and will post a new thread so it doesn't become an impossible to follow thread as I have so many questions. Thanks again!
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Just in case other people are searching for this in the future, it's worth noting that, as it currently stands, port breakout, which gets referenced several times in this thread, is almost exclusively limited to switch ports, not adapter ports. This is the same for Cisco, Juniper, HPE, and yes, Mellanox. From their own page:

Important: The 40GbE split options is supported only on Mellanox switches and not supported on Mellanox adapters (e.g. ConnectX-3) when equipped with 40GbE ports. In case you wish to limit the 40GbE port on the adapter to 10GbE you can use QSA or similar 40GbE-10GbE cables (Refer to Mellanox.com - here)

The breakout cables are designed around density, so that users can buy 32-port 100G switches and break them out into 128 25Gb ports, collapsing them back into 100Gb ports depending on needs and growth.

One option would be to get InfiniBand gear. An old 40Gb InfiniBand switch is loud and obnoxious but will give you 24 ports for about $300. ConnectX-2 or ConnectX-3 VPI cards are still well supported on Windows. While it can be more difficult to manage (you'll need to run a subnet manager if you don't have one in the switch), it's dirt cheap.
 
  • Like
Reactions: mxnerd

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
@thecoolnessrune

Q: Do speeds above 40Gbps have any standard? Taking InfiniBand 40Gbps/100Gbps as an example, will different vendors' adapters/switches/cables work with each other?
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
@thecoolnessrune

Q: Do speeds above 40Gbps have any standard? Taking InfiniBand 40Gbps/100Gbps as an example, will different vendors' adapters/switches/cables work with each other?

The electrical standard and form factor that bind all of these is the SFP, and that is not governed by any standards body but rather by a multi-source agreement (MSA). Basically, the SFP is a gentleman's agreement among most of the network vendors that their products are built to the same electrical and form factor standard.

SFP was gigabit, SFP+ was 10Gb, and SFP28 is 25Gb. The quad-lane variants quadruple the standard (QSFP is 4Gb, QSFP+ is 40Gb, and QSFP28 is 100Gb). There's also a double-density variant of the latest standards that doubles the lanes in a single interface, called SFP-DD (100Gb) and QSFP-DD (400Gb).
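To make the lane math concrete, here's a tiny sketch of the nominal aggregates (headline rates only; real usable throughput is a bit lower after encoding overhead):

```python
# Nominal aggregate rates per form factor: name -> (lanes, per-lane Gbit/s).
form_factors = {
    "SFP":     (1, 1),    # gigabit
    "SFP+":    (1, 10),
    "SFP28":   (1, 25),
    "QSFP":    (4, 1),
    "QSFP+":   (4, 10),
    "QSFP28":  (4, 25),
    "SFP-DD":  (2, 50),   # double-density: second row of lanes
    "QSFP-DD": (8, 50),
}

for name, (lanes, per_lane) in form_factors.items():
    print(f"{name:8s} = {lanes} x {per_lane:>2} Gbit/s = {lanes * per_lane} Gbit/s")
```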

The MSA means that all of these standards and cables are technically compatible with each other. Whether they are in practice is up to the vendor. Some vendors (like QLogic) could not care less what SFP is stuck in their switches or cards. Other companies (Cisco is of course one of the more well-known examples) do not allow it by default: the switch reads the SFP information and rejects the module if it is not a Cisco transceiver unless you use "service unsupported-transceiver", which basically becomes a giant catch-all that Cisco will use to blame almost any issue with traffic on that port on the unsupported SFP. :)

Regarding InfiniBand, the InfiniBand Trade Association manages the link types and the link speeds. The IBTA nowadays supports the above SFP standards, their own CXP standard, and in the past the old CX4 standard. The kick with InfiniBand is that there are quite a few possible combinations, so you have to make sure your source and destination match. For instance, there's a lot of old 40Gb InfiniBand gear using FDR10 x4 links (over a QSFP+ cable), but there are also 56Gb FDR x4 links (over an SFP28 cable). On paper the standards look very similar, but they're completely different generations using completely different interfaces.
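On the InfiniBand side, the per-lane signaling rates are what separate the generations; a rough sketch of the x4 aggregates mentioned above (nominal signaling rates, ignoring the different encoding overheads between generations):

```python
# Nominal InfiniBand per-lane signaling rates (Gbit/s) and their x4 aggregates.
# Usable data rate differs per generation because SDR/DDR/QDR use 8b/10b
# encoding while FDR10/FDR/EDR use 64b/66b.
ib_generations = {
    "SDR":   2.5,
    "DDR":   5.0,
    "QDR":   10.0,
    "FDR10": 10.3125,
    "FDR":   14.0625,
    "EDR":   25.78125,
}

for gen, per_lane in ib_generations.items():
    print(f"{gen:5s} x4 link: {4 * per_lane:7.2f} Gbit/s signaling")

# FDR10 x4 (~41 Gbit/s) is typically marketed as "40Gb" gear, while FDR x4
# (~56 Gbit/s) is the "56Gb" gear - hence the confusion mentioned above.
```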

But the long and short of it is, for both Ethernet and InfiniBand, as long as the medium uses SFP, there should be compatibility across adapters/cables/switches, unless the vendor doesn't want to support it.
 
  • Like
Reactions: mxnerd

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
@thecoolnessrune

The reason I asked: on pages like the one below, is the cable vendor trying to trick you into thinking there are compatibility issues between vendors so they can charge $10 extra?



[attached image: untitled.png]
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
@thecoolnessrune

The reason I asked: on pages like the one below, is the cable vendor trying to trick you into thinking there are compatibility issues between vendors so they can charge $10 extra?

It's not a compatibility issue per se, but rather a support issue. As I mentioned above, Cisco, HPE, Brocade, and many others explicitly lock out other vendors' SFPs. To get around this, the cable vendor will "burn" a compatible ID into the SFP firmware to let it show up as the switch vendor's own compatible SFP. For instance, with Cisco, if you installed a non-burned SFP into your switch like the one above, it would err-disable the port, saying it's not a supported transceiver, until you used the "service unsupported-transceiver" option to allow it. If you chose the "Compatible with Cisco" option instead, the SFP gets burned with a Cisco ID, so when it's inserted into the switch, the SFP tells the switch "hey, I'm totally a Cisco QSFP-100G-SR4-S, trust me," and the Cisco switch will bring the port up as a supported transceiver.

Research your switch or device to see if you need firmware-altered cables. As I mentioned, vendors like Intel and QLogic really don't care what transceiver you put in - they will just try to bring the port up. When in doubt, a lot of devices will accept a Cisco-burned cable at the end of the day (benefits of being the king), unless the device maker is a heavy competitor in one of those spaces.
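If you want to see what identity a module is actually presenting to the host, on a Linux box with ethtool you can usually dump the module EEPROM with "ethtool -m"; a minimal sketch (the interface name is just an example, the exact fields depend on the driver and module, and the Windows boxes in this thread would need the NIC vendor's own utilities instead):

```python
# Print the vendor-related fields a transceiver reports via its EEPROM.
# Assumes a Linux host with ethtool installed and a module/driver that
# supports EEPROM readout (ethtool -m); typically needs root.
import subprocess

IFACE = "enp3s0f0"  # hypothetical interface name, replace with your own

out = subprocess.run(
    ["ethtool", "-m", IFACE],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    # Typical fields include "Vendor name", "Vendor PN", "Vendor SN".
    if line.strip().lower().startswith("vendor"):
        print(line.strip())
```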
 
  • Like
Reactions: mxnerd

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
It's not a compatibility issue per se, but rather a support issue. As I mentioned above, Cisco, HPE, Brocade, and many others explicitly lock out other vendors' SFPs. To get around this, the cable vendor will "burn" a compatible ID into the SFP firmware to let it show up as the switch vendor's own compatible SFP...

Wow, that's great info. Thanks.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
I was recently told that Intel cards do care.
I'd be curious if there's anything specific called out, because I can't remember anything where the Intel cards were vendor-locked. I do know Intel had their DA line of cards, which were made more affordable on the basis that Intel only allowed DAC (direct attach cable) connections. If you tried to connect, for instance, an LC SFP to plug in fiber, it would detect that it wasn't a DAC cable and shut the port down.

But as far as the vendor goes, I've never seen them care. I've got Intel OEM X520 (10Gb) and X710 (40Gb) cards kicking around with some Finisar DAC cables, and they hum along fine.

That said, Intel often OEMs these cards to the big vendors (Dell, HPE, Cisco), and I'd be curious whether the firmware they put on *does* lock out other vendors - I've never dug into that.