unRAID with 10GbE, LAN Client PCs with 10GbE, and then a (cheaper?) multi-GbE switch?

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
https://www.eteknix.com/asus-xg-c100c-10gbase-t-network-adapter-review/

Just formulating some ideas here. I could potentially put these Asus Aquantia 10GbE PCIe x4 NICs into my Ryzen 5 1600 rigs, and one into my unRAID server, and then I would need a way to tie them all together. I would need four 10GbE ports and a number of 1GbE ports: three 10GbE for the LAN machines, and one 10GbE for the unRAID server. Then I would plug a number of smaller NAS units into the 1GbE Ethernet ports on the switch. Lastly, I would connect the dual 1GbE LAN ports on the smaller NAS units such that one connection went to my internet LAN, and one connection went to the storage LAN.

I would configure my Ryzen 5 1600 rigs to still use the mobo's 1GbE connection for the internet, but have a separate storage LAN for my unRAID server and NAS units. This would have the added benefit of being able to better secure some of the NAS units, I think. Although, they wouldn't necessarily be able to reach NTP or pull updates, unless I bridged the LANs or used their secondary 1GbE ports to connect to the internet LAN too.
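
Roughly the addressing plan I have in mind (subnets are just hypothetical placeholders; a quick sanity check using Python's ipaddress module):

```python
# Sketch of the two-LAN plan. Subnets are hypothetical placeholders.
import ipaddress

internet_lan = ipaddress.ip_network("192.168.1.0/24")  # mobo 1GbE NICs; default gateway lives here
storage_lan = ipaddress.ip_network("10.10.10.0/24")    # 10GbE NICs; no gateway = no internet path

# The two LANs must not overlap, or the dual-homed rigs could route
# storage traffic out the wrong interface.
assert not internet_lan.overlaps(storage_lan)

for name, net in (("internet", internet_lan), ("storage", storage_lan)):
    print(f"{name} LAN: {net} ({net.num_addresses - 2} usable hosts)")
```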

I guess Asus is also selling a switch with two 10GbE ports and eight 1GbE ports on it.

Eventually, I want a switch (or switches) with ALL 2.5GbE or 5GbE ports. (10GbE will probably stay expensive.)
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Shrug. I know pretty much zilch about fiber networking (other than that my FiOS internet is delivered via FTTP, and I watched them install the ONT).
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
Well, that's too technical for me.

It looks like it's saying that modern CPU processing isn't fast enough to handle the smallest Ethernet packets.

But that article is 3 years old. Maybe VL's new Ryzen build would be fast enough. And I don't think he has Cat7 cable to run at 10Gbps; probably only 5Gbps or 2.5Gbps can be achieved with his existing cabling.

===

Edit:

Class E (Cat6) cable can run 10GBASE-T at up to 55 meters:
https://en.wikipedia.org/wiki/10_Gigabit_Ethernet
 
Last edited:

sdifox

No Lifer
Sep 30, 2005
98,798
17,266
126
Well, that's too technical for me.

It looks like it's saying that modern CPU processing isn't fast enough to handle the smallest Ethernet packets.

But that article is 3 years old. Maybe VL's new Ryzen build would be fast enough. And I don't think he has Cat7 cable to run at 10Gbps; probably only 5Gbps or 2.5Gbps can be achieved with his existing cabling.

That is just to illustrate the magnitude of the problem. We are talking about the switch, not the client. You want to go jumbo frame on 10Gb; the issue is that if the buffer in your switch is too small, it will get overrun. 2MB is definitely too small.

4Gb FC switch buffers tend to be in the 512MB to 1GB range. Granted, those are for 40+ ports, but still.
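
For scale, the back-of-the-envelope math on that (a rough sketch, assuming the 2MB spec is one shared buffer and a single saturated 10Gbps port):

```python
# How long can a 2MB switch buffer absorb a line-rate burst on one
# 10Gbps port before frames start getting dropped?
BUFFER_BYTES = 2 * 1024 * 1024   # the 2MB figure under discussion
LINK_BPS = 10e9                  # 10Gbps port speed

fill_time_ms = (BUFFER_BYTES * 8) / LINK_BPS * 1e3
print(f"buffer absorbs about {fill_time_ms:.2f} ms of line-rate burst")
# -> roughly 1.7 ms; congestion lasting any longer than that means drops
```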
 

sdifox

No Lifer
Sep 30, 2005
98,798
17,266
126
How about the ASUS 10Gbps switch? It only has a 2MB buffer too.

https://www.asus.com/us/Networking/XG-U2008/specifications/

and the Netgear 10Gbps switch, also a 2MB buffer:

https://www.netgear.com/images/datasheet/switches/XS708Ev2_XS716E_DS.pdf

Maybe FC works differently from Ethernet?


No, those are all consumer switches, that is why :p

Unless they are confident you will never lose packets to buffer overflow, since there are only so many ports.

That Cisco has jumbo at 9KB, so a buffer of 200 or so packets. I guess that is not too bad.
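
Quick sanity check on that packet count (assuming the full 2MB is available to a single port):

```python
# Rough frame capacity of a 2MB buffer at different frame sizes.
BUFFER_BYTES = 2 * 1024 * 1024
for label, frame_bytes in (("1500B standard", 1500), ("9KB jumbo", 9216)):
    print(f"{label}: ~{BUFFER_BYTES // frame_bytes} frames fit in the buffer")
# 9KB jumbo -> ~227 frames, i.e. the "200 or so" figure above
```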
 
Last edited:

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,103
126
No, those are all consumer switches, that is why :p

Unless they are confident you will never lose packets to buffer overflow, since there are only so many ports.

That Cisco has jumbo at 9KB, so a buffer of 200 or so packets. I guess that is not too bad.

Buffalo does 9K jumbo, Asus up to 16K, Netgear 9K. I don't believe I've seen any Ethernet products that don't support 9K jumbo frames.
 

sdifox

No Lifer
Sep 30, 2005
98,798
17,266
126
Buffalo does 9K jumbo, Asus up to 16K, Netgear 9K. I don't believe I've seen any Ethernet products that don't support 9K jumbo frames.

I just thought 2MB seemed low compared to 10Gbps, but if they can push packets out fast enough, I guess 2MB is enough.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
I just thought 2MB seemed low compared to 10Gbps, but if they can push packets out fast enough, I guess 2MB is enough.
Yeah, you're thinking about the design from an FC perspective. Even most of Cisco's ToR all-10Gb Nexus gear uses 16MB buffers for 48 ports. So yeah, still larger than 2MB over 8 ports, but not by much, especially considering this is consumer gear.

FC is different in effective implementation from Ethernet. Most Ethernet topology is a graduation in scale toward the core. Our most modern rack designs for our Virt clients are almost always a form of 10Gb from the host -> multi-10Gb (or 40Gb) to the ToR Fabric -> 40Gb or 100Gb to the core. There is a clear delineation and a sharp incline in traffic handling as you get closer to the core. With huge amounts of bandwidth, the buffer may fill quickly, but it almost never stays backed up for more than durations measured in ns, because there's just so much bandwidth to move things along. There's also a very broad movement of traffic as SQL talks to Data workers, as SAP talks to Reporting nodes, as replications move through the router. An Ethernet network is closest to a beltline, with everyone getting on and off with regularity through the core.

FC doesn't often get designed like that. FC is very north -> south, and everything tends to have the same topology. 8Gb FC is probably 8Gb all the way up to near the core, where it *might* hit 16Gb. Storage traffic is very bursty too, asking for every bit of that bandwidth in fat but short chunks. It's almost always north -> south too, as you may have 1, 2, maybe 3 or 4 SANs handling the bulk of data movement for hundreds of servers. So you'll often find use cases where hundreds of ports are being serviced with 8Gb/s chunks of data all the way up the chain to a couple of SAN ports that are being assaulted, often with micro-bursts of data that the SAN simply can't handle across its 2-6 8Gb or 16Gb ports. In those cases, huge buffers are necessary to keep up with data delivery that is often occurring on only a couple of ports connected to the SAN itself. It's more akin to an expressway where there are multiple ways on, but from there, you're on it until you get to the end.
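
To put rough numbers on that fan-in (port counts here are made up purely to illustrate the scale):

```python
# Hypothetical FC fan-in: many 8Gb host ports bursting at a SAN that
# only has a few 16Gb ports. All port counts are illustrative only.
hosts, host_gbps = 40, 8        # 40 servers at 8Gb FC each
san_ports, san_gbps = 4, 16     # 4 SAN ports at 16Gb FC each

offered_gbps = hosts * host_gbps     # 320 Gb/s worst-case burst
drain_gbps = san_ports * san_gbps    # 64 Gb/s the SAN can actually absorb
print(f"oversubscription: {offered_gbps / drain_gbps:.0f}:1")

burst_ms = 1.0                       # one 1 ms micro-burst
backlog_mb = (offered_gbps - drain_gbps) * 1e9 * (burst_ms / 1e3) / 8 / 1e6
print(f"buffer needed to ride out the burst: {backlog_mb:.0f} MB")
# -> ~32 MB for a single millisecond; hence the huge buffers on FC switches
```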

The need for buffers is simply higher than it is on most Ethernet deployments.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,202
126
Thanks for the info on the competing 2.5/5GbE standards. Distressing. I guess I was mostly interested in NBASE-T gear.