New Aquantia 5Gbps Ethernet appearing

Laststop311

Member
Apr 24, 2013
70
3
36
I am putting off my PC build till PCIe 4.0 and building a NAS instead. I see some new motherboards supporting 5Gbps Ethernet ports now. I know the router must support 5Gbps and the NAS must support it as well. I basically need a router with 3x 5Gbps ports and 3x 1Gbps ports, and I need a NAS system that is going to support this 5Gbps speed. I'll worry about the PC when I build it. Also, I know it will only work at 5Gbps for the PCs that support it through the whole chain (PC, router, NAS), but other 1Gbps PCs can still access it, just at the lower speed.

What is the cheapest way for me to make this setup a reality? Not cheapest as in garbage components, but the best bang for the buck to get a 5Gbps NAS connection to my eventual new PC?

All the internal in-wall cabling is Category 6a, so no worries there; it should handle 5Gbps no problem.
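Once both ends are in place, a quick iperf3 run between the NAS and the PC is the simplest way to confirm the link actually negotiates and carries multi-gigabit speeds. A minimal sketch; the hostname nas.local is just a placeholder:

Code:
# On the NAS: run iperf3 in server mode and leave it listening.
iperf3 -s

# On the PC: push traffic at the NAS for 10 seconds over the 5GbE link.
# nas.local is a placeholder hostname; -P 4 runs four parallel TCP streams
# so a single-stream bottleneck doesn't hide the real link capacity.
iperf3 -c nas.local -P 4 -t 10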
 

mindless1

Diamond Member
Aug 11, 2001
8,059
1,445
126
The cheapest way is to forget about PCIe 4 and just build systems with at least one free PCIe 3.0 x1 slot, then wait for the enterprise gear to be released, then wait for the consumer grade gear to be released, then wait for it to drop to normal pricing instead of paying the early adopter tax. Literally, cost isn't going to be about "garbage components"; rather, the latest, fastest thing costs disproportionately more just because it's the latest tech.

I assume you'll be populating the NAS with SSDs, since 5Gb for HDDs would yield minimal benefit to cost ratio. Anyway, if I were you I'd forget about PCIe 4 and just build a new PC when other aspects seem to be a bottleneck, put a 5GbE controller in both the NAS and the primary workstation PC, wire them directly to each other without a switch (or router, as you mentioned), and keep a second NIC in each to connect to the LAN. No need to pay the early adopter price when, to start out, you will only have two devices that need to communicate at 5Gb. When 5GbE consumer grade switches become commonplace, get one. That's the cheapest way.
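For that direct NAS-to-PC run, the usual trick is to give each end of the cable an address in its own small private subnet, separate from the LAN addresses on the second NIC. A minimal sketch for a Linux box; the interface names and the 10.10.10.0/30 subnet are just example choices (Windows would do the equivalent through the adapter's IPv4 settings):

Code:
# On the NAS: assign one end of a point-to-point /30 to the 5GbE port
# (enp5s0 is a placeholder interface name).
ip addr add 10.10.10.1/30 dev enp5s0
ip link set enp5s0 up

# On the PC: the other end of the same /30 (enp6s0 is also a placeholder).
ip addr add 10.10.10.2/30 dev enp6s0
ip link set enp6s0 up

# Traffic aimed at 10.10.10.x now takes the direct 5GbE cable;
# everything else still goes out the 1GbE NIC on the normal LAN.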
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
I assume you'll be populating the NAS with SSDs, since 5Gb for HDDs would yield minimal benefit to cost ratio.

This is an odd thing to say, since even a single modern 5400RPM (slow) NAS HDD can deliver more than double the throughput of a standard gigabit (1GbE) connection. And by "odd" I mean completely misguided and misguiding. I can't imagine your logic in relation to a RAID setup.

For the record, I'm also waiting on consumer 5GbE networking gear, and even though there's a price premium, I do think current motherboards with this hardware attached are worth considering. But there is one point where I do agree with the above poster: we still have some waiting to do before a full system is viable...
 

mindless1

Diamond Member
Aug 11, 2001
8,059
1,445
126
^ It's not odd at all to state, since HDD peaks on benchmarks and average utilized data rates across a network to a NAS are usually quite different numbers.

Sure, you could tell me you're editing video directly from your NAS, and then I'd tell you to do it with an SSD in the same system. I guess it depends on the application, and whether we want to daydream about some theoretical scenario or approach it from real-world uses of a home networked NAS, not an enterprise server.

I did not state "no performance gain" but rather "minimal benefit to cost ratio" after the OP wrote "cheapest way", and yes I meant now or in the near future when I wrote "pay the early adopter price" (for the switch(es), not a motherboard integrated network adapter), not a few years from now when the gear costs an order of magnitude less.

Now about that "misguided and misguiding" remark, you're basically conceding the same thing I wrote so it seems you are the one more misguided.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
^ It's not odd at all to state, since HDD peaks on benchmarks and average utilized data rates across a network to a NAS are usually quite different numbers.

Sure you could tell me you're editing video directly from your NAS, and then I'd tell you to do it with an SSD in same system. Guess it depends on the application, whether we want to daydream about some theoretical scenario or approach it from real world uses on a home networked NAS, not an enterprise server.

I did not state "no performance gain" but rather "minimal benefit to cost ratio" after the OP wrote "cheapest way", and yes I meant now or in the near future when I wrote "pay the early adopter price" (for the switch(es), not a motherboard integrated network adapter), not a few years from now when the gear costs an order of magnitude less.

Now about that "misguided and misguiding" remark, you're basically conceding the same thing I wrote so it seems you are the one more misguided.
No, I'm not conceding the same thing you wrote. What I took issue with was this:
I assume you'll be populating the NAS with SSDs, since 5Gb for HDDs would yield minimal benefit to cost ratio.
And this is completely misguided.

I told you that a NAS drive can easily need double the bandwidth supplied by current gigabit networking. I know this because I have a bunch of NAS drives and I use them... sometimes as a NAS. They are also in RAID, so I know about that too, and I have compared local and networked access. I actually know this from experience; it sounds like you are the one confusing theories.

Firstly, if we are talking about the benefit to cost ratio for a NAS, this goes way, WAY down when you waste money on systems which can saturate even a 5Gb/s network many times over (like an SSD RAID would). I'm fully aware of the advantages of SSDs in response times and random writes; I'm also fully aware of the workloads placed on the huge majority of NASes. So if you are somehow trying to shift the argument towards the advantages of SSDs, which in the vast majority of NAS workloads would be negligible compared to an HDD array that could also saturate 5Gb/s, then again you are misguiding readers.

As most NAS geeks would agree, single-parity RAID is basically dead for a reliable NAS with modern multi-TB drives. And other real-world values I've seen are benchmark comparisons of 4, 6, and 8 drive raidz2 pools, showing there's a performance sweet spot at 6 drives. The 6-drive theoretical max is roughly 4x a single drive, i.e. ~800-900MB/s. Obviously there are overheads and, accounting for real-world performance, this seems like a great fit for a 5Gb/s NAS. In fact, for NAS workloads it would be the best possible combination of price/performance and price/capacity. In other words, the "best possible benefit to cost ratio".
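The estimate behind those numbers is just data drives times per-drive sequential throughput; the per-drive figure is the contested assumption, so here is a rough sketch you can plug your own numbers into (150MB/s per drive is only an example value, not a measurement):

Code:
# Rough raidz2 streaming ceiling: (drives - parity) x per-drive sequential MB/s.
# PER_DRIVE_MBS is an assumption; substitute what your drives actually bench at.
DRIVES=6
PARITY=2
PER_DRIVE_MBS=150
echo "$(( (DRIVES - PARITY) * PER_DRIVE_MBS )) MB/s theoretical streaming ceiling"
# 150 MB/s per drive -> 600 MB/s; it takes 200-225 MB/s per drive to reach 800-900 MB/s.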

I just wanted to clear that up before any readers took your advice. But it isn't highly technical, so I'm sorry to those who came here looking for something more juicy. Also not highly technical: searching for 2.5/5GbE switches, which I have been doing as well. The only one I've found is that Netgear (?) PoE one designed for business 802.11ac APs.
 

mindless1

Diamond Member
Aug 11, 2001
8,059
1,445
126
^ You're very misguided.

I wrote minimal benefit to cost ratio. Prove me wrong. Show us a reasonably priced 5GbE switch. If you can't, you're foolish for continuing to argue because again, I did not write "no performance gain".

Now you're writing "can easily need double the bandwidth". NO! An HDD has no "need"; no matter what you build, something is going to be the bottleneck, and you stop reaching for the next performance plateau when it becomes cost prohibitive.

You should try reading for comprehension.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
No, I'm not conceding the same thing you wrote. What I took issue with was this:

And this is completely misguided.

I told you that a NAS drive can easily need double the bandwidth supplied by current gigabit networking. I know this because I have a bunch of NAS drives and I use them... sometimes as a NAS. They are also in RAID, so I know about that too, and I have compared local and networked access. I actually know this from experience; it sounds like you are the one confusing theories.

Firstly, if we are talking about the benefit to cost ratio for a NAS, this goes way, WAY down when you waste money on systems which can saturate even a 5Gb/s network many times over (like an SSD RAID would). I'm fully aware of the advantages of SSDs in response times and random writes; I'm also fully aware of the workloads placed on the huge majority of NASes. So if you are somehow trying to shift the argument towards the advantages of SSDs, which in the vast majority of NAS workloads would be negligible compared to an HDD array that could also saturate 5Gb/s, then again you are misguiding readers.

As most NAS geeks would agree, single-parity RAID is basically dead for a reliable NAS with modern multi-TB drives. And other real-world values I've seen are benchmark comparisons of 4, 6, and 8 drive raidz2 pools, showing there's a performance sweet spot at 6 drives. The 6-drive theoretical max is roughly 4x a single drive, i.e. ~800-900MB/s. Obviously there are overheads and, accounting for real-world performance, this seems like a great fit for a 5Gb/s NAS. In fact, for NAS workloads it would be the best possible combination of price/performance and price/capacity. In other words, the "best possible benefit to cost ratio".

I just wanted to clear that up before any readers took your advice. But it isn't highly technical, so I'm sorry to those who came here looking for something more juicy. Also not highly technical: searching for 2.5/5GbE switches, which I have been doing as well. The only one I've found is that Netgear (?) PoE one designed for business 802.11ac APs.

I'm not sure where you're getting your numbers from, but definitely not reality, as they're way high. There are also multiple threads already on 5GbE/10GbE where this was beaten to death. Most consumer spindles sit around 120MB/s in all but the absolute best scenarios. A GOOD GbE setup can do around ~115MB/s. Most consumer NAS products fail to even saturate a single GbE link. If you want to swing to the enthusiast side, such as a 6 drive raidz2 pool, sure, it can exceed GbE. Even then, a 6 drive Z2 array is more like 450MB/s, not 800-900MB/s. But most enthusiasts serious enough to spend that kind of cash on their storage are also perfectly happy to buy surplus networking gear. There's a wide variety of surplus options that will get you far beyond GbE speeds for a reasonable price. Far less than the price of a new 2.5GbE/5GbE switch.
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,101
126
I'm not sure where you're getting your numbers from, but definitely not reality, as they're way high. There are also multiple threads already on 5GbE/10GbE where this was beaten to death. Most consumer spindles sit around 120MB/s in all but the absolute best scenarios. A GOOD GbE setup can do around ~115MB/s. Most consumer NAS products fail to even saturate a single GbE link. If you want to swing to the enthusiast side, such as a 6 drive raidz2 pool, sure, it can exceed GbE. Even then, a 6 drive Z2 array is more like 450MB/s, not 800-900MB/s. But most enthusiasts serious enough to spend that kind of cash on their storage are also perfectly happy to buy surplus networking gear. There's a wide variety of surplus options that will get you far beyond GbE speeds for a reasonable price. Far less than the price of a new 2.5GbE/5GbE switch.

Yep. I probably won't buy any 2.5GbE/5GbE gear for the next 5 years, or even longer, because I just don't have the need for home usage.

Earlier I was just pointing out the fact that there is no 2.5G/5G switch on the market for home users yet.

Early adopters will always be business users and it will be expensive.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
I'm not sure where you're getting your numbers from, but definitely not reality, as they're way high. There are also multiple threads already on 5GbE/10GbE where this was beaten to death. Most consumer spindles sit around 120MB/s in all but the absolute best scenarios. A GOOD GbE setup can do around ~115MB/s. Most consumer NAS products fail to even saturate a single GbE link. If you want to swing to the enthusiast side, such as a 6 drive raidz2 pool, sure, it can exceed GbE. Even then, a 6 drive Z2 array is more like 450MB/s, not 800-900MB/s. But most enthusiasts serious enough to spend that kind of cash on their storage are also perfectly happy to buy surplus networking gear. There's a wide variety of surplus options that will get you far beyond GbE speeds for a reasonable price. Far less than the price of a new 2.5GbE/5GbE switch.

Yeah, due to certain constraints my current NAS is running a 4 drive WD Red raidz2 setup. Local access (from local SSDs) is 240-300MB/s (120-150MB/s per drive), and since raidz2 scales nicely, it will be around double that with 2 more drives (6 total). This is almost a perfect speed match for NAS workloads with a 6 drive dual-parity raidz2. More speed would be almost useless, and the lower latency of SSDs would be equally useless in the vast majority of NAS workloads, despite what some muppets claim.

Your comparisons to consumer prebuilt NASes are almost meaningless, but I'm willing to accept that real-world read/write values can be as low as half the maximum. This is what I've noticed, and it falls in line with my previous comments.

Think about it... So when you said:
Even then, a 6 drive Z2 array is more like 450MB/s, not 800MB/s-900MB/s.
Whilst your values are obviously lower than the theoretical maximum I stated (and lower than what I consider average performance), they also obviously back up my claim:
there's overheads and accounting for real world performance this seems like a great fit for a 5Gb/s NAS.

The 5Gb/s theoretical max is almost 600MB/s, but reality will be closer to 500. So why are you pretending to offer a counter-point?
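Spelling that conversion out (the ~94% figure for Ethernet/IP/TCP framing overhead is a rough assumption; SMB/NFS and filesystem overheads take a further bite, which is where "closer to 500" comes from):

Code:
# 5 Gb/s expressed in MB/s, before and after protocol framing overhead.
echo "raw: $(( 5000 / 8 )) MB/s"                            # 625 MB/s on the wire
awk 'BEGIN { printf "payload: ~%.0f MB/s\n", 5000/8*0.94 }' # ~588 MB/s; 0.94 is an assumed TCP/IP efficiency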
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Whilst your values are obviously lower than theoretical maximum I stated (and lower than I consider average performance), they also obviously back up my claim

The numbers you just stated for YOUR system don't get remotely close to the numbers you originally claimed, and even granting that adding two drives doubles your speed, that still puts you WAY under the 800-900MB/s you originally claimed for a 6 drive RaidZ2 (which would require 200-225MB/s per drive). In fact, that puts you closer to my numbers than your original claims. Ignoring my experience, I can't find anyone getting close to the numbers you're claiming when running valid benchmarks on a RaidZ2 setup, even using the newer high capacity WD Reds, which are among the fastest of the consumer spindles. The fastest benchmarks I've seen of the newer large WD Reds put them at 180MB/s best case. I see multiple benchmarks of the older Reds, including here on AT, that put them at 120-140MB/s. The large bulk of consumer drives are in that same range. Those are all single-drive numbers on large file transfers, meaning best case scenario. You're not going to get peak per-drive performance in a RaidZ2 array. Therefore the AVERAGE RaidZ2 setup isn't coming close to your claimed numbers without other assistance (i.e. caching), so I'm not sure what your "average" numbers are based on.

A quick Google search of people running 6 drive RaidZ2 arrays turns up 430-480MB/s as the pretty consistent result, even with WD Reds. Keep in mind, RaidZ2 is hardly your best option (even within the ZFS world) if you're that concerned about performance, and it's REALLY easy to get artificially inflated numbers on ZFS rigs. For example:

Code:
write 20.48 GB via dd, please wait...
time dd if=/dev/zero of=/tank/dd.tst bs=2048000 count=10000

10000+0 records in
10000+0 records out

real        6.5
user        0.0
sys         6.4

20.48 GB in 6.5s = 3150.77 MB/s Write

wait 40 s
read 20.48 GB via dd, please wait...
time dd if=/tank/dd.tst of=/dev/null bs=2048000

10000+0 records in
10000+0 records out

real        3.8
user        0.0
sys         3.8

20.48 GB in 3.8s = 5389.47 MB/s Read

That's off one of the two ZFS boxes I'm running right now at my house. If you've got some actual data showing a 6 drive RaidZ2 benching at 800-900MB/s without "cheating", by all means feel free to post it. So far your numbers look and sound like pure speculation from somebody who just jumped on the ZFS bandwagon. Not to mention these are direct-to-disk benchmarks and do not reflect the performance you'll be getting on network transfers.
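For anyone who wants to repeat that kind of dd test without the usual inflation (zeros being collapsed by compression on the write, and the read being served out of ARC), one approach is a scratch dataset with compression disabled and a test file well past system RAM. A sketch, assuming a pool named tank and roughly 16GB of RAM; adjust the count to at least 2-3x your own RAM:

Code:
# Scratch dataset with compression off, so blocks from /dev/zero are actually written to disk.
zfs create -o compression=off tank/bench

# Write ~65GB sequentially (well past 16GB of RAM) and time it.
time dd if=/dev/zero of=/tank/bench/dd.tst bs=2048000 count=32000

# Read it back; with the file far larger than RAM, most of it cannot come from the ARC.
time dd if=/tank/bench/dd.tst of=/dev/null bs=2048000

# Clean up the scratch dataset when done.
zfs destroy tank/bench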

5Gb/s theoretical max being almost 600MB/s, but reality will be closer to 500. So why are you pretending to offer a counter-point?

Because you completely ignored the rest of my post. The number of people with network storage systems in their home that can saturate GbE is extremely small, and most of them (like myself) have enough money into it that spending $300-$500 for high speed connectivity using widely available surplus equipment is a non-issue. Regardless of whether you use your numbers or my numbers for a RaidZ2 array, that does not reflect an average consumer setup. If you're running a "normal" 1 PC to 1 NAS setup, you can do 40Gb/s InfiniBand for like $150. Have more systems than that? A 3 PC 40Gb/s InfiniBand setup will run you $500-ish. Need longer distances but don't want to break the bank? 4Gb/s FC setups can be done for under $150. I wouldn't hold my breath on being able to build an 802.3bz setup for less than that in the next 5 years, because there simply isn't enough demand for it.

I'm all for faster speeds but the number of people who can make use of faster than gigabit network speeds in the home is extremely small. You used the term "NAS Geeks". Most "NAS geeks" already have faster than 1GbE network speeds and aren't running RaidZ2 any more.
 

13Gigatons

Diamond Member
Apr 19, 2005
7,461
500
126
Can't believe this thread...some points to be made.

1. 5Gb/s Ethernet will be widespread and included on all future motherboards and laptops. Just like 1Gb/s is today.
2. Slow HDDs are going to be replaced sooner or later. (I already have an all-SSD server.)
3. Most people don't want to buy surplus equipment and mess with things that aren't standard for Home or small networks. I don't want to buy a $1200 switch.
4. You don't need it but you want it. I want faster network equipment available. Just like I want AMD Zen or GTX1080.
5. Fiber Internet is pushing the need for multi-gig equipment and only costs $70 a month. My line is provisioned at 1.4Gb/s but my equipment is limited to 800Mb/s.
6. You can't have Wifi without wires at some point in your home or office.

1Gb/s Ethernet had a good run, it's time to replace it. Done.
 

Red Squirrel

No Lifer
May 24, 2003
67,397
12,142
126
www.anyf.ca
I will definitely welcome something faster than 1Gbps, but I have to admit it's more than fast enough for my needs, even with my mass storage server and me moving big files around. Once it hits an affordable price I may upgrade just because, why not. I'll probably wait for 10Gbps gear to become affordable, use it for the storage layer only, and stick to gigabit for the network layer. I use Cat 6 throughout the house, so it is 10-gig compatible. For the network layer 1Gbps is more than enough, and I have a 24 port gigabit switch that I probably won't want to replace any time soon, as it will take a while for higher-density 5 or 10Gbps switches to become common and cheap.
 

Rifter

Lifer
Oct 9, 1999
11,522
751
126
Until SSD prices start to approach spinner prices, I don't see how this even matters. Less than 1% of home users can even saturate a GbE link anyway. It won't be until everyone has all-SSD storage in their NASes that this will be an issue, and SSD prices have been going up, not down, lately. So maybe in 5-10 years upgrading to 10GbE will be viable; talking about this now is kinda pointless.