Question: Are we ready for 10 GbE networks? I just want to chitchat


iamgenius

Senior member
Jun 6, 2008
Hi networking gurus.

I'm a guy who just hates slowness :cool:

Whenever there is a way to speed things up, I'll just do it. I have been toying with the idea of converting my 1 Gb/s LAN into a 10 Gb/s LAN. Of course, I will need to change my main router and the NICs in my machines. I save all my data to a NAS device, and this NAS backs itself up to another, similar NAS on the same network. Not the very best solution, but it works. I also do a manual backup to a local drive and move files between my machines. I will of course be limited by the maximum transfer rate of my mechanical HDDs and of the SATA interface, but my NVMe SSDs configured in RAID 0 can still make use of the 10 Gbps connection. It is very nice seeing huge files get transferred immediately. Are 10 GbE routers/switches readily available? Are there NAS devices with 10 Gbps network connections? When I search, I still don't see them as common. Maybe we are still not ready? But 10 GbE has been around for some time now.
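Quick arithmetic on those ceilings (nominal numbers of my own choosing; real-world SMB throughput lands a bit lower):

Code:
link_10gbe = 10e9 / 8 / 1e6   # 10 Gbps expressed in MB/s -> 1250 MB/s
sata_ssd   = 550              # MB/s, practical SATA III ceiling
hdd        = 200              # MB/s, a fast mechanical drive
nvme_raid0 = 2 * 3000         # MB/s, two hypothetical Gen3 NVMe drives striped

print(f"10GbE link:  {link_10gbe:.0f} MB/s")
print(f"HDD/SATA:    {hdd}-{sata_ssd} MB/s (these stay the bottleneck)")
print(f"NVMe RAID 0: {nvme_raid0} MB/s (can saturate the link)")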

I won't even mind building a NAS device based on SSDs, heheh!

Let's chitchat
 

Viper GTS

Lifer
Oct 13, 1999
iamgenius said:
Sorry for the many questions, but I think I changed my mind a bit. Yes, my NAS speed is faster than 1 Gbps, but it is not much faster, and since I mainly want the transfer rate to be fast between my main machine and my main NAS device, I think the wisest option is to just add an additional NIC to my main machine and set up link aggregation between the PC and the NAS. I will need a switch that supports link aggregation; I can buy a used one for that. Now that I think about it, SSD caching may not benefit me much. I will only be able to increase my NAS speed significantly if I invest heavily in SSDs.

Right? What do you think? Is it easy to set up? Is it possible to run a Cat 6 cable directly between the PC and the NAS, or should both ports on the PC and the NAS go to the switch (4 ports total) and then I bond the two links?

This almost certainly won't do what you want. Link aggregation buys you total throughput, but not on a single traffic flow: the LAG hashes each flow onto one member link, so a single file transfer still tops out at 1 Gbps. If your goal is to increase performance to a single client, you need to go to multi/ten-gigabit, OR have a unique workflow that you could split, directing traffic across adapters that are NOT aggregated.
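Roughly why: the switch picks one member link per flow with a deterministic hash. A toy sketch of the idea (real switches hash MAC/IP/port fields in hardware; this is only an illustration):

Code:
import hashlib

def pick_link(src_ip: str, dst_ip: str, num_links: int = 2) -> int:
    """Deterministically map one traffic flow to one member link of the LAG."""
    key = f"{src_ip}->{dst_ip}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % num_links

# One PC talking to one NAS: every packet of the transfer hashes to the
# same 1 Gb link, no matter how many links are in the bundle.
print(pick_link("192.168.1.10", "192.168.1.20"))  # some link
print(pick_link("192.168.1.10", "192.168.1.20"))  # the same link, always
# A second client may land on the other link: aggregate throughput scales
# across flows, never within one flow.
print(pick_link("192.168.1.11", "192.168.1.20"))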

Viper GTS
 

iamgenius

Senior member
Jun 6, 2008
Hmmmmm. I get it. So unless you are doing multiple file transfers between the NAS and several clients, it won't help. I'm glad I asked. Thank you very much. It is like you said: I want to increase performance to my main machine, which generates most of my data files. I will go back to the 10 Gbps solution then. Since I can't find a card that has an SFP+ port along with support for two M.2 SSDs, I think I'll just create a direct Cat 6/Cat 7 connection between my NAS and my main machine. I will use the E10M20-T1 adapter card for my NAS, and something like this:


as a 2nd NIC for my main machine. Then I will just have to somehow configure the direct connection. Any problem with this setup? If not, I will just go ahead and pull the trigger.

Edit: That will also cost me the price of two M.2 SSDs. Do you recommend any for caching?
 

Viper GTS

Lifer
Oct 13, 1999
Yes, your understanding is correct - if you were talking to multiple devices (in either direction) then link aggregation might be of benefit to you. For writing a single file from a single workstation to a single network share you won't see anything over 1 gigabit.

Those two cards should be fine, they're both multi-gig devices. Whether your workflows would benefit from caching is something you'd have to determine. My general view is that this stuff is fairly cheap in the grand scheme of things so there's little risk in trying it and seeing if you like it/if it benefits you. If this is a stretch for you financially then you'll definitely want a better understanding of your IO needs. Cache can help dramatically in some situations, and not at all in others. From the brief description you have provided so far I would guess it won't be of significant benefit.

Viper GTS
 

iamgenius

Senior member
Jun 6, 2008
Viper GTS said:
Yes, your understanding is correct - if you were talking to multiple devices (in either direction) then link aggregation might be of benefit to you. For writing a single file from a single workstation to a single network share you won't see anything over 1 gigabit.

Those two cards should be fine, they're both multi-gig devices. Whether your workflows would benefit from caching is something you'd have to determine. My general view is that this stuff is fairly cheap in the grand scheme of things so there's little risk in trying it and seeing if you like it/if it benefits you. If this is a stretch for you financially then you'll definitely want a better understanding of your IO needs. Cache can help dramatically in some situations, and not at all in others. From the brief description you have provided so far I would guess it won't be of significant benefit.

Viper GTS

I frankly haven't read exactly how SSD caching works, but I think it is similar to a CPU cache. If you access a frequently opened file, it will help: frequently accessed/read/written files reside on the SSD for faster transfer rates. If, however, I suddenly open a file that I haven't used in a long time (or haven't touched since enabling SSD caching), there will be no benefit at all.
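That is roughly the right mental model: a read cache only pays off on re-reads. In miniature (a toy LRU sketch, not Synology's actual caching algorithm):

Code:
from collections import OrderedDict

class TinyCache:
    """Toy LRU read cache: hits are 'SSD-fast', misses fall back to the HDD."""
    def __init__(self, size: int):
        self.size, self.store = size, OrderedDict()
    def read(self, name: str) -> str:
        if name in self.store:
            self.store.move_to_end(name)           # refresh recency on a hit
            return f"{name}: hit (SSD speed)"
        self.store[name] = True
        if len(self.store) > self.size:
            self.store.popitem(last=False)         # evict least recently used
        return f"{name}: miss (HDD speed)"

c = TinyCache(size=2)
print(c.read("report.xlsx"))    # miss -- first touch comes from the HDD
print(c.read("report.xlsx"))    # hit  -- now it's in the SSD cache
print(c.read("old-photo.jpg"))  # miss -- a cold file sees no benefit at all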

I don't think I will benefit from this much.
 

mv2devnull

Golden Member
Apr 13, 2010
That MD3 series was news to me. An external (DAS) box with a RAID controller (and cache). I knew the MD1-series disk shelves and "proper" NAS/SAN.

SAN solutions can be "tiered". The SSD in them is not a cache. Rather, the SAN box moves data between tiers (fast SSD and slow HDD) in order to keep the frequently accessed files on the SSD. Not quite a consumer's home option.

iamgenius said:
as a 2nd NIC for my main machine. Then I will just have to somehow configure the direct connection.

With two NICs you should set up two distinct subnets.

Let's assume a typical home setup:
* The router acts as DHCP server, has address 192.168.1.1, and hands out addresses from subnet 192.168.1.0/24
* The PC and NAS use DHCP to get their network config
* The gateway (aka default route) is 192.168.1.1
* You might have set "static addresses" in the router, so that the PC and NAS always get the same address

Now you add a second NIC to the PC. It is not connected to any DHCP server.
You set a manual config. For example: subnet 192.168.8.0/24 (prefix 24 is equivalent to netmask 255.255.255.0), address 192.168.8.1, no gateway.

If the NAS has two network cards, then the first remains as is and the second has to be configured manually:
address 192.168.8.2, netmask 255.255.255.0, no gateway
Then the PC should be able to connect to 192.168.8.2.
The NAS probably broadcasts its existence to both networks, so the clickety-clap GUI crap probably shows it twice on the PC, once per subnet.

If the NAS has only one network card, then it has to be reconfigured.
It might be possible to run a DHCP server on the PC (for the 192.168.8.0/24 subnet) so that the NAS can still use DHCP.
If the NAS needs to connect to anything other than the PC, then the PC (192.168.8.1) must act as gateway, i.e. the PC must route between the subnets. What is the MS term for routing? "Internet Connection Sharing"?
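As a quick sanity check of those example numbers (a minimal sketch using Python's standard ipaddress module; the addresses are the ones above):

Code:
import ipaddress

link = ipaddress.ip_network("192.168.8.0/24")  # the new point-to-point subnet
pc   = ipaddress.ip_address("192.168.8.1")     # PC's second NIC, no gateway
nas  = ipaddress.ip_address("192.168.8.2")     # NAS 10 GbE port, no gateway

# Both static addresses must land in the same subnet; otherwise the PC
# will try to reach the NAS through its default gateway (the 1 Gb path).
assert pc in link and nas in link
print(link.netmask)  # 255.255.255.0, i.e. what the /24 prefix means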
 

iamgenius

Senior member
Jun 6, 2008
803
80
91
mv2devnull said:
With two NICs you should set up two distinct subnets.

Let's assume a typical home setup:
* The router acts as DHCP server, has address 192.168.1.1, and hands out addresses from subnet 192.168.1.0/24
* The PC and NAS use DHCP to get their network config
* The gateway (aka default route) is 192.168.1.1
* You might have set "static addresses" in the router, so that the PC and NAS always get the same address

Now you add a second NIC to the PC. It is not connected to any DHCP server.
You set a manual config. For example: subnet 192.168.8.0/24 (prefix 24 is equivalent to netmask 255.255.255.0), address 192.168.8.1, no gateway.

If the NAS has two network cards, then the first remains as is and the second has to be configured manually:
address 192.168.8.2, netmask 255.255.255.0, no gateway
Then the PC should be able to connect to 192.168.8.2.
The NAS probably broadcasts its existence to both networks, so the clickety-clap GUI crap probably shows it twice on the PC, once per subnet.

If the NAS has only one network card, then it has to be reconfigured.
It might be possible to run a DHCP server on the PC (for the 192.168.8.0/24 subnet) so that the NAS can still use DHCP.
If the NAS needs to connect to anything other than the PC, then the PC (192.168.8.1) must act as gateway, i.e. the PC must route between the subnets. What is the MS term for routing? "Internet Connection Sharing"?

This is exactly what I wanted to see. I wanted somebody to say it. My NAS actually has 4 NICs, but that doesn't matter, as I will only use one of them. The second one will be the E10M20-T1, which is a 10 Gbps card with space for two M.2 SSDs. This is the card that needs to be configured manually. I understand what you said. Your explanation is good. Should be easy to do after I get the cards. Wish me good luck.

Thanks a million, mv2devnull
 

iamgenius

Senior member
Jun 6, 2008
803
80
91
Just so that you guys know, I found this NAS performance tester:


I think it works well. I did ask in the other forum, but I ended up finding it myself. It detects NAS share drives automatically and tests with various file sizes. Try it out and let me know what you think. I like it.
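If anyone would rather roll their own, this is roughly all these testers do: write a big file to the share and time it, then read it back (a minimal sketch; the UNC path is a placeholder for your own share, and the read pass can be flattered by client-side caching):

Code:
import os, time

SHARE = r"\\nas\backup"          # placeholder: your mapped share or mount point
SIZE  = 1024**3                  # 1 GiB test file
path  = os.path.join(SHARE, "speedtest.bin")
chunk = os.urandom(4 * 1024**2)  # 4 MiB of incompressible data

t0 = time.time()
with open(path, "wb") as f:
    for _ in range(SIZE // len(chunk)):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())         # make sure it actually went over the wire
print(f"write: {SIZE / (time.time() - t0) / 1e6:.0f} MB/s")

t0 = time.time()
with open(path, "rb") as f:
    while f.read(len(chunk)):
        pass
print(f"read:  {SIZE / (time.time() - t0) / 1e6:.0f} MB/s")
os.remove(path)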
 

iamgenius

Senior member
Jun 6, 2008
803
80
91
I'm pulling the trigger:



I'm only worried that the adapter card may not be compatible with all SSDs. The Synology page only mentions Synology drives as compatible; however, this page:


explains that it may or may not work.
 

Viper GTS

Lifer
Oct 13, 1999
iamgenius said:
I'm pulling the trigger:


I'm only worried that the adapter card may not be compatible with all SSDs. The Synology page only mentions Synology drives as compatible; however, this page:

explains that it may or may not work.

Synology wants to sell you their drives. I wouldn't worry about it; this stuff is very standardized. It will almost certainly just work.

Viper GTS
 

Fallen Kell

Diamond Member
Oct 9, 1999
If you want to really pull the trigger on faster home networking, I would highly recommend reading up on the Brocade ICX 6610 or 7250. Personally, I have a 6610.

My main server is connected to the 6610 via 40GbE using a QSFP+ DAC cable (it runs XCP-ng to host multiple virtual machines, one of which is a FreeNAS host with direct hardware pass-through of my SAS controllers). I also have a pfSense router set up as a router-on-a-stick, connected to the 6610 via 40GbE, as my edge router. The 6610 (which is a fully managed layer 3 switch/router) performs all routing between my internal VLANs except for the WAN VLAN and a transit VLAN (basically a VLAN on my LAN side that any of my VLANs needing internet access can reach), such that the pfSense system routes between the WAN and transit VLANs. The 6610 can then handle everything else at full wire speed, since it has hardware ASICs to manage the rest.

My cable modem connects to an SFP+ port with an RJ45 transceiver at 2.5 Gbps, with that port assigned to the WAN VLAN. I have ACL rules to prevent management connections to the switch from the WAN VLAN, my guest VLAN, and my Internet-of-Things VLAN. If need be, the RJ45 SFP+ transceiver supports up to 10GbE and could support a new cable modem at that speed in the future (if one is ever made).

I have 10GbE cards in my HTPC and my gaming computers, and they also connect over RJ45 SFP+ transceivers.

My WiFi access point has an SFP+ port for connecting at 10GbE and is connected to the 6610 with a DAC cable.

It all works very well and was not expensive at all. I picked up the 6610 for just under $200 shipped (refurbished, but with a 1-year warranty). My particular model supports 2x 40GbE (QSFP+), 16x 10GbE (SFP+), and 24x 1GbE RJ45 ports, and it has redundant fans and power supplies. The 40GbE cards are dirt cheap ($29 shipped), while the RJ45 Intel 10GbE cards I put in the HTPC and gaming system were around $150 each (there are TONS of counterfeit Chinese Intel 10GbE cards on the market, so you really need to watch where you buy from).

I would really read up on the Brocade ICX switches and see whether one makes sense if you need 10GbE to more than two systems. One word of warning: the 6610 is loud. This isn't some consumer switch but a full enterprise-class switch that needs to be programmed/configured. It runs FastIron OS, whose syntax is very similar to Cisco's, so if you are familiar with configuring a Cisco switch, this is easy. It also has web-based configuration, which makes it a lot friendlier for people without previous experience of enterprise switches, but there are still some things that can really only be done via the command line.
 

iamgenius

Senior member
Jun 6, 2008
803
80
91
Fallen Kell said:
If you want to really pull the trigger on faster home networking, I would highly recommend reading up on the Brocade ICX 6610 or 7250. Personally, I have a 6610.

My main server is connected to the 6610 via 40GbE using a QSFP+ DAC cable (it runs XCP-ng to host multiple virtual machines, one of which is a FreeNAS host with direct hardware pass-through of my SAS controllers). I also have a pfSense router set up as a router-on-a-stick, connected to the 6610 via 40GbE, as my edge router. The 6610 (which is a fully managed layer 3 switch/router) performs all routing between my internal VLANs except for the WAN VLAN and a transit VLAN (basically a VLAN on my LAN side that any of my VLANs needing internet access can reach), such that the pfSense system routes between the WAN and transit VLANs. The 6610 can then handle everything else at full wire speed, since it has hardware ASICs to manage the rest.

My cable modem connects to an SFP+ port with an RJ45 transceiver at 2.5 Gbps, with that port assigned to the WAN VLAN. I have ACL rules to prevent management connections to the switch from the WAN VLAN, my guest VLAN, and my Internet-of-Things VLAN. If need be, the RJ45 SFP+ transceiver supports up to 10GbE and could support a new cable modem at that speed in the future (if one is ever made).

I have 10GbE cards in my HTPC and my gaming computers, and they also connect over RJ45 SFP+ transceivers.

My WiFi access point has an SFP+ port for connecting at 10GbE and is connected to the 6610 with a DAC cable.

It all works very well and was not expensive at all. I picked up the 6610 for just under $200 shipped (refurbished, but with a 1-year warranty). My particular model supports 2x 40GbE (QSFP+), 16x 10GbE (SFP+), and 24x 1GbE RJ45 ports, and it has redundant fans and power supplies. The 40GbE cards are dirt cheap ($29 shipped), while the RJ45 Intel 10GbE cards I put in the HTPC and gaming system were around $150 each (there are TONS of counterfeit Chinese Intel 10GbE cards on the market, so you really need to watch where you buy from).

I would really read up on the Brocade ICX switches and see whether one makes sense if you need 10GbE to more than two systems. One word of warning: the 6610 is loud. This isn't some consumer switch but a full enterprise-class switch that needs to be programmed/configured. It runs FastIron OS, whose syntax is very similar to Cisco's, so if you are familiar with configuring a Cisco switch, this is easy. It also has web-based configuration, which makes it a lot friendlier for people without previous experience of enterprise switches, but there are still some things that can really only be done via the command line.

I think that's beyond me now, but I will look into it.
 

SamirD

Golden Member
Jun 12, 2019
iamgenius said:
Do you know of a way to measure NAS performance (transfer rate)? Something like HD Tune? I couldn't find a tool that does it, not in the package manager for my Synology either.

Someone used to make this portable 'nastester' software that I've found to be pretty accurate. You can actually use any benchmark that works on network drives, since a NAS is just that: a network drive.

Never mind -- you found it. :D
 

SamirD

Golden Member
Jun 12, 2019
iamgenius said:
And no, I'm not sacrificing redundancy for speed. RAID 5 is better for me. No RAID 0.

With today's areal densities, and with drives' error specs still nearly the same as they have always been, RAID 5 will offer you little or no protection from total failure when a second drive fails during a rebuild operation. RAID 1 or RAID 0+1 if you want real redundancy.
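To put rough numbers on that warning (my assumptions, not SamirD's: a 5-disk RAID 5 of 8 TB drives, and the common consumer spec of one unrecoverable read error per 10^14 bits):

Code:
# A RAID 5 rebuild must read every bit of every surviving drive,
# so the chance of finishing without hitting a URE drops fast.
URE_RATE  = 1e-14                 # errors per bit read (typical consumer spec)
drive_tb  = 8                     # hypothetical 8 TB members
survivors = 4                     # 5-disk RAID 5 after one failure

bits_to_read = survivors * drive_tb * 1e12 * 8
p_clean = (1 - URE_RATE) ** bits_to_read
print(f"chance of a URE-free rebuild: {p_clean:.1%}")  # ~7.7% with these numbers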
 

SamirD

Golden Member
Jun 12, 2019
iamgenius said:
I think that's beyond me now, but I will look into it.

That's beyond most of us! lol! 40Gb! :eek:

You should be able to set this up pretty easily. Just connect the PC and NAS and assign a static IP to the new 10Gb interface on both ends, in the same subnet, e.g. 192.168.10.2 and 192.168.10.3. Then, when doing file transfers, specify this new IP address as the NAS IP and it will use the new 10Gb connection to the NAS. If you use the 192.168.1.x IP, it will use the 1Gb connection.
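One quick way to confirm a transfer will actually take the 10Gb path (a minimal sketch; 192.168.10.3 is the hypothetical NAS address from the example above):

Code:
import socket

# A UDP "connect" sends no packets; it just asks the OS routing table
# which local interface would be used to reach that destination.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("192.168.10.3", 445))
print(s.getsockname()[0])  # should print 192.168.10.2, the 10Gb NIC
s.close()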
 

mv2devnull

Golden Member
Apr 13, 2010
SamirD said:
With today's areal densities, and with drives' error specs still nearly the same as they have always been, RAID 5 will offer you little or no protection from total failure when a second drive fails during a rebuild operation. RAID 1 or RAID 0+1 if you want real redundancy.

Indeed. A RAID 5 array can lose at most one disk, and the risk of a second disk failing during the rebuild is significant.

RAID 1 (mirror) is for two disks. At most one of them can fail, and the risk of a second failure during rebuild is about the same as with RAID 5.

RAID 0+1 (or 1+0) has at least four disks. The array can tolerate another failure (or even several) during the rebuild of the first failed drive, but only if the second failure is not the mirror of the first.

RAID 6 can survive the failure of any two disks in the array.
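A toy check of that 1+0 point, counting which second failures are fatal with four disks in two mirrored pairs:

Code:
from itertools import permutations

mirror = {"A1": "A2", "A2": "A1", "B1": "B2", "B2": "B1"}
disks = list(mirror)

# A second failure kills the array only if it hits the first disk's mirror.
fatal = sum(1 for first, second in permutations(disks, 2)
            if mirror[first] == second)
total = len(list(permutations(disks, 2)))
print(f"{fatal}/{total} ordered second failures are fatal")  # 4/12, i.e. 1 in 3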
 

ch33zw1z

Lifer
Nov 4, 2004
And as always, RAID is NOT a backup. Regardless of which RAID level you choose, never assume your data is safe. You need a backup plan. My backup plan is at least one USB disk which is "cold", meaning it doesn't just sit there and run. I plug it in every week or so and run my backup robocopy command. Typically I have at least one more disk with the data stored on it as well. Businesses I work with typically have tape backups, which I'm not doing lol
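For anyone wanting a starting point, a hypothetical version of that weekly run (the drive letters are placeholders; note that /MIR mirrors the tree and deletes destination files that no longer exist in the source, so aim it carefully):

Code:
import subprocess

# robocopy exit codes below 8 mean success, so don't raise on nonzero.
subprocess.run(["robocopy", r"D:\Data", r"F:\ColdBackup",
                "/MIR", "/R:2", "/W:5"], check=False)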

I agree with mv2devnull as well - RAID 6 is more robust than RAID 10 or 0+1, and it is typically coupled with a hot spare or two.
 

iamgenius

Senior member
Jun 6, 2008
You got me worried, guys. I have 5 disks. I think I will add one more and convert my volume to RAID 6. I think that can be done seamlessly, without losing data, right?
 

ch33zw1z

Lifer
Nov 4, 2004
iamgenius said:
You got me worried, guys. I have 5 disks. I think I will add one more and convert my volume to RAID 6. I think that can be done seamlessly, without losing data, right?

Typically that's not seamless and requires a RAID delete / create. However, you may be able to add a hot spare.
 

iamgenius

Senior member
Jun 6, 2008
ch33zw1z said:
Typically that's not seamless and requires a RAID delete / create. However, you may be able to add a hot spare.

Synology says otherwise:


Sounds like it is doable. Can anybody confirm? Anyone who has done it before with Synology? I will still need to back up my data first, though.
 

ch33zw1z

Lifer
Nov 4, 2004
37,759
18,039
146
iamgenius said:
Synology says otherwise:

Sounds like it is doable. Can anybody confirm? Anyone who has done it before with Synology? I will still need to back up my data first, though.

Then have at it. Read the notes tho.
 

Fallen Kell

Diamond Member
Oct 9, 1999
It might be possible to go from RAID 5 to RAID 6, depending on the controller and the number of disks in the array. Typically I would say no, it is not possible, but if they implemented a custom RAID 6 rather than RAID 6 as defined by the standard, sure, I can see the conversion being possible.

RAID 5 per the standard actually has the parity blocks spread out across all the drives (i.e. there isn't simply one disk sitting there holding parity blocks; every disk has both data and parity blocks). However, if the designers of the controller decided to force all parity blocks onto a single drive, you could convert from RAID 5 to RAID 6, as it would simply be a matter of placing a second parity block, covering the data blocks and the first parity block (from the RAID 5), onto the new disk as the RAID 6 parity block. A setup with single parity on a dedicated disk is actually RAID 4. I don't think there is a standard RAID level that has RAID 5 plus a dedicated disk for second parity blocks, but to avoid confusion, if a company did make such a thing, they might call it RAID 6 since it behaves so similarly.
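For anyone curious, the single-parity math being described is plain XOR, which is why any one lost block can be rebuilt from the survivors (a toy illustration, not any vendor's on-disk layout):

Code:
data = [0b1011, 0b0110, 0b1100]       # three "data disks" (toy blocks)

parity = 0
for block in data:
    parity ^= block                   # dedicated parity disk = XOR of all data

lost = data[1]                        # pretend disk 1 dies
rebuilt = parity ^ data[0] ^ data[2]  # XOR parity with the surviving disks
assert rebuilt == lost
print(bin(rebuilt))                   # 0b110 -- block recovered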
 

Fallen Kell

Diamond Member
Oct 9, 1999
SamirD said:
That's beyond most of us! lol! 40Gb! :eek:

I went 40Gb because it was cheaper than going 10. As stated, fiber 10Gb adapters run around $60-70, with RJ45 adapters going for around $100-150. I got dual-port 40Gb QSFP+ adapters (really 56Gb FDR InfiniBand ports, but only when running as InfiniBand; as Ethernet they do 10 or 40GbE) for $29! And the switch I picked up was only another $50 more than the switches that had only 4x 10Gb ports, while mine supports 16x 10Gb connections as well as 2x 40GbE connections. So by spending $50 more on my switch, I net saved $10-30 on network cards (i.e. with the $150 switch I would have had to spend $120-140 on network cards, vs. spending $200 on the switch and $60 on those two network cards: $270-290 vs. $260), and at the same time it upgraded two of my main systems to 40GbE and left me additional 10GbE ports for other devices in the future. In other words, it was a no-brainer upgrade.

The reason the 40GbE stuff is so cheap is that all the big companies are ripping it out right now, replacing their QSFP+ 40GbE infrastructure with QSFP28 100GbE (and faster) gear. All the 40GbE equipment is flooding the used, off-lease, and refurb market, and many people don't know what it is or how to use it, since it uses neither RJ45 nor SFP+ connectors. In some cases you can convert from QSFP+ to SFP+ with breakout cables (one QSFP+ connector on one end and 4x SFP+ on the other, which my switch supports on two of its QSFP+ ports), and on the client NIC side there are single adapters that go from QSFP+ directly to SFP+, but you will spend as much on such an adapter as you would on an SFP+ network card in the first place (so not worth it). If you are simply using QSFP+ DAC cables to connect two QSFP+ ports, though, it is dirt cheap to do.
 