
Question Are we ready for 10 GbE networks? I just want to chitchat


Viper GTS

Lifer
Oct 13, 1999
37,965
301
136
Sorry for the many questions, but I think I changed my mind a bit. Yes, my NAS speed is faster than 1 Gbps, but not by much, and since I mainly want fast transfers between my main machine and my main NAS, I think the wisest option is just to add an additional NIC to my main machine and set up link aggregation between the PC and the NAS. I will need a switch that supports link aggregation; I can buy a used one for that. Now that I think about it, SSD caching may not benefit me much. I will only see a large increase in NAS speed if I invest heavily in SSDs.

Right? What do you think? Is it easy to set up? Is it possible to run a Cat6 cable directly between the PC and the NAS, or should both ports on the PC and the NAS go to the switch (4 ports total), after which I bond the two links?
This almost certainly won't do what you want. Link aggregation buys you total throughput, but not on a single traffic flow. If your goal is to increase performance to a single client, you need to go to multi/ten-gigabit OR have a unique workflow that lets you split and direct traffic across adapters that are NOT aggregated.
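To see why, here is a minimal Python sketch of the kind of layer-2/3 hash policy an aggregated link typically uses to pick an egress port. The MACs and IPs below are made-up examples, and real switches use their own hardware hash, but the principle holds: every packet of a single flow carries the same headers, so it always lands on the same physical link.

```python
# Illustrative model only: a layer2+3 transmit hash picks the egress
# link from packet headers, so every packet of one PC->NAS flow maps
# to the same physical link and is capped at that link's speed.
import hashlib

links = ["eth0", "eth1"]  # two aggregated 1 GbE ports

def pick_link(src_mac: str, dst_mac: str, src_ip: str, dst_ip: str) -> str:
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    index = int.from_bytes(hashlib.md5(key).digest()[:4], "big") % len(links)
    return links[index]

# One big file copy = one flow = one link, no matter how many links exist:
print(pick_link("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02",
                "192.168.1.10", "192.168.1.20"))
# A second client has different headers, so it *may* hash to the other link:
print(pick_link("aa:bb:cc:dd:ee:03", "aa:bb:cc:dd:ee:02",
                "192.168.1.11", "192.168.1.20"))
```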

Viper GTS
 

iamgenius

Senior member
Jun 6, 2008
592
20
81
Hmmmmm. I get it. So unless you are doing multiple file transfers between the NAS and several clients, it won't help. I'm glad I asked. Thank you very much. It is like you said: I want to increase performance to my main machine, which generates most of my data files. I will go back to the 10 Gbps solution then. Since I can't find a card that has an SFP+ port plus support for two M.2 SSDs, I think I'll just create a direct Cat6/Cat7 connection between my NAS and my main machine. I will use the E10M20-T1 adapter card for my NAS, and something like this :


as a 2nd NIC for my main machine. Then, I will just have to somehow configure the direct connection. Any problem with this setup? If not, I will just go ahead and pull the trigger.

Edit: That will also cost me the price of two M.2 SSDs. Do you recommend any for caching?
 

Viper GTS

Lifer
Oct 13, 1999
37,965
301
136
Yes, your understanding is correct - if you were talking to multiple devices (in either direction), then link aggregation might be of benefit to you. For writing a single file from a single workstation to a single network share, you won't see anything over 1 gigabit.
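As a back-of-the-envelope illustration of that ceiling, here is a quick calculation; the 94% efficiency figure is a rough assumption for Ethernet/IP/TCP framing overhead at the standard 1500-byte MTU:

```python
# What "1 gigabit" means for a file copy, roughly.
line_rate_bps = 1_000_000_000        # 1 Gb/s link speed
raw_bytes_per_s = line_rate_bps / 8  # 125 MB/s theoretical maximum
protocol_efficiency = 0.94           # assumed Ethernet+IP+TCP overhead, 1500 MTU
print(f"~{raw_bytes_per_s * protocol_efficiency / 1e6:.0f} MB/s best case")
# -> ~118 MB/s, the familiar ceiling for gigabit SMB copies
```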

Those two cards should be fine; they're both multi-gig devices. Whether your workflows would benefit from caching is something you'd have to determine. My general view is that this stuff is fairly cheap in the grand scheme of things, so there's little risk in trying it and seeing if you like it/if it benefits you. If this is a stretch for you financially, then you'll definitely want a better understanding of your IO needs. Cache can help dramatically in some situations, and not at all in others. From the brief description you have provided so far, I would guess it won't be of significant benefit.

Viper GTS
 

iamgenius

Senior member
Jun 6, 2008
592
20
81
I frankly haven't read up on how SSD caching exactly works, but I think it is similar to a CPU cache. If you access a frequently opened file, it will help: frequently read/written files will reside on the SSD for faster transfer rates. If, however, I suddenly open a file that I haven't used for a long time (or haven't touched at all since enabling SSD caching), then there will be no benefit.
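That intuition is roughly right. As a toy illustration, here is a minimal Python sketch of a least-recently-used (LRU) cache; real NAS caches are more sophisticated, and the filenames are invented, but the hit/miss behaviour is the kind described above:

```python
# Toy model of read caching with simple LRU eviction.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def read(self, filename: str) -> str:
        if filename in self.store:
            self.store.move_to_end(filename)   # refresh recency on a hit
            return "hit: fast SSD read"
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)     # evict least recently used
        self.store[filename] = True            # first read fills the cache
        return "miss: slow HDD read"

cache = LRUCache(capacity=2)
print(cache.read("project.psd"))   # miss: never seen before
print(cache.read("project.psd"))   # hit: now cached
print(cache.read("a.mov"))         # miss
print(cache.read("b.mov"))         # miss, evicts project.psd
print(cache.read("project.psd"))   # miss again: it was evicted
```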

I don't think I will benefit from this much.
 

mv2devnull

Golden Member
Apr 13, 2010
1,334
55
91
That MD3 series was news to me - an external (DAS) box with a RAID controller (and cache). I knew of the MD1-series disk shelves and "proper" NAS/SAN.

SAN solutions can be "tiered". The SSD in them is not a cache; rather, the SAN box moves data between tiers (fast SSD and slow HDD) in order to keep the frequently accessed files on the SSD. Not quite a consumer's home option.
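The difference from caching can be shown with a toy Python model: in tiering, data is moved rather than copied, so each file lives in exactly one tier at a time (the filenames and promotion threshold below are invented for illustration):

```python
# Toy tiering model: unlike a cache, data is *moved* between tiers,
# so each file exists in exactly one place at a time.
ssd_tier, hdd_tier = set(), {"old_movie.mkv", "report.docx"}
access_counts = {"report.docx": 12, "old_movie.mkv": 1}

PROMOTE_THRESHOLD = 5  # invented cutoff for "frequently accessed"
for name in list(hdd_tier):
    if access_counts.get(name, 0) >= PROMOTE_THRESHOLD:
        hdd_tier.remove(name)   # moved, not copied
        ssd_tier.add(name)

print(ssd_tier)  # {'report.docx'} now served from the fast tier
print(hdd_tier)  # {'old_movie.mkv'} stays on the slow disks
```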

as a 2nd NIC for my main machine. Then, I will just have to somehow configure the direct connection.
With two NICs you should set up two distinct subnets.

Let's assume a typical home setup:
* The router acts as DHCP server, has address 192.168.1.1, and hands out addresses from subnet 192.168.1.0/24
* The PC and NAS use DHCP to get their network config
* The gateway (aka default route) is 192.168.1.1
* You might have set "static addresses" in the router, so that the PC and NAS always get the same address

Now you add the second NIC to the PC. It is not connected to any DHCP server, so you set a manual config. For example: subnet 192.168.8.0/24 (prefix 24 is equivalent to netmask 255.255.255.0), address 192.168.8.1, no gateway.

If the NAS has two network cards, then the first remains as is and the second has to be configured manually:
address 192.168.8.2, netmask 255.255.255.0, no gateway
Then PC should be able to connect to 192.168.8.2.
The NAS probably broadcasts its existence to both networks, so the clickety-clap GUI tools on the PC will probably show it twice, once per subnet.

If the NAS has only one network card, then it has to be reconfigured.
It might be possible to run a DHCP server on the PC (for the 192.168.8.0/24 subnet) so that the NAS can still use DHCP.
If the NAS needs to connect to anything other than the PC, then the PC (192.168.8.1) must act as a gateway, i.e. the PC must route between the subnets. The MS term for that is "Internet Connection Sharing", I believe.
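For anyone who wants to sanity-check a plan like this, here is a quick sketch using Python's standard ipaddress module with the example addresses above:

```python
# Verify the dual-subnet plan: both ends of the direct link sit in
# the same /24, which does not overlap the router's LAN subnet.
import ipaddress

lan = ipaddress.ip_network("192.168.1.0/24")     # router/DHCP subnet
direct = ipaddress.ip_network("192.168.8.0/24")  # PC<->NAS direct link

pc_direct = ipaddress.ip_address("192.168.8.1")
nas_direct = ipaddress.ip_address("192.168.8.2")

assert not lan.overlaps(direct)                  # subnets must be distinct
assert pc_direct in direct and nas_direct in direct
print(direct.netmask)                            # 255.255.255.0, i.e. the /24
# Same subnet on both ends => traffic flows with no gateway needed.
```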
 

iamgenius

Senior member
Jun 6, 2008
592
20
81
This is exactly what I wanted to see; I wanted somebody to say it. My NAS actually has 4 NICs, but that doesn't matter as I will only use one of them. The 2nd one will be the E10M20-T1, which is a 10 Gbps card with space for two M.2 SSDs. This is the card that needs to be configured manually. I understand what you said; your explanation is good. It should be easy to do after I get the cards. Wish me good luck.

Thanks a million, mv2devnull.
 

iamgenius

Senior member
Jun 6, 2008
592
20
81
Just so that you guys know, I found this NAS performance tester:


I think it works well. I did ask in the other forum, but I ended up finding it myself. It detects NAS share drives automatically and tests with various file sizes. Try it out and let me know what you think. I like it.
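For the curious, a rough DIY version of such a tester is only a few lines of Python. This sketch assumes the NAS share is already mounted or mapped locally, and the Z:\benchmark path is a made-up example; writing incompressible data and calling fsync keeps the OS cache from inflating the numbers:

```python
# Crude NAS write-speed test over a mounted/mapped share.
import os, time

SHARE = r"Z:\benchmark"  # hypothetical path - adjust to your mounted share

def write_speed(size_mb: int) -> float:
    data = os.urandom(1024 * 1024)           # 1 MiB of incompressible data
    path = os.path.join(SHARE, f"test_{size_mb}mb.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())                 # force data out of the OS cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed                 # MB/s

for size in (100, 400, 1600):                # try various file sizes
    print(f"{size} MB written at {write_speed(size):.0f} MB/s")
```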
 

iamgenius

Senior member
Jun 6, 2008
592
20
81
I'm pulling the trigger:



I'm only worried that the adapter card may not be compatible with all SSDs. The Synology page only mentions Synology drives as compatible; however, this page:


explains that it may or may not work.
 

Viper GTS

Lifer
Oct 13, 1999
37,965
301
136
Synology wants to sell you their drives. I wouldn't worry about it; this stuff is very standardized. It will almost certainly just work.

Viper GTS
 

Fallen Kell

Diamond Member
Oct 9, 1999
5,530
188
106
If you want to really pull the trigger on a faster home network, I would highly recommend reading up on the Brocade ICX 6610 or 7250. Personally, I have a 6610.

My main server is connected via 40GbE to the 6610 using a QSFP+ DAC cable (it runs XCP-ng to host multiple virtual machines, one of which is a FreeNAS host with direct hardware pass-through of my SAS controllers). I also have a pfSense box set up as a router-on-a-stick, connected via 40GbE to the 6610, as my edge router. I have the 6610 (which is a fully managed layer 3 switch/router) performing all routing between my internal VLANs except for the WAN VLAN and a transit VLAN (basically a VLAN on my LAN side that any of my VLANs needing internet access can reach), such that the pfSense system routes between the WAN and transit VLANs. The 6610 can then handle everything else at full wire speed, since it has hardware ASICs to manage the rest.

My cable modem connects to an SFP+ port with an RJ45 transceiver at 2.5 Gbps, with that port placed on the WAN VLAN. I have ACL rules to prevent management connections to the switch from the WAN VLAN, my guest VLAN, and my Internet-of-Things VLAN. If need be, the RJ45 SFP+ transceiver supports up to 10GbE and could support a new cable modem at that speed (if one is ever made).

I have a 10GbE NIC in my HTPC and my gaming computers, and they also connect over RJ45 SFP+ transceivers.

My wifi access point has an SFP+ port for connecting at 10GbE and is connected to the 6610 with a DAC cable.

It all works very well and was not expensive at all. I picked up the 6610 for just under $200 shipped (refurbished, but with a 1-year warranty). My particular model supports 2x 40GbE (QSFP+), 16x 10GbE (SFP+), and 24x 1GbE RJ45 ports, and it has redundant fans and power supplies. The 40GbE cards are dirt cheap ($29 shipped), while the RJ45 Intel 10GbE cards I put in the HTPC and gaming system were around $150 each (there are TONS of counterfeit Chinese Intel 10GbE cards on the market, so you really need to watch out where you buy from).

I would really read up on the Brocade ICX switches and see if one might make sense if you need 10GbE to more than 2 systems. One word of warning: the 6610 is loud. This isn't some consumer switch but a full enterprise-class switch, which needs to be programmed/configured. It runs FastIron OS, whose syntax is very similar to Cisco's, so if you are familiar with configuring a Cisco switch, this is easy. It also has web-based configuration, which makes it a lot friendlier to people without previous experience of enterprise switches, but there are still some things that can really only be done via the command line.
 

iamgenius

Senior member
Jun 6, 2008
592
20
81
I think that's beyond me now, but I will look into it.
 
