Question Are we ready for 10 GbE networks? I just want to chitchat


iamgenius

Senior member
Jun 6, 2008
826
113
106
Hi networking gurus.

I'm a guy who just hates slowness :cool:

Whenever there is a way to speed things up, I'll just do it. I have been toying with the idea of converting my 1 Gb/s LAN into a 10 Gb/s LAN. Of course, I will need to change my main router and the NICs in my machines. I save all my data to a NAS device, and this NAS backs itself up to another similar NAS on the same network. Not the very best solution, but it works. I also do a manual backup to a local drive and move files between my machines.

I will of course be limited by the maximum transfer rate of my mechanical HDDs and the SATA interface, but my NVMe SSDs configured in RAID 0 can still make use of a 10 Gb/s connection. It is very nice seeing huge files get transferred immediately.

Are 10GbE routers/switches readily available? Are there NAS devices with 10 Gb/s network connections? When I search, I still don't see them as common. Maybe we are still not ready? But 10 GbE has been around for some time now.
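Some napkin math on where 10GbE actually helps (all speeds below are ballpark assumptions, not benchmarks):

```python
# Rough transfer-time math for a big file over 1GbE vs 10GbE.
# All speeds here are ballpark assumptions, not benchmarks.

GIB = 1024**3

def transfer_seconds(file_gib, link_gbps, disk_mbps):
    """Time to move a file when either the link or the disk is the bottleneck."""
    link_bps = link_gbps * 1e9 / 8      # line rate in bytes/s, ignoring protocol overhead
    disk_bps = disk_mbps * 1e6
    return file_gib * GIB / min(link_bps, disk_bps)

for disk, mbps in [("HDD (~200 MB/s)", 200), ("NVMe R0 (~3000 MB/s)", 3000)]:
    t1 = transfer_seconds(50, 1, mbps)
    t10 = transfer_seconds(50, 10, mbps)
    print(f"50 GiB via {disk}: {t1/60:.1f} min on 1GbE -> {t10/60:.1f} min on 10GbE")
```

The spinners barely notice the faster link; the NVMe RAID 0 is where 10GbE actually pays off.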

I wouldn't even mind building a NAS device based on SSDs, heheh!

Let's chitchat
 

sdifox

No Lifer
Sep 30, 2005
100,220
17,894
126
WTF... I thought I was a baller with 4 x 4TB SSDs in R0 just for gaming.
Someone actually put 4 x 8TB Sabrents in a bifurcation card outside an enterprise environment?



Ahh, it was ADAM, yeah...
I lose on every ground against that guy. However, I'm pretty sure he is not running an enthusiast-tier platform, so his GPU is probably running at 8x, which I know is not much different in speed, but my OCD won't allow a $1,700 video card to run at 8x.



Theoretically, SATA 6G = 6 Gb/s, which translates to 750 MB/s of raw line rate; after 8b/10b encoding overhead, the usable payload is more like 600 MB/s.
A single SATA SSD can do about 550 MB/s on a long sequential transfer... meaning not a bunch of 4K random files.
So technically 2 SSDs in R0 should blow past a single port's ceiling, though I think I read somewhere that in the real world you don't see the cap until 3 SSDs in R0.
So if you're after the max out of a SATA I/O port, you'd need a cumulative transfer speed greater than that ~600 MB/s ceiling.
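Putting numbers on that (per-drive speed is a ballpark assumption; the DMI figure is the usual PCIe 3.0 x4 approximation):

```python
# SATA 6G napkin math: 6 Gb/s is the raw signaling rate, and 8b/10b
# coding spends 10 bits on the wire per data byte, so usable payload
# is ~600 MB/s per port. Drive speed below is a ballpark assumption.

raw_gbps = 6.0
usable_per_port = raw_gbps * 1000 / 10     # ~600 MB/s
per_ssd = 550                              # typical SATA SSD, sequential MB/s

print(f"Usable per SATA port: ~{usable_per_port:.0f} MB/s")
for n in (1, 2, 3, 4):
    # In R0 each drive hangs off its own port, so aggregate scales with n
    print(f"{n} x SSD in R0: ~{n * per_ssd} MB/s aggregate")

# The array stops scaling at the chipset uplink, not the ports:
# DMI 3.0 is roughly a PCIe 3.0 x4 link, ~3.9 GB/s.
```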
PCIe 4.0 8x is plenty for video cards.
 

aigomorla

CPU, Cases & Cooling Mod, PC Gaming Mod, Elite Member
Super Moderator
Sep 28, 2005
21,067
3,574
126
Okay, I see it now. Current games are certainly huge. So you guys have a set for your OS and another set as a gaming drive? Is it to reduce loading times? Because supposedly once you load things into your RAM (much faster), the disk shouldn't matter unless you keep loading new stuff from the disk, right?

I'm talking about SATA III because it is what my NAS uses to connect to drives. If I'd been thinking about speed from the beginning, I would have gotten the fastest SATA III drive, which would most likely be an SSD, because I don't think there is a mechanical HDD that can do ~600 MB/s.

We usually have a dedicated game drive because, if things go south, we can reformat Windows and not lose our game library, which can get really large... Yes, with faster internet we could just redownload the library in minutes, but I like to keep things separated.

I am running 4 x 4TB SSDs in R0 for a cumulative 14.4TB of usable space dedicated just to games.
I thought I had the one-up on Adam, but he tied me on raw storage and beat me on speed.
However, again, you don't see much of a load-time difference between an NVMe and an SSD R0 config, so ultimately I'm not losing out on much.

As for SATA II vs SATA 6G (6G being the same thing as SATA III), just divide it in half.
So SATA II would theoretically be ~300 MB/s, which, yes, a single spinner could not do, as I believe they all cap around 220-240 MB/s, but which an R0+1 or RAID 10 config using 6 drives could easily exceed in aggregate.
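Napkin math on the striped spinners (per-drive speed is a ballpark assumption):

```python
# Aggregate sequential throughput for spinners in striped configs.
# RAID 10 on 6 drives = 3 mirrored pairs striped together, so writes
# see roughly 3 drives' worth of bandwidth. Ballpark numbers only.

per_hdd = 230            # MB/s, typical 7200rpm sequential
drives = 6

pairs = drives // 2      # RAID 10 -> 3 stripes of mirrored pairs
print(f"RAID 10 ({drives} drives) write: ~{pairs * per_hdd} MB/s")
print(f"RAID 0  ({drives} drives) read:  ~{drives * per_hdd} MB/s")
```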

PCIe 4.0 8x is plenty for video cards.

If I'm not mistaken, he is on a Z470, which does not have PCIe 4.0.

But yeah, 8x is enough. He has a 3090 like I do, though, and OCD would kill me to let an $1,800 card run anything lower than the full 16x.
OCD is already killing me for not having a TR4 platform running PCIe 4.0 at 16x as it is.
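For what it's worth, the spec math says PCIe 4.0 x8 is the same pipe as 3.0 x16 (per-lane rates below are from the spec; real-world is a bit lower):

```python
# PCIe usable bandwidth per lane, after 128b/130b encoding overhead.
# 3.0 runs at 8 GT/s per lane, 4.0 at 16 GT/s.

per_lane_gbs = {"3.0": 8 * 128 / 130 / 8,     # ~0.985 GB/s per lane
                "4.0": 16 * 128 / 130 / 8}    # ~1.969 GB/s per lane

for gen, gbs in per_lane_gbs.items():
    for lanes in (8, 16):
        print(f"PCIe {gen} x{lanes}: ~{gbs * lanes:.1f} GB/s")

# 4.0 x8 lands at the same ~15.8 GB/s as 3.0 x16, which is why x8 on
# a PCIe 4.0 board costs a big GPU essentially nothing.
```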
 

Aikouka

Lifer
Nov 27, 2001
30,383
912
126
I made the switch to 10Gbps not too long ago. I ended up buying the Ubiquiti US-16-XG, which I connected via a DAC cable to my existing Ubiquiti Dream Machine Pro. My main purpose was to speed up transfers from my main desktop, which handles all of my multimedia, to the machine that stores it (my server). I equipped my server with a StarTech ST10000SPEXI. I really should've just gone with a cheaper SFP+ solution, but it works fine. Fortunately, my main desktop has an ASRock Z370 Gaming Professional i7, which was released back when 10Gbps was the cool middle-ground feature; today, you seem to mostly get 2.5Gbps if you're lucky. It does use an Aquantia PHY, though, and I'm not sure how good it is. I do have an extra Intel-based card that I've been tempted to try.

One problem I did run into is that my server is also running a SAS RAID card, a SAS expander, and a GeForce 1050 Ti. The SAS expander doesn't require data transfer, but it does need to be held in place and provided power. I did manage to find a 3D-printed bracket that could attach the expander to the case, but I couldn't find a good place to put it. My server is running a simple Ryzen setup (an R7-2700), so I upgraded its existing X470 board to an ASUS Pro WS X570-ACE. This was an important swap for two reasons: first, it has three 16x PCI-E slots, and second, its 1x slots are open-ended. That second one is a big deal, because it meant I could put my SAS expander, which has a 4x connector for some reason, into a 1x slot. The first set of pins is what provides the power, which is all the expander needs, so it doesn't matter that it isn't in a 4x or larger slot.

I do technically have two issues that hamper my speed a bit, though. My main desktop mostly uses an HDD as a scratch drive, so files residing on it aren't going to transfer all that fast in the first place (about 200 MB/s tops). Also, my server runs UNRAID, and its cache drive is an SSD, but I'm pretty sure it's a SATA drive. So, at best, I tend to see around 500 MB/s when writing to the server.
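So the effective speed is just the slowest hop in the chain; a quick sketch with my rough numbers:

```python
# End-to-end throughput is just the slowest hop in the chain.
# Numbers below are rough observations/assumptions, not benchmarks.

hops = {
    "source HDD (scratch drive)": 200,   # MB/s
    "10GbE link": 1250,                  # 10 Gb/s / 8, before protocol overhead
    "UNRAID cache SSD (SATA)": 500,      # MB/s
}

bottleneck = min(hops, key=hops.get)
print(f"Effective: ~{hops[bottleneck]} MB/s, limited by the {bottleneck}")
```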
 

iamgenius

Senior member
Jun 6, 2008
826
113
106
It will be fine. Just back up your data, preferably to more than one destination, and give it a shot. Synology says it's available on some models, so maybe yours has the option. I recommend taking notes during the process, grabbing screen snips, timestamps, etc. Often when trying something new we get anxious, so this helps you stay calm and not click things that can make it worse. Also, if there's a CLI and commands to monitor it, I would keep those handy. In my experience, the GUI can get weird or time out during things like this, and you may think it didn't work, but the CLI will say otherwise.

Edit: Also make sure your enclosure is on the latest code, so all the latest bug fixes are installed. That way, if you need to open a ticket with Synology, they don't just brush you off with "are you at the latest code?"

Edit 2: There's also a Synology community here. If nobody in this thread can say "I've done it, it works, here are the details," then maybe the community over there can help you out.

For instance, here's a post that claimed it took so long that the user just canceled the task, blew away the config, created the RAID 6 from scratch, then restored it from backup.

If you're interested in this community helping more, post specs:

- Synology model and code level
- Drive models
- Specific RAID config as reported by the NAS unit

I did it. I switched from SHR to SHR-2 (RAID 6). It took 19 days to finish. So I now have two-drive fault tolerance. No more RAID 5.
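For scale, the back-of-the-envelope on why it takes that long (array size and reshape rate are illustrative guesses, not my exact numbers):

```python
# Why an SHR -> SHR-2 (RAID 6) conversion takes weeks: every stripe is
# read, a second parity block is computed, and everything is rewritten,
# usually throttled so the NAS stays usable. Numbers are illustrative.

array_tb = 40        # data the reshape has to walk (assumption)
reshape_mbps = 30    # throttled reshape rate (assumption)

days = array_tb * 1e12 / (reshape_mbps * 1e6) / 86400
print(f"~{days:.0f} days")   # ~15 days at these rates; 19 is very plausible
```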
 

ch33zw1z

Lifer
Nov 4, 2004
39,749
20,323
146
I did it. I switched from SHR to SHR-2 (RAID 6). It took 19 days to finish. So I now have two-drive fault tolerance. No more RAID 5.

Thanks for the update! 19 days, boy oh boy, you get a badge for patience :p
 

Red Squirrel

No Lifer
May 24, 2003
70,561
13,802
126
www.anyf.ca
I don't need 10G, but at some point I'll probably set it up for inter-server storage at least (NAS to VM), because YOLO. Funny thing is, I bought a quad-port gig NIC years back to do teaming, but the only way to install it is to reboot the NAS, which is not for the faint of heart, as there's a good chance of losing a couple drives in the process, so I've just been holding off for years, lol. Basically, if I'm ever absolutely forced to reboot it due to an extended power outage or something, I have the card on hand to install. But the longer I wait, the less relevant that idea becomes, since 10G gear keeps getting more affordable.

Suppose I can save that card for a firewall build at some point.