10GE upgrade question

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
Not talking about theoretical maxes, 50-75MB/S is pretty normal for GigE file transfers.

I can't agree with you on that. 125MB/s is the theoretical max; I see 115MB/s daily. 50-75MB/s is pretty normal for a small home NAS at best. A quick glance at one of our VNX machines shows eight 1Gb/s paths doing 108MB/s to 117MB/s according to the switch ports.
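For what it's worth, that ~115MB/s figure lines up with what's left of a 1Gb/s link after framing and TCP/IP overhead. A back-of-the-envelope sketch (standard 1500-byte MTU assumed, no jumbo frames):

```python
# Rough GbE goodput after Ethernet + IP + TCP overhead, assuming a
# standard 1500-byte MTU and no jumbo frames.
LINE_RATE_MB_S = 1_000_000_000 / 8 / 1e6   # 125.0 MB/s: the "theory" number

MTU = 1500                      # IP packet size in bytes
IP_TCP_HEADERS = 20 + 20        # IPv4 + TCP headers, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12  # header + FCS + preamble + inter-frame gap

payload = MTU - IP_TCP_HEADERS  # 1460 bytes of file data per frame
wire = MTU + ETH_OVERHEAD       # 1538 bytes on the wire per frame
goodput_mb_s = LINE_RATE_MB_S * payload / wire

print(f"theory: {LINE_RATE_MB_S:.0f} MB/s, goodput: {goodput_mb_s:.1f} MB/s")
```

That gives roughly 118MB/s of usable payload on a saturated GbE link; real transfers land a bit lower once protocol chatter and disk latency enter the picture.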
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I can't agree with you on that. 125MB/s is the theoretical max; I see 115MB/s daily. 50-75MB/s is pretty normal for a small home NAS at best. A quick glance at one of our VNX machines shows eight 1Gb/s paths doing 108MB/s to 117MB/s according to the switch ports.

That's at the switch, what's the OS saying for file transfers across that?

I'm not saying 50-75MB/s is the best you can do, but at least in my experience that's what I see in most small to medium businesses. Admittedly I'm dealing with educational and financial institutions that are incredibly stingy with their money, so YMMV. $50k for a SAN certainly would lead me to believe the environment in question probably falls into that category.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
That's at the switch, what's the OS saying for file transfers across that?

I'm not saying 50-75MB/s is the best you can do, but at least in my experience that's what I see in most small to medium businesses. Admittedly I'm dealing with educational and financial institutions that are incredibly stingy with their money, so YMMV. $50k for a SAN certainly would lead me to believe the environment in question probably falls into that category.

106MB/s copying a vCenter ISO to the file server. ~360MB/s intra-server. Heartbleed patch time provided a good quick test this morning.
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
That's at the switch, what's the OS saying for file transfers across that?

I'm not saying 50-75MB/s is the best you can do, but at least in my experience that's what I see in most small to medium businesses. Admittedly I'm dealing with educational and financial institutions that are incredibly stingy with their money, so YMMV. $50k for a SAN certainly would lead me to believe the environment in question probably falls into that category.

Yeah, I disagree. That might be typical with small files or with really performance-constrained storage systems, like old/small NAS appliances, but it is not typical on anything like real hardware. Crap, my Celeron G1610-based file server can easily push a pair of GbE links with SMB3 to 235MB/sec from a pair of RAID0 2TB 5400rpm drives, and a single link to 117MB/sec, with anything like a large file.

Even a small-file directory copy of 4-15MB files from a single 500GB 7200rpm drive that is 80% utilized runs to the server from my desktop at ~90-100MB/sec, which is pure disk speed there.

I'd think driver issues or something similar is causing problems.

Not particularly related, but I know my laptop has some significant driver issues under Win8.1. It's a Realtek GbE NIC in there, and depending on driver version and settings it likes to run at 25-40MB/sec reads off my server, but it'll push 108MB/sec writes to the server all day long. If I get the settings and driver version right, it'll do 114MB/sec in both directions all day long (well, till my SSD decides it needs trimming).

Now I don't know what kind of reads these are. If they are DB read/writes from spinning disk, that actually might be roughly what you'll be able to push depending on how the disk arrays are configured and what kind of database tables and row sizes you are looking at.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
As stated earlier, PCI-e is absolutely not the issue.

It could be the network driver, it could be firmware on the switch, it could be the file server. We need to approach this as a process of elimination since we aren't versed in your environment. Is it possible to try iperf to another 10G device using the same switch path?

We are saying the same thing here in house. The IT group wanted to give up and say that 50MB/s is the best we can do, but we pushed back.

We have three pieces (maybe four).

1. The File server, 80TB Panasas. This should kick ass, especially since it is typically only used by 1-2 people at a time, mostly just me though.

2. The cheap 10G switch. IT was going to look into its configuration.

3. The PC consuming the data. It is using 10G according to lshw:
82599EB 10-Gigabit SFI/SFP+ Network Connection
speed=10Gbit/s

4. The cables? I would assume the IT guys are not this dumb. All three devices are in the same room, so no distances are involved.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
Yes I know, but the OP is saying he is working with rather high performance storage and networking here. 50MB/sec would not be typical for a GbE transfer in such a scenario.

I'm also dealing with larger files, 50-600MB. The IT guy did a netperf from the PC to another 10GE PC on the same network, he got 5.5Gbps, so at least we have verification that something got into the realm of 10Gb. So with the switch and PC seemingly ruled out, we are on to the Panasas. The netperf on the Panasas is not playing nice with the other versions.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Yes, but with no disrespect to the OP, I don't think he knows what high performance means when it comes to infrastructure and storage. I know people who think their Core 2 Duo is a screaming fast computer. Here's what I base this on:

$50k may sound like a lot for a storage system to the average user, but that's a drop in the bucket in the enterprise sector. I've sold numerous storage setups that were in the seven figures price-wise. The SAN upgrade we just did at work was just under $10M. If you drive a Civic, just about everything seems high performance. If you drive a Ferrari, you have much different standards. When people are talking about a network/storage setup for a business, I assume we are comparing "Ferraris", not "Civics".

Secondly, the only descriptor given about the switch in this setup was "cheap", which would again indicate to me we aren't talking about a high-performance build by business/enterprise standards, as those are usually mutually exclusive goals. Don't get me wrong, depending on the usage/environment cheap may be far more important than fast. There's nothing wrong with going cheap as long as it does what you need it to.

Thirdly, based off everything I see in this thread, it would seem to indicate that they went from a single GigE link to a single 10GigE link. Regardless of what you are using for connectivity, I couldn't tell you the last time I saw a network storage system that was hooked up using a single interface. I haven't had less than a pair of GigE links on my storage at home in like 6 years. Even the environments I deal with running EOL equipment are still usually running 2-4 links. Performance aside, most businesses want some sort of redundancy.

The setup they have may be perfectly suited to what they need (if working properly), possibly even overkill. But my spidey senses are telling me the people involved may not be what you'd call experts on setting up networks or storage systems. It's entirely possible they are just "good with computers" and therefore have become the de facto IT guys. I see that all the time. Hell, even at my current job we have a lot of really smart people who basically have to learn things as they go because we have no expert for the particular item we are working on.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
I'm also dealing with larger files, 50-600MB. The IT guy did a netperf from the PC to another 10GE PC on the same network, he got 5.5Gbps, so at least we have verification that something got into the realm of 10Gb. So with the switch and PC seemingly ruled out, we are on to the Panasas. The netperf on the Panasas is not playing nice with the other versions.

That takes me back to checking the cables going to the SAN. I get that may sound stupid, but start with the simple stuff. I've seen people waste hours on the phone with vendors swapping drivers, firmware, etc., only to find out it was a bad cable. It takes like 30 seconds to swap a cable.

The other thing I do see is they are advertised as being optimized for Linux clusters. How is its storage being presented to the clients, and what OS are they running?
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
No disrespect taken, but high performance in this context is something more than 1Gbps and less than 10Gbps. The file server hardware is not really in question IMO; any good RAID should be able to saturate 10G when serving one client. How it is configured is in question. It has 10G, multi-1G, and InfiniBand built in, and we are using the 10G as far as I know.


Panasas is the premier provider of high performance scale-out storage solutions for big data workloads. All Panasas ActiveStor storage appliances leverage Panasas PanFS parallel file system to deliver superior performance, data protection, scalability, and manageability.

Panasas systems are optimized for highly demanding HPC storage applications in the bioscience, energy, government, finance, manufacturing, and other core research and development sectors.

If anything it is overkill for our use since usually only 1-2 people ever access it at once.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Todd33, am I correct in believing your involvement in this is just needing the data off the storage and/or paying the bill? If so, the people with control over the SAN really need to be involved in this.

Once again, I'm not trying to call anyone dumb. But if you have people in over their heads or dealing with technology they haven't used before, mistakes can happen. Even mistakes that might seem "stupid" to outsiders.

The sticking point for me is the fact that you are saying you are getting nearly identical speeds after upgrading to 10GbE. That seems too coincidental to be drivers/configuration to me, unless you've got a switch port configured for just 1GbE; it would seem to indicate there's still a 1GbE link in the picture somewhere. You've eliminated the client side, which leaves either switch configuration, cabling, or the SAN itself. Make sure the cables are correct, and make sure they are actually plugged into the 10GbE port on the SAN. The only thing worse than making a dumb mistake is not finding the dumb mistake.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
I am working with the IT guy; I just wanted some info that I can challenge him with. They are telling me stuff like "you need 3GHz of CPU for every 1Gb of speed" and trying to convince us that 10Gb is not possible. I think once they dig a bit deeper into the SAN, it will be an obvious configuration problem. Thanks.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
I am working with the IT guy; I just wanted some info that I can challenge him with. They are telling me stuff like "you need 3GHz of CPU for every 1Gb of speed" and trying to convince us that 10Gb is not possible. I think once they dig a bit deeper into the SAN, it will be an obvious configuration problem. Thanks.

I wouldn't believe that for a second. A 1.6GHz Intel Atom can max a gigabit connection when paired with an SSD, or when using synthetic testing like iperf.

--edit--

To give you perspective:

http://www.witchcraftbib.co.uk/western-digital-book-live-review/

That little tiny $99 NAS is rated around 50MB/s. You can't tell me that thing has a roaring 3GHz+ processor in it. Basically, your SAN is being outperformed by what amounts to a 5400rpm SATA hard drive.
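To put rough numbers on both points (the helper below is just illustrative arithmetic, not from any real tool):

```python
# Sanity-checking two claims from this thread.

def ghz_rule_gbit(clock_ghz: float, ghz_per_gbit: float = 3.0) -> float:
    """Throughput the '3GHz of CPU per 1Gb/s' rule of thumb would predict."""
    return clock_ghz / ghz_per_gbit

atom_predicted = ghz_rule_gbit(1.6)  # rule says a 1.6GHz Atom tops out ~0.53Gb/s
# ...yet such Atoms demonstrably saturate GbE (~0.94Gb/s of goodput),
# so the rule badly overstates the CPU cost even for weak cores.

nas_gbit = 50 * 8 / 1000             # the $99 NAS's 50MB/s expressed in Gb/s
print(f"rule predicts {atom_predicted:.2f} Gb/s; NAS pushes {nas_gbit:.1f} Gb/s")
```

In other words, 50MB/s is only 0.4Gb/s: less than half of a single GbE link, let alone 10GbE.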
 
Last edited:

Demo24

Diamond Member
Aug 5, 2004
8,356
9
81
I am working with the IT guy; I just wanted some info that I can challenge him with. They are telling me stuff like "you need 3GHz of CPU for every 1Gb of speed" and trying to convince us that 10Gb is not possible. I think once they dig a bit deeper into the SAN, it will be an obvious configuration problem. Thanks.



I can tell you that's bull; a lot of that gets offloaded by the RAID cards on these things anyway. I have an EMC VNXe3300; it's got a couple of Intel Xeons clocked lower than that, and it has no trouble pushing 10Gb to normal operating limits.

If you have a Panasas (I'm guessing an ActiveStor 11 or 14) built within the past couple of years, I can guarantee it will have no trouble at all maxing out 10Gb. Heck, with 80TB of fast storage I'd probably be doing 40Gb connections just to be safe.

It sounds like a couple things are going on here:

A) You are expecting much faster speeds out of a desktop or server that is not on the 10Gb network, or is not properly configured there.

B) Some lazy, or perhaps slightly unknowledgeable, IT people.


What would help is to try and draw out a map of how this is connected, and where you are testing it from.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
Right now the computer with the 10G is connected to the same 10G switch as the Panasas all in the same room. We already know the same computer and same switch gets ~5.5Gbps with another computer on the same network. So at this point all signs are pointing to a poorly configured Panasas.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
Some data from IT!

From Client to SAN
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

524288 16384 16384 10.00 9396.73


From SAN to client
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 524288 524288 10.23 3200.71

So not very symmetric, but still better than the real-world 500Mbps we are seeing with actual files.
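To put those netperf numbers in the MB/s units used earlier in the thread (the throughput column is in 10^6 bits/sec):

```python
# Convert netperf's throughput column (10^6 bits/sec) into MB/s.

def netperf_to_mb_s(mbit_per_s: float) -> float:
    return mbit_per_s / 8  # 10^6 bits/sec -> 10^6 bytes/sec

client_to_san = netperf_to_mb_s(9396.73)  # ~1175 MB/s
san_to_client = netperf_to_mb_s(3200.71)  # ~400 MB/s
real_files    = netperf_to_mb_s(500)      # ~62.5 MB/s for actual file copies
print(f"{client_to_san:.0f} / {san_to_client:.0f} / {real_files:.1f} MB/s")
```

So the SAN-to-client direction is moving only about a third of what client-to-SAN does, and real file transfers are a further ~6x below even that.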
 

heymrdj

Diamond Member
May 28, 2007
3,999
63
91
Some data from IT!

From Client to SAN
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

524288 16384 16384 10.00 9396.73


From SAN to client
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 524288 524288 10.23 3200.71

So not very symmetric, but still better than the real-world 500Mbps we are seeing with actual files.

No, that's pretty on point. Your client can read faster than it can write. The Panasas (with all of its storage systems) can write way faster than a single client can read or write, so what you're seeing is pretty accurate. Your system can send to the SAN as fast as it possibly can, because the SAN will take all your bits per second and then some. The SAN, however, cannot send as fast to you, because your workstation doesn't have nearly the storage subsystem; the SAN is forced to slow down and wait for your slower workstation to write the data to its own storage.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
No, that's pretty on point. Your client can read faster than it can write. The Panasas (with all of its storage systems) can write way faster than a single client can read or write, so what you're seeing is pretty accurate. Your system can send to the SAN as fast as it possibly can, because the SAN will take all your bits per second and then some. The SAN, however, cannot send as fast to you, because your workstation doesn't have nearly the storage subsystem; the SAN is forced to slow down and wait for your slower workstation to write the data to its own storage.

No it's not; as the others said, those were netperf results, which aren't hitting the disk. With that in mind, the tests should be basically the same going either way. The only good thing to come of that test is that you are exceeding GbE both ways, so that eliminates some possibilities.

Have them run that same test using the other client (the one you used to verify the problem client is getting 10GbE speeds) and compare the results. If that one tests fine, I would swap the NIC between the two and see if the problem follows.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
The previous results with the two computers (no SAN) were netperf, just under 6Gbps. I don't know if that was both ways or one; I'll find out.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
It also may be worth asking how they have the disk groups configured. I can have a four-rack EMC VNX and still give your server a LUN stored on a pair of 7200rpm SATA drives. No amount of 10GbE is going to overcome the fact that I gave you a RAID 1 of slow disks. The issue gets worse if I mapped three servers to that same disk group: all three have to compete.