10GE upgrade question

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
We recently swapped out a 1GE NIC for a 10GE NIC in a dual-Xeon PC connected to a Panasas file server via a 10GE switch. The improvement is near zero. What are we missing? The IT guy claims that PCs can't handle 10GE; is he correct? We are using Fedora 18 with NFSv3. We don't have a Panasas file system driver yet.
 
Feb 25, 2011
16,955
1,594
126
Google "NFS Tuning" - there are a number of system settings that you need to implement before you'll be able to take advantage.

It's stuff like buffer size, frame size, and a bunch of other stuff that the existing guides do a better job explaining than I could here.
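A minimal sketch of the kind of tuning those guides walk through, assuming NFSv3 over TCP on the Fedora client - the interface name (eth2), server name, and export path are placeholders for whatever your setup actually uses:

# mount with 1MB read/write sizes instead of the small defaults
mount -t nfs -o vers=3,tcp,rsize=1048576,wsize=1048576 server:/export /mnt/panasas

# raise the TCP buffer ceilings so one stream can fill a 10GbE pipe
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# jumbo frames help too, but every hop (NIC, switch, server) must agree
ip link set eth2 mtu 9000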
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
Your PCs are most likely connected via 1Gb links, so you'll see little difference with one PC. Where you see the difference is that the server can now handle traffic from multiple PCs at once without being overwhelmed. You also need to make sure the disks on the server perform well enough that your 10Gb connection isn't being wasted on slower disks - a rough check is sketched below.
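If you want a rough number for any disks you do control, something like this works (device and file paths are placeholders):

# sequential read straight off the device, bypassing the filesystem
hdparm -t /dev/sda

# or read a big file through the filesystem with the cache dropped first
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/path/to/large_file of=/dev/null bs=1M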
 

master_shake_

Diamond Member
May 22, 2012
6,425
291
121
Shocked you are seeing no improvement.

Going from 1GbE to 10GbE IPoIB I get ~5x the performance.

And that's with the card in a 4x slot on a P45 board.
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
Have you tried running iperf, or are you measuring it via Windows file copies etc.? It could be the disks limiting the transfer.
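For reference, the basic iperf run looks like this (hostname is a placeholder) - it measures raw TCP throughput with no disks or NFS in the path:

# on one 10GbE machine
iperf -s

# on the other, a 30-second test with a few parallel streams
iperf -c 10g-host.example.com -t 30 -P 4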
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
That should be our next step, though I'm not sure how that would work with the Panasas. We might need another computer for iperf.
 

Sequences

Member
Nov 27, 2012
124
0
76
How much was your improvement? What are you trying to improve? Were you maxing out your 1GE before the upgrade? Some numbers would be useful.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
We were getting like 50MB/s before and after the upgrade. The Panasas can feed data a lot faster than that in theory. The data is mostly in large files, 50-300MB.
 

azazel1024

Senior member
Jan 6, 2014
901
2
76
Something else is holding it up then. If you couldn't saturate GbE previously, upgrading to 10GbE isn't going to do a thing.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Todd33 said:
We were getting like 50MB/s before and after the upgrade. The Panasas can feed data a lot faster than that in theory. The data is mostly in large files, 50-300MB.

We need more info about the setup and usage to be able to assist. All the network bandwidth in the world won't help if you are trying to feed it using a single 5400rpm SATA 1 drive.
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
XavierMace said:
We need more info about the setup and usage to be able to assist. All the network bandwidth in the world won't help if you are trying to feed it using a single 5400rpm SATA 1 drive.

It's fed by a newish storage system by these guys https://www.panasas.com/, so plenty fast.

The computer reading it is a dual Xeon system which is reading directly into RAM, no disk IO.
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
That still tells us nothing. Which model? How many blades? Looking at their current models the basic blade is a pair of spindle drives and a single SSD. Wouldn't be the first time I've seen a blade enclosure with a single blade in it.

Do you still have GbE connections on either of these boxes? Have you confirmed the storage traffic is going over the 10GbE?
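On the Linux client you can check this directly (interface name and server address are placeholders):

# which interface does the kernel route the Panasas traffic through?
ip route get 192.168.10.50

# watch byte counters on the 10GbE port during a transfer; if they
# barely move, the traffic is going out the old GbE port
ip -s link show eth2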
 

Todd33

Diamond Member
Oct 16, 2003
7,842
2
81
XavierMace said:
That still tells us nothing. Which model? How many blades? Looking at their current models the basic blade is a pair of spindle drives and a single SSD. Wouldn't be the first time I've seen a blade enclosure with a single blade in it.

Do you still have GbE connections on either of these boxes? Have you confirmed the storage traffic is going over the 10GbE?

It's hard for me to say much about the server, other than it cost >$50k and has 80TB. I know the computer on the other end is 10GbE; I verified that. If this were computer to computer, I could run iperf, but the Panasas is beyond my control.

Hypothetically speaking, assuming the file server has infinite bandwidth, where would the bottleneck be? Is it the PCI-e bus and system RAM? The switch?
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Right now you're saying you're getting 50MB/s. That's about normal for a single GbE connection, which suggests to me you don't have 10GbE all the way back to the storage.

You're nowhere near the limitations of PCI-e or RAM, and you state you've verified the "client" system has a 10GbE connection. Therefore the issue is either the file server or the infrastructure in between. You can't rule out the file server just because it cost a lot of money.

I'm assuming we're talking copper - has anyone verified the cables?
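Beyond eyeballing them, ethtool on the client will show what the link actually negotiated (eth2 is a placeholder) - a bad cable or transceiver often shows up as a downshifted speed or climbing error counters:

# should report "Speed: 10000Mb/s"
ethtool eth2

# error/drop counters that point at a bad cable or transceiver
# (exact counter names vary by NIC driver)
ethtool -S eth2 | grep -i -E 'err|crc|drop'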
 

RadiclDreamer

Diamond Member
Aug 8, 2004
8,622
40
91
Todd33 said:
It's hard for me to say much about the server, other than it cost >$50k and has 80TB. I know the computer on the other end is 10GbE; I verified that. If this were computer to computer, I could run iperf, but the Panasas is beyond my control.

Hypothetically speaking, assuming the file server has infinite bandwidth, where would the bottleneck be? Is it the PCI-e bus and system RAM? The switch?

As stated earlier, PCI-e is absolutely not the problem.

It could be the network driver, it could be firmware on the switch, it could be the file server. We need to approach this as a process of elimination since we aren't versed in your environment. Is it possible to run iperf to another 10G device using the same switch path?
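A sketch of that elimination order, assuming you can borrow a second 10G host on the same switch - hostnames and paths are placeholders:

# step 1: raw network, no storage (run 'iperf -s' on the borrowed host first)
iperf -c other-10g-host -t 30 -P 4

# step 2: the NFS read path, cache dropped so it really hits the server
sync && echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/panasas/some_large_file of=/dev/null bs=1M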
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
To elaborate on PCI-e not being an issue....

My SAN has a single PCIe quad-port 4Gb Fibre Channel HBA running to a 4Gb switch, then a quad-port 4Gb HBA on the ESX hosts. The ESX hosts are getting 1300MB/s write speeds. Obviously FC vs Ethernet is a bit of an apples-to-oranges comparison, but it shows that PCIe is not the issue.
 

imagoon

Diamond Member
Feb 19, 2003
5,199
0
0
XavierMace said:
Right now you're saying you're getting 50MB/s. That's about normal for a single GbE connection, which suggests to me you don't have 10GbE all the way back to the storage.

You're nowhere near the limitations of PCI-e or RAM, and you state you've verified the "client" system has a 10GbE connection. Therefore the issue is either the file server or the infrastructure in between. You can't rule out the file server just because it cost a lot of money.

I'm assuming we're talking copper - has anyone verified the cables?

50MB/s is less than half of 1 gig, so he isn't even at the limits of a single 1Gb connection. Something is limiting performance here. I would lean towards the storage or something in the network layer. Also, all the disk in the world won't mean anything if the SAN guy only gave you a LUN on a single disk.
 
Feb 25, 2011
16,955
1,594
126
XavierMace said:
To elaborate on PCI-e not being an issue....

My SAN has a single PCIe quad-port 4Gb Fibre Channel HBA running to a 4Gb switch, then a quad-port 4Gb HBA on the ESX hosts. The ESX hosts are getting 1300MB/s write speeds. Obviously FC vs Ethernet is a bit of an apples-to-oranges comparison, but it shows that PCIe is not the issue.

Your numbers aren't adding up. 4Gb would only top out at ~500MB/sec.

The ESX hosts are probably getting 1300MBps write because VMWare hosts will sometimes use unallocated RAM as a disk cache.

(Just the first thing that comes to mind.)
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
Your numbers aren't adding up. 4Gb would only top out at ~500MB/sec.

The ESX hosts are probably getting 1300MBps write because VMWare hosts will sometimes use unallocated RAM as a disk cache.

(Just the first thing that comes to mind.)

Read more carefully. :)


Quad Port, meaning 4x 4Gb/s links.
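For anyone checking the math: 4Gb FC is 8b/10b encoded, so each link carries roughly 400MB/s of payload. Four links is ~1600MB/s aggregate, which makes 1300MB/s of sustained writes entirely plausible without any RAM caching involved.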
 

XavierMace

Diamond Member
Apr 20, 2013
4,307
450
126
imagoon said:
50MB/s is less than half of 1 gig, so he isn't even at the limits of a single 1Gb connection. Something is limiting performance here. I would lean towards the storage or something in the network layer. Also, all the disk in the world won't mean anything if the SAN guy only gave you a LUN on a single disk.

Not talking about theoretical maxes; 50-75MB/s is pretty normal for GigE file transfers.