
10GE upgrade question

Panasas is a proprietary system. With NFS, everything has to go through the director blade, but I hear you get much better results with their PanFS client. We are currently waiting for the drivers for our flavor of Linux, so for now we are using NFSv3.
 
The previous results with the two computers (no SAN) were from netperf: just under 6 Gbps. I don't know whether that was both ways or one; I'll find out.

Let me try to clarify. Let's call your computer "Client A" and the other system you were testing with "Client B". We now have hard numbers going from SAN to Client A and from Client A to SAN. The next test would be the same thing from SAN to Client B and Client B to SAN. I would also like to see the same test done from Client A to Client B and from Client B back to Client A. I realize you said you'd already tested client to client, but hard numbers would be nice.
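The full matrix above (SAN↔Client A, SAN↔Client B, Client A↔Client B, each direction separately) can be sketched as one netperf run per ordered pair of hosts. The hostnames below are placeholders, and this assumes `netserver` is already running on each box:

```python
# Generate one netperf TCP_STREAM run per ordered (source, destination)
# pair; running from each side in turn covers both directions of a link.
from itertools import permutations

HOSTS = ["san", "client-a", "client-b"]  # hypothetical hostnames

def netperf_commands(hosts):
    """Return a netperf command line for every ordered (src, dst) pair."""
    cmds = []
    for src, dst in permutations(hosts, 2):
        # Run on `src`, targeting the netserver listening on `dst`,
        # 30-second TCP stream test.
        cmds.append(f"ssh {src} netperf -H {dst} -t TCP_STREAM -l 30")
    return cmds

for cmd in netperf_commands(HOSTS):
    print(cmd)
```

Three hosts give six ordered pairs, so six runs covers every path in both directions.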
 
It may also be worth asking how they have the disk groups configured. I can have a four-rack EMC VNX and still give your server a LUN stored on a pair of 7200rpm SATA drives. No amount of 10GbE is going to overcome the fact that I gave you a RAID 1 of slow disks. The issue gets worse if I map 3 servers to that same disk group: all 3 have to compete for the same spindles.
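A back-of-envelope illustration of that point, using rough rule-of-thumb IOPS figures (assumptions, not measured values):

```python
# A 7200rpm SATA spindle manages on the order of ~80 random read IOPS
# (rough rule of thumb). In a RAID 1 pair, reads can be served by either
# mirror, and that whole budget is split across every server mapped to
# the disk group.
SATA_7200_IOPS = 80   # assumed per-spindle figure
SPINDLES = 2          # RAID 1 pair
SERVERS_SHARING = 3   # servers mapped to the same disk group

group_read_iops = SATA_7200_IOPS * SPINDLES
per_server_iops = group_read_iops / SERVERS_SHARING

print(f"~{group_read_iops} read IOPS total, ~{per_server_iops:.0f} per server")
# Extra 10GbE bandwidth does nothing to raise these numbers.
```

The numbers are illustrative, but the shape of the problem is real: the spindle budget, not the network, becomes the ceiling once traffic actually hits disk.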

While this is true, the current testing isn't hitting disk, so that's moot at this time.
 

Since he mentioned 6 Gb/s client to client, and some odd numbers from that SAN that might be caching, it seems less moot than you claim. He is also showing transfers in excess of 9 Gb and 3 Gb, which can be asymmetrical if the SAN is being used by other devices at that moment. Many SANs also QoS "system" activity to a lower priority, which can skew results. A network reading of 9 Gb/3 Gb, while a bit odd and worth some research, combined with the 5.5 Gb and 6 Gb readings from the other clients, points toward the storage device itself far more than the network.

I think Todd33's hunch that the SAN is badly configured might be correct.
 