4 Gbit Fibre Channel throughput?

Full duplex, and a wee bit more than 425 megabytes/sec each way.
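That figure can be sanity-checked with back-of-the-envelope arithmetic: 4GFC signals at 4.25 Gbaud using 8b/10b encoding, so (a rough sketch, assuming the standard encoding figures and decimal megabytes):

```python
# Rough 4GFC arithmetic: 4.25 Gbaud line rate, 8b/10b encoding
# (10 line bits carry 8 data bits).
line_rate = 4.25e9                # baud
data_bits = line_rate * 8 / 10    # usable bits/sec after 8b/10b
per_dir_mb = data_bits / 8 / 1e6  # -> MB/s in one direction
print(per_dir_mb)                 # 425.0
print(per_dir_mb * 2)             # 850.0 full duplex
```

In practice FC framing and the SCSI layer shave that down to roughly 400 MB/sec of usable payload per direction.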

So if I was moving data in one direction at 350 MB/sec, it would be more logical to consider the link as a bottleneck than the physical disk when I have an aggregate with 40 15k spindles in it, correct?
 
Yeah... theoretical maximums are one thing... real world performance is another. I'm seeing up to 380 MB/sec unidirectional right now and I'm suspecting that's the limit of the link.

The ability of the controller and the design of the array play a part. Saying you have 40 arms doesn't tell us much about the performance. A 40 arm RAID0 or JBOD is going to be limited at the HBA for example.
 

The array is striping with dual parity. They're 15k drives... enterprise grade SAN, albeit a couple generations old.
 

RAID6 or RAID60 then? RAID6 on 40 drives could hit a controller limitation (something has to do all the XOR work), or RAID60 could give you half the write performance you expect, or in some cases, depending on the controller implementation, half the read performance.
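For what it's worth, the textbook small-write penalty for RAID6 is six disk operations per write (read the data block and both parity blocks, then write all three back). A rough sketch, assuming a generic ~180 IOPS per 15k spindle (not a measured figure for any particular controller):

```python
# Rule-of-thumb RAID6 small-write math: 6 I/Os per logical write.
# The 180 IOPS per 15k spindle figure is an assumption.
def raid6_write_iops(spindle_iops: float, n_drives: int) -> float:
    return spindle_iops * n_drives / 6

print(raid6_write_iops(180, 40))  # 1200.0 small-write IOPS across 40 drives
```

Sequential streaming (like a tape restore) largely dodges this penalty, since the controller can compute parity over full stripes instead of doing read-modify-write.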
 

It's NetApp's RAID-DP. I'm confident the disk is not the issue because the same SAN saw MUCH higher aggregate throughput when it was used for primary storage.

And it's not 40 disks in a single raid group... multiple RAID groups make up the aggregate that this volume is on.
 

At that point I would agree ("the more you know"). Try adding some concurrent sessions. Odds are it is not unlike some of the new SSD boards, where a single session really doesn't show the power of the system. However, at 350 MB/s you are definitely getting close to the max of the 4 Gbit link. I would honestly only expect 380-390 MB/s out of a fully tuned and utilized setup. All of the overhead is the encapsulated protocol (i.e., SCSI). A 4 Gbit line will hit 400 MB/s of true transfer because the 'fiber overhead' is in the .25 of the 4.25 line rate of the fiber.
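To put a number on the framing overhead: a full FC frame carries 2112 bytes of payload against roughly 36 bytes of header/delimiters/CRC plus inter-frame idles. A sketch (the 24-byte inter-frame gap is an assumed figure; real gaps vary by implementation):

```python
# Frame-efficiency sketch. Framing sizes are the standard FC figures;
# the inter-frame idle count is an assumption.
payload = 2112            # max FC frame payload, bytes
framing = 24 + 4 + 4 + 4  # header + SOF + EOF + CRC
idles = 24                # ~6 transmission words between frames (assumed)

efficiency = payload / (payload + framing + idles)
print(round(425 * efficiency))  # ~413 MB/s ceiling before SCSI overhead
```

Layer SCSI command overhead on top of that and the observed 380-390 MB/s is about what you'd expect.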
 
I am able to get it to go up to about 380 MB/s under ideal circumstances... this is with 3 or 4 LTO 4 tape drives restoring data. Three drives running will push it up to 380... when the 4th starts, the aggregate throughput stays the same but the individual jobs slow down.
 

Well then that should answer your original question. "Yes you are at the limit." 🙂
 

The other component is the backup software. Every time I've discussed throughput with the vendor, they're impressed I'm getting as much as I am. I think they're used to customers that write data from multiple servers to a single tape and still don't fill a tape. Our full backup set is 15 LTO 4 tapes.

Also, 380 MB/sec is about 3 Gbit... 75% of the theoretical maximum. 25% overhead seems quite high, but I have little experience with Fibre Channel, so I'm not sure what to expect. Hence this thread. 🙂
 

Rated max is 400 MB/s (one way). The ".25" in the math is reserved for fiber overhead. Add on the SCSI overhead, and 380 is "damn good." Even 350 is pretty good, at 87.5% utilization.
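The utilization math against that 400 MB/s one-way ceiling, spelled out:

```python
# Observed throughput as a fraction of the 400 MB/s usable one-way rate
ceiling = 400.0
for observed in (350, 380):
    print(f"{observed} MB/s -> {observed / ceiling:.1%}")
# 350 MB/s -> 87.5%
# 380 MB/s -> 95.0%
```

So measured against usable payload rather than the raw 4 Gbit signaling rate, the link is effectively saturated.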
 