Where is the bottleneck?

taelee1977

Member
Feb 9, 2005
28
0
0
I've been doing some tests recently here at work on our gigabit network and I've noticed something peculiar. When I do large file transfers between servers, I don't get anywhere near what I should be getting for a gigabit network. Heck, I don't even max out a 100BaseT network. Out of all the file transfer tests, the fastest I was able to get was 91Mbit/s on a gigabit switch.

We've come to the conclusion that the hard drive is the bottleneck. Once the initial burst of data from the cache runs out, the transfer speed slows to what the hard drive can sustain. Keep in mind we're using SCSI RAID drives here. I find all this rather strange since the hard drive manufacturers advertise speeds much faster than what I'm getting. I remember the Ultra DMA speeds were something like 66MBytes/s, which would still be much faster than what I'm getting. What gives? Am I missing something here?

I tested this out on my home computer, going from one SATA hard drive to another... and the results weren't much different. Any input/ideas/conspiracy theories would be much appreciated. Thanks!
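For anyone who wants to reproduce the test, here's roughly how I'm timing the copies in Python; the paths below are just placeholders for my setup, so adjust them for your machines:

import os, shutil, time

SRC = r"\\server1\share\big_test_file.bin"   # placeholder: ~1GB file on the source server
DST = r"D:\temp\big_test_file.bin"           # placeholder: destination on the other machine

size_bytes = os.path.getsize(SRC)
start = time.time()
shutil.copyfile(SRC, DST)                    # plain file copy, same as a drag-and-drop
elapsed = time.time() - start

mb_per_sec = size_bytes / elapsed / 1e6      # megabytes per second
print("%.1f s -> %.1f MB/s = %.0f Mbit/s" % (elapsed, mb_per_sec, mb_per_sec * 8))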
 

Chaotic42

Lifer
Jun 15, 2001
33,929
1,097
126
I'm not an expert, so I'm just going to throw out some ideas here.

Is the SCSI RAID Adapter on a regular PCI slot?
Are you sure the network is running at 1Gb?
Is the NIC on a regular PCI slot where it may be limited by other devices?

I guess you could try setting up a RAM drive and transferring data to it. If you get better results, it could be something with your drives.
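If you want to take the disks out of the equation, a quick Python sketch like this (paths and sizes are just examples) would time raw writes to the RAM drive versus the RAID array:

import os, time

def write_test(path, total_mb=512, chunk_mb=4):
    # Write total_mb of zeros to path and return throughput in MB/s.
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())      # force the data out of the OS cache
    return total_mb / (time.time() - start)

# Paths are placeholders: point one at the RAM drive, one at the RAID array
print("RAM drive:  %.1f MB/s" % write_test(r"R:\test.bin"))
print("RAID array: %.1f MB/s" % write_test(r"D:\test.bin"))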
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
What is your setup with the hard drives?

What type of drives?
What controller card?
What type of RAID?
What type of slot is the RAID card in?
 

taelee1977

Member
Feb 9, 2005
28
0
0
Yes, the SCSI RAID card is on a PCI slot. Heck, we've even tried going server to server and skipping the switch altogether by using a cross-over cable, and the results were the same. We've tried testing on multiple servers and got the same results. I've had a friend at a different company run the same tests, and his came out worse than ours!

Please try transferring a large amount of data (1GB+) either between hard drives or through the network and let me know your results. Please include the type of network (100BaseT, gigabit, etc.), the size of the data, and the approximate transfer time.

HAVE WE ALL BEEN FOOLED BY THE MARKETING??? It seems pointless to upgrade to a gigabit network if our hard drives can't even output data faster than 90Mbit/s sustained.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
The marketing figures are the CAPABLE speeds of the particular type of interface, not of the drives themselves.

i.e. ATA33, ATA66, ATA100, ATA133. Those are just the speeds of the channel, so an ATA33 channel is capable of transferring 33MB/s, an ATA66 channel 66MB/s, etc.

Now, when you say SCSI, I am assuming you are using at LEAST U160, if not U320 SCSI.

With those, you can transfer up to 160MB/s or 320MB/s across the SCSI channel.

These have nothing to do with actual hard drive speeds. A typical IDE drive can sustain around 25-30MB/s of constant read/write. When you get into the upper end of SCSI drives, you can start seeing sustained transfers up to 90-100MB/s (these numbers might not be exact, but they give you the general idea). Basically, the HDDs cannot transfer enough data to saturate the capacity of the channel.
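Part of what's going on in this thread is also bits versus bytes. Rough numbers, using the figure you posted:

# Quick bits-vs-bytes sanity check using the numbers from this thread
observed_mbit = 91.0                  # what taelee1977 measured over gigabit
observed_MB = observed_mbit / 8.0     # ~11.4 MB/s actually moving

ata33_MB = 33.0                       # channel maximum, not a drive speed
gigabit_MB = 1000.0 / 8.0             # ~125 MB/s of raw gigabit link capacity

print("91 Mbit/s is only about %.1f MB/s" % observed_MB)
print("That's about %.0f%% of even an ATA33 channel" % (100 * observed_MB / ata33_MB))
print("A gigabit link tops out around %.0f MB/s" % gigabit_MB)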

Until you provide us with more details on your setup we cannot help you improve it.
 

taelee1977

Member
Feb 9, 2005
28
0
0
Crusty and Chaotic, thanks for the quick replies. I would actually be thrilled to even get 25MB/s (200Mbit/s) transfer speeds.

The specs are just standard Dell servers.

The RAID card is a Dell PERC 4/Di with Seagate Cheetah 10K RPM SCSI-3 hard drives in RAID 5.

These are relatively new Dell servers that were put in recently.
 

Chaotic42

Lifer
Jun 15, 2001
33,929
1,097
126
If the SCSI adapter is standard 32-bit/33MHz PCI, it will be competing with everything else on your PCI bus for 133MB/s of bandwidth, which could very well be the problem.
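That 133MB/s isn't a magic number; it falls straight out of the bus width and clock:

# Theoretical peak of a plain PCI slot: 32 bits wide at 33 MHz,
# shared by every device sitting on that bus
bus_bytes = 32 / 8          # 4 bytes per transfer
clock_hz = 33.33e6          # 33 MHz bus clock
peak_MB = bus_bytes * clock_hz / 1e6
print("Plain PCI peak: ~%.0f MB/s shared across the whole bus" % peak_MB)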
 

taelee1977

Member
Feb 9, 2005
28
0
0
I got similar results on my home computer, which I recently put together. I tried transferring from one Hitachi SATA hard drive to another identical hard drive. I don't have any PCI cards installed, and my video card is PCI-e.
 

imported_Phil

Diamond Member
Feb 10, 2001
9,837
0
0
Originally posted by: Chaotic42
If the SCSI adapter is standard 32-bit/33MHz PCI, it will be competing with everything else on your PCI bus for 133MB/s of bandwidth, which could very well be the problem.

Exactly what I was going to post.

taelee1977, is the RAID card in a regular-looking PCI slot, or a PCI slot that's a fair bit longer? If it's the former, then you're limited to 133MB/sec anyway; if it's the latter, then you have other issues.

Download HDTach and post your results when you've run it on the RAID drive. Your "burst" rate should be pretty high; 160MB/sec to 320MB/sec should be expected of a decent card that's in a PCI-X (note: not PCI-Express) slot.

[Edit] Don't forget, of course, that you're not limited by the read speed of the machine that's serving up the files, you're limited by the write speed of the destination machine. If the destination has a RAID-5 array, then bear in mind that without a decent RAID card with sufficient cache, your write speeds will be pretty poor.
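Here's a very rough way to see why RAID-5 writes hurt. The per-drive speed and drive count below are just assumptions; plug in your own numbers:

# Very rough RAID-5 model. Small writes cost ~4 disk ops each
# (read data, read parity, write data, write parity) unless the
# controller has enough cache to turn them into full-stripe writes.
per_drive_MB = 50.0     # ASSUMED sustained speed of one Cheetah 10K drive
drives = 4              # ASSUMED number of drives in the array
write_penalty = 4

read_ceiling = per_drive_MB * (drives - 1)              # reads stripe across the data disks
write_ceiling = per_drive_MB * drives / write_penalty   # worst case, no write-back cache
print("Rough read ceiling:  %.0f MB/s" % read_ceiling)
print("Rough write ceiling: %.0f MB/s" % write_ceiling)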
 

taelee1977

Member
Feb 9, 2005
28
0
0
The server sending the data and the server receiving the data both have similar hardware. Both are running RAID 5... both are recently purchased, commonplace Dell servers. I believe the RAID card is the only card in a PCI slot. The gigabit NICs are onboard. Onboard devices don't share the PCI bandwidth, do they? Even if the bandwidth were limited to 133MB/s shared... let's say half of that was used by the hard drive... say, approximately 60MB/s (480Mb/s). That's still a huge jump from the results I'm consistently getting across all the Dell servers.
 

Chaotic42

Lifer
Jun 15, 2001
33,929
1,097
126
If (God forbid) the NIC and the SCSI card are both on the PCI bus (they could very well be; you'd need to look through your motherboard manual), then you'd be getting crummy performance. Does the computer have an AGP card? If not, your video is going through the PCI bus.

We'd need a detailed list of hardware to really be sure.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
If you are running a RAID 5 array then those numbers are right on par, unless you are using an array with many drives (6+) and a good controller.

If you want to see some strong read performance, put those drives in RAID 1 and try to read from it. I have a gigabit network at home with homebrew cables, and it tested at around 540Mbps. Even so, when I was transferring large files across the 1Gbps network, I would only get 25MB/s (200Mbps).

How many drives are in your RAID arrays?
 

imported_Phil

Diamond Member
Feb 10, 2001
9,837
0
0
Originally posted by: taelee1977
Everything on the Dell servers are onboard(including video card) except the RAID card, which is on PCI.

And therein lies your problem.

Not only should decent RAID cards always be used in a 64-bit PCI-X slot, but your write performance is killing... well, your performance.

While the array may be able to sustain 133MB/sec reading*, it definitely won't sustain that while writing. 30MB/sec is a ballpark figure for your setup (drives, controller, config, etc.).

* Although that figure will be more like 120MB/sec in practice because of the shared nature of the PCI bus.
 

NeonFlak

Senior member
Sep 27, 2000
549
5
81
Also keep in mind that the only gigabit NICs out right now that can sustain anywhere close to gigabit transfer rates are the onboard Intel NICs.
 
