Where is the bottleneck?

taelee1977

Member
Feb 9, 2005
28
0
0
I've been doing some tests recently here at work on our gigabit network, and I've noticed something peculiar. When I do large file transfers between servers, I don't get anywhere near what I should be getting on a gigabit network. Heck, I don't even max out a 100BaseT network. Out of all the file transfer tests, the fastest I was able to get was 91Mbit/s on a gigabit switch.

We've come to the conclusion that the hard drive is the bottleneck. Once the initial burst of cached data runs out, the transfer slows to what the hard drive can sustain. Keep in mind we're using SCSI RAID drives here. I find all this rather strange, since the hard drive manufacturers advertise speeds much faster than what I'm getting. I remember Ultra DMA interface speeds were something like 66MBytes/s, which would still be much faster than what I'm seeing. What gives? Am I missing something here? Have we all been fooled by the marketing???:| If the hard drive can't even pump out data faster than 91Mbit/s, what's the point in getting anything faster than 100BaseT?

I tested this on my home computer going from SATA to SATA hard drive... and the results weren't much different. Please try transferring a large amount of data (1GB+), either between hard drives or over the network, and let me know your results. Please state the type of network (100BaseT, gigabit, etc.), the size of the data, and the approximate transfer time. Any input/ideas/conspiracy theories would be much appreciated. Thanks!
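If anyone wants to run a comparable test, here's a minimal sketch of how a timed copy could be scripted in Python. The paths are hypothetical placeholders; point them at your own shares and drives:

```python
import os
import shutil
import time

# Hypothetical placeholder paths: point SRC at a file on the remote share.
SRC = r"\\server\share\bigfile.bin"
DST = r"D:\test\bigfile.bin"

start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

size = os.path.getsize(DST)
print(f"{size / 2**20:.0f} MiB in {elapsed:.1f}s "
      f"= {size / elapsed / 1e6:.1f} MB/s "
      f"= {size * 8 / elapsed / 1e6:.0f} Mbit/s")
```

Printing both MB/s and Mbit/s avoids the bytes-vs-bits confusion that tends to creep into these comparisons.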
 

TGS

Golden Member
May 3, 2005
1,849
0
0
You may be confusing disk transfer speeds with network speeds. Your gigabit network has a theoretical limit of 1000Mbit/s; the practical limit with overhead is much less. Keep in mind your frame size can make a difference, and I think you could increase the MTU size as well. Though if your hard drive I/O is under 91Mbit/s, you may need to look at your array.
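To put a rough number on that overhead, here's a back-of-the-envelope sketch. It assumes plain 20-byte IP and 20-byte TCP headers with no options, so real traffic loses slightly more:

```python
# Best-case TCP payload efficiency per Ethernet frame.
ETH_WIRE_OVERHEAD = 8 + 14 + 4 + 12  # preamble+SFD, MAC header, FCS, inter-frame gap
IP_TCP_HEADERS = 20 + 20

for mtu in (1500, 9000):
    payload = mtu - IP_TCP_HEADERS
    wire = mtu + ETH_WIRE_OVERHEAD
    print(f"MTU {mtu}: {payload / wire:.1%} efficient, "
          f"~{1000 * payload / wire:.0f} Mbit/s best case on gigabit")
```

That works out to roughly 949Mbit/s at the standard 1500-byte MTU and 991Mbit/s with 9000-byte jumbo frames, so framing overhead alone can't explain a 91Mbit/s result.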

What is the overall setup? Array hardware, network hardware for both servers and clients, etc.?
 

taelee1977

Member
Feb 9, 2005
28
0
0
The specs are just standard Dell servers.

The RAID card is a Dell PERC 4di running Seagate Cheetah 10k RPM SCSI-3 hard drives in RAID 5.

These are relatively new Dell servers that were put in recently. I know there is overhead and such... but my problem is we're not even getting close to the stated 1000Mbit/s. I'd be happy with even half the theoretical limit. I had a friend at a different company with a gigabit switch run similar tests, and his results came out worse than ours. I've tried running hard-drive-to-hard-drive transfers using the dual SATA drives on my home computer and gotten similar results.
 

spidey07

No Lifer
Aug 4, 2000
65,469
5
76
Are the disk controller and gigabit card on the same bus? Put them on different buses if you can.

9000-byte frames (jumbo MTU) will help speed things up as well.

You should get faster than 91Mbit/s; somewhere in the 200-500Mbit/s range depending on server hardware.
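For a sense of why the shared bus matters, here's a rough sketch. The 32-bit/33MHz PCI figure is an assumption about these particular servers; check what bus the onboard NIC and PERC actually sit on:

```python
# Back-of-the-envelope look at a shared PCI bus. Every byte of a file
# transfer crosses the bus twice: disk -> RAM, then RAM -> NIC.
pci_budget_mb = 33e6 * 4 / 1e6    # 32 bits at 33 MHz: ~133 MB/s, shared
gigabit_mb = 1000 / 8             # ~125 MB/s at gigabit line rate

best_case_mb = pci_budget_mb / 2  # halved: each byte uses the bus twice
print(f"shared PCI budget: {pci_budget_mb:.0f} MB/s")
print(f"gigabit line rate: {gigabit_mb:.0f} MB/s")
print(f"best case through one shared bus: ~{best_case_mb:.0f} MB/s "
      f"(~{best_case_mb * 8:.0f} Mbit/s) before protocol overhead")
```

That best case of roughly 66MB/s (about 530Mbit/s) lands right around the top of the 200-500Mbit/s estimate, and any other traffic on the bus pulls it down from there.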
 

taelee1977

Member
Feb 9, 2005
28
0
0
spidey07,

Yeah, 200-500Mbit/s is kind of what I expected. The gigabit cards are onboard; I ran these tests on recently purchased, commonplace Dell servers. I'm wary of making registry changes on servers, so I will try changing the MTU on test workstations.
 

mparr1708

Senior member
Jan 5, 2005
258
0
0
Your switches could also be part of what is limiting you; not all switches are created equal. That being said, there are a lot of factors that affect how fast a file will transfer from one server to another. Your current method of testing is more suited to testing the disk subsystem than your network throughput capability. There are a number of factors to think about when you talk about disk speed as well. What type of RAID is set up? How many drives are part of the logical set? How long has the RAID been in place, and is it in a degraded state?

I'm not actually looking for you to answer these questions, but rather to give ideas of why you're not seeing the speeds you're expecting. I would expect even half of 1Gb to be very unrealistic when you talk about transfer speeds from one server to another across a 1Gb network.
 

mboy

Diamond Member
Jul 29, 2001
3,309
0
0
Use NetIQ to check.

The onboard gigabit on that Dell server is most likely sharing the system bus, keeping you well under true gigabit speeds.
 

TGS

Golden Member
May 3, 2005
1,849
0
0
Yeah, you would think they should put those on separate buses. If you are getting similar issues doing direct peer-to-peer file transfers, you should give Dell a call to find out if there is something wonky with the way they have the onboard NIC and PERC set up by default.

You would expect them to foresee I/O contention problems, but they may overlook that to keep the price down.

If anything, if you can, get your hands on a discrete gigabit card (or two) and try moving data between two servers. Compare those results with the onboard chip. Chances are they may be dropping an underperforming onboard chip in the server to keep costs down.

I just looked at your first post again: 91Mbit/s is about 11.3MBytes/s, so you are pushing right up against the theoretical limit for fast ethernet. Another thing to consider is how the file structure you are reading from is laid out. I can tell you that on a fibre channel networked tape unit, even with a 2Gbit backbone, we rarely saw speeds over 20-30MB/s. Of course there is more overhead with FCP to SCSI across a bridge, but even with twice the backbone we saw about a tenth of the total bandwidth being used on that particular device.

I think the problem may just be the data structure. Unless it's a huge flat dummy file, I really don't think you will see anywhere near gigabit speeds. Even with a good deal of optimism, at 800Mbit/s, unless you have a large storage system behind that network, you will rarely see that speed hold up for long. I think the benefit of gigabit networks is more overall bandwidth for larger datacenters, where everything sits behind a few large switches and there isn't as much traffic contending for the particular subnet with the resources you want.

Just off the top of my head: on a large storage system, even though it's an Ultra2 bus, with 64+ drives in an array we only see around 50-60MB/s on updates. That is nice for an Ultra2 setup, but still far below what you would expect for data transfer, especially internal to the box.
 

jonny13

Senior member
Feb 16, 2002
440
4
81
On my home network with a Linksys gigabit switch, I would say I average 15-35MB/s (120-280Mbit/s). That is of course actual payload speed and doesn't include the overhead, so on the wire I'm in the 150-350 range. Certainly not as high as it could be, but compared to the 8-10MB/s I was getting on the 100BT network, I will certainly take it. It also depends on the OS: I can pull files to W2k3 from XP faster than I can push from XP to W2k3, for whatever reason. I have also noticed that when transferring from my XP rig to anything else, the memory really gets hit. When I had 512, free memory would drop to just about zero, bounce up to 75, then crawl back down to zero. So RAM was definitely an issue.
 

TGS

Golden Member
May 3, 2005
1,849
0
0
That's what I'm saying: your bandwidth is increased a good deal versus fast ethernet, though against a single host you won't soak up all the bandwidth provided. What you have now is headroom on the backbone to run 3 clients at full bore without impacting anyone else's network throughput.
 

azev

Golden Member
Jan 27, 2001
1,003
0
76
The fastest I saw on my gigabit home network is 40MB/sec. That was transferring files between 2 file servers, both with U160 SCSI and 8 73GB 10k drives. I use a Dell 5324 as the switch, btw.
 

Red Squirrel

No Lifer
May 24, 2003
69,956
13,468
126
www.anyf.ca
Is this a transfer via NetBIOS? NetBIOS is not the fastest transfer protocol (mind you, it still shouldn't cap out like it is for you).

Just for fun, try different protocols like FTP. I'll do a transfer test when I get home and post the results.
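If you want to take the disks out of the equation entirely, a raw memory-to-memory TCP test shows what the network alone can do. Here's a minimal sketch of that idea (a poor man's iperf), assuming Python on both machines and an arbitrary free port:

```python
# Memory-to-memory TCP throughput test: no disks involved.
# Start "python nettest.py server" on one box,
# then "python nettest.py client <host>" on the other.
import socket
import sys
import time

PORT = 5001            # arbitrary test port
CHUNK = 64 * 1024
TOTAL = 1024**3        # send 1 GiB

if sys.argv[1] == "server":
    srv = socket.socket()
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    print(f"received {received / 2**20:.0f} MiB")
else:
    sock = socket.create_connection((sys.argv[2], PORT))
    buf = b"\0" * CHUNK
    start = time.time()
    sent = 0
    while sent < TOTAL:
        sock.sendall(buf)
        sent += len(buf)
    sock.close()
    print(f"{sent * 8 / (time.time() - start) / 1e6:.0f} Mbit/s")
```

Whatever the client prints is roughly your network ceiling; if that number is high but file copies are slow, the bottleneck is the disks or the file-sharing protocol, not the wire.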
 

Red Squirrel

No Lifer
May 24, 2003
69,956
13,468
126
www.anyf.ca
Ok, not sure if I did this right.

I transferred exactly 1GB (1024 * 1024 * 1024 bytes) in 128secs, so if I calculated right that's 64Mbps (over a 100mbps network).

I go through a switch which uplinks to the router, and the server I transferred to is directly on the router. It's a Linux server, and this is through Samba; the machine I transferred from is a Win2k box.

All drives are IDE and the network is all home equipment, no fancy Cisco stuff. (Does Linksys count? :p)
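For what it's worth, the arithmetic checks out if the megabits are counted in binary units; a quick sanity check:

```python
# Sanity check: 1 GiB in 128 seconds.
bits = 1024**3 * 8           # 8,589,934,592 bits
print(bits / 128 / 2**20)    # 64.0 -> "64Mbps" counting a megabit as 2**20 bits
print(bits / 128 / 1e6)      # 67.108864 -> ~67 Mbit/s in decimal units
```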
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Originally posted by: Red Squirrel
Ok, not sure if I did this right.

I transferred exactly 1GB (1024 * 1024 * 1024 bytes) in 128minutes, so if I calculated right that's 64Mbps (over a 100mbps network).

I go through a switch which uplinks to the router, and the server I transferred to is directly on the router. It's a Linux server, and this is through Samba; the machine I transferred from is a Win2k box.

All drives are IDE and the network is all home equipment, no fancy Cisco stuff. (Does Linksys count? :p)

Do you mean 128 seconds?!