Transfer Speeds

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
I think I know the answer to this question but I'm going to ask regardless because it doesn't make much sense.

I have two SATA3 hard drives on the same controller. When I transfer files from one of these drives to my Ubuntu Server via gigabit ethernet, on average I see 90-110 megabytes per second. When I transfer files between the two drives on the same machine, I see transfer rates at roughly half that.

To my mind this is because the controller has to handle both the read and the write at the same time, whereas with an Ethernet transfer the source and destination controllers each only have to do one or the other.
That said, SATA3 has a theoretical max of 6 gigabits per second. I would have thought the hard drives would be the bottleneck. SATA3 should easily handle a combined up and down total of ~200 MB/s (100 MB/s x 2), which would accommodate the physical transfer limitation of the hard drives.

Am I on the right track here, or am I misunderstanding how these controllers work? My point being that even if all drives on the SATA3 controller share total bandwidth, 100/100 should be attainable on the local controller, right?

Please feel free to educate me on this. Thanks.
 

Elixer

Lifer
May 7, 2002
10,371
762
126
What motherboard / chipset is this?
For what it's worth, depending on the file size, which part of the HD you are reading/writing to, and how fragmented it is, speeds can and do vary.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
My guess is you're using either a PCI or PCIe 1.0 x1 SATA controller, assuming you're testing with the same kinds of files.
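
Rough numbers behind that guess (back-of-the-envelope only; this assumes plain 32-bit/33 MHz PCI and a PCIe 1.0 x1 link with 8b/10b encoding, not anything measured on your actual board):

Code:
# Back-of-the-envelope bus limits for an add-on SATA controller.
# Assumptions: plain 32-bit/33 MHz PCI, and PCIe 1.0 x1 with 8b/10b encoding.
pci_total_MBps = 32 * 33.33e6 / 8 / 1e6        # ~133 MB/s, shared both directions
pcie1_x1_MBps = 2.5e9 * (8 / 10) / 8 / 1e6     # ~250 MB/s per direction

# A drive-to-drive copy through one controller crosses its bus twice
# (disk -> RAM, then RAM -> disk), so on plain PCI the copy itself tops
# out around half the bus rate, before any protocol overhead.
print(f"PCI copy ceiling : ~{pci_total_MBps / 2:.0f} MB/s")
print(f"PCIe 1.0 x1 link : ~{pcie1_x1_MBps:.0f} MB/s each way")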
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
When I transfer files between the two drives on the same machine, I see transfer rates at roughly half that.

That said, SATA3 has a theoretical max of 6 gigabits per second. I would have thought the hard drives would be the bottleneck. SATA3 should easily handle a combined up and down total of ~200 MB/s (100 MB/s x 2), which would accommodate the physical transfer limitation of the hard drives.

Read speeds on a drive are generally higher than write speeds, but writing to a second drive should still be near max speed assuming no issues like fragmentation on the receiving drive (especially if it's nearly full).

What I see as more likely is a bottleneck at the SATA controller. It is not uncommon for motherboard manufacturers to cheap out and give only a single PCI-E lane to the add-on SATA controller, limiting total throughput to well below SATA speeds. Whether this applies depends on the motherboard and on whether you are using the add-on SATA controller (i.e. not the chipset-based SATA ports).


As to SATA3's 6 Gbit/s, due to overheads in the protocol the max is 600 MB/s each way. That is why you see SSDs topping out at about 550 MB/s read.
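
The arithmetic behind that 600 MB/s figure, as a quick sketch assuming standard 8b/10b line encoding:

Code:
# SATA III usable bandwidth, assuming 8b/10b line encoding
# (10 bits on the wire for every 8 bits of payload).
line_rate_bps = 6e9              # 6 Gbit/s signalling rate
encoding_efficiency = 8 / 10     # 8b/10b
usable_MBps = line_rate_bps * encoding_efficiency / 8 / 1e6
print(f"~{usable_MBps:.0f} MB/s per direction")   # prints ~600 MB/s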

A spinning HDD is way, way slower, with only the best ones consistently pushing over 100 MB/s reads, well short of the full speed of the SATA connection. The bottleneck here is the rate at which data can move between the spinning platters and the read/write heads.

Of course, this all assumes your two drives are two physical drives and not two partitions on a single drive. Copying a drive back onto itself is slower still, because the read/write head has to move around a lot; this is why low seek times matter on a good drive.

And to make matters more complicated, because of how the file system handles moving data, moving a large number of small files is going to be slower than moving one large file.
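
If you want to put actual numbers on the big-file vs. small-files difference, something like this quick-and-dirty Python sketch will do; the paths are made up, the destination folder must not already exist, and a test file bigger than your RAM gives the most honest result since the page cache skews small copies:

Code:
import os, shutil, time

def dir_size(path):
    # Total size of all files under path, in bytes.
    return sum(os.path.getsize(os.path.join(root, f))
               for root, _, files in os.walk(path) for f in files)

def timed(fn, size_bytes):
    # Run fn() and return throughput in MB/s for size_bytes moved.
    start = time.time()
    fn()
    return size_bytes / (time.time() - start) / 1e6

# Hypothetical paths -- point these at files/folders on your own two drives.
big_src, big_dst = "/mnt/disk1/big.iso", "/mnt/disk2/big.iso"
small_src, small_dst = "/mnt/disk1/photos", "/mnt/disk2/photos"

print(f"one big file    : {timed(lambda: shutil.copyfile(big_src, big_dst), os.path.getsize(big_src)):.1f} MB/s")
print(f"many small files: {timed(lambda: shutil.copytree(small_src, small_dst), dir_size(small_src)):.1f} MB/s")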
 

Anteaus

Platinum Member
Oct 28, 2010
2,448
4
81
I'll describe my setup just so that you understand what I'm working with. Up until a few days ago I had four 3TB WD Green drives in my desktop for storage. These aren't 7200 RPM drives, but individually they are capable of at least 100 MB/s read or write, because I've witnessed it. Two of them were moved to a new Ubuntu Server I built a few days ago.

Desktop -MSI P67A-GD55 / i7-2600k / 8GB
Server - Gigabyte H87M-D3H / i5-4570S / 8GB

When transferring via Ethernet to one system or the other I get around 100 MB/s transfer rates, with slightly reduced rates when transferring lots of small files. When transferring from drive to drive within either system, I get 1/4 to 1/2 of that speed on average with the same files. I'm not using RAID. All drives are using GPT and are set up the same, with the exception that I'm using EXT4 on the Linux drives, but that doesn't seem to affect overall transfer speed.

Since the systems are independent, have similar drives, and show similar results, I can only think that it is either a controller thing or the drives themselves. I've ruled out the drives, because anecdotally, if the drives can move data at over 100 MB/s over Ethernet, it's a reasonable assumption that the same rates can be met locally. I also have a WD 1TB Black and a Seagate 4TB in the mix that I've tried with similar results, so I don't think it is specifically drive related.

As someone mentioned earlier, I'm beginning to believe it is a limitation of the integrated SATA controllers and that it's normal. I just don't understand the technical reasoning behind the slowdown given SATA3's available bandwidth. I concur with the earlier assessment on read speeds, as I have two Samsung 830 250GB SSDs.
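
To take the anecdotal part out of it, a rough sketch like the following (Python; paths are placeholders for a large file sitting on each disk) would time a pure sequential read on each drive separately, so the per-drive speed can be separated from the simultaneous read+write case:

Code:
import time

def read_MBps(path, chunk=1024 * 1024):
    # Sequentially read the whole file and return MB/s.
    # Use a file larger than RAM (or drop the page cache first)
    # so this measures the disk rather than cached data.
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    return total / (time.time() - start) / 1e6

# Hypothetical paths: one large file on each drive.
for p in ("/mnt/disk1/big.iso", "/mnt/disk2/big.iso"):
    print(p, f"{read_MBps(p):.1f} MB/s")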
 

Elixer

Lifer
May 7, 2002
10,371
762
126
Does the CPU spike during transfers?
Is there anything else reading/writing to the drives in question (you said it was a server...)?

I know I can transfer files from SSD to SSD at full throttle, and that was also the case when I transferred files from one WD Black to another for backups.
So, something doesn't sound right here...