
SATA PCI cards that work well under Linux?

I'd probably look for something with a Silicon Image or Promise chipset, although I can't vouch for either of them personally since I'm using the onboard nVidia SATA stuff on my board.
 
This page has information on various SATA controllers and people's experiences with them...
http://linuxmafia.com/faq/Hardware/sata.html

This page has information on various individual chipsets used in cards and their current driver support status.
http://linux-ata.org/

Personally I went for the cheapest card that advertised Linux support on Newegg.com. Silicon Image seems to be the cheapest right now. I have 2 of them, 2 ports each. 13 bucks right now; I think I paid like 20 bucks for them.

But it'd probably be worth looking at something nicer if you want performance.
 
cp is pretty fast.

The things you have to worry about when copying files are stuff like permissions, ownerships, symbolic links, and special files.

Now if you're just copying data then really ownerships, symbolic links, and permissions are what's important. Timestamps are important also if you use those for managing your data.

Special files are only worth worrying about if you're duplicating an operating system. If you're copying a running operating system then temporary and virtual file systems come into consideration also. The /dev directory would contain a lot of special files; you don't want to copy /tmp, you probably want to copy /var, and /sys and /proc you don't want to copy. Of course that's only really worth worrying about if you're copying a running operating system. The easier way is to boot up a Knoppix CD-ROM and copy an operating system using that.


For different ways of copying files there are lots of options.
Probably the most popular is 'tar'. It takes a lot of things into consideration and can help preserve data integrity.
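For instance, a minimal local backup with tar could look like this (the paths are just made up for the example):

```shell
# Archive /data into a compressed tarball; permissions, ownerships,
# and symlinks are stored in the archive.
tar czf /backup/data.tar.gz /data

# Extract it somewhere else; -p restores the saved permissions and
# -C picks the destination directory.
tar xzpf /backup/data.tar.gz -C /mnt/restore
```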

Another way would be to use 'dd' to duplicate raw devices, so you can do things like copy one partition to another partition or make an image of a hard drive. Also dd has special features for changing data formats, so if you're backing up a disk to tape you can do that efficiently... but a lot of that doesn't matter so much anymore.
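As a sketch (the device names here are examples; double-check them before running anything, since dd will happily overwrite whatever you point it at):

```shell
# Copy one partition onto another, byte for byte, 1 MB at a time.
dd if=/dev/sda1 of=/dev/sdb1 bs=1M

# Or dump a whole drive into an image file.
dd if=/dev/sda of=/backup/sda.img bs=1M
```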

rsync is a popular way to back up files over slower networks, like to a remote site over a WAN link. It uses checksums to ensure data integrity, and it uses a special algorithm to only copy the changes in files. So say you have 80 gigs of data to back up, but you only modify 100 megs of it a day; then rsync will just copy that 100 megs if you back up daily.

Other ones I don't know much about are cpio and ar.

For 'cp' you may want to look at the --archive switch. It's a GNU-ism which basically combines a few normally available switches, which makes it simple to back up files without changing the metadata on them much. Read 'man cp' for details.
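In GNU coreutils, --archive is equivalent to -dR --preserve=all, so a metadata-preserving copy is just (paths are examples):

```shell
# Recurse, keep symlinks as symlinks, and preserve mode, ownership,
# and timestamps on everything copied.
cp --archive /data /backup/
```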

One useful example would be to combine commands, such as tar, bzip2, and netcat, to archive files across the network. (Note that this is done in plain text so it isn't safe over insecure networks.)

On the receiving computer you create a pipe with netcat listening on a port...
nc -l -p 8000 | tar xjvf -

Then on the sending computer you'd go:
tar cjvf - /data | nc remote.computer 8000

The v option is for verbose so you can see the progress.

If you want to time it you can use the 'time' command, and the '-q seconds' switch for netcat to tell it to break the pipe after it gets EOF (otherwise it will sit there forever waiting for more data).
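Putting those two together, one way (assuming the traditional netcat, whose -q flag takes a seconds value) would be:

```shell
# Receiver: -q 5 makes nc give up five seconds after it sees EOF,
# so the pipe actually closes when the transfer finishes.
nc -l -p 8000 -q 5 | tar xjvf -

# Sender: wrap the whole pipeline in a subshell so 'time' measures
# the tar-plus-network transfer as one unit.
time sh -c 'tar cjvf - /data | nc -q 5 remote.computer 8000'
```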

The 'j' switch for tar tells it to use the bzip2 compression method, which will help you send data quicker over slower networks. However it is CPU intensive and it is optimized for lots of small files... you may get faster results using z for gzip compression or using no compression at all.

That is how I copy my operating system when I upgrade the hard drives or want to change partitions around. There are various options to tar you would want to check out.

You can find other samples all over the internet and such for doing things like that.


But for copies within the same computer, cp usually works well. The --archive switch makes it easier so you don't have to remember the individual switches.

For fancier stuff rsync and tar are worth checking out.

To time it and see progress when doing a cp command you can go like this:

time cp -v --archive /mnt1 /mnt2
or whatever.

Verbose tells you what it's doing; time will tell you how long the operation took after the command exits. It gives a good idea of performance and is useful for doing simple benchmarking for network and drive transfers and such.

It's pretty cool all the different things you can do.
 
If you're just using cli tools anyway why run X at all?

But it might not make a difference depending on where the problem is. Run 'vmstat 1' for a bit and see what the si/so columns look like; that's swap-in/swap-out, and if they're not doing much then memory won't make much of a difference.
 
Here is some stuff I think it could possibly be. With a setup like this I figure you should expect around 30MB/s.

It looks like you've run out of RAM and it's trying to swap, but since the drives are all in use it's just going to slow down the system. Software RAID generally is going to get a good performance boost from having a goodly amount of RAM. Your CPU is actually 100% used up, but 88+% of it is I/O wait, so it's definitely stuck waiting on the drives for something.

From remote, kill X and shut down the Gnome stuff to free up some more RAM:
/etc/init.d/gdm stop
Should work for Ubuntu and Debian.


Another thing to check out is /proc/mdstat, to make sure that the array is healthy. If it's rebuilding or something then it's going to be slow.

If your setup has IDE drives where one or more drives are in the 'Slave' position, then that will cause a dramatic drop in performance. On a system no hard drive should ever be slave...

Use the hdparm program to determine if DMA access is turned on for your drives...
hdparm /dev/hda
to check
hdparm -d1 /dev/hda
to activate it.

Also, I know it sounds stupid but I've done it: make sure that the drive you're copying to is actually mounted and such. While setting up stuff I have unmounted something and then forgotten I'd done that before.

 
Bonnie++ is a decent benchmark, but have you tried hdparm yet? It won't be accurate with regards to real-world usage but it'll give you a good idea as to the ceiling and if something is wrong (like DMA disabled on PATA drives for example) it'll usually point it out pretty quickly.
 
/dev/hdc:
Timing cached reads: 1152 MB in 2.00 seconds = 575.96 MB/sec
Timing buffered disk reads: 60 MB in 3.08 seconds = 19.45 MB/sec

That's slow.

This is my laptop drive...
/dev/hda:
Timing cached reads: 620 MB in 2.00 seconds = 309.96 MB/sec
Timing buffered disk reads: 84 MB in 3.05 seconds = 27.56 MB/sec

And that is a 5400 RPM drive. A modern desktop drive (say a 120-gig 8-meg 7200) should be over 30MB/s. Something with high capacity should be faster than that.

The md0 should be a bit faster also, but that's harder to deal with.

See this post for some good drive tweaking..
http://usalug.org/phpBB2/viewtopic.php?p=33142

Their example is:
hdparm -d1 -m16 -A1 -a64 -u1 /dev/hda

Although I like -a256

Also acoustic management may be turned on.. so maybe try -M254 also.

(and also -k1 so it keeps the settings)
 
egads.

Makes a huge difference. I knew that amount of RAM had a big impact on software raid performance, but I didn't think it made that big of a difference.
 
The problem is that you probably still have an entry in /etc/fstab for your md0 mountpoint.

Commenting it out should stop the system from trying to fsck it.
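For example, an fstab line like this (the mountpoint and filesystem type here are made up) just needs a leading '#':

```
#/dev/md0   /mnt/raid   ext3   defaults   0   2
```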
 