cp is pretty fast.
The things you have to worry about when copying files are permissions, ownership, symbolic links, and special files.
If you're just copying data, then ownership, symbolic links, and permissions are what really matter. Timestamps matter too if you use them for managing your data.
Special files are only worth worrying about if you're duplicating an operating system. If you're copying a running operating system, then temporary and virtual filesystems come into consideration as well: the /dev directory contains a lot of special files, you don't want to copy /tmp, you probably do want to copy /var, and you don't want to copy /sys or /proc at all. Of course that's only really worth worrying about if you're copying a running system; the easier way is to boot up a Knoppix CD and copy the operating system using that.
As for different ways of copying files, there are lots of options.
Probably the most popular is 'tar'. It takes a lot of things into consideration and can help preserve data integrity.
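For example, a simple sketch of archiving a directory and restoring it somewhere else (the paths are made up; run it as root if you want ownership preserved on extract):
tar cvf /backup/data.tar /data
tar xpvf /backup/data.tar -C /mnt/restore
The p on extract keeps the permissions as they were stored instead of applying your umask.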
Another way would be to use 'dd' to duplicate raw devices, so you can do things like copy one partition to another or make an image of a hard drive. dd also has special features for converting data formats, so if you're backing up a disk to tape you can do that efficiently... but a lot of that doesn't matter so much anymore.
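For instance (these device names are only placeholders, double-check them first, since dd will happily overwrite whatever you point it at):
dd if=/dev/sda1 of=/dev/sdb1 bs=1M
dd if=/dev/sda of=/backup/disk.img bs=1M
The first copies one partition onto another, the second makes an image file of a whole drive.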
rsync is a popular way to back up files over slower networks, like to a remote site over a WAN link. It uses checksums to ensure data integrity, and it uses a special algorithm to copy only the changes within files. So say you have 80 gigs of data to back up but only modify about 100 megs of it a day; if you back up daily, rsync will just copy that 100 megs.
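A rough example of a daily backup to a remote machine (the host and paths are made up):
rsync -av --delete /data/ user@remote.computer:/backup/data/
The -a switch preserves permissions, ownership, timestamps, and symlinks, and --delete removes files on the far end that you've deleted locally.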
Other ones I don't know much about are cpio and ar.
For 'cp' you may want to look at the --archive switch. It's a GNU-ism which basically combines a few normally available switches, which makes it simple to back up files without changing their metadata much. Read the 'man cp' page for details.
One useful example would be to combine commands, such as tar, bzip2, and netcat, to archive files across the network. (Note that this is done in plain text, so it isn't safe over insecure networks.)
On the receiving computer you create a pipe with netcat listening on a port:
nc -l -p 8000 | tar xjvf -
Then on the sending computer you'd go:
tar cjvf - /data | nc remote.computer 8000
The v option is for verbose so you can see the progress.
If you want to time it you can use the 'time' command, and the '-q seconds' switch for netcat tells it to break the pipe a few seconds after it gets EOF (otherwise it will sit there forever waiting for more data).
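For example, the sending side with timing might look like this (the 5-second wait is just a guess, and the -q switch depends on which netcat variant you have):
time tar cjvf - /data | nc -q 5 remote.computer 8000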
The 'j' switch for tar tells it to use the bzip2 compression method, which will help you send data quicker over slower networks. However it is CPU intensive, so you may get faster results using z for gzip compression or using no compression at all.
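For instance, the same transfer with gzip instead:
tar czvf - /data | nc remote.computer 8000
with a matching tar xzvf - on the receiving end, or drop the compression letter on both sides to skip compression entirely.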
That is how I copy my operating system when I upgrade hard drives or want to change partitions around. There are various other options to tar you would want to check out.
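If you're doing that on a running system, the --exclude switch is one of those options worth knowing about; a rough sketch tying back to the /proc, /sys, and /tmp point above (the /mnt/newroot mount point is made up, and you recreate the excluded directories as empty mount points afterwards):
tar cf - --exclude=./proc --exclude=./sys --exclude=./tmp -C / . | tar xpf - -C /mnt/newroot
mkdir -p /mnt/newroot/proc /mnt/newroot/sys /mnt/newroot/tmp
(/tmp normally wants mode 1777 as well.)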
You can find other samples all over the internet and such for doing things like that.
But for copying within the same computer, cp usually works well. The --archive switch makes it easier since you don't have to remember the individual switches.
For fancier stuff rsync and tar are worth checking out.
To time a cp command and see its progress you can go like this:
time cp -v --archive /mnt1 /mnt2
or whatever.
Verbose tells you what it's doing, and time tells you how long the operation took once the command exits. That gives a good idea of performance and is useful for simple benchmarking of network and drive transfers and such.
It's pretty cool all the different things you can do.