Blazer7: Your comparison is very strange, because you should compare RAIDs with the same available space, not with the same number of disks.

And in this case RAID 10 _does_ mean more noise and power consumption, because you need more disks to get the same amount of space as with RAID 5.
Mr hashbang, my comparison is not strange at all. From your answer I understand that we are not exactly on the same wavelength. In my last post I was comparing raid 5 vs raid 10 with the same number of disks; maybe that was not very clear. My suggestion of using larger disks must have been misleading, but it was meant to make up for the space lost to the extra redundancy disks in a raid 10. My thought was that instead of spending the extra cash you intended for a good controller, you should spend it on larger hard disks instead. By doing so you would end up with the same number of disks but in greater capacity, making up for the space lost to the raid 10 config. My "measure" of comparison is not just the number of disks and total usable capacity but economics too. Doing so would result in the same number of disks, so the same noise levels and the same or almost the same power consumption. These days bigger disks do not necessarily mean more power-hungry disks.
The argument that you can take larger disks is not always valid. For example, I was planning to get 4 750GB drives and run them as RAID 5 (I'm running a RAID 5 with 4 180GB drives + 2 non-raid drives, BTW). That would give me about 2.2 TB of available disk space. Your suggestion to run RAID 10 instead means that I have to take either 6 750GB drives or 4 1000GB drives. The first option means more noise/power consumption and no free ports on the ICH9 (and how am I supposed to connect a DVD writer, a removable drive for backups, maybe Blu-ray or HD DVD in the future then?), and it will not be possible to add new drives to the RAID if needed. The second option means paying too much money for disks, because 1000GB drives are highly overpriced. I'd rather pay for a good hardware RAID controller with 8 ports than for 1000GB drives, which will become much cheaper in a year or so.
Once again we approach the same problem from a different angle. Right now you are running a raid 5 with 3x180GB disks = 540GB of usable space + 1 disk for parity. Your plan is to go for a big upgrade to 3x750GB disks = 2,250GB of usable space + 1 disk for parity. My suggestion to you is 4x1TB disks = 2TB of usable space + 2 disks for redundancy. With this you'll lose 250GB (about 10% of your usable disk space) but you will gain on security. The price will still favor the raid 10 config, as 4 1TB drives will cost less than 4 750GB drives + a good controller.
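
If you want to check the capacity numbers yourself, here is a rough back-of-the-envelope sketch in Python (just my own simplification of the math above, using the disk sizes we've been discussing):

# Rough usable-capacity math for the configs discussed above.
def raid5_usable(n_disks, size_gb):
    # RAID 5: one disk's worth of space goes to (distributed) parity.
    return (n_disks - 1) * size_gb

def raid10_usable(n_disks, size_gb):
    # RAID 10: every disk is mirrored, so half the raw space is usable.
    return (n_disks // 2) * size_gb

print(raid5_usable(4, 180))    # current array: 540 GB usable
print(raid5_usable(4, 750))    # planned RAID 5: 2250 GB usable
print(raid10_usable(4, 1000))  # suggested RAID 10: 2000 GB usable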
Since I always compare raid arrays with the same number of disks, expect the same noise levels and the same, or almost the same, power consumption. Most good quality mobos sport a 6-port raid controller via the southbridge + 1 or 2 additional controllers. Even if I were suggesting a 6-disk raid 10 config vs a 4-disk raid 5, that wouldn't pose a problem for you, at least not one like you describe.
Most good quality mobos also sport additional controllers for optical devices and the like, such as the GA-X38-DQ6 (6 ports via Intel's ICH9R southbridge + 2 ports from 1 Gigabyte controller) or the GA-N680SLI-DQ6 (6 ports via nVidia's MCP55PXE southbridge + 4 ports from 2 Gigabyte controllers). Add 1 PATA controller that supports another 2 drives and you have no problem adding secondary devices like optical drives. This is not an assumption, these are plain facts. You can have a look at my rig in my signature: I use 8 HDs + 3 opticals + 1 dedicated backup device and everything is connected to on-board controllers. No sweat, no conflicts.
Your conclusion that RAID 10 is faster than RAID 5 is wrong. That's true only compared to RAID 5 with 3 disks. RAID 5 with 4 disks (on a decent controller) is faster than any RAID 10. You are right about the latency, but on a good RAID 5 controller the increased latency is barely noticeable. It is more than compensated for by the higher transfer rate, unless you run very latency-critical applications like a database server.
Maybe you haven't read the article I mentioned in my earlier post thoroughly. Raid 5 is fast, no question about it, but it has extended latencies and parity calculation, and the data also has to pass over the pci-e bus to the southbridge and then to the northbridge and cpu. It is also more likely to become a bottleneck in extremely intensive I/O operations, not to mention that the write performance of a raid 5 array is lousy. If you don't spend much to buy a good controller, then the cpu may have to do a lot of the parity calculation and this will take its toll on the entire system.
Raid 10 is on the southbridge already, so the data doesn't have to travel over any extra bus and there are no parity calculations to do. If you compare a 4-disk raid 5 to a 4-disk raid 10, expect about the same speeds; theoretically raid 5 should be a little faster, but in practice it's not. If you do extend your raid 5 in the future, like you mentioned in one of your earlier posts, then expect it to lose hands down compared to a 6-disk raid 10, good controller or bad controller: the parity calculation alone will take its toll, not to mention the double latencies compared to the raid 10 array.
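
To make the parity argument a bit more concrete, here is a tiny Python sketch (my own simplified model, not what any real controller actually runs) of why a small raid 5 write costs four disk I/Os while a raid 10 write costs two, and what the parity calculation itself looks like:

# Simplified cost model of one small (single-block) write.
def raid5_small_write_ios():
    # Read-modify-write: read the old data block and the old parity,
    # recompute parity, then write the new data and the new parity.
    return 2 + 2  # 2 reads + 2 writes

def raid10_small_write_ios():
    # Just write the block to both halves of the mirror.
    return 2      # 2 writes, no parity math

# The parity itself is a plain XOR across the data blocks of a stripe.
def parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

print(raid5_small_write_ios(), raid10_small_write_ios())  # 4 vs 2
print(hex(parity([0x12, 0x34, 0x56])))                    # 0x70

To be fair, a full-stripe sequential write avoids that read-modify-write penalty, which is why a good raid 5 can still post high transfer rates; the 4 vs 2 above is the small random write case.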
This is what internet forums are for, and this is why I'm asking it here. Before I buy a new motherboard, I'll do some research to make sure that it is compatible with my cards.
I totally agree with you, but do not expect to find answers for everything. Usually the accumulated knowledge of forum members can work miracles, but that's the rule and like any rule it has exceptions. Still, I do agree with you wholeheartedly.
You seem to prefer economical solutions. So, how am I supposed to make a complete backup/restore of 2 TB of data economically? Burn it on DVDs? I'd be busy doing that until my retirement! ;-) Professional backup devices designed for such amounts of data are more expensive than a RAID controller like the 3ware 9650SE. And I won't need it most of the time. I don't make complete backups of my RAID. I back up only data that I could not restore from other sources after a RAID failure, like my own documents, config files etc.
I used SCSI disks for more than 10 years and I ended up with "affordable" solutions because I just couldn't afford to keep up with SCSI prices. You may have noticed the "you may need to rebuild your raid from scratch" note. There are some cases, especially when migrating from one of Intel's ICHxR controllers to another, where everything will work fine and you won't have to do a thing (OS excluded), but most times you will have to rebuild your raid. You are right on the backup thing, as it is almost impossible to back up TBs of data. As I pointed out myself, the migration is a point where the add-on cards win.
The migration, however, is something you will have to deal with only a few times. What you will have to deal with more often is security. Lose 1 disk and you are on the edge, but things are slightly better with a raid 10.
If you compare a 4-disk raid 5 vs a 4-disk raid 10, the raid 5 array can afford to lose 1 disk only. Raid 10 can also afford to lose 1 disk, but there's still a 66% chance that the array will survive a 2nd loss. With 6-disk arrays a raid 5 can still afford the loss of only 1 disk, whereas a raid 10 can afford 1 disk, has an 80% chance to survive a 2nd loss, and if it does, it has another 50% chance to survive a 3rd loss. And despite the loss of a disk or disks, if a raid 10 remains operational it won't suffer any read/write penalties; the same does not apply to raid 5 arrays.
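
If you don't want to take my percentages on faith, here is a quick brute-force check in Python (it simply enumerates failure orders over the mirrored pairs, nothing fancy):

from itertools import permutations

def raid10_survives(failed, n_disks):
    # Mirror pairs are (0,1), (2,3), (4,5), ...; the array dies only
    # when both disks of some pair are gone.
    pairs = [(i, i + 1) for i in range(0, n_disks, 2)]
    return not any(a in failed and b in failed for a, b in pairs)

def chance_to_survive_kth_loss(n_disks, k):
    # Among all failure orders that survive the first k-1 losses,
    # what fraction also survive the k-th loss?
    ok_before = ok_after = 0
    for order in permutations(range(n_disks), k):
        if raid10_survives(set(order[:-1]), n_disks):
            ok_before += 1
            if raid10_survives(set(order), n_disks):
                ok_after += 1
    return ok_after / ok_before

print(chance_to_survive_kth_loss(4, 2))  # ~0.67 for a 4-disk raid 10
print(chance_to_survive_kth_loss(6, 2))  # 0.8 for a 6-disk raid 10
print(chance_to_survive_kth_loss(6, 3))  # 0.5 after surviving two losses

A raid 5 of any size, by contrast, is dead the moment a 2nd disk goes before the rebuild finishes.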
On top of that you should take into account raid rebuild times. If you replace a disk in a raid 5 array, the controller has to recalculate all the parity/data fragments from all remaining disks to rebuild the array; this is complex, it takes time, and while rebuilding it consumes much of the cpu's time as well. The rebuild of a raid 10 disk is a joke by comparison, it's just a straight copy from the surviving mirror.
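
Roughly speaking (again, just my own back-of-the-envelope model, not the behaviour of any specific controller), the rebuild workloads compare like this:

# Approximate data moved to rebuild one replaced disk, in GB.
def raid5_rebuild_io_gb(n_disks, disk_gb):
    # Every surviving disk must be read in full and XORed together to
    # regenerate the missing member, which is then written out in full.
    return (n_disks - 1) * disk_gb + disk_gb

def raid10_rebuild_io_gb(disk_gb):
    # Straight copy: read the surviving mirror, write the new disk.
    return disk_gb + disk_gb

print(raid5_rebuild_io_gb(4, 750))  # 3000 GB touched for a 4x750GB raid 5
print(raid10_rebuild_io_gb(1000))   # 2000 GB touched for one 1TB mirror

And that is before you count the XOR work itself, which on a cheap controller lands on the cpu while the array is already degraded.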
From my point of view, if you really want to make sense of an expensive add-on raid 5 card then you should go for a SCSI controller that supports hot-spare, plus SCSI U320 disks and a server board too. If you are not willing to go for that kind of hardware, then the best you can do is buy a good SATA2 controller and use WD Raptors or Seagate Barracuda 7200.11 drives, but this is far from a professional setup. Trust me, I know, because I use 6 Raptors in my current setup (see my rig in my signature) and have a system at work that uses a 4-disk raid 5 based on SCSI U160 LVD disks. If you want to spend, spend big; if you can't, don't spend at all. This is only my humble opinion though.
Sure! But my question was about X38 and PCIe x4 compatibility, not about RAID configurations.
Well we are both to blame for this. We've turned this into a raid 5 vs raid 10 thread. What can I say.
There are rumours that X38 boards do have problems with pci-e x4 & pci-e x8 add-on cards. Since pci-e x4 & x8 controllers are expensive, I would suggest that you wait for the upcoming X48 boards. The new boards should appear in Q1 2008.
PS
I actually enjoy our argument here, but this must end. I do respect your resolution on this, even if I do not agree 100% and would have gone a different way myself. Regardless of all this, I wish you good luck with your future choices.