X38 and PCI Express x4 compatibility (RAID controllers)

hashbang

Junior Member
Oct 29, 2007
6
0
0
Hi!

Does anybody have experience with X38 boards and hardware RAID controllers? In a German forum, someone reported that his GA-X38-DQ6 did not boot with a 3ware Escalade 9650SE attached.

There is also an article on Fudzilla about PCIe problems with X38:

http://www.fudzilla.com/index....view&id=3063&Itemid=37

Does anybody know more details? I wanted to buy the Gigabyte board mentioned above, but that report made me cautious. I hope that particular board was just defective, but I have a bad feeling about it. It would be too bad if PCIe x4 controllers did not work with the X38 chipset at all. Unfortunately, most reviews concentrate on graphics performance and do not cover such topics.
 

renethx

Golden Member
Apr 28, 2005
1,161
0
0
Who the hell would want to install a RAID controller in an X38 board? It's a waste of money. Why not choose nForce 680i for this purpose? It has three or four PCI Express x16 slots running at x16/x16/x8 or x16/x8/x8/x8 (like the MSI P6N Diamond), so it offers better expandability at a lower cost.
 

hashbang

Junior Member
Oct 29, 2007
6
0
0
P35/X38 boards have much better overclocking capabilities with the Q6600, which is why I'm looking at an Intel chipset. As for expansion slots, one x16 for the video card, one x4 for the RAID controller and a few x1 slots for future additions are enough for me. I'm not planning to install other high-bandwidth cards in the foreseeable future.
 

Blazer7

Golden Member
Jun 26, 2007
1,136
12
81
The X38 boards will soon be replaced by X48 boards. As for the extra controller, the GA-X38-DQ6 offers 6 SATA2 ports via Intel's ICH9R southbridge, which is capable of RAID 0, 1, 10 & 5 and is a quite decent controller. Unless you need a serious RAID 5 controller with XOR logic in hardware, it doesn't make much sense to spend the extra $$$ on an add-on card.
 

hashbang

Junior Member
Oct 29, 2007
6
0
0
I need RAID 5 with 4 drives for now (and the option to add more drives when needed). Most importantly, it should still work when I move it to a new motherboard. I've had bad experiences with software RAID 5. Is there any info on whether the issues have been fixed in X48? In fact, I haven't even seen an official statement confirming such issues, only several forum posts. OTOH, some other people reported that they could successfully use an x4 RAID card.
 

Blazer7

Golden Member
Jun 26, 2007
1,136
12
81
Even so, a RAID 10 with 6 disks will give you more speed, greater security and peace of mind, as there won't be any conflicts or other issues, and it will still cost you less. A good RAID 5 controller with 6 ports will cost you an arm and a leg, and there is no guarantee that it will run trouble-free with this or future mobos.
 

hashbang

Junior Member
Oct 29, 2007
6
0
0
RAID 10 has several disadvantages:

- more disk space wasted
- more noise
- higher power consumption -> even more noise due to cooling

And no, it won't give more speed compared to a good RAID 5 controller. RAID 10 gives double the transfer rate of a single disk, while RAID 5 gives a theoretical rate of (number of drives minus 1) times that of a single disk.
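
To put rough numbers on that, here's a quick back-of-the-envelope sketch (the 75 MB/s per-disk figure is just a placeholder for a typical 7200 rpm drive, not a benchmark, and it uses the simple rules of thumb above, ignoring controller and bus overhead):

```
# Theoretical streaming throughput, using the simple rules of thumb above.
# 75 MB/s per disk is a placeholder figure, not a measurement.
PER_DISK_MBPS = 75

def raid5_throughput(n_disks, per_disk=PER_DISK_MBPS):
    # one disk's worth of bandwidth/capacity goes to parity
    return (n_disks - 1) * per_disk

def raid10_throughput(n_disks, per_disk=PER_DISK_MBPS):
    # data is striped across n/2 mirrored pairs
    return (n_disks // 2) * per_disk

for n in (4, 6):
    print(n, raid5_throughput(n), raid10_throughput(n))
# 4 disks: RAID 5 ~225 MB/s vs RAID 10 ~150 MB/s
# 6 disks: RAID 5 ~375 MB/s vs RAID 10 ~225 MB/s
```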

A good RAID 5 controller with 6 ports will cost you an arm and a leg, and there is no guarantee that it will run trouble-free with this or future mobos.

The chances are still much better than with onboard RAID. Usually I can just plug the controller into the new motherboard. With software RAID I will most likely have to rebuild the array, which means a complete backup/restore, too much hassle. Good hardware RAID controllers aren't cheap, I know.
 

Blazer7

Golden Member
Jun 26, 2007
1,136
12
81
You are right about the space wasted, but that's about it. The noise/power consumption argument is non-existent. Noise has to do with the number of disks, not with controllers. The same goes for power consumption and the extra noise due to cooling.

If you compare a RAID 5 with 4 disks to a RAID 10 with 4 disks, the noise levels, with or without cooling, are the same, and so is power consumption. The same goes for RAID 5 with 6 disks compared to RAID 10 with 6 disks. In both comparisons, expect RAID 10 to be faster, as there is no parity calculation. RAID 10 also offers more safety, and this is undisputed.

There is a very good article here that describes both RAID 5 and RAID 10 with all their advantages and disadvantages. The conclusion is that RAID 10 is faster and more secure. RAID 5 is quite fast, but a lot is lost to parity calculation and extended latencies.

My argument is that you are planning to set up a RAID 5 with 4 drives and an option to add more drives when needed, and this board has a 6-port controller capable of RAID 10, which is faster and more reliable.

The cost of the add-on controller compared to the cost of getting larger disks to compensate for the space lost to a RAID 10 config also favours RAID 10. RAID migration favours the add-on card for sure, and so far this is its only real advantage, at least from my perspective.

Talking about RAID migration, you may find yourself in a tough position if your next mobo has issues with your add-on card, and that would be hard to swallow, especially if you paid an arm and a leg for the card. With RAID 10 you will probably have to rebuild your array from scratch, but that's about it. It is very easy to find boards with 6-port onboard controllers, and onboard controllers also have a better record when it comes to conflicts.

PS
This thread really belongs in the Peripherals or General Hardware section.
 

Dougall

Junior Member
Oct 31, 2007
2
0
0
Doesn't the Intel architecture (for desktop motherboards) only allow you to use up to 4 disks in any RAID configuration? Unless, of course, this has changed with the all-new X38 chip.
 

hashbang

Junior Member
Oct 29, 2007
6
0
0
Blazer7: Your comparison is very strange, because you should compare RAIDs with the same available space, not with the same number of disks. :) And in this case RAID 10 _does_ mean more noise and power consumption, because you need more disks to get the same amount of space as with RAID 5.

The argument that you can just take larger disks is not always valid. For example, I was planning to get 4 750GB drives and run them as RAID 5 (I'm currently running a RAID 5 with 4 180GB drives + 2 non-RAID drives, BTW). That would give me about 2.2 TB of available disk space. Your suggestion to run RAID 10 instead means that I have to take either 6 750GB drives or 4 1000GB drives. The first option means more noise/power consumption and no free ports on the ICH9R (and how am I supposed to connect a DVD writer, a removable drive for backups, maybe Blu-ray or HD DVD in the future?), and it will not be possible to add new drives to the RAID if needed. The second option means paying too much money for disks, because 1000GB drives are highly overpriced. I'd rather pay for a good hardware RAID controller with 8 ports than for 1000GB drives, which will become much cheaper in a year or so.
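
For the record, here's the simple usable-space arithmetic behind those numbers (a rough sketch that ignores formatting overhead, which is why 4x750GB shows 2250GB rather than the ~2.2TB you actually get):

```
# Usable capacity of the configurations discussed above,
# ignoring filesystem/formatting overhead.
def raid5_usable(n_disks, size_gb):
    # one disk's worth of space holds parity
    return (n_disks - 1) * size_gb

def raid10_usable(n_disks, size_gb):
    # half the disks are mirrors
    return (n_disks // 2) * size_gb

print(raid5_usable(4, 750))    # 2250 GB -- 4 x 750GB in RAID 5
print(raid10_usable(6, 750))   # 2250 GB -- 6 x 750GB in RAID 10, two extra drives
print(raid10_usable(4, 1000))  # 2000 GB -- 4 x 1TB in RAID 10, pricier drives
```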


Your conclusion that RAID 10 is faster than RAID 5 is wrong. That's true only compared to a RAID 5 with 3 disks. A RAID 5 with 4 disks (on a decent controller) is faster than any RAID 10. You are right about the latency, but on a good RAID 5 controller the increased latency is barely noticeable, and it is more than compensated for by the higher transfer rate, unless you run very latency-critical applications like a database server.

Talking about RAID migration, you may find yourself in a tough position if your next mobo has issues with your add-on card, and that would be hard to swallow, especially if you paid an arm and a leg for the card.

This is what internet forums are for, and this is why I'm asking here. Before I buy a new motherboard, I'll do my research to make sure that it is compatible with my cards.

With RAID 10 you will probably have to rebuild your array from scratch, but that's about it.

You seem to prefer economical solutions. So, how am I supposed to make a complete backup/restore of 2 TB of data economically? Burn it to DVDs? I'd be busy doing that until my retirement! ;-) Professional backup devices designed for that amount of data are more expensive than a RAID controller like the 3ware 9650SE. And I won't need it most of the time: I don't make complete backups of my RAID. I back up only the data that I could not restore from other sources after a RAID failure, like my own documents, config files etc.


This thread really belongs in the Peripherals or General Hardware section.

Sure! But my question was about X38 and PCIe x4 compatibility, not about RAID configurations. ;-)
 

Dougall

Junior Member
Oct 31, 2007
2
0
0
Originally posted by: Dougall
Doesn't the Intel architecture (for desktop motherboards) only allow you to use up to 4 disks in any RAID configuration? Unless, of course, this has changed with the all-new X38 chip.

Whoops, I meant the ICH9R chip (onboard). I just read that the ICH8R can only do 4 disks in a single array and the ICH9R can do up to 6, so yeah, I win the award for most useless post!!! Oh yeah!!
 

Blazer7

Golden Member
Jun 26, 2007
1,136
12
81
Blazer7: Your comparison is very strange, because you should compare RAIDs with the same available space, not with the same number of disks. :) And in this case RAID 10 _does_ mean more noise and power consumption, because you need more disks to get the same amount of space as with RAID 5.

Mr hashbang, my comparison is not strange at all. From your answer I understand that we are not exactly on the same wavelength. In my last post I was comparing RAID 5 vs RAID 10 with the same number of disks; maybe that was not very clear. My suggestion of using larger disks must have seemed misleading, but it was meant to make up for the space lost by RAID 10 using more disks for redundancy. My thought was that instead of spending the extra cash you intended for a good controller, you should spend it on larger hard disks instead. By doing so you would end up with the same number of disks but with greater capacity, thus making up for the space lost to the RAID 10 config. My "measure" of comparison is not just the number of disks and total usable capacity but economics also. Doing so would result in the same number of disks, so the same noise levels and the same or almost the same power consumption. These days bigger disks do not necessarily mean more power-hungry ones.

The argument that you can just take larger disks is not always valid. For example, I was planning to get 4 750GB drives and run them as RAID 5 (I'm currently running a RAID 5 with 4 180GB drives + 2 non-RAID drives, BTW). That would give me about 2.2 TB of available disk space. Your suggestion to run RAID 10 instead means that I have to take either 6 750GB drives or 4 1000GB drives. The first option means more noise/power consumption and no free ports on the ICH9R (and how am I supposed to connect a DVD writer, a removable drive for backups, maybe Blu-ray or HD DVD in the future?), and it will not be possible to add new drives to the RAID if needed. The second option means paying too much money for disks, because 1000GB drives are highly overpriced. I'd rather pay for a good hardware RAID controller with 8 ports than for 1000GB drives, which will become much cheaper in a year or so.

Once again we approach the same problem from a different angle. Right now you are running a RAID 5 where 3x180GB = 540GB is usable space and 1 disk's worth goes to parity. Your plan is a big upgrade where 3x750GB = 2,250GB is usable space plus 1 disk's worth of parity. My suggestion is 4x1TB disks = 2TB of usable space plus 2 disks' worth of redundancy. With this you'll lose 250GB (about 10% of your usable disk space) but you will gain on security. The price still favours the RAID 10 config, as 4 1TB drives will cost less than 4 750GB drives plus a good controller.
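
Putting the three configurations side by side with the same simple arithmetic as before (formatting overhead ignored, so treat the numbers as rough):

```
# Usable space of the three configs above: the current array,
# the planned RAID 5 upgrade and my RAID 10 counter-proposal.
current_raid5   = (4 - 1) * 180    # 540 GB usable, 1 disk's worth of parity
planned_raid5   = (4 - 1) * 750    # 2250 GB usable, 1 disk's worth of parity
proposed_raid10 = (4 // 2) * 1000  # 2000 GB usable, 2 disks' worth of redundancy

loss = planned_raid5 - proposed_raid10
print(loss, round(100 * loss / planned_raid5))  # 250 GB, about 10-11% of the usable space
```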

Since I always compare RAID arrays with the same number of disks, expect the same noise levels and the same or almost the same power consumption. Most good-quality mobos sport a 6-port RAID controller via the southbridge plus 1 or 2 additional controllers. Even if I were suggesting a 6-disk RAID 10 config vs a 4-disk RAID 5, that wouldn't pose a problem for you, at least not one like you describe.

Most good-quality mobos sport additional controllers for optical devices and the like, such as the GA-X38-DQ6 (6 ports via Intel's ICH9R southbridge + 2 ports from a Gigabyte controller) or the GA-N680SLI-DQ6 (6 ports via nVidia's MCP55PXE southbridge + 4 ports from 2 Gigabyte controllers). Add a PATA controller that supports another 2 drives and you have no problem adding secondary devices such as optical drives. This is not an assumption; these are plain facts. Have a look at the rig in my signature: I use 8 HDs + 3 opticals + 1 dedicated backup device, and everything is connected to onboard controllers. No sweat, no conflicts.

Your conclusion that RAID 10 is faster than RAID 5 is wrong. That's true only compared to a RAID 5 with 3 disks. A RAID 5 with 4 disks (on a decent controller) is faster than any RAID 10. You are right about the latency, but on a good RAID 5 controller the increased latency is barely noticeable, and it is more than compensated for by the higher transfer rate, unless you run very latency-critical applications like a database server.

Maybe you haven't read the article I mentioned in my earlier post thoroughly. RAID 5 is fast, no question about it, but it has extended latencies and parity calculation, and the data also has to pass from the PCIe bus to the southbridge and then to the northbridge and CPU. It is also more likely to become a bottleneck in extremely intensive I/O, not to mention that the write performance of a RAID 5 array is lousy. If you don't spend much on a good controller, the CPU may have to do a lot of the parity calculation, and that will take its toll on the entire system.

RAID 10 is on the southbridge already; the data doesn't have to travel over an extra bus, and there are no parity calculations. If you compare a 4-disk RAID 5 to a 4-disk RAID 10, expect about the same speeds. Theoretically RAID 5 should be a little faster, but in practice it's not. If you do extend your RAID 5 in the future, as you mentioned in one of your earlier posts, then compared to a 6-disk RAID 10 expect to lose hands down, good controller or bad; the parity calculation alone will take its toll, not to mention the doubled latencies compared to RAID 10 arrays.

This is what internet forums are for, and this is why I'm asking here. Before I buy a new motherboard, I'll do my research to make sure that it is compatible with my cards.

I totally agree with you, but don't expect to find answers for everything. Usually the accumulated knowledge of forum members can work miracles, but that's the rule, and like any rule it has exceptions. Still, I agree with you wholeheartedly.

You seem to prefer economical solutions. So, how am I supposed to make a complete backup/restore of 2 TB of data economically? Burn it to DVDs? I'd be busy doing that until my retirement! ;-) Professional backup devices designed for that amount of data are more expensive than a RAID controller like the 3ware 9650SE. And I won't need it most of the time: I don't make complete backups of my RAID. I back up only the data that I could not restore from other sources after a RAID failure, like my own documents, config files etc.

I used SCSI disks for more than 10 years, and I ended up using "affordable" solutions because I just couldn't keep up with SCSI prices. You may have noticed my "you may need to rebuild your RAID from scratch" note. In some cases, especially when migrating from one of Intel's ICHxR controllers to another, everything will work fine and you won't have to do a thing (OS excluded), but most of the time you will have to rebuild your array. You are right on the backup thing, as it is almost impossible to back up TBs of data. As I pointed out myself, migration is a point where the add-on cards win.

Migration, however, is something you will have to deal with only a few times. What you will have to deal with more often is security. Lose 1 disk and you are on the edge, but things are slightly better with RAID 10.

If you compare a 4-disk RAID 5 to a 4-disk RAID 10, the RAID 5 array can afford to lose only 1 disk. RAID 10 can afford to lose 1 disk too, but there's still a 66% chance that the array will survive a 2nd loss. With 6-disk arrays, a RAID 5 can afford the loss of only 1 disk, while a RAID 10 can afford 1 disk, has an 80% chance of surviving a 2nd loss, and if it does, another 50% chance of surviving a 3rd loss. And despite the loss of a disk or disks, if a RAID 10 remains operational it won't suffer any read/write penalties; the same does not apply to RAID 5 arrays.
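
Those percentages are just a matter of counting which disk the next random failure is allowed to hit; here's a quick sketch of that counting (assuming failures hit any surviving disk with equal probability):

```
# Chance that a RAID 10 of n disks (n/2 mirrored pairs) survives the next
# random failure, given some pairs already run on a single disk.
# A failure is fatal only if it hits the lone survivor of a degraded pair.
def raid10_survives_next(total_disks, degraded_pairs):
    remaining = total_disks - degraded_pairs   # disks still spinning
    fatal = degraded_pairs                     # one "must not fail" disk per degraded pair
    return 1 - fatal / remaining

print(raid10_survives_next(4, 1))  # ~0.67 -> ~66% after the 1st loss in a 4-disk array
print(raid10_survives_next(6, 1))  # 0.80 -> 80% after the 1st loss in a 6-disk array
print(raid10_survives_next(6, 2))  # 0.50 -> 50% after a 2nd loss in different pairs
```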

On top of that, you should take RAID rebuild times into account. If you replace a disk in a RAID 5 array, the controller has to recalculate the missing data from parity across all remaining disks to rebuild the array; this is complex, it takes time, and while rebuilding it also consumes a lot of CPU time. Rebuilding a RAID 10 disk, by comparison, is a joke: it's a straight copy from the mirror.

From my point of view, if you really want an expensive add-on RAID 5 card to make sense, you should go for a SCSI controller that supports hot-spare, SCSI U320 disks and a server board too. If you are not willing to go for that kind of hardware, the best you can do is buy a good SATA2 controller and use WD Raptors or Seagate Barracuda 7200.11 drives, but that is far from a professional setup. Trust me, I know, because I use 6 Raptors in my current setup (see my rig in my signature) and work with a system that uses a 4-disk RAID 5 based on SCSI U160 LVD disks. If you want to spend, spend big; if you can't, don't spend at all. This is only my humble opinion, though.

Sure! But my question was about X38 and PCIe x4 compatibility, not about RAID configurations

Well, we are both to blame for this. We've turned this into a RAID 5 vs RAID 10 thread. What can I say?

There are rumours that X38 boards do have problems with PCIe x4 & x8 add-on cards. Since PCIe x4 & x8 controllers are expensive, I would suggest that you wait for the upcoming X48 boards. The new boards should appear in Q1 2008.

PS
I've actually enjoyed our argument here, but it must end. I respect your decision even if I don't agree 100%, and I would have gone a different way myself. Regardless, I wish you good luck with your future choices.
 

hashbang

Junior Member
Oct 29, 2007
6
0
0
Thank you for your opinion. I respect your choice, although I'm still inclined towards RAID 5. Yes, I know about its relatively bad write performance. That is actually the only real disadvantage of RAID 5 for me, but it's outweighed by the other advantages, IMHO.

BTW, 3ware SATA controllers do support hot spare, and there is also an optional battery backup unit to prevent data loss on power failure. No need to go SCSI here unless you need 15,000 rpm drives and are ready to pay the premium (I'm not). As for drives, I was planning to get 4 WD7500AAKS, because according to reviews they produce less noise, have a good price-per-GB ratio and are reasonably fast. Raptors are very nice drives, but they are too small and too loud for me. I'm not after benchmark records. My primary criteria are space, reasonable reliability, low noise and an easy upgrade path without hassle.

Back to the topic: so nobody here has actually tried to run an x4 PCI Express card on an X38 motherboard? Or maybe someone knows at least some real facts about this issue, not just rumors?

Waiting for X48 is a possible option, of course, but it hasn't yet been officially confirmed that X38 has compatibility issues, nor that they will be fixed in X48. Maybe the problems reported in forums are just motherboard-related, who knows. Also, I've already been waiting too long: my hope was X38, and I was planning to upgrade now. The reported issues are very disappointing, and there are actually only 2 DDR2 X38 mobos available right now (I don't count variations like DQ6/DS5, which are practically the same boards). I hope that more good mobos will appear in the near future.
 

Sheninat0r

Senior member
Jun 8, 2007
515
1
81
I read somewhere about someone with a PCI-e x4 card in their x16 slot that wouldn't work; I don't know if it was an X38, but it was on a fairly recent mobo and they fixed it by setting their BIOS to dual vid card mode. I guess you could try that?
 

Blazer7

Golden Member
Jun 26, 2007
1,136
12
81
Mr hashbang, there is a good thread about Gigabyte's GA-X38-DQ6 mobo here, and I remember a guy asking about using a 3ware 9650SE controller with this mobo. I do not know if he went ahead with it, but it may be worth investigating. His post was on page 3 of that thread. In the same thread another member mentioned something about reported problems with PCIe x4 & x8 cards, but that's about it.