Installing Linux on ICH5R RAID0

aheartattack

Member
Aug 18, 2006
I've got two 80GB SATA drives in RAID-0 via the intel ICH5R RAID controller on my 875P motherboard. XP and all my data are on those 160 gigs, and I want to use the RAID-0 for gaming performance. No matter what I try, no Linux will install to it. The installer just says something about the new kernel not supporting software RAID, and that although the BIOS reports RAID, it's only software RAID. Could someone help me with a workaround so that I can have Linux (preferably SUSE 10) alongside my XP on my RAID? Thanks.
 

cleverhandle

Diamond Member
Dec 17, 2001
The Linux SATA RAID FAQ claims that Fedora Core supports installing to a software RAID array. Most distros do not, because the hardware sucks: it's poorly documented, makes booting difficult, and requires kernel infrastructure that is constantly changing. You'll be much better off getting yourself another hard drive and installing Linux on that. From there you can get access to the RAID (accessing it is not that hard; installing to it and booting from it is the problem).

Better yet, forget using the crappy software RAID for whatever "gaming performance" you think you're getting. If you can't buy a real controller, just use single drives.
 

aheartattack

Member
Aug 18, 2006
There's a catch to that. The intel ICH5R provides 'true' RAID for two SATA drives. Several review tests have shown significant performance increases over two single SATA drives. My board also has a Promise SATA RAID controller, which is purely soft RAID. I used it before and noticed better performance after switching to the intel controller.

That said, another non-RAID SATA drive on the Promise will solve my problem. And I did install Linux on a friend's SATA drive that way as a test. I was wondering if it's possible on this RAID. :)
 

Nothinman

Elite Member
Sep 14, 2001
Originally posted by: aheartattack
The intel ICH5R RAID provides 'true' RAID for two sata drives

If it was true RAID you wouldn't have these problems. They stem from the fact that the RAID is done in the drivers, not at the hardware level; there's just enough magic to let the BIOS boot from the two drives, and after that everything is in software.

Originally posted by: aheartattack
I was wondering if it's possible on this raid. :)

'Possible' and 'too difficult to be worth it' are separate things; I would say this falls into the latter category.
 

SleepWalkerX

Platinum Member
Jun 29, 2004
Try going into the BIOS and setting your RAID to be detected as IDE. Then SUSE should pick up on it, I think.
 

postmortemIA

Diamond Member
Jul 11, 2006
Only Fedora Core 5 supports ICH5R, NVRAID, etc. SUSE won't "pick it up"; it will see two drives, like every other distro.

And Linux's support for this soft-RAID sucks; don't blame the hardware... it works just fine and gives a great performance gain in Windows.
 

aheartattack

Member
Aug 18, 2006
I think by 'true' RAID, intel means the SATA drives can be RAIDed and are handled differently from the IDE drives on the same controller. And from experience, I can say the performance boost in Windows is quite significant; much better than on the Promise 'software' RAID controller also on the board.

To SleepWalkerX, there aren't settings in the BIOS to get the RAID detected as IDE.

Looks like I'll have to go over to Fedora Core 5. I'm personally a bit afraid of Fedora. Their English is... erm... shall we say... confusing. A few years back, when I hadn't gotten into technical stuff, I tried to install a Red Hat distro. It gave such a kind little warning, saying stuff like "oh, this won't hurt very much, it's for your own good", like a dentist. Well, that day I learned not to trust dentists. :) I know Fedora has gotten better, but you know how a childhood fear can cling on. Besides, the YaST installer is so much prettier.

Are there RAIDs on 965/975 boards? I'm going to upgrade to a Conroe some time soon. If there is RAID on those boards, is it 'software' ('true', if that counts) or 'hardware' RAID?
 

SleepWalkerX

Platinum Member
Jun 29, 2004
Originally posted by: postmortemIA
Only Fedora Core 5 supports ICH5R, NVRAID, etc. SUSE won't "pick it up", it will see two drives like every other distro.

And linux soft-RAID support sucks, don't blame the hardware...it works just fine and gives great performance gain in Windows.

Eh? Why would you say that only FC5 supports ICH5R? The ICH5R is supported by ata_piix, which has been in the 2.6 kernel for a long time now: http://linuxmafia.com/faq/Hardware/sata.html#intel-ich5

I think it can be solved by messing around in the BIOS.
 

cleverhandle

Diamond Member
Dec 17, 2001
Originally posted by: SleepWalkerX
Eh? Why would you say that only FC5 supports ICH5R?.. The ICH5R's raid is supported by ata_piix which has been in the 2.6 kernel for a long time now...
Having a module in the kernel is different from being supported by the installer. AFAICT, booting from a dmraid device requires some special logic in the initrd that isn't standard yet.
I think it can be solved by messing around in the BIOS.
I doubt it. Not if the OP wants to keep the Windows data that's already on the array.
Originally posted by: aheartattack
Looks like I'll have to go over to Fed 5. I'm personally a bit afraid of Fed. Their english is ...erm....shall we say... confusing. A few years back, when I hadn't gotten into technical stuff, I tried to install a Red Hat distro.
A few years is an eternity in open-source time; you shouldn't judge a distro based on experiences that old.

As far as onboard RAID is concerned, it's all software RAID except on a few very high-end server motherboards (>$500 stuff). If you want real RAID, get something like a 3Ware card; they have a 2-drive card for a little under $150.

 

drag

Elite Member
Jul 4, 2002
Originally posted by: aheartattack
I think by 'true' RAID, intel means the SATAs can be raided and are handled differently from the IDEs on the same controller. And from experience, I can say that the performance boost in Windows is quite significant. Much better than on the Promise 'software' RAID controller also on the board.

This probably has to do with the Intel drivers being much better than the Promise ones. Read the Linux SATA FAQ.

'Real' hardware RAID has a separate processor that calculates parity data and other things, which frees up your system for more important work. What you have is commonly referred to as 'fakeraid' in Linux-land.

It's similar to winmodems vs. hardware modems: substituting drivers for hardware. That's not to say it isn't useful. Winmodems, for instance, once you had a fast enough machine, tended to be faster and more flexible than their more expensive hardware counterparts.

However, in Linux-land the native software RAID is very kick-ass. It's one of the 'special' things that Linux does well. It's faster and more reliable than any fakeraid driver.

It's even faster than most hardware RAID, because your processor is much, much more powerful than the general-purpose embedded processors used on hardware RAID cards. There is a price you pay for this, though: your PCI bandwidth gets soaked up, since the data and parity travel over your system's bus. This limits the usefulness of a software RAID machine to basically fast network storage and medium-duty server work.
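To give an idea of what that MD software RAID looks like in practice, here is a minimal mdadm sketch. It assumes two spare partitions named /dev/sda1 and /dev/sdb1 (placeholder names) and root privileges, so treat it as an illustration rather than a recipe:

```shell
# Build a two-disk MD RAID-0 (striped) array from two partitions.
# /dev/sda1 and /dev/sdb1 are hypothetical names; adjust for your system.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# The kernel's view of all MD arrays, including build/sync status.
cat /proc/mdstat

# Put a filesystem on the array and mount it like any other block device.
mkfs.ext3 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
```

The whole array lives in the kernel's MD layer, so it works the same on any controller the drives happen to be attached to.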

Originally posted by: aheartattack
To SleepWalkerX, there aren't settings on the BIOS to get the RAID detected as IDE.

Looks like I'll have to go over to Fed 5. I'm personally a bit afraid of Fed. Their english is ...erm....shall we say... confusing. A few years back, when I hadn't gotten into technical stuff, I tried to install a Red Hat distro. It gave such a kind little warning that said stuff like "oh, this won't hurt very much. It's for your own good" like a dentist. Well that day, I learned not to trust dentists. :) I know Feds have got better, but you know how a childhood fear can cling on. Besides, the YAST installer is so much prettier.

The thing is, unless you require Windows compatibility, you should just use Linux software MD RAID.

Believe me, performance is one thing Linux kernel developers care about above all else. If the fakeraid route were faster or better, there would be much better support for it.

Originally posted by: aheartattack
Are there RAIDs on 965/975 boards? I'm going to upgrade some time soon to a conroe. If there is RAID on those boards, are they 'software', ('true' if it counts) or 'hardware' RAID?

I like this page:
http://linuxmafia.com/faq/Hardware/sata.html

It collects reports from real-world experience with Linux and various SATA RAID devices. Using it you can find out which devices are 'real' RAID, and which vendors support fakeraid drivers on Linux (although, as I pointed out, unless you need Windows compatibility on the same machine it's better to use standard Linux MD software RAID).

Some of the SATA RAID devices there are very nice. Battery backup for a large disk cache. Full hotplug support. Good error-detection mechanisms. Good fun stuff.


Also, besides Linux software RAID, real RAID, and proprietary fakeraid drivers, there is open-source fakeraid support in the form of dmraid.

DM is 'device mapper' and is used for things like LVM (logical volume management). MD is 'multiple device' and is the seasoned (and much preferred) way to do software RAID. So the names are a bit confusing.

The official site for DMraid support is:
http://people.redhat.com/~heinzm/sw/dmraid/

Devices currently supported:
Adaptec HostRAID ASR
Highpoint HPT37X
Highpoint HPT45X
Intel Software RAID
JMicron JMB36x
LSI Logic MegaRAID
NVidia NForce
Promise FastTrack
Silicon Image Medley
VIA Software RAID
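For reference, driving dmraid from the command line looks roughly like this (a sketch; whether your set shows up, and under what name, depends on the vendor metadata dmraid finds on the drives):

```shell
# Scan the drives for vendor RAID metadata and list any sets found.
dmraid -s

# Activate all discovered sets; device-mapper nodes appear in /dev/mapper/.
dmraid -ay

# The activated set (and its partitions) can then be used like any disk.
ls /dev/mapper/
```

This is read-only discovery plus activation; the actual striping/mirroring is done by the kernel's device-mapper targets once the set is active.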


Personally I am using MD software RAID 5 in my Debian server box. Actually I have been playing around with it, and I have my desktop system mounted over iSCSI on gigabit ethernet, running on my Debian server.

I have 5 120GB drives of slightly different makes and sizes, with a partition on each so that they all match. Four are on PCI SATA adapters and one is on the onboard PATA IDE controller. Performance is pretty good (around 52-64 MB/s).

I recently expanded it on the fly from 4 devices to 5, which is something new in Linux 2.6.17. You couldn't do that previously with software RAID 5 in Linux.
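A sketch of that on-the-fly reshape with mdadm, assuming /dev/md0 is the existing four-disk RAID 5 and /dev/sde1 is the new partition (hypothetical names; requires Linux 2.6.17 or later):

```shell
# First add the new partition to the array as a spare.
mdadm --add /dev/md0 /dev/sde1

# Then reshape from 4 to 5 active devices; data is restriped in place.
mdadm --grow /dev/md0 --raid-devices=5

# The reshape runs in the background; watch its progress here.
cat /proc/mdstat

# Once done, grow the filesystem into the new space (ext3 example).
resize2fs /dev/md0
```

The array stays online and usable during the reshape, which is what makes this reasonable on a live server.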

For best performance for software raid you want controllers running on a switched PCIe bus and a system with a lot of excess RAM for good cache performance.

edit:

Actually, now that I think about it: instead of installing Linux on a fakeraid device holding precious Windows data you want to protect, I would just get a cheap PCI SATA adapter and install Linux on that. It may be a lot easier that way.
 

aheartattack

Member
Aug 18, 2006
To SleepWalkerX: I tried that link. It said to make the controller SATA-only in the BIOS (with SATA configured as RAID). Did it; still didn't work.

Thanks guys. It seems it's best, and way easier, to get a new SATA drive and put Linux on it. I won't even need a PCI adapter: I have two ports on the Promise controller that can be used as plain SATA (non-RAID) and are readable from both Windows and Linux (all distros) without needing a driver. One quick question: if I use Linux soft RAID on the new single HDD, will I get better performance? I mean, would putting a single drive in RAID-0 perform better than leaving it as non-RAID?

Oh, I'm not judging Linux because of my bad experience. I actually like it. I've downloaded the SUSE 10 trial DVD and have it at hand. Now if only someone would make something like FlyakiteOSX for Linux, to make it look like a Mac... not that the latest KDEs are bad. Still, a Mac UI is a Mac UI.
 

kobymu

Senior member
Mar 21, 2005
Originally posted by: drag
Originally posted by: aheartattack
I think by 'true' RAID, intel means the SATAs can be raided and are handled differently from the IDEs on the same controller. And from experience, I can say that the performance boost in Windows is quite significant. Much better than on the Promise 'software' RAID controller also on the board.

This would probably have to do with the Intel drivers being much bettter then the Promise ones. Read the Linux SATA FAQ.

'REAL' hardware raid has a seperate proccessor that calculates out parity data and other things that frees up your system for more important things.
I don't know. If you put the parity-calculation issue aside (RAID 3 & 5), what is the distinction between hardware RAID and software RAID? Or, more specifically, what does it take on the hardware side to accomplish RAID 0, 1, and JBOD? Basically, all it takes is adding to the HD controller some (definitely not a lot of) logic that manages a table for internal reference. Other than position calculation (which data goes where), there isn't that much work for the hardware to do (parity calculations are, of course, a different beast altogether).

I did research on this topic 3 or 4 years ago and didn't manage to come up with anything meaningful (I don't know how to reverse-engineer drivers, so all I did was read a host of white papers and just plain old experiment). The only thing I managed to conclude is the following lame rule of thumb:

If you can install DOS, or any legacy OS that is entirely dependent on the BIOS (more to the point, on the BIOS extension that the so-called RAID card/controller presents to the OS), on the RAID volume, then the RAID is a 'hardware' RAID.

It's not perfect, but it's the best I can do.
 

drag

Elite Member
Jul 4, 2002
Originally posted by: aheartattack
One quick question, If I use Linux soft RAID on the new single HDD, will I get better performance? I mean putting a single drive in RAID-0, would it perform better than left as non-RAID?

No, I don't think so (although I've never tested it myself or heard of anybody doing that). It's an extra, unneeded software layer. I don't think it will even work that way, but I am not sure.
 

postmortemIA

Diamond Member
Jul 11, 2006
Originally posted by: SleepWalkerX
Originally posted by: postmortemIA
Only Fedora Core 5 supports ICH5R, NVRAID, etc. SUSE won't "pick it up", it will see two drives like every other distro.

And linux soft-RAID support sucks, don't blame the hardware...it works just fine and gives great performance gain in Windows.

Eh? Why would you say that only FC5 supports ICH5R?.. The ICH5R's raid is supported by ata_piix which has been in the 2.6 kernel for a long time now.. http://linuxmafia.com/faq/Hardware/sata.html#intel-ich5

I think it can be solved by messing around in the BIOS.

I know 100% for sure that you can't, and you are just speculating. Thinking it can be done (without actually trying) and trying 10 different distros are two different things. People should give advice on things they have tried or been directly involved with.

As others have said, the installer and a few more things need dmraid support, which as of now is only present in FC5. This feature was planned all along for the FC5 release. Will others follow? I doubt it; perhaps Gentoo and Red Hat Enterprise Linux. The dmraid module isn't stable and full-featured yet.

 

drag

Elite Member
Jul 4, 2002
I know for a fact that I could install Debian on dmraid if I felt like it.

It would involve booting from a Knoppix CD-ROM, installing dmraid in the Knoppix session (if needed), then running a debootstrap install of Debian Etch. I would have to install the kernel and make sure the proper hook and setup scripts exist for mkinitramfs, and that would be that.
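Those steps might look something like this (a rough sketch, not a tested recipe; the device-mapper name isw_raid1 and the Debian mirror URL are placeholders):

```shell
# From a running Knoppix session: activate the fakeraid set with dmraid.
dmraid -ay

# Make a filesystem on the target volume and mount it.
# "isw_raid1" is a hypothetical name; use whatever node dmraid created.
mkfs.ext3 /dev/mapper/isw_raid1
mkdir -p /mnt/target
mount /dev/mapper/isw_raid1 /mnt/target

# Bootstrap a base Debian Etch system into the mounted volume.
debootstrap etch /mnt/target http://ftp.debian.org/debian

# Chroot in and install a kernel plus dmraid; the initramfs must get
# dmraid hooks so the array can be assembled at boot.
chroot /mnt/target apt-get install linux-image-2.6-686 dmraid
```

The fiddly part is the last step: the bootloader and initramfs both have to understand the device-mapper node, which is exactly the "special logic" the installers are missing.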

Here is the Ubuntu dmraid install howto: https://help.ubuntu.com/community/FakeRaidHowto They have their 'live Linux' style CD-ROM installer now, so it forgoes the whole Knoppix bootup thing. I'd probably pull the hook and setup scripts from that for a dmraid-enabled mkinitramfs if I were doing a Debian install.

Here is the howto for Gentoo. http://gentoo-wiki.com/HOWTO_Install_Gentoo_with_NVRAID_using_dmraid

Of course, that would be a lot of work for really no benefit. Linux software RAID is better from what I understand, unless you need Windows compatibility.

Also, I assume that when the dmraid package supports 'Intel Software RAID' that includes the ICH5R. Of course, if Fedora Core supports dmraid out of the box in its installer, that should make life easier if you choose it.
 

Nothinman

Elite Member
Sep 14, 2001
Originally posted by: aheartattack
And from experience, I can say that the performance boost in Windows is quite significant. Much better than on the Promise 'software' RAID controller also on the board.

They're both software RAID, I would just guess that Promise's driver sucks.

Originally posted by: aheartattack
One quick question, If I use Linux soft RAID on the new single HDD, will I get better performance? I mean putting a single drive in RAID-0, would it perform better than left as non-RAID?

If adding RAID to one drive made it faster, wouldn't everyone do it?
 

postmortemIA

Diamond Member
Jul 11, 2006
Originally posted by: Nothinman
And from experience, I can say that the performance boost in Windows is quite significant. Much better than on the Promise 'software' RAID controller also on the board.

They're both software RAID, I would just guess that Promise's driver sucks.

The real reason is that although both the Promise and intel soft-RAIDs are onboard, the intel/NVIDIA/VIA one is not tied to PCI but to the IDE controller, which has much more bandwidth.
 

Nothinman

Elite Member
Sep 14, 2001
Originally posted by: postmortemIA
Real reason is that although both Promise and intel soft-RAIDs are onboard, intel/NVIDIA/VIA one is not tied to PCI, but to IDE controller with much more bandwidth.

No; if you look, that IDE controller is still on the same PCI bus as the rest of the system. Actually, I suppose it's PCIe, but even if that's true the difference should be negligible, since just 2 drives wouldn't come close to saturating a 'normal' PCI bus.
 

kobymu

Senior member
Mar 21, 2005
Originally posted by: Nothinman
And from experience, I can say that the performance boost in Windows is quite significant. Much better than on the Promise 'software' RAID controller also on the board.

They're both software RAID, I would just guess that Promise's driver sucks.

The performance on the intel ICH5R IS better, and it DOES have to do with the fact that the Promise IS a run-of-the-mill PCI device. However -

Originally posted by: Nothinman
Real reason is that although both Promise and intel soft-RAIDs are onboard, intel/NVIDIA/VIA one is not tied to PCI, but to IDE controller with much more bandwidth.

No, if you look that IDE controller is still on the same PCI bus as the rest of the system. Actually I suppose it's PCIe

- this is incorrect. Since the era of the intel 440BX chipset, the integrated IDE controller on the southbridge has not used the PCI bus. This is unlike additional IDE/SATA controllers, the vast majority of which still use the old PCI bus to communicate with the southbridge, since they are integrated at the motherboard level and not at the chip level.

The price delta between PCI and PCIe IDE/SATA controllers is still significant, but that will probably change in 8-12 months.

Originally posted by: Nothinman
if that's true the difference should be negligable since just 2 drives wouldn't even be saturating a 'normal' PCI bus.

HDDs capable of a 60 MB/s sustained rate (read/write) are quite common these days and not very expensive. Two of them in a RAID 0 give you a sustained rate of 120 MB/s, and PCI (133 MB/s) is known for its not-so-small overhead, so the difference between a southbridge-integrated IDE controller and an add-on PCI chip CAN be significant.
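The arithmetic behind that claim, spelled out with the figures quoted above:

```shell
# Figures from the discussion: per-drive sustained rate and the
# theoretical peak of 32-bit/33MHz PCI, both in MB/s.
per_drive=60
drives=2
pci_peak=133

# Two striped drives demand the sum of their sustained rates.
raid0_rate=$((per_drive * drives))
echo "RAID-0 demand: ${raid0_rate} MB/s"    # prints 120

# That leaves almost no headroom on a shared PCI bus, before counting
# bus overhead and any other PCI devices contending for it.
headroom=$((pci_peak - raid0_rate))
echo "PCI headroom: ${headroom} MB/s"       # prints 13
```

A southbridge-attached controller sidesteps the shared bus entirely, which is the whole point being made above.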
 

aheartattack

Member
Aug 18, 2006
The reason I switched to the intel ICH5R from the board's Promise was that I found something stating exactly that: the Promise goes through PCI while the intel doesn't. Other implementations of the 875 board from ASUS and Gigabyte all provided an additional controller, but the intel one (the RAID, not the board) performs the best. It seems the vendors added that controller to support 2 more SATA drives beyond the intel RAID. If a friend brings his SATA drive over to copy something, I can plug it into the Promise; that's an added bonus for me. [At that time, anyway; these new boards have like 6 SATA ports...]
 

SleepWalkerX

Platinum Member
Jun 29, 2004
Originally posted by: postmortemIA
Originally posted by: SleepWalkerX
Originally posted by: postmortemIA
Only Fedora Core 5 supports ICH5R, NVRAID, etc. SUSE won't "pick it up", it will see two drives like every other distro.

And linux soft-RAID support sucks, don't blame the hardware...it works just fine and gives great performance gain in Windows.

Eh? Why would you say that only FC5 supports ICH5R?.. The ICH5R's raid is supported by ata_piix which has been in the 2.6 kernel for a long time now.. http://linuxmafia.com/faq/Hardware/sata.html#intel-ich5

I think it can be solved by messing around in the BIOS.

I know 100% for sure that you can't do that, and you are just speculating. Thinking that it can be done (without actually trying) and trying 10 different distros are two differengt things. The people should give advices for things they have tried or been directly involved with.

As others have said, installer and few more things need to have dmraid support, which is as of now only present FC5. This feature was planned all along for FC5 release. Will other follow it? I doubt it, perhaps Gentoo, and RedHat Enterprise Linux. The dmraid module isn't stable and full featured yet.

OK, you were right: it has to be supported by dmraid. There's a tutorial here that still wants to use SUSE 10.0 with dmraid. But I have dmraid with my SUSE 10.1, so it's supported by this distro.
 

postmortemIA

Diamond Member
Jul 11, 2006
You have dmraid installed during SUSE installation, as one of the packages. That doesn't mean it is part of the installer. I have SUSE 10.1, and it nicely informed me that it can't install itself onto a RAID array.