Promise Taketh Away

hirschma

Member
Mar 3, 2000
146
4
81

I wanted to mention something that really, really pissed me off.

In the last few years, I've built a number of "storage appliances". The formula is typically the same: Linux or FreeBSD, plus a whole lot of drives, plus cheap controllers in many PCI slots. Add software RAID 5, 0 or 1, depending on application. Not always the fastest thing, but it has worked great for me. I've done it about five times.
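For the curious, the software side of one of these boxes is only a few commands. Here's a minimal sketch on Linux with mdadm - the device names are assumptions (add-in IDE controllers usually show up as /dev/hde, /dev/hdg, and so on, one drive per channel), so check yours before running anything:

```shell
# Sketch only - run as root, and only on a box you can wipe.
# Five drives, one per IDE channel, assembled into one RAID 5 array
# (device names below are hypothetical - check dmesg for yours):
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/hde /dev/hdg /dev/hdi /dev/hdk /dev/hdm

mke2fs -j /dev/md0          # ext3 on top of the array
mount /dev/md0 /storage     # hypothetical mount point

cat /proc/mdstat            # watch the initial sync progress
```

The array keeps running (degraded) if any one of those five drives dies, which is the whole point of the exercise.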

I've always used Promise controllers for this. I've had three, four, even five cards in the same computer, no issues, great performance until you soak the PCI bus.

So, a friend calls me up. He wants a terabyte server on his home LAN. I tell him what I've done, he thinks it'll work great for him... off to the store he goes. He gets drives and three Promise controllers.

We have nothing but problems and can't figure it out. Finally, we send Promise some email. Their response: they've set the card BIOS to allow only TWO cards in one computer, and they now officially support only one. This isn't documented anywhere.

My guess: Promise wants to protect their RAID card business, especially the fakey hardware RAID cards that really do the work in the BIOS or the driver.

Does this suck or what?

JH
 

The J

Senior member
Aug 30, 2004
755
0
76
I've never used PCI to IDE bridge cards, but why would the card care how many are in the computer? It seems like the PC's BIOS would be more concerned with it than the cards' BIOSes. Can your friend at least return one of them?
 

hirschma

Member
Mar 3, 2000
146
4
81
Originally posted by: The J
I've never used PCI to IDE bridge cards, but why would the card care how many are in the computer? It seems like the PC's BIOS would be more concerned with it than the cards' BIOSes. Can your friend at least return one of them?

That's exactly my point - you should be able to have as many of these cards as you want. Typically, it is either an OS or driver limitation that would keep you from using more than a certain number.

HOWEVER: Promise now seems to have put some kind of limitation code in their BIOS. In other words, an "artificial" limitation.

 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
The problem isn't so much the cards, it's the BIOS. Typically, there is very little room for the system BIOS to map expansion card BIOSes into - only 128 KBytes total on most boards. Recent graphics cards take 64 KB of that, bootable Ethernet eats into it as well, so there's not much remaining.

However, since you typically don't need more than one card to provide boot services, that's not really a limitation. Furthermore, sophisticated expansion card BIOSes - LSI's Symbios/NCR SCSI BIOSes, for example - handle ALL compatible cards with ONE instance of the expansion BIOS, rather than one instance per card.

What I'm trying to say is that technically there's no excuse for the limitation Promise is erecting here.
 

hirschma

Member
Mar 3, 2000
146
4
81
Originally posted by: Peter
The problem isn't so much the cards, it's the BIOS. Typically, there is very little room for the system BIOS to map expansion card BIOSes into - only 128 KBytes total on most boards. Recent graphics cards take 64 KB of that, bootable Ethernet eats into it as well, so there's not much remaining.

What I'm trying to say is that technically there's no excuse for the limitation Promise is erecting here.

I think that we're in agreement, but again...

The older Promise cards, like the Ultra 100 controller, would bring up at most two instances of the BIOS and then stop (or run out of room). But Linux ignores the card (and main) BIOS anyway, and would bring them all up. FreeBSD, too - I've had up to 5 at once.

The newer cards' behavior has changed. The tech made it sound like it was on purpose...

Thanks.

Jonathan
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Exactly. As I said, the card BIOS is required only for boot services. Technically there's no reason for such a multi-card limitation. (Not that piling up storage controllers on a 32-bit PCI bus would make much sense anyway ;))
 

Homerboy

Lifer
Mar 1, 2000
30,890
5,001
126
er ahh...
1 card = 4 devices right?
4x200gig HDs = 800gig
usually 2 onboard IDE controllers = 800 more gigs.
why do you need more than 1 for 1 TB?
 

hirschma

Member
Mar 3, 2000
146
4
81
Originally posted by: Homerboy
why do you need more than 1 for 1 TB?

RAID 5 Mini-Primer: Capacity = (Number of drives - 1) x Size of smallest drive.

Therefore, if you want 1TB from 250GB drives, you need _5_.

Also, with RAID 5 (and most other RAID levels), all the drives are active at the same time. If you put two drives on one IDE channel in a master/slave relationship, they share that channel's bandwidth, and you cut your performance in half.

Lastly, RAID 5 can handle _one_ drive going down at a time. Under certain circumstances, a master dying can take down the slave, too, or vice versa.

So given all that, we need 3 cards.
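The arithmetic above, spelled out - a quick sketch you can paste into any shell (the 250GB drive size is just the example from this post):

```shell
#!/bin/sh
# RAID 5 usable capacity: (number of drives - 1) x smallest drive
drives=5
size_gb=250
usable=$(( (drives - 1) * size_gb ))
echo "${usable} GB usable"       # 4 x 250 = 1000 GB, i.e. the 1 TB target

# One drive per IDE channel (no master/slave), two channels per card:
cards=$(( (drives + 1) / 2 ))    # round up for an odd drive count
echo "${cards} cards needed"
```

With five drives that comes out to 3 cards, which matches the shopping list.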