
Suggestions on a cost-effective PCI-X SATA RAID card (not PCIe)

TSCrv

Senior member
After searching around and not finding any useful reviews or threads, here I am.

System: PowerEdge 600SC mainboard (Acer Altos G301). It has 4 PCI-X 64-bit 33 MHz slots and all of the regular goodies you would normally see on a lower-end server.

Purpose of server: file server, AD, Exchange (small, under 25 users). Currently replacing a temporary server with problems.

Now, for what I am looking for: a 4-port (or more) SATA RAID card, supporting RAID 5 properly and able to do the following:

A) Full setup through the onboard boot ROM.
B) Must be able to set the array size, e.g. if the formatted capacity is 931 GB, set it to use 925 in case a drive fails and the replacement drive's size doesn't match up.
C) Management software able to add/remove/change things in the controller from within Server 2003 R2 Enterprise, and to see drive status, failed drives, rebuild progress, etc. I really don't like flying blind.
D) Is it safe to assume that all of these controllers support hot-plug and automatic rebuild?

As for price, under $300... if it's any more I could go buy an Intel server board and be just as happy. So far in my search for controllers I have only seen about 5, 3 of which are name brand, with no documentation on the Windows-based software capabilities and practically no reviews. Another nice plus would be the ability to add multiple controllers of the same model in the same system...

Any suggestions? So far my efforts to find which controllers would work best for me have been fruitless... x_x
 
I just wanted to post to express my happiness at someone who knows the distinction between PCIe and PCI-X 😛.

And bump in case anyone else knows 😉.
 
Reasonable and PCI-X RAID 5 is an oxymoron. A SCSI U-320 card would be almost as "reasonable"... Good luck.

.bh.
 
Originally posted by: Roguestar
I just wanted to post to express my happiness at someone who knows the distinction between PCIe and PCI-X 😛.

And bump in case anyone else knows 😉.

Oh man, don't even get me started about people who think they are the same...

Well, I'm still looking, and also peeking down other paths for what I need to do to get operational: either a new server or splitting the workload between 2 servers (one with SATA RAID).

With a PCI-X 33 MHz slot, what kind of bandwidth am I looking at? Nothing else will be connected to the bus, no IDE (at all); the only things on it are onboard gigabit and onboard video.
 
Never heard of 33 MHz PCI-X. Most run at 100 or 133 MHz. That's a LOT of downclocking.

And it's going to be tough with a $300 budget. Multilane, 2GB cache support, BBU support, and IOP341 are highly desired but not reachable within your budget constraints.
 
The 600SC had really weird PCI slots, which originally caused a lot of headaches for owners, since they were 3.3 V ONLY.

Spec:
Five PCI expansion slots located on the system board. PCI slots 2 through 5 are 64-bit, 33-MHz, 3.3-V slots; PCI slot 1 is a 32-bit, 33-MHz, 3.3-V slot.
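
For reference, a rough back-of-the-envelope calculation of what a 64-bit, 33 MHz slot tops out at (just a sketch; real-world throughput will be lower once bus protocol overhead and any other devices on the bus are factored in):

```python
# Theoretical peak bandwidth of a 64-bit, 33 MHz PCI slot.
# Actual throughput is lower due to arbitration/protocol overhead.
bus_width_bytes = 64 / 8      # 8 bytes per transfer
clock_mhz = 33.33             # nominal PCI clock
peak_mb_per_s = bus_width_bytes * clock_mhz
print(f"~{peak_mb_per_s:.0f} MB/s theoretical peak")  # roughly 266 MB/s
```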

---------------------------------------------------------------------

I don't believe that you can assume that a lower-end controller will support HotSwap.
 
New server it is then; I'll just throw in some IDE drives and replace one of the older stations in the house (replacing a P2).

Actually it isn't a 600SC, just the same mobo as one. I've had 2 600SCs come through the shop, had to replace 1 mainboard, plus this Acer G301; all of the mainboards have identical build numbers (not googleable).

Thanks, guys.
 
Originally posted by: .bh.
Reasonable and PCI-X RAID 5 is an oxymoron. A SCSI U-320 card would be almost as "reasonable"... Good luck.

Yeah, but once you factor in the cost to build a decently sized RAID array with SCSI drives you're going to cry. 320GB SATA drives are the best $/GB out there, as long as you don't need insane seek speed.
 
I had both a HighPoint 2220x and an 1820a, both 8-way cards. I liked both, but the administration of the server was a pain. I went to a NAS. I still have the 1820a for sale in FS/FT for a buck 30 if you care.

On your list, the only thing the 2220x and 1820a don't support is B (the 1820a may not support hot swap; not sure). B is a messed-up requirement that doesn't make any sense to me. If this is running anything important there should ALWAYS be at least 1 hot spare. ALWAYS. There are exactly zero reasons why you should make your array smaller than the max size of the disk, and exactly zero reasons to build an array without identically sized disks.

If the major concern is staying under $300 and not going as cheap as possible, then I would go with a new 2220x from HighPoint, which is about $250. That supports online RAID capacity migration (adding a disk to increase capacity without taking the array offline) and spanning across cards (up to 16 drives in an array; I had 14). In PCI-X, with RAID 5 and 12 250 GB drives, I was maxing out the gigabit connection out of the box when reading data. It was insanely fast.
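
A quick sanity check of those figures (a sketch using the numbers mentioned above; "GB" here is the decimal marketing unit):

```python
# Rough check: RAID 5 usable capacity and where the bottleneck sits.
drives = 12
drive_gb = 250
usable_gb = (drives - 1) * drive_gb      # RAID 5 loses one drive's worth to parity
gigabit_mb_per_s = 1000 / 8              # ~125 MB/s gigabit line rate
pci_64_33_mb_per_s = 8 * 33.33           # ~266 MB/s theoretical slot ceiling
print(usable_gb, gigabit_mb_per_s, round(pci_64_33_mb_per_s))
# 2750 GB usable; gigabit (~125 MB/s) saturates well before the slot does.
```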
 
Originally posted by: Evadman

B is a messed-up requirement that doesn't make any sense to me. If this is running anything important there should ALWAYS be at least 1 hot spare. ALWAYS. There are exactly zero reasons why you should make your array smaller than the max size of the disk, and exactly zero reasons to build an array without identically sized disks.

The reason why I would downsize the allocated space per drive is the following:

Out of about a dozen or so RAID rebuilds I have done for clients that my company supports, I have often not been able to find a same-size drive to replace the failed one... 80 GB does not mean 80 GB, especially with WD. The LBA counts wouldn't match, and about half the time I would have to replace an 80 GB with one size up because I couldn't find a drive with enough space. I have started allocating all but roughly 1 GB on the smaller drives, more on the larger drives, in all of the arrays I create, and since then I haven't run into the problems mentioned above.

Example:
RAID consists of 3 drives:
250153 MB
250153 MB (this drive fails)
250153 MB

I put this replacement drive in:
250071 MB

I can't resync the volume because I need roughly 82 more MB on the new drive. I have seen this problem with multiple WD drives; awful drives IMO, but their failures mean billable time for me.
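
For what it's worth, the kind of margin rule described above can be sketched like this (illustrative numbers only; the ~1 GB holdback is the poster's rule of thumb, not a vendor spec):

```python
# Allocate slightly less than the smallest member so a marginally smaller
# replacement drive can still join the rebuild.
MARGIN_MB = 1024  # ~1 GB of slack per member

def allocated_size_mb(member_sizes_mb, margin_mb=MARGIN_MB):
    """Capacity to use per member: smallest drive minus the safety margin."""
    return min(member_sizes_mb) - margin_mb

members = [250153, 250153, 250153]      # original array members (MB)
alloc = allocated_size_mb(members)      # 249129 MB allocated per member
replacement = 250071                    # slightly smaller replacement drive
print(alloc, replacement >= alloc)      # True -> the rebuild fits
```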

Yes, we should put hot spares into the servers we sell, but when the client denies it there's not much we can do.


As for my plans, well, I just ran into some bumps in the road; the server is being reallocated. SO, that being said, I'll shift the topic to the capabilities of onboard RAID for different chipsets, but I'll just create a new thread for that instead.
 