Question SAS/SATA controller cards that work with Windows 11

boondocks

Member
Mar 24, 2011
84
2
71
Looking for a cheap used/refurb/whatever SAS card for my ever-growing number of SATA HDDs/SSDs. Must work in Windows 11!

I bought an LSI SAS 9300-16i (dual SAS3008 controllers) and it worked fine in my Win 10 machine, but my Win 11 machine HATES it and I can find no way to use it, so I'm looking for an 8i board for my ASUS Z590 mobo.

Any recommendations? I need something pretty much plug-and-play; I don't want to be flashing BIOS, etc., if possible, as I'm new to SAS boards.

Thanks all!!
 

Micrornd

Golden Member
Mar 2, 2013
1,279
178
106
I can't help with an Asus board specifically, but I have a Gigabyte Aorus Z590 Master board running an LSI 9271-8i and a Lenovo 16-port SAS/SATA expander card with a 16-disk SATA R6 array.
Even though the 9271 is a PCIe 3.0 card and the slot it's installed in is set to PCIe 4.0, it works without any problems.
I'm using LSI drivers dated 10/22/2018 with Windows 11 22H2 22621.1486
And it almost maxes out (a shade over 9.2 Gb/s) a 10GbE connection when transferring files.
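For scale, a rough back-of-the-envelope check of those numbers (the overhead figures below are approximations, added here for illustration):

```python
# Rough sanity check: 10GbE throughput vs. what a PCIe 3.0 x8 slot can move.
gbe10_ceiling = 10e9 / 8 / 1e9          # 10 GbE raw ceiling ~= 1.25 GB/s
observed = 9.2e9 / 8 / 1e9              # "a shade over 9.2" Gb/s ~= 1.15 GB/s
pcie3_x8 = 8 * 0.985                    # ~0.985 GB/s per PCIe 3.0 lane after encoding

print(f"observed ~{observed:.2f} GB/s of a ~{gbe10_ceiling:.2f} GB/s 10GbE ceiling")
print(f"PCIe 3.0 x8 offers ~{pcie3_x8:.1f} GB/s, so running the card at 3.0 in a "
      f"4.0 slot is nowhere near the bottleneck")
```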
Does that help any?

(edited to reflect speeds from spinners and not ramdrive)
 
Last edited:
  • Like
Reactions: boondocks

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,190
126
Wait... what problems exactly does Windows 11 have with the LSI SAS 9300?
You're talking about this guy:

This is not a real dedicated RAID card per se, it's an HBA... Host Bus Adapter.
They are sort of the same, but on an HBA the offload and control work is done by the CPU rather than by the card itself.


Now that you know it's different, can you find out what's not working on Windows 11?

If your HBA is not working, then it has to do with your platform... i.e. board + CPU... possibly you're trying to plug the card into an x16 slot that only has wire traces for x4.

HBAs and most controllers require a full dedicated x8 PCIe slot.


If you want a real dedicated RAID card, though... it has its own issues over an HBA on more advanced OSes or with virtualization, but my personal favorite is this guy:


I like Adaptec more than LSI because of the supercap battery.
It almost never dies, vs. the Li-ion battery LSIs have.

But that is a real RAID card.
Meaning it has onboard memory.
It has the ANNOYING long bootup.
You set up the RAID on the card itself and then mount the virtual drive in Windows.
This plus more is stated in that link I gave above.
 
Last edited:

Micrornd

Golden Member
Mar 2, 2013
1,279
178
106
I like Adaptec more than LSI because of the supercap battery.
It almost never dies, vs. the Li-ion battery LSIs have.
FYI :) - Both LSI/Broadcom and Adaptec/Microchip have that supercap option available, but neither comes standard with the card. (Dealers of either brand do sometimes sell them as a retail package, though.)
The AFM-700 you pointed out lists at $216 direct from Microchip, while the CacheVault Protection Module for the LSI 9300 series lists at $187 retail.
Both can be found for considerably less on the net without much effort.
I believe the supercap flash modules from LSI first became available with the 9000 series LSI MegaRAID cards.

Lithium battery cache protection hasn't been available for some time now on any RAID card that one would consider relatively current.
But they are still around for those retro builds for us old-timers ;)
 

boondocks

Member
Mar 24, 2011
84
2
71
Sorry for the late reply, I didn't get any notifications.

I did all the testing I knew how to do in Win 11, and on bootup (which was sloooooooooow even in CSM mode) File Explorer usually refused to work, the desktop would go all haywire, and the taskbar would randomly disappear. When I could actually get Disk Management to open, it would show none of the drives attached to the SAS board.
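As a side note, a quick way to cross-check what Windows itself enumerates, independent of Disk Management, is a small script along these lines (added here purely as an illustration; it assumes Python 3 on Windows and PowerShell's Get-PhysicalDisk cmdlet). If none of the drives behind the card show up with a SAS/SATA BusType, the controller driver never bound to them at all.

```python
# Minimal sketch: list the physical disks Windows enumerates and their bus type.
# Assumes Python 3 on Windows with PowerShell available on PATH.
import subprocess

def list_physical_disks():
    # Get-PhysicalDisk reports BusType (SATA, SAS, NVMe, RAID, ...) per device.
    cmd = [
        "powershell", "-NoProfile", "-Command",
        "Get-PhysicalDisk | Select-Object DeviceId, FriendlyName, BusType, Size "
        "| Format-Table -AutoSize | Out-String",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)

if __name__ == "__main__":
    list_physical_disks()
```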

The board WAS inserted into an x8 slot, and I even connected additional power to it.

The seller on EvilBay apparently doesn't have Win 11 on his testing machines, nor does he know how to install it, so he was basically no help. I tried two different but similar boards.
I'm done with the SAS9300.
===============
As an alternative he did have some Adaptec 71605 cards and I ended up with one of those.
I have 16 drives connected and working.

The only problem I'm experiencing is booting in UEFI mode: the mobo BIOS seems to become confused and thinks the GPT header is corrupt. It comes to a full stop and displays a message to that effect.
Strange to me, because the boot drive is actually an NVMe drive on the mobo, and is the only drive NOT connected to the SAS board.
But pushing F1 to enter the mobo BIOS, then hitting ESC to exit, results in a normal boot.

As for monitoring the drives, I'm using MegaRAID Storage Manager, as that's all I could find that works from the browser. I've updated the driver to the latest I could find, which was not new by any means: v7.5.0.52013.
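One lighter-weight alternative to a GUI, sketched below purely as an illustration, is polling SMART health from a script with smartmontools. This assumes smartctl is installed and on PATH, and that the 71605 in HBA mode exposes the drives as individual devices smartctl can scan (controller passthrough options vary, so treat it as a starting point).

```python
# Minimal sketch: report the basic SMART health verdict for every device
# smartctl can see. Assumes smartmontools is installed and on PATH.
import subprocess

def scan_devices():
    """Return smartctl's device arguments from `smartctl --scan`."""
    out = subprocess.run(["smartctl", "--scan"],
                         capture_output=True, text=True).stdout
    devices = []
    for line in out.splitlines():
        args = line.split("#")[0].split()   # e.g. ['/dev/sda', '-d', 'sat']
        if args:
            devices.append(args)
    return devices

def check_health(args):
    """Run `smartctl -H` for one device and return its health line."""
    cmd = ["smartctl", "-H", *args[1:], args[0]]   # type flags first, device last
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "overall-health" in line or "SMART Health Status" in line:
            return line.strip()
    return "no health line found"

if __name__ == "__main__":
    for dev in scan_devices():
        print(dev[0], "->", check_health(dev))
```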

Since I'm new to the SAS world, I'm just learning as I go. Any pointers to a newer driver or a less clunky GUI monitoring solution are welcome!
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,190
126
You can't solve that clunky GUI.
I hate it, but it's the only option.

I try not to use dedicated RAID cards anymore, because HBAs are more versatile in the long run, as they use the CPU and not their own controller.
The reason we said you had problems was because you were trying to use an HBA as a SAS RAID controller, which just isn't possible.

Now I'm thinking your issues with UEFI could be related to the onboard BIOS of that RAID controller somehow.
 

Tech Junky

Diamond Member
Jan 27, 2022
3,412
1,145
106
I tried a dedicated card a while back and couldn't get it working either. Don't feel bad. They're finicky, and the HBA route is much easier since it's basically a mux without all of the added problems. Back in the day when CPUs sucked at RAID they were preferred, but these days I might use 1% CPU on RAID.
 

Micrornd

Golden Member
Mar 2, 2013
1,279
178
106
How big is your PSU?
With a high spinning disk count, you need a larger than normal PSU.
I know when I tried to start a server build with 14 x 18TB IronWolf Pros on an 850W name-brand PSU, it gave all kinds of errors.
Sometimes it didn't want to boot, sometimes when it did the HDDs didn't show up, sometimes drives dropped out of the arrays, sometimes the RAID card didn't show up, sometimes the expander didn't show up. It seemed like one of those carnival wheels you spin that stops on a different error every time.
I chased my tail for a week before I thought to change out the PSU.
Even though on paper the PSU had more than enough on the 12V line based on the OEM's amp draw, it just wouldn't work.
I stuck in a spare 1600W I had on hand and everything worked perfectly.
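For a rough idea of why the 12V math on paper can still fall short at spin-up, here's a back-of-the-envelope check (the per-drive figure is an assumption for illustration, not the IronWolf Pro spec sheet):

```python
# Back-of-the-envelope spin-up math; the per-drive current is an assumed
# worst case for a 3.5" 7200 rpm drive, not a datasheet value.
drives = 14
spinup_amps_each = 2.0                      # assumed peak 12 V draw per drive at spin-up
rail_amps = drives * spinup_amps_each

print(f"~{rail_amps:.0f} A on the 12 V rail at spin-up "
      f"(~{rail_amps * 12:.0f} W for the drives alone)")
# -> roughly 28 A / 336 W before the CPU, RAID card, expander and fans are
#    counted, which is why staggered spin-up or extra 12 V headroom matters.
```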

edit - changed the HDD count to 14
 
Last edited:
  • Like
Reactions: boondocks

bigboxes

Lifer
Apr 6, 2002
38,606
11,977
146
How big is your PSU?
With a high spinning disk count, you need a larger than normal PSU.
I know when I tried to start a server build with 15 x 18TB IronWolf Pros on an 850W name-brand PSU, it gave all kinds of errors.
Sometimes it didn't want to boot, sometimes when it did the HDDs didn't show up, sometimes drives dropped out of the arrays, sometimes the RAID card didn't show up, sometimes the expander didn't show up. It seemed like one of those carnival wheels you spin that stops on a different error every time.
I chased my tail for a week before I thought to change out the PSU.
Even though on paper the PSU had more than enough on the 12V line based on the OEM's amp draw, it just wouldn't work.
I stuck in a spare 1600W I had on hand and everything worked perfectly.
Sounds anecdotal. Are you sure it's the wattage (bigger PSU) or just a different PSU that was needed? What is this name brand PSU? Just curious as to why an 850W PSU could not handle that load.
 
  • Like
Reactions: Pohemi

Tech Junky

Diamond Member
Jan 27, 2022
3,412
1,145
106

I tend to agree, @bigboxes, though the link above takes into consideration things I had not thought of. It could simply have been the distribution of power. Still, I would think a decent 850W PSU should be able to handle 15 drives with ease. I run 5 drives and don't have an issue, but the startup surge might have exceeded what the rail could supply to get things going. As with every build, though, there are many unknown variables without the complete build details posted.
 

boondocks

Member
Mar 24, 2011
84
2
71
Outside of the long, sometimes complicated boot times in UEFI mode, everything is working well.
I'm not using RAID on my drives, as I have a bunch of disparate sizes from 4TB to 18TB (well, I have some smaller SSDs in the mix).
The only drive actually connected directly is the NVMe drive on the mobo; all the rest are connected to the SAS/SATA board.

Oh, and in reply to @Micrornd, sorry I missed your post. I have full Win 11 Pro on a clean install.
 

boondocks

Member
Mar 24, 2011
84
2
71
OK, I went back and reread the links explaining the difference between RAID cards and HBAs, and I'm still confused.

All I want is a card to handle a bunch of drives (16) and not be a pain in my butt.

If there's a better alternative for that than my Adaptec 71605 please do tell.

The advertisement/receipt from the seller says:
Adaptec ASR-71605 HBA mode 6GB SAS/SATA Unraid
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
20,846
3,190
126
RAID cards, unless flashed to pure IT mode, also have a bootup sequence.
After your board POSTs, your card will then POST its BIOS, tell you how many active arrays you have, and so forth.
Then you get into Windows.

This second POST is what irks me now.
NTY on the extra minute or so while the card is POSTing.
 
  • Like
Reactions: boondocks

Tech Junky

Diamond Member
Jan 27, 2022
3,412
1,145
106
the extra minute or so while the card is POSTing.
NTM UEFI takes longer than legacy BIOS did. UEFI seems to take its time POSTing most of the time, and on rare occasions it just boots. Maybe it's just me, or maybe it's the build. I didn't see the delay using an 8700K setup vs. the 12700K. The laptop does boot quicker than the server, though. Both use NVMe drives for the OS but run different OSes.

I think dealing with a true RAID card would actually drive me nuts with that kind of delay, since my setup is also my router/firewall in addition to storage. With weekly reboots for kernel updates to keep things secure, I think the double POST would turn into a scheduled reboot in the middle of the night. Though there was a string of kernel releases that required rollbacks until they shook out whatever was causing the issue I was seeing at the time.

Anyway, KISS works best unless you want a challenge. Nothing guarantees it will work either, though. I think those wanting to run 15+ drives might benefit from one of those HBAs that takes the M.2 format and putting some SATA M.2 cards onto it instead of drives, to mux a few of them off a single slot.


Pairing that with a mobo with a ton of M.2 sockets would be more optimal for full speed on all drives if using higher-spec drives. It all comes down to planning before building, though. Having the scope laid out makes it more attainable w/o overspending on parts and pieces.
 
  • Like
Reactions: boondocks

Micrornd

Golden Member
Mar 2, 2013
1,279
178
106
Sounds anecdotal.
NOT anecdotal. Two PSUs were tried: a Seasonic Focus GX-850, and then a used EVGA 850 B5, with the same results.

Server build - Rosewill RSV-L4500U case w/ 2x 80mm and 6x 120mm fans, Gigabyte Aorus Z590 Master, i7-11700K, 128GB (4x 32GB) Nemix (rebranded Crucial Ballistix) DDR4-3200, Thermalright AK120 Black heatsink, LSI MegaRAID 9271-8i, Lenovo 03X3834 16-port SAS/SATA expander, Nicgiga 10GBase-T network card, 1x Sabrent 1TB Rocket 4 Plus NVMe Gen4, 2x PNY CS900 250GB SSD, 14x Seagate IronWolf Pro 18TB NAS HDDs, no video card, Windows 11 Pro for Workstations (UEFI, no CSM support enabled).

On paper, I agree, either of the 850W units should have handled it, but both acted the same.
Sometimes it failed to boot beyond the BIOS, other times it failed when initializing the array (2x 250GB SSDs as CacheCade 2.0, 13x 18TB as an R6, 1x 18TB as a hot spare).
Failures while initializing happened at random from just starting to 98%.
Changing only the power supply to a larger PSU solved all the problems. This Plex server has been running continuously for 4 months now.
My guess is that a 1000W would handle it, but I didn't have one handy.
The only reason I went to a LEPA 1600W PSU is because I keep several on hand, as I've never had one fail or give any problems (other than their extended length in small cases).
 

WelshBloke

Lifer
Jan 12, 2005
30,453
8,112
136
RAID cards, unless flashed to pure IT mode, also have a bootup sequence.
After your board POSTs, your card will then POST its BIOS, tell you how many active arrays you have, and so forth.
Then you get into Windows.

This second POST is what irks me now.
NTY on the extra minute or so while the card is POSTing.
I have an old RAID card, and if I enable the Resizable BAR option in the mobo BIOS (or whatever that option is called), it stops the RAID card's BIOS from loading. The RAID arrays are still usable in Windows and everything works as normal. I think Resizable BAR disables CSM, and that stops the RAID BIOS from loading.
Makes the whole boot sequence a lot quicker, and if I ever need the RAID BIOS I re-enable CSM and reboot.
 

boondocks

Member
Mar 24, 2011
84
2
71
So I'm booting up in UEFI mode, but it takes several tries of shutting down and restarting. Eventually the mobo BIOS exclaims that the "GPT header is corrupt" and gives an option to enter the BIOS. So I press F1, enter the BIOS, then just hit ESC, acknowledge, and it boots straight into Windows.
It would probably be quicker and less hassle to just boot in CSM mode, actually. But the machine is normally up and running 24/7, save for Windows updates.

Either way, once in Windoze everything works.
As I say, I'm not currently using RAID. I have a weird assortment of drives, and basically just want them all to work.
I am noticing some slowdowns transferring files between drives... not sure if this is a Windows thing or what. When I've encountered this in the past, it was usually fixed by replacing the SATA cable.
I'm using brand-new breakout cables, but I suppose there could be a fault with one of them.
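If it helps narrow that down, one crude way to spot a slow drive or cable is to time a big sequential read from each drive and compare (the sketch below is only an illustration; the file paths are hypothetical, and the test files need to be larger than what the OS will happily cache):

```python
# Minimal sketch: time a large sequential read from a file on each drive so an
# unusually slow drive/cable stands out. The paths are placeholders.
import time

TEST_FILES = {                      # drive letter -> an existing large file on that drive
    "D:": r"D:\bench\big.bin",
    "E:": r"E:\bench\big.bin",
}
CHUNK = 8 * 1024 * 1024             # read in 8 MiB chunks

def read_speed(path):
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6    # MB/s

for drive, path in TEST_FILES.items():
    print(f"{drive} ~{read_speed(path):.0f} MB/s sequential read")
```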

I appreciate all the info, guys. I'm still learning. I'm listening.
 
Last edited:

Tech Junky

Diamond Member
Jan 27, 2022
3,412
1,145
106
When I upgraded to ADL and UEFI and this Windows nonsense, there were issues. Even though I wasn't running Windows on my server, all of this caused issues with how things booted. The intent was to move my drive over, enable CSM, and be done with it. Clear the Secure Boot crap, just reboot and let it regenerate, and it should just start working.

Some delay is to be expected between UEFI and the card. As for the slow transfers, I put that on Windows. If you booted into Linux from a USB drive, I'll bet the issue goes away or is significantly less severe.
 
  • Like
Reactions: boondocks

boondocks

Member
Mar 24, 2011
84
2
71
I saw a couple of posts that seem to indicate that when the mobo BIOS is set to UEFI mode, the mobo BIOS is first in line, then the SAS/SATA card next.
That would be the opposite of booting in CSM mode then.
Is this correct?

I'm still not understanding why the onboard NVMe drive is not being seen by the mobo BIOS when booting in UEFI mode (all other drives are connected to the SAS board)...
...why I usually have to shut the PC down and reboot several times to get anything to happen, boot-wise...
...and why, after a mandatory trip into the BIOS on UEFI boot, I can just exit the BIOS and get a normal Windows boot. (The BIOS says, "OK, OK, I'm awake now." lol)

Maybe the Asus mobo BIOS just gets "confused". IDK. One would think that by now Asus would surely account for these things.
 

Tech Junky

Diamond Member
Jan 27, 2022
3,412
1,145
106
CSM allows booting from MBR disks.
UEFI requires GPT for the boot disk.

Cards are always going to be secondary to the BIOS/UEFI in terms of POST/boot order.

For the BIOS/UEFI > NVMe issue, it's still waiting for the AIC to POST, which is why in most cases you just want an HBA: it doesn't need to POST and just provides more ports for drives through the PCIe slot.

For the boot issue... is it possible one of the drives has an MBR/GPT partition left over from a prior OS install?
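One quick way to answer that, sketched here only as an illustration (it must be run as Administrator, and it assumes 512-byte logical sectors, which 512e drives report), is to peek at the first two sectors of each physical drive and look for an MBR boot signature and/or a GPT header:

```python
# Minimal sketch: flag which physical drives carry an MBR signature and/or a
# GPT header, to spot leftover partition tables from an old install.
# Note: GPT disks also carry a protective MBR, so both flags can be True.

SECTOR = 512                              # assumes 512-byte logical sectors

def inspect(drive_index):
    path = rf"\\.\PhysicalDrive{drive_index}"
    try:
        with open(path, "rb", buffering=0) as f:
            lba0 = f.read(SECTOR)         # LBA 0: the MBR lives here
            lba1 = f.read(SECTOR)         # LBA 1: a GPT header starts with "EFI PART"
    except OSError:
        return None                       # drive index not present / no access
    has_mbr = lba0[510:512] == b"\x55\xaa"
    has_gpt = lba1[:8] == b"EFI PART"
    return has_mbr, has_gpt

for i in range(16):                       # check the first 16 physical drives
    result = inspect(i)
    if result:
        mbr, gpt = result
        print(f"PhysicalDrive{i}: MBR signature={mbr}, GPT header={gpt}")
```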