
PERC controller issue

ebberz

Member
Currently trying to understand how the PERC controller stores and rebuilds its data, and am wondering if anyone can advise?

Our PowerEdge 6850 has a PERC 4e/Di controller with 4 locally attached SCSI disks.
These are split into two virtual arrays.

Virtual Disk 0 -> 0:0 and 0:1 -> logical volume C and D
Virtual Disk 1 -> 0:2 and 0:3 -> logical volume E

On Friday we had an outage, and according to OpenManage there was an issue with disks 0:0 and 0:1. Someone attended site and apparently re-seated the disks, and they were all showing online. Not quite sure if anything else went on, but nothing further was shown in the logs.

However, the OS lost the E drive and the disk was showing up in Logical Disk Manager as an uninitialised drive. I wasn't in the office at the time, so rather than try to re-tag the config in the PERC and reboot, they created a new partition, formatted it and scheduled a restore.

So my questions: if the RAID config is stored on the drives, does it also store the logical drive information? I thought that would have been stored in the MBR, although if it lost the E drive, why didn't it also lose C and D?

If it is stored on the disks, does that mean the MBR drops that entry if it can no longer find it or is provided with new information?

As virtual disk 1 was not initialised and a new config was applied, when Logical Disk Manager creates the new partition, does it tell OpenManage about the new logical drive? I.e. how does it know about that, or does it not care? And what happens if you swapped out the PERC?

Any help appreciated.
 
What was the outage, power or a drive issue?

Usually if you lost the E drive like that, you recreate the partition and run disk-undelete software and it'll get restored; somehow the MBR got destroyed.

I know for sure the RAID config on the PERC 4e is stored on both the drives and the controller.

The only thing I can think of is that someone accidentally wiped the E drive virtual disk in the PERC BIOS and recreated it.
 
My guess:

System had a BIOS halt on power-up with the PERC complaining about the disk set. The tech blindly mashed the option to drop the configuration while attempting to get into the BIOS config, rather than telling it to skip, then rescanning the buses inside the BIOS and reimporting the now "foreign" disk sets.

PERC4 keeps a copy of the RAID config in its NVRAM and tags the disks with a UUID and a copy of the NVRAM. If you drop the config, you need to reimport the disk set as a foreign disk set, let the PERC read the configs and re-signature the disks, and reboot. If you skip reimporting the disk configs and try to rebuild the configuration in the hope it will just "reimport" them, you get blank disks.
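To make the drop-vs-reimport distinction concrete, here is a toy Python model of the decision a PERC-style controller faces at boot. This is purely illustrative, not real firmware logic; the function and return strings are made up for the sketch.

```python
# Toy model of how a RAID controller with config in NVRAM *and* on the
# disks might reconcile the two copies at boot. Illustrative only --
# all names and outcomes here are hypothetical, not actual PERC firmware.

def resolve_config(nvram_config, disk_configs):
    """Return the action a controller might take on power-up.

    nvram_config -- the config held in controller NVRAM (None if dropped)
    disk_configs -- list of config copies read back from the member disks
    """
    if nvram_config is None and not disk_configs:
        return "no config: disks are blank"
    if nvram_config is None:
        # NVRAM copy was dropped, but the disks still carry theirs.
        # Importing the "foreign" config restores the arrays intact.
        return "import foreign config from disks"
    if all(d == nvram_config for d in disk_configs):
        return "configs match: arrays online"
    # Rebuilding a fresh config instead of importing is what wipes disks.
    return "mismatch: prompt operator (import / clear)"

print(resolve_config(None, [{"vdisk": 1}]))
# -> import foreign config from disks
```

The key point the sketch tries to capture: when the NVRAM copy is gone, the safe path is the foreign import, because the disks still hold everything needed to rebuild the controller's view.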
 
Cheers for the feedback; the second option sounds possible.

The server was hung, so it was power cycled and then got into a boot loop due to several security issues. When it came back, disks 0 and 1 (virtual disk 1) were showing in a fault state. That was when he said he re-seated them and everything came back, but E was now missing from the OS.

If the MBR got destroyed, wouldn't that take out C and D also?
Virtual Disk 2 was listed and healthy; it just didn't have the logical drive on it anymore.

So if you use the PERC option to create the logical drive rather than the OS, that gets stored in the NVRAM and tagged on the disk? What happens if you create it via the OS, does the PERC care? I.e. in this scenario, if you wanted to get it back and had done it via the OS, could you use what's in NVRAM to get the whole thing back, or would the PERC not know about anything other than the virtual disk settings?
 
The PERC NVRAM contains the disk UID, the slot it should be in, and what the disk is a member of (loosely; there is more stuff there). The PERC itself wouldn't care about the MBR, and a logical drive in the PERC world is just raw space.

This is how the PERC would see the disks from your example:
0:0 + 0:1 -> raw space; sectors (example only) 0 to 2000 -> logical vdisk 0, sectors 2001 to 8000 -> logical vdisk 1.
0:2 + 0:3 -> raw space; all sectors -> logical vdisk 2.

This gives you 3 vdisks. At this point the MBR doesn't even exist, nor would the PERC care about it, because it is "just data".

The PERC then presents the 3 vdisks as disks to the server / OS.

When you install Windows on vdisk 0, it writes the MBR to the disk. You as the end user can then split that virtual disk into partitions. At that point it is an OS thing.

So now you have vDisk 0 -> Partition 0, Partition 1

Windows sees: "C: 10GB D: 90GB" (or whatever)
PERC sees: "100GB of data"
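The "just data" point can be shown by parsing an MBR partition table by hand: the partition layout lives entirely in sector 0 of each vdisk, which only the OS interprets, and which also explains why losing one vdisk's MBR takes out E without touching C and D on the other vdisk. A minimal Python sketch (the fake partition entry here is constructed purely for illustration):

```python
import struct

def parse_mbr(sector0: bytes):
    """Parse the 4 primary partition entries from an MBR (sector 0).

    To the RAID controller these 512 bytes are just data; only the OS
    gives them meaning. Returns (type, start LBA, sector count) tuples.
    """
    assert sector0[510:512] == b"\x55\xaa", "missing MBR boot signature"
    parts = []
    for i in range(4):
        entry = sector0[446 + i * 16 : 446 + (i + 1) * 16]
        ptype = entry[4]                            # partition type byte
        lba_start, sectors = struct.unpack("<II", entry[8:16])
        if ptype != 0:                              # 0x00 = empty slot
            parts.append((ptype, lba_start, sectors))
    return parts

# Build a fake sector 0 with one active NTFS-type (0x07) partition:
# boot flag, CHS start (ignored), type, CHS end (ignored), LBA, count.
mbr = bytearray(512)
mbr[446:462] = struct.pack(
    "<B3sB3sII", 0x80, b"\x00\x00\x00", 0x07, b"\x00\x00\x00", 2048, 204800
)
mbr[510:512] = b"\x55\xaa"
print(parse_mbr(bytes(mbr)))  # -> [(7, 2048, 204800)]
```

Zero the first 512 bytes of the fake sector and the "disk" looks uninitialised to the OS, even though the controller still reports a perfectly healthy vdisk, which matches what was seen with the E drive.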

The gist of it is your last sentence: the PERC knows nothing and cares about nothing other than the virtual disk settings.

Hope that isn't too convoluted.
 