Is there a limit to how much storage Server 03 Enterprise can handle?

Fraggable

Platinum Member
Jul 20, 2005
2,799
0
0
Here at work we just got a new PowerEdge 2900 with 5 750GB SAS drives in a RAID 5 array. The idea was to partition it into a 300GB C: drive and put the rest of the space into a ~2.5TB extended volume with 7 ~350GB logical partitions. This plan went great until we got Server 03 Enterprise loaded and disk manager showed the unpartitioned space as two separate areas, one of 1.7TB and one of 744GB - it split the empty space on its own. We can't do anything with the 744GB area, but we have been able to do what we wanted with the 1.7TB.

I can't imagine this is a limitation of the OS - could it be a limitation of the PERC 6 controller? Any ideas?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Did you create a legacy PC-style (MBR) partition table or a GPT? I know you need GPT to have volumes larger than 2TB, but I thought the limitation was per volume/partition. Either way, it might be worth trying GPT just to see if it fixes things for you.
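If the data space ends up on a disk or array that you don't boot from, diskpart can do the conversion. Here's a minimal sketch of scripting it - this assumes the empty data disk shows up as disk 1 in diskpart's "list disk", and "clean" wipes whatever is on it, so double-check the disk number before running anything like this:

import os
import subprocess
import tempfile

# diskpart script: select the empty data disk, wipe it, convert it to GPT,
# and create one primary partition spanning the whole array.
script = "select disk 1\nclean\nconvert gpt\ncreate partition primary\nexit\n"

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)

The same commands can be typed into an interactive diskpart session if you'd rather not script it.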
 

Fraggable

Platinum Member
Jul 20, 2005
2,799
0
0
Well, I chatted with Dell support and they pointed out that there is a 2TB limit on addressable space on the disk OR array that the C: partition is on - a limitation of the MBR partition table, I guess, since it only uses 32-bit sector addresses. The solution is to isolate the OS drive from the array (not a good option) or to split one of the drives off as a separate disk (a better, though not wonderful, option). I'm doing the latter, which unfortunately means either reinstalling Server 03 or imaging the system, killing the RAID array, re-creating it, and dropping the image back on.
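For what it's worth, the numbers line up with that explanation almost exactly. Rough math, assuming 512-byte sectors and ignoring RAID metadata overhead (quick Python sanity check):

SECTOR = 512                      # bytes per sector
MBR_MAX = 2**32 * SECTOR          # 32-bit LBA ceiling: 2048 GiB (~2.2TB decimal)

usable = 4 * 750 * 10**9          # RAID 5 over 5x750GB leaves 4 drives' worth of space
c_drive = 300 * 10**9             # the 300GB C: partition

gib = lambda b: round(b / 2**30)
print(gib(MBR_MAX - c_drive))     # ~1769 GiB -> the "1.7TB" area disk manager shows
print(gib(usable - MBR_MAX))      # ~746 GiB  -> the stranded "744GB" area

So the 744GB chunk is just the part of the array that sits past the 2 TiB mark MBR can't address.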
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
It's generally recommended to put the OS on a separate array from your data anyway.
 

Fraggable

Platinum Member
Jul 20, 2005
2,799
0
0
Originally posted by: Nothinman
It's generally recommended to put the OS on a separate array from your data anyway.

Well yeah, but I need redundancy for the OS drive and can't really afford to dedicate 2 of the 5 drives to a separate RAID 1 or something like that.

The purpose of this server is to run VMware. We take Acronis images of our customers' servers, convert them to VMware disk files, and boot them up as a disaster recovery test. In the event of a disaster we would do the same conversion and boot the image on a spare server pre-loaded with an OS and VMware. This server has to host 7 test images at once - hence the large disk space requirement of 350GB per VMware machine.
 

MerlinRML

Senior member
Sep 9, 2005
207
0
71
Dell explained the problem pretty well, I think. Your boot disk cannot be a GPT disk, so the legacy MBR disk you're booting from must be less than 2TB.

That Dell 2900 should be able to handle more than 5 disks. You should look into getting a couple small disks to dedicate as a RAID 1 array for your OS and leave the large disks for your data array. Your PERC controller should be able to handle the extra disks, I believe.

Just as a side note, with several virtual machines running off the same RAID 5 array, I hope you're not doing anything too disk I/O intensive. Otherwise, you may be better off dedicating individual disks to each VM rather than having all the VMs thrash all the disks at the same time. Especially with your SATA disks, which aren't going to have the seek times to handle what is effectively a multi-user load, you might need to do some analysis of your I/O utilization patterns.
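If you want a rough picture of the load before rearranging disks, Windows' built-in typeperf can log the disk counters to a CSV for later review. A quick sketch (counter names assume the default English "PhysicalDisk" object; adjust the sample count to taste):

import subprocess

counters = [
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\PhysicalDisk(_Total)\Disk Reads/sec",
    r"\PhysicalDisk(_Total)\Disk Writes/sec",
]
# one sample per second, 300 samples (~5 minutes), written to disk_io.csv
subprocess.run(
    ["typeperf"] + counters + ["-si", "1", "-sc", "300", "-f", "CSV", "-o", "disk_io.csv"],
    check=True,
)

A sustained queue length much higher than the number of spindles in the array is the usual sign that the disks are the bottleneck.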
 

Fraggable

Platinum Member
Jul 20, 2005
2,799
0
0
Originally posted by: MerlinRML
Dell explained the problem pretty well, I think. Your boot disk cannot be a GPT disk, so the legacy MBR disk you're booting from must be less than 2TB.

That Dell 2900 should be able to handle more than 5 disks. You should look into getting a couple small disks to dedicate as a RAID 1 array for your OS and leave the large disks for your data array. Your PERC controller should be able to handle the extra disks, I believe.

Just as a side note, with several virtual machines running off the same RAID 5 array, I hope you're not doing anything too disk I/O intensive. Otherwise, you may be better off dedicating individual disks to each VM rather than having all the VMs thrash all the disks at the same time. Especially with your SATA disks, which aren't going to have the seek times to handle what is effectively a multi-user load, you might need to do some analysis of your I/O utilization patterns.

Some great ideas there, thanks. I isolated one disk for the OS for now; we'll image that to an external drive.

All we're doing on here is converting TIB files to VMDK files and booting the VMDKs to make sure they run. Because of this we wanted a lot of speed, but we weren't too worried about running a lot of VMs at once, since we just need to boot them once or twice and then shut them down.