New VMWare server/file server

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Coming up soon I'm going to be doing away with having a VMWare ESX server and a separate Debian File server. Right now they each have:

VMWare ESX
AMD X2 5000+ BE
4x1GB PC5300
3x73GB 10k SCSI RAID 5
- VMWare installed
- VMFS for VMs

File Server
Pentium M 1.6GHz
1GB PC5300
2GB USB Flash drive with Debian installed
1x1TB WD GP HDD

Now I'll be running a single server using the free VMWare server:

AMD X2 5000+ BE
4x1GB PC5300
2x250GB WD SATA HDDs RAID 1
- Debian
- /vm for VMs
3x1TB WD GP SATA HDDs in RAID 5 (will use WDTLER to turn on TLER)

The reasons for consolidating like this are several: my file server's onboard Gb NIC uses the sky2 driver and likes to go down, the two servers draw a combined 150W at idle so consolidating them will save power, I can sell the SCSI drives and file server parts and come out a little money ahead, and lastly, while ESX is AWESOME to toy with, I just never have enough VMs loaded on it to justify the power and equipment costs.

Here's where my question comes in:

I'm planning on using software RAID on this server. Someone else has asked what the best file system for a 5TB file server would be.

What about a 2TB file system? I'll be loading a lot of HD content onto the file server and streaming it to and from my HTPC. I was planning on using mdadm with LVM on top and the entire RAID 5 array formatted with ext3 so I could resize the file system when I eventually add another 1TB drive or two.
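A rough sketch of that mdadm + LVM + ext3 stack, assuming the three 1TB drives show up as /dev/sdb through /dev/sdd (device, array, and volume group names here are just placeholders):

```shell
# Build the 3-disk RAID 5 array from one partition per drive
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Layer LVM on top so the filesystem can be resized later
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -l 100%FREE -n lv_media vg_data
mkfs.ext3 /dev/vg_data/lv_media

# Later, after adding a fourth 1TB drive (say /dev/sde1):
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4   # reshape; takes a long time on 1TB disks
pvresize /dev/md0                        # grow the PV to the new array size
lvextend -l +100%FREE /dev/vg_data/lv_media
resize2fs /dev/vg_data/lv_media          # ext3 can be grown online
```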
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Personally I'd go with XFS, it had some issues in the past having been directly ported from Irix but I haven't had a single major issue and I've been using it since the 1.0 release on 2.4 kernels. The only downside is that you can't shrink an XFS filesystem like you can ext3, but it grows online just fine.
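For reference, growing XFS online is a two-step job if it's sitting on LVM (volume and mount point names are placeholders):

```shell
# Extend the logical volume, then grow the filesystem into the new space
lvextend -l +100%FREE /dev/vg_data/lv_media
xfs_growfs /mnt/media   # note: takes the mount point, not the device, and works while mounted
```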
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
I, too, would use XFS. I use XFS everywhere I can. Add the noatime option for that little extra somethin'-somethin'.

I would also put your file server within a virtual machine. Keep the VMWare Server host nice and clean. In fact, that IS how I do it. My home server is CentOS 4 with VMWare Server installed. The virtual machine files are kept on one hard disk and then I have a RAID array attached as a raw disk to my virtual machine file server.
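The noatime bit is just a mount option, e.g. an /etc/fstab line like this (device and mount point are placeholders):

```shell
# /etc/fstab: skip updating access times on every read
/dev/vg_data/lv_media  /mnt/media  xfs  defaults,noatime  0  2
```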
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Originally posted by: Brazen
I, too, would use XFS. I use XFS everywhere I can. Add the noatime option for that little extra somethin'-somethin'.

I would also put your file server within a virtual machine. Keep the VMWare Server host nice and clean. In fact, that IS how I do it. My home server is CentOS 4 with VMWare Server installed. The virtual machine files are kept on one hard disk and then I have a RAID array attached as a raw disk to my virtual machine file server.

What effect does having the file server as a VM have on the network throughput?

I attempted this with ESX and got pitifully slow network throughput over my Gb network. The best speeds it could manage were 12MB/s.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
The throughput is fine for me, but my home file server has only ever been inside a VM. At work, when I moved our file server into a VM, it subjectively "felt" faster and a lot of users also commented on a speed increase. We also switched from a Windows file server to Samba on CentOS, so that is what I attribute the speed increase to, but being a VM did not apparently slow it down.
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Has anyone seen any performance data of Debian 32-bit vs. Debian 64-bit? I'd like to go 64-bit, however my motherboard's onboard SATA controller (ATI SB600) is not supported in 64-bit unless I install using the latest testing build, and I'd rather stick to stable.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The only performance reason to consider a 64-bit system is if you want a single process to be able to address >4G of VM. 32-bit Linux systems can easily address >3G of memory with PAE enabled, with some caveats regarding the HIGHMEM/LOWMEM split that can cause out-of-memory situations even when you've got gobs of HIGHMEM free.

But you could also do a normal 32-bit install and then later install a 64-bit kernel. This will eliminate the memory segregation problems in the kernel but each process will still be limited to 3G of VM individually.
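If I remember right, Debian ships amd64 kernel flavors in the i386 archive, so the swap is roughly (exact package name may differ by release, so check apt-cache first):

```shell
# Find the available 64-bit kernel flavors, then install one
apt-cache search linux-image | grep amd64
apt-get install linux-image-2.6-amd64   # 64-bit kernel, 32-bit userland stays as-is
```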

And it seems pretty strange that a 32-bit kernel would support a device that the 64-bit one wouldn't since in 99.9% of cases the driver is exactly the same.
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
I find it odd as well. Apparently this ATI chipset is supposed to support 64-bit DMA but it actually doesn't. The fix isn't in the stable kernel yet.

I've got 32 bit Debian installed right now. My server has 4GB of RAM and will be running VMWare Server. Doesn't sound like I should have any memory issues.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Make sure all 4G of memory is actually seen, you might need a bigmem kernel to see it all.
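Quick way to check, assuming stock Debian packages:

```shell
free -m                                   # total should be near 4096, not ~3500
apt-get install linux-image-2.6-686-bigmem  # PAE-enabled kernel if it isn't
```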
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Now this is odd... installing linux-image-2.6.18-6-686-bigmem causes a kernel panic when the server reboots.

run-init: /sbin/init: Accessing a corrupted shared library
Kernel panic - not syncing: Attempted to kill init!

When I try rebooting again I get:

Running /scripts/init-bottom
request_module: runaway loop modprobe binfmt-0000

Google-fu-ing now...
 

child of wonder

Diamond Member
Aug 31, 2006
8,307
176
106
Not finding any solutions out there... it seems this ATI chipset does not play nice with Linux.

Ubuntu 7.10 might be worth a shot I suppose, but I'd prefer to use Debian.

Or I could use linux-image-2.6.18-6-686 kernel but only have access to 3.5GB of my RAM. :p