Question Virtualization: Need opinions on choosing a machine

b4u

Golden Member
Nov 8, 2002
1,379
2
81
Hi,

I'm helping a relative with an infrastructure problem, after trouble with one of their servers.

We are talking about a small company, with 2 servers and 3 desktops, for a total of 3 people working there.

Recently one of the servers crashed, so the time has come to re-think the infrastructure, and I was asked to help with it.

The first immediate step will be to put in a temporary machine to replace the broken server, reinstall the OS, and restore all the information from the broken server's hard disks (the data is intact and working backups exist), so people can go on with their daily work.

Then it will be a matter of deciding what to do next. My thought would be to get a refurbished server (trying to keep the budget within limits) and move on to virtual machines.

The needs:
- 1x Windows Server 2019, 8GB RAM, for an SQL Server instance (with 3x 1GB databases)
- 1x Windows Server 2019, 8GB RAM, for an SQL Server instance (with 100x 200MB databases)
- 1x Windows Server 2009, 4GB RAM, for Domain Controller, AD, DHCP, DNS, file sharing, etc.
- 1x Linux server, 4GB RAM, for a small cloud (sharing files remotely between employees)

These are the bare-minimum machines I'm thinking of. There would be two SQL Server machines, as each would serve a different purpose and may require different software installations, hence the separation into two virtual machines. Each SQL Server machine will have around 4 desktops working against it, with fairly light usage.

To do this, I was thinking about the following machine:
- HP ProLiant DL380 G7
- 72GB DDR3-10600R RAM
- 2x Intel Xeon X5650 (six cores each)
- 8x HP 300GB SAS 10K 2.5"
- RAID controller with 512MB cache


So my thinking is as follows, and I would appreciate any comments on these points and on the solution overall:

1- The first obvious question: would this machine be more than enough for the job? Or should I be looking for something else?

2- As for provisioning, from my calculations it has enough resources: enough RAM for all the machines, plus 8x 300GB disks. I would split the disks into 3 groups: 3x 300GB in RAID 5 (600GB usable, for the OS boot partitions), another 3x 300GB in RAID 5 (600GB usable, for additional data disks, as I don't want to put data on the C: drives), and finally 2x 300GB left unused, as spares in case any of the other disks needs replacing. It seems almost too easy, and I'm worried it is more complicated than this in a real usage scenario. Would this be a good config?
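To double-check my own math on point 2, here's a tiny sketch of the RAID 5 capacity arithmetic (the helper function is purely illustrative, not any vendor tool):

```python
def raid5_usable(disks, size_gb):
    # RAID 5 spends one disk's worth of capacity on parity.
    assert disks >= 3, "RAID 5 needs at least 3 disks"
    return (disks - 1) * size_gb

os_array = raid5_usable(3, 300)    # 600 GB usable for OS partitions
data_array = raid5_usable(3, 300)  # 600 GB usable for data
spares_gb = 2 * 300                # two 300 GB disks kept as spares

print(os_array, data_array, spares_gb)  # 600 600 600
```

So the 600GB + 600GB + 2 spares layout adds up exactly to the 8 bays.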

3- I'm weighing VMware vs XenServer. Cost is important, and so are features, but I don't think I need something like live migration or dynamic resource allocation; if I need to migrate a machine or add resources, I can easily schedule downtime with the users. So would the open-source XenServer be the better choice, or should I go with VMware?

4- That machine comes with 300GB SAS drives. I would probably want that space for peace of mind, even though my initial thought was to go for 146GB 15K SAS drives, just because they would be quicker and cheaper/easier to buy. Should I plan on this machine with all 300GB disks, or use some 146GB 15K drives just for the OS VM disks (for instance 4x 146GB 15K plus 4x 300GB 10K)?

5- I've also stumbled across some info on the web stating that VMware ESXi 6.7 (the latest version) does not support the DL380 G7. Would this be true?

Your help would be very much appreciated.


Thank you in advance.
b4u
 
Last edited:

crashtech

Lifer
Jan 4, 2013
10,435
2,048
136
I'm following this thread because this sort of thing interests me. I don't have enough knowledge to implement such a scheme, but I would be concerned about the age of the machine being considered; it must be close to a decade old. I haven't looked for comparable newer ones, but I do know that Haswell-EP CPUs have come down drastically in price, are far more capable than the Westmere chips currently proposed, and will not be obsolete out of the box.
 

crashtech

Lifer
Jan 4, 2013
10,435
2,048
136
I guess I ought to provide an example of what kind of hardware might be more suitable. It's difficult to find economical servers with drives, since companies upgrading often remove them rather than secure erase them. Here's a short list that will build a 2U Supermicro server, all the items add up to about $1111:

Supermicro 2U server with 2x 2695v3, 64GB DDR4, 2x 300GB SAS HDDs.

Eight more HDDs

Edit: I found one with drive trays.

Twin 14-core CPUs are overkill, but that was what popped up when I looked. For what you get, that unit is a pretty good deal; a less expensive one with a lower core count is probably out there. So-called refurbished servers are usually a lot more money for the same thing; all they usually do is blow them out with compressed air and boot them up to see if they work. So, since you'll be running it through its paces anyway, you might as well buy outright used.
 

b4u

Golden Member
Nov 8, 2002
1,379
2
81
I guess I ought to provide an example of what kind of hardware might be more suitable. It's difficult to find economical servers with drives, since companies upgrading often remove them rather than secure erase them. Here's a short list that will build a 2U Supermicro server, all the items add up to about $1111:

Supermicro 2U server with 2x 2695v3, 64GB DDR4, 2x 300GB SAS HDDs.

Eight more HDDs

Edit: I found one with drive trays.

Twin 14-core CPUs are overkill, but that was what popped up when I looked. For what you get, that unit is a pretty good deal; a less expensive one with a lower core count is probably out there. So-called refurbished servers are usually a lot more money for the same thing; all they usually do is blow them out with compressed air and boot them up to see if they work. So, since you'll be running it through its paces anyway, you might as well buy outright used.

I'll have a look into supermicro servers. Some time ago I was looking into building a NAS box and I've looked into supermicro motherboards (new) as they looked very robust for the job. They have good products, time to search for refurbished servers ...

I'm also starting to look into Dell servers, like a Dell PowerEdge R610 with 2x hex-core Xeon X5650 2.66GHz and 24GB DDR3, but it seems to be a bit worse than the HP ProLiant DL380 G7. They both have the same processors, but HP hardware seems easier and cheaper to find.

There are so many good refurbished servers around that it just feels wrong for software like VMware to drop support for them. They may not be as fast or as efficient as the latest ones, but their usefulness depends on the needs of their users. It's very tempting to get a refurbished server for the job.
 

mxnerd

Diamond Member
Jul 6, 2007
6,799
1,098
126
Why not run Hyper-V machines instead of ESXi? It's a lot easier. ESXi is way too complex for a small company.

RAID 1 (2 disks) for the boot OS, RAID 6 (4 disks) for data, and 2 disks as spares.

Windows Server has the Anywhere Access service, which allows remote file sharing, so there would be no need for another Linux server.

==

Oops, didn't know MS removed all those Easy Remote Access features.

Maybe a Synology/QNAP NAS can fit the bill better?

 
Last edited:

b4u

Golden Member
Nov 8, 2002
1,379
2
81
Well, a few weeks later, and as of today, I'm still looking for a good solution, BUT:

I put up a computer to temporarily replace the broken server, and after some testing with virtualization software I opted for Proxmox (I also tried XCP-ng).

I can say that I'm pretty satisfied with the software; it does a very good job of managing the VMs, and at the moment I have 2x Windows Server 2019 running there.

I'm still torn between buying a refurbished server and building one (possibly also from refurbished hardware in good condition). Either way, once the server is in place it will be a matter of installing Proxmox and transferring the VMs to the new box.
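As a note to self on the transfer step, moving Proxmox VMs between boxes can be done with a backup/restore pair. A hedged sketch that only builds the command lines (vzdump and qmrestore are the real Proxmox CLI tools, but the VM id, storage names, and archive name below are made-up examples):

```python
def vzdump_cmd(vmid, storage):
    # Snapshot-mode backup of one VM to the named Proxmox storage.
    return ["vzdump", str(vmid), "--mode", "snapshot", "--storage", storage]

def qmrestore_cmd(archive, vmid, storage):
    # Restore the dump archive as VM <vmid> on the target host.
    return ["qmrestore", archive, str(vmid), "--storage", storage]

print(" ".join(vzdump_cmd(100, "local")))
# vzdump 100 --mode snapshot --storage local
```

The archive then just needs to be copied to the new host and restored there with qmrestore.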

As for the hardware, I recently found a refurbished Supermicro server as below (for around 400€ including postage), but I'm still looking every day for more opportunities.
  • Supermicro 2U chassis 825TQ-563LPB

  • Supermicro X9DRi-F motherboard

  • 2x Intel Xeon E5-2620 v1 (6 cores, 2.0GHz)

  • 4x 8GB DDR3-1333MHz ECC Registered DIMMs (32GB total)
  • 8x 3.5" Hard Drive Bays including caddies (no disks)
  • 2U Supermicro Rail Kit
  • PSU 560w

The board supports up to the Xeon E5-2600 v2 family, so I believe I can later buy 2x Xeon E5-2680 v2 2.8GHz 10-core for less than 200€, which together with some more RAM would bump up performance if need be.

Also, since I'm looking at SAS, I would have to buy a SAS controller, and the included hard drive bays probably won't be usable; I would have to install some 2.5" caddies in the case's 5.25" drive bays, getting something like:


Boy, it's complicated to try to build up a server with refurbished hardware!
 

crashtech

Lifer
Jan 4, 2013
10,435
2,048
136
@b4u , boards like the X9DA7 have SAS onboard. Supermicro has apparently changed their site recently and made it more difficult to view discontinued models, unfortunately.
 

b4u

Golden Member
Nov 8, 2002
1,379
2
81
@b4u , boards like the X9DA7 have SAS onboard. Supermicro has apparently changed their site recently and made it more difficult to view discontinued models, unfortunately.

You must be referring to the following Supermicro X9DA7 mobo:

Uhm, I'll see if I can find one around. For the moment, the one I described in my last post does not have an onboard SAS controller. In fact, some of the mobos I've been finding do not include onboard SAS, which is a bit odd... wouldn't we all expect onboard SAS on a mobo targeted at server builds?

But then again... the most common SAS is 6Gb/s, with 10K and 15K disks. A more recent SAS revision goes up to 12Gb/s, but the price is higher, of course. Now, looking at 6Gb/s SAS, would it really be beneficial versus SATA III 6Gb/s (with 7200rpm drives)?

Maybe SAS wins only because the drives are more robust, but then again a Western Digital HGST drive would be a very robust one at a lower price, so it makes me wonder whether SAS would really be the beneficial choice here.
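To put rough numbers on the 15K SAS vs 7200rpm SATA question: sequential bandwidth is capped by the same 6Gb/s interface either way, so the difference shows up in random I/O. A back-of-envelope estimate (the seek times below are typical ballpark figures, not measurements of any specific drive):

```python
def est_random_iops(rpm, avg_seek_ms):
    # Average rotational latency is the time of half a revolution.
    rotational_ms = 0.5 * 60000.0 / rpm
    return 1000.0 / (avg_seek_ms + rotational_ms)

print(round(est_random_iops(15000, 3.5)))  # ~182 IOPS (15K SAS)
print(round(est_random_iops(7200, 8.5)))   # ~79 IOPS (7200rpm SATA)
```

So a 15K drive gives roughly double the random IOPS, even though both sit on a 6Gb/s link; for light SQL workloads that may or may not matter.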
 
Last edited:

b4u

Golden Member
Nov 8, 2002
1,379
2
81
Uhm ... I found a nice combo:

  • Supermicro X9DA7

  • 2x Intel E5-2670 v1

  • RAM: 128GB (16x 8GB dual-rank) Samsung M391B1G73BH0-CK0 8GB 2Rx8 PC3-12800E-11-11-E3

  • 2x Heatsink and Fan Intel E62476-001 CNFN5462J1
(should be a very basic cooling solution)


The total price is around 400€; I must keep an eye on it, even though I will also need a chassis, which can be a tower that accommodates an E-ATX board.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
I like the idea of making use of cheap old server hardware, but at the same time I wonder about the price & performance compared to something like my 1st gen Ryzen 1700 with 64GB ECC. Not to mention the power consumption o_O

Of course, one of the problems with a rig like mine is the lack of RAM slots, and getting sufficiently dense ECC UDIMMs might ruin the value proposition, but from the OP it seems 32GB is close to being enough anyway...

Also, SAS drives and RAID controllers are so 2004, man. Again, dedicated pro hardware has an inherent appeal, and there may be practical advantages, such as if you also spring for some sort of battery/capacitor power protection. But I have it on good authority that all the cool kids just use (modern) software RAID and (modern) consumer drives these days.
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
I have two SuperMicro servers right now.

The first is an SC825 chassis, X9DRi-LN4F+, 128GB DDR3, 2x 2620 v2, 2x Samsung Evo 512GB boot SSDs, and 8x 10TB WD Red storage drives. I have also added Chelsio 10GbE and 40GbE network cards, an HP H220 SAS controller, and a pair of NVMe PCIe adapters with Intel 660p drives.

It runs Windows Server 2019 with 9 VMs: 2x domain controllers, 2x SQL Server DB servers, 1x IIS server, 1x FreeNAS, 1x Windows 10, and 2x general app servers.
With all that going, it draws around 300 watts and has no problem managing it all.

The second is an SC846 chassis, X9DRi-F, 128GB DDR3, 2x 2650 v2, 24x 6TB HGST/Seagate IronWolf drives, and a 1TB 960 EVO, with Chelsio 10GbE and 40GbE network cards. It also has an SC847 JBOD enclosure with 18x 1TB SSDs.

It runs FreeNAS on bare metal; the CPUs are probably overkill, but both systems came built, so I don't feel like swapping things around.


I wanted to point out what these are capable of doing; something along these lines should suffice for your needs. Also, if you have any specific questions about the hardware, let me know.

(attached photo: IMG_5929.jpg)
 

b4u

Golden Member
Nov 8, 2002
1,379
2
81
I like the idea of making use of cheap old server hardware, but at the same time I wonder about the price & performance compared to something like my 1st gen Ryzen 1700 with 64GB ECC. Not to mention the power consumption o_O

Of course, one of the problems with a rig like mine is the lack of RAM slots, and getting sufficiently dense ECC UDIMMs might ruin the value proposition, but from the OP it seems 32GB is close to being enough anyway...

Also, SAS drives and RAID controllers are so 2004, man. Again, dedicated pro hardware has an inherent appeal, and there may be practical advantages, such as if you also spring for some sort of battery/capacitor power protection. But I have it on good authority that all the cool kids just use (modern) software RAID and (modern) consumer drives these days.

Well, in fact I'm targeting a minimum of 32GB RAM and 2x Xeon processors; that gives me room to assign 2x vCPU and 4GB to each machine (and more RAM if some VM needs it). Power is of course important, but I don't know if, in the long run, buying a more expensive machine to save on power costs would pay for itself (I really don't know).

For storage I'm targeting some SAS hot-plug caddies for speed and ease of replacement, but even though the controller may allow RAID 5/6, I'm seriously thinking of going with ZFS and raidz2 for 2-drive redundancy, so it would in fact be a software RAID solution. Maybe I'll have to go a bit higher on RAM, since ZFS needs memory.
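A quick sketch of the raidz2 capacity math, to keep myself honest (drive counts and sizes are illustrative; this ignores ZFS metadata and slop space):

```python
def raidz2_usable(disks, size_gb):
    # raidz2 spends two disks' worth of capacity on parity,
    # regardless of how wide the vdev is.
    assert disks >= 4, "raidz2 needs at least 4 drives"
    return (disks - 2) * size_gb

print(raidz2_usable(6, 300))  # 1200 GB usable, survives any 2 failures
print(raidz2_usable(8, 300))  # 1800 GB usable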

The thing about SAS is that I really don't know if a 6Gb/s 15K disk will be better than a WD HGST 7200rpm SATA III 6Gb/s disk. The bandwidth is theoretically the same; SAS may be more robust, but WD HGST should be at the same level.

Thanks
 
Last edited:

b4u

Golden Member
Nov 8, 2002
1,379
2
81
I have two SuperMicro servers right now.

The first is an SC825 chassis, X9DRi-LN4F+, 128GB DDR3, 2x 2620 v2, 2x Samsung Evo 512GB boot SSDs, and 8x 10TB WD Red storage drives. I have also added Chelsio 10GbE and 40GbE network cards, an HP H220 SAS controller, and a pair of NVMe PCIe adapters with Intel 660p drives.

It runs Windows Server 2019 with 9 VMs: 2x domain controllers, 2x SQL Server DB servers, 1x IIS server, 1x FreeNAS, 1x Windows 10, and 2x general app servers.
With all that going, it draws around 300 watts and has no problem managing it all.

The second is an SC846 chassis, X9DRi-F, 128GB DDR3, 2x 2650 v2, 24x 6TB HGST/Seagate IronWolf drives, and a 1TB 960 EVO, with Chelsio 10GbE and 40GbE network cards. It also has an SC847 JBOD enclosure with 18x 1TB SSDs.

It runs FreeNAS on bare metal; the CPUs are probably overkill, but both systems came built, so I don't feel like swapping things around.


I wanted to point out what these are capable of doing; something along these lines should suffice for your needs. Also, if you have any specific questions about the hardware, let me know.

Well, I have no experience with Supermicro. They seem to make good hardware, and one thing I hate about HP, which doesn't seem to happen with Supermicro, is that you need an active support account with them just to update the BIOS. It feels wrong to have to pay a bill to get a BIOS update.

I really like your hardware, even though it seems overkill for my needs, but the fact that you're running fine on X9 mobos and E5-2600 v2 Xeons is quite refreshing, as I'm considering exactly such CPUs, and those mobos can be found at a better price.

From what I see you have gone for SATA III 6Gb/s drives; a lack of SAS controllers was something I noticed on some Supermicro mobos. How are your systems dealing with them? Do you think you're missing anything by not going SAS?

You have SSDs to boot from. I was not thinking of going that way at the moment (maybe I'll think about it in the meantime); instead I thought about 3x USB 3.1 32GB drives booting the VM software in a raidz2 config. I'll have to check some SSD drives, as they may be the better and safer alternative. Those would be just to boot and store some ISOs; the VMs would be on more robust drives for sure, and if going SATA they would most probably be WD HGST disks.

Does supermicro have some bootable software that we can use to test the system?

Thanks


For quick reference:
 
Last edited:

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
I'm using LSI/Broadcom-based controllers. No problems with the SATA drives, and they are plenty fast in my experience. As for boot media, SSDs are very affordable and more robust than flash drives.

For example: https://www.ebay.com/itm/LSI-SAS-92...e=STRK:MEBIDX:IT&_trksid=p2057872.m2749.l2649

Supermicro is like any other enterprise server: there's a BIOS where you can set the boot order/device, and then IPMI for remote control, like HP's iLO.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
Well in fact I'm targeting a minimum 32Gb RAM and 2x Xeon processors, that will give me room for assigning 2xvCPU and 4Gb for each machine (may give more RAM if some VM needs it). Power if of course important, but dunno if at the long run, buying a more expensive machine to save power costs will compensate for it (really dunno).

For storage I'm targeting some SAS hot-plug caddies for speed and easy of replacement, but even though it may allow raid5/6, I'm thinking seriously on going with ZFS and raidz2 for 2 drive redundancy, so that will in fact be a software raid solution. Maybe I'll have to go up a bit higher in RAM since ZFS needs memory.

The think about SAS is that I really don't know if a 6Gbps 15k disk will be better than a WD HGST 7200rmp SataIII 6Gbps disk. Bandwidth is theoretically the same, SAS may be more robust but WD HGST should be at the same level.

Thanks
Yeah, I get the appeal of pro hardware, including a dual-CPU setup, but again I'd be interested in the price/performance vs. something like my "old" 1700. I know pro hardware includes more features, but I know many people with pro hardware who could easily have used consumer stuff, and I've even worked in the pro sector with racks full of consumer hardware.

I use ZFS myself and it's true it's pretty heavy on RAM, but the "tiny" drives you've mentioned aren't the normal use case and I doubt they'd need more than a couple of gigs. Plus, if you want max performance, go with SSDs. Personally, I would avoid any extra controllers if possible; they just add one more layer of hardware and software to go wrong, and at this scale I doubt the extra performance (if any) would be worth it vs. onboard SATA (or the most basic PCIe SATA card).

*snip* ...SSDs to boot from, I was not thinking going that way at the moment (maybe I'll think about it in the meanwhile), but I thought about 3x USB3.1 32Gb drives booting VM software on a raidz2 config. I'll have to check some SDD drives, as they may be better and safer alternatives. Those would be just for boot and store some ISOs, the VMs would be on more robust drives for sure, if going SATA they would most probably be WD HGST disks.
Obviously it's possible to use USB drives, but their latency (especially for random I/O) is just awful. And you'll need 4 of them for a raidz2 setup. Even the concept of a USB flash drive raidz2 setup makes me think you're more interested in fantasizing about hardware than in practical solutions. That's why I've been playing devil's advocate and at least bringing up the idea of consumer-level hardware. Whatever floats your boat, of course, and I've been guilty of living in a fantasy world myself, so don't think I'm being too judgy. They say the world needs dreamers.
 

b4u

Golden Member
Nov 8, 2002
1,379
2
81
I'm using LSI/Broadcom-based controllers. No problems with the SATA drives, and they are plenty fast in my experience. As for boot media, SSDs are very affordable and more robust than flash drives.

For example: https://www.ebay.com/itm/LSI-SAS-9207-8i-SATA-SAS-6Gb-s-PCI-E-3-0-Host-Bus-Adapter-IT-Mode-SAS9207-8i-US/123713611833?ssPageName=STRK:MEBIDX:IT&_trksid=p2057872.m2749.l2649

Supermicro is like any other enterprise server: there's a BIOS where you can set the boot order/device, and then IPMI for remote control, like HP's iLO.

Yeah, I get the appeal of pro hardware, including a dual-CPU setup, but again I'd be interested in the price/performance vs. something like my "old" 1700. I know pro hardware includes more features, but I know many people with pro hardware who could easily have used consumer stuff, and I've even worked in the pro sector with racks full of consumer hardware.

I use ZFS myself and it's true it's pretty heavy on RAM, but the "tiny" drives you've mentioned aren't the normal use case and I doubt they'd need more than a couple of gigs. Plus, if you want max performance, go with SSDs. Personally, I would avoid any extra controllers if possible; they just add one more layer of hardware and software to go wrong, and at this scale I doubt the extra performance (if any) would be worth it vs. onboard SATA (or the most basic PCIe SATA card).

Obviously it's possible to use USB drives, but their latency (especially for random I/O) is just awful. And you'll need 4 of them for a raidz2 setup. Even the concept of a USB flash drive raidz2 setup makes me think you're more interested in fantasizing about hardware than in practical solutions. That's why I've been playing devil's advocate and at least bringing up the idea of consumer-level hardware. Whatever floats your boat, of course, and I've been guilty of living in a fantasy world myself, so don't think I'm being too judgy. They say the world needs dreamers.

Yes, I guess you're both right. I would be better off with some SSD drives for the VM software boot, as that seems the most reasonable choice for a working environment, not just a test/lab environment.

As for disks, at the moment I'm more inclined to go for SATA, as the performance will most probably be sufficient for the needs, but then I'll need to check for controllers, as the onboard ones may not provide more than 2-4 SATA III connectors. I'm also very much inclined to forget about RAID 5/6 (I'd go SAS only if it improved speed) and opt for ZFS, even if that means getting some more RAM to be on the safe side.

The fact that NVMe SSDs are getting cheaper while gaining capacity is very tempting:
  • SSD M.2 2280 Intel 660p 1TB QLC NVMe is selling for 123€
  • SSD M.2 2280 Corsair Force Series MP510 960GB 3D TLC NVMe for around 155€

An adapter like the following one would cost me an extra 20€ at most:

At the moment I'm targeting enterprise hardware for its endurance, even though there is some not-so-good enterprise hardware around, and some consumer hardware is very good and can take a place in the enterprise world. I would prefer Xeon for ECC memory support (even though AMD consumer hardware also supports ECC), together with a nice mobo from a maker that has proven its value. I have no experience with Supermicro, but I've found very good feedback about them over the last 3-5 years.

All comments and real world experiences I've been receiving so far are very useful to make an educated choice, keep them coming this way, they are very much appreciated :)
 

KentState

Diamond Member
Oct 19, 2001
8,397
393
126
That's the adapter I use with my 660p 1TB drives. I have them mirrored using Storage Spaces and have the VMs installed on them. Not sure I would trust that approach in a production environment. If you have enough spinning disks, they will be sufficient for what you described. The 660p drives are good for short bursts of writes, but they dip below the performance of a single HDD after the cache is exhausted. For a little more money you can find Samsung 970 Evos (not Plus), which are much better. The Intel drives were meant for light home use; think Steam game library.
 

b4u

Golden Member
Nov 8, 2002
1,379
2
81
Hi Again,

I've just stumbled across the following server, selling for around 750€:

Chassis: Supermicro CSE-826 (2U, 12x 3.5" bays at front, 2x 2.5" bays at back)
Motherboard: Supermicro X9DRD-7LN4F
CPU: 2x Intel Xeon E5-2620 v2 @ 2.1GHz, 6 cores each (12 cores total)
HDD: 2x 2.5" bay mounts with 1x 120GB OS SSD; 2x 3.5" SATA III 6TB HGST 7200rpm enterprise HDDs
Memory: 128GB DDR3 ECC
RAID: LSI SAS - Supports Integrated RAID (RAID 0/1/10/1E)
Rack kit included
Backplate: BPN-SAS2-826EL1
PSUs: 2x Supermicro 920W 1U Redundant Power Supply


It seems to be more than enough for my VM needs, something like:
- 4x Windows Server 2019;
- 2x Linux Ubuntu Server.
No machine would need more than 4GB RAM at this time, no more than 2 vCPUs, and 40-70GB of disk space for a start.
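A quick headroom check of that VM plan against the server's 12 cores and 128GB of RAM (the per-VM figures are the ones above; 70GB is the top of the stated disk range):

```python
vms = 6                            # 4x Windows Server 2019 + 2x Ubuntu Server
vcpus_per_vm, ram_per_vm, disk_per_vm = 2, 4, 70   # plan per VM (RAM/disk in GB)
host_cores, host_ram_gb = 12, 128                  # 2x E5-2620 v2, 128GB DDR3

total_vcpus = vms * vcpus_per_vm   # 12 vCPUs, 1:1 with physical cores
total_ram = vms * ram_per_vm       # 24 GB, leaving ~100 GB of headroom
total_disk = vms * disk_per_vm     # 420 GB at the top of the range

print(total_vcpus, total_ram, total_disk)  # 12 24 420
```

So even without any vCPU oversubscription the box covers the plan comfortably.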

The disks will be thoroughly tested and used for something like keeping external backups of some data; I normally have a hard time trusting disks I didn't buy myself. I'm warming to the idea of putting in 2x SATA SSDs for the VM OSes, then some 4x SSDs for the VM data itself. I'm sure I can find a reasonable price somewhere (edit: uhm, no I can't).

From your knowledge and experience, would this be a good server for the job?

Many thanks.
 
Last edited:

b4u

Golden Member
Nov 8, 2002
1,379
2
81
Uhm ... any opinion on the above build for a VM server?

I'm also worried about power consumption, as stated in a previous message. This case has 2x 920W PSUs, which may be more than needed; for the job I could use another chassis that comes with a 450W PSU, but I don't think 450W would suffice for such hardware.

I don't know how much it would cost per year to run such a machine; I have no clue how much power a server like this would draw.
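For a rough idea, the yearly running cost is just average draw times hours times tariff. Note that 2x 920W is the PSUs' rated capacity, not what the box actually draws, and a redundant pair doesn't double consumption. A sketch with made-up numbers (adjust the wattage and EUR/kWh to the real situation):

```python
def yearly_cost_eur(avg_watts, eur_per_kwh, hours=24 * 365):
    # kWh consumed over a year, multiplied by the tariff.
    return avg_watts / 1000.0 * hours * eur_per_kwh

print(round(yearly_cost_eur(250, 0.22)))  # ~482 EUR/year at a 250W average
print(round(yearly_cost_eur(150, 0.22)))  # ~289 EUR/year at a 150W average
```

A kill-a-watt style meter on the temporary server would give a much better input number than guessing.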

Thanks
 
Last edited: