Looking for advice

undeadraver

Junior Member
Oct 16, 2014
6
0
0
Hello everyone,

I am looking for advice on how to achieve the following setup.

I need to build a fairly complex lab using 2 computers that will be running Hyper-V and 1 computer that will be running Server 2012 R2 as an iSCSI provider.

For the 2 computers running Hyper-V, I was planning to have them run 32 GB of RAM each.
They will each have 2 disks in RAID 1 for the OS only; all the VMs will be on the third computer.

Now, this is where I'm not sure what to pick:

For the motherboard I was hoping for something with 2 network ports.
For the CPU I was hoping for something with 8 cores, maybe an i7?
For the GPU I was hoping I could use the one integrated in the CPU.

For the computer running the iSCSI target, I was planning on a simple build: a motherboard with 2 network ports, 8 GB of RAM, an i5, 2 SSDs in RAID 1 (SATA3), and 3 hybrid drives in RAID 5.


I was wondering if this makes sense to you guys; being a software guy and not a hardware guy, I kind of need help with that part.

The purpose of this setup is to get a test domain up and to be able to test the hypervisor with around 10 VMs running at the same time.

I live in Canada and plan on buying on the web (NCIX, Newegg, Tiger Direct).

Any advice on which components to get would help me a lot.

I was thinking of spending around $2k for this.


Thanks in advance
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
It makes sense, yes, but I don't know if 2kCAD is doable.

iSCSI is what you're trying to get at, methinks.

Don't put the HDDs in RAID 5 with many VMs. Leave RAID 5 on HDDs for your home movies and music.
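The reason, in back-of-envelope form: each small random write on RAID 5 costs four disk I/Os (read data, read parity, write both), versus two on RAID 10 (one per mirror side). A rough sketch of the math, assuming ~75 random IOPS per 7200rpm drive and four drives (my figures, not gospel):

Code:
# Rough write-IOPS ceilings; the per-drive IOPS figure is an assumption.
$perDrive = 75
$drives   = 4
$raid5  = ($perDrive * $drives) / 4   # RAID 5: 4 I/Os per random write
$raid10 = ($perDrive * $drives) / 2   # RAID 10: 2 I/Os per random write
"RAID 5 : ~$raid5 write IOPS"
"RAID 10: ~$raid10 write IOPS"

That's roughly 75 write IOPS for the whole RAID 5 array, which a handful of busy VMs will eat immediately.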

Don't bother with SSHDs.

An i7 will surely be too much for your budget.

Can you order from Computer Upgrade King in the U.S. at all (they are an Amazon seller as well as having their own site); and if so, any idea how much more it will be? Their TS140 deals are pretty much impossible to beat.

With a Silverstone PS08B, i5-4430, ASRock H97M Anniversary, 32GB RAM, single 1TB HDD, and 300W PSU (Seasonic SS-300ET), I'm at over $700 already. With just 1TB RAID 1 on each machine, I go over 2kCAD total, even dropping down to i3-4150s.

Even used, it's a tall order due to the RAM and drives, as they can take half or more of 2kCAD by themselves. Unless a U.S.-sourced TS140 for <$300 CAD each can be used, I am skeptical that 2kCAD is a reasonable figure. Even then, it will be close, unless you have a line on some cheap used/refurb hardware upgrades.
 

undeadraver

Junior Member
Oct 16, 2014
6
0
0
The $2k figure I can play with easily, since to be honest the first purchase could be 16 GB per machine, and for the disks I could end up waiting for Boxing Day to expand; for now I'd just get the basics to run some VMs for testing.

The only thing is that the motherboard doesn't have 2 Ethernet ports, so I would probably end up buying a PCI network card.

But looking at the prices at the moment:

Case: Silverstone Precision Series PS08 mATX Tower Case, Black, 2x 5.25, 4x 3.5 internal, 1x 2.5 internal, USB 3.0, no PSU, $44 CAD
CPU: Intel Core i5 4430 Quad Core 3.0GHz Processor, LGA1150 Haswell, 6MB Cache, Retail, $200 CAD
PSU: Seasonic SS-300ET, $40 CAD
RAM: 16 GB (2x8GB), around $175 CAD

So I'm only missing the disks; not too shabby. And for the third PC I don't need much RAM or CPU, so that machine will be cheaper, since it will only be running iSCSI (because it's cheaper to build a PC than to buy a full SAN).
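For reference, standing up the iSCSI target role on 2012 R2 is only a few cmdlets. A minimal sketch; the target name, path, size, and initiator IQNs are placeholders for whatever the hosts actually use:

Code:
# Install the iSCSI Target Server role (Server 2012 R2).
Install-WindowsFeature FS-iSCSITarget-Server

# Create a target restricted to the two Hyper-V hosts (placeholder IQNs).
New-IscsiServerTarget -TargetName "HyperVLab" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv1.lab.local",
                  "IQN:iqn.1991-05.com.microsoft:hyperv2.lab.local"

# Carve a LUN out of the data volume and map it to the target.
New-IscsiVirtualDisk -Path "D:\iSCSI\VMStore1.vhdx" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVLab" -Path "D:\iSCSI\VMStore1.vhdx"

On each Hyper-V box, the built-in initiator then just needs New-IscsiTargetPortal and Connect-IscsiTarget pointed at this machine.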

Let me know if this makes sense. Thanks.
 
Feb 25, 2011
16,994
1,622
126
1) What are you trying to accomplish here? Is this for learning or for production work?

Either way, if you intend to use the iSCSI machine as a SAN (feeding iSCSI LUNs to the VM hosts), you're firstly better off with four HDDs in RAID-10, and secondly you're way overspeccing by going with an i5 and SSDs, unless you've got BIG plans for expansion and want to keep the same SAN.

This would work for your iSCSI box:

http://ca.pcpartpicker.com/p/VpQgCJ

Just run FreeNAS off of a thumb drive for an OS, no need to get fancy with RAID-1 boot volumes. It'll be plenty reliable, honest.

If it's for production and/or moneymaking, I'd probably go with a more expensive build with ECC RAM, but otherwise... *shrug*

This is as cheap as I can come up with for your 2 Hyper-V boxes. It does put you a bit over the $2k total though.

http://ca.pcpartpicker.com/p/bwDKNG
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
This is as cheap as I can come up with for your 2 Hyper-V boxes. It does put you a bit over the $2k total though.

http://ca.pcpartpicker.com/p/bwDKNG

You could go with the Core i5 4590 for $210 ea. on the Hyper-V boxes to save ~$200, which would just squeak you under $2k total. You're unlikely to see sustained CPU loads in a lab environment, so the loss of HT is totally acceptable in my opinion.
 

undeadraver

Junior Member
Oct 16, 2014
6
0
0
Well, the exact needs are the following:

I will be running 2 Hyper-V Server 2012 R2 hosts, and I'm planning to have only the Hyper-V OS on the local disks.
They will run the following:
Machine 1: DC/DNS/DHCP
Machine 2: SCCM
Machine 3: SCOM
Machine 4: SCVMM
Machine 5: Orchestrator
Machines 6, 7: SQL Server 2012 with 4 instances (2 each)
Machine 8: Remote access of some sort
Machines 9, 10, 11: Clients (Windows 8.1, Windows 10)

Other machines may be added from time to time.

These are just test machines for practice and training. They will also be used to show some of the services to customers. I don't think the CPU will be used much; it will mostly be disk access for all the DBs.
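For what it's worth, once the hosts and the iSCSI LUN are up, that whole list can be stood up from PowerShell. A rough sketch; the names, paths, and sizes below are hypothetical:

Code:
# Hypothetical VM names matching the list above; adjust paths and sizes.
$vms = "DC01","SCCM01","SCOM01","SCVMM01","ORCH01","SQL01","SQL02","RAS01"
foreach ($name in $vms) {
    New-VM -Name $name -Generation 2 -MemoryStartupBytes 2GB `
           -NewVHDPath "V:\VMs\$name.vhdx" -NewVHDSizeBytes 60GB `
           -SwitchName "LabSwitch"
}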

Let me show you what I am building now and give me your opinion.

Hyper-V computer (x2)

Motherboard: http://www.ncix.com/detail/asrock-h97m-pro4-lga-1150-0a-97212-1246.htm - $105
RAM: http://www.ncix.com/detail/corsair-vengeance-heatspreader-cmz16gx3m2a1600c9-16gb-03-72016-1246.htm - $170
HD: http://www.ncix.com/detail/sandisk-ultra-plus-64gb-2-5in-e4-91500-1246.htm - $57
Tower: http://www.ncix.com/detail/apex-tx-606-u3-matx-htpc-case-29-86981-1469.htm - $47
CPU: http://www.ncix.com/detail/intel-core-i5-i5-4590-haswell-24-96202-1246.htm - $230


Only getting 16 GB of RAM per box for the time being; I will get another 16 GB and a 2-port network card during the Boxing Day sales.


SAN computer

Motherboard: http://www.ncix.com/detail/asrock-h97m-pro4-lga-1150-0a-97212-1246.htm - $105
Tower: http://www.ncix.com/detail/logisys-cs305-black-atx-tower-70-36020-1519.htm - $33
RAM: http://www.ncix.com/detail/g-skill-ripjaws-x-f3-14900cl9d-8gbxl-8gb-fa-58519-1246.htm - $90
CPU: http://www.ncix.com/detail/intel-pentium-g3220-dual-core-90-89866-1246.htm - $60
Disks: http://www.ncix.com/detail/toshiba-...7200rpm-34-77085-1246.htm?affiliateid=7474144 - $88 x4, RAID 10
Network card: http://www.ncix.com/detail/syba-dual-port-gigabit-ethernet-e4-76927-1513.htm - $37


For this machine I might get 2 more SSDs and put them in a RAID 1 just to host the DBs; that way all the OSes will run on the 2 TB RAID 10 and the DBs will run on a dedicated SSD RAID.

I will also need a router to be able to connect to my home network.

Thanks for the input :)

I'm buying everything from the same place because it's easier and I've never had any issues with them in the past.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
It's helpful if you list out the part names in addition to linking to them so that people who are familiar with the parts don't have to click through every link.

Anyway, the parts you've listed out are fine, and give you enough slack to put in some higher quality PSUs like the Corsair CX430 (or at least buy one as a spare in case one of the bundled PSUs goes out at an inopportune time). You're definitely missing out on some deals by going with a single vendor instead of spreading it out, and that is limiting you to 16GB per box at the start instead of 32GB per box.

I'd definitely upgrade the RAM before I added SSDs to the SAN box. With 11 VMs on 32GB of RAM total, you're looking at an average of 2.9 GB per machine (assuming page sharing saves you enough for the host OS). That's a little thin, and your database boxes will benefit from more RAM before they benefit from faster storage.
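One way to stretch the initial 32GB is Hyper-V Dynamic Memory, so idle guests give RAM back to the pool. A sketch with placeholder values (run it while the VMs are off):

Code:
# Enable Dynamic Memory on every VM; the byte values are example figures.
Get-VM | Set-VMMemory -DynamicMemoryEnabled $true `
    -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 6GB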

You don't really need a router, just a cheap gigabit switch to connect the boxes together and uplink to your existing router.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Just saw your latest post. That's blowing your budget by $400 and it's not significantly better than Dave's last two builds, which were less expensive (actually a little worse in some ways).
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Why do you think you need a pair of SSDs on your iSCSI box?
+1. That's >$200 for nothing. SSDs there make sense if and only if you are going to (a) get an LSI controller with CacheCade support (and then just 1), or (b) not use HDDs at all, for IOPS over space and money.
 

undeadraver

Junior Member
Oct 16, 2014
6
0
0
I am going with 2 SSDs so that I can host the DBs in a mirror, to make sure the load is balanced across the pair.

Pretty sure having 11 VMs plus the DBs running at the same time on the same RAID will make things slow.

So this is why I am going with this.

mfenn, I am not sure why you say it's worse for $400 more?

The CPU is better (I don't plan to overclock).
The disks for the SAN box are faster but have less space (should not be an issue for the time being).
Also, I have added a network card (the one in his build is not a network card; the link points to a cable).


For the switch I was looking for something with Layer 3 because I need to do VLANs, so I was maybe looking at a Netgear 24-port (since with just phase one of the lab I have 10 ports taken).

I am more than willing to make some changes; if you can point out directly what is flawed, that will help me a lot.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
I am going with 2 SSDs so that I can host the DBs in a mirror, to make sure the load is balanced across the pair.

RAID 1 doesn't work that way. Each drive will see the full write IOPS of the workload; the mirror doubles the writes down at the drive level instead of splitting the load.

Pretty sure having 11 VMs plus the DBs running at the same time on the same RAID will make things slow.

The way to make a small database fast is to have enough RAM on the DB server, not to put it on SSDs. Ideally you want the entire DB sitting in memory on the SQL server. The Windows iSCSI target stack running over Gigabit kills performance enough that the SSDs are largely superfluous.
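If you want to be deliberate about how much of that RAM each instance gets, it's a one-liner through sp_configure; a sketch (the instance name and the 4 GB figure are just examples):

Code:
# Pin the instance's memory ceiling explicitly (example: 4 GB).
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;
"@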

mfenn, I am not sure why you say it's worse for $400 more?

The CPU is better (I don't plan to overclock).
The disks for the SAN box are faster but have less space (should not be an issue for the time being).
Also, I have added a network card (the one in his build is not a network card; the link points to a cable).

Worse in some ways.

- The E3-1246 V3 is 100 MHz slower and $10 more expensive than the i7 4790 (though both are too expensive as I pointed out).
- The WD Black drives are the same speed (same platter density and rotation rate) as the Toshiba but smaller in capacity.
- You're right that PCPP screwed up the link on the NIC. The card you picked out is fine.

For the switch I was looking for something with Layer 3 because I need to do VLANs, so I was maybe looking at a Netgear 24-port (since with just phase one of the lab I have 10 ports taken).

VLANs are a layer 2 feature, not a layer 3 feature. If you want to use 802.3ad link aggregation (presumably you will do this with your multiple NICs) and 802.1q VLANs, then you are looking for a Layer 2 Managed switch. Something like this TP-LINK 24-port switch for $230 would be fine if you want something new.
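On the Windows side, 2012 R2's built-in NIC teaming is what pairs with an 802.3ad-capable switch; a minimal sketch (the adapter names are whatever Get-NetAdapter reports on your boxes):

Code:
# Team the two NICs with LACP; the switch ports must be in a matching LAG.
New-NetLbfoTeam -Name "LabTeam" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic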

However, you should also consider getting a used Cisco switch if you want to get some experience with a switch OS that you'd likely use on the job. Catalyst 2900 series switches can be had for pretty cheap on eBay.
 
Feb 25, 2011
16,994
1,622
126
However, you should also consider getting a used Cisco switch if you want to get some experience with a switch OS that you'd likely use on the job. Catalyst 2900 series switches can be had for pretty cheap on eBay.

Some other companies license the Cisco iOS to run their switches too. (Like the one I work for, which shall remain nameless.)

Worth checking into and/or keeping in mind, just in case you find a deal.
 

undeadraver

Junior Member
Oct 16, 2014
6
0
0
The Layer 3 switch is because I need routing between the different VLANs and the other network devices (to get internet access).

I was pretty sure I needed Layer 3 for this.

There's no way I can have all the DBs running in RAM.

And this is for most production places: if you have a DB that is 250 GB, will you give SQL 250 GB of RAM?


So I figure that having them run off the SSDs would give me better performance than having them run on the same RAID as all the machines' OSes.

Yeah, on the CPU front I was thinking it was overkill, but I kind of wanted the HT for easier core placement in the VMs.
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
Some other companies license the Cisco iOS to run their switches too. (Like the one I work for, which shall remain nameless.)

Worth checking into and/or keeping in mind, just in case you find a deal.

And many of the Cisco competitors have essentially cloned IOS (the original IOS with a big I), so you end up with very familiar interfaces for most switches from the "big guys".
 

mfenn

Elite Member
Jan 17, 2010
22,400
5
71
www.mfenn.com
The Layer 3 switch is because I need routing between the different VLANs and the other network devices (to get internet access).

I was pretty sure I needed Layer 3 for this.

You've got loads of VM capacity, so add a router as a VM! If you go with used Cisco gear, you'll most likely get the Layer 3-enabled firmware anyway, but it doesn't hurt to enable the routing feature on one of your VMs (such as the one that's running DNS).
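Turning a 2012 R2 guest into the inter-VLAN router is a two-step affair; a sketch, assuming the VM can see each subnet (the VM name and VLAN IDs below are examples):

Code:
# On the router VM: install RRAS and enable LAN routing.
Install-WindowsFeature Routing -IncludeManagementTools
Install-RemoteAccess -VpnType RoutingOnly

# On the Hyper-V host: trunk the lab VLANs to that VM's vNIC.
Set-VMNetworkAdapterVlan -VMName "DC01" -Trunk -AllowedVlanIdList "10,20,30" -NativeVlanId 1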

There's no way I can have all the DBs running in RAM.

What sort of database are you going to be running? Certainly the databases for the software products you've mentioned won't use more than a few gigs in a lab environment. Even for larger databases, you'll find that the "hot" pages are well-cached.

And this is for most production places: if you have a DB that is 250 GB, will you give SQL 250 GB of RAM?

Not sure what you're asking here.

So I figure that having them run off the SSDs would give me better performance than having them run on the same RAID as all the machines' OSes.

I think you're severely overestimating the load that a lab is going to put on the system, and also overestimating the capability of your network links to saturate a modest RAID array.
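The back-of-envelope, with assumed figures for a single GigE path and typical 7200rpm sequential throughput:

Code:
# Rough ceilings: one gigabit link vs. a 4-drive RAID 10 (assumed figures).
$gigE     = 1e9 / 8 / 1MB    # ~119 MB/s wire speed, before protocol overhead
$perDrive = 150              # assumed MB/s sequential per 7200rpm drive
$raid10   = $perDrive * 2    # reads stripe across the two mirror pairs
"GigE ceiling : ~{0:N0} MB/s" -f $gigE
"RAID 10 read : ~{0:N0} MB/s" -f $raid10

Even for plain sequential transfers, the single gigabit link runs out long before a 4-drive array does.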