
--Major ESXi Server Build--

krazylary

Junior Member
So I just signed a contract to be in a datacenter. I will have a full 42U, 40 amps of power, and a 1Gb connection (fiber to the switch). So... I am cancelling all my servers at HostGator and Liquid Web. I just bought the VMware vSphere Essentials Plus Kit. I want to build a top-notch server.

What I am going to be doing with this build: lots of VPS web hosting, business apps, Server 2008, Exchange Server, and such.

So my budget is $25k. What would you build? Please comment or suggest if you actually know what I am trying to accomplish.

So far I have:

2x Xeon E5-2680
Mobo: MBD-X9DRH-7TF-O
RAID card: 9271-8iCC (possibly 2)
SuperChassis CSE-846E26-R1200B
4x Memory: 3RSH160011R5H-64GQ


Here is where I am in limbo:

SSDs: my thought is the M500 CT480M500SSD1
HDs: Seagate CS

What do you think?
 
I would buy Dell R720xd or T620's (rack config) and add a bunch of SSD's. You can get thirty-two 2.5” hot-swap drive bays on the T620.

P.S. Don't be scared away by the web pricing. Get in touch with a Dell sales rep; they are pretty good about giving discounts.
 
Essentials Plus gets you 3 hosts with 6 CPUs. Since you list nothing to help us gauge your needs, I would go with Gunbuster's suggestion, except I would drop the SSD part because local storage is minimally useful in ESXi.

Basically, get Dell 620 / 720 boxes for hosts and attach them to some flavor of SAN. Exactly what you need, I can't help you with, because you didn't list your actual needs.

Building your own generally won't save you any money long-term, so I wouldn't bother with that. It will also likely not be on the VMware supported hardware list, so you may not get support.
 
You're going about this all wrong, and I think you should set aside some capital for a consultant.

What does your data look like (what percentage of IOPS are VMs, database transactions, etc.)?

How many users are you looking to support?

What does your return on investment look like (is this even feasible)?

Is there monitoring that you are getting that you need support for? (Many managed services companies offer monitoring probes to watch for issues in your environment)

You gave a budget for the project but how much do you have for support?

How much downtime is acceptable? Do your customers know this?

It sounds like you don't have any sort of HA in mind, and that is not acceptable under any sort of SLA (I can't imagine a business where multi-day downtime is acceptable).

You need at minimum 2 hosts, sized for the computing horsepower needed. I've seen from our own customers that HP tends to do a little better on pricing while being more frustrating with support, IBM is the opposite with ridiculous pricing but blistering-fast support, and Dell sits somewhere in the middle. You need to make these HA hosts with vMotion between them so that one can fail over to the other. Obviously, each server should be sized so that no server goes above 50% load at any time (ideally 40% load).
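
As a rough illustration of that sizing rule, here is a quick Python sketch (pure arithmetic; the host counts are hypothetical):

Code:
# N+1 sizing: with n hosts, the load each host can safely carry so that the
# survivors can absorb a full host failure. Pure illustration, not a VMware tool.
def max_safe_load_per_host(n_hosts: int) -> float:
    return (n_hosts - 1) / n_hosts

for hosts in (2, 3):
    print(f"{hosts} hosts -> keep each host under {max_safe_load_per_host(hosts):.0%}")
# 2 hosts -> 50% (the post suggests targeting 40% for extra headroom), 3 hosts -> ~67%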

You should be using virtualized storage for these 2 nodes. Get quotes for an EMC VNXe3100, as that is a good small-business start for virtualized storage that should get you around 5TB of raw storage for around $8 grand (not including support options) using 900GB SAS drives.

Lastly, you need switching and meshing. iSCSI or FC is fine depending on your needs or limitations imposed by your DC. If you expect to need a lot of bandwidth, going FC now may avoid costs later if you outstrip your iSCSI practicality, although moving to 10Gb can give you more headroom.

You need ports for management, ports for iSCSI (2 to 4), and ports for vMotion on each host; figure at least 6 ports per host as a baseline. Your SAN will also need at *least* 4 links into your iSCSI fabric, assuming you're starting cheap with 1Gb links. Of course, this needs to be distributed over two switches for high availability, and you should keep iSCSI on separate switches from regular traffic. So you're talking 4 switches just to start with best practices.
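
A rough port tally of that guidance in Python (the per-host breakdown is an assumption chosen to meet the 6-port minimum; adjust to your own design):

Code:
# Hypothetical per-host NIC budget based on the guidance above.
host_ports = {
    "management": 1,
    "vMotion": 1,
    "iSCSI": 2,        # 2-4 recommended
    "VM traffic": 2,   # assumed; not spelled out in the post
}
san_iscsi_links = 4    # at least 4 x 1Gb links from the SAN into the iSCSI fabric
hosts = 2

per_host = sum(host_ports.values())
total = hosts * per_host + san_iscsi_links
print(f"{per_host} ports per host, {total} switch ports total before growth")
# Spread each network across two switches, with iSCSI on its own pair => 4 switches.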

What about security? I assume you're not just taking a link into your environment? Is there a Datacenter provided firewall context you're going through? Or do you need to provide your own Security Appliance? Do you know how to manage one? If not, you probably need to look into a managed appliance.

And honestly, that's all just the beginning. When going out on your own, there is a lot that goes into building an environment, and it can be a lot of work. I'm not trying to discourage you in the slightest; I'm only trying to say that there are a lot of things that need to be managed here, and I want you to be sure you have your bases covered 🙂

If any of the above feels confusing to you, then here's your opportunity to learn! But you should also probably look into either hiring someone to manage this environment (if you don't have the resources), or at minimum look into getting a consultant to give you relevant suggestions. Good luck! 🙂
 
Excellent post by thecoolnessrune. To add to that:

1. Does the $25K include or exclude vSphere Essentials Plus (~$5K)?
2. How much storage do you need? (In addition to the "what does your data look like" question asked by tcr.)
3. Do you have to provide your own UPS, or is the rack on protected power?
4. Do you have any networking gear?
 
Thanks for the posts.

Gunbuster, I am checking out the Dells right now!


@imagoon After testing with some of my SSDs (Intel 520s in RAID 0) and ESXi, I found that I got better performance with the 9271-8iCC and CacheCade 2.0 enabled. So I have decided to go with 8x Seagate Constellation ES.3 ST4000NM0033 4TB with 2x Intel DC S3700 200GB. My needs currently are 13 private VPSes running cPanel/RHEL/CentOS with 4-6 cores (not dedicated) each, 8-10 GB RAM, and 50 GB of drive space, plus 4 Windows 7 boxes with 4 cores and 6 GB RAM. On the horizon are VMware Horizon and more VPSes.
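
For a rough sense of scale, a quick tally of those workloads in Python (high-end figures from the post; the Windows 7 disk size is an assumption since it isn't listed):

Code:
# Illustrative resource tally, not a sizing tool.
vps  = {"count": 13, "vcpu": 6, "ram_gb": 10, "disk_gb": 50}
win7 = {"count": 4,  "vcpu": 4, "ram_gb": 6,  "disk_gb": 50}   # disk assumed

def totals(*groups):
    return {k: sum(g["count"] * g[k] for g in groups)
            for k in ("vcpu", "ram_gb", "disk_gb")}

print(totals(vps, win7))
# -> {'vcpu': 94, 'ram_gb': 154, 'disk_gb': 850}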

@thecoolnessrune How am I going about this wrong? I know how to use VMware and I am confident in setting it up. I mainly wanted help with RAID questions, but you have helped me a lot in your post, so thank you. Don't worry, read on.

My data is mainly web serving. I pulled the numbers, and a good friend of mine (a VMware partner) looked into my IOPS data. My needs are not extremely high at this point.

My return on investment is positive cash flow from the first month, as I am splitting the colocation cost with 2 other business partners and we each get 14U, so the colocation cost on my end is $700 a month. I have the cash for the hardware, so I am not getting a loan, and currently my receivables are in the $3k a month range.
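
As a back-of-envelope check on those figures (deliberately ignoring support contracts, licensing renewals, and power overages, none of which are listed here):

Code:
# Simple payback arithmetic using the numbers in this post.
hardware = 25_000                 # cash outlay
colo_per_month = 700              # colocation share
receivables_per_month = 3_000     # current monthly receivables

margin = receivables_per_month - colo_per_month     # $2,300/month
payback_months = hardware / margin                  # ~10.9 months
print(f"Monthly margin: ${margin}, hardware payback: ~{payback_months:.1f} months")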

The datacenter provides temperature, power, and bandwidth monitoring, and I will bring in KVM over IP.

As far as security, I am a pen tester, so I have the security side handled fairly well. FYI, I will be using the Sophos virtual firewall appliance (Astaro).

I am bootstrapping this project. I will build 1 server, then as cash flow happens (which I project in 4 months) I will build 2 other boxes. Yes, HA is a big deal, and the SLAs that I have with my clients are best effort. But I have never had "days of downtime"; the most I have ever had was about 1 hour, waiting on HostGator.

I have unique customers, mainly because I get the opportunity to work with whom I want to. I host the back end and maintain their Linux security and cPanel security. I do not get into their "stuff", so as far as support goes, I mainly provide it.

So based on your post I am going to rethink my approach, because you are right: I need 2 hosts at minimum.


@mfenn The $25k does not include vSphere, so $30k total if you will.
Storage needs are about 15TB (mainly web serving and web apps).
The colo provides power backup and clean power. That was a big deal...
I already have lots of good networking gear.



I have further questions about RAID. When running ESXi on RAID, is it better to have multiple RAID 10s or one large RAID with one massive datastore?

Can't I connect with SFF-8088 for the SAN?

Have you seen or dealt with http://www.openfiler.com/ ?

I have more questions, but I can't keep my eyes open... 🙂
 
The 9271-8iCC is a local controller... If you're doing ESXi you really want shared storage, unless you are getting the $500 ESXi Essentials. Using local storage with a virtual vCenter is a management gridlock, as you cannot reboot the host the vCenter VM is on for host patching. You would also be unable to put the host into maintenance mode and still use vCenter to handle patch management.

I highly recommend against vCenter on local storage on ESXi. If you insist on local storage only, you need another box to run vCenter. Local storage also eliminates most of the useful ESXi features that you are buying, as vMotion, patch management, DRS and the like simply will not function.
 
You can vMotion with local storage in ESXi 5. It's less than ideal, but you can do it.

5.1 specifically, and it is still having "I'm a new function" pains. It also turns a standard vMotion from 15-60 seconds into a multi-hour ordeal. Also, he only has one box, and would need at least 2x the storage (the same capacity in each box, never exceeding ~45% utilization) to have a chance of moving all the machines off. At that point you could get a cheap EQL DAS solution, plug the drives in there, and have a shared solution. He also hasn't listed the network gear he would need to make it work.
 
@imagoon After testing with some of my SSDs (Intel 520s in RAID 0) and ESXi, I found that I got better performance with the 9271-8iCC and CacheCade 2.0 enabled. So I have decided to go with 8x Seagate Constellation ES.3 ST4000NM0033 4TB with 2x Intel DC S3700 200GB.

Basically, your main problem is that you're thinking about this at too low a level. You do NOT want to be building these servers yourself; you are not so budget-constrained as to need that. You have $25K in cash right now, so you can put together something nice now rather than cobbling together junk over time.

From Dell (because I am familiar with their product line; HP could do something similar).

Hypervisor - PowerEdge R620 $5500

The high points are: 12 cores, 128GB RAM, no HDD, quad-port GigE, iDRAC 7 Enterprise, and VMware ESXi 5.0 pre-installed on dual internal SD cards (no license).

Code:
Chassis Configuration:
Chassis with up to 4 Hard Drives and 3 PCIe Slots         4H3P         1         [317-8730][331-4822]         1530
    Processor:
Intel® Xeon® E5-2640 2.50GHz, 15M Cache, 7.2GT/s QPI, Turbo, 6C, 95W, Max Mem 1333MHz         E52640         1         [317-9595][331-4762]         1550
    Additional Processor:
Intel® Xeon® E5-2640 2.50GHz, 15M Cache, 7.2GT/s QPI, Turbo, 6C, 95W         2E52640         1         [317-8688][317-9609][331-4762]         1551
    Memory Configuration Type:
Performance Optimized         PEOPT         1         [331-4428]         1562
    Memory DIMM Type and Speed:
1600MHz RDIMMS         1600RD         1         [331-4424]         1561
    Memory Capacity:
16GB RDIMM, 1600MT/s, Low Volt, Dual Rank, x4 Data Width         16GBRLR         8         [319-1812]         1560
    Operating System:
No Operating System         NOOS         1         [420-6320]         1650
    OS Media Kits:
No Media Required         NOMED         1         [421-5736]         1652
    RAID Configuration:
No RAID for H310 (1-10 HDDs)         NRH310         1         [331-4222]         1540
    RAID Controller:
PERC H310 Integrated RAID Controller         PH310IR         1         [342-3528]         1541
    Embedded Systems Management:
iDRAC7 Enterprise         DRAC7E         1         [421-5339]         1515
    Select Network Adapter:
Broadcom 5720 QP 1Gb Network Daughter Card         5720QP         1         [430-4418]         1518
    Power Supply:
Dual, Hot-plug, Redundant Power Supply (1+1), 1100W         RPS1100         1         [331-4607]         1620
    Power Cords:
No Power Cord         NOPWRCD         1         [310-9057]         1621
    Power Management BIOS Settings:
Power Saving Dell Active Power Controller         DAPC         1         [330-5116]         1533
    Rack Rails:
ReadyRails™ Sliding Rails Without Cable Management Arm         RRNOCMA         1         [331-4766]         1610
    Bezel:
No Bezel         NOBEZEL         1         [313-0869]         1532
    Internal Optical Drive:
DVD ROM, SATA, Internal         DVD         1         [318-1390]         1600
    System Documentation:
Electronic System Documentation and OpenManage DVD Kit         EDOCS         1         [331-4513]         1590
    Virtualization Software:
VMware ESXi v5.0U2 Embedded Image on Flash Media         VMESXI         1         [421-9756]         1656
    Internal SD Module:
Internal Dual SD Module with 1GB SD Card         1GDSD         1         [331-4441][342-3595][342-3595][468-4612]         1640
    Warranty & Service:
3Yr Basic Hardware Warranty Repair: 5x10 HW-Only, 5x10 NBD Onsite         U3OS         1         [936-1787][936-9443][939-4668][939-4768][989-5121][994-4019]         29
    Proactive Maintenance:
Maintenance Declined         NOMAINT         1         [926-2979]         33
    Installation Services:
No Installation         NOINSTL         1         [900-9997]         32
    PowerEdge R620:
PowerEdge R620         R620         1         [225-2108]         1
    Shipping:
PowerEdge R620 Shipping - 4/8 Drive Chassis         SHIP48         1         [331-4761]         1500

Storage - PowerVault MD3200i $15000

This is a 12 drive iSCSI unit with 12 3TB drives for 36TB raw.

Code:
PowerVault MD3200i:
PV MD3200i,RKMNT,iSCSI, 12 Bay, Dual Controller         M32ID         1         [224-8206]         1
    Bezel:
No Bezel Option         NOBEZEL         1         [313-0869]         17
    Hard Drives:
3TB 7.2K RPM Near-Line SAS 6Gbps 3.5in Hot-plug Hard Drive         3TBSHHD         12         [342-2337]         1209
    Rails:
Rapid Rails for Dell Rack         RRAIL         1         [330-6048]         27
    Power Cords:
No Additional Power Cords         NOPWRCD         1         [310-9057]         38
    Warranty & Service:
3Yr Basic Hardware Warranty Repair: 5x10 HW-Only, 5x10 NBD Onsite         U3OS         1         [922-5697][926-9762][927-1989][929-6318][931-2360][994-4019]         29
    Installation Services:
Remote Implementation of a Dell PV MD 3 Series Array         RMIMSWT         1         [961-3869]         32
    Proactive Maintenance:
Proactive Maintenance: 1 event per year, Remote Delivery, 3 Year         ENTM1R3         1         [988-9449]         33
    Hard Drives:
HD Multi-Select         HDMULT         1         [341-4158]         8
    Power Supply:
Power Supply, AC 600W, Redundant         PS600W         1         [332-0746]         36
    Software Data Protection and Performance Features:
No data protection software         NOSW         1         [410-1074]         1203

I should emphasize that these are LIST prices. You can do better by calling and negotiating.
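
For rough planning, usable capacity on a 12 x 3TB unit like that depends heavily on the RAID layout; a quick sketch (the layouts are illustrative assumptions, not the quoted configuration, and formatting overhead is ignored):

Code:
drives, size_tb = 12, 3

layouts = {
    "RAID 10 (mirror pairs)":          (drives // 2) * size_tb,
    "RAID 6, single group (2 parity)": (drives - 2) * size_tb,
    "RAID 6 + 1 hot spare":            (drives - 1 - 2) * size_tb,
}
for name, usable in layouts.items():
    print(f"{name}: {usable} TB usable of {drives * size_tb} TB raw")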
 
The NAS software I am going to use is SANsymphony. I was on the phone all day with different companies.

My setup has changed and the budget has changed, after talking to a number of VMware "experts".

I am going to build 2 hosts with this:
SUPERMICRO MBD-X9DRH-7TF-O
SUPERMICRO SuperChassis CSE-846E26-R1200B
2x Wintec Server Series 64GB (4 x 16GB) 240-Pin DDR3 SDRAM DDR3 1600
Intel Xeon E5-1660 Sandy Bridge-EP 3.3GHz
Intel E1G44HTBLK 4x LAN
Also, the motherboard has 10Gb LAN on board.
(All pass VMware/Supermicro compatibility.)

Then I will be building this as the SAN.
The reason for the mobo choice is the 10Gb onboard.
SUPERMICRO SuperChassis CSE-846E26-R1200B
(10Gb LAN)
MBD-X9DRH-7TF-O
Intel Xeon E5-1660 Sandy Bridge-EP 3.3GHz
Intel E1G44HTBLK 4x LAN
Wintec Server Series 64GB (4 x 16GB) 240-Pin DDR3 SDRAM DDR3 1600
SANsymphony (software)
16x Seagate Constellation ES.3 ST1000NM0023 1TB
LSI MegaRAID LSI00334 (9271-8iCC)
2x Intel DC S3700 200GB
 
Notice that the big Achilles heel is in your Storage Engine. You have redundancy in your hosts but no redundancy in your Storage Engine. The big difference between the NetApp systems, EMC systems, and even the Dell PowerVault systems (as shown above) that makes for the large price difference is that these are *two* machines working under a homogeneous operating system. These virtualized storage systems have two controllers, and two "service processors" that act together in an internal High Availability mode when things are fine but can also run by themselves in the case of a failure.

With your above configuration, what is the point of 2 hosts if your SAN engine fails? To make the above work you have to buy 2 identical storage engines, and create an HA environment with SAN software that supports it (Nexenta for instance does by having 2 Storage systems, plus a third "director" that can be bypassed to a default storage system in the case of the director failing, though the director can also be made redundant). Basically, to do this the "home grown way" and get the same sort of reliability, you have to turn 1 storage appliance into 2 identical storage engines (half the storage efficiency) and 2 directors. That's why fully virtualized storage appliances are so appealing despite their high prices.
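
A quick sketch of why the "home grown" route halves your efficiency (the per-engine figures are hypothetical):

Code:
# Two identical storage engines kept in sync: usable space is one engine's
# usable space, not the sum.
engine_raw_tb = 16        # e.g. 16 x 1TB per engine (hypothetical)
raid_overhead = 0.25      # assumed RAID overhead for illustration

usable_per_engine = engine_raw_tb * (1 - raid_overhead)   # 12 TB
usable_mirrored = usable_per_engine                        # the second engine is a replica
print(f"{usable_mirrored:.0f} TB usable out of {2 * engine_raw_tb} TB raw across both engines")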
 
That's pretty much the flaw in almost every ESX design: people get 2 hosts but only 1 SAN. They think the SAN is bulletproof. If this is critical, budget for 2 storage systems.
 
That's pretty much the flaw in almost every ESX design: people get 2 hosts but only 1 SAN. They think the SAN is bulletproof. If this is critical, budget for 2 storage systems.

True, but the SANs themselves are doubled up to begin with (multiple controllers, processing units, network connections, and so on) versus building a single PC system. That was the guy's major point. Sure, the whole system could explode, but considering the sources of failure, a single drive array with duplicate PSUs/controllers/CPUs/memory banks is a lot closer to doubling up without the expense of actually doubling up, dealing with keeping the data stores duplicated, and therefore having to double up on capacity without actually gaining any.

I also have to question the desire to constantly add an Intel SSD or two. Any particular reason? Without a pretty in-depth and complicated caching system, an SSD isn't going to help the performance of a SAN.
 
That's pretty much the flaw in almost every ESX design: people get 2 hosts but only 1 SAN. They think the SAN is bulletproof. If this is critical, budget for 2 storage systems.

Our EMC gear is 2 isolated controllers in a single box. All the chassis are dual-pathed and dual-powered with dual PSUs. In most cases, losing the SAN means we lost all power phases and ran out of batteries. SANs may look like a single box, but they are far more duplicated and resilient than the servers they resemble.
 
I also have to question the desire to constantly add an Intel SSD or two. Any particular reason? Without a pretty in-depth and complicated caching system, an SSD isn't going to help the performance of a SAN.

Depending on the vendor, almost all major SAN providers today can utilize SSDs as a read cache. If your workload is write-heavy, you can still use SSDs, as long as the storage vendor has a robust tiered storage system. NetApp, for instance, is great about this: you can have a half shelf or full shelf (DS4243 or DS4246, for 12 or 24 drives total) of 100GB SSDs, then a second shelf with 900GB 15K drives, and then a third shelf with 2TB 7.2K drives. These shelves can all be set up as a virtual storage pool that constantly shifts read and write operations to maximize throughput of the total pool.

If you do not have one of these proprietary storage engines with the proper licenses, however, then your limit is usually read caching on SSDs (for instance, the ZFS file system supports this through the L2ARC cache).
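
As a toy model of what an SSD read cache buys you (the latencies are illustrative assumptions, not measurements of any particular product):

Code:
# Average read latency as a function of cache hit rate.
ssd_read_ms = 0.2     # assumed SSD read latency
hdd_read_ms = 8.0     # assumed 7.2K SAS read latency

for hit_rate in (0.0, 0.5, 0.8, 0.95):
    avg = hit_rate * ssd_read_ms + (1 - hit_rate) * hdd_read_ms
    print(f"hit rate {hit_rate:.0%}: ~{avg:.2f} ms average read")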
 
Thank you all for your honest feedback; it is very appreciated.

First off, does anyone have anything against SANsymphony, or strong opinions not to use it as a NAS?

I confirmed with SANsymphony that LSI CacheCade 2.0 will run in the back end and cache perfectly fine, and then the software level of SANsymphony will tune the LUNs as needed.

Another add-on with the LSI cards that I will be adding is "data recovery", which requires a RAID 1 setup and takes snapshots of the whole RAID that is on the card. After talking to LSI, they are coming out at the end of the month with a version of the 9271-8i that has "SLI, if you will": if a RAID card fails, it will switch over to the spare.

As far as the single-NAS issue you guys brought up: that has been my biggest concern. Not CPU limits, not RAM limits, but I/O limits. I am far more concerned with SAN redundancy than with having 2 hosts, but I have to have the 2 hosts to run ESXi "right". That being said, I think in the end I am still in limbo with the SAN system; I am leaning toward dual RAID cards and dual processors with lots of RAM... Word to the wise: make sure as hell that your backplanes and expander cards are compatible with all facets of your hardware. For example, this case, the SUPERMICRO SuperChassis CSE-846E26-R1200B... The E26 part of the model number is the backplane you want: the 2 means it has redundant LSI expander chips and the 6 is 6Gb SAS.

++++ Do you guys know anything about this personally? ++++
Another major issue in my mind is the bandwidth between hosts and SAN. I think 10Gb will be fine, but I want to be able to use SFF-8088. I am still trying to get an answer on whether I can use that and have it still be considered a SAN and not a DAS.
+++++++++++++++++++++++++++++++++++++++++++++


The build has once again been tweaked.


Will update later today; just a quick check-in. Back on the phone all day... and to add to the fun, I had another huge client want to sign up, and they need 4 Server 2012 VMs and 2 RHEL VMs...

@mfenn I keep looking at Dell's site but I will stay strong... I love the hardware build process. 🙂
 
5.1 specifically, and it is still having "I'm a new function" pains. It also turns a standard vMotion from 15-60 seconds into a multi-hour ordeal. Also, he only has one box, and would need at least 2x the storage (the same capacity in each box, never exceeding ~45% utilization) to have a chance of moving all the machines off. At that point you could get a cheap EQL DAS solution, plug the drives in there, and have a shared solution. He also hasn't listed the network gear he would need to make it work.

Like I said, it's not ideal, but it does work in my experience. I moved half a dozen VMs a few weeks ago doing that. It took about two hours but it worked fine. Capacity doesn't have to be the same as long as there's sufficient capacity on the destination host.
 

First of all, if you're using SANsymphony's auto-tiering functionality then you wouldn't use LSI's CacheCade system. CacheCade isn't all that great anyway due to its 512GB limit per controller. It also wouldn't provide hotspot monitoring and granular tiering like SS's software does. You will also want to use SS's snapshotting system. Remember, the idea behind virtualized storage is to *abstract* storage from the system; you can't do that if you're using proprietary hardware functions that require identical hardware.
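
To illustrate the kind of hotspot-driven promotion that tiering software performs (a greatly simplified sketch, not DataCore's actual algorithm; the extent IDs and counts are made up):

Code:
from collections import Counter

# Hypothetical access counts per extent over the last monitoring window.
access_counts = Counter({
    "ext-001": 9500, "ext-002": 120, "ext-003": 4300, "ext-004": 15, "ext-005": 8800,
})
ssd_tier_slots = 2   # how many extents fit on the SSD tier

promote = [ext for ext, _ in access_counts.most_common(ssd_tier_slots)]
print("Promote to SSD tier:", promote)   # ['ext-001', 'ext-005']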

Dual controllers and dual CPUs don't fix the most common server failure after hard drives and power supplies: the motherboard failing. If that happens, your entire storage engine goes down anyway.

If you don't want to do a real virtualized storage system, then you really have no choice but to double the whole setup. And since it doesn't appear that SS will act as a Director, you can't access the same hard drives from two different hosts. So you'll have to cut your efficiency in half and provide the same storage to both nodes.

You have the SuperMicro backplane idea right; I have the single-port version of that chassis in my home lab. I still wouldn't consider this sort of SuperMicro build unless I were doubling my storage redundancy, though. If the mobo or power distribution module failed, the entire storage engine would go down. You *never* want a storage system to go completely down unexpectedly, and without a complete replica of what you have above, you still have that possibility.

Why do you *want* to use SFF-8088? Why wouldn't you use the proper technology for the task? You would use SFF-8088 to connect Storage shelves to separate controllers. Yes, if you were using SFF-8088 you would more than likely be making a DAS, not a SAN.

I still have to say that in a production environment, you really need to jump up your hardware allocation in your SAN area. Either get an enterprise grade storage appliance, or be prepared to donate quite a bit of rack space, efficiency losses, and costs into creating it from "scratch." 🙂
 
Like I said, it's not ideal, but it does work in my experience. I moved half a dozen VMs a few weeks ago doing that. It took about two hours but it worked fine. Capacity doesn't have to be the same as long as there's sufficient capacity on the destination host.

But this also ruins your ability to recover VMs from a downed node, because they are locked onto storage the other node can't access. So it's really only vMotion 0.5. The idea of separating storage from the VM host is that you always have a way to get a VM back up.
 
But this also ruins your ability to recover VMs from a downed node, because they are locked onto storage the other node can't access. So it's really only vMotion 0.5. The idea of separating storage from the VM host is that you always have a way to get a VM back up.

I'm not arguing that. I completely agree it's not a good way of doing it. All I'm doing is pointing out that it can be done.
 
@mfenn I keep looking at Dell's site but I will stay strong... I love the hardware build process. 🙂

I love it too, I get the urge to build it yourself - I really do. But with a $25k budget and real customers on the line here building it yourself is the wrong answer. Home-built does not belong in a business environment except on the EXTREME budget end of things where you simply don't have a choice.

This is not you, so this is a terrible idea.

Viper GTS
 
I love it too, I get the urge to build it yourself - I really do. But with a $25k budget and real customers on the line here building it yourself is the wrong answer. Home-built does not belong in a business environment except on the EXTREME budget end of things where you simply don't have a choice.

This is not you, so this is a terrible idea.

Viper GTS

Bahh...

The more I get into the details of the whole system, the more I waver. I go back and forth... I am going to dig into Dell, HP, and such... Screw you all and your logical arguments for going with a branded setup 🙂
 
Also, have you considered backup plans yet? EMC Avamar is available as a VMware virtual machine and can be fantastic at de-duping and backing up VMware systems at both the image level and the guest level. Data Domain's (now part of EMC) deduplication algorithms are pretty much the best in the industry (and I say that as a NetApp fanboy). The first backup will be network-intensive, but after that Avamar should leave a fairly light footprint on your network for backups to your SAN, depending on how homogeneous the data is.
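
For the curious, a minimal sketch of fixed-block deduplication in that spirit (illustrative only; Avamar and Data Domain use variable-length chunking and far more sophisticated indexing):

Code:
import hashlib

def dedup(stream: bytes, block_size: int = 4096):
    """Split data into blocks and store each unique block once, keyed by hash."""
    store, refs = {}, []
    for i in range(0, len(stream), block_size):
        block = stream[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        refs.append(digest)
    return store, refs

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated content dedupes away
store, refs = dedup(data)
print(f"{len(refs)} blocks referenced, {len(store)} unique blocks stored")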
 