VMware ESXi Server - 8xSAS or 8xSSD

TechBoyJK

Lifer
Oct 17, 2002
I'm putting together a server (actually 2; one will be a backup) that will run VMware ESXi.

Dual 2.6GHz quad-core
24GB RAM

There will be 8 small CentOS VMs running (12GB each), as well as 1 Win7 box for administration (40GB).

1 of the CentOS boxes will have a 2nd volume for /files and I'll start it at 100GB.
1 of the CentOS boxes will have a 2nd volume for /data and I'll start it with 50GB.

All in all, I expect to need about 300GB of local storage. If it grows beyond that, I'm going to get a NAS drop (redundant NetApps with 15K SAS) from my ISP and move the /files volume to that via iSCSI. That's the only volume I anticipate outgrowing local storage.

That said, I'm torn on what kind of local storage to use. I could go with 15K SAS drives, which are known to be reliable, hold up under load, and stand up to long-term, continuous use.

Or I could go with SSDs.

Right now I'm leaning towards getting 8 Samsung 840 Pro Series 128GB SSDs. They have great reviews. I would put 5 in a RAID 5 array to get about 480GB usable, and keep 3 reserved as hot spares.

http://www.newegg.com/Product/Product.aspx?Item=N82E16820147192

I also like that they'll use much less power, which will ultimately factor into my hosting costs. The server will also be much quieter and put off less heat with the SSDs.
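For reference, the array math works out roughly like this (a quick sketch assuming nominal 128GB per drive and single-parity RAID 5; formatted capacity comes out a bit lower, which is where the ~480GB figure above lands):

```python
# Back-of-envelope RAID 5 sizing. Assumptions: nominal 128 GB per drive,
# single parity, spares sit idle until a rebuild is needed.
drive_gb = 128          # nominal capacity of one Samsung 840 Pro 128GB
array_drives = 5        # drives in the RAID 5 set
hot_spares = 3          # remaining bays

usable_gb = (array_drives - 1) * drive_gb   # one drive's worth lost to parity
print(f"Usable: {usable_gb} GB nominal (~{usable_gb * 1e9 / 2**30:.0f} GiB formatted)")
print(f"Failures tolerated before data loss: 1, with {hot_spares} spares for rebuilds")
```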

Thoughts?
 

imagoon

Diamond Member
Feb 19, 2003
What are you using it for? So far you haven't stated anything that would help anyone help you decide on what you need. If you are just storing files, 8 SSDs will likely be overkill unless you are running 10Gbit or teamed 1Gbit adapters.

What version of ESXi are you using? Where are you putting your vCenter server? 40GB isn't enough to install vCenter properly. In the long run you would be better off starting with shared storage and just booting the hosts from a USB stick.

You might be better off using 1 drive in each host as an SSD cache.
 

TechBoyJK

Lifer
Oct 17, 2002
Using the free version of VMware ESXi (5.5 or whatever is current when this is deployed), so vCenter isn't needed.

The CentOS VMs will be a combination of Apache web servers, a file server, a MySQL server, a qmail server, a Zimbra server, etc.

Because of this, I can't narrow the host machine's purpose down to a single function.

I'd actually prefer to start on shared storage, because the available NetApps are really nice, mirrored, and transactionally replicated offsite. However, it's costly: it'll run me about $150 a month per 100GB. If the local storage can see me through the first 2 years of production, or at least to the point where the cost of shared storage is justified, then I'll save quite a bit of money.
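To put numbers on that trade-off, a rough sketch (the $150/month per 100GB figure is from above; the SSD street price is an assumption, not a quote):

```python
# Rough two-year cost comparison: local SSD array vs. hosted NetApp iSCSI.
# Assumption: ~$130 street price per 128GB 840 Pro (check current pricing);
# the ISP bills shared storage at $150/month per 100GB, and ~300GB is needed.
ssd_price = 130                   # assumed price per drive, USD
local_cost = 8 * ssd_price        # populate all eight bays up front

months = 24
netapp_units = 3                  # 300GB needed / 100GB billing increments
shared_cost = months * netapp_units * 150

print(f"Local SSDs (one-time): ${local_cost:,}")                  # ~$1,040
print(f"Shared storage over {months} months: ${shared_cost:,}")   # $10,800
```

Even if the SSD price assumption is off by a wide margin, local storage comes out far cheaper over the first two years; the NetApp's value is the mirroring and offsite replication rather than the raw capacity cost.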

Part of why I want to use SSDs is that I feel 5 of them in a RAID 5 would pretty much remove any I/O bottleneck the server will see. Other than users uploading photos, most traffic leaving the host machine (via the NIC) will be compiled web content. Most of the disk I/O will be related to internal activity on the host (traffic between VMs).
 

imagoon

Diamond Member
Feb 19, 2003
I guess the question would be more: will you have enough disk I/O to justify the SSDs versus the capacity for growth? I also question whether 3 hot spares are really needed. It doesn't sound like you need the I/O or the capacity at this point. The Samsung Pro isn't a real "enterprise" server drive while the 15K RPM SAS disks you mentioned are, so what tier are you trying to reach? Decent SAS RAID controllers are not typically cheap, and ones that can handle the full load of SSDs even less so.

Basically, photos could sit on 7200RPM drives and likely be fast enough for the web unless you have a 4Gbit connection to the net. MySQL is more suited to an SSD, assuming the transaction load is high. Mail only needs an SSD if the transaction load is high. To give you an idea, I run 800 users on 7 data spindles and 3 log spindles (Exchange in this situation), and disk issues caused by the disks themselves are rare.

How big do you expect this website to get? Planned growth, etc.?

Either will likely be fine when you start. Depending on growth they might be fine for years.
 

TechBoyJK

Lifer
Oct 17, 2002
I'm hosting this server at the datacenter I work at. I get hosting cheap, and their primary concern is how much power I use. I can keep my monthly hosting costs absurdly cheap as long as I don't draw too much power. I've assumed that high-speed 15K SAS drives will draw a noticeable amount of power compared to SSDs. I don't have any real-world data, so I might be wrong?
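If it helps, the drive power delta is easy to ballpark (a sketch with assumed figures rather than measurements: a 15K SAS drive is typically spec'd around 8-15W active, a SATA SSD around 2-3W; plug in your colo's actual billing rate):

```python
# Ballpark annual power difference for 8 drive bays, SAS vs. SSD.
# All numbers below are assumptions from typical spec sheets, not measurements.
sas_watts = 10.0        # assumed per-drive draw for a 15K SAS disk
ssd_watts = 2.0         # assumed per-drive draw for a SATA SSD
drives = 8
usd_per_kwh = 0.12      # assumed effective billing rate

delta_watts = drives * (sas_watts - ssd_watts)
kwh_per_year = delta_watts * 24 * 365 / 1000
print(f"Continuous saving: {delta_watts:.0f} W (~{kwh_per_year:.0f} kWh/year)")
print(f"At ${usd_per_kwh}/kWh: ~${kwh_per_year * usd_per_kwh:.0f}/year")
```

So the raw dollar savings are modest; the bigger win is if the colo bills by provisioned power tier rather than metered usage.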

The mail server will only be doing outbound SMTP for site emails (new user registrations, password reset requests, etc.).

The Zimbra server may actually end up hosted on another server, or in my ISP's VMware cloud. It will be for company email, not anything related to the site, so nothing else on the server needs it or relies on it.

The busiest VMs will be:

Web server (not much disk I/O since it's DB-driven)

DB (biggest concern, so I like the idea of this being on SSDs)

Image server (stores AND hosts images), aka images01.company.com
*This is the VM that will grow the fastest, and I plan to move the image volume to shared storage as soon as it outgrows its initial quota of 100GB. The VM itself will still be hosted locally, but the image folder would be moved to the shared SAN. I'm actually going to run the iSCSI connection to the storage network on day 1 and get the link configured. That way, if the volume fills up fast, I can quickly deploy a new volume and migrate to it.

Catalog server - This will host the Lucene search catalog files. This will most likely be high I/O as well, possibly as busy as the DB.

The reason I'm thinking 3 hot spares is paranoia about SSD dependability. Five 128GB drives give me the capacity I need, and I have 3 extra slots. It might be overkill (heck, SSDs might be overkill), but I want to rule out as much I/O bottlenecking as possible during the initial launch. If I can saturate this server with the SSDs, then I'll be justified in migrating to something more significant.

My overall assessment is that the Samsung SSDs are the best-performing and most reliable 'power user' grade SSDs (according to Newegg/Amazon reviews). They may not be enterprise drives, but as long as I have 3 hot spares (and a few lying around to replace the hot spares), I should be safe. For the money, I'd get more bang for my buck with the SSDs than with similarly sized 15K SAS drives. I think we would outgrow the server before the SSDs were driven to death.
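As a sanity check on the bang-for-buck comparison, a random-I/O ballpark (a sketch using rule-of-thumb figures, not benchmarks: roughly 180 IOPS per 15K SAS spindle versus a conservatively low 20,000 sustained random IOPS per 840 Pro, with the usual RAID 5 write penalty applied to both):

```python
# Rough host-visible random IOPS for a 5-drive RAID 5 set, spinning vs. SSD.
# Per-drive figures are rules of thumb, not benchmarks.
def raid5_iops(per_drive_iops, drives, read_fraction=0.7, write_penalty=4):
    """Approximate RAID 5 IOPS for a mixed read/write workload."""
    raw = per_drive_iops * drives
    # Each host write costs ~4 back-end I/Os in RAID 5
    # (read data, read parity, write data, write parity).
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

print(f"5x 15K SAS (180 IOPS each): ~{raid5_iops(180, 5):,.0f} IOPS")
print(f"5x 840 Pro (20k IOPS each): ~{raid5_iops(20_000, 5):,.0f} IOPS")
```

Either way the SSD set is orders of magnitude ahead on random I/O, which is what the DB and the Lucene catalog care about; whether the workload ever demands it is the open question imagoon raised.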
 

Chess

Golden Member
Mar 5, 2001
I would go with SAS drives and not the SSDs, but that's just me.

I personally think the SSDs are overkill... have you thought about possibly going half and half?
 

TechBoyJK

Lifer
Oct 17, 2002
I agree SSDs might be overkill, but then I'd know they won't be a bottleneck.

I'm paying for this out of pocket, so cost/benefit is a big factor.

For the price of a new Samsung 840 Pro 128GB SSD, I'm looking at buying used SAS drives, which I won't trust any more than I do SSDs.
 

brshoemak

Member
Feb 11, 2005
I agree with imagoon in that I would go with SAS drives with an SSD as host cache. It can really make a substantial difference.

btw, USED SAS drives would be a scary proposition for me regardless of the cost savings. Also, RAID10 is almost always preferable in my eyes for a number of reasons, but if you do want to go the RAID5/6 route that's fine.
 

skace

Lifer
Jan 23, 2001
You have 2 hosts and are looking at ESXi 5.5. You are right on the cusp of VSAN, which allows for shared-nothing storage replication across a cluster. Since I never had much use for VSAN, I don't know every specific and you would want to research them. But it would mean a couple of things:
1. You need to buy real licenses for your ESXi 5.5 hosts
2. Buy vCenter
3. Potentially buy VSAN licenses
4. Buy a mix of SSDs and HDDs (read the notes on VSAN).

This would give you a number of things:
1. At least HA-level clustering, potentially DRS. Your "backup," as you are calling it today, would be an active/active node that could take over any load in roughly 3 minutes.
2. Your storage would be redundant without NetApp/EqualLogic-level costs.
3. Some level of SSD performance, although I don't think that is your main concern at all right now (your stability is what's really lacking at this point).
4. vCenter manageability (go the virtual appliance route for your small environment).
5. Easy growth.

I'm not going to list every possibility you have but you should at least do your homework on this one.