Old 05-15-2012, 04:01 PM   #1
Brovane
Diamond Member
 
Brovane's Avatar
 
Join Date: Dec 2001
Location: Orange County, CA
Posts: 3,108
Default EMC vs Compellent

So we are going to be pulling the trigger, probably in the next 30 days, on a replacement storage array. We are currently running a CX3-40 array with about 50TB raw of storage, and we currently use about 6.5k IOPS at the 95th percentile. We are looking for a replacement array with around 75TB of usable space that can handle about 10k IOPS. We are currently strictly FC; however, we would like the new array to support both 10Gb iSCSI and FC. We have competitive quotes from both Dell and EMC, and both are fairly close in price. The EMC array seems the safe bet; however, I am concerned that the EMC array has a higher TCO because you have to forklift the array every 5-6 years. The Compellent array seems more flexible and easier to upgrade without having to do a forklift upgrade. I was wondering if anyone has any experience with these arrays? The EMC array we are looking at is a VNX 5300, and the Compellent uses the SC040 controller.
Brovane is online now   Reply With Quote
Old 05-16-2012, 11:29 AM   #2
cgehring
Junior Member
 
Join Date: May 2012
Posts: 8
Default

In 2009 we were in a very similar position. We had a CX3-40 with about 60TB raw and looked at many vendors when it came down to replacement. We opted for the CML for several reasons. Replication was at the top of our list, and it doesn't get any simpler than on the Compellent. Reporting is amazing and fast, and it makes visibility into what the system is doing simple. Since 2009 we have upgraded 3 times, and we also have a BC SAN now with full replication. Our recurring maintenance is much lower, and we have found that the marketing pitch of tiering holds true: because our unused data flows downward into cheap SATA, we typically only have to purchase inexpensive SATA to add capacity instead of expensive SSD or 15K FC.

We are in the process of upgrading to the Series 40 controllers, and we were surprised how inexpensive the upgrade was. We just pay for the new hardware, and all our licenses carry over. No forklift, and no downtime. We purchased with about 80TB, and now the production SAN is over 140TB, about to go over 200TB with the next upgrade. On any given day, we peak at 25,000 IOPS.

The only thing I would caution you about is letting either vendor tell you that you can do more IOPS with fewer spindles. Unless you are throwing SSD into the equation, drives are not any faster today than they were 10 years ago: a 15K drive will give you about 150 IOPS, a 10K drive about 100, and a 7.5K drive about 75. Add up the drives and their IOPS and make sure they put you over 10k, or you will not be happy. 3TB drives are awesome, but they get you to your capacity number without enough spindles to handle the performance.

One last pitch on the IOPS side: I would argue that with the same number of disks, the Compellent will hit a higher total IOPS number because all disks are balanced and working together all the time. On the EMC you are still carving it up, and you will find hot spots and cold spots. Cold spots on a SAN are wasted money.
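
To make that spindle math concrete, here is a minimal back-of-the-envelope sketch (Python, using the rule-of-thumb per-drive numbers above; the example drive counts are just placeholders, not anyone's actual quote):

Code:
# Rough back-end IOPS sanity check using rule-of-thumb IOPS per drive type.
# Real results vary with RAID write penalty, cache, and workload mix.
RULE_OF_THUMB_IOPS = {"15K": 150, "10K": 100, "7.5K": 75}

def estimated_iops(drive_counts):
    """Sum raw spindle IOPS for a dict like {"15K": 60, "7.5K": 36}."""
    return sum(RULE_OF_THUMB_IOPS[speed] * count
               for speed, count in drive_counts.items())

# Hypothetical config: does it clear a 10k IOPS target?
config = {"15K": 60, "7.5K": 36}
total = estimated_iops(config)
print(total, "estimated IOPS -", "OK" if total >= 10_000 else "not enough spindles")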

I'm not connected with Compellent, just a very happy customer :-) I hope you find this helpful.
cgehring is offline   Reply With Quote
Old 05-16-2012, 04:53 PM   #3
Cr0nJ0b
Senior Member
 
Cr0nJ0b's Avatar
 
Join Date: Apr 2004
Posts: 965
Default life span of a system...

I'll avoid the sales pitch and leave that to customers...I'm a vendor. I've been in the storage market for over 20 years and I'm biased toward EMC.

One point that I would like to clarify is the forklift upgrade question. In my experience, every system I have ever seen has required some type of forklift upgrade at some point, although this also depends on what you consider a "forklift" upgrade. To me, a forklift upgrade means taking the whole array out, moving a new array in, and migrating the data. Generally the reason you would do this is to take advantage of new technology that wasn't in existence, or wasn't mature, at the time of release. When the market moved from SCSI to FC, you had to replace the guts of the array and the disk shelves to take advantage of the FC parts and get the most out of the system.

Generally a 5-year life cycle is pretty good for a storage array, IMO. The disk density and performance at the 5-year mark usually make for a compelling swap business case. If Dell is telling you that you won't need to do a forklift upgrade at some point, I think they are not giving you the whole picture. I'm not sure how many arrays they have sold over the last 5 years, but it's a number bigger than 1, and many of them are gone today.

As an example, your CX3-40 array was released in 2006 and reaches end of service life (EOSL) in 2014. That same array could have been upgraded, data in place, to a CX4-xxx array at any point during its useful life (not sure if the upgrades are still available), and that would have turned it into a CX4 array that is still supported today with no EOSL listed. That's 8 years in-family, and probably 10-12 years out-of-family with a data-in-place upgrade. I personally think that's a pretty good story. You won't likely see EMC tell a customer that they can just add a SAS card to a controller to add that functionality, since the systems are too highly integrated... but you are likely to see new port connectors or protocol enhancements on the same base connector technology. EMC added automatic sub-LUN data tiering to arrays well after the release of the original system, for example.

BTW, I ran across this link about "future-proof" Compellent. It's not bashing, just stating pretty much what I already said.

http://www.evanhoffman.com/evan/2011...f-not-so-much/
__________________
ASUS P8Z68-V Pro, 4 x 4GB DDR (1600MHz) Corsair CMZ8GX3M2A1600C9B
i5 3570K stock speed, XFX HD6870 Radeon ; 1 x 180GB Intel SSD G2 SATAII in JBOD (Boot)
4 X 750GB SATAII RAID0 (Storage); DVD-RW(SATA) , CD-RW (IDE) OptiArc
Cr0nJ0b is offline   Reply With Quote
Old 05-16-2012, 10:36 PM   #4
Emulex
Diamond Member
 
Join Date: Jan 2001
Location: ATL
Posts: 9,557
Default

What's the TCO for maintenance and the obvious upgrades from 2006 to 2014?
__________________
-------------------------
NAS: Dell 530 Q6600 8gb 4tb headless VHP
KID PC1: Mac Pro Dual nehalem - 6gb - GF120 - HP ZR30W
Browser: Dell 530 Q6600 4GB - Kingston 96gb -gt240- hp LP3065 IPS - 7ult
Tabs: IPAD 1,2,3 IPOD3,HTC flyer, Galaxy Tab - all rooted/jb
Couch1: Macbook Air/Macbook White
Couch2: Macbook Pro 17 2.66 Matte screen - 8GB - SSD
HTPC: Asus C2Q8300/X25-V - Geforce 430- 7ult - Antec MicroFusion 350
Emulex is offline   Reply With Quote
Old 05-16-2012, 10:52 PM   #5
SaveYourBacon
Junior Member
 
Join Date: May 2012
Posts: 1
Default

Another happy Compellent customer here. We spent 9 months evaluating EMC, NetApp, and Compellent. I had about 20 meetings with those three vendors and read every "EMC/NetApp/Compellent VS" thread that exists on the web (Google Alerts are great for stuff like that; it's how I saw this thread).

In short we went with Compellent because I like their architecture and I've never researched a product where it was so hard to find a customer who wouldn't buy again. I talked to 5 customers in person (or over the phone) and talked to countless folks online. Compellent is a great product and their support is top notch. Phone Home and their ability to quickly SSH into your array and diagnose problems make support calls go very smoothly. They also have some exciting new features coming out (sorry... NDA; oddly I couldn't find chatter online or I would share a link).

In terms of EMC I ultimately felt that they had a decent solution but it is a bit outdated and overly dependent on cache. They also merely shoe-horned two older product lines into their VNX line. Naturally, they have added some new features, but their snapshot technology is outdated and their user interface needs a major overhaul... all just my opinion of course. To EMC's credit though, they must be doing something right because they have a huge customer base.

One other thing to look for while you're reading these threads online: most of the time, EMC employees and vendors come to the rescue of EMC in threads like this (to their credit, they are open about it). Make sure you are finding customers who are willing to talk about EMC, not just folks loyal to EMC because it pays well to work there and/or sell their products (Reddit is a great start).

Feel free to PM me if you would like to chat over the phone; I could speak volumes on this subject!

Last edited by SaveYourBacon; 05-16-2012 at 10:57 PM.
SaveYourBacon is offline   Reply With Quote
Old 05-17-2012, 01:06 PM   #6
KentState
Diamond Member
 
KentState's Avatar
 
Join Date: Oct 2001
Location: Atlanta, GA
Posts: 5,787
Default

At my last company, I was in the middle of rolling out two VNX 5500s, but was not very happy with the company's ability to deliver what they promised. After this last experience with EMC, I'd probably avoid them in the future.
KentState is offline   Reply With Quote
Old 05-17-2012, 07:02 PM   #7
KameLeon
Golden Member
 
Join Date: Dec 2000
Posts: 1,788
Default

I would suggest you take a good look at IBM Storage as well - the v7000 or XIV Gen3. They have come a long way. We have customers that use both, and the ones who have switched love it. We've also been hearing a lot of complaints from our customers using EMC lately. Maybe have IBM do a POC for you...

Here's a good read if you're interested:
http://www-03.ibm.com/systems/storag...ost-study.html
__________________
RiG
KameLeon is offline   Reply With Quote
Old 05-17-2012, 10:47 PM   #8
Brovane
Diamond Member
 
Brovane's Avatar
 
Join Date: Dec 2001
Location: Orange County, CA
Posts: 3,108
Default

Thanks everyone for your responses.

We are basically looking at Compellent and EMC for the SAN solution.

Thank you for the link about the Compellent never having to do a forklift.

I have been reading a lot on both the arrays.

In the quote from EMC, they have us using 106x300GB 10k drives in one RAID 5 pool for all of our SQL Server and VMware storage. By striping across so many spindles, they tell us they can deliver fairly high IOPS to our SQL clusters and ESX clusters. They also have 5x200GB SSDs set up for FAST Cache, which also raises the IOPS. They also set up 30x3TB NL-SAS drives in a RAID 6 pool for file shares and for secondary ESX storage (snapshots, test machines, etc.). For this secondary pool I had them add 25x600GB 10k disks so we had some faster storage in the pool and hot data could at least be moved up a tier. We are also going to use their X-Blade technology as a NAS head for the SAN so I can do NFS and CIFS. So in total the VNX5500 will have 166 disks in it; at maximum the array will support 250 disks. According to EMC, the array as configured will deliver around 10,000 IOPS consistently.

The Compellent configuration was a total of 96 drives to deliver 10k IOPS: 60x600GB 15k disks and 36x2TB NL-SAS disks. They have quoted the new SC040 controller with 12GB of RAM running the new 64-bit OS that Compellent recently released. We are also aware of some exciting features that are to be released soon by Dell for Compellent (also under NDA).

Overall, the current CX3 series array we have is in its 6th year, and from a budget-cycle standpoint I usually tell upper management that they need to plan on replacing an array at the 5-6 year mark (they don't like this one). I was also suspicious of the "no forklift upgrade" claim. I consider a forklift to be basically setting up a brand new array in a new rack, migrating all the data over, and then decommissioning the old array. The thing I like about Compellent is the ability to easily slide a new head into the array as you upgrade and still be able to take advantage of your old storage. With our current EMC array, we bought some DAEs in the last couple of years that will essentially become expensive doorstops after we replace the array. For example, if we had Compellent we could have both FC and SAS on the back-end bus; with the VNX it is SAS only, so your older FC DAEs are doorstops. Also, with Compellent you own the licensing separately from the array, so as you upgrade, the licenses transfer to the new array. With EMC we are having to buy new licensing; we have almost $80k worth of licensing in the EMC quote.

One issue that we have is essentially "nobody ever got fired for buying EMC storage." Overall I think the Compellent storage is more flexible, easier to manage, and would give us lower TCO; however, we are still hesitant because EMC has a long track record and some really good products.

For example, with the VNX we can support 250 disks. For that 251st disk you are essentially looking at a second array or an in-place upgrade to the 5700. The Compellent array as configured can go to 384 disks and around 25k IOPS before we have to consider a second array. And since we are at 96 disks with Compellent, we don't have to buy any more software licensing; with EMC we have to pay for each TB we buy.

We have another meeting with Dell next week and I want to find out more about running VDI on Compellent. We are looking at possibly deploying VDI in the next 1-2 years.
Brovane is online now   Reply With Quote
Old 05-17-2012, 11:41 PM   #9
Jeff7181
Lifer
 
Jeff7181's Avatar
 
Join Date: Aug 2002
Location: SE Michigan
Posts: 18,220
Default

My experience is limited to NetApp, so my opinion may be biased, but I'd urge you to look into them. I've worked with their 2040s, 3040s, 6070s, 3240s, 3270s and 6280s. We have a combination of SSD, 15k SAS, 15k FC and SAS-attached SATA disk shelves. We use FCP, no NFS or CIFS. Our primary data is housed on them; we do offsite replication for DR and D2D backup, which is also replicated offsite.

I gotta tell you, when you can perform EVERY daily administrative task and even most maintenance tasks via PowerShell, it's incredibly pleasant to work with.

Our loads include everything from document storage to VMware to SQL Server databases. I've seen 25k IOPS on an aggregate with around 60 15k spindles with no spike in latency.

In fact, the data center that hosts our DR site recently took our advice and got a pair of 3240's to host their IaaS environment.

*EDIT* I'm new to storage administration. I got into it because I was tasked with finding a new D2D backup solution. After a POC (and a Data Domain POC) I ended up going with NetApp and their SnapVault product since we already use NetApp SANs. Shortly after, I was tasked with bringing up a 3240 HA pair at our main site and our DR site and setting up D2D backup and replication. I declined to have a NetApp engineer on-site to assist with the install. It took me about 2 hours to get the one at our DR site racked and up and running by myself (save for some help racking the shelves and controllers due to their weight), after I had done a "dry run" on the one at our primary site. I believe that speaks to how well NetApp's product is designed - it doesn't take a boatload of experience to administer.
__________________
"The Universe is huge and old and rare things happen all the time, including life." - Lawrence Krauss

Last edited by Jeff7181; 05-17-2012 at 11:58 PM.
Jeff7181 is online now   Reply With Quote
Old 05-17-2012, 11:47 PM   #10
Jeff7181
Lifer
 
Jeff7181's Avatar
 
Join Date: Aug 2002
Location: SE Michigan
Posts: 18,220
Default

Quote:
Originally Posted by SaveYourBacon View Post
... I could speak volumes on this subject!
I see what you did there!
__________________
"The Universe is huge and old and rare things happen all the time, including life." - Lawrence Krauss
Jeff7181 is online now   Reply With Quote
Old 05-18-2012, 08:04 AM   #11
Brovane
Diamond Member
 
Brovane's Avatar
 
Join Date: Dec 2001
Location: Orange County, CA
Posts: 3,108
Default

Quote:
Originally Posted by KameLeon View Post
I would suggest you take a good look at IBM Storage as well - the v7000 or XIV Gen3. They have come a long way. We have customers that use both, and the ones who have switched love it. We've also been hearing a lot of complaints from our customers using EMC lately. Maybe have IBM do a POC for you...

Here's a good read if you're interested:
http://www-03.ibm.com/systems/storag...ost-study.html
Unfortunately, because of how we are set up for purchases under corporate policy, we are limited to buying through either Dell or EMC, so this does limit our options. Working for a big corporation, sometimes that is just the way it is.
Brovane is online now   Reply With Quote
Old 05-18-2012, 08:42 AM   #12
KentState
Diamond Member
 
KentState's Avatar
 
Join Date: Oct 2001
Location: Atlanta, GA
Posts: 5,787
Default

Do not believe the IOPS marketing numbers that they throw out at you. There are bandwidth limitations that can heavily reduce the actual throughput to the database servers. Even with 16 SSDs for FAST Cache and another 16 for storage, plus 16GB of RAM, I was not able to match the MB/sec of reads and writes with 64k blocks compared to DAS.

Where IOPS gets misleading is the block size that they base the number on. It's easy to hit 10k when you are doing it at the 4k or 8k size they quote from. In reality, those numbers drop heavily when you go to the more typical 64k size that SQL Server uses. Secondly, if you use FC or iSCSI you are limited to 8Gb or 10Gb links, which is only 800MB/sec or 1000MB/sec, and your switch will be around 4000MB/sec max. With a single server, I was able to generate over 2000MB/sec of IO with DAS. I don't know if it's the hardware or not, but EMC engineers were not able to get the performance to an acceptable level and were still "researching" the issue when I left.
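
To put rough numbers on the block-size point, here is a small sketch (Python; the figures are illustrative, not measurements from either array) converting an IOPS number at a given block size into MB/sec so it can be compared against nominal link speeds:

Code:
# Throughput implied by an IOPS figure depends entirely on the block size:
# MB/sec = IOPS * block_size_KB / 1024.
def mb_per_sec(iops, block_kb):
    return iops * block_kb / 1024.0

# Nominal, best-case link throughput (real usable rates are lower).
LINK_MB_S = {"8Gb FC": 800, "10Gb iSCSI": 1000}

for block_kb in (4, 8, 64):
    print(f"10,000 IOPS @ {block_kb}KB = {mb_per_sec(10_000, block_kb):.0f} MB/s")

# 10k IOPS at 4KB is only ~39 MB/s, trivial for any link; at 64KB it is
# ~625 MB/s, which starts to crowd a single 8Gb FC or 10Gb iSCSI port.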
KentState is offline   Reply With Quote
Old 05-18-2012, 10:27 AM   #13
Brovane
Diamond Member
 
Brovane's Avatar
 
Join Date: Dec 2001
Location: Orange County, CA
Posts: 3,108
Default

Quote:
Originally Posted by KentState View Post
Do not believe the IOPS marketing numbers that they throw out at you. There are bandwidth limitations that can heavily reduce the actual throughput to the database servers. Even with 16 SSD drives for both FastCache and 16 for storage, plus 16GB RAM, I was not able to match the MB/sec of reads and writes with 64k blocks compared to DAS.

Where IOPS is misleading is the block size that they base the number on. It's easy to hit 10k when you are doing it at the 4k or 8k size which they quote off of. In reality, those numbers drop heavily when you go to a more typical 64k size which SQL Server uses. Secondly, if you use FC or iSCSI you are limited to 8GB or 10GB which is only 800MB/sec or 1000MB/sec which your switch will be 4000MB/sec max. With a single server, I was able to generate over 2000MB/sec of IO with DAS. I don't know if it's the hardware or not, but EMC engineers where not able to get the performance to an acceptable level and were still "researching" the issue when I left.
Understood. The IOPS figure was overall IOPS. For example, one SQL server will generate around 1400 IOPS at the 95th percentile, but only at certain times of the day. Both EMC and Dell were trying to look at the overall IOPS that our current environment generates and then designing a SAN to match, with room for growth. Right now we are using around 4500 IOPS at the 95th percentile with our CX3-40. We submitted NAR data covering several days from the EMC SAN to both of them. For Dell we also ran the DPACK tool for 24 hours on each SAN-connected server, which supplied them with more data.
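
For anyone wondering how a 95th-percentile figure gets pulled out of performance samples like NAR or DPACK data, a minimal sketch (Python, with made-up sample values) looks something like this:

Code:
# Compute a nearest-rank 95th-percentile IOPS from interval samples,
# the same idea used when sizing from NAR/DPACK-style performance data.
def percentile(samples, pct):
    """Nearest-rank percentile; good enough for sizing conversations."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Fake 5-minute IOPS samples standing in for a day of collected data.
iops_samples = [900, 1200, 4400, 4800, 6100, 3500, 2800, 5200, 1500, 4700]
print("95th percentile IOPS:", percentile(iops_samples, 95))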
Brovane is online now   Reply With Quote
Old 05-18-2012, 12:55 PM   #14
dikrek
Junior Member
 
Join Date: May 2012
Location: Chicago, IL
Posts: 1
Default

Hi all, Dimitris from NetApp here.

Since you can't buy anything apart from EMC or Dell there's no point in my trying to provide NetApp info.

However, check this article out about proper system sizing.

I think given the ultra-low IOPS you require almost any vendor can do it, and I don't even think you need autotiering.

I'd focus more on things like always-on operation, upgradeability, backup and recovery, RPO and RTO.

D
dikrek is offline   Reply With Quote
Old 05-24-2012, 04:06 PM   #15
cgehring
Junior Member
 
Join Date: May 2012
Posts: 8
Default

Brovane,

It looks like both companies are meeting your needs with the configurations you show. EMC is going a little overboard on the drive count side, but my opinion (take that for what it is worth :-) is that they need to do this because they are carving the storage tiers into pools, which creates wasted IOPS. Taking SSD out of the picture because it changes everything, you are going to have the following in your EMC:

Speed   Count   IOPS/drive   Total
10K     106     100          10,600
7.5K     30      75           2,250
10K      25     100           2,500
-----------------------------------
Total IOPS                   15,350

The challenge is always going to be: what service do I put in which pool? The majority of your performance is going to be locked in that top tier, and the only things that will use it are the services in the top tier. What tier are you going to put your VDI in? How will that affect the other things in that same tier? These answers are simple on the Compellent... yes, you must worry about service A contending with service B... but the upgrade math is simple: you know your spindle count and you know your IOPS limit; if you approach the IOPS limit, add more to the top tier, and if you are just approaching the capacity limit, add more to the bottom tier. RAID groups, pools, MetaLUNs... these are terms of the past.
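
That upgrade math really is simple enough to put in a few lines; here is a toy sketch of the decision (Python; the thresholds, limits, and tier names are purely illustrative):

Code:
# Toy version of the tiered-pool upgrade decision described above: if you are
# running out of IOPS, grow the fast top tier; if you are only running out of
# space, grow the cheap bottom tier.
def next_purchase(current_iops, iops_limit, used_tb, capacity_tb, headroom=0.80):
    if current_iops >= headroom * iops_limit:
        return "add 15K/SSD shelves to the top tier (IOPS-bound)"
    if used_tb >= headroom * capacity_tb:
        return "add NL-SAS/SATA shelves to the bottom tier (capacity-bound)"
    return "no expansion needed yet"

print(next_purchase(current_iops=9500, iops_limit=11700, used_tb=48, capacity_tb=75))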

On the Compellent side they are a little closer to your target, but maybe too close for comfort:

Speed   Count   IOPS/drive   Total
15K      60     150           9,000
7.5K     36      75           2,700
-----------------------------------
Total IOPS                   11,700

The benefit here is that data in tier 1 that is not being used could migrate down into your cheap SATA storage if you let it do its thing. You could add SSD to the Compellent to make the comparison a little more fair.

As for "forklift", I think what they say is true. Like I said before, we are Series 30 customers about to go to Series 40. While it is true that they have moved away from loops to SAS, the replacement controllers still connect to all the legacy drives. When we finish our upgrade, we will have 6 FC SSDs, 4 shelves of FC 15K, and 4 shelves of FC SATA (all the legacy), and we will add a shelf of SAS SSD, 4 shelves of SAS 15K, and 3 shelves of SAS 7.5K. All running together, nothing to throw away... it's a big upgrade. One primary reason is the deployment of our VDI solution, which is passing the pilot phase and going into production. In other words, I'm going from 160 drives to 260 drives. Maybe you won't get there, but maybe you will. The new EMC you are looking at is 166 drives... maybe you will be where I am in a few years and bumping up against that 250-drive limit. That's a forklift for sure.

96 drives is the sweet spot for Compellent, since you will never pay for another piece of code. Take a look at the TCO of upgrades across both platforms and see where that lands you. My guess is more than half the cost of the EMC upgrades will be software... per TB. Check the maintenance fees as well, because EMC will charge you for hardware and software maintenance, and both increase as time passes.
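
One way to act on that is to lay the costs out in a simple multi-year model; here is a bare-bones sketch (Python; every number is a made-up placeholder, not either vendor's actual pricing) showing how per-TB software licensing and escalating maintenance stack up over time:

Code:
# Minimal multi-year TCO sketch. All figures are placeholders to illustrate
# the structure of the comparison, not real EMC or Compellent pricing.
def tco(years, hardware, sw_per_tb, tb_added_per_year, maint_year1, maint_growth):
    total = hardware
    maint = maint_year1
    for year in range(years):
        total += sw_per_tb * tb_added_per_year   # per-TB software on growth
        total += maint                           # annual HW + SW maintenance
        maint *= (1 + maint_growth)              # maintenance escalates yearly
    return total

print("Vendor A:", tco(6, hardware=300_000, sw_per_tb=1_500,
                       tb_added_per_year=10, maint_year1=30_000, maint_growth=0.10))
print("Vendor B:", tco(6, hardware=320_000, sw_per_tb=0,
                       tb_added_per_year=10, maint_year1=25_000, maint_growth=0.05))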

Feel free to PM me if you want to talk more. I'd be happy to serve as a reference and share our experience and even our struggles... yes, there are always struggles.

Final item of note... I haven't seen a happy EMC customer post yet... hmmm. Maybe this will pull them out of their shell :-)

Last edited by cgehring; 05-24-2012 at 04:13 PM.
cgehring is offline   Reply With Quote
Old 05-27-2012, 08:18 AM   #16
RobertR1
Golden Member
 
Join Date: Oct 2004
Posts: 1,092
Default

Quote:
Originally Posted by Brovane View Post

We have another meeting with Dell next week and I want to find out more about running VDI on Compellent. We are looking at possibly deploying VDI in the next 1-2 years.
For full disclosure, I work for a VAR in NorCal.

For VDI, either solution will work, but you might want to manage your resource pools separately for consistency. Break up your boot, swap, system, and user profile volumes onto different tiers of disk to maximize performance.

The biggest tip I can give you for VDI (if you go with VMware View) is to keep the compute environment separate. The reason is licensing. Having a standalone VDI environment is much cheaper than a compute platform where you are mixing server and VDI workloads; with a dedicated VDI compute/server environment, you do not have to license per socket. VDI, if the use cases work out for your business needs, is a great technology, but just like mass virtualization of servers, it needs to be planned carefully and is not a one-size-fits-all strategy. We have seen a lot of customers deliver a poor end-user experience due to false expectations and poor PoC testing. Just be careful.

As for EMC vs Compellent, the truth is both will ultimately work fine. Both will overmarket their strengths and run in circles around their weaknesses. Compellent's auto-tiering isn't as amazing as they claim it to be, and EMC's push for IOPS performance on block will be pushed harder than needed. If the price is similar, go with the solution you would feel comfortable working on from day to day. Both systems will require a learning curve and management. No way around it.

Sorry for any 4am errors! Damn allergies won't let me sleep. Maybe I should go watch Monaco GP live or finish my Nightmare run in Diablo. Choices.....
__________________
2600k @ 4.6ghz, Hyper 212+, Asrock Z68 Extreme 4 Gen3, Corsair Veng 8GB CL9 1600 RAM, OCZ ZX 850W, Crucial M4 128GB, Haf X, Zotac AMP! GTX 580 @ 950/1900/2225, Dell 3008 WFP (2560x1600)
RobertR1 is offline   Reply With Quote
Old 05-29-2012, 07:26 AM   #17
Gheris
Senior Member
 
Gheris's Avatar
 
Join Date: Oct 2005
Location: New Jersey
Posts: 305
Default

Wanted to ask, since you guys seem very knowledgeable on SANs: any particular recommendations on where I can read up on the basics and the different types of SANs? Any suggestions would be greatly appreciated.
Gheris is offline   Reply With Quote
Old 05-29-2012, 10:44 AM   #18
cgehring
Junior Member
 
Join Date: May 2012
Posts: 8
Default

Quote:
Originally Posted by Gheris View Post
Wanted to ask as you guys seem very knowledgable on SANS. Any particular recommendations on where I can read up on the basics and different types of SANS? Any suggestions would be greatly appreicated.
I wish this were simpler. It is easy to find information, but very difficult to find unbiased opinions or solid apples-to-apples comparisons of any two systems.

I would recommend starting with Gartner and maybe forums like this or research firms like ESG http://www.esg-global.com/esg-storage-truths/. Even those can be biased at times so it can be hard to separate fact from hype. Forums like this are often the best place to find real customers that may have similar requirements as you.

The best approach I can recommend is to not start with manufacturers or vendors, but to start with your requirements: IOPS, workloads, protocols (FC, iSCSI, NFS, CIFS), business continuity (replication), reporting, management, scalability. Any SAN can spin disks; what really matters is whether it will handle what you want to do without complex add-on products and additional staff as the environment grows more complex. The big guys will make it seem magical, but the real magic is in well-written, easy-to-use software. The hardware behind the scenes in most of these solutions is the same.

The first step is to try to figure out whether you are looking for SAN, NAS, or both. There are basically two architectures out there in the SAN/NAS world, controller-based and grid-based, and both have their pluses and minuses. Controller-based models (EMC, Hitachi, NetApp, Compellent) are the traditional 2-node setup: simple and easy to set up, but they don't scale in a linear fashion, since you don't add processing power as you add more disk.

Grid-based systems (XIV, 3PAR, Isilon, Pillar) scale more linearly, but tend to be more rigid in how you can upgrade and scale, since these systems require the workload to be balanced across the grid.

I would highly recommend leaving yourself a lot of time for analysis before you make a purchasing decision. Document and generate your requirements and future plans. There is no one system that is the best in every scenario. I have used NetApp, 3PAR, EMC, Isilon and Compellent in different scenarios for different reasons.

As you go through your decision-making process, be sure to include plans for growth, and force the vendors to not only show the cost of future upgrades but also show you how the upgrade will be implemented. For example, it is easy to add 2 shelves of new drives to any system... be sure to ask, "now how do I add this additional space to my existing systems and rebalance the workloads?" This is not the same, nor easy, for every vendor. Just about every vendor will make you a good up-front deal to lock you into their solution... be wary when the price looks too good, as it often means they will make it up in future maintenance or upgrades.

Finally, talk with customers. So often I see storage decisions made on "nobody gets fired for buying ..." I believe it to be true, for any vendor, nobody gets fired for buying them... but they might get fired for improperly managing them :-) Find something that will meet your needs, and do it simply. The more complex the solution is, the higher the risk of failure. The name of the company will not create or prevent failure. Find customers that have used more than one solution. You will find it hard to find a customer that can give an unbiased opinion when they have only had one flavor of kool-aid their whole life.

I hope this helps, but maybe not. The standard answer in IT "it depends", certainly applies here :-)

Good luck!
cgehring is offline   Reply With Quote
Old 05-29-2012, 11:21 AM   #19
Gheris
Senior Member
 
Gheris's Avatar
 
Join Date: Oct 2005
Location: New Jersey
Posts: 305
Default

Thanks cgehring. This is more for familiarity than anything else. I may be working with SANs in the not-too-distant future and I am trying to prepare myself. I'm not even aware of what solution is or will be put in place.
Gheris is offline   Reply With Quote
Old 05-29-2012, 01:18 PM   #20
KentState
Diamond Member
 
KentState's Avatar
 
Join Date: Oct 2001
Location: Atlanta, GA
Posts: 5,787
Default

My recommendation is to engage an IT consulting firm in your area that has experience with SAN technology to help with an RFP. Unless you implement hundreds of these from many different vendors, it's hard to distinguish the BS that the vendors will feed you. I know companies like CDW will try to help, but even they are more about making a sale than finding the best solution.
KentState is offline   Reply With Quote
Old 05-29-2012, 05:22 PM   #21
Brovane
Diamond Member
 
Brovane's Avatar
 
Join Date: Dec 2001
Location: Orange County, CA
Posts: 3,108
Default

Quote:
Originally Posted by cgehring View Post
Brovane,
snip

Thank you for your input; I really appreciate it. The system we are purchasing this year wouldn't run VDI. The VDI system would be installed at our call center, which is at a different location. We want to make sure that Compellent supports VDI because whatever we purchase for our SAN solution, we want to keep it the same across all of the sites in CA that we support. We don't want a situation where one site has EMC and all the other sites have Compellent.

I went to a storage conference that Dell sponsored at another site last week. A lot of good information, and so far everything seems in favor of going the Compellent route. Within the company I work for, 5 other sites in other states have purchased Compellent and have been extremely happy with it. The general consensus is that while they liked EMC, they love Compellent.

Talking to Dell at dinner last week, one of the points they made was about the forklift question. With the current upgrade we are doing, we are having to spend money to migrate terabytes of data from the current EMC SAN to a new SAN; we are paying to move this data, with the associated impact on the system as the data is migrated. With a Compellent SAN you really never have to migrate data from one array to another; you can swap heads out while the data stays in place. Your configuration is exactly the point I was making to our local management: with Compellent you can essentially run FC and SAS drives in the same system, and you decide when you want to get rid of your FC shelves. We have FC shelves purchased in the last couple of years for EMC, and they will become doorstops once we are done. (Expensive doorstops.)

We are working with Dell to get some TCO numbers, because out the door EMC and Compellent are very close to each other, but where Compellent pulls ahead is when you start getting into the 5th-6th year, when you normally have to do a forklift with EMC.
Brovane is online now   Reply With Quote
Old 05-29-2012, 08:05 PM   #22
RobertR1
Golden Member
 
Join Date: Oct 2004
Posts: 1,092
Default

VDI will be supported by any of the enterprise SAN manufacturers. Your challenges around VDI will come from other places (use cases, compute environment, app compatibility, etc...)

Also Dell bought out Wyse so they'll be pushing the Wyse clients. The P20 is a very good product. We've done large successful deployments with it.
__________________
2600k @ 4.6ghz, Hyper 212+, Asrock Z68 Extreme 4 Gen3, Corsair Veng 8GB CL9 1600 RAM, OCZ ZX 850W, Crucial M4 128GB, Haf X, Zotac AMP! GTX 580 @ 950/1900/2225, Dell 3008 WFP (2560x1600)

Last edited by RobertR1; 05-29-2012 at 08:10 PM.
RobertR1 is offline   Reply With Quote
Old 05-30-2012, 01:17 PM   #23
Conscript
Golden Member
 
Conscript's Avatar
 
Join Date: Mar 2001
Location: Glen Allen, VA
Posts: 1,698
Default

If we're going to look at the 5th and 6th year, it's probably pretty wise to consider all aspects. As Dell falls further and further, and cuts R&D spending more and more, expect the Compellent product and support to deteriorate. If the price is the same, and you can only choose between the two, it's a no-brainer to me: EMC all the way. Dell's only competitive advantage is their ability to sell at low or negative margins right now, because they're already running a lean business and trying to establish a market foothold.
__________________

Heatware 93-0-0
Conscript is offline   Reply With Quote
Old 05-30-2012, 01:35 PM   #24
Brovane
Diamond Member
 
Brovane's Avatar
 
Join Date: Dec 2001
Location: Orange County, CA
Posts: 3,108
Default

Quote:
Originally Posted by Conscript View Post
If we're going to look at 5th and 6th year, probably pretty wise to consider all aspects. As Dell falls further and further, and cuts R&D spending more and more, expect the Compellent product and support to deteriorate. If price is the same, and you can only choose between the two, it's a no brainer to me. EMC all the way. Dell's only competitive advantage is their ability to sell at low or negative margins right now because they're already running a lean business and trying to establish a market foothold.
I have been hearing that from the Dell Haters for over a decade. Still hasn't happened.
Brovane is online now   Reply With Quote
Old 05-30-2012, 01:55 PM   #25
cgehring
Junior Member
 
Join Date: May 2012
Posts: 8
Default

Quote:
Originally Posted by Conscript View Post
If we're going to look at 5th and 6th year, probably pretty wise to consider all aspects. As Dell falls further and further, and cuts R&D spending more and more, expect the Compellent product and support to deteriorate. If price is the same, and you can only choose between the two, it's a no brainer to me. EMC all the way. Dell's only competitive advantage is their ability to sell at low or negative margins right now because they're already running a lean business and trying to establish a market foothold.
Wow... That's some serious FUD.
cgehring is offline   Reply With Quote