EMC vs Compellent


Crogers81

Junior Member
Sep 13, 2012
2
0
0
So FYI: I am a Dell employee, but I want to clarify the zero-space migration. Yes, the Compellent system does disregard the zero or free space, but it has to READ all the zeros to disregard them. So when you send data from an EMC to CML it has to at least look at everything, though not write it. Also, you can set the priority of the migration to high, and that does help thin import times.
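
In pseudo-code terms, the behavior described above looks something like this (a rough sketch of my own for illustration, NOT Compellent's actual implementation; the function names and chunk size are made up):

```python
CHUNK = 1024 * 1024  # 1 MiB per read; arbitrary for the example


def thin_import(source_read, dest_write, volume_size):
    """Copy a volume thinly: read everything, write only nonzero chunks."""
    bytes_read = 0
    bytes_written = 0
    for offset in range(0, volume_size, CHUNK):
        chunk = source_read(offset, min(CHUNK, volume_size - offset))
        bytes_read += len(chunk)      # every block is read from the source...
        if any(chunk):                # ...but all-zero chunks are never written
            dest_write(offset, chunk)
            bytes_written += len(chunk)
    return bytes_read, bytes_written
```

The point is that the scan cost is proportional to the total volume size, not to the data in it, even though only the nonzero chunks generate writes on the destination.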
 

tintin

Junior Member
Aug 23, 2000
4
0
0
Working for a rather large outsourcer, my team looks after pretty much every SAN product and vendor; I think we're currently above 5 petabytes total.
Nowadays, with such large controller caches and SSD tiering, almost any vendor can provide the performance that all but very large implementations require. The real differentiators are around support (including firmware revisions), speed of access to 'skilled' techies, etc., and extra functionality: replication, backups, application integration. Please remember that a new storage array is not a panacea for all your storage problems or requirements; the actual SAN, whether FC or network, has to be looked at closely too, including software stacks on servers and bandwidth/queue depth on ports or ISLs.
As a personal opinion, overall for most installations I prefer NetApp; the flexibility they can provide cannot really be beaten. Though it's still not right for every situation, and it all generally comes down to who can give the biggest discount.
But always get free training units thrown in, no matter what vendor you go for, as it's generally easier to get some freebies before you place the PO.
On a final note, make sure you get a good historical performance tool in place, whether from the vendor or a third party. So many times in my career I've had people complain about performance, but they have no way of monitoring or capturing it apart from in real time, and that makes life hellish when trying to fix issues.

Man, I really hate typing more than a few words on an iPad, blah.
 

tintin

Junior Member
Aug 23, 2000
4
0
0
Oh, and remember: in almost all circumstances latency is king, especially in VDI/VMware/OLTP DB deployments.
 

Brovane

Diamond Member
Dec 18, 2001
6,226
2,463
136
So FYI: I am a Dell employee, but I want to clarify the zero-space migration. Yes, the Compellent system does disregard the zero or free space, but it has to READ all the zeros to disregard them. So when you send data from an EMC to CML it has to at least look at everything, though not write it. Also, you can set the priority of the migration to high, and that does help thin import times.

In my opinion, that still doesn't explain the fact that it took around 10 hours to migrate a 6TB volume with 250GB of data on it. The migration didn't seem to speed up at all as it worked through this volume, even though less than 5% of it held data; most of the volume was zeros. The migration priority was set to high. Like I said, both the Dell install engineer and the sales storage architect were scratching their heads over this.
 

Brovane

Diamond Member
Dec 18, 2001
6,226
2,463
136
But always get free training units thrown in, no matter what vendor you go for, as it's generally easier to get some freebies before you place the PO.
On a final note, make sure you get a good historical performance tool in place, whether from the vendor or a third party. So many times in my career I've had people complain about performance, but they have no way of monitoring or capturing it apart from in real time, and that makes life hellish when trying to fix issues.

Man, I really hate typing more than a few words on an iPad, blah.

Performance monitoring was tops on my list. We have a license for SolarWinds Storage Profiler, but it was never properly implemented for our EMC arrays. I dusted off the software last week, built a new machine, and loaded everything, and the software is now taking in performance data for both our Compellent and EQL arrays. I can really drill down now into the arrays, FC, etc. and get some good stats. It's still not fully implemented, but it is already pulling some good data.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
In my opinion, that still doesn't explain the fact that it took around 10 hours to migrate a 6TB volume with 250GB of data on it. The migration didn't seem to speed up at all as it worked through this volume, even though less than 5% of it held data; most of the volume was zeros. The migration priority was set to high. Like I said, both the Dell install engineer and the sales storage architect were scratching their heads over this.

It makes perfect sense given what he told you.

6 TB = 6,000,000 MB = 48,000,000 Mbit
10 hours = 600 minutes = 36,000 seconds
48,000,000 Mbit / 36,000 seconds = ~1,333 Mbps (~167 MB/sec)

Don't forget that he told you: even though the white space is not written to disk, all of it is read by the SAN. So if you only had 250 GB used in a 6 TB volume, you should have shrunk the volume prior to migrating it, so it wouldn't have had to read 6 TB just to transfer 250 GB. If you had made it a 500 GB volume, it would have taken about 50 minutes to transfer, assuming the same read rate.
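
Running the same back-of-the-envelope numbers in a few lines (plain arithmetic, using the roughly 10 hours reported earlier in the thread):

```python
# Effective read throughput if a 6 TB volume is scanned end-to-end in ~10 hours.
volume_mb = 6_000_000            # 6 TB expressed in MB (decimal units)
seconds = 10 * 3600              # 10 hours

mb_per_sec = volume_mb / seconds     # ~167 MB/sec
mbit_per_sec = mb_per_sec * 8        # ~1,333 Mbps
```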
 

Brovane

Diamond Member
Dec 18, 2001
6,226
2,463
136
It makes perfect sense given what he told you.

6 TB = 6,000,000 MB = 48,000,000 Mbit
10 hours = 600 minutes = 36,000 seconds
48,000,000 Mbit / 36,000 seconds = ~1,333 Mbps (~167 MB/sec)

Don't forget that he told you: even though the white space is not written to disk, all of it is read by the SAN. So if you only had 250 GB used in a 6 TB volume, you should have shrunk the volume prior to migrating it, so it wouldn't have had to read 6 TB just to transfer 250 GB. If you had made it a 500 GB volume, it would have taken about 50 minutes to transfer, assuming the same read rate.

I had that volume done first as a test, to see how the migration would treat the white space and how quickly it would move through it. Dell told me to expect about 500-600 GB/hr for offline data migration over FC. In all my pre-deployment meetings, the discussions and timelines were based on the data that needed to be migrated, not on total volume size, so I was basing my outage windows on the total data in a volume, not the volume size. The PM from Dell briefly mentioned that volume white space would still slow things down a little, but said it wouldn't be significant overall. I went over my outage windows with Dell several times and reviewed the fact that they were based on the assumption that data would transfer at 500GB/hr, counting just the data in a volume. If the Dell PM or Dell install engineer had indicated that the 500GB/hr figure was based on total volume size and not on the data inside the volume, I would have re-worked my figures and adjusted my outage windows. It still worked out OK, because the actual data transfer speed came out to over 1000GB/hr, so I still met my outage windows.

I kind of felt that the Dell team couldn't figure out what was correct. Maybe I should have questioned it harder, but I didn't during the meetings. Even the original quote for paying for the data migration went back and forth: originally they told me it was based on volume size, not on data in a volume; then the Dell team told me it was based on the data in the volumes being migrated, not on volume size. What was also weird during testing with the 6TB volume was that both the install engineer and the storage architect from Dell onsite thought the volume should be done within 1-2 hours at most. So we started it and then went to lunch. We came back from lunch and around 500GB had been moved. They were both surprised, since they expected the volume import to be almost complete.

Also, the volume in question for testing was part of a geo-cluster being replicated from another data center. It would have been quicker for me to just build a new volume on the Compellent, add it to the geo-cluster, take the old volume offline, and let the data replicate again, since I have a 1Gb link between the data centers. At least the cluster replication software would have copied only the data actually in use.
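
The mismatch between the two sizing assumptions is easy to quantify with the numbers from this thread (the quoted 500 GB/hr rate and the 6 TB / 250 GB test volume):

```python
# Outage-window estimate at Dell's quoted 500 GB/hr, comparing the two
# interpretations of what that rate applies to (illustrative numbers only).
rate_gb_per_hr = 500
data_gb = 250        # actual data on the test volume
volume_gb = 6000     # full (thin) volume size

hours_if_data_only = data_gb / rate_gb_per_hr       # 0.5 hours
hours_if_full_volume = volume_gb / rate_gb_per_hr   # 12 hours
```

A 24x difference, which is exactly why it matters whether the quoted rate applies to the data or to the whole volume when planning outage windows.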
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I had that volume done first as a test, to see how the migration would treat the white space and how quickly it would move through it. Dell told me to expect about 500-600 GB/hr for offline data migration over FC. In all my pre-deployment meetings, the discussions and timelines were based on the data that needed to be migrated, not on total volume size, so I was basing my outage windows on the total data in a volume, not the volume size. The PM from Dell briefly mentioned that volume white space would still slow things down a little, but said it wouldn't be significant overall. I went over my outage windows with Dell several times and reviewed the fact that they were based on the assumption that data would transfer at 500GB/hr, counting just the data in a volume. If the Dell PM or Dell install engineer had indicated that the 500GB/hr figure was based on total volume size and not on the data inside the volume, I would have re-worked my figures and adjusted my outage windows. It still worked out OK, because the actual data transfer speed came out to over 1000GB/hr, so I still met my outage windows.

I kind of felt that the Dell team couldn't figure out what was correct. Maybe I should have questioned it harder, but I didn't during the meetings. Even the original quote for paying for the data migration went back and forth: originally they told me it was based on volume size, not on data in a volume; then the Dell team told me it was based on the data in the volumes being migrated, not on volume size. What was also weird during testing with the 6TB volume was that both the install engineer and the storage architect from Dell onsite thought the volume should be done within 1-2 hours at most. So we started it and then went to lunch. We came back from lunch and around 500GB had been moved. They were both surprised, since they expected the volume import to be almost complete.

Also, the volume in question for testing was part of a geo-cluster being replicated from another data center. It would have been quicker for me to just build a new volume on the Compellent, add it to the geo-cluster, take the old volume offline, and let the data replicate again, since I have a 1Gb link between the data centers. At least the cluster replication software would have copied only the data actually in use.

I find that all too common. In some of the POCs I've done, I've had "storage engineers" getting bits and bytes mixed up and spouting throughput numbers without mentioning latency. I just kept thinking to myself, "Really? You're here trying to convince me to switch SAN vendors and spend a half million dollars with you, and you can't keep bits/bytes straight and can't tell me the theoretical maximum IOPS per 15k SAS spindle at such-and-such latency?" At least I got several free meals out of it...
 

Crogers81

Junior Member
Sep 13, 2012
2
0
0
I agree that sometimes the information needed to conduct a job isn't properly communicated. The 500GB-an-hour rate is the AVERAGE for most environments when conducting a migration via Fibre Channel. This rate can vary depending on a number of factors, including the source RAID type and disk type, the same on the destination, etc., as well as the CPU utilization on both sides. Quick question: did you use a migration profile to a particular tier/RAID type? Also, did you know you can perform live migrations on CML? They are unsupported, but they work for environments that can be easily rolled back, like file servers. I have used it on SQL servers with customer permission, but again, live isn't supported. It's one of those features tucked away, but usable for demanding environments.
 

Brovane

Diamond Member
Dec 18, 2001
6,226
2,463
136
I agree that sometimes the information needed to conduct a job isn't properly communicated. The 500GB-an-hour rate is the AVERAGE for most environments when conducting a migration via Fibre Channel. This rate can vary depending on a number of factors, including the source RAID type and disk type, the same on the destination, etc., as well as the CPU utilization on both sides. Quick question: did you use a migration profile to a particular tier/RAID type? Also, did you know you can perform live migrations on CML? They are unsupported, but they work for environments that can be easily rolled back, like file servers. I have used it on SQL servers with customer permission, but again, live isn't supported. It's one of those features tucked away, but usable for demanding environments.

Yes, we used a Migration Storage Profile. The data was migrated into RAID 5/6 only, and any storage tier could be used. The explanation from Dell was that there were two ways to do the migration.

Method #1 was to do an offline migration, which is the method we used.
Method #2 was to use Double-Take. This would replicate the data over while the server was online; however, this method was much slower.
 

xenosloki

Junior Member
Jul 25, 2013
1
0
0
Brovane - now that you've been running Compellent for almost a year, any recommendations?

We're looking at VNX2 vs Compellent vs (a few others) right now. Would love to talk to a non-reference Compellent customer... especially one that went from EMC to Compellent.

PM me if you have a few minutes for a brief phone call.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
I can get you some killer 3PAR pricing ;) Compellent is seen as the ghetto 3PAR in the industry.