SATA SSDs vs PCI-E SSDs (e.g. OCZ Z-Drive)

Which SSD do you use?

  • Intel X25-M

  • Crucial C300

  • OCZ Vertex 2

  • OCZ PCI-E Z-Drive P88 or P84

  • OCZ PCI-E Z-Drive M84

  • A-DATA S599

  • Corsair Force

  • Other



sol326

Junior Member
Sep 16, 2010
12
0
0
I've been doing a lot of reading on the different SSD technologies and trying to decide on which way is best to go for my new system build.

I am wondering which of these people here are using in a production server for their main data access. We work with many applications, from QuickBooks databases to CAD, although accounting is the most intensive use.

http://www.ocztechnology.com/products/solid-state-drives/pci-express/z-drive-r2.html

I was of course thinking of the OCZ Z-Drive R2 P88, but this PCI-E technology is evolving so darn fast that I'm hesitant to get stuck with such an expensive unit, only to see it improve two- or three-fold in the next year or two and wish I had waited. I don't want to be the guinea pig for OCZ.

Or a traditional SSD, since most of the bugs seem to be worked out. I've heard a lot of good things about both Crucial and the Intel X25-M (although the Intel really doesn't come in large enough capacities for my requirements); I need about 512GB of space to assure enough growth over the next few years.

http://www.newegg.com/Product/Produc...82E16820148349

http://www.newegg.com/Product/Produc...e=&srchInDesc=

Can anyone give some more feedback and experienced knowledge, especially on the P88? It's unclear how the RAID on the card divides up its onboard memory: does the 1TB model only give you 512GB once RAIDed, or is the 1TB already the post-RAID capacity? I've also heard these Z-Drives can't be RAIDed together, which leaves a single point of failure if they don't last.

Of course I've also been looking at a few of the mid-grade SSDs like the A-DATA, which is reasonably priced with good ratings for stability. Should I wait on any of the new stuff until it's more proven?

Look forward to ANY input people may have, thanks in advance.

-SOL326
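On the "does 1TB give you 512GB RAIDed" question: the answer depends entirely on which RAID level the card uses internally, and generic RAID arithmetic makes the possibilities concrete. A minimal sketch (the four-module layout is an assumption for illustration, not OCZ's documented design):

```python
# Generic RAID capacity arithmetic, a sketch of the question above,
# not OCZ's actual on-card layout (which the thread notes is unclear).

def usable_gb(level: str, disks: int, disk_gb: int) -> int:
    """Usable capacity for a few common RAID levels."""
    if level == "raid0":        # striping: all capacity usable
        return disks * disk_gb
    if level == "raid1":        # mirroring: half usable
        return disks * disk_gb // 2
    if level == "raid5":        # one disk's worth of parity
        return (disks - 1) * disk_gb
    if level == "raid10":       # striped mirrors: half usable
        return disks * disk_gb // 2
    raise ValueError(f"unknown level: {level}")

# If the 1TB card were four hypothetical 256GB modules striped (RAID 0),
# the full 1024GB would be usable; mirrored (RAID 10), only 512GB.
print(usable_gb("raid0", 4, 256))   # 1024
print(usable_gb("raid10", 4, 256))  # 512
```

So a "1TB" listing striped internally would really expose 1TB; a mirrored layout would expose half of the raw flash.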
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
the problem with pci-e solutions is obviously draw on the bus and heat.

gotta keep those two in check or you'll have issues
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Heat? SSDs don't produce much heat; they don't have heatsinks. Only the RAID controller might have a tiny heatsink; but it wouldn't be much.

The NAND itself would be very low power when not active. So generally SSDs require very little power.
 

MJinZ

Diamond Member
Nov 4, 2009
8,192
0
0

Nutshell: SSDs are unremarkable for sustained read/write. SSDs are godly for random access. Thus anything faster than SATA speeds is useless, for 99% of people.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Why useless? They are still several times faster than HDDs even in those workloads, and that alone can be a reason to use them.

For example, SSDs are used to accelerate ZFS arrays both as the (L2ARC) cache device and as the ZIL ('journal log') disk. The cache device needs random-read IOps, while the ZIL device needs only sequential write throughput.

Sequential writing is generally not the strong suit of SSDs, but some PCIe solutions are nice, especially if they contain a 'supercap' (super-capacitor) that ensures safe writes on power failure. Not sure which models offer this ability, though.
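The cache-device role can be illustrated with a toy model: a small, fast cache in front of a slow backing store serves repeated random reads at cache speed, which is why random-read IOps matter there. This is an illustrative Python sketch, not ZFS code; all names are made up:

```python
# Toy model of a read cache in front of a slow store, illustrating why a
# cache device (like an L2ARC SSD) lives or dies by random-read speed.
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()     # key -> block, oldest first
        self.hits = self.misses = 0

    def read(self, key, backing):
        if key in self.data:          # cache hit: served at SSD latency
            self.hits += 1
            self.data.move_to_end(key)
            return self.data[key]
        self.misses += 1              # miss: fall through to the slow disk
        value = backing[key]
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used
        return value

backing = {i: f"block{i}" for i in range(100)}
cache = ReadCache(capacity=10)
for i in [1, 2, 3, 1, 2, 3, 4]:       # repeated reads hit the cache
    cache.read(i, backing)
print(cache.hits, cache.misses)       # 3 4
```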
 

MJinZ

Diamond Member
Nov 4, 2009
8,192
0
0

Useless for most people. I'm sure they are a godsend for select professionals.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Some products are listed as 2010 Q4 (think November), some as 'early 2011' or 2011 Q1. The 40GB Intel X25-V and the 1.8" Intels are among them.

So I guess you could say 2011 brings us the newer "25nm" generation of SSDs with newer controllers as well. Not much info about those yet, let alone benchmarks.
 

FishAk

Senior member
Jun 13, 2010
987
0
0
needing about 512GB in space to assure enough growth over the next few years.

Keep in mind that since the Z-Drive is internally RAID, it does not support TRIM.

Also, keep in mind that, as mentioned earlier, the huge speed advantage of an SSD shows up on operations smaller than about 32KB. They are faster with larger files as well, but the difference is very small on writes and only about twofold on reads. Of course only you can decide if that improvement in speed is worth the price. I would think a database has many small files, but I'm mostly ignorant in this area.

Don't worry too much about the next few years. SSD tech is moving very fast, and when you need more room, the drive you want and need will be relatively cheap. Plan for what you have now and the next year or so, but not much beyond that. You'll almost certainly want to upgrade to a bigger/cheaper/faster drive in a year or so anyway.
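The small-operation advantage is mostly access-latency arithmetic, which a rough sketch makes visible. All figures below are illustrative 2010-era assumptions, not measurements: roughly 8 ms average HDD seek, roughly 0.1 ms SSD access, 100 MB/s vs 200 MB/s transfer.

```python
# Back-of-envelope check on the "SSDs shine on small operations" point.
# Access latency dominates small I/O; transfer rate dominates large I/O.

def io_time_ms(size_kb: float, access_ms: float, mbps: float) -> float:
    transfer_ms = size_kb / 1024 / mbps * 1000
    return access_ms + transfer_ms

for size_kb in (4, 32, 1024, 65536):
    hdd = io_time_ms(size_kb, 8.0, 100)   # assumed HDD: 8 ms, 100 MB/s
    ssd = io_time_ms(size_kb, 0.1, 200)   # assumed SSD: 0.1 ms, 200 MB/s
    print(f"{size_kb:>6} KB: HDD {hdd:8.2f} ms, SSD {ssd:8.2f} ms, "
          f"ratio {hdd / ssd:5.1f}x")
```

With these assumed numbers the ratio is huge at 4KB and shrinks toward roughly 2x at 64MB, which matches the "only twice on reads" observation.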
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76

http://www.storagereview.com/intel039s_ssd_roadmap_leaked_x25m_g3_confirmed_2010
From 1 month ago:
Q4 2010 brings all the X25-M (160, 300, and 600GB), and the 80GB X25-V.
Q1 2011 brings all the X25-E, all the X18-M, and the 40GB X25-V.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
@Emulex: was that meant for me?

SSDs indeed use very little energy; but 'less' needs a point of comparison, and 'less than an ice cube' isn't a useful one. :)

But an SSD does use less than any HDD:

SSD in desktop: 0.5W idle
SSD in laptop: 0.0075W idle (requires DIPM)
Notebook HDD: 0.7-1.4W idle (5400/7200rpm)
VelociRaptor: 4W idle
5400rpm 2TB green: 3.5-4.5W idle
7200rpm single-platter: 6W
7200rpm multi-platter: 7-9W

Only the crappy "Apex" SSDs with dual crappy JMicron controllers would consume more than an average laptop HDD. Virtually all modern SSDs use under 1W power.
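For scale, the idle figures above translate into yearly energy like this; a minimal sketch, where the $0.10/kWh electricity price is an assumption for illustration:

```python
# Rough yearly energy comparison from the idle wattages listed above.

def yearly_kwh(watts: float) -> float:
    """Energy used in a year of continuous draw at the given wattage."""
    return watts * 24 * 365 / 1000

ssd_idle_w = 0.5    # desktop SSD idle, per the list above
hdd_idle_w = 8.0    # 7200rpm multi-platter, middle of the 7-9W range

for name, watts in [("SSD", ssd_idle_w), ("HDD", hdd_idle_w)]:
    kwh = yearly_kwh(watts)
    print(f"{name}: {kwh:.1f} kWh/year, ~${kwh * 0.10:.2f} at $0.10/kWh")
```

The absolute dollar difference is small either way; the interesting part for a server is the heat and PSU-rail load, not the bill.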
 

yinan

Golden Member
Jan 12, 2007
1,801
2
71
I have both a Western Digital Blue and an OCZ 120GB RevoDrive. The RevoDrive is as fast as three of the WD drives in RAID 0. It is speedy and I love it; also, since it is PCIe, there is no cabling involved :)
 

beginner99

Diamond Member
Jun 2, 2009
5,315
1,760
136
Intel.

If space is really an issue, wait for the new 600GB model. If not, buy now. The current Intels are probably the most reliable ones, and reading the subtext (production server) in your post, that is probably a very important point. Hence forget the PCI-E stuff.

You can't conclude from this that the new Intels will be as reliable, but Intel sure does the most testing compared to Crucial, OCZ, ... they both had their issues in the beginning, and issues = potential data loss and downtime for the server.
 

Emulex

Diamond Member
Jan 28, 2001
9,759
1
71
let's talk about power at 100% use (read/write), not idle.

let's compare a 160GB X18-M against a 160GB (or similar) 1.8" drive in use?

that way we only deal with 3.3V and 5V; 12V seems archaic, yeah?
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,390
8,547
126
the RevoDrive is a bunch of standard SSD controllers RAIDed through SATA on a PCI-e card. it's not native PCI-e (in fact it's SATA -> PCI-X -> PCI-e). on a good SATA controller, RAIDed Vertex 2 drives are faster. and right now it looks like the Vertex 2 drives would be $50 less expensive (plus they're likely easier to sell down the road) for 2x60GB rather than a 120GB Revo.
 
May 29, 2010
174
0
71
I can tell you that Micron has an entire group of engineers devoted to PCIe SSDs. I can also tell you that the PCIe memory cards are O-MY-GAWD BLAZINGLY fast and likely to be even more gawd-awful expensive, even compared to non-cheap SSD pricing. The issue is that they are not seen as consumer items; they are being developed for enterprise storage solutions only.

Unlike typical SSDs, the PCIe solutions are adorned with heatsinks and fans. Just like with video cards, cranking up the controller speeds, chip density, and amount of memory makes a lot more heat.

Will they ever become consumer-type items with correspondingly lower prices? Not any time soon. Just the hardware is expensive as hell. Terabytes of SLC NAND, and the number of controllers needed to fill the bandwidth of the PCIe bus, do not come cheap.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76

1. The best SSDs still beat the pants off the best HDDs in terms of active power consumption... that being said, there is a lot of variance for both SSDs and HDDs, so the best HDDs do indeed beat the worst SSDs... but that's an unfair comparison; the worst SSDs are REALLY bad.

2. SSDs don't generate enough heat to raise their temperature even a single degree. HDDs can get very, very hot.
 

wpcoe

Senior member
Nov 13, 2007
586
2
81
The PCIe cards I've seen have no fans, and the only heatsink seems to be a small one on a single chip (the controller?). For example, see: Z-Drive R2 OCZ Technology
 

sol326

Junior Member
Sep 16, 2010
12
0
0
Hey Guys~

Thanks for all the feedback. Looking at this from the standpoint of removing bottlenecks to improve efficiency, I'm still weighing the comparison on both price and failure points.

Since my goal is to not lose data upon failure, I need to use RAID, and from that standpoint it seems TRIM doesn't really matter whichever way you go, PCI-e or SATA SSD. The PCI-e cards rely on GC, and although some SSDs use both GC and TRIM, TRIM doesn't work in an array of SSDs. So that makes TRIM a non-issue.

Efficiency/speed seems to be similar for SSDs and PCI-e cards once the SSDs are in a RAID, from what I read. Not on every level; on some things like small writes the SSDs sound like they're still faster, but not by much, as mentioned by a few earlier posters here.

As for price (removing FusionIO from the equation, now that it is out of anyone's range but enterprise's)... it seems pretty varied depending on the quality of controller you want to use and trust. The controller seems to be the biggest price differentiator, since you are paying for quality. However, there is a HUGE difference in price once you account for the RAID card that SSDs need; with PCI-e there are no backplanes or cards to deal with, which brings their total price down. Both SSDs and PCI-e cards have to be idled periodically to keep from losing their efficiency over time.

There are even more price factors to consider with SSDs: what array will you use, and how many drives will it take? I like RAID 10 personally, but either way most redundant RAIDs take at least 3-5 disks, and more like 5-8 if you use hot swaps; since most HBAs won't let you mix SSDs and traditional HDDs other than as hot swaps, you will always need an extra lying around 'just in case'.

So PCI-e is starting to sound cheaper at $2300, eh? Considering 5 SSDs with good controllers cost $2500-3000 (plus $500 for a good HBA that is compatible with VMware). But since a PCI-e card is only RAIDed within itself and cannot be RAIDed with any other storage device, its single point of failure is quite a downfall.

To me, I'm leaning toward PCI-e on price, if I can find a good way to remove that failure point; if not, I'm leaning toward some Vertex 2s at this point, although the Crucial C300 sounds attractive as well. I SOO want to try out the new stuff, but not at the cost of losing my data, which means SATA SSDs are probably my only option.

Any more insight? Any suggestions on how to instantaneously copy the PCI-e card to an HDD/SSD (RAID 1?) for fast recovery if the card were ever to fail?

Any further suggestions on how to build a good, efficient SAN server for a decent price?

Sol326
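The price comparison above can be tallied with the thread's own rough numbers. All figures are the poster's estimates, not actual quotes:

```python
# Sketch of the SSD-array vs PCIe-card cost comparison from the post above.

pcie_card = 2300                    # single Z-Drive-class PCIe card
ssd_each = (2500 + 3000) / 2 / 5    # 5 SSDs at $2500-3000 total, midpoint
hba = 500                           # VMware-compatible RAID/HBA card

ssd_array = 5 * ssd_each + hba
print(f"PCIe card:  ${pcie_card}")
print(f"SSD array:  ${ssd_array:.0f} (5 drives + HBA)")
print(f"difference: ${ssd_array - pcie_card:.0f} in the PCIe card's favor")
```

On these numbers the PCIe card comes out a few hundred dollars cheaper, which is what makes its single point of failure the deciding question.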
 

sol326

Junior Member
Sep 16, 2010
12
0
0
Maybe you missed my point. I'm only referring to a portion of my total system setup; I'm concentrating on one point at a time. So to clear up that statement: 'failure of a disk or two', not total catastrophe... that is what the instantaneous copy is for. I was thinking more of the PCI-e card's single point of failure, depending on only one device, vs. a few disks in a RAID where you have a bit of cushion. But thanks for the reminder, FishAk.
 