
Should I spend 700 bucks on an OCZ Vertex 250gig

Darkstar757

Diamond Member
I have been reading up on SSDs for quite a while now. All the performance issues really scared me away. Now that the OCZ issues have been worked out, I am itching for an upgrade. I just don't know if I should buy this for my desktop or just get a small one for my Athlon X2 based laptop. Can some SSD owners please chime in to give me some insight here?

Thanks,
Dark
 
If you are going to use them on your desktop rig, you could get better performance per dollar if you buy the smaller-capacity models (at slightly higher $/GB per drive) and raid-0 them with your mobo's hardware raid capability.
 
What he's suggesting is just using the smaller capacity Vertex models in RAID 0. For your laptop obviously a single drive is the answer, but for desktop you have some flexibility.

Viper GTS
 
It won't save you any money. It will actually cost you more money.

You will, however, get better performance for each of those $.

Viper GTS
 
hummm

I am wondering if I should just let them be. 700 bucks is just insane for an HD.


This is also coming from a man who owns two 280GTX's.
 
It's like this:

30GB is $108 w/o MIR = $3.60/GB

60GB is $199 w/o MIR = $3.32/GB

120GB is $399 w/o MIR = $3.33/GB

250GB is $755 w/MIR = $3.02/GB

(MIR applies to first drive only, additional drives are $20 more each)

If your budget is $755 for the 250GB model then with that same budget you could buy six (6) 30GB drives for about the same total cost ($108 for the 1st drive, $128 each for the remaining five) but have up to 6x the performance in a raid-0 configuration. (Your mileage will vary on actual performance depending on the quality of the raid controller you use.)

Yes you end up with less aggregate capacity for the same budget, 180GB with the 6x30GB array versus 250GB with the single drive, but your performance will be substantially faster.

Alternatively, if you are really interested in the capacity and aren't interested in wiring up 6 sata disks (your mobo probably has only 6 sata ports anyways, just guessing) then you could go for 4x60GB = 240GB total storage for $856 and have 4x the performance over the 250GB single drive for about $100 more.
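For anyone who wants to double-check the math above, here's a quick sketch in Python. The prices and the "+$20 for each drive after the first" rule are taken from the figures quoted in this post, not from any current listing:

```python
# Cost arithmetic for the raid-0 options discussed above.
# Prices are the quoted w/o-MIR figures; per the MIR note, each drive
# after the first costs $20 more than the listed price.
PRICES = {30: 108, 60: 199, 120: 399}  # capacity (GB) -> price ($)

def array_cost(capacity_gb, n_drives):
    """Total cost of n identical drives: first at list, the rest at list + $20."""
    base = PRICES[capacity_gb]
    return base + (n_drives - 1) * (base + 20)

print(array_cost(30, 6))                       # 748 -> six 30GB drives, 180GB total
print(array_cost(60, 4))                       # 856 -> four 60GB drives, 240GB total
print(round(array_cost(30, 6) / (6 * 30), 2))  # 4.16 $/GB for the 6x30GB array
```

Note the arrays actually cost more per GB than the single 250GB drive ($3.02/GB); the trade is capacity and $/GB for raw throughput.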
 
The Samsung from Dell is cheaper, I believe.
$640 or so after applying one of the rebates.
And Anand likes it better ;-)

Samsung = OCZ Summit.
 
I think the price point for these is ridiculous. Think about it: you could get like 8 or 9 TB for $800. I'd probably wait a while until they drop down to roughly $1-$1.50/GB. Hell, for $800 you could have yourself another pretty decent computer. Anyways, it's really up to you; if you really think it's worth it then go for it, or if you're super rich 😛.
 
Wouldn't RAID hinder latency quite a bit in these drives? I know that it doesn't really matter with rotating media, because latency is already so high, but it may make a large difference with SSDs.

Or I could be talking out of my rear end ...
 
He means that two 30GB drives in raid-0 will perform (theoretically) about 2x faster than one 60GB drive. It's not gonna save you money,

even though the 60GB drive is $200 and two 30GB drives will cost $220 for both. $20 more will give a lot more performance.
 
Do you really have 250GB worth of programs that you need on your desktop?

The benefits of an SSD aren't really noticed by filling it with a bunch of big media files.

You should instead get only the size you need for your OS and programs (60GB or 120GB), and store your data on a hard drive.

Basically the best of both worlds, awesome performance for programs, great value for storage space.

Whatever you do, I would not go with the onboard RAID idea. Not worth the hassle, believe me.
 
Originally posted by: Martimus
Wouldn't RAID hinder latency quite a bit in these drives? I know that it doesn't really matter with rotating media, because latency is already so high, but it may make a large difference with SSDs.

Or I could be talking out of my rear end ...

Yep, latency will absolutely take a slight hit; by slight we are usually talking about increasing the base latency by another 10% or so.

It's not going to take your 0.1ms SSD latency and crater it to 3ms or something bad like that.

It will take your 0.1ms SSD and increase that read latency to 0.11 or 0.12ms. Depending on the quality of the raid controller, whether it has cache, etc., your write latency can actually go down as the latency gets masked.

This is particularly true for small file writes (they fit into small caches just fine), which is exactly why raid cards with onboard cache are the brute-force method of masking the JMicron stutter issues.
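To put the ~10% figure in perspective, here's the back-of-envelope arithmetic (the 0.1ms base latency and the 10% overhead are the assumed numbers from this post, not measurements):

```python
# Rough check of the raid-0 latency claim above: a ~10% controller
# overhead (assumed figure from this post) applied to a 0.1ms SSD read.
base_latency_ms = 0.10   # single-SSD read latency
raid_overhead = 0.10     # assumed raid-0 overhead fraction
raid_latency_ms = round(base_latency_ms * (1 + raid_overhead), 3)
print(raid_latency_ms)   # 0.11 -> nowhere near a "cratered" 3ms latency
```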
 
Originally posted by: Idontcare
Originally posted by: Martimus
Wouldn't RAID hinder latency quite a bit in these drives? I know that it doesn't really matter with rotating media, because latency is already so high, but it may make a large difference with SSDs.

Or I could be talking out of my rear end ...

Yep, latency will absolutely take a slight hit; by slight we are usually talking about increasing the base latency by another 10% or so.

It's not going to take your 0.1ms SSD latency and crater it to 3ms or something bad like that.

It will take your 0.1ms SSD and increase that read latency to 0.11 or 0.12ms. Depending on the quality of the raid controller, whether it has cache, etc., your write latency can actually go down as the latency gets masked.

This is particularly true for small file writes (they fit into small caches just fine), which is exactly why raid cards with onboard cache are the brute-force method of masking the JMicron stutter issues.

Are there any tests out there that measure the latency hit for fast SSDs (especially Intel SSDs)? I have seen some for rotating media in the past, but it was no big deal because they were so slow in the first place; adding a couple ms to access times was nothing. Adding a couple ms to something that has access times of 0.1ms would obviously be a big deal.

I would want to see how RAID affects these accesses in real life before I decide whether to go with a RAID-0 setup. If it is as you say, then I have no problem dealing with a 10% increase in random access times, but if it is much greater than that, I wouldn't feel it is worth it to me.

Most of all, I am just curious about RAID in a SSD setup. Since they are an order of magnitude faster in some ways, I wonder if RAID would hinder them more than help in some applications.

EDIT: Thanks for taking the time to answer my question.

This has been something I have wondered about SSDs for quite a while, but I haven't seen any testing that compared these things between RAID systems and non-RAID systems.
 
Originally posted by: Martimus
Are there any tests out there that measure the latency hit for fast SSDs (especially Intel SSDs)? I have seen some for rotating media in the past, but it was no big deal because they were so slow in the first place; adding a couple ms to access times was nothing. Adding a couple ms to something that has access times of 0.1ms would obviously be a big deal.

Yes, there are bench programs out there with the resolution to measure sub-100us latency to the media. I'll search for a few to add to this post, but they are pretty abundant, from my recollection of reading a lot of reviews these past six months.

Here's a quick example of how putting SSDs in raid-0 is not going to crater your latency. Note that in this review both the single SSD and the 2x raid-0 SSD array measure as having a 0.10ms access time.

http://www.tomshardware.com/re...-memoright,1926-6.html

Clearly both cases report as 0.10 ms, and for all we know the raid-0 array has 3x or 4x the latency of the single SSD, but regardless of how bad the latency became, it still did not exceed 0.15ms (presumably the round-up threshold needed for this program to report 0.20 ms instead).
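The round-up argument can be sketched like this (the 0.1ms display step is an assumption about how the benchmark reports its numbers, not something documented in the review):

```python
# Why a benchmark can show "0.10 ms" for both the single drive and the array:
# assume the tool rounds its displayed figure to a 0.1 ms step.
def reported(latency_ms, step=0.1):
    """Latency as displayed by a tool with a fixed step resolution (assumed)."""
    return round(latency_ms / step) * step

print(reported(0.12))  # 0.1 -> a 20% worse array still displays as 0.1 ms
print(reported(0.17))  # 0.2 -> only past ~0.15 ms does the reported figure change
```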

I mentioned the latency increase in my above post merely for the sake of completeness and 5-nines correctness. It is not of practical consideration, but you know these forums: if I had said "raid-0 latency is identical to non-raid latency" then at least one poster would have felt compelled to smack me with the "well now that's not technically correct..." quote/post.

Originally posted by: Martimus
Most of all, I am just curious about RAID in a SSD setup. Since they are an order of magnitude faster in some ways, I wonder if RAID would hinder them more than help in some applications.

It would take a pretty extraordinary usage pattern for an application to cause the performance of a system to be worse with a raid-0 array of SSDs versus the performance of the system operating with just a single drive.

Even as a Gedanken experiment I can't fathom what manner of an extraordinary usage pattern it would take to wig out a raid-0 setup whilst simultaneously not wreaking havoc on an otherwise single-drive setup. Not trying to be abrasive, just curious if you have a particular example in mind?

Originally posted by: Martimus
This has been something I have wondered about SSDs for quite a while, but I haven't seen any testing that compared these things between RAID systems and non-RAID systems.

http://hothardware.com/News/In...Study-In-Speed-Take-2/

http://www.nextlevelhardware.com/storage/battleship/

http://www.tweaktown.com/revie...s_in_raid_0/index.html

You'd probably be hard pressed to find an SSD review that doesn't contain some elements of raid-0 testing. Anandtech is one such rare exception; they abhor raid-0 testing as much as they abhor LLC.
 
Originally posted by: Idontcare
Originally posted by: Martimus
This has been something I have wondered about SSDs for quite a while, but I haven't seen any testing that compared these things between RAID systems and non-RAID systems.

http://hothardware.com/News/In...Study-In-Speed-Take-2/

http://www.nextlevelhardware.com/storage/battleship/

http://www.tweaktown.com/revie...s_in_raid_0/index.html

You'd probably be hard pressed to find an SSD review that doesn't contain some elements of raid-0 testing. Anandtech is one such rare exception; they abhor raid-0 testing as much as they abhor LLC.

Thanks. I took a look at the TweakTown review that you linked and it answered my questions pretty well.

 
1) Raid-0 is suicide on this drive unless you run the new firmware. Otherwise you are looking at the latency of a 4200rpm drive.

2) The price of the 250GB is probably going to drop soon. The new Summit line is coming out soon and is supposed to be the new high end. The pre-release firmware is already good to go out of the box (and will likely only improve), unlike the Vertex.

3) Sequential performance increases a lot on these drives, but how often are you going to move big files on a slew of 30GB drives? Seriously... this screams synthetic benchmarking.
 
1. I do not think any SSD is worth the price premium right now; I'll stick with my 640GB double-platter drive.
2. If I was gonna spend $200 for an SSD, I'd get two 30GB Vertex drives in raid-0 or maybe a single 60GB.
3. If I was gonna spend $400, I would get a single Intel 80GB SSD.

I would not spend a cent more for personal use no matter how rich I was. If I needed it for some special business project, I'd be using however many X25-E drives in raid as needed.
 
Or you could get a pair of Intel 80GB drives and raid those. Slower peak sequential numbers, but better latency and less risk.
 
Originally posted by: ilkhan
Or you could get a pair of Intel 80GB drives and raid those. Slower peak sequential numbers, but better latency and less risk.

Does your screen name "ilkhan" come from Gromnir Il Khan, by any chance?
 