Advantages of a bigger SSD


bradley

Diamond Member
Jan 9, 2000
3,671
2
81
This graph pretty much confirms that even the same controller with different firmware can handle non-theoretical :) GC and TRIM differently. In fact, it's a more illuminating real-world benchmark than most of what passes for data in reviews.

[Image: iometer.png — IOMeter benchmark graph]



Certainly an internal SSD processor can throw a lot more channels at the problem, with far more efficiency and less overhead, than an external RAID 0 ever could. That's the bottom line.

And in R0, while 4K writes go up, access time (especially on small random reads) will as well. Everyone always mentions 4K, but access time is also what makes SSDs feel snappy. R0 only gives solid benefits on sequentials, which don't matter as much as users think.
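To put rough numbers on the access-time point (a quick illustration using Little's Law; the IOPS figures are placeholders, not benchmark results):

```python
# Illustrative only: relationship between IOPS, queue depth and latency (Little's Law).
# The IOPS numbers below are placeholders, not measurements from any drive.

def avg_latency_ms(iops: float, queue_depth: int) -> float:
    """Average time a request spends in the system: latency = QD / IOPS."""
    return queue_depth / iops * 1000.0

single_drive_qd1_iops = 8_000      # hypothetical 4K random read IOPS at QD1
raid0_qd32_iops = 160_000          # hypothetical aggregate 4K IOPS at QD32

print(f"QD1 access time, single drive : {avg_latency_ms(single_drive_qd1_iops, 1):.3f} ms")
print(f"QD32 per-request latency, R0  : {avg_latency_ms(raid0_qd32_iops, 32):.3f} ms")
# At QD1 the array can't respond faster than one member drive's access time,
# which is why RAID 0 rarely makes a desktop feel "snappier".
```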

For light to moderate users, R0 is pretty much a real-world bust versus a single well-made drive. Any problem R0 solves for power users is soon made obsolete by a better NAND/controller combination.
 

moriz

Member
Mar 11, 2009
196
0
0
Only the last part of that sentence is correct. 4K's, specifically the writes, do in fact go up as you add channels.

As a result, multitasking is better with more drives added. I wouldn't even consider using 6 SSDs in R0 if that were not the case. You don't even need to run a benchmark to see and feel it.

And that's not even taking into account the cumulative effect of RAM caching as each new drive is added. Seeing is believing, and many will never have a workload heavy enough to see it anyway.. so to each his own in that respect.

Only if you manage to set a stripe size low enough. To get any improvement in 4K performance, you need to set a stripe size of less than 4K. Most RAID controllers can't do that. In that case, 4K performance remains unchanged, because you'll only be reading from and writing to one drive at a time.

Remember, RAID 0 doesn't function on the bit level.

EDIT: I just dug up some of my own benchmarks on my RAID 0 array. 4K read and write didn't change from a single drive; however, 4K QD32 did actually double. In other words, single-threaded 4K performance remained the same, while massively multithreaded 4K performance definitely does see a benefit. So some applications do benefit after all, but I think it's unlikely that a typical desktop will really see any appreciable gains.
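To illustrate the stripe-size point (a toy sketch; the 64K stripe and two-drive array are arbitrary examples, not my actual setup):

```python
# Toy model of RAID 0 striping: which member drive serves a 4K request?
# Stripe size and drive count are arbitrary examples, not recommendations.
import random

STRIPE_SIZE = 64 * 1024    # 64 KiB stripe, a common controller default
NUM_DRIVES = 2

def drive_for_offset(byte_offset: int) -> int:
    """RAID 0 lays whole stripes round-robin across member drives."""
    stripe_index = byte_offset // STRIPE_SIZE
    return stripe_index % NUM_DRIVES

# A single 4K request (QD1) falls entirely inside one stripe -> one drive serves it.
print("QD1 request at offset 0 is served by drive", drive_for_offset(0))

# 32 outstanding 4K requests at random 4K-aligned offsets (QD32) spread across members,
# which is why high-queue-depth 4K numbers scale while QD1 numbers do not.
random.seed(0)
offsets = [random.randrange(0, 1 << 30) & ~0xFFF for _ in range(32)]
per_drive = {d: 0 for d in range(NUM_DRIVES)}
for off in offsets:
    per_drive[drive_for_offset(off)] += 1
print("QD32 requests per drive:", per_drive)
```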
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
See my responses below, following each quoted point.

read the links I posted
Umm.. I've read more than my fair share about these controllers, and I have an inside track that most will never have. I'm not saying in the least that I know it all.. just more than most.


What you call GC is not GC at all. Earlier you described what happens with neither TRIM nor GC, and in this post you are calling the "block recycling path" GC, incorrectly. You are simply confusing your terms.

LOL.. actually it's the other way around. If you reread the link you supplied and do a bit more research on SandForce controllers (and yes, since I beta-test these controllers before you or the reviewers even get your hands on them, I certainly have more than a clue by now as to how they work and how they test out with various data streams).. you'll eventually realize that Anand has simply used a generalization to describe the process, or "path", that the controller uses during the recovery process.

So, "block recycling path" is not actually a specific algorithm.. it's the overview of the various algorithms used during the recovery process. The more correct nomenclature will be "recycling engine" as that's the actual portion of the processor(which is what the controller is by nature) and the physical space used to do all things associated with recovery of dirty blocks, such as idle GC, on-the-fly GC(which, make no mistake about it.. is STILL GC), write combining, and partial block consolidation.

And it makes absolutely NO difference whatsoever whether this is done at idle time (lazy, deferred), in real time (aggressive, on-the-fly), or slightly ahead of time, recycling blocks/space for the next anticipated write load (which is based on counters/algorithms for GB over time). This is where TRIM shows the most benefit for the SF controller, since it reduces processor overhead and allows the drive to wear-level more efficiently as a result. However, TRIM is NOT required by any stretch to recover at nearly the same speed. Unlike other controllers on the market.. you cannot force-TRIM a SandForce drive into immediately recovering. Even Anand is finally starting to acknowledge that fact, although his testing over the years has produced some contradictions. Look at his more recent testing with the Intel 520.
http://www.anandtech.com/show/5508/...cherryville-brings-reliability-to-sandforce/7

There's also an enterprise testing review (one in which he overprovisioned the drive as well) that he did (at least I think it was his review; damned if I can find it right now), where he tested and commented on the non-recovery after TRIMming the drive as well.

So, again.. you cannot force a SandForce drive to recover by TRIMming it. The only way to force it into aggressive GC mode (also called on-the-fly recovery) is to completely fill the drive several times over and wait until it falters and loses speed. Then it can be seen to regain speed in the very next test.. TRIM or no TRIM, simply because the drive's counters and algorithms are coded to do so.
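As a rough mental model of the lazy/idle vs. aggressive/on-the-fly distinction (a toy sketch only; this is generic GC logic, not SandForce's actual firmware):

```python
# Toy sketch of the two GC triggers: lazy/idle-time vs. aggressive/on-the-fly.
# Purely illustrative; real firmware (SandForce included) is far more involved.
from dataclasses import dataclass

@dataclass
class Block:
    valid_pages: int
    invalid_pages: int

def pick_victim(blocks):
    """Greedy policy: reclaim the block holding the most stale (invalid) pages."""
    return max(blocks, key=lambda b: b.invalid_pages)

def recycle(blocks, free_blocks, idle, low_watermark=4):
    """Run GC lazily while idle, or aggressively once free blocks run low."""
    target = low_watermark * 2 if idle else low_watermark
    while free_blocks < target:
        victim = pick_victim(blocks)
        if victim.invalid_pages == 0:        # nothing left worth reclaiming
            break
        # Valid pages get copied to fresh blocks (write amplification), then erase.
        victim.valid_pages, victim.invalid_pages = 0, 0
        free_blocks += 1
    return free_blocks

# Example: comfortable free space -> idle GC tidies up ahead of time;
# under pressure -> on-the-fly GC reclaims just enough to keep writing.
pool_a = [Block(10, 54), Block(40, 24), Block(60, 4)]
pool_b = [Block(10, 54), Block(40, 24), Block(60, 4)]
print("free blocks after idle GC      :", recycle(pool_a, free_blocks=5, idle=True))
print("free blocks after on-the-fly GC:", recycle(pool_b, free_blocks=1, idle=False))
```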


Additional channels are significantly more efficient than RAID 0 in improving your speed and handling extra writes. You cannot add more channels than the controller already has at its maximum.

You can, however, add more channels in parallel when you use drives that have the maximum number of channels and interleaved packages of NAND.. in RAID 0. Otherwise random IOPS would never rise.

RAID0 could increase your random performance... but only IF the RAID controller you are using was properly optimized to do so.

LOL.. obviously you haven't had much experience with RAIDs then. Even the old ICH10R can hit almost 150K IOPS with 4K writes in IOMeter. RAID cards get even crazier. I've said it before, and I'll say it again: RAID offers the user a much bigger pie to split up when a lot of simultaneous disk activity is required (as in heavier multitasking), particularly writes (random or sequential), which are the weakest link on any SSD made these days. RAID 0 raises that bar far beyond what most users will ever be able to use, is all. It's not just about benchmarks or boot races. :)
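To put that 150K-IOPS figure in perspective (back-of-the-envelope only; the task split is a made-up example):

```python
# Back-of-the-envelope for the "bigger pie" argument. Figures are illustrative
# placeholders based on the ~150K IOPS ICH10R number quoted above, not measurements.

iops = 150_000                    # aggregate 4K random write IOPS
io_size_bytes = 4 * 1024          # 4 KiB per I/O
throughput_mb_s = iops * io_size_bytes / 1_000_000
print(f"Aggregate random-write throughput: ~{throughput_mb_s:.0f} MB/s")

concurrent_tasks = 4              # e.g. several apps hammering the array at once
print(f"Even split per task: ~{iops // concurrent_tasks:,} IOPS each")
```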
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
Only if you manage to set a stripe size low enough. To get any improvement in 4K performance, you need to set a stripe size of less than 4K. Most RAID controllers can't do that. In that case, 4K performance remains unchanged, because you'll only be reading from and writing to one drive at a time.

Remember, RAID 0 doesn't function on the bit level.

EDIT: I just dug up some of my own benchmarks on my RAID 0 array. 4K read and write didn't change from a single drive; however, 4K QD32 did actually double. In other words, single-threaded 4K performance remained the same, while massively multithreaded 4K performance definitely does see a benefit. So some applications do benefit after all, but I think it's unlikely that a typical desktop will really see any appreciable gains.


If you saw single-drive speeds, then something was not set up right with your RAID. I'm not sure what SATA or RAID controller you used.. but I test these specific controllers more than most. I've even bought multiple controllers from various manufacturers and tested them as well. ALL will increase in random performance with R0. Your testing is faulty.. or the hardware was not capable. Simple as that. I have folders full of benchmark, GC/TRIM recovery, beta-firmware, and sleep testing that are larger than some people's video or picture archives. So, I certainly have more than a clue as to what I'm speaking about here.

Although I couldn't really care less what a benchmark full of numbers means.. here is a quick copy/paste to peruse for a better understanding of where your setup was at fault.


And @bradley: those graphs fly in the face of the fundamental inner workings of any SandForce-based drive. If a SandForce 2281-based drive is degrading that severely?.. add huge amounts of salt, because they do not degrade to that extent unless the testing protocol is at fault. I'm curious about the link for that data. Thanks.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Telling me to read
Done arguing this. I am right, you are wrong; I provided multiple quotes and links, and you insist I am wrong. Fine, we disagree on the terms, and people can believe whom they choose.

Additional channels are significantly more efficient than RAID 0 in improving your speed and handling extra writes.
You cannot add more channels than the controller already has at its maximum.
Context is important: I was comparing different controllers with different numbers of channels. So you are pretty much arguing against imagined arguments you made up and assigned to me.

Even the old ICH10R can hit almost 150k IOPS with 4k writes in IOMeter.
Intel makes the best motherboard-integrated SATA/RAID controllers by a huge margin, and the ICH10R is fairly modern and powerful.
Show me the integrated/cheap solutions from AMD, JMicron, and their ilk that do well there.
Again you imagine arguments I did not make, combine them with facts that do not exist, and top it off with an ad hominem telling me how ignorant I am...
Oh, and for god's sake, stop saying "lol".
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
ok.. then here's another for you. BWAHAHAHA

Seriously though, you are right about one thing: most people will figure it out in the end once they dig through the assumptions, ego, and useless rhetoric. :hmm:
 

groberts101

Golden Member
Mar 17, 2011
1,390
0
0
I'd also like to add a quick apology to the OP of this thread for going on about things which may, or may not, be relevant to your original question.

If you consider that the IC configurations of larger drives bring about much stronger multitasking performance, and that faster storage is better leveraged by increasing sequential R/W speeds (which usually requires RAID)?.. then larger SSDs do in fact give you greater performance.

A very good peek into this can be had by looking at the AnandTech heavy storage benches and the disk busy times associated with the various capacity points of the same model of SSD. I just happened to have this page still up, so we'll go with this one for now.
http://www.anandtech.com/show/5508/...cherryville-brings-reliability-to-sandforce/5
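As a rough way to see the capacity effect (a toy model; the channel and die counts are invented for illustration, not specs of any particular drive):

```python
# Toy model of why larger capacities can perform better: more NAND dies per channel
# means more interleaving. Channel/die counts are invented examples, not real specs.

def parallel_program_ops(channels: int, dies_per_channel: int, outstanding_writes: int) -> int:
    """The controller can only keep as many dies busy as it has work queued for."""
    return min(channels * dies_per_channel, outstanding_writes)

for capacity_gb, dies_per_channel in [(120, 1), (240, 2), (480, 4)]:
    busy = parallel_program_ops(channels=8,
                                dies_per_channel=dies_per_channel,
                                outstanding_writes=32)
    print(f"{capacity_gb} GB class: up to {busy} NAND dies programming in parallel")
```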

Hope that helps, and again, sorry for the OT stuff. :)
 

moriz

Member
Mar 11, 2009
196
0
0
If you saw single-drive speeds, then something was not set up right with your RAID. I'm not sure what SATA or RAID controller you used.. but I test these specific controllers more than most. I've even bought multiple controllers from various manufacturers and tested them as well. ALL will increase in random performance with R0. Your testing is faulty.. or the hardware was not capable. Simple as that. I have folders full of benchmark, GC/TRIM recovery, beta-firmware, and sleep testing that are larger than some people's video or picture archives. So, I certainly have more than a clue as to what I'm speaking about here.

Although I couldn't really care less what a benchmark full of numbers means.. here is a quick copy/paste to peruse for a better understanding of where your setup was at fault.


And @bradley: those graphs fly in the face of the fundamental inner workings of any SandForce-based drive. If a SandForce 2281-based drive is degrading that severely?.. add huge amounts of salt, because they do not degrade to that extent unless the testing protocol is at fault. I'm curious about the link for that data. Thanks.

Interesting. What kind of drives are you using? And while you are at it, please explain how a stripe size >4K has any impact on single-queue-depth 4K transfers? It looks like you have some kind of RAM caching going on there, or somehow set a stripe size of 2K, which, as far as I know, the Intel RAID controller does not allow.

As for my setup, it is very similar to yours, I'd imagine: P67 chipset, 2600K @ 4.6 GHz, drivers up to date. The stripe size I eventually settled on was 32K, though the controller defaults to 16K for Intel SSDs for some reason.
 

Burner27

Diamond Member
Jul 18, 2001
4,452
50
101
If it wasn't stated already, you also need the 11.0 OROM on your motherboard to support TRIM for RAID with the 11.5 drivers.
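For anyone who wants to sanity-check the OS side, here's a minimal check (Windows-only; it only shows whether Windows issues TRIM at all, not whether the RAID driver/OROM passes it through to the drives):

```python
# Minimal Windows-only check of whether the OS issues TRIM (delete notifications).
# NOTE: this does NOT prove the RAID driver/OROM actually forwards TRIM to the SSDs;
# that still depends on the 11.5+ RST driver and 11.0+ OROM mentioned above.
import subprocess

result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())   # "DisableDeleteNotify = 0" means TRIM is being issued
```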
 

bryanW1995

Lifer
May 22, 2007
11,144
32
91
Interesting. What kind of drives are you using? And while you are at it, please explain how a stripe size >4K has any impact on single-queue-depth 4K transfers? It looks like you have some kind of RAM caching going on there, or somehow set a stripe size of 2K, which, as far as I know, the Intel RAID controller does not allow.

As for my setup, it is very similar to yours, I'd imagine: P67 chipset, 2600K @ 4.6 GHz, drivers up to date. The stripe size I eventually settled on was 32K, though the controller defaults to 16K for Intel SSDs for some reason.

He doesn't use an Intel controller.
 

moriz

Member
Mar 11, 2009
196
0
0
The driver being used in his screenshot is iaStor, which is the Intel storage controller driver, if I remember correctly.
 

Burner27

Diamond Member
Jul 18, 2001
4,452
50
101
So the 11.5 drivers won't work on my ICH9R motherboard from 2008?

I didn't say that. You can install the driver if you want. I was pointing out that there are two components necessary to have TRIM available on an Intel controller when you are in RAID mode.


Your ICH9R controller will NOT be updated to the 11.0 OROM, because it only applies to x67, Z68, and X79 motherboards.