
3x X25-V 40GB - Best Configuration?

So I just ordered 3 of the Kingston badged drives from Amazon and am trying to figure out how I want to configure things. Some possibilities:

3x RAID0 - OS/apps/games
1x OS/apps, 2x RAID0 games
1x OS/apps, 1x some games, 1x more games
 
RAID 0 all 3, and buy a rotational hard drive for periodic image backups. Without TRIM support (you don't get TRIM with RAID) you'd need to wipe the drives to get the speed back, so wipe and reimage every 2 months or so 😛
 
I have a question similar to the OP's. It is not my intention to hijack the thread, but the reasoning behind my problem relates to the OP's.

I just got an X25-V 40 GB from my brother, who bought it by mistake thinking it would fit into his MacBook Air. I want to try reinstalling Windows 7 on it, and use my 300 GB VelociRaptor (currently OS/apps/games) for games and maybe apps. Now I'm not really sure whether it is best to:

1) Only leave the OS on the SSD and put both apps and games on the Velociraptor. That way there would be plenty of spare room on the SSD (Windows 7 taking 15-20 GB), which is good for the SSD performance from what I gather. My reasoning is that apps and games mostly consist of sequential reads, and perhaps they wouldn't be much affected by being on an HDD.

2) Or whether I would see a tangible performance increase putting my apps on the SSD as well, leaving only the games on the VelociRaptor. I would have to be more careful this way, since I use some big apps (Adobe stuff mostly) and would probably reach around 30 GB on a clean OS+apps install, leaving little headroom before I hit the 20% free space rule of thumb.
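As a rough check on that headroom, the arithmetic behind the two options can be sketched like this (the GB figures are the estimates from the post above, not measurements):

```python
# Rough free-space check for the two layouts above.
# Capacity and usage figures are the poster's estimates, not measurements.
def free_space_pct(capacity_gb: float, used_gb: float) -> float:
    """Percent of the drive left free."""
    return 100.0 * (capacity_gb - used_gb) / capacity_gb

ssd_gb = 40  # nominal X25-V capacity

print(free_space_pct(ssd_gb, 18))  # option 1: OS only (~15-20 GB) -> 55.0% free
print(free_space_pct(ssd_gb, 30))  # option 2: OS + apps (~30 GB) -> 25.0% free, close to the 20% rule
```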

Any thoughts or insights appreciated.
 
The main reason I'm thinking about skipping RAID is that while it looks awesome in benchmarks, a lot of people feel there's no real-world difference. Also, without RAID I don't have to deal with booting the OS from an array or with array maintenance. On the other hand, without RAID, sequential writes suffer. I'm not sure; just trying to get some more opinions here.
 
While thinking about this, I remembered Anand mentioning "queue depth" in the context of SSD benchmarks. I did a search on the site for "queue depth." I came across the second post below from a user named GullLars in the comments of one of Anand's articles. Then I did a search for GullLars and came across this comment as well...


Good test, now RAID 😉 by GullLars, 10 days ago
This was a good test, and one I've been waiting on for a while. I'm a bit disappointed a 32GB Indilinx Barefoot drive wasn't included. I have a 30GB Vertex in my laptop that performs better sequentially than these numbers, and has better random performance than the Kingston V 30GB. The price is slightly higher though.
Ref screenshot: http://www.diskusjon.no/index.php?app=c...h_rel_module=post&attach_id=339908 CDM 3.0 + WEI for my laptop.

Now the next thing I hope AnandTech will do regarding SSDs is a comparison of RAID of low-capacity cheap SSDs vs. single high-capacity SSDs. This is something no other recognized tech site has done yet, but enthusiasts have done for years now. Example: http://www.nextlevelhardware.com/storage/battleship/

I'll also mention Nizzen, an enthusiast on a forum I frequent, who set a WR in PCMark Vantage last spring with his 24/7 setup, and is still in the top 5 with the same setup (updated in August with 4GB RAM on the Areca). The key was an Areca 1680ix-12 with a RAID-0 of several (7 I think) OCZ Vertex drives.
ORB result page: http://service.futuremark.com/resultCom...sultId=210815&compareResultType=18
24740 PCMarks, WAY ahead of the highest score in your benchmark lists. The same level of disk performance is possible with an LSI 9211-8i and 8 30-40GB SSDs in RAID-0 for about $1000 (less than 2 256GB SSDs).

Suggested lineup for such an article: RAID-0 of 4 Kingston V 30GB, Intel x25-V, and Indilinx Barefoot 32GB (Vertex?). 2 RAID-0 SF-1200/1500 50GB, Kingston SSDNow V+ 64GB, Indilinx Barefoot 64GB, Intel x25-M 80GB. And single 100/128/160 GB SSDs of various controllers.

Regarding performance degradation in RAID without TRIM, an increased reserved area can help negate the degradation (ref. the IDF whitepaper on spare area). Increasing the spare area to ~20-25% from the default 7% (on most SSDs) will make sure the degradation is not noticeable to users under normal usage models.
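One common way to get that extra spare area is simply to partition less than the full drive; a minimal sketch of the arithmetic, with illustrative figures rather than datasheet values:

```python
# Toy over-provisioning calculation: capacity left unpartitioned acts as
# extra spare area on top of the factory default. Figures are illustrative.
def total_spare_pct(raw_gb: float, partitioned_gb: float,
                    factory_spare_pct: float = 7.0) -> float:
    """Factory spare plus the share of capacity left unpartitioned."""
    unpartitioned_pct = 100.0 * (raw_gb - partitioned_gb) / raw_gb
    return factory_spare_pct + unpartitioned_pct

# Partitioning 32 GB of a 40 GB drive adds 20% spare on top of the ~7% default:
print(total_spare_pct(40, 32))  # -> 27.0, within the ~20-25%+ range suggested above
```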


Additional note on SSD RAID, IOPS, and QD by GullLars, 31 days ago
Just thought I'd add a link to a couple of graphs I made of IOPS scaling as a function of queue depth for 1-4 x25-V in RAID 0 from ICH10R, compared to x25-M 80GB and 160GB, and 2 x25-M 80GB in RAID 0. These are in the same price range, and the graphs will show why I think Anand's reviews don't show the whole truth when there is no test beyond QD 8.
link: http://www.diskusjon.no/index.php?app=c...h_rel_module=post&attach_id=348638
The tests were done by 3 users at a forum I frequent; the username is in front of the setup that was benched.

The IOmeter config run was: 1GB test file, 30 sec run, 2 sec ramp. Block sizes 0.5KB, 4KB, 16KB, 64KB. Queue depths 1-128, 2^n stepping. This is a small part of a project from a month back mapping SSD and SSD RAID scaling by block size and queue depth (block sizes 0.5KB-64KB with 2^n stepping, QD 1-128 with 2^n stepping).
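The full sweep from that mapping project (2^n stepping on both axes) can be reconstructed like this; this only regenerates the parameter grid, not the original IOmeter scripts:

```python
# Reconstructed parameter grid for the larger mapping project:
# block sizes 0.5KB-64KB and queue depths 1-128, both with 2^n stepping.
block_sizes_kb = [0.5 * 2**n for n in range(8)]  # 0.5, 1, 2, ..., 64 KB
queue_depths = [2**n for n in range(8)]          # 1, 2, 4, ..., 128

grid = [(bs, qd) for bs in block_sizes_kb for qd in queue_depths]
print(len(grid))  # 64 (block size, queue depth) combinations per drive config
```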

ATTO is also nice to show scaling of sequential performance by block size (and possibly queue depth).
 
Why in the world would you get more than one of those for the same machine?
As only 5-channel drives, they are each significantly slower than the 80GB or 160GB models.
 
It seems to me that there might be two possible configurations here each providing a different kind of parallelism. And maybe it isn't obvious which kind of parallelism would be more advantageous.

For the first configuration, consider two independent (no RAID involved) 5-channel drives. For the second configuration, consider a single 10-channel drive.

Is it possible that with the first configuration (two 5-channel drives) you benefit from:
A) having separate data paths leading *to* each 5-channel drive, for a total of two data paths leading to the drives, and/or from
B) having two independent controllers?

Furthermore, which configuration is better might depend on the details of the workload, such as "queue depth."
 
Anand failed to mention that TRIM is now available for RAID 0.
If you're referring to the latest Intel RST 9.6 drivers, no, they do not enable TRIM for RAID arrays. What they do is allow pass-through to single drives not in an array, i.e., with a single SSD plus RAID hard drives, TRIM now makes it to the SSD.
 
And the comment from GullLars seems spot on:

GullLars on Tuesday, March 30, 2010
This was a great test, and one that I've been nagging about for a few months now. I'm a bit disappointed you stopped at 2 x25-Vs and didn't do 3 and 4 as well; the scaling would have blown your chart away, while still coming in at a price below a lot of the other entries.

I would also love it if you could do a test of 1-4 Kingston V+ G2 64GB in RAID-0, as it seems to be the sweet spot for sequentially oriented "value RAID".

I feel the need to comment on your remark under the random write IOmeter screenshot:
"Random read performance didn't improve all that much for some reason. We're bottlenecked somewhere else obviously."
YES, you are bottlenecked, and ALL of the drives you have tested to date that support NCQ have been as well. You test at queue depth = 3, which for random reads will utilize at MOST 3 FLASH CHANNELS. I almost feel the need to write this section in ALL CAPS, but will refrain from doing so so as not to be ignored.
The SSDs in your chart have anywhere between 4 and 16 flash channels. Indilinx Barefoot has 4, x25-V has 5, C300 has 8, x25-M has 10, and SF-1500 has 16. And I repeat: YOU ONLY TEST A FRACTION OF THESE, so there is no need to be confused when there is no scaling from adding more channels through RAID. 2 x25-Vs in RAID-0 have the same number of channels as an x25-M, but twice the (small-block parallel, large-block sequential) controller power.
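GullLars's argument can be captured in a toy model (my construction, not his): if each outstanding random read keeps at most one flash channel busy, busy channels are capped by the queue depth, so every controller with more channels than the test's QD hits the same ceiling.

```python
# Toy model of the bottleneck argument: one outstanding random read
# occupies at most one flash channel, so busy channels <= queue depth.
def busy_channels(queue_depth: int, channels: int) -> int:
    return min(queue_depth, channels)

test_qd = 3  # the review's queue depth
drives = {"Indilinx Barefoot": 4, "x25-V": 5, "C300": 8,
          "x25-M": 10, "2x x25-V RAID-0": 10, "SF-1500": 16}

for name, ch in drives.items():
    # In this model, every drive is capped at 3 busy channels at QD 3,
    # which is why extra channels from RAID show no random-read scaling.
    print(name, busy_channels(test_qd, ch))
```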
 
But I've seen no discussion of the possible merits of keeping the drives independent. At the very least, you get to keep TRIM. And maybe there is an advantage to having 2 independent data paths leading to the drives? Or to having 2 independent controllers?

My goal is to minimize the time it takes to: boot Windows 7, allow the anti-virus to scan startup files and download updates, and load several applications into RAM.

Question: Would this be faster with:
A) One Intel 160GB G2 SSD (by the way, a 10-channel controller, right?)
B) Two independent (i.e., no RAID involved) Intel 80GB G2 SSDs -- One for Win 7 and the other for anti-virus and applications? (by the way, each SSD has a 10-channel controller, for a total of 20 channels?)
 