
OS Drive - SAS or SSD?

imipenem

Senior member
I need some help selecting a primary drive config for the OS/programs on a new Solid Edge/GibbsCAM workstation build (3D drafting, design, CNC programming). I need performance, not redundancy, but I really do not want to use RAID 0.

I have no personal experience with SSDs, but am considering them for this build - I am concerned about TRIM...

My original plan was to use an Adaptec 5405 SAS controller and 15K rpm SAS disks.

Components:
Intel Core i7-920
Gigabyte GA-EX58-UD5
12GB of Corsair XMS3 DDR3 1600
PNY Quadro FX 3800
Windows 7 Pro (x64)
SolidEdge ST2
GibbsCAM 2009
VMware 7 for old PC images
 
I'd go with the SSD. If you have the cash, grab a 160 GB Intel X25-M, or as many as you need.
 
I thought about that, but I am worried about TRIM and performance degradation. Adaptec controllers do not support TRIM, per a conversation with Adaptec support.
 
Adaptec is not known for performance; nevertheless, it's not terrible either. With the onboard write cache enabled, SAS vs. SSD does not matter much.

I personally prefer 2.5" sized SAS drives.
 
Yes, but SAS is full duplex.

If you are buying a 5405 + 15K SAS, you can get good numbers, but it is going to be all about sequential reads/writes. For random workloads, SSDs tend to be better.

Here's my experience with 8x 15K Savvios + an Adaptec 5805 vs. an OCZ Vertex: http://www.servethehome.com/?p=147

Also, the 5405 is limited to 4 drives (sans an expander), so you won't be able to get a ton of spindles to make 15K SAS really fly.
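To put rough numbers on the random-I/O gap between spindles and flash, here's a back-of-the-envelope sketch. The latency figures (~3.5 ms average seek for a 15K drive, ~0.1 ms random access for an SSD) are typical assumptions, not measurements of any specific drive:

```python
# Back-of-the-envelope random-IOPS estimate: 15K SAS disk vs. SSD.

def hdd_random_iops(avg_seek_ms, rpm):
    """One random read costs a seek plus half a rotation on average."""
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

def ssd_random_iops(access_latency_ms):
    """For an SSD the cost is just the (tiny) access latency."""
    return 1000 / access_latency_ms

sas_15k = hdd_random_iops(avg_seek_ms=3.5, rpm=15_000)
ssd = ssd_random_iops(access_latency_ms=0.1)

print(f"15K SAS: ~{sas_15k:.0f} random IOPS per spindle")
print(f"SSD:     ~{ssd:.0f} random IOPS")
print(f"Spindles needed to match one SSD: ~{ssd / sas_15k:.0f}")
```

Which is why "lots of spindles" is the only way 15K SAS competes on random I/O, and a 4-port controller caps that quickly.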
 
There are a couple of new SSDs coming out that don't require TRIM... does the build have to be completed now? Windows 7 does have a great utility that will allow you to dump your configured image down onto any new drive you wish!
 
Which SAS controllers offer the best performance? LSI?

If I decided to go SSD, would I benefit from using a PCIe x8 controller?

How long before performance degrades on current Intel SSDs?
 
Which SAS controllers offer the best performance? LSI?

If I decided to go SSD, would I benefit from using a PCIe x8 controller?

How long before performance degrades on current Intel SSDs?

I think a SAS controller will only benefit an SSD if you are doing sequential reads/writes. As for performance degradation: how long is a piece of string? In other words, how much data is written daily, and will the drive be full, 3/4 full, or 1/2 full? No SSD will degrade to a state worse than a SAS drive. SLC SSDs will not degrade; MLC SSDs (the current drives) require TRIM.

So whether you use an SSD for the OS (without a SAS controller) and SAS for data comes down to whether or not you require Windows 7 and TRIM.
 
The Areca 1680ix is the best-performing champ in the mid-grade class right now. They're coming out with 6 Gbps hosts this year, optimized for SSDs. I'm also told that they will have firmware updates for current controllers that can send TRIM to individual array member disks as a background task. I'll believe it when I see it!

I'm satisfied overall, replacing arrays of as many as 22 15K SAS drives (Fujitsu MBA series) with arrays comprised of 4 to 8 G.Skill Falcon (Indilinx Barefoot) drives.

I have many of these drives and can swap them out and restore machine images in minutes. A minor inconvenience, yes, but with a conventional PC set up with drives mounted in a chassis it would be a major one. Performance does drop, but not noticeably outside of benchmarks. IOPS are still many times greater than with 15K arrays.
 
Which SAS controllers offer the best performance? LSI?

If I decided to go SSD, would I benefit from using a PCIe x8 controller?

How long before performance degrades on current Intel SSDs?

1) Areca is indeed the speed champion as mentioned above.

2) Depends on your needs; an x8 slot is quite difficult to saturate with bandwidth.

3) I have forgotten the statistics from the reviews. I think it's past 70% full that you see a "major" decline. Aren't Intel SSDs supposed to be intelligently managed?
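A rough model of why a fuller drive degrades more: the garbage collector's elbow room is the factory spare area plus whatever user space is still free (and known-free to the controller, e.g. via TRIM). The 7% factory overprovisioning figure below is a typical assumption for consumer drives, not a number from any review:

```python
# Rough model: how much flash the controller has to shuffle data into
# as the drive fills up. Assumes free user space has been trimmed.

def effective_spare_fraction(fill_level, factory_op=0.07):
    """Fraction of total flash available for garbage collection."""
    # Total flash = user capacity * (1 + factory_op); spare space is the
    # factory overprovisioning plus all unwritten user capacity.
    return (factory_op + (1 - fill_level)) / (1 + factory_op)

for fill in (0.5, 0.7, 0.9):
    print(f"{fill:.0%} full -> ~{effective_spare_fraction(fill):.0%} of flash free for GC")
```

The curve is steep at the top end, which fits the "past 70% full" rule of thumb.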
 
I've done a little more reading, and I think I have raised my ignorance level instead of lowering it.

SSD and page file - how does page file access affect SSDs?

I'm now considering going with SAS 2.0 instead of SSDs...

Thoughts?
 
Yes, you can run SATA devices on a SAS host, but not the other way around. That's why it makes little sense to buy a SATA host to control lots of drives.

You add SATA encapsulation overhead, though... SAS and SATA are electrically compatible, but the SATA commands have to be embedded inside SAS "frames", which lowers the SATA drives' performance.
 
I've done a little more reading, and I think I have raised my ignorance level instead of lowering it.

SSD and page file - how does page file access affect SSDs?

I'm now considering going with SAS 2.0 instead of SSDs...

Thoughts?

The page file generates a lot of random writes, which can wear out certain cells or "fill" the blank space. Also, since most page-file writes are 4K at a time, the drive may need to read a larger block, erase it, and rewrite it.
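That read-erase-rewrite cost can be sketched with some quick arithmetic: NAND is written in small pages but erased only in much larger blocks, so a lone 4 KiB write that lands in a full block can force the controller to rewrite the whole block. The 4 KiB page / 512 KiB erase-block geometry is a common assumption, not the spec of any particular drive:

```python
# Worst-case write amplification for a small write into a full erase block.

def rewrite_amplification(write_kib, erase_block_kib=512):
    """Flash bytes actually written per byte the host asked to write."""
    return erase_block_kib / write_kib

print(f"A 4 KiB page-file write can cost {rewrite_amplification(4):.0f}x in flash writes")
```

Real controllers mitigate this with write coalescing and spare blocks, which is exactly what TRIM and overprovisioning buy them room to do.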
 
You add SATA encapsulation overhead, though... SAS and SATA are electrically compatible, but the SATA commands have to be embedded inside SAS "frames", which lowers the SATA drives' performance.

I've compared both, and any hit is invisible; possibly both hosts are at their IOPS limit. In any case, the performance is so much greater than a single drive (or six of them!) on motherboard "dumb hosts" that it's not funny.

I had another box with a single VelociRaptor and got sick of how slow it was, so I took four SSDs and another 1680 and threw them in there. It was instant relief, like walking all day with a splinter in your foot and then having it removed!
 
I've compared both, and any hit is invisible; possibly both hosts are at their IOPS limit. In any case, the performance is so much greater than a single drive (or six of them!) on motherboard "dumb hosts" that it's not funny.

I had another box with a single VelociRaptor and got sick of how slow it was, so I took four SSDs and another 1680 and threw them in there. It was instant relief, like walking all day with a splinter in your foot and then having it removed!

Ah, I see. I did my testing on a SAN: basically 7200 rpm SATA vs. 7200 rpm SAS, as close as we could get. Same brand, same cache.

The SATA disks had a much harder time keeping up with heavy random I/O.
 
Google the Microsoft SSD paper. -THEY- insist you put the swap file on an SSD (and they know what they are talking about).

Frankly, no typical desktop home user is going to exhaust the lifetime of a reasonably full SSD before building or buying a new PC or upgrading the drives. In 5 months, with a fair amount of SSD benching plus normal use, I've used a whopping TWO percent of the drive's lifetime (per the Intel SSD Toolbox).

Put everything on it except movies, photos, and other large, seldom-used files.
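That two-percent-in-five-months figure extrapolates to a comfortably long life. A naive linear projection (real wear depends entirely on the workload, so treat this as a sanity check, not a warranty):

```python
# Linear extrapolation of observed SSD wear to end-of-life.

def projected_lifetime_years(wear_pct, months_elapsed):
    """Years until 100% wear, assuming the observed rate holds."""
    return (months_elapsed / wear_pct) * 100 / 12

print(f"~{projected_lifetime_years(2, 5):.0f} years at that rate")
```

At roughly two decades, the drive will be obsolete long before it wears out.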
 
I've compared both, and any hit is invisible; possibly both hosts are at their IOPS limit. In any case, the performance is so much greater than a single drive (or six of them!) on motherboard "dumb hosts" that it's not funny.

I had another box with a single VelociRaptor and got sick of how slow it was, so I took four SSDs and another 1680 and threw them in there. It was instant relief, like walking all day with a splinter in your foot and then having it removed!

Rubycon - I guess you used RAID5?
 
Rubycon - I guess you used RAID5?

Never - always RAID0.

Important data is kept on redundant arrays backed up hourly to another device. Even with 22 mechanical drives in RAID 0, I never experienced an issue (that was not my fault!).
 