Intel Z68 and SSD Caching

JimOstell

Junior Member
Jul 2, 2011
2
0
0
I am starting to build out a Z68 motherboard Windows 7 system. I have a Vertex 3 240 GB SSD and a pair of Western Digital Caviar Black 1 TB drives that I had planned to put into RAID 1 with each other. The Vertex will be the boot drive, and the mirrored WD HDDs will be my bulk data drive. Since I will mostly be reading from the WD HDDs, I figured RAID 1 would give me the safety while still increasing my read speed.

I know the Z68 supports SSD caching of an HDD, so here's my question. Would it make any sense to partition the SSD to create a virtual 40 GB SSD to use as a cache for the RAIDed WD HDDs, leaving the rest of the SSD to act as an un-RAIDed boot drive? I hadn't seriously planned to do this because I'm not sure it's even possible, and even if it is, it might be too much complication for too little gain in performance. On the KISS principle, my default is to just do the simple thing described in the first paragraph. But if anyone already knows whether the partitioning trick is even possible and knows it works well, I'd like to hear about it.

Thanks for any comments.
 

JimOstell

Junior Member
Jul 2, 2011
2
0
0
I just found the thread about using an SSD partition for Smart Response Technology on the Motherboard Forum. I had assumed this discussion would be under Memory and Storage.

Sorry for posting here before discovering the discussion was already ongoing in a different forum. I'll go there.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,632
2,027
126
I am starting to build out a Z68 motherboard Windows 7 system. I have a Vertex 3 240 GB SSD and a pair of Western Digital Caviar Black 1 TB drives that I had planned to put into RAID 1 with each other. The Vertex will be the boot drive, and the mirrored WD HDDs will be my bulk data drive. Since I will mostly be reading from the WD HDDs, I figured RAID 1 would give me the safety while still increasing my read speed.

I know the Z68 supports SSD caching of an HDD, so here's my question. Would it make any sense to partition the SSD to create a virtual 40 GB SSD to use as a cache for the RAIDed WD HDDs, leaving the rest of the SSD to act as an un-RAIDed boot drive? I hadn't seriously planned to do this because I'm not sure it's even possible, and even if it is, it might be too much complication for too little gain in performance. On the KISS principle, my default is to just do the simple thing described in the first paragraph. But if anyone already knows whether the partitioning trick is even possible and knows it works well, I'd like to hear about it.

Thanks for any comments.

I just successfully implemented ISRT (the Z68 SSD caching you speak of), so I will comment. In my own case, I chose to test and overclock the system under VISTA-64 SP2 before reconfiguring under Windows 7-64 SP1, and there seem to be some minor caveats which Intel says -- vaguely -- that they are addressing. In your case, TRIM is implemented natively within Windows 7, so that caveat does not apply. [See the post I just made five minutes ago.]

My best understanding at this point -- and I urge you to research further and confirm -- is that you cannot cache a RAID configuration with an SSD. The SSD caching is itself a RAID0 configuration of sorts, although you do not risk losing your data if the SSD fails, and you should not lose any data if your system crashes, provided that you implement SSD caching in "Enhanced" mode as opposed to "Maximized." Again, seek further confirmation, but I distinctly remember reading -- on the Intel web-site and in motherboard reviews that put ISRT through its paces and give a "how-to" summary for it -- that no, you cannot cache another RAID configuration. At least not at this time.

Further, you cannot just select any cache size between 18GB and 64GB. You only have a choice of the minimum (18GB) or the maximum (64GB). At least, that is the way I see it with the existing ISRT user interface, and it shouldn't vary between VISTA-64 and Windows-7-64.
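Whatever the UI actually offers, the arithmetic of carving a cache out of a larger SSD is simple. A toy sketch (my own, not Intel's tooling -- the function name and the 18.6/64 GB bounds from Intel's requirements PDF are the only inputs) for working out what's left over for a data volume:

```python
# Toy planner (my own sketch, not Intel's RST software): pick a cache size
# within Intel's documented 18.6 GB minimum and 64 GB maximum; whatever
# remains on the SSD becomes unallocated space you can partition later.
MIN_GB, MAX_GB = 18.6, 64.0

def srt_cache_plan(ssd_gb, cache_gb):
    """Return (cache_gb, leftover_gb), or raise if the size is out of range."""
    if not (MIN_GB <= cache_gb <= MAX_GB):
        raise ValueError(f"cache must be {MIN_GB}-{MAX_GB} GB, got {cache_gb}")
    if cache_gb > ssd_gb:
        raise ValueError("cache larger than the SSD itself")
    return cache_gb, ssd_gb - cache_gb

# A 240 GB Vertex 3 with the maximum cache leaves 176 GB for a data volume.
print(srt_cache_plan(240, 64))
```

On the OP's 240 GB Vertex 3, even the maximum 64 GB cache would leave roughly 176 GB unallocated, which is the scenario discussed below.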

On the other hand -- yes -- you will then have "unallocated space" on the SSD. Once the caching is successfully implemented, you can go to the disk management feature of "Computer Management" or the "Manage" option under "Computer," initialize the disk space (partitioning it) and then let Windows format it as NTFS.

Both the "data storage" volume (the partitioned disk that gets a letter/label assigned) and the caching volume are part of a RAID0 configuration. A bit confusing, but it is what it is.
 
Last edited:

BonzaiDuck

Lifer
Jun 30, 2004
16,632
2,027
126
THIS MESSAGE COULD BE MORE APPROPRIATE AS AN "EDIT" TO MY LAST POST, but it is significant enough that I'd worry it would be missed there.

I STAND CORRECTED.

You CAN "accelerate" a RAID volume of multiple disks, as is explicit in the "Requirements" section of Intel's own Smart Response Technology PDF posted on their web-site:

System Requirements:
For a system to support Intel Smart Response Technology it must have the following:
- Intel® Z68 Express Chipset-based desktop board
- Intel® Core™ Processor in the LGA 1155 package
- System BIOS with SATA mode set to RAID
- Intel Rapid Storage Technology software 10.5 version release or later
- Single Hard Disk Drive (HDD) or multiple HDD's in a single RAID volume
- Solid State Drive (SSD) with a minimum capacity of 18.6GB
- Operating system: Microsoft Windows* Vista 32-bit Edition and 64-bit Edition, Microsoft Windows* 7 32-bit Edition and 64-bit Edition

The 5th item pretty much settles that issue. Even so, people have "had trouble" implementing ISRT with -- among other things -- a RAID0. I've also seen that this has multiple causes, the most obvious of which is an attempt to incorporate drives on another, non-Intel controller (e.g., Marvell). The drives absolutely must be connected to the same INTEL controller as the SSD intended for caching.

The only hands-on examples I've seen so far involve single drives. Keep in mind that moving away from simplicity and toward complexity to push the boundaries of some new technological innovation invites further risk. I say that as a rule of thumb; rules of thumb should be guidelines, not precepts [and do not let me cross over into the discussion of government and politics -- it doesn't belong here].

THAT BEING SAID: it would seem that, if you could get it to work -- and I'd see fewer problems with it this way -- you start by accelerating a single HDD (or a single volume, if you intend to cache an array of HDDs). And you can only cache a single array or HDD per SSD.

So while the SSD caching improves speed like RAID0, it doesn't increase the possibility of failure (in "Enhanced" mode) over the single-drive budget solution. And if you can get the SSD caching to work with a RAID1, you get the speed improvement of caching with no loss in reliability -- none of the added risk of a RAID0 speed enhancement -- while the RAID1 assures reliability and data integrity. That in turn could save some electrical power by making a RAID5 of three or more drives seem unnecessary.
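The risk comparison above can be made concrete with back-of-envelope probability. This is my own toy model, not anything from Intel or this thread: it assumes each drive fails independently with some annual probability (the 5% figure is purely illustrative) and ignores rebuild windows, which real RAID-1 reliability math must account for.

```python
# Back-of-envelope array-failure odds under an assumed, illustrative 5%
# annual per-drive failure rate, with independent failures and no
# rebuild-window term. Toy model only.
def raid0_fail(p, n=2):
    # RAID 0 loses the array if ANY member fails
    return 1 - (1 - p) ** n

def raid1_fail(p, n=2):
    # RAID 1 loses the array only if ALL members fail (crude approximation)
    return p ** n

p = 0.05  # assumed annual failure probability per HDD
print(f"single drive: {p:.4f}")
print(f"RAID 0 x2:    {raid0_fail(p):.4f}")  # ~0.0975, riskier than one drive
print(f"RAID 1 x2:    {raid1_fail(p):.4f}")  # 0.0025, far safer
```

Even in this crude form it shows why "caching like RAID0 without RAID0's risk" plus a RAID1 underneath is attractive: the cache adds speed while the mirror drives the loss probability down, not up.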

That's my word on the matter so far.
 

Dylie

Junior Member
Jul 23, 2011
5
0
0
OK, so I was in the same boat, as I just recently ordered the parts for my Z68 build.

With the build I wanted to transition over to an SSD first and foremost. I also had to get plenty of storage to back up my overly large media collection, which is currently housed on three 2 TB external drives.

I wanted to back this data up as well, since the externals are going to other members of my family so they can enjoy the media too. I figured the easiest and best way to do this would be a RAID 5 with three Seagate 3 TB drives. Well, I hadn't exactly kept up to date on the RAID side of things and had kind of assumed that RAID 5s were cool. Upon further research I decided that RAID 5 was just as reliable as a single drive, if not less reliable, so I nixed the idea. I scoffed at RAID 1 because it's only media; hell, I'd rather just pick up two of the newer Seagate 3 TB drives when they come out, use those for day-to-day, and store the older drives as backups.

But back to the point: in this whole mess I found out about SSD caching, so I decided it was a must-have for my RAID 5, and at that point I had it in my head that I would make two SSD-cached RAID 5s because it would rule. I quickly realized that only one would even be possible, and then there are several issues that arise even then!

Clearly the RAID has to be on the same SATA chip as the SSD, preferably the Intel SATA ports if possible. If you are trying to do all of this on SATA 6.0 Gbps like I wanted to, you will quickly find that no motherboard offers more than two Intel SATA 6.0 Gbps ports. OK, so that's why I assumed all the benchies I found for SSD caching were with single drives. Then I stumbled on the Gigabyte motherboard that has the Intel Larson Creek SSD strapped to the board. Since that SSD is a SATA 2 drive connected to a SATA 2 port, I figured I must have been incorrect in my two-port, two-drive reasoning.

In my mind the extra bandwidth of SATA 3 could be put to use if there were a 9 TB RAID with an SSD caching the data. Lots of data + SSD speed = high bandwidth strain? Even though a single HDD probably can't max out even SATA 1, doing it all on SATA 3 seemed much more preferable to me than the Gigabyte board. Looking back now, though, if you really want to do SSD caching with a RAID, it may be THE board to have.

Moving on, I convinced myself that I would have to try it on SATA 2 if I was going to try at all. I did some more research and found that the caching only really seems to help performance when used on a boot drive. From what it looked like, this is because the caching happens in real time and is not some sort of accelerating road map of the drive. What I mean is: if I install a program, it will take more or less just as long as it would have without the drive being SSD-cached, and if I open the program right after install, it will open at the same speed -- but the second time around it will open as fast as from an SSD. When used on a boot drive you can see some benefit from the SSD caching in that if you consistently do the same things, they will be cached and speedy, but if you are all over the place you won't see any real performance increase. Obviously this fact rendered the SSD caching for my theorized media RAID absolutely pointless. UNLESS MAYBE I was editing one of those videos, which not only do I not do a lot of, but it may also only help the second time around, haha.
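That "second time around" behavior is essentially a cold-miss/warm-hit read cache. A rough Python sketch of the effect (my own toy model -- the latencies and the populate-on-first-read policy are assumptions for illustration, not Intel's documented algorithm):

```python
# Toy model of a populate-on-read cache: the first pass over a working set
# pays HDD latency on every block; a repeat pass hits the SSD copy.
class ToyReadCache:
    HDD_MS, SSD_MS = 12.0, 0.2  # assumed per-block latencies, illustrative

    def __init__(self):
        self.cached = set()

    def read(self, block):
        if block in self.cached:
            return self.SSD_MS      # warm hit: served at SSD speed
        self.cached.add(block)      # copy into the cache on first touch
        return self.HDD_MS          # cold miss: served at HDD speed

cache = ToyReadCache()
first  = sum(cache.read(b) for b in range(100))  # cold launch of an app
second = sum(cache.read(b) for b in range(100))  # same app, relaunched
print(round(first), round(second))  # 1200 20
```

The first launch is no faster than the bare HDD; only the repeat is SSD-fast, which is exactly why the cache pays off for a boot drive's repetitive access pattern and not for a once-through media archive.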

My suggestion is to either completely ignore the tech, because it's a serious stop-gap solution, or, if you really decide you NEED this iffy tech, then it's probably a good idea to go with the Gigabyte board. Honestly, if a board that has the damn SSD strapped to it can't perform well, then I have no reason to believe any other setup would. And honestly I am finding it hard to see where this solution is really very useful, since a 240GB Wildfire has roughly 400GB of space, which should be enough to install a plethora of apps, and is 2-3 times faster than the SSD-cached drive even when the cache is working at top speed. MAYBE I could see it for some sort of VM server wherein the data is often being reused, since all the VMs can access programs on it, but that doesn't even really make a whole lot of sense, because it's iffy whether VMs even do that, lol, and also whether the cache wouldn't be getting constantly reset by the multitude of users. Although if there were only the same 4-5 apps available on the VMs, then it might not be a problem, since the app data never gets flushed: all the VMs see a second-open scenario constantly.
 

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
Wouldn't caching any mechanical drive holding primarily bulk data with an SSD be a waste? Certainly an array of mechanical drives. SSDs are WAY faster than spindles when it comes to small random reads and writes, but sequential speeds on spindles are pretty fast. Plus, doesn't Intel's Z68 caching ignore large files anyway? Seems like you'd be wasting the potential of the SSD.

I see SSD caching as a great way to take advantage of what SSDs do best: reading and writing small random bits of data. Let the spindles handle all the bulk data transfer.

Curious to see some hard numbers, but I wouldn't be surprised if putting an SSD cache in front of a couple of WD Black drives actually decreased performance (when used for data storage), or at least provided very little practical benefit.
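The "ignore large files" idea amounts to a size- and pattern-based admission policy. Intel has not published SRT's exact policy, so the filter below is purely hypothetical -- my own illustration of the principle that only small random I/O should earn a place in the cache, while bulk sequential transfers pass straight through to the spindles:

```python
# Hypothetical cache-admission filter (my invention, not Intel's documented
# SRT behavior): admit only small, non-sequential requests, since those are
# where an SSD's advantage over a spindle is largest.
LARGE_IO_BYTES = 1 << 20  # assumed 1 MiB cutoff, purely illustrative

def should_cache(request_bytes, sequential):
    """True if this request is worth admitting to the SSD cache."""
    return request_bytes < LARGE_IO_BYTES and not sequential

print(should_cache(4096, sequential=False))      # small random read -> True
print(should_cache(64 << 20, sequential=True))   # bulk media stream -> False
```

Under a policy like this, a drive holding mostly large media files would see almost nothing admitted to the cache -- which is exactly the "very little practical benefit" scenario described above.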
 
Last edited:

Patrick Wolf

Platinum Member
Jan 5, 2005
2,443
0
0
Would it make any sense to partition the SSD to create a virtual 40 gb SSD to use a cache on the RAIDED WD HDD, leaving the rest of the SSD to act as an unRAIDED boot drive?

Yes, but it's kind of a pain to do, as Intel doesn't offer this as an option in RST.

The comments here are grossly overcomplicating things and don't seem to even understand SRT, so I'll keep it simple. Since you already have a 240GB SSD, just use it for the OS and keep the WD drives in RAID. SRT is not very flexible, so this is probably your best option.
 

BonzaiDuck

Lifer
Jun 30, 2004
16,632
2,027
126
Wouldn't caching any mechanical drive holding primarily bulk data with an SSD be a waste? Certainly an array of mechanical drives. SSDs are WAY faster than spindles when it comes to small random reads and writes, but sequential speeds on spindles are pretty fast. Plus, doesn't Intel's Z68 caching ignore large files anyway? Seems like you'd be wasting the potential of the SSD.

I see SSD caching as a great way to take advantage of what SSDs do best: reading and writing small random bits of data. Let the spindles handle all the bulk data transfer.

Curious to see some hard numbers, but I wouldn't be surprised if putting an SSD cache in front of a couple of WD Black drives actually decreased performance (when used for data storage), or at least provided very little practical benefit.

There is another thread -- I think it's in "Motherboards" -- where we had an extensive discussion of this, and I don't think it's dropped to "inactive." Several people provided ATTO and CrystalDiskMark results on their "accelerated" HDDs with SSD caching.

It seems that bigger gains were reaped with a SATA-II SSD and a SATA-II HDD. That is, one benchmark test at a review web-site reported "up to" a 400% gain in performance over the HDD's standalone benchies, asymptotically approaching 80 to 90% of the SSD's performance after the user's "software and OS" habits are "learned."

I threw in with my test of a SATA-III Caviar Black and a SATA-III Elm Crest SSD, followed by a SATA-III VelociRaptor and the same Elm Crest. The Elm Crest on a SATA-III controller is supposed to hit 520 MB/s or thereabouts in sequential reads and about half that for writes. We saw a degradation against that spec for the formatted partition of the Elm Crest under the RAID0 configuration used for caching, so that reads fell to between 350 and 400 MB/s, with a corresponding or proportionate drop for writes.

With the Caviar Black connected to a SATA-III port, "accelerated" (SSD-cached) reads were just over 200 MB/s, with writes between 98 and 99, under "Enhanced" mode. In "Maximized" mode, reads and writes are near the same level, with some slight degradation of the accelerated drive's reads but a sizeable improvement in writes.

With the "accelerated" Raptor in Enhanced mode, reads were around 262 MB/s with writes at about 135 to 140. [These are sequential reads and writes, I must remind you.] In Maximized mode, reads take a small hit, dropping below 250, while writes are over 180.

Point being: this suggests that attempts to capture greater-than-SATA-II performance with this technology don't seem to yield proportionate gains. Instead of a 400% improvement over, say, the Raptor's 145 MB/s sustained-throughput spec, the result is less than 200% of that figure -- less than a 100% improvement.
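The arithmetic behind that claim is easy to check against the numbers reported above (accelerated VelociRaptor sequential reads of ~262 MB/s versus its ~145 MB/s standalone spec). A quick sketch, using only figures from this post:

```python
# Express the cached result as a percentage of the standalone baseline,
# using the sequential-read figures reported in the post above.
def pct_of_baseline(accelerated, baseline):
    return 100.0 * accelerated / baseline

raptor_spec, raptor_cached = 145.0, 262.0  # MB/s, from the benchmarks above
gain = pct_of_baseline(raptor_cached, raptor_spec)
print(f"{gain:.0f}% of baseline")  # 181% of baseline
# i.e. under a 100% improvement -- well short of the "up to 400%"
# figure reported for all-SATA-II setups.
```

So the SATA-III pairing delivers roughly 181% of the baseline throughput where the SATA-II reports claimed up to 400%, which is the shortfall the post is pointing at.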

But also: SSDs will not spell the end of HDDs. They both fit into the established model of "memory" or "storage" in a pyramidal hierarchy of a computer system. Right now, a 500GB SSD will run you about $1,000; you can get a 2 TB hard drive for a tenth of that. When 1TB SSDs are released, we will see 3 or 4 TB HDDs.

Now, for the type of computing that we're used to, known only to our current or past experience, a 1TB SSD would be preferred for all OS software and files, but there's the problem of expense -- a factor in that pyramidal hierarchy of trade-offs between speed, storage volume and cost.

I very much like the idea of a 600 GB drive offering 260 MB/s reads and 180 to 190 MB/s writes. It seems "instantaneous." But I'd really like to see that performance pushed closer to the SATA-III 6.0 Gb/s spec, so that the overall accelerated performance is around 375 or 400 MB/s for those sequential reads, and maybe 325 to 350 for the writes.

If I missed or forgot something, and someone thinks that wish is still achievable with the equipment I've described, please let me know . . .
 

RaiderJ

Diamond Member
Apr 29, 2001
7,582
1
76
Very informative post! I had to read it a couple of times to absorb it all. From the performance limits you've described, it sounds like you're running up against the wall with the technology. Since the Z68 SSD caching handles files, you're up against how those files are handled by the OS and file system.

If a bunch of random writes comes in, the SSD cache should handle them relatively quickly, near the full performance of the drive, assuming all of the files are in the cache. You'd hit a bottleneck, however, when those same writes had to be reflected onto the HDD: you'd see a performance drop that would dip towards the random-write performance of the HDD as it put all the writes on disk.

Now, if the OS/file system did copy-on-write, like ZFS does, it could turn those random writes into a sequential write. You'd then see performance lean towards the sequential-write speed of the drive instead of the random-write speed. You'd still see a performance increase, assuming the SSD's sequential-write speed is greater than that of the HDD.
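The copy-on-write idea can be sketched in a few lines. This is my own toy log-structured store, not ZFS code: instead of seeking to each block's home location, it appends every write to the tail of a log and remaps the logical address, so a burst of scattered writes lands on disk as one sequential run.

```python
# Toy log-structured store (illustrative only, not ZFS): every write is an
# append, and an index maps each logical block to its latest log position.
class ToyLog:
    def __init__(self):
        self.log = []      # sequential on-disk layout: appends only
        self.index = {}    # logical block id -> position in the log

    def write(self, block_id, data):
        self.index[block_id] = len(self.log)  # remap; never overwrite in place
        self.log.append(data)

    def read(self, block_id):
        return self.log[self.index[block_id]]

log = ToyLog()
for blk in (907, 13, 512, 13):        # scattered logical addresses
    log.write(blk, f"payload-{blk}")
print(log.read(13))                   # latest copy wins
print(len(log.log))                   # four sequential appends, zero seeks
```

The trade-off, as in real log-structured systems, is that reads can become scattered and old log entries need garbage collection -- but the write path degenerates to pure sequential throughput, which is the effect described above.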

Does that make sense? Not sure I have all my thoughts correct.

I'd be curious to see how a Z68/SSD cache would perform in front of a fast HDD when running a ZFS file system on it, instead of Windows/NTFS. Do you know if something like that has been done?

EDIT: I don't think you would want to do Z68 caching on ZFS even if you could. Still, I'd be curious to see how well a Z68 SSD cache would perform if a file system/OS were able to reorganize data requests to best match the capabilities of the setup.
 
Last edited:

Dylie

Junior Member
Jul 23, 2011
5
0
0
Bonzai, when you were benching your performance, did you also use the drive in real-world scenarios and not just benchmarking programs?

The reason I ask is because, in my mind, the best benefit of an SSD is the much-improved install times. From what I could find, it would install at the same speed as a regular HDD.

Also, a 240GB SandForce drive is more or less a 400GB SSD for around 500 dollars. It's the best price point of any SSD at roughly only $1.20/GB, whereas an Intel SSD 510 is about $3/GB and a 120GB SandForce drive is about $1.50/GB.

That being said, there are extreme problems with most of the SandForce drives right now. Hopefully the Wildfire has avoided them.