Tiered storage (Fusion Drive) or progressive caching on Windows possible?

BD2003

Lifer
Oct 9, 1999
16,815
1
81
Is there software out there that can provide something similar to Apple's Fusion Drive on Windows? Better yet, something that can get the RAM cache involved?

Currently, my setup is this:

128GB Samsung 840 EVO - OS drive with plenty of spare space.
256GB Samsung 840 - SSD for games, which I have to shift games onto manually with Steam Mover.
64GB SSD + 2TB HDD using Intel Smart Response, where I keep most of the 1.2TB or so of games.
8GB of RAM - 4GB of which is sitting completely idle/zero/free when it could be used as cache.

As you can probably tell, I hate load times. Like really, really hate them. I also hate having to manually manage this stuff.

Ideally I could have a setup where I'd have one HDD and one SSD, and I'd throw in another 8GB of RAM for good measure. Recent reads/writes and the most frequently accessed files/blocks would then be intelligently cached onto the SSD and into RAM. That way any time I'd download a game, it'd be ready and waiting on the SSD, potentially even RAM...and I wouldn't need to manage a thing. Even though progressive caching like this is a fundamental concept in computing, I can't find any modern solution that brings it all together in an intelligent way.
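For what it's worth, here's roughly what I mean, sketched out. This is just a toy Python model of the policy I'm describing - track how often and how recently files get hit, and keep the hottest ones on the fastest tier that has room. The tier sizes, paths, and game sizes are made up, and it's not any existing product:

```python
# Conceptual sketch only -- not an existing Windows feature. It models the
# policy described above: track per-file access counts and recency, and keep
# the hottest files on the fastest tier that still has room.
import time
from collections import defaultdict

TIERS = [  # fastest first: (name, capacity in GB) -- invented sizes
    ("RAM", 8),
    ("SSD", 256),
    ("HDD", 2000),
]

class TieredPlacement:
    def __init__(self):
        self.access = defaultdict(lambda: [0, 0.0])  # path -> [count, last access]
        self.size_gb = {}                             # path -> size in GB

    def record_access(self, path, size_gb):
        entry = self.access[path]
        entry[0] += 1
        entry[1] = time.time()
        self.size_gb[path] = size_gb

    def plan(self):
        """Assign each file to the fastest tier with spare capacity,
        hottest files first (recency breaks ties)."""
        ranked = sorted(self.access,
                        key=lambda p: (self.access[p][0], self.access[p][1]),
                        reverse=True)
        free = {name: cap for name, cap in TIERS}
        placement = {}
        for path in ranked:
            for name, _ in TIERS:
                if self.size_gb[path] <= free[name]:
                    free[name] -= self.size_gb[path]
                    placement[path] = name
                    break
        return placement

# Example: a freshly downloaded, heavily played game ends up on the SSD tier.
p = TieredPlacement()
p.record_access(r"D:\Games\Titanfall", 48)
p.record_access(r"D:\Games\Titanfall", 48)
p.record_access(r"D:\Video\old_movie.mkv", 20)
print(p.plan())
```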

I can enable RAPID RAM caching on my EVO - but that only caches a maximum of 1GB, and only on the system drive. It still leaves a ton of RAM idle and doesn't help out the other drives.

Intel Smart Response is nice for accelerating a HDD, but I'm limited to a maximum of 64GB (which barely fits a game like Titanfall). It's also a dirty hardware hack that's really particular about setup; I can't just carve out a portion of the OS drive to cache the HDD without jumping through hoops.

IMO Windows itself should have all this built in. They've even got intelligent RAM caching built in (SuperFetch) - it's just disabled on SSDs, and manually re-enabling it in the registry seemingly has no effect. Either that, or they toned it down so much that it preloads next to nothing. Nor is it possible to have it preload anything beyond the system drive. The standard RAM cache of recently accessed files doesn't help me after a reboot, and it's prone to being flushed by any random I/O.

In every case the developers have decided to arbitrarily restrict the ways I can utilize the hardware...but maybe there's a total solution out there I'm not aware of that lets me do what I want?
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
Firstly, it sounds as though you may not thoroughly understand how many of these technologies work. Your use of terms such as "arbitrary restrictions" is dismissive of the wealth of hard work that goes into these technologies. I'd recommend coming in looking for help, answers, or explanations with an open mind, rather than with a pre-aimed gun.

Let me start by saying there is some cognitive dissonance in wanting an Apple Fusion-style product while ignoring the products that already exist. Apple Fusion uses proprietary software on proprietary flash drives to offer up to a 128GB read cache for the platter-based drive it's paired with. Such SSD + software combos are readily available from OCZ, Mushkin, Crucial, SanDisk, etc. But you decided to buy an SSD from a vendor that doesn't bundle the software.

The following statement is false:
That way any time I'd download a game, it'd be ready and waiting on the SSD, potentially even RAM...and I wouldn't need to manage a thing. Even though progressive caching like this is a fundamental concept in computing, I can't find any modern solution that brings it all together in an intelligent way.

It is true that the idea of caching is fundamental to computing, and there are many intelligent solutions out there to accomplish it - just not many for the average person who wants to buy a product from Best Buy and go. That's because storage tiering just isn't where it needs to be yet to be something that's "just there", and in a world where businesses cling to Windows XP and forums of gamers swear up and down that Windows 8 is the next coming of Hitler, I can't blame Microsoft for that. There's also the issue that people lump storage caching together as one thing, but really there is read caching and write caching, and the two require *very* different approaches and considerations. I could go into more detail, but I feel that would get in the way of what you've come for, which is solutions, so let me try to help with that :)
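To make the read vs. write distinction concrete, here's a toy Python sketch (not any particular product - the "disk" is just a dict): a read cache can throw anything away and you only lose speed, while a write-back cache holds the only copy of your data until it flushes, which is why real products need batteries, mirroring, journaling, and so on.

```python
# Toy illustration of why read caching and write caching are different problems.
from collections import OrderedDict

class ReadThroughCache:
    """Read cache: losing it only costs speed, never data."""
    def __init__(self, backing, capacity):
        self.backing, self.capacity = backing, capacity
        self.cache = OrderedDict()

    def read(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)        # mark as recently used
            return self.cache[key]
        value = self.backing[key]              # slow path: go to "disk"
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict LRU; nothing is lost
        return value

class WriteBackCache:
    """Write cache: dirty data exists only here until flushed, so a crash
    before flush() means lost writes -- hence all the extra care needed."""
    def __init__(self, backing):
        self.backing = backing
        self.dirty = {}

    def write(self, key, value):
        self.dirty[key] = value                # acknowledged before it's durable

    def flush(self):
        self.backing.update(self.dirty)        # the only point of durability
        self.dirty.clear()

disk = {"a": 1, "b": 2}
rc = ReadThroughCache(disk, capacity=1)
print(rc.read("a"), rc.read("b"))              # second read evicts "a", no harm done
wc = WriteBackCache(disk)
wc.write("c", 3)                               # fast, but not yet on "disk"
wc.flush()                                     # now it is
print(disk)
```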

If you want Windows solutions, then, like Apple, you need to bring together a combination that works. There are many SSDs being sold as cache drives, as I mentioned before, and Intel SRT, which you're already using, is another option.

If you want a hardware-agnostic approach, then use Windows 8.1. Think modern. OS X didn't support Fusion until 10.8.2, which was released just before November 2012; it's unreasonable to expect Microsoft to have answered this quandary in an operating system three years older (not that you were). Windows 8.1 brings in a modern filesystem (ReFS) with new features, including checksumming, and a proper soft-RAID implementation. But what it also provides is Storage Spaces. Using Storage Spaces, you can create a tiered storage array that treats your drives and SSDs as storage + caching targets. Now unfortunately, Windows 8.1 Storage Spaces can't *create* a tiered Storage Space, but the functionality to support one is there. If you get hold of Windows Server 2012 R2 (any trial download will do), you can create a tiered Storage Space there. Then it's just a matter of installing Windows 8.1 and mounting that created Storage Space. This is my recommended suggestion if you're on Windows 8.1 :)
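If it helps, this is roughly the Server 2012 R2 side of that workflow, driven from Python via subprocess purely for illustration. I'm writing the cmdlets and parameters from memory, and the pool/tier names and sizes are invented, so double-check Get-Help New-StorageTier and New-VirtualDisk on your own box before running anything.

```python
# Rough sketch of the Server 2012 R2 tiered Storage Space workflow described
# above. Cmdlet names/parameters are from memory (verify with Get-Help) and
# the pool/tier names and sizes are placeholders.
import subprocess

ps_script = r"""
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "GamePool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
$ssd = New-StorageTier -StoragePoolFriendlyName "GamePool" `
    -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "GamePool" `
    -FriendlyName "HDDTier" -MediaType HDD
# Carve out a tiered virtual disk: hot blocks migrate to the SSD tier on a schedule.
New-VirtualDisk -StoragePoolFriendlyName "GamePool" -FriendlyName "GameSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 1800GB `
    -ResiliencySettingName Simple
"""

# Run the whole thing in one elevated PowerShell session.
subprocess.run(["powershell", "-NoProfile", "-Command", ps_script], check=True)
```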

Lastly, if you want completely agnostic solutions, consider a RAID controller with a better caching system, though this would require a larger outlay of money. For instance, a used LSI 9260-4i would run you about $250. From there, the CacheCade 2.0 software would cost you another $200. That $450 would give you the ability to use a three-tiered system of controller RAM -> SSD -> hard drive on any system, with any OS running on the array.

If you really want to use a top-to-bottom memory + SSD + hard drive caching system, then you have to use a system that has storage in mind from day one, and Windows/OS X isn't it. You have to consider volatility, block handling, write caching and protection of cached data, etc., and all of that is just not on the radar of a company that has far more things to worry about. For that, you'd have to think about deploying some sort of SAN that can provide tiered caching, like ZFS.

Hopefully this can provide some direction. :)
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81

I'll try and ignore the condescension since you're actually being helpful. :)

I am using 8.1, and I'm already aware of the tiered Storage Spaces in Server 2012 R2, but I'm not fully convinced that Windows 8.1 could manage the space. Googling turns up nothing beyond it not being supported at all in 8.1. (Believe me, I've gone down that rabbit hole already.)
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Is there software out there that can provide something similar to apples fusion drive on windows?
WD's Black2, kind of sort of, maybe? So not really.

Better yet, something that can get the ram cache involved?
Use more RAM. If your RAM is not being used, that means that either your working set is too small for Windows to be able to fill it up, or you are using a ton of RAM and then quickly releasing it.

Your best option is to stop using the HDD for any live data but streaming media and backups. You can make up to a ~5.5TB SSD RAID 0 on most Intel H/Z-series boards.

Apple Fusion uses proprietary software on proprietary flash drives to offer up to a 128GB read cache for the platter-based drive it's paired with. Such SSD + software combos are readily available from OCZ, Mushkin, Crucial, SanDisk, etc.
I know of no such software that is comparable to Fusion Drive.

For that, you'd have to think about deploying some sort of SAN that can provide tiered caching, like ZFS.
ZFS is a block FS, not a SAN.
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
Use more RAM. If your RAM is not being used, that means that either your working set is too small for Windows to be able to fill it up, or you are using a ton of RAM and then quickly releasing it.

It does get used when I play some of the bigger games, and then drops considerably thereafter. Presumably some of that game remains in cache, but considerably less than I'd expect, even after playing several different games. Sure, there are times when all my free memory is being used as cache like it should be, but it tends to be filled with leftovers from what I was doing, not what I'm going to be doing.

I don't fully understand the rationale behind why MS disabled SuperFetch for SSDs. Yes, the *need* for proactive RAM caching is considerably less with an SSD. But at the same time, free memory is wasted memory, and an SSD could easily fill it in mere seconds without causing the kind of slowdown that mechanical disk thrashing causes. The functionality is there, it's really good tech, and it won't let me use it! It's not even that I expect a dramatic speedup; it's more that leaving all that RAM free bothers me on a philosophical level.
 

thecoolnessrune

Diamond Member
Jun 8, 2005
9,673
583
126
WD's Black2, kind of sort of, maybe? So not really.

Use more RAM. If your RAM is not being used, that means that either your working set is too small for Windows to be able to fill it up, or you are using a ton of RAM and then quickly releasing it.

Your best option is to stop using the HDD for any live data but streaming media and backups. You can make up to a ~5.5TB SSD RAID 0 on most Intel H/Z-series boards.

I know of no such software that is comparable to Fusion Drive.

ZFS is a block FS, not a SAN.

This post kind of blows my mind. Your last comment was pretty much pure pedantry. Windows has no support for the ZFS filesystem, and no open project to support it, now or in the future, so what exactly would he need to get this filesystem working with Windows? Some mounted volume from another appliance, be it a virtual machine or an external appliance. I didn't figure that would require any great stretch of the imagination.

It would have been fine to chalk it up to that, but then you make your other comments, and I'm confused.

You state that there is no software comparable to Fusion Drive? Comparable how? Comparable in codebase? Comparable in features? Comparable in function? Fusion Drive actively shifts specific data from one drive to another, so any given piece of data lives on only one drive at a time. Other caching solutions (read: pretty much every other one) always store data on a base storage group, with "hot" data cached to SSDs. Is it pedantry again that you believe this isn't "comparable"? Or are you saying that Fusion Drive is inferior to other implementations because all the data never exists on a single storage set? Or are you saying it's superior, by being able to give you 128GB more space at the sacrifice of never having all the data on a single set? The only other thing I can think of is that you believe there's genuinely no other invisible caching software out there, which the post you quoted already shows is not the case at all.

And to top it all off, the only contribution you actually make to this discussion is suggesting the Black2 drive, which doesn't bundle any caching software at all - just software to make the two drives work through one SATA connection in Windows, since it doesn't use a port multiplier (but rather one LBA set). Your suggestion does absolutely nothing to answer the OP's question. If you believed that drive to be relevant to the discussion, it's no wonder you don't see any software out there "comparable" to Fusion Drive. :colbert:
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
This post kind of blows my mind. Your last comment was pretty much pure pedantry.
Not so. A little smartassery, maybe...
Windows has no support for the ZFS filesystem, and no open project to support it, now or in the future, so what exactly would he need to get this filesystem working with Windows?
If 100MBps is acceptable, you could, as you suggested, use NAS. SAN is not what you were describing, and would be all downsides, anyway. Assuming ~100MBps wouldn't be acceptable, big SSDs become the way to go.
You state that there is no software comparable to Fusion Drive? Comparable how?
In implementing write caching, and putting files on the SSD without concern for their state on the HDD. That's what makes it generally faster than most, if not all, Windows-based cache programs.

However, I am not an expert on exactly what features may have been added to some specific programs, or if any OEMs have implemented new ones, and there's always rumors of something like that coming to Windows, one of these years. If one exists, I'd like to know, and if reasonable to do so, try it out.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Sure, there's times when all my free memory is being used as cache like it should be, but it tends to be filled with useless stuff of what I was doing, not what I'm going to be doing.

I don't fully understand the rationale behind why MS disabled super fetch for SSDs.
That first sentence I quoted is part of why. It wastes IOs filling RAM with junk. To be useful just after you free a bunch of RAM, it would need to keep track of what was being evicted that wouldn't have been if you'd had more RAM. In doing that, it would then be accessing your SSD hundreds of times more often than otherwise, if not even more. With battery-powered devices being the common case, now, and the speedup likely imperceptible, it doesn't make much sense.

SuperFetch only predicts well if you have a very regular pattern of file access, in the best case, and random junk has been filling your RAM cache. Some given game could be that random junk. It does well for business PCs that have regular virus scans and such, or for someone who checks their email and Facebook pages at about the same time every day, but it becomes less effective from there on out.

They may be able to speculatively improve it in the future, to react to what you're doing now rather than just your habits based on time (for instance, start a game EXE, then load everything that game loaded the last few times, before it asks for it), but they haven't gotten there yet (as much work as they've put into it over the last 15 years, I'd bet they're trying).
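Something like this, conceptually - a hypothetical little Python sketch of "remember what files a program touched last run, and warm the file cache for them the moment it launches again." It's not how SuperFetch actually works, and the history file and function names are invented.

```python
# Hypothetical sketch of "react to what you're doing now": remember which
# files a program touched on previous runs, then warm the OS file cache for
# those files the moment it launches again. Names/paths are invented.
import os
import json

HISTORY = "prefetch_history.json"   # invented location for illustration

def load_history():
    try:
        with open(HISTORY) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def record_run(exe, files_touched, history):
    """After a run, remember what the program read (e.g. from an I/O trace)."""
    seen = set(history.get(exe, [])) | set(files_touched)
    history[exe] = sorted(seen)
    with open(HISTORY, "w") as f:
        json.dump(history, f)

def prefetch(exe, history, chunk=1 << 20):
    """On launch, read last run's files so they're already in the page cache."""
    for path in history.get(exe, []):
        if not os.path.exists(path):
            continue
        with open(path, "rb") as f:
            while f.read(chunk):    # discard the data; the cache keeps it warm
                pass

history = load_history()
prefetch("game.exe", history)       # warm the cache while the game starts up
# record_run("game.exe", [r"D:\Games\PoE\Content.ggpk"], history)  # after the session
```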
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
That first sentence I quoted is part of why. It wastes IOs filling RAM with junk. To be useful just after you free a bunch of RAM, it would need to keep track of what was being evicted that wouldn't have been if you'd had more RAM. In doing that, it would then be accessing your SSD hundreds of times more often than otherwise, if not even more. With battery-powered devices being the common case, now, and the speedup likely imperceptible, it doesn't make much sense.

One could also make the case that overclocking a CPU doesn't make much sense on a battery-powered device either, but that shouldn't mean desktop users have to get the short end of the stick. The speedup can often be significant - take a game like Path of Exile. It takes damn near forever to load from a HDD (60+ seconds), about 25 seconds from an SSD, and about 15 seconds from a RAM drive or when it's fresh in the file cache.

SuperFetch only predicts well if you have a very regular pattern of file access, in the best case, and random junk has been filling your RAM cache. Some given game could be that random junk. It does well for business PCs that have regular virus scans and such, or for someone who checks their email and Facebook pages at about the same time every day, but it becomes less effective from there on out.

It's been tamed considerably since Vista, where it used to go out of its way to fill your RAM; now it just caches a very small subset to accelerate launches, even on HDDs. Maybe that's more appropriate for the vast majority of users, but I'd really like that mode of operation back.

They may be able to speculatively improve it in the future, to react to what you're doing now rather than just your habits based on time (for instance, start a game EXE, then load everything that game loaded the last few times, before it asks for it), but they haven't gotten there yet (as much work as they've put into it over the last 15 years, I'd bet they're trying).

That'd be nice, but that's a step above what I'm trying to achieve. It doesn't need to be that proactive; at least as far as the RAM cache goes, all I basically want is something akin to Smart Response, just for memory - basically exactly what ZFS is doing:

ZFS cache: ARC (L1), L2ARC, ZIL
ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance vs cost.[33] Frequently accessed data is stored in RAM, and less frequently accessed data can be stored on slower media, such as SSD disks. Data that is not often accessed is not cached and left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSD disks or to RAM.
The first level of disk cache is RAM, which uses a variant of the ARC algorithm. It is similar to a level 1 CPU cache. RAM will always be used for caching, thus this level is always present. There are claims that ZFS servers must have huge amounts of RAM, but that is not true. It is a misinterpretation of the desire to have large ARC disk caches. The ARC is very clever and efficient, which means disks will often not be touched at all, provided the ARC size is sufficiently large. In the worst case, if the RAM size is very small (say, 1 GB), there will hardly be any ARC at all; in this case, ZFS always needs to reach for the disks. This means read performance degrades to disk speed.
The second level of disk cache is SSD disks. This level is optional, and is easy to add or remove during live usage, as there is no need to shut down the zpool. There are two different caches: one for reads, and one for writes.
The read SSD cache is called L2ARC and is similar to a level 2 CPU cache. The L2ARC will also considerably speed up Deduplication if the entire Dedup table can be cached in L2ARC. It can take several hours to fully populate the L2ARC (before it has decided which data are "hot" and should be cached). If the L2ARC device is lost, all reads will go out to the disks which slows down performance, but nothing else will happen (no data will be lost).
The write SSD cache is called the Log Device, and it is used by the ZIL (ZFS intent log). ZIL basically turns synchronous writes into asynchronous writes, which helps e.g. NFS or databases.[34] All data is written to the ZIL like a journal log, but only read after a crash. Thus, the ZIL data is normally never read. Every once in a while, the ZIL will flush the data to the zpool; this is called Transaction Group Commit. In case there is no separate log device added to the zpool, a part of the zpool will automatically be used as ZIL, thus there is always a ZIL on every zpool. It is important that the log device use a disk with low latency. For superior performance, a disk consisting of battery backed up RAM such as the ZeusRAM should be used. Because the log device is written to often, an SSD disk will eventually be worn out, but a RAM disk will not. If the log device is lost, it is possible to lose the latest writes, therefore the log device should be mirrored. In earlier versions of ZFS, loss of the log device could result in loss of the entire zpool, therefore one should upgrade ZFS if planning to use a separate log device.
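The ZIL part in particular is simple enough to sketch. Here's a toy Python model of the idea as I understand it (file paths and names are invented, and it's obviously nothing like the real implementation): synchronous writes get appended to a fast durable log and acknowledged immediately, the pool gets the data later in batched transaction group commits, and the log is only ever read back after a crash.

```python
# Toy model of the ZIL idea described above. The "pool" is a dict and the log
# is an ordinary file; both are placeholders for illustration.
import os
import json

class IntentLog:
    def __init__(self, log_path="zil.log"):
        self.log_path = log_path
        self.pending = []

    def sync_write(self, key, value):
        # Append to the fast log device and force it to stable storage -- the
        # write is now durable, so the caller is acknowledged without waiting
        # on the slow pool.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.pending.append((key, value))

    def txg_commit(self, pool):
        """Periodic transaction group commit: apply batched writes to the pool,
        after which the log entries are no longer needed."""
        for key, value in self.pending:
            pool[key] = value
        self.pending.clear()
        open(self.log_path, "w").close()   # truncate the log

    def replay(self, pool):
        """Only used after a crash: re-apply whatever never made it to the pool."""
        try:
            with open(self.log_path) as log:
                for line in log:
                    rec = json.loads(line)
                    pool[rec["key"]] = rec["value"]
        except FileNotFoundError:
            pass

pool = {}
zil = IntentLog()
zil.sync_write("save_game", "slot1")   # fast, durable acknowledgement
zil.txg_commit(pool)                   # later: batch-applied to the pool
print(pool)
```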
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
(...) that shouldn't mean that desktop users have to get the short end of the stick.
They do, though, because there's just one way chosen to get the job done, to save them some hassle. :\
 

yinan

Golden Member
Jan 12, 2007
1,801
2
71
Not so. A little smartassery, maybe... If 100MBps is acceptable, you could, as you suggested, use NAS. SAN is not what you were describing, and would be all downsides, anyway. Assuming ~100MBps wouldn't be acceptable, big SSDs become the way to go.
In implementing write caching, and putting files on the SSD without concern for their state on the HDD. That's what makes it generally faster than most, if not all, Windows-based cache programs.

However, I am not an expert on exactly what features may have been added to some specific programs, or if any OEMs have implemented new ones, and there's always rumors of something like that coming to Windows, one of these years. If one exists, I'd like to know, and if reasonable to do so, try it out.

Why is he limited to 100MB/sec? 10GbE is affordable now: $300 per card, and $800 for the switch. Not too bad for the speed. I am going to need a second switch here soon, though.

What he could do is have a ZFS box present a share to Windows and write to it over 10GbE.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Why is he limited to 100MB/sec? 10GbE is affordable now: $300 per card, and $800 for the switch.
The networking gear, with no useful NAS attached, costs about as much as 3TB of SSDs. Then, another 1-2TB worth for a NAS.

You could do it, but it makes little sense, when local drives are cheaper and faster, and the data size isn't that big.
 

Tristor

Senior member
Jul 25, 2007
314
0
71
Not so. A little smartassery, maybe... If 100MBps is acceptable, you could, as you suggested, use NAS. SAN is not what you were describing, and would be all downsides, anyway. Assuming ~100MBps wouldn't be acceptable, big SSDs become the way to go.
In implementing write caching, and putting files on the SSD without concern for their state on the HDD. That's what makes it generally faster than most, if not all, Windows-based cache programs.

However, I am not an expert on exactly what features may have been added to some specific programs, or if any OEMs have implemented new ones, and there's always rumors of something like that coming to Windows, one of these years. If one exists, I'd like to know, and if reasonable to do so, try it out.

I don't know why people continuously ignore Infiniband when it's specifically designed for storage fabric and it DOESN'T require a switch. You can use OFED (free) drivers on Windows, Linux, and BSD to get Infiniband support, including SCSI-RDMA (functionally works like iSCSI). A couple of QLogic dual-port 4x QDR PCI-E cards will set you back ~$800 or less and can easily sustain ~6GB/s to a single target, which is far faster than the alternatives. ZFS ARC/L2ARC is highly efficient and has no upper limit on the RAM or SSDs used for read caching. I'm currently running 16x3TB drives in two 8-disk RAIDZ2 vdevs in a single pool with 4 cache devices, and am able to sustain 2GB/s reads over IB to my HV from my media server. I'm limited primarily by my cache devices; using a good PCI-E SSD as a cache device would probably be better, but more costly, and I was building on a budget.

While it's a bit ridiculous for a gaming desktop, if you don't mind having a second computer sitting there purely to act as a super-high-speed block storage device, then there's absolutely nothing wrong with the idea of building a SAN using FreeBSD + ZFS to act as that device. Steam also natively supports installing all your games to a different drive (it can be selected at install time, as well as setting a default that's not C:\). If the OP is that persnickety about load times, there's certainly nothing technical stopping him from solving his issue once and for all, as long as he's willing to spend the money.

Also, what the poster was describing is a SAN, not a NAS. There is a difference between the two.

SAN = Storage Area Network. It's designed as a storage fabric (that is, the concept of horizontal scale and interaction). The primary difference, though, is that a SAN is a BLOCK storage device across the network. It uses iSCSI, SCSI-RDMA, Fibre Channel, or similar technology to provide a dedicated block storage target to a system on the network.

NAS = Network Attached Storage. It's designed as a single endpoint (no horizontal scale) which provides SHARED FILE-LEVEL storage. This is typically implemented with a protocol like CIFS/SMB or NFS (although NFS can be mounted as a block device or treated as file storage, it's fundamentally file-level storage in the way that it operates).

A device can potentially act as both a SAN and a NAS, as my media server does. In order to get the experience the OP was discussing though, he needs a block device that meets his needs, not file storage. Mounting a network drive in Windows is nowhere near the same as a remote block device in performance or compatibility to use with other software (Steam).
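If it helps make the block vs. file distinction concrete, here's a minimal Python illustration (the device and share paths are just examples, and raw-device reads need admin rights): block storage is addressed by offset with no notion of files, while a NAS share is addressed by path through a filesystem.

```python
# Minimal illustration of block-level vs. file-level access. Paths are examples only.
def read_block(device_path, offset, length=4096):
    """Block-level access: ask the device for raw bytes at an offset."""
    with open(device_path, "rb") as dev:
        dev.seek(offset)
        return dev.read(length)

def read_file(share_path):
    """File-level access: ask the filesystem (local or CIFS/NFS) for a named file."""
    with open(share_path, "rb") as f:
        return f.read()

# A SAN LUN shows up like a local disk, so block access works against it:
# read_block(r"\\.\PhysicalDrive1", 0)          # Windows raw-device syntax
# A NAS share only offers named files:
# read_file(r"\\synology\media\movie.mkv")
```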

Also, I should probably point out that the purpose of tiered storage systems isn't to increase throughput, or at least not directly. It's to reduce latency for often-accessed data. Getting that data into lower-latency storage and physically closer to where it's needed can have a huge effect on reducing end-to-end latency in a system. In the case of my single-device SAN described above, the most frequently accessed data would be in RAM cache on the HBA, followed by RAM cache on the SAN, followed by SSD cache on the SAN, followed by the HDD array. The latency drop is almost exponential at each tier: ~17ms of latency on disk, ~0.1ms on SSD, down to around ~50ns on RAM. Of course, those latency figures only cover the local system; moving data across a network connection (of any kind, no matter how high the throughput) takes a pretty significant latency hit. Despite this, it's still /far/ faster than local HDD storage, and on par with local SSD storage, while letting me have a large quantity (32TB) of storage reflected at that speed. Obviously, if I were to do an operation that read the entire 32TB it wouldn't be that fast, but 256GB of SSD + 64GB of RAM cache is pretty significant.
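As a back-of-the-envelope illustration (the hit rates here are invented; the latencies are the rough figures above), here's how the average access time falls out of a tiered stack:

```python
# Back-of-the-envelope tiering math: average access latency for a stack of
# tiers, using the latencies quoted above and some invented hit rates. Change
# the hit rates to see how quickly the HDD stops mattering once hot data
# lives in RAM/SSD.
TIERS = [
    # (name, latency in seconds, fraction of accesses served here)
    ("controller/host RAM", 50e-9,  0.50),
    ("SSD cache",           0.1e-3, 0.40),
    ("HDD array",           17e-3,  0.10),
]

average = sum(latency * hit_rate for _, latency, hit_rate in TIERS)
print(f"average access latency: {average * 1e3:.3f} ms")   # ~1.74 ms with these numbers
for name, latency, hit_rate in TIERS:
    print(f"  {name:20s} {latency * 1e3:10.5f} ms  x {hit_rate:.0%} of accesses")
```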
 

BD2003

Lifer
Oct 9, 1999
16,815
1
81
My interest is piqued. I've got a 2-bay Synology NAS sitting right next to my desktop, and it's always kind of bugged me that despite the RAID, my throughput is limited by the gigabit ethernet.

So you're saying if I build my own, there's a way to connect it locally while it still functions as a NAS for other devices? Obviously even eSATA is too slow for it to do what I want, so what connection could even handle that? Thunderbolt?
 

KeithP

Diamond Member
Jun 15, 2000
5,664
201
106
Apple Fusion uses proprietary software on proprietary flash drives to offer up to a 128GB read cache for the platter-based drive it's paired with.

Forgive me if I am misunderstanding what you are trying to say, but I believe this explanation is incorrect. Apple's solution moves data between the SSD and HDD depending on various factors, but the same data does not reside on both drives at once. It can also be enabled on 3rd-party SSDs and HDDs given the proper terminal commands.

From http://www.anandtech.com/show/6679/a-month-with-apples-fusion-drive/2

Unlike traditional SSD caching architectures, Fusion Drive isn’t actually a cache. Instead, Fusion Drive will move data between the SSD and HDD (and vice versa) depending on access frequency and free space on the drives. The capacity of a single Fusion Drive is actually the sum of its parts. A 1TB Fusion Drive is actually 1TB + 128GB (or 3TB + 128GB for a 3TB FD).

-KeithP
 

Tristor

Senior member
Jul 25, 2007
314
0
71
My interest is piqued. I've got a 2-bay Synology NAS sitting right next to my desktop, and it's always kind of bugged me that despite the RAID, my throughput is limited by the gigabit ethernet.

So you're saying if I build my own, there's a way to connect it locally while it still functions as a NAS for other devices? Obviously even eSATA is too slow for it to do what I want, so what connection could even handle that? Thunderbolt?

Yes, you can create a volume in ZFS (it functions like a LUN) that is tied to SCSI-RDMA via Infiniband as a dedicated remote block device, while utilizing CIFS to treat the remainder of the storage as a NAS. Obviously you're sharing I/O operations, but it shouldn't incur a significant performance penalty unless the NAS side is getting hammered. Infiniband is the way you'd want to connect this for best performance, although it is not cheap, as I mentioned before.