
Do you think we will see U.2 hard drives?

SATA Express being stillborn as a device endpoint does not necessarily mean that the interface is going away.

Keep in mind those two DRAM-less NVMe PCIe 3.0 x 2 controllers (Marvell 88NV1160 and Phison E8/E8T) I mentioned in this post are still relatively recent announcements.

So with those becoming available, perhaps we will eventually see SSHDs capable of taking advantage of PCIe 3.0 x2? (By my reckoning this would require 128GB of 3D NAND per drive.....perhaps 64GB if using the small/mobile-size generation 2 3D NAND dies mentioned in this article.)

With the second generation 3D NAND, Micron is shifting their strategy slightly by offering at least two different die sizes. We've previously heard about the 512Gb 64-layer 3D TLC part, but Micron will also be making a smaller 256Gb 3D TLC part. This die is planned to be the smallest 256Gb NAND flash die available from any vendor, at 59 mm^2 or 4.3Gb/mm^2. The smaller die is intended for the mobile market where the 512Gb part will be physically too large. Micron's market share for NAND in the mobile market has been quite low, in part because they tend toward making large, high-capacity chips. The new smaller part will give them a chance to go after a much larger share of the rapidly expanding mobile storage market. The smaller part may also see some use in the SSD market for the smallest models in each family, to avoid the pitfalls of having too few dies to stripe data accesses across.
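
As a quick sanity check of those figures (and of my die-count reckoning above), here's a throwaway Python sketch; nothing in it beyond the quoted numbers:

```python
# Rough check of the quoted density figure and the die counts an SSHD
# cache would need. Plain arithmetic, no values beyond the quote above.
die_capacity_gbit = 256      # quoted gen-2 die: 256Gb
die_area_mm2 = 59            # quoted die size: 59 mm^2

print(f"density: {die_capacity_gbit / die_area_mm2:.2f} Gb/mm^2")  # ~4.34

for cache_gbyte in (64, 128):
    dies = cache_gbyte * 8 // die_capacity_gbit   # GB -> Gb, then per die
    print(f"{cache_gbyte}GB of NAND = {dies} x 256Gb dies")
# 64GB -> 2 dies, 128GB -> 4 dies: either fits easily in a package or two.
```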
 
No mfgs have even announced any shipping products for it.
Actually, there is one: Asrock's USB 3.1 5.25" bay panel. Seems like Asus made one too. Don't know if it shipped, though.
That seriously sounds like a complete guess. How many motherboards had SATA Express at launch? How many AM4 motherboards are out there now for a launch platform? Do you believe every motherboard following this AM4 launch will have feature parity with existing boards throughout the service life? You're taking a bunch of observations (new platform, new CPUs, new chipset, new designs) and saying that it's diminishing, which it is, but you provide complete conjecture as to *why* it's diminishing. That also includes the fact that you seem to think it has some sort of "permanence" about it. Samsung SSD shipments were down Q1 2016, but no one considered it permanent. And it wasn't.
Okay, first of all, you can't possibly be drawing parallels between sales trends of an entire product segment (SSDs) and adoption of a "new" interface. Where on earth is the commonality there? History has shown us time and time again that slow or non-existent adoption of new connector standards is a clear indication that the standard will go away quickly. See Firewire or eSATA for reference.

Second, SATAe has been plentifully available on motherboards since Haswell. Yet devices just don't show up. How do you explain that?

And sure, we haven't seen half of the AM4 motherboards supposed to be available (30-something of the 82+ AMD mentioned). Yet very few of them - including high-end models - have SATAe.
My numbers aren't wrong, but I can see the misinterpretation. In my original post, I was discussing AM4, because AMD is implementing SATA Express on the chipset. In that area, reviewers have noted that the 2 SATA Express ports can be divided into SATA ports, used as two x2 PCIe NVMe ports, or coalesced into one x4 PCIe port, which is why it can also serve as another M.2 slot.
Sorry, but you said this:
If they want, the end user can deploy 2 SATA ports. If they want, the end user can deploy 2 PCI-e NVM drives. If they want, the OEM can take the 4 PCI-e lanes and redirect that for any general purpose they want, or just make a slot.
I.e. you're explaining it as if 2 sata ports (which with an additional connector make up a single SATAe connector) could somehow transmit 4 lanes of PCIe, or connect 2 NVMe drives. Which is simply false. I'll cut you some slack, though, and say your wording was simply not clear. Still, the X370 chipset allows for either 6 SATA ports + 2 lanes of PCIe 2.0, or 4 SATA ports + 4 lanes of PCIe 2.0. For now, the main implementation of this seems to be to use the dedicated x4 PCIe 3.0 from the Ryzen CPU to run an m.2 slot, and use the remaining lanes for SATA ports and the remaining PCIe for onboard devices or PCIe slots (and in some cases a second m.2 slot with either SATA or SATA/PCIe 2.0 x2 speeds).
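
To make that chipset trade-off concrete, here's a toy sketch (the figures are the ones I stated above, not an official AMD spec table):

```python
# Toy model of the X370 flexible I/O block as described above: either
# 6 SATA + PCIe 2.0 x2, or 4 SATA + PCIe 2.0 x4. Not an official spec.
FLEX_MODES = {
    "6 SATA + x2": {"sata": 6, "pcie2_lanes": 2},
    "4 SATA + x4": {"sata": 4, "pcie2_lanes": 4},
}

def pick_mode(need_sata, need_lanes):
    """Return a mode satisfying a board design's needs, or None."""
    for name, mode in FLEX_MODES.items():
        if mode["sata"] >= need_sata and mode["pcie2_lanes"] >= need_lanes:
            return name
    return None

print(pick_mode(6, 2))   # '6 SATA + x2'
print(pick_mode(6, 4))   # None -- you can't max out both at once
```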
Again, as you noted, it's conjecture. You have provided no evidence, while I have provided evidence indicating the opposite: http://www.anandtech.com/show/9369/...-sata-6gbps-jmf815-pcie-controllers-next-year
Ahem. You're quoting an article from 2015 stating that controllers will be coming "next year". If so, where are they? Did you miss 2016? Or might they also be one of the victims of manufacturers cutting their losses relative to a stillborn standard? Who here is guilty of conjecture now? "Oh, they said it was in development, so it'll for sure arrive at some point!"
Controllers stay on process nodes a very long time due to cost reasons. Even after the jump to 28nm, it still doesn't necessarily mean there isn't room for x2 controllers. You're welcome to bring conjecture if you want, but stating it as some sort of fact is silly.

EDIT: Noting that @cbn posted this information ahead of me. I made this post in my spare time over 3 hours, so I missed it. 🙂
Again, the post is from 2015. I can't find a single mention of the controller since. Either it's cancelled or it has just never been used in a single consumer-facing product.
To the first point, that's fine. Again, SATA Express does not require devices be made for the interface to live.
To the second, a single brand new platform launch does not a trend make. Since you said "facts", can you bring forward any evidence with any numbers that show SATA Express is releasing on fewer motherboards within stable platforms? Or is this more of an observation?
Some examples:
Now, again, those are a few examples. Do I think they're indicative of a trend? Yes, considering that for the last two generations, pretty much every high-end board has had SATAe connectors.

To your first point: actually, I'd argue that it does. Why? Because storage devices represent the vast majority of connected devices in PCs today. Bay devices, AICs outside of GPUs, and other expansions have largely gone extinct thanks to more and more features being integrated into motherboards. If you don't have SATAe storage devices, you're already limiting the use of that connector to a tiny handful of people.
To your second: It doesn't make a trend, but see above. SATAe is slowly but surely disappearing from motherboards. Some manufacturers (Asus, Asrock) are doing away with it faster than others (Gigabyte seems to love SATAe), but it's happening.
It adds board complexity? How?
By having to route PCIe lanes out to the far edges of the board, and requiring these to be right next to SATA ports.
AM4 leaves it entirely up to the OEM. They could choose not to implement it at all, or they could make additional SATA ports. Or they could even make the block an x4 PCIe slot, or hell, make it an M.2 slot. The idea behind AM4's implementation is that it gives the OEM total freedom over the final implementation without the complexity of having to add an additional controller to the motherboard. How can M.2 help with that? How can U.2 help with that?
Have I argued for adding controllers to motherboards? Do either m.2 or u.2 require those? No. I'm arguing for u.2 (or something similar) with lane splitting support - a standard PCIe feature that's sadly mostly disabled in consumer chipsets. Every single server chipset in the world supports it. With lane splitting, the chipset handles everything you need.
Why do you think that space-constricted M.2 slots do not add to board complexity, when you have to route a slot and make sure no components interfere with the M.2 card that ends up mounted on the motherboard? Seriously, that doesn't make any sense. As for the connector, compared to what? M.2 only has a kludge of a connector. U.2 carries 2 channels as well but uses a much more robust cable, SFF-8639. The cables are individually shielded! Have you ever bought Mini HD SAS cables? Are you aware of how much those things cost right now?
Please, pay attention: the fact that I'm arguing that SATAe adds board complexity doesn't in any way mean I'm denying that m.2 does the same. Of course it does! Saying anything else would be completely bonkers. The difference is that m.2 is a standard that actually has a use. As such, it's a requirement on a modern motherboard. SATAe really isn't. Which, again, leads to the board complexity argument: when you have to have m.2, adding SATAe adds more complexity, which increases costs without any tangible gain. It's that simple.

And yeah, u.2 cables are crazy expensive, as are most highly shielded enterprise cables. This would of course have to change for consumer adoption. The thing is, SATAe cables wouldn't really have been any cheaper. At all. After all, they're still sending rather sensitive PCIe signals over a wire, and are thus equally susceptible to interference and the like.

I would love that too. Lots of uses. But we don't have it. The rest is simply an opinion which I can't agree with. U.2 takes less board space as a connector, but gives up flexibility. It's entirely personal opinion on which you would rather have. I agree the SATA Express cable is therefore much larger, but that's a trade-off. Do you think people will want to spend $40 on what's essentially a SAS3 cable? We can't even get people to spend more than $70 on their power supplies a great deal of the time.
No. Having separate connectors does not give up flexibility. Say you have two identical boards, one with 2x SATAe, and one with 4 SATA ports and an x2 u.2 port. Which of those allows you to connect the most devices at the same time? The second, given enough lanes to allow use of all ports at once. And if you don't have enough lanes, then your point is moot. In addition, u.2 allows manufacturers to route SATA and PCIe lanes separately, requiring less protection against interference and thus less complex boards.
U.2 lane splitting already exists, because it's primarily a commercial product designed for enterprise use. Dual-port U.2 drives like the Intel DC D3600 have been on the market for a year now. But they go many-to-one, rather than the many-to-many you're wanting. That's because U.2 either sends x4 PCIe to one controller, or two x2 PCIe links in dual-controller mode. Unlike SATA Express, all of the lanes are sent down each U.2 connector to each device. How would you do what you want? Would you go back to the days of master/slave drives where you manually set each drive, or use a separate cable that only had half the lanes in each U.2 connector? That standard doesn't exist, by the way. You'd have to create a whole new SFF cable standard. And even when you did achieve that, you'd end up with two x2 PCI-e connectors. That seems like a lot of work for something we already have, though. It's called SATA Express 😉
So ... you're arguing that because enterprise devices use a feature (lane splitting) in one way, it could never be used in another. Sure, that makes sense. If the chipset is capable of communicating with separate controllers separately, it doesn't matter if these are on a single PCB or in different buildings (as long as the signal is intact and latency is acceptable). As long as the chipset supports lane splitting on the 4 PCIe lanes routed to the u.2 port, it shouldn't care whether you connect a single x4 device, four x1 devices, 2 x2 devices, or 2 x1 + 1 x2 on the other end.
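
To illustrate what I mean, here's a trivial enumeration of the ways an x4 link could be split (purely hypothetical; real chipsets expose bifurcation through firmware, not Python):

```python
from itertools import combinations_with_replacement

# Hypothetical sketch: every way a bifurcation-capable chipset could split
# an x4 link into x1/x2/x4 endpoints.
LINK_WIDTHS = (1, 2, 4)

def valid_splits(total_lanes=4):
    """Enumerate the partitions of `total_lanes` into x1/x2/x4 links."""
    splits = set()
    for n in range(1, total_lanes + 1):
        for combo in combinations_with_replacement(LINK_WIDTHS, n):
            if sum(combo) == total_lanes:
                splits.add(tuple(sorted(combo, reverse=True)))
    return sorted(splits, reverse=True)

for split in valid_splits():
    print(" + ".join(f"x{w}" for w in split))
# x4, x2 + x2, x2 + x1 + x1, x1 + x1 + x1 + x1 -- exactly the combinations
# listed above; the chipset shouldn't care what sits at the end of each.
```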

Could dual-device cables be a solution to this? Sure, but that would be clunky. I'd argue for a connector standard that splits along the middle (like EPS connectors and the like, just better suited for its task, obviously). As such, something like 2 SATA cables alone would work. Just without the extra doohickey, and stacked vertically next to each other rather than the ridiculous "How wide can we make this connector?" approach of SATAe. With either one capable of transmitting PCIe x2. And not requiring those horrible breakout power cables. Okay? U.2 is far from perfect. But it's far better suited for modern computing than SATAe. That cable is straight out of the "practicality is a word I've never heard of" rulebook of '90s computing.


There isn't a doubt in my mind that PCIe over cables is a growing necessity in modern PCs, and it would greatly help a whole host of add-ons. The PCIe card standard is in large part outdated in design, and we need more flexible, smaller form factor solutions for added features. SATAe, though, is not the solution to this. It's incompatible with small devices due to its huge size, and it's wildly impractical - something that has seriously limited the adoption of standards before.
 
thecoolnessrune said:
Controllers stay on process nodes a very long time due to cost reasons. Even after the jump to 28nm, it still doesn't necessarily mean there isn't room for x2 controllers. You're welcome to bring conjecture if you want, but stating it as some sort of fact is silly.

EDIT: Noting that @cbn posted this information ahead of me. I made this post in my spare time over 3 hours, so I missed it. 🙂

Again, the post is from 2015. I can't find a single mention of the controller since. Either it's cancelled or it has just never been used in a single consumer-facing product.

That company (JMicron) changed its name to Maxiotek.

But like I mentioned in this post, there have been other PCIe 3.0 x2 controllers announced starting in mid-2016 (Marvell 88NV1160, Phison E8/E8T and Samsung Photon).
 
Second, SATAe has been plentifully available on motherboards since Haswell. Yet devices just don't show up. How do you explain that?

Economical controllers haven't existed till very recently.

And for an SSHD, is a PCIe 3.0 x4 controller (even if it were affordable) a good fit for the PCB?

Here is what a 3.5" SSHD PCB looks like with a small DRAM-less SATA 6 Gbps SSD controller (JMF608):

[Image: WD Blue SSHD 4TB PCB (StorageReview)]
 
That company (JMicron) changed its name to Maxiotek.

Not really. JMicron spun off its SSD business as a separate P&L center named Maxiotek. JMicron still exists in the SATA world. It was not a name change.
 
Economical controllers haven't existed till very recently.

And for an SSHD, is a PCIe 3.0 x4 controller (even if it were affordable) a good fit for the PCB?

Here is what a 3.5" SSHD PCB looks like with a small DRAM-less SATA 6 Gbps SSD controller (JMF608):

[Image: WD Blue SSHD 4TB PCB (StorageReview)]
Yep, that's a packed board. So adding in flash for the "dual drive" aspect would pretty much require a separate PCB or a wholesale redesign of the unit. OTOH, if you're going the SSHD route, with 64-128GB of flash, wouldn't it be a huge challenge to get enough flash dies on there to actually exceed SATA speeds? Especially if the goal is to utilize modern, high-density flash, where a single die is 32GB or more. I'd argue that for use cases like that, SATA is still the way to go. Even if PCIe x2 would be cheaper than x4, SATA would be cheaper still, and with no tangible performance deficit unless you somehow squeeze in 4+ channels of flash.
 
Economical controllers haven't existed till very recently.

And for an SSHD, is a PCIe 3.0 x4 controller (even if it were affordable) a good fit for the PCB?

Here is what a 3.5" SSHD PCB looks like with a small DRAM-less SATA 6 Gbps SSD controller (JMF608):

[Image: WD Blue SSHD 4TB PCB (StorageReview)]

Yep, that's a packed board. So adding in flash for the "dual drive" aspect would pretty much require a separate PCB or a wholesale redesign of the unit.

There is 8GB of MLC NAND somewhere on that PCB....not sure which package contains it though.

OTOH, if you're going the SSHD route, with 64-128GB of flash, wouldn't it be a huge challenge to get enough flash dies on there to actually exceed SATA speeds? Especially if the goal is to utilize modern, high-density flash, where a single die is 32GB or more. I'd argue that for use cases like that, SATA is still the way to go. Even if PCIe x2 would be cheaper than x4, SATA would be cheaper still, and with no tangible performance deficit unless you somehow squeeze in 4+ channels of flash.

A single NAND package can hold up to eight dies (actually I've seen up to 16 dies mentioned in some cases).....so increasing NAND (to max out Sequential Read on PCIe 3.0 x2) wouldn't be a PCB real estate problem. Ideally, the small dies would be used, though, to get the most Sequential Read out of the least amount of NAND.
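
For a rough feel of how die count maps to interface ceilings, here's a sketch; the per-die read speed is just an assumed round number for illustration, not a datasheet figure:

```python
import math

# Assumed per-die sequential read, for illustration only; the real value
# depends on the NAND generation and interface.
PER_DIE_READ_MBPS = 250

LINK_CEILINGS_MBPS = {
    "SATA 6 Gbps": 550,      # practical ceiling
    "PCIe 3.0 x2": 1970,     # ~985 MB/s usable per lane
    "PCIe 3.0 x4": 3940,
}

for link, ceiling in LINK_CEILINGS_MBPS.items():
    dies = math.ceil(ceiling / PER_DIE_READ_MBPS)
    print(f"{link}: ~{dies} dies to saturate")
# The per-die speed drives everything: the faster the die, the fewer dies
# (and the less total NAND) you need, which is why the small dies matter.
```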
 
There is 8GB of MLC NAND somewhere on that PCB....not sure which package contains it though.



A single NAND package can hold up to eight dies (actually I've seen up to 16 dies mentioned in some cases).....so increasing NAND (to max out Sequential Read on PCIe 3.0 x2) wouldn't be a PCB real estate problem. Ideally, the small dies would be used, though, to get the most Sequential Read out of the least amount of NAND.
Aren't there limits to the I/O capabilities of a single package? My understanding is that this is the reason for the development of TSVs, to gain the ability to individually address more dies in a single package. After all, having 16 dies in a single package doesn't help performance if those dies are still only accessible through a single channel. And at least to my knowledge, TSVs have yet to be implemented in NAND.
 
Aren't there limits to the I/O capabilities of a single package? My understanding is that this is the reason for the development of TSVs, to gain the ability to individually address more dies in a single package. After all, having 16 dies in a single package doesn't help performance if those dies are still only accessible through a single channel. And at least to my knowledge, TSVs have yet to be implemented in NAND.

See the article below for an example of why having eight NAND dies in one package doesn't hurt performance even on low-capacity SSDs.

http://www.anandtech.com/show/8747/samsung-ssd-850-evo-review/2

There are three different PCB designs in the 850 EVO lineup. The 120GB and 250GB models (above) use a tiny PCB with room for two NAND packages (one on each side). Interestingly enough, both use octal-die packages, meaning that the 120GB 850 EVO only has a single 128GB (8*16GB) NAND package. Decoding the part number reveals that the packages are equipped with eight chip enables (CEs), so a single NAND package is viable since all eight dies can be accessed simultaneously.

The use of octal-die packages is actually true for all capacities. It's an interesting choice nevertheless, but I suspect Samsung's packaging technology is advanced and mature enough that it's more cost efficient to use high die count packages and small PCBs instead of larger PCBs with more and less dense NAND packages.
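
To put the chip-enable point in code form, here's a toy model of why an eight-die package can still scale, as long as each die has its own CE (all numbers are illustrative):

```python
# Toy model: per-package read bandwidth scales with the number of dies the
# controller can enable independently (CEs), until the channel saturates.
PER_DIE_MBPS = 40        # illustrative per-die streaming rate
CHANNEL_MBPS = 400       # illustrative flash-channel ceiling

def package_read_mbps(dies, chip_enables):
    active = min(dies, chip_enables)    # only CE-addressable dies count
    return min(active * PER_DIE_MBPS, CHANNEL_MBPS)

for ce in (1, 4, 8):
    print(f"8-die package, {ce} CE(s): {package_read_mbps(8, ce)} MB/s")
# 1 CE leaves 7 dies idle; 8 CEs let all dies interleave on the channel.
```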

With that mentioned, here is an older article from January 2014 that did mention some performance loss (for traditional packaging methods) after exceeding four dies per package:

http://www.anandtech.com/show/7594/samsung-ssd-840-evo-msata-120gb-250gb-500gb-1tb-review/2

So far the limit has been eight dies and with traditional packaging methods there is already some performance loss after exceeding four dies per package. That is due to the limits of the interconnects that connect the dies to the PCB and as you add more dies the signal integrity degrades and latency goes up exponentially.

However, the same article also mentions Samsung being able to use 16 dies per package with no significant performance loss.

In closing, the author offers the following perspective:

I am thinking this is not strictly hardware related but software too. In the end, the problem is signal integrity and latency, both which can be overcome with high quality engineering. The two are actually related: Poor signal integrity means more errors, which in turn increases latency because it's up to the ECC engine to fix the error. The more errors there are, the longer it obviously takes. With an effective combination of DSP and ECC (and a bunch of other acronyms), it's possible to stack more dies without sacrificing performance.
 
P.S. Back in 2014, Western Digital did demonstrate a PCIe hard drive. It was a dual drive though, not an SSHD:

http://www.storagereview.com/wd_demonstrates_first_pcie_hard_drives

[Image: WD PCIe hard drive prototype (StorageReview)]

Looking at the AnandTech announcement of the Western Digital SATA Express prototype, the author does mention WD was working on making the drive a hybrid rather than a dual drive.

Similar to the Black2 we reviewed last year, the prototype shows off as two separate volumes, although Western Digital is also working on a caching software to make the solution more user friendly.

And check out the description in the Gigabyte sign below:

[Image: Gigabyte demo sign describing the WD prototype (AnandTech)]


It is hard to read, but here is what it says:

"Included in our demonstration is work we have done with WD to make the combination of a hard disk drive and flash subsystem look like a single volume to the end user"
 
Too bad it's all been canceled...

Here is what the AnandTech author wrote about the SATA Express prototype (which has a SATA 6 Gbps controller for the solid-state part):

To be completely honest, the product as it stands today doesn't make much sense because it's internally SATA 6Gbps, but uses PCIe for host connectivity. From a performance perspective the only advantage of PCIe is that the SSD and HDD can be accessed at the same time at full speed, but ultimately I think Western Digital has to go with a native PCIe SSD controller to be competitive. Western Digital told me that they are looking into PCIe controllers but since there aren't any available at this point, the prototype is stuck with SATA 6Gbps controllers.

So based on that (with no PCIe x 2 controllers available) I agree it didn't make much sense to release the drive.

Why use SATA Express for an SSHD using SATA 6 Gbps for the solid state part (with all the bulk associated with SATA Express) when a single SATA 6 Gbps connector could accomplish the same thing?

And even if used as a dual drive, how important (or how likely) would it be for a person to use both drives (SSD and HDD) at the same time? And even if both drives in the dual drive are used at the same time, how much restriction would a SATA 6 Gbps connector impose? Probably not much.
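
Here's a quick back-of-the-envelope check of that "probably not much" (the throughputs are typical assumed values, not measurements of the WD drive):

```python
# Assumed typical sustained throughputs, for illustration only.
HDD_MBPS = 150          # sustained HDD transfer
SSD_MBPS = 540          # SATA SSD sequential read
SATA6_CEILING = 550     # practical SATA 6 Gbps ceiling

demand = HDD_MBPS + SSD_MBPS
shortfall = demand - SATA6_CEILING
print(f"combined demand: {demand} MB/s over a {SATA6_CEILING} MB/s link")
print(f"worst-case shortfall: {shortfall} MB/s ({shortfall / demand:.0%})")
# ~20% shortfall, and only while both halves stream flat out at the same
# time -- a rare access pattern for a cache-plus-disk pairing.
```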

P.S. Western Digital released the Black2 dual drive in the past. That one used a single SATA 6 Gbps connector.
 
Here is what the AnandTech author wrote about the prototype (which has a SATA 6 Gbps controller for the solid-state part):



So based on that (with no PCIe x 2 controllers available) I agree it didn't make much sense to release the drive.

Why use SATA Express for an SSHD (all the bulk associated with SATA Express) when a single SATA 6 Gbps connector could accomplish most of the functionality of the prototype?

P.S. When Western Digital released the Black2 dual drive in the past, it didn't need SATA Express....it worked with a single SATA 6 Gbps connector.
Yeah, I liked the Black2, too bad the implementation was pretty bad (showing up as two separate drives, no way to use the SSD for caching or anything similar without 3rd party apps). Still, I don't see it as likely that a restarted development of a drive like the one you refer to will use the SATAe connector - u.2 (or, well, the drive-side version of that which I can't remember the name of - the one on the Intel 750 2.5") seems far more likely, as it's already an accepted, in-use industry standard (even for hot-swap drive racks and the like).

On the other hand, that shouldn't matter - PCIe is PCIe, after all, regardless of how it hooks up to the motherboard. An x4 port shouldn't care if the connected device has two lanes, and a 2-lane port shouldn't care if the connected device has four lanes. Making whatever adapters you want/need should be as easy as putting the correct connector on each end of the cable. As such, I believe which connector wins will come down to size, ease of use, ease of implementation and cable costs. I'm definitely not in love with u.2, but I believe it trounces SATAe in all but the latter (and we don't really know the latter, since no SATAe cables are for sale). Hopefully, someone will come up with a smaller, more flexible solution with cables that don't cost an arm and a leg - although that might be very difficult seeing how many leads an x4 PCIe link would need and how susceptible to interference it is.

In my head, the ideal connector would either carry four lanes and be small enough for it not to matter if it's not very flexible, or two lanes in a small connector with the option to combine two connectors next to each other (for a total footprint similar to u.2) with an extra hefty cable (or two, if you love spaghetti) for more performance. That's why I truly believe SATAe is dead - it only supports PCIe x2, it's yuge, and comes with a mess of cable spaghetti all on its own.
 
That's why I truly believe SATAe is dead - it only supports PCIe x2, it's yuge, and comes with a mess of cable spaghetti all on its own.

For an SSHD (which I think is the ideal usage for SATA Express) I don't think PCIe 3.0 x 2 is a disadvantage because that will definitely support 128GB of NAND without restricting Sequential Read.

128GB is a lot of NAND for cache. I don't see much point going beyond that.

P.S. The cable spaghetti on SATA Express can be reduced quite a bit if the cable is sleeved:

[Image: sleeved SATA Express cable]
 
For an SSHD (which I think is the ideal usage for SATA Express) I don't think PCIe 3.0 x 2 is a disadvantage because that will definitely support 128GB of NAND without restricting Sequential Read.

128GB is a lot of NAND for cache. I don't see much point going beyond that.

P.S. The cable spaghetti on SATA Express can be reduced quite a bit if the cable is sleeved:

[Image: sleeved SATA Express cable]
That actually doesn't look too bad. The only issue I have with it is the connectors. Still awful.

And I agree that PCIe x2 would suffice for SSHDs (although I'd argue that, until we reach the levels of cache you're talking about, anything beyond SATA is wasted). I'm just saying that having a huge connector that only supplies x2 and can't be teamed up to provide x4 for higher-performance SSDs is impractical, if one of the goals is to counteract the motherboard-filling nature of m.2 drives.
 
P.S. The cable spaghetti on SATA Express can be reduced quite a bit if the cable is sleeved:

[Image: sleeved SATA Express cable]

If we could get a drive with the performance of an Intel 750 SSD, and the storage capacity of a WD 8/10TB "Helium" HDD, all in one package, then I would be willing to put up with one or two of those sleeved SATA-E cables in the PC.
"Ultimate Storage Device" indeed.

Heck, even a 4TB SSHD with 128GB NAND cache memory would be a good start. (I think WD already has such a product, without SATA-E? Maybe only with 32GB NAND? @cbn, do you know which product I'm talking about? It's a "Blue" SSHD, 4TB.)

https://www.newegg.com/Product/Product.aspx?Item=N82E16822236983

Only 8GB NAND flash. Boooo. Barely better than caching boot-time files. Probably all that it's good for.
 
Last year Seagate released a consumer 2.5" SSHD with 32GB NAND on it. (The model is ST1000LX001 and has the highest amount of NAND I have seen so far on a client drive)

Here are the reviews I found:

https://www.back2gaming.com/reviews...views/seagate-laptop-sshd-st1000lx001-review/

https://www.techporn.ph/1tb-seagate-sshd-st1000lx001-hybrid-drive-review/

http://www.reimarufiles.com/2016/05/25/seagate-st1000lx001-1tb-laptop-sshd-review/

http://www.pcworld.idg.com.au/review/seagate/st1000lx001/600632/

Interestingly, because CrystalDiskMark and AS SSD use temporary files (apparently not benefited by Seagate's Adaptive Memory technology), the Sequential Read is not much different from what we see on the Seagate SSHD with 8GB NAND. So instead of getting 300+ MB/s Sequential Read in CrystalDiskMark (as we would expect of 32GB of NAND), it only gets to around 120 MB/s.

However, as shown by the comparison below vs. the Samsung (Seagate) Momentus (a 5400 rpm drive that uses the same 500GB platter as the Seagate ST1000LX001 SSHD), the boot time on the SSHD is a good bit faster.



 
If we could get a drive with the performance of an Intel 750 SSD, and the storage capacity of a WD 8/10TB "Helium" HDD, all in one package, then I would be willing to put up with one or two of those sleeved SATA-E cables in the PC.
"Ultimate Storage Device" indeed.

Heck, even a 4TB SSHD with 128GB NAND cache memory would be a good start.

I agree.....and for a SATA 6 Gbps SSHD I'd be happy even with 64GB NAND (which should be enough to hold a game or two as well as max out the Sequential read on that interface).

As a side note, it would be interesting to see how two SATA 6 Gbps SSHDs (with 64GB NAND each) in RAID-0 compare to one SATA Express SSHD with 128GB NAND. Assuming the NAND and the PCIe 3.0 x2 controller are good enough, the SATA Express SSHD should be faster when using flash, but slower on the disk.
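
For reference, the raw interface ceilings in that thought experiment would look something like this (rough practical numbers, flash-hit case only):

```python
# Interface ceilings only; actual drives would land below these numbers.
SATA6_MBPS = 550           # practical per-port ceiling
PCIE3_LANE_MBPS = 985      # ~usable per PCIe 3.0 lane

raid0_two_sata = 2 * SATA6_MBPS       # two SATA SSHDs striped
satae_sshd = 2 * PCIE3_LANE_MBPS      # one PCIe 3.0 x2 SSHD

print(f"2 x SATA SSHD (RAID-0): ~{raid0_two_sata} MB/s flash ceiling")
print(f"1 x SATA Express SSHD:  ~{satae_sshd} MB/s flash ceiling")
# ~1100 vs ~1970 MB/s on flash hits; on the platters, the striped pair of
# spindles would win regardless of the interface.
```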
 
If we could get a drive with the performance of an Intel 750 SSD, and the storage capacity of a WD 8/10TB "Helium" HDD, all in one package, then I would be willing to put up with one or two of those sleeved SATA-E cables in the PC.
"Ultimate Storage Device" indeed.

Heck, even a 4TB SSHD with 128GB NAND cache memory would be a good start. (I think WD already has such a product, without SATA-E? Maybe only with 32GB NAND? @cbn, do you know which product I'm talking about? It's a "Blue" SSHD, 4TB.)

https://www.newegg.com/Product/Product.aspx?Item=N82E16822236983

Only 8GB NAND flash. Boooo. Barely better than caching boot-time files. Probably all that it's good for.
Pretty sure you're thinking of the WD Black2 Dual Drive. It was basically a 120GB SSD and a 1TB HDD in a 2.5" package.

http://www.anandtech.com/show/7682/the-wd-black2-review

Looks like it was discontinued; remaining stock is NOT cheap.
 
Here is something else that looks interesting, Apple's Fusion Drive:

http://www.anandtech.com/show/6679/a-month-with-apples-fusion-drive



http://www.anandtech.com/show/6679/a-month-with-apples-fusion-drive/2

Unlike traditional SSD caching architectures, Fusion Drive isn’t actually a cache. Instead, Fusion Drive will move data between the SSD and HDD (and vice versa) depending on access frequency and free space on the drives. The capacity of a single Fusion Drive is actually the sum of its parts. A 1TB Fusion Drive is actually 1TB + 128GB (or 3TB + 128GB for a 3TB FD).

http://www.anandtech.com/show/6679/a-month-with-apples-fusion-drive/7

For the first time since late 2008, I went back to using a machine where a hard drive was a part of my primary storage - and I didn’t hate it. Apple’s Fusion Drive is probably the best hybrid SSD/HDD solution I’ve ever used, and it didn’t take rocket science to get here. All it took was combining a good SSD controller (Samsung’s PM830), with a large amount of NAND (128GB) and some very aggressive/intelligent software (Apple’s Core Storage LVM). Fusion Drive may not be fundamentally new, but it’s certainly the right way to do hybrid storage if you’re going to do it.
 
Fusion Drive seems to be a good software solution. After all, it's just a standard 3.5" drive and an SSD. Wish Windows had something similar. With Storage Spaces already being quite well-developed, it should be possible, no? Guess it just comes down to the caching algorithms.
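
Something like this, conceptually - a minimal sketch of frequency-based tiering in the Fusion Drive style (all names and thresholds made up for illustration):

```python
from collections import Counter

class TieredVolume:
    """Toy Fusion-Drive-style tiering: each block lives on exactly one tier
    and migrates by access frequency. Names/thresholds are hypothetical."""

    def __init__(self, ssd_blocks):
        self.ssd_capacity = ssd_blocks
        self.ssd = set()        # blocks currently living on flash
        self.hits = Counter()   # access count per block

    def access(self, block):
        self.hits[block] += 1
        if block not in self.ssd:
            self._maybe_promote(block)

    def _maybe_promote(self, block):
        if len(self.ssd) < self.ssd_capacity:
            self.ssd.add(block)             # free flash: promote now
            return
        coldest = min(self.ssd, key=lambda b: self.hits[b])
        if self.hits[block] > self.hits[coldest]:
            self.ssd.discard(coldest)       # demote the coldest block...
            self.ssd.add(block)             # ...and move the hot one up

vol = TieredVolume(ssd_blocks=2)
for b in ["os", "os", "game", "game", "game", "movie"]:
    vol.access(b)
print(sorted(vol.ssd))   # ['game', 'os'] -- hot blocks end up on flash
```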
 
Fusion Drive seems to be a good software solution. After all, it's just a standard 3.5" drive and an SSD. Wish Windows had something similar. With Storage Spaces already being quite well-developed, it should be possible, no? Guess it just comes down to the caching algorithms.

Yes, I wish Windows had something similar as well.

Then perhaps this would even encourage the production of dual drives? Dual drives with 3D XPoint or higher amounts of NAND? Maybe even U.2 dual drives that save PCB area by integrating the PCIe 3.0 x4 SSD controller with the hard disk controller?
 
Yes, I wish Windows had something similar as well.

Then perhaps this would even encourage the production of dual drives? Dual drives with 3D XPoint or higher amounts of NAND? Maybe even U.2 dual drives that save PCB area by integrating the PCIe 3.0 x4 SSD controller with the hard disk controller?
It might. It would definitely allow for more flexibility. Heck, it could even make the (aforementioned) WD Black2 into a viable product, instead of just showing up as two separate drives, each with middling performance.
 