
WD Advanced Format drives: Prepping for Win 7 or WHS 2011

The AF drives are supposed to use a 4-Kbyte sector size, so instead of selecting "default" or "512" -- I picked "4096."

Assuming this is an NTFS term for cluster or block size, then default is the same as 4096, or 4K, and that's what you want. Having already researched the performance impact of different cluster sizes: the time you'd save in the one obscure case where a different size is slightly better would be vastly exceeded by the time spent researching it for your use case. NTFS is optimized for 4K.

I hate the stupid "forced 512b emulation" thing they have going on. So many problems would be solved by using native 4K sectors.

It's a huge change though, they really had no choice. e.g. when BIOS asks for LBA 0 and has 4096 bytes returned instead of 512 bytes, it's going to puke. How do you even put an MBR on a 4096 physical sector? It seems out of spec. So if it's UEFI only, you still have the small problem of bootloaders and kernels that ask for 8 physical sectors for every 1 cluster/block. Instead of getting 4KB as expected, the drive returns 32KB. Instant exploding bootloader/kernel.
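To illustrate the mismatch, here's a toy sketch (illustrative numbers only, not real firmware or bootloader code) of how much data comes back when a loader written for 512-byte sectors asks a drive for 8 sectors:

```python
# Sketch: a loader asks for N "sectors", but each sector in the reply is
# the drive's logical sector size, not the 512 bytes the loader assumed.
def bytes_returned(sector_count, logical_sector_size):
    """Total bytes the drive returns for a sector_count request."""
    return sector_count * logical_sector_size

# Loader expects 8 * 512 = 4096 bytes (one 4K cluster).
assert bytes_returned(8, 512) == 4096    # 512e drive: exactly as expected
assert bytes_returned(8, 4096) == 32768  # 4Kn drive: 8x more than expected
```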
 
It's a huge change though, they really had no choice

Note that I said "forced 512e".
I understand that this is a major change and that it's not practical NOT to have 512e... BUT, there is absolutely no reason the 512e drive cannot disable the emulation and let us get native access via 4Kn if we tell it to do so (via a jumper or a tool that communicates with the firmware).
 
How do you even put an MBR on a 4096 physical sector?
You use something else.

So if it's UEFI only
Only because nobody still develops legacy BIOS. There is no reason legacy BIOS couldn't be modified to support it... it's just not economical to modify an obsolete firmware stack to support new tech.
And it only matters for boot disks, whereas the primary attraction of AF is bulk storage (not boot drives).

you still have the small problem of bootloaders and kernels that ask for 8 physical sectors for every 1 cluster/block. Instead of getting 4KB as expected, the drive returns 32KB. Instant exploding bootloader/kernel.
Non boot drives. Also, they could be modified once there is actually hardware to develop support for... without any hardware to test on it is very hard to develop support.
 
I understand that this is a major change and that is not practical to NOT have 512e... BUT, there is absolutely no reason the 512e drive cannot disable the emulation and let us get native access via 4Kn if we tell it to do so (via a jumper or a tool that communicates with firmware).

Jumpers cost money. And I bet maybe 5% of the consumer market would be able to make use of disabling it, and maybe 2% of that market actually would. It's a fraction of their sales targets. Zero point. More people lick and sniff their new hard drive while doing a square dance under a full moon.

You use something else.

That's GPT, which for Windows means you have to have UEFI hardware. And on top of that, the Windows 7 bootloader or kernel (or both) lack support for 4K physical sectors. It's a non-starter. I don't even know that OS X has 4Kn capability. Newer Linux kernels do. I'm a bit sketchy offhand on whether GRUB2 supports them; I think it does. But even if not, the Linux EFI stub bootloader would. So in any case you're talking a very small market.

Only because nobody still develops legacy BIOS. There is no reason legacy BIOS couldn't be modified to support it... it's just not economical to modify an obsolete firmware stack to support new tech.
And it only matters for boot disks, whereas the primary attraction of AF is bulk storage (not boot drives).

AF disks also imply 2+TB. There are a smaller number of sub-2TB AF disks. And MBR is maxed out at 2 TiB. So there is a reason legacy BIOS can't be modified. Any modification is for a fraction of the intended target market, which by my guess is below 1%. That might jump to 5% on October 26th.
 
Oh, and because the market for this was so tiny when 512e AF disks came out, putting a jumper on them would have increased their support calls beyond all sensible reason. Just by putting that "hurt me" button on the drive, you'd have people who shouldn't use it use it anyway. Things wouldn't work, they'd call support, or return the drive. It would be a support nightmare.
 
Jumpers cost money. And I bet maybe 5% of the consumer market would be able to make use of disabling it, and maybe 2% of that market actually would. It's a fraction of their sales targets. Zero point. More people lick and sniff their new hard drive while doing a square dance under a full moon.

It costs very little money, gives substantial performance benefits, and can be done via software switch (which I already stated in the post you quote) rather than jumper to save on costs.

AF disks also imply 2+TB. There are a smaller number of sub-2TB AF disks. And MBR is maxed out at 2 TiB. So there is a reason legacy BIOS can't be modified. Any modification is for a fraction of the intended target market, which by my guess is below 1%. That might jump to 5% on October 26th.

32bit limits you to 2TiB with 512b sectors and 16TiB with 4K sectors.
This is actually one of the limitations of 512e that 4kn solves.
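The arithmetic, as a quick sketch (the 2**32 figure is the 32-bit sector address; this isn't code from any real tool):

```python
# A 32-bit LBA field can address 2**32 sectors; capacity = sectors * sector size.
def max_capacity_tib(sector_size):
    """Largest addressable capacity, in TiB, with a 32-bit sector address."""
    return (2 ** 32 * sector_size) / 2 ** 40

assert max_capacity_tib(512) == 2.0     # 512b sectors: 2 TiB ceiling
assert max_capacity_tib(4096) == 16.0   # 4K native sectors: 16 TiB ceiling
```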
 
It costs very little money, gives substantial performance benefits, and can be done via software switch (which I already stated in the post you quote) rather than jumper to save on costs.

It gives substantial performance benefits, how, exactly? (Excluding misalignment.)

32bit limits you to 2TiB with 512b sectors and 16TiB with 4K sectors.
This is actually one of the limitations of 512e that 4kn solves.

Oh yeah, good point. Duh. Well, BIOS is going away regardless; the most significant problem that can't be dealt with by BIOS is bootkits. UEFI firmware hadn't been all that interesting, except as an aside, until the Secure Boot implementation was finalized in the 2.3.1 spec.
 
But it will work better on Win8 and Server 2012, and it takes the absolute minimum of extra effort (since it's the removal of a feature rather than the addition of one).
Better? How? Do you have performance figures for 512e vs 4K native on those OSes? If so, please link them.

Or is this the same “better” as the theoretical advantage of GPT over MBR which doesn’t really equate to anything in the real world, outside of >2TB?

Actually GPT is worse than MBR in some respects because it’s not as easy to overprovision an SSD with it, but I digress.

It will also work on all variants of linux and unix for those of us running servers
All variants? So your currently installed distro version supports both booting off and accessing 4K native disks?

No, under no circumstance would 512b emulation on a 4k drive ever cause the drive to flat out not work on your OS.
I was talking about 4K native, which by extension makes the jumper a non-issue at this time since basically nobody can run 4K disks natively.
 
Better? How? Do you have performance figures for 512e vs 4K native on those OSes? If so, please link them.
Oh sure I would just pull links from my nether regions for something esoteric.
I HAVE seen tests comparing properly aligned vs. misaligned 4K emulation on a 512e AF drive; the aligned configuration performed better.
512e AF drives suffer from alignment issues just like SSDs do (although in some cases not to the same magnitude and for different reasons).

Or is this the same “better” as the theoretical advantage of GPT over MBR which doesn’t really equate to anything in the real world, outside of >2TB?
GPT adds redundant copy of critical data which improves data integrity.

Actually GPT is worse than MBR in some respects because it’s not as easy to overprovision an SSD with it, but I digress.
I could see it being a wear leveling issue since MFT has to be written twice to the disk, but not over provisioning since that is done in the controller and the OS does not "see" the extra space.

All variants? So your currently installed distro version supports both booting off and accessing 4K native disks?
First, I keep on saying that I am only talking about storage not boot drives.
And second I am obviously talking about latest/updated versions. An older linux distro is not going to have support for new features.
If support is lacking, it would be added very quickly once the hardware is released (historically those OS are the first get support).

FreeBSD already has support.

I was talking about 4K native, which by extension makes the jumper a non-issue at this time since basically nobody can run 4K disks natively.
4Kn is already supported by UFS, ZFS, and ext2.
 
Oh sure I would just pull links from my nether regions for something esoteric.
I HAVE seen tests that compared properly aligned 4K emulation on 512e AF drive, it showed performance being better.

Wait a minute. You said that there were "substantial performance benefits" in the context of a hypothetical jumper switch that would deactivate 512e and make a disk 4Kn. It is unremarkable that misaligned 512e AF drives suffer a performance penalty (some more than others, as the RMW can be optimized like it is in RAID). But so long as you're using a partitioning tool that isn't positively ancient to the point of being decrepit, you will get aligned partitions. In that case there is 1:1 parity between a file system cluster (or block) and a disk physical sector. You seemed to be implying that removing the 512e layer in between would cause substantial performance benefits. In reality this is mostly optimized away by the disk and the file system:
READ LBA 4000 8
READ LBA 500 1

That's the difference in command from the file system to read a single 4K cluster on a 512e disk vs a 4Kn disk. The file system does not ask for each sector by LBA; it asks for a start sector and then how many sequential sectors to read. The result appears to be the same.
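A rough model of that equivalence (hypothetical byte offset; simplified command tuple, not actual ATA command syntax):

```python
# Reading one 4K cluster: 8 x 512B sectors on a 512e disk vs 1 x 4096B
# sector on a 4Kn disk. Same start position, same payload either way.
def read_command(cluster_byte_offset, sector_size):
    """Return (start LBA, sector count, bytes transferred) for one 4K cluster."""
    lba = cluster_byte_offset // sector_size
    count = 4096 // sector_size           # sectors needed for one 4K cluster
    return (lba, count, count * sector_size)

e512 = read_command(2048000, 512)   # hypothetical cluster at byte 2,048,000
n4k = read_command(2048000, 4096)
assert e512 == (4000, 8, 4096)      # 512e: LBA 4000, 8 sectors
assert n4k == (500, 1, 4096)        # 4Kn: LBA 500, 1 sector
assert e512[2] == n4k[2]            # identical amount of data transferred
```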

4Kn is already supported by UFS, ZFS, and ext2.

4Kn has been supported by nearly every file system for 10 years now, as the majority of them have used 4K clusters for at least that long. The problem with 4Kn is at a lower level: firmware, bootloader, kernel. It doesn't matter if the file system is ready for 4Kn if any of those three puke when suddenly receiving 8x the data they were expecting for each request.
 
Actually GPT is worse than MBR in some respects because it’s not as easy to overprovision an SSD with it, but I digress.

I think that's more a hack than a feature of MBR. GPT is more complicated, but it comes with some nice features, like the aforementioned secondary GPT. Both copies have checksums, so it's possible to unambiguously determine which is correct if one is corrupt, and each partition has a UUID. There are in effect unlimited partition type GUIDs instead of the 2-byte codes for MBR. There are at least 128 partitions possible, without the strange non-standard (but mostly agreed upon) hacks MBR needs for extended partitions to get beyond 4. Etc.

In a server context it's common to apply RAID metadata to the bare disk, then format the array logical volume(s), and forego either MBR or GPT schemes.
 
I HAVE seen tests comparing properly aligned vs. misaligned 4K emulation on a 512e AF drive; the aligned configuration performed better.
512e AF drives suffer from alignment issues just like SSDs do (although in some cases not to the same magnitude and for different reasons).
We aren’t talking about misalignment, we’re talking about the emulation layer. At this time there’s no evidence to suggest a properly aligned 512e drive is significantly slower than the same drive if it were presenting 4K native to the host.

I could see it being a wear leveling issue since MFT has to be written twice to the disk, but not over provisioning since that is done in the controller and the OS does not "see" the extra space.
GPT always puts data at the end of the disk, regardless of the partition structure. That means you can’t get automatic overprovisioning just by short-stroking an SSD like you can with MBR.

There was a recent thread about Samsung drives that covered this very issue.
 
We aren’t talking about misalignment, we’re talking about the emulation layer. At this time there’s no evidence to suggest a properly aligned 512e drive is significantly slower than the same drive if it were presenting 4K native to the host.
We are talking about both.

GPT always puts data at the end of the disk, regardless of the partition structure. That means you can’t get automatic overprovisioning just by short-stroking an SSD like you can with MBR.
There is no short stroking on an SSD. There is leaving space unpartitioned so that you have extra spare space.
And it is wrong to say GPT prevents that. GPT in no way interferes with it (except for ever so slightly reducing the amount of such space you have)...
Heck, you can even get the same full benefit while the entire drive is fully partitioned by having TRIM enabled (with TRIM, all free space, even partitioned free space, is the same as over-provisioned space).

There was a recent thread about Samsung drives that covered this very issue.
Do you mean the thread about how Samsung drives' GC can read MBR data and NTFS data to perform their own TRIM without OS support?
Or the supposed issue with the second copy of GPT (the one at the end of the drive) becoming corrupt?
 
GPT always puts data at the end of the disk, regardless of the partition structure. That means you can’t get automatic overprovisioning just by short-stroking an SSD like you can with MBR.

Umm? The backup GPT header and table are ~34 sectors. It's positively tiny. And it doesn't really matter, since in normal operation the primary header and table can be read, and so long as the checksum works out, the backup doesn't need to be read. I don't see how the existence and location of the secondary GPT matters.

---edit

And usually the GPT is only taking up 2 sectors of the 34 reserved for it.
 
How about a death sentence: you must use Windows 8 if you want the full WHS advanced blah blah... hehehe. What a nightmare. Enjoy your new Windows 8 Phone, I mean Winblows 8 OS... hehehe, Metro rocks... The games, guys, sheeez... gl
 
So you're making a claim that the emulation layer slows things down substantially. Please explain this.

No, I am making the claim that misalignment slows down the drive substantially (a misaligned 4k write replaces a single write operation with 2 separate read-modify-write operations), and a separate claim that the emulation layer also slows down the drive by an unspecified amount. That unspecified amount can be high in certain operating environments where the drive is forced to perform read-modify-write cycles (http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks), but I don't have exact figures on it because it cannot be tested. However, simple physics assures us that adding such a layer adds some measure of slowdown (which could be very small).
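The misalignment half of that claim is easy to model. A sketch (simplified: assumes a 4K physical sector and ignores command queuing) counting how many physical sectors a write only partially covers, since each of those forces an internal read-modify-write:

```python
# Count physical sectors a write only partially covers; each partial
# sector forces the drive to read-modify-write instead of just write.
def rmw_cycles(offset, length=4096, phys=4096):
    """RMW cycles for a write of `length` bytes at byte `offset`."""
    rmw = 0
    if offset % phys != 0:
        rmw += 1                        # head sector partially overwritten
    end = offset + length
    if end % phys != 0 and end // phys != offset // phys:
        rmw += 1                        # tail sector partially overwritten
    return rmw

assert rmw_cycles(0) == 0        # aligned 4k write: one clean overwrite
assert rmw_cycles(512) == 2      # misaligned by one 512B sector: 2 RMWs
assert rmw_cycles(512, 1024) == 1  # small write inside one sector: 1 RMW
```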

Let's not forget block suballocation: https://en.wikipedia.org/wiki/Block_suballocation

There is also the further issue that while modern Windows defaults to 4k allocation units (good for AF) regardless of drive size and aligns them to 1MB (good for AF), you can still manually choose a sub-4k allocation unit size. And then there are non-Windows OSes... For example, ZFS' "variable block size" (combined with compression) can also result in sub-4k blocks.

Then you have the XP "alignment" fix jumper on WD drives, which would make all modern Windows OSes align the partitions incorrectly.

And finally, there is the whole 2TiB limitation thing. If 4Kn is used, it becomes a 16TiB limitation.

PS I just remembered another issue (it's been years since I first had this debate; I FORGOT most issues 😛... starting to remember now). Basically it's KISS. Adding a whole abstraction layer on top of the drive's internal 4K sector management means more complex firmware with more places to have a bug. Not quite as bad as the wear leveling and GC issues of SSDs, but still much more complex than a straightforward emulation-free drive.
 
and a separate claim that the emulation layer also slows down the drive by an unspecified amount. That unspecified amount can be high in certain operating environments where the drive is forced to perform read-modify-write cycles

That's not correct. The vast majority of file systems being used default to 4K cluster/blocks, i.e. eight 512 byte sectors. If data on one sector changes, the file system will read all 8 in that cluster, modify, and write out those 8 sectors. That's the same thing that happens on a 512e drive as on a conventional drive, except of course the disk firmware translates the 8 sector read and write into a single sector read and write. But the quantity of data read, modified, written is the same for a conventional disk, a 512e disk, and 4Kn disk, because the cluster size hasn't changed.

http://wiki.illumos.org/display/illumos/ZFS+and+Advanced+Format+disks but I don't have exact figures on it because it cannot be tested. However, simple physics assures us that adding such a layer would add some measure of slowdown (which could be very small)

It's basically zero.


Let's not forget block suballocation: https://en.wikipedia.org/wiki/Block_suballocation

There is also the further issue that while modern Windows defaults to 4k allocation units (good for AF) regardless of drive size and aligns them to 1MB (good for AF), you can still manually choose a sub-4k allocation unit size.

Which next to no one does, and which hasn't been worthwhile overall for ~10 years.

And then there are non-Windows OSes... For example, ZFS' "variable block size" (combined with compression) can also result in sub-4k blocks.

That's an argument for a conventional drive, because if ZFS doesn't sense that the disk is AF, or doesn't just default to a 4K minimum, it *will* incur a (small) RMW penalty within the disk when it wants to modify a block smaller than the physical sector. Obviously on a 4Kn disk this isn't even possible; the block size can't be smaller than the sector.

Then you have the XP "alignment" fix jumper on WD drives, which would make all modern Windows OSes align the partitions incorrectly.

This is out of scope. Everyone in this thread understands alignment issues with AF disks. The issue is challenging your assertion that manufacturers should have provided consumers the means to convert a 512e disk to a 4Kn disk in the field. There is simply no use case for this where the positives outweigh the negatives.

And finally, there is the whole 2TiB limitation thing. If 4Kn is used it becomes a 16TiB limitation.

For the 1% of BIOSes out there that can deal with this, plus it's unspecified where you'd even put a <66-byte MBR within a 4096-byte sector. This is defined for 512-byte sectors; as far as I know it's not for 4096-byte sectors.

You will sooner see bare disk formatting, with no partitioning.

PS I just remembered another issue (it's been years since I first had this debate; I FORGOT most issues 😛... starting to remember now). Basically it's KISS. Adding a whole abstraction layer on top of the drive's internal 4K sector management means more complex firmware with more places to have a bug. Not quite as bad as the wear leveling and GC issues of SSDs, but still much more complex than a straightforward emulation-free drive.

This layer is the simplest math on the planet. It's division and multiplication by 8. There isn't even an offset. If they couldn't do this without bugs, the world would have ended shortly after the first million AF disks shipped.
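That translation, sketched out (a simplification of what 512e firmware would do internally, not actual firmware code):

```python
# 512e translation: a 512-byte logical LBA maps to a 4K physical sector
# by dividing by 8, with the remainder giving the byte offset inside it.
def to_physical(lba512):
    """Map a 512e logical LBA to (physical 4K sector, byte offset within it)."""
    return (lba512 >> 3, (lba512 & 7) * 512)   # shifts: divide/multiply by 8

assert to_physical(0) == (0, 0)       # LBA 0 starts physical sector 0
assert to_physical(8) == (1, 0)       # every 8th LBA starts a new sector
assert to_physical(4000) == (500, 0)  # aligned cluster start, no offset
assert to_physical(4003) == (500, 1536)
```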
 
That's not correct. The vast majority of file systems being used default to 4K cluster/blocks, i.e. eight 512 byte sectors. If data on one sector changes, the file system will read all 8 in that cluster, modify, and write out those 8 sectors.
This is a ridiculous thing for the FS to do.

A file is read from the drive into RAM allocated to a program. When the program tries to write the modified data, the FS will take the data from the program and either send it as an (over)write command to all 8 physical sectors that make up the one block, or, more likely if it is a smart FS, send the write command to just the 1 modified sector. The former will initiate RMW if command queuing doesn't catch it; the latter will always initiate RMW. This is in an aligned partition (a misaligned one will always initiate two RMWs).
The only time a FS needs to perform a RMW cycle on its own is when it is performing block suballocation. That isn't to say there aren't extra RMWs going on, but those extra RMWs occur at (and are caused by) the program level, not the FS level, and they are unavoidable regardless of what your drive or FS is.

RMW = read-modify-write

This layer is the simplest math on the planet. It's division and multiplication by 8. There isn't even an offset. If they couldn't do this without bugs, the world would have ended shortly after the first million AF disks shipped.
It's not just math, it's actions and commands.
And there are all those situations that have to be addressed, like RMW cycles.
 
We are talking about both.
There's no "both"; there's unaligned, which nobody is debating, and then there's the supposed hit of the emulation layer, for which there's no evidence.

Again, if you have 4Kn vs aligned 512e benchmarks of the same drive, let’s see them. Otherwise any alleged performance hit is just speculation on your part.

There is no short stroking on an SSD. There is leaving space unpartitioned so that you have extra spare space.
Short-stroking AKA unpartitioned space. You know what I mean so there’s no need to argue semantics.

Heck you can even get the same full benefit while the entire drive is fully partitioned by having TRIM enabled (with TRIM all free space, even partitioned free space, is the same as over provisioned space)
Can you please provide evidence of that? Thanks.

Or the supposed issue with the second copy of GPT (the one at the end of the drive) becoming corrupt?
There's nothing supposed about it; it was being corrupted and Samsung confirmed the over-provisioning feature by using unpartitioned space doesn’t work with GPT.

The option wasn’t even enabled on GPT drives in Magician at the time, and probably still isn’t today.
 
Umm? The backup GPT header and table are ~34 sectors. It's positively tiny. And it doesn't really matter, since in normal operation the primary header and table can be read, and so long as the checksum works out, the backup doesn't need to be read. I don't see how the existence and location of the secondary GPT matters.
The secondary GPT was being corrupted because it was being overwritten for wear-leveling.

Samsung support basically confirmed overprovisioning through unpartitioned space wasn’t supported on GPT.
 
Can you please provide evidence of that? Thanks.
That's in the very definition of TRIM. TRIM informs the drive which sectors contain garbage data rather than actual data, so it no longer has to preserve said data. Exactly the same way it treats over-provisioned space.
Without TRIM, it is limited to only using the space hidden from the user.
Remember, it's never a specific area of the drive (thanks to wear leveling); it's all virtualized.
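A toy model of that idea (purely conceptual, nothing like a real FTL): to the controller, a logical block is reclaimable whether it sits outside the partitions (over-provisioned) or has been TRIMmed, so trimmed free space behaves like extra spare area.

```python
# Toy flash-translation-layer model: the controller's reclaimable pool
# is unpartitioned space plus whatever the OS has TRIMmed.
class ToyFTL:
    def __init__(self, total_blocks, partitioned_blocks):
        # blocks beyond the partitions are permanent spare space
        self.spare = set(range(partitioned_blocks, total_blocks))
        self.trimmed = set()

    def trim(self, block):
        self.trimmed.add(block)       # OS says: this data is garbage

    def write(self, block):
        self.trimmed.discard(block)   # block holds live data again

    def reclaimable(self):
        return len(self.spare | self.trimmed)

ftl = ToyFTL(total_blocks=100, partitioned_blocks=90)  # 10% unpartitioned
assert ftl.reclaimable() == 10
ftl.trim(5); ftl.trim(6)         # OS TRIMs two freed filesystem blocks
assert ftl.reclaimable() == 12   # trimmed space adds to the spare pool
ftl.write(5)
assert ftl.reclaimable() == 11
```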

There's nothing supposed about it; it was being corrupted, and Samsung confirmed the over-provisioning feature using unpartitioned space doesn't work with GPT.

The option wasn't even enabled on GPT drives in Magician at the time, and probably still isn't today.

I said supposed because some users claimed it didn't happen on their drives.
And I don't recall Samsung confirming over-provisioning or wear leveling was the cause.
IIRC it was not over-provisioning that caused it, but a "smart" GC algorithm that relies on the firmware recognizing MBR and certain FSes and then analyzing them for free space to trim (something only Samsung drives do AFAIK; there was a whole debate over whether they do this or not, which was resolved when someone found solid proof).

And this doesn't show that over-provisioning doesn't work; in fact, it works perfectly fine. If it didn't work, it would never have touched the second copy of the GPT data (since the data it contained would have been considered non-trash), thus never corrupting it.
Your claim was
That means you can't get automatic overprovisioning just by short-stroking an SSD like you can with MBR.
Which is simply false and not supported by the fact that a singular Samsung drive has a firmware bug causing it to corrupt the second copy of the GPT data. Because:
1. It's just the one Samsung drive, not all SSDs.
2. As I have said previously, with non-working over-provisioning the GPT data would never have been touched.
 
Or more likely if it is a smart FS it would send the write command to just the 1 modified sector.

That totally obviates the entire point of clusters/blocks, therefore that's not how it works. The allocation blocks for NTFS, HFS+, FAT, extX and XFS are fixed and by default are 4K. That's 99% of file systems. They will not write out one sector; they write out an entire cluster at a time. So RMW already happens at the file system level. In any case, it's 4K being written to the drive whether it's 512e or 4Kn. It's the same 4K being read from the drive whether it's 512e or 4Kn.

It's 4KB of data, no matter what.

It's not just math, it's actions and commands.

The commands are identical, I already told you this. There is no difference between asking a drive to read 8 512 byte sectors and 1 4096 byte sector. The command length is the same. The amount of data read or written is the same.



And there are all those situations that have to be addressed like RMW cycles.

There are no additional RMWs for an aligned 512e drive compared to a 4Kn drive when the file system uses 4K blocks - which the vast majority do.
 
This is a ridiculous thing for the FS to do.

It's a pragmatic one. Most CPUs (Intel, AMD and ARM) use 4 kB memory pages; and consequently most OSs perform all internal memory management in 4 kB blocks.

There are performance and simplicity enhancements to using 4 kB allocation blocks from the FS's/OS block cache perspective, as it allows most reads/writes to be directly mapped as VM page reads/flushes.
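A small sketch of that mapping (assumes the common 4 KiB page size; page sizes do vary by platform):

```python
# With 4 KiB VM pages, a page-aligned 4K filesystem block maps onto
# exactly one page in the block cache; unaligned I/O straddles two.
def pages_touched(offset, length, page=4096):
    """Number of VM pages a buffered read of `length` bytes at `offset` spans."""
    return (offset + length - 1) // page - offset // page + 1

assert pages_touched(0, 4096) == 1      # aligned block: one page, one flush
assert pages_touched(8192, 4096) == 1   # any aligned 4K block: still one page
assert pages_touched(512, 4096) == 2    # unaligned: two partial pages
```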
 