
Allocation Unit Size

  • Thread starter: C1

C1

Platinum Member
What is the recommended allocation unit size for setting up HDDs of 1TB and 2TB?

A large-drive evaluation on Tom's Hardware warns that the size used can make a big difference in performance, but doesn't give a value or even a recommendation. Microsoft recommends 64KB for mail servers.

In my case, yes, it's a mixed bag. There will be lots of large media files, but also lots of things like photos. 64KB looks good for video (or media files in general), but 16KB looks about right for the run-of-the-mill stuff otherwise.

Has anyone done an analysis for NTFS on large HDDs, and do you have a recommendation?

Why I'm concerned about this is that I expect a lot of data shuffling around (e.g., resizing) on these drives.
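One way to put a number on the mixed-bag trade-off is to tally your actual file sizes and estimate the slack (the wasted tail of each file's last cluster) for a few candidate allocation unit sizes. A minimal sketch in Python; the directory path at the bottom is a hypothetical placeholder, not anything from this thread:

```python
import os

def estimate_slack(root, cluster_sizes=(4096, 16384, 65536)):
    """Walk a directory tree and tally slack -- the unused tail of each
    file's last cluster -- for several candidate allocation unit sizes."""
    totals = {c: 0 for c in cluster_sizes}
    count = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # skip files we can't stat
            count += 1
            for c in cluster_sizes:
                totals[c] += (-size) % c  # bytes wasted in the last cluster
    for c in sorted(cluster_sizes):
        print(f"{c // 1024}K clusters: ~{totals[c] / 2**20:.1f} MiB slack "
              f"across {count} files")
    return totals

estimate_slack(r"D:\media")  # hypothetical path -- point it at your own data
```

If the 64K slack total comes out tiny relative to the drive, the larger AU costs you almost nothing; if it balloons (lots of small photos), 16K or the 4K default may be the safer pick.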
 
For normal usage, with all of the caching and readahead that modern OSes do, I doubt you'll see a difference.

As for the resizing, that will most likely be limited by drive speed. And if I were you, I'd try to come up with a solution that minimizes or completely eliminates any resizing you think might happen. I use Linux LVM on my home machine to make that sort of thing simpler and I still almost never actually do it.
 
Thanks for the info. Resizing is a PIA and to be avoided. The issue is that I have an application that I use a lot which can't save to any disk that has free space > 1TB. A possibility is to undersize the drive initially, then resize once it fills sufficiently.

You're right, though. It is something you probably don't want to do more than once (if at all). I think a one-time resize would do it. There doesn't seem to be much downside to using the NTFS default (4KB) on a large data drive, but I do see that the drive registers noticeably quicker (within Windows) after initial HDD power-on when using larger AUs.
 
You might be better off creating a separate partition just for saving from that app or even a dedicated drive if you have one lying around. And bitching at the app's support/developers for their stupidity.
 
All my data drives get formatted with 64K instead of the 4096-byte default, whether server or personal PC. I've done extensive tests using SQLIO for sequential and random read/write with 8K up to 256K data sets, and there may have been one or two tests where 64K wasn't better than 4096 bytes; for the most part, there was a substantial gain from formatting with 64K.

Looking at our servers' general file sizes, in a worst-case scenario on a 2TB array we would lose at most 1 gig of space from formatting with 64K. Cache isn't all it's cracked up to be when you're looking at home-PC hardware. Make sure your stuff is partition-aligned as well.
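The worst-case figure follows from simple arithmetic: each file can waste at most just under one full cluster, so the bound is roughly file count × cluster size. A quick sketch of that bound, plus a basic alignment check; the file count and offsets are illustrative, not measurements from anyone's server:

```python
CLUSTER = 64 * 1024  # 64K allocation unit

def worst_case_slack_gib(file_count, cluster=CLUSTER):
    """Upper bound on slack: just under one full cluster wasted per file."""
    return file_count * (cluster - 1) / 2**30

def is_aligned(partition_offset_bytes, cluster=CLUSTER):
    """A partition start is cluster-aligned when its byte offset is an
    exact multiple of the cluster size."""
    return partition_offset_bytes % cluster == 0

# Around 16,000 files is where 64K slack can approach 1 GiB
print(f"{worst_case_slack_gib(16_000):.2f} GiB worst-case slack")
print(is_aligned(1024 * 1024))  # modern 1 MiB partition offset: aligned
print(is_aligned(63 * 512))     # legacy 63-sector offset: misaligned
```

So a 2TB array holding mostly large files stays well under that 1-gig ceiling, while an old 63-sector partition offset would defeat the 64K formatting no matter what you pick.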
 