2TB drive allocation unit size

Dorkenstein

Diamond Member
Jul 23, 2004
3,554
0
0
What NTFS allocation unit size should I use for my 2TB Hitachi drive? I think 4096 is the default, but I was just reading an article saying that you can set it to a higher value if you only use the drive for bulk storage, and that it will open files faster and waste less space. All I'm using mine for is storage. Any thoughts? Thank you.
 

C1

Platinum Member
Feb 21, 2008
2,376
112
106
As I recall, FAT32 defaults to a 32KB cluster size for partitions of 32GB and larger. You could comfortably use a 32KB or even 64KB allocation unit size for large-file archiving, such as video files. [I have been using 32KB.]

There is quite a bit of discussion about speed versus storage efficiency, cluster size, HDD size, etc. for FAT, but I have not seen much about this for NTFS.
 

Zap

Elite Member
Oct 13, 1999
22,377
2
81
I thought a bigger allocation unit size made for more wasted space, not less?
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
The amount of wasted space can be estimated with this:

Wasted space = total number of files * (cluster size / 2)

The idea is that, on average, the last cluster of each file is only half full, so every file wastes about half a cluster. So if you have 1000 files totaling 10GB, your average file size is 10MB, and your expected wasted space with a 64K cluster size would be:

1000 files * (64K / 2 = 32K) = 32,000 KiB, or about 31 MiB. Not that much. But with a lot of smaller files the overhead grows considerably, so only use large clusters on filesystems dedicated to large files.
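
If you want to play with the numbers, here is a quick Python sketch of that estimate (the file count, total size, and cluster sizes are just example values):

# Expected slack space: on average the last cluster of each file
# is only half full, so each file wastes about half a cluster.
num_files = 1000             # example value
total_size = 10 * 1024**3    # 10 GiB of data, example value

for cluster in (4096, 32 * 1024, 64 * 1024):
    waste = num_files * cluster // 2
    print(f"{cluster // 1024:>2}K clusters: ~{waste / 1024**2:.1f} MiB wasted "
          f"({100 * waste / total_size:.3f}% of the data)")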
 

Binky

Diamond Member
Oct 9, 1999
4,046
4
81
If this is your porn and/or MP3 storage drive, bigger is better. If this drive stores word docs and other small files, leave it at default.
 

Dorkenstein

Diamond Member
Jul 23, 2004
3,554
0
0
Hmm, it's a mixture of small files and large files. But definitely lots of large files. I might go for 32k.
 

sub.mesa

Senior member
Feb 16, 2010
611
0
0
Dorkenstein: simply count the files! Select them all, choose Properties, and see how many files there are in total, then look at the combined size. From that you can calculate the average file size.

If it's above 1MB, then 32K would be good.
If it's above 2MB, then 64K could be used.

Average file size in MB = total size in MB / number of files
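
If you would rather not click through Properties, here is a quick Python sketch that does the same count (the folder path is a placeholder; point it at your storage drive):

import os

root = r"D:\storage"  # placeholder: change to your drive or folder

total_bytes = 0
num_files = 0
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        try:
            total_bytes += os.path.getsize(os.path.join(dirpath, name))
            num_files += 1
        except OSError:
            pass  # skip files we cannot stat (permissions, broken links)

if num_files:
    print(f"{num_files} files, {total_bytes / 1024**3:.2f} GiB total, "
          f"average {total_bytes / num_files / 1024**2:.2f} MiB per file")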
 

kalrith

Diamond Member
Aug 22, 2005
6,628
7
81
I went with 64k for my 2TB drive. 70-80% of the utilized space will be media files larger than 1GB, and usually much larger than that.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
sub.mesa said:
Dorkenstein: simply count the files! Select them all, choose Properties, and see how many files there are in total, then look at the combined size. From that you can calculate the average file size.

If it's above 1MB, then 32K would be good.
If it's above 2MB, then 64K could be used.

Average file size in MB = total size in MB / number of files
Well, I'd say that's a rather bad metric if he has lots of mixed file sizes.
A single 4GB ISO together with a thousand 1KB files gives an average file size of around 4MB, so the rule would tell you to use 64K clusters even though nearly all of the files are tiny. (I'm not striving for realism with that example, though looking at my own files I actually have few medium-sized ones but lots of large and small ones; then again, I also archive stuff I don't need regularly.)

Personally, if I wanted to optimize something like that, I'd write a minimal script that sorts the files into a few size categories and look at that; something like the sketch below.
Obviously that's only needed if you don't already know that 80% of the stuff will be large movies.
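
For example, a quick Python sketch along those lines (the path and the size buckets are placeholders I made up):

import os
from collections import Counter

root = r"D:\storage"  # placeholder path

# Size categories as (label, upper bound in bytes) -- arbitrary example buckets
buckets = [
    ("< 64 KiB", 64 * 1024),
    ("< 1 MiB", 1024**2),
    ("< 100 MiB", 100 * 1024**2),
    (">= 100 MiB", float("inf")),
]

counts = Counter()
for dirpath, _, filenames in os.walk(root):
    for name in filenames:
        try:
            size = os.path.getsize(os.path.join(dirpath, name))
        except OSError:
            continue
        for label, limit in buckets:
            if size < limit:
                counts[label] += 1
                break

for label, _ in buckets:
    print(f"{label:>10}: {counts[label]} files")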
 

Timmah!

Golden Member
Jul 24, 2010
1,560
912
136
What about Advanced Format disks? I have just bought a WD Green 2TB (WD20EARS), which is Advanced Format, but I am not sure which allocation unit size to choose. Do I need to choose 4096 because of the Advanced Format? Or should I go with the default? I have no idea what the default size is, though... I suppose 512?
 

joetekubi

Member
Nov 6, 2009
176
0
71
Timmah! said:
What about Advanced Format disks? I have just bought a WD Green 2TB (WD20EARS), which is Advanced Format, but I am not sure which allocation unit size to choose. Do I need to choose 4096 because of the Advanced Format? Or should I go with the default? I have no idea what the default size is, though... I suppose 512?

This is really pretty complicated, because two separate things are involved: the allocation unit size is a filesystem setting, while Advanced Format is about the drive's 4096-byte physical sectors and partition alignment.
AFAIK, if you are using Win7 you don't need to worry; support for 4096-byte sectors is built in and partitions are created aligned. If you are using an older version of Windows, Western Digital has a utility to align your partitions.

If you are using Mac or Linux, it's a bit more complicated.
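
If you want to check whether an existing partition is aligned, here is a quick Python sketch (assuming Windows and the stock wmic tool; an Advanced Format partition is aligned when its starting offset is a multiple of 4096 bytes):

import subprocess

# List partition starting offsets via wmic (Win32_DiskPartition).
out = subprocess.check_output(
    ["wmic", "partition", "get", "Name,StartingOffset"], text=True)

for line in out.splitlines()[1:]:
    parts = line.strip().rsplit(None, 1)
    if len(parts) == 2 and parts[1].isdigit():
        name, offset = parts[0], int(parts[1])
        status = "aligned" if offset % 4096 == 0 else "NOT aligned"
        print(f"{name}: starting offset {offset} bytes -> {status}")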

-joe
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Changing the filesystem cluster size will have very little effect, if any. It's one of those things you don't need to worry about these days.
 

FishAk

Senior member
Jun 13, 2010
987
0
0
You can download and use GetFoldersize for free. With it you can easily and quickly see the amount of space and the number of files in each directory branch. It can be run from the right-click context menu.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Dorkenstein said:
What NTFS allocation unit size should I use for my 2TB Hitachi drive? I think 4096 is the default, but I was just reading an article saying that you can set it to a higher value if you only use the drive for bulk storage, and that it will open files faster and waste less space. All I'm using mine for is storage. Any thoughts? Thank you.
B-trees have solved the problem of balancing performance against space to the point that any difference between cluster sizes is mostly negligible for general usage. Modern filesystems already use B-trees for their metadata.

Unless your disk is for a transactional database, go with the default and forget about it. If it is for a transactional database, crank it up and then leave it alone.

Of course, if by "storage" you actually mean videos you've ripped or downloaded, you might as well crank it up, since all the files you care about will be quite large (>100MB?).

Theoretically it's "faster", yes, but I'm not sure the difference is noticeable at all, considering you'd just be opening those files one at a time anyway, and you'd hardly notice a speed difference of a few microseconds.

Now that I think about it, modern filesystems already have persistent preallocation, and some newer ones have extents. Those two features drastically improve large-file performance regardless of cluster size, so cranking up the cluster size may end up as an exercise in wasted effort, gaining nothing at all. Unless it's for a busy transactional database where every drop of I/O speed you can squeeze out of the hardware counts, bothering to crank up the cluster size just doesn't matter any more.
 