Question Synology NAS - effect of going over "Recommended Number of Hosted Files"?

piokos

Senior member
Nov 2, 2018
Synology gives "Recommended Number of Hosted Files" for their NAS.
(I believe these used to be called "Maximum", not "Recommended")

I'm thinking about the DS118 and the figure given is 100,000, which is perfectly fine for keeping movies, but somewhat limiting for general use.
Why is it so low? Indexing?

Is there a workaround (other than archiving)?
If not, what happens when I go over (significantly, like 10x)? NAS crashes or becomes unresponsive? Or does it only affect file lookup times?
 

VirtualLarry

No Lifer
Aug 25, 2001
Actually, it's a hard limit in the filesystem, depending on how it was formatted. At least with EXT4 (Linux FS), there's a tradeoff relationship between block size, inode count, and total filesystem maximum storage size.

My QNAP allows me to CHOOSE, when I format the volume and set it all up.

Every file stored consumes AT LEAST ONE "inode". Once you run OUT of inodes, AFAIK, you cannot store more files on that filesystem, even if the block bitmap still shows unused blocks of storage.
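
If you want to see those numbers for yourself, a rough Python sketch like this reports how many inodes a mounted volume was formatted with and how many are still free (the mount point below is just an example, not necessarily what your NAS uses):

Code:
import os

# Rough sketch: query the inode counters of a mounted filesystem.
# "/volume1" is only an example mount point.
st = os.statvfs("/volume1")
total_inodes = st.f_files       # inodes the filesystem was formatted with
free_inodes = st.f_ffree        # inodes still available for new files
print(f"inodes used: {total_inodes - free_inodes} of {total_inodes}")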
 

piokos

Senior member
Nov 2, 2018
Actually, it's a hard limit in the filesystem, depending on how it was formatted. At least with EXT4 (Linux FS), there's a tradeoff relationship between block size, inode count, and total filesystem maximum storage size.
Clear.
However, they did go from "maximum" to "recommended", so maybe it's possible to change this in newer DSM versions?
100,000 is really low. I plan to use Git and it'll eat through this limit easily...
I'll ask them on the support forum.
My QNAP allows me to CHOOSE, when I format the volume and set it all up.
I'm aware that QNAP offers slightly more flexibility (including containers in entry-level models...) and is cheaper.

But Synology apps just seem more polished, and their NASes have a reputation for working for many years with minimal effort.
To be honest, I see way too many forum topics about QNAP workarounds and crashing after updates.
If I wanted to have a semi-DIY NAS, I'd just go for a DIY one (I'm running a small home server anyway). :)

That said, I'll have to look into QNAP, because suddenly the cheapest acceptable Synology is the DS218+ and it's over my budget...
 
Feb 25, 2011
Actually, it's a hard limit in the filesystem, depending on how it was formatted. At least with EXT4 (Linux FS), there's a tradeoff relationship between block size, inode count, and total filesystem maximum storage size.

My QNAP allows me to CHOOSE, when I format the volume and set it all up.

Every file stored consumes AT LEAST ONE "inode". Once you run OUT of inodes, AFAIK, you cannot store more files on that filesystem, even if the block bitmap still shows unused blocks of storage.

EXT4 supports, like... I don't know exactly. Billions probably. I've managed NAS boxes with multiple millions of files on them, using EXT4 on mdadm SoftRAID.

But the files were divided nicely into hash trees with usually less than 50k files in each subdirectory, the file servers in question were 12 or 16 core Xeon boxes with 48-64GB of RAM, and it still would take a few seconds to enumerate the contents of a directory over the network using NFS (which was, in turn, waaaaay faster than CIFS.)

So I would suspect that Synology is probably making the recommendation for performance reasons. Anything close to 100k files in a single directory would make typical interactions frustratingly slow, even on a system significantly more powerful than a low-end NAS appliance.

Synology gives "Recommended Number of Hosted Files" for their NAS.
(I believe these used to be called "Maximum", not "Recommended")

I'm thinking about the DS118 and the figure given is 100,000, which is perfectly fine for keeping movies, but somewhat limiting for general use.
Why is it so low? Indexing?

Is there a workaround (other than archiving)?
If not, what happens when I go over (significantly, like 10x)? NAS crashes or becomes unresponsive? Or does it only affect file lookup times?

My recommendation would be to store the files in a directory tree with <1k files per directory. If you're using the NAS as a backend for, say, software development (usually the worst culprit when it comes to a bajillion tiny files) you should be fine, since the biggest projects are usually already organized that way.
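
Something like this rough sketch is what I mean by a hashed tree (the paths and the two-level split are just an illustration, not a prescription):

Code:
import hashlib
from pathlib import Path

# Rough sketch of a hashed directory layout: two levels of 256 buckets each,
# so even millions of files leave every directory with a manageable entry count.
# The root path is only an example.
def shard_path(root: Path, data: bytes) -> Path:
    digest = hashlib.sha256(data).hexdigest()
    return root / digest[:2] / digest[2:4] / digest   # e.g. root/ab/cd/abcd12...

payload = b"...file contents..."
target = shard_path(Path("/volume1/store"), payload)
target.parent.mkdir(parents=True, exist_ok=True)
target.write_bytes(payload)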

Also... how do you define "general use?" Because 100k files is kind of a lot. (I have about that many on my NAS, and that's the result of like 20 years of pack-ratting.)

Incidentally, if you know specifically the file you want, you can usually get it very quickly with an explicit copy command; it's generating directory listings that's usually the time-consuming part.
 

piokos

Senior member
Nov 2, 2018
EXT4 supports, like... I don't know exactly. Billions probably. I've managed NAS boxes with multiple millions of files on them, using EXT4 on mdadm SoftRAID.
4 billion by filesystem design (2^32 inodes), but the actual figure is specified during setup (as mentioned by @VirtualLarry).
Synology sets very conservative limits. This could be performance-driven, but it could also just be product segmentation: if you want to keep more files, you pay more.
So I would suspect that Synology is probably making the recommendation for performance reasons. Anything close to 100k files in a single directory would make typical interactions frustratingly slow, even on a system significantly more powerful than a low-end NAS appliance.
100k is for all files, not per directory.
My recommendation would be to store the files in a directory tree with <1k files per directory. If you're using the NAS as a backend for, say, software development (usually the worst culprit when it comes to a bajillion tiny files) you should be fine, since the biggest projects are usually already organized that way.
Keeping a balanced tree definitely helps performance. Not just in file systems. :)

Yes, the NAS is supposed to be a development backend + download station + general home use.
Git produces a lot of files, but maybe they aren't kept on the main file system (but in some archive / container)? This is one of the questions I'll have to ask Synology.
Also... how do you define "general use?" Because 100k files is kind of a lot. (I have about that many on my NAS, and that's the result of like 20 years of pack-ratting.)
Photos: 20k - just final JPEGs (ideally: 20k*3 since there are also RAW and xmp files...). And growing.
Documents and projects (excluding Git archive and data): 10k
Articles, instructions, manuals, ebooks etc: 5k+

So already at launch I'm using half of the limit for personal/hobby stuff.

Git archive: no idea.
Project data: honestly... tens of millions (I'm doing ML, image recognition...). But this part isn't mandatory. I run everything on the workstation anyway.
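
By the way, here is a quick sketch I could use to check the real counts per top-level folder once the NAS is set up (the mount point is just an example):

Code:
from pathlib import Path

# Rough sketch: count files under each top-level shared folder, to compare
# against the 100k recommendation. "/volume1" is only an example mount point.
root = Path("/volume1")
for folder in sorted(p for p in root.iterdir() if p.is_dir()):
    count = sum(1 for f in folder.rglob("*") if f.is_file())
    print(f"{folder.name}: {count} files")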
 

mv2devnull

Golden Member
Apr 13, 2010
Actually, it's a hard limit in the filesystem, depending on how it was formatted. At least with EXT4 (Linux FS), there's a tradeoff relationship between block size, inode count, and total filesystem maximum storage size.
Indeed. The filesystem uses some of its space for metadata (inode tables, allocation bitmaps, etc.).

That gives a chance to "optimize": if we fix the maximum volume size and file count up front, the space set aside for metadata can be limited, leaving more blocks for data.
Some filesystems allow the size and inode limit to be increased afterwards; shrinking is rarely possible.
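
A back-of-the-envelope example of why that matters, assuming ext4's common mke2fs defaults (256-byte inodes, one inode per 16 KiB of space) rather than anything Synology-specific:

Code:
# Rough sketch of the metadata tradeoff, using common mke2fs defaults
# (256-byte inodes, one inode per 16 KiB). Not Synology-specific numbers.
volume_bytes = 4 * 1024**4        # example: a 4 TiB volume
bytes_per_inode = 16 * 1024       # the -i ratio given to mkfs.ext4
inode_size = 256                  # on-disk size of each inode

inode_count = volume_bytes // bytes_per_inode
inode_table_bytes = inode_count * inode_size
print(f"{inode_count:,} inodes -> {inode_table_bytes / 1024**3:.0f} GiB of inode tables")
# About 268 million inodes and 64 GiB reserved for inode tables alone, which is
# why capping the file count up front leaves more room for actual data.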
 
Feb 25, 2011
Git produces a lot of files, but maybe they aren't kept on the main file system (but in some archive / container)? This is one of the questions I'll have to ask Synology.

Git doesn't really produce a lot of files - in a properly packed repo, the .git metadata directory is probably <100 extra files. (More if you don't occasionally run a repack, but that's up to you anyway.) Projects get 10k+ files in their source trees because that's just how developers/engineers organize large projects, and git doesn't do anything to archive or containerize those.
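
If you're curious how well packed a repo actually is, a quick sketch like this counts loose object files versus packfiles under .git/objects (the repository path is made up):

Code:
from pathlib import Path

# Rough sketch: count loose object files vs. packfiles in a repository.
# The repository path is only an example.
objects = Path("/volume1/repos/myproject/.git/objects")

loose = sum(1 for d in objects.iterdir()
              if d.is_dir() and len(d.name) == 2
              for _ in d.iterdir())
packs = len(list((objects / "pack").glob("*.pack")))
print(f"{loose} loose object file(s), {packs} packfile(s)")
# After `git gc` or `git repack -ad`, the loose count drops to near zero.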