Question: DIY NAS 8-bay 3.5"/2.5" drive hot-swap backplane chassis, server 1U/flex PSU, mini-ITX/micro-ATX mobo.

Shmee


Tech Junky

While hot swap is appealing, it's not something you'll use often with quality drives. Maybe once a decade? If you're rotating drives for backups, though, it might make more sense. I just put them all into a Meshify 2 as part of my DIY setup. I do like the more compact option @VirtualLarry posted, but the problem comes down to the other components I'm running. The latest round of CPU options really failed to bring mATX boards to the party. I would have preferred to reuse my Node 804 rather than being forced into an ATX case again. The Node series makes quick work of drives as well: a couple of screws to remove the side panel, racks that slide out for the drives, and then the 4 screws holding each drive to its bracket. The Meshify isn't too bad either, though it's a thumb screw to remove the sled and the 4 screws on the HDD.
 

Shmee

IMO, the problem with mATX, and certainly mini-ITX, is that it can really limit you on SATA ports and on expansion slots for NICs or HBAs. And with mini-ITX, RAM is often quite limited, which matters for ZFS. That is why I use an Asus X99 Deluxe with 256GB (8x32GB), an Intel X540 NIC, and a Dell PERC (H310, I think?).
 

Tech Junky

@Shmee
If you shop for a board with ports in mind, you don't run into the port shortage; my mATX board had 8 SATA ports. With the mission in mind, it's easy to avoid the issues you mentioned.

I don't get the whole ZFS thing, though, with people talking about needing 256GB of RAM and this and that. I run my "NAS" drives under Linux / EXT4 / RAID 10 with only 16GB of RAM, and it really only uses 4GB at any given time. I still get 400MB/s out of the drives across the network as well.
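
To put rough numbers on that (illustrative drive count and speeds, not measurements from my actual box):

    # Back-of-the-envelope RAID 10 math for a hypothetical 4 x 8TB array.
    drives = 4
    drive_tb = 8
    drive_seq_mbs = 250                       # typical 7200rpm sequential MB/s

    usable_tb = drives * drive_tb / 2         # mirroring halves the capacity
    write_mbs = (drives / 2) * drive_seq_mbs  # writes stripe across mirror pairs

    print(f"usable: {usable_tb:.0f} TB, ~{write_mbs:.0f} MB/s sequential writes")
    # 400 MB/s on the wire is 3.2 Gb/s, which already implies a
    # faster-than-2.5GbE link between the boxes.
    print(f"400 MB/s over the network = {400 * 8 / 1000:.1f} Gb/s")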

I know there are a million different ways to RAID or do storage in general, but some of these systems and approaches are overbuilt or overcomplicated for what they're being used for. IMO, of course.
 

Shmee

Fair point, obviously 256GB is probably overkill, but RAM is much faster to write to than any drive, so with the ZFS cache you can do backups/transfers to the NAS much faster. And with 128GB or more, you could probably fit an entire backup from some systems in cache lol.
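
Just to put numbers on that (all sizes and speeds hypothetical):

    # How much a RAM write cache buys you on a burst transfer,
    # assuming the NIC is faster than the disks can sustain.
    backup_gb = 100
    nic_mbs = 1150    # ~10GbE line rate in MB/s (hypothetical link)
    pool_mbs = 400    # sustained pool write speed (hypothetical)

    t_ram = backup_gb * 1000 / nic_mbs    # cache absorbs the whole burst
    t_disk = backup_gb * 1000 / pool_mbs  # what you'd see disk-bound

    print(f"into RAM cache: ~{t_ram:.0f}s, disk-bound: ~{t_disk:.0f}s")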
 

Tech Junky

And if the power goes out, your RAM data is gone, which is why RAID cards have battery-backed cache add-ons. It all depends on how you use the storage and whether you push enough bandwidth to warrant a UPS that buys enough time to flush the data to the disks. Tiering an SSD as the cache makes more sense to me than relying on RAM, considering you can now use NVMe drives with 7GB/s of bandwidth that aren't volatile like RAM.
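
Rough flush-time math for sizing that UPS (numbers are made up, just to show the shape of it):

    # How long the UPS has to hold to flush a dirty RAM cache to storage.
    dirty_gb = 64          # hypothetical dirty write cache at power loss
    hdd_pool_mbs = 500     # sustained HDD array write speed (hypothetical)
    nvme_mbs = 7000        # the 7GB/s NVMe figure above

    print(f"flush to HDD pool: ~{dirty_gb * 1000 / hdd_pool_mbs:.0f}s of runtime")
    print(f"flush to NVMe tier: ~{dirty_gb * 1000 / nvme_mbs:.0f}s, and flash keeps the data anyway")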
 

aigomorla

@Shmee that Silverstone case is horrible.
It has issues keeping drives cool and in check. I played with one, and you'd think it would keep the drives cool, but the intake air gets choked in a backwash because there's no outlet for it to escape through.

The case Larry linked seems interesting, but I can see it also having issues keeping the CPU and board cool, as it only has one fan feeding the RAM area, and not many low-profile heatsinks I can think of would be able to pull enough air in that low clearance.

Also, you will definitely need an HBA or a SAS controller of some sort. Even if your board has 8 SATA plugs, running an NVMe drive will often cost you 2 of them, leaving only 6 physical ports unless you have a dedicated controller, which in turn will get TOASTY, since HBAs and SAS controllers do not run cool at all.
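
The port budget works out like this (hypothetical board; how many ports the M.2 slot steals varies by model, so check the manual):

    # SATA port budget for an 8-bay build on a hypothetical board
    # where the M.2 slot shares chipset lanes with two SATA ports.
    board_sata = 8
    lost_to_m2 = 2         # varies by board
    bays = 8

    available = board_sata - lost_to_m2
    print(f"{available} usable onboard ports, {bays - available} short -> HBA needed")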
 

Shmee

Ah, good to know about that case; that is disappointing, though. I guess it's better to get a mid tower or full tower with a bunch of 5.25in bays, add some decent drive cages, and make sure there is good airflow.

As for the SATA ports, it depends on the board. Many X99 boards have 8 to 12 SATA ports, and I don't think you lose any by adding an NVMe drive. My X58 board had 10, but they were all SATA II, and only 6 were Intel, so not ideal, certainly not for SSDs.

The only problem with using a desktop tower for HDD cages in the 5.25in bays is that many of them have those notches for holding optical drives in place, so the case may need to be modded for some cages, like I had to do with mine.
 

Tech Junky

The Meshify 2 has a rack for 13 drives when you put it in storage mode. If you want compact, the Node 804 holds 8 in the brackets, with room for more on the shroud.
 

aigomorla

Tech Junky said:
I don't get the whole ZFS thing though with people talking about needing 256GB of RAM and this and that.

Need MOAR Ram~!!
[Attachment: truenas.JPG — TrueNAS dashboard showing RAM usage]

ZFS can use as much RAM as you can throw at it... it's like Windows XP back in the day. It's caching 201GB of I-don't-know-what, just because it can.

On a serious note, I think the guidance is something like an 8GB base, plus 1GB for each TB of storage for the cache.
ZFS likes to keep a ton of stuff cached in RAM, so typically you'll see people shoot for the moon with RAM.
I personally have 88TB of storage with 256GB of RAM, which is overkill, as I would have been fine on 128GB.
But DDR3 ECC registered RAM was so cheap when it was cycled out, I went why not.
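
That rule of thumb in code form (the 8GB-base-plus-1GB-per-TB guidance that floats around the TrueNAS docs and forums; treat it as a starting point, not gospel):

    # Common ZFS RAM sizing rule of thumb: 8 GB base + ~1 GB per TB of pool.
    def zfs_ram_gb(pool_tb):
        return 8 + pool_tb

    for tb in (24, 88):
        print(f"{tb} TB pool -> ~{zfs_ram_gb(tb)} GB RAM suggested")
    # 88 TB -> ~96 GB, which is why 128 GB is comfortable and 256 GB is headroom.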

The only issue is that for a system to have that much RAM, it needs 8 RAM slots, and only servers have that many, and most of those are dual-socket on top of it.
Well, my EPYC also has 256GB and is single-socket, but that system is doing real server work, not acting like an overpowered NAS.
 

Tech Junky

@aigomorla

If it's cheap, then sure, it makes sense. 88TB is a bit much; then again, I'm currently glancing at 20 or 18TB drives, pondering an uplift from 8TB drives. It's a mystery to me, though, how and why people build storage out of odd mixes of 2-6TB drives. There seems to be a bunch of packrats hoarding legacy drives for some reason.

I suppose in the case of TN / BSD there might be something about the OS that bloats RAM use, kind of like Chrome does when you leave it open with tons of tabs. There might be something in TN that keeps a "live" version of the data in RAM while you're editing it, or just keeps it readily available based on access patterns.

BSD seems temperamental to deal with compared to Debian-based options, though. Then again, I haven't bothered playing with it, since I'm running something that works and doesn't require much fuss to make changes, or to fix it when it decides to take a dive on updates/upgrades. It's relatively easy on HW resources as well, in comparison.

I've thought about jumping the fence to AMD based on some niche options they provide compared to Intel, but financially it doesn't make sense to do on a whim unless I really need those options. If budget weren't an issue and I could design something with any part, without regard to cost, aiming for the moon could be fun. Then again, once you get to a certain point, you just jump to a SAN setup and run fiber between boxes for bandwidth.
 

Shmee

aigomorla said:
I personally have 88TB of storage with 256GB of RAM, which is overkill, as I would have been fine on 128GB. But DDR3 ECC registered RAM was so cheap when it was cycled out, I went why not.
Yeah, that is like my system, only my RAM is DDR4 ECC registered, with a Xeon E5-1660 v3. Again, surplus registered ECC RAM is cheap. And it's not only for servers; some HEDT boards take it too, as long as you run a Xeon. So X79 and X99, pretty much. What board are you using there? Is that an X79 of some sort?
 

aigomorla

Shmee said:
What board are you using there? Is that an X79 of some sort?

Supermicro X9DAI. I only use Supermicro for my servers.
I also use their chassis; I've found Supermicro has the best compatibility with almost anything you throw at it.
The worst are HP and Dell, where if it's not made by Dell or HP, it won't work, period.

Tech Junky said:
It's relatively easy on HW resources as well, in comparison.

Yup, that's the main reason why I don't really even think about upgrading that system.
I don't need it for jails, I just need it for NAS duty.
I don't intend to replace it either until it's broken; then I'll probably get another one of these to replace it.

[Attachment: 20221018_150241.jpg]

EPYCs are getting cheaper, especially the 7000 series on the used market.
 

Tech Junky

EPYC intrigues me, but most server HW tends to lag behind when it comes to underlying platform advances. For the price, I would expect a bit more than a 3-year-old CPU. I looked at the X299 option as well, and it's just as disappointing. Pricing for "HEDT" systems is just as bad.

This kind of sums things up to a point - https://www.pugetsystems.com/blog/2022/05/09/what-happened-to-high-end-desktop-hedt-processors-2329/

It's still interesting stuff though.
 

Shmee

Yeah, IMO X99 was the last great HEDT. One could argue Zen 2 TR, but that is really more workstation/pseudo-server stuff. And for Zen 3 TR, I don't think that is available yet as DIY, last I checked. And many of the CPUs are vendor-locked :/