I have a 36T NAS (26T usable raidz2), built using the Norco 4224 chassis. The backplane / drive trays that it comes with aren't 100% reliable, so I would recommend trying to get a SuperMicro chassis instead. I keep mine in the attic to keep the noise down.
Mine runs fine on an old Q6600 with 4G of RAM. I'm not trying to do anything silly, like enabling dedupe, so it's not a problem. I'm running ZFS on Linux, again, not a problem with that amount of RAM despite ZoL having a less than ideal caching situation in the kernel.
I do my backups with CrashPlan. Not only do I back up the NAS with CrashPlan, but I also back up my various PCs and laptops to my NAS, for faster restores should I need them. CrashPlan supports peer backup, which works well.
I tuned my recordsize by copying a representative sample of files into ZFS filesystems with different recordsize settings and comparing du output. Measuring empirically seemed easier than trying to calculate it.
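That comparison can be sketched roughly like this, assuming a pool named `tank` and a sample directory (the pool name, dataset names, and sample path are illustrative, not from the original post):

```shell
# Create throwaway filesystems, one per candidate recordsize
# ("tank" and the sample path are placeholder names).
for rs in 16K 128K 1M; do
    zfs create -o recordsize=$rs tank/rstest-$rs
    cp -a /path/to/sample/. /tank/rstest-$rs/
done

# Compare actual on-disk usage; the smallest total wins.
du -sh /tank/rstest-*

# Clean up afterwards.
for rs in 16K 128K 1M; do
    zfs destroy tank/rstest-$rs
done
```

Note that `du` reports allocated blocks (after compression, if enabled), which is exactly what you want here; `du --apparent-size` would hide the difference.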
I've used my NAS for various things: video/photo archive, VM backing store (serving up zvols as iSCSI endpoints), and working space for a variety of side projects. More than anything else, it gets rid of the idea that storage space is in any way scarce, and removes the need to delete things (very often, anyway; I built up several years' worth of motorcycle commute headcam videos at one point). My pool wanders between 40 and 70% utilization.
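The zvol-over-iSCSI setup mentioned above can be sketched like this, assuming ZFS on Linux plus the in-kernel LIO target managed via `targetcli`; the pool name, zvol name, size, and IQN are all illustrative:

```shell
# Create a 100G zvol; it appears as a block device under /dev/zvol/.
zfs create -V 100G -o volblocksize=16K tank/vm-disk0

# Wrap the zvol in a LIO block backstore and export it as an iSCSI LUN.
targetcli /backstores/block create name=vm-disk0 dev=/dev/zvol/tank/vm-disk0
targetcli /iscsi create iqn.2003-01.org.example.nas:vm-disk0
targetcli /iscsi/iqn.2003-01.org.example.nas:vm-disk0/tpg1/luns \
    create /backstores/block/vm-disk0

# Persist the configuration across reboots.
targetcli saveconfig
```

A real setup would also add initiator ACLs (or disable auth for a trusted LAN) on the target portal group; that part is omitted here.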
I wonder if it's related to the memory "issue" with their client: once you exceed 1TB or 1 million files, a manual edit of the .ini file is needed to bump the JVM memory allocation. I remember reading somewhere that it has to do with CRC checksum calculation across all the files. I've had to raise the setting multiple times (currently 8GB for ~8TB / 1 million files).
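For reference, on the Linux client the equivalent knob is the `-Xmx` flag in the engine's launch config rather than an .ini file; a rough sketch of the edit, assuming a stock install path and the historical 1024m default heap:

```shell
# Raise the CrashPlan engine's JVM heap from the default 1024m to 8192m
# (rule of thumb: roughly 1GB of heap per TB backed up).
# Path and default value assume a stock Linux install.
sed -i 's/-Xmx1024m/-Xmx8192m/' /usr/local/crashplan/bin/run.conf

# Restart the engine so the new heap size takes effect.
/etc/init.d/crashplan restart
```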
I use quite a bit of Supermicro hardware and have to conclude that it's just good stuff (tm). Recently had to work with an Intel server board, ick... no comparison at all.
How much does it cost to build and run a system like that? What does "my pool wanders between 40 and 70% utilization" mean? I'm assuming you're not talking about free space, because you say you don't delete things.
I've been using CrashPlan on my NAS for a few years now, but I feel bad that I only really back up the critical stuff (i.e. just a few TB) rather than the full 50T+ on the NAS.
No. I don't back up backups, VM zvols, or media that's easily re-acquired or isn't very high value (e.g. the commute videos). And of course the NAS isn't full.