
I have a 36T NAS (26T usable raidz2), built using the Norco 4224 chassis. The backplane / drive trays that it comes with aren't 100% reliable, so I would recommend trying to get a SuperMicro chassis instead. I keep mine in the attic to keep the noise down.

Mine runs fine on an old Q6600 with 4G of RAM. I'm not trying to do anything silly, like enabling dedupe, so it's not a problem. I'm running ZFS on Linux, again, not a problem with that amount of RAM despite ZoL having a less than ideal caching situation in the kernel.

I do my backups with Crashplan. Not only do I back up the NAS with Crashplan, but I back up my various PCs and laptops to my NAS, for faster restores should I need it. Crashplan supports peer backup, which works well.

I tuned my recordsize by copying a representative sample of files to different ZFS filesystems with different settings, and comparing du output. The empirical calculation seemed easier.
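The empirical approach above can be sketched roughly like this — pool and dataset names (`tank`, `rstest-*`) are made up for illustration, and this obviously needs a live ZFS pool and root:

```shell
# Create test datasets with different recordsizes (names hypothetical)
for rs in 16K 128K 1M; do
    zfs create -o recordsize=$rs tank/rstest-$rs
    # Copy the same representative sample of files into each
    cp -a /tank/media/sample/. /tank/rstest-$rs/
done

# Compare actual space consumed per recordsize
zfs list -o name,recordsize,used tank/rstest-16K tank/rstest-128K tank/rstest-1M

# Clean up when done
for rs in 16K 128K 1M; do zfs destroy tank/rstest-$rs; done
```

`zfs list`'s `used` column reflects space after compression and padding, which is why comparing real copies beats trying to calculate it.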

I've used my NAS for various things: video / photo archive, VM backing store (serving up zvols as iSCSI endpoints), working space for a variety of side projects. More than anything else, it gets rid of the idea that storage space is in any way scarce, and removes the requirement to delete things (very often, anyway; I built up several years' worth of motorcycle commute headcam videos at one point). My pool wanders between 40 and 70% utilization.
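The zvol-over-iSCSI setup can be sketched like this on Linux with the LIO target (pool, volume, and IQN names are hypothetical; `volblocksize` is a tuning choice, not something from the comment above):

```shell
# Create a 100G zvol to back a VM (names hypothetical)
zfs create -V 100G -o volblocksize=16K tank/vols/vm1

# Export it over iSCSI with targetcli (Linux LIO target)
targetcli /backstores/block create vm1 /dev/zvol/tank/vols/vm1
targetcli /iscsi create iqn.2015-01.local.nas:vm1
targetcli /iscsi/iqn.2015-01.local.nas:vm1/tpg1/luns create /backstores/block/vm1
```

The initiator then sees the zvol as a plain block device, and ZFS snapshots of the zvol give you crash-consistent VM disk snapshots for free.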



36T to CrashPlan sits on 36GB of RAM at all times... screw that. rclone never uses more than ~2GB to manage my whole 50TB dataset.

edit: Also, I'm using a SuperMicro chassis, not a Norco. I've got a section where I go into why I went with SuperMicro.


Uhm, what exactly is crashplan doing with all that memory?


I wonder if it's related to the memory "issue" with their client: once you exceed 1TB or about 1 million files, you have to manually edit the .ini file to bump the JVM memory allocation. I remember reading somewhere that it has to do with CRC checksum calculation for all the files. I've had to change the setting multiple times (currently at 8GB for ~8TB / 1 million files).

https://support.code42.com/CrashPlan/6/Troubleshooting/Adjus...
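For reference, the change is roughly this on Linux — the exact path and option string vary by platform and CrashPlan version, so treat this as a sketch, not gospel:

```shell
# In /usr/local/crashplan/bin/run.conf (path varies by platform/version),
# raise the JVM heap cap in SRV_JAVA_OPTS, e.g. -Xmx1024m -> -Xmx8192m:
sed -i 's/-Xmx1024m/-Xmx8192m/' /usr/local/crashplan/bin/run.conf

# Restart the engine so the new heap size takes effect
service crashplan restart
```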


Hm, maybe give Borg a shot. For initial backups it's not exactly the fastest thing, though much better after that.
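A minimal Borg workflow looks like this (repo and source paths hypothetical) — the chunk-level dedup is what makes runs after the first one fast:

```shell
# One-time: initialize an encrypted, deduplicating repo
borg init --encryption=repokey /mnt/backup/borg-repo

# First backup is slow; later runs only transfer changed chunks
borg create --stats --compression lz4 \
    /mnt/backup/borg-repo::nas-{now:%Y-%m-%d} /tank/important

# List archives, and prune old ones on a retention schedule
borg list /mnt/backup/borg-repo
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/borg-repo
```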


> rclone never uses more than ~2GB to manage my whole 50TB dataset.

How do you handle file versions with rclone?


I guess that's the downside. It only keeps the latest version of each file it syncs.
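One crude workaround: rclone's `--backup-dir` flag moves files that would be overwritten or deleted into a separate directory instead of discarding them. A sketch (remote and path names hypothetical):

```shell
# Overwritten/deleted files land in a dated directory rather than vanishing
rclone sync /tank/data remote:data \
    --backup-dir remote:data-old/$(date +%Y-%m-%d)
```

It's not real versioning — no dedup, and restores are manual — but it keeps a sync from silently destroying history.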


I use quite a bit of Supermicro hardware and have to conclude that it's just good stuff (tm). Recently had to work with an Intel server board, ick... no comparison at all.


How much does it cost to build and run a system like that? What does "my pool wanders between 40 and 70% utilization" mean? I'm assuming you're not talking about free space, because you say you don't delete things.


I don't delete things for a year or more at a time. I am talking about free space.

Cost is in the low thousands; I haven't tabulated it recently.


Are you backing up all 36T to crashplan?

I've been using CrashPlan on my NAS for a few years now, but I feel bad and only really back up critical stuff (i.e. just a few TB, rather than the full 50T+ on the NAS).


No. I don't back up backups, VM zvols, or media that's easily re-acquired or doesn't have very high value (eg commutes). And of course the NAS isn't full.



