Back That Thing Up: Implementing the 3-2-1 Rule

In my All Things Operational With Podman post, I mentioned that the next logical step for my Fedora Server homelab was ensuring I’d never have to rebuild it from zero. It’s one thing to get a stack of Podman containers and ZFS pools running. It’s another thing entirely to ensure they stay that way after a drive starts clicking or, worse, after I make a late-night mistake in the terminal.

Today, we’re looking at the 3-2-1 backup strategy and how I’ve mapped my current hardware (the ThinkStation server and the Nukbox desktop) against that gold standard.


Understanding the 3-2-1 Rule

If you care about your digital files, the 3-2-1 rule is the baseline. It’s a simple framework designed to eliminate “single points of failure.”

  • 3 Copies of Data: You should have your original data plus at least two backups.
  • 2 Different Media: You should store those copies on different types of hardware (e.g., an internal SSD, a mechanical drive pool, or an external disk).
  • 1 Off-site Copy: At least one copy needs to live in a different physical building to protect against fire, theft, or localized disasters.

My Current Architecture: The “Hot” Sync and the “Cold” History

Right now, my setup relies on two main pillars: Syncthing for movement and ZFS/Sanoid for integrity.

  1. The Live Data: My active documents live on my Nukbox.
  2. The Sync Pipeline: I use Syncthing to replicate those files in real-time to my Fedora Server (the ThinkStation). The moment a file is saved on the Nukbox, it’s pushed to the server.
  3. The Redundant Vault: The server stores this data on a ZFS Mirror. If one of the physical hard drives in that pool dies, the system doesn’t even blink; the data remains online and accessible.
  4. The Time Machine (Sanoid): This is where it gets technical. Syncing is great, but it’s dangerous. If I accidentally delete a paragraph in a document, Syncthing will faithfully delete it on the server, too. To solve this, I’ve configured Sanoid to take hourly ZFS snapshots.
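The recovery path in step 4 works because every ZFS snapshot stays browsable, read-only, under the hidden `.zfs/snapshot` directory of the dataset. A hedged sketch of pulling back an accidentally deleted file, assuming a dataset named `tank/syncthing` (the pool, dataset, and snapshot names here are placeholders, not my actual ones):

```shell
# List the snapshots Sanoid has taken for this dataset
zfs list -t snapshot -o name,creation tank/syncthing

# Browse a snapshot read-only via the hidden .zfs directory
ls /tank/syncthing/.zfs/snapshot/autosnap_2025-01-01_12:00:00_hourly/

# Copy one accidentally deleted file back into the live dataset
cp /tank/syncthing/.zfs/snapshot/autosnap_2025-01-01_12:00:00_hourly/notes.md \
   /tank/syncthing/notes.md

# Or roll the whole dataset back. Note: zfs rollback only targets the most
# recent snapshot unless you pass -r, which destroys any newer snapshots.
zfs rollback tank/syncthing@autosnap_2025-01-01_12:00:00_hourly
```

Once the file is back in the live dataset, Syncthing pushes it to the Nukbox on its next scan, which is exactly the behavior you want.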

My current policy (the production template) looks like this:

  • Hourly: 36 snapshots
  • Daily: 30 snapshots
  • Monthly: 3 snapshots
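For reference, that policy is only a few lines of `/etc/sanoid/sanoid.conf`. A minimal sketch, assuming the dataset is named `tank/syncthing` (a placeholder); the retention numbers match Sanoid's shipped `production` template:

```ini
[tank/syncthing]
  use_template = production
  recursive = yes

[template_production]
  frequently = 0
  hourly = 36
  daily = 30
  monthly = 3
  yearly = 0
  autosnap = yes
  autoprune = yes
```

With `autoprune = yes`, Sanoid also expires old snapshots on the same schedule, so the pool doesn't slowly fill with history.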

The Progress Report: How I Stack Up

If we look at the 3-2-1 requirements, I’m currently sitting at a solid 3-2-0.

  • 3 Copies? Yes. I have the original on the Nukbox, the synced copy on the ThinkStation, and a history of immutable snapshots on the ZFS pool.
  • 2 Media? Yes. The data exists on the Nukbox’s NVMe SSD and the ThinkStation’s mechanical SAS/SATA drives.
  • 1 Off-site? No. This is my current “failure state.” If my house loses a fight with a lightning strike or a leaky pipe, all my copies are in the same room.

What’s Next: Completing the Circle

To turn that “0” into a “1,” I need to get my data out of the house. I also have an existing archive of photos and documents on a 1TB external drive, which adds a nice “cold storage” layer to the mix. Here’s how I’m weighing my options to round out the strategy:

  • A Friend’s NAS (ZFS Send): A friend offered me a terabyte of space on his NAS. Since we’re both running ZFS, I could use zfs send and receive over a VPN (like Tailscale). This free route would allow me to replicate my snapshots exactly as they exist on my server, preserving my hourly history off-site.
  • Commercial Cloud (Backblaze B2 / Amazon S3): This is the “set it and forget it” option. Using a tool like Rclone, I could encrypt my files locally and ship them to a data center. It’s incredibly reliable and costs pennies per gigabyte. It’s essentially insurance for your digital life.
  • The “Air-Gapped” Archive: I’m currently using an old 1 TB external 2.5″ HD as a manual archive. By plugging this into the ThinkStation periodically and running a sync, I create a “cold” copy. If I then store this drive at a different location (like my office or a friend’s place), I’ve officially satisfied the “Off-site” requirement of the 3-2-1 rule.
  • Decentralized Storage (Storj / Sia): A more modern approach where encrypted file fragments are distributed across a global network. It’s resilient and often cheaper than the big cloud providers, though it adds another layer of software to manage.
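The friend's-NAS option could be as simple as a nightly cron job over the VPN. A hedged sketch of the full-then-incremental pattern, assuming my dataset is `tank/syncthing`, the friend's NAS answers at the Tailscale hostname `friends-nas`, and both ends run OpenZFS (all names are placeholders):

```shell
# Initial run: full replication of one snapshot to the remote pool
zfs send tank/syncthing@autosnap_2025-01-01_00:00:00_daily | \
  ssh friends-nas zfs receive backup/myserver/syncthing

# Subsequent runs: send only the delta between two snapshots.
# -F on receive rolls back any stray local changes on the target first.
zfs send -i @autosnap_2025-01-01_00:00:00_daily \
  tank/syncthing@autosnap_2025-01-02_00:00:00_daily | \
  ssh friends-nas zfs receive -F backup/myserver/syncthing
```

In practice, Syncoid (Sanoid's companion tool) automates exactly this bookkeeping of picking the right incremental pair, so I'd likely reach for it rather than hand-rolling the cron job.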
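For the commercial cloud route, Rclone's "crypt" remote type handles the local encryption, so the provider only ever sees ciphertext. A sketch, assuming remotes named `b2` and `b2-crypt` and a source path of `/tank/syncthing` (all placeholders):

```shell
# One-time, interactive setup: define a B2 remote, then a "crypt" remote
# wrapping it so files are encrypted before they leave the machine
rclone config   # create remotes "b2" and "b2-crypt" here

# Nightly: push only new/changed files, encrypted client-side
rclone sync /tank/syncthing b2-crypt:syncthing --transfers 8 --fast-list

# Periodically spot-check that a restore actually works
rclone copy b2-crypt:syncthing/notes.md /tmp/restore-test/
```

The restore test matters more than the backup itself: an off-site copy you've never restored from is a hope, not a backup.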

By the time the next chapter is written (eventually, anyway), I’ll have pulled the trigger on one of these off-site paths. Between the real-time Syncthing replication, the ZFS mirroring, the hourly Sanoid snapshots, and my external archives, I’m finally reaching a point where “data loss” is no longer a phrase that keeps me up at night. If you’re running a homelab without a similar “Time Machine” layer, you aren’t running a lab. You’re just borrowing your data from fate.
