DIY Cloud backup 2022 update


In 2017 I built a DIY cloud backup solution (which you can find documented here). It has been running ever since, but toward the end of 2021 I decided the hardware could use a little update.

I’ve been trying to do a video series that is basically a re-imagining of the project, but I’m having a hard time following it through!

With this post, video parts 4 and 5 are being released and we are finally heading towards the end of the project!

Below are the article parts that accompany the videos, since code especially is easier to copy and paste than to read and type from a video!

Bonus – Initial build with old hardware

Part 1 – Analysis and explanation of setup

Part 2 – Proxmox install and software chosen

Bonus – Livestream building the server hardware

Part 3 – Picking up where we left off and the hardware

Part 4 – Install Minio S3 Storage Server

Part 5 – Installing and using the restic backup client

Synopsis of storage setup

It’s explained in detail in the video, but the slimmed-down version is that we’re running Proxmox as the operating system, giving us access to ZFS. We’re using RAIDZ2 so we can tolerate 2 drive failures without losing data! We’re also using ZFS to create multiple datasets, each with its own settings such as quotas, etc.

These datasets are the most important part and something that otherwise isn’t really possible within Minio itself. Yes, Minio has gained many more options over the years and now partly has the same functionality, but the key here is multi-tenancy. When I give someone a 1TB or 2TB account, I don’t care how many users they want to share that with, how they divide it, how many users they create, etc. They can do all of that on their own because they each have their own installation of the software and a “partitioned off” dataset within ZFS. This way lots of users can use the same server without being able to influence the other users on it.

Hopefully that explains the software stack I’ve chosen and the reasoning behind it!

Used Commands

Creating user “minio”

(This tutorial assumes you are logged in as the root user otherwise you will need to use “sudo” to execute certain commands!)

adduser minio
(press Enter 5 times to accept the defaults, then confirm with Y)

Creating ZFS dataset on pool


In this tutorial our pool is called “HDDmirror”; please substitute whatever name you gave your storage pool!

cd /
cd HDDmirror
zfs create HDDmirror/minio
zfs create HDDmirror/minio/quindor
zfs set quota=100G HDDmirror/minio/quindor
chown -R minio:minio /HDDmirror/minio
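When more tenants come along, the dataset, quota, and ownership steps above repeat for each one, so they can be scripted. A sketch, where the tenant names other than “quindor” are made-up examples; DRYRUN=echo only prints each command so you can review them before running for real:

```shell
# Repeat the dataset + quota + ownership steps for several tenants.
# Tenant names other than "quindor" are examples; DRYRUN=echo prints
# each command instead of executing it (set DRYRUN= to run for real).
DRYRUN=echo
POOL=HDDmirror
for TENANT in quindor alice bob; do
  $DRYRUN zfs create "$POOL/minio/$TENANT"
  $DRYRUN zfs set quota=100G "$POOL/minio/$TENANT"
  $DRYRUN chown -R minio:minio "/$POOL/minio/$TENANT"
done
```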

Installing the minio application


cd /opt
mkdir minio
chown minio:minio minio
cd minio
su minio

wget https://dl.min.io/server/minio/release/linux-amd64/minio

chmod +x minio
ls -l

Test start of minio binary


In this part we are still acting as the “minio” user, since we never exited above!

MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=password ./minio server /HDDmirror/minio/quindor

The URL this command shows is based on the IP address of the machine you are working on; the management interface is generally on port 9001.

The user: admin, the password: password.

Creating a bucket

Please follow the video!
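If you prefer the command line over the web console, the bucket can also be created with MinIO’s “mc” client. A sketch, assuming mc is installed; the alias name “backup” is made up, the credentials match the test start above, and DRYRUN=echo only prints the commands for review:

```shell
# Create the bucket with MinIO's mc client instead of the web console.
# The alias name "backup" is an example; credentials match the test start.
# DRYRUN=echo prints the commands instead of running them.
DRYRUN=echo
$DRYRUN mc alias set backup http://10.10.128.109:9000 admin password
$DRYRUN mc mb backup/desktop-clients
```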

Creating a startup file


exit (become root again)
cd /etc/systemd/system
nano minio.quindor.service

Service file details

[Unit]
Description=MinIO
Documentation=https://docs.min.io
Wants=network-online.target
After=network-online.target
AssertFileIsExecutable=/opt/minio/minio

[Service]
WorkingDirectory=/HDDmirror/minio/quindor

User=minio
Group=minio
ProtectProc=invisible

EnvironmentFile=/HDDmirror/minio/quindor/minio.env
ExecStart=/opt/minio/minio server $MINIO_OPTS $MINIO_VOLUMES

Restart=always
LimitNOFILE=65536
TasksMax=infinity
TimeoutStopSec=infinity
SendSIGKILL=no

[Install]
WantedBy=multi-user.target

Creating the minio environment file


nano /HDDmirror/minio/quindor/minio.env
---

MINIO_OPTS="--address :9000 --console-address :9001"
MINIO_ROOT_USER=minioadmin
MINIO_ROOT_PASSWORD=password
MINIO_VOLUMES="/HDDmirror/minio/quindor"

Starting the minio.quindor.service service


systemctl daemon-reload
systemctl enable minio.quindor.service (so it also starts at boot)
systemctl start minio.quindor.service
systemctl status minio.quindor.service

Part 5 – Restic Backup software

In this part we’re going to be running the restic backup software on Windows. You can find all the restic pre-compiled binaries (including for Windows) in the GitHub repository here.

In the video we go over how to download and unpack it. I generally rename the current version I’m using to “restic.exe” to make it a bit easier in all my batch files, etc.

Run CMD when creating first backup

For some reason, on Windows 11 restic doesn’t function correctly for me on the first backup if it isn’t run in the “cmd” environment. I haven’t been able to figure out why, but at least that fixes the issue of it just hanging and not doing anything. Subsequent backups seem to work fine whether run within “cmd” or not.

Initialize the restic repository in the S3 bucket

“addenv.bat” batch file

set AWS_ACCESS_KEY_ID=minioadmin
set AWS_SECRET_ACCESS_KEY=password
set RESTIC_PASSWORD=newpassword12
set TMPDIR=c:\temp\restic\temp
set TMP=c:\temp\restic\temp

Run the “addenv.bat” batch file to enable these values.
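restic also reads the repository location from the RESTIC_REPOSITORY environment variable, so you can add a line like the following to the same batch file and drop the repeated “-r” flag from the commands in the rest of this article (the URL matches the repository used here):

```batch
set RESTIC_REPOSITORY=s3:http://10.10.128.109:9000/desktop-clients
```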

Restic command to initialize repository

restic -r s3:http://10.10.128.109:9000/desktop-clients init

Running first backup

To run the first backup to the just initialized repository we are going to use the following command:

restic -r s3:http://10.10.128.109:9000/desktop-clients --verbose backup "d:\arduino"

Second backup with extra options

Let’s add some more options to the backup line for instance how to exclude certain file types and folders!

restic -r s3:http://10.10.128.109:9000/desktop-clients --verbose --cache-dir=c:/temp/restic/cache --exclude="**/Downloads" --exclude="**.mkv" backup "d:\arduino"
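If the list of exclusions grows, the command line gets unwieldy. restic can also read exclude patterns from a file with --exclude-file; a sketch, where “excludes.txt” is just an example name:

```shell
# Put the exclude patterns in a file, one per line.
cat > excludes.txt <<'EOF'
**/Downloads
**.mkv
EOF
# echo prints the resulting backup command for review instead of running
# it, since no repository is reachable in this sketch:
echo restic -r s3:http://10.10.128.109:9000/desktop-clients --verbose --exclude-file=excludes.txt backup "d:\arduino"
```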

Restic repository statistics

Let’s see how many backups we have, files and the total size!

restic -r s3:http://10.10.128.109:9000/desktop-clients stats

Restic repository snapshots

Snapshots are the individual backups restic is holding right now; let’s take a look!

restic -r s3:http://10.10.128.109:9000/desktop-clients snapshots

Finding a file in your backups

Let’s try and find a “.jpeg” file in our latest snapshot backup!

restic -r s3:http://10.10.128.109:9000/desktop-clients find "*.jpeg"

Restore a file

Let’s restore one of those .jpeg files from the most recent backup.

restic -r s3:http://10.10.128.109:9000/desktop-clients restore latest --target d:\temp\restore --include Untitled_000076.jpeg

Restic maintenance

We need to keep our repository under control so that it doesn’t just grow indefinitely, so let’s do some maintenance.

First we need to tell restic how long we want backups to be kept; you do this with the forget command and its --keep options.

restic -r s3:http://10.10.128.109:9000/desktop-clients forget --keep-hourly 4 --keep-daily 30 --keep-weekly 52 --keep-monthly 24 --keep-yearly 10

Restic prune

Although the above command tells restic what to keep and what to forget, the actual data blocks are still there. Let’s tell restic to remove those too so the space used actually shrinks!

restic -r s3:http://10.10.128.109:9000/desktop-clients prune
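The two steps can also be combined: restic’s forget command accepts a --prune flag so forgetting and pruning happen in one run. A sketch using the same retention policy as above; DRYRUN=echo only prints the command for review:

```shell
# Forget and prune in a single run via forget's --prune flag.
# DRYRUN=echo prints the command instead of executing it.
DRYRUN=echo
$DRYRUN restic -r s3:http://10.10.128.109:9000/desktop-clients forget --keep-hourly 4 --keep-daily 30 --keep-weekly 52 --keep-monthly 24 --keep-yearly 10 --prune
```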

Restic consistency check

How do you know all your backup containers and files are ok? Restic has a check command for this! For a simple check, this command suffices.

restic -r s3:http://10.10.128.109:9000/desktop-clients check

Restic check with reading archives

restic -r s3:http://10.10.128.109:9000/desktop-clients check --read-data

Restic check with reading some archives

Reading all archives can take a really, really long time, partly depending on the speed of the internet connections involved. To avoid having to do this every time, you can tell restic to check only a percentage of the archives; it will select random archives to check each time!

restic -r s3:http://10.10.128.109:9000/desktop-clients check --read-data --read-data-subset=5%

Conclusion

This concludes the basic series of the DIY Backup 2022 episodes! As mentioned in the video I will look into doing some more advanced videos in the future (such as backup scheduling and such) but with the above videos you should be able to build your own solution and get up and running!
