I’m trying to come up with an elegant way of backing up my Docker volumes. I don’t really care about the host or the data on it, because everything I do happens inside my Docker containers and their mapped volumes. Some containers use bind-mounted host paths, while others use plain Docker volumes.

I’ve started writing a script that inspects the containers, reads all the mount paths and then spins up another container to tar all the mounted paths.

docker run --rm --volumes-from "$container_name" -v /data/backup:/backup busybox tar cvf "/backup/$container_name.tar" $paths
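
The $paths bit comes from docker inspect, roughly like this (nothing fancy, just the .Mounts field from the container’s metadata):

# Collect every mount destination of the container into one space-separated list
paths=$(docker inspect --format '{{range .Mounts}}{{.Destination}} {{end}}' "$container_name")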

So far so good: this works, and I can write all backups to my storage and later sync them to an offsite backup space.

But error handling, (nice) logging, and notifications via ntfy on success/errors/problems are going to suck to do in a bash script. Local backup file rollover and log file rollover also just suck if I have to handle all of that by hand. I’m able to use other languages to write this backup utility, but I don’t want to start the project if a ready-made solution already exists.
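
Just to illustrate: the ntfy call itself is a one-liner, roughly like the sketch below (the topic name is a placeholder and run_backup stands in for the actual backup step); it’s wrapping retries, logging and rotation around it that gets ugly in bash.

# Hypothetical failure notification via ntfy; "backup-alerts" is a placeholder topic on ntfy.sh
if ! run_backup; then
    curl -s -H "Title: Backup failed" -d "Backup of $container_name failed" https://ntfy.sh/backup-alerts
fi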

So the question: is there a utility that can simply schedule arbitrary bits of script, write nicer logs for those script bits, handle file rollover, and run another script on success/error?
All the backup programs I can find are more focused on backing up directories, permissions and so on.

  • mbirth@lemmy.ml · 20 days ago
    As I’m not using a Swarm or cluster, I consider Docker volumes volatile and use bind mounts where I need persistence. All my configuration and other persistent data lives under /opt/docker/<container>/<foldername>, and /opt/docker gets backed up regularly using restic.
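
    The restic side is just a plain backup run plus retention, roughly along these lines (repository path, password file and keep policy here are placeholders, not my exact setup):

    # Hypothetical restic invocation; /srv/restic-repo and /root/.restic-pass are placeholders
    restic -r /srv/restic-repo --password-file /root/.restic-pass backup /opt/docker
    restic -r /srv/restic-repo --password-file /root/.restic-pass forget --keep-daily 7 --keep-weekly 4 --prune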