"Official" docker backup strategy - what about consistency?


The suggested strategy to manage and back up data in Docker looks something like this:

docker run --name mysqldata -v /var/lib/mysql busybox true
docker run --name mysql --volumes-from mysqldata mysql
docker run --volumes-from mysqldata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql

However, if I back up a running container that way, I won't get a consistent backup, will I? I'm aware of tools like mysqldump, but what if I need to back up, say, a folder to which files are constantly added and removed?

1 Answer

The underlying problem you are facing, i.e. backing up files that are constantly changing, is independent of Docker. Use a tool such as rsnapshot or dirvish to make backups into a volume, and then use the approach you mentioned above to move those backups somewhere safer, such as Amazon S3 or Glacier, depending on your reliability requirements.
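As a rough sketch of that flow (the image name my-rsnapshot-image, the /srv/snapshots host path, and the S3 bucket are placeholders, and the rsnapshot config inside the image is assumed to use /snapshots as its snapshot_root and back up /var/lib/mysql):

# take an rsnapshot run against the data volume, writing the snapshot tree to a host directory
docker run --rm --volumes-from mysqldata -v /srv/snapshots:/snapshots \
    my-rsnapshot-image rsnapshot -c /etc/rsnapshot.conf daily

# then ship the newest snapshot off-host, e.g. with the AWS CLI
tar czf mysql-daily0.tar.gz -C /srv/snapshots daily.0
aws s3 cp mysql-daily0.tar.gz s3://my-backup-bucket/mysql/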

Whether you mount volumes from another container or from the host VM using the -v switch, changes to the files are reflected in all containers (and on the host VM) in more or less real time. (There is some delay because of the AUFS layer Docker uses on top of the host filesystem, but it is not huge.) If the backup container were running perpetually, it could keep taking backups, and those backups would always reflect the latest files as seen by the mysql container.
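To illustrate the perpetually running backup container idea, a minimal sketch (the hourly interval and the /srv/backups host path are arbitrary choices, not part of the setup above):

# long-running container that archives the shared volume on a fixed schedule
docker run -d --name backup --volumes-from mysqldata -v /srv/backups:/backup ubuntu \
    sh -c 'while true; do tar czf /backup/mysql-$(date +%Y%m%d%H%M).tar.gz /var/lib/mysql; sleep 3600; done'

Each archive reflects whatever the mysql container has written to disk at that moment, so the consistency caveat from the question still applies; the loop only automates the snapshotting.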

Edit: For clarity.