Project author: TwoStoryRobot

Project description:
Container that regularly backs up volumes to S3
Primary language: Shell
Clone URL: git://github.com/TwoStoryRobot/docker-dir-to-s3.git
Created: 2017-12-07T00:44:03Z
Project homepage: https://github.com/TwoStoryRobot/docker-dir-to-s3

License: MIT License

docker-dir-to-s3

Docker container that performs daily backups from a local directory to S3

How to Use

Basic usage is as follows:

  docker run \
    -e "AWS_BUCKET=my-container" \
    -e "AWS_ACCESS_KEY_ID=MYACCESSKEYID" \
    -e "AWS_SECRET_ACCESS_KEY=3jk2kj3lkll+EXAMPLE/k213jl12k3kj213lkj213ll" \
    -e "AWS_DEFAULT_REGION=ca-central-1" \
    -v /my_local_dir/:/upload/:ro \
    twostoryrobot/dir-to-s3

This will start cron in the background, which runs an upload script daily.
Any volumes that you mount to the container's /upload/ directory will be
compressed into a tar archive using bzip2 compression. The resulting file is
uploaded to S3 with the filename <timestamp>.tar.gz.
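The daily job can be sketched as a short shell script. This is a minimal illustration, not the image's actual script: the `/upload/` path and the environment variable names come from the usage example above, while everything else (the temp-file handling, the exact timestamp format) is an assumption.

```shell
#!/bin/sh
# Sketch of the daily backup step: archive /upload/ with bzip2, push to S3.
# The aws CLI reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and
# AWS_DEFAULT_REGION from the environment; AWS_BUCKET names the target bucket.
set -e

TIMESTAMP=$(date -u +%Y-%m-%dT%H-%M-%SZ)            # exact format is an assumption
ARCHIVE="${TMPDIR:-/tmp}/${TIMESTAMP}.tar.gz"

# -j selects bzip2 compression; -C keeps archive paths relative to /upload/
tar -cjf "$ARCHIVE" -C /upload/ .

# Only the s3:PutObject permission on the bucket is needed for this call
aws s3 cp "$ARCHIVE" "s3://${AWS_BUCKET}/${TIMESTAMP}.tar.gz"

rm -f "$ARCHIVE"
```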

You can mount a single volume from another container or from your filesystem
to the root of the /upload/ directory, or you can back up multiple volumes
by mounting them as subdirectories, e.g. /upload/dir1/, /upload/dir2/, and so
on.
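For example, two named Docker volumes can be backed up into a single archive by mounting each as its own subdirectory. The volume names `app-data` and `db-data` here are placeholders:

```shell
docker run \
  -e "AWS_BUCKET=my-container" \
  -e "AWS_ACCESS_KEY_ID=MYACCESSKEYID" \
  -e "AWS_SECRET_ACCESS_KEY=3jk2kj3lkll+EXAMPLE/k213jl12k3kj213lkj213ll" \
  -e "AWS_DEFAULT_REGION=ca-central-1" \
  -v app-data:/upload/dir1/:ro \
  -v db-data:/upload/dir2/:ro \
  twostoryrobot/dir-to-s3
```

Both directories end up in the same <timestamp>.tar.gz archive.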

You can control the directory inside the container that holds the temporary tar
file by setting the TMPDIR environment variable. This might be useful if your
backup will be very large and needs to be stored on a network volume.
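For instance, TMPDIR can point at a bind-mounted network volume so the temporary archive never lands on local disk. The `/mnt/nfs-scratch` host path and `/scratch` container path are placeholders:

```shell
docker run \
  -e "TMPDIR=/scratch" \
  -v /mnt/nfs-scratch/:/scratch/ \
  -v /my_local_dir/:/upload/:ro \
  twostoryrobot/dir-to-s3
  # plus the AWS_* variables shown in the basic usage example
```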

Security Best Practices

To perform the backup, only one permission is needed: s3:PutObject. You should
create a separate Managed Policy with this permission, which can then be
assigned to a user. Additionally, you should limit the permission to only the
bucket you plan to upload to. Your policy will look something like this:

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "Stmt1291030123918",
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": [
          "arn:aws:s3:::my-container/*"
        ]
      }
    ]
  }
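Assuming the policy document above is saved as `put-only-policy.json`, the managed policy can be created with the AWS CLI. The policy name is a placeholder:

```shell
# Create a managed policy from the JSON document above
aws iam create-policy \
  --policy-name dir-to-s3-put-only \
  --policy-document file://put-only-policy.json
```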

You should create a separate user in your AWS IAM console to perform the
backups. This user does not need a password, so it should only have the
'Programmatic access' access type. AWS will provide an AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY for that user. Assign the Managed Policy that you
created to this user, and grant it no other permissions.
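The same setup can be done with the AWS CLI. The user name and the account ID in the policy ARN are placeholders; the policy ARN is printed by `aws iam create-policy` when the policy is created:

```shell
# Create a backup-only user with programmatic credentials
aws iam create-user --user-name dir-to-s3-backup
aws iam create-access-key --user-name dir-to-s3-backup   # prints the key pair

# Attach only the PutObject-only managed policy, nothing else
aws iam attach-user-policy \
  --user-name dir-to-s3-backup \
  --policy-arn arn:aws:iam::123456789012:policy/dir-to-s3-put-only
```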

In the event that your container or host system is compromised, an attacker
will have a key which can only write objects to the bucket and cannot read the
backups or perform any other operations on your AWS account. They could
potentially use this to overwrite files, so make sure you enable versioning on
the bucket as well.
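Versioning can be enabled from the AWS CLI as well, so that an overwritten backup remains recoverable as an earlier object version:

```shell
# Enable versioning on the backup bucket
aws s3api put-bucket-versioning \
  --bucket my-container \
  --versioning-configuration Status=Enabled
```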

It’s also good practice to only mount your volumes as read-only (:ro), since
this container does not need to write to any volumes.