
Backup Strategies

Backup to Amazon S3 Bucket

If you have your own AWS account, you can send files and backups directly to an S3 bucket.

Install the AWS CLI inside your Stratus instance using the bundled installer, following the "Install the AWS CLI without Sudo (Linux, macOS, or Unix)" method:

$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ ./awscli-bundle/install -b ~/bin/aws

Run export to add ~/bin to your PATH:

export PATH=~/bin:$PATH

Add the same line to the ~/.bash_profile file so it is available every time you log in. By default the ~/.bash_profile file does not exist, so create it if needed.
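
For example, you can append the line in one step (this creates the file if it does not exist):

echo 'export PATH=~/bin:$PATH' >> ~/.bash_profile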

Run aws configure and follow the prompts, entering the appropriate region and access keys for your AWS account.
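
The prompts look similar to the following; the values here are AWS's documented example credentials, so substitute your own:

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json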

Now you can run commands like aws s3 ls to list your existing buckets and manage other AWS resources. In this example, we will create a new bucket named my-magento-backup:

magemojo@svc-magentoperformance-io:~$ aws s3 mb s3://my-magento-backup
make_bucket: my-magento-backup
magemojo@svc-magentoperformance-io:~$ aws s3 ls
2019-03-19 13:23:50 my-magento-backup

Now we can start uploading a backup. As a reminder, you can create backups in Magento 2 with the built-in system. Here we'll simply create a database dump and tar up the web root. If you have a very large site, creating an archive can take a long time.

# from the document root
$ n98-magerun2 db:dump s3_backup.sql
$ tar -zcvf s3_backup_3_19_2019.tar.gz /srv/public_html/
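
As an aside, the built-in system mentioned above can produce a database backup on its own. A minimal example, assuming a Magento 2 release that still ships the setup:backup command (it writes the dump under var/backups/):

$ php bin/magento setup:backup --db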

Now that we have an archive, we can upload it to the S3 bucket.

magemojo@svc-magentoperformance-io:~$ aws s3 ls
2019-03-19 13:23:50 my-magento-backup
magemojo@svc-magentoperformance-io:~$ aws s3 cp s3_backup_3_19_2019.tar.gz s3://my-magento-backup
upload: ./s3_backup_3_19_2019.tar.gz to s3://my-magento-backup/s3_backup_3_19_2019.tar.gz

You will see the upload progress in the terminal. Once uploaded, you should also see the archive within the bucket in the AWS Console.
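
Restoring later is the same cp command in reverse, copying from the bucket back to the instance:

magemojo@svc-magentoperformance-io:~$ aws s3 cp s3://my-magento-backup/s3_backup_3_19_2019.tar.gz .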

Well done! You can learn more about S3 commands from Amazon's documentation. From here you can put together a script that creates a backup and uploads it to S3, then set a cron job to run it automatically, similar to the script below:

#!/bin/bash
#S3 Backup Example

date="$(date "+%m-%d-%Y")"
bucket_name="my-magento-backup"
magento_root="/srv/public_html"

#create database dump inside the web root so it is included in the archive
echo "Creating database dump..."
/usr/local/bin/n98-magerun2 --root-dir="$magento_root" db:dump "$magento_root/db-backup-$date.sql"

#create archive of webroot, excluding var; quoting the exclude pattern lets tar expand it
echo "Creating tar archive of files and database dump..."
tar --exclude="$magento_root/var/*" -zcf "$date-backup.tar.gz" "$magento_root/"

#upload to s3
echo "Uploading to S3..."
aws s3 cp "$date-backup.tar.gz" "s3://$bucket_name"

#clean up
echo "Removing local files and cleaning up..."
rm "$date-backup.tar.gz"
rm "$magento_root/db-backup-$date.sql"

echo "Done!"

The above would output:

magemojo@svc-magentoperformance-io:~$ ./backup.sh
Creating database dump...


  Dump MySQL Database


Start dumping database db_cded1u2ypqu to file db-backup-03-19-2019.sql
Finished
Creating tar archive of files and database dump...
tar: Removing leading `/' from member names
Uploading to S3...
upload: ./03-19-2019-backup.tar.gz to s3://my-magento-backup/03-19-2019-backup.tar.gz
Removing local files and cleaning up...
Done!
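
As a sketch, a crontab entry (added via crontab -e) that runs the script nightly at 3 AM might look like the following; the /srv/backup.sh path is an assumption based on where you saved the script, and note that cron's limited PATH may not include ~/bin, so either export PATH inside the script or call aws by its full path:

0 3 * * * /srv/backup.sh >> /srv/backup.log 2>&1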

Backup to Dropbox

The official Dropbox CLI utility is not currently supported on Stratus, but you can use a third-party script to push files to a Dropbox folder with the proper access tokens. See https://github.com/andreafabrizi/Dropbox-Uploader for more details.
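
Once the script is configured with your access token, an upload is a single command. A minimal sketch, assuming the script was downloaded to your home directory and an archive named backup.tar.gz exists:

./dropbox_uploader.sh upload backup.tar.gz /backup.tar.gz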

Backup to Google Cloud

To back up your Stratus instance to Google Cloud, you'll first want to create an account if you do not yet have one. Next, create a project from the Google Cloud Console; in this case, I'll be using my-backups-256118.

We will be using gsutil for this tutorial. gsutil is a Python application that accesses Google Cloud Storage from the command line.

We will start by downloading the files from Google and extracting them locally:

mark@svc-m2-markmuyskens-com:~$ wget https://storage.googleapis.com/pub/gsutil.tar.gz
mark@svc-m2-markmuyskens-com:~$ tar -zxvf gsutil.tar.gz

Next, we will configure gsutil to connect to Google:

mark@svc-m2-markmuyskens-com:~$ ./gsutil/gsutil config
This command will create a boto config file at /srv/.boto containing
your credentials, based on your responses to the following questions.
Please navigate your browser to the following URL:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=909320924072.apps.googleusercontent.com&access_type=offline
In your browser you should see a page that requests you to authorize access to Google Cloud Platform APIs and Services on your behalf. After you approve, an authorization code will be displayed.

Enter the authorization code: [REDACTED]

Please navigate your browser to https://cloud.google.com/console#/project,
then find the project you will use, and copy the Project ID string from the
second column. Older projects do not have Project ID strings. For such projects,
 click the project and then copy the Project Number listed under that project.

What is your project-id? my-backups-256118

gsutil developers rely on user feedback to make improvements to the
tool. Would you like to send anonymous usage statistics to help
improve gsutil? [y/N] y

Boto config file "/srv/.boto" created. If you need to use a proxy to
access the Internet please see the instructions in that file.
mark@svc-m2-markmuyskens-com:~$

Next, you will want to create a bucket. I will be using "mark-stratus-backups" as my bucket name. Bucket names must be globally unique, as they share a single namespace across Google Cloud Storage. We are also using the Standard storage class in this example; a list of storage classes can be found in the Google Cloud Storage documentation.

mark@svc-m2-markmuyskens-com:~$ ./gsutil/gsutil mb -c standard -l US -p my-backups-256118 gs://mark-stratus-backups
Creating gs://mark-stratus-backups/...
mark@svc-m2-markmuyskens-com:~$

We can then create a manual backup by switching to the document root, dumping a copy of the database, and then creating an archive:

mark@svc-m2-markmuyskens-com:~$ cd public_html/
mark@svc-m2-markmuyskens-com:~/public_html$ n98-magerun2 db:dump backup.sql
mark@svc-m2-markmuyskens-com:~/public_html$ tar -zcvf backup_10_16_2019.tar.gz /srv/public_html/

Once archived, it can be copied to Google; running the ls command afterward shows that the file has been copied successfully to the bucket:

mark@svc-m2-markmuyskens-com:~$ ./gsutil/gsutil cp /srv/public_html/backup_10_16_2019.tar.gz gs://mark-stratus-backups
Copying file:///srv/public_html/backup_10_16_2019.tar.gz [Content-Type=application/x-tar]...
\ [1 files][126.1 MiB/126.1 MiB]                                                
Operation completed over 1 objects/126.1 MiB.                                    
mark@svc-m2-markmuyskens-com:~$ ./gsutil/gsutil ls -l gs://mark-stratus-backups
 132217623  2019-10-16T18:52:52Z  gs://mark-stratus-backups/backup_10_16_2019.tar.gz
TOTAL: 1 objects, 132217623 bytes (126.09 MiB)
mark@svc-m2-markmuyskens-com:~$
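
Restoring works the same way in reverse; gsutil cp can pull the archive back down from the bucket:

mark@svc-m2-markmuyskens-com:~$ ./gsutil/gsutil cp gs://mark-stratus-backups/backup_10_16_2019.tar.gz /srv/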

We can also see the file in the Google Cloud Console.

That's it! From here you can put together a script that creates a backup and uploads it to Google, then set a cron job to run it automatically, similar to the script below:

#!/bin/bash
#Google Cloud Backup Example

date="$(date "+%m-%d-%Y")"
bucket_name="mark-stratus-backups"
magento_root="/srv/public_html"

#create database dump inside the web root so it is included in the archive
echo "Creating database dump..."
/usr/local/bin/n98-magerun2 --root-dir="$magento_root" db:dump "$magento_root/db-backup-$date.sql"

#create archive of webroot, excluding var; quoting the exclude pattern lets tar expand it
echo "Creating tar archive of files and database dump..."
tar --exclude="$magento_root/var/*" -zcf "$date-backup.tar.gz" "$magento_root/"

#upload to Google
echo "Uploading to Google..."
./gsutil/gsutil cp "$date-backup.tar.gz" "gs://$bucket_name"

#clean up
echo "Removing local files and cleaning up..."
rm "$date-backup.tar.gz"
rm "$magento_root/db-backup-$date.sql"

echo "Done!"

Backup to Backblaze

Backing up Stratus to Backblaze B2 can be performed using a tool called Restic. To get started, head over to Backblaze and create an account.

Once you have an account, the first thing you will want to do is create a bucket. Make a note of the Bucket Unique Name for later use.

Next, head over to the App Keys section and add a new key. The key can be created with access to all buckets, or optionally restricted to a specific bucket.

When you create the application key, you will be shown a keyID and applicationKey. Note these for later use: the applicationKey will not be displayed again, and you'll have to create a new App Key if you lose it.

We're done with the Backblaze interface now. Next we will head over to GitHub, where we can grab the latest pre-compiled binary (as of this article, restic_0.9.6_linux_amd64.bz2).

I am going to download it directly into /srv, extract it, rename it to restic for simplicity, and then make it executable:

mark@svc-m2-markmuyskens-com:~$ wget https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2
mark@svc-m2-markmuyskens-com:~$ bzip2 -d restic_0.9.6_linux_amd64.bz2 
mark@svc-m2-markmuyskens-com:~$ mv restic_0.9.6_linux_amd64 restic                   
mark@svc-m2-markmuyskens-com:~$ chmod u+x restic 
mark@svc-m2-markmuyskens-com:~$ 

Next, since the goal is to automate backups, we are going to store the required information in environment variables to make restic easier to use. We are also going to create a file called restic-pw.txt, which will hold a random string used as the repository password.

mark@svc-m2-markmuyskens-com:~$ export B2_ACCOUNT_ID="000d957bb8071340000000006"
mark@svc-m2-markmuyskens-com:~$ export B2_ACCOUNT_KEY="K000WYtFayi9GU5ATyVmm6T+UFH+v04"
mark@svc-m2-markmuyskens-com:~$ export RESTIC_REPOSITORY="b2:magemojo-b2-kb"
mark@svc-m2-markmuyskens-com:~$ export RESTIC_PASSWORD_FILE="restic-pw.txt"
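
The restic-pw.txt file is not created for you; it just needs to contain a password string. One way to generate it with a random value, assuming openssl is available on the instance:

mark@svc-m2-markmuyskens-com:~$ openssl rand -base64 32 > restic-pw.txt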

Next, it's time to initialize the restic repository in the new bucket:

mark@svc-m2-markmuyskens-com:~$ ./restic -r b2:magemojo-b2-kb init
created restic repository b44062684d at b2:magemojo-b2-kb

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.
mark@svc-m2-markmuyskens-com:~$

Now it's time to perform our first backup. I'll be backing up my full Magento installation, which in this case is located in the public_html directory. On a clean Magento 2 installation, this takes about 2 minutes.

mark@svc-m2-markmuyskens-com:~$ ./restic -r b2:magemojo-b2-kb backup public_html/
repository b4406268 opened successfully, password is correct
created new cache in /srv/.cache/restic

Files:       74683 new,     0 changed,     0 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 402.722 MiB

processed 74683 files, 507.124 MiB in 1:40
snapshot f0263a11 saved
mark@svc-m2-markmuyskens-com:~$ 

If you run the same command once again, Restic will create another snapshot of your data. Restic uses deduplication, so subsequent backups are faster: only files that are new or have been modified since the last snapshot are sent to B2.

You can also list all snapshots with the following command:

mark@svc-m2-markmuyskens-com:~$ ./restic -r b2:magemojo-b2-kb snapshots
repository b4406268 opened successfully, password is correct
ID        Time                 Host                     Tags        Paths
------------------------------------------------------------------------------------
f0263a11  2019-12-18 19:00:06  svc-m2-markmuyskens-com              /srv/public_html
------------------------------------------------------------------------------------
1 snapshots
mark@svc-m2-markmuyskens-com:~$
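
Snapshots will accumulate over time. Restic can also enforce a retention policy; for example, the following keeps the last 7 daily snapshots and prunes data that is no longer referenced:

mark@svc-m2-markmuyskens-com:~$ ./restic -r b2:magemojo-b2-kb forget --keep-daily 7 --prune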

You can restore a specific snapshot, using the -t flag to specify a restore location:

mark@svc-m2-markmuyskens-com:~$ ./restic -r b2:magemojo-b2-kb restore f0263a11 -t /tmp/restore
repository b4406268 opened successfully, password is correct
restoring <Snapshot f0263a11 of [/srv/public_html] at 2019-12-18 19:00:06.135398979 +0000 UTC by mark@svc-m2-markmuyskens-com> to /tmp/restore
mark@svc-m2-markmuyskens-com:~$

That's it! From here you can put together a script that creates a snapshot and uploads it to B2, then set a cron job to run it automatically. We don't want to forget about the database, so we will include that as well:

#!/bin/bash
#Backblaze B2 Backup Example

magento_root="/srv/public_html"

export B2_ACCOUNT_ID="000d957bb8071340000000006"
export B2_ACCOUNT_KEY="K000WYtFayi9GU5ATyVmm6T+UFH+v04"
export RESTIC_REPOSITORY="b2:magemojo-b2-kb"
export RESTIC_PASSWORD_FILE="restic-pw.txt"

#create database dump inside the web root so it is included in the snapshot
echo "Creating database dump..."
/usr/local/bin/n98-magerun2 --root-dir="$magento_root" db:dump "$magento_root/db-backup.sql"

#upload to Backblaze
echo "Uploading to Backblaze B2..."
./restic -r "$RESTIC_REPOSITORY" backup "$magento_root"

#clean up
echo "Removing local db backup..."
rm "$magento_root/db-backup.sql"

echo "Done!"
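
Because the script exports its own credentials, it runs cleanly from cron. A sketch of a nightly crontab entry, assuming the script is saved as /srv/b2-backup.sh (the cd matters, since the script calls ./restic and uses a relative restic-pw.txt path):

0 2 * * * cd /srv && ./b2-backup.sh >> /srv/b2-backup.log 2>&1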