Backup Strategies
Backup to Amazon S3 Bucket
If you have your own AWS account, you can send files and backups directly to an S3 bucket.
Install the AWS CLI inside your Stratus instance using the bundled installer, following the "Install the AWS CLI without Sudo (Linux, macOS, or Unix)" method:
$ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
$ unzip awscli-bundle.zip
$ ./awscli-bundle/install -b ~/bin/aws
Run export to make the ~/bin path available: export PATH=~/bin:$PATH
Add the same line to the ~/.bash_profile file so it will be available every time you log in. By default, the ~/.bash_profile file does not exist and may need to be created.
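For example, the line can be appended with:
$ echo 'export PATH=~/bin:$PATH' >> ~/.bash_profile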
Run aws configure and follow the prompts, entering the appropriate region and access keys for your AWS account.
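The prompts look similar to the following; the values shown are placeholders:
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRcYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json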
Run aws s3 ls to list your existing buckets; the AWS CLI can likewise manage other AWS resources.
For example, a new bucket called my-magento-backup can be created with:
$ aws s3 mb s3://my-magento-backup
make_bucket: my-magento-backup
$ aws s3 ls
2019-03-19 13:23:50 my-magento-backup
You can create backups in Magento 2 with the built-in backup system, or with n98-magerun2 and tar. To create a database dump and a tar of the web root:
$ n98-magerun2 db:dump s3_backup.sql
$ tar -zcvf s3_backup_3_19_2019.tar.gz /srv/public_html/
Archiving a Magento store can take a long time.
To upload the backup to AWS:
$ aws s3 ls
2019-03-19 13:23:50 my-magento-backup
$ aws s3 cp s3_backup_3_19_2019.tar.gz s3://my-magento-backup
upload: ./s3_backup_3_19_2019.tar.gz to s3://my-magento-backup/s3_backup_3_19_2019.tar.gz
The file will upload with progress details. Once uploaded, the archive can be seen within the bucket in the AWS Console.
Learn more about S3 commands from Amazon.
A script can be written to create a backup and upload it to S3, and a cron job can be scheduled to run the script automatically.
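The exact script depends on the store; below is a minimal sketch matching the output shown, assuming n98-magerun2 and the AWS CLI are on the PATH. The bucket name, paths, and date format are illustrative.
#!/bin/bash
# Minimal S3 backup sketch -- bucket name, paths, and date format are
# illustrative; adjust for your instance.
# Schedule with cron, e.g.: 0 3 * * * /srv/backup.sh
cd /srv
DATE=$(date +%m-%d-%Y)
DB_DUMP="db-backup-${DATE}.sql"
ARCHIVE="${DATE}-backup.tar.gz"
BUCKET="s3://my-magento-backup"

echo "Creating database dump..."
n98-magerun2 --root-dir=/srv/public_html db:dump "${DB_DUMP}"

echo "Creating tar archive of files and database dump..."
tar -zcf "${ARCHIVE}" /srv/public_html/ "${DB_DUMP}"

echo "Uploading to S3..."
aws s3 cp "${ARCHIVE}" "${BUCKET}/"

echo "Removing local files and cleaning up..."
rm -f "${DB_DUMP}" "${ARCHIVE}"

echo "Done!"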
The above would output:
$ ./backup.sh
Creating database dump...
Dump MySQL Database
Start dumping database db_cded1u2ypqu to file db-backup-03-19-2019.sql
Finished
Creating tar archive of files and database dump...
tar: Removing leading `/' from member names
Uploading to S3...
upload: ./03-19-2019-backup.tar.gz to s3://my-magento-backup/03-19-2019-backup.tar.gz
Removing local files and cleaning up...
Done!
Backup to Dropbox
The official Dropbox CLI utility is not currently supported on Stratus. A third-party script may be used to push files to a Dropbox folder with the proper access tokens.
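One example is the third-party Dropbox-Uploader shell script; the destination folder below is illustrative, and the first run walks through generating and storing an access token:
$ curl -O https://raw.githubusercontent.com/andreafabrizi/Dropbox-Uploader/master/dropbox_uploader.sh
$ chmod +x dropbox_uploader.sh
$ ./dropbox_uploader.sh upload backup_3_19_2019.tar.gz /backups/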
Backup to Google Cloud
To back up a Stratus instance to Google Cloud, an account must be created. Next, create a project from the Google Cloud Console; my-backups-256118 is used as an example here.
This tutorial uses gsutil, a Python application that accesses Google Cloud Storage from the command line.
- Begin by downloading the files from Google and extract them locally:
$ wget https://storage.googleapis.com/pub/gsutil.tar.gz
$ tar -zxvf gsutil.tar.gz
- Next, configure gsutil to connect to Google:
$ ./gsutil/gsutil config
This command will create a boto config file at /srv/.boto containing your credentials, based on your responses to the following questions.
Please navigate your browser to the following URL:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=909320924072.apps.googleusercontent.com&access_type=offline
In your browser you should see a page that requests you to authorize access to Google Cloud Platform APIs and Services on your behalf. After you approve, an authorization code will be displayed.
Enter the authorization code: [Authorization Code]
Please navigate your browser to https://cloud.google.com/console#/project, then find the project you will use, and copy the Project ID string from the second column. Older projects do not have Project ID strings. For such projects, click the project and then copy the Project Number listed under that project.
What is your project-id? [Project ID]
gsutil developers rely on user feedback to make improvements to the tool. Would you like to send anonymous usage statistics to help improve gsutil? [y/N] y
Boto config file "/srv/.boto" created. If you need to use a proxy to access the Internet please see the instructions in that file.
- Next, create a bucket. Bucket names must be globally unique, as all buckets share a global namespace at Google. Standard storage is used in this example; Google offers other storage classes as well.
$ ./gsutil/gsutil mb -c standard -l US -p my-backups-256118 gs://mark-stratus-backups
Creating gs://[bucket-name]...
- Create a manual backup by switching to the document root, dumping a copy of the database, and then creating an archive:
$ cd public_html/
~/public_html$ n98-magerun2 db:dump backup.sql
~/public_html$ tar -zcvf backup_10_16_2019.tar.gz /srv/public_html/
- Once archived, copy the backup to Google:
$ ./gsutil/gsutil cp /srv/public_html/backup_10_16_2019.tar.gz gs://[bucket-name]
Copying file:///srv/public_html/[backup-name].tar.gz [Content-Type=application/x-tar]...
\ [1 files][126.1 MiB/126.1 MiB]
Operation completed over 1 objects/126.1 MiB.
~$ ./gsutil/gsutil ls -l gs://[bucket-name]
132217623  2019-10-16T18:52:52Z  gs://[bucket-name]/[backup-name].tar.gz
TOTAL: 1 objects, 132217623 bytes (126.09 MiB)
A script can be written that creates a backup and uploads it to Google, and a cron job can be scheduled to run the script automatically.
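As with S3, the exact script depends on the store; below is a minimal sketch, assuming gsutil was extracted to /srv/gsutil and configured as above. The bucket name, paths, and date format are illustrative.
#!/bin/bash
# Minimal Google Cloud Storage backup sketch -- bucket name, paths, and
# date format are illustrative; adjust for your instance.
# Schedule with cron, e.g.: 0 3 * * * /srv/gcs-backup.sh
cd /srv
DATE=$(date +%m-%d-%Y)
DB_DUMP="db-backup-${DATE}.sql"
ARCHIVE="${DATE}-backup.tar.gz"
BUCKET="gs://mark-stratus-backups"

echo "Creating database dump..."
n98-magerun2 --root-dir=/srv/public_html db:dump "${DB_DUMP}"

echo "Creating tar archive of files and database dump..."
tar -zcf "${ARCHIVE}" /srv/public_html/ "${DB_DUMP}"

echo "Uploading to Google Cloud Storage..."
/srv/gsutil/gsutil cp "${ARCHIVE}" "${BUCKET}/"

echo "Removing local files and cleaning up..."
rm -f "${DB_DUMP}" "${ARCHIVE}"

echo "Done!"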
Backup to Backblaze
Stratus can be backed up to Backblaze B2 using Restic, an open-source backup tool.
- Create a Backblaze account.
- Create a new bucket. Note the Bucket Unique Name for later use.
- Go to the App Keys section and add a new key. The key can be granted access to all buckets or optionally restricted to a specific bucket. Note the keyID and applicationKey for later use.
- Go to GitHub to get the latest pre-compiled binary for Restic. Download and extract it into /srv, rename the program to "restic", and grant the file execution permission:
$ wget https://github.com/restic/restic/releases/download/v0.9.6/restic_0.9.6_linux_amd64.bz2
$ bzip2 -d restic_0.9.6_linux_amd64.bz2
$ mv restic_0.9.6_linux_amd64 restic
$ chmod u+x restic
- Create a file called restic-pw.txt containing a random string; Restic uses this string as the repository password. Then export the B2 credentials and repository settings:
$ export B2_ACCOUNT_ID="[keyID]"
$ export B2_ACCOUNT_KEY="[applicationKey]"
$ export RESTIC_REPOSITORY="b2:magemojo-b2-kb"
$ export RESTIC_PASSWORD_FILE="restic-pw.txt"
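For example, the password file could be generated with openssl (assuming openssl is available on the instance):
$ openssl rand -base64 32 > restic-pw.txt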
- Initialize the new bucket:
$ ./restic -r b2:magemojo-b2-kb init
created restic repository b44062684d at b2:magemojo-b2-kb
Please note that knowledge of your password is required to access the repository. Losing your password means that your data is irrecoverably lost.
- Perform a backup. On a clean Magento 2 installation, this takes about two minutes:
$ ./restic -r b2:magemojo-b2-kb backup public_html/
repository b4406268 opened successfully, password is correct
created new cache in /srv/.cache/restic
Files:         74683 new, 0 changed, 0 unmodified
Dirs:          0 new, 0 changed, 0 unmodified
Added to the repo: 402.722 MiB
processed 74683 files, 507.124 MiB in 1:40
snapshot f0263a11 saved
If the same command is run again, Restic creates another snapshot of the data. Because Restic uses de-duplication, subsequent backups are faster: only files that are new or have changed since the last snapshot are sent to B2.
All snapshots can be seen with the following command:
$ ./restic -r b2:magemojo-b2-kb snapshots
repository b4406268 opened successfully, password is correct
ID Time Host Tags Paths
------------------------------------------------------------------------------------
f0263a11 2019-12-18 19:00:06 hostname /srv/public_html
------------------------------------------------------------------------------------
1 snapshots
To restore a specific snapshot, use the -t flag to specify a restore location:
$ ./restic -r b2:magemojo-b2-kb restore f0263a11 -t /tmp/restore
repository b4406268 opened successfully, password is correct
restoring <Snapshot f0263a11 of [/srv/public_html] at 2019-12-18 19:00:06.135398979 +0000 UTC by username> to /tmp/restore
Next, a script can be created for a backup cron job.
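A minimal sketch, assuming the setup above; the keyID, applicationKey, and bucket name are placeholders, and the retention policy is illustrative:
#!/bin/bash
# Minimal Restic backup sketch for a cron job -- credentials, bucket name,
# and retention policy are placeholders; adjust for your instance.
# Schedule with cron, e.g.: 0 3 * * * /srv/restic-backup.sh
export B2_ACCOUNT_ID="[keyID]"
export B2_ACCOUNT_KEY="[applicationKey]"
export RESTIC_REPOSITORY="b2:magemojo-b2-kb"
export RESTIC_PASSWORD_FILE="/srv/restic-pw.txt"

# Take a new snapshot of the web root; de-duplication keeps this fast.
/srv/restic backup /srv/public_html/

# Keep only the last 7 snapshots and prune unreferenced data.
/srv/restic forget --keep-last 7 --prune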