In this article I’m going to talk about how I set up two WordPress sites on one server. None of the articles I could find covered all the topics I was interested in. It’s not exactly groundbreaking; in fact it sounds simple. But the devil is in the details, and performing such a setup for the first time is pretty daunting. From setting up the DNS records, to getting file permissions to work, to getting the reverse proxy right, it’s all a complicated mess that I’m going to delineate for you (and me) here, while it’s still fresh in my head.
Features
- Two WordPress sites: https://www.example1.com and https://www.example2.com.
- Redirects from https://example1.com, http://example1.com, and http://www.example1.com to https://www.example1.com (and the same for example2.com).
- Let’s Encrypt certificates.
- WordPress debug logging with logrotate. I have ranted previously about why I think having debug logging turned on is important on live sites.
- Emails must work in WordPress.
- The containers must run as a service, so that they start with system start, and exit gracefully on system shutdown.
- Daily backups of all WordPress files and MySQL databases.
- Networks of the two sites should be isolated for security.
Shameless plug of my referral links
We start with a hosted server. This can be a dedicated server or a server slice. Hosting providers that I like are:
- DigitalOcean – Using this link you get a $200, 60-day credit to try their products. If you spend $25 after your credit expires, I will get $25 in credit.
- Hostinger – You don’t get anything with this link, except for a great hosting service. Again, I get a commission from this link if you stick with Hostinger for 45 days. Think of it as my reward for writing such a great article for you.
I have a Debian droplet on DigitalOcean with 2GB of RAM, but with some tweaking it’s possible to squeeze two low-traffic WordPress sites in 1GB, if you really need to keep the monthly costs down.
First let’s get the (DNS) record straight
The first order of business is to set up the DNS records. We’re going to need two A records to point to our server’s IP, and two CNAME records that will be www. aliases of the bare domains. Oh, and we’ll need some NS records to point to the domain name provider (in this case DigitalOcean).
| Type | Hostname | Value | TTL |
|---|---|---|---|
| A | example1.com | (my server’s IP) | 1800 |
| A | example2.com | (my server’s IP) | 1800 |
| CNAME | www.example1.com | alias of example1.com. | 1800 |
| CNAME | www.example2.com | alias of example2.com. | 1800 |
| NS | example1.com | ns1.digitalocean.com | 14400 |
| NS | example1.com | ns2.digitalocean.com | 14400 |
| NS | example1.com | ns3.digitalocean.com | 14400 |
| NS | example2.com | ns1.digitalocean.com | 14400 |
| NS | example2.com | ns2.digitalocean.com | 14400 |
| NS | example2.com | ns3.digitalocean.com | 14400 |
I like to keep the TTL (Time-To-Live) values low until I’m finished with my setup. I’ve set everything to 1800
seconds which is half an hour. Once I’m sure that everything is OK, I can increase the values to something larger like 14400
(four hours).
ssh
We are going to need to be able to log in to the server with a passwordless setup.
Log in as root to the new server via the admin console.
Create a regular user with adduser:
adduser yourusername
Then add the user to sudoers with:
usermod -aG sudo yourusername
(Replace yourusername
with your username.)
Once we are on our local machine, we check if we already have an ssh key with:
ls -al ~/.ssh/id_*.pub
If we don’t have any, we can generate one with:
ssh-keygen -t rsa -b 4096 -C "your_email@domain.com"
Once we are sure that there is a key, we upload it to the new server with:
ssh-copy-id yourusername@server_ip_address
(Again, replace yourusername with your remote username, and server_ip_address with your server’s IP address. You will need to enter the password you entered in adduser.)
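Optionally, we can add a host alias to ~/.ssh/config on our local machine, so that plain ssh server works as a shorthand (this alias name, server, is also what the backup script near the end of this article assumes; the hostname and username here are placeholders):

```
Host server
    HostName server_ip_address
    User yourusername
    IdentityFile ~/.ssh/id_rsa
```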
Docker compose
First, let’s install docker on the server by following the installation instructions for Debian. I am not going to repeat the instructions here. If you have chosen a different distro, follow the respective instructions.
We are going to create a docker-compose.yml
file. This file describes how the different docker containers are orchestrated.
We are going to need four containers:
- Two databases for the two sites.
- Two WordPress installations.
I’m first going to show some simple compose configs with the basics, then we are going to add the bells and whistles. Here goes:
Two databases, sitting in a server
version: "3.8"
name: droplet
networks:
net1:
net2:
volumes:
db1volume:
db2volume:
services:
db1:
image: mysql:8.2.0
networks:
- net1
restart: unless-stopped
expose:
- "3306"
volumes:
- db1volume:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: wp1_root_pass
MYSQL_DATABASE: wp_db1
MYSQL_USER: db1_user
MYSQL_PASSWORD: db1_pass
db2:
image: mysql:8.2.0
networks:
- net2
restart: unless-stopped
expose:
- "3306"
volumes:
- db2volume:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: wp2_root_pass
MYSQL_DATABASE: wp_db2
MYSQL_USER: db2_user
MYSQL_PASSWORD: db2_pass
There’s already a lot going on here:
- We are defining our composition to have a name. Here I am using droplet. This will also be the prefix for the names of all the containers.
- We are defining two networks, net1 and net2. Only containers on the same network can talk to each other. We don’t want our example1.com WordPress to have any access to the MySQL database of example2.com.
- Next we are defining two identical mysql:8.2.0 containers, named db1 and db2.
- Each of the two databases is put in its respective network (net1 and net2).
- We want a database that has crashed to restart, unless we explicitly stop it.
- We are going to let the databases listen on TCP port 3306. This is the port where WordPress will connect. All other ports are firewalled.
- We are going to mount the /var/lib/mysql directories into docker volumes named db1volume and db2volume.
- Next we are going to use some environment variables that the startup script inside the mysql image recognizes. These will set up a root password, a new empty database, and a username/password pair that WordPress will use to access this new database. The startup script will do all the CREATE DATABASE, CREATE USER and GRANT magic for us. You can learn more about the MySQL docker image here.
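For the curious, what the entrypoint does with these variables is roughly equivalent to the following SQL (a simplified sketch, not the image’s exact statements):

```sql
-- What MYSQL_DATABASE / MYSQL_USER / MYSQL_PASSWORD boil down to:
CREATE DATABASE IF NOT EXISTS wp_db1;
CREATE USER 'db1_user'@'%' IDENTIFIED BY 'db1_pass';
GRANT ALL PRIVILEGES ON wp_db1.* TO 'db1_user'@'%';
FLUSH PRIVILEGES;
```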
A tale of two WordPresses
Next, let’s also add the two WordPress services (these also go under the services section along with the databases):
wp1:
image: wordpress:latest
networks:
- net1
depends_on:
- db1
restart: unless-stopped
expose:
- "80"
volumes:
- ./wp1fs:/var/www/html
ports:
- "127.0.0.1:8101:80"
environment:
WORDPRESS_DB_HOST: db1:3306
WORDPRESS_DB_NAME: wp_db1
WORDPRESS_DB_USER: db1_user
WORDPRESS_DB_PASSWORD: db1_pass
WORDPRESS_DEBUG: true
wp2:
image: wordpress:latest
networks:
- net2
depends_on:
- db2
restart: unless-stopped
expose:
- "80"
volumes:
- ./wp2fs:/var/www/html
ports:
- "127.0.0.1:8102:80"
environment:
WORDPRESS_DB_HOST: db2:3306
WORDPRESS_DB_NAME: wp_db2
WORDPRESS_DB_USER: db2_user
WORDPRESS_DB_PASSWORD: db2_pass
WORDPRESS_DEBUG: true
- We have named the two WordPress containers wp1 and wp2 and assigned them to our two networks, net1 and net2.
- We have defined that these depend on their respective databases to function.
- We have defined that these containers are to be restarted if they crash, but not if we explicitly stop them.
- We are exposing only HTTP port 80 to the networks. All other ports are firewalled. We are not exposing port 443 here. TLS encryption will be done at the host level that will run the reverse proxy (see below).
- We are mounting two local directories here, ./wp1fs and ./wp2fs. These will contain the WordPress installations. The first time that the containers run, WordPress will be installed in them. A special wp-config.php file will be placed in there. This file pulls the DB connection settings from the environment variables that we specify below.
- We are port-mapping the HTTP 80 ports to the host’s ports 8101 and 8102. These are the ports that the reverse proxy will use. They are bound to the loopback network (127.0.0.1), and are therefore not exposed to the outside world. If we had used just 8101:80, this would map port 80 of the container to port 8101 of the host on all network interfaces, including the one facing the outside world. This is not ideal. We only want access to our services through our reverse proxy.
- The WORDPRESS_* environment variables are specific to this wordpress image. We specify the databases and the login credentials that we also specified above, and we turn on debug logging. To learn more about these environment variables, click here.
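To see the loopback-only binding idea in action outside of docker, here is a quick sketch (assuming python3 and curl are available; 18101 is an arbitrary free port):

```shell
# Start a throwaway web server bound only to the loopback interface
python3 -m http.server 18101 --bind 127.0.0.1 >/dev/null 2>&1 &
pid=$!
sleep 1
# Reachable via loopback (prints the HTTP status code)...
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:18101/
# ...but a connection attempt from any other interface would be refused.
kill $pid
```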
NOTE: I have made the decision here to put the databases into system volumes (these usually live in /var/lib/docker/volumes and can be shared between containers), while the WordPress filesystems are mounted in local directories, ./wp1fs and ./wp2fs. If you prefer to have all volumes under /var/lib, you can delete the ./ prefix in front of the directory names and declare them as named volumes at the top of the file.
The bells and whistles
If you thought that’s enough, you are gravely mistaken. Here’s a few more things to take care of:
Database collation
We are going to give the databases a UTF-8 multibyte collation, for Unicode support. Under the environment variables in the database services, we are going to add an explicit mysqld command:
command: "mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci"
And under the WordPress services, we are going to add the following environment variable:
WORDPRESS_DB_COLLATE: utf8mb4_unicode_ci
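Once the containers are recreated, we can sanity-check the settings from the MySQL console (see the docker exec commands further below):

```sql
SHOW VARIABLES LIKE 'character_set_server';  -- expect utf8mb4
SHOW VARIABLES LIKE 'collation_server';      -- expect utf8mb4_unicode_ci
```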
File permissions
If we run the above containers, WordPress won’t be able to install or remove any themes or plugins, and it won’t be able to do anything that requires writing to the file system.
This is because, in the WordPress images, the user that runs apache has a different uid and gid than the owner of the files on the file system. The files are owned by uid 1000 and gid 1000. We can specify that the user running stuff inside the container has the same numeric ids. To do this, we add the following to the two WordPress services:
user: 1000:1000
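The underlying rule is plain Unix permissions: a process can write a file when its numeric uid/gid match the file’s owner. A tiny shell sketch of the idea (uses GNU stat and a temp file; your ids will likely differ from 1000:1000):

```shell
mkdir -p /tmp/wpdemo && touch /tmp/wpdemo/file
owner="$(stat -c '%u:%g' /tmp/wpdemo/file)"   # uid:gid that own the file
me="$(id -u):$(id -g)"                        # uid:gid of the current process
# The check passes only because the ids match (simplified: ignores group/other bits)
[ "$owner" = "$me" ] && echo writable
```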
Database memory
By default, a mysql instance will take up at least 360MB of memory once it’s running. Most of it is because of the Performance Schema instruments, which take up a lot of memory.
The Performance Schema is a database that keeps track of the mysqld server’s performance, and is useful for diagnostics. If you are not going to use this feature, then you can turn it off. The memory usage of each DB container will then fall to a little over 100MB.
We are going to create a file named disable-perf-schema.cnf
with the following contents:
[mysqld]
performance_schema = OFF
This will be added to the mysql server’s config files. The server includes any .cnf
files in the /etc/mysql/conf.d
directory into its configuration. We can use the volumes section to map this file into our two db containers:
volumes:
- db1volume:/var/lib/mysql
- ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf
volumes:
- db2volume:/var/lib/mysql
- ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf
There are more hacks to reduce the memory usage of mysqld, but these are beyond the scope of this article. For example, you can look into reducing the InnoDB buffer pool size.
Log rotate
We have enabled debug logging, because reasons. This is cool, but the /var/www/html/wp-content/debug.log
files will eventually fill up our containers if left unchecked. Enter logrotate
to the rescue:
We are going to create a file named wordpress.logrotate
with the following content:
/var/www/html/wp-content/debug.log
{
su 1000 1000
rotate 24
copytruncate
weekly
missingok
notifempty
compress
}
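The copytruncate directive deserves a note: instead of moving the log aside (which would confuse apache, since it keeps the file open), logrotate copies the contents and then truncates the original in place. In shell terms it is roughly this (a sketch with hypothetical paths):

```shell
log=/tmp/demo-debug.log
printf 'line one\nline two\n' > "$log"
cp "$log" "$log.1"   # copy the contents aside...
: > "$log"           # ...then truncate the live file; writers keep the same fd/inode
wc -c < "$log"       # prints 0: the live log is now empty
```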
This will gzip old logs weekly, keep the last 24 rotations, and delete older ones. If you are not sure about the details, ChatGPT and Bard can explain exactly what each line does.
Note how we again use the uid and gid of the WordPress image.
Let’s mount this file into our WordPress containers, by adding a line to their volume clause:
volumes:
- ./wp1fs:/var/www/html
- ./wordpress.logrotate:/etc/logrotate.d/wordpress
volumes:
- ./wp2fs:/var/www/html
- ./wordpress.logrotate:/etc/logrotate.d/wordpress
Docker compose recap
We now have the following docker-compose.yml
file:
version: "3.8"
name: droplet
networks:
net1:
net2:
volumes:
db1volume:
db2volume:
services:
db1:
image: mysql:8.2.0
networks:
- net1
restart: unless-stopped
expose:
- "3306"
volumes:
- db1volume:/var/lib/mysql
- ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf
environment:
MYSQL_ROOT_PASSWORD: wp1_root_pass
MYSQL_DATABASE: wp_db1
MYSQL_USER: db1_user
MYSQL_PASSWORD: db1_pass
command: "mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --performance-schema-instrument='%=OFF' --innodb-buffer-pool-size=32M"
db2:
image: mysql:8.2.0
networks:
- net2
restart: unless-stopped
expose:
- "3306"
volumes:
- db2volume:/var/lib/mysql
- ./disable-perf-schema.cnf:/etc/mysql/conf.d/disable-perf-schema.cnf
environment:
MYSQL_ROOT_PASSWORD: wp2_root_pass
MYSQL_DATABASE: wp_db2
MYSQL_USER: db2_user
MYSQL_PASSWORD: db2_pass
command: "mysqld --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --performance-schema-instrument='%=OFF' --innodb-buffer-pool-size=32M"
wp1:
image: wordpress:latest
networks:
- net1
depends_on:
- db1
user: 1000:1000
restart: unless-stopped
expose:
- "80"
volumes:
- ./wp1fs:/var/www/html
- ./wordpress.logrotate:/etc/logrotate.d/wordpress
ports:
- "127.0.0.1:8101:80"
environment:
WORDPRESS_DB_HOST: db1:3306
WORDPRESS_DB_NAME: wp_db1
WORDPRESS_DB_USER: db1_user
WORDPRESS_DB_PASSWORD: db1_pass
WORDPRESS_DB_COLLATE: utf8mb4_unicode_ci
WORDPRESS_DEBUG: true
wp2:
image: wordpress:latest
networks:
- net2
depends_on:
- db2
user: 1000:1000
restart: unless-stopped
expose:
- "80"
volumes:
- ./wp2fs:/var/www/html
- ./wordpress.logrotate:/etc/logrotate.d/wordpress
ports:
- "127.0.0.1:8102:80"
environment:
WORDPRESS_DB_HOST: db2:3306
WORDPRESS_DB_NAME: wp_db2
WORDPRESS_DB_USER: db2_user
WORDPRESS_DB_PASSWORD: db2_pass
WORDPRESS_DB_COLLATE: utf8mb4_unicode_ci
WORDPRESS_DEBUG: true
We can start this with docker compose up
(we must first cd
into the same directory as the .yml
file).
We can see if it’s running with docker compose ls
, and we can see the containers with docker container ls
.
We can inspect memory usage with docker stats
.
We can stop the containers with docker compose down
.
If we also want to wipe the database volumes and start over, we can do docker compose down -v
(DESTRUCTIVE!!!).
We can go into the shell of the first database with:
docker exec -it droplet-db1-1 bash
And then, we can go into the mysql console with
mysql -u root -pwp1_root_pass
We can go into the shell of the first WordPress with:
docker exec -it droplet-wp1-1 bash
If we need to, we can install wp-cli using instructions from https://wp-cli.org/. The copy of wp-cli will not be persisted in the container across restarts. (Note: it’s possible to add special containers with wp-cli pre-installed, but again this is out of scope for this article. For more information, see the CLI images here.)
DaaS (Docker-as-a-Service)
We don’t want to have to issue docker compose up
every time the server starts, and docker compose down
every time the server stops. Let’s create a systemd
unit, so that it runs as a service.
We’ll create a file named /etc/systemd/system/docker-compose.service
with the following carefully crafted contents:
[Unit]
Description=A bunch of containers
After=docker.service
Requires=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
User=yourusername
ExecStart=/bin/bash -c "docker compose -f /home/yourusername/docker-compose.yml up --detach"
ExecStop=/bin/bash -c "docker compose -f /home/yourusername/docker-compose.yml stop"
[Install]
WantedBy=multi-user.target
- Replace yourusername with your username (duh!). The user must be a member of the docker group, or the compose commands will fail.
- Replace the description with something less silly (optional).
- Note how we only start this service after the docker service starts.
- Note that we do a --detach. This will start the containers in the background and exit, without showing the logs of all the containers in the standard output.
We can now start the service with
sudo service docker-compose start
And stop it with
sudo service docker-compose stop
To have it start automatically on boot, we must also enable it:
sudo systemctl enable docker-compose
If we want to see the logs of all the containers, we can type
docker compose logs -f
We should now be able to do curl http://127.0.0.1:8101
and see the HTML of the front page of the first WordPress.
The reverse proxy
The database and WordPress containers are running, but they are not yet exposed to the outside world. To do this, we are going to use nginx
as a reverse proxy.
The reverse proxy will:
- handle all the redirects that we need
- expose the apache2 servers to the outside world
- handle the TLS encryption
First we setup Let’s Encrypt. How to do this is beyond the scope of this article. You can look here for a good introduction.
The bottom line is that certbot
must be installed, and the following public and private certificate files must exist on your server (host):
/etc/letsencrypt/live/example1.com/fullchain.pem
/etc/letsencrypt/live/example1.com/privkey.pem
/etc/letsencrypt/live/example2.com/fullchain.pem
/etc/letsencrypt/live/example2.com/privkey.pem
These files are actually symlinks to the latest certificate issued. This is all handled by certbot.
Let’s start to create an nginx config file, which we will place in /etc/nginx/sites-available/reverse-proxy.conf
.
We are going to enter several server stanzas, remembering that nginx will use the first one that matches in order from top to bottom.
Redirects from http to https
First, we want any unencrypted requests to port 80
to do a soft redirect to our https://www.
sites.
server {
listen 80;
listen [::]:80;
server_name example1.com;
return 302 https://www.example1.com$request_uri;
}
server {
listen 80;
listen [::]:80;
server_name example2.com;
return 302 https://www.example2.com$request_uri;
}
The first listen statement is for IPv4, and the second is for IPv6. We redirect to the TLS site, preserving the path segment of the request URI.
Proxy forwarding
Next we are going to enter the stanza that handles the actual site content:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name www.example1.com;
ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
location / {
proxy_pass http://127.0.0.1:8101/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name www.example2.com;
ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
location / {
proxy_pass http://127.0.0.1:8102/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
Again, we are listening for 443
(the TLS port) on both IPv4 and IPv6.
Notice how we only listen for requests to the www.
subdomain here.
We use the TLS certificates first, then we specify the reverse proxy in the location /
section.
We forward each site to the correct port that we exposed with docker (8101
and 8102
in this case).
We also set some X-
headers. This is so that the PHP server knows some details about the client.
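One caveat with terminating TLS at the proxy: WordPress itself sees plain HTTP on port 80, which can cause mixed-content warnings or redirect loops in some setups. A common fix (a sketch; adapt it to your setup) is to teach wp-config.php to trust the X-Forwarded-Proto header we set in the nginx config:

```php
/* In wp-config.php, above the "That's all, stop editing!" line:
   trust the reverse proxy's X-Forwarded-Proto header, so that
   WordPress knows the original request was HTTPS. */
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
     && 'https' === $_SERVER['HTTP_X_FORWARDED_PROTO'] ) {
    $_SERVER['HTTPS'] = 'on';
}
```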
Redirects from all subdomains to www
Finally, we want requests to https://example1.com, or to any other subdomain, such as https://foo.example1.com, to redirect to our www. subdomain:
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name .example1.com;
ssl_certificate /etc/letsencrypt/live/example1.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example1.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
location / {
rewrite ^ https://www.example1.com permanent;
}
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name .example2.com;
ssl_certificate /etc/letsencrypt/live/example2.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example2.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
location / {
rewrite ^ https://www.example2.com permanent;
}
}
Here we listen for any subdomain. Note the dot (.) prefix in the server_name.
We again use the TLS certificates, but this time we perform a redirect to the www. subdomain.
Administering our reverse proxy
When we are ready to enable our reverse proxy, we will create a symlink to sites-enabled
:
sudo ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf
We can test our syntax to see that it is correct with:
sudo nginx -t
And finally we can restart the nginx server with:
sudo service nginx restart
We can check the status of the server with:
sudo service nginx status
If everything is working correctly, and if the DNS records have had time to propagate, then we can visit our sites and run the famous WordPress installation process:
- https://www.example1.com/
- https://www.example2.com/
Emails
If only the above was enough. Sadly, our WordPress installations need a way to send emails, otherwise the webmaster experience is going to suck big time.
I say sadly, because setting up sendmail
first on the host is relatively easy, but then setting up SMTP proxies in the WordPress containers is not something I am familiar with. Sorry guys, in the interest of keeping things simple, I’m going to cheat a little here. Here’s what I did:
- Install the free WP Mail SMTP plugin on both sites.
- Create an application-specific password in my google account.
- In the WordPress admin screens, go to: WP Mail SMTP → Mailer → Other SMTP.
- Enter the following settings:
  - SMTP Host: smtp.gmail.com
  - Encryption: SSL
  - SMTP Port: 465
  - Auto TLS: ON
  - Authentication: ON
  - SMTP Username: (my gmail address)
  - SMTP Password: (the application-specific password that I just created).
- Hit Save Settings.
- Go to WP Mail SMTP → Tools and send a test email.
If everything works, then WordPress and its plugins can now send emails. But it will only be able to send email into spam folders, until we add a Sender Policy Framework (SPF) record to our DNS entries:
| Type | Hostname | Value | TTL |
|---|---|---|---|
| TXT | example1.com | v=spf1 a mx ~all | 1800 |
| TXT | example2.com | v=spf1 a mx ~all | 1800 |
The above TXT records tell recipients to treat all emails coming from servers pointed to by the A or MX records of your domains as safe, and others as potentially suspicious. Again, use your favorite AI chatbot to construct an SPF record that matches your needs.
Nothing more permanent than a 301 redirect
If all works, it’s now time to turn the soft redirects into permanent (hard) redirects. Edit the reverse proxy config and change any 302
redirects to 301
. Any browsers visiting your site will cache these redirects for eternity.
It’s also now a good time to increase the Time-to-Live of all the DNS records to something like 4 hours, or 14400
seconds.
Backups
You would think that by now you’re finished, but you’d be wrong!
Any IT technician worth their salt knows that they must backup, and backup often.
First, turn off the server or droplet and take a full backup, snapshot, or whatever. Future you will thank you.
Then, let’s see how we can take automated daily backups. We can either pay the hosting provider every month to do this for us, or we can spend a few minutes to set up a few cron jobs. Let’s be cheap and do it ourselves.
I have a Raspberry Pi at home that is always on. It does various things like take backups, ping various services and email me if they are down, trigger wp-cron URLs, control crypto miners, run services I need such as my ticket system, and in general run any other odd 24/7 task. You should also have one such low-power system. The great thing with the Raspberry Pi is that it’s easy to take out the MicroSD and gzip an image of it onto a mechanical disk, so the backup mechanism itself is nicely backed up in its entirety. (Yo dawg, heard you like backups…)
We’ll now use our local always-on Linux system to take daily backups of our online filesystems and databases:
Local backups.sh
script
First, let’s create a DB user that has just enough access to take backups from both databases, but no more.
Log in to the MySQL console of each database and create a wp_bu user that will do the backups:
CREATE USER 'wp_bu'@'localhost' IDENTIFIED BY 'SOMESTRONGPASSWORD';
GRANT SELECT, LOCK TABLES ON wp_db1.* TO 'wp_bu'@'localhost';
CREATE USER 'wp_bu'@'localhost' IDENTIFIED BY 'SOMESTRONGPASSWORD';
GRANT SELECT, LOCK TABLES ON wp_db2.* TO 'wp_bu'@'localhost';
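We can verify the grants afterwards from the same console (the exact output formatting varies by MySQL version):

```sql
SHOW GRANTS FOR 'wp_bu'@'localhost';
-- Expect something like:
-- GRANT SELECT, LOCK TABLES ON `wp_db1`.* TO `wp_bu`@`localhost`
```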
We only need SELECT, but since we want to call mysqldump
with the --single-transaction
argument, we’ll also need to grant the LOCK TABLES
permission. No point in having an ACID database if we’re going to take backups of an inconsistent state now, is there?
We’ll now create a bash shell script that does our daily backups. Let’s place it in our local backup server and call it backups.sh
:
#!/bin/bash
# ensure dirs exist
mkdir -p /path-to-backups/cache/wp{1,2}volume /path-to-backups/server
# download DBs to SQL files
ssh -t server "docker exec droplet-db1-1 nice -n 19 mysqldump -u wp_bu -pSOMESTRONGPASSWORD --no-tablespaces --single-transaction wp_db1 | nice -n 19 gzip -9 -f" >/path-to-backups/server/wp_db1-`date --rfc-3339=date`.sql.gz
ssh -t server "docker exec droplet-db2-1 nice -n 19 mysqldump -u wp_bu -pSOMESTRONGPASSWORD --no-tablespaces --single-transaction wp_db2 | nice -n 19 gzip -9 -f" >/path-to-backups/server/wp_db2-`date --rfc-3339=date`.sql.gz
# download wp-content files to backup cache
rsync -aq server:~/wp1fs/* /path-to-backups/cache/wp1volume
rsync -aq server:~/wp2fs/* /path-to-backups/cache/wp2volume
# Zip downloaded wp-content files
zip -r9q /path-to-backups/server/wp1-`date --rfc-3339=date`.zip /path-to-backups/cache/wp1volume -x "**/GeoLite2*" -x "**/GeoIPv6.dat"
zip -r9q /path-to-backups/server/wp2-`date --rfc-3339=date`.zip /path-to-backups/cache/wp2volume -x "**/GeoLite2*" -x "**/GeoIPv6.dat"
# prune old DB and FILE backups from local backups
cd /path-to-backups/server && ls -1tr | head -n -30 | xargs -d '\n' -r rm -f --
Again, a lot goes on here. Let’s unpack:
- The script creates directories server and cache under /path-to-backups. Replace this path with something that points to the directory where you want to keep your backups.
- We then ssh to the host using the -t argument because we are in a headless environment (cron). We issue a docker exec command into our databases. Notice how we do not use the -it arguments to docker exec, since this is a headless command (no TTY attached). The command is a mysqldump command that uses the credentials we just created to export the databases in a single transaction each. The SQL output is compressed with maximum compression (-9) and the binary output of gzip is forced (-f) into the standard output, which is then sent over the ssh connection. On our local backups server, we redirect this compressed stream into an .sql.gz file. The file name starts with wp_db1- and includes the current date in YYYY-MM-DD notation. (RFC 3339 is my idea of a perfect date, btw.) The --no-tablespaces argument is needed in MySQL 8.0.21 and later, otherwise you’ll need the PROCESS global permission. (Unless you are using tablespaces, you don’t need them in the dump, hence the argument --no-tablespaces.) Notice that we make sure to be nice to other running processes, because we don’t want to impact the performance of the web server with our backups. 19 is the idle CPU priority.
- We then use rsync with the quiet (-q) and archive (-a) flags to copy the files of our WordPress installations into our cache/wp1volume and cache/wp2volume directories. The advantage of using rsync is that only changes to these directories will be transferred.
- We then create a zip file for each of these directories. We name the zip files with the prefixes wp1- and wp2-, followed again by our idea of a perfect date. Many WordPress plugins include a database of IPs mapped to geographical locations. These files are large and can be found online. If we don’t want to save these, we can exclude them (-x flag), but this is optional.
- Finally we list the files we created (both .sql.gz and .zip files) and we only keep the last 30, deleting any older ones. Since we create four files per day (two .sql.gz and two .zip), this will retain daily backups for the last week or so.
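The pruning pipeline is the trickiest line of the script, so here is a self-contained sketch of the keep-the-newest-30 idea, using 35 dummy files in a temp directory (touch -d @epoch is GNU coreutils):

```shell
dir="$(mktemp -d)"; cd "$dir"
# Create 35 files with strictly increasing modification times
for i in $(seq 1 35); do touch -d "@$((1700000000 + i))" "bu-$i"; done
# List oldest-first, keep only the newest 30, delete the rest
ls -1tr | head -n -30 | xargs -d '\n' -r rm -f --
ls | wc -l   # prints 30: bu-1 .. bu-5 (the oldest) are gone
```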
Make the script executable with
chmod +x backups.sh
We run the script once, and we check the .sql.gz
files using zless
and the zip files with unzip -l
.
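It helps to remember that the .sql.gz files are ordinary gzip streams, so a quick round-trip check looks like this (with stand-in text in place of a real dump):

```shell
# gzip round-trip: what the backup pipeline produces can be read back with zcat/zless
echo 'CREATE TABLE demo (id INT);' | gzip -9 -f > /tmp/demo.sql.gz
zcat /tmp/demo.sql.gz   # prints the original line
```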
Once we are certain that all data is backed up by the script, we add it to the crontab. Edit the crontab with crontab -e
and add the line:
20 4 * * * /bin/bash /home/yourusername/backups.sh
This will execute the backups every day at 4:20 in the morning.
Checking the backups
The server works and is fully backed up. You would think that you’re done by now. That’s where you’d be wrong again!
Having backups and not checking them regularly is worse than not having backups at all: you are being lulled into a false sense of security. You may act recklessly, thinking that you can always go back to the last backup. However, all backup mechanisms can fail, for any number of reasons.
What I do, is I’ve set up a weekly reminder in my Google calendar to check the backups. It only takes half a minute per week to ssh into my backup server and do an ls -l
, thus ensuring that the latest backups exist, and their file size is what I’d expect. I keep old backups for about a week, hence the weekly reminder.
I also have another reminder every three months, to backup the MicroSD of my Raspberry Pi backup server. Once every three months, I shutdown the Pi, take out the MicroSD, put it into my work PC, and copy the entire image into a file, stored on my mechanical disk:
sudo dd if=/dev/sdf of=/mnt/bu/rpi-backup-`date --iso-8601=date`.img bs=4096 conv=sync,noerror status=progress
gzip -9 /mnt/bu/rpi-backup-`date --iso-8601=date`.img
Only once I have this process set up can I sleep at night.
Are we finished yet?
By now you would think that we’re not finished yet, and that there are more things to do. That’s where you’d be wrong!
And for anyone wondering, example1.com
is actually https://www.dashed-slug.net and example2.com
is actually this blog, https://www.alexgeorgiou.gr. There’s also a plain nginx container in there that serves static HTML files at https://wallets-phpdoc.dashed-slug.net .
My config is actually a little bit more complex than the one discussed above. To save some more server memory, I had to put both databases into the same MySQL container, and set up two different DB users with access restricted to each respective database. But you shouldn’t do this at home, because isolation!
This article is being served by the containers I discussed here, and will be backed up early tomorrow morning, via the mechanism I shared with you above. Which is pretty meta, if you think about it!
I never expected to compose such a long, self-contained article on containers and docker compose
. But now it’s finished and I can hardly contain my excitement!
Thanks for sticking with me to the end. I hope you enjoyed it.