November 11, 2016 - by Casey Bisson, Jason Pincin
WordPress, the user-friendly blog-oriented CMS that powers 25% of the web, is famous for its 5-minute install, but that's only for a single server with no scalability. Containerization and the Autopilot Pattern change that: now we can deploy a WordPress website and scale it to meet any amount of traffic with a single command. This isn't a PaaS; this is your own WordPress install that you can completely customize with your own themes and plugins.
tl;dr: Try it for yourself right now.
Together with 10up, a digital agency with deep WordPress expertise (and some WP core committers), we developed a fully Dockerized implementation of WordPress that makes deploying and scaling WordPress on any compatible infrastructure a snap. Using the Autopilot Pattern and ContainerPilot, all the components automatically configure themselves on deploy and reconfigure themselves as you scale up and down.
The automated operation of the Autopilot Pattern makes running applications easy. When we demonstrated that ease of operation with a Node.js app backed by Couchbase that automatically configures itself on deploy and as you scale the components, many challenged us, saying it was only possible because we were using modern applications. We took up that challenge when we Dockerized MySQL with the Autopilot Pattern, and now again with WordPress.
In addition to the WordPress container, a complete site includes the following containers (all containers are launched automatically using Docker Compose):

- Nginx, serving as the load balancer and SSL endpoint
- MySQL, for the database (a single primary plus replicas)
- Memcached, for object caching
- Consul, which coordinates configuration among the containers
You can run multiple instances of most containers to scale for any amount of traffic. The MySQL containers will automatically configure as a cluster with a single primary and multiple replicas, and the WordPress image includes HyperDB to best utilize the MySQL cluster. Everything is automatically configured when you launch the containers, and reconfigured as you scale (or if a container fails), so running a scalable WordPress site is no more complex than running without the scaling features.
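As a rough sketch, the Compose file is what ties those services together. The following is a hypothetical abbreviation (service names match the scale command used later in this post; see the project repository for the real `docker-compose.yml`):

```yaml
# Hypothetical abbreviation of the blueprint's Compose file --
# see the autopilotpattern/wordpress repository for the real thing.
nginx:
  image: autopilotpattern/nginx    # load balancing and SSL termination
  env_file: _env
wordpress:
  image: autopilotpattern/wordpress  # WordPress + ContainerPilot + HyperDB
  env_file: _env
mysql:
  image: autopilotpattern/mysql    # primary + replicas, backed up to Manta
  env_file: _env
memcached:
  env_file: _env
consul:
  env_file: _env                   # shared state for automatic configuration
```

Every service reads the same `_env` file, which is why the setup steps below revolve around editing it.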
The images will run anywhere you can run Docker, but you can run them on Triton without having to set up and manage Docker hosts. Here's how to get started:
1. Install Docker (and `docker-compose`) on your laptop or other environment
2. Run `triton profile create` to create a profile for your Triton account
3. Run `triton account update triton_cns_enabled=true` to enable Triton CNS
4. Run `eval "$(triton env)"` to configure your environment and connect to a Triton data center
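It's worth failing fast if any of those prerequisites are missing. A minimal guard you could drop into a setup script (a hypothetical helper, not part of the blueprint's `setup.sh`):

```shell
# require_cmd <command> <hint> -- prints "found:" or "missing:" and returns 0/1.
require_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $1"
    else
        echo "missing: $1 -- $2" >&2
        return 1
    fi
}

require_cmd sh "a POSIX shell is required"
# In a real setup script you would also check the tools this post depends on:
# require_cmd docker "install Docker first"
# require_cmd docker-compose "install Docker Compose"
# require_cmd triton "npm install -g triton"
```

Each missing tool prints a hint to stderr instead of letting a later step fail with a confusing error.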
Before we deploy, let's decide on the domain name for this site. You can run without a custom domain name, but you won't be able to enable SSL, and your readers will have trouble finding your blog.
Setting up a custom domain is easy. Alexandra explained how it works in detail previously, but here's the short version:
Create a `CNAME` DNS entry that points your custom domain name to the CNS name of the Nginx instances in this WordPress blueprint.
Triton Container Name Service (CNS) will automatically update as you add, scale, or update Nginx instances, so you'll never have to update DNS again.
Just make sure CNS is turned on for your account (the `setup.sh` script for this project will check to make sure it is), and use `./setup.sh get-cns-hostname` to see the generated Triton CNS name for the Nginx instances.
That generated CNS name is what you'll point your custom domain name to in your `CNAME` record. How you create a `CNAME` record depends on your registrar or DNS provider (example with CloudFlare and Namecheap).
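For reference, the resulting record looks something like this (the CNS target shown here is made up; use the exact hostname that `./setup.sh get-cns-hostname` prints for your account):

```
; hypothetical zone file entry -- substitute your own CNS hostname
blog.example.com.  3600  IN  CNAME  nginx.svc.0f69e975-xxxx.us-east-1.triton.zone.
```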
1. Run `./setup.sh ~/path/to/MANTA_SSH_KEY` and edit the `_env` file to set the Manta object store configuration details for the MySQL backups
2. Edit the `_env` file and change `WORDPRESS_URL` to point to the custom hostname you chose (powered by Triton CNS; see the custom domain instructions above)
3. Run `docker-compose up -d` to launch WordPress
4. Run `docker-compose scale nginx=2 wordpress=3 memcached=3 mysql=3` to scale
The full details on the configuration options are on GitHub.
The autopilotpattern/nginx image makes it simple to enable SSL for your site. To take advantage of this, you must already be using a custom domain (discussed in the previous section), because Let's Encrypt imposes a 64-character limit on certificate hostnames; that's too short for the full CNS hostname to be usable. Assuming that's the case:
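That 64-character ceiling is easy to check up front. A quick sketch (the helper name is ours, not part of the blueprint):

```shell
# Let's Encrypt rejects certificate common names longer than 64 characters,
# so check the hostname you plan to use before requesting a certificate.
check_cn_length() {
    if [ ${#1} -le 64 ]; then
        echo "ok (${#1} chars): $1"
    else
        echo "too long (${#1} chars): $1"
        return 1
    fi
}

check_cn_length blog.example.com   # prints: ok (16 chars): blog.example.com
```

A full CNS hostname (service, account UUID, data center, and suffix) typically blows past that limit, which is why the custom domain is a prerequisite.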
In the `_env` file, change the scheme of `WORDPRESS_URL` to `https`, then uncomment the `ACME_DOMAIN` line and change its value to the host specified in `WORDPRESS_URL`:
```
# Environment variables for WordPress site
# please include the scheme http:// or https:// in the URL variable
WORDPRESS_URL=https://blog.example.com

[...]

# Nginx LetsEncrypt (ACME) config
# be sure ACME_DOMAIN host and WORDPRESS_URL host are the same, if using automated SSL via LetsEncrypt
# ACME_ENV defaults to "staging", uncomment following ACME_ENV line to switch to LetsEncrypt production endpoint
ACME_DOMAIN=blog.example.com
#ACME_ENV=production
```
To re-deploy Nginx with the new SSL configuration:
```
docker-compose up -d nginx
```
This will acquire a test certificate from the Let's Encrypt staging endpoint. Your browser will likely ask you to accept this certificate because it's not signed by a recognized authority. This is useful for verifying that the process works before switching to the production Let's Encrypt endpoint (which will count against API request limits for the host in question). Once you're happy that things are working, edit the `_env` file again, this time uncommenting the `ACME_ENV=production` line, and re-deploy.
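Flipping that flag is a one-line edit. Here's a sketch using `sed` (it assumes `_env` is in the current directory; a demo file is created first so the snippet is self-contained):

```shell
# Demo _env file -- in a real deployment this file already exists.
printf '%s\n' 'ACME_DOMAIN=blog.example.com' '#ACME_ENV=production' > _env

# Uncomment the ACME_ENV=production line in place (keeps a .bak backup;
# the -i.bak form works with both GNU and BSD sed).
sed -i.bak 's/^#ACME_ENV=production$/ACME_ENV=production/' _env

grep '^ACME_ENV' _env   # prints: ACME_ENV=production
```

Then run `docker-compose up -d nginx` again to pick up the change.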
All the automation of the SSL certificate requests is coordinated through Consul, which stores state so the Nginx instances remain stateless. That means we have to clear out Nginx entries from Consul if we need to change or reset the SSL settings:
```
docker exec -it wordpress_consul_1 curl -X DELETE localhost:8500/v1/kv/nginx/acme?recurse=1
```
That will clear the old certificates from Consul, forcing all the Nginx containers to request new certificates. A complete reset of the Nginx configuration requires the following steps:
After editing the `_env` file, here are the steps in code:
```
docker-compose stop nginx
docker exec -it wordpress_consul_1 curl -X DELETE localhost:8500/v1/kv/nginx/acme?recurse=1
docker-compose up -d nginx
```
Docker Compose should delete the old, stopped Nginx instances and create new ones with the new environment configuration from the `_env` file.
Gamblers may wish to try doing this without downtime by excluding the `docker-compose stop nginx` step. That should work most of the time, but if one of the old Nginx instances attempts to renew its configuration before the replacement instances start, the certificate will be generated using the old settings from the old Nginx instance.
The WordPress image includes WP-CLI. Using that in combination with `docker exec` makes remote management especially easy. In the following examples, `wordpress_wordpress_1` is the name of a running WordPress container in this project.
Set the display name of the admin user (user ID `1`):
```
docker exec wordpress_wordpress_1 wp --allow-root user update 1 --display_name='<name>'
```
On a Mac, this will open a web browser with the front page of the web site:
```
open $(docker exec wordpress_wordpress_1 wp --allow-root option get home)
```
If you imported the sample content, this command will delete the sticky post at the top of the site:
```
docker exec wordpress_wordpress_1 wp --allow-root post delete 1241
```
Inserting posts is easy too. In this example, `sample-post.html` is a file on my laptop with the content of the post I want to publish:
```
docker exec wordpress_wordpress_1 wp --allow-root post create \
    --post_content="$(cat sample-post.html)" \
    --post_title='<post title>' \
    --post_status='publish' \
    --post_author=1
```
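If you're publishing a whole directory of drafts this way, a tiny helper for deriving post titles from filenames keeps the loop readable (a hypothetical helper of ours, not part of WP-CLI):

```shell
# Derive a human-readable post title from a draft filename:
# strip the directory and .html extension, and turn dashes into spaces.
title_from_file() {
    base=${1##*/}          # drop the directory part
    base=${base%.html}     # drop the extension
    printf '%s\n' "$base" | tr '-' ' '
}

title_from_file drafts/my-first-post.html   # prints: my first post
```

A loop over `drafts/*.html` can then feed `--post_title="$(title_from_file "$f")"` and `--post_content="$(cat "$f")"` into the `wp post create` command above.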