Optimizing your Docker operations for Triton

Docker on Triton is different. Rather than running containers in VMs, the containers run securely on bare metal. We introduced Docker support on our cloud in 2015, but containers are not new to us. We've been running containers on our cloud since 2006, and we've developed our own lightweight container OS, networking solutions, and cloud automation stack to make everything work securely, conveniently, and quickly.

Running your containers in a public cloud without a VM means that each container gets its own isolated set of resources, and if you need more, just scale up. Scaling your containers directly — without also needing to scale VMs up or down — makes it easier to manage your application and optimize its resource consumption.

Freedom from VMs, and from the frustrations of managing multiple layers of infrastructure, makes it easier to scale quickly, to build out and destroy staging environments for every branch and feature, and simply to run your application faster.

However, the prevalence of VM-based infrastructure may have left you with a bit of Stockholm syndrome. You may tend to think within the limitations and confines of the VM, and it might take some work to think in terms of what's possible with the hostless container model you'll enjoy on Triton.

Here's a guide for those moving from Docker in VMs to Docker on Triton...

Bare metal containers

Not only do containers really run on bare metal on Triton, but your containers are running next to other customers' containers on the same bare metal. That works because containers on Triton and SmartOS, our own lightweight container OS, are at least as secure as VMs. In fact, it's that level of security that qualifies SmartOS as a "container hypervisor," like VMware's ESXi, but for containers instead of VMs. Running containers securely on bare metal isn't magic, it's just that SmartOS was designed to do that from the start, and we've refined it based on over a decade of experience running the world's only bare-metal container-based cloud.

Those bare metal containers are orchestrated using Triton DataCenter, the software that turns a bunch of hardware into a proper cloud with virtualized compute, network, and storage, all woven together with API-driven provisioning. On Triton, we have two provisioning APIs working side-by-side: CloudAPI and Docker Remote API.

CloudAPI is similar to what you might find from our VM-based competitors. You can use it to provision and manage instances and networks across the cloud. What's exciting here is how we implemented the Docker Remote API alongside CloudAPI, so that when you do a triton-docker run... it places your container somewhere in the data center. When you do a bunch of triton-docker run... commands, Triton places those containers throughout the data center, with no need to set up and manage a "swarm" or "cluster."
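Here's a minimal sketch of the two APIs side by side; the image and package names are illustrative, so substitute values from triton image list and triton package list:

# CloudAPI, via the triton CLI: provision an infrastructure container
triton instance create base-64-lts g4-general-4G

# Docker Remote API, via the triton-docker CLI: Triton places the container
# somewhere in the data center, with no swarm or cluster to manage
triton-docker run -d nginx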

Docker containers run on bare metal throughout the data center on Triton

Your Docker containers run side-by-side with other compute instances on Triton, including infrastructure containers and hardware VMs.

Docker Remote API and CloudAPI work side-by-side on Triton

RAM, CPU, and disk resources for your containers

Most people think about the RAM, CPU, and storage for their VMs, but few people pay much attention to those details for their individual containers. On Triton you can set resource limits specifically for each container.

There are two ways to specify the resources for a Docker container on Triton:

  1. Specifying a package with --label com.joyent.package=<package name>
  2. Specifying just the memory limit with -m

If you don't specify anything, Triton will use a default that might not match the resources you need for the application.

You can find the list of available packages and their prices on the website or via the CLI tool using triton package list. The following example will run a high-memory instance with 32GB of RAM:

triton-docker run --label com.joyent.package=g4-highram-32G -it ubuntu bash

You can also specify just the memory limit using -m. Triton will automatically select the smallest g4-highcpu-* package with enough memory for the specified limit. The following example will result in a g4-highcpu-32G instance:

triton-docker run -m 32gb -it ubuntu bash

The g4-highcpu-* instances are ideal for microservices applications common with Docker, but you may wish to use a different package type for your application. In that case we recommend you use the --label com.joyent.package= syntax to select the exact package you need.
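For example, here's a sketch of how you might find and pin a general-purpose package; the package name below is illustrative, so pick one from the list in your own data center:

# List available packages with their memory and disk sizes
triton package list -o name,memory,disk

# Pin this container to the exact package you want
triton-docker run --label com.joyent.package=g4-general-8G -it ubuntu bash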

Networks

Triton offers unique networking features for Docker. Instead of each container sharing the same NIC on the VM, each and every container on Triton receives its own NIC(s). This eliminates frustrations about port conflicts and the performance costs of both hardware virtualization and Docker networking. You can read more about Triton's approach to networking on our blog.

There are a number of ways to specify what networks your Docker container should be attached to when starting it:

  1. Indicate nothing: your Docker container will be attached to your private network fabric, accessible only to your other containers and VMs. It will have a unique, private IP address, so you can connect to it from any of your other containers in the data center without port mapping.
  2. Specify a port with -p <port>: the Docker container will be attached to the public internet and given a unique public IP address. Triton will also create a firewall rule that opens just the port you specified. Your container will be accessible from the public internet, and you'll never have to map traffic through a host or do port mapping. Your container will also be connected to your default private network as described above.
  3. Specify a network with --network=<network name>: the Docker container will be attached to the specified network (if you have permission to access it). The container will not be connected to your default private network.
  4. Specify both -p and --network=: the container will be connected to both your specified network and a public network.

Examples:

# Connect to your default private network
triton-docker run -d nginx

# Connect to both your default private network and
# the public internet, with the named ports open
triton-docker run -d -p 80 -p 443 nginx

# connect to just the named network
triton-docker run -d --network=dev-net-123 nginx

# connect to both a public network
# and the specified network UUID
triton-docker run -d -p 80 -p 443 --network=d8f607e4 nginx

You can create and manage networks from the my.Joyent portal or via CloudAPI and the sdc-fabrics tool. Support for creating and managing those networks is also coming to our Docker API implementation in the future.
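To find the name or UUID of a network to pass to --network=, you can list the networks visible to your account with the triton CLI; the network name below is the same illustrative one used in the examples above:

# List the networks your account can attach containers to
triton network list

# Attach a container to one of them by name or UUID
triton-docker run -d --network=dev-net-123 nginx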

Firewalls

The Triton cloud firewall is automatically enabled for all your Docker containers. By default, the firewall allows any traffic between your containers on your private network, but you can separate containers onto different networks, create firewall rules that isolate containers by tag or label, or combine both approaches to connect everything exactly the way you'd like.

When you choose to expose a container to the public internet with the -p flag, the firewall is automatically configured to pass traffic only on the ports you specify. You can add more detailed firewall rules for a given instance, or apply firewall rules by label or tag.
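As a sketch of a tag-based rule (the tags and port here are hypothetical), the following would allow only instances tagged role=web to reach MySQL on instances tagged role=db:

# Allow web containers to reach the database containers on port 3306 only
triton fwrule create 'FROM tag "role" = "web" TO tag "role" = "db" ALLOW tcp PORT 3306'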

Volumes

When running Docker in a VM, it's tempting to use -v to map a volume from the VM into the container. That convenience typically turns the VM into a "pet" and forces you to track what files were written to which VMs. It also makes it impossible to enjoy the scale and convenience of running the container on Triton.

Joyent recommends persistent storage patterns that eliminate pets and make scaling easy. This has been discussed in the context of immutable infrastructure, but take a look at a real-world example in our MySQL on Autopilot implementation, which automates backups of the database as well as bootstrapping of replicas without ever mapping a volume from the host.

That said, work is in progress on a network-attached volume solution that can be shared among multiple containers. Once it's released, you'll be able to define and attach volumes to containers using the regular triton-docker volume... commands.

Logging

Triton supports Docker log drivers like syslog, Graylog, and Fluentd. And, of course, if you specify no custom log driver, then Docker containers on Triton log as you'd expect, and you can check the logs with triton-docker logs <container>.
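For example, to ship a container's output to a remote syslog endpoint (the address below is a placeholder for your own log host):

# Send this container's logs to a remote syslog server
triton-docker run -d --log-driver=syslog --log-opt syslog-address=tcp://logs.example.com:514 nginx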

If you have an alternative approach to logging, like running syslog in your container, we won't try to talk you out of it. But, you might rather just use an infrastructure container, which will make it easy to run all the services you'd expect of a unix host...on bare metal.

However, if your logging solution depends on writing logs to a volume mapped in from the host, you'll have to ask yourself "where's the host?" See the volumes discussion above for more.

Monitoring

Most Docker monitoring solutions depend on an agent running in the host, but what if there is no host?

On Triton, getting metrics on your container or VM performance is as easy as opening your browser to the my.Joyent portal page for the instance. You'll find the most common metrics displayed there, and you can dig in for detailed metrics on everything from CPU wait time to thread creations.

Triton CloudAnalytics

If you want to integrate the metrics into your own applications and logging solutions, Triton Container Monitor makes it easy to track all of your containers and VMs together using Prometheus (open source) or compatible tools. Container Monitor has not yet been released in our public cloud, but the project was designed and built using Joyent's open source process, starting with RFD27.

In many cases, infrastructure metrics such as CPU and memory consumption are not detailed enough. Many applications offer specific metrics that can be more important in understanding application performance. In those cases, we recommend using ContainerPilot telemetry to collect and report metrics.

Controlling container placement

Building "cloud scale" applications means designing to accommodate failure, and that requires distributing application components so that localized failures don't take down the entire application. Some people also want to control placement to keep components close together for theoretically faster network performance between them. While that is possible to do that on Triton, most people have found overall networking on Triton so much faster than on other clouds that it isn't worth the extra effort to micromanage container placement.

Triton supports many of the affinity options defined for Docker Swarm, and they can be expressed either by overloading environment variables (-e 'affinity:<filter>') or by setting a Docker label for the container[1] (--label 'com.docker.swarm.affinities=["<filter>","<filter>"]').

We recommend setting a loose anti-affinity for each of your application components (keep nodes of your DB separate from each other, for example). The following demonstrates that for a number of containers all named mysql-<number>:

triton-docker run --name mysql-3 --label 'com.docker.swarm.affinities=["container!=~mysql-*"]' autopilotpattern/mysql

That rule can also be seen in context in a Docker Compose file. More options can be found in our Docker API docs.
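The same soft anti-affinity can be expressed with the environment variable form mentioned above (the container name here is illustrative):

triton-docker run --name mysql-4 -e 'affinity:container!=~mysql-*' autopilotpattern/mysql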

To see where your containers are, use the Triton CLI tool:

$ triton instances -o name,compute_node | sort
NAME             COMPUTE_NODE
wp_consul_1      f57ce6d4-18d3-11e4-bb70-002590ea597c
wp_consul_2      8a818a00-e289-11e2-8944-002590c3ebfc
wp_consul_3      c78dd9de-e064-11e2-b0c9-002590c3edd4
wp_memcached_1   34e1bf1e-b766-11e2-900f-002590c32058
wp_memcached_2   d088b3f6-2f7e-11e3-b276-002590c3ed68
wp_memcached_3   d088b3f6-2f7e-11e3-b276-002590c3ed68
wp_mysql_1       cdec4e60-2f7d-11e3-8c56-002590c3ebec
wp_mysql_2       f57ce6d4-18d3-11e4-bb70-002590ea597c
wp_mysql_3       69590e24-2f7e-11e3-a59d-002590c3f140
wp_nfs_1         f57ce6d4-18d3-11e4-bb70-002590ea597c
wp_nginx_1       aaa7da0a-2f7d-11e3-9d09-002590c3ed18
wp_nginx_2       561221be-e291-11e2-8a70-002590c3edd0
wp_prometheus_1  69590e24-2f7e-11e3-a59d-002590c3f140
wp_wordpress_1   44454c4c-3300-1035-804e-b4c04f383432
wp_wordpress_2   69590e24-2f7e-11e3-a59d-002590c3f140
wp_wordpress_3   f57ce6d4-18d3-11e4-bb70-002590ea597c

This CLI example will reveal any piling up of instances on the same compute node:

$ triton insts -H -o compute_node | sort | uniq -c | sort
   1 34e1bf1e-b766-11e2-900f-002590c32058
   1 44454c4c-3300-1035-804e-b4c04f383432
   1 561221be-e291-11e2-8a70-002590c3edd0
   1 8a818a00-e289-11e2-8944-002590c3ebfc
   1 8f836eda-1cf6-11e4-a382-002590e4f380
   1 aaa7da0a-2f7d-11e3-9d09-002590c3ed18
   1 c78dd9de-e064-11e2-b0c9-002590c3edd4
   1 cdec4e60-2f7d-11e3-8c56-002590c3ebec
   2 d088b3f6-2f7e-11e3-b276-002590c3ed68
   3 69590e24-2f7e-11e3-a59d-002590c3f140
   4 f57ce6d4-18d3-11e4-bb70-002590ea597c

Orchestration

"Orchestration" can mean different things to different people, but often times it's about coordinating or automating some aspect of the operation of your application. Most commonly, this is about connecting different components of an app running in different containers to each other, but it can also include a lot more.

Just as there are different meanings of "orchestration," there are different approaches. Some approaches tie the orchestration to the scheduler and infrastructure, so that your applications can't easily be moved to different schedulers or infrastructure. Joyent recommends an application-centric approach to micro-orchestration we call the Autopilot Pattern, and we've built out a library of applications you can use as building blocks.

Automatic container DNS

Docker lends itself well to disposability, the idea that compute instances should be replaced when deploying software updates, rather than upgraded in place with that software. Disposability can be dismissed as just another expression of the pets-vs-cattle debate because of how difficult it can be to coordinate containers running in VMs, but on Triton it's the default: deploying your new Docker image with a new triton-docker run... deploys a new instance of your app.

The forced disposability of Docker on Triton makes various deployment models (rolling, canary, red/blue) easy to implement, but it also means that every deploy can change the IP addresses for the set of containers serving an app.

To solve that, Triton offers automated DNS called Container Name Service. Triton CNS allows you to set a "service tag" for a group of instances so they can all share the same DNS name. Triton CNS works inside the data center to connect instances to each other, but it's ideal for giving a set of instances—say a handful of Nginx instances at the front of your app—a consistent name on the public internet.

To set the CNS service name in Docker, use a --label triton.cns.services=<service name> when starting the container. The following will start three containers, all with the same CNS name:

triton-docker run -d --name=nginx-1 -p 80 --label triton.cns.services=example-nginx nginx
triton-docker run -d --name=nginx-2 -p 80 --label triton.cns.services=example-nginx nginx
triton-docker run -d --name=nginx-3 -p 80 --label triton.cns.services=example-nginx nginx

After starting those containers, you can look up the CNS name using dig (see docs to understand the CNS name pattern, and how to use your own domain name):

$ dig example-nginx.svc.d42e7882-89d2-459e-bc0a-e9af0bca409c.us-sw-1.triton.zone

; <<>> DiG 9.8.3-P1 <<>> example-nginx.svc.d42e7882-89d2-459e-bc0a-e9af0bca409c.us-sw-1.triton.zone
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35713
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;example-nginx.svc.d42e7882-89d2-459e-bc0a-e9af0bca409c.us-sw-1.triton.zone. IN A

;; ANSWER SECTION:
nginx.svc.d42e7882-89d2-459e-bc0a-e9af0bca409c.us-sw-1.triton.zone. 28 IN A 165.225.158.136
nginx.svc.d42e7882-89d2-459e-bc0a-e9af0bca409c.us-sw-1.triton.zone. 28 IN A 64.30.133.29
nginx.svc.d42e7882-89d2-459e-bc0a-e9af0bca409c.us-sw-1.triton.zone. 28 IN A 163.19.30.145

;; Query time: 60 msec
;; SERVER: 10.32.200.16#53(10.32.200.16)
;; WHEN: Tue Jan 31 10:25:53 2017
;; MSG SIZE  rcvd: 116

But, be cautious about how you use DNS

DNS isn't the perfect solution for every application. DNS client problems are surprisingly common, including clients that refuse to respect TTLs and clients that don't recognize multiple IPs in an A record. These problems are frustrating in a web browser (though modern browsers now have advanced DNS implementations that avoid these problems), but they can lead to application failures when they occur inside the data center between the components of an application.

One way around those problems is using active discovery where possible. If you can't modify your app or runtime environment to ensure good DNS client behavior, then you should consider using an alternative to DNS for discovery of application components. ContainerPilot automates the processes of service registration, health checking, and discovery for this purpose.

Debugging Docker container issues on Triton

Sometimes your container image may have an error that prevents it from starting as you expect. If the error happens early enough—before the container's main process starts—the error won't appear in the Docker logs. There's a separate log that includes all the details of what SmartOS did to start your container (including any errors if it failed) in /var/log/sdc-dockerinit.log. You can inspect that log in a running container, but if the startup failed, you can use triton-docker cp to get it from the stopped container:

triton-docker cp <container>:/var/log/sdc-dockerinit.log <local path>

Oftentimes you'll discover an error in the image's permissions, or perhaps that the CMD or ENTRYPOINT executable is missing. If the problem still isn't clear, you can always contact support.

Docker commands and options to avoid

Triton Elastic Docker Host supports most of the features of the Docker Remote API v1.22 spec. That means you can do most everything you'd expect to do, but there are a few exceptions:

  • docker build ...: we advise you build your Docker images in your local environment or using the Docker daemon on a KVM instance. Our focus is on making the Triton Elastic Docker Host the best place to run your Docker images; building Docker images using the Triton Elastic Docker Host is not supported.
  • docker network...: Triton supports rich networks unlike any you'll be able to create with Docker, but you'll have to use CloudAPI to create them for now. Take a look at the networks section, above.
  • Docker Compose file format 2+: technically this works now if you define network_mode: bridge (see the sketch after this list). Your containers won't actually be using bridge networking, but it's a workaround until we implement support for managing Triton networks using the docker network... commands; see networks, above.
  • docker volume...: This family of commands in Docker is probably an antipattern that turns Docker hosts into pets, but take a look at the volumes section above
  • docker service...: this family of commands seems to finally be taking shape in the most recent Docker release; Triton will offer comparable features in 2017
  • docker events...: there are no plans to implement this feature
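Here's a minimal sketch of the Compose file format 2 workaround mentioned above; the service definition is illustrative:

version: '2'
services:
  nginx:
    image: nginx
    network_mode: bridge
    ports:
      - 80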

  1. At the time of writing, the label syntax was in Docker Swarm code, but not documented.



Post written by Casey Bisson