Simplifying service discovery in Docker with Containerbuddy

Previously, I discussed the common anti-pattern of injecting a load balancer between microservices and proposed a container-native architecture to replace it. I also discussed the new responsibilities this places on the application. But although container-native applications come into the world understanding these responsibilities, no one wants to rewrite all their existing applications!

We can wrap each existing application in a shell script that registers itself with the discovery service easily enough, but watching that service for changes and making sure health checks keep being sent is more complicated. We could put a second process in the container to do that work, but unless we run a supervisor as PID1 there's no way of knowing whether that buddy process has died.
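
To make the gap concrete, here is roughly what the register-and-exec half of such a wrapper might look like, sketched in Go rather than shell for brevity. The service name, port, and binary path are hypothetical. Notice that once it exec's the real application, nothing is left behind to watch for changes or to keep sending health checks.

```go
// A sketch of the naive "wrapper that registers itself" approach,
// using Consul's agent API. The service name, port, and binary path
// ("myapp", 8080, /bin/myapp) are hypothetical.
package main

import (
	"bytes"
	"net/http"
	"os"
	"syscall"
)

func main() {
	// register this instance with the local Consul agent on startup
	body := bytes.NewBufferString(`{"Name": "myapp", "Port": 8080}`)
	req, _ := http.NewRequest("PUT",
		"http://localhost:8500/v1/agent/service/register", body)
	if _, err := http.DefaultClient.Do(req); err != nil {
		os.Exit(1)
	}

	// replace this process with the real application; after this line
	// there is no process left to watch Consul or send health checks
	if err := syscall.Exec("/bin/myapp", []string{"myapp"}, os.Environ()); err != nil {
		os.Exit(1)
	}
}
```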

Discovery services like Consul provide a means of performing health checks from outside our container, but that means packaging the tooling we need into the Consul container. If we need to change the health check, then we end up re-deploying both our application and Consul, which unnecessarily couples the two.

Containerbuddy to the rescue!

Containerbuddy is a shim written in Go that makes it easier to containerize existing applications. It can act as PID1 in the container and fork/exec the application; if the application exits, then so does Containerbuddy.
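
The heart of that pattern is small enough to sketch. This is not Containerbuddy's actual source, just a minimal illustration of a PID1 shim that fork/execs a child, wires up its stdio, and exits when it exits:

```go
// Minimal sketch of a PID1 shim; not Containerbuddy's actual source.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// fork/exec the wrapped application, passing through our arguments
	cmd := exec.Command(os.Args[1], os.Args[2:]...)

	// attach the child's stdio to the container's, so `docker logs`
	// sees the application's output
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err := cmd.Run()

	// if the application exits, so do we, handing its exit code
	// back to the Docker Engine
	if exitErr, ok := err.(*exec.ExitError); ok {
		if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
			os.Exit(status.ExitStatus())
		}
	}
	if err != nil {
		os.Exit(1)
	}
	os.Exit(0)
}
```

A production-ready shim also needs to forward signals to the child and reap orphaned processes; the sketch above skips that.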

Alternatively, if your application double-forks (which is not recommended for containerized applications, but hey, we are talking about pre-container apps here!), you can run Containerbuddy as a side-by-side buddy process within the container. In that case the container will not die when the application dies, which can create complicated failure modes; these can be mitigated by a good TTL health check that detects the problem and alerts you.

Containerbuddy registers the application with Consul on start and periodically sends TTL health checks to Consul; should the application fail, Consul stops receiving those health checks and, once the TTL expires, no longer considers the application node healthy. Meanwhile, Containerbuddy runs background workers that poll Consul, checking for changes in dependent/upstream services, and calls an external executable whenever a change is detected.
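
Those two loops, the TTL heartbeat and the backend watch, might look roughly like the sketch below. To be loud about the assumptions: the Consul agent address, the service names, the check ID, and the /bin/onchange.sh handler are all hypothetical, and this is an illustration of the pattern rather than Containerbuddy's implementation.

```go
// Sketch of Containerbuddy's two background jobs: a TTL heartbeat for
// our own service and a poll for changes to an upstream service. The
// names ("myapp", "backend") and the onChange handler are hypothetical.
package main

import (
	"io"
	"net/http"
	"os/exec"
	"time"
)

const consul = "http://localhost:8500" // assumes a local Consul agent

// heartbeat marks our TTL check as passing; if we stop calling it,
// Consul lets the TTL expire and marks the node unhealthy.
// "service:myapp" is the check ID Consul assigns to a check that was
// registered alongside the service.
func heartbeat() {
	for range time.Tick(10 * time.Second) {
		req, _ := http.NewRequest("PUT",
			consul+"/v1/agent/check/pass/service:myapp", nil)
		http.DefaultClient.Do(req)
	}
}

// watch polls Consul for the healthy nodes of an upstream service and
// execs an onChange handler when the response changes. (A real
// implementation would diff the set of nodes, not the raw body.)
func watch() {
	var last string
	for range time.Tick(10 * time.Second) {
		resp, err := http.Get(consul + "/v1/health/service/backend?passing")
		if err != nil {
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if cur := string(body); cur != last {
			last = cur
			// the reaction is just an executable inside the container
			exec.Command("/bin/onchange.sh").Run()
		}
	}
}

func main() {
	go heartbeat()
	watch()
}
```

The key design choice is the last step: the reaction to a topology change is just an executable inside the container, so it can do whatever the application needs.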

Using local scripts to test health or act on backend changes means we can run health checks that are specific to the service, inside the service's own container, which keeps the orchestration and the application bundled together.

Containerbuddy processes

Containerbuddy is explicitly not a supervisor process. Although it can act as PID1 inside a container, if the shimmed process dies, so does Containerbuddy (and therefore the container itself). Containerbuddy returns the exit code of its shimmed process to the Docker Engine or Triton, so that it appears as expected when you run docker ps -a and look at your exit codes. It also attaches your application's stdout/stderr to the container's stdout/stderr, so that docker logs works as expected.

I'll be using Containerbuddy in some upcoming posts to provide an example of a container-native application topology.

Post written by Tim Gross