Production Docker Logs on Triton

Logs contain valuable data for operating your containerized applications. During development, we can use docker logs to get a stream of logs from a container. But that's just one of the ways Docker provides to access logs; Docker log drivers support protocols that can centralize logs for aggregation and analysis.

On Triton we recently added support for the syslog, Graylog, and Fluentd log drivers. In the future we'll support rotating Docker logs to our Manta object store, which will provide a new way to manage your log data.

Why log drivers?

You're already using a log driver with Docker whether you know it or not. The default is the json-file log driver, which means the output of your application is being persisted in JSON format on the compute node where your container is running. When you run docker logs, the Docker client asks the runtime (either the Docker Engine or Triton) for an HTTP stream of that JSON data, which it then parses and prints to your terminal. The Docker client supports flags like --tail or --follow that make it easier to digest large logs. And you can even use docker-compose logs to combine the logs from several containers at once. This plays nicely with twelve-factor apps, which send their logs as a stream to stdout.
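For example, to follow the most recent output of a single running container during development (my_container is a placeholder name here), you might run:

docker logs --follow --tail 50 my_container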

But in production we need to make sure the logs are preserved after we stop and remove a container. We want to centralize log collection so we can make it available for real-time or offline aggregation and analysis. We might want to send our logs to something like Elasticsearch-Logstash-Kibana, or send them off-site to a third-party log processing provider. For this we'll want our Docker runtime to ship the logs from the running container to a "log sink."

To support this concept, Docker added the option to use log drivers to send your logs over a selection of protocols to remote hosts. This gives us ease of use in development and operability in production, while keeping the application container image unchanged. Triton now supports the syslog, Graylog ("gelf"), and Fluentd log drivers, so you can centralize your logs to a service that supports any of those protocols.
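If you're not sure which log driver a given container is using, you can check with docker inspect. This is standard Docker Engine behavior, and my_container is again a placeholder name:

docker inspect --format '{{.HostConfig.LogConfig.Type}}' my_container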

How do I use log drivers on Triton?

The new log drivers are supported across Joyent's public cloud as well as in Triton private clouds. For more details about working with the Docker log drivers in private data center installs, see the Joyent Docker API documentation.

To send container logs to a syslog server, you can pass the syslog log driver and options to your docker run command as follows:

docker run \
    --log-driver=syslog \
    --log-opt syslog-address=tcp://host:port \
    --log-opt syslog-facility=daemon \
    --log-opt tag="example" \
    my_container_image

Or, if you're using Docker Compose you can add these attributes to your docker-compose.yml file:

example:
    image: my_container_image
    log_driver: syslog
    log_opt:
      syslog-address: "tcp://host:port"

The syslog log driver supports the tcp and udp protocols in the required syslog-address field. You can add the tag or syslog-facility attributes to your logs as described in the Docker log driver documentation for syslog.
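The tag option also accepts Docker's log tag template markup, which makes it easier to tell containers apart once their logs are centralized. Assuming Triton's Docker API handles tag templates the same way as the Docker Engine, a sketch would look like this:

docker run \
    --log-driver=syslog \
    --log-opt syslog-address=tcp://host:port \
    --log-opt tag="{{.Name}}/{{.ID}}" \
    my_container_image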

To send container logs to a server that supports Graylog Extended Log Format (GELF), such as Graylog or Logstash, pass the gelf log driver and options as follows:

docker run \
    --log-driver=gelf \
    --log-opt gelf-address=udp://host:port \
    --log-opt tag="example" \
    --log-opt labels=label1,label2 \
    --log-opt env=env1,env2 \
    my_container_image

The gelf-address option is required. See the Docker log driver documentation for gelf for the optional tag, labels, and env fields.
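In Docker Compose the same options map to the log_driver and log_opt attributes, just as in the syslog example above. The service and image names here are placeholders:

example:
    image: my_container_image
    log_driver: gelf
    log_opt:
      gelf-address: "udp://host:port"
      tag: "example"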

To send container logs to a Fluentd server, pass the fluentd log driver and options as follows:

docker run \
    --log-driver=fluentd \
    --log-opt fluentd-address=host:port \
    --log-opt tag="example" \
    my_container_image

As with the other drivers, the address argument is required and you can find details about the tag option in the Docker log driver documentation for Fluentd.
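On the receiving end, the Fluentd log driver sends to Fluentd's forward input, which listens on port 24224 by default. A minimal fluentd.conf that accepts those logs and prints them to stdout might look like the sketch below; the match pattern corresponds to the tag="example" option set above, and this is not meant as a production configuration:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match example>
  @type stdout
</match>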

When you're using these new log drivers, you'll see some additional processes running in your container to handle the logging. Using Triton CNS for the target address might be a great option if you're sending to a log collection service you're running in another container.
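For example, if your log collection service runs in another container on Triton, its CNS-generated DNS name can stand in for a hard-coded IP. The hostname below only illustrates the general shape of a CNS name, with ACCOUNT_UUID and DATA_CENTER as placeholders; check the Triton CNS documentation for the exact name generated for your service:

docker run \
    --log-driver=gelf \
    --log-opt gelf-address=udp://elk.svc.ACCOUNT_UUID.DATA_CENTER.cns.joyent.com:12201 \
    my_container_image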

What can I do with log drivers on Triton?

These new log drivers need somewhere to send their logs. ELK (Elasticsearch, Logstash, and Kibana) is a commonly used log aggregation and analysis stack and can serve as a log sink on Triton. Logstash receives the logs from the log drivers and stores them in an Elasticsearch cluster. Kibana, a Node.js application, provides a user-friendly web UI for exploring the data we store in Elasticsearch.
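As a rough sketch of the receiving side (not the actual configuration from the blueprint below), a Logstash pipeline that accepts both syslog and GELF input and forwards everything to Elasticsearch could be configured like this; the ports are the plugins' conventional defaults, and the elasticsearch hostname assumes a service discoverable under that name:

input {
  syslog { port => 514 }
  gelf   { port => 12201 }
}

output {
  elasticsearch { hosts => ["elasticsearch:9200"] }
}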

Configuring an ELK stack normally involves editing configuration files to include the IP addresses of each component. But we can use the autopilot pattern and ContainerPilot to deploy a self-configuring, self-operating ELK stack.

Check out the demonstration code on GitHub and try it out on Triton. Once you've configured your Docker client to use Triton as its target environment, run ./test.sh check to confirm your environment configuration is correct and to create an _env file for this application. Then you can bring up a new ELK cluster with Docker Compose:

$ docker-compose -p elk up -d

Within a few moments all components of the application will be registered in the Consul discovery service and the Elasticsearch cluster will have formed. We can add new Elasticsearch nodes just by running docker-compose -p elk scale with a service name and the desired count.
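For example, assuming the Compose project names its Elasticsearch data service elasticsearch (check the blueprint's docker-compose.yml for the actual service names), scaling to three nodes would look something like this:

docker-compose -p elk scale elasticsearch=3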

Go check out the GitHub repo for a test application that logs to this stack.

ELK in 90 seconds

My friend Casey took the autopilot ELK blueprint above for a spin and recorded this screencast. There's no audio, but you can see all the steps. On first startup, Kibana reports an error when it has no data yet, so the first view of its web interface looks alarming, but that clears up as soon as Casey starts Nginx to produce some logs for the stack.



Post written by Tim Gross