What are the Steps to Containerize my Monolithic App?

In our online meetup last month, Bryan Cantrill answered dozens of questions live, but there were quite a few that we didn't have time for. We invited everybody to email devops@joyent.com and the response has been strong. We'll be answering as many questions as we can, starting with this one from Thomas F.:

If my application is monolithic and runs on one large server, what are the steps to containerize my app?

That's a great question, and it's one that I hear a lot. When we talk about containers, many people think of application containers like Docker. There's a lot of really interesting work being done to deconstruct monolithic applications into microservices, each running in its own container. Application containers are ideal for microservices architectures, but containers aren't limited to that.

The choice to run a single process in a container, or to follow a specific application architecture, isn't based on any limitations of the underlying technology[1], and it would be a mistake to think containers can't be used for other architectures. The truth is, many of the advantages of containerization (also called OS virtualization) are independent of application architecture, and monolithic apps can enjoy significant benefits from them as well.

Network and storage I/O, for instance, are typically much faster in container-native environments than on VMs. Our tests show nearly 10x faster write speeds on our platform vs. competing VM-based infrastructure[2]. Taking advantage of those benefits isn't a matter of conforming your application to a container, but of choosing a container technology and strategy that fits your app.

To start with, Docker isn't strictly limited to single-process apps. Phusion's baseimage-docker has built-in support for running multiple processes through runit, along with workarounds for the process-reaping problem that some container runtimes suffer from. Another choice is Triton infrastructure containers.
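As a rough sketch, a multi-process image built on phusion/baseimage might look like the Dockerfile below. The service name "myapp" and its run script are hypothetical placeholders; adapt them to your own processes and pick a current baseimage tag.

# Build on phusion/baseimage so runit and my_init handle PID 1 duties,
# including reaping orphaned child processes.
FROM phusion/baseimage:0.9.16

# Use baseimage-docker's init system as the container entry point.
CMD ["/sbin/my_init"]

# Register the app as a runit service: one executable "run" script per service.
RUN mkdir -p /etc/service/myapp
COPY myapp-run.sh /etc/service/myapp/run
RUN chmod 755 /etc/service/myapp/run

Each additional long-running process (a queue worker, for example) gets its own directory under /etc/service, and runit supervises them all.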

Triton infrastructure containers running container-optimized Linux offer nearly everything you'd expect of a Linux host. Provision a host running Ubuntu and you'll get apt, upstart, sshd, normal logging, and pretty much everything else. What you don't get, however, is the slow performance of running in a VM. You can choose container-optimized distributions of Debian and CentOS as well, and each will feel right at home.

So, what's the answer to the question about the steps to containerize a monolithic app? Three: start, install, profit.

1: Start a new infrastructure container

Create a Triton infrastructure container running container-optimized Linux in the portal, or use the CLI utility:

sdc-createmachine \
    --name=my-new-infrastructure-container-1 \
    --image=$(sdc-listimages | json -a -c "this.name === 'ubuntu-14.04' && this.type === 'smartmachine'" id | tail -1) \
    --package=$(sdc-listpackages | json -a -c "/^t4/.test(this.name) && this.memory === 1024" id | tail -1) \
    --networks=$(sdc-listnetworks | json -a -c "this.name === 'default'" id) \
    --networks=$(sdc-listnetworks | json -a -c "this.name === 'Joyent-SDC-Public'" id)

You can do that in any of our production data centers today[3]. Change the values for memory, base image, and name as needed. Use the sdc-listimages, sdc-listpackages, and sdc-listnetworks commands on their own to view the available options for each.
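If you want to browse those options before filling in the flags above, the same json tool can pull out just the interesting fields. The field names below follow the CloudAPI output, so adjust them if your account shows different columns:

# Browse available images, packages, and networks
sdc-listimages   | json -a id name version os
sdc-listpackages | json -a id name memory disk
sdc-listnetworks | json -a id name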

2: SSH in and install your app

Get the IP address from the portal, or with sdc-getmachine $UUID or sdc-listmachines in the CLI tool, then ssh into the new container.
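For example, assuming your SSH key is on the account and the container from step 1 is running, that might look like:

# Find the container's UUID and primary IP, then log in as root
sdc-listmachines | json -a id name primaryIp
ssh root@$(sdc-getmachine <machine-uuid> | json primaryIp)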

Once inside, you can install and configure your app using apt, dpkg, yum, or other tools provided by the Linux distribution you chose. You can add configuration management tools like Ansible, Chef, or others to automate builds and deploys.
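As a sketch, installing a hypothetical LAMP-style monolith on the Ubuntu image above might look like the following; the package names and repository URL are placeholders for whatever your app actually needs:

# Inside the container: install dependencies and fetch the app
apt-get update
apt-get install -y nginx mysql-server php5-fpm git
git clone https://example.com/your/app.git /var/www/app
# ...then configure and start services just as you would on any Ubuntu host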

3: Your app is containerized!

Seriously, your app is now containerized and can enjoy the performance advantages of Joyent's bare metal, container-native platform.

If this feels similar to the steps to set up an app in a virtual machine, that's because we designed Triton infrastructure containers to look and feel like VMs in many ways. The major differences you'll see are in performance, compute density, and resizability, not in the tooling or process to set them up. That's also why they're called 'machines' rather than 'containers' in the API.

More questions, more answers

The response to our first online office hours and the number of questions we've received via email have been fantastic. We're going to continue answering questions on the blog, so keep emailing devops@joyent.com, but also keep an eye out for future online meetup events. Follow @joyent or sign up to be notified so you can be sure not to miss them.


  1. Actually, some application container practices can be traced back to limitations and frustrations of zombie reaping in some container implementations, but those issues don't affect containers on Joyent's Triton. 

  2. Those tests were specific to Docker containers, but the biggest differences are in the cost of virtualized I/O in VMs vs. bare metal I/O on container-native infrastructure. 

  3. The packages and networks in our beta data center are slightly different. Use the sdc-listpackages and sdc-listnetworks commands to see the available options there.



Post written by Casey Bisson