Joyent's next generation, Docker-native container service preview

March 06, 2015 - by Casey Bisson

Docker and containers are transforming how we architect, package, and deploy modern applications. So why are so many of us still running Dockerized apps on the same cloud infrastructure we've had for ten years? Why are we running our apps on clipper ships and stuffing containers onto break-bulk haulers? Why are we running Docker on VMs?

At Joyent, we think there's a better way. We think that next generation, cloud native apps should run on container native infrastructure. Being container native means breaking down compute resources so that containers are the atomic unit. It means eliminating the cost and complexity of managing fleets of VMs or hardware hosts for your containers. It means automating the infrastructure so that we can use the entire data center as a single container host.

Months ago we embarked on a project to bring cloud native apps to container native infrastructure by integrating rich Docker support into our existing offerings. We've done that and more. We think we've built the most secure and fastest Docker hosting solution, with the most convenient networking and host management tools, and now we'd like to invite you to try them out.

We're opening up the program to selected users now, and we're expanding the program quickly. You can add your name to our early-access list now and read along for an idea of how our new Triton Elastic Container Service for Docker works and how to use it.

Install the CLI tools

You've probably already done some of this, but this is the list:

  1. You'll need an account in our public cloud service. Create one if you don't have one.
    1. Docker access is only available for the primary account, not sub-accounts, at this time.
    2. A public SSH key must be registered with Joyent and the private key must be present on the machine running the Docker CLI client tools.
  2. Install Docker. We'll be running the docker command in client mode to connect to the remote Docker host.
    1. Docker 1.5 is now supported (an earlier version of this post required Docker 1.4.1). The minimum required client CLI version is Docker 1.4.0; run docker --version to check yours, as shown below.
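
To confirm what you have, the version check is a one-liner:

    docker --version
    # prints the client version, e.g. "Docker version 1.5.0, ..."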

You don't need to install any other tools to launch and manage Docker containers in our new container service, but you'll be able to do a lot more if you do have these:

  1. The CloudAPI CLI tools aren't required, but with them you can get a lot more detail about your containers than is available via the Docker API.
  2. The Manta CLI tools to interact with Joyent's object store.
  3. The json and bunyan utilities are recommended alongside both the CloudAPI and Manta CLI tools; an install sketch follows this list.
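
All of these ship as Node.js packages, so one way to install them (a sketch, assuming you already have Node.js and npm on your machine) is:

    # CloudAPI CLI (node-smartdc), Manta CLI, plus the json and bunyan helpers:
    npm install -g smartdc manta json bunyan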

Configure the Docker CLI client

Joyent's infrastructure software presents the entire data center as a single Docker host, so you can use the Docker CLI tools you're familiar with on your laptop to launch fleets of containers in the cloud. You don't need new tools, you don't need to manage and connect to multiple hosts, you just docker run and you're done.

To get there, we do need to configure remote access for the Docker CLI, though. We've got a helper script for that; let's download it like so:

curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh

That will download it to the current directory and set permissions to make it executable. Now let's run it. You'll need to substitute the CloudAPI endpoint for your target data center, your Joyent account username, and the path to your SSH private key:

./sdc-docker-setup.sh -k <CloudAPI endpoint> <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE>
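
For example, an invocation for a user named jpcuser might look like the following (the endpoint here is hypothetical; substitute the CloudAPI URL for your data center and your own account and key):

    ./sdc-docker-setup.sh -k https://us-east-3b.api.joyentcloud.com jpcuser ~/.ssh/id_rsa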

That will output something like the following:

Setting up for SDC Docker using:
    Cloud API:       https://<CloudAPI endpoint>
    Account:         jpcuser
    SSH private key: /Users/localuser/.ssh/id_rsa

Verifying credentials.
Credentials are valid.
Generating client certificate from SSH private key.
writing RSA key
Wrote certificate files to /Users/localuser/.sdc/docker/jpcuser
Get Docker host endpoint from cloudapi.
Docker service endpoint is: tcp://<Docker API endpoint>

* * *
Successfully setup for SDC Docker. Set your environment as follows:

    export DOCKER_CERT_PATH=/Users/localuser/.sdc/docker/jpcuser
    export DOCKER_HOST=tcp://<Docker API endpoint>
    alias docker="docker --tls"

Then you should be able to run 'docker info' and you see your account
name 'SDCAccount' in the output.

What that script has done is generate a TLS certificate from your SSH key and write it to a directory under your home directory. The TLS certificate is what the Docker client uses to identify and authenticate your requests to the Docker API endpoint.

To complete the setup you'll need to follow the steps outlined in the script output to export some additional environment variables and tell Docker to use TLS encryption. The steps look like this, but note that DOCKER_CERT_PATH is specific to each user, so use what's shown in your script output rather than copying the values below.

    export DOCKER_CERT_PATH=/Users/localuser/.sdc/docker/jpcuser
    export DOCKER_HOST=tcp://<Docker API endpoint>
    alias docker="docker --tls"

You may have to unset DOCKER_TLS_VERIFY if you get errors about missing CA files. We're working on that bug as this is being written.
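
If you hit that, clearing the variable before retrying is a one-liner:

    unset DOCKER_TLS_VERIFY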

A port in the cloud

With the Docker CLI configured you should be able to interact with the API endpoint as simply and conveniently as when developing Docker containers on a laptop. Let's start with an easy one:

docker info

That should return the status of the API with some details like this:

Containers: 0
Images: 0
Storage Driver: sdc
 SDCAccount: jpcuser
Execution Driver: sdc-0.1.0
Operating System: SmartDataCenter
Name: us-east-3b

There's nothing sadder than zero containers, so let's start one.

docker run -it -e "TERM=xterm" ubuntu bash

That will start a basic Ubuntu container and drop us into a shell on it. Go ahead and run apt-get update (to freshen up the repo indexes) and then install something. Run apt-get install -yq htop and take a look at the 48 CPUs it's reporting. 48! Look forward to another post when we'll talk about why we're seeing 48 CPUs in this container.
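
For reference, the whole sequence inside the container is:

    apt-get update              # freshen the package indexes
    apt-get install -yq htop    # install htop
    htop                        # note the CPU count it reports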

You probably noticed the container took a bit longer to start up than it does on your laptop. The extra few seconds it took to start that container are the price paid for what makes our container infrastructure more secure, more convenient, and faster. I'll say more about these benefits and performance gains below and in future posts, but wanted to acknowledge this trade-off up front.

You asked for a bigger boat?

[Image: Martin Brody laying down the law]

Going from Boot2Docker on a laptop to deploying in a data center is a bit mind-blowing. You're still using the Docker CLI to launch and manage containers, but you're no longer limited to the resources of your laptop, or a single VM or even a whole server. Consider:

  1. One Docker endpoint gives you access to a whole data center of computing power. Joyent's infrastructure management tools take care of the rest. Really, you don't need to manage VMs or Docker hosts because containers are the native unit of our cloud.
  2. Each container gets its own performance quota, and you can specify what resources to allocate to each container individually. You don't have to worry about your containers fighting each other for memory, CPU, or I/O.
  3. Each container is securely isolated from the others, so you don't have to worry about what your neighbor is up to. Joyent's container hypervisor has been proven with almost ten years in production use without incident.
  4. Each container gets a unique IP, so you don't have to manage port mapping or worry about collisions (see the example after this list).
  5. Each container is running at bare-metal speeds with Joyent's world-class I/O performance.
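
On that fourth point: because the IP is real and unique, you can read it straight from docker inspect. A quick sketch, assuming a container ID or name of mycontainer and that the endpoint populates the standard NetworkSettings fields:

    docker inspect --format '{{ .NetworkSettings.IPAddress }}' mycontainer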

Aside: if this interests you, but you want to run it on your own hardware, you can. The software that powers this infrastructure is open source; see the joyent/sdc-docker project on GitHub to find out how to install it with the Docker service preview.

All of the above might make you ask "if I don't put my containers inside a VM, where do they go, and what do I pay for?"

The containers run securely on bare metal. We charge by the container, not per VM or for other infrastructure, and we charge based on the resources reserved for that container.

You can specify what resources you want for each container using Docker's memory and CPU arguments in the CLI. For example, to get a 128MB container, try the following:

docker run -it -m 128m ubuntu bash

Inside there you can run free to show the total and available memory. It looks like this:

# free
             total       used       free     shared    buffers     cached
Mem:        131072       4152     126920          0          0          0
-/+ buffers/cache:       4152     126920
Swap:       524288       1112     523176

The -m 128m argument in the docker run command is what specifies the size of the container. We can run larger containers (up to 64GB of RAM) like this:

docker run -it -m 2g ubuntu bash
docker run -it -m 8g ubuntu bash
docker run -it -m 64g ubuntu bash

When specifying memory for containers, our infrastructure provisions that container with the smallest package that fits the largest aspect of the request. If you ask for 3GB of RAM you'll get a container that has 4GB of RAM. Here are the packages we're offering in our container service:

DRAM    vCPUs   Disk     Name
128MB   1/16    3GB      t4-standard-128M
256MB   1/8     6GB      t4-standard-256M
512MB   1/4     12GB     t4-standard-512M
1GB     1/2     25GB     t4-standard-1G
2GB     1       50GB     t4-standard-2G
4GB     2       100GB    t4-standard-4G
8GB     4       200GB    t4-standard-8G
16GB    8       400GB    t4-standard-16G
32GB    16      800GB    t4-standard-32G
64GB    32      1600GB   t4-standard-64G
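
To see that rounding in action, here's a hypothetical request for 3GB; per the table above, it should land in the t4-standard-4G package:

    docker run -it -m 3g ubuntu bash   # request 3GB; lands in t4-standard-4G
    # then, inside the container:
    free                               # total memory reflects the 4GB package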

Starting with this new container service, our containers are billed by the minute, so you're not charged a full hour for a container that runs just a few minutes. And you only pay for the containers you provision; you're not paying for VMs or bare metal hosts that you can't fully utilize.

Uncharted waters

This early access preview is a work in progress; as such, some pieces are as yet incomplete. Some aspects of the Docker API are unimplemented, or implemented with some differences. The docker info method, for example, will always show 'sdc' as the storage driver and an 'sdc-'-prefixed execution driver version, along with some other differences that reflect the details of our infrastructure.

Our Docker support implements all the API methods necessary to deploy Docker containers in the cloud, but it is notably missing the methods necessary to build images. For that, continue using Docker on your laptop for now (one workable pattern is sketched after the list below), though we definitely want to support those features in the future.

Here's the list of API methods unimplemented as of this writing; expect it to get shorter by the day:

docker build, docker commit, docker diff, docker events, docker export, docker import, docker load, docker login, docker logout, docker port, docker pause, docker push, docker rename, docker save, docker stats, docker tag, docker unpause
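
Until those build-related methods land, one workable pattern is to build and push from your local Docker daemon and then run the result against the SDC endpoint. A sketch, assuming a public Docker Hub image with the hypothetical name myuser/myapp:

    # On your laptop, with the Docker client pointed at the local daemon
    # (e.g. Boot2Docker), build and push the image:
    docker build -t myuser/myapp .
    docker push myuser/myapp

    # Then, with DOCKER_HOST and DOCKER_CERT_PATH exported as above,
    # pull and run it in the cloud:
    docker run -d myuser/myapp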

Finally, we don't support all the arguments to docker create and docker run. Many of these arguments just don't make sense in our environment. For example, --cap-add, --cap-drop, --privileged, and --security-opt all refer to security options that are significant when running in operating systems without strong security containment, but redundant on Joyent's infrastructure. Similarly, --lxc-conf is unimplemented because we're not using LXC.

There are some other differences, but this is all under heavy development. We're deploying fixes and implementing missing features daily.

Please let us know if you encounter a Docker feature that's critical to you. And though we're working hard to make our container service the most secure, convenient, and fastest place to run Docker containers, you can still run Docker on your favorite OS inside KVM instances on our public cloud if that's what you'd prefer.

Explore the deep

You should now have everything set up to launch and manage Docker containers in our container service v2 preview. However, if you've read all the way to here and you don't yet have access to the preview, sign up and let's get you in. Go launch some containers, and be sure to tell us about the experience in #joyent on irc.freenode.net or open a ticket if you find bugs.

Version history

  • March 9: Updated for Docker compatibility and language clarification.
  • March 25: Updated with Triton branding, see announcement blog posts from Bryan Cantrill and Casey Bisson.
  • April 6: Removed mention of CPU shares related to package sizes. CPU shares are not portable across compute nodes with differing performance, and the community is split on how to solve this problem (suggestions include representing the allocation as a decimal count of vCPUs per container, an integer count of vCPUs, or a count in units of 1/1000 or 1/1024 of a vCPU). We're participating in these discussions and will amend our API support as a standard emerges.