March 06, 2015 - by Casey Bisson
Docker and containers are transforming how we architect, package and deploy modern applications. So why then are so many of us still running Dockerized apps on the same cloud infrastructure we've had for ten years? Why are we running our apps on clipper ships and stuffing containers onto break bulk haulers? Why are we running Docker on VMs?
At Joyent, we think there's a better way. We think that next generation, cloud native apps should run on container native infrastructure. Being container native means breaking down compute resources so that containers are the atomic unit. It means eliminating the cost and complexity of managing fleets of VMs or hardware hosts for your containers. It means automating the infrastructure so that we can use the entire data center as a single container host.
Months ago we embarked on a project to bring cloud native apps to container native infrastructure by integrating rich Docker support into our existing offerings. We've done that and more. We think we've built the most secure and fastest Docker hosting solution, with the most convenient networking and host management tools, and now we'd like to invite you to try them out.
We're opening up the program to selected users now, and we're expanding the program quickly. You can add your name to our early-access list now and read along for an idea of how our new Triton Elastic Container Service for Docker works and how to use it.
You've probably already done some of this, but this is the list:
- The docker command, which you'll use in client mode to connect to the remote Docker host; run docker --version to check yours.
You don't need to install any other tools to launch and manage Docker containers in our new container service, but you'll be able to do a lot more if you do have these:
- The CloudAPI and Manta CLI tools; the bunyan utilities are recommended with the installation of both.
Joyent's infrastructure software presents the entire data center as a single Docker host, so you can use the Docker CLI tools you're familiar with on your laptop to launch fleets of containers in the cloud. You don't need new tools and you don't need to manage and connect to multiple hosts; you just docker run and you're done.
To get there, we do need to configure remote access for the Docker CLI, though. We've got a helper script for that; let's download it like so:
curl -O https://raw.githubusercontent.com/joyent/sdc-docker/master/tools/sdc-docker-setup.sh && chmod +x sdc-docker-setup.sh
That will download it to the current directory and set permissions to make it executable. Now let's run it. You'll need to substitute your Joyent account username and the path to your SSH private key:
./sdc-docker-setup.sh -k <CloudAPI endpoint> <ACCOUNT> ~/.ssh/<PRIVATE_KEY_FILE>
That will output something like the following:
Setting up for SDC Docker using:
    Cloud API:        https://<CloudAPI endpoint>
    Account:          jpcuser
    SSH private key:  /Users/localuser/.ssh/id_rsa

Verifying credentials.
Credentials are valid.
Generating client certificate from SSH private key.
writing RSA key
Wrote certificate files to /Users/localuser/.sdc/docker/jpcuser
Get Docker host endpoint from cloudapi.
Docker service endpoint is: tcp://<Docker API endpoint>

* * *

Successfully setup for SDC Docker. Set your environment as follows:

    export DOCKER_CERT_PATH=/Users/localuser/.sdc/docker/jpcuser
    export DOCKER_HOST=tcp://<Docker API endpoint>
    alias docker="docker --tls"

Then you should be able to run 'docker info' and you see your account name 'SDCAccount' in the output.
What that's done is generate a TLS certificate using your SSH key and write it to a directory in your user account. The TLS certificate is what's used by the Docker client to identify and authenticate your requests to the Docker API endpoint.
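As a rough sketch of that certificate generation, here's the general shape of deriving a TLS client certificate from an RSA private key. This uses a freshly generated stand-in key rather than your real SSH key, and the paths and subject name are hypothetical; the actual commands inside sdc-docker-setup.sh may differ:

```shell
# Generate a stand-in RSA key in PEM format (in real use, this is your
# existing SSH private key; -m PEM keeps it readable by openssl).
ssh-keygen -t rsa -b 2048 -m PEM -N "" -f ./demo_id_rsa -q

# Create a certificate signing request and a self-signed client cert
# from that key. The CN and output directory here are illustrative.
mkdir -p ./sdc-docker-demo
openssl req -new -key ./demo_id_rsa -subj "/CN=jpcuser" \
    -out ./sdc-docker-demo/client.csr
openssl x509 -req -days 365 -in ./sdc-docker-demo/client.csr \
    -signkey ./demo_id_rsa -out ./sdc-docker-demo/cert.pem
cp ./demo_id_rsa ./sdc-docker-demo/key.pem

# Inspect the resulting certificate's subject.
openssl x509 -in ./sdc-docker-demo/cert.pem -noout -subject
```

The Docker client then presents that cert.pem/key.pem pair over TLS, which is how the endpoint ties your requests back to your account.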
To complete the setup you'll need to follow the steps outlined in the script output to export some additional environment variables and tell Docker to use TLS encryption. The steps look like these, but note that DOCKER_CERT_PATH is specific to each user, so use what's shown in the script output rather than copying the example below.
export DOCKER_CERT_PATH=/Users/localuser/.sdc/docker/jpcuser
export DOCKER_HOST=tcp://<Docker API endpoint>
alias docker="docker --tls"
You may have to unset DOCKER_TLS_VERIFY if you get errors about missing CA files; we're working on that bug as this is being written.
With the Docker CLI configured, you should be able to interact with the API endpoint as simply and conveniently as when developing Docker containers on a laptop. Let's start with an easy one:

docker info

That should return the status of the API with some details like this:
Containers: 0
Images: 0
Storage Driver: sdc
 SDCAccount: jpcuser
Execution Driver: sdc-0.1.0
Operating System: SmartDataCenter
Name: us-east-3b
There's nothing sadder than zero containers, so let's start one.
docker run -it -e "TERM=xterm" ubuntu bash
That will start a basic Ubuntu container and drop us into a shell on it. Go ahead and do an apt-get update (to freshen up the repo indexes) and then install something. Do an apt-get install -yq htop and then take a look at the 48 CPUs it's reporting. 48! Look forward to another post where we'll talk about why we're seeing 48 CPUs in this container.
You probably noticed the container took a bit longer to start up than it does on your laptop. The extra few seconds it took to start that container are the price paid for what makes our container infrastructure more secure, more convenient, and faster. I'll say more about these benefits and performance gains below and in future posts, but wanted to acknowledge this trade-off up front.
Going from Boot2Docker on a laptop to deploying in a data center is a bit mind-blowing. You're still using the Docker CLI to launch and manage containers, but you're no longer limited to the resources of your laptop, or a single VM or even a whole server. Consider:
Aside: if this interests you, but you want to run it on your own hardware, you can. The software that powers this infrastructure is open source; see the joyent/sdc-docker repository on GitHub to find out how to install it with the Docker service preview.
All of the above might make you ask "if I don't put my containers inside a VM, where do they go, and what do I pay for?"
The containers run securely on bare metal. We charge by the container, not per VM or for other infrastructure, and we charge based on the resources reserved for that container.
You can specify what resources you want for each container using Docker's memory and CPU arguments in the CLI. For example, to get a 128MB container, try the following:
docker run -it -m 128m ubuntu bash
Inside there you can run free to show the total and available memory. It looks like this:
# free
             total       used       free     shared    buffers     cached
Mem:        131072       4152     126920          0          0          0
-/+ buffers/cache:       4152     126920
Swap:       524288       1112     523176
The -m 128m argument in the docker run command is what specified the size of the container. We can run a larger container (up to 64GB of RAM) like this:
docker run -it -m 2g ubuntu bash
docker run -it -m 8g ubuntu bash
docker run -it -m 64g ubuntu bash
When specifying memory for containers, our infrastructure provisions that container with the smallest package that fits the largest aspect of the request. If you ask for 3GB of RAM you'll get a container that has 4GB of RAM. Here are the packages we're offering in our container service:
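That "smallest package that fits" rounding can be sketched as a simple lookup. The power-of-two package sizes below are an assumption for illustration, not our actual package list:

```shell
# Hypothetical sketch of package selection: round a requested memory
# size (in GB) up to the smallest package that fits it.
smallest_package() {
    req=$1
    for size in 1 2 4 8 16 32 64; do
        if [ "$req" -le "$size" ]; then
            echo "$size"
            return 0
        fi
    done
    echo "no package large enough for ${req}GB" >&2
    return 1
}

smallest_package 3    # a 3GB request gets the 4GB package
```

An exact-fit request (say, 8GB) lands on that package directly; anything in between rounds up.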
Starting with this new container service, our containers are billed by the minute, so you're not charged a full hour for a container that runs just a few minutes. And you only pay for the containers you provision, you're not paying for VMs or bare metal hosts that you can't fully utilize.
This early access preview is a work in progress, and as such some pieces are as yet incomplete. Some aspects of the Docker API are unimplemented, or implemented with some differences. The docker info method, for example, will always show 'sdc' as the storage driver, and the execution driver shows 'sdc-' followed by a version number ('sdc-0.1.0' in the output above).
Our Docker support implements all the API methods necessary to deploy Docker containers in the cloud, but is notably missing methods necessary to build containers. For that, continue using Docker on your laptop for now, though we definitely want to support those features in the future.
Here's the list of API methods currently unimplemented as of this writing, but expect it to get shorter by the day:
Finally, we don't support all the arguments to docker create and docker run. Many of these arguments just don't make sense in our environment. Arguments like --security-opt, for example, refer to security options that are significant when running in operating systems without strong security containment, but redundant on Joyent's infrastructure. Similarly, --lxc-conf is unimplemented because we're not using LXC.
There are some other differences, but this is all under heavy development. We're deploying fixes and implementing missing features daily.
Please let us know if you encounter a Docker feature that's critical to you. And though we're working hard to make our container service the most secure, convenient, and fastest place to run Docker containers, you can still run Docker on your favorite OS inside KVM instances on our public cloud if that's what you'd prefer.
You should now have everything set up to launch and manage Docker containers in our container service v2 preview. However, if you've read all the way to here and you don't yet have access to the preview, sign up and let's get you in. Go launch some containers, and be sure to tell us about the experience in #joyent on irc.freenode.net or open a ticket if you find bugs.