Kubernetes on Triton – the hard way

April 11, 2017 - by Joe Julian - Samsung CNCT

The following guest post is based on Kelsey Hightower’s well-regarded Kubernetes The Hard Way, published under the Apache 2.0 license on GitHub. This document, like Hightower’s original, is intended to instruct readers on the different components of Kubernetes, how they relate to each other, and how they relate to the host cloud infrastructure. In short, it’s one of the best ways to learn how to deploy Kubernetes by hand, prior to engaging in an automation strategy. If you'd rather just run Kubernetes on Triton the "Easy Way," check out Kubernetes on Triton – the easy way, by Fayaz Ghiasy.

Cloud infrastructure provisioning

Read on to learn how to deploy a Kubernetes cluster on Triton in hardware virtual machines. This tutorial walks through running an example H/A Kubernetes cluster on the Triton Cloud service.

Get started with Triton

To get started, you must sign up for a Triton account. Be sure to add your billing information and add an SSH key to your account. If you need instructions for how to generate an SSH key, read our documentation.

  1. Install Node.js and run npm install -g triton to install the Triton CLI.
  2. triton uses profiles to store access information. You'll need to set up profiles for the relevant data centers.
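
If you don't have a profile yet, the triton CLI can create one interactively. It will prompt for the data center URL, your account name, and the SSH key to use (the exact prompts may vary by CLI version):

triton profile create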

For this tutorial we will be using the us-sw-1 data center. Get into that environment by running:

eval $(triton env us-sw-1)

If the profile associated with us-sw-1 is not named as such, replace that with the name of your profile: eval $(triton env <profile name>).
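
To double-check where your current profile points, you can inspect it (a quick sanity check; the output format may vary by CLI version):

triton profile get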

Network

Create a private fabric network for your Kubernetes installation.

From the networks screen, select Create Fabric Network.

Create a network with these attributes:

  • Name: kubernetes
  • Data Center: us-sw-1
  • Subnet: 10.240.0.0/24
  • Gateway: 10.240.0.1
  • IP Range: 10.240.0.2 to 10.240.0.254
  • VLAN: Leave at default
  • DNS Resolvers: 8.8.8.8 8.8.4.4
  • Provision a NAT zone on the gateway address: Checked
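
Once created, the new network should appear alongside the built-in networks when listed from the CLI:

triton networks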

Provision virtual machines

Provision the compute instances required for running an H/A Kubernetes cluster. All of the VMs in this lab will be provisioned using Ubuntu 16.04, mainly because it ships a recent Linux kernel with good support for Docker. A total of 7 virtual machines will be created.

Virtual Machines

Bastion:

triton instance create \
 --wait \
 --name=kubernetes \
 -N Joyent-SDC-Public,kubernetes \
 -m user-data="hostname=kubernetes" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

Controllers:

triton instance create \
 --wait \
 --name=controller0 \
 -N kubernetes \
 -m user-data="hostname=controller0" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

triton instance create \
 --wait \
 --name=controller1 \
 -N kubernetes \
 -m user-data="hostname=controller1" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

triton instance create \
 --wait \
 --name=controller2 \
 -N kubernetes \
 -m user-data="hostname=controller2" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

Worker Agents:

triton instance create \
 --wait \
 --name=worker0 \
 -N kubernetes \
 -m user-data="hostname=worker0" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

triton instance create \
 --wait \
 --name=worker1 \
 -N kubernetes \
 -m user-data="hostname=worker1" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

triton instance create \
 --wait \
 --name=worker2 \
 -N kubernetes \
 -m user-data="hostname=worker2" \
 ubuntu-certified-16.04 k4-highcpu-kvm-1.75G

Set up some environment variables and a helper function for this tutorial:

BASTION=""
# Define a bash function to keep the typing to a minimum
proxyssh() {
  BASTION=${BASTION:=$(triton ip kubernetes)}
  ssh -o proxycommand="ssh root@$BASTION -W %h:%p" $SSH_PROXY $@
}

Ubuntu normally prevents you from logging in as root. For the sake of brevity in this document, override this behavior.

Allow root login to certified image:

BASTION=${BASTION:=$(triton ip kubernetes)}
ssh ubuntu@$BASTION sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/
for d in controller{0,1,2} worker{0,1,2}; do
    IP=$(triton ip $d)
    proxyssh ubuntu@$IP sudo cp /home/ubuntu/.ssh/authorized_keys /root/.ssh/
done

Note: This is not something you should normally do. This is to simplify this document only.
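
To confirm that root access through the bastion works before moving on, run a quick check using the proxyssh helper defined above:

# Each command should print the instance's hostname
for d in controller{0,1,2} worker{0,1,2}; do
  proxyssh root@$(triton ip $d) hostname
done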

You should now have the following compute instances.

triton instances
SHORTID   NAME         IMG                              STATE    FLAGS  AGE
e2a3b8a6  kubernetes   ubuntu-certified-16.04@20161221  running  K      1d
b080b2c4  controller0  ubuntu-certified-16.04@20161221  running  K      1d
846d3bb7  controller1  ubuntu-certified-16.04@20161221  running  K      1d
e3226afb  controller2  ubuntu-certified-16.04@20161221  running  K      1d
753c5f0c  worker0      ubuntu-certified-16.04@20161221  running  K      1d
7bb2a272  worker1      ubuntu-certified-16.04@20161221  running  K      1d
c34597cf  worker2      ubuntu-certified-16.04@20161221  running  K      1d

Setting up a Certificate Authority and TLS Cert Generation

Set up the necessary PKI infrastructure to secure the Kubernetes components. This leverages CloudFlare’s PKI toolkit, cfssl, to bootstrap a Certificate Authority and generate TLS certificates.

You will generate a single set of TLS certificates that can be used to secure the following Kubernetes components:

  • etcd
  • Kubernetes API Server
  • Kubernetes Kubelet

Note: In production, it is best practice to generate individual TLS certificates for each component.

After completing this section you should have the following TLS keys and certificates:

ca-key.pem
ca.pem
kubernetes-key.pem
kubernetes.pem

Install CFSSL

This lab requires the cfssl and cfssljson binaries. Download them from the cfssl repository.

OS X

wget -O /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_darwin-amd64
wget -O /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_darwin-amd64
chmod +x /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssljson

Linux

wget -O /usr/local/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /usr/local/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssljson

Setting up a Certificate Authority

Create the CA configuration file

echo '{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}' > ca-config.json

Generate the CA certificate and private key

Create the CA CSR

echo '{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Oregon"
    }
  ]
}' > ca-csr.json

Generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

Results

ca-key.pem
ca.csr
ca.pem

Verification

openssl x509 -in ca.pem -text -noout

Generate the single Kubernetes TLS Cert

Generate a TLS certificate that will be valid for all Kubernetes components. This is being done for ease of use. In production you should generate individual TLS certificates for each component type. All replicas of a given component type must share the same certificate.

Set the Kubernetes "Public" Address

IPS=$(triton instance ls -o ips | grep -v IPS | tr '\n' ',' | tr -d '[]')
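
Before embedding the result in the CSR, check that it's a single line of quoted, comma-separated addresses with a trailing comma, so the heredoc below produces valid JSON:

echo $IPS
# e.g. "10.240.0.10","10.240.0.11",... (one quoted entry per instance address, trailing comma included)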

Create the kubernetes-csr.json file:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "kubernetes",
    "10.32.0.1",
    ${IPS}
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Portland",
      "O": "Kubernetes",
      "OU": "Cluster",
      "ST": "Oregon"
    }
  ]
}
EOF

Generate the Kubernetes certificate and private key

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes

Results

kubernetes-key.pem
kubernetes.csr
kubernetes.pem

Verification

openssl x509 -in kubernetes.pem -text -noout

Copy TLS Certs

Copy the TLS certificates and keys to each Kubernetes host:

# Collect a list of all the ips
IPS=$(triton instance ls -o ips | grep -v IPS | tr '\n' ',' | tr -d '[]')
# Use this machine to access the hosts that are all on a private network
BASTION=$(triton ip kubernetes)
# Copy the certificates to all the hosts
tar cf - ca.pem kubernetes-key.pem kubernetes.pem |
  ssh root@$BASTION tar xf -
for ip in $(echo $IPS | tr ',"' ' '); do
  tar cf - ca.pem kubernetes-key.pem kubernetes.pem |
  ssh -o StrictHostKeyChecking=no \
      -o ProxyCommand="ssh root@$BASTION -W %h:%p" \
      root@$ip tar xf -
done
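
Spot-check one of the hosts to verify the copy succeeded:

proxyssh root@$(triton ip controller0) ls -l ca.pem kubernetes-key.pem kubernetes.pem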

Bootstrapping an H/A etcd cluster

Bootstrap a three-node etcd cluster. The following virtual machines will be used:

  • controller0
  • controller1
  • controller2

Why

All Kubernetes components are stateless, which greatly simplifies managing a Kubernetes cluster. All state is stored in etcd, which is a database and must be treated specially. To limit the number of compute resources needed to complete this lab, etcd is installed on the Kubernetes controller nodes. In production environments etcd should be run on a dedicated set of machines for the following reasons:

  • The etcd lifecycle is not tied to Kubernetes. We should be able to upgrade etcd independently of Kubernetes.
  • Scaling out etcd is different from scaling out the Kubernetes Control Plane.
  • Prevent other applications from taking up resources (CPU, Memory, I/O) required by etcd.

Provision the etcd Cluster

Run the following commands for each of controller0, controller1, and controller2:

TLS Certificates

The TLS certificates created in the Setting up a CA and TLS Cert Generation section will be used to secure communication between the Kubernetes API server and the etcd cluster. The TLS certificates will also be used to limit access to the etcd cluster using TLS client authentication. Only clients with a TLS certificate signed by a trusted CA will be able to access the etcd cluster.

Download and Install the etcd binaries

Download the official etcd release binaries from coreos/etcd GitHub project:

# Set NODENAME for each controller node in turn as you repeat this section
NODENAME=controller0
# Get the address of that controller
IP=$(triton ip ${NODENAME})
# Download the etcd binaries
proxyssh root@$IP wget --no-verbose https://github.com/coreos/etcd/releases/download/v3.0.10/etcd-v3.0.10-linux-amd64.tar.gz

Extract and install the etcd server binary and the etcdctl command line client:

proxyssh root@$IP <<< '
tar -xvf etcd-v3.0.10-linux-amd64.tar.gz
mv etcd-v3.0.10-linux-amd64/etcd* /usr/bin/
'

All etcd data is stored under the etcd data directory.

Note: As a best practice, in production the etcd data directory should be backed by a persistent disk.

Copy the TLS certificates to the etcd configuration directory

proxyssh root@$IP <<< '
mkdir -p /var/lib/etcd /etc/etcd/
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
'

The internal IP address will be used by etcd to serve client requests and communicate with other etcd peers.

Note: Each etcd member must have a unique name within an etcd cluster.

Set the controller addresses:

# Retrieve all the names and addresses from triton,
# filter for just the controllers, then build URLs
# for each of them.
INITIAL_CLUSTER=$(triton instance ls -o name,ips |
  awk '/controller/{gsub(/[^0-9.]/,"",$2);print $1 "=https://" $2":2380"}' |
  tr '\n' ',')
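
Verify the result; it should be a single comma-separated line of name=URL pairs, one per controller (the addresses below are illustrative):

echo $INITIAL_CLUSTER
# controller0=https://10.240.0.10:2380,controller1=https://10.240.0.11:2380,controller2=https://10.240.0.12:2380,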

The etcd server will be started and managed by systemd.

Create the etcd systemd service file:

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/etcd.service <<< "[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/bin/etcd --name $NODENAME \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --initial-advertise-peer-urls https://$IP:2380 \
  --listen-peer-urls https://$IP:2380 \
  --listen-client-urls https://$IP:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://$IP:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster $INITIAL_CLUSTER \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
"

Enable and start etcd:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
"

Verification:

proxyssh root@$IP systemctl status etcd --no-pager

Note: Remember to repeat these steps for controller1, and controller2.

Verification

Once all 3 etcd nodes have been bootstrapped, verify that the etcd cluster is healthy. On one of the controller nodes, run:

Request etcd cluster health:

proxyssh root@$IP etcdctl --ca-file=/etc/etcd/ca.pem cluster-health
member 3a57933972cb5131 is healthy: got healthy result from https://10.240.0.12:2379
member f98dc20bce6225a0 is healthy: got healthy result from https://10.240.0.10:2379
member ffed16798470cab5 is healthy: got healthy result from https://10.240.0.11:2379
cluster is healthy
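
You can also list the cluster members using the same etcdctl flags:

proxyssh root@$IP etcdctl --ca-file=/etc/etcd/ca.pem member list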

Bootstrapping an H/A Kubernetes Control Plane

Bootstrap a three-node Kubernetes controller cluster. The following virtual machines will be used:

  • controller0
  • controller1
  • controller2

Why

The control plane is made up of the following Kubernetes components:

  • Kubernetes API Server
  • Kubernetes Scheduler
  • Kubernetes Controller Manager

Each component is being run on the same machines for the following reasons:

  • The Scheduler and Controller Manager are tightly coupled with the API Server
  • Only one Scheduler and one Controller Manager can be active at a given time, but it’s fine to run multiple instances of each; the components elect a leader via the API Server.
  • Running multiple copies of each component is required for H/A
  • Running each component next to the API Server eases configuration.

Provision the Kubernetes Controller Cluster

Run the following commands for each of controller0, controller1, and controller2.

Set some variables for this section:

NODENAME=controller0
IP=$(triton ip $NODENAME)

Download and install the Kubernetes controller binaries:

proxyssh root@$IP <<< "
wget --no-verbose -P /usr/local/bin/ https://storage.googleapis.com/kubernetes-release/release/v1.5.6/bin/linux/amd64/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl}
chmod +x /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl}
"

TLS Certificates

The TLS certificates created in the Setting up a CA and TLS Cert Generation section will be used to secure communication between the Kubernetes API server and Kubernetes clients such as kubectl and the kubelet agent. The TLS certificates will also be used to authenticate the Kubernetes API server to etcd via TLS client auth.

Copy the TLS certificates to the Kubernetes configuration directory

proxyssh root@$IP <<< "
mkdir -p /var/lib/kubernetes
cp ca.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
"

Set up Authentication and Authorization

Authentication

Token based authentication will be used to limit access to the Kubernetes API. The authentication token is used by the following components:

  • kubelet (client)
  • Kubernetes API Server (server)

The other components, mainly the scheduler and controller manager, access the Kubernetes API server locally over the insecure API port which does not require authentication. The insecure port is only enabled for local access.

Create a common token to be used throughout the rest of this tutorial:

COMMON_TOKEN=${COMMON_TOKEN:=$(head /dev/urandom | base32 | head -c 8)}
echo "Your common token is: ${COMMON_TOKEN}"

Create the token file:

proxyssh root@$IP cp /dev/stdin /var/lib/kubernetes/token.csv <<< \
"${COMMON_TOKEN},admin,admin
${COMMON_TOKEN},scheduler,scheduler
${COMMON_TOKEN},kubelet,kubelet
"

Authorization

Attribute-Based Access Control (ABAC) will be used to authorize access to the Kubernetes API for this tutorial. The Kubernetes policy file backend is documented in the Kubernetes authorization guide.

Create the authorization policy file:

proxyssh root@$IP cp /dev/stdin /var/lib/kubernetes/authorization-policy.jsonl <<< \
'{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "*", "apiGroup": "*"}}
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:serviceaccounts", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
'

Kubernetes API Server

Create the systemd service file:

ETCD_CLIENT_ACCESS=$(triton instance ls -o name,ips | awk '/controller/{gsub(/[^0-9.]/,"",$2);print "https://" $2":2379"}' | tr '\n' ',')

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/kube-apiserver.service <<< \
"[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota \
  --advertise-address=${IP} \
  --allow-privileged=true \
  --apiserver-count=3 \
  --authorization-mode=ABAC \
  --authorization-policy-file=/var/lib/kubernetes/authorization-policy.jsonl \
  --bind-address=0.0.0.0 \
  --enable-swagger-ui=true \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --insecure-bind-address=0.0.0.0 \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --etcd-servers=${ETCD_CLIENT_ACCESS} \
  --service-account-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --token-auth-file=/var/lib/kubernetes/token.csv \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
"

Enable and start api server:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
"

Verification:

proxyssh root@$IP systemctl status kube-apiserver --no-pager
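
As an extra check, the API server's health endpoint should answer on the insecure local port described earlier:

proxyssh root@$IP curl -s http://127.0.0.1:8080/healthz
# should print: ok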

Kubernetes Controller Manager

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/kube-controller-manager.service <<< \
"[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.200.0.0/16 \
  --cluster-name=kubernetes \
  --leader-elect=true \
  --master=http://$IP:8080 \
  --root-ca-file=/var/lib/kubernetes/ca.pem \
  --service-account-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-cluster-ip-range=10.32.0.0/16 \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
"

Enable and start controller manager:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
"

Verification:

proxyssh root@$IP systemctl status kube-controller-manager --no-pager

Kubernetes Scheduler

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/kube-scheduler.service <<< \
"[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --leader-elect=true \
  --master=http://$IP:8080 \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
"

Enable and start scheduler:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
"

Verification:

proxyssh root@$IP systemctl status kube-scheduler --no-pager

Note: Remember to run these steps for controller0, controller1, and controller2.

Verification

proxyssh root@$IP kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}

Set up a Kubernetes API server frontend load balancer

Setting up a load balancer is out of the scope of this tutorial. A frontend load balancer, like haproxy, will need to be set up on the bastion in order to access any of your APIs or services externally. If you don’t want to set up haproxy yet, simply complete all the remaining tasks from the bastion.
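
If you do set up haproxy later, a minimal TCP pass-through configuration on the bastion might look like the following sketch. It assumes haproxy is installed on the bastion; the controller addresses are illustrative, so substitute the fabric IPs of your controllers.

# /etc/haproxy/haproxy.cfg -- minimal sketch, not production-ready
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend kubernetes-api
    bind *:6443
    default_backend controllers

backend controllers
    balance roundrobin
    server controller0 10.240.0.10:6443 check
    server controller1 10.240.0.11:6443 check
    server controller2 10.240.0.12:6443 check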

Kubernetes workers

Bootstrap 3 Kubernetes worker nodes. The following virtual machines will be used:

  • worker0
  • worker1
  • worker2

Why

Kubernetes worker nodes are responsible for running your containers. All Kubernetes clusters need one or more worker nodes. We are running the worker nodes on dedicated machines for the following reasons:

  • Ease of deployment and configuration
  • Avoid mixing arbitrary workloads with critical cluster components. We are building machines with just enough resources, so we don’t have to worry about wasting resources.

Some people would like to run workers and cluster services anywhere in the cluster. This is totally possible, and you’ll have to decide what’s best for your environment.

Run the following commands for each of worker0, worker1, and worker2.

Set tutorial section variables:

NODENAME=worker0
IP=$(triton ip $NODENAME)

Move the TLS certificates into place

proxyssh root@$IP <<< "
mkdir -p /var/lib/kubernetes
cp ca.pem kubernetes-key.pem kubernetes.pem /var/lib/kubernetes/
"

Docker

Kubernetes should be compatible with Docker 1.9.x through 1.12.x.

Download and install docker:

proxyssh root@$IP <<< "
wget --no-verbose https://get.docker.com/builds/Linux/x86_64/docker-1.12.1.tgz
tar -xvf docker-1.12.1.tgz
cp docker/docker* /usr/bin/
"

Create the Docker systemd service file:

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/docker.service <<< \
"[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
ExecStart=/usr/bin/docker daemon \
  --iptables=false \
  --ip-masq=false \
  --host=unix:///var/run/docker.sock \
  --log-level=error \
  --storage-driver=overlay
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target"

Enable and start Docker:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable docker
systemctl start docker
"

Verification (it may take a moment for Docker to start listening after it is started):

proxyssh root@$IP docker version

kubelet

Download and install the CNI plugins on the worker:

proxyssh root@$IP <<< "
mkdir -p /opt/cni
wget --no-verbose https://storage.googleapis.com/kubernetes-release/network-plugins/cni-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz
tar -xvf cni-07a8a28637e97b22eb8dfe710eeae1344f69d16e.tar.gz -C /opt/cni
"

Download and install the Kubernetes worker binaries:

proxyssh root@$IP <<< "
wget --no-verbose -P /usr/bin https://storage.googleapis.com/kubernetes-release/release/v1.5.6/bin/linux/amd64/{kubectl,kube-proxy,kubelet}
chmod +x /usr/bin/{kubectl,kube-proxy,kubelet}
"

Set COMMON_TOKEN to the value you saved in Authentication.

Configure kubelet:

CONTROLLER0=$(triton ip controller0 | xargs printf 'https://%s:6443')
API_SERVERS=$(triton instance ls -o name,ips | awk '/controller/{gsub(/[^0-9.]/,"",$2);print "https://" $2":6443"}' | tr '\n' ',')
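
Check both values before writing the kubeconfig (the addresses below are illustrative):

echo $CONTROLLER0
# https://10.240.0.10:6443
echo $API_SERVERS
# https://10.240.0.10:6443,https://10.240.0.11:6443,https://10.240.0.12:6443,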

proxyssh root@$IP mkdir -p /var/lib/kubelet/
proxyssh root@$IP cp /dev/stdin /var/lib/kubelet/kubeconfig <<< \
"apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.pem
    server: ${CONTROLLER0}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet
  name: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    token: $COMMON_TOKEN"

Create the kubelet systemd service file:

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/kubelet.service <<< \
"[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
  --allow-privileged=true \
  --api-servers=${API_SERVERS} \
  --cluster-dns=10.32.0.10 \
  --cluster-domain=cluster.local \
  --container-runtime=docker \
  --network-plugin=kubenet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --serialize-image-pulls=false \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target"

Enable and start kubelet:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
"

Verification:

proxyssh root@$IP systemctl status kubelet --no-pager

kube-proxy

Create the kube-proxy service file:

proxyssh root@$IP cp /dev/stdin /etc/systemd/system/kube-proxy.service <<< \
"[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/bin/kube-proxy \
  --masquerade-all \
  --master=${CONTROLLER0} \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --proxy-mode=iptables \
  --v=2

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target"

Enable and start kube-proxy:

proxyssh root@$IP <<< "
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
"

Verification:

proxyssh root@$IP systemctl status kube-proxy --no-pager

Note: Remember to run these steps on all workers.

Configuring the Kubernetes Client - Remote Access

Download and install kubectl

We’re going to do all of this configuration on our bastion. Without a load balancer on a public IP address, there’s no way to access our cluster from outside.

Install kubectl:

ssh root@$BASTION <<< "
wget --no-verbose -P /usr/local/bin/ https://storage.googleapis.com/kubernetes-release/release/v1.5.6/bin/linux/amd64/kubectl
chmod +x /usr/local/bin/kubectl
"

Configure kubectl

If you had a load balancer, you would configure kubectl to use its address. Since you don’t, we’ll do all of this from the bastion and point kubectl at a controller.

KUBERNETES_PUBLIC_ADDRESS=$(triton ip controller0)

Recall the common token we set up earlier; the COMMON_TOKEN environment variable should still hold it.

If you were not working from the bastion and were instead going through a proxy server, you would need to copy your CA from Setting up a CA and TLS Cert Generation to the host running kubectl. Since we are using self-signed TLS certs, we need to trust the CA certificate in order to verify the remote API servers.

The CA certificate has been copied to the bastion in an earlier step.

Build up the kubeconfig entry

ssh root@$BASTION <<< "
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443

kubectl config set-credentials admin --token ${COMMON_TOKEN}

kubectl config set-context default-context \
  --cluster=kubernetes-the-hard-way \
  --user=admin

kubectl config use-context default-context
"

At this point you should be able to connect securely to the remote API server.

ssh root@$BASTION kubectl get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
ssh root@$BASTION kubectl get nodes
NAME                                   STATUS    AGE
fb011ce0-be49-4f5d-8d13-86153cdf42f7   Ready     7m
6bc8cb95-c927-4346-8ec8-e4e821dc4b22   Ready     5m
e8e8402e-3326-4d66-b3db-d62e588ac347   Ready     2m

Managing the Container Network Routes and Overlay Network

Now that each worker node is online we need to add an overlay network and routes to make sure that pods running on different machines can talk to each other.

Container Subnets

The IP addresses for each pod will be allocated from the podCIDR range assigned to each Kubernetes worker through the node registration process.

The podCIDR will be allocated from the cluster CIDR range as configured on the Kubernetes Controller Manager with the following flag:

--cluster-cidr=10.200.0.0/16

Based on the above configuration each node will receive a /24 subnet. For example:

10.200.0.0/24
10.200.1.0/24
10.200.2.0/24
...

Populate the routing table

Populate the routing table with the L3 routes over our overlay network.

Use kubectl (from the bastion) to print the InternalIP and podCIDR for each worker node.

kubectl get nodes \
  --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address} {.spec.podCIDR} {"\n"}{end}'

Output:

10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24

Create an overlay network

Triton fabric networks do not forward multicast traffic, so we need to create a unicast VXLAN overlay.

Note: Do this for each worker: worker0, worker1, and worker2.

IP=$(triton ip worker0)
# set the VXLAN_CIDR to 172.16.0.N, where N is the last octet of the IP
VXLAN_CIDR=172.16.0.${IP##*.}/24
# Get the output of the kubectl command shown above in "Populate the routing table"
GETNODES=$(ssh root@$BASTION "kubectl get nodes --output=jsonpath='{range .items[*]} {.status.addresses[?(@.type==\"InternalIP\")].address} {.spec.podCIDR}{\"\\n\"}{end}'")
# Get the vxlan destination addresses from the kubectl output (every other worker IP)
VXLAN_D=$(awk '!/'$IP'/{print $1}' <<< "$GETNODES")
# create the unicast vxlan endpoint FDB table entries
BRIDGE_COMMANDS=$(xargs printf "bridge fdb append to 00:00:00:00:00:00 dev vxlan0 dst %s\n" <<< $VXLAN_D)
echo $BRIDGE_COMMANDS

proxyssh root@$IP <<< "
ip link add vxlan0 type vxlan id 1 dstport 0
$BRIDGE_COMMANDS
ip addr add $VXLAN_CIDR dev vxlan0
ip link set up vxlan0
"

Create Routes

# create the L3 route commands to forward pod traffic over the vxlan
ROUTES=$(awk '!/'$IP'/{GW=$1;gsub("10.240","172.16",GW);print "ip route add "$2" via "GW}' <<< "$GETNODES")
echo $ROUTES
proxyssh root@$IP <<< "$ROUTES"

Note: Remember to create the overlay network and routes on all workers.

Deploying the Cluster DNS Add-on

Deploy the DNS add-on, which is required for every Kubernetes cluster. Without the DNS add-on the following things will not work:

  • DNS based service discovery
  • DNS lookups from containers running in pods

Cluster DNS Add-on

Create the kubedns service:

ssh root@$BASTION kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/services/kubedns.yaml

Verification

ssh root@$BASTION kubectl --namespace=kube-system get svc
NAME       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.32.0.10   <none>        53/UDP,53/TCP   5s

Create the kubedns deployment:

ssh root@$BASTION kubectl create -f https://raw.githubusercontent.com/kelseyhightower/kubernetes-the-hard-way/master/deployments/kubedns.yaml

Verification:

ssh root@$BASTION kubectl --namespace=kube-system get pods
NAME                           READY     STATUS    RESTARTS   AGE
kube-dns-v19-965658604-c8g5d   3/3       Running   0          49s
kube-dns-v19-965658604-zwl3g   3/3       Running   0          49s

Smoke Test

Perform a quick smoke test to demonstrate that this cluster is working:

ssh root@$BASTION kubectl run nginx --image=nginx --port=80 --replicas=3
deployment "nginx" created
ssh root@$BASTION kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-2032906785-ms8hw   1/1       Running   0          21s       10.200.2.2   worker2
nginx-2032906785-sokxz   1/1       Running   0          21s       10.200.1.2   worker1
nginx-2032906785-u8rzc   1/1       Running   0          21s       10.200.0.2   worker0
ssh root@$BASTION kubectl expose deployment nginx --type NodePort
service "nginx" exposed

Grab a worker IP address and the NodePort that was set up for the nginx service.

WORKER=$(triton ip worker0)
NODE_PORT=$(ssh root@$BASTION kubectl get svc nginx "--output=jsonpath='{range .spec.ports[0]}{.nodePort}'")

Test the nginx service using cURL from the bastion host.

ssh root@$BASTION curl http://${WORKER}:${NODE_PORT}
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Congratulations. You now have a working cluster built the hard way on Triton. If you'd like to compare this to what it takes to get Kubernetes running on Triton the "Easy Way," check out Kubernetes on Triton – the easy way, by Fayaz Ghiasy.