With the 1.4 release of Kubernetes, Google have made instantiating a cluster a whole lot easier. Using Kubeadm, you can bring up a cluster with a single command on each node. A further command will create a DaemonSet which brings up a Weave mesh network between all your nodes.

As always with complex systems such as Kubernetes, there are some potential pitfalls to be aware of. Firstly, the getting started guide notes that Docker v1.11.2 is recommended, but that v1.10.3 and v1.12.1 also work well (don’t go straight for the latest release like I tried to). If you wish to have your nodes talk over a private network, you’ll also need to declare this explicitly when you init the master node, otherwise Kubernetes will default to using your primary interface/route.
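On the master, that looks something like this (10.0.0.1 stands in for your private address, and note that later kubeadm releases renamed the flag to --apiserver-advertise-address):

    # Advertise the API server on the private interface rather than the default route
    kubeadm init --api-advertise-addresses=10.0.0.1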

With that done, you can see all the service containers up and running on the master node.
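Something along these lines will list them, assuming kubectl is configured on the master:

    # All of the control-plane pods live in the kube-system namespace
    kubectl get pods --all-namespaces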

It’s now time to join your slave nodes to the master.
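The join is roughly as follows, run on each slave; the token is printed in the output of kubeadm init, and 10.0.0.1 again stands in for the master’s private address:

    # Run on every slave node
    kubeadm join --token=<token> 10.0.0.1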

The master now knows about the cluster; however, the DNS container will be stuck in a creating state, because it needs pod networking in place before it can spawn successfully. Kubeadm makes this easy enough.
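At the time of writing, Weaveworks publish a manifest for exactly this purpose, so it’s a one-liner (the URL below is the 1.4-era one; check for a version that matches your cluster):

    # Create the Weave Net DaemonSet
    kubectl apply -f https://git.io/weave-kube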

This will create a DaemonSet provided by Weaveworks, which in turn pulls down all the necessary Docker containers to build the networking.
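You can watch the rollout with something like:

    # One weave-net pod should eventually appear per node
    kubectl get ds,pods -n kube-system -o wide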

You can now see that the DNS service container has started correctly and is able to serve requests; however, the two new weave-net containers on the slave nodes will fail to start. Inspecting the logs for an affected container points at the culprit.
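Something like this pulls them (the pod name below is a stand-in for whatever kubectl reports, and the container name comes from the Weave manifest):

    # Find the failing pod first, then fetch the logs from its weave container
    kubectl get pods -n kube-system -o wide
    kubectl logs -n kube-system weave-net-abcde -c weave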

This is due to the local kube-proxy service on that node using the wrong interface. Fortunately this is easy enough to fix; you’ll need to ensure ‘jq’ is installed first to manipulate the JSON.
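The fix goes along these lines, assuming the pre-1.6 DaemonSet’s name=kube-proxy label (see Mike’s update in the comments):

    # Append --cluster-cidr to kube-proxy's command, re-apply, then recycle the pods
    kubectl -n kube-system get ds -l 'name=kube-proxy' -o json \
      | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--cluster-cidr=10.32.0.0/12"]' \
      | kubectl apply -f - \
      && kubectl -n kube-system delete pods -l 'name=kube-proxy'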

Note that the above is now incorrect for Kubernetes 1.6; you’ll need the following instead (thanks Mike!):
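    # As above, but matching the renamed k8s-app=kube-proxy label
    kubectl -n kube-system get ds -l 'k8s-app=kube-proxy' -o json \
      | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--cluster-cidr=10.32.0.0/12"]' \
      | kubectl apply -f - \
      && kubectl -n kube-system delete pods -l 'k8s-app=kube-proxy'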

This will correct the --cluster-cidr flag and then delete the containers. Kubernetes applies a restart policy of ‘always’ to the service containers, so these will be respawned with the new configuration.
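A final check along these lines should show everything running, weave-net pods included:

    kubectl get pods --all-namespaces -o wide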

Success! One working cluster. It is worth bearing in mind that deploying via Kubeadm is fairly limited at the moment: it only creates a single etcd container, which won’t be resilient if that node is lost. The tool is under heavy development, however, and more functionality will be added in later releases.


3 thoughts on “Deploying Kubernetes 1.4 on Ubuntu Xenial with Kubeadm”

  • Pingback: Forcing Kubernetes to use a secondary interface – Dicking with Docker

  • Mike, 7th June 2017 at 21:32

    Thanks, this was right on the nose for us!

    One update: the name=kube-proxy label on the DaemonSet has changed. Here’s a version of the command to fix the kube-proxy service that worked for me with kubeadm 1.6 on Ubuntu 16.04:

    kubectl -n kube-system get ds -l 'k8s-app=kube-proxy' -o json | jq '.items[0].spec.template.spec.containers[0].command |= .+ ["--cluster-cidr=10.32.0.0/12"]' | kubectl apply -f - && kubectl -n kube-system delete pods -l 'k8s-app=kube-proxy'

    • 12th June 2017 at 15:05

      Thanks for that fix, Mike; I’ve updated the post to reflect it. Glad to hear someone found my ramblings useful!

