Following on from my previous post, I discovered rather to my dismay that although I had my nodes initially communicating over the secondary interface, the weave services (and thus my inter-pod traffic) were all going over the public interface.

As these are VPSes, they have a public IP on eth0 and a VLAN IP on eth1, so it makes sense for all inter-pod traffic to stay internal. Checking the logs for one of the weave-net containers, though, shows that all comms are going via the 1.1.1.x IPs (for the purposes of this post, these are the public IPs of each VPS).
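To pull those logs yourself, something along these lines works (the pod name is a placeholder – weave-net runs as a DaemonSet in the kube-system namespace):

    # list the weave-net pods, then tail the weave container's logs from one of them
    kubectl get pods -n kube-system -l name=weave-net -o wide
    kubectl logs -n kube-system weave-net-x7k2p -c weave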

The reasons for this are not related to kubeadm; they're in fact rooted in the behaviour of the kubelet service – by default its name will be the same as the hostname, and its IP address will be taken from the interface that holds the default route. There are a couple of ways to work around this, depending on what method works for you – neither is perfect but both work well. Firstly, for systemd-based distros you can add a drop-in file to force the hostname the kubelet uses (this does obviously assume that you have a working A record, resolving to the VLAN address, for the hostname you want to use).
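A minimal sketch of such a drop-in (the filename and hostname here are illustrative, and the kubelet needs a daemon-reload and restart after you add it):

    # /etc/systemd/system/kubelet.service.d/20-hostname.conf
    # force the kubelet to register with a name that resolves to the VLAN address
    [Service]
    Environment="KUBELET_EXTRA_ARGS=--hostname-override=node1.vlan.example.com"

    # pick up the change
    systemctl daemon-reload
    systemctl restart kubelet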

The alternative to this is to place entries in the hosts file on each node, mapping each node's hostname to its VLAN address.
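For example (the hostnames and addresses below are placeholders for each node's name and its eth1 IP):

    # /etc/hosts on every node
    10.0.0.11  node1
    10.0.0.12  node2
    10.0.0.13  node3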

Run through the steps to bootstrap your cluster, then check the cluster is using the VLAN addresses:
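The INTERNAL-IP column of kubectl get nodes -o wide should now list the eth1 addresses rather than the public ones:

    kubectl get nodes -o wide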

And also check the logs from one of the weave-net containers, which should now show the peers connecting over the VLAN addresses.
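Alternatively, assuming the standard Weave Net DaemonSet layout, the weave script inside the container can report the peer connections directly (the pod name is again a placeholder):

    kubectl exec -n kube-system weave-net-x7k2p -c weave -- /home/weave/weave --local status connections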
