Kubernetes
Postmortem: Management Cluster nodes flapping
A postmortem on sporadically flapping nodes in my ClusterAPI management cluster. ...
Update: Using BGP to integrate Cilium with OPNsense
A little while back, I wrote a short piece on integrating Cilium with OPNsense using BGP. With more recent releases of Cilium, the team have introduced the Cilium BGP Control Plane (currently a beta feature). This reworking of the BGP integration replaces the old MetalLB-based control plane, so the older feature must first be disabled. To enable the new feature, you can either pass an argument to Cilium: --enable-bgp-control-plane=true Or, if you use Helm to install Cilium, the following values are required: ...
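For reference, a rough sketch of the Helm route, assuming a release named cilium in the kube-system namespace; the exact value names (bgp.enabled for the old MetalLB-based integration, bgpControlPlane.enabled for the new one) should be checked against the chart for your Cilium version:

```
# Disable the old MetalLB-based BGP integration and enable the
# new BGP Control Plane (value names assumed; verify for your chart version)
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set bgp.enabled=false \
  --set bgpControlPlane.enabled=true
```

The Cilium agent pods may need restarting before the change takes effect.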
Troubleshooting Network Traffic with CRI-O and Kubernetes
Running immutable infra is the holy grail for many people, but there are times when you’ll need to get down in the weeds to troubleshoot issues. Let’s imagine a scenario: you need to verify that a pod is receiving traffic, but the image is built FROM scratch. As scratch containers are as minimal as possible, there’s no shell in the image, so there’s no way to exec into it and do anything remotely useful. ...
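One common workaround (a sketch of the general technique, with hypothetical names): look up the container’s host PID through the CRI-O runtime, then use nsenter to run the host’s own tcpdump inside just the pod’s network namespace:

```
# Find the container ID for the scratch-based pod ('my-app' is a placeholder)
CID=$(crictl ps --name my-app -q | head -n 1)

# CRI-O records the container's host PID in the inspect output
PID=$(crictl inspect "$CID" | jq -r '.info.pid')

# Join only the network namespace and capture traffic with the host's tools
nsenter -t "$PID" -n tcpdump -i eth0 -nn port 8080
```

Because nsenter joins only the network namespace, the binaries all come from the host, so the container image can stay completely empty.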
Over-engineering my website with Kubernetes
A solution in need of a problem Like any good sysadmin, I’ve left my personal website as a ‘coming soon’ splash page for quite some time. According to the Wayback Machine, it’s been this way since some time in 2014. As I’m sure many can sympathise with, there are always far more interesting and shiny things to be experimenting with than building a website. One of the interesting things I like to experiment with is Kubernetes (as should be apparent from the tag cloud). Up until now, this has mostly consisted of building clusters, tweaking them and then tearing them down again. Whilst this gives me experience from the Operations side, I’m not getting the end-to-end experience a consumer of my cluster would have. ...
Deploying Kubernetes on VMs with Kubespray
All the choices So you’re looking to start using Kubernetes, but you’re overwhelmed by the multitude of deployment options available? Judging by the length of the Picking the Right Solution section of the Kubernetes docs, it’s safe to assume that you’re not alone. Even after you’ve made it past the provisioning stage, you then need to learn how to administer what is a very complex system. In short: Kubernetes is not easy. ...
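As a taste of what the post covers, the core Kubespray workflow is roughly the following (sketched against the current repository layout; older releases shipped a hosts.ini inventory instead):

```
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt

# Copy the sample inventory and point it at your VMs
cp -r inventory/sample inventory/mycluster
# ...edit inventory/mycluster/hosts.yaml with your node IPs and roles...

# Run the main playbook against every node, escalating privileges
ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml
```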
Forcing Kubernetes to use a secondary interface
Following on from my previous post, I discovered rather to my dismay that although I had my nodes initially communicating over the secondary interface, the weave services (and thus my inter-pod traffic) were all going over the public interface. As these are VPSes, they have a public IP on eth0 and a VLAN IP on eth1, so it makes sense for all inter-pod traffic to stay internal. If we check the logs for one of the weave-net containers, we can see that all comms are going via the 1.1.1.x IPs (for the purposes of this post, these are the public IPs of each VPS): ...
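Pulling those logs looks something like this (the pod name below is a placeholder; as weave-net runs as a DaemonSet, there is one pod per node):

```
# List the weave-net pods and the node each one landed on
kubectl get pods -n kube-system -l name=weave-net -o wide

# Inspect peer connections in one pod's 'weave' container
kubectl logs -n kube-system weave-net-abc12 -c weave | grep -i connection
```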
Deploying Kubernetes 1.4 on Ubuntu Xenial with Kubeadm
With the 1.4 release of Kubernetes, Google have made instantiating a cluster a whole lot easier. Using kubeadm, you can bring up a cluster with a single command on each node. A further command will create a DaemonSet which brings up a Weave mesh network between all your nodes. As always with complex systems such as Kubernetes, there are some potential pitfalls to be aware of. Firstly, the getting started guide notes that v1.11.2 of Docker is recommended, but v1.10.3 and v1.12.1 also work well (don’t go straight for the latest release like I tried to). If you wish to have your nodes talk over a private network, you’ll also need to declare this explicitly when you init the master node; otherwise Kubernetes will default to using your primary interface/route: ...
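A sketch of that init step for the 1.4-era kubeadm, assuming 10.0.0.2 is the node’s private address (later releases renamed this flag to --apiserver-advertise-address):

```
# Advertise the API server on the private interface's address
# instead of whatever the default route points at
kubeadm init --api-advertise-addresses=10.0.0.2
```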