This is the final post in a series of articles describing how to set up an RBAC- and TLS-enabled bare metal Kubernetes cluster running on Xen. To understand and follow the instructions below, you might want to start from the beginning of the series.
This set of instructions depends on the following:
The cluster we’ll create will be named solar and will have a master node, three etcd nodes, and two additional non-etcd nodes. It will use the .20-.29 address space in our configuration, and the individual Xen guest hostnames will be solar-xxx, as described below:
Master: solar-earth (192.168.100.20)
Etcd: solar-mercury (192.168.100.21)
Etcd: solar-venus (192.168.100.22)
Etcd: solar-mars (192.168.100.23)
Node: solar-jupiter (192.168.100.24)
Node: solar-saturn (192.168.100.25)
To make things easier in some of the commands, the following environment variable should be defined:
export KCLUSTER="earth mercury venus mars jupiter saturn"
The files needed by the scripts are as follows. Note that rules.v4 should be put in /etc/iptables/rules.v4 (don’t forget to back up the file that is already there and/or to merge this with it if needed!), and systemctl restart iptables-persistent should be run afterwards.
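A minimal sketch of those steps, run as root on the Xen host (the backup file name is just a suggestion):

cp /etc/iptables/rules.v4 /etc/iptables/rules.v4.bak  # back up the existing rules
cp rules.v4 /etc/iptables/rules.v4                    # install ours (or merge by hand instead)
systemctl restart iptables-persistent                 # reload the rules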
functions.bashrc and kgenerate are also available as syntax-highlighted blog pages if you want to take a closer look at what they do. You could also edit functions.bashrc and add the export KCLUSTER= line there, so the variable is set automatically every time you xencd to a cluster subdirectory.
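For example, appending this at the end of guests/solar/functions.bashrc (a sketch; the rest of the file is unchanged):

# set automatically whenever functions.bashrc is sourced via xencd
export KCLUSTER="earth mercury venus mars jupiter saturn"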
Here is a tarfile containing all of the files, woodensquares-k8s.tar.bz2, with the cluster files in a guests/solar/ subdirectory and kgenerate in a bin/ subdirectory. If you want to use this, you can simply follow the next steps.
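Extracting it verbosely (assuming it was downloaded to your Xen configuration directory) produces the listing below:

tar xvjf woodensquares-k8s.tar.bz2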
./
./rules.v4
./bin/
./bin/kgenerate
./guests/
./guests/solar/
./guests/solar/functions.bashrc
./guests/solar/yaml/
./guests/solar/yaml/hostnames.yaml
./guests/solar/yaml/kubernetes-dashboard.yaml
./guests/solar/yaml/dns.yaml
./guests/solar/yaml/busybox.yaml
./guests/solar/templates/
./guests/solar/templates/scheduler.template
./guests/solar/templates/kube-proxy.template
./guests/solar/templates/etcd.template
./guests/solar/templates/node-other.template
./guests/solar/templates/systemd.template
./guests/solar/templates/flanneld.template
./guests/solar/templates/profile.template
./guests/solar/templates/storage.template
./guests/solar/templates/storage-etcd.template
./guests/solar/templates/controller.template
./guests/solar/templates/storage-master.template
./guests/solar/templates/apiserver.template
./guests/solar/templates/kube-scheduler.template
./guests/solar/templates/kube-controller-manager.template
./guests/solar/templates/node-master.template
./guests/solar/templates/node-etcd.template
./guests/solar/templates/proxy.template
./guests/solar/templates/kubelet.template
./guests/solar/templates/flannel-setup.template
total 5748
drwxr-xr-x 2 root root    4096 Dec 12 17:06 .
drwxr-xr-x 7 root root    4096 Dec 12 16:41 ..
-rwxr-xr-x 1 root root 5871900 Sep 25 16:55 ct
-rwxr-xr-x 1 root root    1183 Dec 12 17:06 kgenerate
./functions.bashrc
./yaml/...
./templates/....
Let’s now actually create the cluster.
Expanding /storage/xen/images/coreos-1520.9.0.bin.bz2 to earth.img
Adding 6144 megabytes to earth.img
6144+0 records in
6144+0 records out
6442450944 bytes (6.4 GB, 6.0 GiB) copied, xx.xxxx s, yyy MB/s
Expanding /storage/xen/images/coreos-1520.9.0.bin.bz2 to mercury.img
Adding 6144 megabytes to mercury.img
...
Creating earth.cfg, will become 192.168.100.20 with mac 00:16:3e:4e:31:20
Creating mercury.cfg, will become 192.168.100.21 with mac 00:16:3e:4e:31:21
Creating venus.cfg, will become 192.168.100.22 with mac 00:16:3e:4e:31:22
Creating mars.cfg, will become 192.168.100.23 with mac 00:16:3e:4e:31:23
Creating jupiter.cfg, will become 192.168.100.24 with mac 00:16:3e:4e:31:24
Creating saturn.cfg, will become 192.168.100.25 with mac 00:16:3e:4e:31:25
The cluster was created with master: vcpu=2, mem=2048 nodes: vcpu=1 mem=1536
Generating the etcd ca
Generating the Kubernetes ca
Generating the etcd server certificate for earth
Generating the etcd peer certificate for earth
Generating the etcd client certificate for earth
Generating the Kubernetes apiserver server certificate for earth
Generating the Kubernetes kubelet server certificate for earth
Generating the Kubernetes kubelet client certificate for earth
Generating the Kubernetes controller manager client certificate for earth
Generating the Kubernetes scheduler client certificate for earth
Generating the Kubernetes proxy client certificate for earth
Generating the Kubernetes admin client certificate for earth
........
As an aside, note that we are running on CoreOS-provided images, and the Kubernetes distribution available there might be slightly behind the official distribution; this is especially noticeable in the hyperkube container used to set up the kubelet, kube-proxy, and so on.
For example, at the time of writing the current Kubernetes version is 1.8.5, but the latest available hyperkube is 1.8.4. When choosing the tag to use here, please make sure it actually exists by checking https://quay.io/repository/coreos/hyperkube?tag=latest&tab=tags
Created templates for a cluster with:
Master: solar-earth (192.168.100.20)
Etcd: solar-mercury (192.168.100.21)
Etcd: solar-venus (192.168.100.22)
Etcd: solar-mars (192.168.100.23)
Node: solar-jupiter (192.168.100.24)
Node: solar-saturn (192.168.100.25)
grub.cfg set to:set linux_append="coreos.config.url=http://192.168.100.1/solar/earth.json"
Removed /etc/machine-id for systemd units refresh
Transpiling earth.ct and adding it to nginx
Creating coreos/first_boot
grub.cfg set to:set linux_append="coreos.config.url=http://192.168.100.1/solar/mercury.json"
Removed /etc/machine-id for systemd units refresh
Transpiling mercury.ct and adding it to nginx
Creating coreos/first_boot
grub.cfg set to:set linux_append="coreos.config.url=http://192.168.100.1/solar/venus.json"
Removed /etc/machine-id for systemd units refresh
Transpiling venus.ct and adding it to nginx
Creating coreos/first_boot
grub.cfg set to:set linux_append="coreos.config.url=http://192.168.100.1/solar/mars.json"
Removed /etc/machine-id for systemd units refresh
Transpiling mars.ct and adding it to nginx
Creating coreos/first_boot
grub.cfg set to:set linux_append="coreos.config.url=http://192.168.100.1/solar/jupiter.json"
Removed /etc/machine-id for systemd units refresh
Transpiling jupiter.ct and adding it to nginx
Creating coreos/first_boot
grub.cfg set to:set linux_append="coreos.config.url=http://192.168.100.1/solar/saturn.json"
Removed /etc/machine-id for systemd units refresh
Transpiling saturn.ct and adding it to nginx
Creating coreos/first_boot
[1] 6492
[2] 6493
[3] 6496
[4] 6501
[5] 6502
[6] 6520
Parsing config from venus.cfg
Parsing config from mercury.cfg
Parsing config from earth.cfg
Parsing config from mars.cfg
Parsing config from jupiter.cfg
Parsing config from saturn.cfg
The first boot of the cluster will take a while, because Docker has to download images after the guests come up. Check progress on the console; you can also ssh in and run journalctl -f. There will be errors for a while as etcd starts up and everything settles.
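For example, to follow the logs on the master (core is the default CoreOS user):

ssh core@192.168.100.20
journalctl -f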
Dec 09 21:29:54 solar-earth kubelet-wrapper[761]: Downloading ACI: 24.6 MB/245 MB
Dec 09 21:29:55 solar-earth flannel-wrapper[902]: Downloading signature: 0 B/473 B
Dec 09 21:29:55 solar-earth flannel-wrapper[902]: Downloading signature: 473 B/473 B
Dec 09 21:29:55 solar-earth kubelet-wrapper[761]: Downloading ACI: 26 MB/245 MB
Dec 09 21:29:56 solar-earth flannel-wrapper[902]: Downloading ACI: 0 B/18 MB
Dec 09 21:29:56 solar-earth flannel-wrapper[902]: Downloading ACI: 8.19 KB/18 MB
Dec 09 21:29:56 solar-earth kubelet-wrapper[761]: Downloading ACI: 27.4 MB/245 MB
Dec 09 21:29:57 solar-earth flannel-wrapper[902]: Downloading ACI: 1.38 MB/18 MB
Dec 09 21:29:57 solar-earth kubelet-wrapper[761]: Downloading ACI: 28.5 MB/245 MB
...
After the download is done, you can also run docker ps to verify that the containers are starting:
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
58f9ec71708c  gcr.io/google_containers/hyperkube@sha256:a68d3ebeb5d8a8b4b9697a5370ea1b7ad921c403cd7429c7ac859193901c9ded  "/bin/bash -c '/contr"  15 seconds ago  Up 14 seconds  k8s_kube-controller-manager_kube-controller-manager-solar-earth_kube-system_c483efd67fc3ec53dba56e58eedf3d5c_0
345ce236abd1  gcr.io/google_containers/hyperkube@sha256:a68d3ebeb5d8a8b4b9697a5370ea1b7ad921c403cd7429c7ac859193901c9ded  "/bin/bash -c '/proxy"  18 seconds ago  Up 17 seconds  k8s_kube-proxy_kube-proxy-solar-earth_kube-system_279180e8dd21f1066568940ef66762e4_0
608df8c0fb68  gcr.io/google_containers/hyperkube@sha256:a68d3ebeb5d8a8b4b9697a5370ea1b7ad921c403cd7429c7ac859193901c9ded  "/bin/bash -c '/sched"  19 seconds ago  Up 19 seconds  k8s_kube-scheduler_kube-scheduler-solar-earth_kube-system_295842a1063467946b49b059a8b55c8b_0
0db9eaa574c8  gcr.io/google_containers/hyperkube@sha256:a68d3ebeb5d8a8b4b9697a5370ea1b7ad921c403cd7429c7ac859193901c9ded  "/bin/bash -c '/apise"  22 seconds ago  Up 22 seconds  k8s_kube-apiserver_kube-apiserver-solar-earth_kube-system_09b4691a8d49eb9038ca56ce4543554a_0
a48be2806005  gcr.io/google_containers/pause-amd64:3.0  "/pause"  51 seconds ago  Up 47 seconds  k8s_POD_kube-proxy-solar-earth_kube-system_279180e8dd21f1066568940ef66762e4_0
57edc7794024  gcr.io/google_containers/pause-amd64:3.0  "/pause"  51 seconds ago  Up 46 seconds  k8s_POD_kube-controller-manager-solar-earth_kube-system_c483efd67fc3ec53dba56e58eedf3d5c_0
b7585fc1c322  gcr.io/google_containers/pause-amd64:3.0  "/pause"  51 seconds ago  Up 47 seconds  k8s_POD_kube-scheduler-solar-earth_kube-system_295842a1063467946b49b059a8b55c8b_0
cd1ea45ef2ed  gcr.io/google_containers/pause-amd64:3.0  "/pause"  51 seconds ago  Up 48 seconds  k8s_POD_kube-apiserver-solar-earth_kube-system_09b4691a8d49eb9038ca56ce4543554a_0
You can also tail the Kubernetes logs in /var/log/kubernetes/*log after the containers are up, to make sure the apiserver in particular has settled; this might take a few minutes depending on the speed of your computer and disks.
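For example, on the master:

tail -f /var/log/kubernetes/*log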
The cluster is now available, so we can execute commands against it. If you don’t have kubectl available, now is the time to get it.
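One way to fetch a version matching the cluster (the URL follows the standard Kubernetes release download layout; adjust the version as needed):

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.4/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/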
We can now set the configuration for our cluster, with certificates and so on, via the kubeconfig bash function.
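Under the hood this amounts to a handful of kubectl config calls; a sketch of the equivalent commands (the certificate file names are assumptions, not the actual function body; the 8443 API port is the one used throughout this setup):

kubectl config set-cluster solar --server=https://192.168.100.20:8443 --certificate-authority=ca.pem
kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem
kubectl config set-context solar --cluster=solar --user=admin
kubectl config use-context solar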
Cluster "solar" set.
User "admin" set.
Context "solar" created.
Switched to context "solar".
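We can then verify that the client can talk to the cluster; the two outputs below presumably correspond to:

kubectl version
kubectl get services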
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.199.0.1   <none>        443/TCP   2m
Let’s now start kube-dns, to have DNS available in our cluster, as well as busybox and hostnames, as discussed in the Kubernetes debugging page here, to verify everything works.
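Assuming we are in the guests/solar directory, starting them might look like this, with the final nslookup check from the debugging page producing the output below:

kubectl create -f yaml/dns.yaml
kubectl create -f yaml/busybox.yaml
kubectl create -f yaml/hostnames.yaml
kubectl exec -ti busybox -- nslookup hostnames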
Server:    10.199.0.10
Address 1: 10.199.0.10 kube-dns.kube-system.svc.cluster.local

Name:      hostnames
Address 1: 10.199.4.203 hostnames.default.svc.cluster.local
For a more complicated deployment, let’s now install the Kubernetes dashboard. To use it, we should give it certificates so that it can be accessed over SSL, so we have to create them as well as store them in Kubernetes as a secret.
Generating a nodeport certificate for 'dashboard' with IPs set to 10.199.0.1,192.168.1.45,192.168.100.1
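Storing the generated certificate as a secret might look like this (a sketch; the file names are assumptions based on the generation step above):

kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.crt --from-file=dashboard.key --namespace=kube-system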
secret "kubernetes-dashboard-certs" created
The provided yaml file is the same as this file, with the small change of exposing the deployment’s service as a NodePort on our port 32000.
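The service change amounts to something like this (a sketch; the rest of the file is unchanged, and the 8443 target port is an assumption matching the dashboard’s HTTPS port):

spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 32000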
We can now start the deployment, and take a couple of extra steps to verify it works when accessed from our main development workstation (where we likely have our browser).
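Assuming we are still in guests/solar, the deployment can be started from the provided file:

kubectl create -f yaml/kubernetes-dashboard.yaml

The master token below is what we will later use to log in to the dashboard.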
6b2a827e83d9466e909a3b2a5818bdea
On our main host, we can now copy this file and use it, after making sure we change the IP address of the cluster to the LAN IP of our Xen box (192.168.1.45 here, but of course different in your case), and the port to the port on the Xen box that is forwarded to the master’s 8443 port for API access.
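A sketch of the change, assuming the forwarded port on the Xen box is also 8443:

kubectl config set-cluster solar --server=https://192.168.1.45:8443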
Cluster "solar" set.
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.199.0.10    <none>        53/UDP,53/TCP   6m
kubernetes-dashboard   NodePort    10.199.12.23   <none>        443:32000/TCP   3m
You should now be able to point your browser at https://[your-lan-ip]:32000/ and log in with the master token identified above to access the dashboard.
Have fun with your new Kubernetes cluster! If you have reached this page as part of the guide, you might need to go back to step 5 to understand how the cluster is put together.
To shut down the cluster you can just run kdown $KCLUSTER.