From Fedora Project Wiki

Revision as of 01:14, 11 April 2018

Description

Install Kubernetes on Fedora Atomic Host using kubeadm.

Setup

  • Install one or more Fedora Atomic Hosts.

How to test

  • Use package layering to install kubeadm on each host:
 rpm-ostree install kubernetes-kubeadm ethtool -r
  • BUG ALERT: Unfortunately, as of 1.7.3, SELinux again needs to be in permissive mode for kubeadm to work:
# setenforce 0

# sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
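If you want to sanity-check the substitution before editing the real /etc/selinux/config, the same sed can be rehearsed against a scratch copy (a minimal sketch; the two-line file contents below are an assumption):

```shell
# Scratch copy with the line we expect to rewrite (assumed contents)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-config

# Same in-place substitution as above
sed -i 's/SELINUX=enforcing/SELINUX=permissive/' /tmp/selinux-config

# Confirm the mode changed
grep '^SELINUX=' /tmp/selinux-config
# -> SELINUX=permissive
```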

  • BUG ALERT: kubernetes wants to create a flex volume driver dir at /usr/libexec/kubernetes, but this is a read-only location on atomic hosts. Modify /etc/systemd/system/kubelet.service.d/kubeadm.conf to substitute a writable flex volume location, then run systemctl daemon-reload to pick up the change:
# sed -i 's/--cgroup-driver=systemd/--cgroup-driver=systemd --volume-plugin-dir=\/etc\/kubernetes\/volumeplugins/' /etc/systemd/system/kubelet.service.d/kubeadm.conf
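The expression above appends --volume-plugin-dir rather than replacing the whole line, so it is easy to rehearse on a scratch copy before touching the real drop-in (a minimal sketch; the one-line file contents below are an assumption, and note that sed only edits the file in place when given -i):

```shell
# Scratch copy of the kubelet drop-in (assumed minimal contents)
mkdir -p /tmp/kubelet.service.d
printf 'Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"\n' > /tmp/kubelet.service.d/kubeadm.conf

# Same substitution as above, applied in place to the scratch copy
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=systemd --volume-plugin-dir=\/etc\/kubernetes\/volumeplugins/' /tmp/kubelet.service.d/kubeadm.conf

# Confirm the new flag landed
grep -o 'volume-plugin-dir=[^" ]*' /tmp/kubelet.service.d/kubeadm.conf
# -> volume-plugin-dir=/etc/kubernetes/volumeplugins
```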
 
  • BUG ALERT: the kubelet is currently broken on F28 due to https://bugzilla.redhat.com/show_bug.cgi?id=1558425 -- we can temporarily work around this by switching the docker and kubelet cgroup drivers from systemd to cgroupfs:
# cp /usr/lib/systemd/system/docker.service /etc/systemd/system/

# sed -i 's/cgroupdriver=systemd/cgroupdriver=cgroupfs/' /etc/systemd/system/docker.service

# sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/kubeadm.conf

# systemctl daemon-reload

# systemctl restart docker
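The two cgroup-driver substitutions above can be rehearsed the same way before restarting anything (a sketch only; the single lines below are assumed stand-ins for the relevant lines of the real unit files):

```shell
# Assumed stand-ins for the relevant line of each file
printf 'ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd\n' > /tmp/docker.service
printf 'Environment="KUBELET_KUBECONFIG_ARGS=--cgroup-driver=systemd"\n' > /tmp/kubeadm.conf

# Same substitutions as above, applied to the scratch copies
sed -i 's/cgroupdriver=systemd/cgroupdriver=cgroupfs/' /tmp/docker.service
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/' /tmp/kubeadm.conf

# Both files should now name cgroupfs
grep -Eho 'cgroup-?driver=[a-z]+' /tmp/docker.service /tmp/kubeadm.conf
# -> cgroupdriver=cgroupfs
#    cgroup-driver=cgroupfs
```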
 
  • Start the kubelet and initialize the kubernetes cluster. We specify a pod-network-cidr because flannel, which we'll use in this test, requires it, and we ignore preflight errors because kubeadm looks in the wrong place (https://github.com/kubernetes/kubernetes/pull/49410) for kernel config.
# systemctl enable --now kubelet

# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

  • Follow the directions in the resulting output to configure kubectl:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

  • Deploy the flannel network plugin:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster, run:
# kubectl taint nodes --all node-role.kubernetes.io/master-
  • If desired, join additional nodes to the master using the kubeadm join command provided in the kubeadm init output. For instance:
# kubeadm join --token 2a247c.f357bc09c56b12c8 atomic1:6443
  • Check on the install:
# kubectl get nodes
NAME      STATUS    AGE       VERSION
atomic1   Ready     6m        v1.7.3

# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-atomic1                      1/1       Running   0          5m
kube-system   kube-apiserver-atomic1            1/1       Running   0          6m
kube-system   kube-controller-manager-atomic1   1/1       Running   0          5m
kube-system   kube-dns-2425271678-lpqlt         3/3       Running   0          6m
kube-system   kube-flannel-ds-fcmbb             1/1       Running   0          4m
kube-system   kube-proxy-mrdf4                  1/1       Running   0          6m
kube-system   kube-scheduler-atomic1            1/1       Running   0          6m
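If you want to script the node check above, a small awk filter over `kubectl get nodes --no-headers` output can count Ready nodes (count_ready is a hypothetical helper, demonstrated here against the sample line from the output above):

```shell
# Hypothetical helper: count nodes whose STATUS column reads Ready
count_ready() { awk '$2 == "Ready" { n++ } END { print n+0 }'; }

# Fed with the sample `kubectl get nodes --no-headers`-style line from above
printf 'atomic1   Ready     6m        v1.7.3\n' | count_ready
# -> 1
```

In a real script you would pipe `kubectl get nodes --no-headers | count_ready` and compare the result to the number of hosts you joined.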


  • Run some test apps
# kubectl run nginx --image=nginx --port=80 --replicas=3
deployment "nginx" created

# kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-158599303-dbkjw   1/1       Running   0          19s       10.244.0.3    atomic1
nginx-158599303-g4q7c   1/1       Running   0          19s       10.244.0.4    atomic1
nginx-158599303-n0mwm   1/1       Running   0          19s       10.244.0.5    atomic1

# kubectl expose deployment nginx --type NodePort
service "nginx" exposed

# kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1      <none>        443/TCP        40m
nginx        10.254.52.120   <nodes>       80:32681/TCP   14s

# curl http://atomic1:32681
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
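A freshly exposed NodePort service can take a few seconds to start answering, so a bare curl may fail transiently. A small retry loop avoids that flakiness (wait_for_http is a hypothetical helper; the URL is the one from the example above):

```shell
# Hypothetical helper: retry an HTTP GET until it succeeds or we give up
wait_for_http() {
  url=$1; tries=${2:-30}
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "up after $i attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "gave up after $tries attempts"
  return 1
}

# e.g. wait_for_http http://atomic1:32681
```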

Expected Results

  1. kubeadm runs without error.
  2. You're able to run Kubernetes apps using the cluster.