
Kubernetes


Kubernetes (often abbreviated k8s) is an open-source system for automating the deployment and management of applications running in containers. This page collects notes and documentation on the Kubernetes setup in the Foundation production environment.

Services

A service in Kubernetes is an 'abstract way to expose an application running on a set of Pods as a network service'.
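To get an overview of the services defined in a cluster, one can list them with kubectl. This is a generic sketch, assuming a working kubectl configuration on a control host:

# kubectl get services --all-namespaces -o wide

Each service maps a stable cluster IP onto the set of pods matched by its label selector, so clients do not need to track individual pod IPs.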

Debugging

For a quick introduction to the debugging actions one can take during a problem in production, look at Kubernetes/Helm. A guide will also be posted under Kubernetes/Kubectl.
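As a generic starting point (not specific to our setup), the following kubectl commands cover the most common first steps; the namespace, pod and container names are placeholders:

# kubectl get pods --all-namespaces -o wide
# kubectl describe pod <pod-name> -n <namespace>
# kubectl logs <pod-name> -n <namespace> -c <container>

get pods -o wide shows on which node each pod runs and what state it is in, describe pod shows scheduling events and restart counts, and logs fetches the output of a container.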

Administration

Rebooting a worker node

The impolite way

To reboot a worker node, you can simply reboot it in our environment. The platform will notice the node going away and respawn its pods on other nodes. However, the system does not currently rebalance itself automatically: pods are not rescheduled onto the node after it has been rebooted.

The polite way (recommended)

If you feel like being more polite, use kubectl drain. It configures the worker node to no longer accept new pods and evicts the existing pods to other workers. Draining the node takes some time; rough numbers from 2019-12-11 are around 60 seconds.

# kubectl drain kubernetes1001.eqiad.wmnet
# kubectl describe pods  --all-namespaces | awk  '$1=="Node:" {print $NF}' | sort -u
kubernetes1002.eqiad.wmnet/10.64.16.75
kubernetes1003.eqiad.wmnet/10.64.32.23
kubernetes1004.eqiad.wmnet/10.64.48.52
kubernetes1005.eqiad.wmnet/10.64.0.145
kubernetes1006.eqiad.wmnet/10.64.32.18
# kubectl get nodes
NAME                         STATUS                     ROLES     AGE       VERSION
kubernetes1001.eqiad.wmnet   Ready,SchedulingDisabled   <none>    2y352d    v1.12.9
kubernetes1002.eqiad.wmnet   Ready                      <none>    2y352d    v1.12.9
kubernetes1003.eqiad.wmnet   Ready                      <none>    2y352d    v1.12.9
kubernetes1004.eqiad.wmnet   Ready                      <none>    559d      v1.12.9
kubernetes1005.eqiad.wmnet   Ready                      <none>    231d      v1.12.9
kubernetes1006.eqiad.wmnet   Ready                      <none>    231d      v1.12.9

When the node has been rebooted, it can be configured to accept pods again using kubectl uncordon, e.g.

# kubectl uncordon kubernetes1001.eqiad.wmnet
# kubectl get nodes
NAME                         STATUS    ROLES     AGE       VERSION
kubernetes1001.eqiad.wmnet   Ready     <none>    2y352d    v1.12.9
kubernetes1002.eqiad.wmnet   Ready     <none>    2y352d    v1.12.9
kubernetes1003.eqiad.wmnet   Ready     <none>    2y352d    v1.12.9
kubernetes1004.eqiad.wmnet   Ready     <none>    559d      v1.12.9
kubernetes1005.eqiad.wmnet   Ready     <none>    231d      v1.12.9
kubernetes1006.eqiad.wmnet   Ready     <none>    231d      v1.12.9

The pods are not rebalanced automatically, i.e. the rebooted node is free of pods initially.
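If you want pods back on the freshly uncordoned node, one option is to delete individual pods that are owned by a Deployment/ReplicaSet; the controller recreates them, and the scheduler may then place the replacements on the rebooted node. This is a sketch with placeholder names; only do it for pods a controller will recreate:

# kubectl delete pod <pod-name> -n <namespace>
# kubectl get pods -n <namespace> -o wide

The -o wide output shows on which node the replacement pod ended up.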

Restarting calico-node

calico-node maintains a BGP session with the core routers. If you intend to restart this service, use the following procedure (a consolidated command sketch follows the list):

  1. Drain the node on the kube controller as shown above
  2. Run systemctl restart calico-node on the kube worker
  3. Wait for the BGP sessions on the core routers to re-establish
  4. Uncordon the node on the kube controller as shown above
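Put together, and using kubernetes1001.eqiad.wmnet purely as an example node, the procedure looks like this. On the kube controller:

# kubectl drain kubernetes1001.eqiad.wmnet

On the worker itself:

# systemctl restart calico-node

Once the BGP sessions on the core routers are re-established (see the check below), again on the kube controller:

# kubectl uncordon kubernetes1001.eqiad.wmnet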

You can use the following command on the core routers to check the BGP status (use match 64602 for codfw):

# show bgp summary | match 64601       
10.64.0.121           64601        220        208       0       2       32:13 Establ
10.64.0.145           64601     824512     795240       0       1 12w1d 21:45:51 Establ
10.64.16.75           64601        161        152       0       2       23:25 Establ
10.64.32.18           64601     824596     795247       0       2 12w1d 21:46:45 Establ
10.64.32.23           64601        130        123       0       2       18:59 Establ
10.64.48.52           64601     782006     754152       0       3 11w4d 11:13:52 Establ
2620:0:861:101:10:64:0:121       64601        217        208       0       2       32:12 Establ
2620:0:861:101:10:64:0:145       64601     824472     795240       0       1 12w1d 21:45:51 Establ
2620:0:861:102:10:64:16:75       64601        160        152       0       2       23:25 Establ
2620:0:861:103:10:64:32:18       64601     824527     795246       0       1 12w1d 21:46:45 Establ
2620:0:861:103:10:64:32:23       64601        130        123       0       2       18:59 Establ
2620:0:861:107:10:64:48:52       64601     782077     754154       0       2 11w4d 11:14:13 Establ

Restarting specific components

kube-controller-manager and kube-scheduler are control-plane components that run alongside the API server. In production, multiple instances run and perform an election via the API to determine which one is the master. Restarting both has no grave consequences, so it is safe to do. However, both are critical components in that they are required for the overall cluster to function smoothly: kube-scheduler is crucial for node failovers, pod evictions, etc., while kube-controller-manager packs multiple controller components and is critical for responding to pod failures, depools, etc.

The commands would be:

sudo systemctl restart kube-controller-manager
sudo systemctl restart kube-scheduler
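To see which instance currently holds the master lock, one can inspect the leader-election record; this is a sketch assuming the default endpoints-based leader election used by this Kubernetes version:

# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader
# kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader

The holderIdentity field in the annotation names the host that is currently the master; after a restart, leadership may simply move to another instance.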

Restarting the API server

The API server is behind LVS in production, so it is fine to restart it, as long as enough time is given between the restarts across the cluster.

sudo systemctl restart kube-apiserver
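To verify the API server is healthy again after the restart, one can query it from a host with a working kubectl configuration; this is a generic sketch:

# kubectl get --raw='/healthz'
# kubectl get componentstatuses

The /healthz endpoint should return ok, and componentstatuses reports the health of the scheduler, controller-manager and etcd as seen by the API server.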

See also