Kubernetes
- For information about Kubernetes in the Toolforge environment see Help:Toolforge/Kubernetes.
Kubernetes (often abbreviated k8s) is an open-source system for automating the deployment and management of applications running in containers. This page collects some notes/docs on the Kubernetes setup in the Foundation production environment.
Clusters
The list of currently maintained clusters in WMF, split by realm and team, is at Kubernetes/Clusters.
Packages
We deploy Kubernetes in WMF production using Debian packages where appropriate. There is an upgrade policy in place defining the timeframe and versions we run at any point in time; it lives under Kubernetes/Kubernetes_Infrastructure_upgrade_policy. For more technical information on how we build the Debian packages, have a look at Kubernetes/Packages.
Images
For how our images are built and maintained, have a look at Kubernetes/Images.
Services
A service in Kubernetes is an 'abstract way to expose an application running on a set of workloads as a network service'.
- Learn more about Migrating a service to kubernetes and Deployment pipeline generally.
Debugging
For a quick intro to the debugging actions one can take during a problem in production, look at Kubernetes/Helm. A guide will also be posted under Kubernetes/Kubectl.
Administration
Create a new cluster
Documentation for creating a new cluster is in Kubernetes/Clusters/New
Add a new service
To add a new service to the clusters:
- Ensure the service has its ports registered at: Service ports
- Create deployment user/tokens in the puppet private and public repos (you can use a randomly generated 22-character [A-Za-z0-9] password). Example commits:
- tokens for service: labs/private: https://gerrit.wikimedia.org/r/c/labs/private/+/692672 (plus actual data in the private repo, see e59f9496)
- tokens for user: labs/private: https://gerrit.wikimedia.org/r/c/labs/private/+/693169 (plus actual data in the private repo, see 51372030)
- token stanzas for service in puppet: operations/puppet: https://gerrit.wikimedia.org/r/c/operations/puppet/+/692667
- Add a Kubernetes namespace. Example commit:
- kubernetes namespace: deployment-charts https://gerrit.wikimedia.org/r/c/operations/deployment-charts/+/693124
- At this point, you can safely merge the changes (after somebody from Service Ops validates). After merging, it is important to run the commands in the next step, to avoid impacting other people rolling out changes later on.
- Set up the service in the staging-codfw cluster (and then in the other clusters):
On a cumin server:
sudo cumin -b 4 -s 2 kubemaster* 'run-puppet-agent'
On deploy1002:
sudo run-puppet-agent
sudo -i
cd /srv/deployment-charts/helmfile.d/admin_ng/
helmfile -e staging-codfw -i apply
The command above should show you a diff in namespaces/quotas/etc. related to your new service. If you don't see a diff, ping somebody from the Service Ops team! Check that everything is ok:
kube_env $YOUR-SERVICE-NAME staging-codfw
kubectl get ns
kubectl get pods
You should be able to see info about your namespace, and kubectl get pods should show a tiller pod.
Repeat for the staging-eqiad, eqiad and codfw clusters even if you aren't ready to fully deploy your service. Leaving things undeployed will impede further operations by other people.
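A compact way to run the remaining applies, assuming the admin_ng helmfile defines environments for all of these clusters (if it does not, use the per-cluster admin helmfiles shown in the production section below):
# still in /srv/deployment-charts/helmfile.d/admin_ng/ on the deployment server
for env in staging-eqiad eqiad codfw; do
  helmfile -e "$env" -i apply   # -i still asks for confirmation per cluster
done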
Deploy a service to staging
At this point you should have a Chart for your service (TODO: link to docs?), and will need to set up a helmfile.d/services directory in the operations/deployment-charts repository for the deployment. You can copy the structure (helmfile.yaml, values.yaml, values-staging.yaml, etc.) from helmfile.d/services/_example_ and customize as needed.
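For example, a minimal sketch of bootstrapping the new directory from the example (the service name is a placeholder; adjust the copied files to your chart and service):
# from a checkout of operations/deployment-charts
cp -r helmfile.d/services/_example_ helmfile.d/services/YOUR-SERVICE-NAME-HERE
# then edit helmfile.yaml, values.yaml, values-staging.yaml, ... for your service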
You can proceed to deploy the new service to staging for real. Don't worry about TLS (if needed): in staging a default TLS config for your service is added auto-magically. Things are slightly different for production.
On deploy1002:
cd /srv/deployment-charts/helmfile.d/services/YOUR-SERVICE-NAME-HERE
helmfile -e staging -i apply
The command above will show a diff related to the new service. Make sure that everything looks fine, then hit Yes to proceed.
Testing a service
- Now we can test the service in staging. Use the very handy endpoint http(s)://staging.svc.eqiad.wmnet:$YOUR-SERVICE-PORT to quickly test if everything works as expected.
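For example, a quick check from the deployment server, assuming a hypothetical service exposing a /healthz endpoint on port 8080 (substitute your registered port and a real path of your service):
curl -kv https://staging.svc.eqiad.wmnet:8080/healthz   # -k only in case the staging certificate is not trusted locally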
Deploy a service to production
- Create certificates for the new service, if it has an HTTPS endpoint (remember that this step is handled automatically for staging, but not for production).
- Enable TLS for Kubernetes deployments
- If the new service requires specific secrets, commit them to /srv/private/hieradata/role/common/deployment_server.yaml
- At this point, you need to update the admin config for eqiad and codfw (if you have configs for both of course):
- On deploy1002:
sudo -i
cd /srv/deployment-charts/helmfile.d/admin/codfw/
kube_env admin codfw
./cluster-helmfile.sh -i apply
- On deploy1002:
sudo -i
cd /srv/deployment-charts/helmfile.d/admin/eqiad/
kube_env admin eqiad
./cluster-helmfile.sh -i apply
- Then the final step, namely deploying the new service:
- On deploy1002:
cd /srv/deployment-charts/helmfile.d/services/YOUR-SERVICE-NAME-HERE; helmfile -e codfw -i apply
- On deploy1002:
cd /srv/deployment-charts/helmfile.d/services/YOUR-SERVICE-NAME-HERE; helmfile -e eqiad -i apply
The service can now be accessed via the registered port on any of the kubernetes nodes (for manual testing).
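For example, a manual spot-check, assuming a hypothetical service port 8080 and health-check path (substitute your registered port and path; any worker node works):
curl -kv https://kubernetes1001.eqiad.wmnet:8080/healthz   # use http:// instead if your service does not terminate TLS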
If you need the service to be easily accessible from outside of the cluster, you might want to follow Add a new load balanced service.
Rebooting a worker node
The unpolite way
To reboot a worker node, you can just reboot it in our environment. The platform will understand the event and respawn the pods on other nodes. However, the system does not currently rebalance itself automatically (pods are not rescheduled onto the node after it has been rebooted).
The polite way (recommended)
If you feel like being more polite, use kubectl drain: it will configure the worker node to no longer accept new pods and will move the existing pods to other workers. Draining the node will take time. Rough numbers on 2019-12-11 are at around 60 seconds.
# kubectl drain --ignore-daemonsets kubernetes1001.eqiad.wmnet
# kubectl describe pods --all-namespaces | awk '$1=="Node:" {print $NF}' | sort -u
kubernetes1002.eqiad.wmnet/10.64.16.75
kubernetes1003.eqiad.wmnet/10.64.32.23
kubernetes1004.eqiad.wmnet/10.64.48.52
kubernetes1005.eqiad.wmnet/10.64.0.145
kubernetes1006.eqiad.wmnet/10.64.32.18
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes1001.eqiad.wmnet Ready,SchedulingDisabled <none> 2y352d v1.12.9
kubernetes1002.eqiad.wmnet Ready <none> 2y352d v1.12.9
kubernetes1003.eqiad.wmnet Ready <none> 2y352d v1.12.9
kubernetes1004.eqiad.wmnet Ready <none> 559d v1.12.9
kubernetes1005.eqiad.wmnet Ready <none> 231d v1.12.9
kubernetes1006.eqiad.wmnet Ready <none> 231d v1.12.9
When the node has been rebooted, it can be configured to accept pods again using kubectl uncordon, e.g.
# kubectl uncordon kubernetes1001.eqiad.wmnet
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kubernetes1001.eqiad.wmnet Ready <none> 2y352d v1.12.9
kubernetes1002.eqiad.wmnet Ready <none> 2y352d v1.12.9
kubernetes1003.eqiad.wmnet Ready <none> 2y352d v1.12.9
kubernetes1004.eqiad.wmnet Ready <none> 559d v1.12.9
kubernetes1005.eqiad.wmnet Ready <none> 231d v1.12.9
kubernetes1006.eqiad.wmnet Ready <none> 231d v1.12.9
The pods are not rebalanced automatically, i.e. the rebooted node is free of pods initially.
Restarting specific components
kube-controller-manager and kube-scheduler are control-plane components that run alongside the API server. In production, multiple replicas run and perform an election via the API to determine which one is the leader. Restarting both is without grave consequences, so it is safe to do. However, both are critical components in that they are required for the overall cluster to function smoothly: kube-scheduler is crucial for node failovers, pod evictions, etc., while kube-controller-manager packs multiple controller loops and is critical for responding to pod failures, depools, etc.
The commands are:
sudo systemctl restart kube-controller-manager
sudo systemctl restart kube-scheduler
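To see which replica currently holds the lock, one option is the sketch below. It assumes the leader election record is stored as an annotation on the kube-system endpoints objects, which is the case for older releases such as the v1.12 shown elsewhere on this page:
# run with an admin context, e.g. kube_env admin eqiad
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep control-plane.alpha.kubernetes.io/leader
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep control-plane.alpha.kubernetes.io/leader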
Restarting the API server
It is behind LVS in production, so it is fine to restart it as long as enough time is given between the restarts across the cluster.
sudo systemctl restart kube-apiserver
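A sketch of a rolling restart from a cumin host, reusing the kubemaster* pattern from the setup section above (one host per batch, with a pause between batches; adjust the sleep as needed):
sudo cumin -b 1 -s 60 'kubemaster*' 'systemctl restart kube-apiserver'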
Switch the active staging cluster (eqiad<->codfw)
We have one staging cluster per DC, mostly to separate staging of Kubernetes and its components from staging of the services running on top of it. To provide staging services during work on one of the clusters, we can (manually) switch between the DCs:
- Switch staging.svc.eqiad.wmnet to point to the new active k8s cluster (we should have a better solution/DNS name for this at some point)
- Switch the definition of "staging" on the deployment servers:
- Switch CI and releases to the other kubestagemaster:
- https://gerrit.wikimedia.org/r/c/operations/puppet/+/668114
sudo cumin -b 3 'O:ci::master or O:releases or O:deployment_server' 'run-puppet-agent -q'
- Make sure all service deployments are up to date after the switch (e.g. deploy them all)
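A rough sketch of "deploy them all" from the deployment server, assuming each directory under helmfile.d/services defines a staging environment (skip or adapt the ones that do not):
cd /srv/deployment-charts/helmfile.d/services
for svc in */; do
  (cd "$svc" && helmfile -e staging -i apply)
done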
Managing pods, jobs and cronjobs
Commands should be run from the deployment servers (at the time of this writing deploy1002).
You need to set the correct context, for example:
kube_env admin eqiad
Other choices are codfw, staging-eqiad and staging-codfw.
The management command is called kubectl.
Listing cronjobs, jobs and pods
kubectl get cronjobs -n <namespace>
kubectl get jobs -n <namespace>
kubectl get pods -n <namespace>
Deleting a job
kubectl delete job <job id> -n <namespace>
Updating the docker image run by a CronJob
The relationship between the resources is the following:
Cronjob --spawns--> Job(s) --spawns--> Pod(s)
Note: Technically speaking, it's a tight control loop that lives in kube-controller-manager that does the spawning part, but adding that to the above would make this more confusing.
Under normal conditions a Docker image version will be updated when a new deploy happens, and the cronjob will reference the new version. However, jobs already created by the CronJob will not be stopped until they have run to completion.
When the job finishes, the cronjob will create new job(s), which in turn will create new pod(s).
Depending on the correlation between the CronJob's schedule and the job's run time, there might be a window of time where, despite the new deployment, the old job is still running.
Deleting the kubernetes pod created by the job itself will NOT work, i.e. the job will still exist and it will create a new pod (which will still have the old image).
So, if we are dealing with a long-running Kubernetes Job, the way to stop the old-image run sooner is to delete the job created by the cronjob.
phab:T280076 is an example where this was needed.
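A sketch of that flow for a hypothetical cronjob named my-cronjob in namespace my-namespace (all names are placeholders):
kubectl get cronjobs -n my-namespace                      # confirm the cronjob itself references the new image
kubectl get jobs -n my-namespace                          # find the job(s) it spawned earlier, e.g. my-cronjob-27469620
kubectl delete job my-cronjob-27469620 -n my-namespace    # deleting the job (not just its pod) stops the run using the old image
kubectl get pods -n my-namespace                          # the next scheduled job will create pods with the new image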
Checking which image version a cronjob is using
kubectl describe pod <pod in question> -n <namespace>
(look for Image:)
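Alternatively, a one-liner sketch that prints just the image(s) of the pod:
kubectl get pod <pod in question> -n <namespace> -o jsonpath='{.spec.containers[*].image}'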