Portal:Toolforge/Admin/Kubernetes
Kubernetes (often abbreviated k8s) is an open-source system for automating the deployment and management of applications running in containers. Kubernetes was selected in 2015 by the Cloud Services team as the replacement for Grid Engine in the Toolforge project.[1] Usage of k8s by Tools began in mid-2016.[2]
Note: For help on using Kubernetes in Toolforge, see the Kubernetes help documentation.
Upstream Documentation
If you need tutorials, information or reference material, check out https://kubernetes.io/docs/home/. The documentation site can be switched to match the version of Kubernetes we currently have deployed.
Cluster Build
The entire build process for reference and reproducibility is documented at Portal:Toolforge/Admin/Kubernetes/Deploying.
Components
K8S components generally fall into two 'planes': the control plane and the worker plane. You can also find more information about the general architecture of Kubernetes (along with a nice diagram!) in the upstream documentation.
The most specific information on the build of our setup is available in the build documentation at Portal:Toolforge/Admin/Kubernetes/Deploying
Control Plane
This refers to the 'master' components, which provide a unified view of the entire cluster. Currently most of these (except etcd) run on each of three control nodes. The three nodes are redundant, load balanced by the service object inside the cluster and by haproxy outside it.
Etcd
Kubernetes stores all state in etcd; all other components are stateless. The etcd cluster is accessed directly only by the API server and no other component. Direct access to this etcd cluster is equivalent to root on the entire k8s cluster, so it is firewalled off to be reachable only by the control plane and etcd nodes, client certificate verification is used for authentication (puppet is the CA), and secrets are encrypted at rest in our etcd setup.
We currently use a 3-node cluster, named tools-k8s-etcd-[4-6]. They're all smallish Debian Buster instances configured largely by the same etcd puppet code we use in production.
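To check the health of the etcd cluster from one of the etcd nodes, you can query it with etcdctl. A minimal sketch, assuming the v3 API and puppet-provisioned client certificates; the certificate paths below are illustrative assumptions, not confirmed values:
$ ETCDCTL_API=3 etcdctl \
    --endpoints=https://tools-k8s-etcd-4.tools.eqiad.wmflabs:2379 \
    --cacert=/etc/etcd/ssl/ca.pem \
    --cert=/etc/etcd/ssl/cert.pem \
    --key=/etc/etcd/ssl/key.pem \
    endpoint health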
API Server
This is the heart of the kubernetes control plane - it mediates access to all state stored in etcd for all other components (both in the control plane & the worker plane). It is purely a data access layer, containing no logic related to any of the actual end-functionality kubernetes offers. It offers the following functionality:
- Authentication & Authorization
- Validation
- Read / Write access to all the API endpoints
- Watch functionality for endpoints, which notifies clients when state changes for a particular resource
When you are interacting with the kubernetes API, this is the server that is serving your requests.
The API server runs on each control plane node, currently tools-k8s-control-1/2/3. It listens on port 6443, using its own internal CA for TLS and authentication, and should be accessed from outside the cluster via the haproxy frontend at k8s.tools.eqiad1.wikimedia.cloud. The localhost insecure port is disabled. All certs for the cluster's use in API server communication are provisioned using the certificates API. Note that we do use the cluster root CA in the certificates API; the wording in the upstream documentation only warns users that this is one of several ways to configure it. That API can be used for other types of certs as well, if a cluster builder so chooses.
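As a quick sanity check from a bastion, you can confirm which API server endpoint your kubeconfig points at and that it responds. Both of these are stock kubectl commands:
$ kubectl cluster-info
$ kubectl get --raw /healthz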
Controller Manager
All other cluster-level functions are currently performed by the Controller Manager. For instance, (deprecated) ReplicationController objects are created and updated by the replication controller (constantly checking & spawning new pods if necessary), and nodes are discovered, managed, and monitored by the node controller. The general idea is one of a 'reconciliation loop' - poll/watch the API server for desired state and current state, then perform actions to make them match.
The Controller Manager also runs on the k8s control nodes, and communicates with the API server with appropriate TLS over the ClusterIP of the API server. It runs in a static pod.
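To look at the controller manager's static pods, something like the following should work; note that the component label is a kubeadm convention and is an assumption here:
$ kubectl -n kube-system get pods -l component=kube-controller-manager
$ kubectl -n kube-system logs -l component=kube-controller-manager --tail=20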
Scheduler
This simply polls the API for pods that are in an unscheduled state & binds them to a specific node according to a set of principles. This is also a conceptually very simple reconciliation loop, and it is possible to replace the one we use (the default kube-scheduler) with a custom scheduler (hence it isn't part of the Controller Manager).
The scheduler runs on the k8s control nodes in a static pod and communicates with the API server over mutual TLS like all other components. The scheduler makes decisions via a process of filtering nodes that are inappropriate (unschedulable) out and then scoring them. The scoring rules can be controlled somewhat by using scheduling profiles and plugins, but we haven't implemented anything custom in that regard.
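A useful way to see what the scheduler still has to place is to list pods stuck in the Pending phase, using a standard field selector:
$ kubectl get pods --all-namespaces --field-selector=status.phase=Pending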
Worker plane
The worker plane refers to the components of the nodes on which actual user code is executed in containers. In tools these are named tools-k8s-worker-*, and run as Debian Buster instances.
Kubelet
Kubelet is the interface between kubernetes and the container engine (in our case, Docker), deployed via Debian packages rather than static pods. It checks for new pods scheduled on the node it is running on, and makes sure they are running with appropriate volumes / images / permissions. It also health-checks the running pods & updates their state in the k8s API. You can think of it as a reconciliation loop where it checks what pods must be running / not-running on its node, and makes sure that matches reality.
This runs on each node and communicates with the k8s API server over TLS, authenticated with a client certificate (puppet node certificate + CA). It runs as root since it needs to communicate with docker, and being granted access to docker is root equivalent.
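Since the kubelet is installed from Debian packages rather than run as a static pod, you can inspect it directly on a worker node with standard systemd tooling:
$ systemctl status kubelet
$ sudo journalctl -u kubelet --since '1 hour ago'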
Kube-Proxy
kube-proxy is responsible for making sure that k8s service IPs work across the cluster. It is effectively an iptables management system. Its reconciliation loop is to get the list of service IPs across the cluster, and make sure NAT rules for all of them exist on the node.
This is run as root, since it needs to use iptables. You can list the rules on any worker node with iptables -t nat -L.
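For example, to inspect the service NAT chain that kube-proxy maintains (KUBE-SERVICES is the standard chain name in iptables mode):
$ sudo iptables -t nat -L KUBE-SERVICES | head -n 20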
Docker
We're currently using Docker as our container engine. We place up-to-date docker packages in our thirdparty/k8s repo, and pin versions in puppet. Configuration of the docker service is handled in puppet.
Calico
Calico is the container overlay network and network policy system we use to allow all the containers to think they're on the same network. We currently use a /16 (192.168.0.0/16), from which each node gets a /24 and allocates an IP per container. It is currently a fairly bare-minimum configuration to get the network going.
Calico is configured to use the Kubernetes datastore, and therefore it is able to use the same etcd cluster as Kubernetes. It runs on worker nodes as a DaemonSet.
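To check on the Calico DaemonSet and its per-node pods, something like the following should work; the calico-node name and the k8s-app label are upstream Calico conventions, assumed here rather than confirmed for our manifests:
$ kubectl -n kube-system get daemonset calico-node
$ kubectl -n kube-system get pods -o wide -l k8s-app=calico-node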
Proxy
We need to be able to get HTTP requests from the outside internet to pods running on Kubernetes. We have an NGINX ingress controller that handles this behind the main Toolforge proxy. Any time the DynamicProxy setup doesn't have a service listed, the incoming request will be proxied to the haproxy of the Kubernetes control plane on the port specified in the hiera key profile::toolforge::k8s::ingress_port (currently 30000), which forwards the request to the ingress controllers. Currently, the DynamicProxy only actually serves Gridengine web services.
This allows both Gridengine and Kubernetes based web services to co-exist under the tools.wmflabs.org domain and the toolforge.org domain.
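To verify the Kubernetes side of this path, you can check the ingress controller pods and the per-tool Ingress objects:
$ kubectl -n ingress-nginx get pods
$ kubectl get ingress --all-namespaces | head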
Infrastructure centralized Logging
Warning: This section is totally false at this time. Central logging needs rebuilding.
We aggregate all logs from syslog (so docker, kubernetes components, flannel, etcd, etc.) from all kubernetes-related hosts into a central instance. This is both for simplicity and to try to capture logs that would otherwise be lost to kernel issues. You can see these logs on the logging host, which can be found in Hiera:Tools as k8s::sendlogs::centralserver, in /srv/syslog. The current central logging host is tools-logs-01. Note that this is not related to logging for applications running on top of kubernetes at all.
Authentication & Authorization
In Kubernetes, there is no inherent concept of a user object at this time, but several methods of authentication to the API server by end users are supported. Most of them require some external mechanism to generate OIDC tokens, x.509 certs or similar. The most native and convenient mechanism available to us seemed to be x.509 certificates provisioned using the Certificates API. This is managed by the maintain-kubeusers service that runs inside the cluster.
Services that run inside the cluster but are not managed by tool accounts are generally provisioned a service account, and therefore authenticate with a service account token.
Since the PKI structure of certificates is so integral to how everything in the system authenticates itself, further information can be found at Portal:Toolforge/Admin/Kubernetes/Certificates.
Permissions and authorization are handled via role-based access control and pod security policy.
Tool accounts are namespaced accounts - for each tool we create a Kubernetes Namespace, and inside that namespace they have access to create a specific set of resources (RCs, Pods, Services, Secrets, etc.). There are resource-based (CPU/IO/Disk) quotas imposed on a per-namespace basis, described at News/2020_Kubernetes_cluster_migration#What_are_the_primary_changes_with_moving_to_the_new_cluster?. More documentation to come.
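As a concrete example, you can inspect the RBAC bindings and quota of a tool namespace with an admin account (tool-cdnjs is the namespace used in examples elsewhere on this page):
$ kubectl -n tool-cdnjs get rolebindings
$ kubectl -n tool-cdnjs get resourcequotas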
Admin accounts
The maintain-kubeusers service creates admin accounts from the $project.admin LDAP group. Admin accounts are basically users with the "view" permission, which allows read access to most (not all) Kubernetes resources. They have the additional benefit of being able to impersonate any user in the environment. This can be useful for troubleshooting, in addition to allowing administrators to assume cluster-admin privileges without logging directly into a control plane host.
For example, bstorm is an admin account on toolsbeta Kubernetes and can therefore see all namespaces:
bstorm@toolsbeta-sgebastion-04:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-admission ingress-admission-55fb8554b5-5sr82 1/1 Running 0 48d
ingress-admission ingress-admission-55fb8554b5-n64xz 1/1 Running 0 48d
ingress-nginx nginx-ingress-64dc7c9c57-6zmzz 1/1 Running 0 48d
However, bstorm cannot write or delete resources directly:
bstorm@toolsbeta-sgebastion-04:~$ kubectl delete pods test-85d69fb4f9-r22rl -n tool-test
Error from server (Forbidden): pods "test-85d69fb4f9-r22rl" is forbidden: User "bstorm" cannot delete resource "pods" in API group "" in the namespace "tool-test"
She can assume the system:masters group (with an arbitrary user impersonation to satisfy the API rules), which has cluster-admin privileges, and then she can delete resources:
bstorm@toolsbeta-sgebastion-04:~$ kubectl delete pods test-85d69fb4f9-r22rl -n tool-test --as-group=system:masters --as=admin
pod "test-85d69fb4f9-r22rl" deleted"
NFS, LDAP and User IDs
Warning: This section is entirely obsolete. It needs to be replaced with the information on PSPs and PodPresets.
Kubernetes by default allows users to run their containers with any UID they want, including root (0). This is problematic for multiple reasons:
- They can then mount any path in the worker instance as r/w and do whatever they want. This basically gives random users full root on all the instances
- They can mount NFS and read / write all tools' data, which is terrible and unacceptable.
So by default, being able to access the k8s api is the same as being able to access the Docker socket, which is root equivalent. This is bad for a multi-tenant system like ours, where we'd like to have multiple users running in the same k8s cluster.
Fortunately, unlike docker, k8s does allow us to write admission controllers that can place additional restrictions / modifications on what k8s users can do. We utilize this in the form of a UidEnforcer admission controller that enforces the following:
- All namespaces must have a RunAsUser annotation
- Pods (and their constituent containers) can run only with that UID
In addition, we establish the following conventions:
- Each tool gets its own Namespace
- During namespace creation, we add the RunAsUser annotation to match the UID of the tool in LDAP
- Namespace creation / modification is a restricted operation that only admins can perform.
This essentially provides us with a setup where only the users who can run a process with user id X via Grid Engine / bastions today can continue to do so with k8s as well. This also works out well for dealing with NFS permissions.
Monitoring
Warning: This section needs a large update.
We've decided to use Prometheus for metrics collection & monitoring (and eventually alerting too). There's a publicly visible setup available at https://tools-prometheus.wmflabs.org/tools. There are also dashboards on Labs Grafana. There is a per-tool statistics dashboard, as well as a cluster health dashboard.
We have no alerting yet, but that should change at some point.
Docker Images
We restrict running images to those from the Tools Docker registry, which is available publicly (and inside tools) at docker-registry.tools.wmflabs.org. This is for the following purposes:
- Making it easy to enforce our Open Source Code only guideline
- Making it easy to do security updates when necessary (just rebuild all the containers & redeploy)
- Faster deploys, since this is in the same network (vs dockerhub, which is retrieved over the internet)
- Access control is provided totally by us, less dependent on dockerhub
This is enforced with a K8S Admission Controller, called RegistryEnforcer. It enforces that all containers come from docker-registry.tools.wmflabs.org, including the Pause container.
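You can see the enforcement in action by trying to schedule an image from Docker Hub in a tool namespace; tool-mytool below is a hypothetical namespace, and the exact rejection message comes from the webhook, so it may differ:
$ kubectl -n tool-mytool run test --image=nginx
The pod should be rejected by the admission controller, since nginx resolves to docker.io rather than docker-registry.tools.wmflabs.org.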
Image building
Images are built on the tools-docker-imagebuilder-01 instance, which is set up with appropriate credentials (and a hole in the proxy for the docker registry) to allow pushing. Note that you need to be root to build / push docker containers. We suggest using sudo -i for this, since docker looks for credentials in the user's home directory, and they are only present in root's home directory.
Building base image
We have a base 'wikimedia' image (named docker-registry.tools.wmflabs.org/wikimedia-jessie) that is built using the command build-base-images on the image builder instance. This code uses bootstrapvz to build the image and push it to the registry; the specs can be found in the operations/puppet.git repository under modules/docker/templates/images.
Building toolforge specific images
These are present in the git repository operations/docker-images/toollabs-images. There is a base image called docker-registry.tools.wmflabs.org/toolforge-buster-sssd that inherits from the wikimedia-buster base image but adds the toolforge debian repository + LDAP SSSD support. All Toolforge-related images should be named docker-registry.tools.wmflabs.org/toolforge-$SOMETHING. The structure should be fairly self-explanatory. There is a clone of the repository in /srv/images/toolforge on the docker builder host.
You can rebuild any particular image by running the build.py script in that repository. If you give it the path inside the repository where a Docker image lives, it'll rebuild all containers that your image builds from and all the containers that inherit from your container. This ensures that any changes in the Dockerfiles are completely built and reflected immediately, rather than surprising you later when something unrelated is pushed. We rely on Docker's build cache mechanisms to make sure this doesn't become terribly slow. It then pushes the images to the docker registry.
Example of rebuilding the python2 images:
$ ssh tools-docker-imagebuilder-01.tools.eqiad.wmflabs
$ screen
$ sudo su
$ cd /srv/images/toolforge
$ git fetch
$ git log --stat HEAD..@{upstream}
$ git rebase @{upstream}
$ ./build.py --push python2-sssd/base
By default, the script will build the testing tag of any image, which will not be pulled by webservice, and it will build with the prefix toolforge. Webservice pulls the latest tag. If the image you are working on is ready to be automatically applied to all newly-launched containers, you should add the --tag latest argument to your build.py command:
$ ./build.py --tag latest --push python2-sssd/base
You will probably want to clean up intermediate layers after building new containers:
$ docker ps --no-trunc -aqf "status=exited" | xargs docker rm
$ docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi
All of the web images install our locally managed toollabs-webservice package. When it is updated to fix bugs or add new features, the Docker images need to be rebuilt. This is typically a good time to ensure that all apt-managed packages are updated as well by rebuilding all of the images from scratch:
$ ssh tools-docker-imagebuilder-01.tools.eqiad.wmflabs
$ screen
$ sudo su
$ cd /srv/images/toolforge
$ git fetch
$ git log --stat HEAD..@{upstream}
$ git reset --hard origin/master
$ ./rebuild_all.sh
See Portal:Toolforge/Admin/Kubernetes/Docker-registry for more info on the docker registry setup.
Building new nodes
Bastion nodes
Kubernetes bastion nodes provide kubectl access to the cluster, installed from the thirdparty/k8s repo. This is in puppet, and no other special configuration is required.
Worker nodes
Build nodes according to the information at Portal:Toolforge/Admin/Kubernetes/Deploying#worker_nodes. Worker nodes are where user containers/pods are actually executed. They are large nodes running Debian Buster.
Builder nodes
Builder nodes are where you can create new Docker images and upload them to the Docker registry.
You can provision a new builder node with the following:
- Provision a new instance using a name starting with tools-docker-builder-
- Switch the instance to the tools puppetmaster following the steps below, and run puppet until it has no errors.
- Edit hiera to set docker::builder_host to the new hostname
- Run puppet on the host named by docker::registry in hiera to allow uploading images
Switch to new puppetmaster
You need to switch the node to the tools puppetmaster first. This is common for all roles, because we require secret storage, which is impossible with the default labs puppetmaster. This process should be made easier / simpler at some point, but until then:
- Make sure puppet has run at least once on the new instance. On the second run, it will produce a large blob of red error messages about SSL certificates. Just run puppet until you get that :)
- Run sudo rm -rf /var/lib/puppet/ssl on the new instance.
- Run puppet on the new instance again. This will make puppet create a new certificate signing request and send it to the puppetmaster. If you get certificate errors at this point, it means an instance with the same name was attached to the puppetmaster earlier and wasn't decommissioned properly. You can run sudo puppet cert clean $fqdn on the puppetmaster and then repeat this step and the signing step below.
- On the puppetmaster (tools-puppetmaster-02.tools.eqiad.wmflabs), run sudo puppet cert sign <fqdn>, where fqdn is the FQDN of the new instance. The signing should not be automated away, since we depend on only signed clients having access to the secrets we store in the puppetmaster.
- Run puppet again on the new instance, and it should run to completion now!
Administrative Actions
Quota management
Resource quotas in Kubernetes are set at the namespace scope. Default quotas are created for Toolforge tools by the maintain-kubeusers service.
To view a quota for a tool, you can use your admin account (your login user, if you are in the tools.admin group) and run kubectl -n tool-$toolname get resourcequotas to list them. In most cases there should be only one, with the same name as the namespace. The easiest way to see its contents is to output the YAML of the quota, e.g. kubectl -n tool-cdnjs get resourcequotas tool-cdnjs -o yaml
If you want to update a quota for a user who has completed the process for doing so on phabricator, it is as simple as editing the Kubernetes object. Your admin account needs to impersonate cluster-admin to do this, for example:
bstorm@tools-sgebastion-08:~$ kubectl --as admin --as-group system:masters edit resourcequota tool-cdnjs --namespace tool-cdnjs
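If you prefer a non-interactive edit, kubectl patch should work as well. A sketch, assuming you want to raise the pod limit; the pods key is a standard ResourceQuota field, so adjust it to the resource actually being changed:
$ kubectl --as admin --as-group system:masters --namespace tool-cdnjs \
    patch resourcequota tool-cdnjs --patch '{"spec":{"hard":{"pods":"10"}}}'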
See also Help:Toolforge/Kubernetes#Namespace-wide_quotas
Node management
You can run these as any user on the kubernetes control node (currently tools-k8s-control-1.tools.eqiad.wmflabs). It is ok to kill pods on individual nodes - the controller manager will notice they are gone soon and recreate them elsewhere.
Getting a list of nodes
kubectl get node
Cordoning a node
This prevents new pods from being scheduled on it, but does not kill currently running pods there.
kubectl cordon $node_fqdn
Depooling a node
This deletes all running pods on that node as well as marking it as unschedulable. The --delete-local-data --force flags allow deleting paws containers (since those won't be automatically respawned):
kubectl drain --ignore-daemonsets --delete-local-data --force $node_fqdn
Uncordon/Repool a node
Make sure that the node shows up as 'ready' in kubectl get node before repooling it!
kubectl uncordon $node_fqdn
Decommissioning a node
When you are permanently decommissioning a node, you need to do the following:
- Depool the node: kubectl drain --delete-local-data --force $node_fqdn
- Remove the node: kubectl delete node $node_fqdn
- Shut down the node using Horizon or openstack commands
- (optional) Wait a bit if you feel that this node may need to be recovered for some reason
- Delete the node using Horizon or openstack commands
- Clean its puppet certificate: run sudo puppet cert clean $fqdn on the tools puppetmaster
- Remove it from the list of worker nodes in the profile::toolforge::k8s::worker_nodes hiera key for haproxy nodes (in the puppet prefix tools-k8s-haproxy).
Pod management
Administrative actions related to concrete pods/tools.
Pods causing too much traffic
Please read Portal:Toolforge/Admin/Kubernetes/Pod_tracing
Custom admission controllers
To get the security features we need in our environment, we have written and deployed a few additional admission webhooks. Since Kubernetes is written in Go, so are these admission controllers, to take advantage of the same objects, etc. The custom controllers are documented largely in their README files.
Ingress Admission Webhook
This prevents Toolforge users from creating arbitrary ingresses that might incorrectly or maliciously route traffic: https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/toolforge/ingress-admission-controller/
Registry Admission Webhook
This webhook controller prevents pods from external image repositories from running. It does not apply to kube-system or other namespaces we specify in the webhook config, because those images are used from the upstream systems directly. https://gerrit.wikimedia.org/r/plugins/gitiles/labs/tools/registry-admission-webhook/
Common issues
SSHing into a new node doesn't work, asks for password
Usually this is because the first puppet run hasn't happened yet. Just wait for a bit! If that doesn't work, look at the console log for the instance - if it is *not* at a login prompt, read the logs to see what is up.
Node naming conventions
Node type | Prefix | How to find the active one?
---|---|---
Kubernetes control node | tools-k8s-control- | Hiera: profile::toolforge::k8s::control_nodes
Kubernetes worker node | tools-k8s-worker- | Run kubectl get node on the kubernetes master host
Kubernetes etcd | tools-k8s-etcd- | All nodes with the given prefix, usually
Docker registry | tools-docker-registry- | The node that docker-registry.tools.wmflabs.org resolves to
Docker builder | tools-docker-builder- | Hiera: docker::builder_host
Bastions | tools-sgebastion- | DNS: login.toolforge.org and dev.toolforge.org
Web proxies | tools-proxy- | DNS: tools.wmflabs.org. and toolforge.org. Hiera:
GridEngine worker node (Debian Stretch) | tools-sgeexec-09 |
GridEngine webgrid node (Lighttpd, Stretch) | tools-sgewebgrid-lighttpd-09 |
GridEngine webgrid node (Generic, Stretch) | tools-sgewebgrid-generic-09 |
GridEngine master | tools-sgegrid-master |
GridEngine master shadow | tools-sgegrid-shadow |
Redis | tools-redis | Hiera: active_redis
Mail | tools-mail |
Cron runner | tools-sgecron- | Hiera: active_cronrunner
Elasticsearch | tools-elastic- |