Portal:Toolforge/Admin/Kubernetes

{{Notice|For help on using kubernetes in Toolforge, see the [[Help:Toolforge/Kubernetes|Kubernetes help]] documentation.}}


== Sub pages ==
{{Special:Prefixindex/{{FULLPAGENAME}}/|hideredirects=1|stripprefix=1}}


{{TOC right}}


== Upstream Documentation ==


If you need tutorials, information or reference material, check out https://kubernetes.io/docs/home/.
The documentation can be adjusted to the version of Kubernetes we currently have deployed.


== Cluster Build ==


The entire build process for reference and reproducibility is documented at [[Portal:Toolforge/Admin/Kubernetes/Deploying]].


== Components ==


K8S components are generally in two 'planes' - the control plane and the worker plane. You can also find more info about the general architecture of kubernetes (along with a nice diagram!) in [https://kubernetes.io/docs/concepts/overview/components/ the upstream documentation].


The most specific information on the build of our setup is available in the build documentation at [[Portal:Toolforge/Admin/Kubernetes/Deploying]]


=== Control Plane ===
 
The Kubernetes control plane nodes make global decisions about the cluster; this is where all of the control and scheduling happens. Currently, most of these components (except etcd) run on each of the three control nodes. The three nodes are redundant, load balanced by the service object inside the cluster and by haproxy outside it.
==== Etcd ====
{{See also|Portal:Toolforge/Admin/Kubernetes/Deploying#etcd_nodes}}
Kubernetes stores all state in [[etcd]] - all other components are stateless. The etcd cluster is only accessed directly by the API Server and no other component. Direct access to this etcd cluster is equivalent to root on the entire k8s cluster, so it is firewalled off to be reachable only from the control plane nodes and the other etcd nodes, client certificate verification is used for authentication (puppet is the CA), and secrets are encrypted at rest in our etcd setup.


We currently use a 3 node cluster, named <code>tools-k8s-etcd-[4-6]</code>. They're all smallish Debian Buster instances configured largely by the same [https://phabricator.wikimedia.org/source/operations-puppet/browse/production/modules/etcd/ etcd puppet code] we use in production.
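
A quick sketch for checking etcd health from one of the etcd nodes; the endpoint is local and the certificate paths are illustrative (our puppetization may place them elsewhere):
<syntaxhighlight lang="shell-session">
$ sudo ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/client.pem --key=/etc/etcd/ssl/client-key.pem \
    endpoint health
$ sudo ETCDCTL_API=3 etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/client.pem --key=/etc/etcd/ssl/client-key.pem \
    member list
</syntaxhighlight>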


==== The API Server ====


This is the heart of the Kubernetes control plane. All communication between all components, whether they are internal system components or external user components, ''must'' go through the API server. It is purely a data access layer, containing no logic related to any of the actual end-functionality Kubernetes offers. It offers the following functionality:
 
* Authentication & Authorization
* Validation
* Read / Write access to all the API endpoints
* Watch functionality for endpoints, which notifies clients when state changes for a particular resource
When you are interacting with the Kubernetes API, this is the server that is serving your requests.
 
The API server runs on each control plane node, currently <code>tools-k8s-control-1/2/3</code>. It listens on port 6443, using its own internal CA for TLS and authentication, and should be accessed from outside the cluster via the [[Portal:Toolforge/Admin/Kubernetes/Deploying#front_proxy_(haproxy)|haproxy frontend]] at <code>k8s.tools.eqiad1.wikimedia.cloud</code>. The localhost insecure port is disabled. All certs the cluster uses for API server communication are provisioned using the [https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/ certificates API]. Note that we do use the cluster root CA with the certificates API; the upstream documentation merely warns that this is only one of several possible configurations, and that API can also be used for other types of certs if a cluster builder so chooses.
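
As a sketch, you can confirm the API server is reachable through the haproxy frontend from a bastion (<code>/healthz</code> is a standard kube-apiserver endpoint; output will vary by version):
<syntaxhighlight lang="shell-session">
$ kubectl cluster-info
$ curl -k https://k8s.tools.eqiad1.wikimedia.cloud:6443/healthz
ok
</syntaxhighlight>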


==== Controller Manager ====


All other cluster-level functions are currently performed by the Controller Manager. For instance, deprecated <code>ReplicationController</code> objects are created and updated by the replication controller (constantly checking & spawning new pods if necessary), and nodes are discovered, managed, and monitored by the node controller. The general idea is one of a 'reconciliation loop' - poll/watch the API server for desired state and current state, then perform actions to make them match.


The Controller Manager also runs on the k8s control nodes, and communicates with the API server with appropriate TLS over the ClusterIP of the API server. It runs in a static pod.


==== The scheduler ====


This simply polls the API for [https://kubernetes.io/docs/concepts/workloads/pods/pod/ Pods] with no assigned node and selects an appropriate healthy worker node for them. This is also a conceptually very simple reconciliation loop, and it is possible to replace the one we use (the default <code>kube-scheduler</code>) with a custom scheduler, which is why it is not part of the Controller Manager.


The scheduler runs on the k8s control nodes in a static Pod and communicates with the API server over mutual TLS like all other components. The scheduler makes decisions via a process of filtering out nodes that are incapable of running tasks, and then scoring the remaining ones according to a complex ranking system. The scoring rules can be controlled somewhat by using scheduling profiles and plugins, but we haven't implemented anything custom in that regard.
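
To see where the scheduler placed a pod, the node assignment and scheduling events are visible on the pod itself; a sketch (<code>tool-mytool</code> and the pod name are hypothetical):
<syntaxhighlight lang="shell-session">
$ kubectl get pods -n tool-mytool -o wide          # NODE column shows the chosen worker
$ kubectl describe pod mypod -n tool-mytool        # the Events section shows the Scheduled event
</syntaxhighlight>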


=== Worker plane ===


The worker plane refers to the components of the nodes on which actual user code is executed in containers. In tools these are named <code>tools-k8s-worker-*</code>, and run as Debian Buster instances.


==== Kubelet ====


Kubelet is the interface between kubernetes and the container engine (in our case, [[W:Docker (software)|Docker]]), deployed via Debian packages rather than static pods. It checks for new pods scheduled on the node it is running on, and makes sure they are running with appropriate volumes / images / permissions. It also does the health checks of the running pods & updates state of them in the k8s API. You can think of it as a reconciliation loop where it checks what pods must be running / not-running in its node, and makes sure that matches reality.


This runs on each node and communicates with the k8s API server over TLS, authenticated with a client certificate (puppet node certificate + CA). It runs as root since it needs to communicate with docker, and being granted access to docker is root equivalent.
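
A minimal sketch for checking the kubelet on a worker (the hostname is illustrative; kubelet is a normal systemd service since it is installed from Debian packages):
<syntaxhighlight lang="shell-session">
$ ssh tools-k8s-worker-30.tools.eqiad1.wikimedia.cloud
$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet -n 50
</syntaxhighlight>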
==== Kube-Proxy ====


kube-proxy is responsible for making sure that k8s service IPs work across the cluster. It is effectively an iptables management system. Its reconciliation loop is to get the list of service IPs across the cluster and make sure NAT rules for all of them exist on the node.


This is run as root, since it needs to use iptables. You can list the rules on any worker node with <code>iptables -t nat -L</code>
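
For example, the chains kube-proxy maintains (standard kube-proxy chain names) can be inspected directly on a worker:
<syntaxhighlight lang="shell-session">
$ sudo iptables -t nat -L KUBE-SERVICES | head -n 20
$ sudo iptables -t nat -L KUBE-NODEPORTS
</syntaxhighlight>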
==== Docker ====


We're currently using Docker as our container engine. We place up-to-date docker packages in our thirdparty/k8s repo, and pin versions in puppet. Configuration of the docker service is handled in puppet.
 
Note that we don't have a clear docker upgrade strategy yet.


==== Calico ====


Calico is the container overlay network and network policy system we use to allow all the containers to think they're on the same network. We currently use a /16 (192.168.0.0/16), from which each node gets a /24 and allocates an IP per container. It is currently a fairly bare minimum configuration to get the network going.


Calico is configured to use the Kubernetes storage, and therefore it is able to use the same etcd cluster as Kubernetes. It runs on worker nodes as a DaemonSet.
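
A sketch for checking that the Calico DaemonSet is healthy; the <code>calico-node</code> name and <code>kube-system</code> namespace are the upstream defaults and may differ in our manifests:
<syntaxhighlight lang="shell-session">
$ kubectl -n kube-system get daemonset calico-node
$ kubectl -n kube-system get pods -o wide | grep calico-node
</syntaxhighlight>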


=== Proxy ===
{{See also|Portal:Toolforge/Admin/Kubernetes/Networking and ingress}}
{{tracked|T234037|resolved}}
 
We need to be able to get http requests from the outside internet to pods running on kubernetes. We have an [https://kubernetes.github.io/ingress-nginx/ NGINX ingress controller] that handles this behind the main Toolforge proxy. Any time the [[Portal:Toolforge/Admin/Dynamicproxy|DynamicProxy]] setup doesn't have a service listed, the incoming request will be proxied to the haproxy of the Kubernetes control plane on the port specified in the hiera key <code>profile::toolforge::k8s::ingress_port</code> (currently 30000), which forwards the request to the ingress controllers. Currently, the DynamicProxy only actually serves Gridengine web services.


This allows both Gridengine and Kubernetes based web services to co-exist under the tools.wmflabs.org domain and the toolforge.org domain.
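
To see how a given tool's web traffic is routed, you can look at the ingress controller pods and the tool's ingress objects; a sketch (<code>tool-mytool</code> is a hypothetical namespace):
<syntaxhighlight lang="shell-session">
$ kubectl -n ingress-nginx get pods
$ kubectl -n tool-mytool get ingress
</syntaxhighlight>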


=== Infrastructure centralized Logging ===
 
{{Note|type=warning|content=This section is totally false at this time. Central logging needs rebuilding.}}
We aggregate all logs from syslog (so docker, kubernetes components, flannel, etcd, etc) from all kubernetes related hosts into a central instance. This is both for simplicity and to try to capture logs that would otherwise be lost to kernel issues. You can see these logs on the logging host, which can be found in [[Hiera:Tools]] as <code>k8s::sendlogs::centralserver</code>, in <code>/srv/syslog</code>. The current central logging host is <code>tools-logs-01</code>. Note that this is not related to logging for applications running on top of kubernetes at all.


== Authentication & Authorization ==
{{See also|Portal:Toolforge/Admin/Kubernetes/RBAC and PSP|Portal:Toolforge/Admin/Kubernetes/Certificates}}
In Kubernetes, there is no inherent concept of a user object at this time, but several methods of authentication to the API server by end users are allowed. They mostly require some external mechanism to generate OIDC tokens, x.509 certs or similar. The most native and convenient mechanism available to us seemed to be x.509 certificates provisioned using the Certificates API. This is managed with the [https://gerrit.wikimedia.org/g/labs/tools/maintain-kubeusers maintain-kubeusers service] that runs inside the cluster.
Services that run inside the cluster that are not managed by tool accounts are generally authenticated with a provisioned service account. Therefore, they use a service account token to authenticate.


Since the PKI structure of certificates is so integral to how everything in the system authenticates itself, further information can be found at [[Portal:Toolforge/Admin/Kubernetes/Certificates]].
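
Since tool credentials are x.509 certificates issued through the certificates API, recent issuance activity can be seen by listing CertificateSigningRequests from an admin account; a sketch:
<syntaxhighlight lang="shell-session">
$ kubectl get csr | head
$ kubectl describe csr <csr-name>     # shows the requester, requested usages and approval status
</syntaxhighlight>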


Permissions and authorization are handled via [[Portal:Toolforge/Admin/Kubernetes/RBAC and PSP|role-based access control and pod security policy]].


{{tracked|T173312|resolved}}
Tool accounts are Namespaced accounts - for each tool we create a Kubernetes Namespace, and inside the namespace they have access to create a specific set of resources (RCs, Pods, Services, Secrets, etc). There are resource based (CPU/IO/Disk) quotas imposed on a per-namespace basis described here: [[News/2020_Kubernetes_cluster_migration#What_are_the_primary_changes_with_moving_to_the_new_cluster?]]. More documentation to come




=== Admin accounts ===
{{tracked|T246059|open}}
The [https://gerrit.wikimedia.org/g/labs/tools/maintain-kubeusers maintain-kubeusers service] creates admin accounts from the <code>$project.admin</code> LDAP group. Admin accounts are basically users with the "view" permission, which allows read access to most (not all) Kubernetes resources. They have the additional benefit of having the ability to [https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation impersonate] any user in the environment. This can be useful for troubleshooting in addition to allowing the administrators to assume cluster-admin privileges without logging directly into a control plane host.


For example, bstorm is an admin account on toolsbeta Kubernetes and can therefore see all namespaces:
<syntaxhighlight lang=shell-session>
bstorm@toolsbeta-sgebastion-04:~$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                                  READY  STATUS    RESTARTS  AGE
ingress-admission    ingress-admission-55fb8554b5-5sr82                    1/1    Running  0          48d
ingress-admission    ingress-admission-55fb8554b5-n64xz                    1/1    Running  0          48d
ingress-nginx        nginx-ingress-64dc7c9c57-6zmzz                        1/1    Running  0          48d
</syntaxhighlight>
However, bstorm cannot write or delete resources directly:
<syntaxhighlight lang=shell-session>
bstorm@toolsbeta-sgebastion-04:~$ kubectl delete pods test-85d69fb4f9-r22rl -n tool-test
Error from server (Forbidden): pods "test-85d69fb4f9-r22rl" is forbidden: User "bstorm" cannot delete resource "pods" in API group "" in the namespace "tool-test"
</syntaxhighlight>
She can use the <code>kubectl-sudo</code> plugin (which internally impersonates the <code>system:masters</code> group) to delete resources:
<syntaxhighlight lang=shell-session>
bstorm@toolsbeta-sgebastion-04:~$ kubectl sudo delete pods test-85d69fb4f9-r22rl -n tool-test
pod "test-85d69fb4f9-r22rl" deleted"
</syntaxhighlight>


== NFS, LDAP and User IDs ==
{{warning|This section is entirely obsolete. It needs to be replaced with the information on PSPs and PodPresets}}


Kubernetes by default allows users to run their containers with any UID they want, including root (0). This is problematic for multiple reasons:


== Monitoring ==
{{warning|This section needs a large update}}
The Kubernetes cluster contains multiple components responsible for cluster monitoring:
* [https://github.com/kubernetes-sigs/metrics-server metrics-server] (per-container metrics)
* [https://github.com/google/cadvisor cadvisor] (per-container metrics)
* [https://github.com/kubernetes/kube-state-metrics kube-state-metrics] (cluster-level metrics)
 
Data from those services is fed into the [[Portal:Toolforge/Admin/Prometheus|Prometheus servers]]. We have no alerting yet, but that should change at some point.
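
With metrics-server in place, the standard <code>kubectl top</code> commands give a quick view of what the cluster is reporting; a sketch (<code>tool-mytool</code> is a hypothetical namespace):
<syntaxhighlight lang="shell-session">
$ kubectl top nodes
$ kubectl top pods -n tool-mytool
</syntaxhighlight>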


== Docker Images ==
# Faster deploys, since this is in the same network (vs dockerhub, which is retrieved over the internet)
# Access control is provided totally by us, less dependent on dockerhub
# Provide required LDAP configuration, so tools running inside the container are properly integrated in the Toolforge environment


This is enforced with a K8S Admission Controller, called RegistryEnforcer. It enforces that all containers come from docker-registry.tools.wmflabs.org, including the Pause container.  
The decision to follow this approach was last discussed and re-evaluated at [[Wikimedia_Cloud_Services_team/EnhancementProposals/Decision_record_T302863_toolforge_byoc]].


=== Image building ===


Images are built on the '''tools-docker-imagebuilder-01''' instance, which is setup with appropriate credentials (and a hole in the proxy for the docker registry) to allow pushing. Note that you need to be root to build / push docker containers. Suggest using <code>sudo -i</code> for it - since docker looks for credentials in the user's home directory, and it is only present in root's home directory.


==== Building base image ====


We use base images from https://docker-registry.wikimedia.org/ as the starting point for the Toolforge images. There once was a separate process for creating our own base images, but that system is no longer used.


==== Building toolforge specific images ====


These are present in the git repository <code>operations/docker-images/toollabs-images</code>. There is a base image called <code>docker-registry.tools.wmflabs.org/toolforge-buster-sssd</code> that inherits from the wikimedia-buster base image but adds the toolforge debian repository + ldap SSSD support. All Toolforge related images should be named <code>docker-registry.tools.wmflabs.org/toolforge-$SOMETHING</code>. The structure should be fairly self explanatory. There is a clone of it in <code>/srv/images/toolforge</code> on the docker builder host.


You can rebuild any particular image by running the <code>build.py</code> script in that repository. If you give it the path inside the repository where a Docker image lives, it'll rebuild all containers that your image lives from ''and'' all the containers that inherit from your container. This ensures that any changes in the Dockerfiles are completely built and reflected immediately, rather than waiting in surprise when something unrelated is pushed later on. We rely on Docker's build cache mechanisms to make sure this doesn't slow down incredibly. It then pushes them all to the docker registry.
Example of rebuilding the python2 images:
<syntaxhighlight lang="shell-session">
$ ssh tools-docker-imagebuilder-01.tools.eqiad1.wikimedia.cloud
$ screen
$ sudo su
$ cd /srv/images/toolforge
$ git fetch
$ git log --stat HEAD..@{upstream}
$ git rebase @{upstream}
$ ./build.py --push python2-sssd/base
</syntaxhighlight>


By default, the script will build the ''testing'' tag of any image, which will not be pulled by [[Help:Toolforge/Web#Web_Service_Introduction|webservice]] and it will build with the prefix of ''toolforge''.  Webservice pulls the ''latest'' tag. If the image you are working on is ready to be automatically applied to all newly-launched containers, you should add the <code>--tag latest</code> argument to your build.py command:


<syntaxhighlight lang="shell-session">
<syntaxhighlight lang="shell-session">
$ ./build.py --tag latest --prefix toollabs --push python2/base
$ ./build.py --tag latest --push python2-sssd/base
</syntaxhighlight>
</syntaxhighlight>


All of the <code>web</code> images install our locally managed <code>toollabs-webservice</code> package. When it is updated to fix bugs or add new features the Docker images need to be rebuilt. This is typically a good time to ensure that all apt managed packages are updated as well by rebuilding all of the images from scratch:
<syntaxhighlight lang="shell-session">
<syntaxhighlight lang="shell-session">
$ ssh tools-docker-builder-06.tools.eqiad.wmflabs
$ ssh tools-docker-imagebuilder-01.tools.eqiad1.wikimedia.cloud
$ screen
$ screen
$ sudo su
$ sudo su
$ cd /srv/images/toollabs
$ cd /srv/images/toolforge
$ git fetch
$ git fetch
$ git log --stat HEAD..@{upstream}
$ git log --stat HEAD..@{upstream}
$ git reset --hard origin/master
$ git reset --hard origin/master
$ ./build.py --no-cache --push base
$ ./rebuild_all.sh
</syntaxhighlight>
</syntaxhighlight>


See [[Portal:Toolforge/Admin/Kubernetes/Docker-registry]] for more info on the docker registry setup.
 
==== Managing images available for tools ====
Available images are managed in [[gitlab:repos/cloud/toolforge/image-config/|image-config]]. Here is how to add a new image:
* Add the new image name in the [[gitlab:repos/cloud/toolforge/image-config/|image-config]] repository
** Deploy this change to toolsbeta: <code>cookbook wmcs.toolforge.k8s.component.deploy --git-url [[gitlab:repos/cloud/toolforge/image-config/|https://gitlab.wikimedia.org/repos/cloud/toolforge/image-config/]]</code>
** Deploy this change to tools: <code>cookbook wmcs.toolforge.k8s.component.deploy --git-url [[gitlab:repos/cloud/toolforge/image-config/|https://gitlab.wikimedia.org/repos/cloud/toolforge/image-config/]] --project tools --deploy-node-hostname tools-k8s-control-1.tools.eqiad1.wikimedia.cloud</code>
** Recreate the jobs-api pods in the Toolsbeta cluster, to make them read the new ConfigMap
*** SSH to the bastion: <code>ssh toolsbeta-sgebastion-05.toolsbeta.eqiad1.wikimedia.cloud</code>
*** Find the pod ids: <code>kubectl get pod -n jobs-api</code>
*** Delete the pods, K8s will replace them with new ones: <code>kubectl sudo delete pod -n jobs-api {pod-name}</code>
** Do the same in the Tools cluster (same instructions, but use <code>login.toolforge.org</code> as the SSH bastion)
* From a bastion, check you can run the new image with <code>webservice {image-name} shell</code>
* From a bastion, check the new image is listed when running <code>toolforge-jobs images</code>
* Update the [https://wikitech.wikimedia.org/wiki/Help:Toolforge/Kubernetes Toolforge/Kubernetes] wiki page to include the new image


== Building new nodes ==
 
{{See also|Portal:Toolforge/Admin/Kubernetes/Deploying}}
This section documents how to build new k8s related nodes.


=== Bastion nodes ===


Kubernetes bastion nodes provide <code>kubectl</code> access to the cluster, installed from the thirdparty/k8s repo. This is in puppet and no other special configuration is required.
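
A minimal sketch for verifying the client is present on a bastion (versions shown will vary with what the thirdparty/k8s repo currently ships):
<syntaxhighlight lang="shell-session">
$ which kubectl
/usr/bin/kubectl
$ kubectl version --client
</syntaxhighlight>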
 


=== Worker nodes ===


Build nodes according to the information here [[Portal:Toolforge/Admin/Kubernetes/Deploying#worker_nodes]]
 
Worker nodes are where user containers/pods are actually executed. They are large nodes running Debian Buster.
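
After building a worker per the deployment docs, a sketch for confirming it joined the cluster and is ready (run from a control node or with admin credentials):
<syntaxhighlight lang="shell-session">
$ kubectl get nodes -o wide | grep tools-k8s-worker
$ kubectl describe node <new-node-name> | grep -A5 Conditions
</syntaxhighlight>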


=== Builder nodes ===
# Run <code>sudo rm -rf /var/lib/puppet/ssl</code> on the new instance.
# Run puppet on the new instance again. This will make puppet create a new certificate signing request and send it to the puppetmaster. If you get errors similar to [[phab:P3623|this]], it means there already existed an instance with the same name attached to the puppetmaster that wasn't decommissioned properly. You can run <nowiki><code>sudo puppet cert clean $fqdn</code></nowiki> on the puppetmaster and then repeat steps 3 and 4.
# On the puppetmaster (<code>tools-puppetmaster-02.tools.eqiad1.wikimedia.cloud</code>), run <code>sudo puppet cert sign <fqdn></code>, where fqdn is the fqdn of the new instance. This should not be automated away (the signing) since we depend on only signed clients having access for secrets we store in the puppetmaster.  
# Run puppet again on the new instance, and it should run to completion now!


== Administrative Actions ==
Perform these actions from a [[Portal:Toolforge/Admin/Kubernetes#Node_naming_conventions|toolforge bastion]].
=== Quota management===
Resource quotas and limit ranges in Kubernetes are set on a namespace scope. Default quotas are created for Toolforge tools by the maintain-kubeusers service. The difference between them is that resource quotas govern how much of a resource all pods in the namespace can use collectively, while limit ranges limit how much CPU or RAM a particular container (''not'' pod) may consume.
To view a quota for a tool, you can use your admin account (your login user if you are in the tools.admin group) and run <code>kubectl -n tool-$toolname get resourcequotas</code> to list them. In most cases there should be only one, with the same name as the namespace. The easiest way to see its contents is to output the quota as YAML, e.g. <code>kubectl -n tool-cdnjs get resourcequotas tool-cdnjs -o yaml</code>. Likewise, you can check the limit range with <code>kubectl -n tool-$toolname describe limitranges</code>.
If you want to update a quota for a user who has completed the process for doing so on [https://phabricator.wikimedia.org/project/manage/4834/ phabricator], it is as simple as editing the Kubernetes object.  Your admin account needs to impersonate cluster-admin to do this, such as:
<syntaxhighlight lang=shell-session>
$ kubectl sudo edit resourcequota tool-cdnjs -n tool-cdnjs
</syntaxhighlight>
The same can be done for a limit range.
<syntaxhighlight lang=shell-session>
$ kubectl sudo edit limitranges tool-mix-n-match -n tool-mix-n-match
</syntaxhighlight>
Requests can be fulfilled by bumping whichever quota item is requested according to the approved request, but do not change the NodePort services from 0 because we don't allow those for technical reasons.
See also [[Help:Toolforge/Kubernetes#Namespace-wide_quotas]]


=== Node management ===


You can run these as any user on a kubernetes control node (currently tools-k8s-control-{4,5,6}.tools.eqiad1.wikimedia.cloud). It is ok to kill pods on individual nodes - the controller manager will notice they are gone soon and recreate them elsewhere.


==== Getting a list of nodes ====
<code>kubectl get node</code>


==== Cordoning a node ====


This prevents new pods from being scheduled on it, but does not kill currently running pods there.


<code>kubectl cordon $node_hostname</code>


==== Depooling a node ====


This deletes all running pods in that node as well as marking it as unschedulable. The <code>--delete-local-data --force</code> allows deleting paws containers (since those won't be automatically respawned)


<code>kubectl drain --ignore-daemonsets --delete-emptydir-data --force $node_hostname</code>


==== Uncordon/Repool a node ====


Make sure that the node shows up as 'ready' in <code>kubectl get node</code> before repooling it!
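
The uncordon itself is the standard kubectl command (a sketch): <code>kubectl uncordon $node_hostname</code>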
==== Decommissioning a node ====


When you are permanently decommissioning a node, you need to do the following:


# Depool the node: <code>kubectl drain --delete-local-data --force $node_fqdn</code>
# Remove the node: <code>kubectl delete node $node_fqdn</code>
# Shutdown the node using Horizon or <code>openstack</code> commands
# (optional) Wait a bit if you feel that this node may need to be recovered for some reason
# Delete the node using Horizon or <code>openstack</code> commands
# Clean its puppet certificate: Run <code>sudo puppet cert clean $fqdn</code> on the tools puppetmaster
# Remove it from the list of worker nodes in the ''profile::toolforge::k8s::worker_nodes'' hiera key for the haproxy nodes (in the puppet prefix ''tools-k8s-haproxy'').


=== pods management ===


Please read [[Portal:Toolforge/Admin/Kubernetes/Pod_tracing]]


== Custom admission controllers ==


To get the security features we need in our environment, we have written and deployed a few additional [https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks admission webhooks]. Since Kubernetes is written in Go, these admission controllers are also written in Go so they can use the same objects and client libraries. The custom controllers are documented largely in their README files.
 


See [[Portal:Toolforge/Admin/Kubernetes/Custom_components]]
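
As a sketch, the webhook configurations currently registered with the API server can be listed with standard kubectl (names will vary):
<syntaxhighlight lang="shell-session">
$ kubectl get validatingwebhookconfigurations
$ kubectl get mutatingwebhookconfigurations
</syntaxhighlight>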


=== Ingress Admission Webhook ===


This prevents Toolforge users from creating arbitrary ingresses that might incorrectly or maliciously route traffic. See https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/toolforge/ingress-admission-controller/


=== Registry Admission Webhook ===


This webhook controller prevents pods that use images from external repositories from running. It does not apply to <code>kube-system</code> or other namespaces we specify in the webhook config, because those namespaces use images from the upstream systems directly. See https://gerrit.wikimedia.org/r/plugins/gitiles/labs/tools/registry-admission-webhook/


=== Volume Admission Webhook ===

This mutating admission webhook mounts NFS volumes to tool pods labelled with <code>toolforge: tool</code>. It replaced kubernetes PodPresets which were removed in the 1.20 update. https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/toolforge/volume-admission-controller/


== Common issues ==
!How to find active one?
|-
|Kubernetes Control Node
|tools-k8s-control-
|Hiera: <code>profile::toolforge::k8s::control_nodes</code>
|-
|Kubernetes worker node
|tools-k8s-worker-
|Run <code>kubectl get node</code> on the kubernetes master host
|-
|Kubernetes etcd
|tools-k8s-etcd-
|All nodes with the given prefix, usually
|-
|Bastions
|tools-sgebastion
|DNS: login.toolforge.org and dev.toolforge.org
|-
|Web Proxies
|tools-proxy
|DNS: tools.wmflabs.org. and toolforge.org.
Hiera: <code>active_proxy_host</code>
|-
|GridEngine worker node
(Debian Stretch)
|tools-sgeexec-09
|
|-
|GridEngine webgrid node
(Lighttpd, Stretch)
|tools-sgewebgrid-lighttpd-09
|
|-
|GridEngine webgrid node
(Generic, Stretch)
|tools-sgewebgrid-generic-09
|
|-
|GridEngine master
|tools-sgegrid-master
|
|-
|GridEngine master shadow
|tools-sgegrid-shadow
|
|-
|Cron runner
|tools-sgecron-
|Hiera: active_cronrunner
|-

==See also==
* [[PAWS/Tools/Admin]]
* [[Portal:Toolforge/Admin/lima-kilo]]

Latest revision as of 23:56, 27 April 2023

Kubernetes (often abbreviated k8s) is an open-source system for automating deployment and management of applications running in containers. Kubernetes was selected in 2015 by the Cloud Services team as the replacement for Grid Engine in the Toolforge project.[1] Usage of k8s by Tools began in mid-2016.[2]

Sub pages

Upstream Documentation

If you need tutorials, information or reference material, check out https://kubernetes.io/docs/home/. The documentation can be adjusted to the version of Kubernetes we currently have deployed.

Cluster Build

The entire build process for reference and reproducibility is documented at Portal:Toolforge/Admin/Kubernetes/Deploying.

Components

K8S components are generally in two 'planes' - the control plane and the worker plane. You can also find more info about the general architecture of kubernetes (along with a nice diagram!) in the upstream documentation.

The most specific information on the build of our setup is available in the build documentation at Portal:Toolforge/Admin/Kubernetes/Deploying

Control Plane

Kubernetes control plane nodes make global decisions about the cluster. This is where all the control and scheduling happens. Currently, most of these components (except etcd) run on each of the three control nodes. The three nodes are redundant, load balanced by the service object inside the cluster and by haproxy outside it.

Etcd

Kubernetes stores all state in etcd - all other components are stateless. The etcd cluster is only accessed directly by the API Server and no other component. Direct access to this etcd cluster is equivalent to root on the entire k8s cluster, so it is firewalled off to be reachable only by the rest of the control plane nodes and the etcd nodes, client certificate verification is used for authentication (puppet is the CA), and secrets are encrypted at rest in our etcd setup.

We currently use a 3 node cluster, named tools-k8s-etcd-[4-6]. They're all smallish Debian Buster instances configured largely by the same etcd puppet code we use in production.

The API Server

This is the heart of the Kubernetes control plane. All communication between all components, whether they are internal system components or external user components, must go through the API server. It is purely a data access layer, containing no logic related to any of the actual end-functionality Kubernetes offers. It offers the following functionality:

  • Authentication & Authorization
  • Validation
  • Read / Write access to all the API endpoints
  • Watch functionality for endpoints, which notifies clients when state changes for a particular resource

When you are interacting with the Kubernetes API, this is the server that is serving your requests.

The API server runs on each control plane node, currently tools-k8s-control-1/2/3. It listens on port 6443, using its own internal CA for TLS and authentication, and should be accessed outside the cluster via the haproxy frontend at k8s.tools.eqiad1.wikimedia.cloud. The localhost insecure port is disabled. All certs for the cluster's use in API server communication are provisioned using the certificates API. Please note that we do use the cluster root CA in the certificates API. The wording in the upstream documentation is to warn the users that this is only one way to configure it. That API can be used for other types of certs as well, if a cluster builder so chooses.
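A quick way to sanity-check the API server from a control node or bastion (a sketch, assuming you already have working admin credentials for kubectl) is:

$ kubectl cluster-info                 # shows the API endpoint kubectl is configured to talk to
$ kubectl get --raw='/healthz'         # should print 'ok' if the API server is healthy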

Controller Manager

All other cluster-level functions are currently performed by the Controller Manager. For instance, deprecated ReplicationController objects are created and updated by the replication controller (constantly checking & spawning new pods if necessary), and nodes are discovered, managed, and monitored by the node controller. The general idea is one of a 'reconciliation loop' - poll/watch the API server for desired state and current state, then perform actions to make them match.

The Controller Manager also runs on the k8s control nodes, and communicates with the API server with appropriate TLS over the ClusterIP of the API server. It runs in a static pod.

The scheduler

This simply polls the API for Pods with no assigned node, and selects appropriate healthy worker nodes for them. This is also a conceptually very simple reconciliation loop, and it is possible to replace the one we use (the default kube-scheduler) with a custom scheduler (and hence isn't part of Controller Manager).

The scheduler runs on the k8s control nodes in a static Pod and communicates with the API server over mutual TLS like all other components. The scheduler makes decisions via a process of filtering out nodes that are incapable of running tasks, and then scoring the remaining ones according to a complex ranking system. The scoring rules can be controlled somewhat by using scheduling profiles and plugins, but we haven't implemented anything custom in that regard.
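To see what the scheduler decided for a given pod, the node it landed on and the scheduling events are usually enough; a sketch (namespace and pod names are placeholders):

$ kubectl get pods -n tool-mytool -o wide      # the NODE column shows where each pod was scheduled
$ kubectl describe pod mypod -n tool-mytool    # for recent pods, the Events section includes the 'Scheduled' event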

Worker plane

The worker plane refers to the components of the nodes on which actual user code is executed in containers. In tools these are named tools-k8s-worker-*, and run as Debian Buster instances.

Kubelet

Kubelet is the interface between kubernetes and the container engine (in our case, Docker), deployed via Debian packages rather than static pods. It checks for new pods scheduled on the node it is running on, and makes sure they are running with appropriate volumes / images / permissions. It also performs health checks on the running pods and updates their state in the k8s API. You can think of it as a reconciliation loop where it checks which pods must be running / not running on its node, and makes sure that matches reality.

This runs on each node and communicates with the k8s API server over TLS, authenticated with a client certificate (puppet node certificate + CA). It runs as root since it needs to communicate with docker, and being granted access to docker is root equivalent.
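Because the kubelet is a Debian package rather than a static pod, it can be inspected as a normal systemd service on any worker (a sketch, assuming the standard unit name):

$ systemctl status kubelet                           # should be active (running)
$ sudo journalctl -u kubelet --since "1 hour ago"    # recent kubelet logs (pod syncs, health check failures, etc.)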

Kube-Proxy

kube-proxy is responsible for making sure that k8s service IPs work across the cluster. It is effectively an iptables management system. Its reconciliation loop is to get the list of service IPs across the cluster, and make sure NAT rules for all of those exist on the node.

This is run as root, since it needs to use iptables. You can list the rules on any worker node with iptables -t nat -L
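For example, the service rules end up in nat-table chains maintained by kube-proxy (KUBE-SERVICES is the standard entry chain in iptables mode):

$ sudo iptables -t nat -L KUBE-SERVICES -n | head    # one match per service ClusterIP, jumping to per-service KUBE-SVC-* chains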

Docker

We're currently using Docker as our container engine. We place up-to-date docker packages in our thirdparty/k8s repo, and pin versions in puppet. Configuration of the docker service is handled in puppet.

Calico

Calico is the container overlay network and network policy system we use to allow all the containers to think they're on the same network. We currently use a /16 (192.168.0.0), from which each node gets a /24 and allocates an IP per container. It is currently a fairly bare minimum configuration to get the network going.

Calico is configured to use the Kubernetes storage, and therefore it is able to use the same etcd cluster as Kubernetes. It runs on worker nodes as a DaemonSet.
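To check that the Calico agents are running on every worker, look at the DaemonSet; a sketch assuming the upstream default names (calico-node in kube-system), which should be verified against our manifests:

$ kubectl -n kube-system get daemonset calico-node           # DESIRED should equal READY (one pod per worker)
$ kubectl -n kube-system get pods -l k8s-app=calico-node -o wide | head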

Proxy

We need to be able to get http requests from the outside internet to pods running on kubernetes. We have an NGINX ingress controller that handles this behind the main Toolforge proxy. Any time the DynamicProxy setup doesn't have a service listed, the incoming request is proxied to the haproxy of the Kubernetes control plane on the port specified in the hiera key profile::toolforge::k8s::ingress_port (currently 30000), which forwards the request to the ingress controllers. Currently, the DynamicProxy only actually serves Gridengine web services.

This allows both Gridengine and Kubernetes based web services to co-exist under the tools.wmflabs.org domain and the toolforge.org domain.
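To trace the Kubernetes side of that path, the ingress controller and the per-tool Ingress objects can be inspected directly (the ingress-nginx namespace also shows up in the admin example further down):

$ kubectl -n ingress-nginx get pods,svc          # the nginx ingress controller that haproxy forwards to
$ kubectl get ingress --all-namespaces | head    # per-tool Ingress objects the controller routes on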

Infrastructure centralized Logging

We aggregate all logs from syslog (so docker, kubernetes components, flannel, etcd, etc.) from all kubernetes related hosts into a central instance. This is both for simplicity and to capture logs that would otherwise be lost to kernel issues. You can see these logs on the logging host, which can be found in Hiera:Tools as k8s::sendlogs::centralserver, in /srv/syslog. The current central logging host is tools-logs-01. Note that this is not related to logging for applications running on top of kubernetes at all.

Authentication & Authorization

In Kubernetes, there is no inherent concept of a user object at this time, but several methods of authentication to the API server by end users are allowed. They mostly require some external mechanism to generate OIDC tokens, x.509 certs or similar. The most native and convenient mechanism available to us seemed to be x.509 certificates provisioned using the Certificates API. This is managed with the maintain-kubeusers service that runs inside the cluster.

Services that run inside the cluster that are not managed by tool accounts are generally authenticated with a provisioned service account. Therefore, they use a service account token to authenticate.

Since the PKI structure of certificates is so integral to how everything in the system authenticates itself, further information can be found at Portal:Toolforge/Admin/Kubernetes/Certificates

Permissions and authorization are handled via role-based access control and pod security policy

Tool accounts are namespaced accounts - for each tool we create a Kubernetes Namespace, and inside that namespace they have access to create a specific set of resources (RCs, Pods, Services, Secrets, etc.). There are resource-based (CPU/IO/Disk) quotas imposed on a per-namespace basis, described here: News/2020_Kubernetes_cluster_migration#What_are_the_primary_changes_with_moving_to_the_new_cluster?. More documentation to come.


Admin accounts

The maintain-kubeusers service creates admin accounts from the $project.admin LDAP group. Admin accounts are basically users with the "view" permission, which allows read access to most (not all) Kubernetes resources. They have the additional benefit of having the ability to impersonate any user in the environment. This can be useful for troubleshooting in addition to allowing the administrators to assume cluster-admin privileges without logging directly into a control plane host.

For example, bstorm is an admin account on toolsbeta Kubernetes and can therefore see all namespaces:

bstorm@toolsbeta-sgebastion-04:~$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                                   READY   STATUS    RESTARTS   AGE
ingress-admission    ingress-admission-55fb8554b5-5sr82                     1/1     Running   0          48d
ingress-admission    ingress-admission-55fb8554b5-n64xz                     1/1     Running   0          48d
ingress-nginx        nginx-ingress-64dc7c9c57-6zmzz                         1/1     Running   0          48d

However, bstorm cannot write or delete resources directly:

bstorm@toolsbeta-sgebastion-04:~$ kubectl delete pods test-85d69fb4f9-r22rl -n tool-test
Error from server (Forbidden): pods "test-85d69fb4f9-r22rl" is forbidden: User "bstorm" cannot delete resource "pods" in API group "" in the namespace "tool-test"

She can use the kubectl-sudo plugin (which internally impersonates the system:masters group) to delete resources:

bstorm@toolsbeta-sgebastion-04:~$ kubectl sudo delete pods test-85d69fb4f9-r22rl -n tool-test
pod "test-85d69fb4f9-r22rl" deleted"

NFS, LDAP and User IDs

Kubernetes by default allows users to run their containers with any UID they want, including root (0). This is problematic for multiple reasons:

  1. They can then mount any path in the worker instance as r/w and do whatever they want. This basically gives random users full root on all the instances
  2. They can mount NFS and read / write all tools' data, which is terrible and unacceptable.

So by default, being able to access the k8s api is the same as being able to access the Docker socket, which is root equivalent. This is bad for a multi-tenant system like ours, where we'd like to have multiple users running in the same k8s cluster.

Fortunately, unlike docker, k8s does allow us to write admission controllers that can place additional restrictions / modifications on what k8s users can do. We utilize this in the form of a UidEnforcer admission controller that enforces the following:

  1. All namespaces must have a RunAsUser annotation
  2. Pods (and their constituent containers) can run only with that UID

In addition, we establish the following conventions:

  1. Each tool gets its own Namespace
  2. During namespace creation, we add the RunAsUser annotation to match the UID of the tool in LDAP
  3. Namespace creation / modification is a restricted operation that only admins can perform.

This essentially gives us a setup where the users who can run a process with user id X via Grid Engine / the bastions today are the only people who can do so on Kubernetes as well. This also works out well for dealing with NFS permissions.
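A minimal way to spot-check this for one tool (tool name is a placeholder; check the exact annotation key used by maintain-kubeusers if it does not match):

$ kubectl get namespace tool-mytool -o yaml | grep -i runasuser    # should show the tool's LDAP UID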

Monitoring

The Kubernetes cluster contains multiple components responsible for cluster monitoring:

Data from those services is fed into the Prometheus servers. We have no alerting yet, but that should change at some point.

Docker Images

We only allow running images from the Tools Docker registry, which is available publicly (and inside tools) at docker-registry.tools.wmflabs.org. This is for the following purposes:

  1. Making it easy to enforce our Open Source Code only guideline
  2. Make it easy to do security updates when necessary (just rebuild all the containers & redeploy)
  3. Faster deploys, since the registry is in the same network (vs dockerhub, which is retrieved over the internet)
  4. Access control is provided totally by us, less dependent on dockerhub
  5. Provide required LDAP configuration, so tools running inside the container are properly integrated in the Toolforge environment

This is enforced with a K8S Admission Controller, called RegistryEnforcer. It enforces that all containers come from docker-registry.tools.wmflabs.org, including the Pause container.

The decision to follow this approach was last discussed and re-evaluated at Wikimedia_Cloud_Services_team/EnhancementProposals/Decision_record_T302863_toolforge_byoc.
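A quick way to spot-check which registry a tool's running containers actually come from (tool name is a placeholder):

$ kubectl get pods -n tool-mytool -o jsonpath='{.items[*].spec.containers[*].image}'; echo
# every image listed should start with docker-registry.tools.wmflabs.org/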

Image building

Images are built on the tools-docker-imagebuilder-01 instance, which is set up with appropriate credentials (and a hole in the proxy for the docker registry) to allow pushing. Note that you need to be root to build / push docker containers. We suggest using sudo -i, since docker looks for credentials in the user's home directory, and they are only present in root's home directory.

Building base image

We use base images from https://docker-registry.wikimedia.org/ as the starting point for the Toolforge images. There once was a separate process for creating our own base images, but that system is no longer used.

Building toolforge specific images

These are present in the git repository operations/docker-images/toollabs-images. There is a base image called docker-registry.tools.wmflabs.org/toolforge-buster-sssd that inherits from the wikimedia-buster base image but adds the toolforge debian repository + ldap SSSD support. All Toolforge related images should be named docker-registry.tools.wmflabs.org/toolforge-$SOMETHING. The structure should be fairly self explanatory. There is a clone of it in /srv/images/toolforge on the docker builder host.

You can rebuild any particular image by running the build.py script in that repository. If you give it the path inside the repository where a Docker image lives, it will rebuild all the images your image builds from and all the images that inherit from your image. This ensures that any changes in the Dockerfiles are completely built and reflected immediately, rather than surprising you when something unrelated is pushed later on. We rely on Docker's build cache mechanisms to keep this from becoming unreasonably slow. It then pushes them all to the docker registry.

Example of rebuilding the python2 images:

$ ssh tools-docker-imagebuilder-01.tools.eqiad1.wikimedia.cloud
$ screen
$ sudo su
$ cd /srv/images/toolforge
$ git fetch
$ git log --stat HEAD..@{upstream}
$ git rebase @{upstream}
$ ./build.py --push python2-sssd/base

By default, the script will build the testing tag of any image, which will not be pulled by webservice, and it will build with the image name prefix toolforge. Webservice pulls the latest tag. If the image you are working on is ready to be automatically applied to all newly-launched containers, you should add the --tag latest argument to your build.py command:

$ ./build.py --tag latest --push python2-sssd/base

You will probably want to clean up intermediate layers after building new containers:

$ docker ps --no-trunc -aqf "status=exited" | xargs docker rm
$ docker images --no-trunc | grep '<none>' | awk '{ print $3 }' | xargs -r docker rmi

All of the web images install our locally managed toollabs-webservice package. When it is updated to fix bugs or add new features the Docker images need to be rebuilt. This is typically a good time to ensure that all apt managed packages are updated as well by rebuilding all of the images from scratch:

$ ssh tools-docker-imagebuilder-01.tools.eqiad1.wikimedia.cloud
$ screen
$ sudo su
$ cd /srv/images/toolforge
$ git fetch
$ git log --stat HEAD..@{upstream}
$ git reset --hard origin/master
$ ./rebuild_all.sh

See Portal:Toolforge/Admin/Kubernetes/Docker-registry for more info on the docker registry setup.

Managing images available for tools

Available images are managed in image-config. Here is how to add a new image:

  • Add the new image name in the image-config repository
    • Deploy this change to toolsbeta: cookbook wmcs.toolforge.k8s.component.deploy --git-url https://gitlab.wikimedia.org/repos/cloud/toolforge/image-config/
    • Deploy this change to tools: cookbook wmcs.toolforge.k8s.component.deploy --git-url https://gitlab.wikimedia.org/repos/cloud/toolforge/image-config/ --project tools --deploy-node-hostname tools-k8s-control-1.tools.eqiad1.wikimedia.cloud
    • Recreate the jobs-api pods in the Toolsbeta cluster, to make them read the new ConfigMap
      • SSH to the bastion: ssh toolsbeta-sgebastion-05.toolsbeta.eqiad1.wikimedia.cloud
      • Find the pod ids: kubectl get pod -n jobs-api
      • Delete the pods, K8s will replace them with new ones: kubectl sudo delete pod -n jobs-api {pod-name}
    • Do the same in the Tools cluster (same instructions, but use login.toolforge.org as the SSH bastion)
  • From a bastion, check you can run the new image with webservice {image-name} shell
  • From a bastion, check the new image is listed when running toolforge-jobs images
  • Update the Toolforge/Kubernetes wiki page to include the new image

Building new nodes

Bastion nodes

Kubernetes bastion nodes provide kubectl access to the cluster; kubectl is installed from the thirdparty/k8s repo. This is in puppet and no other special configuration is required.

Worker nodes

Worker nodes are where user containers/pods are actually executed. They are large nodes running Debian Buster. Build new worker nodes according to the information at Portal:Toolforge/Admin/Kubernetes/Deploying#worker_nodes.

Builder nodes

Builder nodes are where you can create new Docker images and upload them to the Docker registry.

You can provision a new builder node with the following:

  1. Provision a new image using a name starting with tools-docker-builder-
  2. Switch the node to the new puppetmaster using the steps below, and run puppet until it has no errors.
  3. Edit hiera to set docker::builder_host to the new hostname
  4. Run puppet on the host named by docker::registry in hiera to allow uploading images

Switch to new puppetmaster

You need to switch the node to the tools puppetmaster first. This is common to all roles, because we require secret storage, and that is impossible with the default labs puppetmaster. This process should be made easier / simpler at some point, but until then...

  1. Make sure puppet has run at least once on the new instance. On second run, it will produce a large blob of red error messages about SSL certificates. So just run puppet until you get that :)
  2. Run sudo rm -rf /var/lib/puppet/ssl on the new instance.
  3. Run puppet on the new instance again. This will make puppet create a new certificate signing request and send it to the puppetmaster. If you get errors similar to this, it means there already existed an instance with the same name attached to the puppetmaster that wasn't decommissioned properly. You can run sudo puppet cert clean $fqdn on the puppetmaster and then repeat steps 3 and 4.
  4. On the puppetmaster (tools-puppetmaster-02.tools.eqiad1.wikimedia.cloud), run sudo puppet cert sign <fqdn>, where fqdn is the fqdn of the new instance. The signing should not be automated away, since we depend on only signed clients having access to secrets we store in the puppetmaster.
  5. Run puppet again on the new instance, and it should run to completion now!
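Condensed into commands, the dance above looks roughly like this (hostnames are examples; "run puppet" here means a normal agent run):

# on the new instance
$ sudo puppet agent -tv        # first run(s); expect SSL errors after switching masters
$ sudo rm -rf /var/lib/puppet/ssl
$ sudo puppet agent -tv        # generates a new CSR against the tools puppetmaster
# on tools-puppetmaster-02.tools.eqiad1.wikimedia.cloud
$ sudo puppet cert sign <new-instance-fqdn>
# on the new instance again
$ sudo puppet agent -tv        # should now run to completion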

Administrative Actions

Perform these actions from a toolforge bastion.

Quota management

Resource quotas and limit ranges in Kubernetes are set on a namespace scope. Default quotas are created for Toolforge tools by the maintain-kubeusers service. The difference between them is that resource quotas limit how much of a resource all pods in the namespace may use collectively, while limit ranges limit how much CPU or RAM a particular container (not pod) may consume.

To view a quota for a tool, you can use your admin account (your login user if you are in the tools.admin group) and run kubectl -n tool-$toolname get resourcequotas to list them. There should be only one, with the same name as the namespace, in most cases. The easy way to see what's in there is to output the yaml of the quota, like kubectl -n tool-cdnjs get resourcequotas tool-cdnjs -o yaml. Likewise, you can check the limit range with kubectl -n tool-$toolname describe limitranges

If you want to update a quota for a user who has completed the process for doing so on phabricator, it is as simple as editing the Kubernetes object. Your admin account needs to impersonate cluster-admin to do this, such as:

$ kubectl sudo edit resourcequota tool-cdnjs -n tool-cdnjs

The same can be done for a limit range.

$ kubectl sudo edit limitranges tool-mix-n-match -n tool-mix-n-match


Requests can be fulfilled by bumping whichever quota item was requested, according to the approved request, but do not change the NodePort services quota from 0, because we don't allow those for technical reasons.

See also Help:Toolforge/Kubernetes#Namespace-wide_quotas

Node management

You can run these as any user on a kubernetes control node (currently tools-k8s-control-{4,5,6}.tools.eqiad1.wikimedia.cloud). It is ok to kill pods on individual nodes - the controller manager will notice they are gone soon and recreate them elsewhere.

Getting a list of nodes

kubectl get node

Cordoning a node

This prevents new pods from being scheduled on it, but does not kill currently running pods there.

kubectl cordon $node_hostname

Depooling a node

This deletes all running pods on that node as well as marking it as unschedulable. The --delete-emptydir-data --force flags allow deleting paws containers (since those won't be automatically respawned)

kubectl drain --ignore-daemonsets --delete-emptydir-data --force $node_hostname
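When depooling several workers (for example ahead of a rebuild), a small loop helps; a sketch only, with example node names, so double-check the list before running it:

$ for node in tools-k8s-worker-30 tools-k8s-worker-31; do
>   kubectl drain --ignore-daemonsets --delete-emptydir-data --force "$node"
> done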

Uncordon/Repool a node

Make sure that the node shows up as 'ready' in kubectl get node before repooling it!

kubectl uncordon $node_fqdn

Decommissioning a node

When you are permanently decommissioning a node, you need to do the following:

  1. Depool the node: kubectl drain --ignore-daemonsets --delete-emptydir-data --force $node_fqdn
  2. Remove the node: kubectl delete node $node_fqdn
  3. Shutdown the node using Horizon or openstack commands
  4. (optional) Wait a bit if you feel that this node may need to be recovered for some reason
  5. Delete the node using Horizon or openstack commands
  6. Clean its puppet certificate: Run sudo puppet cert clean $fqdn on the tools puppetmaster
  7. Remove it from the list of worker nodes in the profile::toolforge::k8s::worker_nodes hiera key for the haproxy nodes (in the puppet prefix tools-k8s-haproxy).

pods management

Administrative actions related to concrete pods/tools.

pods causing too much traffic

Please read Portal:Toolforge/Admin/Kubernetes/Pod_tracing

Custom admission controllers

To get the security features we need in our environment, we have written and deployed a few additional admission webhooks. Since kubernetes is written in Go, so are these admission controllers to take advantage of using the same objects, etc. The custom controllers are documented largely in their README files.

See Portal:Toolforge/Admin/Kubernetes/Custom_components

Ingress Admission Webhook

This prevents Toolforge users from creating arbitrary ingresses that might incorrectly or maliciously route traffic https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/toolforge/ingress-admission-controller/

Registry Admission Webhook

This webhook controller prevents pods from external image repositories from running. It does not apply to kube-system or other namespaces we specify in the webhook config because these images are used from the upstream systems directly. https://gerrit.wikimedia.org/r/plugins/gitiles/labs/tools/registry-admission-webhook/

Volume Admission Webhook

This mutating admission webhook mounts NFS volumes to tool pods labelled with toolforge: tool. It replaced kubernetes PodPresets which were removed in the 1.20 update. https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/toolforge/volume-admission-controller/
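To confirm the three webhooks are actually registered with the API server, listing the webhook configurations is usually enough (the volume admission webhook is the mutating one; the other two should appear as validating):

$ kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations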

Common issues

SSHing into a new node doesn't work, asks for password

Usually because the first puppet run hasn't happened yet. Just wait for a bit! If that doesn't work, look at the console log for the instance - if it is *not* at a login prompt, read the logs to see what is up.

Node naming conventions

Node type (prefix): how to find the active one

  • Kubernetes Control Node (tools-k8s-control-): Hiera: profile::toolforge::k8s::control_nodes
  • Kubernetes worker node (tools-k8s-worker-): run kubectl get node on the kubernetes master host
  • Kubernetes etcd (tools-k8s-etcd-): all nodes with the given prefix, usually
  • Docker Registry (tools-docker-registry-): the node that docker-registry.tools.wmflabs.org resolves to
  • Docker Builder (tools-docker-builder-): Hiera: docker::builder_host
  • Bastions (tools-sgebastion): DNS: login.toolforge.org and dev.toolforge.org
  • Web Proxies (tools-proxy): DNS: tools.wmflabs.org. and toolforge.org.; Hiera: active_proxy_host
  • GridEngine worker node, Debian Stretch (tools-sgeexec-09)
  • GridEngine webgrid node, Lighttpd, Stretch (tools-sgewebgrid-lighttpd-09)
  • GridEngine webgrid node, Generic, Stretch (tools-sgewebgrid-generic-09)
  • GridEngine master (tools-sgegrid-master)
  • GridEngine master shadow (tools-sgegrid-shadow)
  • Redis (tools-redis): Hiera: active_redis
  • Mail (tools-mail)
  • Cron runner (tools-sgecron-): Hiera: active_cronrunner
  • Elasticsearch (tools-elastic-)

References

See also

  • PAWS/Tools/Admin
  • Portal:Toolforge/Admin/lima-kilo