You are browsing a read-only backup copy of Wikitech. The primary site can be found at wikitech.wikimedia.org

Help:Toolforge/Kubernetes: Difference between revisions

From Wikitech-static
imported>Danilo
imported>Majavah
(→‎Broken npm: there is a modern npm these days)
 
(5 intermediate revisions by 3 users not shown)




== Kubernetes single run jobs ==
If you need to run a job only once you can use a pod, which is the smallest deployable unit in Kubernetes. To deploy a pod, create a YAML file like the example below.


<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    toolforge: tool
spec:
  containers:
  - name: main
    workingDir: /data/project/mytool
    image: docker-registry.tools.wmflabs.org/toolforge-python37-sssd-base:latest
    command: ['/bin/bash', '-c', 'source venv3/bin/activate; ./myapp.py']
  restartPolicy: Never
</syntaxhighlight>
 
Change the name "example" to the name you want for your pod, change workingDir to the directory where your application lives, change the image to the one you need, change the command to call your app, and save the YAML file. You can then create the pod with <code>kubectl apply -f <path-to-yaml-file></code>.
 
You can check whether the pod is running with <code>kubectl get pods</code> and see the pod's output with <code>kubectl logs <pod-name></code>. Note that you cannot have two pods with the same name; you need to delete the old pod with <code>kubectl delete pod <pod-name></code> before creating a new one with the same name.
 
You can change "restartPolicy: Never" to "restartPolicy: OnFailure" to make the pod restart its container when it exits with an error. However, for a continuous job it is recommended to use a "deployment" workload type, as described in a section below: if the Kubernetes node where the pod is running fails, a deployment will recreate the pod on another node, which does not happen when you create a simple pod.
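Putting the commands above together, a typical single-run pod lifecycle looks like this sketch (the file name pod.yaml and the pod name "example" come from the example above):

```shell-session
$ kubectl apply -f pod.yaml      # create the pod from the YAML file
$ kubectl get pods               # check whether it is running
$ kubectl logs example           # read the script's output
$ kubectl delete pod example     # required before re-creating a pod with the same name
```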


==Kubernetes cronjobs==
It is possible to run cron jobs on Kubernetes (see [https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/ upstream documentation] for a full description).


===Example cronjob.yaml===


Wikiloveslove is a Python 3.7 bot that runs as a Kubernetes CronJob. The cronjobs.yaml file it uses to tell Kubernetes how to start and schedule the bot is reproduced below.


{{Collapse top|/data/project/wikiloveslove/cronjobs.yaml (copied 2020-02-01)}}
<syntaxhighlight lang="yaml">
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: list-images
  labels:
    name: wikiloveslove.listimages
    # The toolforge=tool label will cause $HOME and other paths to be mounted from Toolforge
    toolforge: tool
spec:
  schedule: "28 * * 2 *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            toolforge: tool
        spec:
          containers:
          - name: bot
            workingDir: /data/project/wikiloveslove
            image: docker-registry.tools.wmflabs.org/toolforge-python37-sssd-base:latest
            args:
            - /bin/sh
            - -c
            - /data/project/wikiloveslove/list_images.sh
            env:
            - name: PYWIKIBOT_DIR
              value: /data/project/wikiloveslove
            - name: HOME
              value: /data/project/wikiloveslove
          restartPolicy: OnFailure
</syntaxhighlight>
{{Collapse bottom}}


Create the CronJob object in your tool's Kubernetes namespace using ''kubectl'':
{{Codesample|lang=shell-session|code=
$ kubectl apply --validate=true -f $HOME/cronjobs.yaml
cronjob.batch/CRONJOB-NAME configured
}}


After creating the cronjob you can create a test job with <code>kubectl create job --from=cronjob/CRONJOB-NAME test</code> to immediately trigger the cronjob and then access the logs as usual with <code>kubectl logs job/test -f</code> to debug.
 
If that doesn't give you any useful output, try <code>kubectl describe job/test</code> to see what's going on: it might be a [https://phabricator.wikimedia.org/P13646 misconfigured limit], for instance.
 
If you do not want the application to restart on failure, change "restartPolicy: OnFailure" to "restartPolicy: Never" and add "backoffLimit: 0" to the jobTemplate spec (at the same indentation as "template:").
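Applied to the cronjob.yaml above, the relevant part of the spec would look like this sketch (only the two changed settings are shown; the container definition is elided):

```yaml
spec:
  schedule: "28 * * 2 *"
  jobTemplate:
    spec:
      backoffLimit: 0           # give up after the first failure instead of retrying
      template:
        metadata:
          labels:
            toolforge: tool
        spec:
          # ... container definition as in the example above ...
          restartPolicy: Never  # do not restart the container on error
```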
 
==Kubernetes continuous jobs==
The basic unit of managing execution on a Kubernetes cluster is called a "deployment". Each deployment is described by a YAML configuration file naming the container images to be started (grouped into "pods" in Kubernetes terminology) and the commands to run inside them once each container is initialized. A deployment also specifies where the pods run and which external resources are connected to them. The [https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ upstream documentation] is comprehensive.
 
===Example deployment.yaml===
 
[[Tool:Stashbot|Stashbot]] is a Python 3.7 irc bot that runs in a Kubernetes deployment. The [[phab:diffusion/LTST/browse/master/etc/deployment.yaml|deployment.yaml file that it uses]] to tell Kubernetes how to start the bot is reproduced below. This deployment is launched using a [[phab:diffusion/LTST/browse/master/bin/stashbot.sh|<code>stashbot.sh</code> wrapper script]] which runs <code>kubectl create --validate=true -f /data/project/stashbot/etc/deployment.yaml</code>.
 
{{Collapse top|/data/project/stashbot/etc/deployment.yaml (copied 2020-01-03)}}
<syntaxhighlight lang="yaml">
---
# NOTE: this deployment works with the "toolforge" Kubernetes cluster, and not the legacy "default" cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stashbot.bot
  namespace: tool-stashbot
  labels:
    name: stashbot.bot
    # The toolforge=tool label will cause $HOME and other paths to be mounted from Toolforge
    toolforge: tool
spec:
  replicas: 1
  selector:
    matchLabels:
      name: stashbot.bot
      toolforge: tool
  template:
    metadata:
      labels:
        name: stashbot.bot
        toolforge: tool
    spec:
      containers:
        - name: bot
          image: docker-registry.tools.wmflabs.org/toolforge-python37-sssd-base:latest
          command: [ "/data/project/stashbot/bin/stashbot.sh", "run" ]
          workingDir: /data/project/stashbot
          env:
            - name: HOME
              value: /data/project/stashbot
          imagePullPolicy: Always
</syntaxhighlight>
{{Collapse bottom}}
 
This deployment:
 
* Uses the 'tool-stashbot' namespace that the tool is authorized to control
* Creates a container using the 'latest' version of the 'docker-registry.tools.wmflabs.org/[[phab:diffusion/ODIT/browse/master/python37-sssd/base/Dockerfile.template|toolforge-python37-sssd-base]]' Docker image.
* Runs the command <code>/data/project/stashbot/bin/stashbot.sh run</code> inside the container to start the bot itself.
* Mounts the <tt>/data/project/stashbot/</tt> NFS directory as <tt>/data/project/stashbot/</tt> inside the container.
 
{{Note|The ''stashbot.sh'' script assumes that a Python 3.7 virtual environment has been manually created and populated with library dependencies for the project. See [[Help:Toolforge/Web/Python#Virtual Environments and Packages]] for more information about how to create a virtual environment. Make sure you call your venv python interpreter and not /usr/bin/python.}}
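One way to create such a virtual environment is from a shell inside a matching container (a sketch; <your-dependencies> is a placeholder for your tool's real package list):

```shell-session
$ webservice --backend=kubernetes python3.7 shell
$ python3 -m venv venv3
$ source venv3/bin/activate
$ pip install --upgrade pip
$ pip install <your-dependencies>   # whatever libraries your tool needs
$ exit
```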
 
===Monitoring your jobs===
You can see which jobs you have running with <code>kubectl get pods</code>. Using the name of the pod, you can see the logs with <code>kubectl logs <pod-name></code>.
 
To restart a failing pod, use <code>kubectl delete pod <pod-name></code>; the deployment will create a replacement. If you need to stop the job entirely, find the deployment name with <code>kubectl get deployment</code> and delete it with <code>kubectl delete deployment <deployment-name></code>.


==Namespaces==
Each tool has been granted control of a Kubernetes [https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ "namespace"]. Your tool can only create and control objects in its namespace. A tool's namespace is the same as the tool's name with "tool-" appended to the beginning (e.g. <code>tool-admin</code>, <code>tool-stashbot</code>, <code>tool-hay</code>, etc).
You can see monitoring data for your namespace in Grafana: open [https://grafana-labs.wikimedia.org/d/toolforge-k8s-namespace-resources/kubernetes-namespace-resources this page] and select your namespace in the select box at the top of the page.


==Quotas and Resources==
If you find that you need containers to run with '''more''' than 1 CPU and 4 GB of RAM, you can request that through the [[Help:Toolforge/Kubernetes#Quota_increases|quota increase procedure]] below. You can verify the per-container limits you have by running <code>kubectl describe limitranges</code>.

The default storage size limit of a container, including the image size, is 10 GB. You can store temporary data in the container's /tmp directory, but all data there is lost when the container ends. For persistent storage, use your tool directory: tool directories live on NFS, mounted at /data/project, outside the container.


* jdk8 (openjdk 1.8.0_232; ''deprecated'')
* node10 (nodejs v10.15.2)
* node12 (nodejs v12.21.0)
* node16 (nodejs v16.16.0)
* nodejs (nodejs v6.11.0; ''deprecated'')
* perl5.32 (perl v5.32.1)
* php5.6 (PHP 5.6.33; ''deprecated'')
* php7.2 (PHP 7.2.24; ''deprecated'')


===Node.js===
The container images for Node.js, such as <code>docker-registry.tools.wmflabs.org/toollabs-nodejs-base:latest</code>, currently come with the current Node.js LTS from Wikimedia APT (as of September 2018, this is Node.js 6). This is the same version used by the Wikimedia Foundation in production and for continuous integration.
 
====Broken npm====
Because npm is not suitable for use in Wikimedia production, the version of Node.js provided by Wikimedia APT is compiled without npm (unlike the official Node.js distribution). And because there is no use for npm in Wikimedia production, no "npm" Debian package is maintained in Wikimedia APT. As a result, the only "npm" Debian package available is the one from upstream Debian: npm 1.4, originally bundled in 2014 with Node 0.10 ([https://packages.debian.org/jessie/npm debian/npm], [https://packages.debian.org/jessie/nodejs debian/nodejs]). This version is EOL and is incompatible with most packages on the npmjs.org registry. To update it within your container, follow these steps:
<syntaxhighlight lang="shell-session">
# Step 1: Start a shell in your Node.js pod (see "Shell" section below)
tool@tools-login$ kubectl exec -it podname-123-aaa -- /bin/bash
# Step 2: Create $HOME/bin and ensure it is in your PATH
podname:/data/project/tool$ mkdir bin/
podname:/data/project/tool$ export PATH="${HOME}/bin:${PATH}"
# To avoid having to re-export PATH every time you use your tool, add the export command to your .bashrc file!
# Step 3: Use npm to install 'npm'
podname:/data/project/tool$ npm install npm
....
# This installs the current version of npm at node_modules/.bin/npm
# Step 4: Create a symlink in $HOME/bin
podname:/data/project/tool$ ln -s $HOME/node_modules/.bin/npm $HOME/bin/npm
# Close the shell and create a new shell (to initialise PATH)
podname:/data/project/tool$ exit
tool@tools-login$ kubectl exec -it podname-123-aaa -- /bin/bash
podname:/data/project/tool$
# Step 5: Verify that you now use a current npm instead of npm 1.4
podname:/data/project/tool$ npm --version
6.4.1
</syntaxhighlight>
 

Latest revision as of 17:20, 3 August 2022

Overview

Kubernetes (often abbreviated k8s) is a platform for running containers. It is used in Toolforge to isolate Tools from each other and allow distributing Tools across a pool of servers.

You can think of a container as a "micro virtual machine" whose only task is to execute a single application; it has its own (minimal) file system and limited CPU and memory resources. In Kubernetes each container runs inside a pod, which is what connects the container to the tool directories, the database replicas, the internet, and other pods.

One characteristic of containers is that, due to their small size, they cannot carry all the packages you often find on other Toolforge virtual machines such as tools-login and the grid engine nodes. You therefore need to select a container image that has the packages you need; the available images are listed in the container images section below.

Kubernetes webservices

The Toolforge webservice command has a --backend=kubernetes mode that will start, stop, and restart containers designed to run web services for various languages. See our Webservice help for more details.

The Kubernetes backend has the following options:

  -m MEMORY, --mem MEMORY
                        Set higher Kubernetes memory limit
  -c CPU, --cpu CPU     Set a higher Kubernetes cpu limit
  -r REPLICAS, --replicas REPLICAS
                        Set the number of pod replicas to use
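
For example, to restart a webservice with higher limits than the defaults (the type and values here are illustrative; see the quota section below for the maximums):

```shell-session
$ webservice stop
$ webservice --backend=kubernetes --cpu 1 --mem 2Gi python3.9 start
```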


Kubernetes jobs

Every non-trivial task performed in Toolforge (like executing a script or running a bot) should be dispatched to a job scheduling backend (in this case, Kubernetes), which ensures that the job is run in a suitable place with sufficient resources.

The basic principle of running jobs is fairly straightforward:

  • You create a job from a submission server (usually login.toolforge.org)
  • Kubernetes finds a suitable execution node to run the job on, and starts it there once resources are available
  • As it runs, your job will send output and errors to files until the job completes or is aborted.

Jobs can be scheduled synchronously or asynchronously, continuously, or simply executed once.

There are two ways of running jobs on Kubernetes:

  • by using the Toolforge jobs framework (recommended)
  • by directly using the raw Kubernetes API

Before jobs were supported on Kubernetes, Toolforge offered Grid Engine as its job scheduling backend.

Namespaces

Each tool has been granted control of a Kubernetes "namespace". Your tool can only create and control objects in its namespace. A tool's namespace is the same as the tool's name with "tool-" appended to the beginning (e.g. tool-admin, tool-stashbot, tool-hay, etc).

You can see monitoring data for your namespace in Grafana: open this page and select your namespace in the select box at the top of the page.

Quotas and Resources

On the Kubernetes cluster, all containers run with CPU and RAM limits set, just like jobs on the Gridengine cluster. Defaults are set at 0.5 CPU and 512Mi of memory per container. Users can adjust these up to the highest level allowed without any help from an administrator (the top limit is set at 1 CPU and 4Gi of memory), either with command line arguments to the webservice command (--cpu and --mem) or, for advanced users, with properly formatted resources fields in the pod's Kubernetes YAML specification.

The Toolforge admin team encourages you to try running your webservice with the defaults before deciding that you need more resources. We believe that most PHP and Python3 webservices will work as expected with the lower values. Java webservices will almost certainly need higher limits due to the nature of running a JVM.

If you find that you need containers to run with more than 1 CPU and 4 GB of RAM, you can request that through the quota increase procedure below. You can verify the per-container limits you have by running kubectl describe limitranges.

The default storage size limit of a container, including the image size, is 10 GB. You can store temporary data in the container's /tmp directory, but all data there is lost when the container ends. For persistent storage, use your tool directory: tool directories live on NFS, mounted at /data/project, outside the container.

Namespace-wide quotas

Your entire tool account can only consume so many cluster resources. The cluster places quota limits on an entire namespace which determine how many pods can be used, how many service ports can be exposed, total memory, total CPU, and others. The default limits for a tool's entire namespace are:

requests.cpu: 2           # Soft limit on CPU usage
requests.memory: "6Gi"    # Soft limit on memory usage
limits.cpu: 2             # Hard limit on CPU usage
limits.memory: "8Gi"      # Hard limit on memory usage
pods: 4
services: 1
services.nodeport: 0      # Nodeport services are not allowed
replicationcontrollers: 1
secrets: 10
configmaps: 10
persistentvolumeclaims: 3

To view the live quotas that apply to your tool, run kubectl describe resourcequotas.

Quota increases

It is possible to request a quota increase if you can demonstrate your tool's need for more resources than the default namespace quota allows. Instructions and a template link for creating a quota request can be found at Toolforge (Quota requests) in Phabricator. Please read all the instructions there before submitting your request.

Container images

The Toolforge Kubernetes cluster is restricted to loading Docker images published at docker-registry.tools.wmflabs.org (see Portal:Toolforge/Admin/Kubernetes#Docker Images for more information). These images are built using the Dockerfiles in the operations/docker-images/toollabs-images git repository.

Available container types

The webservice command has an optional type argument that allows you to choose which Docker container to run your Tool in.

Currently provided types:

  • golang (go v1.11.5; deprecated)
  • golang111 (go v1.11.6)
  • jdk17 (openjdk 17)
  • jdk11 (openjdk 11.0.5)
  • jdk8 (openjdk 1.8.0_232; deprecated)
  • node10 (nodejs v10.15.2)
  • node12 (nodejs v12.21.0)
  • node16 (nodejs v16.16.0)
  • nodejs (nodejs v6.11.0; deprecated)
  • perl5.32 (perl v5.32.1)
  • php5.6 (PHP 5.6.33; deprecated)
  • php7.2 (PHP 7.2.24; deprecated)
  • php7.3 (PHP 7.3.11)
  • php7.4 (PHP 7.4.21)
  • python (Python 3.4.2; deprecated)
  • python2 (Python 2.7.9; deprecated)
  • python3.5 (Python 3.5.3; deprecated)
  • python3.7 (Python 3.7.3)
  • python3.9 (Python 3.9.2)
  • ruby2 (Ruby 2.1.5p273; deprecated)
  • ruby25 (Ruby 2.5.5p157)
  • ruby27 (Ruby 2.7)
  • tcl (TCL 8.6)

For example, to start a webservice using a php7.4 container, run:

webservice --backend=kubernetes php7.4 start

A complete list of images is available from the docker-registry tool which provides a pretty frontend for browsing the Docker registry catalog.

As of Feb 2018, we don't support mixed runtime containers. This may change in the future. Also, we don't support "bring your own container" on our kubernetes (yet!). And there is no mechanism for a user to install system packages inside of a container.

PHP

PHP uses lighttpd as a webserver, and looks for files in ~/public_html/.

PHP versions & packages

There are four versions of PHP available, PHP 7.4, PHP 7.3 (on Debian Buster), PHP 7.2 (on Debian Stretch), and the legacy PHP 5.6 (on Debian Jessie).

You can view the installed PHP extensions on the phpinfo tool. This should match the PHP related packages installed on GridEngine exec nodes. Additional packages can be added on request by creating a Phabricator task tagged with #toolforge-software. Software that is not packaged by Debian upstream is less likely to be added due to security and maintenance concerns.

PHP Upgrade

To upgrade from PHP 5.6 to PHP 7.4, run the following two commands:

$ webservice stop
$ webservice --backend=kubernetes php7.4 start

To switch back:

$ webservice stop
$ webservice --backend=kubernetes php5.6 start

Running Locally

You may run the container on your local computer (not on Toolforge servers) by executing a command like this:

$ docker run --name toolforge -p 8888:80 -v "${PWD}:/var/www/html:cached" -d docker-registry.tools.wmflabs.org/toolforge-php73-sssd-web sh -c "lighty-enable-mod fastcgi-php && lighttpd -D -f /etc/lighttpd/lighttpd.conf"

Then the tool will be available at http://localhost:8888

Node.js

The Node.js container images contain a version of Node.js LTS, npm, and Yarn, packaged either by Debian or by Nodesource.

Troubleshooting

"failed to create new OS thread" from kubectl

If kubectl get pods or a similar command fails with the error message "runtime: failed to create new OS thread (have 12 already; errno=11)", use GOMAXPROCS=1 kubectl ... to reduce the number of resources that kubectl requests from the operating system.

The active thread quota is per-user, not per-session or per-tool, so if you have multiple shell sessions open to the same bastion server this will affect the available quota for each of your shells.

Get a shell inside a running Pod

Kubectl can be used to open a shell inside a running Pod: $ kubectl exec -it $NAME_OF_POD -- /bin/bash

See Get a Shell to a Running Container at kubernetes.io/docs for more information.

Communication and support

Support and administration of the WMCS resources is provided by the Wikimedia Foundation Cloud Services team and Wikimedia movement volunteers. Please reach out with questions and join the conversation:

  • Discuss and receive general support
  • Receive mail announcements about critical changes: subscribe to the cloud-announce@ mailing list (all messages are also mirrored to the cloud@ list)
  • Track work tasks and report bugs: use the Phabricator workboard #Cloud-Services for bug reports and feature requests about the Cloud VPS infrastructure itself
  • Learn about major near-term plans: read the News wiki page
  • Read news and stories about Wikimedia Cloud Services: read the Cloud Services Blog (for the broader Wikimedia movement, see the Wikimedia Technical Blog)

See also