

PAWS is a Jupyterhub deployment that runs in the PAWS Cloud VPS project. The main Jupyterhub login is accessible at, and is a public service that can be authenticated to via Wikimedia OAuth. More end-user info is at PAWS. Besides a simple Jupyterhub deployment, PAWS also provides easy access to the wiki replicas, to the wikis themselves via the OAuth grant, and to pywikibot.

Kubernetes cluster


The PAWS Kubernetes cluster is built to similar specifications as the Toolforge cluster: puppet prepares the system and a native kubeadm deployment provides the Kubernetes layer. As such, the deployment is nearly identical to the process described for Toolforge. After you have the base layer of Kubernetes deployed using the procedures outlined for Toolforge and the yaml deployed by puppet for PAWS, you can proceed with deploying Jupyterhub itself.


Upgrading should follow the same schedule and technique as upgrading Toolforge Kubernetes, because this is a similar kubeadm-plus-puppet cluster. Regular upgrades are essential to staying ahead of CVEs (via point releases) and keeping the cluster's certs fresh. Kubernetes 1.16+ has tools for manually refreshing certs, but it isn't a fun situation to be in. Remember to check that something strange isn't in the kubeadm-config configmap in the kube-system namespace if one of the control plane pods isn't staying live!
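For example, as root on a control plane node you can check how close the cluster certs are to expiry and look over the kubeadm-config configmap (the exact kubeadm subcommand depends on the version; newer releases drop the alpha prefix):

kubeadm alpha certs check-expiration        # "kubeadm certs check-expiration" on newer kubeadm versions
kubectl --namespace=kube-system get configmap kubeadm-config -o yaml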

The only special consideration here is that you should make sure the Jupyterhub helm chart isn't trying to deploy deprecated objects. Objects that are deprecated in Kubernetes will continue to work, but you'll have problems doing upgrades and deployments of Jupyterhub. The best thing to do here is probably to get a PR in upstream to fix things or to upgrade our version of Jupyterhub.
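A quick way to spot trouble ahead of time (a sketch, not part of the official deploy process) is to render the chart locally from an unlocked checkout and look at the apiVersions it would create; anything in a deprecated API group is a candidate for upgrade problems:

helm template paws ./paws -f paws/secrets.yaml | grep '^apiVersion:' | sort | uniq -c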


We opted to use a stacked control plane, as in the original build, but we set it up as a redundant three-node cluster. To maintain HA for the control plane and for the services, two haproxy servers sit in front of the cluster with a floating IP managed with keepalived, which should be capable of automatic failover. DNS simply points at that IP.

A simple diagram is as follows:

PAWS Design.png

General Build

With the exception of the introduction of keepalived, the stacked control plane, and specific services, nearly the entire build re-uses the security and puppet design of Toolforge Kubernetes. By using helm 3, we were able to avoid any divergence from secure RBAC and Pod Security Policies. Upgrades should be conducted on the same cycle as Toolforge upgrades, but the component repositories used (which are separated by major k8s version) allow the upgrade schedules to diverge if required. An ingress exists (not on this diagram) for the deploy-hook service, but it is disabled in the first iteration to work out some kinks in the process.

Floating IP

The floating IP is our second service using a manually-provisioned Neutron port with an IP that is managed with keepalived, using this procedure: Portal:Cloud VPS/Admin/Keepalived. That IP is NAT'd to a public IP.
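To see which haproxy node currently holds the floating IP, and that keepalived itself is healthy, something like the following works on either load balancer host:

ip -brief address show          # the floating IP appears on whichever node is currently primary
sudo systemctl status keepalived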


At the load balancer layer (haproxy), routing is done by port, back to the Kubernetes control plane service on the control plane nodes or to the ingresses on the dedicated ingress worker nodes. The control plane is reached at the usual port of TCP 6443, for both the frontend and the backend. The ingress layer is served at the well-known web ports (TCP 80 and 443), which hit the dedicated ingress worker nodes on a NodePort service at port 30000. The neutron security group paws-loadbalancer prevents internet clients from contacting the k8s API at this time.
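To sanity-check this routing, you can confirm on the active haproxy node that the three expected frontends are bound, and that an ingress worker answers on the NodePort (the worker name below is illustrative only):

sudo ss -tlnp | grep -E ':(6443|80|443) '
curl -sI http://paws-k8s-ingress-1:30000/ | head -n1     # expect an HTTP status line from the ingress controller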


TLS certs are handled via acme-chief and distributed to the haproxy load balancer layer. Inside the cluster, therefore, the TLS ingress options in the helm chart are turned off.


The maintain-kubeusers service used in Toolforge runs on PAWS, granting the same privileges to admin users in the paws.admin group as members of the tools.admin group have in Toolforge. The certs for these users are automatically renewed as they come close to their expiration date. Where cluster-admin is required directly rather than through the usual impersonation method, such as when using the helm command directly, root@paws-k8s-control-1/2/3 has that access.
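A quick way to check when your own client cert expires (a sketch; it assumes your kubeconfig has a single user entry with an embedded certificate) is:

kubectl config view --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -noout -enddate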


Helm 3 is used to deploy Kubernetes applications on the cluster. It is installed by puppet via a Debian package. The community-supported ingress-nginx controller is deployed by hand (kubectl apply -f paws/ingress/nginx-ingress.yaml), but the ingress objects are all managed in the helm chart. As this is helm 3, there is no tiller, and RBAC governs what you can do.
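To see what is currently deployed, helm list shows the release in the prod namespace, and the ingress controller pods can be inspected directly (the namespace below assumes the upstream ingress-nginx defaults):

helm list --namespace prod
kubectl get pods --namespace ingress-nginx -o wide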

General notes

  • The control plane uses a converged or "stacked" etcd system. Etcd runs in containers deployed by kubeadm directly on the control plane nodes. Therefore, it is unwise to ever turn off two control plane nodes at once, since losing two of the three members breaks quorum for the etcd raft election system (see the health-check sketch after this list).
  • The control plane and haproxy nodes are part of separate anti-affinity server groups so that Openstack will not schedule them on the same hypervisor. Worker nodes are placed in a soft anti-affinity server group.
  • Ingress controllers are deployed to dedicated ingress worker nodes, which also take advantage of being in an anti-affinity server group.
  • To see the status of the k8s control plane pods (coredns, kube-proxy, calico, etcd, kube-apiserver, kube-controller-manager), run kubectl --namespace=kube-system get pod -o wide.
  • Prometheus stats and metrics-server are deployed in the metrics namespace during cluster build via kubectl apply -f $yaml-file, just like in the Toolforge deploy documentation.
  • Because of pod security policies in place, all init containers have been removed from the paws-project version of things. Privileged containers cannot be run inside the prod namespace.
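The stacked etcd members mentioned above can be health-checked from a control plane node (as root, which has cluster-admin); the pod name follows kubeadm's etcd-<node name> convention:

kubectl --namespace=kube-system exec etcd-paws-k8s-control-1 -- etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint health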

Jupyterhub deployment

Jupyterhub & PAWS Components

Jupyterhub is a set of systems deployed together that provide Jupyter notebook servers per user. The three main subsystems of Jupyterhub are the Hub, the Proxy, and the Single-User Notebook Server. A really good overview of these systems is available at

PAWS is a Jupyterhub deployment (Hub, Proxy, Single-User Notebook Server) with some added bells and whistles. Some additional PAWS-specific pods in our deployment are:

PAWS also includes customized versions of some Jupyterhub images:

  • singleuser: Since this is the environment for end users, there is a fair bit going on here. Our image is a replacement of the upstream one. We set the correct UID and directory. We install the jupyterhub/lab code directly from pip, along with PyWikiBot, a small library called ipynb-paws that allows importing a notebook like a python package (along the lines of import paws.$username.$notebooks_name), and code from to add a public link button. There are other customizations, because this is a great surface for doing them. The general goal is to get a notebook up and running for use on wikis as fast as possible.
  • paws-hub: We build upon the upstream Jupyterhub hub image just a touch, adding bits that respect more of the UID settings and adding in a custom culling script. The code for doing OAuth is actually inserted in the helm chart instead.

The other custom image is a deploy-hook, which is undergoing some renovations before it is redeployed in the cluster.
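To see which of these components are actually running at any given time, listing the pods in the prod namespace shows the hub, the proxy, and one pod per active user notebook server:

kubectl --namespace=prod get pods -o wide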


  • The PAWS repository is at It should be cloned locally. Then the git-crypt key needs to be used to unlock the secrets.yaml file. See one of the PAWS admins if you think you should have access to this key.
  • PAWS will be deployed in the near future with Travis CI, as it is in tools, and the dashboard is at The configuration for the Travis builds is at, and builds and deploys launch the travis-script.bash script with appropriate parameters. However, this is not going to work at first, so please deploy via helm directly until the deploy-hook and CI setup is revisited.
  • To deploy via helm directly, you need to know some parameters, because the values.yaml file of the helm chart both lacks some sane defaults (TODO) and requires some params no matter what so that it deploys the right version of the images. At a bare minimum, you will need to know the right images and tags for some of the images. The command used to deploy it right now, run from inside an unlocked git checkout, is:
helm install paws --namespace prod ./paws -f paws/secrets.yaml
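If particular image tags need to be set on the command line, a hypothetical invocation (the value names here are guesses modelled on the upstream Zero to JupyterHub chart and must be checked against the chart's values.yaml) would look like:

helm install paws --namespace prod ./paws -f paws/secrets.yaml --set=jupyterhub.singleuser.image.tag=<tag> --set=jupyterhub.hub.image.tag=<tag>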

If you are deploying to an actual paws cluster, you will also need the ingress controller Pod Security Policy: kubectl apply -f paws/ingress/nginx-ingress-psp.yaml and the controllers themselves: kubectl apply -f paws/ingress/nginx-ingress.yaml. Please note, you will need your dedicated ingress worker nodes deployed (prefix puppet looks for the name paws-k8s-ingress-) for that to do anything, because there are tolerations and affinities for those nodes.

If already deployed, do not use the "install" command. Change that to "upgrade" to deploy changes/updates, such as:

helm upgrade paws --namespace prod ./paws -f paws/secrets.yaml


JupyterHub uses a database to keep user state; currently it uses ToolsDB. It can be switched to sqlite when ToolsDB is having an outage, but that generally doesn't scale as well. It should be moved to its own database server (ideally a Trove system) as soon as possible.

Moving to sqlite

During ToolsDB outages we can change the db to in-memory sqlite without significant impact.

The smoothest way is to do a helm upgrade as root on a control node (as above, in an unlocked checkout) with this command: helm upgrade paws --namespace prod ./paws -f paws/secrets.yaml --set=jupyterhub.hub.db.url="sqlite://" --set=jupyterhub.hub.db.type=sqlite

You can roll back to ToolsDB with helm by going into an unlocked checkout of and running helm upgrade paws --namespace prod ./paws -f paws/secrets.yaml