Revision as of 15:02, 8 April 2020 by Alexandros Kosiaris (Initial draft)

OCI Images

We only allow running images from the production Docker registry (docker-registry.discovery.wmnet inside production networks), which is also available publicly. This is for the following reasons:

  1. Make it easy to do security updates when necessary (just rebuild all the containers & redeploy)
  2. Faster deploys, since the registry is on the same network (vs. Docker Hub, which is reached over the internet)
  3. Access control is provided entirely by us, making us less dependent on Docker Hub
  4. We control what's in our images.

This is enforced at the network level, but we plan to add an admission webhook that enforces it on Kubernetes as well.
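At its core, such a webhook would validate the image reference of every container in a submitted Pod against the production registry. A minimal sketch of that check in Python — the registry host comes from the text above, but the function name and the surrounding webhook plumbing are hypothetical:

```python
ALLOWED_REGISTRY = "docker-registry.discovery.wmnet"

def image_allowed(image: str) -> bool:
    """Return True if the image reference points at the production registry.

    An admission webhook would run a check like this for every container
    image in a submitted Pod and reject any that fail it.
    """
    # Image references without an explicit registry host default to
    # Docker Hub, so they must be rejected outright.
    registry, sep, _rest = image.partition("/")
    return sep == "/" and registry == ALLOWED_REGISTRY

# Example verdicts:
assert image_allowed("docker-registry.discovery.wmnet/mathoid:latest")
assert not image_allowed("nginx:latest")             # implicit Docker Hub
assert not image_allowed("docker.io/library/nginx")  # explicit Docker Hub
```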

Image building

Images are separated into 3 "generations":

  • base images
  • production images
  • service images

that are dependent on each other in the above order. A service image (e.g. mathoid) depends on a production image (e.g. buster-node10), which in turn depends on a base image (e.g. wikimedia-buster), forming a tree. Base images are never deployed, but both service images and production images are: the former to power a service, the latter to provide infrastructure functionality (e.g. metrics collection, TLS demarcation, etc.).

Base images

These are the first layer of the tree. They are built on the designated production builder host (look at puppet.git/manifests/site.pp to figure out which host has that role) using the build-base-images command. This code uses bootstrapvz to build the images and push them to the registry; the specs can be found in the operations/puppet.git repository under modules/docker/templates/images. Note that you need to be root to build and push Docker containers. We suggest using sudo -i for this, since Docker looks for registry credentials in the user's home directory, and they are only present in root's home directory. It's a very simplistic approach, but it works well for this use case.
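In practice the build boils down to a short root session on the builder host; a sketch of such a session (prompts only, not meant to be copy-pasted verbatim):

```
$ sudo -i        # root login shell, so Docker finds the registry
                 # credentials stored under /root
# build-base-images
```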

Production images

The code building these was written from scratch: a tool called docker-pkg. It is run on the same builder host as above, but automatically infers versions, dependencies, and what does or does not need to be rebuilt. The definitions for these images live in a separate repository, and the command /usr/local/bin/build-production-images is used to build them. Again, we suggest using sudo -i, since Docker looks for registry credentials in the user's home directory, and they are only present in root's home directory.
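As with base images, the build is a short root session on the builder host (sketch; docker-pkg works out what to rebuild, so the command is the same whether one image or many changed):

```
$ sudo -i
# /usr/local/bin/build-production-images
```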

Service images

These are built by the Deployment pipeline using Blubber. They are created automatically for every merged git commit of each piece of software.
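For illustration, a service describes its image in a Blubber configuration file checked into its repository. A rough, unverified sketch of such a file — the base image name, variant layout, and field names are assumptions from memory, not taken from the text, and may not match the actual Blubber schema:

```yaml
# .pipeline/blubber.yaml (hypothetical minimal example)
version: v4
base: docker-registry.wikimedia.org/wikimedia-buster
variants:
  production:
    base: docker-registry.wikimedia.org/wikimedia-buster
```

The pipeline feeds a file like this to Blubber to generate a Dockerfile, builds the resulting service image on every merged commit, and pushes it to the production registry described above.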