Deployment pipeline

From Wikitech-static
Revision as of 09:33, 18 April 2019 by Lucas Werkmeister (WMDE)

The deployment pipeline program is a change program to modernise Wikimedia's deployment processes. It has been running since mid-2017 and is expected to take some years to complete. It is currently managed under the TEC3 Annual Plan program for 2018–19, and was formerly known as Streamlined Service Delivery.


The intent of the release pipeline program is to migrate the deployment of all Wikimedia-deployed code from continuous integration to a continuous delivery release pipeline, and eventually to continuous deployment. Specifically, we are using Kubernetes with Docker images, with Helm as the release manager.
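In broad strokes, each service is built into a Docker image and released onto the Kubernetes cluster via a Helm chart. As a rough sketch only (the service name, registry, image tag, and port below are hypothetical, not real Wikimedia values), the chart ultimately renders Kubernetes manifests along these lines:

```yaml
# Hypothetical sketch of the kind of manifest a service's Helm chart renders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service        # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: example-service
          # hypothetical registry and tag; in practice the image is produced
          # by the pipeline's build stage
          image: docker-registry.example/example-service:2019-01-15
          ports:
            - containerPort: 8080
```

A deployer would then release it with something like `helm upgrade --install example-service ./chart`, and revert with `helm rollback` if needed.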


As of January 2019, almost all Wikimedia-deployed code is manually deployed, in various forms:

  • MediaWiki – Code in MediaWiki, its skins and extensions, and dependent libraries (collectively, the monolith) is deployed manually, either via a bulk weekly "train" or by cherry-picked back-ports, with one or two flavours of the monolith in production at a time, each set to either 0% or 100% for a given wiki as the roll-out proceeds.
  • MediaWiki config – Wikimedia site configuration is deployed entirely manually. Only one version of the config repo runs in production at a time.
  • MediaWiki services – Services code, such as Parsoid, the mobile content service, or the maps service, is deployed via a per-service process: a service deployer creates a commit combining recently merged code and any config changes, applies it to a special deployment repository, and triggers its release into production. Only one version of a service runs in production at a time, except for short deployment windows with canary servers and a phased roll-out over a few minutes.
  • Operations – A mix: Puppet manifests are auto-deployed on merge as a back-up, but are mostly deployed manually by the person who merges them. Other repos, like the DNS configuration, are only deployed manually. Only one version of these runs in production at a time.



Plan

Not entirely in sequence:

  1. MediaWiki services
    1. Move to container-based deployment
    2. Move to continuous delivery
    3. Move to continuous deployment
  2. MediaWiki and config
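The first step, container-based deployment, means giving each service a Docker image build. A minimal sketch, assuming a hypothetical Node.js service (the base image, file names, and port are illustrative, not any real service's layout):

```dockerfile
# Hypothetical sketch of containerising a Node.js service.
FROM node:10-slim
WORKDIR /srv/service

# Install production dependencies first so Docker can cache this layer
# across rebuilds that only change application code.
COPY package.json package-lock.json ./
RUN npm install --production

# Copy the application code itself.
COPY . .

# Run unprivileged.
USER nobody
EXPOSE 8080
CMD ["node", "server.js"]
```

Once every service builds a working image, the later continuous-delivery and continuous-deployment steps can automate building, testing, and releasing that image on each merge.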



  • Jan–Mar 2019: Initial services migrated
  • Apr–June 2019: Most services migrated

Who is involved

  • Site Reliability Engineering
    • Alexandros Kosiaris
    • Fabian Selles
    • Giuseppe Lavagetto
  • Release Engineering
    • Brennen Bearnes
    • Dan Duvall
    • James Forrester
    • Jeena Huneidi
    • Mukunda Modell
    • Tyler Cipriani
  • Core Platform
    • Marko Obrovac



Questions and answers

What does this mean for me?

  • If you are currently running a service in production, we'll talk to you about migrating it to Kubernetes.
  • If you are building a new service to go into production, please contact us to make sure your service is "born Kubernetes", rather than having to migrate it a month after deployment.
  • If you don't have, or plan to have, a service, don't worry for now. Changes to non-services will take a while.

  • What is the pipeline intended to be?
  • What is the purpose of the pipeline?
  • How does the Wikipedia movement benefit from the pipeline?
  • How does a developer benefit from the pipeline?
  • How does the WMF release engineering team benefit from the pipeline?
  • How does the pipeline work, in broad strokes?
  • How does a developer use the pipeline?
    • What software does a developer need to install locally to use the pipeline?
  • How can one check the status of the pipeline?
  • How can one get alerted about problems with things in the pipeline?
  • How can one see statistics on the pipeline in operation?
    • Number of successful builds and deployments to production?
    • Number of build failures?
  • How can one learn how the pipeline is implemented?