Wikimedia Cloud Services team/EnhancementProposals/Toolforge push to deploy

Revision as of 14:46, 29 September 2020 by Nskaggs (spelling)


Users of modern commercial PaaS offerings increasingly have access to hosted developer pipelines. These provide code hosting, build, and deployment services, all on demand via a commit hook on the hosted repository. The concept of "push-to-deploy" can be thought of as a subset of GitOps: a user does not require shell access to deploy their code. Instead, the user pushes to a remote repository, with any secrets stored in settings kept elsewhere, and the code is deployed automatically.
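The push-to-deploy idea above can be sketched as a tiny reconciliation step: a deploy fires whenever the remote repository's HEAD moves past the last deployed commit. All names here are illustrative, not a real Toolforge API.

```python
# Hypothetical sketch of push-to-deploy: deploy whenever remote HEAD
# differs from what was last deployed. No real build/rollout happens here.

def needs_deploy(deployed_commit, remote_head):
    """A tool needs a (re)deploy when nothing is deployed yet or HEAD moved."""
    return deployed_commit != remote_head

def reconcile(state, remote_head):
    """One pass of the loop: deploy if out of date, record what we deployed."""
    if needs_deploy(state.get("deployed"), remote_head):
        # In a real system this is where the build + rollout would happen.
        state = {**state, "deployed": remote_head,
                 "deploys": state.get("deploys", 0) + 1}
    return state

state = {}
state = reconcile(state, "abc123")   # first push -> deploy
state = reconcile(state, "abc123")   # no change  -> no deploy
state = reconcile(state, "def456")   # new push   -> deploy
print(state["deploys"])              # 2 deploys total
```

The loop is idempotent: re-running it with an unchanged HEAD does nothing, which is the property that makes a "watch git and converge" design safe to run continuously.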

The Toolforge community of developers would also benefit from similar integrations. The rise of developer-oriented hosted services has lowered the technical barrier to entry, as well as the monetary cost, of developing and maintaining a public service.


Goals

  • Lower the barrier to entry for Toolforge users who are familiar with other commercial PaaS offerings
  • Enable a modern "push-to-deploy" workflow, wherein pushed code undergoes a CI/CD process, lowering the requirements for tool developers
  • Allow user-built and user-managed containers to run on Toolforge

Existing Toolforge Setup

TODO: Pictorial representation of the existing Toolforge workflow

Today, Toolforge users store containers in NFS. These container images are built on top of base images provided by WMCS. The images are missing some of the details that would be required for automatic deployment, including authentication. Deployment is manual, via curated Kubernetes commands. Behind the scenes, the information required to launch a tool, including which container image to launch, is transferred automagically via a git service.

Presuming we avoid the configuration-heavy user experience required by some PaaS offerings, this is already fairly close to the desired "push-to-deploy" model.


TODO: Pictorial representation of the desired Toolforge workflow

At a high level, this will require:

  • A git system for tools to be pushed to
  • A reconciliation loop that watches git of one kind or another
  • A mechanism for that loop to connect to Kubernetes (our chosen backend)
  • A Docker registry with flexible enough authentication and storage backends to support even very limited buildpacks
  • Some form of frontend for these components, for user visibility, even if it is just a simple CLI's output
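To make the reconciliation-loop-to-Kubernetes piece above concrete, here is a minimal sketch of what that loop might render: given the image reference recorded for a tool in git, produce the Kubernetes Deployment the loop would apply. The field names follow the real `apps/v1` Deployment schema; the tool name, registry host, and label key are made-up examples.

```python
# Sketch: render the Deployment a git-watching loop would apply to
# Kubernetes. Only the manifest is built; nothing is applied here.

def render_deployment(tool, image, replicas=1):
    """Build an apps/v1 Deployment dict for one tool's webservice."""
    labels = {"toolforge.org/tool": tool}  # hypothetical label key
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{tool}-web", "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "web", "image": image}]},
            },
        },
    }

# Example: the image tag comes from the commit the loop just saw in git.
manifest = render_deployment("mytool", "registry.example.org/mytool:abc123")
print(manifest["metadata"]["name"])
```

Pinning the image tag to the git commit is what ties the deployed state back to the repository, so the loop can compare "what is running" against "what is in git".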

Definition of Success

  1. "Trivial" Support for new languages and toolchains. Adding support for a new language or toolchain today requires custom image building and testing by WMCS.
  2. Adoption or Usage metric?


The road so far


I've been kicking the tires on buildpacks, and that work should move along just fine with a more flexible and appropriate image registry. That piece should be OK with enough work, plus documentation of our particular modifications and solutions.

There is one immediately apparent solution that can instrument Kubernetes, has an internal Docker registry, and can do all the git, CD, and auth we'd need: GitLab. Unfortunately GitLab CE, the open source edition, is seriously lacking when it comes to LDAP integration (no groups!), and GitLab wants to be more of the core of the setup than I think we'd want it to be in order to make it work. It is also so general-purpose that it would be difficult to limit what users do with it and keep things productive for the movement. It may be possible to make it work with some custom API clients or plugins, but I suspect we would end up spending more time making it work than we would get good use out of it. GitLab is also an enormous project that would easily consume one tech's full attention to properly support once we start doing lots of customizing.

The next things to look at


I am looking at experimenting with Gitea as the git layer for this. It is more fully open source, integrates well with LDAP, and has a large number of impressive features that let it authenticate nicely with other tools (it can even act as an OAuth2, and possibly OIDC, provider). The small resource footprint is also attractive. Notably, OpenStack has been adopting it as well, so we would be moving in the same circles. It also helps that at least some of our team already has experience with it. Some experimentation should give us more information.
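As one hedged sketch of how Gitea could be wired into the pipeline, its REST API exposes a create-hook endpoint (`POST /api/v1/repos/{owner}/{repo}/hooks`) for registering a push webhook that would kick off the deploy. The host, token, and callback URL below are placeholders, not real infrastructure.

```python
# Sketch: register a push webhook on a Gitea repo so pushes can trigger
# the deploy pipeline. Only the payload builder runs here; the actual
# HTTP call is defined but not made (the server is a placeholder).
import json
from urllib import request

def webhook_payload(callback_url):
    """Body for Gitea's create-hook endpoint, firing on push events."""
    return {
        "type": "gitea",
        "active": True,
        "events": ["push"],
        "config": {"url": callback_url, "content_type": "json"},
    }

def register_hook(base, owner, repo, token, callback_url):
    """POST the hook to Gitea; requires a valid API token."""
    req = request.Request(
        f"{base}/api/v1/repos/{owner}/{repo}/hooks",
        data=json.dumps(webhook_payload(callback_url)).encode(),
        headers={"Authorization": f"token {token}",
                 "Content-Type": "application/json"},
    )
    return request.urlopen(req)  # would perform the actual call

payload = webhook_payload("https://deploy.example.org/hook")
print(payload["events"])
```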

Harbor would seem to be a great possibility for Docker image management. It's a CNCF incubator project that probably does the trick. It's a bit heavier than needed, but it's also multi-tenant, which opens up possibilities like splitting the registry per project so that users who over-provision somehow only hurt their own project, etc.
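The multi-tenancy point can be sketched against Harbor's create-project API (`POST /api/v2.0/projects`): one private project per tool, with a per-project storage quota so an over-provisioning tool only exhausts its own space. The tool name and quota value are examples, not proposed configuration.

```python
# Sketch: request body for creating one private, quota-limited Harbor
# project per tool. Only the payload is built; no API call is made.

def harbor_project(tool, quota_bytes):
    """Body for Harbor's create-project endpoint."""
    return {
        "project_name": tool,
        "metadata": {"public": "false"},  # Harbor expects string booleans here
        "storage_limit": quota_bytes,     # per-project quota, in bytes
    }

body = harbor_project("mytool", 5 * 1024**3)  # e.g. a 5 GiB quota
print(body["project_name"])
```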

From there, it may be worth looking at Argo again. Others in the organization are already working with it, so we may benefit from cross-team collaboration, or at least from quizzing them. Since its claim to fame is precisely putting things from git onto Kubernetes, it can likely be made to work one way or another.
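For reference, the kind of object Argo CD watches is an Application manifest like the following minimal sketch; the repo URL, paths, and namespace names are placeholders, not a proposed layout.

```yaml
# Sketch of an Argo CD Application: "keep this git path synced into
# this namespace". Names and URLs are illustrative only.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mytool
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.org/mytool/deploy.git
    targetRevision: HEAD
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: tool-mytool
  syncPolicy:
    automated: {}
```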

Open Questions

  • [nskaggs] Does this work move us closer to deprecation of gridengine in any way, i.e. by enabling support for things only possible on gridengine today?