
GitLab/Gitlab Runner

From Wikitech-static
Revision as of 15:07, 14 October 2021 by imported>Jelto

GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline.[1] For more information see the official GitLab Runner documentation.

Current Gitlab Runner setup (T287279)

We're currently relying on WMCS VPSs for shared runner capacity. There is a project named gitlab-runners in which to provision new instances, and a profile to help provision Docker based runners on those instances. Note that a standalone puppetmaster in the same project stores the runner registration token under /etc/puppet/secret, and Puppet autosigning is turned off to protect the token value.

Setting up a new shared runner

To set up a new shared runner, follow these steps.

  1. Create a new WMCS VPS instance.
    1. Log in to [1] and navigate to the gitlab-runners project.
    2. Launch a new Debian buster instance, following the runner-{nnnn} naming convention.
    3. Add profile::gitlab::runner to the instance's Puppet Classes under the Puppet Configuration tab.
  2. Wait until the new instance has fully provisioned and you can successfully ssh to the running instance using your authorized key. (This typically takes a few minutes.)
  3. Do the little SSL dance that is required of instances that use a standalone puppetmaster.
    1. On the new runner (runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud).
      1. Run sudo rm -rf /var/lib/puppet/ssl to remove the existing SSL certs used by the default puppetmaster.
      2. Run sudo -i puppet agent --test --verbose to have the puppet client generate a new SSL cert.
    2. On gitlab-runners-puppetmaster-01.gitlab-runners.eqiad1.wikimedia.cloud sign the new instance's SSL cert.
      1. Run sudo -i puppet cert list and find the new instance in the list.
      2. Run sudo -i puppet cert sign runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud to sign the client cert.
  4. Run sudo -i puppet agent --test --verbose on the runner to ensure it has fully provisioned the profile::gitlab::runner profile.
  5. Verify that the runner has successfully registered with our GitLab instance by viewing the runner list.
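The certificate exchange in steps 3–4 can be condensed into the following command sequence (substitute your instance name for runner-{nnnn}; this is a sketch of the commands listed above, not an automated script):

```shell
# On the new runner instance (runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud):
sudo rm -rf /var/lib/puppet/ssl          # drop certs issued by the default puppetmaster
sudo -i puppet agent --test --verbose    # generates a new cert request for the standalone puppetmaster

# On gitlab-runners-puppetmaster-01.gitlab-runners.eqiad1.wikimedia.cloud:
sudo -i puppet cert list                 # the new runner should appear as a pending request
sudo -i puppet cert sign runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud

# Back on the runner, a clean agent run applies profile::gitlab::runner:
sudo -i puppet agent --test --verbose
```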

Future Gitlab Runner setup (T286958)

This section contains the requirements and plan for a future GitLab Runner setup. The goal is to find a solution that matches our needs for the GitLab Runner infrastructure. GitLab Runner supports various platforms, such as Kubernetes, Docker, OpenShift, or plain Linux VMs. Furthermore, a wide range of compute environments can be leveraged, such as WMCS, Ganeti, bare-metal hosts, or public clouds. This section therefore compares the different options and collects their advantages and disadvantages. Privacy considerations can be found in the next section.

GitLab Runner platform

GitLab Runner can be installed on a number of platforms. The following table compares the most important ones (see Install GitLab Runner):

Linux
  Advantages:
    • Easy to set up
    • Low maintenance
  Disadvantages:
    • Low elasticity, difficult to scale
    • No separation of jobs[2]
  Additional considerations:
    • Separation between jobs is low, which could lead to security and privacy issues

Container
  Advantages:
    • Easy to set up
    • Low maintenance
    • Separation of jobs by containers
  Disadvantages:
    • Difficult to scale
    • Auto-scaling needs docker-machine
  Additional considerations:
    • Similar to the current solution

Kubernetes
  Advantages:
    • High elasticity/auto-scaling
    • Separation of jobs by containers
  Disadvantages:
    • An additional Kubernetes cluster is needed (for security)
    • The additional cluster needs maintenance
    • More difficult to set up
  Additional considerations:
    • Could be used to strengthen Kubernetes knowledge
    • Auto-scaling needs an elastic compute platform
    • Maybe a general-purpose non-production cluster could be built?

OpenShift
  Disadvantages:
    • Not in use at WMF
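As an illustration of the container option, a Docker-based runner is registered with the standard gitlab-runner CLI; the URL, token, and description below are placeholders, not our actual values:

```shell
# Register a runner that executes each CI job in its own Docker container.
# --url and --registration-token must be replaced with the instance's values.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.org/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "debian:buster" \
  --description "docker-runner-example"
```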

Compute Environments

The following table compares the four main computing options for the GitLab Runner setup: WMCS, Ganeti, Bare Metal or Public Cloud.

WMCS
  Advantages:
    • Somewhat elastic
    • Kubernetes auto-scaling may be able to leverage OpenStack(?)
  Disadvantages:
    • Only available in eqiad
    • Not a fully trusted environment
  Additional considerations:
    • Elasticity is bound to appropriate quotas
    • Kubernetes on OpenStack is new and different from existing Kubernetes solutions

Ganeti
  Advantages:
    • Trusted environment
    • Medium elasticity

Bare metal
  Advantages:
    • Trusted environment
    • Similar environment to existing Kubernetes setups
  Disadvantages:
    • Low elasticity
    • Machines have to be ordered and racked
  Additional considerations:
    • Could old/decommissioned machines be used as runners?

Public cloud (e.g. GCP)
  Advantages:
    • High elasticity
    • Low maintenance
    • Easy Kubernetes setup (e.g. GKE)
  Disadvantages:
    • Untrusted environment (see privacy section)
    • Dependency on the cloud provider
  Additional considerations:
    • A discussion about public cloud usage is needed
    • An evaluation of privacy considerations is needed (see below)
Elastic demand

Typically, the demand for computing resources for building code and running tests is not constant. CI usage peaks during African, European, and US working hours on workdays (see Grafana dashboard and dashboard). The ideal solution would therefore adapt to this usage and scale computing resources up and down, maximizing resource utilization while covering usage peaks. However, this elasticity comes at a cost: dynamic provisioning of runners is generally more complex than static provisioning. Currently, internal compute environments (such as Ganeti or bare metal) have limited elasticity, and WMCS is only somewhat elastic. So if high elasticity is needed, we have to consider using external providers like GKE, which opens the discussion about privacy (see next chapter) and independence from external parties.

We assume that elasticity won't have a major impact on costs in our current environment. More importantly, elasticity could help serve usage peaks and keep the total pipeline latency low, thus increasing developer productivity. A similar effect could be achieved by simply over-provisioning the runner infrastructure.

So even if an elastic runner setup would be the better technical solution, we have to ask whether we really need high elasticity now.

Further reading:

https://docs.gitlab.com/runner/enterprise_guide/#autoscaling-configuration-one-or-more-runner-managers-multiple-workers

https://docs.gitlab.com/runner/executors/kubernetes.html

https://docs.gitlab.com/runner/configuration/autoscale.html
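For reference, docker-machine based auto-scaling is driven by the [runners.machine] section of the runner's config.toml; a minimal sketch might look like the following (all values are illustrative, not a proposal):

```toml
concurrent = 10

[[runners]]
  name = "autoscale-example"
  url = "https://gitlab.example.org/"   # placeholder
  token = "RUNNER_TOKEN"                # placeholder
  executor = "docker+machine"
  limit = 10                            # at most 10 machines created by this runner
  [runners.docker]
    image = "debian:buster"
  [runners.machine]
    IdleCount = 2                       # keep 2 idle machines ready to absorb peaks
    IdleTime = 1800                     # remove a machine after 30 min of idleness
    MaxBuilds = 100                     # recycle a machine after 100 jobs
    MachineDriver = "google"            # driver depends on the chosen compute environment
    MachineName = "runner-%s"
```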

Privacy and trust considerations

Privacy is a core principle of WMF. So if public clouds are used, we have to make sure this usage aligns with our privacy policy and doesn't introduce any security risks.

We have to think about what data is transmitted to public clouds during builds and tests. Do we include secrets, passwords, or private user data when running a job? Do we need a special policy for CI variables and secrets? Do we consider this data leaked/compromised once it has been transmitted to public cloud machines, even when encryption and restricted machines are used? We also have to think about how to secure the artifacts and test results of jobs running in public clouds. How do we implement trust? How do we check that artifacts (images, compiled code) or test results weren't compromised?

The safest and easiest approach would be to implement two different runner environments, one for untrusted builds and one for trusted builds. This solution was proposed by ServiceOps[3].

In GitLab terms, this would mean hosting Shared Runners for all untrusted projects and builds. These Shared Runners could be hosted in WMCS or a public cloud and, if possible, not inside the production network, for security reasons. Furthermore, Specific Runners could be installed in a trusted environment and assigned to specific projects. It is also possible to use these Specific Runners only for specific branches and tags; see Protected Runners.
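This split can be expressed in a project's .gitlab-ci.yml: untrusted jobs run on the Shared Runners by default, while a job tagged for the trusted runners only runs on protected refs (the job names and the 'trusted' tag below are illustrative):

```yaml
# Runs on the Shared Runners (untrusted environment) by default.
test:
  stage: test
  script:
    - make test

# Picked up only by runners registered with the 'trusted' tag; combined
# with a Protected Runner, it will only run for protected branches/tags.
build-production:
  stage: build
  tags:
    - trusted
  only:
    - main
  script:
    - make release
```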

Monitoring of performance and usage

GitLab Runner supports exporting Prometheus metrics. These metrics and some Grafana dashboards should give insight into performance and usage. See the Monitoring GitLab Runner documentation.

However, the GitLab Runner exporter supports neither authorization nor HTTPS. So depending on where the runners are hosted, an HTTPS proxy with authorization is required.[4]
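The exporter is enabled by setting a listen address in the global section of config.toml; the port shown below is the conventional default for the runner exporter:

```toml
# config.toml (global section): expose Prometheus metrics on port 9252.
# Note: plain HTTP and no authentication -- front with a TLS proxy if exposed.
listen_address = ":9252"
```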

Proposed future architecture

The following section describes the proposed architecture for GitLab Runner developed by ServiceOps. The architecture is open for discussion with other stakeholders.

The general architecture is based on a non-GitLab-specific architecture proposed by ServiceOps some time ago (see https://people.wikimedia.org/~oblivian/ci/ci-threat.pdf). The diagram on the right translates it into a GitLab-focused architecture. The proposed setup consists of one production GitLab instance (which needs some additional configuration; see questions below), a set of Shared GitLab Runners in an untrusted environment, and a set of Protected/Specific GitLab Runners in a trusted environment.

Shared GitLab Runners

Shared GitLab Runners are general-purpose CI workers. They can be used in every project but can also be disabled for certain projects or groups. In the proposed architecture, Shared Runners execute untrusted code from volunteers, developers, and SREs, so these runners are also considered untrusted.[5]

Proposed Runner configuration: Shared Runners

Proposed environment: In the beginning WMCS, long term GCP

Proposed platform: In the beginning Linux + containerized Runners (and reuse existing Puppet code), long term Google Kubernetes Engine

Open topics:

  • What happens to tagged jobs when Specific Runners become unavailable?
  • Can we have a dedicated artifact store for Shared Runners?

Protected GitLab Runners

To build and deploy code to production environments, a trusted set of runners is needed. Access to these runners should be restricted and gated. For that purpose, GitLab offers more specific CI workers, namely Group Runners and Specific Runners. Group Runners can be assigned at the group level, Specific Runners at the project level. Both runner types can be configured to run only jobs with certain tags (like 'production'). Special CI jobs (like building production code) need to define these tags as well.

Furthermore, it is possible to secure the CI credentials and variables of these runners by protecting them. Protected Runners are only allowed to run jobs from protected branches.
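When registering such a runner, the tag list and protection level can be set directly on the command line (URL, token, and the 'trusted' tag are placeholders; the --access-level flag requires a reasonably recent gitlab-runner release):

```shell
# Register a Specific Runner that only accepts 'trusted'-tagged jobs
# originating from protected branches/tags.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.org/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "debian:buster" \
  --tag-list "trusted" \
  --access-level "ref_protected"
```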

Proposed Runner configuration: Specific Runners

Proposed environment: In WMF datacenter/ganeti or bare metal

Proposed platform: Linux + containerized Runners (and reuse existing Puppet code)

Open topics:

  • Can we have a dedicated artifact store for Protected Runners?
  • Is it possible to build docker images in Runners with the Docker Executor easily and securely?
  • Do we want to manage Specific Runners for every project or use Group Runners instead?