
GitLab/Gitlab Runner


GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline.[1] For more information see the official GitLab Runner documentation.

Current Gitlab Runner setup (T287504)

We're currently relying on WMCS VPS instances for shared runner capacity. There is a project named gitlab-runners in which to provision new instances, and a profile to help provision Docker-based runners on those instances. Note that a standalone puppetmaster in the same project stores the runner registration token under /etc/puppet/secret, and Puppet autosigning is turned off to protect the token value.
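
For illustration, the Puppet profile essentially wraps a Docker-executor registration like the following. This is a minimal sketch only: the URL, image, description and token handling below are placeholders, and the real registration is managed entirely by profile::gitlab::runner and the project puppetmaster.

  # Illustrative only: the actual registration is performed by profile::gitlab::runner,
  # and the token lives on the project puppetmaster under /etc/puppet/secret.
  sudo gitlab-runner register \
    --non-interactive \
    --url "https://gitlab.wikimedia.org/" \
    --registration-token "<registration token>" \
    --executor "docker" \
    --docker-image "docker-registry.wikimedia.org/buster:latest" \
    --description "runner-1001" \
    --tag-list "docker" \
    --run-untagged="true"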

Setting up a new shared runner

To set up a new shared runner, follow these steps (a consolidated command sequence is shown after the list).

  1. Create a new WMCS VPS instance.
    1. Log in to [1] and navigate to the gitlab-runners project.
    2. Launch a new Debian buster instance, following the runner-{nnnn} naming convention.
    3. Add profile::gitlab::runner to the instance's Puppet Classes under the Puppet Configuration tab.
  2. Wait until the new instance has fully provisioned and you can successfully ssh to the running instance using your authorized key. (This typically takes a few minutes.)
  3. Do the little SSL dance that is required of instances that use a standalone puppetmaster.
    1. On the new runner (runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud).
      1. Run sudo rm -rf /var/lib/puppet/ssl to remove the existing SSL certs used by the default puppetmaster.
      2. Run sudo -i puppet agent --test --verbose to have the puppet client generate a new SSL cert.
    2. On gitlab-runners-puppetmaster-01.gitlab-runners.eqiad1.wikimedia.cloud, sign the new instance's SSL cert.
      1. Run sudo -i puppet cert list and find the new instance in the list.
      2. Run sudo -i puppet cert sign runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud to sign the client cert.
  4. Run sudo -i puppet agent --test --verbose on the runner to ensure it has fully provisioned the profile::gitlab::runner profile.
  5. Verify that the runner has successfully registered with our GitLab instance by viewing the runner list.
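
For reference, here are the commands from steps 3 and 4 collected into one sequence (substitute the actual instance name for runner-{nnnn}):

  # On the new runner instance (runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud)
  sudo rm -rf /var/lib/puppet/ssl         # drop the certs issued by the default puppetmaster
  sudo -i puppet agent --test --verbose   # generates a new cert request

  # On gitlab-runners-puppetmaster-01.gitlab-runners.eqiad1.wikimedia.cloud
  sudo -i puppet cert list                # the new instance should appear here
  sudo -i puppet cert sign runner-{nnnn}.gitlab-runners.eqiad1.wikimedia.cloud

  # Back on the runner, apply the full profile
  sudo -i puppet agent --test --verbose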

Future Gitlab Runner setup (T286958)

This section contains the requirements and plan for a future GitLab Runner setup. The goal is to find a solution which matches our needs for the GitLab Runner infrastructure. GitLab Runner supports various environments, such as Kubernetes, Docker, OpenShift or plain Linux VMs. Furthermore, a wide range of compute platforms can be leveraged, such as WMCS, Ganeti, bare metal hosts or public clouds. This section therefore compares the different options and collects advantages and disadvantages. Privacy considerations can be found in a later section.

GitLab Runner environments

GitLab Runner can be installed in a variety of environments. The following list compares the most important options (see Install GitLab Runner):

Linux
  • Advantages: easy to set up; low maintenance
  • Disadvantages: low elasticity, difficult to scale
Container
  • Advantages: easy to set up; low maintenance; separation of jobs by containers; similar to the current solution
  • Disadvantages: difficult to scale; auto scaling needs docker-machine
Kubernetes (a configuration sketch follows this list)
  • Advantages: high elasticity/auto scaling; separation of jobs by containers
  • Disadvantages: an additional Kubernetes cluster is needed (for security); the additional cluster needs maintenance; more difficult to set up
  • Additional considerations: could be used to strengthen Kubernetes knowledge; auto scaling needs an elastic compute platform; maybe a general-purpose non-production cluster could be built?
OpenShift
  • Disadvantages: not in use at WMF
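
To give an idea of what the Kubernetes option involves, a Kubernetes executor is configured in the runner's config.toml roughly as follows. This is a minimal sketch: the namespace, image and resource values are made-up placeholders, and no such cluster exists yet.

  concurrent = 10

  [[runners]]
    name = "k8s-runner"                # placeholder name
    url = "https://gitlab.wikimedia.org/"
    token = "<runner token>"
    executor = "kubernetes"
    [runners.kubernetes]
      namespace = "gitlab-runner"      # CI jobs run as pods in this namespace
      image = "docker-registry.wikimedia.org/buster:latest"   # default job image (placeholder)
      cpu_request = "500m"
      memory_request = "512Mi"
      cpu_limit = "2"
      memory_limit = "4Gi"
      poll_timeout = 600               # seconds to wait for a job pod to start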

Compute Platforms

The following list compares the four main compute platforms for the GitLab Runner setup: WMCS, Ganeti, bare metal, and public cloud.

WMCS
  • Advantages: high elasticity; semi-trusted environment (?); Kubernetes auto scaling can leverage OpenStack
  • Disadvantages: only available in eqiad; elasticity is bound to appropriate quotas; Kubernetes on OpenStack is new and different from existing Kubernetes solutions
Ganeti
  • Advantages: trusted environment; medium elasticity
Bare metal
  • Advantages: trusted environment; similar environment to existing Kubernetes setups
  • Disadvantages: low elasticity; machines have to be ordered and racked
  • Additional considerations: could old/decommissioned machines be used as runners?
Public cloud (e.g. GCP)
  • Advantages: high elasticity; low maintenance; easy Kubernetes setup (e.g. GKE)
  • Disadvantages: untrusted environment (see privacy section); dependency on the cloud provider
  • Additional considerations: a discussion about public cloud usage is needed; an evaluation of privacy considerations is needed (see below)

Elastic demand

Typically the demand for computing resources for building code and running tests is not constant. CI usage peaks during African, European and US working hours on workdays (see Grafana dashboard and dashboard). The ideal solution would therefore adapt to this usage and scale computing resources up and down, maximizing the utilization of resources while still covering usage peaks. However, this elasticity comes at a cost: dynamic provisioning of runners is generally more complex than static provisioning. Currently the internal compute platforms (such as Ganeti or bare metal) have limited elasticity, and WMCS is only somewhat elastic. So if high elasticity is needed, we have to consider external providers like GKE, which opens the discussion about privacy (see next chapter) and about staying independent of external parties.

So even if an elastic runner setup would be the better technical solution, we have to ask whether we really need high elasticity now.
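
For illustration, the docker+machine executor can scale workers on a schedule that follows the usage peaks described above. This is a sketch based on the autoscale documentation linked under Further reading; the driver, image, machine name and numbers are placeholders, not an existing configuration.

  concurrent = 50

  [[runners]]
    executor = "docker+machine"
    [runners.docker]
      image = "docker-registry.wikimedia.org/buster:latest"   # default job image (placeholder)
    [runners.machine]
      IdleCount = 2                    # machines kept warm outside peak hours
      IdleTime = 600
      MaxBuilds = 100
      MachineDriver = "openstack"      # placeholder; any docker-machine driver
      MachineName = "ci-worker-%s"
      # Keep more idle machines available during African/European/US working hours
      [[runners.machine.autoscaling]]
        Periods = ["* * 7-19 * * mon-fri *"]
        IdleCount = 10
        IdleTime = 1800
        Timezone = "UTC"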

Further reading:

https://docs.gitlab.com/runner/enterprise_guide/#autoscaling-configuration-one-or-more-runner-managers-multiple-workers

https://docs.gitlab.com/runner/executors/kubernetes.html

https://docs.gitlab.com/runner/configuration/autoscale.html

Privacy considerations

Privacy is one core principle of WMF. So if public clouds are used, we have to make sure this usage aligns with our privacy policy and doesn't cause any security risks.

We have to think about what data is transmitted to public clouds during builds and tests. Do we include secrets, passwords or private user data when running a job? Do we need a special policy for CI variables and secrets? Do we consider this data leaked/compromised when transmitted to public cloud machines even when encrypted/restricted machines are used?

We also have to think about how to secure the artifacts and test results of jobs running in public clouds. How do we implement trust? How do we check if artifacts (images, compiled code) or test results weren't compromised?

Monitoring of performance and usage

GitLab Runner supports Prometheus metrics export. These metrics and some Grafana dashboards should give insight into performance and usage. See the Monitoring GitLab Runner documentation.

However, the GitLab Runner exporter does not support authorization or HTTPS. So depending on where the runners are hosted, an HTTPS proxy with authorization is required; a minimal sketch of the exporter and scrape configuration is shown below.

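A minimal sketch of such a setup: the exporter is enabled via listen_address in the runner's config.toml, and Prometheus scrapes it, ideally through a proxy that adds TLS and authentication. The hostname and job name below are placeholders.

  # /etc/gitlab-runner/config.toml (global section): enable the built-in exporter
  listen_address = ":9252"

  # prometheus.yml: scrape_configs entry (the target would sit behind the HTTPS/auth proxy)
  scrape_configs:
    - job_name: 'gitlab-runner'
      static_configs:
        - targets: ['runner-1001.gitlab-runners.eqiad1.wikimedia.cloud:9252']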

Open questions:

  • Do we need high elasticity now/in the near future?
  • Do we have any drawbacks with the current, non-elastic setup with static Jenkins machines?
  • Do we want to open a discussion about public cloud usage and privacy?
  • Do we want to be dependent on public cloud offerings?
  • Do we have the resources to plan, implement and maintain an additional Kubernetes cluster?
  • Is a GitLab Runner Kubernetes cluster the best way to accumulate Kubernetes knowledge?