
Dragonfly

From Wikitech-static
Revision as of 07:36, 17 August 2021 by imported>JMeybohm

Dragonfly is a peer-to-peer-based file distribution system we use for distributing docker image layers to Kubernetes worker nodes. It was added to our infrastructure to overcome the issue of overloaded Docker-registry nodes when big deployments (in terms of number of replicas) that also use big docker images (in terms of layer size) are rolled out (read: MediaWiki).

Dragonfly consists of multiple components:

  • supernode: a service running on dedicated hosts (Ganeti VMs) that acts as a tracker and scheduler for the P2P network.
  • dfget: download client (like wget) that at the same time acts as peer in the P2P network.
  • dfdaemon: local HTTP(S) proxy between the docker container engine and the docker registry. It filters out requests for (specific) layers and uses dfget to download those via the P2P network instead.

For more complete documentation on the design and implementation of Dragonfly, please refer to design.md on GitHub.

You may also want to watch the Introduction to Dragonfly from KubeCon 2019.

Operations

We currently run one supernode in each data center, listening on tcp/8002. All Kubernetes nodes (P2P peers) in a given data center use their local supernode to span the P2P network. That means Dragonfly P2P networks do not (and should not) span data center boundaries.

On each Kubernetes node we run dfdaemon (listening on tcp/65001) as an HTTPS proxy between dockerd and the docker registries. dfdaemon is configured with a TLS certificate that contains the alt name docker-registry.discovery.wmnet, so that connections from dockerd can be transparently hijacked and potentially re-routed through the P2P network. dfdaemon does this by spawning multiple instances of dfget to download from the P2P network, plus one instance to serve parts (4 MB chunks of docker image layers) to it. The serving instance listens on tcp/15001 for connections from other peers; after around 5 minutes of inactivity the peer unregisters itself and removes the cached chunks from disk.
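As a back-of-the-envelope illustration of the chunking described above (the 4 MB chunk size is from this page; the layer size is a made-up example):

```shell
# Illustrative only: how many 4 MiB chunks a hypothetical 100 MiB layer yields.
layer_bytes=$((100 * 1024 * 1024))
chunk_bytes=$((4 * 1024 * 1024))
# Ceiling division: a partial trailing chunk still counts as a chunk.
chunks=$(( (layer_bytes + chunk_bytes - 1) / chunk_bytes ))
echo "${chunks} chunks"   # 25 chunks
```

Each of those chunks can be fetched from a different peer, which is what spreads the load away from the registry.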

If a supernode fails, dfdaemon on each P2P peer will send traffic directly to the "source" of the requested data (e.g. the docker-registry) instead of failing. That means that in case of an issue with the P2P network, all docker daemons will pull (more or less) directly from the docker-registry again, potentially exhausting its network links.

As dfdaemon hijacks the TLS connection to the registry, it can filter incoming requests, passing only specific ones via the P2P network and forwarding everything else directly to the docker registry. This is done by providing a URL regex list to dfdaemon; we currently select only immutable blobs of MediaWiki images for transmission via P2P (see hieradata/common/profile/dragonfly/dfdaemon.yaml).
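The effect of such a URL filter can be sketched with a plain grep. The regex and the `route` helper below are made-up illustrations, not the production values from hieradata:

```shell
# Hypothetical filter: only immutable blob fetches for MediaWiki images go via P2P.
filter='/v2/.*mediawiki.*/blobs/sha256:[0-9a-f]+'

# route URL -> prints "p2p" if the request would be hijacked, "direct" otherwise.
route() {
  if echo "$1" | grep -Eq "$filter"; then echo "p2p"; else echo "direct"; fi
}

route "https://docker-registry.discovery.wmnet/v2/wikimedia/mediawiki-multiversion/blobs/sha256:0abc123"   # p2p
route "https://docker-registry.discovery.wmnet/v2/wikimedia/mediawiki-multiversion/manifests/latest"       # direct
```

Manifests stay off the P2P path because they are mutable (tags move), while blobs are content-addressed and therefore safe to cache and share.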

Monitoring / logging

Monitoring currently relies on Icinga to watch the state of the systemd services on the supernodes as well as on the P2P peers (dfdaemon). There is a direct link to the Icinga checks for Dragonfly nodes.

There is also a Grafana dashboard with some metrics.

Where to look for logs

  • supernode: /var/lib/dragonfly-supernode/logs/app.log
  • peer
    • dfdaemon: /var/lib/dragonfly-dfdaemon/logs/dfdaemon.log
    • dfget's downloading chunks: /var/lib/dragonfly-dfdaemon/dfget/logs/dfclient.log
    • dfget's serving chunks: /var/lib/dragonfly-dfdaemon/dfget/logs/dfserver.log
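A quick sweep over the log locations listed above can be sketched like this; on a host without Dragonfly installed it simply reports the files as missing:

```shell
# Print recent warnings/errors from each Dragonfly log, or note a missing file.
dragonfly_logs() {
  for f in /var/lib/dragonfly-supernode/logs/app.log \
           /var/lib/dragonfly-dfdaemon/logs/dfdaemon.log \
           /var/lib/dragonfly-dfdaemon/dfget/logs/dfclient.log \
           /var/lib/dragonfly-dfdaemon/dfget/logs/dfserver.log; do
    if [ -f "$f" ]; then
      echo "== $f =="
      grep -iE 'error|warn' "$f" | tail -n 20
    else
      echo "missing: $f"
    fi
  done
}
dragonfly_logs
```

(No single host has all four files: the supernode log lives on the supernode VMs, the other three on the Kubernetes nodes.)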

Disable the use of Dragonfly on a Kubernetes node

The only hook for docker into the P2P network is an HTTPS_PROXY environment variable added via a systemd override (/etc/systemd/system/docker.service.d/puppet-override.conf). That makes it easy to disable Dragonfly by simply reverting said override and restarting docker:

sudo disable-puppet 'disable dragonfly'
sudo systemctl revert docker.service
sudo systemctl restart docker.service
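For reference, the override being reverted is roughly of the following shape. This is a sketch, assuming the proxy address is the local dfdaemon port mentioned above; the authoritative content is puppet-managed:

```ini
# /etc/systemd/system/docker.service.d/puppet-override.conf (illustrative sketch)
[Service]
Environment="HTTPS_PROXY=https://127.0.0.1:65001"
```

To re-enable Dragonfly, re-enable puppet (sudo enable-puppet 'disable dragonfly') and let a puppet run restore the override, then restart docker again.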

Packaging

The code is hosted in operations/debs/dragonfly and uses the git-buildpackage workflow.

Importing a new version

The imported upstream tarballs should include the complete vendor directory.

  • Check out the version (git tag) to import
$ ./debian/repack vX.Y.Z
  • This drops you into a shell with the git tag checked out. Do necessary changes here and commit
$ go mod vendor
$ git add -f vendor
# Verify that only vendor/ changed: git diff --name-status --cached | grep -v 'vendor/' should print nothing
$ git commit -m "added vendor"
  • Exiting the shell will build a tarball to import
$ gbp import-orig /path/to/tarball.tar.xz
  • Push changes (including the tag created by gbp) to gerrit
$ git push gerrit --all
$ git push gerrit --tags
  • Add a debian/changelog entry (as CR)
$ gbp dch
# Edit debian/changelog
$ git commit
$ git review

Building a new version

  • Check out the git repo on the build host
$ git clone "https://gerrit.wikimedia.org/r/operations/debs/dragonfly" && cd dragonfly
  • Build the package
$ BACKPORTS=yes gbp buildpackage --git-pbuilder --git-no-pbuilder-autoconf --git-dist=buster -sa -uc -us

Publishing a new version

# on apt1001
rsync -vaz deneb.codfw.wmnet::pbuilder-result/buster-amd64/dragonfly* .
sudo -i reprepro -C main include buster-wikimedia /path/to/<PACKAGE>.changes

# Copy the package over to other distros (this is possible because they only contain static binaries)
sudo -i reprepro copysrc stretch-wikimedia buster-wikimedia dragonfly

Patches

If you need to add/update patches, please see: https://honk.sigxcpu.org/projects/git-buildpackage/manual-html/gbp.patches.html

Resources

Open issues: