You are browsing a read-only backup copy of Wikitech (Wikitech-static).
Revision as of 21:37, 7 May 2021 by Bstorm (→Purpose of nodes: updated a lot of the nodes, removed flannel (long gone) and removed paws (its own project now))


The list of Toolforge nodes. This page describes what kinds of nodes exist in the cluster, what each node does, and a pre-selected list of canary servers to use for common operations such as testing changes.

Complete list of nodes

The source of truth for this information is at

Take this list as an example, and refer to the source of truth for actual, current data.

Purpose of nodes

Explanation of what each type of node is.

bastion nodes

Servers that give users SSH access to the clusters. You can submit your grid jobs from here.

They are not meant to run any actual workload. There are usually about 3 of them.
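Submitting a grid job from a bastion can look like this (the job name and script path are made-up placeholders; jsub is the Toolforge grid submission wrapper, and qstat then shows the job's state in the queue):

user@bastion:~$ jsub -N example-job ./example-script.sh
user@bastion:~$ qstat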

Nodes are like:

GridEngine nodes

Nodes that run GridEngine and form the pool that runs standard Toolforge job workloads.

There should be about 20 to 30 of them.

  • Nodes are like:
  • Master nodes are like:

Kubernetes nodes

Nodes that run Kubernetes and hold the k8s workloads (pods).

There should be about 10 to 20 of them.

  • Nodes are like:
  • Control plane nodes are like:
  • etcd nodes are like:
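Tools normally interact with these nodes indirectly, via kubectl from a bastion; as a sketch (the tool account name here is hypothetical, and the pods listed will vary per tool):

tools.mytool@bastion:~$ kubectl get pods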

Web nodes

Nodes that act as frontend web hosts for tools in Toolforge. They are also part of the Grid Engine grid.

There should be about 10 to 20 of them.

  • Generic nodes are like:
  • Those using lighttpd are like:
  • General web proxy nodes:
  • Static content nodes:

Docker nodes

These nodes are meant to build and distribute docker containers.

There should be about 3 of them.

  • Builder nodes are like:
  • Docker registry nodes are like:
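The Kubernetes workers pull tool images from the registry nodes; as a sketch (the registry hostname and image name here are assumptions for illustration, check current documentation for real values):

user@host:~$ docker pull docker-registry.example.org/toolforge-base:latest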

Elasticsearch nodes

Elasticsearch cluster for use by tools. All indices are available for read-only access inside Toolforge. Writing requires a username and password.

There should be 3 nodes.

  • Nodes are like:
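A read-only query from inside Toolforge might look like this (the hostname and port are hypothetical placeholders; writes would additionally need the username and password mentioned above):

user@bastion:~$ curl -s http://elasticsearch.svc.example:80/_cat/indices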

checker nodes

  • Checker nodes are like:

misc nodes

Some misc nodes.

  • Clush master, control of the whole cluster using Clustershell:
  • Cron master, run cron jobs submitted by tools users:
  • SMTP email for the cluster:
  • DEB package builder:
  • Prometheus deployment nodes:
  • Redis nodes:
  • Misc services nodes (aptly and others):

Canary nodes

A pre-selected list with one node of each type that you can use to test changes before deploying them to the whole cluster.

Depending on the test or task you are doing, you may need to craft a different list:

  • Different OS (Ubuntu Trusty, Debian Jessie, Debian Stretch)
  • By Linux kernel version
  • By different workload, usage or general load
  • Other criteria

On the Toolforge clushmaster node, there should be a host list ready to use with clush: toolforge_canary_list.txt

It can be used like this:

user@tools-clushmaster-01:~$ clush --hostfile /etc/clustershell/toolforge_canary_list.txt "command"

Communication and support

We communicate and provide support through several primary channels. Please reach out with questions and to join the conversation.

Communicate with us
Connect | Best for
Phabricator Workboard #Cloud-Services | Task tracking and bug reporting
IRC channel #wikimedia-cloud (Telegram and Mattermost bridges available) | General discussion and support
Mailing list cloud@ | Information about ongoing initiatives, general discussion and support
Announcement emails cloud-announce@ | Information about critical changes (all messages mirrored to cloud@)
News wiki page News | Information about major near-term plans
Cloud Services Blog Clouds & Unicorns | Learning more details about some of our work
Wikimedia Technical Blog | News and stories from the Wikimedia technical movement

See also