Portal:Toolforge/Admin/Services
{{draft}}
This page contains information regarding the '''services''' that are provided internally to '''Toolforge''' nodes.
 
= Deployment components and architecture =
 
Information on how the setup is deployed, and the different components.
 
== Servers ==


Usually a VM with a cinder volume to store repository data.
 
== Addressing, DNS and proxy ==


There is a Horizon web proxy called '''deb-tools.wmcloud.org''' that should point to TCP/80 on the server. This allows building Docker images using Toolforge-internal packages.
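As a sketch, a container image build could point <code>apt</code> at the repository through this proxy with a sources entry like the following (the URL path and distribution name are assumptions based on the repository layout described below, not confirmed values):

```shell
# Hypothetical apt sources entry for consuming Toolforge-internal packages
# through the deb-tools.wmcloud.org proxy (plain HTTP on TCP/80).
# The "/repo" path and "buster-tools" distribution are assumptions.
line='deb http://deb-tools.wmcloud.org/repo buster-tools main'
echo "$line"  # in a Dockerfile this would be written to /etc/apt/sources.list.d/
```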


Other than that, the servers don't have any special DNS or addressing. They don't have floating IPs.


It is worth noting that these servers in the '''tools''' Cloud VPS project may offer services for the '''toolsbeta''' project as well.


== Puppet ==


The main role in use is '''role::wmcs::toolforge::services'''.


== updatetools ==


{{Warning | This no longer runs in services nodes. This is now part of the 'admin' tool. See https://phabricator.wikimedia.org/T229261}}
 
'''updatetools''' is a Python script that updates tools and maintainers information to be used by [https://tools.wmflabs.org tools.wmflabs.org] (source code available at [https://phabricator.wikimedia.org/source/tool-admin-web/ tool-admin-web]).


It gets a list of tools (accounts starting with "tools."), reads their <code>.description</code> and <code>toolinfo.json</code> files and adds it to the <code>tools</code> table in the <code>toollabs_p</code> database. Maintainer information is retrieved by getting all users that belong to the tool's group and using <code>getpwnam()</code> to retrieve user information, which then gets added to the <code>users</code> table.
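The group-membership and <code>getpwnam()</code> lookups can be reproduced from a shell with <code>getent</code>; a minimal sketch (shown against the <code>root</code> account so it runs anywhere — on the services host you would query a <code>tools.*</code> group instead):

```shell
# updatetools resolves maintainers by listing the members of the tool's
# group and then calling getpwnam() for each member. Shell equivalents:
#   getent group tools.<toolname>   -> field 4 is the member list
#   getent passwd <member>          -> the same record getpwnam() returns
getent passwd root | cut -d: -f1
```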


This script runs, as a service, from the active <code>tools-services-*</code> server, and wakes up every 120 seconds to populate the tables with new data.
The database in use is <code>tools.labsdb</code>, which is <code>tools.db.svc.eqiad1.wikimedia.cloud</code>.


== apt repository ==


One of the main purposes of this service is to host Debian packages for other servers by means of '''aptly'''.
 
Repositories are declared in puppet, but packages should be added to the aptly repository by hand.<br>
We usually have one repository per operating system and project, e.g.:
* stretch-tools
* buster-tools
* stretch-toolsbeta
* buster-toolsbeta
 
Some examples of packages stored here are:
 
* https://gerrit.wikimedia.org/r/#/admin/projects/labs/toollabs
* https://gerrit.wikimedia.org/r/admin/projects/operations/software/tools-webservice
* https://gerrit.wikimedia.org/r/admin/projects/operations/software/tools-manifest


(among others)


The repository data, located at '''/srv/packages''', is stored in a mounted cinder volume.
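To confirm the volume is actually mounted, <code>findmnt</code> is handy; the sketch below runs against <code>/</code> so it works anywhere, but on the services host you would pass <code>/srv/packages</code>:

```shell
# Check that a directory is a mount point (i.e. backed by its own volume).
# On the services host, the interesting check is: findmnt /srv/packages
findmnt -n -o TARGET /
```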


= Admin operations =


Information on maintenance and administration of this setup.
 
== managing aptly repo ==
 
It is managed as a standard [[Aptly|aptly]] repo.
 
== health ==
 
Some interesting bits to check if you want to know the status/health of the server.
 
* aptly repos are present and contain packages, e.g.: <code>sudo aptly repo list</code> and <code>sudo aptly repo show --with-packages=true stretch-tools</code>
* the disk is not full, e.g.: <code>df -h /</code>
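The disk check can be wrapped in a small script suitable for a cron job or a manual run (the 90% threshold is an arbitrary choice for illustration):

```shell
# Minimal health-check sketch: warn when root filesystem usage crosses
# a threshold. Usage is taken from df's percent column (GNU coreutils).
threshold=90
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$usage" -lt "$threshold" ]; then
    echo "disk OK (${usage}% used)"
else
    echo "disk nearly full (${usage}% used)"
fi
```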
 
== failover ==
 
We don't have a specific failover mechanism other than building a new VM and re-attaching the cinder volume.
 
Care should be taken not to lose the aptly repo data, since regenerating it from scratch can take some time.
 
= History =
 
This was heavily remodeled when migrating the grid to SGE and to Stretch. Prior to the migration, the services nodes used to host [[Help:Toolforge/Grid#Bigbrother_(Deprecated)|Bigbrother (deprecated)]] and [[Portal:Toolforge/Admin/Webservicemonitor|webservicemonitor (moved to cron servers)]].
 
Again, when migrating from Stretch to Buster, the two-VM approach was dropped in favor of storing the data in a cinder volume; see https://phabricator.wikimedia.org/T278354.


= See also =
* [[Portal:Toolforge/Admin]]
* [[Portal:Toolforge/Admin/Webservice]]

Latest revision as of 15:16, 30 March 2021
