
MariaDB/misc


There are four "miscellaneous" shards: m1, m2, m3 and m5.

  • m1: Basic ops utilities
  • m2: otrs, recommendations api and others
  • m3: phabricator and other older task systems
  • m5: wikitech, openstack and other cloud-related dbs

During the last cleanup, many unused databases were archived and/or deleted, and a contact person was identified for each of them.

Section descriptions

m1

Current schemas

These are the current dbs, and what was needed to fail them over:

  • bacula9: We make sure there is no backup running at the time, to avoid backup failures. Currently we stop bacula-dir (which may require disabling puppet so it does not automatically restart the daemon) to make sure no new backups start and potentially fail; temporarily stopping the director should not have any user impact. If backups are running, stopping the daemon will cancel the ongoing jobs. Consider rescheduling them (run) if they are important and time-sensitive; otherwise they will be scheduled again automatically at a later time, following the configuration. A sketch of this follows the list.
  • bacula (old, unused bacula db): Nothing
  • dbbackups: Database backups metadata; on master failover it needs a manual update, as it doesn't use the proxy.
  • etherpadlite: etherpad-lite seems to error out and terminate after the migration. Normally systemd takes care of it and restarts it instantly. However, if the maintenance window takes long enough, systemd will give up and stop trying to restart it, in which case a systemctl restart etherpad-lite will be required. etherpad crashes at least once a week anyway, if not more often, so no big deal. Tested by opening a pad.
  • heartbeat: needs "manual migration": change the master role in puppet.
  • librenms: required a manual kill of its connections; apache reload on netmon1001 (the connection-kill pattern is sketched after this list).
  • puppet: required a manual kill of its connections; this caused the most puppet spam. Either restart the puppetmasters or kill the connections as soon as the failover happens.
  • racktables: went fine, no problems
  • rddmarc: ?
  • rt: required a manual kill of its connections; apache reload on unobtinium.
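
A minimal sketch of the bacula handling described above, assuming the stock puppet agent CLI and a systemd unit for the director (the exact unit name, bacula-dir vs. bacula-director, depends on the packaging on the backup host):

sudo puppet agent --disable "m1 failover"      # keep puppet from restarting the director behind our back
sudo systemctl stop bacula-director.service    # any running jobs are cancelled at this point
# ... perform the database failover ...
sudo systemctl start bacula-director.service
sudo puppet agent --enable
# time-sensitive jobs that got cancelled can be rescheduled from bconsole with "run"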
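
For the services that held on to stale connections (librenms, puppet, rt) or got wedged (etherpad-lite), the pattern was roughly the following. This is a hedged sketch: the user name and connection id are placeholders, and reset-failed is only needed if systemd already hit its restart limit.

sudo mysql -e "SELECT id, user, host, db, time FROM information_schema.processlist WHERE user = 'librenms'"   # on the old master
sudo mysql -e "KILL 12345"        # repeat for every listed connection id
sudo systemctl reload apache2     # on the client host (librenms, rt)
sudo systemctl reset-failed etherpad-lite
sudo systemctl restart etherpad-lite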

Deleted/archived schemas

  • blog: to archive
  • bugzilla: to archive, then kill. Archived and dropped.
  • bugzilla3: same as above. Archived and dropped.
  • bugzilla4: same, archive. We also have this on dumps.wm.org (https://dumps.wikimedia.org/other/bugzilla/), but that is the sanitized version, so keep this archive just in case.
  • bugzilla_testing: same as above. Archived and dropped.
  • communicate: ? Archived and dropped.
  • communicate_civicrm: not fundraising! We are not sure what this is; we can check the users table to determine who administered it. Archived and dropped.
  • dashboard_production: Puppet dashboard db. Never used it in my 3 years here, product sucks. Kill with fire. - alex. Archived and dropped.
  • outreach_civicrm: not fundraising; this is the contacts.wm thing, not used anymore. In turn that means we don't know what "communicate" is; we can look at the users tables for info on the admin. Archived and dropped.
  • outreach_drupal: kill. Archived and dropped.
  • percona: jynus. Dropped.
  • query_digests: jynus. Archived and dropped.
  • test: archived and dropped
  • test_drupal: er, kill with fire? Archived and dropped.

Owners (or in many cases just people who volunteered to help with the failover)

  • bacula9, bacula: Jaime
  • dbbackups: Jaime
  • etherpadlite: Alex. Killed idle db connection.
  • heartbeat: will be handled as part of the failover process by DBAs
  • librenms: Arzhel. Killed idle db connection.
  • puppet: Alex
  • racktables: jmm
  • rt: Daniel; Alex can help. Restarted apache2 on ununpentium to reset connections.

m2

Current schemas

These are the current dbs, and what was needed to fail them over:

  • otrs: Normally requires a restart of otrs-daemon and apache on mendelevium (a sketch follows this list). People: akosiaris
  • debmonitor: Normally nothing is required. People: volans, moritz
    • Django smoothly fails over without any manual intervention.
    • At most check sudo tail -F /srv/log/debmonitor/main.log on the active Debmonitor host (debmonitor1001 as of Jul. 2019).
      • Some failed writes logged with HTTP/1.1 500 and a stacktrace like django.db.utils.OperationalError: (1290, 'The MariaDB server is running with the --read-only option so it cannot execute this statement') are expected, followed by the resume of normal operations with most write operations logged as HTTP/1.1 201.
    • In case of issues it's safe to try a restart performing: sudo systemctl restart uwsgi-debmonitor.service
  • heartbeat: Nothing required
  • xhgui: performance team
  • recommendationapi: Normally requires a restart on scb. People: akosiaris
  • iegreview: Shared nothing PHP application; should "just work". People: bd808, Niharika
  • scholarships: Shared nothing PHP application; should "just work". People: bd808, Niharika
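
A hedged sketch of the restarts mentioned above for otrs and recommendationapi. The systemd unit names below are assumptions, so verify them on mendelevium and the scb hosts (e.g. with systemctl list-units) before running anything:

sudo systemctl restart otrs-daemon.service            # on mendelevium; unit name assumed
sudo systemctl restart apache2.service                # on mendelevium
sudo systemctl restart recommendation_api.service     # on each scb host; unit name assumed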

The dbproxies will need a reload (systemctl reload haproxy && echo "show stat" | socat /run/haproxy/haproxy.sock stdio). You can check which proxy is the active one with:

host m2-master.eqiad.wmnet

The passive proxy can be found by running grep -iR m2 hieradata/hosts/* in the puppet repo.
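
Putting the checks and the reload together, a minimal worked example (the column selection at the end is just a convenience to show proxy name, server name and status from the haproxy stats CSV):

host m2-master.eqiad.wmnet        # resolves to the currently active dbproxy
grep -iR m2 hieradata/hosts/*     # run in the puppet repo; also reveals the passive proxy
# then, on each affected dbproxy:
sudo systemctl reload haproxy
echo "show stat" | sudo socat /run/haproxy/haproxy.sock stdio | cut -d, -f1,2,18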

Deleted/archived schemas

  • testotrs: alex: kill it with ice and fire
  • testblog: archive it like blog
  • bugzilla_testing: archive it with the rest of bugzillas
  • reviewdb + reviewdb-test (deprecated, scheduled to be deleted): Gerrit; normally needs a restart on gerrit1001 just in case. People: akosiaris, hashar

Owners (or in many cases just people who volunteered to help with the failover)

  • reviewdb/reviewdb-test: Daniel, Chad, Akosiaris on SRE side
  • otrs: Akosiaris
  • heartbeat: DBA
  • debmonitor: volans, moritzm
  • recommendationapi: bmansurov, #Research on Phabricator. Akosiaris on SRE side
  • xhgui: performance team

m3

Current schemas

  • phabricator_*: 57 schemas to support phabricator itself
  • rt_migration: schema needed for some crons related to phabricator jobs
  • bugzilla_migration: schema needed for some crons related to phabricator jobs

Dropped schemas

  • fab_migration

m5

Current schemas

Example Failover process

See https://wikitech.wikimedia.org/wiki/MariaDB#Misc_section_failover_checklist_(example_with_m2)