Dumps/XML-SQL Dumps


We want mirrors! For more information see Dumps/Mirror status.

Docs for end-users of the xml/sql dumps can be found on meta. If you're a Toolforge user and want to use the dumps, check out Help:Shared storage for information on where to find the files.

Current Info

  • For current dumps issues, see the Dumps-generation project in Phabricator (https://phabricator.wikimedia.org/project/sprint/board/1519/).
  • For current redesign plans and discussion, see Dumps/Dumps 2.0: Redesign.

Older Info

  • For information about the initiative to upload these dumps to the Internet Archive, see the Nova Resource:Dumps project.
  • For historical information about the dumps, see Dumps/History.

Hodge Podge

For a list of various information sources about the dumps, see Dumps/Other information sources.


The following info is for folks who hack on, maintain and administer the dumps and the dump servers.

Setup

Current architecture

Rather than bore you with that here, see Dumps/Current Architecture.

Current hosts

To see which hosts serve the data, see Dumps/Dump servers. For the hosts that generate the dumps, see Dumps/Snapshot hosts. For the hosts that provide space via NFS for the dumps as they are generated, see Dumps/Dumpsdata hosts.

Adding a new snapshot host

Install the host and add it to site.pp in the snapshot stanza (see snapshot1005-9). Then add the relevant hiera entries, documented in site.pp, according to whether the server will run the enwiki or wikidatawiki xml/sql dumps (only one server should do so for each of these huge wikis) or the misc cron jobs (exactly one host should run those, and it should not also run xml/sql dumps).
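As a minimal sketch, assuming you have a checkout of the operations/puppet repo, you can locate the existing stanzas and per-host hiera files to model the new host on (the grep patterns here are just examples):

# from the root of an operations/puppet checkout
grep -n 'snapshot10' manifests/site.pp       # existing snapshot stanzas to copy from
ls hieradata/hosts/ | grep -i snapshot       # per-host hiera files for the current snapshot hosts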

Dumps run out of /srv/deployment/dumps/dumps/xmldumps-backup on each server. Deployment is done via scap3 from the deployment server.

Starting dump runs

  1. Do nothing. These jobs run out of cron.
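If you want to confirm the jobs are actually scheduled, a quick check on a snapshot host might look like this (the dumpsgen user name and the cron locations are assumptions; adjust to whatever puppet actually installs):

# on a snapshot host: look for the dump-related cron entries
sudo crontab -u dumpsgen -l          # user name is an assumption
grep -ri dump /etc/cron.d/ 2>/dev/null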

Troubleshooting

Fixing code

The Python dump scripts are all in the operations/dumps.git repo, branch 'master'. Various supporting scripts that are not part of the dumps proper are in puppet; you can find those in the snapshot module.

The Python dump scripts rely on a number of C utilities for manipulating MediaWiki XML files and/or bzip2-compressed files. These can be found in the operations/dumps/mwbzutils repo.
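For read-only copies (no commit access needed), anonymous https clones of both repos should work along these lines:

git clone https://gerrit.wikimedia.org/r/operations/dumps
git clone https://gerrit.wikimedia.org/r/operations/dumps/mwbzutils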

Getting a copy of the python scripts as a committer:

git clone ssh://<user>@gerrit.wikimedia.org:29418/operations/dumps.git
cd dumps
git checkout master

To deploy the updated code, ssh to the deployment host, then:

  1. cd /srv/deployment/dumps/dumps
  2. git pull
  3. scap deploy

Note: you likely need to be in the ops ldap group to do the scap. Also note that pushed changes will not take effect until the next dump run; any run already in progress completes with the existing dump code.

Fixing configuration files

Configuration file setup is handled in the snapshot puppet module. You can check the config files themselves at /etc/dumps/confs on any snapshot host.
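To eyeball the active configuration on a snapshot host, something like the following should do (the exact file names under /etc/dumps/confs vary; the name below is only an example, pick one from the listing):

ls -l /etc/dumps/confs
less /etc/dumps/confs/wikidump.conf.dumps    # file name is an example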

Out of space

See Dumps/Dumpsdata hosts#Space issues if we are running out of space on the hosts where the dumps are written as generated.

See Dumps/Dump servers#Space issues if we are running out of space on the dumps web or rsync servers.

Broken dumps

The dumps can break in a few interesting ways.

  1. They no longer appear to be running. Is the monitor running? See below. If it is running, perhaps all the workers are stuck on a stage waiting for a previous stage that failed.
     • Shoot them all and let the cron job sort it out. You can also look at the error notifications section and see if anything turns up; fix the underlying problem and wait for cron.
  2. A dump for a particular wiki has been aborted. This may be due to me shooting the script because it was behaving badly, or because a host was powercycled in the middle of a run.
     • The next cron job should fix this up.
  3. A dump on a particular wiki has failed.
     • Check the information on error notifications, track down the underlying issue (db outage? MW deploy of bad code? Other?), fix it, and wait for cron to rerun it.
  4. A dump has hung on some step, the processes in the pipeline apparently reading/writing and yet no output being produced.
     • We get email notifications to ops-dumps@wikimedia.org if there is a lockfile for a wiki and no file updated within the last 4 hours. These must be investigated on a case by case basis; a sketch of a quick check follows this list.
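A rough sketch for checking on a stuck run from a snapshot host (the four-hour threshold comes from the behaviour described above; the worker script name and the lockfile naming are assumptions):

# are any dump worker processes still alive?
ps -ef | grep '[w]orker'
# lockfiles with no recent update (older than 4 hours) suggest a hung or dead run
find /mnt/data/xmldatadumps/private -maxdepth 2 -name 'lock*' -mmin +240 -ls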

Error notifications

Email is ordinarily sent if a dump does not complete successfully; it goes to ops-dumps@wikimedia.org, which is an alias. If you want to follow and fix failures, add yourself to that alias.

Logs are kept of each run. From any snapshot host, you can find the per-run log at /mnt/data/xmldatadumps/private/<wikiname>/<date>/dumplog.txt. From these you may glean more reasons for the failure.

Logs that capture the rest are available in /var/log/dumps/ and may also contain clues.
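A couple of sketch commands for digging through those logs (paths as given above; the error patterns are just a starting point):

# per-wiki, per-run log
grep -iE 'error|fail|exception' /mnt/data/xmldatadumps/private/<wikiname>/<date>/dumplog.txt
# most recently written general logs
ls -lt /var/log/dumps/ | head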

When one or more steps of a dump fail, the index.html file for that dump includes a notation of the failure and sometimes more information about it. Note that one step of a dump failing does not prevent other steps from running unless they depend on the data from that failed step as input.

Monitoring is broken

If the monitor does not appear to be running (the index.html file showing the dumps status is never updated), check which host should have it running (look for the host with profile::dumps::generation::worker::monitor in its role; at this writing, snapshot1007). The service should be restarted automatically by systemd or upstart, depending on the OS version, so if it stays down you'll want to see what change broke it.
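On the monitor host, a first look might be the following (the unit name dumps-monitor is an assumption; check the snapshot puppet module for the real one):

systemctl status dumps-monitor
journalctl -u dumps-monitor --since '2 hours ago'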

Rerunning dumps

You really really don't want to do this. These jobs run out of cron. All by themselves. Trust me. Once the underlying problem (bad MW code, unhappy db server, out of space, etc) is fixed, it will get taken care of.

Okay, you don't trust me, or something's really broken. See Dumps/Rerunning a job if you absolutely have to rerun a wiki/job.

A dump server (snapshot host) dies

If it can be brought back up within a day, don't bother to take any measures; just get the box back in service. If there are deployments scheduled in the meantime, you may want to remove it from the scap targets for mediawiki: edit hieradata/common/scap/dsh.yaml for that.

If it's the testbed host (check the role in site.pp), just leave everything alone; no services will be impacted.

If it will take more than a day to be fixed, swap it for the testbed/canary box and remove it from scap targets for mediawiki (a sketch of the corresponding commands follows this list):

  • open manifests/site.pp and find the stanza for the broken snapshot host; note its role
  • now look for the snapshot host with role(dumps::generation::worker::testbed), and put the broken host's role there
  • in hieradata/hosts/, git mv the broken host's file to that testbed hostname, if there is such a file
  • edit hieradata/common/scap/dsh.yaml to remove the broken host as a mediawiki scap target
  • merge all the things
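A sketch of the corresponding commands in an operations/puppet checkout, using made-up hostnames (snapshot1008 as the broken host, snapshot1006 as the testbed):

# move the broken host's per-host hiera file to the testbed hostname, if it exists
git mv hieradata/hosts/snapshot1008.yaml hieradata/hosts/snapshot1006.yaml
# drop the broken host from the mediawiki scap targets
sed -i '/snapshot1008/d' hieradata/common/scap/dsh.yaml
# edit manifests/site.pp by hand to swap the roles, then commit and merge
git commit -a -m "swap snapshot1008 (broken) with testbed snapshot1006"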

A dumpsdata host dies

Coming soon...

A labstore host dies (web or nfs server for dumps)

These are managed by Wikimedia Cloud Services. Should this situation arise, someone on that team should carry out the procedure below.

At the time of this writing there are two labstore boxes that we care about: one serves web traffic to the public plus NFS to the stats hosts; the other serves NFS to Cloud VPS instances/Toolforge.

  • Determine which box went down. You can look at hieradata/common.yaml and the values for dumps_dist_active_web, dumps_dist_nfs_servers, and dumps_dist_active_vps for this (see the sketch after this procedure).
  • Remove the host from dumps_dist_nfs_servers.
  • Change dumps_dist_active_vps to the other server, if the dead server was the vps NFS server.
  • Change dumps_dist_active_web to the other server, if the dead server was NOT the vps NFS server (this means it was the stats NFS server, which is all that this setting controls).
  • Forcibly unmount the NFS mount for the dead host everywhere you can in Toolforge. Try Cumin first; if that fails, try clush for Toolforge. See #Notes on NFS issues and ToolForge load for more about this.
    • Hint: If using clush under pressure, try:
      clush -w @all 'sudo umount -fl /mnt/nfs/dumps-[FQDN of down server]'
      on tools-clushmaster-02.tools.eqiad.wmflabs
  • If the dead server was the web server:
    • The certificate should be active on both hosts, so that shouldn't be a problem thanks to the acme_chief module, but you still need to change the do_acme hiera value on each host after you change DNS.
      • By all means still check!! From a shell as your user account on the host you can run
        echo | openssl s_client -showcerts  -connect localhost:443 2>/dev/null | openssl x509 -inform pem -noout -text
        on both servers to see the certs. Check the "Validity" section.
    • Change the 'dumps' entry in the DNS configuration, and deploy the change according to https://wikitech.wikimedia.org/wiki/DNS#authdns-update
    • Once that change has had some time to propagate (check the TTL), test that the newly active web server successfully picked up a cert (checking https://dumps.wikimedia.org should work). Trying puppet runs on the working server might be helpful here.
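As referenced above, a quick way to inspect the relevant hiera values in an operations/puppet checkout (the hostnames that come back are whatever is currently configured; edit them as described in the procedure):

# show the dumps distribution settings in common hiera
grep -nE 'dumps_dist_(active_web|active_vps|nfs_servers)' hieradata/common.yaml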

Notes on NFS issues and ToolForge load

Both hosts' NFS filesystems are mounted on all hosts that use either server for NFS, and the clients determine which nfs filesystem to use based on a symlink that varies from cluster to cluster. The dumps_dist_active_web setting only affects the symlink to the NFS filesystem on the stats hosts. Likewise, the dumps_dist_active_vps only affects the symlink to NFS filesystem on the VPSes (including Toolforge).
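For example, on a Toolforge host you can check which server the symlink currently resolves to (the /public/dumps path is an assumption based on the Toolforge shared-storage layout; the mount naming follows the umount example above):

# which NFS server does the dumps symlink resolve to?
readlink -f /public/dumps/public
# which dumps NFS shares are currently mounted?
mount | grep 'dumps-'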

If the server is the vps NFS server (the value of dumps_dist_active_vps), Toolforge is probably losing its mind by now. The best that can be done is to remove it from dumps_dist_nfs_servers, change dumps_dist_active_vps to the working server, and unmount that NFS share everywhere you possibly can. The earlier this is done, the better. Load will be climbing like mad on any Cloud VPS server, including Toolforge nodes, the entire time. This may or may not stop once you have unmounted everything.