
Nova Resource:Deployment-prep/Dumps/Setup notes


Notes on initial setup of dumps snapshot instance in beta

The existing instance (deployment-snapshot01.deployment-prep.eqiad.wmflabs) was set up so that dumps could be tested with php7 on stretch.

These notes describe both things any new instance would need, as well as things I needed specifically for the given testbed.

Image type

You'll want m1.medium; its 40G of disk gives you enough space for /srv/mediawiki and /srv/dumps on the main 20G volume, with 20G left for the data directory where dumps will be written. Expect to run 4 processes at once when testing "large" wikis; that seems to be acceptable as far as db server load goes.

Puppetmaster

If you are setting up an instance that will use standard puppet manifests, you can use the deployment-prep project puppetmaster. I needed to be able to cherry-pick uncommitted changesets from gerrit onto the puppetmaster without breaking the rest of the cluster in beta. For this reason I set up a standalone puppet master, which needed to be on jessie, since stretch does not support the role::puppetmaster::standalone role. See Help:Standalone_puppetmaster for more details.

Etcd

If you are using the deployment-prep project puppetmaster, you can skip this part. If you are using your own puppetmaster so you can cherry-pick stuff out of gerrit, you'll need to deal with the certificate issue.

Etcd presents the puppet CA cert for the deployment-prep project puppetmaster. Local connections from MediaWiki to etcd are verified via a copy of this cert on your snapshot host, sitting in /etc/ssl/certs/Puppet_Internal_CA.pem. If you use your own puppetmaster, then your copy of that cert will be the one for your puppetmaster instead of the deployment-prep project puppetmaster, and PHP will give you an error like the following:

Fatal error: Uncaught ConfigException: Failed to load configuration from etcd: (curl error: 60) Peer certificate cannot be authenticated with given CA certificates in /srv/mediawiki/php-master/includes/config/EtcdConfig.php:190

You can verify that this is the problem by running curl standalone: curl --trace-ascii - -L https://deployment-etcd-01.deployment-prep.eqiad.wmflabs:2379/v2/stats/leader

You'll be shown the contents of the presented cert (at least the CN), along with the locations of the CA bundle and cert directory on the local host (/etc/ssl/certs/ca-certificates.crt and /etc/ssl/certs). You can run openssl x509 -inform pem -in /etc/ssl/certs/Puppet_Internal_CA.pem -text on the local snapshot host to see that your version of the cert is for your local puppetmaster.
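If you want to see the mismatch concretely, comparing SHA-256 fingerprints works, since two different CAs never share one. This is a sandbox sketch: it generates two throwaway certs standing in for your local puppetmaster's CA and the deployment-prep one, so it's safe to run anywhere; on the real host you'd point the last two commands at /etc/ssl/certs/Puppet_Internal_CA.pem and your fetched copy instead.

```shell
# Sandbox sketch: generate two throwaway CA certs (stand-ins for your local
# puppetmaster's CA and the deployment-prep one) and compare fingerprints.
set -e
work=$(mktemp -d)
for name in local-puppetmaster deployment-prep; do
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj "/CN=Puppet CA: $name" \
        -keyout "$work/$name.key" -out "$work/$name.pem" 2>/dev/null
done
# Different fingerprints = different CAs = curl will reject the peer cert.
openssl x509 -noout -fingerprint -sha256 -in "$work/local-puppetmaster.pem"
openssl x509 -noout -fingerprint -sha256 -in "$work/deployment-prep.pem"
```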

To work around this, grab Puppet_Internal_CA.pem off the deployment-prep project puppetmaster, or from one of the other MediaWiki instances, and stash it in, say, /usr/local/share/ca-certificates/Puppet_Beta_Internal_CA.crt. You can check that the directory is right by looking at where the symlinks for the certs in /etc/ssl/certs point.

Then symlink that file to /etc/ssl/certs/Puppet_Beta_Internal_CA.pem

And finally, run the script that updates the CA bundle: /usr/sbin/update-ca-certificates

This will add the new cert to the bundle. If you later go back to using the deployment-prep project puppetmaster, the extra cert left in there for your old local puppetmaster won't cause any problems.

Test with curl; it should just work.
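The whole workaround can be rehearsed in a throwaway directory first, with a self-signed demo cert standing in for the real Puppet_Internal_CA.pem and a plain cat standing in for update-ca-certificates (whose relevant effect is concatenating trusted certs into the bundle). Nothing here touches the real /etc/ssl/certs:

```shell
# Rehearsal of the CA workaround in a temp dir. On the real host:
#   $work/share -> /usr/local/share/ca-certificates
#   $work/etc   -> /etc/ssl/certs
#   cat         -> /usr/sbin/update-ca-certificates
set -e
work=$(mktemp -d)
mkdir -p "$work/share" "$work/etc"

# Stand-in for the Puppet_Internal_CA.pem fetched off the deployment-prep
# puppetmaster; a self-signed demo cert here.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Puppet CA: demo-puppetmaster" \
    -keyout "$work/ca.key" -out "$work/share/Puppet_Beta_Internal_CA.crt" 2>/dev/null

# Symlink into the certs dir, as described above.
ln -s "$work/share/Puppet_Beta_Internal_CA.crt" "$work/etc/Puppet_Beta_Internal_CA.pem"

# update-ca-certificates, in miniature: rebuild the bundle from trusted certs.
cat "$work/etc/"*.pem > "$work/etc/ca-certificates.crt"

# The new CA is now in the bundle curl consults.
openssl x509 -in "$work/etc/ca-certificates.crt" -noout -subject
```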

Hiera config

Snapshot hosts need a special hiera setup for labs. You should add these values before applying any classes.


dumps_datadir_mount_type: labslvm
dumps_managed_subdirs:
- /mnt/dumpsdata/xmldatadumps
- /mnt/dumpsdata/xmldatadumps/public
- /mnt/dumpsdata/xmldatadumps/private
- /mnt/dumpsdata/xmldatadumps/temp
dumps_nfs_server: ''
mediawiki_php7: true
puppetmaster: deployment-dumps-puppetmaster.deployment-prep.eqiad.wmflabs
snapshot::dumps::php: /usr/bin/php7.0
snapshot::dumps::runtype: regular

Here's what they do, in brief.

  • dumps_datadir_mount_type - what sort of data directory you want. In this case, you want a labs LVM volume created. It will use that extra 20G from your image.
  • dumps_managed_subdirs - these are directories that, in production, would be created on the NFS server that provides the data dir. Since we have a local data dir, we create these ourselves.
  • dumps_nfs_server - no nfs server provides the data dir. Don't try to mount from anywhere.
  • mediawiki_php7 - stretch only: use php7. Install all the required packages, set up the php config files.
  • puppetmaster - point it to your local puppetmaster if you're not using the deployment-prep project puppetmaster.
  • snapshot::dumps::php - stretch only: use php7. On other distros, put hhvm or php5 here instead, whichever you want.
  • snapshot::dumps::runtype - the type of dumps we will run here. Irrelevant for dumps in beta, since we don't run them out of cron with two boxes doing 27 parallel jobs for enwiki or wikidatawiki, but set the value anyway so puppet doesn't whine.
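For the curious, what puppet does with dumps_managed_subdirs amounts to this (run against a temp dir here so it's safe; on the instance the data dir is /mnt/dumpsdata):

```shell
# The dumps_managed_subdirs from the hiera above, created by hand the way
# puppet would. DATADIR is a temp dir here; on the instance it is /mnt/dumpsdata.
set -e
DATADIR=$(mktemp -d)
for d in xmldatadumps xmldatadumps/public xmldatadumps/private xmldatadumps/temp; do
    mkdir -p "$DATADIR/$d"
done
ls "$DATADIR/xmldatadumps"
```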

Scap of dumps on the beta deployment host

The dumps repo on deployment-tin was left over from trebuchet. I needed to do a bunch of setup and cleanup. You won't have to. Here's a description of it for the record...(coming soon)

Classes

Every mediawiki instance, including dumps snapshot hosts, needs to have role::beta::mediawiki applied. Do this first via horizon.

Scap to the instance should get set up next. Add the host to ... (coming soon). Then add role::dumps::generation::worker::beta_testbed to get MediaWiki and dumps on there. Run puppet a few times until things settle down.