Switch Datacenter
Introduction
A datacenter switchover (from eqiad to codfw, or vice versa) comprises switching over multiple components, some of which can happen independently and many of which need to happen in lockstep. This page documents all the steps needed to switch over from the master datacenter to the other one, broken up by component. SRE Service Operations maintains the process and software necessary to run the switchover.
Weeks in advance preparation
- 10 weeks before: Coordinate dates and communication plan with involved groups: Switch_Datacenter/Coordination
- 3 weeks before: Run a "live test" of the cookbooks by "switching" from the passive DC to the active DC. The --live-test flag will skip actions that could harm the active DC, or run them against the passive DC instead. This exercises most of the code paths used in the switchover and helps identify issues. The process will !log to SAL, so coordinate with others; otherwise it should be non-disruptive. Because of changes since the last switchover, you can expect code changes to become necessary, so take the time and assistance needed into account. A sketch of an invocation appears below.
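For illustration, a minimal sketch of a live-test invocation, assuming the numbered steps under cookbooks/sre/switchdc/mediawiki/ are run as cookbooks from a cluster management host and take the source and destination datacenters as positional arguments (the argument order is an assumption):
# "Switch" from the passive DC (codfw) to the already-active DC (eqiad);
# --live-test skips actions that could harm the active DC.
$ sudo cookbook sre.switchdc.mediawiki.00-disable-puppet --live-test codfw eqiad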
Per-service switchover instructions
MediaWiki
We divide the process into logical phases that should be executed sequentially. Within any phase, top-level tasks can be executed in parallel with each other, while subtasks must be executed sequentially. The phase number is referenced in the names of the tasks in the operations/cookbooks repository, under the cookbooks/sre/switchdc/mediawiki/ path.
Days in advance preparation
- OPTIONAL: SKIP IN AN EMERGENCY: Make sure the databases are in a good state. Normally this requires no action, as the passive datacenter's databases are always prepared to receive traffic. Some sanity checks the #DBAs should run to ensure the most optimal state possible:
- There is no ongoing long-running maintenance that affects database availability or lag (schema changes, upgrades, hardware issues, etc.). Depool those servers that are not ready.
- Replication is flowing from eqiad -> codfw and from codfw -> eqiad (replication is usually stopped in the passive -> active direction to facilitate maintenance)
- All database servers have their buffer pools filled up. This is handled automatically by the buffer pool warmup functionality. As a sanity check, some sample load could be sent to the MediaWiki application servers to verify that requests complete as quickly as in the active datacenter.
- These were the things we prepared/checked for the 2018 switch
- Make absolutely sure that parsercache replication is working from the active to the passive DC; this is important.
- Check the appserver weights on servers in the passive DC; make sure that newer hardware is weighted higher (usually 30) and older hardware lower (usually 25).
Phase 0 - preparation
- Disable puppet on maintenance hosts in both eqiad and codfw: 00-disable-puppet.py
- Reduce the TTL on appservers-ro, appservers-rw, api-ro, api-rw, jobrunner, videoscaler, parsoid-php to 10 seconds: 00-reduce-ttl.py. The warmup step (below) typically takes longer than the old TTL (5 minutes), so this step doesn't wait for it to expire. Make sure that at least 5 minutes have passed before moving to Phase 1; the cookbook should force you to wait.
- Warm up APC by running the mediawiki-cache-warmup against the new site's clusters. The warmup queries repeat automatically until the response times stabilize: 00-warmup-caches.py
- The global "urls-cluster" warmup against the appservers cluster
- The "urls-server" warmup against all hosts in the appservers cluster.
- The "urls-server" warmup against all hosts in the api-appservers cluster.
- Set downtime for the read-only checks on the mariadb masters changed in Phase 3, so they don't page. This is not covered by the switchdc script; a sketch of this and the TTL step appears after this list.
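As a hedged illustration of the TTL and downtime steps above, using confctl and the sre.hosts.downtime cookbook (selectors, host names, and flags are assumptions, not verified commands):
# Reduce the discovery TTL to 10 seconds for one of the records listed above
# (this is what 00-reduce-ttl.py automates across all of them).
$ confctl --object-type discovery select 'dnsdisc=appservers-rw' set/ttl=10
# Downtime the read-only check on a mariadb master so Phase 3 doesn't page;
# the host name here is illustrative.
$ sudo cookbook sre.hosts.downtime --hours 2 -r "DC switchover" 'db1XXX.eqiad.wmnet'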
Phase 1 - stop maintenance
- Stop maintenance jobs in both datacenters and kill all the periodic jobs (systemd timers and cronjobs) on all maintenance host in both datacenters: 01-stop-maintenance.py
Phase 2 - read-only mode
- Go to read-only mode by changing the ReadOnly conftool value: 02-set-readonly.py
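A sketch of the underlying conftool change performed by 02-set-readonly.py, assuming ReadOnly is an mwconfig object scoped per datacenter (the object type and selector are assumptions):
# Set MediaWiki read-only in the currently active DC (eqiad in this example).
$ confctl --object-type mwconfig select 'name=ReadOnly,scope=eqiad' set/val=true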
Phase 3 - lock down database masters
- Put the old site's core DB masters (shards: s1-s8, x1, es4-es5) in read-only mode and wait for the new site's databases to catch up on replication: 03-set-db-readonly.py
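Conceptually, this is the standard MariaDB read-only flip plus a replication catch-up check on each shard; a minimal sketch of what 03-set-db-readonly.py does (the cookbook may compare heartbeat or binlog positions instead of this status field):
# On each old-site core master: refuse new writes.
$ sudo mysql -e "SET GLOBAL read_only = 1;"
# On each new-site master, wait until replication has caught up before Phase 4.
$ sudo mysql -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master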
Phase 4 - switch active datacenter configuration
- Switch the discovery records and MediaWiki active datacenter: 04-switch-mediawiki.py
- Flip appservers-ro, appservers-rw, api-ro, api-rw, jobrunner, videoscaler, parsoid-php to pooled=true in the new site. Since both sites are now pooled in etcd, this will not actually change the DNS records for the active datacenter.
- Flip WMFMasterDatacenter from the old site to the new.
- Flip appservers-ro, appservers-rw, api-ro, api-rw, jobrunner, videoscaler, parsoid-php to pooled=false in the old site. After this, DNS will be changed for the old DC and internal applications (except MediaWiki) will start hitting the new DC. A confctl sketch follows below.
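A hedged sketch of the equivalent confctl operations driven by 04-switch-mediawiki.py, for an eqiad->codfw switch (the selectors and object scopes are assumptions):
# Pool the new site (codfw) for each discovery record; both sites now pooled.
$ confctl --object-type discovery select 'dnsdisc=(appservers-ro|appservers-rw|api-ro|api-rw|jobrunner|videoscaler|parsoid-php),name=codfw' set/pooled=true
# Move the MediaWiki master datacenter.
$ confctl --object-type mwconfig select 'name=WMFMasterDatacenter,scope=common' set/val=codfw
# Depool the old site (eqiad); DNS for these records now points at codfw.
$ confctl --object-type discovery select 'dnsdisc=(appservers-ro|appservers-rw|api-ro|api-rw|jobrunner|videoscaler|parsoid-php),name=eqiad' set/pooled=false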
Phase 5 - Invert Redis replication for MediaWiki sessions
- Invert the Redis replication for the sessions cluster: 05-invert-redis-sessions.py
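Conceptually this is a standard Redis replication flip on each sessions shard; a minimal sketch of what 05-invert-redis-sessions.py performs (host names are illustrative, authentication is omitted, and older Redis uses SLAVEOF instead of REPLICAOF):
# Promote the new site's instance to master...
$ redis-cli -h sessions-codfw.example REPLICAOF NO ONE
# ...then make the old site's instance replicate from it.
$ redis-cli -h sessions-eqiad.example REPLICAOF sessions-codfw.example 6379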
Phase 6 - Set new site's databases to read-write
- Set the new site's core DB masters (shards: s1-s8, x1, es4-es5) to read-write mode: 06-set-db-readwrite.py
Phase 7 - Set MediaWiki to read-write
- Go to read-write mode by changing the ReadOnly conftool value: 07-set-readwrite.py
Phase 8 - post read-only
- Restart Envoy on the jobrunners that are now inactive, to trigger changeprop to re-resolve the DNS name and connect to the new DC: 08-restart-envoy-on-jobrunners.py (a sketch of this step appears after this list).
- A steady rate of 500s is expected until this step is completed, because changeprop will still be sending edits to jobrunners in the old DC, where the database master will reject them.
- Start maintenance in the new DC: 08-start-maintenance.py
- Run puppet on the maintenance hosts, which will reactivate systemd timers in both datacenters and add cronjobs in the primary DC
- Most Wikidata-editing bots will restart once this is done and the "dispatch lag" has recovered. This should bring us back to 100% of editing traffic.
- Update tendril for new database masters: 08-update-tendril.py
- Purely cosmetic change, no effect on production. No changes are required for the zarcillo database (which has a different master for eqiad and codfw).
- The parsercache hosts and x2 will need to be manually updated in tendril; see T266723. This is not covered by the switchdc script.
- Set the TTL for the DNS records to 300 seconds again: 08-restore-ttl.py
- Update the DNS records for the new database masters, deploying eqiad->codfw; codfw->eqiad. This is not covered by the switchdc script. Please use the following SAL log entry:
!log Phase 8.5 Update DNS records for new database masters
- Run Puppet on the database masters in both DCs, to update expected read-only state: 08-run-puppet-on-db-masters.py.
- Make sure the CentralNotice banner informing users of read-only mode is removed. Keep in mind there is some minor HTTP caching involved (~5 minutes).
- Remove the downtime added in phase 0.
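As referenced above, a hedged sketch of the Envoy restart step, using cumin (the host alias and service name are assumptions):
# Restart Envoy on the now-inactive jobrunners so changeprop re-resolves the
# DNS name and connects to the new DC (what 08-restart-envoy-on-jobrunners.py automates).
$ sudo cumin 'A:mw-jobrunner and A:eqiad' 'systemctl restart envoyproxy.service'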
Phase 9 - verification and troubleshooting
This is not covered by the switchdc script.
- Make sure reading & editing works! :)
- Make sure recent changes are flowing (see Special:RecentChanges, EventStreams, and the IRC feeds)
- Make sure email works (exim4 -bp on mx1001/mx2001, test an email)
Dashboards
- Apache/FCGI
- App servers
- ATS cluster view (text)
- ATS backends<->Origin servers overview (appservers, api, restbase)
- MediaWiki errors
- Database Errors logs
Media storage/Swift
Switchover
Note: This information is outdated. swiftrepl will now only run in the active datacenter, automatically: T285373. (This section will be removed post-switchover.)
Cache-wise, Swift is active-active. However, the background synchronization job swiftrepl needs to run in what MediaWiki considers the primary datacenter. A patch such as 622522 accomplishes this, and needs to be merged after MediaWiki has been switched over (up to e.g. 24-48 hours later is acceptable).
Switching back
Revert 622522 to move swiftrepl back to eqiad.
Dashboards
- Swift eqiad
- Swift codfw
- Thumbor
- ATS cluster view (upload)
- ATS backends<->Origin servers overview (swift)
ElasticSearch
General context on how to switchover
CirrusSearch talks by default to the local datacenter ($wmfDatacenter). If MediaWiki switches datacenter, Elasticsearch will automatically follow.
Manually switching CirrusSearch to a specific datacenter can always be done: point CirrusSearch to codfw by editing wmgCirrusSearchDefaultCluster in InitialiseSettings.php.
To ensure coherence in case of lost updates, a reindex of the pages modified during the switch can be done by following Recovering from an Elasticsearch outage / interruption in updates.
Preserving more_like query cache performance
CirrusSearch has a caching layer that caches the result of Elasticsearch queries such as "more like this" queries (which are used, among other things, to generate "Related Articles" at the bottom of mobile Wikipedia pages).
Switching datacenters will result in degraded performance while the cache fills back up.
In order to avoid the aforementioned performance degradation, a mitigation should be deployed that hardcodes more_like queries to keep routing to the "old" datacenter for 24 hours following the switchover.
Hardcoding the cirrus cluster allows the stampede of cache misses to be sent to the secondary search cluster, which has enough capacity to serve the increased load once typical traffic has migrated to the new datacenter.
This hardcoding should be deployed in advance of the switchover. Since it is effectively a no-op until the actual cutover, it can be deployed as far in advance as desired.
For example, if we are switching over from eqiad to codfw, more_like queries should be hardcoded to route to eqiad; this change should be deployed before the actual cutover. Then, 24 hours after the cutover, the hardcoding can be removed, allowing more_like queries to route to the new cirrus DC, in this example codfw.
Days in advance preparation
Deploy a patch to hardcode more_like query routing to the currently active DC (i.e. the datacenter we are switching over from).
Example Patch: [mediawiki-config] 635411 cirrus: Temporarily hardcode more_like query routing
This mitigation should be left in place for 24 hours following the switchover (equivalent to the cache length), at which point there is no longer any performance penalty to removing the hardcoding.
One day after datacenter switch
Revert the earlier patch to hardcode more_like query routing; this will allow these queries to route to the newly active DC, and there will not be any performance degradation since the caches have been fully populated by this point.
Dashboards
Traffic
General information on generic procedures
Switchover
GeoDNS (User-facing) Routing:
- gerrit: C+2 and Submit commit https://gerrit.wikimedia.org/r/#/c/operations/dns/+/458806
- <any authdns node>: authdns-update
- Log to SAL using the following:
!log Traffic: depool eqiad from user traffic
Switchback
Same procedure as above, but reverting the commit specified in GeoDNS.
Dashboards
Services
All services are active-active in DNS discovery, apart from restbase, which needs special treatment. The procedure to fail over to a single site is the same for every one of them:
- reduce the TTL of the DNS discovery records to 10 seconds
- depool the datacenter we're moving away from in confctl / discovery
- restore the original TTL
- All of the above is done using the sre.switchdc.services cookbooks:
# Switch the service "parsoid" to codfw-only
$ cookbook sre.switchdc.services --services parsoid -- eqiad codfw
# Switch all active-active services to codfw, excluding parsoid and cxserver
$ cookbook sre.switchdc.services --exclude parsoid cxserver -- eqiad codfw
Restbase is a bit of a special case and needs an additional step, if we're just switching active traffic over and not simulating a complete failover:
- pool restbase-async everywhere, then depool restbase-async in the newly active DC, so that async traffic is separated from real-user traffic as much as possible (see the sketch below).
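A hedged confctl sketch of that restbase-async step, for a switch where codfw becomes the active DC (the selectors are assumptions):
# Pool restbase-async everywhere...
$ confctl --object-type discovery select 'dnsdisc=restbase-async' set/pooled=true
# ...then depool it in the newly active DC so async traffic stays off the
# user-facing path.
$ confctl --object-type discovery select 'dnsdisc=restbase-async,name=codfw' set/pooled=false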
Dashboards
Other miscellaneous
Upcoming and past switches
June 2021 switch
- Schedule
- Services: Monday, June 28th, 2021 14:00 UTC
- Traffic: Monday, June 28th, 2021 15:00 UTC
- MediaWiki: Tuesday, June 29th, 2021 14:00 UTC
- Reports
- Services and Traffic
- MediaWiki (1m57s of read-only time)
- Incident documentation/2021-06-29 trwikivoyage primary db
Switching back:
- TBD. Sometime after August 1st. See T281515.
2020 switch
- Schedule
- Services: Monday, August 31st, 2020 14:00 UTC
- Traffic: Monday, August 31st, 2020 15:00 UTC
- MediaWiki: Tuesday, September 1st, 2020 14:00 UTC
- Reports
- Incident documentation/2020-09-01 data-center-switchover (2m49s of read-only time)
Switching back:
- Traffic: Thursday, September 17th, 2020 17:00 UTC
- MediaWiki: Tuesday, October 27th, 2020 14:00 UTC
- Services: Wednesday, October 28th, 2020 14:00 UTC
2018 switch
- Schedule
- Services: Tuesday, September 11th 2018 14:30 UTC
- Media storage/Swift: Tuesday, September 11th 2018 15:00 UTC
- Traffic: Tuesday, September 11th 2018 19:00 UTC
- MediaWiki: Wednesday, September 12th 2018: 14:00 UTC
- Reports
- Datacenter Switchover recap (7m34s of read-only time)
Switching back:
- Schedule
- Traffic: Wednesday, October 10th 2018 09:00 UTC
- MediaWiki: Wednesday, October 10th 2018: 14:00 UTC
- Services: Thursday, October 11th 2018 14:30 UTC
- Media storage/Swift: Thursday, October 11th 2018 15:00 UTC
- Reports
- Datacenter Switchback recap (4m41s of read-only time)
2017 switch
- Schedule
- Elasticsearch: elasticsearch is automatically following mediawiki switch
- Services: Tuesday, April 18th 2017 14:30 UTC
- Media storage/Swift: Tuesday, April 18th 2017 15:00 UTC
- Traffic: Tuesday, April 18th 2017 19:00 UTC
- MediaWiki: Wednesday, April 19th 2017 14:00 UTC (user visible, requires read-only mode)
- Deployment server: Wednesday, April 19th 2017 16:00 UTC
- Reports
- Editing pause for failover test on Wikimedia Blog
Switching back:
- Schedule
- Traffic: Pre-switchback in two phases: Mon May 1 and Tue May 2 (to avoid cold-cache issues Weds)
- MediaWiki: Wednesday, May 3rd 2017 14:00 UTC (user visible, requires read-only mode)
- Elasticsearch: elasticsearch is automatically following mediawiki switch
- Services: Thursday, May 4th 2017 14:30 UTC
- Swift: Thursday, May 4th 2017 15:30 UTC
- Deployment server: Thursday, May 4th 2017 16:00 UTC
- Reports
2016 switch
- Schedule
- Deployment server: Wednesday, January 20th 2016
- Traffic: Thursday, March 10th 2016
- MediaWiki 5-minute read-only test: Tuesday, March 15th 2016, 07:00 UTC
- Elasticsearch: Thursday, April 7th 2016, 12:00 UTC
- Media storage/Swift: Thursday, April 14th 2016, 17:00 UTC
- Services: Monday, April 18th 2016, 10:00 UTC
- MediaWiki: Tuesday, April 19th 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
- Reports
- Wikimedia failover test on Wikimedia Blog
Switching back:
- MediaWiki: Thursday, April 21st 2016, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
- Services, Elasticsearch, Traffic, Swift, Deployment server: Thursday, April 21st 2016, after the above is done
Monitoring Dashboards
Aggregated list of interesting dashboards