
Switch Datacenter

From Wikitech-static
Revision as of 12:23, 12 April 2016 by imported>BBlack (→‎MediaWiki-related)


A datacenter switchover (from eqiad to codfw, or vice versa) comprises switching over multiple components, some of which can be switched independently and many of which need to move in lockstep. This page documents all the steps needed to switch over from the master datacenter to the other one, broken up by component.

Schedule for Q3 FY2015-2016 rollout

  • Deployment server: Wednesday, January 20th
  • Media storage/Swift: Thursday, April 14th 17:00 UTC
  • Traffic: Thursday, March 10th
  • MediaWiki 5-minute read-only test: Tuesday, March 15th 07:00 UTC
  • Services: Thursday, March 17th, 10:00 UTC
  • Services (second test): (week of March 28th)
  • ElasticSearch: Thursday, April 7th, 12:00 UTC
  • MediaWiki: Tuesday, April 19th, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)

Switching back

  • MediaWiki: Thursday, April 21st, 14:00 UTC / 07:00 PDT / 16:00 CEST (requires read-only mode)
  • Services, ElasticSearch, Traffic, Swift, Deployment server: Thursday, April 21st, after the above is done

Per-service switchover instructions

MediaWiki


Before the switchover (after any local testing within codfw):

  • Wipe memcached to prevent stale values (MediaWiki isn't ready for Multi-DC yet).
    • Once eqiad is read-only and the codfw read-only master/slaves have caught up, any subsequent requests to codfw that set memcached are fine.
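There is no single "wipe" command documented here; a minimal sketch, assuming the codfw memcached hosts follow the mc2* naming used elsewhere on this page and that a rolling restart is an acceptable way to flush them:

```shell
# Sketch: flush codfw memcached via a rolling restart from the salt master.
# The 10% batch keeps most of the cluster responsive while each host restarts.
salt -b 10% 'mc2*' 'service memcached restart'
```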

The overall plan for an eqiad->codfw switchover is:

  • Warm up the codfw databases
  • Warm up memcached with parsercache entries
  • Stop the jobqueues in eqiad: cherry-pick the relevant patch and run salt -C 'G@cluster:jobrunner and G@site:eqiad' 'service jobrunner stop; service jobchron stop;'
  • Deploy mediawiki-config with all shards set to read-only in eqiad (revive the earlier read-only patch)
  • Set the eqiad databases (masters) to read-only mode; stop pt-heartbeat (?)
  • Switch the datacenter in puppet and mediawiki-config: set $app_routes['mediawiki'] = 'codfw' in puppet and $wmfMasterDatacenter in mediawiki-config (cherry-pick the relevant patches). This has several consequences:
    • Redis replication will flow codfw => eqiad once puppet has run, first in codfw (salt 'mc2*' 'puppet agent -t'; salt 'rdb2*' 'puppet agent -t') and then in eqiad (salt 'mc1*' 'puppet agent -t'; salt 'rdb1*' 'puppet agent -t')
    • RESTBase (uses puppet's $app_routes, needs a puppet run + service restart): salt -b10% -G 'cluster:restbase' 'puppet agent -t; service restbase restart'
    • All other services will be reconfigured automatically whenever puppet runs after $app_routes is modified: salt 'sc*' 'puppet agent -t'
  • Switch Parsoid's action API endpoint (Parsoid manages its own config, so it needs a deploy + restart of its own): merge the relevant patch
  • Deploy Varnish to switch the backend to appserver.svc.codfw.wmnet/api.svc.codfw.wmnet: merge the relevant patch and run puppet on all cache_text hosts
  • Master swap for every core (s1-7), ES (es1-3), parsercache (pc) and External store (x1) database
    • Technically there is nothing to do at database level once circular replication is setup
    • In reality, some small deploys: set the codfw masters' mysql to read-write and start pt-heartbeat-wikimedia there; change the *-master DNS entries to point to the new masters (only used by humans); optionally, puppetize $master = true
  • Deploy mediawiki-config (only codfw?) with all shards set to read-write
  • Start the jobqueue in codfw: cherry-pick and run salt -b 6 -C 'G@cluster:jobrunner and G@site:codfw' 'puppet agent -t'
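Condensed into one annotated sketch (the commands are copied from the steps above; the gerrit cherry-picks, mediawiki-config deploys and database read-only toggles are shown only as comments, since their exact changes are not reproduced here):

```shell
# 1. Stop the eqiad jobqueue
salt -C 'G@cluster:jobrunner and G@site:eqiad' 'service jobrunner stop; service jobchron stop;'

# 2. Deploy mediawiki-config with eqiad shards read-only, then on each
#    eqiad master: SET GLOBAL read_only = 1; and stop pt-heartbeat

# 3. Flip $app_routes['mediawiki'] / $wmfMasterDatacenter to codfw, then:
salt 'mc2*' 'puppet agent -t'; salt 'rdb2*' 'puppet agent -t'   # codfw first
salt 'mc1*' 'puppet agent -t'; salt 'rdb1*' 'puppet agent -t'   # then eqiad
salt -b10% -G 'cluster:restbase' 'puppet agent -t; service restbase restart'
salt 'sc*' 'puppet agent -t'                                    # other services

# 4. On each codfw master: SET GLOBAL read_only = 0; start pt-heartbeat-wikimedia
# 5. Deploy mediawiki-config with the codfw shards set to read-write

# 6. Start the codfw jobrunners
salt -b 6 -C 'G@cluster:jobrunner and G@site:codfw' 'puppet agent -t'
```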

The plan for switching back is the reverse of the above, with the following extra step:

  • Wipe memcached to clear stale cached values


See the separate page on how to promote a new slave to master. Please note that no topology change happens on failover, so very little from that applies.

Job queue

  • The jobrunners in eqiad get stopped. This is done by setting jobrunner_state: 'stopped' in hiera
  • mediawiki goes read-only; this should ensure no new jobs get enqueued.
  • $mw_primary is set to 'codfw'
  • The mediawiki primary gets switched in mediawiki-config
  • Force a puppet run on the codfw redis hosts, and on the eqiad hosts after that: salt 'rdb2*' 'puppet agent --enable; puppet agent -t'
  • mediawiki is set read-write in codfw
  • We start the jobrunners in codfw and they consume the jobs left over from eqiad. This is done by setting jobrunner_state: 'running'
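A sketch of the Redis side of this, using the rdb1*/rdb2* host patterns from the plan above (the hiera and mediawiki-config changes themselves are not shown):

```shell
# Run puppet on the codfw redis hosts first so they pick up the master
# role, then on eqiad so replication starts flowing codfw => eqiad.
salt 'rdb2*' 'puppet agent --enable; puppet agent -t'
salt 'rdb1*' 'puppet agent --enable; puppet agent -t'
```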


You can force Varnish to pass a request to a backend in codfw or eqiad using the X-Wikimedia-Debug header.

For codfw, use X-Wikimedia-Debug: backend=mw2017.codfw.wmnet

For eqiad, use X-Wikimedia-Debug: backend=mw1017.eqiad.wmnet
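For example, with curl (any page URL works; inspect the response headers to confirm which backend handled the request):

```shell
# Fetch only the headers, routed to the codfw debug backend.
curl -sI -H 'X-Wikimedia-Debug: backend=mw2017.codfw.wmnet' \
    'https://en.wikipedia.org/wiki/Special:BlankPage'
```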

Media storage/Swift

Ahead of the switchover, switch originals and thumbs by running puppet on the cache_upload caches (eqiad first, then everywhere):

 salt -v -t 10 -b 17 -C 'G@cluster:cache_upload and G@site:eqiad' 'puppet agent --test'
 salt -v -t 10 -b 17 -C 'G@cluster:cache_upload' 'puppet agent --test'
 salt -v -t 10 -b 17 -C 'G@cluster:cache_upload' 'puppet agent --test'

Once mediawiki has been switched to codfw

ElasticSearch

Point CirrusSearch to codfw by editing wmgCirrusSearchDefaultCluster in InitialiseSettings.php. The default value is "local", which means that if mediawiki switches DC, everything should follow automatically.
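A sketch of the manual override, assuming the usual mediawiki-config deploy flow from the deployment server; the exact array shape of the setting may differ:

```shell
# In wmf-config/InitialiseSettings.php, change the default, e.g.:
#   'wmgCirrusSearchDefaultCluster' => array( 'default' => 'codfw' ),
# then deploy just that file:
sync-file wmf-config/InitialiseSettings.php 'Point CirrusSearch at codfw'
```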


GeoDNS user routing

Inter-Cache routing

Cache->App routing


  • RESTBase and Parsoid are already active in codfw, using the eqiad MW API.
  • Shift traffic to codfw:
    • Public traffic: Update Varnish backend config.
    • Update RESTBase and Flow configs in mediawiki-config to use codfw.
  • During MW switch-over:

Other miscellaneous