Depooling servers

This page describes various ways to depool different servers. In a glorious future, we'd just need to run conftool depool <service> <node>, but we aren't there yet!

cp*** machines

These are the Varnish frontends, and should be depooled via conftool. To fully depool a server, run the following as root on any puppetmaster frontend:

confctl select dc=<dc>,cluster=<cluster-name>,name=cp*** set/pooled=no

You can find <cluster-name> by looking at conftool-data/node/ in the operations/puppet repository.
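
For example, to depool a hypothetical cache host cp1234 in eqiad belonging to a cluster named cache_text (both the hostname and the cluster name here are illustrative, not taken from this page):

confctl select dc=eqiad,cluster=cache_text,name=cp1234.eqiad.wmnet set/pooled=no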

An easier option is to SSH into the server and just run:

$ sudo -i depool

To re-pool, run:

$ sudo -i pool

Note the use of sudo -i: depool needs environment variables (set up by the root login shell) to get access to etcd.
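
To verify that the change took effect, you can read the state back with confctl (a sketch; the hostname below is illustrative):

$ sudo -i confctl select name=cp1234.eqiad.wmnet get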

mw* machines (and others)

The mw* application servers and other servers managed via PyBal/etcd are depooled using conftool. This is documented at https://wikitech.wikimedia.org/wiki/LVS#Etcd_as_a_backend_to_Pybal_.28All_of_production.29
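
The same confctl pattern applies to these servers. As an illustrative sketch (the dc, cluster, and hostname below are hypothetical, not taken from this page), depooling a single application server would look like:

confctl select dc=eqiad,cluster=appserver,name=mw1234.eqiad.wmnet set/pooled=no

See the linked LVS page for the authoritative procedure and the correct cluster and service names.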