Depooling servers

From Wikitech-static

Contains various ways to depool different servers. In a glorious future, we'd just need to do <code>conftool depool <service> <node></code>, but we aren't there yet!

== cp*** machines ==
These are the varnish frontends, and should be depooled via [[conftool]]. To fully depool a server, run as root on any puppetmaster frontend:

<code>
$ confctl select dc=<dc>,cluster=<cluster-name>,name=cp*** set/pooled=no
</code>


You can find <code><cluster-name></code> by looking at <code>conftool-data/node/</code> in the <code>operations/puppet</code> repository.
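As a worked example (the data centre, cluster, and host name below are hypothetical placeholders, not values from this page), depooling a single cache frontend might look like:

<syntaxhighlight lang="bash">
# Hypothetical values: substitute your own dc, cluster, and host name.
$ sudo confctl select 'dc=eqiad,cluster=cache_text,name=cp1001.eqiad.wmnet' set/pooled=no
</syntaxhighlight>

Quoting the selector keeps the shell from interpreting the commas and any glob characters in the host name.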


An easier option is to ssh into the server and just run
<code>
$ sudo -i depool
</code>


Note the use of <code>sudo -i</code>: <code>depool</code> needs environment variables to get access to etcd.
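Before and after depooling, it can help to confirm what state etcd actually holds for the host. A sketch, assuming confctl's <code>get</code> action is available on your version of conftool (the host name is hypothetical):

<syntaxhighlight lang="bash">
# Print the current pooled state and weight recorded for the host.
$ sudo confctl select 'name=cp1001.eqiad.wmnet' get
</syntaxhighlight>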


== mw* machines (and others) ==
The mw* application servers and various other servers are managed by [[PyBal]] directly in Etcd. These can be controlled via [[conftool]] as well, including with the <code>depool</code> and <code>pool</code> utilities. This is documented at [[LVS#Etcd as a backend to Pybal (All of production)]].
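A minimal sketch, assuming the same confctl selector syntax shown above also applies to these hosts (the host name is hypothetical, not taken from this page):

<syntaxhighlight lang="bash">
# Depool a hypothetical application server from all its services...
$ sudo confctl select 'name=mw1001.eqiad.wmnet' set/pooled=no
# ...and pool it again once maintenance is done.
$ sudo confctl select 'name=mw1001.eqiad.wmnet' set/pooled=yes
</syntaxhighlight>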
== Re-pooling ==
Using the same example host from above, re-pooling the host uses a command similar to the de-pooling one:
<code>
$ sudo confctl select dc=<dc>,cluster=<cluster-name>,name=cp*** set/pooled=yes
</code>
Alternatively, ssh into the server and run
<code>
$ sudo -i pool
</code>
[[Category:How-To]]
[[Category:Runbooks]]

Latest revision as of 17:28, 2 August 2022
