Wikimedia Cloud Services team/EnhancementProposals/Operational Automation

Revision as of 18:30, 5 February 2021

We currently use Puppet to automate most of our tasks, but it has its limitations. We still need a tool to automate, collect and review all our operational procedures. Some examples of such procedures are:

  • Adding a new member to a toolforge instance etcd cluster.
  • Bootstrapping a new toolforge instance.
  • Searching for the host where a backup is kept.
  • Provisioning a new cloudvirt node.
  • Re-image all/a set of the cloudvirt nodes.
  • Manage non-automated upgrades.
  • Take down a cloudvirt node for maintenance.



Problem statement

All these tasks still require manual operations, following a runbook when one is available; such runbooks easily get outdated, the procedures are prone to human error, and they require considerable attention to execute.

Proposal

After reviewing several automation tools (spicerack, saltstack, puppet, ...) and doing a quick POC with the two most relevant candidates (see gerrit:647735 and gerrit:658637; a summary of the experience is at https://etherpad.wikimedia.org/p/ansible_vs_spicerack), I've decided to propose spicerack as the de facto tool for WMCS operational task automation.

Collaboration

The main advantage of choosing spicerack is collaboration with the rest of the SRE teams. This comes with both the privilege and the duty of becoming co-maintainers of spicerack and related projects: we get a say in the direction of the project and the use cases that will be supported, along with the responsibility of driving, reviewing and maintaining the projects for all their users (including other SRE teams).

Structure

The Spicerack ecosystem is split into several projects:

Cumin

Cumin is the lowermost layer. Built on top of ClusterShell, it takes care of translating host expressions into lists of hosts, running commands on them (using whatever execution strategy is selected) and returning the results.

This library should be pretty stable and require little to no changes.
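Cumin's query grammar is much richer (backends, aliases, boolean operators), but the core step of translating a host expression into concrete hostnames can be sketched in a few lines of plain Python. This is an illustrative toy, not Cumin's implementation:

```python
import re
from typing import List


def expand_hosts(expression: str) -> List[str]:
    """Expand a single bracketed range like 'cloudvirt[1001-1003].eqiad.wmnet'.

    Toy sketch only: the real Cumin/ClusterShell grammar supports far more.
    """
    match = re.match(r'^(.*)\[(\d+)-(\d+)\](.*)$', expression)
    if not match:
        return [expression]  # no range: a single literal hostname
    prefix, start, end, suffix = match.groups()
    width = len(start)  # preserve zero padding, e.g. '1001' -> 4 digits
    return [f'{prefix}{i:0{width}d}{suffix}'
            for i in range(int(start), int(end) + 1)]
```

`expand_hosts('cloudvirt[1001-1003].eqiad.wmnet')` yields the three hostnames; Cumin then runs the requested command on each of them and aggregates the results.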

Wmflib

Wmflib is a bundle of generic, commonly used functions related to the Wikimedia Foundation; it includes some helpful decorators (e.g. retry) and similar tools.

This library should hold generic functions that are not bound to the spicerack library and can easily be reused in other, non-spicerack projects.
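As an example of the kind of helper wmflib provides, a retry decorator follows this general pattern (a minimal generic sketch of the pattern; wmflib's actual `retry` has a different, richer signature):

```python
import functools
import time


def retry(tries=3, delay=0.1, backoff=2, exceptions=(Exception,)):
    """Retry the wrapped callable on failure, sleeping with exponential backoff."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(1, tries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == tries:
                        raise  # attempts exhausted: propagate the last error
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator
```

A function polling a service could then be decorated with, say, `@retry(tries=5, exceptions=(ConnectionError,))` and would only raise after five failed attempts.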

Spicerack

Spicerack is the core library. It contains the more Wikimedia-specific modules and the CLI (`cookbook`) used to interact with the different services (e.g. toolforge.etcd), and it is meant to hold the core logic for any interaction with those services.

Here we will have to add, especially at the beginning, some libraries to interact with our services; this is also where most of the code reuse and collaboration will happen. We should always keep in mind that things here can be used by other groups around the Foundation. Code in this library will be thoroughly tested, and no merges should happen without review.

Cookbooks

The Spicerack/Cookbooks repo contains the main recipes to execute. The only logic there should be orchestration; any service-management code should eventually be moved to the Spicerack library described above.

This repository of cookbooks is shared with the rest of the SRE group, but our specific cookbooks will go under the `cookbooks/wmcs` directory. Any helper library should go under `cookbooks/wmcs/__init__.py`, and we should periodically consider moving as much code as possible from there into Spicerack.
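A cookbook under that directory is then a regular Python module. As a sketch of the function-based cookbook interface (the module path, query and command are illustrative; check the spicerack documentation for the exact current API):

```python
"""cookbooks/wmcs/example.py: sketch of a minimal orchestration cookbook."""
import argparse

__title__ = 'Check uptime on a set of hosts'


def argument_parser():
    """Called by the cookbook CLI to parse this cookbook's own arguments."""
    parser = argparse.ArgumentParser(description=__title__)
    parser.add_argument('--query', required=True, help='cumin host query')
    return parser


def run(args, spicerack):
    """Entry point: gets the parsed arguments and the spicerack accessor."""
    remote_hosts = spicerack.remote().query(args.query)  # resolve the hosts
    remote_hosts.run_sync('uptime')  # run the command on all of them
    return 0  # exit code reported back by the cookbook CLI
```

Any logic beyond this kind of orchestration would live in a spicerack module rather than in the cookbook itself.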

Execution of the cookbooks

As of now, these cookbooks can only be run locally. I'm actively considering how to provide a host/VM/... with a spicerack + cumin setup for easy usage and for running long cookbooks, but for the time being we can start locally.

If your cookbooks only access bare-metal machines, you can already run them on the cumin hosts used for wiki operations (e.g. cumin1001.eqiad.wmnet), but those hosts have no access to the VMs as of this writing.

Local setup

To run the cookbooks locally, you will need to create a virtualenv and install the dependencies, for example:

   mkvirtualenv spicerack
   pip install wikimedia-spicerack

NOTE: as of this writing we are still pending a release of spicerack, so for this to work you might have to clone spicerack locally and install it (see https://gerrit.wikimedia.org/r/admin/repos/operations/software/spicerack):

   git clone "https://dcaro@gerrit.wikimedia.org/r/a/operations/software/spicerack"
   cd spicerack
   pip uninstall wikimedia-spicerack
   pip install -e .

Then clone the cookbooks repo (see https://gerrit.wikimedia.org/r/admin/repos/operations/cookbooks):

   git clone "https://dcaro@gerrit.wikimedia.org/r/a/operations/cookbooks"

Then create the relevant config files. The first one is for cumin; place it wherever you prefer (I recommend `~/.config/spicerack/cumin_config.yaml`), with the contents:

   ---
   transport: clustershell
   log_file: cumin.log  # feel free to change this to another path of your choosing
   default_backend: direct
   clustershell:
       ssh_options:
           # needed because VM names get reused, so host keys change
           - |
               -o StrictHostKeyChecking=no
               -o "UserKnownHostsFile=/dev/null"
               -o "LogLevel=ERROR"

And another for spicerack itself (the `cookbook` CLI); I'll use `~/.config/spicerack/cookbook_config.yaml`:

   ---
   # adapt to wherever you cloned the repo
   cookbooks_base_dir: ~/Work/wikimedia/operations/cookbooks
   instance_params:
       cumin_config: ~/.config/spicerack/cumin_config.yaml


With those config files in place you are now able to run the client. From the root of the operations/cookbooks repository, you can list all the cookbooks:

   cookbook -c ~/.config/spicerack/cookbook_config.yaml --list

NOTE: This is a very early proposal and the workflow will be improved; feel free to start any questions/discussions on the talk page or ping me directly (David Caro (talk) 17:21, 5 February 2021 (UTC)).