This page is a guide for migrating [[Trebuchet]] deployed services to [[Scap3]].
 
If you are migrating a Node.js service, check out the [[Services/Scap_Migration|Services Migration Guide]].
 
This guide does not cover every consideration for moving a deployment to Scap3, but it highlights the important details of migrating from Trebuchet to Scap3.
 
== Migration checklist ==
 
* Decide on canary deployment hosts and deployment checks
* Create Puppet patches
* Create <code>scap.cfg</code> patches for your deployed repo
* Schedule a migration window on the [[Deployments]] page and make sure you have the support you need from both ops and releng.
 
== Canary hosts and checks ==
 
{{note|type=warning|Use of canary hosts is highly recommended}}
 
Scap3 supports using canary hosts. This means you can deploy to a subset of nodes before deploying a service everywhere. While this is not ''required'', it is highly recommended that you use canary nodes, particularly for the first deployment.
 
By default, Scap3 will assume that a deployment has been successful if no errors were encountered during deployment; however, it also supports more advanced checks, including running arbitrary commands at any stage of deployment. For more information about what types of checks are available, refer to the more detailed documentation on [https://doc.wikimedia.org/mw-tools-scap/scap3/quickstart/setup.html#service-restarts-and-checks Service Checks].
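
For example, checks live in a <code>scap/checks.yaml</code> file in your repository. The following is a minimal sketch of a command-type check that runs after code is promoted on each target; the check name, stage, and script path are placeholders, not part of this guide's example service:

<syntaxhighlight lang="yaml">
checks:
  endpoint_smoke_test:          # arbitrary, descriptive name for the check
    type: command               # run a shell command on each target
    stage: promote              # run after the new code has been promoted
    command: /usr/local/bin/check-example-service
</syntaxhighlight>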
 
== Puppet Patches ==
 
There are a few files that all deployments will need to edit:
# <code>hieradata/common/role/deployment.yaml</code>
# <code>hieradata/role/common/deployment_server.yaml</code>
# The Puppet code that defines your service's targets (the <code>scap::target</code> definition)
 
=== <code>hieradata/common/role/deployment.yaml</code> ===
 
First, you'll need to remove your deployment definition from <code>hieradata/common/role/deployment.yaml</code>. This dictionary is used by Trebuchet and stored in Salt; it is not needed by Scap3, because Scap3 does not use Salt.
 
=== <code>hieradata/role/common/deployment_server.yaml</code> ===
 
Next, you'll need to add your deployment to the <code>scap::sources</code> dictionary inside <code>hieradata/role/common/deployment_server.yaml</code>. This definition will be used on the deployment host to clone your repository under <code>/srv/deployment/[repo/name]</code>. Here is an example of what to add to the sources dictionary to clone a repo from Gerrit (at <code>https://gerrit.wikimedia.org/r/an/example/repo</code>) to tin (at <code>tin:/srv/deployment/example/repo</code>):
 
<syntaxhighlight lang="yaml">
example/repo:
  repository: an/example/repo
</syntaxhighlight>
 
Alternatively, if your repo on Gerrit has the same name as the directory structure on tin (e.g., <code>https://gerrit.wikimedia.org/r/real/repo</code> → <code>tin:/srv/deployment/real/repo</code>), the YAML inside <code>deployment_server.yaml</code> should look like:
 
<syntaxhighlight lang="yaml">
real/repo: {}
</syntaxhighlight>
 
=== <code>scap::target</code> ===
 
Finally, you'll need to add a <code>scap::target</code> definition to the puppet code that executes on your target machines.
 
Things to consider in your <code>scap::target</code> definition:
 
# What user should own my remote files to be deployed?
# Should Scap3 handle service restarts as part of deployments?
# Do I want Scap3 to handle any other checks that require sudoer permissions?
 
The default answers to those questions are: the <code>deploy-service</code> user owns the deployed files and performs deployments, Scap3 restarts the service as part of deployment, and Scap3 performs no other checks that require sudo privileges. If the default answers suffice, the following <code>scap::target</code> definition should be enough (using the <code>example/repo</code> example):
 
<syntaxhighlight lang="puppet">
scap::target { 'example/repo':
    service_name => 'example-service',
    deploy_user  => 'deploy-service',
}
</syntaxhighlight>
 
=== New User ===
 
Sometimes, it is inappropriate to use the <code>deploy-service</code> user. In that case, you'll need to set up a new user. <code>scap::target</code> can do that for you with the <code>manage_user</code> option.
 
<syntaxhighlight lang="puppet">
scap::target { 'example/repo':
    service_name => 'example-service',
    deploy_user  => 'deploy-my-service',
    manage_user  => true,
}
</syntaxhighlight>
 
<code>scap::target</code> will do all user, group, and SSH-key management. You will need to add an SSH key for that user under <code>modules/secret/secrets/keyholder</code>. Any non-word character (i.e., <code>\W</code>) will be replaced by <code>_</code> in the key name.
 
For example, for the user <code>deploy-my-service</code>, <code>scap::target</code> will expect to find a public key under <code>modules/secret/secrets/keyholder/deploy_my_service.pub</code>.
 
Note that any new key added to [[Keyholder]] '''MUST''' be protected with a passphrase.
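
For example, a passphrase-protected key pair for the hypothetical <code>deploy-my-service</code> user could be generated with <code>ssh-keygen</code>. This is only a sketch: the comment string and destination path are illustrative, and how the private key reaches Keyholder on the deployment server depends on your secrets workflow (typically the private puppet repository).

<syntaxhighlight lang="shell-session">
$ # Generate the key pair; enter a passphrase when prompted
$ ssh-keygen -t ed25519 -C "deploy-my-service" -f deploy_my_service
$ # The public half goes into the secrets module (note the \W -> _ renaming of the key name)
$ cp deploy_my_service.pub modules/secret/secrets/keyholder/deploy_my_service.pub
</syntaxhighlight>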
 
=== Other considerations ===
 
Scap3 replaces the directory into which code is deployed with each deployment. This means any Puppet-generated or otherwise non-version-controlled files will be replaced on every deploy!
 
== Scap3 Configuration ==
 
Scap3 uses a configuration file that lives at <code>./scap/scap.cfg</code> relative to the root of your repository. A basic <code>scap.cfg</code> that supports:
 
* Submodules
* Service restart
* Canary nodes
* Post-service-restart port check
 
is shown here:
 
<syntaxhighlight lang="ini">
[global]
git_repo: <repo/name> # (e.g. example/repo)
git_deploy_dir: /srv/deployment
git_repo_user: deploy-service
ssh_user: deploy-service
server_groups: canary, default
canary_dsh_targets: target-canary
dsh_targets: targets
git_submodules: True
service_name: <service-name>
service_port: <service-port>
lock_file: /tmp/scap.<service-name>.lock
 
[wmnet]
git_server: deployment.eqiad.wmnet
 
[deployment-prep.eqiad.wmflabs]
git_server: deployment-tin.deployment-prep.eqiad.wmflabs
server_groups: default
dsh_targets: betacluster-targets
</syntaxhighlight>
 
The <code>dsh_targets</code> and <code>canary_dsh_targets</code> point to files under the <code>./scap</code> directory that contain a list of targets. You can use comments beginning with <code>#</code> to help maintain target lists:
 
<syntaxhighlight lang="shell-session">
$ cat target-canary
scb1001.eqiad.wmnet
 
$ cat targets
# eqiad
# scb1001 is a canary
scb1002.eqiad.wmnet
# codfw
scb2001.codfw.wmnet
scb2002.codfw.wmnet
</syntaxhighlight>
 
 
== First Deployment ==
 
Once the Puppet patches, <code>scap.cfg</code>, and other configuration files are written, the first deployment can be tricky and should be done in a specific order.
 
# Merge the <code>scap/scap.cfg</code> and other patches within the repo
# Fetch those patches down to <code>/srv/deployment/[repo]</code> on tin or the current active deployment server
# Stop Puppet on the target hosts using cumin (see the example after this list)
# Merge the puppet patches
# Run puppet on tin (this should effectively be a no-op)
# On tin, as a regular user, run: <syntaxhighlight lang="shell-session">
$ cd /srv/deployment/[repo]
$ scap deploy --init
</syntaxhighlight> This should create <code>/srv/deployment/[repo]/.git/DEPLOY_HEAD</code>, which will act as the configuration for Scap running on the remote hosts
# Run puppet on each of the target hosts. This should ensure that <code>/srv/deployment/[repo]</code> is owned by the <code>deploy_user</code> defined in the <code>scap::target</code> definition in Puppet.
# Now you should be able to run a deployment from the deployment server: <syntaxhighlight lang="shell-session">
$ cd /srv/deployment/[repo]
$ scap deploy -v "message for [[SAL]]"
</syntaxhighlight>
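
For step 3, Puppet can be stopped on the targets from a cluster management host using cumin. The following is a minimal sketch, assuming the standard <code>disable-puppet</code>/<code>enable-puppet</code> wrappers are available on the targets; the host list and reason message are illustrative:

<syntaxhighlight lang="shell-session">
$ # Disable puppet on the targets before merging the puppet patches (step 3)
$ sudo cumin 'scb1001.eqiad.wmnet,scb1002.eqiad.wmnet' 'disable-puppet "migrating example/repo to scap3"'
$ # Later, re-enable puppet with the same message before running it on the targets (step 7)
$ sudo cumin 'scb1001.eqiad.wmnet,scb1002.eqiad.wmnet' 'enable-puppet "migrating example/repo to scap3"'
</syntaxhighlight>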
 
The important steps to keep in mind are that <code>scap deploy --init</code> must be run on the deployment server before puppet runs on any of the target machines and that puppet must run on the target machines before the first deployment.
