Services/Scap Migration
Revision as of 18:02, 18 May 2016
All Node.JS services should start using Scap3 as their deployment method as soon as possible. This document describes how to migrate from Trebuchet to Scap3. Before starting the process, schedule a migration window on the Deployments page and make sure you have support at that time from the relevant teams:
ops && (services || releng).
Before actually converting your deploy repo to Scap3 (and using it for deployment), you need to add yourself to the Scap3 deployment group and prepare your service's Puppet module. Create a patch for the operations/puppet repository with the following changes:
- In modules/admin/data/data.yaml find the definition for the deploy-service group and add all of the deployers to the list of its members.
- In modules/<service-name>/manifests/init.pp, add the following parameter to your service's resource declaration:
deployment => 'scap3',
- In hieradata/common/scap/server.yaml, add your repository to the list of scap::sources to clone:
sources:
  ...
  # The ~ tilde uses the default params to scap::source.
  # See the scap::source docs for usage info.
  <service-name>/deploy: ~
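Taken together, the module change from the second step might look like the sketch below. This is an assumption-laden illustration: it presumes your service is declared through the service::node resource, and <service-name> and <service-port> are placeholders; check how your module actually declares the service before copying this.

```puppet
# Hypothetical sketch of modules/<service-name>/manifests/init.pp after the change.
# service::node and its parameter names are assumptions here; verify them
# against your actual module in operations/puppet.
service::node { '<service-name>':
    port       => <service-port>,
    deployment => 'scap3',  # the line this migration adds
}
```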
Commit your changes, send them to review and have an Operations Engineer ready to review them.
Next, you need to prepare the deploy repository for use with Scap3. Create the scap directory inside your deploy repository and fill scap/scap.cfg with:
[global]
git_repo: <service-name>/deploy
git_deploy_dir: /srv/deployment
git_repo_user: deploy-service
ssh_user: deploy-service
server_groups: canary, default
canary_dsh_targets: target-canary
dsh_targets: targets
git_submodules: True
service_name: <service-name>
service_port: <service-port>
lock_file: /tmp/scap.<service-name>.lock

[wmnet]
git_server: tin.eqiad.wmnet

[deployment-prep.eqiad.wmflabs]
git_server: deployment-tin.deployment-prep.eqiad.wmflabs
server_groups: default
dsh_targets: betacluster-targets
This represents the basic configuration needed by Scap3 to deploy the service. We still need to tell Scap3 on which nodes to deploy and which checks to perform after the deployment on each of the nodes. First, the list of nodes. Two files need to be created: scap/target-canary and scap/targets. In the former, you need to put the FQDN of the node that will act as the canary deployment node, i.e. the node that will first receive the new code, while in the latter file put the remainder of the nodes. For example, if your target nodes are in the SCB cluster, these files should look like this:
$ cat target-canary
scb1001.eqiad.wmnet
$ cat targets
scb1002.eqiad.wmnet
scb2001.codfw.wmnet
scb2002.codfw.wmnet
In the same vein, you need to create the scap/betacluster-targets file, which will contain the FQDNs of the targets in the Beta Cluster.
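For example, assuming a single Beta Cluster target, the file might look like the transcript below. The hostname is purely illustrative (not a real node); use the actual instance names from your deployment-prep project.

```shell
$ cat betacluster-targets
deployment-sca01.deployment-prep.eqiad.wmflabs   # hypothetical hostname
```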
Finally, enable the automatic checker script to check the service after each deployment by placing the following in scap/checks.yaml:
checks:
  endpoints:
    type: nrpe
    stage: restart_service
    command: check_endpoints_<service-name>
Commit your changes, send them to Gerrit for review and merge them.
Before deploying for the first time, you have to make sure the operations/puppet changes have been merged and Puppet has been run on all of your target nodes as well as on the deployment server (tin.eqiad.wmnet). You can then proceed with deploying your service by following the deployment guide.
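A first deployment would then look roughly like the following transcript. This is a sketch, not the authoritative procedure: the commit message is illustrative, <service-name> is a placeholder, and the deployment guide remains the reference for the exact, current commands.

```shell
$ ssh tin.eqiad.wmnet                          # the deployment server
$ cd /srv/deployment/<service-name>/deploy     # your deploy repo checkout
$ git pull && git submodule update --init      # fetch the code to deploy
$ scap deploy 'Migrate <service-name> to Scap3'
```

Scap3 will first deploy to the canary node from scap/target-canary, run the configured checks, and only then continue to the nodes listed in scap/targets.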