ORES/Deployment
Revision as of 21:18, 29 November 2017

This page is a guide on how to deploy a new version of ORES to the servers.


So, your patches are merged into ores/revscoring/other dependencies. You need to increment the version number; try to do that in a SemVer fashion, e.g. only bumping the patch level (0.5.8 -> 0.5.9). The version number appears in more than one place, so use grep to check where the current version is used.
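Since the version string lives in several files, a quick grep is the reliable way to find every spot that needs bumping. A runnable sketch with stand-in files (the real repos have their own layout):

```shell
# Stand-in files; in the real repos you'd run the grep from the checkout
# root of ores / revscoring / etc.
mkdir -p /tmp/version-bump-demo && cd /tmp/version-bump-demo
printf "__version__ = '0.5.8'\n" > about.py
printf "version = 0.5.8\n" > setup_stub.cfg
# List every file:line still carrying the old version before bumping it.
grep -rn "0.5.8" .
```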

Then you need to push the new version to PyPI using:

python setup.py sdist bdist_wheel upload

If you have a GPG/PGP key, you can add --sign to the upload command above to also sign the wheel and the sdist.

Update models

If you are making breaking changes to revscoring, old model files probably won't work, so you need to rebuild the models. Do it using the Makefile in the editquality & wikiclass repos. If a model changes substantially (new features, new algorithm, etc.), make sure to increment the model versions in the Makefile too.

Update wheels

First, clone the ores-wmflabs-deploy repository:

git clone

There is a file in ores-wmflabs-deploy called "requirements.txt". Update the dependencies' version numbers in it, then build wheels by making a virtualenv and installing everything in it:

virtualenv -p python3 tmp
source tmp/bin/activate
pip install --upgrade pip
pip install wheel
pip wheel -w wheels/ -r requirements.txt

It's critical to do this in an environment that is binary-compatible with the production cluster; ores-misc-01.ores.eqiad.wmflabs is set up for exactly that. Don't forget to install the C dependencies beforehand, and don't ignore any errors that come up during the build.
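One way to see which wheels actually carry compiled code, and therefore need the binary-compatible build host, is to look at their platform tags: pure-Python wheels end in none-any. A sketch with stand-in filenames (a real wheels/ directory comes from the pip wheel run above):

```shell
# Stand-in wheel filenames for the demo.
mkdir -p /tmp/wheel-tag-demo && cd /tmp/wheel-tag-demo
touch requests-2.18.4-py2.py3-none-any.whl
touch numpy-1.13.3-cp35-cp35m-linux_x86_64.whl
# Wheels NOT tagged none-any contain compiled code: those are the ones
# that must be built on a host binary-compatible with production.
ls *.whl | grep -v -e '-none-any'
```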

Once the wheels are ready: there is a repo in Gerrit called wheels (research/ores/wheels) where we keep wheels and nltk data. Clone it, update the wheels, and submit a patch:

git clone ssh://

Then, copy the new versions into the wheels folder, delete the old ones, and make a new patch (note that git commit -a does not pick up newly added files, so add them explicitly):

cd wheels
git add .
git commit -m "New wheels for wiki-ai 1.2" -a
git review -R

To rebuild the production wheels, use frozen-requirements.txt rather than requirements.txt.
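The rebuild is the same pip invocation as above, only pointed at the pinned file. A self-contained sketch (an empty stand-in frozen-requirements.txt, so it runs anywhere without touching the real pins):

```shell
# Self-contained sketch: empty stand-in pin file so this runs anywhere;
# the real frozen-requirements.txt pins exact versions (pkg==1.2.3).
mkdir -p /tmp/frozen-wheel-demo && cd /tmp/frozen-wheel-demo
touch frozen-requirements.txt
# Identical to the requirements.txt invocation, but against the pins.
python3 -m pip wheel -w wheels/ -r frozen-requirements.txt
```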

Update ores-wmflabs-deploy

After the patch is +2ed and merged, you should update ores-wmflabs-deploy:

cd ores-wmflabs-deploy
git checkout -b wiki_ai_1.2
source tmp/bin/activate
pip freeze | grep -v setuptools > frozen-requirements.txt
cd submodules/wheels
git pull
cd ../..
git commit -m "Release wiki-ai 1.2"
git push -f origin wiki_ai_1.2
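The pip freeze | grep -v setuptools step above is what produces the pinned file; it can be tried in any Python environment. A runnable sketch (writing to a temp path rather than the real deploy repo):

```shell
# Freeze whatever the current environment resolved, dropping setuptools,
# which pip lists but which we don't want pinned in the deploy repo.
python3 -m pip freeze | grep -v setuptools > /tmp/frozen-demo.txt
# Each surviving line is an exact pin, e.g. somepackage==1.2.3.
wc -l /tmp/frozen-demo.txt
```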

After that you need to make a PR on GitHub; once it's merged, it's good to go!

If you want to deploy to production as well, you need to backport your commits in Gerrit too (ewww). The Gerrit repos are:

git clone ssh://

For ores.


  • "mediawiki/services/ores/deploy" for ores-wmflabs-deploy (note that these repos have diverged [FIXME: Mande?])
  • "mediawiki/services/ores/editquality" for editquality
  • "mediawiki/services/ores/wikiclass" for wikiclass


You need to log into each deploy server to deploy a new version using fabric or scap3, so make sure you have the required permissions.

We have a series of increasingly production-like environments for smoke testing each release; please take the time to go through each step: labs staging -> beta -> production. There is also an automatic canary deployment during scap, which pauses after pushing to scb1002 and gives you the opportunity to compare that server's health to its brethren's.

Read the logs

If something does go wrong, you'll want to read the diagnostic messages; see /srv/log/ores/main.log and app.log. Monitor the logs throughout each deployment stage by going to the target server and running:

 sudo tail -f /srv/log/ores/*.log

Labs

First, deploy to staging. Simply make your changes in the ores-wmflabs-deploy repo and run fab stage. Don't forget to log it in #wikimedia-cloud by typing: "!log ores-staging deploying <HASH> into staging".

Then check that everything is healthy. If so, you are good to go to the labs setup. Rebase the "deploy" branch onto master:

git checkout deploy
git rebase origin/master
git push -f origin deploy

If everything is working as expected, deploy with "fab deploy_web" and then "fab deploy_celery". Once that's done, test that everything still works as expected.

Beta

  1. ssh deployment-tin.eqiad.wmflabs and cd /srv/deployment/ores/deploy
  2. git pull && git submodule update --init
  3. Record the NEWHASH at the top of git log -1
  4. Deploy with scap deploy -v "<relevant task -- e.g. T1234>" and check that everything works as expected.
  5. Record the new revision (NEWHASH) and prepare a message to send to #wikimedia-cloud: "!log deployment-prep deploying ores <NEWHASH>"

Production cluster

You are doing a dangerous thing; breaking the site is extremely easy! Be careful at every step and try to have someone from the team and from ops supervising you. Also remember that ORES depends on a huge number of puppet configurations; check whether your change is compatible with the puppet configs and change them if necessary.

Prep work

We'll double check the hash that is deployed in case we need to revert and then update the code to current master.

  1. ssh deployment.eqiad.wmnet. Then cd /srv/deployment/ores/deploy.
  2. Record the latest revision (OLDHASH) with git log -1 (in case you need to roll back)
  3. Update the deploy repository with git pull && git submodule update --init
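The OLDHASH/NEWHASH bookkeeping above can be sketched in a throwaway repo (standing in for /srv/deployment/ores/deploy): capture HEAD before updating, so the rollback target survives the pull.

```shell
# Throwaway repo standing in for /srv/deployment/ores/deploy.
repo=$(mktemp -d) && cd "$repo" && git init -q .
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "previously deployed release"
OLDHASH=$(git log -1 --format=%H)   # record the rollback target first
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "new release"   # stands in for `git pull`
NEWHASH=$(git log -1 --format=%H)   # the revision actually deployed
echo "rollback target: $OLDHASH"
```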
Deploy to canary

Then you need to deploy to a single node, the canary node, to check that everything works as expected. Right now, it's scb1002.eqiad.wmnet.

  1. Run the deploy with scap deploy -v "<relevant task -- e.g. T1234>". Do not hit "y" yet! You have only deployed to the canary server at this point; please smoke test it.
  2. ssh scb1002.eqiad.wmnet and check the service internally with curl, appending $(date +%s) to the URL as a cache-buster
    • It would be great to test other aspects too if you are changing them (e.g. check that a new model returns data if you are adding one).
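The $(date +%s) suffix in the canary check is a cache-buster: every request gets a unique URL, so no cache layer can serve a stale "OK" and hide a broken service. A sketch of the URL construction (example.org is a placeholder, as the real ORES endpoint isn't recorded on this page):

```shell
# Unix timestamp makes each health-check URL unique.
url="http://example.org/?nocache=$(date +%s)"
echo "$url"
# Real check (not run here): curl "$url"
```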
Continue deployment to prod

If everything works as expected, we're ready to continue.

  1. Deploy it fully by answering "y" to the scap prompt.
  2. ...
    • If something went wrong, roll back with scap deploy -v -r <OLDHASH>
    • If everything looks OK, say "Victory! ORES deploy looks good" (or something like that) in #wikimedia-operations