
Analytics/Systems/Cluster/Deploy/Refinery

Revision as of 13:44, 7 April 2017 by Milimetric (Milimetric moved page Analytics/Cluster/Deploy/Refinery to Analytics/Systems/Cluster/Deploy/Refinery: Reorganizing documentation)

Refinery is the software infrastructure that is used on the Analytics Cluster. The source code is in the analytics/refinery repository.

What are we deploying

When deploying new code to the cluster, you might be deploying refinery (for example, just Oozie jobs), refinery-source (for example, new Java code changing the pageview definition), or both.

  • If you want to deploy refinery-source first, follow the procedure here: Analytics/Cluster/Refinery-source
  • If you are deploying refinery but not refinery-source, you can just do the scap deploy described below.
  • If Oozie jobs are affected, you might need to restart them.

How to deploy

Caution: Before deploying anything, double check the space left on stat1002's root partition (at least for the moment, until we find a more permanent solution). Big repositories like refinery (due to git-fat) can consume a lot of space on the target hosts.
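As a quick pre-flight check on stat1002, something like the following works (a sketch; the 90% threshold is an arbitrary choice, not a documented limit):

```shell
# Check how full the root partition is before deploying.
df -h /

# Abort early if usage is above an (arbitrary) threshold.
usage=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$usage" -gt 90 ]; then
    echo "Root partition is ${usage}% full - free up space before deploying" >&2
fi
```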

The refinery now uses Scap so the deployments are as simple as:

  • SSH into tin
  • Create a screen/tmux session so that a network failure cannot ruin your deployment.
  • Run:
    cd /srv/deployment/analytics/refinery
    git pull
    scap deploy

Scap deploys to the canary host first (stat1002) and then, if everything goes well, asks you to proceed with the rest of the hosts. To double check the status of the deployment, you can run scap deploy-log and watch what is happening. Please note that Scap creates another copy of the repository for each deployment, and it uses symlinks to switch between versions.
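The symlink switching mentioned above is why the target hosts can briefly show old code. A toy illustration of the general technique (the directory names here are made up, not scap's actual layout):

```shell
# Toy illustration of symlink-based version switching (made-up paths).
mkdir -p revs/old-sha revs/new-sha
echo "old" > revs/old-sha/VERSION
echo "new" > revs/new-sha/VERSION

ln -sfn revs/old-sha current   # "current" points at the old deploy
cat current/VERSION            # prints: old

# Flipping the symlink switches readers over to the new code; until it
# happens, anyone looking inside "current" still sees the old tree.
ln -sfn revs/new-sha current
cat current/VERSION            # prints: new
```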

  • Make sure the whole deployment went fine, and roll back in case of fireworks.
  • After the deployment, ssh into stat1002. Make sure to wait until the entire scap deployment has completed before doing so: if you cd into the refinery directory before the symlinks switch, you'll still be seeing the old git log and be very confused.
  • Change into /srv/deployment/analytics/refinery and check (git log) that the code has been pulled.
  • Create a screen/tmux session to prevent network failures from interfering with the execution of the following command.
  • Run sudo -u hdfs /srv/deployment/analytics/refinery/bin/refinery-deploy-to-hdfs --verbose --no-dry-run
    • This brings the refinery code onto HDFS (it does not resubmit Oozie jobs; if you need to do that, see the Analytics/Cluster/Oozie/Administration page).
    • This step only needs to be done on one host (stat1002 is fine).
  • Finally, consider changing any documentation that needs to be updated. This may include: Analytics/Data/Webrequest, Analytics/Data/Pageview_hourly, Research:Page_view (and its sub-pages).
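The "check that the code has been pulled" step above can be made less error-prone with a tiny helper. This is only a sketch: `same_revision` and the expected-SHA variable are made up for illustration, not part of refinery or scap.

```shell
REFINERY_DIR=/srv/deployment/analytics/refinery   # deploy target on stat1002

# Succeed iff the actual SHA starts with the expected SHA (or a prefix of it).
same_revision() {
    expected=$1
    actual=$2
    case "$actual" in
        "$expected"*) return 0 ;;
        *)            return 1 ;;
    esac
}

# Usage on stat1002, once scap has finished and the symlink has switched:
#   actual=$(git -C "$REFINERY_DIR" rev-parse HEAD)
#   same_revision "$EXPECTED_SHA" "$actual" || echo "old code still live!" >&2
```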

How to deploy Oozie jobs

You can find test / production deployment information here: Analytics/Cluster/Oozie/Administration

For a tutorial / introduction to Oozie, read this page first: Analytics/Cluster/Oozie.