Performance/Runbook/Webperf-processor services

From Wikitech-static
Revision as of 00:37, 22 December 2018 by imported>Krinkle (Krinkle moved page Performance/Runbook/Webperf services to Performance/Runbook/Webperf-processor services)

This is the runbook for deploying and monitoring the webperf-processor services.


The Puppet role for these services is role::webperf::processors_and_site.


The navtiming service (written in Python) extracts information for the NavigationTiming and SaveTiming schemas from EventLogging via Kafka, and submits the resulting metrics to Graphite via StatsD. The original data is submitted to EventLogging by a JavaScript client for MediaWiki (beacon js source, MediaWiki extension).
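As a rough illustration of this pipeline (a hypothetical sketch, not the actual navtiming source; the field names and metric prefix below are made up for the example), one EventLogging event is reduced to a handful of StatsD timing lines:

```python
# Hypothetical sketch: reduce an EventLogging NavigationTiming event to
# "name:value|ms" StatsD lines. The schema fields and the
# "frontend.navtiming" prefix are illustrative, not the production set.

def event_to_statsd(event):
    """Turn one EventLogging event dict into StatsD timing lines."""
    metrics = []
    for field in ("responseStart", "domComplete", "loadEventEnd"):
        value = event.get("event", {}).get(field)
        # Skip missing or bogus (negative, non-integer) timings.
        if isinstance(value, int) and value >= 0:
            metrics.append("frontend.navtiming.%s:%d|ms" % (field, value))
    return metrics

sample = {"schema": "NavigationTiming",
          "event": {"responseStart": 120, "domComplete": 900}}
print(event_to_statsd(sample))
# → ['frontend.navtiming.responseStart:120|ms', 'frontend.navtiming.domComplete:900|ms']
```

The real service consumes these events continuously from Kafka and sends the lines to StatsD over UDP.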


Monitor navtiming

Application logs for this service are currently not sent to Logstash.

  • SSH to the host you want to monitor.
  • Run sudo journalctl -u navtiming

Deploy navtiming

This service runs on the webperf*1 hosts.

To update the service on the Beta Cluster:

  1. ssh to deployment-deploy01.deployment-prep.eqiad.wmflabs
  2. cd /srv/deployment/performance/navtiming
  3. git pull
  4. scap deploy

To deploy a change in production:

  1. Before you start, open a terminal window in which you monitor the service on a host in the current primary data center. For example, SSH to webperf1001 (if eqiad is primary) and run sudo journalctl -u navtiming.
  2. In another terminal window, ssh to deployment.eqiad.wmnet and navigate to /srv/deployment/performance/navtiming.
  3. Prepare the working copy:
    • Ensure the working copy is clean: git status.
    • Fetch the latest changes from the Gerrit remote: git fetch origin.
    • Review the changes: git log -p HEAD..@{u}.
    • Apply the changes to the working copy: git rebase.
  4. Deploy the changes; this automatically restarts the service afterward.
    • Run scap deploy


The coal service is written in Python.

Application logs are kept locally, and can be read via sudo journalctl -u coal.


The statsv service (written in Python) forwards data from the Kafka stream of /beacon/statsv web requests to StatsD.
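To make the forwarding concrete, here is a hypothetical sketch (not the actual statsv source): it assumes the convention that each query-string parameter carries a value with a StatsD type suffix, e.g. 1234ms for a timing or 1c for a counter.

```python
# Hypothetical sketch: turn the query string of a /beacon/statsv request
# into StatsD lines. The "value + unit suffix" convention (1234ms, 1c)
# is an assumption made for this example.
from urllib.parse import urlsplit, parse_qsl

def statsv_to_statsd(url):
    """Map one statsv beacon URL to a list of StatsD lines."""
    lines = []
    for name, value in parse_qsl(urlsplit(url).query):
        if value.endswith("ms"):
            lines.append("%s:%s|ms" % (name, value[:-2]))  # timing
        elif value.endswith("c"):
            lines.append("%s:%s|c" % (name, value[:-1]))   # counter
        # Unrecognized suffixes are dropped silently in this sketch.
    return lines

print(statsv_to_statsd("/beacon/statsv?MediaWiki.ping=1c&MediaWiki.load=1234ms"))
# → ['MediaWiki.ping:1|c', 'MediaWiki.load:1234|ms']
```

In production the URLs arrive via the Kafka stream of web requests rather than being parsed one at a time like this.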

Application logs are kept locally, and can be read via sudo journalctl -u statsv.


Written in Python.


This powers the site; a Beta Cluster instance is at