Revision as of 01:23, 23 March 2023 by Krinkle (→‎Contact: add section)

This is the entry point for documentation and infrastructure developed or operated by the Performance Team at Wikimedia.

For more information about the team, what we do, and how to reach us, see the Wikimedia Performance Team page.


The "performance practices" guides help set direction: they can guide new development, or be used periodically to identify areas for improvement by comparing current code against the recommended practices.

The "measure" guides help assess the actual performance of existing code, and help iterate on proposed changes before, during, and after their development.

Tools and data

Tools we provide for use by engineers who develop or operate Wikimedia software.

Synthetic traffic

Monitoring real traffic

  • PHP Flame Graphs: Flame graphs of time spent in the MediaWiki backend application (reported hourly and daily, split by service entry point).
  • Navigation Timing (Grafana): Page load time and other real-user metrics from MediaWiki page views, collected via Navigation Timing and Paint Timing APIs (split by platform, country, and browser).
    • responseStart by CDN host: Roundtrip latency from browsers, split by CDN server. Allows for natural experimentation and regression detection around upgrades and configuration changes to Wikimedia CDN, e.g. Linux kernel changes, and upgrades/changes to Varnish, HAProxy or ATS.
  • CrUX report (Grafana): Independent copy of Google's periodically published Chrome UX Report and the Core Web Vitals as measured from eligible Chrome-with-Google-account users.
  • CPU benchmark (Grafana): Collected as part of our Navigation Timing beacon to help assess baseline performance. Also powers the AS Report.
  • AS Report: Periodic comparison of backbone connectivity from different Internet service providers, based on anonymised Navigation Timing and CPU benchmark datasets.
  • WANCache (Grafana): Metrics about Memcached keys and computations in your backend code anywhere in MediaWiki core or extensions.
  • Backend Pageview Timing: Backend latency from MediaWiki when generating pageviews to logged-in users (and to our CDN), split by platform.
  • Save Timing (Grafana): Time from submitting an edit to the save completing on an article (with breakdown by page type, account type, and service entry point).
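The real-user dashboards above are fed by a client-side beacon. As a rough sketch of the kind of data involved, a page can read its own timing entry via the standard Performance API; the `summarizeNavigation` helper and the stub entry below are illustrative only, not part of the actual beacon implementation:

```javascript
// Summarize a few real-user metrics from a PerformanceNavigationTiming-like
// entry. Field names follow the W3C Navigation Timing Level 2 specification;
// values are milliseconds relative to the start of the navigation.
function summarizeNavigation(entry) {
  return {
    // Time to first byte: network roundtrip to the CDN plus backend time.
    responseStart: entry.responseStart,
    // Time until the document and its subresources have finished loading.
    domComplete: entry.domComplete,
    // Total page load time as reported by the load event.
    loadTime: entry.loadEventEnd,
  };
}

// In a browser, the live entry comes from the Performance API:
//   const [nav] = performance.getEntriesByType('navigation');
//   console.log(summarizeNavigation(nav));

// Outside a browser, a stub entry illustrates the shape of the data:
const stub = { responseStart: 120, domComplete: 850, loadEventEnd: 900 };
console.log(summarizeNavigation(stub));
```

The production beacon samples entries like this from real page views and reports them for aggregation in Grafana, which is what allows splitting by platform, country, and browser.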

Debugging and development

  • mw.inspect: Inspect build sizes in production or locally during development.
  • Fresnel CI: Easy access to a subset of synthetic and real-user metrics during code review.
  • WikimediaDebug: Capture and analyze performance profiles and debug logs in production (integrates with Logstash and XHGui), e.g. when staging a deployment or afterwards.
  • XHGui: Access to detailed per-request profiles captured via WikimediaDebug.



Internal runbooks

Infrastructure diagram.

These pages are mainly for use within the team.

Internal workflows:

Other dashboards we regularly monitor: