Performance/Guides/Measure frontend performance


The Performance Team continuously measures site performance using mediawiki/extensions/NavigationTiming and our WebPageTest instance, but you can also measure frontend performance yourself, either by using the Performance API in your local web browser or by using the infrastructure we provide to collect metrics.

In production

If you are starting a new project or otherwise have not yet set performance objectives, first read our guidance Page load performance.

  • Navigation Timing dashboard. responseStart reflects how fast the client received a response from the server ("time to first byte", includes time the server takes to compute the response). firstPaint reflects when the browser first draws content on the screen (e.g. stylesheets arrived and above-the-fold HTML content).
  • ResourceLoader dashboard: A significant change in HTTP request rate, or cache hit ratio, is usually a sign of a problem. You can also monitor the time it takes to "build" your feature's module, at ResourceLoader module builds.

The frontend "Navigation Timing" dashboard covers:

  • "did we accidentally reduce the localStorage and/or native browser cache hit rate"
  • "did we just deploy something that adds slow code to the critical path for loading a page"

Grafana "WebPageTest drilldown" dashboard: answers "Did we just deploy something that causes more stuff to be downloaded on a page view?"

Getting metrics from your browser

Modern browsers have built-in support for performance measurements. We use some of these metrics to collect data from real users and learn about the performance of Wikipedia. You can get those metrics yourself by running JavaScript in your browser console.

One important caveat: most of these metrics are browser-focused rather than user-focused.

Navigation Timing API

The Navigation Timing API is supported in all major browsers. From the API you get information about how the browser processes the page and all its assets.

In version 1 all time metrics are measured as UNIX time, i.e. milliseconds since midnight of January 1, 1970 (UTC). With version 2 all metrics are relative to navigation start.

If your browser supports version 1 you can get the information by doing the following:

var t = window.performance.timing;
console.log({
    navigationStart: 0,
    unloadEventStart: t.unloadEventStart > 0 ? t.unloadEventStart - t.navigationStart : undefined,
    unloadEventEnd: t.unloadEventEnd > 0 ? t.unloadEventEnd - t.navigationStart : undefined,
    redirectStart: t.redirectStart > 0 ? t.redirectStart - t.navigationStart : undefined,
    redirectEnd: t.redirectEnd > 0 ? t.redirectEnd - t.navigationStart : undefined,
    fetchStart: t.fetchStart - t.navigationStart,
    domainLookupStart: t.domainLookupStart - t.navigationStart,
    domainLookupEnd: t.domainLookupEnd - t.navigationStart,
    connectStart: t.connectStart - t.navigationStart,
    connectEnd: t.connectEnd - t.navigationStart,
    secureConnectionStart: t.secureConnectionStart ? t.secureConnectionStart - t.navigationStart : undefined,
    requestStart: t.requestStart - t.navigationStart,
    responseStart: t.responseStart - t.navigationStart,
    responseEnd: t.responseEnd - t.navigationStart,
    domLoading: t.domLoading - t.navigationStart,
    domInteractive: t.domInteractive - t.navigationStart,
    domContentLoadedEventStart: t.domContentLoadedEventStart - t.navigationStart,
    domContentLoadedEventEnd: t.domContentLoadedEventEnd - t.navigationStart,
    domComplete: t.domComplete - t.navigationStart,
    loadEventStart: t.loadEventStart - t.navigationStart,
    loadEventEnd: t.loadEventEnd - t.navigationStart
});

If your browser supports version 2 you can run it like this to get all the entries:

window.performance.getEntriesByType('navigation').forEach(entry => console.log(entry));
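The entry object has many fields; a minimal sketch that pulls out a few key milestones (all values are already relative to navigation start, in milliseconds):

```javascript
// Read the Navigation Timing Level 2 entry, if the browser provides one.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log({
    ttfb: nav.responseStart,        // "time to first byte"
    domInteractive: nav.domInteractive,
    loadEventEnd: nav.loadEventEnd,
    transferSize: nav.transferSize  // bytes over the network (0 on a cache hit)
  });
}
```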

Or, if you want deltas (which make it easier to see where the time is spent):

var t = window.performance.timing;
console.log({
    domainLookupTime: (t.domainLookupEnd - t.domainLookupStart),
    redirectionTime: (t.fetchStart - t.navigationStart),
    serverConnectionTime: (t.connectEnd - t.connectStart),
    serverResponseTime: (t.responseEnd - t.requestStart),
    pageDownloadTime: (t.responseEnd - t.responseStart),
    domInteractiveTime: (t.domInteractive - t.navigationStart),
    domContentLoadedTime: (t.domContentLoadedEventStart - t.navigationStart),
    pageLoadTime: (t.loadEventStart - t.navigationStart),
    frontEndTime: (t.loadEventStart - t.responseEnd),
    backEndTime: (t.responseStart - t.navigationStart)
});

User Timing API

The User Timing API is also supported by all major browsers. The API lets developers define custom measurements on the page. We use it today to measure the JavaScript startup time for MediaWiki, and in the future this API will become more important to us if we build a single-page application.

We currently only use marks and not measures. To get the marks we create, you can query the User Timing entries in your console.
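You can also create a mark yourself and then list all marks on the page; a minimal sketch (the mark name myFeatureReady is only an illustration, not a name MediaWiki actually uses):

```javascript
// Create a custom mark, then list every mark recorded on the page.
performance.mark('myFeatureReady');
performance.getEntriesByType('mark').forEach(mark => {
  console.log(mark.name, mark.startTime);
});
```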


Resource Timing API

The Resource Timing API provides information about all resources downloaded for a page. In version 2 the size of each resource is also included. To get information about resources on different domains, you need to add the Timing-Allow-Origin response header (we do that on Wikimedia domains).
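For example, to list every resource with its total fetch time (and, in browsers that implement version 2, its transfer size):

```javascript
// List every resource fetched by the page, with timing and size.
performance.getEntriesByType('resource').forEach(resource => {
  console.log(resource.name, {
    duration: resource.duration,         // total fetch time in ms
    transferSize: resource.transferSize  // bytes over the network (version 2 only)
  });
});
```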


Paint Timing API

The Paint Timing API provides information about when the browser starts to paint something on the screen. First paint is interesting because it is more closely related to the user experience.

So far Chrome and IE have been the only browsers supporting this not-yet-standardized feature, but Firefox is coming along.

One important thing about first paint is that it is measured from the browser's perspective: it doesn't take into account the rest of the pipeline bringing pixels to the user's eyes (operating system, motherboard, GPU, screen).

console.log(window.performance.timing.msFirstPaint - window.performance.timing.navigationStart);

In Firefox firstPaint is called timeToNonBlankPaint and is at the moment behind a preference: you need to set dom.performance.time_to_non_blank_paint.enabled to true in about:config for it to work.

console.log(window.performance.timing.timeToNonBlankPaint - window.performance.timing.navigationStart);
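In browsers that implement the standardized Paint Timing API, you can instead read the paint entries directly:

```javascript
// Standardized entries: 'first-paint' and 'first-contentful-paint'.
performance.getEntriesByType('paint').forEach(paint => {
  console.log(paint.name, paint.startTime); // ms relative to navigation start
});
```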

Testing performance on your local machine

There are two ways to test performance on your local machine:

  • Use developer tools to get in-depth information about JavaScript and layout. This is good to investigate bottlenecks or known problems.
  • Collect First Visual Change and Speed Index to make sure that a change you make doesn't impact those values (by testing each change a number of times).

Using developer tools

Using developer tools is perfect for finding in-depth information about JavaScript or CSS performance.


Chrome has a long history of strong developer tools. You should check out the performance tab where you can investigate JavaScript and CSS performance. The best way to learn how to do this well is to watch performance audits done by Google Engineers. Check out Paul Irish investigating CNET, Time and Wikipedia or look at Sam Saccone Profiling Paint Perf or Identifying the JavaScript slowdown.


Mozilla is about to release Firefox 57 (Quantum) in November this year. When that has happened we will look for good examples of how to use the Firefox devtools.


Windows Performance Toolkit is the way to test on Edge (the Microsoft team recommends that you don't use devtools because it adds too much overhead). You need to invest some time to get into the (powerful) toolkit; we will check if we can find tutorial videos as a starting point. To get Edge on platforms other than Windows you can use a virtual machine.

Collecting Visual Metrics like SpeedIndex and First Visual Change

One problem with the metrics that you collect from the browser is that they are browser-centric instead of user-centric. Visual Metrics, computed from a video recording of the screen, are much closer to what the user actually experiences.

Speed Index is the average time at which visible parts of the page are displayed. It is expressed in milliseconds and depends on the size of the viewport.

To get user-centric metrics, we need to record a video of the screen and analyze the results. To do that we use FFMPEG, ImageMagick and a couple of Python libraries. The easiest way to get that all to work is to use a ready-made Docker container containing all the software you need.

Testing for changes in Visual Metrics is something you don't need to do for every change, but if you know that the change you are doing can impact performance, you should do it.

Setup with Docker

The Docker container comes ready-made with Firefox and Chrome.

  1. Install Docker
  2. Download the container to your local machine:
    docker pull sitespeedio/browsertime
  3. Get the IP for localhost:
    docker run --rm -it alpine nslookup docker.for.mac.localhost
    nslookup: can't resolve '(null)': Name does not resolve
    Name:      docker.for.mac.localhost
    Address 1:
  4. Run the container against your localhost (on Mac OS X this is how the container accesses localhost on your machine):
    docker run sitespeedio/browsertime

Setting up connectivity

Running tests against your localhost will be super fast and will not match the experience of a real user. To better simulate real user conditions you need to slow down your connection. It's hard to do that inside of Docker, since it depends on the host you run on; it's simpler to change the connectivity outside of Docker.

On Mac OS X you can do that with pfctl and on Linux you can use tc. If you want help to simulate slow networks you can use Throttle.

Throttle has pre-configured connectivity profiles following the same setup as WebPageTest, so it is easy to simulate 3G/2G connectivity.


To get Throttle up and running you need the latest LTS release of Node.js, and then install it:

npm install sitespeedio/throttle -g
Linux (tc based)

If you test on localhost on Linux, you need to tell Throttle that. You can add delay/latency with the --rtt switch. To add 100 ms latency to all traffic on localhost, run:

throttle --localhost --rtt 100

To remove the latency, stop Throttle:

throttle --stop

You can also use the pre-made connectivity profiles (3g, 3gfast, 3gslow, 2g and cable) like this:

throttle --profile 3g

You will then have connectivity matching a 3G network.

Mac OS X

On Mac you can specify RTT and up/download speed on all network interfaces (not only on localhost):

throttle --up 330 --down 780 --rtt 200

And to stop it:

throttle --stop

Full setup

To run this locally, first set the connectivity (this will affect every access to the internet from your local machine, so be sure to turn it off when you are done), then run your tests, and finally remove the connectivity filters.

throttle --up 330 --down 780 --rtt 200
docker run --shm-size=1g --rm -v "$(pwd)":/browsertime sitespeedio/browsertime --video --speedIndex -n 5 -b chrome
throttle --stop

You can choose between running Chrome and Firefox. For Chrome you can also get the trace log by adding --chrome.collectTracingEvents. You can take the trace log and drag and drop it into the Performance tab in Chrome devtools to get the full picture.

The files output by browsertime go into a browsertime-results folder in the working directory by default.

Using WebPageReplay

You can also run your tests through WebPageReplay, a tool that records and replays web pages. The idea is to eliminate server/internet variability and let you focus only on frontend changes. We use it today when we collect metrics for the alerts; you can read more about how we do that in the WebPageReplay setup.

You can do the same on your local machine. It works like this: the browser first accesses the URL you configure from the internet or your local server, WebPageReplay records the responses, and then the browser accesses the WebPageReplay replay server locally on your machine. It tests the page locally a configurable number of times, and you can then take the median value of metrics like SpeedIndex or First Visual Change and compare them with and without your change.
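Computing the median takes only a few lines; a sketch (the metric values are made up):

```javascript
// Return the median of an array of metric values (e.g. SpeedIndex per run).
function median(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// Example: SpeedIndex from five runs (made-up numbers).
console.log(median([1450, 1520, 1480, 1900, 1475])); // 1480
```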

You need Docker to run it, and make sure you use the latest released container (2.1.1-wpr-alpha in this example).


You run it the same way as standalone Browsertime except that you pass on the Docker environment variable -e REPLAY=true to turn on the replay functionality.

docker run --cap-add=NET_ADMIN --shm-size=1g --rm -v "$(pwd)":/browsertime -e REPLAY=true -e LATENCY=100 sitespeedio/browsertime:2.1.1-wpr-alpha -n 21


Make sure to disable fetching the HAR (--skipHar) when you use Firefox, since the HAR functionality is broken in Firefox > 54 (but the Mozilla team will soon release a new version).

docker run --cap-add=NET_ADMIN --shm-size=1g --rm -v "$(pwd)":/browsertime -e REPLAY=true -e LATENCY=100 sitespeedio/browsertime:2.1.1-wpr-alpha -b firefox --skipHar -n 21

The HAR file

When you test your page, your tool will generate a HAR file that describes how the browser made its requests and how the server responded. Analyzing the HAR file is the best way to understand what is happening.
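A HAR file is plain JSON, so you can also inspect it programmatically. A minimal sketch in Node.js (summarizeHar is a hypothetical helper, and the inline HAR is made up):

```javascript
// Summarize a parsed HAR object: request count and total response bytes.
function summarizeHar(har) {
  const entries = har.log.entries;
  const totalBytes = entries.reduce(
    (sum, entry) => sum + Math.max(entry.response.bodySize, 0), // -1 means unknown
    0
  );
  return { requests: entries.length, totalBytes };
}

// Example with a tiny inline HAR:
const har = { log: { entries: [
  { request: { url: 'https://en.wikipedia.org/' }, response: { bodySize: 12345 } },
  { request: { url: 'https://en.wikipedia.org/w/load.php' }, response: { bodySize: 6789 } }
] } };
console.log(summarizeHar(har)); // { requests: 2, totalBytes: 19134 }
```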

To get a waterfall view of the HAR file you can use a HAR viewer.

Comparing HAR files

The easiest way to compare HAR files is to layer them on top of each other and toggle the transparency between them. There are online tools that let you do that by uploading your HAR files.

Testing performance on your Android phone

You can run tests on your Android phone using Chrome to collect performance metrics. You need an Android phone and you need to prepare it for testing.

Prepare your phone

These are the steps you need to do on your phone:

  • Install Chrome or update Chrome so you run the latest version
  • Enable developer USB access to your phone: go to About device, scroll down to Build number, and tap it seven (7) times.
  • Disable screen lock on your device.
  • Enable Stay awake
  • Enable USB debugging in the device system settings, under Developer options.
  • Plug in your phone using the USB port on your desktop computer.
  • When you plug in your phone, click OK on the “Allow USB debugging?” popup.

If you will run many tests you can install the Stay Alive app and start it.

Prepare your computer

Using Ubuntu/Linux

If you are on Ubuntu, it is really easy to run your tests because you can use Docker with pre-installed software. On Ubuntu you can map your USB ports inside of Docker (that doesn't work on Mac), which makes it really simple.

You need to run Docker in privileged mode to be able to mount your USB ports. Otherwise it works the same way as on your local desktop, except that you need to tell the container to start the ADB server (-e START_ADB_SERVER=true), tell it to use the Chrome instance on your phone, and turn off the XVFB that is started automatically (--browsertime.xvfb false).

docker run --privileged -v /dev/bus/usb:/dev/bus/usb -e START_ADB_SERVER=true --shm-size=1g --rm -v "$(pwd)":/ sitespeedio/ -n 5 --browsertime.xvfb false

Make sure to change the version of the package (6.2.3 in the example) to use the absolute latest version.

When you run your tests like this you will use the current network on your phone. If you want to test with limited connectivity you will need additional tooling to throttle the phone's connection.

Mac OS X

You need ADB and Node.js (LTS) to be able to run your tests. Install the following:

  • Install the Android SDK on your desktop (just the command line tools!). If you are on a Mac and use Homebrew, just run: brew install android-platform-tools
  • Install NodeJS LTS
  • Install the tool globally with npm install -g

Before you start to test, start the adb server on your desktop (plug in your phone first): adb start-server

Then you can run your first test.

If you want to be able to record a video and get Visual Metrics like First Visual Change and Speed Index, you need to install all the required dependencies, and that is quite a bit of work. Using Docker on Ubuntu you get those dependencies for free.

Using different Chrome versions

You can run your tests on different Chrome versions. First install the beta/canary/dev version you want to test, and then choose it in the configuration:

  • stable
  • beta
  • canary
  • dev

Collect internal browser metrics

When you use Chrome you can turn on the internal trace logging to fetch metrics like how much time is spent on repaint, parseHTML etc. You do that by enabling the devtools.timeline trace category. The default trace categories make the metrics unstable, so turning on just devtools.timeline is enough.

To get something valuable out of the metrics, you can use the chrometrace-sitespeedio-plugin, which collects the metrics and shows them.

To do that, you need to clone the repository and install its dependencies:

git clone

cd chrometrace-sitespeedio-plugin && npm install

Then you run the plugin like this (make sure to change the path to the plugin):

--plugins.add ../chrometrace-sitespeedio-plugin/ "" -n 11 "devtools.timeline" true

You will then get a tab called Chrome trace where you will see time spent per category (scripting/rendering/loading/painting) and time per activity.

Testing using WebPageReplay

To test using WebPageReplay (replaying your page locally from your desktop) you need to run Android 5+.