Prometheus


What is it?

Prometheus is a free software ecosystem for monitoring and alerting, with a focus on reliability and simplicity. See also the prometheus overview and the prometheus FAQ.

There are a few interesting features that are missing from what we have now, among others:

multi-dimensional data model
Metrics have a name and several key=value pairs to better model what the metric is about. For example, to measure varnish requests in the upload cache in eqiad we'd have a metric like http_requests_total{cache="upload",site="eqiad"}.
a powerful query language
Makes it possible to ask complex questions, e.g. when debugging problems or drilling down for root cause during outages. Building on the example above, the query topk(3, sum(http_requests_total{status=~"^5"}) by (cache)) would return the top 3 caches (text/upload/misc) with the most errors (status matching the regexp "^5"); see the sketches after this list.
pull metrics from targets
Prometheus is primarily based on a pull model, in which the prometheus server has a list of targets it should scrape metrics from. The pull protocol is HTTP-based: simply put, the target returns a list of "<metric> <value>" lines (there is a sample under Exporters below). Pushing metrics is supported too, see also http://prometheus.io/docs/instrumenting/pushing/.
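
To make the query language a bit more concrete, here are a few illustrative PromQL sketches against the hypothetical http_requests_total metric from the example above (metric and label names are for illustration only, not actual production metrics):

  # per-second request rate for the upload cache in eqiad, averaged over 5 minutes
  rate(http_requests_total{cache="upload",site="eqiad"}[5m])

  # per-second request rate broken down by site
  sum(rate(http_requests_total[5m])) by (site)

  # top 3 caches by number of HTTP 5xx responses
  topk(3, sum(http_requests_total{status=~"^5"}) by (cache))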

After the Prometheus POC (see User:Filippo_Giunchedi/Prometheus_POC) has been running in Labs for some time, during FQ1 2016-2017 we'll be extending the Prometheus deployment to production, as outlined in the Technical Operations goals.

Architecture

Each prometheus server is configured to scrape a list of targets (i.e. HTTP endpoints) at a certain frequency, in our case starting at 60s. All metrics are stored on the local disk, with a per-server retention period (a minimum of four months for the initial goal).

All targets to be scraped are grouped into jobs, according to the purpose those targets serve. For example, the job to scrape all host-level data for a given location using node-exporter is called node, and each target is listed as hostname:9100. Similarly there could be jobs for varnish, mysql, etc.
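
As a sketch (hostnames are hypothetical, not our actual configuration), the node job could be expressed in prometheus.yml roughly as follows:

  global:
    scrape_interval: 60s

  scrape_configs:
    - job_name: 'node'
      static_configs:
        - targets:
          - 'host1001.eqiad.wmnet:9100'
          - 'host1002.eqiad.wmnet:9100'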

Each prometheus server is meant to be stand-alone, polling targets in the same failure domain as the server itself as appropriate (e.g. the same datacenter, the same VLAN, and so on). For example, this allows monitoring to stay local to the datacenter and avoids spotty metrics during cross-datacenter connectivity blips. (See also Federation.)

Prometheus single server.png

Exporters

The endpoint being polled by the prometheus server and answering the GET requests is typically called an exporter, e.g. the host-level metrics exporter is node-exporter.

Each exporter serves the current snapshot of metrics when polled by the prometheus server; no metric history is kept by the exporter itself. Further, the exporter usually runs on the same host as the service or host it is monitoring.
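
For illustration, an abbreviated sketch of the kind of plain-text output node-exporter returns when scraped (by convention served on /metrics; the values below are made up):

  # HELP node_load1 1m load average.
  # TYPE node_load1 gauge
  node_load1 0.21
  # HELP node_memory_MemFree Memory information field MemFree.
  # TYPE node_memory_MemFree gauge
  node_memory_MemFree 8232038400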

Storage

Why just stand-alone prometheus servers with local storage, and not clustered storage? The idea behind a single prometheus server is one of reliability: a monitoring system must be more reliable than the systems it is monitoring. It is certainly easier to get local storage right and reliable than clustered storage, which is especially important when collecting operational metrics.

See also the prometheus storage documentation for a more in-depth explanation and for storage space requirements.
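
As a rough sketch (the storage path is hypothetical), the local storage location and retention period are controlled with command-line flags; the four-month retention from the initial goal translates to roughly 2880h:

  prometheus -storage.local.path=/srv/prometheus/data -storage.local.retention=2880h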

High availability

With local storage as the basic building block we can still achieve high availability: run more than one server in parallel, each configured the same and polling the same set of targets. Queries for data can then be routed via LVS in an active/standby fashion.

Prometheus HA server.png

Backups

For efficiency reasons, prometheus spools chunks of datapoints in memory for each metric before flushing them to disk. This makes it harder to perform backups online by simply copying the files on disk. The issue of taking consistent backups is also discussed in prometheus #651.

Notwithstanding the above, it should be possible to back up the prometheus local storage files as-is by archiving the storage directory with tar before regular (bacula) backups run. Since the backup is taken online it will contain some inconsistencies; upon restoring such a backup, Prometheus will crash-recover its storage at startup.

To back up a consistent/clean state, prometheus currently needs to be shut down gracefully; therefore, when running an active/standby configuration, backups can be taken on the standby prometheus to minimize their impact. Note that the shutdown will result in gaps in the standby prometheus server's metrics for the duration of the shutdown.
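
A minimal sketch of such a backup, assuming an active/standby pair (paths and service names below are hypothetical):

  # on the standby prometheus server, to get a consistent snapshot
  service prometheus stop
  tar -czf /srv/backups/prometheus-data.tar.gz -C /srv/prometheus data
  service prometheus start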

Failure recovery

In the event of a prometheus server ending up with unusable local storage (failed disk, failed filesystem, corruption, etc.), failure recovery can take the form of:

  • start with empty storage: this of course means a complete loss of metric history for the local server, which will only fully recover once the metric retention period has passed.
  • recover from backups: restore the storage directory from the last good backup.
  • copy data from a similar server: when deployed in pairs it is possible to copy/rsync the storage directory onto the failed server, though this will likely result in gaps in the recent history (see also Backups).

Federation

Each prometheus server is able to act as a target for another prometheus server by means of federation. Our use case for this feature is primarily hierarchical federation: a 'global' prometheus that aggregates datacenter-level metrics from the prometheus server in each datacenter.
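
A sketch of what the 'global' server's federation scrape could look like (hostnames and the match[] selector are illustrative, not our actual configuration):

  scrape_configs:
    - job_name: 'federate'
      honor_labels: true
      metrics_path: '/federate'
      params:
        'match[]':
          - '{job="node"}'
      static_configs:
        - targets:
          - 'prometheus1001.eqiad.wmnet:9090'
          - 'prometheus2001.codfw.wmnet:9090'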

See also the federation documentation.

Service Discovery

Prometheus supports different kinds of discovery through its configuration. For example, role::prometheus::labs_project implements auto-discovery of all instances for a given labs project: file_sd_config is used to continuously monitor a set of configuration files for changes, and the script prometheus-labs-targets is run periodically to write the list of instances to the relevant configuration file. The file_sd files are reloaded automatically by prometheus, so new instances will be auto-discovered and have their instance-level metrics collected.
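
For illustration (file paths and instance names are hypothetical), the corresponding scrape configuration and a targets file of the kind prometheus-labs-targets might write could look like:

  scrape_configs:
    - job_name: 'node'
      file_sd_configs:
        - files:
          - '/srv/prometheus/targets/node_*.json'

  # /srv/prometheus/targets/node_myproject.json (illustrative)
  [
    {
      "targets": ["instance-01.myproject.eqiad.wmflabs:9100",
                  "instance-02.myproject.eqiad.wmflabs:9100"],
      "labels": {"project": "myproject"}
    }
  ]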

While file-based service discovery works, Prometheus also supports higher-level discovery, for example for Kubernetes (see also role::prometheus::tools).