Difference between revisions of "Portal:Cloud VPS/Admin/Monitoring"

imported>BryanDavis
(→‎Managing notifications: update instance name)
imported>Majavah
(→‎Adding new projects: update to reflect reality)


=== Adding new projects ===
The monitoring configuration is mostly kept in a Trove database. There is no interface for more user-friendly management yet, but for now you can ssh to <code>metricsinfra-controller-1.metricsinfra.eqiad1.wikimedia.cloud</code> and use <code>sudo -i mariadb</code> to edit the database by hand.

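As a sketch (prompts abbreviated; the database schema is not documented here, so inspect it with standard statements such as <code>SHOW TABLES;</code> before changing anything by hand):

<syntaxhighlight lang="shell-session">
laptop:~$ ssh metricsinfra-controller-1.metricsinfra.eqiad1.wikimedia.cloud
metricsinfra-controller-1:~$ sudo -i mariadb
MariaDB [(none)]> SHOW DATABASES;
</syntaxhighlight>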

=== Managing notifications ===
To silence existing or expected (downtime) notifications you can use the <code>amtool</code> command on the metricsinfra Prometheus server (<code>prometheus01.metricsinfra.eqiad1.wikimedia.cloud</code>).
{{tracked|T285055}}
 
Some hardcoded accounts (WMCS staff and some trusted volunteers) can use [https://prometheus-alerts.wmcloud.org prometheus-alerts.wmcloud.org] to create and edit silences. In the future the same interface will work for all project administrators for their own projects.
View active notifications
<syntaxhighlight lang="shell-session">
prometheus01:~$ sudo amtool alert
Alertname                      Starts At                Summary 
InstanceDown                  2020-04-29 19:10:26 UTC         
PuppetAgentFailures            2020-04-29 19:20:26 UTC         
WidespreadPuppetAgentFailures  2020-04-29 19:20:26 UTC 
</syntaxhighlight>
 
You can add a <code>query</code> to filter alerts
<syntaxhighlight lang="shell-session">
prometheus01:~$ sudo amtool alert query project=tools
Alertname            Starts At                Summary 
PuppetAgentDisabled  2020-04-29 23:12:26 UTC         
PuppetAgentDisabled  2020-04-29 23:12:26 UTC         
</syntaxhighlight>
 
You can use the same query syntax to silence notifications
<syntaxhighlight lang="shell-session">
prometheus01:~$ sudo amtool silence add project=tools -c "Silence all tools projects alerts" -d 30d
3e68bf51-63f6-4406-a009-e6765acf5d8e
</syntaxhighlight>
 
View all silences
<syntaxhighlight lang="shell-session">
prometheus01:~$ sudo amtool silence query
ID                                    Matchers      Ends At                  Created By  Comment                           
3e68bf51-63f6-4406-a009-e6765acf5d8e  project=tools  2020-06-04 14:39:38 UTC  root        Silence all tools projects alerts
</syntaxhighlight>
 
Expire (remove) a silence
<syntaxhighlight lang="shell-session">
prometheus01:~$ sudo amtool silence expire 3e68bf51-63f6-4406-a009-e6765acf5d8e
</syntaxhighlight>


=== Links ===

Revision as of 19:20, 2 August 2021

This page describes how monitoring works as deployed and managed by the WMCS team, for both Cloud VPS and Toolforge.

Deployment

There are 2 physical servers:

Both servers have the puppet role role::wmcs::monitoring (modules/role/manifests/wmcs/monitoring.pp) applied, which prepares them to collect metrics using a software stack composed of Carbon, Graphite, Prometheus and friends.

Although ideally both servers would collect and serve metrics at the same time in a cluster arrangement, right now only the master actually works. The cold standby fetches metrics from the master using rsync (/srv/carbon/whisper/), so in case of a failover we could rebuild the service without much metrics loss.
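The standby sync boils down to an rsync of the Whisper tree; a minimal sketch, where <master> is a placeholder for the master's hostname (the real flags and schedule live in puppet):

rsync -a <master>:/srv/carbon/whisper/ /srv/carbon/whisper/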

These bits are located at modules/profile/manifests/wmcs/monitoring.pp.

Grafana-labs / Graphite-labs

The DNS records grafana-labs.discovery.wmnet and graphite-labs.discovery.wmnet define the active web server serving requests. This entry is managed in the DNS git repo at /dns/browse/master/templates/wmnet and configured on trafficserver in hieradata/common/profile/trafficserver/backend.yaml.
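As a quick sanity check (not an official procedure), you can see which host a discovery record currently points at by resolving it from a machine that can reach the internal resolver:

dig +short grafana-labs.discovery.wmnet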

Accessing "labs" prometheus

Our monitoring for physical servers is a mix of production Prometheus/Thanos and the Prometheus setup on the cloudmetrics100x servers. These are mentioned in https://grafana.wikimedia.org as "eqiad prometheus/labs". To access the servers directly in order to troubleshoot what the scrapes are coming up with and more quickly construct queries, you can set up an ssh proxy like so:

ssh -L 8000:prometheus-labmon.eqiad.wmnet:80 cloudmetrics1001.eqiad.wmnet

Then point your web browser at http://localhost:8000/labs to bring up the Prometheus web interface. You can then construct and execute PromQL queries as needed per the upstream docs. Note that sometimes a copied Grafana query will not work because it contains a Grafana variable; watch out for things with a "$name" format, since that's not valid PromQL.
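The same tunnel also exposes the standard Prometheus HTTP API, so you can query from the command line instead of the web UI; assuming the ssh tunnel above is running:

curl -s 'http://localhost:8000/labs/api/v1/query?query=up'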


Metrics Retention

Our metrics retention policy is 90 days. Two cron jobs run as the _graphite user on labmon1001 for this task:

  • archive-deleted-instances: Moves data from deleted instances to /srv/carbon/whisper/archived_metrics
  • delete-old-instance-archives: Deletes archived data that is older than 90 days

This prevents the /srv partition from becoming full.

The archive-instances script logs operations to /var/log/graphite/instance-archiver.log
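The pruning logic can be illustrated with a self-contained sketch. This is not the actual delete-old-instance-archives script, just the same find(1) retention idiom applied to a temporary directory (GNU touch/find assumed):

```shell
# Illustration only: emulate a 90-day retention sweep in a temp directory.
archive_dir=$(mktemp -d)
# One file well past the retention window, one fresh file.
touch -d '100 days ago' "$archive_dir/old-instance.wsp"
touch "$archive_dir/fresh-instance.wsp"
# Delete regular files last modified more than 90 days ago.
find "$archive_dir" -type f -mtime +90 -delete
ls "$archive_dir"    # only fresh-instance.wsp remains
```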

Monitoring for Cloud VPS

The Cloud VPS project "metricsinfra" provides the base infrastructure and services for multi-tenant instance monitoring on Cloud VPS.


Monitoring for Toolforge

There are metrics for every node in the Toolforge cluster.

Dashboards and handy links

If you want to get an overview of what's going on in the Cloud VPS infra, open these links:

{| class="wikitable"
! Datacenter !! What !! Mechanism !! Comments !! Link
|-
| eqiad || NFS servers || icinga || labstore1xxx servers || [1]
|-
| eqiad || NFS Server Statistics || grafana || labstore and cloudstore NFS operations, connections and various details || [2]
|-
| eqiad || Cloud VPS main services || icinga || service servers, non virts || [3]
|-
| codfw || Cloud VPS labtest servers || icinga || all physical servers || [4]
|-
| eqiad || Toolforge basic alerts || grafana || some interesting metrics from Toolforge || [5]
|-
| eqiad || ToolsDB (Toolforge R/W MariaDB) || grafana || Database metrics for ToolsDB servers || [6]
|-
| eqiad || Toolforge grid status || custom tool || jobs running on Toolforge's grid || [7]
|-
| any || cloud servers || icinga || all physical servers with the cloudXXXX naming scheme || [8]
|-
| eqiad || Cloud VPS eqiad1 capacity || grafana || capacity planning || [9]
|-
| eqiad || labstore1004/labstore1005 || grafana || load & general metrics || [10]
|-
| eqiad || Cloud VPS eqiad1 || grafana || load & general metrics || [11]
|-
| eqiad || Cloud VPS eqiad1 || grafana || internal openstack metrics || [12]
|-
| eqiad || Cloud VPS eqiad1 || grafana || hypervisor metrics from openstack || [13]
|-
| eqiad || Cloud VPS memcache || grafana || cloudservices servers || [14]
|-
| eqiad || openstack database backend (per host) || grafana || mariadb/galera on cloudcontrols || [15]
|-
| eqiad || openstack database backend (aggregated) || grafana || mariadb/galera on cloudcontrols || [16]
|-
| eqiad || Toolforge || grafana || Arturo's metrics || [17]
|-
| eqiad || Cloud HW eqiad || icinga || Icinga group for WMCS in eqiad || [18]
|-
| eqiad || Toolforge, new kubernetes cluster || prometheus/grafana || Generic dashboard for the new Kubernetes cluster || [19]
|-
| eqiad || Toolforge, new kubernetes cluster, namespaces || prometheus/grafana || Per-namespace dashboard for the new Kubernetes cluster || [20]
|-
| eqiad || Toolforge, new kubernetes cluster, ingress || prometheus/grafana || dashboard about the ingress for the new kubernetes cluster || [21]
|-
| eqiad || Toolforge || prometheus/grafana || dashboard showing a table with basic information about all VMs in the tools project || [22]
|-
| eqiad || Toolforge email server || prometheus/grafana || dashboard showing data about Toolforge exim email server || [23]
|}

See also