Incidents/2022-03-04 esams availability banner sampling

Revision as of 16:29, 21 April 2022 by Krinkle

document status: draft

Summary

Incident metadata (see Incident Scorecard)
Incident ID: 2022-03-04 esams availability banner sampling
Task: T303036
Start: 2022-03-04 09:18:00
End: 2022-03-04 10:47:53
People paged: 25
Responder count: 10
Coordinators: jcrespo
Affected metrics/SLOs: Varnish uptime, general site availability
Impact: For 1.5 hours, wikis were largely unreachable from Europe (via Esams), with shorter and more limited impact across the globe via other data centers.

A banner deployed via CentralNotice was both enabled for all users and configured with a 100% sampling rate for its event instrumentation.

This caused instability at the outer traffic layer. Each incoming event beacon had to be handed off to a backend service (eventgate-analytics-external); the large volume of beacon requests caused connections to pile up, leaving Varnish unable to serve this and other traffic, which made wikis unreachable in the affected regions. The impact initially hit clients of the Esams datacenter (mostly Europe, the Middle East, and Africa), with temporary issues in other datacenters (Eqiad) as well when we first attempted to reroute traffic there.
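The interaction between banner targeting and instrumentation sampling can be sketched as follows. This is a hypothetical illustration only; the field names and client logic here are assumptions and do not reflect the actual CentralNotice or EventLogging code:

```python
import random

# Hypothetical settings; the real CentralNotice campaign config differs.
BANNER_ENABLED_FOR_ALL_USERS = True
EVENT_SAMPLE_RATE = 1.0  # 100% sampling: every impression sends a beacon

def should_send_beacon(sample_rate: float) -> bool:
    """Client-side sampling gate: True for roughly a `sample_rate`
    fraction of clients (random.random() is uniform in [0, 1))."""
    return random.random() < sample_rate

# At a 100% sample rate, every banner impression produces an event beacon,
# so beacon traffic scales 1:1 with pageviews instead of a small fraction.
impressions = 10_000  # illustrative number, not actual traffic
beacons = sum(should_send_beacon(EVENT_SAMPLE_RATE) for _ in range(impressions))
assert beacons == impressions  # at 1.0, nothing is filtered out
```

With either the banner restricted to a subset of users or a small sampling rate, beacon volume stays a small fraction of pageviews; enabling both "all users" and 100% sampling removed both safeguards at once.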

File:Varnish traffic 2022-03-04 8-12AM.png
Varnish traffic 08:00-12:00
File:Navtiming pageviews 2022-03-04.png
Impacted pageviews by continent.

Documentation:

Actionables

Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.

  • To do #1 (TODO: Create task)
  • To do #2 (TODO: Create task)

TODO: Add the #Sustainability (Incident Followup) and the #SRE-OnFIRE (Pending Review & Scorecard) Phabricator tag to these tasks.

Scorecard

Incident Engagement™ ScoreCard
Rubric Question Score
People Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt)
Were the people who responded prepared enough to respond effectively? (0/5pt)
Did fewer than 5 people get paged? (0/5pt)
Were pages routed to the correct sub-team(s)?
Were pages routed to online (working hours) engineers? (0/5pt) (score 0 if people were paged after-hours)
Process Was the incident status section actively updated during the incident? (0/1pt)
If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt)
Is there a phabricator task for the incident? (0/1pt)
Are the documented action items assigned? (0/1pt)
Is this a repeat of an earlier incident? (-1 per previous occurrence)
Is there an open task that would have prevented this incident or made mitigation easier if implemented? (0/-1pt per task)
Tooling Did the people responding have trouble communicating effectively during the incident due to the existing or lack of tooling? (0/5pt)
Did existing monitoring notify the initial responders? (1pt)
Were all engineering tools required available and in service? (0/5pt)
Was there a runbook for all known issues present? (0/5pt)
Total score