Incidents/2022-03-10 MediaWiki availability
document status: in-review
MediaWiki availability on all wikis for logged-in users / uncached content
Incident ID | 2022-03-10 MediaWiki availability | Start | 2022-03-10 08:26:11 |
---|---|---|---|
Task | T303499 | End | 2022-03-10 08:46:03 |
People paged | 21 | Responder count | 10 |
Coordinators | Riccardo | Affected metrics/SLOs | API Gateway SLO |
Impact | For about 12 minutes, all wikis were unreachable for logged-in users and for anonymous users trying to edit or access uncached content. The impact came in two spikes, approximately 08:24 to 08:30 UTC and 08:39 to 08:45 UTC. |
Summary | The root cause appears to have been db1099, a replica database in the s8 section (Wikidata), which had been rebooted for maintenance shortly before the incident and was being slowly repooled into production while a file transfer from the same host was in progress over the network. The combination of the repooling load, even though it was only a small percentage of traffic, and the bandwidth already consumed by the file transfer made the host slow to respond to queries, yet not slow enough to be considered down and depooled automatically by other systems. This overloaded the other databases in the same section (s8), and, because practically every page render involves reads from s8, the overload cascaded to all wikis and exhausted the workers at the application layer. To users, the outage appeared as slow or failed access to uncached pages and to read-write actions. A sketch of the gradual repooling pattern follows below the table. |
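The gradual repooling mentioned in the summary follows a common pattern: raise the replica's share of traffic in small steps and verify its health before each increase. The sketch below is a minimal, hypothetical Python illustration of that pattern, not the tooling actually used in production (pooling changes are normally made with dbctl); the host name is taken from the incident, but set_replica_weight() and replica_is_healthy() are placeholder functions introduced only for this example.

```python
# Minimal, hypothetical sketch of gradual repooling: raise the replica's
# traffic share in small steps, checking health before each increase.
# Not the actual Wikimedia tooling (pooling changes go through dbctl);
# set_replica_weight() and replica_is_healthy() are placeholders.
import time

REPOOL_STEPS = [1, 5, 10, 25, 50, 75, 100]  # percentage of normal weight
CHECK_INTERVAL = 60                          # seconds to wait between steps


def set_replica_weight(host: str, percent: int) -> None:
    """Placeholder: apply `percent` of the replica's normal query weight."""
    print(f"pooling {host} at {percent}% of its normal weight")


def replica_is_healthy(host: str) -> bool:
    """Placeholder: check replication lag and query latency on `host`.

    A host that is merely slow (for example, because a file transfer is
    saturating its network link) can pass a simple up/down check while
    still dragging down the whole database section once it gets traffic,
    so this check should look at latency, not just reachability.
    """
    return True


def gradual_repool(host: str) -> None:
    """Repool a replica in small steps, backing off if it looks unhealthy."""
    for percent in REPOOL_STEPS:
        set_replica_weight(host, percent)
        time.sleep(CHECK_INTERVAL)
        if not replica_is_healthy(host):
            # Back off immediately rather than letting load pile up.
            set_replica_weight(host, 0)
            raise RuntimeError(f"{host} unhealthy at {percent}% traffic; depooled")


if __name__ == "__main__":
    gradual_repool("db1099")
```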
Documentation:
- Dashboard with the query throughput for all databases, showing the dip in queries processed across all database sections.
- RED dashboard for the application servers (MediaWiki), showing that the problem was related to the s8 database section.
- API Gateway SLO dashboard
Actionables
- Incident tracking task
- Investigate whether stopping MySQL with a buffer pool dump is safe between 10.4 versions (see the sketch below)
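As a starting point for that investigation, the snippet below is a minimal, hypothetical Python sketch that triggers an InnoDB buffer pool dump and waits for it to report completion before the server is stopped. The innodb_buffer_pool_dump_now variable and the Innodb_buffer_pool_dump_status status counter are standard MariaDB/MySQL names; the connection parameters and the use of the pymysql library are assumptions made for the example, not part of the actual maintenance tooling.

```python
# Hedged sketch: trigger an InnoDB buffer pool dump and wait for it to
# complete before stopping MariaDB for maintenance. The connection
# details are placeholders and the pymysql dependency is an assumption.
import time

import pymysql


def dump_buffer_pool(host: str = "localhost", user: str = "root",
                     password: str = "", timeout: int = 300) -> None:
    """Request a buffer pool dump and poll until it reports completion."""
    conn = pymysql.connect(host=host, user=user, password=password)
    try:
        with conn.cursor() as cur:
            # Ask InnoDB to write the buffer pool contents to disk now.
            cur.execute("SET GLOBAL innodb_buffer_pool_dump_now = ON")
            deadline = time.time() + timeout
            while time.time() < deadline:
                cur.execute(
                    "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_dump_status'"
                )
                status = cur.fetchone()[1]
                # The status string ends with "completed at <timestamp>"
                # once the dump has finished. (Simplified: a stale status
                # from an earlier dump would also match.)
                if "completed" in status.lower():
                    print(f"buffer pool dump finished: {status}")
                    return
                time.sleep(5)
        raise TimeoutError("buffer pool dump did not complete in time")
    finally:
        conn.close()


if __name__ == "__main__":
    dump_buffer_pool()
```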
Scorecard
Rubric | Question | Score |
---|---|---|
People | Were the people responding to this incident sufficiently different from the previous N incidents? (0/5pt) | |
People | Were the people who responded prepared enough to respond effectively? (0/5pt) | 5 |
People | Did fewer than 5 people get paged? (0/5pt) | 0 |
People | Were pages routed to the correct sub-team(s)? | |
People | Were pages routed to online (working hours) engineers? (0/5pt) (score 0 if people were paged after-hours) | 5 |
Process | Was the incident status section actively updated during the incident? (0/1pt) | 1 |
Process | If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt) | 0 |
Process | Is there a phabricator task for the incident? (0/1pt) | 1 |
Process | Are the documented action items assigned? (0/1pt) | 0 |
Process | Is this a repeat of an earlier incident? (-1 per previous occurrence) | 0 |
Process | Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1pt per task) | 0 |
Tooling | Did the people responding have trouble communicating effectively during the incident due to the existing tooling or the lack of it? (0/5pt) | 0 |
Tooling | Did existing monitoring notify the initial responders? (1pt) | 1 |
Tooling | Were all required engineering tools available and in service? (0/5pt) | |
Tooling | Was there a runbook for all known issues present? (0/5pt) | |
 | Total score | |