Incidents/2022-03-10 MediaWiki availability

document status: in-review

MediaWiki availability on all wikis for logged-in users / uncached content

{| class="wikitable"
|+ Incident metadata (see [[Incident Scorecard]])
|-
! Incident ID
| 2022-03-10 MediaWiki availability
! Start
| 2022-03-10 08:26:11
|-
! Task
| T303499
! End
| 2022-03-10 08:46:03
|-
! People paged
| 21
! Responder count
| 10
|-
! Coordinators
| Riccardo
! Affected metrics/SLOs
| API Gateway SLO
|-
! Impact
| colspan="3" | For 12 minutes, all wikis were unreachable for logged-in users and for anonymous users trying to edit or access uncached content. Two spikes: from 08:24 to 08:30 UTC, and from 08:39 to 08:45 UTC approximately.
|}
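
Note: the 12-minute impact figure is the sum of the two roughly six-minute spikes (about 08:24–08:30 and 08:39–08:45 UTC); the overall incident window above (08:26:11 to 08:46:03 UTC) spans about 20 minutes.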

The root cause appears to have been db1099, a replica database in the s8 section (Wikidata), which was rebooted for maintenance shortly before the incident and then slowly repooled into production while a file transfer from the same host was in progress over the network. The load from the repooling, although only a very small percentage of traffic, combined with the bandwidth already consumed by the file transfer, made the host slow to respond to queries, but not slow enough to be considered down and depooled automatically by other systems. This caused a cascade effect on the other databases in the same section (s8), which became overloaded, and, because practically every page render involves reads from s8, a secondary cascade effect on all wikis, exhausting workers at the application layer. To users, the outage manifested as slow or unavailable access to uncached pages and failed read-write actions.
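
To make the failure mode concrete, here is a minimal, hypothetical sketch in plain Python. It is not the actual MediaWiki load balancer, dbctl, or any Wikimedia tooling; apart from db1099 (taken from the incident), every name, weight, and threshold below is invented for illustration. It shows a health check that only depools a replica when it is unreachable or crosses a hard latency ceiling: a replica that is merely slow keeps its small share of traffic, and each slow query holds an application-layer worker for much longer, which is how a small fraction of slow traffic can exhaust the whole worker pool.

<syntaxhighlight lang="python">
# Hypothetical illustration only -- not the real MediaWiki/dbctl behaviour.
from dataclasses import dataclass


@dataclass
class Replica:
    name: str
    weight: int            # share of read traffic (a freshly repooled host gets a small weight)
    reachable: bool        # does it still answer connections at all?
    avg_latency_ms: float  # how long a typical query takes


DEPOOL_LATENCY_MS = 10_000  # only depool when queries are *this* slow


def should_depool(r: Replica) -> bool:
    # A host that is slow but still answering, below the hard ceiling, stays
    # pooled -- the state db1099 was in while saturated by the file transfer.
    return (not r.reachable) or r.avg_latency_ms > DEPOOL_LATENCY_MS


def busy_workers(replicas: list[Replica], requests_per_s: float) -> float:
    # Little's law: concurrent workers ~ arrival rate x time each request
    # holds a worker. A slow replica inflates the second factor.
    total_weight = sum(r.weight for r in replicas if not should_depool(r))
    busy = 0.0
    for r in replicas:
        if should_depool(r):
            continue
        share = requests_per_s * r.weight / total_weight
        busy += share * (r.avg_latency_ms / 1000.0)
    return busy


pool = [
    Replica("db1099", weight=5, reachable=True, avg_latency_ms=4000.0),  # slow, freshly repooled
    Replica("db1101", weight=95, reachable=True, avg_latency_ms=50.0),   # healthy (hypothetical host)
]

print(should_depool(pool[0]))      # False: slow, but "not down", so it stays pooled
print(busy_workers(pool, 2000.0))  # ~495 workers busy; ~400 of them stuck on the slow 5% slice
</syntaxhighlight>

In this toy model the slow replica receives only 5% of the requests but accounts for roughly 80% of the busy workers; with a fixed-size worker pool, the capacity left for everything else disappears, mirroring the cascade described above.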

'''Documentation''':

Actionables

Scorecard

{| class="wikitable"
|+ [[Incident Scorecard|Incident Engagement™ ScoreCard]]
|-
!
! Question
! Score
! Notes
|-
! rowspan="5" | People
| Were the people responding to this incident sufficiently different than the previous five incidents? (score 1 for yes, 0 for no)
|
|
|-
| Were the people who responded prepared enough to respond effectively? (score 1 for yes, 0 for no)
| 1
|
|-
| Were more than 5 people paged? (score 0 for yes, 1 for no)
| 0
|
|-
| Were pages routed to the correct sub-team(s)? (score 1 for yes, 0 for no)
|
|
|-
| Were pages routed to online (business hours) engineers? (score 1 for yes, 0 if people were paged after business hours)
| 1
|
|-
! rowspan="5" | Process
| Was the incident status section actively updated during the incident? (score 1 for yes, 0 for no)
| 1
|
|-
| Was the public status page updated? (score 1 for yes, 0 for no)
| 0
|
|-
| Is there a Phabricator task for the incident? (score 1 for yes, 0 for no)
| 1
|
|-
| Are the documented action items assigned? (score 1 for yes, 0 for no)
| 0
|
|-
| Is this a repeat of an earlier incident? (score 0 for yes, 1 for no)
| 0
|
|-
! rowspan="5" | Tooling
| Were there, before the incident occurred, open tasks that would prevent this incident / make mitigation easier if implemented? (score 0 for yes, 1 for no)
| 0
|
|-
| Were the people responding able to communicate effectively during the incident with the existing tooling? (score 1 for yes, 0 for no)
| 0
|
|-
| Did existing monitoring notify the initial responders? (score 1 for yes, 0 for no)
| 1
|
|-
| Were all required engineering tools available and in service? (score 1 for yes, 0 for no)
|
|
|-
| Was there a runbook for all known issues present? (score 1 for yes, 0 for no)
|
|
|-
! colspan="2" align="right" | Total score
| 5
|
|}
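
For reference, the total of 5 is the sum of the scored rows above: 1 + 0 + 1 + 1 + 0 + 1 + 0 + 0 + 0 + 0 + 1 = 5 (rows left blank are not counted).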