
Incident documentation/2021-10-25 s3 db recentchanges replica

{{irdoc|status=in-review}}
 
== Summary and Metadata ==
{| class="wikitable"
|'''Incident ID'''
|2021-10-25 s3 db recentchanges replica
| '''UTC Start Timestamp:'''
|YYYY-MM-DD hh:mm:ss
|-
|'''Incident Task'''
|Phabricator Link
| '''UTC End Timestamp'''
|YYYY-MM-DD hh:mm:ss
|-
|'''People Paged'''
|<amount of people>
|'''Responder Count'''
|<amount of people>
|-
|'''Coordinator(s)'''
|Names - Emails
|'''Relevant Metrics / SLO(s) affected'''
|Relevant metrics
% error budget
|-
|'''Summary:'''
| colspan="3" |
* For ~30min (from 18:25 until 19:06) average HTTP GET latency for mediawiki backends was higher than usual.
* For ~12 hours, database replicas of many wikis were stale for Wikimedia Cloud Services such as Toolforge.
|}
The s3 replica (db1112.eqiad.wmnet) that handles recentchanges/watchlist/contributions queries went down, triggering an Icinga alert for the host being down and, a few minutes later, an alert for increased appserver latency on GET requests. Because db1112 is also the s3 sanitarium master, there was confusion about its role and the severity was not recognized at first. Only while investigating the latency alerts was it realized that the database server was down, at which point it was depooled and restarted via its management (mgmt) interface. Once the host came back up, a page was sent out. The incident was resolved by pooling a different s3 replica in its place.
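
For context on the recovery step, pooling state for MediaWiki database hosts is managed with <code>dbctl</code>. The sketch below is a hypothetical illustration of swapping a failed replica for a healthy one, not a record of the commands actually run during this incident; the replacement host name and commit message are placeholders, and it assumes the documented <code>dbctl instance … depool/pool</code> and <code>dbctl config commit</code> subcommands.

<syntaxhighlight lang="python">
# Hypothetical sketch: swap a failed s3 replica for a healthy one via dbctl.
# Host names and the commit message are illustrative only.
import subprocess


def run(cmd: list[str]) -> None:
    """Run a command and raise if it exits non-zero."""
    subprocess.run(cmd, check=True)


def swap_replica(failed: str, replacement: str) -> None:
    run(["dbctl", "instance", failed, "depool"])      # stop routing queries to the dead host
    run(["dbctl", "instance", replacement, "pool"])   # pool a healthy replica in its place
    run(["dbctl", "config", "commit", "-m", f"Depool {failed}, pool {replacement}"])


if __name__ == "__main__":
    swap_replica("db1112", "db1234")  # db1234 is a placeholder, not the host actually used
</syntaxhighlight>
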


s3 replication to WMCS wikireplicas was broken until it was restarted at 2021-10-26 09:15. s3 is the default database section for smaller wikis, which currently accounts for 92% of wikis (905/981 wikis).
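
The staleness was visible as replication lag on the wikireplica side. As a rough, hypothetical illustration (not the tooling used in this incident), a check like the following reports whether a MariaDB replica is replicating and how far behind it is; the host and credentials are placeholders.

<syntaxhighlight lang="python">
# Hypothetical sketch: report replication lag for a MariaDB replica.
# Host and credentials are placeholders; this is not WMF production tooling.
from typing import Optional

import pymysql


def replica_lag_seconds(host: str, user: str, password: str) -> Optional[int]:
    """Return Seconds_Behind_Master, or None if replication is stopped or unconfigured."""
    conn = pymysql.connect(host=host, user=user, password=password,
                           cursorclass=pymysql.cursors.DictCursor)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            status = cur.fetchone()
            if not status:
                return None  # replication not configured on this host
            return status["Seconds_Behind_Master"]  # None while the replication threads are stopped
    finally:
        conn.close()


if __name__ == "__main__":
    lag = replica_lag_seconds("clouddb-placeholder.example", "monitor", "secret")
    print("replication stopped or unconfigured" if lag is None else f"{lag}s behind")
</syntaxhighlight>

In this incident the fix was operational (replication was restarted at 09:15 the next day); the snippet only illustrates the lag/staleness symptom described above.
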
'''Impact''':
* For ~30min (from 18:25 until 19:06) average HTTP GET latency for mediawiki backends was higher than usual.
* For ~12 hours, database replicas of many wikis were stale for Wikimedia Cloud Services such as Toolforge.
 
=Scorecard=
{| class="wikitable"
| colspan="2" |'''Incident Engagement™  ScoreCard'''
|'''Score'''
|-
| rowspan="5" |'''People'''
|Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt)
|
|-
|Were the people who responded prepared enough to respond effectively? (0/5pt)
|
|-
|Did fewer than 5 people get paged (0/5pt)?
|
|-
|Were pages routed to the correct sub-team(s)?
|
|-
|Were pages routed to online (working hours) engineers (0/5pt)? (score 0 if people were paged after-hours)
|
|-
| rowspan="6" |'''Process'''
|Was the incident status section actively updated during the incident? (0/1pt)
|
|-
|If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt)
|
|-
|Is there a phabricator task for the incident? (0/1pt)
|
|-
|Are the documented action items assigned?  (0/1pt)
|
|-
|Is this a repeat of an earlier incident? (-1 per previous occurrence)
|
|-
|Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1p per task)
|
|-
| rowspan="4" |'''Tooling'''
|Did the people responding have trouble communicating effectively during the incident due to the existing or lack of tooling? (0/5pt)
|
|-
|Did existing monitoring notify the initial responders? (1pt)
|
|-
|Were all engineering tools required available and in service? (0/5pt)
|
|-
|Was there a runbook for all known issues present? (0/5pt)
|
|-
| colspan="2" |'''Total Score'''
|
|}
== Actionables ==
* [[phab:T294490|T294490]]: db1112 being down did not trigger any paging alert until the host was brought back up. We page on replication lag but not on host down; Marostegui suggested that for DB hosts we should start paging on host down, which we normally don't do and which would require a Puppet change (a rough illustration of such a check follows below).
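
The actual change would live in Puppet/Icinga configuration rather than standalone code, but as a minimal, hypothetical illustration of what "page on HOST down" means for a DB host (the <code>notify_pager</code> function is a stand-in for the real paging pipeline):

<syntaxhighlight lang="python">
# Minimal, hypothetical illustration of a "page on host down" probe for a DB host.
# This is not the actual Icinga/Puppet check; the real host check is ICMP-based,
# while a TCP probe is used here to stay dependency-free.
import socket


def mariadb_reachable(host: str, port: int = 3306, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the MariaDB port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def notify_pager(message: str) -> None:
    print(f"PAGE: {message}")  # placeholder for the real alerting integration


if __name__ == "__main__":
    host = "db1112.eqiad.wmnet"
    if not mariadb_reachable(host):
        notify_pager(f"{host} is down (MariaDB port unreachable)")
</syntaxhighlight>
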
