Incidents/2021-10-25 s3 db recentchanges replica
document status: in-review
Summary and Metadata
|Incident ID||2021-10-25 s3 db recentchanges replica||UTC Start Timestamp||YYYY-MM-DD hh:mm:ss|
|Incident Task||Phabricator Link||UTC End Timestamp||YYYY-MM-DD hh:mm:ss|
|People Paged||<amount of people>||Responder Count||<amount of people>|
|Coordinator(s)||Names - Emails||Relevant Metrics / SLO(s) affected||Relevant metrics, % error budget|
The s3 replica (db1112.eqiad.wmnet) that handles recentchanges/watchlist/contributions queries went down, triggering an Icinga alert for the host being down and, a few minutes later, an alert for increased appserver latency on GET requests. Because db1112 is also the s3 sanitarium master, there was confusion over its role, and responders did not initially recognize the severity of the outage. Only while investigating the latency alerts was it discovered that the database server was down, after which it was depooled and restarted via mgmt. A page was sent out only once the host came back up. The incident was resolved by pooling a different s3 replica in its place.
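The depool step described above is typically performed with dbctl, WMF's conftool-based database configuration CLI; a minimal sketch follows (the commit message is illustrative, not the one actually used during the incident):

```shell
# Remove the failed replica from the active MediaWiki DB configuration
# (dbctl is WMF's conftool frontend for database pooling).
dbctl instance db1112 depool

# Commit the change so MediaWiki stops routing queries to the host.
dbctl config commit -m "Depool db1112: host down (T294490)"
```

Once the host was healthy again, the reverse (`pool` then `commit`) would restore it, or, as happened here, a different s3 replica can be pooled in its place.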
s3 replication to WMCS wikireplicas was broken until it was restarted at 2021-10-26 09:15. s3 is the default database section for smaller wikis, which currently accounts for 92% of wikis (905/981 wikis).
- For ~30min (from 18:25 until 19:06) average HTTP GET latency for mediawiki backends was higher than usual.
- For ~12 hours, database replicas of many wikis were stale for Wikimedia Cloud Services such as Toolforge.
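Staleness of the kind listed above can be confirmed on a replica host using standard MariaDB replication status output; a hedged example (credentials and host selection omitted):

```shell
# On the affected replica: while replication is broken, Slave_IO_Running or
# Slave_SQL_Running shows "No" and Seconds_Behind_Master grows or reads NULL.
mysql -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running|Seconds_Behind_Master"
```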
|Incident Engagement™ ScoreCard||Score|
|People||Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt)|
|Were the people who responded prepared enough to respond effectively? (0/5pt)|
|Did fewer than 5 people get paged? (0/5pt)|
|Were pages routed to the correct sub-team(s)?|
|Were pages routed to online (working hours) engineers (0/5pt)? (score 0 if people were paged after-hours)|
|Process||Was the incident status section actively updated during the incident? (0/1pt)|
|If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt)|
|Is there a phabricator task for the incident? (0/1pt)|
|Are the documented action items assigned? (0/1pt)|
|Is this a repeat of an earlier incident (-1 per prev occurrence)|
|Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1p per task)|
|Tooling||Did the people responding have trouble communicating effectively during the incident due to the existing or lack of tooling? (0/5pt)|
|Did existing monitoring notify the initial responders? (1pt)|
|Were all engineering tools required available and in service? (0/5pt)|
|Was there a runbook for all known issues present? (0/5pt)|
- T294490: db1112 being down did not trigger any paging alert until the host was brought back up. We page for replication lag but not for the host itself being down; Marostegui said that for DB hosts we should start paging on HOST DOWN, which we normally don't do. This would require a Puppet change.
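For context on that proposed Puppet change: in Icinga, paging on host down generally means routing the host-down notification to an SMS-capable contact group. An illustrative raw Icinga host stanza is shown below; this is not WMF's actual Puppet-generated configuration, and the `sms` contact group name is hypothetical:

```
define host {
    host_name              db1112
    address                db1112.eqiad.wmnet
    ; 'sms' is a hypothetical paging contact group for illustration
    contact_groups         admins,sms
    notifications_enabled  1
}
```

In the WMF setup such stanzas are generated by Puppet, which is why enabling host-down paging for DB hosts requires a Puppet change rather than a manual Icinga edit.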