Incidents/2022-03-31 api errors
document status: draft
Summary
| Incident ID | 2022-03-31 api errors | Start | 2022-03-31 05:18 |
|---|---|---|---|
| Task | T305119 | End | 2022-03-31 05:40 |
| People paged | 17 | Responder count | 2 |
| Coordinators | n/a | Affected metrics/SLOs | No relevant SLOs exist. The API Gateway SLO was unaffected. If the planned app server SLO existed, we would have consumed a small portion of its error budget. |
| Impact | For 22 minutes, API server and app server availability were slightly decreased (~0.1% errors, all for wikis in s7) and API server latency was elevated. | | |
After a code change [1] [2] rolled out in this week's train, GlobalUsersPager (a CentralAuth component) produced expensive DB queries that exhausted resources on s7 replicas.
Backpressure from the databases tied up PHP-FPM workers on the API servers, triggering a paging alert for worker saturation. The slow queries were identified and manually killed on the database, which resolved the incident.
Because the alert fired and the queries were killed before available workers were fully exhausted, the impact was limited to s7. Full worker saturation would have resulted in a complete API outage.
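For reference, identifying and killing the offending queries by hand looks roughly like the sketch below. This is a hypothetical illustration against MariaDB's processlist, not the exact commands or tooling used during the incident; the time threshold and query id are placeholders.

```sql
-- Hypothetical sketch: list long-running queries on an s7 replica.
-- The 60-second threshold is illustrative, not the value used in the incident.
SELECT id, user, time, LEFT(info, 120) AS query_snippet
FROM information_schema.processlist
WHERE command = 'Query'
  AND time > 60
ORDER BY time DESC;

-- Terminate one offending query by the id returned above (12345 is a placeholder).
KILL QUERY 12345;
```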
Because only two engineers responded to the page and the response only took half an hour, we decided not to designate an incident coordinator, start a status doc, and so on. We didn't need those tools to organize the response, and they would have taken time away from solving the problem.
Documentation:
- Phabricator task detailing the slow query
- API server RED dashboard showing elevated latency and errors; php-fpm workers peaking around 75% saturation; and s7 database errors
- Same dashboard for the app servers showing measurable but lesser impact
Actionables
- Revert the patches generating the slow queries - done [3] [4]
- Later (2022-04-06) it was discovered that the query killer was still matching the old 'wikiuser' account name, which prevented it from acting (see the sketch below). Fixed in [5], deploying soon.
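As a rough illustration of the failure mode (not the actual query-killer code), a killer that filters the processlist by a hard-coded account name silently does nothing once the application connects under a different user:

```sql
-- Hypothetical sketch of the selection step a query killer performs.
-- If the application no longer connects as 'wikiuser', this match returns
-- no rows, so no slow queries are ever killed.
SELECT id
FROM information_schema.processlist
WHERE user = 'wikiuser'   -- stale account name
  AND command = 'Query'
  AND time > 60;
```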
Scorecard
| Rubric | Question | Score |
|---|---|---|
| People | Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt) | |
| | Were the people who responded prepared enough to respond effectively? (0/5pt) | |
| | Did fewer than 5 people get paged? (0/5pt) | |
| | Were pages routed to the correct sub-team(s)? | |
| | Were pages routed to online (working hours) engineers? (0/5pt) (score 0 if people were paged after-hours) | |
| Process | Was the incident status section actively updated during the incident? (0/1pt) | |
| | If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt) | |
| | Is there a phabricator task for the incident? (0/1pt) | |
| | Are the documented action items assigned? (0/1pt) | |
| | Is this a repeat of an earlier incident? (-1pt per previous occurrence) | |
| | Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1pt per task) | |
| Tooling | Did the people responding have trouble communicating effectively during the incident due to the existing or lack of tooling? (0/5pt) | |
| | Did existing monitoring notify the initial responders? (1pt) | |
| | Were all engineering tools required available and in service? (0/5pt) | |
| | Was there a runbook for all known issues present? (0/5pt) | |
| Total score | | |