Incidents/2022-05-20 Database slow

document status: final

Summary

Incident metadata (see Incident Scorecard)
Incident ID: 2022-05-20 Database slow
Task: T308380
Start: 2022-05-20 09:35:00
End: 2022-05-20 09:35:00
People paged: 26
Responder count: 10
Coordinators: jwodstrcil
Affected metrics/SLOs:
Impact: Two occurrences of impact on uncached traffic (high latency, unavailability) related to application server worker thread exhaustion caused by slow database response.

On 2022-05-14 at 08:18 UTC there was a 3-minute impact on uncached traffic (high latency, unavailability) related to application server worker thread exhaustion caused by slow database response. There was no clear root cause at the time. The incident occurred again on the same database host on 2022-05-20 at 09:35 UTC, this time lasting for 5 minutes. After further investigation, the likely root cause is a MariaDB 10.6 performance regression under load, researched further in https://phabricator.wikimedia.org/T311106.
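
The symptom described above (slow queries piling up until application server workers are exhausted) typically shows up on the database side as a growing number of concurrently running threads. The sketch below is purely illustrative and is not the monitoring actually used during this incident; the host name, credentials, and the 100-thread alert threshold are assumptions. It polls MariaDB's Threads_running status counter with the pymysql client library.

 # Illustrative sketch only; host, credentials and threshold are assumptions,
 # not the alerting actually in place for this incident.
 import time
 import pymysql
 conn = pymysql.connect(host="db-replica.example.org", user="monitor", password="secret")
 try:
     with conn.cursor() as cur:
         while True:
             # Threads_running counts client threads that are not sleeping;
             # a sustained spike suggests queries are piling up.
             cur.execute("SHOW GLOBAL STATUS LIKE 'Threads_running'")
             _, running = cur.fetchone()
             if int(running) > 100:  # assumed alert threshold
                 print(f"WARNING: {running} threads running")
             time.sleep(10)
 finally:
     conn.close()

A sustained rise in this counter on a single host, as seen in both occurrences here, would point at the database rather than the application servers as the origin of the latency.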

Documentation:

Actionables

Scorecard

Incident Engagement™ ScoreCard (each question is answered yes or no; no notes were recorded)

People
Were the people responding to this incident sufficiently different than the previous five incidents? no
Were the people who responded prepared enough to respond effectively? yes
Were fewer than five people paged? no
Were pages routed to the correct sub-team(s)? no
Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours. yes

Process
Was the incident status section actively updated during the incident? no
Was the public status page updated? yes
Is there a phabricator task for the incident? yes
Are the documented action items assigned? yes
Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence? yes

Tooling
To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented. yes
Were the people responding able to communicate effectively during the incident with the existing tooling? yes
Did existing monitoring notify the initial responders? yes
Were the engineering tools that were to be used during the incident available and in service? yes
Were the steps taken to mitigate guided by an existing runbook? yes

Total score (count of all “yes” answers above): 11