Incidents/2022-08-16 Beta Cluster 502
document status: draft
|Incident ID||2022-08-16 Beta Cluster 502||Start||2022-08-16 17:57:00|
|People paged||0||Responder count||8|
|Coordinators||TheresNoTime||Affected metrics/SLOs||beta has a best-effort SLA|
|Impact||All sites within the Beta Cluster were unavailable, which affected CI workflows|
After an inadvertent restart of some WMCS cloudvirts and their associated VMs, all sites within the Beta Cluster (e.g. https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page) failed to load with "Error: 502, Next Hop Connection Failed". The errors persisted after the relevant VMs were restarted.
Drafting: the proximate cause was possibly an apache config/puppet failure (https://phabricator.wikimedia.org/T315350#8159826); restarting trafficserver appears to have fixed it (https://phabricator.wikimedia.org/T315350#8159954).
The incident was complicated by the Beta Cluster's general lack of maintenance: ongoing "normal" errors distracted responders from the actual cause.
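As an illustration for future responders (not a command run during this incident), a minimal Python sketch like the one below can help confirm that a 502 is being served by the Traffic Server edge rather than by the MediaWiki backend; the "Next Hop" string match is an assumption based on the error text above.
<syntaxhighlight lang="python">
# Minimal probe: distinguish an edge-side 502 from a backend error.
# Assumption: the ATS error body contains the "Next Hop" phrase seen above.
import urllib.error
import urllib.request

URL = "https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print("OK:", resp.status)
except urllib.error.HTTPError as e:
    body = e.read().decode("utf-8", errors="replace")
    print("HTTP", e.code, "- Server:", e.headers.get("Server"))
    if e.code == 502 and "Next Hop" in body:
        # The edge cache answered, but could not connect to its next hop,
        # pointing at the cache -> backend path rather than MediaWiki itself.
        print("Edge (trafficserver) cannot reach its backend")
</syntaxhighlight>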
Summary of what happened, in one or two paragraphs. Avoid assuming deep knowledge of the systems here, and try to differentiate between proximate causes and root causes.
Write a step-by-step outline of what happened to cause the incident, and how it was remedied. Include the lead-up to the incident, and any epilogue.
Consider including a graph of the error rate or another surrogate metric.
All times in UTC.
- 00:00 (TODO) OUTAGE BEGINS
- 00:04 (Something something)
- 00:06 (Voila) OUTAGE ENDS
- 00:15 (post-outage cleanup finished)
TODO: Clearly indicate when the user-visible outage began and ended.
- User reports
- CI errors from beta sync
Copy the relevant alerts that fired in this section.
Did the appropriate alert(s) fire? Was the alert volume manageable? Did they point to the problem with as much accuracy as possible?
TODO: If detection was human-only, an actionable should probably be to add alerting (see the sketch below).
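A minimal sketch of what such alerting could look like, polling the beta main page with stdlib Python; the threshold, poll interval, and notify() hook are illustrative assumptions, not an existing check.
<syntaxhighlight lang="python">
# Illustrative availability check for the Beta Cluster.
# Assumptions: the threshold, the interval, and the notify() destination.
import time
import urllib.request

URL = "https://meta.wikimedia.beta.wmflabs.org/wiki/Main_Page"
FAILURES_BEFORE_ALERT = 3  # assumed threshold
POLL_SECONDS = 60          # assumed interval

def probe() -> bool:
    """Return True if the beta main page answers with HTTP 200."""
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError, and socket failures
        return False

def notify(message: str) -> None:
    # Placeholder: wire this to IRC/email/paging as appropriate.
    print("ALERT:", message)

failures = 0
while True:
    if probe():
        failures = 0
    else:
        failures += 1
        if failures == FAILURES_BEFORE_ALERT:
            notify(f"{URL} failed {failures} consecutive probes")
    time.sleep(POLL_SECONDS)
</syntaxhighlight>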
What went well?
- A number of volunteers were available to triage
(Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc
What went poorly?
- Lack of fast response from people with sufficient knowledge
- Beta is already broken in weird and wonderful ways
What weaknesses did we learn about and how can we address them? (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc
Where did we get lucky?
- No deployments were affected
- Beta was not needed to test or replicate production issues during the outage.
(Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc
How many people were involved in the remediation?
- 3 Volunteers
- 2 Release Engineers
- 2 SREs (1 Search Platform, 1 ServiceOps)
- 1 WMCS admin
(Use bullet points) for example: 2 SREs and 1 software engineer troubleshooting the issue plus 1 incident commander
Links to relevant documentation
Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.
- logspam watch broken on beta (Done)
- Remove two cherry-picked reverts from deployment-puppetmaster04
- Rebase & merge or re-cherry-pick 668701 on deployment-puppetmaster04
- Replace certificate on deployment-elastic09.deployment-prep.eqiad1.wikimedia.cloud (a verification sketch follows this list)
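For the certificate actionable above, a small stdlib-Python sketch of how a replacement could be verified; port 9243 is an assumed Elasticsearch TLS port, not confirmed from the task.
<syntaxhighlight lang="python">
# Verify the certificate presented by the elastic host named above.
# Assumption: TLS is served on port 9243.
import socket
import ssl

HOST = "deployment-elastic09.deployment-prep.eqiad1.wikimedia.cloud"
PORT = 9243  # assumed

ctx = ssl.create_default_context()
try:
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("certificate OK, expires:", cert["notAfter"])
except ssl.SSLCertVerificationError as e:
    # An expired, self-signed, or mismatched certificate fails here.
    print("certificate problem:", e.verify_message)
except OSError as e:
    print("connection failed:", e)
</syntaxhighlight>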
Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.
|People||Were the people responding to this incident sufficiently different than the previous five incidents?||no|
|Were the people who responded prepared enough to respond effectively?||no|
|Were fewer than five people paged?||yes|
|Were pages routed to the correct sub-team(s)?||N/A|
|Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours.||N/A|
|Process||Was the incident status section actively updated during the incident?||no|
|Was the public status page updated?||no|
|Is there a phabricator task for the incident?||yes|
|Are the documented action items assigned?||no|
|Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?||yes|
|Tooling||To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented.||
|Were the people responding able to communicate effectively during the incident with the existing tooling?||yes|
|Did existing monitoring notify the initial responders?||no|
|Were the engineering tools that were to be used during the incident available and in service?||no|
|Were the steps taken to mitigate guided by an existing runbook?||no|
|Total score (count of all “yes” answers above)||4/12|