Incidents/2022-09-08 codfw appservers degradation
document status: draft
|Incident ID||2022-09-08 codfw appservers degradation||Start||2022-09-08 15:18:18|
|People paged||1||Responder count||3|
|Coordinators||claime||Affected metrics/SLOs||Response time and 5xx rate|
|Impact||For 2 minutes, api-https, api_appserver and appserver in codfw were in a degraded state. For 16 minutes, parsoid in codfw was in a degraded state.|
An nginx server restart (the root cause) triggered an etcdmirror outage during a wikimedia-config deployment. As a result, php-fpm could not contact its configuration server and failed to restart for the deployment. The affected appservers were depooled because of the failure, until pybal's depool protection kicked in. When etcdmirror was restarted to resolve the issue, it synchronized the configuration state containing the depooled servers, which triggered the depooling of 50% of the codfw api-https, api_appserver, appserver, and parsoid servers.
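The pool state that revealed the mass depool is published per datacenter and service at config-master (the same URL _joe_ checked in the timeline below). The following is a minimal sketch of inspecting it, assuming each backend is listed one per line with an enabled flag; the exact file format and field names are assumptions, not a documented API:

```python
#!/usr/bin/env python3
"""Rough sketch: summarise pooled vs. total backends for a pybal pool.

Illustrative only: it assumes the config-master pybal file lists one
server record per line with an "enabled" flag, which may not match the
exact production format.
"""
import requests

POOL_URL = "https://config-master.wikimedia.org/pybal/codfw/api-https"


def pool_summary(url: str = POOL_URL) -> tuple[int, int]:
    """Return (pooled, total) backend counts for the given pool file."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    lines = [line for line in resp.text.splitlines() if line.strip()]
    # Normalise quoting so both JSON-style and Python-literal lines match.
    pooled = sum(
        1 for line in lines
        if '"enabled": true' in line.lower().replace("'", '"')
    )
    return pooled, len(lines)


if __name__ == "__main__":
    pooled, total = pool_summary()
    print(f"{pooled}/{total} backends pooled")
```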
Timeline
All times in UTC.
- 15:17:21 : moritzm updates nginx-light on conf1009; the update triggers a daemon restart (port 4001 on the conf* hosts serves the etcd tlsproxy, which is accessed by etcdmirror)
- 15:17:24 : conf2005 systemd: etcdmirror-conftool-eqiad-wmnet.service crashes
- 15:18:18 : claime launches scap sync-file and notices errors
- 15:22:02: irc alert | PROBLEM - etcdmirror-conftool-eqiad-wmnet service on conf2005 is CRITICAL: CRITICAL - Expecting active but unit etcdmirror-conftool-eqiad-wmnet is failed
- 15:28:36 : jayme notices issues with conf2005/etcdmirror
- 15:33:29 : akosiaris restarts etcdmirror
- 15:34:00~: User-visible degradation begins
- 15:34:42 : _joe_ notices the depools at https://config-master.wikimedia.org/pybal/codfw/api-https
- 15:35:36 : _joe_ repools api-https
- 15:36:32 : _joe_ repools api_appserver
- 15:36:42 : _joe_ repools appserver
- 15:36:42~: User-visible degradation ends
- 15:50:32 : claime repools parsoid
Detection
claime reports errors during scap sync-file; jayme picks up on etcdmirror on conf2005 being in a CRITICAL state.
15:22:02 +icinga-wm | PROBLEM - etcdmirror-conftool-eqiad-wmnet service on conf2005 is CRITICAL: CRITICAL - Expecting active but unit etcdmirror-conftool-eqiad-wmnet is failed
The etcdmirror alert fired on IRC, but no page went out.
Conclusions
What went well?
- The root cause was identified very quickly, thanks to akosiaris, _joe_, moritzm and jayme being around and correlating events
What went poorly?
- scap continues a deployment even in the face of rising failure rates once the canaries have passed
- scap doesn't check the status of etcdmirror before deployment
- there is no documentation on what to do with a scap deployment in the face of rising failure rates
- the etcdmirror alert didn't page
Where did we get lucky?
- _joe_ thought to check the pybal config after the etcdmirror restart, which allowed a very rapid response to the depooling
- pybal depooling protection kicked in, limiting how many servers could be depooled (see the sketch after this list)
- 5 people were online to assist and quickly converged on the root cause
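To make the depool protection point concrete, here is a simplified sketch of the threshold idea; this is not PyBal's actual code, and the 0.5 threshold is an assumption chosen to match the roughly 50% of codfw servers that ended up depooled in this incident.

```python
# Simplified illustration of a depool-threshold guard (not PyBal's real
# implementation): with a threshold of 0.5, a depool request is refused
# once it would leave fewer than half of the pool's servers pooled.

def can_depool(total_servers: int, currently_pooled: int, depool_threshold: float = 0.5) -> bool:
    """Allow a depool only if enough servers would remain pooled afterwards."""
    remaining_after_depool = currently_pooled - 1
    return remaining_after_depool >= total_servers * depool_threshold


# With 10 servers and 4 already depooled, one more depool is allowed;
# with 5 already depooled, further depools are refused, capping the
# damage at roughly 50% of the pool as seen in this incident.
assert can_depool(total_servers=10, currently_pooled=6) is True
assert can_depool(total_servers=10, currently_pooled=5) is False
```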
Links to relevant documentation
- Page on etcdmirror alert
- Add etcdmirror connection retry on etcd-tls-proxy unavailability
- Update Etcd/Main cluster#Replication with safe restart conditions and information
- Add etcdmirror status check to scap (see the sketch after this list)
- Add failure rate triggered rollback to scap
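As a rough illustration of the "Add etcdmirror status check to scap" item above: a pre-deployment guard could verify that the etcdmirror systemd unit is active on the replica host before syncing. This is a sketch only, not scap code; the ssh-based check and the conf2005 FQDN are assumptions.

```python
#!/usr/bin/env python3
"""Sketch of a pre-deployment etcdmirror health check (not actual scap code)."""
import subprocess
import sys

# Assumption: the replica host and unit name involved in this incident.
ETCDMIRROR_HOST = "conf2005.codfw.wmnet"
ETCDMIRROR_UNIT = "etcdmirror-conftool-eqiad-wmnet"


def etcdmirror_is_active(host: str = ETCDMIRROR_HOST, unit: str = ETCDMIRROR_UNIT) -> bool:
    """Return True if the etcdmirror systemd unit reports 'active' over ssh."""
    result = subprocess.run(
        ["ssh", host, "systemctl", "is-active", unit],
        capture_output=True,
        text=True,
        timeout=30,
    )
    return result.stdout.strip() == "active"


if __name__ == "__main__":
    if not etcdmirror_is_active():
        print("etcdmirror is not active; refusing to deploy", file=sys.stderr)
        sys.exit(1)
    print("etcdmirror is healthy; safe to proceed with the sync")
```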
|People||Were the people responding to this incident sufficiently different than the previous five incidents?||no|
|Were the people who responded prepared enough to respond effectively?||yes||got lucky|
|Were fewer than five people paged?||no||no pages -- but we wanted one|
|Were pages routed to the correct sub-team(s)?||no|
|Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours.||no||no pages|
|Process||Was the incident status section actively updated during the incident?||no|
|Was the public status page updated?||no||not warranted|
|Is there a phabricator task for the incident?||yes|
|Are the documented action items assigned?||no|
|Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?||yes|
|Tooling||To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented.||yes|
|Were the people responding able to communicate effectively during the incident with the existing tooling?||yes|
|Did existing monitoring notify the initial responders?||no||irc alert only|
|Were the engineering tools that were to be used during the incident, available and in service?||yes|
|Were the steps taken to mitigate guided by an existing runbook?||no||etcdmirror documentation is spooky|
|Total score (count of all “yes” answers above)||6|