Incidents/2022-09-08 codfw api-https api_appserver appserver parsoid degradation
document status: draft
|Incident ID||2022-09-08 codfw api-https api_appserver appserver parsoid degradation||Start||2022-09-08 15:18:18|
|People paged||1||Responder count||3|
|Coordinators||claime||Affected metrics/SLOs||Response time and 5xx rate|
|Impact||For 2 minutes, api-https, api_appserver and appserver in codfw were in a degraded state. For 16 minutes, parsoid in codfw was in a degraded state.|
An nginx server restart (the root cause) triggered an etcdmirror outage during a wikimedia-config deployment. When etcdmirror was restarted, it synchronized an incorrect state, which depooled part of the api-https, api_appserver, appserver and parsoid servers in codfw.
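For context: conftool keeps per-host pool state in etcd, and etcdmirror replicates that keyspace between datacenters, so replaying a bad state depools hosts everywhere pybal reads it. Below is a minimal sketch of listing depooled hosts over the etcd v2 keys API; the host, key prefix and value shape are assumptions for illustration, not the exact conftool schema:

```python
import json
import requests

# Hypothetical endpoint and key prefix (assumptions, not from this incident doc).
ETCD = "https://conf2005.codfw.wmnet:2379"
POOL = "/v2/keys/conftool/v1/pools/codfw/appserver/apache2"

# etcd v2 directory reads return {"node": {"nodes": [{"key": ..., "value": ...}]}}.
resp = requests.get(ETCD + POOL, params={"recursive": "true"}, timeout=5)
resp.raise_for_status()

for node in resp.json()["node"].get("nodes", []):
    state = json.loads(node["value"])
    # A host hit by the bad sync would show pooled != "yes" here.
    if state.get("pooled") != "yes":
        print(f"DEPOOLED: {node['key']} -> {state}")
```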
Write a step-by-step outline of what happened to cause the incident, and how it was remedied. Include the lead-up to the incident, and any epilogue.
Consider including a graph of the error rate or another surrogate.
All times in UTC.
- https://grafana.wikimedia.org/d/RIA1lzDZk/application-servers-red?orgId=1&var-site=codfw&var-cluster=api_appserver&var-method=GET&var-code=200&var-php_version=All&from=1662651131187&to=1662651531187
- https://grafana.wikimedia.org/d/RIA1lzDZk/application-servers-red?orgId=1&var-site=codfw&var-cluster=appserver&var-method=GET&var-code=200&var-php_version=All&from=1662651131187&to=1662651531187
- https://grafana.wikimedia.org/d/RIA1lzDZk/application-servers-red?orgId=1&var-site=codfw&var-cluster=parsoid&var-method=GET&var-code=200&var-php_version=All&from=1662651131187&to=1662651531187
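If a static graph is wanted alongside those dashboards, the underlying series can also be pulled from the Prometheus HTTP API over the incident window. A minimal sketch; the endpoint and metric selector below are generic assumptions, not the actual queries behind the dashboards:

```python
import requests

# Hypothetical Prometheus endpoint and a generic 5xx-rate selector (assumptions).
PROM = "https://prometheus.example.org"
QUERY = 'sum(rate(http_requests_total{status=~"5..", site="codfw"}[1m]))'

resp = requests.get(
    PROM + "/api/v1/query_range",
    params={
        "query": QUERY,
        "start": "2022-09-08T15:15:00Z",  # incident window, UTC
        "end": "2022-09-08T15:55:00Z",
        "step": "30s",
    },
    timeout=10,
)
resp.raise_for_status()

# query_range returns a list of series, each with [timestamp, value] pairs.
for series in resp.json()["data"]["result"]:
    for ts, value in series["values"]:
        print(ts, value)
```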
- ??? : moritzm updates nginx, causing a restart
- 15:17:24 : conf2005 systemd: etcdmirror-conftool-eqiad-wmnet.service crashes
- 15:18:18 : claime launches scap sync-file and notices errors
- 15:22:02 : IRC alert: PROBLEM - etcdmirror-conftool-eqiad-wmnet service on conf2005 is CRITICAL: CRITICAL - Expecting active but unit etcdmirror-conftool-eqiad-wmnet is failed
- 15:28:36 : jayme notices issues with conf2005/etcdmirror
- 15:33:29 : akosiaris restarts etcdmirror
- 15:34:00~: User-visible degradation begins
- 15:34:42 : _joe_ notices depooled servers at https://config-master.wikimedia.org/pybal/codfw/api-https (see the sketch after this timeline)
- 15:35:36 : _joe_ repools api-https
- 15:36:32 : _joe_ repools api_appserver
- 15:36:42 : _joe_ repools appserver
- 15:36:42~: User-visible degradation ends
- 15:50:32 : claime repools parsoid
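The repools above flip per-host state that pybal reads from config-master (the pool file _joe_ checked at 15:34:42). A minimal sketch of counting disabled backends in one of those files, assuming each non-comment line is a Python-style dict literal with host/weight/enabled keys (the exact format may differ):

```python
import ast
import requests

URL = "https://config-master.wikimedia.org/pybal/codfw/api-https"

resp = requests.get(URL, timeout=5)
resp.raise_for_status()

disabled = []
for line in resp.text.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue
    # Assumed format: {'host': 'mw2xxx.codfw.wmnet', 'weight': 10, 'enabled': True}
    entry = ast.literal_eval(line)
    if not entry.get("enabled", True):
        disabled.append(entry["host"])

print(f"{len(disabled)} disabled backends: {disabled}")
```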
Write how the issue was first detected. Was automated monitoring first to detect it, or a human reporting an error?
Detection was human: claime reported errors during a scap sync-file (15:18:18), roughly four minutes before the Icinga alert for etcdmirror fired on IRC (15:22:02).
Copy the relevant alerts that fired in this section.
15:22:02 +icinga-wm | PROBLEM - etcdmirror-conftool-eqiad-wmnet service on conf2005 is CRITICAL: CRITICAL - Expecting active but unit etcdmirror-conftool-eqiad-wmnet is failed
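For reference, the check behind that alert reduces to asking systemd for the unit's state. A minimal Nagios-style sketch of an equivalent check (the actual Icinga plugin in production likely differs):

```python
import subprocess
import sys

UNIT = "etcdmirror-conftool-eqiad-wmnet"

# `systemctl is-active` prints the unit state and exits non-zero unless active.
result = subprocess.run(
    ["systemctl", "is-active", UNIT],
    capture_output=True,
    text=True,
)
state = result.stdout.strip() or "unknown"
if state == "active":
    print(f"OK - unit {UNIT} is active")
    sys.exit(0)
print(f"CRITICAL - Expecting active but unit {UNIT} is {state}")
sys.exit(2)
```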
Did the appropriate alert(s) fire? Was the alert volume manageable? Did they point to the problem with as much accuracy as possible?
The etcdmirror alert fired on IRC, but it did not page (to be confirmed).
TODO: since detection was human-first, an actionable should probably be to add paging for etcdmirror failures.
OPTIONAL: General conclusions (bullet points or narrative)
What went well?
OPTIONAL: Use bullet points. For example: automated monitoring detected the incident, outage was root-caused quickly, etc.
What went poorly?
OPTIONAL: Use bullet points. For example: documentation on the affected service was unhelpful, communication difficulties, etc.
Where did we get lucky?
OPTIONAL: Use bullet points. For example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc.
Links to relevant documentation
Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.
Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.
|People||Were the people responding to this incident sufficiently different than the previous five incidents?|
|Were the people who responded prepared enough to respond effectively?|
|Were fewer than five people paged?|
|Were pages routed to the correct sub-team(s)?|
|Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours.|
|Process||Was the incident status section actively updated during the incident?|
|Was the public status page updated?|
|Is there a phabricator task for the incident?|
|Are the documented action items assigned?|
|Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence?|
|Tooling||To the best of your knowledge was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented.|
|Were the people responding able to communicate effectively during the incident with the existing tooling?|
|Did existing monitoring notify the initial responders?|
|Were the engineering tools that were to be used during the incident available and in service?|
|Were the steps taken to mitigate guided by an existing runbook?|
|Total score (count of all “yes” answers above)|