Incident documentation/2021-07-20 asw-a2-codfw crash

{{irdoc|status=draft}} <!--
The status field should be one of:
* {{irdoc|status=draft}} - Initial status. When you're happy with the state of your draft, change it to status=review.
* {{irdoc|status=review}} - The incident review working group will contact you then to finalise the report. See also the steps on [[Incident documentation]].
* {{irdoc|status=final}}
-->
 
== Summary ==
asw-a2-codfw, the switch handling the network traffic of rack A2 in codfw, became unresponsive, rendering 14 hosts unreachable. Besides the loss of those 14 hosts, two additional load balancers lost access to codfw's row A.
 
'''Impact''': Degraded service for RESTBase, the MediaWiki API, and the edge caching service in codfw. Two services had to be migrated to eqiad: Maps Tile Service and WikiData Query Service. High availability was lost on the high-traffic1 load balancer in codfw.
{{TOC|align=right}}
 
== Timeline ==
'''All times in UTC.'''
'''Friday, July 16th'''
* 13:16 asw-a2-codfw becomes unresponsive '''OUTAGE BEGINS'''
* 13:36 authdns2001 is depooled to restore ns1.wikimedia.org
* 14:07 Maps Tile Service is moved from codfw to eqiad
* 14:11 WikiData Query Service is pooled in eqiad
* 14:30 ports on the affected switch are marked as disabled on the asw-a-codfw virtual-chassis
* 14:37 affected network interface on lvs2010 is disabled
* 14:41 affected network interface on lvs2009 is disabled
* 15:14 affected network interface on lvs2010 is re-enabled
* 15:31 remote hands in codfw power-cycle the affected switch, without success
* 15:38 affected network interface on lvs2009 is re-enabled
* 15:48 depool threshold for the edge caching services is decreased
* 16:29 depool threshold for the MediaWiki API service is decreased
* 16:56 Error rate recovers '''OUTAGE ENDS'''
'''Monday, July 19th'''
* 08:15 text cache codfw PoP is depooled
* 17:10 the defective switch is replaced
* 17:21 authdns2001 is repooled
* 18:20 high availability restored on the high-traffic1 load balancer in codfw
* 18:35 lvs2010 recovers row A connectivity
* 18:53 lvs2009 recovers row A connectivity
* 20:29 text cache codfw PoP is repooled
'''Tuesday, July 20th'''
* 13:45 Maps Tile Service is back to being served from codfw
 
== Detection ==
The incident was detected via automated monitoring, which reported several hosts in rack A2 going down at the same time:
* <icinga-wm> PROBLEM - Host elastic2038 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host kafka-logging2001 is DOWN: PING CRITICAL - Packet loss = 100
* <icinga-wm> PROBLEM - Host ns1-v4 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host authdns2001 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host ms-be2051 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host lvs2007 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host thanos-fe2001 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host elastic2055 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host elastic2037 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host ms-fe2005 is DOWN: PING CRITICAL - Packet loss = 100%
* <icinga-wm> PROBLEM - Host ms-be2040 is DOWN: PING CRITICAL - Packet loss = 100%
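What pointed at the switch rather than at the individual machines is that every one of these hosts sits in rack A2. The sketch below illustrates that line of reasoning in Python; the hardcoded rack mapping and the threshold are illustrative assumptions, not the actual monitoring pipeline (which, as shown above, alerts per host).

<syntaxhighlight lang="python">
from collections import Counter

# Hypothetical host-to-rack mapping; in production this information lives
# in the source of truth (e.g. Netbox), not hardcoded like this.
RACK_OF = {
    "elastic2038": "A2", "kafka-logging2001": "A2", "authdns2001": "A2",
    "ms-be2051": "A2", "lvs2007": "A2", "thanos-fe2001": "A2",
    "elastic2055": "A2", "elastic2037": "A2", "ms-fe2005": "A2",
    "ms-be2040": "A2",
}

def suspect_racks(down_hosts, min_hosts=3):
    """Return racks where enough hosts went down simultaneously to suspect
    a shared component (e.g. the rack switch) rather than the hosts."""
    counts = Counter(RACK_OF.get(host, "unknown") for host in down_hosts)
    return [rack for rack, n in counts.items()
            if rack != "unknown" and n >= min_hosts]

down = ["elastic2038", "kafka-logging2001", "authdns2001", "ms-be2051", "lvs2007"]
print(suspect_racks(down))  # ['A2']
</syntaxhighlight>
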
== Conclusions ==
As a result of losing a single switch, services were affected more than expected due to several weaknesses:
* Three load balancers, including the backup one, receive row A traffic through a single network switch
* The depool threshold of several services is too restrictive for them to keep working as expected after losing a complete row (see the sketch after this list)
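To make the depool threshold point concrete, the following minimal sketch shows how a restrictive threshold blocks automatic depooling when an entire row goes down at once; the numbers (16 backends, a 0.8 threshold) are hypothetical and not taken from any production configuration.

<syntaxhighlight lang="python">
def can_depool(total_servers, already_depooled, threshold):
    """Return True if depooling one more server keeps the pooled
    fraction at or above the configured depool threshold."""
    pooled_after = total_servers - already_depooled - 1
    return pooled_after / total_servers >= threshold

# Hypothetical example: 16 backends, the 4 in one rack unreachable,
# and a depool threshold of 0.8 (80% of servers must stay pooled).
total, threshold = 16, 0.8
depooled = 0
for _dead_server in range(4):
    if can_depool(total, depooled, threshold):
        depooled += 1

# Only 3 of the 4 dead servers get depooled; the 4th stays pooled and keeps
# receiving traffic it cannot serve until the threshold is lowered by hand.
print(f"depooled {depooled} of 4 unreachable servers")  # depooled 3 of 4
</syntaxhighlight>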
 
=== What went well? ===
* Automated monitoring detected the incident
* Even though several services were affected, user-facing impact was mild.
 
=== What went poorly? ===
 
* A single switch providing row A traffic to three out of four load balancers magnified the incident unnecessarily.
* Misconfigured depool thresholds on several services made the outage longer than strictly necessary.
* The Pybal IPVS diff check failing to account for an overly restrictive depool threshold made debugging harder (see the sketch below).
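The diff check issue can be illustrated with a simplified sketch; this is not PyBal's actual implementation and the host names are made up. A naive comparison of healthy hosts against the realservers present in IPVS reports spurious differences whenever the depool threshold forces unhealthy hosts to stay pooled, unless the check accounts for those hosts.

<syntaxhighlight lang="python">
def naive_diff(healthy, ipvs_realservers):
    """Flags every pooled-but-unhealthy realserver, including the ones kept
    pooled on purpose because of the depool threshold."""
    return ipvs_realservers - healthy

def threshold_aware_diff(healthy, ipvs_realservers, kept_by_threshold):
    """Ignores realservers intentionally kept pooled by the depool threshold,
    so only genuine drift between the config and IPVS is reported."""
    return ipvs_realservers - healthy - kept_by_threshold

# Hypothetical hosts: mw2380 is down but kept pooled by the threshold.
healthy = {"mw2271", "mw2272"}
ipvs = {"mw2271", "mw2272", "mw2380"}
kept = {"mw2380"}
print(naive_diff(healthy, ipvs))                  # {'mw2380'}: noisy during the outage
print(threshold_aware_diff(healthy, ipvs, kept))  # set(): nothing to report
</syntaxhighlight>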
 
=== Where did we get lucky? ===
* Mild user-facing impact.
 
=== How many people were involved in the remediation? ===
* 4 SREs troubleshooting the issue, plus 1 incident commander
 
== Links to relevant documentation ==
<mark>Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.</mark>
 
== Actionables ==
 
* Fix Pybal IPVS diff check https://phabricator.wikimedia.org/T286913
* Fix depool threshold for text & upload edge caching services https://gerrit.wikimedia.org/r/c/operations/puppet/+/705381
* Fix depool threshold for mw api service https://gerrit.wikimedia.org/r/c/operations/puppet/+/708072
* Avoid using the same switch to get traffic from a row on the primary and secondary load balancers https://phabricator.wikimedia.org/T286881 and https://phabricator.wikimedia.org/T286879
* Load balancers should be able to handle a network interface card failing to be configured https://phabricator.wikimedia.org/T286924
