Incidents/2022-02-01 ulsfo network
document status: in-review
Summary
| Incident ID | 2022-02-01 ulsfo network | Start | 2022-02-01 16:05 |
|---|---|---|---|
| Task | | End | 2022-02-01 16:08 |
| People paged | | Responder count | 1 |
| Coordinators | | Affected metrics/SLOs | |
| Impact | For 3 minutes, clients served by the ulsfo POP were not able to contribute or view uncached pages; roughly 8000 errors per minute (HTTP 5xx). | | |
A firewall change was pushed to the ulsfo routers, which caused ulsfo to lose connectivity to the other POPs and core sites for 3 minutes.
Timeline
- 16:03 - configuration change applied on cr3-ulsfo
- 16:05 - configuration change applied on cr4-ulsfo - outage starts
- 16:06 - Icinga notifies about connectivity issues to ulsfo - paging
- 16:08 - change rolled back - outage ends (see the rollback sketch below)
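The rollback in the last step corresponds to the standard Junos workflow; a minimal sketch, assuming the firewall change was the most recent commit on each router:

[edit]            # configuration mode, presumably on cr4-ulsfo then cr3-ulsfo
rollback 1        # load the previous committed configuration as the candidate
show | compare    # sanity-check that only the new allow_bfd term is removed
commit            # activate it, restoring BFD and the OSPF adjacencies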
Documentation:
Root cause:
The change incorrectly restricts BFD to BGP peers:
+ term allow_bfd {
+     from {
+         source-prefix-list {
+             bgp-sessions;
+         }
+         protocol udp;
+         port 3784-3785;
+     }
+     then accept;
+ }
However, BFD is also used by the OSPF sessions: BFD packets arriving from the OSPF neighbors (whose addresses are not in the bgp-sessions prefix-list) were now dropped, so those BFD sessions timed out and the OSPF adjacencies were torn down.
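For illustration, one possible fix is to widen the source match. This sketch assumes a hypothetical ospf-neighbors prefix-list covering the OSPF point-to-point interface addresses; the actual remediation may have been structured differently:

term allow_bfd {
    from {
        source-prefix-list {
            bgp-sessions;
            ospf-neighbors;    /* hypothetical: OSPF neighbor interface addresses */
        }
        protocol udp;
        port 3784-3785;        /* BFD single-hop control (3784) and echo (3785) */
    }
    then accept;
}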
One surprising point is that the issue didn't show up in the verification commands (show ospf interface, show ospf neighbor): all neighbors were still listed as present.
show ospf interface
Interface State Area DR ID BDR ID Nbrs
ae0.2 PtToPt 0.0.0.0 0.0.0.0 0.0.0.0 1
et-0/0/1.401 PtToPt 0.0.0.0 0.0.0.0 0.0.0.0 1
xe-0/1/1.0 PtToPt 0.0.0.0 0.0.0.0 0.0.0.0 1
show ospf neighbor
Address Interface State ID Pri Dead
198.35.26.197 ae0.2 Full 198.35.26.193 128 35
198.35.26.199 et-0/0/1.401 Full 198.35.26.194 128 33
198.35.26.209 xe-0/1/1.0 Full 208.80.154.198 128 34
While the adjacency was effectively down, as the router logs show:
rpd[16292]: RPD_OSPF_NBRDOWN: OSPF neighbor 198.35.26.209 (realm ospf-v2 xe-0/1/1.0 area 0.0.0.0) state changed from Full to Down due to InActiveTimer (event reason: BFD session timed out and neighbor was declared dead)
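The OSPF neighbor table simply lagged behind the BFD failure, so a BFD-level check would have been the more telling verification command. On Junos:

show bfd session
# Lists each BFD session with its address, state, interface and detect time;
# run during verification, this would likely have surfaced the failing
# sessions sooner than the OSPF neighbor table did.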
Conclusion
- A more progressive rollout (longer wait time between routers, as well as fewer changes at a time) could have reduced the risk of this issue occurring; see the commit confirmed sketch after this list
- Monitoring properly caught the issue
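One concrete way to make such rollouts safer on Junos is commit confirmed, which automatically rolls a change back unless it is explicitly confirmed within a timeout. A sketch, with the 2-minute timeout as an assumed value:

commit confirmed 2    # activate the change; auto-rollback after 2 minutes
                      # unless a confirming commit is issued
# ...verify BFD/OSPF state and cross-site connectivity from another POP...
commit                # confirm the change once verification passes

Had the change been applied this way, the outage window would have been bounded by the confirmation timeout even without manual intervention.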
Scorecard
| Incident Engagement™ ScoreCard | | Score |
|---|---|---|
| People | Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt) | N/A |
| | Were the people who responded prepared enough to respond effectively? (0/5pt) | 5 |
| | Did fewer than 5 people get paged? (0/5pt) | |
| | Were pages routed to the correct sub-team(s)? | 1 |
| | Were pages routed to online (working hours) engineers? (0/5pt) (score 0 if people were paged after-hours) | |
| Process | Was the incident status section actively updated during the incident? (0/1pt) | N/A |
| | If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt) | N/A |
| | Is there a phabricator task for the incident? (0/1pt) | N/A |
| | Are the documented action items assigned? (0/1pt) | N/A |
| | Is this a repeat of an earlier incident? (-1 per prev occurrence) | 0 |
| | Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1pt per task) | 0 |
| Tooling | Did the people responding have trouble communicating effectively during the incident due to the existing or lack of tooling? (0/5pt) | 5 |
| | Did existing monitoring notify the initial responders? (1pt) | 1 |
| | Were all engineering tools required available and in service? (0/5pt) | 5 |
| | Was there a runbook for all known issues present? (0/5pt) | 5 |
| | Total Score | |