Incident documentation/2021-11-23 Core Network Routing

document status: in-review

Summary

At approximately 09:37 on Tuesday 23 November 2021, a change was made on cr1-eqiad and cr2-eqiad to influence route selection in BGP. The specific change, made as part of T295672, was to remove a BGP setting which sets the BGP "MED" attribute to the OSPF cost of reaching the route's next-hop. This changed how routes to certain remote sites were evaluated by the core routers in eqiad. At a high level, it meant that the correct, externally-learnt routes to remote BGP destinations suddenly looked less preferable than the reflections of those routes that the eqiad devices were announcing to each other.
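As a rough illustration of the mechanism, the sketch below models only the slice of the BGP best-path comparison that matters here. The route descriptions, local preferences, AS-path lengths and MED values are hypothetical, and the real Junos decision process has many more steps; the point is simply that once MED no longer carries the OSPF cost, the routes tie on the attributes shown and the decision falls through to later tie-breakers.

  from dataclasses import dataclass

  @dataclass
  class Route:
      description: str
      local_pref: int   # higher wins
      as_path_len: int  # shorter wins
      med: int          # lower wins, when the routes are comparable

  def better(a, b):
      # Simplified best-path comparison; real BGP has further tie-breakers
      # (eBGP vs iBGP, IGP metric to next-hop, router ID, ...) not modelled here.
      if a.local_pref != b.local_pref:
          return a if a.local_pref > b.local_pref else b
      if a.as_path_len != b.as_path_len:
          return a if a.as_path_len < b.as_path_len else b
      if a.med != b.med:
          return a if a.med < b.med else b
      return a

  # Before the change: the route learnt over the inter-site link carries
  # MED = OSPF cost to its next-hop, so it beats the locally reflected copy.
  via_codfw  = Route("learnt via codfw transport", 100, 2, med=10)
  reflection = Route("reflected by the other eqiad router", 100, 2, med=20)
  print(better(via_codfw, reflection).description)

  # After the change the MEDs no longer encode the real OSPF cost, the routes
  # tie on everything modelled here, and later tie-breakers decide instead,
  # which is how each eqiad router could end up preferring the copy
  # reflected by its local peer.
  via_codfw_after  = Route("learnt via codfw transport", 100, 2, med=0)
  reflection_after = Route("reflected by the other eqiad router", 100, 2, med=0)
  print(better(via_codfw_after, reflection_after).description)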

For instance, cr2-eqiad was previously sending traffic for Eqsin BGP routes via Codfw, but following the change BGP decided it was better to send this traffic to its local peer cr1-eqiad. Unfortunately, cr1-eqiad made the mirror-image decision and sent such traffic to cr2-eqiad. A "routing loop" thus came into being, with the traffic never flowing out externally but instead bouncing between the two Eqiad routers.

Alerts fired at 09:39, the first caused by Icinga in Eqiad failing to reach the public text-lb address in Eqsin (Singapore). At 09:42 Cathal reverted the configuration change on both devices. Unfortunately that did not immediately resolve the issue, due to the particular way these routes are updated following a policy change. After some further troubleshooting, the BGP session between the eqiad routers was forcibly reset at 09:51, which resolved the issue.
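The loop described above can be pictured with a small toy model: each eqiad router's best next-hop for the affected prefixes pointed at the other, so packets bounced between them until their TTL expired instead of leaving the site. The next-hop table below is purely illustrative, not the routers' actual forwarding state.

  # Hypothetical next-hop table for one of the affected prefixes after the change.
  next_hop = {
      "cr1-eqiad": "cr2-eqiad",  # cr1 prefers the route reflected by cr2
      "cr2-eqiad": "cr1-eqiad",  # cr2 prefers the route reflected by cr1
      "cr-eqsin": None,          # the intended exit, never reached
  }

  def forward(start, ttl=8):
      """Follow next-hops until the packet is delivered or its TTL expires."""
      hop = start
      while ttl > 0:
          nxt = next_hop.get(hop)
          if nxt is None:
              print(f"delivered at {hop}")
              return
          print(f"{hop} -> {nxt}")
          hop, ttl = nxt, ttl - 1
      print("TTL expired: the packet looped between the eqiad routers and was dropped")

  forward("cr1-eqiad")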

The impact was noticed by Icinga via its health checks against edge caching clusters such as Eqsin. However, the Icinga alert was likely also the only impact, as we generally don't explicitly target remote data centers over public IP addresses. The exception to that is WMCS instances, but those run within Eqiad, and DNS generally resolves public endpoints to the same data center rather than a remote one.

Impact: For about 12 minutes, Eqiad was unable to reach hosts in other data centers (e.g. cache PoPs) via public IP addresses due to a BGP routing error. There was no impact on end-user traffic, and internal traffic generally uses local IP subnets that are routed with OSPF instead of BGP.

Actionables

  • Ticket T295672 has been updated with further detail on what happened, along with a proposal to adjust our BGP sessions to both address the "second IXP port" issue and prevent this from happening again in future. The changes were backed out, so the situation is back to what it was before the incident.