Incident documentation/2021-11-23 Core Network Routing

Latest revision as of 23:45, 1 December 2021

document status: in-review

Summary

At approximately 09:37 on Tuesday 23 November 2021, a change was made on cr1-eqiad and cr2-eqiad to influence BGP route selection. The specific change, made as part of T295672, removed a BGP setting that sets the BGP "MED" attribute to the OSPF cost of reaching the route's next hop. This changed how the core routers there evaluated routes to certain remote sites. At a high level, the correct, externally learnt routes for remote BGP destinations suddenly looked less preferable than the reflections of those routes that devices in eqiad were announcing to each other.
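The role MED plays here can be illustrated with a toy model of one step of best-path selection. Everything below (the route attributes, the numbers, the simplified tie-break order) is invented for illustration and is far simpler than the routers' real decision process:

```python
# Toy model of BGP best-path selection: higher local-pref wins, then
# shorter AS path, then lower MED. Real BGP has many more tie-breakers;
# the routes and values below are hypothetical, not from the incident.

def best_path(routes):
    """Return the preferred route from a list of candidate routes."""
    return min(
        routes,
        key=lambda r: (-r["local_pref"], len(r["as_path"]), r["med"]),
    )

# With MED derived from the IGP (OSPF) cost to the next hop, the route
# via codfw (cheaper to reach in the IGP) beats a locally reflected copy.
via_codfw = {"name": "via-codfw", "local_pref": 100, "as_path": [], "med": 10}
reflected = {"name": "reflected", "local_pref": 100, "as_path": [], "med": 50}
assert best_path([via_codfw, reflected])["name"] == "via-codfw"

# Removing the IGP-derived MED (both become 0) erases that signal, so the
# later tie-breakers, not modelled here, decide -- and a reflected route
# can suddenly look at least as good as the externally learnt one.
via_codfw["med"] = reflected["med"] = 0
```

The point of the sketch is only that MED was carrying useful IGP distance information; once it was removed, the decision fell through to tie-breakers that preferred the local reflection.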

For instance, cr2-eqiad was previously sending traffic for Eqsin BGP routes via Codfw. Following the change, it decided it was better to send this traffic to its local peer cr1-eqiad. Unfortunately, cr1-eqiad was a mirror image of this and decided the best thing was to send such traffic to cr2-eqiad. A "routing loop" thus came into being, with traffic never flowing out externally but instead bouncing between the two Eqiad routers. Alerts fired at 09:39, the first being Icinga in Eqiad failing to reach the public text-lb address in Eqsin (Singapore). At 09:42 the configuration change was reverted on both devices by Cathal. Unfortunately, that did not immediately resolve the issue, due to the particular way these routes update on a policy change. After some further troubleshooting, the BGP session between the two Eqiad routers was forcibly reset at 09:51, which resolved the issue.
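The loop between the two routers can be sketched by following next hops until a node repeats. The forwarding tables below are hypothetical stand-ins for the real routing state, using the router names from the text:

```python
# Follow next hops toward a destination; a revisited node means the
# packet is bouncing between routers rather than making progress.

def trace(next_hops, start, dest):
    """Return the forwarding path from start to dest, or raise on a loop."""
    path, node, seen = [start], start, {start}
    while node != dest:
        node = next_hops[node]
        if node in seen:
            raise RuntimeError("routing loop: " + " -> ".join(path + [node]))
        seen.add(node)
        path.append(node)
    return path

# Before the change: Eqsin-bound traffic from eqiad goes out via codfw.
ok = {"cr2-eqiad": "cr-codfw", "cr-codfw": "cr-eqsin"}
assert trace(ok, "cr2-eqiad", "cr-eqsin") == ["cr2-eqiad", "cr-codfw", "cr-eqsin"]

# After the change: each eqiad router prefers the other, so the walk loops.
looped = {"cr2-eqiad": "cr1-eqiad", "cr1-eqiad": "cr2-eqiad"}
try:
    trace(looped, "cr2-eqiad", "cr-eqsin")
except RuntimeError as e:
    print(e)  # routing loop: cr2-eqiad -> cr1-eqiad -> cr2-eqiad
```

In the real network the looping packets were discarded once their IP TTL expired, which is why Icinga's pings to Eqsin simply failed rather than congesting the link indefinitely.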

The impact was noticed by Icinga via its health checks against edge caching clusters like Eqsin. However, the Icinga alert was likely also the only impact, as we generally don't explicitly target remote data centers over public IPs. The exception is WMCS instances, but those run within Eqiad, and DNS generally resolves public endpoints to the same DC, not a remote one.

Impact: For about 12 minutes, Eqiad was unable to reach hosts in other data centers (e.g. cache PoPs) via public IP addresses due to a BGP routing error. There was no impact on end-user traffic, and internal traffic generally uses local IP subnets that are routed with OSPF instead of BGP.

Actionables

  • Ticket T295672 has been updated with further detail on what happened and with a proposal to adjust our BGP sessions, both to address the "second IXP port" issue and to prevent this from happening again. The changes were backed out, so the situation is back to what it was before the incident.