Incidents/2021-11-02 Cloud VPS networking

document status: in-review

Summary and Metadata

The metadata below provides a quick snapshot of the context around what happened during the incident.

Incident ID: 2021-11-02 Cloud VPS networking
Incident Task: T294853
UTC Start Timestamp: 2021-11-02 11:35:00
UTC End Timestamp: 2021-11-02 13:20:00
People Paged: 0
Responder Count: 3 (Dcaro, Aborrero, Majavah, plus volunteers reporting on IRC)
Coordinator(s): No ICs
Relevant Metrics / SLO(s) affected: No SLO defined
Summary: For about 1 hour and 40 minutes, Toolforge services and VMs in Cloud VPS may have experienced connectivity issues.

After a kernel upgrade on several Cloud VPS network components (the cloudnet and cloudgw servers; see T291813), problems were first noticed with Toolforge NFS in Kubernetes, and LDAP connections were later found to be affected as well. Eventually it turned out that all ingress traffic to the network edge for Cloud VPS VMs was affected; only VMs with floating IPs were unaffected. The issue was resolved by rolling back the kernel upgrade.
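As a rough illustration of the kind of check used to confirm these symptoms, the sketch below probes TCP reachability of NFS and LDAP endpoints. The hostnames are placeholders (assumptions), not the actual Cloud VPS addresses, the ports are the conventional NFS/LDAP defaults, and this is not the exact procedure the responders ran.

# Minimal TCP reachability check. Hostnames below are placeholders,
# not the real Cloud VPS endpoints; ports are the standard defaults.
import socket

ENDPOINTS = [
    ("nfs.example.wmcloud.org", 2049),    # NFS (hypothetical host)
    ("ldap.example.wikimedia.org", 389),  # LDAP (hypothetical host)
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        print(f"{host}:{port} {'ok' if is_reachable(host, port) else 'FAILED'}")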

Impact: For about 1 hour and 40 minutes, Toolforge services and VMs in Cloud VPS may have experienced connectivity issues.

Scorecard

Incident Engagement™ ScoreCard

People
  • Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt): 3
  • Were the people who responded prepared enough to respond effectively? (0/5pt): 1
  • Did fewer than 5 people get paged? (0/5pt): 5
  • Were pages routed to the correct sub-team(s)?: 0
  • Were pages routed to online (working hours) engineers? (0/5pt; score 0 if people were paged after-hours): 5

Process
  • Was the incident status section actively updated during the incident? (0/1pt): 0
  • If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt): 0
  • Is there a phabricator task for the incident? (0/1pt): 1
  • Are the documented action items assigned? (0/1pt): 1
  • Is this a repeat of an earlier incident? (-1 per previous occurrence): 0
  • Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1pt per task): 0

Tooling
  • Did the people responding have trouble communicating effectively during the incident due to existing tooling or a lack of tooling? (0/5pt): 4
  • Did existing monitoring notify the initial responders? (1pt): 0
  • Were all engineering tools required available and in service? (0/5pt): 2
  • Was there a runbook for all known issues present? (0/5pt): 0

Total Score: 22

Actionables

  • Improve automated testing and monitoring of cloud networking, T294955 (a rough sketch of such a probe follows this list)
  • Set up static route for cr-codfw, T295288
  • Avoid keepalived flaps when rebooting servers, T294956
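
As a sketch of what the first actionable could look like, a periodic probe might repeatedly test a canary endpoint behind the cloud network edge and feed the result into alerting. This is purely illustrative; the canary host, port and interval below are assumptions, and the real monitoring work is tracked in T294955.

# Illustrative periodic connectivity probe; the canary host, port and
# interval are assumptions, not the actual T294955 implementation.
import socket
import time

CANARY = ("canary-vm.example.wmcloud.org", 22)  # hypothetical VM behind the cloud edge
INTERVAL_SECONDS = 60

def tcp_up(host: str, port: int, timeout: float = 3.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    while True:
        state = "up" if tcp_up(*CANARY) else "DOWN"
        # In practice this result would be exported to the monitoring/alerting
        # stack rather than printed.
        print(f"{time.strftime('%Y-%m-%dT%H:%M:%SZ', time.gmtime())} "
              f"{CANARY[0]}:{CANARY[1]} {state}")
        time.sleep(INTERVAL_SECONDS)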