
Incidents/2022-02-22 vrts


document status: draft

Summary

Incident metadata (see Incident Scorecard)

  • Incident ID: 2022-02-22 vrts
  • Start: 2022-02-22 08:00
  • End: 2022-02-22 16:47
  • Task: -
  • People paged: 0
  • Responder count: 3
  • Coordinators: -
  • Affected metrics/SLOs:
  • Impact: For 12 hours, incoming emails to a specific new VRTS queue were not processed, and senders received a bounce with an SMTP 550 error. It is estimated that no "useful" emails were lost.
  • Summary: A stuck VRTS alias-generating process on mx2001 resulted in rejects for dcw@wikimedia.org, a new VRTS queue.

Documentation:

On 2022-02-22 an SRE with long-time knowledge of VRTS received an email to their individual work address from a known VRTS admin, stating that a newly created VRTS queue, "dcw@wikimedia.org", returned errors to some users who tried to use it (but not always; manual testing, for example, worked fine). The errors were SMTP 550 errors and looked as follows:

208.80.153.45 does not like recipient.
Remote host said: 550 Previous (cached) callout verification failure

A few hours later (by 2022-02-22 13:29), an independent investigation verified that email was not always reliably delivered to this VRTS queue, and the issue was escalated to a couple of other knowledgeable SREs. Given the incoming path and the fact that the only failing email address was a relatively new one not yet in widespread use, the incident was implicitly triaged as low priority. By 14:35 UTC the problem was verified again, adding more data points, and a first theory was formulated that our Google work email system was at fault, since emails from other MTAs were delivered successfully while sending from wikimedia.org domains failed. However, by 16:47 UTC it became clear that the generate_otrs_aliases.service systemd timer job was stuck and was not updating the VRTS mailing lists/queues on mx2001, while it ran fine on mx1001 (that discrepancy explains why the problem was only sometimes reproducible). After the systemd timer job was restarted, the issue was fixed and the fix was communicated to the VRTS admin.
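
The mx1001/mx2001 discrepancy is the kind of thing a quick cross-host comparison surfaces. The following is a minimal, illustrative sketch of such a check: the unit name comes from this report, while the host names, SSH access, and the choice of systemctl properties are assumptions.

# Illustrative only: compare the alias-generation unit's state on both MX
# hosts. Unit name is from this incident report; host names and SSH access
# are assumptions.
import subprocess

UNIT = "generate_otrs_aliases.service"
HOSTS = ["mx1001.wikimedia.org", "mx2001.wikimedia.org"]  # assumed FQDNs

for host in HOSTS:
    # `systemctl show` prints one property per line, e.g.:
    #   ActiveState=inactive
    #   ExecMainExitTimestamp=Tue 2022-02-22 08:00:03 UTC
    result = subprocess.run(
        ["ssh", host, "systemctl", "show", UNIT,
         "--property=ActiveState,SubState,ExecMainExitTimestamp"],
        capture_output=True, text=True, check=True,
    )
    print(f"--- {host}")
    print(result.stdout.strip())

On the host with the stuck job, ActiveState would typically still read "activating" or "active" long after the timer last fired, while the healthy host would show a recent ExecMainExitTimestamp.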

Actionables

  • Figure out why generate_otrs_aliases.service was stuck.
  • Alert on a stuck generate_otrs_aliases.service (see the check sketch below this list).
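
For the second actionable, one simple approach is to alert when the unit is still running long after its timer fired, or has not completed recently. A minimal sketch follows; the unit name is from this report, while the two-hour staleness threshold, the assumption that the host clock is UTC, and the idea of running this as a periodic check on each MX host are all assumptions.

# Illustrative monitoring sketch: exit non-zero (alert) if the
# alias-generation service looks stuck or stale. Thresholds are made up.
import subprocess
import sys
from datetime import datetime, timedelta, timezone

UNIT = "generate_otrs_aliases.service"
MAX_AGE = timedelta(hours=2)            # assumed acceptable gap between runs
TS_FORMAT = "%a %Y-%m-%d %H:%M:%S %Z"   # systemd's human-readable timestamps

def prop(name: str) -> str:
    """Return a single systemd property value, e.g. 'inactive'."""
    out = subprocess.run(
        ["systemctl", "show", UNIT, f"--property={name}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out.split("=", 1)[1]

state = prop("ActiveState")
finished = prop("ExecMainExitTimestamp")

# Oneshot units report "activating" while their ExecStart is still running,
# so a long-lived "activating"/"active" state suggests a stuck run.
if state in ("activating", "active"):
    print(f"WARNING: {UNIT} is still running (possibly stuck)")
    sys.exit(1)

if not finished:
    print(f"WARNING: {UNIT} has never completed")
    sys.exit(1)

# Assumes the host timezone is UTC when interpreting systemd's timestamp.
last = datetime.strptime(finished, TS_FORMAT).replace(tzinfo=timezone.utc)
if datetime.now(timezone.utc) - last > MAX_AGE:
    print(f"WARNING: {UNIT} last completed at {finished}")
    sys.exit(1)

print(f"OK: {UNIT} last completed at {finished}")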

TODO: Add the #Sustainability (Incident Followup) and the #SRE-OnFIRE (Pending Review & Scorecard) Phabricator tags to these tasks.

Scorecard

Incident Engagement™ ScoreCard
People
  • Were the people responding to this incident sufficiently different than the previous N incidents? (0/5pt): 4
  • Were the people who responded prepared enough to respond effectively? (0/5pt): 3
  • Did fewer than 5 people get paged? (0/5pt): N/A
  • Were pages routed to the correct sub-team(s)?: N/A
  • Were pages routed to online (working hours) engineers? (0/5pt) (score 0 if people were paged after-hours): N/A

Process
  • Was the incident status section actively updated during the incident? (0/1pt): N/A
  • If this was a major outage noticed by the community, was the public status page updated? If the issue was internal, was the rest of the organization updated with relevant incident statuses? (0/1pt): N/A
  • Is there a phabricator task for the incident? (0/1pt): 0
  • Are the documented action items assigned? (0/1pt): 0
  • Is this a repeat of an earlier incident? (-1 per prev occurrence): 0
  • Is there an open task that would prevent this incident / make mitigation easier if implemented? (0/-1pt per task): 0

Tooling
  • Did the people responding have trouble communicating effectively during the incident due to the existing or lack of tooling? (0/5pt): 5
  • Did existing monitoring notify the initial responders? (1pt): 0
  • Were all engineering tools required available and in service? (0/5pt): 5
  • Was there a runbook for all known issues present? (0/5pt): 0

Total score: