Incidents/2019-03-20 ORES

Revision as of 17:47, 8 April 2022 by Krinkle (Krinkle moved page Incident documentation/2019-03-20 ORES to Incidents/2019-03-20 ORES)

Summary

ORES in CODFW stopped processing requests. The result was sustained overload errors and a growing backlog of unprocessed requests.
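From the client's side, this outage manifested as sustained overload errors from the service. A standard mitigation for that failure mode is retrying with capped exponential backoff and jitter, so callers do not pile more load onto an already overloaded backend. A minimal sketch, assuming an injected `fetch` callable and overload predicate (these are placeholders, not the actual ORES client API):

```python
import random
import time

def fetch_with_backoff(fetch, is_overloaded, max_retries=5, base_delay=0.5):
    """Call fetch(); on an overload response, wait and retry.

    fetch: zero-argument callable returning a response object.
    is_overloaded: predicate deciding whether the response is an overload error.
    """
    for attempt in range(max_retries):
        response = fetch()
        if not is_overloaded(response):
            return response
        # Exponential backoff with jitter: base, 2x base, 4x base, ... plus noise.
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError("service still overloaded after %d retries" % max_retries)
```

The jitter term spreads out retries from many clients so they do not re-hit the service in synchronized waves.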




Timeline

All times in UTC.

  • March 19th
    • 04:00 - We observe a very high, sustained request rate from Google Cloud in SFO. The sustained request rate brings EQIAD/CODFW near capacity. (grafana of external requests)
  • March 20
    • 15:58 - DNS for oresrdb.svc.codfw.wmnet is switched over to oresrdb2002.
    • 14:02 - oresrdb2002 is rebooted for maintenance.
    • 14:10 - oresrdb2002 comes back up.
    • 14:12 - ORES codfw stops returning any scores. (grafana of scores processed)
    • 14:12:22 - The score-cache Redis starts and begins loading its databases from disk.
    • 14:13:57 - The score-cache Redis finishes loading the file and starts accepting connections. It also begins a full resynchronization from its master.
    • 14:14 - ORES codfw begins to return overload errors. (grafana of overload errors)
    • 14:40 - Reversal of previous DNS change and forced restart of workers.
[14:40:21] <akosiaris> lemme reverst the switchover of the redis just in case
[14:43:21] <akosiaris> I 'll force a worker restart just to make sure it was that
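The 14:12-14:14 window shows why the reboot was fatal: while a Redis replica is loading its dataset from disk and resyncing from its master, it cannot serve the score cache. A pre-switchover or pre-restart check could confirm the replica is actually ready by inspecting the standard Redis `INFO` fields (`loading`, `master_link_status`, `master_sync_in_progress`). A sketch of such a check, operating on raw `INFO` output:

```python
def parse_info(raw):
    """Parse the 'key:value' lines of Redis INFO output into a dict."""
    info = {}
    for line in raw.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            info[key] = value
    return info

def replica_ready(info):
    """True once the replica has loaded its RDB and completed its resync."""
    return (
        info.get("loading") == "0"
        and info.get("master_link_status") == "up"
        and info.get("master_sync_in_progress") == "0"
    )
```

In practice the raw text would come from `redis-cli INFO` or a Redis client library; the parsing above only assumes the documented `key:value` line format.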


Conclusions

Miscommunication between SRE team members resulted in the backup Redis server being rebooted after it had already been switched over to serve Redis traffic.
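One guard against this class of miscommunication is having the maintenance procedure itself verify that the host about to be rebooted is not the current DNS target of the service record. A sketch using only stdlib name resolution (the service/host names passed in would be e.g. oresrdb.svc.codfw.wmnet and the maintenance host; nothing here is WMF-specific tooling):

```python
import socket

def resolved_ips(name):
    """Return the set of IP addresses a DNS name currently resolves to."""
    return {info[4][0] for info in socket.getaddrinfo(name, None)}

def safe_to_reboot(service_name, host_name):
    """True only if no address of host_name is an active target of service_name."""
    return resolved_ips(service_name).isdisjoint(resolved_ips(host_name))
```

A cookbook could call `safe_to_reboot(...)` as a precondition and abort the reboot when it returns False, forcing the operator to depool or switch DNS first.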

Links to relevant documentation
