Incident documentation/2021-07-26 ruwikinews DynamicPageList
document status: draft
Following a bot import of 200,000 pages into Russian Wikinews over the span of 3 days, slow queries originating from ruwikinews's usage of the DynamicPageList extension (also known as "intersection") overloaded the s3 database cluster, causing php-fpm processes to hang or stall and eventually taking down all wikis. The outage was resolved by disabling the DynamicPageList extension on ruwikinews and aggressively killing queries on s3 replicas. In the worst case, DPL's database query cost scales roughly with the size of the largest category being intersected, which on ruwikinews is orders of magnitude larger than on other wikis. One slow query seen during the outage took more than 3 minutes to finish on an idle replica (see EXPLAIN).
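For illustration only, a minimal sketch of the general shape of a DPL-style category-intersection query (the table and column names are from the standard MediaWiki schema; the category names are made up and this is not the exact query from the outage):

 -- Hypothetical DPL-style intersection: pages in both categories,
 -- ordered by when they were added to the first category.
 SELECT p.page_namespace, p.page_title
 FROM page p
 JOIN categorylinks c1 ON c1.cl_from = p.page_id AND c1.cl_to = 'Published'
 JOIN categorylinks c2 ON c2.cl_from = p.page_id AND c2.cl_to = 'Politics'
 ORDER BY c1.cl_timestamp DESC
 LIMIT 20;

Ordering by the timestamp of one category while filtering on another means the server may have to walk a large fraction of the bigger category before it finds enough matching rows, which is consistent with the multi-minute execution time observed during the outage.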
Impact: For roughly 30 minutes, all wikis suffered high latencies, failed page loads, and errors due to unavailable PHP-FPM workers. Based on traffic graphs (see below), the outage affected approximately 15% of all incoming HTTP requests for wikis, which were either lost, served with high latency, or answered with 5xx error codes. Uncached requests were hit hardest, dropping to 0% availability at several points during the outage, on all wikis.
- 2020-09-07 and 2020-09-08: Following rapid bot imports (~100k pages in 1 day), DynamicPageList queries from ruwikinews caused problems on s3, though it did not lead to a sitewide outage. A summary of that incident is available at T262240#6449531 (TODO: create proper incident report).
- 2021-07-14 through 2021-07-17: NewsBots imports/creates 200,000 pages on Russian Wikinews in the span of 3 days (per Wikimedia News).
- 2021-07-26: Something happens, triggering cache invalidation of pages using the DynamicPageList tag on ruwikinews.
Main outage, all times in UTC.
- 10:30: Database overload starts OUTAGE BEGINS
- 10:33: Pages fire for both the appserver and api_appserver clusters: "Not enough idle PHP-FPM workers for Mediawiki"
- 10:34: Significant IRC alert spam ensues, comms move to #wikimedia-sre and #mediawiki_security
- 10:35: "upstream connect error or disconnect/reset before headers. reset reason: overflow" on enwiki
- 10:38: Manuel depools db2149, which seems to be the most affected DB (SAL entry)
- 10:39: T287362 filed by users unable to access arwiki
- 10:40: After a brief apparent recovery, the load shifts to another DB
- 10:42: Slow query identified as coming from DynamicPageList usage on ruwikinews
- 10:42: Link to previous incident from 2020-09 established (T262240), people involved in that ticket pinged on IRC
- 10:46-10:49: Manuel slowly repools db2149
- 10:48: Recommendation made to disable DynamicPageList on ruwikinews instead of increasing cache TTL
- 10:50: Incident opened (private Google Doc).
- 10:51: Jaime sets the query killer on s3 replicas to 10 seconds for the MediaWiki user (SAL entry); see the sketch after this timeline
- 10:55: Amir disables DPL on ruwikinews (SAL entry)
- 10:56: Icinga recoveries start to fire
- 10:59: Database throughput back to normal levels OUTAGE ENDS
- 11:01: Last Icinga recovery
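For reference, a minimal sketch of what identifying and killing queries over the 10-second limit could look like on a replica (the MediaWiki database user name 'wikiuser' is an assumption, and the actual query-killer tooling automates this rather than running it by hand):

 -- Hypothetical manual query kill: list long-running queries from the
 -- MediaWiki user and generate KILL statements for them.
 SELECT CONCAT('KILL QUERY ', id, ';') AS kill_stmt
 FROM information_schema.PROCESSLIST
 WHERE user = 'wikiuser'
   AND command = 'Query'
   AND time > 10;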
Icinga sent two pages at 10:33 for "Not enough idle PHP-FPM workers for Mediawiki" on the appserver and api_appserver clusters.
The first user report on IRC appears to have been at 10:35 in #wikimedia-sre:
<RhinosF1> Meta is down.
Because this was a full outage, every host was individually alerting and so were services that depend upon MediaWiki. Each appserver triggered two alerts, like:
<icinga-wm> PROBLEM - Apache HTTP on mw2316 is CRITICAL: CRITICAL - Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Application_servers
<icinga-wm> PROBLEM - PHP7 rendering on mw2316 is CRITICAL: CRITICAL - Socket timeout after 10 seconds https://wikitech.wikimedia.org/wiki/Application_servers/Runbook%23PHP7_rendering
icinga-wm sent 145 messages to #wikimedia-operations between 10:34 and 10:36 before being kicked off the Libera Chat network for flooding. That IRC channel was unusable and discussion was moved to #wikimedia-sre and then #mediawiki_security.
What weaknesses did we learn about and how can we address them?
What went well?
- (Use bullet points) for example: automated monitoring detected the incident, outage was root-caused quickly, etc
What went poorly?
- (Use bullet points) for example: documentation on the affected service was unhelpful, communication difficulties, etc
Where did we get lucky?
- (Use bullet points) for example: user's error report was exceptionally detailed, incident occurred when the most people were online to assist, etc
How many people were involved in the remediation?
- (Use bullet points) for example: 2 SREs and 1 software engineer troubleshooting the issue plus 1 incident commander
Links to relevant documentation
Add links to information that someone responding to this alert should have (runbook, plus supporting docs). If that documentation does not exist, add an action item to create it.
Create a list of action items that will help prevent this from happening again as much as possible. Link to or create a Phabricator task for every step.
- To do #1 (TODO: Create task)
- To do #2 (TODO: Create task)
TODO: Add the #Sustainability (Incident Followup) Phabricator tag to these tasks.