
Incidents/2022-09-09 Elastic Autocomplete Missing


document status: draft

Summary

Incident metadata (see Incident Scorecard)

  • Incident ID: 2022-09-09 Elastic Autocomplete Missing
  • Start: 2022-09-09 01:00
  • End: 2022-09-09 05:26
  • Task: T317381
  • People paged: 0
  • Responder count: 3
  • Coordinators: none listed
  • Affected metrics/SLOs: none listed
  • Impact: During the incident, Wikipedia users had weird and unhelpful autocomplete results.

Wikimedia users noticed "weird and unhelpful search results", specifically with regard to autocomplete, as documented here. This was an unexpected result of changes related to the Search Platform team's Elasticsearch 7 upgrade.

Timeline

All times in UTC.

  • 7 September 2022: the wmf.27 maintenance script (compatible only with ES6) tries to build indices against CODFW, which runs ES7. This fails silently.
  • 21:18:44, 8 September 2022: the 08 September UTC late backport window train rolls out and traffic switches to CODFW, effectively bringing the failed indices into production.
  • 01:00, 9 September 2022: Wikimedia users report "weird and unhelpful search results", specifically with regard to autocomplete.
  • 02:13:24, 9 September 2022: Legoktm reports the issue in #wikimedia-search.
  • 05:10:37, 9 September 2022: ebernhardson switches $wgCirrusSearchUseCompletionSuggester to "build", falling back to prefix search and working around the issue (see the configuration sketch after this timeline). Autocomplete results are cached for 3 hours, so in the worst case users were impacted until 08:10:37.
  • 14:00, 9 September 2022: pool counter rejections increase to around 5% because prefix search is used instead of CirrusSearch-Completion (this is a QoS limitation rather than a resource limitation).
  • 14:30, 9 September 2022: pool counter rejections drop as the CompletionSuggester is re-activated via this patch.
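
For reference, a minimal sketch of the workaround described in the 05:10:37 entry, assuming it was applied through the CirrusSearch settings in mediawiki-config; the exact file and surrounding structure are assumptions, not the actual deployed change:

  <?php
  // Hedged sketch of the workaround, per the timeline above: "build" keeps
  // the completion suggester index building but serves autocomplete from
  // prefix search, bypassing the indices that had failed to build correctly.
  $wgCirrusSearchUseCompletionSuggester = 'build';

Because autocomplete results are cached for three hours, the effect of such a change only reaches all users once the cache expires, which is why the worst-case impact window extends to 08:10:37.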

Detection

Detection: users reported the error

Alerts: None

Conclusions

What went well?

  • ebernhardson quickly realized the root cause and worked around it.

What went poorly?

  • Monitoring did not detect the issue.
  • We did not realize this could happen and thus did not include it in our ES7 rollout plan.
  • We did not communicate adequately around the upgrade.

Where did we get lucky?

  • ebernhardson was still online after work hours, and was able to address the issue.

Links to relevant documentation

  • None

Actionables

  • Create documentation for UpdateSuggesterIndex.php; it should probably go on the Search page.
  • Better monitoring (specifics to be added later).
  • Sanity-checking for index size/age.
  • Pool counter limits should be verified against what is running in production: CompletionSuggester limits are much higher than PrefixSearch limits, and when we gracefully degrade to PrefixSearch, we need more slots for PrefixSearch. This should probably be added as a test in mediawiki-config (see the sketch after this list).
  • Better communication, so others are aware when we roll out a major version change.
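
As an illustration of the pool counter action item, here is a minimal PHPUnit-style sketch of what such a check in mediawiki-config could look like. The class name, the $wgPoolCounterConf keys ('CirrusSearch-Search' and 'CirrusSearch-Completion'), and the 'workers' field are assumptions for illustration, not the verified production configuration:

  <?php
  // Hedged sketch, not an existing test: assert that the pool serving prefix
  // search has at least as many worker slots as the completion suggester
  // pool, since completion traffic degrades onto prefix search.
  use PHPUnit\Framework\TestCase;

  class PoolCounterLimitsTest extends TestCase {
      public function testPrefixSearchCanAbsorbCompletionTraffic() {
          global $wgPoolCounterConf;

          // Key names below are assumed; adjust to the real config keys.
          $completion = $wgPoolCounterConf['CirrusSearch-Completion']['workers'] ?? 0;
          $prefix     = $wgPoolCounterConf['CirrusSearch-Search']['workers'] ?? 0;

          $this->assertGreaterThanOrEqual(
              $completion,
              $prefix,
              'Prefix search needs enough pool counter slots to absorb ' .
              'completion suggester traffic when autocomplete degrades.'
          );
      }
  }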

Scorecard

Incident Engagement ScoreCard

Question / Answer (yes/no) / Notes

People
  • Were the people responding to this incident sufficiently different than the previous five incidents? Y
  • Were the people who responded prepared enough to respond effectively? Y
  • Were fewer than five people paged? Y
  • Were pages routed to the correct sub-team(s)? N/A
  • Were pages routed to online (business hours) engineers? Answer “no” if engineers were paged after business hours. N/A

Process
  • Was the incident status section actively updated during the incident? ?
  • Was the public status page updated? ?
  • Is there a phabricator task for the incident? Y
  • Are the documented action items assigned? N
  • Is this incident sufficiently different from earlier incidents so as not to be a repeat occurrence? Y

Tooling
  • To the best of your knowledge, was the open task queue free of any tasks that would have prevented this incident? Answer “no” if there are open tasks that would prevent this incident or make mitigation easier if implemented. Y
  • Were the people responding able to communicate effectively during the incident with the existing tooling? Y
  • Did existing monitoring notify the initial responders? N
  • Were the engineering tools that were to be used during the incident available and in service? Y
  • Were the steps taken to mitigate guided by an existing runbook? N

Total score (count of all “yes” answers above): 10