Deployments/Holding the train
Holding the deployment train is not something the Release Engineering team takes lightly. When RelEng does hold a deployment train, we expect all engineers with relevant expertise to be focused on resolving the issue. A quick resolution is beneficial to all engineers as holding the train, counter-intuitively, can create more problems than it solves. Over time the versions of MediaWiki and extensions that are deployed to the cluster will diverge further and further from the primary development versions (e.g. master) of the code.
Issues that hold the train
This is a non-exhaustive list of things that would cause the train to pause or roll back. As always, it's up to the best judgment of SRE and Release Engineering; however, the following are representative examples of what we'd take action on.
- Security issues
- Data loss
- Major feature regressions
- Inability to login/logout/create account for a large portion of users
- Inability to edit for a large portion of users
- Performance regressions
- Page load time
- Page save/update time
- Major stylistic problems affecting all pages
- Significant error-rate increases (see #Logspam)
- Any new error messages that occur frequently enough to be noticed in logstash will block the train.
- If the frequency increases significantly after a deployment then it should be immediately rolled back until the error can be fixed and the branch re-deployed.
- Even DEBUG/INFO-level logs are a problem, especially if the frequency of the messages is high enough to put unnecessary load on the logstash servers (see the sketch after this list).
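Below is a minimal sketch of how a hot code path can avoid emitting one DEBUG line per item and instead log a single summary. It assumes a PSR-3 logger such as the one MediaWiki code obtains from LoggerFactory; the channel name, $rows, and processRow() are hypothetical placeholders.

```php
<?php
// Hypothetical sketch: $logger is any PSR-3 logger (Psr\Log\LoggerInterface),
// e.g. obtained in MediaWiki via LoggerFactory::getInstance( 'MyExtension' ).
// $rows and processRow() stand in for whatever the hot path iterates over.

// Noisy: one DEBUG message per row can flood logstash on large batches.
foreach ( $rows as $row ) {
	$logger->debug( 'Processing row {id}', [ 'id' => $row->id ] );
	processRow( $row );
}

// Quieter: do the work silently and emit a single summary message per batch.
$count = 0;
foreach ( $rows as $row ) {
	processRow( $row );
	$count++;
}
$logger->debug( 'Processed {count} rows in this batch', [ 'count' => $count ] );
```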
What happens during backport windows while the train is on hold?
Only simple config changes and emergency fixes are allowed during backport windows while we are reverted. This is to reduce the complexity during investigation.
Remember, while we are reverted people are diligently diagnosing and debugging issues; any seemingly unrelated change could in fact affect their investigations.
What happens next?
- If a blocker was found and addressed before 3pm Pacific Tues/Wed/Thur THEN
- the planned deploy/rollout can move forward at that time (deployment schedule permitting)
- If the new wmf.XX version wasn't deployed to group2 (all Wikipedias) on Thursday due to blockers THEN
- If there is a fix available for deploy, RelEng will attempt to get the train back on track to ensure we adhere as closely as possible to the train schedule.
- An incident report will be filed to address follow-up actions and process improvements, and,
- A post-mortem will be conducted.
- If there are issues affecting performance discovered significantly after the current version of MediaWiki and extensions has been deployed to all wikis (group2, Thursday) THEN
- The current code version will remain on servers; we will not attempt to roll back to a version more than one week old, and,
- The next rollout of the following release will be at the Performance Team's discretion, and,
- An incident report will be filed to address follow-up actions and process improvements, and,
- A post-mortem will be conducted.
Train "blocker tasks"
What: For each weekly train version rollout an accompanying task is filed in Phabricator. They all live in the #Train-Deployments tag. You can find the current task at https://train-blockers.toolforge.org.
Purpose: The purpose of these tasks is to track the rollout of the train especially including any blocking issues that may arise (see above). These blocking issues are filed as sub-tasks.
Blocking (sub) tasks types:
- A task which causes an entire revert/rollback to the previously deployed version and which must be addressed before moving forward.
- A task which prevents the continued rollout of the new version until it is addressed.
Priority of blocking (sub) tasks:
Tasks which block the train from moving forward or cause it to be rolled back are set to UBN! ("Unbreak Now!") priority as getting the train moving again should be the highest priority for the person(s)/team responsible for the code in question.
Status of blocking (sub) tasks:
In most cases a blocking task must be "Resolved" in Phabricator for the train to move forward. Occasionally the task itself is not resolved because the issue has been worked around in another way, for instance when a backport was prepared and deployed but that fix is not yet merged in master. The task will normally be closed after that patch is merged into master.
Communication on blocking tasks:
The "train conductor" for that week is responsible for commenting on any blocking (sub) tasks with their assumptions on status and impact, especially if they choose to move the train forward with the task not set to "Resolved" for whatever reason. The reason for this commenting (and potential over communication) is to ensure all parties are aware of all assumptions and decisions.
Maintaining the task series in Phabricator:
Periodically, the release manager will create batches of new tasks in Phabricator for planned upcoming MediaWiki versions. This is accomplished by running the scap task-series plugin. For documentation, see: Deployments/Blocking_Tasks
Logspam
What it is
#Logspam is the term we use to describe the category of noisy error messages in our logs. These usually do not represent actual error conditions, or the errors are being ignored or purposefully not prioritized by the responsible parties (when any exist). Specific error messages that have been identified by Release Engineering are tracked in the #Wikimedia-Production-Error Phabricator project.
Why it's a problem
Logspam is a problem because noisy logs make it more difficult to detect real problems quickly when looking at a log dashboard, for example fatalmonitor.
All deployers need to be able to quickly detect any new problems that are introduced by their newly deployed code. If important error messages are drowned out by this logspam then they might not detect more serious issues.
Major Causes (and how you can fix them)
Incorrectly categorized log messages
The most common example of this type would be expected (or known) conditions being recorded as exceptional conditions, e.g. Debug notices or Warnings being logged as Errors. This is incorrect use of logging and should be corrected.
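As an illustration, here is a sketch of the same condition logged at the wrong and the right level. It assumes MediaWiki's LoggerFactory and a PSR-3 logger; the channel name, $cache, and $key are hypothetical stand-ins for whatever your code actually uses.

```php
<?php
use MediaWiki\Logger\LoggerFactory;

// Hypothetical channel name and cache lookup, for illustration only.
$logger = LoggerFactory::getInstance( 'MyExtension' );
$cacheValue = $cache->get( $key );

// Incorrect: an expected, recoverable condition reported at error level.
if ( $cacheValue === false ) {
	$logger->error( 'Cache miss for {key}', [ 'key' => $key ] );
}

// Correct: expected conditions go to debug/info; error() is reserved for
// conditions that actually need human attention.
if ( $cacheValue === false ) {
	$logger->debug( 'Cache miss for {key}, regenerating value', [ 'key' => $key ] );
}
```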
Notice "Undefined variable", "Undefined index" or "Undefined offset"
These are a common occurrence in PHP code. Whenever you attempt to access a variable, or an index of an array, that doesn't exist, PHP logs a notice. These are coding errors and they need to be fixed. It might be that the input is malformed and the error is in the caller; or it might be a mistyped reference; or it might be that the key is legitimately allowed to be absent but the code forgot to access it conditionally.
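A short sketch of the last case and two common ways to fix it; the parameter array and the 'limit' key are hypothetical examples, not code from any particular extension:

```php
<?php
// Hypothetical request parameters; 'limit' may legitimately be absent.
$params = [ 'title' => 'Main Page' ];

// Emits "Notice: Undefined index: limit" because the key does not exist.
$limit = $params['limit'];

// Fix 1: access the key conditionally when absence is a legitimate case.
$limit = isset( $params['limit'] ) ? $params['limit'] : 50;

// Fix 2: the null coalescing operator (PHP 7+) expresses the same default.
$limit = $params['limit'] ?? 50;
```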