Deployments/Holding the train
This page is currently a draft. More information and discussion about changes to this draft can be found on the talk page.
Holding the deployment train is not something that should happen unless there are serious security, performance, or functionality issues. Counter-intuitively, holding the train can create more problems than it solves, because the versions of MediaWiki and extensions deployed to the cluster diverge further and further from the primary development versions of the code.
Issues that hold the train
This is not an exhaustive list of things that would cause the train to pause or roll back. As always, it's up to the best judgment of Operations and Release Engineering, but the following scenarios are pretty indicative of what we'd take action on.
- Security issues
- Data loss
- Major feature regressions
- Inability to login/logout/create account for a large portion of users
- Inability to edit for a large portion of users
- Performance regressions
  - Page load time
  - Page save/update time
- Major stylistic problems affecting all pages
- Error-rate increases (see Logspam below)
  - Any new error messages that occur frequently enough to be noticed in logstash will block the train.
  - If the frequency of an existing error increases significantly after a deployment, the deployment should be immediately rolled back until the error can be fixed and the branch re-deployed.
  - Even DEBUG/INFO-level logs are a problem, especially if the frequency of the messages is high enough to put unnecessary load on the logstash servers.
What happens in SWAT while the train is on hold?
Only simple config changes and emergency fixes are allowed during SWAT while we are reverted. This is to reduce the complexity during investigation.
Remember, while we are reverted people are diligently diagnosing and debugging issues; any seemingly unrelated change could in fact affect their investigations.
What happens next (modified train schedule)?
- If a new wmf.XX version wasn't deployed due to blockers for the entire week, then
  - The following week no new branch will be cut (target getting wmf.XX to all wikis), OR the following week a new branch will be cut (skipping last week's wmf.XX branch)
  - An incident report will be filed to address follow-up actions and process improvements
- If a blocker was found and addressed before 3pm Pacific, then
  - The planned deploy/rollout can move forward at that time (deployment schedule permitting)
- If issues affecting performance are discovered after the current version of MediaWiki and extensions has been deployed, then
  - The current code version will remain on the servers; we will not attempt to roll back to a version more than one week old
  - The next release will remain at the Performance Team's discretion until XXX time, after which a new branch will be cut and rolled out
- IF...THEN
Logspam
What it is
Logspam is the term we use to describe the category of noisy error messages in our logs. These usually do not represent actual error conditions, or the errors are being ignored/purposefully not prioritized by the responsible parties (when any exist). Specific error messages that have been identified by Release Engineering are tracked in the #Wikimedia-Log-Errors Phabricator project.
Why it's a problem
Logspam is a problem because noisy logs make it more difficult to detect real problems quickly when looking at a log dashboard, for example fatalmonitor.
All deployers need to be able to quickly detect any new problems introduced by their newly deployed code. If important error messages are drowned out by this logspam, more serious issues may go undetected.
Major Causes (and how you can fix them)
Incorrectly categorized log messages
The most common example of this type would be expected (or known) conditions being recorded as exceptional conditions, eg: Debug
notices or Warnings
being logged as Errors
. This is incorrect use of logging and should be corrected.
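The sketch below illustrates the distinction in PHP, assuming MediaWiki's PSR-3 LoggerFactory is available; the 'MyExtension' channel, the cache-miss scenario, and the $key value are made up for the example.

<?php
// A minimal sketch, assuming MediaWiki's PSR-3 LoggerFactory; the channel
// name, message, and $key below are illustrative only.
use MediaWiki\Logger\LoggerFactory;

$logger = LoggerFactory::getInstance( 'MyExtension' );
$key = 'user:123:preferences';

// Incorrect: an expected condition (a cache miss) reported as an error.
// Every occurrence shows up in logstash looking like a real failure.
// $logger->error( 'Cache miss for key {key}', [ 'key' => $key ] );

// Correct: known/expected conditions belong at debug (or info) level,
// leaving warning/error for conditions that need human attention.
$logger->debug( 'Cache miss for key {key}', [ 'key' => $key ] );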
Undefined index notices
These are a common occurrence in PHP code. Whenever you attempt to access an index of an array but the array does not contain the specified key, HHVM will log a notice. These are coding errors and they need to be fixed. If the array index is not always expected to exist, then the code needs to check with isset() or array_key_exists() before referencing the key.
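A minimal PHP sketch of the pattern; the $options array, the 'limit' key, and the default value are hypothetical:

<?php
// A minimal sketch; $options, 'limit', and the default of 10 are made up.
$options = [ 'offset' => 0 ];

// Incorrect: logs "Undefined index: limit" whenever the key is missing.
// $limit = $options['limit'];

// Correct: check for the key first and fall back to a default.
$limit = isset( $options['limit'] ) ? $options['limit'] : 10;

// Use array_key_exists() instead when a key stored with a null value
// must be treated differently from a missing key (isset() reports both
// as absent).
if ( array_key_exists( 'limit', $options ) ) {
    $limit = $options['limit'];
}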