You are browsing a read-only backup copy of Wikitech. The live site can be found at wikitech.wikimedia.org

Release Engineering/SAL: Difference between revisions

Older revision, imported by Labslogbot: (Reloading Zuul to deploy I18fa56da9a27a4efeb061ceba773c8b50bc4a8f4 (marxarelli))
Newer revision, imported by Stashbot: (dwalden: restarted mathoid service on deployment-docker-mathoid01)
== 2015-08-04 ==
* 00:30 marxarelli: Reloading Zuul to deploy I18fa56da9a27a4efeb061ceba773c8b50bc4a8f4

== 2022-07-05 ==
* 14:17 dwalden: restarted mathoid service on deployment-docker-mathoid01
* 11:39 hashar: Reloaded Zuul for `skip selenium for Wikibase repo/rest-api` https://gerrit.wikimedia.org/r/c/integration/config/+/811258
* 08:49 hauskatze: Diffusion rORES repository. Changed URI settings: enabled SSH push for mirroring; disabled HTTP {{!}} [[phab:T311390|T311390]]


== 2015-08-02 ==
* 04:30 legoktm: deploying https://gerrit.wikimedia.org/r/228606
* 03:00 legoktm: deploying https://gerrit.wikimedia.org/r/228601
* 02:39 legoktm: deploying https://gerrit.wikimedia.org/r/228600
* 01:58 legoktm: deploying https://gerrit.wikimedia.org/r/228596
* 01:53 legoktm: deploying https://gerrit.wikimedia.org/r/222760
* 01:28 legoktm: deploying https://gerrit.wikimedia.org/r/226753
* 01:20 legoktm: deploying https://gerrit.wikimedia.org/r/228507
* 00:36 legoktm: deploying https://gerrit.wikimedia.org/r/228583

== 2022-06-30 ==
* 22:02 TheresNoTime: unstuck beta-mediawiki-config-update-eqiad jobs, will comment at [[phab:T72597|T72597]]
* 21:05 TheresNoTime: cancelled beta-code-update-eqiad#398138 to make way for pending beta-scap-sync-world#57641, queued another beta-code-update-eqiad
* 16:47 taavi: reloading zuul to deploy https://gerrit.wikimedia.org/r/810053


== 2015-08-01 ==
* 23:27 legoktm: deploying https://gerrit.wikimedia.org/r/228492

== 2022-06-29 ==
* 14:48 ori: Clearing data from incomplete migration on Wikifunctionswiki via sql.php
* 13:39 TheresNoTime: clearing stuck beta deployment jobs, watching to ensure they catch up :')


== 2015-07-31 ==
* 01:32 jzerebecki: reload zuul for 83a30e5..f2d2517

== 2022-06-28 ==
* 14:45 TheresNoTime: clear stuck beta deployment jobs, now running & will keep an eye
* 13:39 hashar: gerrit: added `Cindy-the-browser-test-bot` to the `Service Users` group https://gerrit.wikimedia.org/r/admin/groups/d39fe9cefd40ca1a07e372c0d7bd7e72ce2e4a2f,members {{!}} [[phab:T311370|T311370]]
* 09:37 hashar: phabricator: changed username of rORES Phab>Gerrit replication from `phab` to `phabricator` # [[phab:T311390|T311390]]


== 2015-07-30 ==
* 23:50 bd808: upgraded nutcracker to 0.4.1-1+wm2~precise1 on deployment-bastion
* 21:48 legoktm: deploying https://gerrit.wikimedia.org/r/228155
* 17:09 ostriches: cleaned up /var space on deployment-videoscaler01
* 09:17 hashar: apt-get upgrade on all Trusty slaves
* 09:13 hashar_: integration: upgrading Zuul package on Precise/Trusty instances ( https://phabricator.wikimedia.org/T106499 )

== 2022-06-27 ==
* 21:19 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/809022
* 19:28 Reedy: Reloading Zuul to deploy https://phabricator.wikimedia.org/T308406


== 2015-07-29 ==
* 23:55 marxarelli: clearing disk space on integrations-slave-trusty-1012 with `find /mnt/jenkins-workspace/workspace -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;`
* 18:15 bd808: upgraded nutcracker on deployment-jobrunner01
* 18:14 bd808: upgraded nutcracker on deployment-videoscaler01
* 18:08 bd808: rm deployment-fluorine:/a/mw-log/archive/*-201506*
* 18:08 bd808: rm deployment-fluorine:/a/mw-log/archive/*-201505*
* 18:02 bd808: rm deployment-videoscaler01:/var/log/atop.log.?*
* 16:49 thcipriani: lots of "Error connecting to 10.68.16.193: Can't connect to MySQL server on '10.68.16.193'" deployment-db1 seems up and functional :(
* 16:27 thcipriani: deployment-prep login timeouts, tried restarting apache, hhvm, and nutcracker on mediawiki{01..03}
* 14:38 bblack: cherry-picked https://gerrit.wikimedia.org/r/#/c/215624 (updated to PS8) into deployment-puppetmaster ops/puppet
* 14:28 bblack: cherry-picked https://gerrit.wikimedia.org/r/#/c/215624 into deployment-puppetmaster ops/puppet
* 12:38 hashar_: salt minions are back somehow
* 12:36 hashar_: salt on deployment-salt is missing most of the instances :-(((
* 03:00 ostriches: deployment-bastion: please please someone rebuild me to not have a stupid 2G /var partition
* 03:00 ostriches: deployment-bastion: purged a bunch of atop and pacct logs, and apt cache...clogging up /var again.
* 02:34 legoktm: deploying https://gerrit.wikimedia.org/r/227640

== 2022-06-24 ==
* 20:52 taavi: added `denisse` as a member
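The 23:55 workspace cleanup above is a reusable pattern: delete only top-level directories untouched for more than 15 days. A minimal sketch of the same `find` invocation, run against a throwaway directory instead of `/mnt/jenkins-workspace/workspace` so it is safe to try (the directory names are invented for the demonstration):

```shell
# Scratch stand-in for the Jenkins workspace root.
WORKSPACE=$(mktemp -d)

# Simulate one stale and one fresh job workspace.
mkdir -p "$WORKSPACE/old-job" "$WORKSPACE/fresh-job"
touch -t 202001010000 "$WORKSPACE/old-job"   # mtime far older than 15 days

# Same pattern as the log entry: only top-level directories (-mindepth 1
# -maxdepth 1 -type d) not modified in the last 15 days (-mtime +15) are removed.
find "$WORKSPACE" -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;

ls "$WORKSPACE"   # only fresh-job remains
```

Because `-maxdepth 1` stops `find` from descending into the directories it matches, deleting them with `-exec rm -rf` does not trip the traversal.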


== 2015-07-28 ==
* 23:43 marxarelli: running `jenkins-jobs update config/ 'mwext-mw-selenium'` to deploy I7afa07e9f559bffeeebaf7454cc6b39a37e04063
* 21:05 bd808: upgraded nutcracker on mediawiki03
* 21:04 bd808: upgraded nutcracker on mediawiki02
* 21:01 bd808: upgraded nutcracker on mediawiki01
* 19:49 jzerebecki: reloading zuul b1b2cab..b02830e
* 11:18 hashar: Assigning label "BetaClusterBastion" to https://integration.wikimedia.org/ci/computer/deployment-bastion.eqiad/
* 11:12 hashar: Jenkins jobs for the beta cluster ended up stuck again.  Found a workaround by removing the Jenkins label  on deployment-bastion node and reinstating it.  Seems to get rid of the deadlock ( ref: https://phabricator.wikimedia.org/T72597#1487801 )
* 09:50 hashar: deployment-apertium01 is back!  The ferm rules were outdated / not maintained by puppet, dropped ferm entirely.
* 09:40 hashar: rebooting deployment-apertium01 to ensure its ferm rules are properly loaded on boot ( https://phabricator.wikimedia.org/T106658 )
* 00:46 legoktm: deploying https://gerrit.wikimedia.org/r/227383

== 2022-06-23 ==
* 15:59 taavi: reload zuul for https://gerrit.wikimedia.org/r/808021


== 2015-07-27 ==
* 23:04 marxarelli: running `jenkins-jobs update config/ 'browsertests-*'` to deploy I3c61ff4089791375e21aadfa045d503dfd73ca0e
* 13:26 hashar: Precise slaves had faulty elasticsearch: apt-get install --reinstall elasticsearch
* 13:21 hashar: puppet stalled on Precise Jenkins slaves :-(
* 08:52 hashar: upgrading packages on Precise slaves
* 08:49 hashar: rebooting all Trusty jenkins slaves
* 08:39 hashar: upgrading python-pip on Trusty from 1.5.4-1ubuntu1 to 1.5.4-1ubuntu3 . Fix up pip silently removing system packages ( https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=771794 )
* 08:12 hashar: On CI slaves, bumping HHVM from  3.6.1+dfsg1-1+wm3 to 3.6.5+dfsg1-1+wm1
* 08:11 hashar: apt-get upgrade Trusty Jenkins slaves

== 2022-06-22 ==
* 17:36 taavi: gerrit: add tfellows to the extension-OpenBadges group per request in [[phab:T308278|T308278]]
* 17:35 taavi: gerrit: create group extension-JsonData with robla in it, make it an owner of mediawiki/extensions/JsonData per request in [[phab:T303147|T303147]]
* 16:19 hashar: Reloaded Zuul for https://gerrit.wikimedia.org/r/807586
* 09:35 hashar: Switched `gitlab-prod-1001.devtools.eqiad1.wikimedia.cloud` instance to use the project Puppet master `puppetmaster-1001.devtools.eqiad1.wikimedia.cloud`
* 09:08 hashar: contint1001 , contint2002: deleting `.git/logs` from all zuul-merger repositories. We do not need the reflog `sudo -u zuul find /srv/zuul/git -type d -name .git -print -execdir rm -fR .git/logs \;` # [[phab:T307620|T307620]]
* 09:00 hashar: contint1001 , contint2002: setting `core.logallrefupdates=false` on all Zuul merger git repositories: `sudo -u zuul find /srv/zuul/git -type d -name .git -print -execdir git config core.logallrefupdates false \;` # [[phab:T307620|T307620]]
* 07:46 hashar: Building operations-puppet docker image for https://gerrit.wikimedia.org/r/c/integration/config/+/807180
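The 09:08 reflog cleanup uses `find -execdir`, which runs the command from inside the directory that contains each match, so the relative path `.git/logs` resolves correctly for every repository. A sketch of the same pattern using throwaway directories instead of `/srv/zuul/git` (no real git repositories involved; the `repo-a`/`repo-b` names are invented):

```shell
# Mimic a tree of repositories, each with a .git/logs reflog directory.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/repo-a/.git/logs" "$ROOT/repo-b/.git/logs"

# For every .git directory found, -execdir runs rm from its parent
# directory, removing only the logs/ subdirectory.
find "$ROOT" -type d -name .git -print -execdir rm -fR .git/logs \;

# The .git directories survive; only their logs/ subdirectories are gone.
```

The same skeleton with `git config core.logallrefupdates false` in place of `rm -fR .git/logs` gives the 09:00 entry.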


== 2015-07-24 ==
* 17:35 marxarelli: updating integration slave scripts from integration-saltmaster to deploy I6906fadede546ce2205797da1c6b267aed586e17
* 17:17 marxarelli: running `jenkins-jobs update config/ 'mediawiki-selenium-integration' 'mwext-mw-selenium'` to deploy Ib289d784c7b3985bd4823d967fbc07d5759dc756
* 17:05 marxarelli: running `jenkins-jobs update config/ 'mediawiki-selenium-integration'` to deploy and test Ib289d784c7b3985bd4823d967fbc07d5759dc756
* 17:04 hashar: integration-saltmaster, in  a '''screen''' : salt -b 1 '*slave*' cmd.run '/usr/local/sbin/puppet-run'|tee hashar-massrun.log
* 17:04 hashar: cancelled last command
* 17:03 hashar: integration-saltmaster : salt -b 1 '*slave*' cmd.run '/usr/local/sbin/puppet-run' &  && disown && exit
* 16:55 hashar: Might have fixed the puppet/pip mess on CI slaves by creating a symlink from /usr/bin/pip to /usr/local/bin/pip ( https://gerrit.wikimedia.org/r/#/c/226729/1..2/modules/contint/manifests/packages/python.pp,unified )
* 16:36 hashar: puppet on Jenkins slaves might have some intermittent issues due to pip installation  https://gerrit.wikimedia.org/r/226729
* 15:29 hashar: removing pip obsolete download-cache setting ( https://gerrit.wikimedia.org/r/#/c/226730/ )
* 15:27 hashar: upgrading pip to 7.1.0 via pypi ( https://gerrit.wikimedia.org/r/#/c/226729/ ).  Revert plan is to uncherry pick the patch on the puppetmaster and:  pip uninstall pip
* 12:46 hashar: Jenkins: switching gearman plugin from our custom compiled 0.1.1-9-g08e9c42-change_192429_2  to upstream 0.1.2. They are actually the exact same versions.
* 08:40 hashar: upgrading zuul to zuul_2.0.0-327-g3ebedde-wmf3precise1 to fix a regression ( https://phabricator.wikimedia.org/T106531 )
* 08:39 hashar: upgrading zuul

== 2022-06-21 ==
* 22:01 brennen: gitlab-runners: re-registering all shared runners
* 17:55 dancy: Upgrading scap to 4.9.4-1+0~20220621174226.320~1.gbp56e4d4 in beta cluster


== 2015-07-23 ==
* 23:03 marxarelli: running `jenkins-jobs update config/ 'browsertests-*'` to deploy I2d0f83d0c6a406d46627578cb8db0706d1b8655d
* 16:38 marxarelli: Reloading Zuul to deploy I96b6218a208f133209452c71bcf01a1088305aea
* 15:39 urandom: applied wip logstash & cassandra changes (https://gerrit.wikimedia.org/r/#/c/226025/) to deployment-prep
* 13:24 hashar: apt-get upgrade integration-puppetmaster and rebooting it
* 13:23 hashar: integration puppetmaster in bad shape: Warning: Error 400 on SERVER: Cannot allocate memory - fork(2)
* 10:58 hashar: beta : salt '*' cmd.run 'rm /etc/apt/apt.conf.d/20auto-upgrades.ucf-dist'
* 10:52 hashar: Beta cluster puppetmaster is now deployment-puppetmaster.deployment-prep.eqiad.wmflabs . Migrated all instances (solves https://phabricator.wikimedia.org/T106649 )
* 10:30 hashar: regenerated puppet cert on deployment-salt , the old puppetmaster now a puppet client
* 10:23 hashar: running apt-get upgrade on deployment-parsoidcache02
* 09:32 hashar: puppet broken on deployment-fluorine : Error: Could not request certificate: Neither PUB key nor PRIV key:: header too long
* 08:39 hashar: Disabling puppet agent on ALL beta cluster instances
* 08:18 hashar: creating deployment-puppetmaster m1.medium :D
* 01:57 jzerebecki: reconnected slave and needed to kill a few pending beta jobs, works again
* 01:50 jzerebecki: trying https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update
* 01:09 legoktm: beta-mediawiki-config-update-eqiad jobs stuck
* 00:41 jzerebecki: clean up doc dir after job changes gallium:~$ sudo -iu jenkins-slave rm -r /srv/org/wikimedia/doc/MobileFrontend/master/{app-0c945a27f43452df695771ddb60b3d14.js,data-500abda2bcb0df13609e38707dfa7f4e.js,eg-iframe.html,extjs,favicon.ico,index.html,member-icons,output,resources,source,styles-3eba09980fa05ead185cb17d9c0deb0f.css}
* 00:14 jzerebecki: reloading zuul 369e6eb..73dc1f6 for https://gerrit.wikimedia.org/r/#/c/223527/

== 2022-06-20 ==
* 16:30 urbanecm: add sgimeno as a project member (Growth engineer with need for access)
* 15:50 ori: On deployment-cache-<nowiki>{</nowiki>text,upload<nowiki>}</nowiki>06, ran: touch /srv/trafficserver/tls/etc/ssl_multicert.config && systemctl reload trafficserver-tls.service ([[phab:T310957|T310957]])
* 14:07 ori: restarted acme-chief on deployment-acme-chief03


== 2015-07-22 ==
* 10:24 hashar: Upgrading Zuul on Jenkins Precise slaves to zuul_2.0.0-327-g3ebedde-wmf2precise1_amd64.deb
* 09:32 hashar_: Reupgrading Zuul to zuul_2.0.0-327-g3ebedde-wmf2precise1_amd64.deb with an approval fix ( https://gerrit.wikimedia.org/r/#/c/226274/ ) for gate-and-submit no more matching Code-Review+2 events ( https://phabricator.wikimedia.org/T106436 )

== 2022-06-17 ==
* 17:15 ori: provisioned deployment-cache-text07 in deployment-prep to test query normalization via VCL
* 01:08 TimStarling: on deployment-docker-cpjobqueue01 and deployment-docker-changeprop01 I redeployed the changeprop configuration, reverting the PHP 7.4 hack


== 2015-07-21 ==
* 22:54 greg-g: 22:50 <  chasemp> "then git reset --hard 9588d0a6844fc9cc68372f4bf3e1eda3cffc8138 in  /etc/zuul/wikimedia"
* 22:53 greg-g: 22:47 <  chasemp> service zuul stop && service zuul-merger stop && sudo apt-get install  zuul=2.0.0-304-g685ca22-wmf1precise1
* 21:48 greg-g: Zuul not responding
* 20:23 hasharConfcall: Zuul no more reports back to Gerrit due to an error with the Gerrit label
* 20:10 hasharConfcall: Zuul restarted with 2.0.0-327-g3ebedde-wmf2precise1
* 19:48 hasharConfcall: Upgrading Zuul to zuul_2.0.0-327-g3ebedde-wmf2precise1  Previous version failed because python-daemon was too old, now shipped in the venv  https://phabricator.wikimedia.org/T106399
* 15:04 hashar: upgraded Zuul on gallium from zuul_2.0.0-306-g5984adc-wmf1precise1_amd64.deb to zuul_2.0.0-327-g3ebedde-wmf1precise1_amd64.deb . now uses python-daemon 2.0.5
* 13:37 hashar: upgraded Zuul on gallium from zuul_2.0.0-304-g685ca22-wmf1precise1 to zuul_2.0.0-306-g5984adc-wmf1precise1 . Uses a new version of GitPython
* 02:15 bd808: upgraded to elasticsearch-1.7.0.deb on deployment-logstash2

== 2022-06-16 ==
* 12:24 hashar: gitlab: runner-1030: `docker volume prune -f`
* 12:24 hashar: gitlab: runner-1026: `docker volume prune -f`
* 10:02 elukey: ran `scap install-world --batch` to allow scap/puppet to work on ml-cache100[2,3]


== 2015-07-20 ==
* 16:55 thcipriani: restarted puppetmaster on deployment-salt, was acting whacky

== 2022-06-15 ==
* 22:39 brennen: phabricator: tagged release/2022-06-15/1 ([[phab:T310742|T310742]])
* 16:31 hashar: integration-agent-docker-1035: docker image prune
* 15:26 dancy: Upgrading scap to 4.9.4-1+0~20220615151557.315~1.gbped3b8d in beta cluster


== 2015-07-17 ==
* 21:45 hashar: upgraded nodepool to 0.0.1-104-gddd6003-wmf4 . That fix graceful stop via SIGUSR1 and let me complete the systemd integration
* 20:03 hashar: stopping Zuul to get rid of a faulty registered function "build:Global-Dev Dashboard Data". Job is gone already.

== 2022-06-14 ==
* 21:30 TheresNoTime: clear out stuck `beta-scap-sync-world` jobs (repeatedly per each queued `beta-mediawiki-config-update-eqiad` job), queued jobs now running. monitored for until each job had run successfully. jobs up to date
* 17:18 brennen: starting 1.39.0-wmf.16 ([[phab:T308069|T308069]]) transcript in deploy1002:~brennen/1.39.0-wmf.16.log
* 13:35 TheresNoTime: clear stuck `beta-scap-sync-world` job, other queued jobs now running. Cancel running `beta-update-databases-eqiad` job, will ensure it runs on the next timer
* 00:42 TimStarling: on deployment-deploy03 removed helm2, as was done in production


== 2015-07-16 ==
* 16:08 hashar_: kept nodepool stopped on labnodepool1001.eqiad.wmnet because it spams the cron log
* 10:27 hashar: fixing puppet on deployment-bastion. Stalled since July 7th - https://phabricator.wikimedia.org/T106003
* 10:26 hashar: deployment-bastion: apt-get upgrade
* 02:34 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/224313 for scap testing

== 2022-06-13 ==
* 22:04 TheresNoTime: cleared out stalled Jenkins beta jobs on `deployment-deploy03`, manually started `beta-code-update-eqiad` job & watched to completion. all caught up
* 04:33 hashar: Restarting Docker on contint1001.wikimedia.org , apparently can't build images anymore


== 2015-07-15 ==
* 20:53 bd808: Added JanZerebecki as deployment-prep root
* 17:53 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/224829/
* 16:10 bd808: sudo rm -rf /tmp/scap_l10n_* on deployment-bastion
* 15:33 bd808: root (/) is full on deployment-bastion, trying to figure out why
* 14:39 bd808: mkdir mira.deployment-prep:/home/l10nupdate because puppet's managehome flag doesn't seem to be doing that :(
* 05:00 bd808: created mira.deployment-prep.eqiad.wmflabs to begin testing multi-master scap

== 2022-06-12 ==
* 21:13 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/804777


== 2015-07-14 ==
* 00:45 bd808: /srv/deployment/scap/scap on deployment-mediawiki02 had corrupt git cache info; moved to scap-corrupt and forced a re-sync
* 00:41 bd808: trebuchet deploy of scap to mediawiki02 failed. investigating
* 00:41 bd808: Updated scap to d7db8de (Don't assume current l10n cache files are .cdb)

== 2022-06-10 ==
* 15:20 James_F: Zuul: [mediawiki/extensions/SearchVue] Add initial CI jobs for [[phab:T309932|T309932]]
* 08:28 hashar: Reloaded Zuul to remove mediawiki/services/parsoid from CI dependencies # https://gerrit.wikimedia.org/r/c/integration/config/+/803990
* 04:27 TimStarling: on deployment-deploy03 running scap sync-world -v with PHP 7.4 for [[phab:T295578|T295578]]
* 04:03 TimStarling: on deployment-deploy03 running scap sync-world -v with PHP 7.2 for [[phab:T295578|T295578]] sanity check


== 2015-07-13 ==
* 20:44 thcipriani: might be some failures, puppetmaster refused to stop as usual, had to kill pid and restart
* 20:39 thcipriani: restarting puppetmaster on deployment-salt, seeing weird errors on instances
* 10:24 hashar: pushed mediawiki/ruby/api tags for versions 0.4.0 and 0.4.1
* 10:12 hashar: deployment-prep: killing puppetmaster
* 10:06 hashar: integration: kicking puppet master. It is stalled somehow

== 2022-06-09 ==
* 22:49 dancy: Upgrading scap to 4.9.1-1+0~20220609211227.304~1.gbpe48c42 in beta cluster
* 16:39 brennen: gitlab shared runners: re-registering to apply image allowlist configuration


== 2015-07-11 ==
* 04:35 bd808: Updated /var/lib/git/labs/private to latest upstream
* 03:54 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/224219/
* 03:54 bd808: fixed rebase conflict with "Enable firejail containment for zotero" by removing stale cherry-pick

== 2022-06-08 ==
* 17:14 hashar: Reloaded Zuul for {{Gerrit|I39342265033e82ae13998f53defe6612dc6819b4}}
* 15:57 dancy: Set `profile::mediawiki::php::restarts::ensure: present` in deployment-prep hiera config for [[phab:T237033|T237033]]
* 09:28 hashar: Reloaded Zuul for "Add doc publish for Translate" https://gerrit.wikimedia.org/r/792134


== July 10 ==
* 16:12 hashar: nodepool puppitization going on :-D
* 03:01 legoktm: deploying https://gerrit.wikimedia.org/r/223992

== 2022-06-06 ==
* 14:37 James_F: Zuul: [mediawiki/extensions/ImageSuggestions] Mark as in production for [[phab:T302711|T302711]]


== July 9 ==
* 22:16 hashar: integration: pulled labs/private.git : dbef45d..d41010d

== 2022-06-02 ==
* 15:33 dancy: Upgrading scap to 4.8.1-1+0~20220602153109.295~1.gbp318d9c in beta cluster
* 11:26 hashar: Restarting Jenkins on contint2001
* 11:19 hashar: Restarting Jenkins on releases1002


== July 8 ==
* 23:17 bd808: Kibana functional again. Imported some dashboards from prod instance.
* 22:48 marxarelli: cherry-picked https://gerrit.wikimedia.org/r/#/c/223691/ on integration-puppetmaster
* 22:33 bd808: about half of the indices on deployment-logstash2 lost. I assume it was caused by shard rebalancing to logstash1 that I didn't notice before I shut it down and deleted it :(
* 22:32 bd808: Upgraded elasticsearch on logstash2 to 1.6.0
* 22:00 bd808: Kibana messed up. Half of the logstash elasticsearch indices are gone from deployment-logstash2
* 21:05 legoktm: deployed https://gerrit.wikimedia.org/r/223669
* 11:47 Krinkle: Reloading Zuul to deploy  https://gerrit.wikimedia.org/r/223530
* 09:26 hashar: upgraded plugins on jenkins and restarting it

== 2022-05-31 ==
* 21:16 dancy: Upgrading scap to 4.8.0-1+0~20220531211114.292~1.gbp8dbbcf in beta cluster
* 17:40 dancy: Upgrading scap to 4.8.0-1+0~20220531173912.291~1.gbp21a7ef in beta cluster
* 17:33 dancy: Reverted to scap 4.8.0-1+0~20220524160924.288~1.gbp794a08 in beta cluster
* 17:07 dancy: Upgrading scap to 4.8.0-1+0~20220531170512.289~1.gbp143729 in beta cluster


== July 7 ==
* 23:58 bd808: updated scap to 303e72e (Increment deployment stats after sync-wikiversions)
* 21:23 bd808: deleted instance deployment-logstash1
* 20:48 marxarelli: cherry-picking https://gerrit.wikimedia.org/r/#/c/158016/ on deployment-salt
* 20:07 bd808: Forced puppet run on deployment-restbase01; run picked up changes that should have been applied yesterday, not sure why puppet wasn't running from cron properly
* 19:58 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/223391/
* 18:51 bd808: restarted puppetmaster on deployment-salt to pick up logging config changes
* 18:14 bd808: Changed role::protoproxy::ssl::beta to role::tlsproxy::ssl::beta for deployment-cache-*
* 18:10 bd808: puppet broken on deployment-cache-* by https://gerrit.wikimedia.org/r/#/c/222124/
* 15:45 bd808: Cherry-picked https://gerrit.wikimedia.org/r/#/c/223301/

== 2022-05-30 ==
* 11:47 jelto: apply gitlab-settings to gitlab1004 - [[phab:T307142|T307142]]
* 11:46 jelto: apply gitlab-settings to gitlab1003 - [[phab:T307142|T307142]]


== July 6 ==
* 23:34 marxarelli: Reloading Zuul to deploy I33ac72e7df498e58f0e25d8c59f167d13eae06cf
* 23:24 bd808: restarted nutcracker on deployment-mediawiki01
* 21:32 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/223184/ to deployment-salt
* 20:57 bd808: restarted puppetmaster on deployment-salt
* 20:55 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/223172/ for testing
* 20:50 hashar: removing lanthanum from Jenkins slave configuration. Server is gone ( https://phabricator.wikimedia.org/T86658 )
* 20:34 hashar: lanthanum: deleting gerrit replicas under /srv/ssd/gerrit
* 20:32 hashar: Gerrit: reloading replication plugin: <tt>gerrit plugin reload replication</tt>
* 14:08 hashar: Disconnected lanthanum Jenkins slave. Being phased out https://phabricator.wikimedia.org/T86658

== 2022-05-28 ==
* 19:09 TheresNoTime: deployment-deploy04 live, not referenced by anything [[phab:T309437|T309437]]


== July 3 ==
* 14:07 hashar: adding puppetmaster::certcleaner class to integration and beta  puppetmaster
* 14:03 hashar: rebased puppetmaster on integration project
* 13:59 hashar: removing puppetmaster::autosigner from integration-puppetmaster
* 13:58 hashar: removing puppetmaster::autosigner from deployment-salt. It is now automatic per https://gerrit.wikimedia.org/r/#/c/220306/
* 13:55 hashar: restarted puppetmaster on deployment-salt
* 05:20 legoktm: deploying https://gerrit.wikimedia.org/r/222539
* 01:18 legoktm: deploying https://gerrit.wikimedia.org/r/166074
* 00:41 legoktm: deploying https://gerrit.wikimedia.org/r/222503

== 2022-05-27 ==
* 22:55 zabe: zabe@deployment-mwmaint02:~$ mwscript extensions/WikiLambda/maintenance/updateTypedLists.php --wiki=wikifunctionswiki --db # started ~20 min ago
* 22:49 TheresNoTime: manually running database update script: samtar@deployment-deploy03:~$ /usr/local/bin/wmf-beta-update-databases.py
* 22:09 TheresNoTime: samtar@deployment-deploy03:~$ sudo keyholder arm
* 21:44 TheresNoTime: hard rebooted deployment-deploy03 as soft reboot unresponsive
* 21:44 bd808: `sudo wmcs-openstack role add --user zabe --project deployment-prep projectadmin` ([[phab:T309419|T309419]])
* 21:10 zabe: zabe@deployment-deploy03:~$ sudo keyholder arm
* 20:53 bd808: `sudo wmcs-openstack role add --user samtar --project deployment-prep projectadmin` ([[phab:T309415|T309415]])
* 20:49 dancy: Initiated hard reboot of deployment-deploy03.deployment-prep


== July 2 ==
* 10:07 hashar: adding mobrovac to the integration project so he can ssh to slaves and sudo as jenkins-deploy user

== 2022-05-26 ==
* 18:33 dancy: Updated Jenkins beta-* job configs
* 16:51 TheresNoTime: manually triggered beta-update-databases-eqiad post-merge of {{Gerrit|2c7b5825}}
* 16:51 brennen: puppetmaster-1001.devtools: resetting ops/puppet checkout to production branch


== July 1 ==
* 15:44 hashar: Kunal awesome dashboard for repos https://www.mediawiki.org/wiki/User:Legoktm/ci
* 15:34 hashar: https://integration.wikimedia.org/ci/job/mediawiki-core-phpcs-HEAD/ is fixed. populated the git repos manually
* 15:21 hashar: manually populating mediawiki/core on Precise instances for mediawiki-core-phpcs-HEAD job using: <tt>git config remote.origin.url https://gerrit.wikimedia.org/r/p/mediawiki/core</tt> <tt>git fetch</tt>
* 15:14 hashar: https://integration.wikimedia.org/ci/job/mediawiki-core-phpcs-HEAD/ broken while cloning mediawiki/core :-(
* 10:47 hashar: puppet fixed by restarting the puppet master
* 10:41 hashar: restarting Jenkins
* 10:40 hashar: upgrading Jenkins gearman plugin from 0.1.1-8-gf2024bd to 0.1.1-9-g08e9c42-change_192429_2  https://phabricator.wikimedia.org/T72597#1416913
* 10:38 hashar: restarted puppetmaster on integration
* 10:36 hashar: Error: /Stage[main]/Ldap::Client::Utils/File[/usr/local/sbin/archive-project-volumes]: Could not evaluate: Could not retrieve information from environment production source(s) puppet:///modules/ldap/scripts/archive-project-volumes
* 10:36 hashar: integration: puppet now fails on instances :-/
* 10:29 hashar: rebased puppet.git on integration-puppetmaster.  Autoupdater was blocked by a couple 3-way merges.

== 2022-05-25 ==
* 18:38 TheresNoTime: (@ ~18:20UTC) samtar@deployment-mwmaint02:~$ mwscript resetUserEmail.php --wiki=wikidatawiki Mahir256 [snip] [[phab:T309230{{!}}T309230]]
* 15:46 dancy: Restarted apache2 on gerrit1001


== June 30 ==
* 10:07 hashar: deployment-bastion sudo -u l10nupdate bash -c 'cd /srv/l10nupdate/mediawiki/extension && git submodule foreach git gc'
* 09:43 hashar: deployment-bastion sudo -u jenkins-deploy bash -c 'cd /srv/mediawiki-staging/php-master/extensions && git submodule foreach git gc'
* 09:40 hashar: deployment-bastion sudo -u l10nupdate bash -c 'cd /srv/l10nupdate/mediawiki/core/.git && git gc'
* 09:39 hashar: deployment-bastion: sudo -u l10nupdate bash -c 'cd /srv/l10nupdate/mediawiki/extensions/.git && git gc'
* 09:38 hashar: deployment-bastion sudo -u jenkins-deploy bash -c 'cd /srv/mediawiki-staging/php-master/extensions/.git && git gc'
* 09:31 hashar: beta: running git gc on deployment-bastion Trebuchet directories. As trebuchet: find /srv/deployment/*/*/.git -type d -name .git -print -exec bash -c 'cd {} && git gc' \;
* 07:09 legoktm: deploying https://gerrit.wikimedia.org/r/221835

== 2022-05-24 ==
* 15:15 dancy: Upgrading scap to 4.7.1-1+0~20220524151055.286~1.gbpe809e8 in beta cluster
* 13:35 James_F: Zuul: [mediawiki/tools/code-utils] Add composer test CI for [[phab:T309099|T309099]]
* 11:36 TheresNoTime: cleared stuck beta deployment jobs per https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code/db_update


== June 29 ==
* 23:19 bd808: Moved logstash irc bot from logstash1 to logstash2
* 22:25 legoktm: deploying https://gerrit.wikimedia.org/r/221749
* 18:08 thcipriani: restarted nutcracker on beta cluster salt '*-mediawiki*' cmd.run 'service nutcracker restart'
* 10:42 hashar: manually rebasing integration-puppetmaster git repo
* 10:24 hashar: restarted puppetmater on deployment-salt
* 10:23 hashar: puppet master stalled due to: [ldap-yaml-enc.p] <defunct> .  Killing it
* 10:21 hashar: sees beta cluster puppetmaster is suffering from some random issue

== 2022-05-23 ==
* 19:21 inflatador: Deleted deployment-elastic0[5-7] in favor of newer bullseye hosts [[phab:T299797|T299797]]
* 18:37 dancy: Reverted to scap 4.7.1-1+0~20220505181519.270~1.gbpeb47ae in beta cluster
* 18:35 dancy: Upgrading beta cluster scap to 4.7.1-1+0~20220523183110.280~1.gbpaa0826
* 14:49 James_F: Zuul: Enforce Postgres and SQLite support via in-mediawiki-tarball
* 08:37 elukey: move kafka jumbo in deployment-prep to fixed uid/gid - [[phab:T296982|T296982]]
* 08:29 elukey: move kafka main in deployment-prep to fixed uid/gid - [[phab:T296982|T296982]]
* 08:06 elukey: move kafka logging in deployment-prep to fixed uid/gid - [[phab:T296982|T296982]]


== June 27 ==
* 02:42 legoktm: deploying https://gerrit.wikimedia.org/r/221343 & https://gerrit.wikimedia.org/r/221344
* 02:36 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/221342
* 02:22 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/221338
* 02:15 legoktm: deploying https://gerrit.wikimedia.org/r/#/c/221337/
* 01:56 legoktm: deploying https://gerrit.wikimedia.org/r/221333 & https://gerrit.wikimedia.org/r/221334
* 01:42 legoktm: deploying https://gerrit.wikimedia.org/r/221331
* 01:36 legoktm: deploying https://gerrit.wikimedia.org/r/221330
* 01:28 legoktm: deploying https://gerrit.wikimedia.org/r/221329
* 01:13 legoktm: deploying https://gerrit.wikimedia.org/r/221328
* 00:15 legoktm: deploying https://gerrit.wikimedia.org/r/221316 & https://gerrit.wikimedia.org/r/221318

== 2022-05-22 ==
* 18:39 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/795818/


== June 26 ==
* 22:39 marxarelli: Reloading Zuul to deploy I3deec5e5a7ce7eee75268d0546eafb3e4145fdc7
* 22:20 marxarelli: Reloading Zuul to deploy I7affe14e878d5c1fc4bcb4dfc7f2d1494cd795b7
* 21:45 legoktm: deploying https://gerrit.wikimedia.org/r/221295
* 21:21 marxarelli: running `jenkins-jobs update` to deploy I7affe14e878d5c1fc4bcb4dfc7f2d1494cd795b7
* 18:46 marxarelli: running `jenkins-jobs update '*bundle*'` to deploy Icb31cf57bee0483800b41a2fb60d236fcd2d004e

== 2022-05-21 ==
* 23:05 legoktm: deployed https://gerrit.wikimedia.org/r/c/integration/config/+/794756/
* 14:11 hashar: Icinga reports `Gerrit Health Check SSL Expiry` errors filed as [[phab:T308908|T308908]]


== June 25 ==
* 23:38 legoktm: deploying https://gerrit.wikimedia.org/r/221001
* 21:21 thcipriani: updated deployment-salt to match puppet by rm /var/lib/git/operations/puppet/modules/cassandra per godog's instructions
* 19:09 hashar: purged all WikidataQuality workspaces.  Got renamed to WikibaseQuality*
* 14:22 jzerebecki: reloading zuul for https://gerrit.wikimedia.org/r/#/c/220737/2
* 14:20 jzerebecki: killing a fellows idle shell zuul@gallium:~$ kill 13602
* 11:03 hashar: Rebooting  integration-raita and integration-vmbuilder-trusty
* 11:01 hashar: Unmounting /data/project and /home NFS mounts from integration-raita and integration-vmbuilder-trusty https://phabricator.wikimedia.org/T90610
* 10:45 hashar: deployment-sca02 deleted /var/lib/puppet/state/agent_catalog_run.lock from June 5th
* 08:57 hashar: Fixed puppet "Can't dup Symbol" on deployment-pdf01  by deleting puppet, /var/lib/puppet and reinstalling it from scratch https://phabricator.wikimedia.org/T87197
* 08:39 hashar: apt-get upgrade deployment-salt
* 08:08 hashar: deployment-pdf01 deleted /var/log/ocg/ content. Last entry is from July 25th 2014 and puppet complains with <tt>e[/var/log/ocg]: Not removing directory; use 'force' to override</tt>
* 08:04 hashar: apt-get upgrade deployment-pdf01
* 06:37 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/220712
* 06:33 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/220705

== 2022-05-20 ==
* 16:21 hashar: Reloaded Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/793809


== June 24 ==
* 19:31 hashar: rebooting deployment-cache-upload02
* 19:28 hashar: fixing DNS puppet etc on deployment-cache-upload02
* 19:24 hashar: rebooting deployment-zookeeper to get rid of the /home NFS https://phabricator.wikimedia.org/T102169
* 19:06 hashar: beta: salt 'i-00*' cmd.run "echo 'domain integration.eqiad.wmflabs\nsearch integration.eqiad.wmflabs eqiad.wmflabs\nnameserver 208.80.154.20\noptions timeout:5' > /etc/resolv.conf"
* 19:06 hashar: fixing DNS / puppet and salt on i-000008d5.eqiad.wmflabs  i-000002de.eqiad.wmflabs i-00000958.eqiad.wmflabs
* 15:35 hashar: integration-dev recovered!  puppet hasn't run for ages but caught up with changes
* 15:13 hashar: removed /var/lib/puppet/state/agent_catalog_run.lock on integration-dev
* 09:52 hashar: Java 6 removed from gallium / lanthanum and CI labs slaves.
* 09:18 hashar: getting rid of java 6 on CI machines ( https://phabricator.wikimedia.org/T103491 )
* 07:58 hashar: Bah puppet reenable NFS on deployment-parsoidcache02 for some reason
* 07:57 hashar: disabling NFS on deployment-parsoidcache02
* 00:38 marxarelli: reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/219513/
* 00:32 marxarelli: running `jenkins-jobs update` to create 'mwext-MobileFrontend-mw-selenium' with I7affe14e878d5c1fc4bcb4dfc7f2d1494cd795b7
* 00:20 marxarelli: running `jenkins-jobs update` to create 'mediawiki-selenium-integration' with I7affe14e878d5c1fc4bcb4dfc7f2d1494cd795b7

== 2022-05-19 ==
* 19:34 hashar: Reloaded Zuul for https://gerrit.wikimedia.org/r/793527
* 14:31 hashar: Reloaded zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/793458 {{!}} Don't re-trigger the test pipeline on patches with C+2 already


== 2022-05-18 ==
* 19:31 hashar: Reloaded Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/793028
* 18:45 brennen: gitlab: created placeholder /repos/mediawiki group for squatting purposes
* 08:29 hashar: Updating the SSH Build Agent plugin from 1.31.5 to 1.32.0 on CI Jenkins to prevent an issue when uploading `remoting.jar` # [[phab:T307339|T307339]]#7937268
* 07:32 hashar: Deleting Jenkins agent configuration for `integration-castor03` # [[phab:T252071|T252071]]


== 2015-06-23 ==
* 23:29 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/220350
* 21:34 bd808: updated scap to 33f3002 (Ensure that the minimum batch size used by cluster_ssh is 1)
* 19:53 legoktm: deleted broken renames from centralauth.renameuser_status on beta cluster
* 18:28 jzerebecki: zuul reload for https://gerrit.wikimedia.org/r/#/c/219778/4
* 16:33 bd808: updated scap to da64a65 (Cast pid read from file to an int)
* 16:20 bd808: updated scap to 947b93f (Fix reference to _get_apache_list)
* 12:24 hashar: rebooting integration-labvagrant (stuck)
* 00:07 legoktm: deploying https://gerrit.wikimedia.org/r/220020


== 2022-05-17 ==
* 23:26 James_F: Zuul: [mediawiki/extensions/Phonos] Install basic quibble CI for [[phab:T308558|T308558]]


== 2015-06-22 ==
* 22:23 legoktm: deploying https://gerrit.wikimedia.org/r/219603
* 21:47 bd808: scap emitting soft failures due to missing python-netifaces on deployment-videoscaler01; should be fixed by a current puppet run
* 21:37 bd808: Updated scap to 81b7c14 (Move dsh group file names to config)
* 14:58 hashar: disabled sshd MAC/KEX hardening on beta (was https://gerrit.wikimedia.org/r/#/c/219828/ )
* 14:32 hashar: restarting Jenkins
* 14:30 hashar: Re-enabled sshd MAC/KEX hardening on beta by cherry-picking https://gerrit.wikimedia.org/r/#/c/219828/
* 13:17 moritzm: activated firejail service containment for graphoid, citoid and mathoid in deployment-sca
* 11:07 hashar: fixing puppet on integration-zuul-server
* 10:29 hashar: rebooted deployment-kafka02 to get rid of /home NFS share
* 10:25 hashar: fixed puppet.conf on deployment-urldownloader
* 10:20 hashar: enabled puppet agent on deployment-urldownloader
* 10:05 hashar: removing puppet lock on deployment-elastic07 ( rm /var/lib/puppet/state/agent_catalog_run.lock )
* 09:40 hashar: fixed puppet certificates on integration-lightslave-jessie-1002 by deleting the SSL certs
* 09:31 hashar: can't reach integration-lightslave-jessie-1002, probably NFS related
* 09:22 hashar: upgrading Jenkins gearman plugin from 0.1.1 to latest master (f2024bd).


== 2022-05-16 ==
* 19:31 inflatador: bking@deployment-elastic07 halted deployment-elastic07 in beta ES cluster; will decom on Friday [[phab:T299797|T299797]]
* 19:02 inflatador: bking@deployment-elastic06 halted deployment-elastic06 in beta ES cluster; will decom on Friday [[phab:T299797|T299797]]
* 08:33 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/791809


== 2015-06-21 ==
* 02:40 legoktm_: deploying https://gerrit.wikimedia.org/r/219401


== 2022-05-14 ==
* 23:19 James_F: Zuul: Add Dreamy_Jazz to CI allow list
* 23:17 James_F: Zuul: [mediawiki/extensions/LocalisationUpdate] Move out of production section
* 20:25 urbanecm: add TheresNoTime (samtar) as a project member per request


== 2015-06-20 ==
* 03:12 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/219449


== 2022-05-13 ==
* 22:59 James_F: Zuul: [mediawiki/extensions/SocialProfile] Add WikiEditor as a CI dependency
* 22:52 James_F: Zuul: Add Tranve to CI allow list
* 22:01 hashar: reloaded zuul for https://gerrit.wikimedia.org/r/791688
* 18:58 inflatador: bking@deployment-elastic05 halted deployment-elastic05 in beta ES cluster; will decom in 1 wk [[phab:T299797|T299797]]
* 17:18 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/791644/
* 13:16 taavi: added user Zoranzoki21 to extension-HidePrefix gerrit group [[phab:T305317|T305317]]


== 2015-06-19 ==
* 18:39 thcipriani: running `salt -b 2 '*' cmd.run 'puppet agent -t'` from deployment salt to remount /data/projects
* 18:36 thcipriani: added role::deployment::repo_config to deployment-prep hiera, to be removed after patched in ops/puppet
* 16:48 thcipriani: primed keyholder on deployment-bastion
* 15:35 hashar: nodepool manages to boot instances and ssh to them. Now attempting to add them as slaves in Jenkins!


== 2022-05-12 ==
* 22:09 inflatador: bking@deployment-elastic05 banned deployment-elastic05 from beta ES cluster in preparation for decom [[phab:T299797|T299797]]
* 19:53 hashar: gerrit: triggering full replication to gerrit2001 to test [[phab:T307137|T307137]]
* 16:00 hashar: contint2001 and contint1001 now automatically run `docker system prune --force` every day and `docker system prune --force` on Sunday {{!}} https://gerrit.wikimedia.org/r/c/operations/puppet/+/773784/
* 15:05 brennen: gitlab-prod-1001.devtools: soft reboot
* 00:46 brennen: gitlab: disabling container registries on all existing projects ([[phab:T307537|T307537]])


== 2015-06-17 ==
* 20:43 legoktm: deploying https://gerrit.wikimedia.org/r/219021
* 18:56 legoktm: deploying https://gerrit.wikimedia.org/r/218981
* 16:40 legoktm: deploying https://gerrit.wikimedia.org/r/218938 & https://gerrit.wikimedia.org/r/218939
* 14:16 jzerebecki: deploying zuul config ca3bd69..00eb921
* 13:53 jzerebecki: applying https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Gearman_deadlock
* 13:00 jzerebecki: done
* 12:32 jzerebecki: also needed to kill a few beta jobs, like https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update says. Now proceeding with https://gerrit.wikimedia.org/r/#/c/214603/8
* 12:23 jzerebecki: before doing that, actually trying https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Jenkins_execution_lock to try to unlock https://integration.wikimedia.org/ci/computer/deployment-bastion.eqiad/
* 12:17 jzerebecki: changing many jenkins jobs while deploying https://gerrit.wikimedia.org/r/#/c/214603/8


== 2022-05-11 ==
* 23:20 brennen: gitlab-prod-1001.devtools: container registry currently enabled
* 18:58 brennen: gitlab-prod-1001.devtools: setting to use devtools standalone puppetmaster


== 2015-06-16 ==
* 15:55 bd808: Resolved rebase conflicts on deployment-salt caused by code review changes of https://gerrit.wikimedia.org/r/#/c/216325 prior to merge
* 13:05 hashar: upgrading HHVM on CI trusty slaves https://phabricator.wikimedia.org/T102616 <tt>salt -v -t 30 --out=json -C 'G@oscodename:trusty and *slave*' pkg.install pkgs='["hhvm","hhvm-dev","hhvm-fss","hhvm-luasandbox","hhvm-tidy","hhvm-wikidiff2"]'</tt>
* 11:45 hashar: integration-slave-trusty-1021 downgrading hhvm plugins to match hhvm 3.3.1
* 11:42 hashar: integration-slave-trusty-1021 downgrading hhvm, hhvm-dev from 3.3.6 to 3.3.1
* 11:19 hashar: rebooting integration-dev, unreachable
* 11:09 hashar: apt-get upgrade on integration-slave-trusty-1021
* 08:19 hashar: rebooting integration-slave-jessie-1001, unreachable


== 2022-05-10 ==
* 12:06 hashar: Updating Quibble jobs to image 1.4.5 with Memcached enabled {{!}} https://gerrit.wikimedia.org/r/c/integration/config/+/790641 {{!}} [[phab:T300340|T300340]]
* 10:55 hashar: Updating `wmf-quibble-*` jobs to Quibble 1.4.5 # https://gerrit.wikimedia.org/r/c/integration/config/+/790638/
* 08:36 hashar: Updating wikibase-client-docker and wikibase-repo-docker to Quibble 1.4.5 + supervisord https://gerrit.wikimedia.org/r/c/integration/config/+/790621
* 08:30 hashar: Updating MediaWiki coverage jobs to Quibble image 1.4.5 + supervisord https://gerrit.wikimedia.org/r/c/integration/config/+/790381
* 08:24 hashar: Updating codehealth jobs to Quibble 1.4.5 + supervisord https://gerrit.wikimedia.org/r/c/integration/config/+/790380/
* 08:23 hashar: Updating MediaWiki Phan jobs to Quibble 1.4.5 https://gerrit.wikimedia.org/r/c/integration/config/+/790377


== 2015-06-15 ==
* 23:39 legoktm: deploying https://gerrit.wikimedia.org/r/218549
* 23:22 legoktm: deploying https://gerrit.wikimedia.org/r/218527
* 21:10 bd808: Put cherry-picks of https://gerrit.wikimedia.org/r/#/c/216325/ and https://gerrit.wikimedia.org/r/#/c/216337/ back on deployment-salt
* 19:59 hashar: manually rebased puppet repo on integration-puppetmaster (some patch got merged)
* 17:21 legoktm: deploying https://gerrit.wikimedia.org/r/218391
* 15:02 hashar: rebooting integration-slave-jessie-1001.integration.eqiad.wmflabs (unresponsive)
* 14:37 hashar: rebooting integration-dev since it is unresponsive
* 13:22 hashar: cleaned integration-puppetmaster certificate
* 13:09 hashar: deleting integration-saltmaster puppet cert


== 2022-05-09 ==
* 21:43 James_F: Beta Cluster: Shutting down old deployment-restbase03 instance for [[phab:T295375|T295375]]
* 20:33 hashar: Manually cancelling deadlocked build jobs for beta https://integration.wikimedia.org/ci/view/Beta/ # [[phab:T307963|T307963]]


== 2015-06-13 ==
* 07:53 legoktm: deploying https://gerrit.wikimedia.org/r/217997
* 03:42 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/217993
* 01:11 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/217982


== 2022-05-08 ==
* 12:33 urbanecm: deployment-prep: urbanecm@deployment-mwmaint02:~$ foreachwikiindblist growthexperiments extensions/GrowthExperiments/maintenance/migrateMenteeOverviewFiltersToPresets.php --update # [[phab:T304057|T304057]]


== 2015-06-12 ==
* 21:29 jzerebecki: reloading zuul with 9ceb1ea..3b862a7 for https://gerrit.wikimedia.org/r/#/c/176377/3
* 21:22 legoktm: deploying https://gerrit.wikimedia.org/r/217448
* 19:02 jzerebecki: done
* 19:00 jzerebecki: doing https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Gearman_deadlock
* 14:56 jzerebecki: reloaded zuul 38c009d..6753a47


== 2022-05-06 ==
* 12:55 hashar: Migrated Castor service from integration-castor03 to integration-castor05 # [[phab:T252071|T252071]]


== 2015-06-11 ==
* 14:44 hashar: deployment-prep and integration labs projects got migrated out of ec2id. Flawless / self-maintaining task thanks to Andrew B.!
* 14:38 hashar: integration-saltmaster: salt-key --accept-all --yes
* 14:30 hashar: rebasing puppetmaster on integration-puppetmaster ca27502..c409503
* 14:28 hashar: rebasing puppetmaster on deployment-salt ca27502..c409503
* 14:28 hashar: cert madness on integration and deployment-prep ( https://gerrit.wikimedia.org/r/#/c/202924/ )
* 10:44 hashar: operations-dns-lint can't be migrated yet until we figure out a solution to provide some missing GeoIP file https://phabricator.wikimedia.org/T98737
* 10:33 hashar: integration: pooling https://integration.wikimedia.org/ci/computer/integration-lightslave-jessie-1002/ with labels <tt>DebianJessie</tt> and <tt>contintLabsSlave</tt>. Does not have the Zuul package installed though.
* 10:27 hashar: integration: do not install zuul on light slaves (i.e. integration-lightslave-jessie-1002). Jessie does not have a zuul package yet https://gerrit.wikimedia.org/r/#/c/217476/1
* 10:03 hashar: integration: cherry-picked https://gerrit.wikimedia.org/r/#/c/217466/1 and https://gerrit.wikimedia.org/r/#/c/217467/1 and applied role::ci::slave::labs::light to integration-lightslave-jessie-1002
* 09:41 hashar: [[Hiera:Integration]]: changed puppet master from 'integration-puppetmaster' to 'integration-puppetmaster.integration.eqiad.wmflabs' https://phabricator.wikimedia.org/T102108
* 09:20 hashar: creating integration-lightslave-jessie-1002, an m1.small (1 CPU) instance that will be a very basic Jenkins slave. The reason is role::ci::slave::labs includes too many things which are not ready for Jessie yet ( https://phabricator.wikimedia.org/T94836 ). Will let us migrate operations-dns-lint to it since prod switched to Jessie (https://phabricator.wikimedia.org/T98003)


== 2022-05-05 ==
* 22:57 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789723
* 22:31 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789721
* 22:28 dduvall: created 2 new jobs to deploy https://gerrit.wikimedia.org/r/789720
* 22:24 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789718
* 22:21 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/789717
* 22:15 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789714
* 22:13 dduvall: created 2 new jobs to deploy https://gerrit.wikimedia.org/r/789713
* 22:09 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789711
* 22:07 dduvall: created 2 new jobs to deploy https://gerrit.wikimedia.org/r/789710
* 21:57 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789707/1
* 21:51 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789706
* 21:48 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789704
* 21:44 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789703
* 21:38 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789698
* 21:35 dduvall: created 4 jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789697
* 21:26 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789694
* 21:22 dduvall: creating 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789693
* 18:27 dduvall: re-enabled puppet on integration-agent-docker-1023.integration.eqiad1.wikimedia.cloud
* 18:25 dancy: Update to scap 4.7.1-1+0~20220505181519.270~1.gbpeb47ae in beta cluster
* 18:16 dduvall: disabled puppet on integration-agent-docker-1023.integration.eqiad1.wikimedia.cloud for deployment of https://gerrit.wikimedia.org/r/c/operations/puppet/+/768774
* 16:29 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789650
* 16:26 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789649
* 14:25 hashar: Created integration-castor05
* 12:28 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/789179 and https://gerrit.wikimedia.org/r/789232
* 07:45 hashar: deployment-prep: removed a few queued Jenkins builds from https://integration.wikimedia.org/ci/view/Beta/


== 2015-06-10 ==
* 20:18 legoktm: deploying https://gerrit.wikimedia.org/r/217277
* 10:42 hashar: restarted jobchron/jobrunner on deployment-jobrunner01
* 10:42 hashar: manually nuked and repopulated jobqueue:aggregator:s-wikis:v2 on deployment-redis01. It now only contains entries from all-labs.dblist
* 09:46 hashar: deployment-videoscaler restarted jobchron
* 08:19 mobrovac: reboot deployment-restbase01 due to ssh problems


== 2022-05-04 ==
* 21:29 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789285
* 21:16 dduvall: created 1 new job to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789284
* 21:07 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789278
* 21:00 dduvall: created 2 jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789277
* 20:48 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/789274
* 20:44 dduvall: creating 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789273
* 20:31 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789265
* 20:25 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789264
* 20:22 urbanecm: urbanecm@deployment-mwmaint02:~$ mwscript extensions/CentralAuth/maintenance/fixStuckGlobalRename.php --wiki=commonswiki --logwiki=metawiki "There'sNoTime" "TheresNoTime" # [[phab:T307590|T307590]]
* 20:14 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789259/1
* 20:11 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789258
* 18:54 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789245
* 18:47 dduvall: creating 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789244
* 18:31 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789238
* 18:24 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789237
* 17:51 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789225
* 17:22 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789218
* 17:12 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789217
* 16:11 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789204
* 16:01 dduvall: created 2 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789203
* 16:01 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789195
* 15:42 dduvall: created 2 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/789194
* 13:44 James_F: Zuul: [mediawiki/services/function-evaluator] Use bespoke pipeline jobs only [[phab:T307507|T307507]]


== 2015-06-09 ==
* 22:13 thcipriani: are we back?
* 17:31 twentyafterfour: Branching 1.26wmf9
* 17:10 hashar: restart puppet master on deployment-salt. Was overloaded with wait I/O since roughly 1am UTC
* 16:56 hashar: restarted puppetmaster on deployment-salt


== 2022-05-03 ==
* 23:35 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/788871
* 23:23 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/788868
* 22:03 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/788806
* 22:01 dduvall: created 4 new jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/788806
* 21:40 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/788798
* 21:27 dduvall: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/788799
* 21:25 dduvall: created trigger-pipelinelib-pipeline-test and pipelinelib-pipeline-test jobs for https://gerrit.wikimedia.org/r/c/integration/config/+/788799
* 11:50 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/788682


== 2015-06-08 ==
* 14:12 hashar: clearing disk space on trusty 1011 and 1012
* 08:56 hashar: rebooted trusty-1013 and trusty-1015 ( https://phabricator.wikimedia.org/T101658 ) and repooled them in Jenkins
* 08:48 hashar: rebooting integration-slave-trusty-1012 (stalled, can't log in)
* 04:30 legoktm: deploying https://gerrit.wikimedia.org/r/216520
* 00:40 legoktm: deploying https://gerrit.wikimedia.org/r/216600


== 2022-05-02 ==
* 15:09 dancy: Updating beta cluster scap to 4.7.1-1+0~20220502085300.264~1.gbp367de7?
* 10:06 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/786934 # [[phab:T301766|T301766]]


== 2015-06-07 ==
* 20:43 Krinkle: Rebooting integration-slave-trusty-1015 to see if it comes back so we can inspect logs (T101658)
* 20:16 Krinkle: Per Yuvi's advice, disabled "Shared project storage" (/data/project NFS mount) for the integration project. Mostly unused. Two existing directories were archived to /home/krinkle/integration-nfs-data-project/
* 17:51 Krinkle: integration-slave-trusty-1012, trusty-1013 and 1015 unresponsive to pings or ssh. Other trusty slaves still reachable.


== 2022-04-29 ==
* 21:49 brennen: created https://gitlab.wikimedia.org/toolforge-repos and https://gitlab.wikimedia.org/cloudvps-repos for cloud tenants ([[phab:T305301|T305301]])
* 18:37 James_F: Zuul: Add SimilarEditors dependency on QuickSurveys extension for [[phab:T297687|T297687]]


== 2015-06-06 ==
* 21:05 legoktm: deploying https://gerrit.wikimedia.org/r/216500


== 2022-04-28 ==
* 20:31 James_F: Zuul: Add PHP81 as voting for libraries, PHP extensions etc. for [[phab:T293509|T293509]]
* 18:57 brennen: finished editing mediawiki-new-errors
* 18:50 brennen: adding some filters to mediawiki-new-errors, including one based on https://wikitech.wikimedia.org/wiki/Performance/Runbook/Kibana_monitoring#Filtering_by_query_string
* 09:03 hashar: Gerrit upgraded to 3.4.4 at roughly 08:00 UTC


== 2015-06-05 ==
* 23:55 bd808: added deployment-logstash2 host and told cluster to move all logstash data there
* 21:22 bd808: restarted puppetmaster on deployment-salt ("Could not request certificate: Error 500 on SERVER: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">")
* 21:17 hashar: Pooled in mediawiki-extensions-qunit which runs qunit tests with karma for multiple extensions. https://gerrit.wikimedia.org/r/#/c/216132/ https://phabricator.wikimedia.org/T99877
* 19:45 thcipriani: set use_dnsmasq: false on Hiera:Integration
* 19:40 hashar: refreshed Jenkins jobs mediawiki-extensions-hhvm and mediawiki-extensions-zend with https://gerrit.wikimedia.org/r/#/c/216100/3 (refactoring)
* 18:56 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/216182
* 18:52 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/216159


== 2022-04-27 ==
* 19:06 hashar: Updating operations/software/gerrit branches and tags from upstream # [[phab:T292759|T292759]]
* 15:20 hashar: Updating non-quibble jobs to composer 2.3.3 {{!}} [[phab:T303867|T303867]] {{!}} https://gerrit.wikimedia.org/r/c/integration/config/+/777029


== 2015-06-04 ==
* 18:06 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/214501
* 16:50 legoktm: deploying https://gerrit.wikimedia.org/r/215935
* 15:07 hashar: integration-jessie-slave1001: upgrading salt from 2014.1.13 to 2014.7.5
* 14:58 thcipriani: running sudo salt '*' cmd.run 'sed -i "s/GlobalSign_CA.pem/ca-certificates.crt/" /etc/ldap/ldap.conf' on integration-saltmaster
* 14:54 hashar: integration-jessie-slave1001: running dpkg --configure -a
* 09:26 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/215870


== 2022-04-26 ==
* 15:40 brennen: train 1.39.0-wmf.9 ([[phab:T305215|T305215]]): no current blockers - expect to start train ops after the toolhub deployment window wraps, so some time after 17:00 UTC; taking a pre-train stroll-around-the-block break before that.
* 13:46 James_F: Deleting deployment-mx02.deployment-prep.eqiad1.wikimedia.cloud for [[phab:T306068|T306068]]
* 13:38 James_F: Zuul: [mediawiki/extensions/SimilarEditors] Install basic prod CI for [[phab:T306897|T306897]]
* 12:33 hashar: Manually pruned dangling docker images on contint1001 and contint2001
* 08:30 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/780824
* 08:09 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/785204


== 2015-06-03 ==
* 23:31 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/209991
* 20:49 hashar: restarted zuul entirely to remove some stalled jobs
* 20:47 marxarelli: Reloading Zuul to deploy I96649bc92a387021a32d354c374ad844e1680db2
* 20:28 hashar: Restarting Jenkins to release a deadlock
* 20:22 hashar: deployment-bastion Jenkins slave is stalled again :-( No code update happening on beta cluster
* 18:50 thcipriani: change use_dnsmasq: false for deployment-prep
* 18:24 thcipriani: updating deployment-salt puppet in prep for use_dnsmasq=false
* 11:58 kart_: Cherry-picked 213840 to test logstash
* 10:08 hashar: Updated JJB fork again f966521..4135e14. Will remove the http notification to zuul {{bug|T93321}}. REFRESHING ALL JOBS!
* 10:03 hashar: Further updated JJB fork c7231fe..f966521
* 09:10 hashar: Refreshing almost all Jenkins jobs to take into account the Jenkins Git plugin upgrade https://phabricator.wikimedia.org/T101105
* 03:07 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/215571


== 2022-04-25 ==
* 17:29 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/779450
* 15:31 James_F: Zuul: [mediawiki/extensions/RegularTooltips] Add basic quibble CI


== 2015-06-02 ==
* 20:58 bd808: redis-cli srem "deploy:scap/scap:minions" i-000002f4.eqiad.wmflabs
* 20:54 bd808: deleted unused deployment-rsync01 instance
* 20:49 bd808: Updated scap to 62d5cb2 (Lint JSON files)
* 20:40 marxarelli: cherry-picked https://gerrit.wikimedia.org/r/#/c/208024/ on integration-puppetmaster
* 20:38 marxarelli: manually rebased operations/puppet on integration-puppetmaster to fix empty commit from cherry-pick
* 17:01 hashar: updated JJB fork to e3199d9..c7231fe
* 15:16 hashar: updated integration/jenkins-job-builder to e3199d9
* 13:16 hashar: restarted deployment-salt


== 2022-04-20 ==
* 16:25 zabe: root@deployment-cache-upload06:~# touch /srv/trafficserver/tls/etc/ssl_multicert.config && systemctl reload trafficserver-tls.service


== 2015-06-01 ==
* 08:18 hashar: Jenkins: upgrading git plugin from 1.5.0 to latest


== 2022-04-18 ==
* 19:27 brennen: gitlab runners: deleting a number of stale runners with no contact in > 2 months which are most likely no longer extant
* 16:49 brennen: phabricator: created phame blog https://phabricator.wikimedia.org/phame/blog/view/22/ for [[phab:T306329|T306329]]
* 16:48 brennen: phabricator: adding self to acl*blog-admins
* 15:33 James_F: Shutting off deployment-wdqs01 from the Beta Cluster project per [[phab:T306054|T306054]]; it's apparently unused, so this shouldn't break anything.


== 2015-05-31 ==
* 21:31 legoktm: deploying https://gerrit.wikimedia.org/r/214982
* 20:50 legoktm: deploying https://gerrit.wikimedia.org/r/214939
* 00:59 legoktm: deployed https://gerrit.wikimedia.org/r/214889


== 2022-04-14 ==
* 22:30 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/779969
* 16:09 brennen: removed or renamed 4 filters from mediawiki-new-errors per check-new-error-tasks/check.sh


== 2015-05-29 ==
* 22:45 legoktm: deploying https://gerrit.wikimedia.org/r/214775
* 19:48 legoktm: deleting corrupt mwext-qunit@2 workspace on integration-slave-trusty-1017
* 17:21 legoktm: deploying https://gerrit.wikimedia.org/r/214652 and https://gerrit.wikimedia.org/r/214653


== 2022-04-12 ==
* 21:49 brennen: Updating dev-images docker-pkg files on primary contint for elastic 7.10.2
* 21:46 brennen: Updating dev-images docker-pkg files on primary contint for elastic 6.8.23
* 21:37 brennen: Updating dev-images docker-pkg files on primary contint for apache & elasticsearch changes ([[phab:T304290|T304290]], [[phab:T305143|T305143]])
* 16:05 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/779500
* 15:55 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/779498 https://gerrit.wikimedia.org/r/779141


== 2015-05-28 ==
* 20:50 bd808: Ran "del jobqueue:aggregator:h-ready-queues:v2" on deployment-redis01
* 13:46 hashar: upgrading Jenkins git plugin from 1.4.6+wmf1 to 1.7.1 {{bug|T100655}} and restarting Jenkins


== 2022-04-08 ==
* 11:08 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/778287


== 2015-05-27 ==
* 15:09 hashar: Jenkins slaves are all back up. Root cause was some ssh algorithm in their sshd which is not supported by Jenkins' embedded jsch lib.
* 14:30 hashar: manually rebasing puppet git on deployment-salt (stalled)
* 14:27 hashar: restarting deployment-salt / some process is 100% wa/IO
* 13:38 hashar: restarted integration puppetmaster (memory leak)
* 13:35 hashar: integration-puppetmaster apparently out of memory
* 13:30 hashar: All Jenkins slaves are disconnected due to some ssh error. CI is down.


== 2022-04-07 ==
* 06:07 urbanecm: deployment-prep: foreachwiki extensions/GrowthExperiments/maintenance/T304461.php --delete # [[phab:T304461|T304461]], output is at P24204
* 05:54 urbanecm: deployment-prep: mwscript extensions/GrowthExperiments/maintenance/T304461.php --wiki=<nowiki>{</nowiki>enwiki,cswiki<nowiki>}</nowiki> --delete # [[phab:T304461|T304461]]


== 2015-05-24 ==
* 10:27 duh: deploying https://gerrit.wikimedia.org/r/213218


== 2022-04-06 ==
* 20:03 thcipriani: rebooting phabricator
* 11:44 James_F: Zuul: [mediawiki/extensions/WikiEditor] Add BetaFeatures to phan deps for [[phab:T304596|T304596]]


== 2015-05-23 ==
* 21:43 legoktm: deploying https://gerrit.wikimedia.org/r/212960


== 2022-04-04 ==
* 22:43 James_F: dockerfiles: [composer-scratch] Upgrade composer to 2.3.3 and cascade for [[phab:T294260|T294260]]
* 18:49 hashar: Reloading Zuul to revert https://gerrit.wikimedia.org/r/776179
* 18:23 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/776179
* 17:50 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/775796
* 12:12 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/776723
* 10:28 James_F: Zuul: [mediawiki/extensions/WikiLambda] Publish PHP and JS documentation
* 08:54 jnuche: redeploying Zuul


== 2015-05-20 ==
* 17:19 thcipriani|afk: add --fail to curl inside mwext-Wikibase-qunit jenkins job
* 15:59 bd808: Applied role::beta::puppetmaster on deployment-salt to get Puppet logstash reports back


== 2022-04-02 ==
* 12:00 zabe: apply https://gerrit.wikimedia.org/r/c/mediawiki/extensions/CentralAuth/+/773903 on deployment-prep centralauth databases


== 2015-05-19 ==
* 02:54 bd808: Primed keyholder agent via `sudo -u keyholder env SSH_AUTH_SOCK=/run/keyholder/agent.sock ssh-add /etc/keyholder.d/mwdeploy_rsa`
* 02:40 Krinkle: deployment-bastion.eqiad magically back online and catching up on jobs, though failing due to T99644
* 02:36 Krinkle: Jenkins is unable to launch slave agent on deployment-bastion.eqiad. Using "Jenkins Script Console" throws HTTP 503.
* 02:30 Krinkle: Various beta-mediawiki-config-update-eqiad jobs have been stuck for over 13 hours.


== 2022-03-31 ==
* 20:58 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/775957


== 2015-05-12 ==
* 15:18 hashar: downgrading hhvm on CI slaves
* 15:10 hashar: mediawiki-phpunit-hhvm Jenkins job is broken due to an hhvm upgrade {{bug|T98876}}
* 00:48 bd808: beta cluster central syslog going to logstash rather than deployment-bastion (see https://gerrit.wikimedia.org/r/#/c/210253)
* 00:36 bd808: Cherry-picked https://gerrit.wikimedia.org/r/#/c/210253/
* 00:16 legoktm: deploying https://gerrit.wikimedia.org/r/210251


== 2022-03-29 ==
* 14:20 James_F: Zuul: [mediawiki/extensions/IPInfo] Add EventLogging phan dependency for [[phab:T304948|T304948]]
* 12:32 hashar: integration-agent-docker-1039: clearing leftover pipelinelib builds: `sudo rm -fR /srv/jenkins/workspace/workspace/*` [[phab:T304932|T304932]] [[phab:T302477|T302477]]
* 05:35 hashar: Relocated castor directory on integration-castor03 from `/srv/jenkins-workspace/caches` to `/srv/castor` https://gerrit.wikimedia.org/r/c/operations/puppet/+/774771


== 2015-05-11 ==
* 22:50 legoktm: deploying https://gerrit.wikimedia.org/r/210219
* 22:29 bd808: removed duplicate local group l10nupdate from deployment-bastion that was shadowing the ldap group of the same name
* 22:24 bd808: removed duplicate local group mwdeploy from deployment-bastion that was shadowing the ldap group of the same name
* 22:15 bd808: Removed role::logging::mediawiki from deployment-bastion
* 20:55 legoktm: deleted operations-puppet-tox-py27 workspace on integration-slave-precise-1012, it was corrupt (fatal: loose object b48ccc3ef5be2d7252eb0f0f417f1b5b7c23fd5f (stored in .git/objects/b4/8ccc3ef5be2d7252eb0f0f417f1b5b7c23fd5f) is corrupt)
* 13:54 hashar: Jenkins: removing label hasContintPackages from production slaves, it is no longer needed :)


== May 9 ==
* 00:10 bd808: Cherry-picked https://gerrit.wikimedia.org/r/#/c/209830 to deployment-bastion:/srv/deployment/scap/scap and deployed with trebuchet

== 2022-03-28 ==
* 16:55 hashar: integration: created instance integration-castor04 with flavor `g3.cores8.ram32.disk20` (twice the RAM of integration-castor03) # [[phab:T252071|T252071]]
* 16:49 hashar: integration: created 320G volume https://horizon.wikimedia.org/project/volumes/3f90c3f2-158d-4e45-a919-0f048f47c3b6/ . Intended to migrate integration-castor03 [[phab:T252071|T252071]]
* 10:34 hashar: contint2001 and contint1001: pruning obsolete branches from the zuul-merger: `sudo -H -u zuul find /srv/zuul/git -type d -name .git -print -execdir git -c url."https://gerrit.wikimedia.org/r/".insteadOf="ssh://jenkins-bot@gerrit.wikimedia.org:29418/" remote prune origin \;` [[phab:T220606|T220606]]
* 10:25 hashar: Changed `Trainsperiment Survey Questions` surveys permissions to be open outside of WMF and limited to 1 answer (forcing signin) https://docs.google.com/forms/u/0/d/e/1FAIpQLSd0Nc2jGkAGW-5rTiKN2EHWzfw2HeHm13N-ZCw1xUdE3z6woQ/formrestricted
* 10:18 hashar: contint2001 and contint1001: pruning all git reflog entries from the zuul-merger: `sudo -u zuul find /srv/zuul/git -name .git -type d -execdir git reflog expire --expire=all --all`. They are useless and no longer generated since https://gerrit.wikimedia.org/r/c/operations/puppet/+/757943
* 09:53 hashar: Tag Quibble 1.4.5 @ {{Gerrit|abe16d574}} {{!}} [[phab:T291549|T291549]]


== May 8 ==
* 23:59 bd808: Created /data/project/logs/WHERE_DID_THE_LOGS_GO.txt to point folks to the right places
* 23:54 bd808: Switched MediaWiki debug logs to deployment-fluorine:/srv/mw-log
* 20:05 bd808: Cherry-picked https://gerrit.wikimedia.org/r/#/c/209801
* 18:15 bd808: Cherry-picked https://gerrit.wikimedia.org/r/#/c/209769/
* 05:14 bd808: apache2 access logs now only locally on instances in /var/log/apache2/other_vhosts_access.log; error log in /var/log/apache2.log and still relayed to deployment-bastion and logstash (works like production now)
* 04:49 bd808: Symbolic link not allowed or link target not accessible: /srv/mediawiki/docroot/bits/static/master/extensions
* 04:47 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/209680/

== 2022-03-27 ==
* 13:23 James_F: Zuul: [releng/phatality] Make the node14 CI job voting [[phab:T304736|T304736]]


== May 7 ==
* 20:48 bd808: Updated kibana to bb9fcf6 (Merge remote-tracking branch 'upstream/kibana3')
* 18:00 greg-g: brought deployment-bastion.eqiad back online in Jenkins (after Krinkle disconnected it some hours ago). Jobs are processing
* 16:05 bd808: Updated scap to 5d681af (Better handling for php lint checks)
* 14:05 Krinkle: deployment-bastion.eqiad has been stuck for 10 hours.
* 14:05 Krinkle: For two days now, Jenkins has returned the Wikimedia 503 error page after logging in. The login session itself is fine.
* 05:02 legoktm: slaves are going up/down, likely due to the automated labs migration script

== 2022-03-26 ==
* 02:37 Reedy: beta-update-databases-eqiad is back to @hourly


== May 6 ==
* 15:13 bd808: Updated scap to 57036d2 (Update statsd events)

== 2022-03-25 ==
* 23:51 Reedy: temporarily turning off periodic building of beta-update-databases-eqiad until it's run to completion
* 23:21 Reedy: running /usr/local/bin/wmf-beta-update-databases.py manually
* 20:22 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/773866
* 20:02 brennen: mediawiki-new-errors: ran check-new-error-tasks/check.sh and cleared "resolved" filters
* 09:43 hashar: Building Quibble Docker images to rename quibble-with-apache to quibble-with-supervisord


== May 5 ==
* 19:06 jzerebecki: integration-slave-trusty-1015:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-qunit/src/node_modules
* 15:42 legoktm: deploying https://gerrit.wikimedia.org/r/208975 & https://gerrit.wikimedia.org/r/208976
* 04:36 legoktm: deploying https://gerrit.wikimedia.org/r/208899
* 04:04 legoktm: deploying https://gerrit.wikimedia.org/r/208889,90,91,92

== 2022-03-24 ==
* 20:00 hashar: reloading Zuul for {{Gerrit|Id844e1723a38eed627af03397cf0ad90c7b09a32}} # [[phab:T299320|T299320]]
* 20:00 James_F: Clearing integration-castor03:/srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/mwgate-node14-docker/_cacache/content-v2/sha512/22/ for [[phab:T304652|T304652]]
* 15:00 James_F: Zuul: [design/codex] Publish code coverage reports for [[phab:T303899|T303899]]
* 09:37 Lucas_WMDE: killed a beta-scap-sync-world job manually, let’s see if that helps getting beta updates unstuck


== May 4 ==
* 23:50 hashar: restarted Jenkins (deadlock with deployment-bastion)
* 23:49 hashar: restarted Jenkins
* 22:50 hashar: Manually retriggering last change of operations/mediawiki-config.git with: <tt>zuul enqueue --trigger gerrit --pipeline postmerge --project operations/mediawiki-config --change 208822,1</tt>
* 22:49 hashar: restarted Zuul to clear out a bunch of operations/mediawiki-config.git jobs
* 22:20 hashar: restarting Jenkins from gallium :/
* 22:18 thcipriani: jenkins restarted
* 22:12 thcipriani: preparing jenkins for shutdown
* 21:59 hashar: disconnected and reconnected Jenkins Gearman client
* 21:41 thcipriani: deployment-bastion still not accepting jobs from jenkins
* 21:35 thcipriani: disconnecting deployment-bastion and reconnecting, again
* 20:54 thcipriani: marking node deployment-bastion offline due to stuck jenkins execution lock
* 19:03 legoktm: deploying https://gerrit.wikimedia.org/r/208339
* 17:46 bd808: integration-slave-precise-1014 died trying to clone mediawiki/core.git with "fatal: destination path 'src' already exists and is not an empty directory."

== 2022-03-23 ==
* 17:35 brennen: restarting phabricator for [[phab:T304540|T304540]], brief downtime expected
* 14:56 dancy: Updating scap to 4.5.0-1+0~20220321191814.216~1.gbp24bc64 in beta cluster


== May 2 ==
* 06:53 legoktm: deploying https://gerrit.wikimedia.org/r/208366
* 06:45 legoktm: deploying https://gerrit.wikimedia.org/r/208364
* 05:49 legoktm: deploying https://gerrit.wikimedia.org/r/208358
* 05:25 legoktm: deploying https://gerrit.wikimedia.org/r/207132
* 04:18 legoktm: deploying https://gerrit.wikimedia.org/r/208342 and https://gerrit.wikimedia.org/r/208340
* 03:56 legoktm: reset mediawiki-extensions-hhvm workspace on integration-slave-trusty-1015 (bad .git lock)

== 2022-03-22 ==
* 14:44 hashar: gerrit: `./deploy_artifacts.py --version=3.3.10 gerrit.war` [[phab:T304226|T304226]]
* 13:50 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/771945


== April 30 ==
* 19:26 Krinkle: Repooled integration-slave-trusty-1013. IP unchanged.
* 19:00 Krinkle: Depooled integration-slave-trusty-1013 for labs maintenance (per andrewbogott)
* 14:17 hashar: Jenkins: properly downgraded IRC plugin from 2.26 to 2.25
* 13:40 hashar: Jenkins: downgrading IRC plugin from 2.26 to 2.25
* 12:09 hashar: restarting Jenkins https://phabricator.wikimedia.org/T96183

== 2022-03-21 ==
* 08:35 hashar: The castor cache for mediawiki/core wmf/1.39-wmf.1 is actually empty!
* 08:32 hashar: Nuking npm castor cache /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/wmf-quibble-selenium-php72-docker/npm/ # [[phab:T300203|T300203]]


== April 29 ==
* 17:15 thcipriani: removed l10nupdate user from /etc/passwd on deployment-bastion
* 15:00 hashar: Instances are being moved out from labvirt1005, which has some faulty memory. List of instances at https://phabricator.wikimedia.org/T97521#1245217
* 14:25 hashar: upgrading zuul on integration-slave-precise-1011 for https://phabricator.wikimedia.org/T97106
* 14:11 hashar: rebooting integration-saltmaster stalled.
* 13:11 hashar: Rebooting deployment-parsoid05 via wikitech interface.
* 13:02 hashar: labvirt1005 seems to have a hardware issue. Impacts a bunch of beta cluster / integration instances as listed on https://phabricator.wikimedia.org/T97521#1245217
* 12:22 hashar: deployment-parsoid05 slow down is https://phabricator.wikimedia.org/T97421 . Running apt-get upgrade and rebooting it, but its slowness issue might be with the underlying hardware
* 12:13 hashar: killing puppet on deployment-parsoid05; it eats all CPU for some reason
* 02:40 legoktm: deploying https://gerrit.wikimedia.org/r/207363 and https://gerrit.wikimedia.org/r/207368

== 2022-03-18 ==
* 14:18 elukey: restart testing of kafka logging TLS certificates (may affect logstash in beta, ping me in case it is a problem)
* 13:22 hashar: Rolling back Quibble jobs from 1.4.4 [[phab:T304147|T304147]]
* 07:41 elukey: experimenting with PKI and kafka logging on deployment-prep, logstash dashboard/traffic may be down (please ping me in case it is a problem)


== April 28 ==
* 23:37 hoo: Ran foreachwiki extensions/Wikidata/extensions/Wikibase/lib/maintenance/populateSitesTable.php --load-from 'http://meta.wikimedia.beta.wmflabs.org/w/api.php' --force-protocol http (because some sites are http only, although the sitematrix claims otherwise)
* 23:33 hoo: Ran foreachwiki extensions/Wikidata/extensions/Wikibase/lib/maintenance/populateSitesTable.php --load-from 'http://meta.wikimedia.beta.wmflabs.org/w/api.php' to fix all sites tables
* 23:18 hoo: Ran mysql> INSERT INTO sites (SELECT * FROM wikidatawiki.sites); on enwikinews to populate the sites table
* 23:18 hoo: Ran mysql> INSERT INTO sites (SELECT * FROM wikidatawiki.sites); on testwiki to populate the sites table
* 17:48 James_F: Restarting grrrit-wm for config change.
* 16:24 bd808: Updated scap to ef15380 (Make scap localization cache build $TMPDIR aware)
* 15:42 bd808: Freed 5G on deployment-bastion by deleting abandoned /tmp/scap_l10n_* directories
* 14:01 marxarelli: reloading zuul to deploy https://gerrit.wikimedia.org/r/#/c/206967/
* 00:17 greg-g: after the 3rd or so time doing it (while on the Golden Gate Bridge, btw) it worked
* 00:11 greg-g: still nothing...
* 00:10 greg-g: after disconnecting, marking temp offline, bringing back online, and launching slave agent: "Slave successfully connected and online"
* 00:07 greg-g: deployment-bastion is idle, yet we have 3 pending jobs waiting for an executor on it - will disconnect/reconnect it in Jenkins

== 2022-03-17 ==
* 19:11 hashar: Building Docker images for Quibble 1.4.4
* 19:06 hashar: Tag Quibble 1.4.4 @ {{Gerrit|56b2c9ba52c}} # [[phab:T300340|T300340]]
* 16:25 hashar: Switching Quibble jobs to use memcached rather than APCu {{!}} https://gerrit.wikimedia.org/r/c/integration/config/+/770468 {{!}} [[phab:T300340|T300340]]
* 14:11 hashar: Update all jobs to support `CASTOR_HOST` env variable {{!}} https://gerrit.wikimedia.org/r/770921 {{!}} [[phab:T216244|T216244]] {{!}} [[phab:T252071|T252071]]
* 14:07 hashar: Building Docker image to support `CASTOR_HOST` {{!}} https://gerrit.wikimedia.org/r/770921 {{!}} [[phab:T216244|T216244]]


== April 27 ==
* 21:45 bd808: Manually triggered beta-mediawiki-config-update-eqiad for zuul build df1e789c726ad4aae60d7676e8a4fc8a2f6841fb
* 21:20 bd808: beta-scap-eqiad job green again after adding a /srv/ disk to deployment-jobrunner01
* 21:08 bd808: Applied role::labs::lvm::srv on deployment-jobrunner01 and forced puppet run
* 21:08 bd808: Deleted deployment-jobrunner01:/srv/* in preparation for applying role::labs::lvm::srv
* 21:06 bd808: deployment-jobrunner01 missing role::labs::lvm::srv
* 21:00 bd808: Root partition full on deployment-jobrunner01
* 20:53 bd808: removed mwdeploy user from deployment-bastion:/etc/passwd
* 20:15 Krinkle: Relaunched Gearman connection
* 19:53 Krinkle: Jenkins unable to re-create Gearman connection. (HTTP 503 error from /configure). Have to force restart Jenkins
* 17:32 Krinkle: Relaunch slave agent on deployment-bastion
* 17:31 Krinkle: Jenkins slave deployment-bastion deadlocked waiting for executors

== 2022-03-16 ==
* 22:00 James_F: Docker: Publishing sonar-scanner:4.6.0.2311-3 for [[phab:T303958|T303958]]
* 20:13 James_F: Zuul: [mediawiki/services/function-evaluator and …/function-orchestrator] Switch to npm coverage job for [[phab:T302607|T302607]] and [[phab:T302608|T302608]]
* 19:48 zabe: apply https://gerrit.wikimedia.org/r/c/mediawiki/extensions/CentralAuth/+/769424/ on deployment-prep
* 19:43 taavi: apply https://gerrit.wikimedia.org/r/c/mediawiki/extensions/CentralAuth/+/771347/ on deployment-prep


== April 26 ==
* 06:09 thcipriani|afk: rm'd scap l10n files from /tmp on deployment-bastion; root partition at 100% again...

== 2022-03-15 ==
* 18:26 brennen: gitlab: removed most existing /people groups
* 18:10 brennen: gitlab: finished migrating access for all existing people groups to direct project membership ([[phab:T274461|T274461]], [[phab:T300935|T300935]])
* 16:49 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/770963
* 14:30 hashar: CI Jenkins: globally defined CASTOR_HOST=integration-castor03.integration.eqiad.wmflabs via https://integration.wikimedia.org/ci/configure # [[phab:T216244|T216244]]
* 14:17 hashar: Apply label `castor` to node https://integration.wikimedia.org/ci/computer/integration-castor03/ # [[phab:T216244|T216244]]
* 01:37 James_F: Zuul: Switch services/function* publish job from node12 to node14
* 01:14 James_F: Zuul: [wikidata/query-builder] Switch branchdeploy from node12 to node14
* 00:08 James_F: Zuul: [wikipeg] Switch from node12 to node14 special job


== April 25 ==
* 16:00 thcipriani|afk: manually ran logrotate on deployment-jobrunner01, root partition at 100%
* 15:16 thcipriani|afk: clear /tmp/scap files on deployment-bastion, root partition at 100%

== 2022-03-14 ==
* 23:57 James_F: Zuul: [ooui] Switch from node12 to node14
* 23:46 James_F: Docker: Publishing node14-test-browser-php80-composer:0.1.0
* 23:27 James_F: Zuul: Drop legacy node12 templates except the one for Services
* 23:10 James_F: Zuul: [oojs/router] Drop custom job and just use the generic node14 one
* 23:08 James_F: Zuul: [oojs/core] Switch from node12 to node14 jobs
* 22:46 James_F: Zuul: [unicodejs] Switch from node12 to node14
* 22:25 James_F: Zuul: [VisualEditor/VisualEditor] Switch from node12 to node14
* 19:51 James_F: Zuul: Migrate almost all libraries and tools from node12 to node14 for [[phab:T267890|T267890]]
* 15:36 James_F: Zuul: Switch extension-javascript-documentation from node12 to node14 for [[phab:T267890|T267890]]
* 15:21 James_F: Zuul: Switch all mwgate jobs from node12 to node14 for [[phab:T267890|T267890]]
* 09:52 hashar: Building Quibble Docker images for https://gerrit.wikimedia.org/r/757867 {{!}} [[phab:T300340|T300340]]
* 08:54 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/770079


== April 24 ==
* 18:01 thcipriani: ran sudo chown -R mwdeploy:mwdeploy /srv/mediawiki on deployment-bastion to fix beta-scap-eqiad, hopefully
* 17:26 thcipriani: remove deployment-prep from domain in /etc/puppet/puppet.conf on deployment-stream, puppet now OK
* 17:20 thcipriani: rm stale lock on deployment-rsync01, puppet fine
* 17:10 thcipriani: gzip /var/log/account/pacct.0 on deployment-bastion: ought to revisit logrotate on that instance.
* 17:00 thcipriani: rm stale /var/lib/puppet/state/agent_catalog_run.lock on deployment-kafka02
* 09:56 hashar: restarted mysql on both deployment-db1 and deployment-db2. The service is apparently not started on instance boot. https://phabricator.wikimedia.org/T96905
* 09:08 hashar: beta: manually rebased operations/puppet.git
* 08:43 hashar: Enabling puppet on deployment-eventlogging02.eqiad.wmflabs {{bug|T96921}}

== 2022-03-11 ==
* 04:02 zabe: zabe@deployment-mwmaint02:~$ mwscript extensions/CentralAuth/maintenance/populateGlobalEditCount.php --wiki=metawiki


== April 23 ==
* 06:11 Krinkle: Running git-cache-update inside screen on integration-slave-trusty-1021 at /mnt/git
* 06:11 Krinkle: integration-slave-trusty-1021 stays depooled (see T96629 and T96706)
* 04:35 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/206044 and https://gerrit.wikimedia.org/r/206072
* 00:29 bd808: cherry-picked and applied https://gerrit.wikimedia.org/r/#/c/205969/ (logstash: Convert $::realm switches to hiera)
* 00:17 bd808: beta cluster fatal monitor full of "Bad file descriptor: AH00646: Error writing to /data/project/logs/apache-access.log"
* 00:03 bd808: cleaned up redis leftovers on deployment-logstash1

== 2022-03-10 ==
* 20:45 zabe: apply https://gerrit.wikimedia.org/r/c/mediawiki/extensions/CentralAuth/+/769416 on deployment-prep centralauth databases
* 20:25 James_F: Zuul: [mediawiki/extensions/VueTest] Add basic quibble CI
* 20:03 Krinkle: Updating docker-pkg files on contint primary for https://gerrit.wikimedia.org/r/768843
* 15:12 hashar: updating Quibble jenkins jobs
* 14:26 James_F: Docker: Publishing new versions of quibble-buster and cascade adding unzip for [[phab:T250496|T250496]] / [[phab:T303417|T303417]].
* 11:43 Amir1: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/769668
* 09:59 dwalden: restarted apache on deployment-mediawiki11 # [[phab:T302699|T302699]]


== April 22 ==
* 23:57 bd808: cherry-picked and applied https://gerrit.wikimedia.org/r/#/c/205968 (remove redis from logstash)
* 23:33 bd808: reset deployment-salt:/var/lib/git/operations/puppet HEAD to production; forced update with upstream; re-cherry-picked I46e422825af2cf6f972b64e6d50040220ab08995
* 23:28 bd808: deployment-salt:/var/lib/git/operations/puppet in detached HEAD state; looks to be for cherry pick of I46e422825af2cf6f972b64e6d50040220ab08995 ?
* 21:40 thcipriani: restarted mariadb on deployment-db{1,2}
* 20:20 thcipriani: gzipped /var/log/pacct.0 on deployment-bastion
* 19:50 hashar: zuul/jenkins are back up (blame Jenkins)
* 19:40 hashar: reenabling Jenkins gearman client
* 19:30 hashar: Gearman came back. Reenabling Jenkins as a Gearman client
* 19:27 hashar: Zuul gearman is stalled. Disabling Jenkins gearman client to free up connections
* 17:58 Krinkle: Creating integration-slave-trusty-1021 per T96629 (using ci1.medium type)
* 14:34 hashar: beta: failures on instances are due to them being moved to different openstack compute nodes (virt***)
* 13:51 jzerebecki: integration-slave-trusty-1015:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-qunit/src/node_modules
* 12:48 hashar: beta: Andrew B. starting to migrate beta cluster instances to new virt servers
* 11:34 hashar: integration: apt-get upgrade on integration-slave-trusty* instances
* 11:31 hashar: integration: Zuul package has been uploaded for Trusty! Deleting the .deb from /home/hashar/

== 2022-03-09 ==
* 17:08 hashar: Updating Gerrit Comment.soy to get rid of a literal `null` string being inserted in notification emails {{!}} https://gerrit.wikimedia.org/r/c/operations/puppet/+/768005 {{!}} https://phabricator.wikimedia.org/T288312


== April 21 ==
* 09:27 hashar: Nodepool created its first instance ever! :)
* 01:51 legoktm: deploying https://gerrit.wikimedia.org/r/205494

== 2022-03-08 ==
* 20:31 brennen: requiring 2fa for all users under /repos


== April 20 ==
* 23:34 legoktm: deploying https://gerrit.wikimedia.org/r/205465
* 19:20 legoktm: mediawiki-extensions-hhvm workspace on integration-slave-trusty-1011 had a bad lock file, wiping
* 16:10 hashar: deployment-salt: kill -9 of puppetmaster processes
* 16:08 hashar: deployment-salt: killed git-sync-upstream; its netcat to labmon1001.eqiad.wmnet 8125 was eating all memory
* 16:04 hashar: beta: manually rebasing operations/puppet on deployment-salt. Might have killed some live hack in the process :/
* 13:58 hashar: In Gerrit, hid the integration/jenkins-job-builder-config and integration/zuul-config historical repositories. Suggested by addshore on {{bug|T96522}}
* 03:39 legoktm: deploying https://gerrit.wikimedia.org/r/205174

== 2022-03-07 ==
* 10:53 zabe: restarted apache on deployment-mediawiki11 # [[phab:T302699|T302699]]


== April 19 ==
* 06:12 legoktm: deploying https://gerrit.wikimedia.org/r/205076

== 2022-03-04 ==
* 20:29 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/768146
* 19:13 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/768068


== April 18 ==
* 05:18 legoktm: deploying https://gerrit.wikimedia.org/r/204995
* 03:09 Krinkle: Finished set up of integration-slave-trusty-1017. Pooled.

== 2022-03-03 ==
* 19:13 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/767864
* 15:37 James_F: Docker: Publishing sury-php images based on bullseye not stretch and cascade for [[phab:T278203|T278203]]
* 14:43 hashar: Reloading Zuul for {{Gerrit|Iae45cae8ec209a3e795fe4fd7dd92290565277db}}
* 12:47 hashar: Upgrading Quibble on CI Jenkins jobs from 1.3.0 to 1.4.3 https://gerrit.wikimedia.org/r/c/integration/config/+/767749/
* 10:30 hashar: Building Docker images for Quibble 1.4.3
* 10:22 hashar: Tagged Quibble 1.4.3 @ {{Gerrit|cf5cd1a0a07}}
* 09:24 hashar: Building Docker images for Quibble 1.4.2
* 09:20 hashar: Tag Quibble 1.4.2 @ {{Gerrit|63d2855a1e}} # [[phab:T302226|T302226]] [[phab:T302707|T302707]]


== April 17 ==
* 17:52 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/204812
* 17:45 Krinkle: Creating integration-slave-trusty-1017
* 16:29 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/204791
* 16:00 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/204783
* 12:42 hashar: restarting Jenkins
* 12:38 hashar: Switching zuul on lanthanum.eqiad.wmnet to the Debian package version
* 12:14 hashar: Switching Zuul scheduler on gallium.wikimedia.org to the Debian package version
* 12:12 hashar: Jenkins: enabled plugin "ZMQ Event Publisher" and publishing all job results on TCP port 8888
* 05:37 legoktm: deploying https://gerrit.wikimedia.org/r/204706
* 01:11 Krinkle: Repool integration-slave-precise-1013 and integration-slave-trusty-1015 (live hack with libeatmydata enabled for mysql; T96308)

== 2022-03-02 ==
* 19:53 James_F: Zuul: Configure CI for the forthcoming REL1_38 branches for [[phab:T302908|T302908]]
* 15:56 dancy: Updating scap to 4.4.1-1+0~20220302155149.192~1.gbpe351d6 in beta
* 15:27 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/767493
* 15:04 taavi: resolve merge conflicts on deployment-puppetmaster04


== April 16 ==
* 22:08 Krinkle: Rebooting integration-slave-precise-1013 (depooled; experimenting with libeatmydata)
* 22:07 Krinkle: Rebooted integration-slave-trusty-1015 (experimenting with libeatmydata)
* 18:31 Krinkle: Rebooting integration-slave-precise-1012 and integration-slave-trusty-1012
* 17:57 Krinkle: Repooled instances. Conversion of mysql.datadir to tmpfs complete.
* 17:22 Krinkle: Gracefully depool integration slaves to deploy https://gerrit.wikimedia.org/r/#/c/204528/ (T96230)
* 14:35 thcipriani: running dpkg --configure -a on deployment-bastion to correct puppet failures

== 2022-02-28 ==
* 19:29 brennen: removing mutante (dzahn) as application-level gitlab admin; adding as owner of /repos for the time being to facilitate some migrations
* 19:22 dancy: Update scap to 4.4.0-1+0~20220228192031.189~1.gbp0a8436 in beta
* 19:17 brennen: adding mutante (dzahn) as application-level gitlab admin


== April 15 ==
* 23:21 Krinkle: beta-update-databases-eqiad stuck waiting for executors on a node that has plenty of executors available
* 21:15 hashar: Jenkins browser test jobs sometimes deadlock because of the IRC notification plugin https://phabricator.wikimedia.org/T96183
* 20:34 hashar: hard restarting Jenkins
* 19:24 Krinkle: Aborting browser tests jobs. Stuck for over 5 hours.
* 19:24 Krinkle: Aborting beta-scap-eqiad. Has been stuck for 2 hours on "Notifying IRC" after "Connection time out" from scap.
* 08:22 hashar: restarted Jenkins
* 08:20 hashar: Exception in thread "RequestHandlerThread[#2]" java.lang.OutOfMemoryError: Java heap space
* 08:16 hashar: Jenkins process went wild, keeping all CPU busy on gallium

== 2022-02-26 ==
* 20:05 zabe: apply [[phab:T302658|T302658]] on deployment-prep centralauth databases
* 13:24 zabe: apply [[phab:T302660|T302660]] on deployment-prep centralauth databases
* 13:19 zabe: apply [[phab:T302659|T302659]] on deployment-prep centralauth databases


== April 14 ==
* 20:43 legoktm: starting SULF on beta cluster
* 20:42 marktraceur: stopping all beta jobs, aborting running (and stuck) beta DB update, kicking bastion, to try and get beta to update
* 19:49 Krinkle: All systems go.
* 19:48 Krinkle: Jenkins configuration panel won't load ("Loading..." stays indefinitely, "Uncaught TypeError: Cannot convert to object at prototype.js:195")
* 19:46 Krinkle: Jenkins restarted. Relaunching Gearman
* 19:42 Krinkle: Jenkins still unable to obtain Gearman connection. (HTTP 503 error from /configure). Have to force restart Jenkins.
* 19:42 Krinkle: deployment-bastion jobs were stuck. marktraceur cancelled queue and relaunched slave. Now processing again.
* 15:27 Krinkle: puppetmaster: Re-apply I05c49e5248cb operations/puppet patch to re-fix T91524. Somehow the patch got lost.
* 08:46 hashar: does qa-morebots work?

== 2022-02-24 ==
* 16:02 dancy: Updating beta cluster scap to 4.4.0-1+0~20220224155429.187~1.gbp66c5c2
* 13:44 hashar: integration/config now fully enforces shellcheck https://gerrit.wikimedia.org/r/756088
* 13:13 hashar: Built image docker-registry.discovery.wmnet/releng/castor:0.2.5
* 13:10 hashar: Updating castor-save-workspace-cache job https://gerrit.wikimedia.org/r/764817
* 11:54 hashar: Built image docker-registry.discovery.wmnet/releng/shellcheck:0.1.1
* 11:41 hashar: Built image docker-registry.discovery.wmnet/releng/sonar-scanner:4.6.0.2311-2
* 11:04 hashar: Built image docker-registry.discovery.wmnet/releng/operations-puppet:0.8.6
* 08:58 hashar: Built image docker-registry.discovery.wmnet/releng/mediawiki-phan-testrun:0.2.1


== April 13 ==
* 20:14 Krinkle: Restarting Zuul, Jenkins and aborting all builds. Everything got stuck following NFS outage in labs
* 19:28 Krinkle: Restarting Zuul, Jenkins and aborting all builds. Everything crashed following NFS outage in labs
* 17:01 legoktm: deploying https://gerrit.wikimedia.org/r/203858
* 13:56 Krinkle: Delete old integration-slave1001...1004 (T94916)
* 10:43 hashar: reducing number of executors on Precise instances from 5 to 4 and on Trusty instances from 6 to 4. The Jenkins scheduler tends to assign the unified jobs to the same slave, which overloads a single slave while others are idling.
* 10:43 hashar: reducing number of executors from 5 to 4
* 08:46 hashar: jenkins: removed #wikimedia-qa IRC channel from the global configuration
* 08:42 hashar: kill -9 jenkins because it was stuck in some deadlock related to the IRC plugin :(
* 08:34 zeljkof: restarting stuck Jenkins

== 2022-02-23 ==
* 23:21 dancy: Update beta cluster scap to 4.3.1-1+0~20220223231645.183~1.gbp8ddb60
* 20:10 dancy: Updating scap in beta
* 19:23 hashar: Built docker-registry.discovery.wmnet/releng/logstash-filter-verifier:0.0.3
* 12:41 hashar: Depooling integration-agent-puppet-docker-1002, pooling integration-agent-puppet-docker-1003 # [[phab:T252071|T252071]]
* 10:21 hashar: Created Bullseye instance integration-agent-puppet-docker-1003 https://horizon.wikimedia.org/project/instances/96cf9ddc-daa3-4c9f-8c21-cdd58e95973e/ # [[phab:T252071|T252071]]
* 08:37 hashar: Removing Stretch based integration-agent-qemu-1001 # [[phab:T284774|T284774]]


== April 12 ==
* 23:58 bd808: sudo ln -s /srv/l10nupdate/mediawiki /var/lib/l10nupdate/mediawiki on deployment-bastion
* 23:11 greg-g: 0 bytes left on /var on deployment-bastion

== 2022-02-22 ==
* 16:41 zabe: zabe@deployment-mwmaint02:~$ foreachwiki migrateUserGroup.php oversight suppress # [[phab:T112147|T112147]]
* 13:28 urbanecm: deployment-prep: Create database for incubatorwiki ([[phab:T210492|T210492]])


== April 11 ==
* 23:13 legoktm: deploying https://gerrit.wikimedia.org/r/203628
* 22:58 legoktm: deploying https://gerrit.wikimedia.org/r/203619 & https://gerrit.wikimedia.org/r/203626
* 06:13 legoktm: deployed https://gerrit.wikimedia.org/r/203520
* 05:49 legoktm: deploying https://gerrit.wikimedia.org/r/203519 https://gerrit.wikimedia.org/r/203516 https://gerrit.wikimedia.org/r/203518

== 2022-02-21 ==
* 14:58 hashar: Reverting Quibble jobs from 1.4.0 to 1.3.0 # [[phab:T302226|T302226]]
* 07:31 hashar: Switching Quibble jobs from Quibble 1.3.0 to 1.4.0 # [[phab:T300340|T300340]] [[phab:T291549|T291549]] [[phab:T225730|T225730]]
* 07:27 hashar: Refreshing all Jenkins jobs


== April 10 ==
* 13:50 Krinkle: Pool integration-slave-precise-1012..integration-slave-precise-1014
* 11:43 hashar: Filed https://phabricator.wikimedia.org/T95675 to migrate "Global-Dev Dashboard Data" to JJB/Zuul
* 11:40 Krinkle: Deleting various jobs from Jenkins that can be safely deleted (no longer in jjb-config). Will report the others to T91410 for inspection.
* 11:29 Krinkle: Fixed job "Global-Dev Dashboard Data" to be restricted to node "gallium" because it fails to connect to gp.wmflabs.org from lanthanum 1/2 builds.
* 11:26 Krinkle: Re-established Gearman connection from Jenkins
* 11:20 Krinkle: Jenkins unable to re-establish Gearman connection. Full restart.
* 10:39 Krinkle: Deleting the old integration1401...integration1405 instances. They've been depooled for 24h and their replacements are OK. This is to free up quota to create new Precise instances.
* 10:35 Krinkle: Creating integration-slave-precise-1012...integration-slave-precise-1014
* 10:31 Krinkle: Pool integration-slave-precise-1011
* 09:02 hashar: integration: Refreshed Zuul packages under /home/hashar
* 08:57 Krinkle: Fixed puppet failure for missing Zuul package on integration-dev by applying patch-integration-slave-trusty.sh

== 2022-02-20 ==
* 10:32 qchris: Manually triggering replication run of Gerrit's analytics/datahub to populate newly created analytics-datahub GitHub repo


== April 9 ==
* 19:50 legoktm: deployed https://gerrit.wikimedia.org/r/202932
* 17:20 Krinkle: Creating integration-slave-precise-1011
* 17:11 Krinkle: Depool integration-slave1402...integration-slave1405
* 16:52 Krinkle: Pool integration-slave-trusty-1011...integration-slave-trusty-1016
* 16:00 hashar: integration-slave-jessie-1001 recreated. Applying it role::ci::slave::labs which should also bring in the package builder role under /mnt/pbuilder
* 15:32 thcipriani: added mwdeploy_rsa to keyholder agent.sock via chmod 400 /etc/keyholder.d/mwdeploy_rsa && SSH_AUTH_SOCK=/run/keyholder/agent.sock ssh-add /etc/keyholder.d/mwdeploy_rsa && chmod 440 /etc/keyholder.d/mwdeploy_rsa; permissions in puppet may be wrong?
* 14:24 hashar: deleting integration-slave-jessie-1001, extended disk is too small
* 13:14 hashar: integration-zuul-packaged: applied role::labs::lvm::srv
* 13:01 hashar: integration-zuul-packaged: applied zuul::merger and zuul::server
* 12:59 Krinkle: Creating integration-slave-trusty-1011 - integration-slave-trusty-1016
* 12:40 hashar: spurts out <tt>Permission denied (publickey).</tt>
* 12:39 hashar: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/ is still broken :-(
* 12:31 hashar: beta: hard reset of operations/puppet repo on the puppetmaster since it has been stalled for 9+ days https://phabricator.wikimedia.org/T95539
* 10:46 hashar: repacked extensions in deployment-bastion staging area: <tt>find /mnt/srv/mediawiki-staging/php-master/extensions -maxdepth 2  -type f -name .git  -exec bash  -c 'cd `dirname {}` && pwd && git repack -Ad && git gc' \;</tt>
* 10:31 hashar: deployment-bastion has a lock file remaining /mnt/srv/mediawiki-staging/php-master/extensions/.git/refs/remotes/origin/master.lock
* 09:55 hashar: restarted Zuul to clear out some stalled jobs
* 09:35 Krinkle: Pooled integration-slave-trusty-1010
* 08:59 hashar: rebooted deployment-bastion and cleared some files under /var/
* 08:51 hashar: deployment-bastion is out of disk space on /var/ :(
* 08:50 hashar: https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/ timed out after 30 minutes while trying to git pull
* 08:50 hashar: https://integration.wikimedia.org/ci/job/beta-update-databases-eqiad/ job stalled for some reason
* 06:15 legoktm: deploying https://gerrit.wikimedia.org/r/202998
* 06:02 legoktm: deploying https://gerrit.wikimedia.org/r/202992
* 05:11 legoktm: deleted core dumps from integration-slave1002, /var had filled up
* 04:36 legoktm: deploying https://gerrit.wikimedia.org/r/202938
* 00:32 legoktm: deploying https://gerrit.wikimedia.org/r/202279

== 2022-02-19 ==
* 12:19 taavi: restart trafficserver-tls on deployment-cache-text06
* 02:15 James_F: Zuul: [design/codex] Publish the Netlify preview on every patch for [[phab:T293705|T293705]]
* 00:35 James_F: Manually re-triggered a build of the docs of Codex (via `zuul-test-repo design/codex postmerge`) now that we actually set the environment vars for [[phab:T293705|T293705]]


== 2015-04-08 ==
== 2022-02-18 ==
* 21:56 legoktm: deploying https://gerrit.wikimedia.org/r/202930
* 22:54 James_F: Zuul: [branchdeploy-codex-node14-npm-docker] Create as experimental for [[phab:T293705|T293705]]
* 21:15 legoktm: deleting non-existent jobs' workspaces on labs slaves
* 22:14 James_F: Jenkins: Defined BRANCHDEPLOY_AUTH_TOKEN_codex and BRANCHDEPLOY_SITE_ID_codex secrets for [[phab:T293705|T293705]]
* 19:09 Krinkle: Re-establishing Gearman-Jenkins connection
* 13:44 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/763724 [[phab:T301453|T301453]]
* 19:00 Krinkle: Restarting Jenkins
* 09:21 hashar: Reloading Zuul for {{Gerrit|I1494abb5e9e28da951ffb72154a074a16a0f8381}}
* 19:00 Krinkle: Jenkins Master unable to re-establish Gearman connection
* 19:00 Krinkle: Zuul queue is not being distributed properly. Many slaves are idling waiting to receive builds but not getting any.
* 18:29 Krinkle: Another attempt at re-creating the Trusty slave pool (T94916)
* 18:07 legoktm: deploying https://gerrit.wikimedia.org/r/202289 and https://gerrit.wikimedia.org/r/202445
* 18:01 Krinkle: Jobs for Precise slaves are not starting. Stuck in Zuul as 'queued'. Disconnected and restarted slave agent on them. Queue is back up now.
* 17:36 legoktm: deployed https://gerrit.wikimedia.org/r/180418
* 13:32 hashar: Disabled Zuul install based on git clone / setup.py by cherry picking https://gerrit.wikimedia.org/r/#/c/202714/ .  Installed the Zuul debian package on all slaves
* 13:31 hashar: integration: running <tt>apt-get upgrade</tt> on Trusty slaves
* 13:30 hashar: integration: upgrading python-gear and python-six on Trusty slaves
* 12:43 hasharLunch: Zuul is back and it is nasty
* 12:24 hasharLunch: killed zuul on gallium :/


== 2015-04-07 ==
== 2022-02-17 ==
* 16:26 Krinkle: git-deploy: Deploying integration/slave-scripts 4c6f541
* 21:48 brennen: added Dzahn (mutante) to acl*repository-admins on phabricator
* 12:57 hashar: running apt-get upgrade on integration-slave-trusty* hosts
* 15:58 zabe: root@deployment-cache-upload06:~# touch /srv/trafficserver/tls/etc/ssl_multicert.config && systemctl reload trafficserver-tls.service # [[phab:T301995|T301995]]
* 12:45 hashar: recreating integration-slave-trusty-1005
* 13:35 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/763207
* 12:26 hashar: deleting integration-slave-trusty-1005: it was provisioned with role::ci::website instead of role::ci::slave::labs
* 13:20 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/763458
* 12:11 hashar: retriggering a bunch of browser tests hitting beta.wmflabs.org
* 11:12 hashar: Bringing deployment-deploy03 back
* 12:07 hashar: Puppet is fixed; it is finishing the installation of integration-slave-trusty-*** hosts
* 11:07 hashar: Disabled deployment-deploy03 Jenkins agent in order to revert some mediawiki/core patch and test the outcome
* 12:03 hashar: Browser tests against beta cluster were all failing due to an improper DNS resolver being applied on CI labs instances {{bug|T95273}}. Should be fixed now.
* 12:00 hashar: running puppet on all integration machines and resigning puppet client certs
* 11:31 hashar: integration-puppetmaster is back and operational with local puppet client working properly.
* 11:28 hashar: restored /etc/puppet/fileserver.conf
* 11:08 hashar: dishing out puppet SSL configuration on all integration nodes. Can't figure it out, so let's restart from scratch
* 10:52 hashar: made puppetmaster certname = integration-puppetmaster.eqiad.wmflabs instead of the ec2 id :(
* 10:49 hashar: manually hacking integration-puppetmaster /etc/puppet/puppet.conf config file which is missing the [master] section
* 09:37 hashar: integration project has been switched to a new labs DNS resolver ( https://lists.wikimedia.org/pipermail/labs-l/2015-April/003585.html ). It is missing the dnsmasq hack to resolve beta cluster URLs to the instance IP instead of the public IP. Causes a wide range of jobs to fail.
* 01:25 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/202300


== 2015-04-06 ==
== 2022-02-16 ==
* 23:19 bd808: Updated scap to f9b9a82 (remove exotic unicode from ascii logo)
* 18:20 hashar: Tag Quibble 1.4.1 @ {{Gerrit|d4bd2801de}} # [[phab:T300301|T300301]]
* 22:34 legoktm: deployed https://gerrit.wikimedia.org/r/202229
* 16:42 dancy: Updating to scap 4.3.1-1+0~20220216163646.173~1.gbp823710 in beta
* 20:55 legoktm: deploying https://gerrit.wikimedia.org/r/202233
* 12:55 jelto: apply gitlab-settings to gitlab-prod-1001.devtools.eqiad1.wikimedia.cloud
* 20:46 legoktm: deploying https://gerrit.wikimedia.org/r/202225
* 10:09 hashar: Reloading Zuul for {{Gerrit|I997fee0f160ca3049b8085879831bfe175096ced}}
* 17:37 legoktm: deploying https://gerrit.wikimedia.org/r/201032
* 09:59 hashar: Reloading Zuul for {{Gerrit|I2ffa016563ad37f1e7c13dcce81deb8ab411c9e2}}
* 12:38 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/201984 https://gerrit.wikimedia.org/r/202020 https://gerrit.wikimedia.org/r/202026
* 04:20 legoktm: deploying https://gerrit.wikimedia.org/r/201669


== 2015-04-05 ==
== 2022-02-15 ==
* 11:13 Krinkle: New integration-slave-trusty-1001..1005 must remain unpooled. Provisioning failed. details at https://phabricator.wikimedia.org/T94916#1180522
* 21:12 dancy: rebooting deployment-mediawiki12.deployment-prep.eqiad1.wikimedia.cloud to try to revive beta wikis
* 10:48 Krinkle: Puppet on integration-puppetmaster has been failing for the past 2 days: "Failed when searching for node i-0000063a.eqiad.wmflabs: You must set the 'external_nodes' parameter to use the external node terminus" (=integration-dev.eqiad.wmflabs)
* 20:59 dancy: Killed runaway puppet agent on deployment-mediawiki11.deployment-prep.eqiad1.wikimedia.cloud
* 10:22 Krinkle: Creating integration-slave-trusty-1001-1005 per T94916.
* 16:24 hashar: Restarting CI Jenkins for plugins updates
* 16:21 hashar: Upgrading Jenkins plugins on releases Jenkins
* 16:06 hashar: Rollback fresh-test Jenkins job to the version intended to run on integration-agent-qemu-1001
* 15:26 hashar: Reloading Zuul for {{Gerrit|If80b4b4cfa5c1a869ceb220f5b11c272b384a721}}


== 2015-04-03 ==
== 2022-02-14 ==
* 23:47 greg-g: for Krinkle 23:31 "Finished npm upgrade on trusty slaves."
* 16:28 dancy: Updating scap in beta cluster to 4.3.1-1+0~20220211225318.167~1.gbp315b2c
* 23:08 Krinkle: Finished npm upgrade on precise slaves. Rolling trusty slaves now.
* 16:16 Amir1: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/762471
* 22:55 bd808: Updated scap to a1a5235 (Add a logo banner to scap)
* 15:41 hashar: Messing up with fresh-test Jenkins job to polish up Qemu / qcow2 integration
* 21:31 Krinkle: Upgrading npm from v2.4.1 to v2.7.6 (rolling, slave by slave graceful)
* 14:26 jnuche: Jenkins upgrade complete [[phab:T301361|T301361]]
* 21:11 ^d: puppet re-enabled on staging-palladium, running fine again
* 13:54 jnuche: Jenkins contint instances are going to be restarted soon
* 21:05 Krinkle: Delete unfinished/unpooled instances integration-slave-precise-1011-1014. (T94916)
* 14:49 hashar: integration-slave-jessie-1001 : manually installed jenkins-debian-glue Debian packages. It is pending upload by ops to apt.wikimedia.org {{bug|T95006}}
* 12:56 hashar: installed zuul_2.0.0-304-g685ca22-wmf1precise1_amd64.deb on integration-slave-precise-101* instances
* 12:56 hashar: installed zuul_2.0.0-304-g685ca22-wmf1precise1_amd64.deb on integration-slave-precise-1011.eqiad.wmflabs
* 12:35 hashar: Switching Jessie slave from role::ci::slave::labs::common to role::ci::slave::labs  which will bring a whole lot of packages and break
* 12:28 hashar: integration-slave-jessie-1001 applying role::ci::slave::labs::common  to pool it as a very basic Jenkins slave
* 12:19 hashar: enabled puppetmaster::autosigner on integration-puppetmaster
* 11:58 hashar: Applied role::ci::slave::labs on integration-slave-precise-101[1-4] that Timo created earlier
* 11:58 hashar: Cherry picked a couple patches to fix puppet Package[] definitions issues
* 11:49 hashar: made integration-puppetmaster to self update its puppet clone
* 11:42 hashar: recreating integration-slave-precise-1011 stalled with a puppet oddity related to Package['gdb'] defined twice {{bug|T94917}}
* 11:30 hashar: integration-puppetmaster migrated down to Precise
* 11:23 hashar: rebooting integration-publisher : cant ssh to it
* 10:37 hashar: disabled some hiera configuration related to puppetmaster.
* 10:22 hashar: Created instance i-00000a4a with image "ubuntu-12.04-precise" and hostname i-00000a4a.eqiad.wmflabs.
* 10:21 hashar: downgrading integration-puppetmaster from Trusty to Precise https://phabricator.wikimedia.org/T94927
* 05:42 legoktm: deploying https://gerrit.wikimedia.org/r/200744
* 03:58 Krinkle: Jobs were throwing NOT_RECOGNISED.  Relaunched Gearman. Jobs are now happy again.
* 03:51 Krinkle: Jenkins is unable to re-establish Gearman connection. Have to force restart Jenkins master.
* 03:42 Krinkle: Reloading Jenkins config repaired the broken references. However, Jenkins is still unable to make new references properly. New builds are 404'ing the same way.
* 03:26 Krinkle: Reloading Jenkins configuration from disk
* 03:18 Krinkle: Build metadata exists properly at /var/lib/jenkins/jobs/:jobname/builds/:nr, but the "last*Build" symlinks are outdated.
* 03:12 Krinkle: As of 03:03, recent builds are mysteriously missing their entry in Jenkins. They show up on the dashboard when running, but their build log is never published (url is 404). E.g. https://integration.wikimedia.org/ci/job/integration-docroot-deploy/105 and https://integration.wikimedia.org/ci/job/jshint/239
* 02:47 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/201644
* 00:31 greg-g: rm'd .gitignore in /srv/mediawiki-staging/php-master/skins due to https://gerrit.wikimedia.org/r/#/c/200307/ clashing with a local untracked version


== 2015-04-02 ==
== 2022-02-12 ==
* 22:56 Krinkle: New integration-slave-precise-101x are unfinished and must remain depooled. See T94916.
* 18:22 urbanecm: deployment-prep: reboot deployment-eventgate-3 ([[phab:T289029|T289029]])
* 22:53 Krinkle: Most puppet failures blocking T94916 may be caused by the fact that integration-puppetmaster was inadvertently changed to Trusty; the puppetmaster version on Trusty is not yet supported by ops
* 21:41 Krinkle: It seems integration-slave-jessie-1001 has role::ci::slave::labs::common instead of role::ci::slave::labs. Intentional?
* 21:25 Krinkle: Re-creating integration-dev-slave-precise in preparation of re-creating precise slaves
* 14:51 hashar: applying role::ci::slave::labs::common on integration-slave-jessie-1001
* 14:49 hashar: integration: nice thing, newly created instances are automatically made to point to integration-puppetmaster via hiera! Just have to sign the certificate on the master using: puppet ca list ; puppet ca sign i-000xxxx.eqiad.wmflabs
* 14:42 hashar: Created [[Nova_Resource:I-00000a3b.eqiad.wmflabs|integration-slave-jessie-1001]] to try out CI slave on Jessie ([[phab:T94836]])
* 14:11 hashar: reduced integration-slave1004 executors from 6 to 5 to make it on par with the other precise slaves
* 14:10 hashar: integration-slave100[1-4] are now using Zuul provided by a Debian package as of https://gerrit.wikimedia.org/r/#/c/195272/ PS 16
* 14:04 hashar: uninstall the pip installed zuul version from Precise labs slaves by doing:  pip uninstall zuul && rm /usr/local/bin/zuul* . Switching them all to a Debian package
* 13:45 hashar: pooling back integration-slave1001 and 1002 which are using zuul-cloner provided by a debian package
* 13:35 hashar: reloading Jenkins configuration files from disk to make it knows about a change manually applied to most jobs config.xml files for https://gerrit.wikimedia.org/r/#/c/201451/
* 13:01 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/201458
* 12:19 hashar: preventing job to run on integration-slave1001 by replacing its label with 'DoNotLabelThisSlaveHashar'. Going to install Zuul debian package on it
* 09:37 hashar: rebooting integration-zuul-server: homedir seems to be stalled/missing
* 08:12 hashar: upgrading packages on integration-dev
* 05:14 greg-g: and right when I log'd that, things seem to be recovering
* 05:12 greg-g: the shinken alerts about beta cluster issues are due to wmflabs having issues.


== 2015-04-01 ==
== 2022-02-10 ==
* 07:17 Krinkle: Creating integration-slave1410 as test. Will re-create our pool later today.
* 17:29 jeena: reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/761602
* 06:26 Krinkle: Apply puppetmaster::autosigner to integration-puppetmaster
* 05:51 legoktm: deleting non-existent job workspaces from integration slaves
* 05:42 Krinkle: Free up space on integration-slave1001-1004 by removing obsolete phplint and qunit workspaces
* 02:05 Krinkle: Restarting Jenkins again..
* 01:35 legoktm: started zuul on gallium
* 01:00 Krinkle: Restarting Jenkins
* 01:00 Krinkle: Jenkins is unable to start Gearman connection (HTTP 503);
* 01:00 Krinkle: Force restarted Zuul, didn't help
* 00:55 Krinkle: Jenkins stuck. Builds are queued in Zuul but nothing is sent to Jenkins.


== 2015-03-31 ==
== 2022-02-09 ==
* 21:00 greg-g: puppet-compiler02: This node is offline because Jenkins failed to launch the slave agent on it.
* 15:22 taavi: deleted shutoff deployment-mx02
* 20:15 legoktm: deploying https://gerrit.wikimedia.org/r/200926
* 18:48 legoktm: DEPLOYING https://gerrit.wikimedia.org/r/200327
* 15:44 thcipriani: primed keyholder on deployment-bastion to ensure jenkins-deploy can ssh
* 12:25 hashar: qa-morebots is back


== 2015-03-30 ==
== 2022-02-08 ==
* 22:58 legoktm: 1001-1003 were depooled, restarted and repooled. 1004 is depooled and restarted
* 17:34 taavi: remove scap from deployment-kafka-main/jumbo
* 22:40 legoktm: rebooting precise jenkins slaves
* 16:23 taavi: hard reboot misbehaving deployment-echostore01
* 21:40 greg-g: Beta Cluster is down due to WMF Labs issues, being taken care of now (by Coren and Yuvi)
* 13:39 taavi: delete /srv/mediawiki-staging.save on deployment-deploy03
* 19:53 legoktm: deleted core dumps from integration-slave1001
* 19:11 legoktm: deploying https://gerrit.wikimedia.org/r/200646
* 16:29 jzerebecki: another damaged git repo integration-slave1001:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-qunit/src/vendor/
* 16:07 jzerebecki: removing workspaces of deleted jobs integration-slave100{1,2,3,4}:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-{client,repo,repo-api}-tests{,@*}
* 15:14 jzerebecki: integration-slave1001:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-repo-api-tests-sqlite
* 15:05 jzerebecki: integration-slave1001:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-repo-api-tests-mysql/src/extensions/cldr
* 14:36 jzerebecki: integration-slave1001:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-*-tests{,@*}
* 13:06 jzerebecki: integration-slave1001:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-client-tests@*
* 13:05 jzerebecki: integration-slave1001:~$ sudo -u jenkins-deploy rm -rf /mnt/jenkins-workspace/workspace/mwext-Wikibase-client-tests


== 2015-03-29 ==
== 2022-02-07 ==
* 07:29 legoktm: deploying https://gerrit.wikimedia.org/r/#/c/200333/
* 20:55 taavi: added Zabe as member of the deployment-prep project [[phab:T301179|T301179]]
* 07:07 legoktm: deploying https://gerrit.wikimedia.org/r/#/c/200332/
* 18:19 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/760550
* 03:51 legoktm: deploying https://gerrit.wikimedia.org/r/200330
* 03:09 legoktm: deploying https://gerrit.wikimedia.org/r/#/c/200329/
* 00:10 legoktm: deploying https://gerrit.wikimedia.org/r/#/c/200323/


== 2015-03-28 ==
== 2022-02-04 ==
* 04:02 bd808: manually updated beta-code-update-eqiad job to remove sudo to mwdeploy; needs associated jjb change for T94261
* 00:21 Krinkle: Updating docker-pkg files on contint primary for https://gerrit.wikimedia.org/r/759622


== 2015-03-27 ==
== 2022-02-03 ==
* 23:28 bd808: applied beta::autoupdater directly to deployment-bastion via wikitech interface
* 18:41 taavi: deployment-prep: route /w/api.php to deployment-mediawiki11, trying to reduce load on a single server
* 23:21 bd808: Duplicate declaration: Git::Clone[operations/mediawiki-config] is already declared in file /etc/puppet/modules/beta/manifests/autoupdater.pp:46; cannot redeclare at /etc/puppet/modules/scap/manifests/master.pp:22
* 14:53 hashar: Building Docker images for Quibble 1.4.0  (prepared by kostajh)
* 23:01 bd808: restarted puppetmaster
* 13:51 kostajh: Tag Quibble 1.4.0 @ {{Gerrit|4231bc2832395d94e29a332fe8d863301a0cd441}} # [[phab:T300340|T300340]] [[phab:T291549|T291549]] [[phab:T225730|T225730]]
* 22:52 hashar: integration: jzerebecki addition and sudo policy  tracked for history purpose as {{bug|T94280}}
* 22:52 bd808: chown -R l10nupdate:wikidev /srv/mediawiki-staging/php-master/cache/l10n
* 22:44 bd808: deployment-bastion: chown -R jenkins-deploy:wikidev /srv/mediawiki-staging/
* 22:41 bd808: forcing puppet run on deployment-bastion
* 22:41 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/200248/ and https://gerrit.wikimedia.org/r/#/c/199988/
* 22:40 hashar: integration: created sudo policy allowing members to run any command as jenkins-deploy on all hosts.
* 22:40 hashar: added jzerebecki to the integration labs project as a normal member
* 22:34 hashar: integration-slave1001 rm -fR mwext-Wikibase-repo-api-tests/src/vendor
* 21:13 greg-g: things be better
* 20:56 greg-g: Beta Cluster is down, known
* 18:50 marxarelli: running `jenkins-jobs update` to update 'browsertests-UploadWizard-*' with Id33ffde07f0c15e153d52388cf130be4c59b4559
* 17:50 legoktm: deleted core dumps from integration-slave1002
* 17:48 legoktm: marked integration-slave1002 as offline, /var filled up
* 05:42 legoktm: marked integration-slave1001 as offline due to https://phabricator.wikimedia.org/T94138


== 2015-03-26 ==
== 2022-02-02 ==
* 23:47 legoktm: deploying https://gerrit.wikimedia.org/r/200069
* 16:50 dancy: Upgrading scap to 4.2.2-1+0~20220202164708.157~1.gbp376a16 in beta.
* 19:22 bd808: Manually added missing !log entries from 2015-03-25 from my bouncer logs
* 16:12 dancy: Upgrading scap to 4.2.2-1+0~20220201161808.156~1.gbp1c1c64 in beta
* 17:14 greg-g: jobs appear to be processing according to zuul, the Jenkins UI just takes forever to load, apparently
* 17:12 greg-g: "Please wait while Jenkins is getting ready to work"
* 17:08 greg-g: 0:07 <      robh> kill -9 and restarted per instructions
* 16:53 greg-g: Still.... "Please wait while Jenkins is restarting..."
* 16:49 greg-g: "Please wait while Jenkins is restarting..."
* 16:39 greg-g: going to do a safe-restart of Jenkins https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Restart_all_of_Jenkins
* 16:38 greg-g: nothing executing on deployment-bastion, that is
* 16:38 greg-g: same, nothing executing
* 16:37 greg-g: did that checklist once, jobs still not executing, doing again
* 16:32 greg-g: I'll start going through the checklist at https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update
* 16:30 hashar: deadlock on deployment-bastion slave. Someone needs to restart Jenkins :(
* 13:25 hashar: yamllint job fixed by altering the label https://gerrit.wikimedia.org/r/#/c/199876/
* 13:17 hashar: Changes blocked because there is nothing able to run yamllint  ( zuul-gearman.py status|grep build:yamllint  ,  shows 8 jobs pending and no worker available)


== 2015-03-25 ==
== 2022-02-01 ==
* 23:23 bd808: chown -R jenkins-deploy:project-deployment-prep /srv/mediawiki-staging/php-master/cache/gitinfo
* 17:27 addshore: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/734654
* 23:14 bd808: chown -R l10nupdate:project-deployment-prep /srv/mediawiki-staging/php-master/cache/l10n
* 00:34 tgr: deployment-pre un-cherry-picked gerrit 758584 from beta puppetmaster, patch is now merged [[phab:T300591|T300591]]
* 00:12 tgr: deployment-prep cherry-picked gerrit 758584 to beta puppetmaster [[phab:T300591|T300591]]
* 23:04 bd808: chown -R mwdeploy:project-deployment-prep /srv/mediawiki-staging
* 22:58 bd808: File permissions in deployment-bastion:/srv/mediawiki-staging are part mwdeploy:mwdeploy, part mwdeploy:project-deployment-prep, and part jenkins-deploy:project-deployment-prep
* 21:52 legoktm: deploying https://gerrit.wikimedia.org/r/199736
* 18:49 legoktm: deploying https://gerrit.wikimedia.org/r/196745
* 15:13 bd808: Updated scap to include 4a63a63 (Copy l10n CDB files to rebuildLocalisationCache.php tmp dir)
* 03:44 legoktm: deploying https://gerrit.wikimedia.org/r/199555 and https://gerrit.wikimedia.org/r/199559
* 00:52 Krinkle: Restarted Jenkins-Gearman connection
* 00:50 Krinkle: Jenkins is unable to start Gearman connection (HTTP 503); Restarting Jenkins.
* 00:32 legoktm: disabling/enabling gearman in jenkins


== 2015-03-24 ==
== 2022-01-31 ==
* 23:32 Krinkle: Force restart Zuul
* 19:01 James_F: Re-configured Jenkins job mediawiki-i18n-check-docker to {{Gerrit|9e3ea96c548d7a84be763d38c2d118bc861cf189}} for [[phab:T222216|T222216]]
* 22:25 hashar: marked gallium and lanthanum slaves as temp offline, then back. Seems to have cleared some Jenkins internal state and resumed the build
* 10:49 hashar: Added integration-agent-qemu-1003 with label `Qemu` # [[phab:T284774|T284774]]
* 21:55 bd808: Ran trebuchet for scap to keep cherry-pick of I01b24765ce26cf48d9b9381a476c3bcf39db7ab8 on top of active branch; puppet was forcing back to prior trebuchet sync tag
* 21:42 hashar: Reconfigured [https://integration.wikimedia.org/ci/view/Beta/job/mediawiki-core-code-coverage/ mediawiki-core-code-coverage]
* 21:22 hashar: Zuul gate is deadlocked for up to half an hour due to a change being force merged :(
* 21:15 hashar: beta: deleted untracked file /srv/mediawiki-staging/php-master/extensions/.gitignore . That fixed the Jenkins job https://integration.wikimedia.org/ci/job/beta-code-update-eqiad/
* 20:31 twentyafterfour: sudo ln -s /srv/l10nupdate/ /var/lib/
* 20:31 twentyafterfour: sudo mv /var/lib/l10nupdate/ /srv/
* 20:28 bd808: deployment-bastion -- rm -r pacct.1.gz pacct.2.gz pacct.3.gz pacct.4.gz pacct.5.gz pacct.6.gz
* 20:24 bd808: Deleted junk in deployment-bastion:/tmp
* 18:57 legoktm: deploying https://gerrit.wikimedia.org/r/199305
* 18:25 legoktm: deploying https://gerrit.wikimedia.org/r/199216
* 17:06 legoktm: deploying https://gerrit.wikimedia.org/r/199273
* 11:23 hashar: beta-scap-eqiad keeps regenerating l10n cache https://phabricator.wikimedia.org/T93737 
* 08:35 hashar: restarting Jenkins for some plugins upgrades
* 08:07 legoktm: deployed https://gerrit.wikimedia.org/r/199190
* 07:21 legoktm: deploying https://gerrit.wikimedia.org/r/199205
* 07:17 legoktm: deploying https://gerrit.wikimedia.org/r/199204
* 07:08 legoktm: deploying https://gerrit.wikimedia.org/r/199201
* 06:46 legoktm: freed ~6G on lanthanum by deleting mediawiki-extensions-zend* workspaces
* 05:04 legoktm: deleting workspaces of jobs that no longer exist in jjb on lanthanum
* 04:11 legoktm: deploying https://gerrit.wikimedia.org/r/198792
* 03:14 Krinkle: Deleting old job workspaces on gallium not touched since 2013
* 02:42 Krinkle: Restarting Zuul, wikimedia-fundraising-civicrm is stuck as of 46min ago waiting for something already merged
* 02:32 legoktm: toggling gearman off/on in jenkins
* 01:47 twentyafterfour: deployed scap/scap-sync-20150324-014257 to beta cluster
* 00:23 Krinkle: Restarted Zuul


== 2015-03-23 ==
== 2022-01-28 ==
* 23:18 hasharDinner: Stopping Jenkins for an upgrade
* 21:45 taavi: running recountCategories.php on all beta wikis per [[phab:T299823|T299823]]#7652496
* 23:16 legoktm: deleting mwext-*-lint* workspaces on gallium, shouldn't be needed
* 14:27 hashar: taking heapdump  of CI Jenkins `sudo -u jenkins /usr/lib/jvm/java-11-openjdk-amd64/bin/jmap -dump:live,format=b,file=/var/lib/jenkins/202201281527.hprof xxxx`
* 23:11 legoktm: deleting mwext-*-qunit* workspaces on gallium, shouldn't be needed
* 23:07 legoktm: deleting mwext-*-lint workspaces on gallium, shouldn't be needed
* 23:00 legoktm: lanthanum is now online again, with 13G free disk space
* 22:58 legoktm: deleting mwext-*-qunit* workspaces on lanthanum, shouldn't be needed any more
* 22:54 legoktm: deleting mwext-*-qunit-mobile workspaces on lanthanum, shouldn't be needed any more
* 22:48 legoktm: deleting mwext-*-lint workspaces on lanthanum, shouldn't be needed any more
* 22:45 legoktm: took lanthanum offline in jenkins
* 20:59 bd808: Last log copied from #wikimedia-labs
* 20:58 bd808: 20:41 cscott deployment-prep updated OCG to version 11f096b6e45ef183826721f5c6b0f933a387b1bb
* 19:28 YuviPanda: created staging-rdb01.eqiad.wmflabs
* 19:19 YuviPanda: disabled puppet on staging-palladium to test a puppet patch
* 18:41 legoktm: deploying https://gerrit.wikimedia.org/r/198762
* 13:11 hashar: and I restarted qa-morebots a minute or so ago (see https://wikitech.wikimedia.org/wiki/Morebots#Example:_restart_the_ops_channel_morebot )
* 13:11 hashar: Jenkins: deleting unused jobs mwext-.*-phpcs-HEAD and mwext-.*-lint


== 2015-03-21 ==
== 2022-01-27 ==
* 17:53 legoktm: deployed https://gerrit.wikimedia.org/r/198503
* 20:26 hashar: Successfully published image docker-registry.discovery.wmnet/releng/logstash-filter-verifier:0.0.2  # [[phab:T299431|T299431]]
* 00:02 Krinkle: Reestablished Jenkins-Gearman connection
* 19:34 Amir1: Reloading Zuul to deploy 757464
* 16:00 hashar: Pooling back agents 1035 1036 1037 1038; they could not connect due to an ssh host key mismatch: since yesterday they were all attached to instance 1033 and accepted that host key # [[phab:T300214|T300214]]
* 09:16 hashar: integration: cumin --force 'name:docker' 'apt install rsync'  # [[phab:T300236|T300236]]
* 09:05 hashar: integration: cumin --force 'name:docker' 'apt install rsync'  # [[phab:T300214|T300214]]
* 00:24 thcipriani: restarting jenkins


== 2015-03-20 ==
== 2022-01-26 ==
* 23:08 marxarelli: Reloading Zuul to deploy I693ea49572764c96f5335127902404167ca86487
* 20:29 hashar: Completed migration of integration-agent-docker-XXXX instances from Stretch to Bullseye - [[phab:T252071|T252071]]
* 22:50 marxarelli: Running `jenkins-jobs update` to create job mediawiki-vagrant-bundle17-yard-publish
* 19:55 hashar: deleting integration-agent-docker-1014, which only has the `codehealth` label. A short-lived experiment, no longer used since October 2nd 2019 - https://gerrit.wikimedia.org/r/c/integration/config/+/540362 - [[phab:T234259|T234259]]
* 19:00 Krinkle: Reloading Zuul to deploy  https://gerrit.wikimedia.org/r/198276
* 18:56 hashar: integration: pooled in Jenkins a few more Bullseye docker agents for [[phab:T252071|T252071]]
* 17:17 Krinkle: Reloading Zuul to deploy I5edff10a4f0
* 18:17 hashar: integration: pooled in Jenkins a few Bullseye docker agent for [[phab:T252071|T252071]]
* 12:32 mobrovac: deployment-salt ops/puppet: un-cherry-picked I48b1a139b02845c94c85cd231e54da67c62512c9
* 16:45 hashar: integration: creating  integration-agent-docker-1023  based on buster with new flavor `g3.cores8.ram24.disk20.ephemeral60.4xiops` # [[phab:T290783|T290783]]
* 12:30 mobrovac: deployment-prep disabled puppet on deployment-restbase[1,2] until https://gerrit.wikimedia.org/r/#/c/197662/ is merged
* 08:36 mobrovac: deployment-salt ops/puppet: cherry-picking I48b1a139b02845c94c85cd231e54da67c62512c9
* 04:57 legoktm: deployed https://gerrit.wikimedia.org/r/198184
* 00:21 legoktm: deployed https://gerrit.wikimedia.org/r/198161
* 00:14 legoktm: deployed https://gerrit.wikimedia.org/r/198160


== 2015-03-19 ==
== 2022-01-25 ==
* 23:59 legoktm: deployed https://gerrit.wikimedia.org/r/198154
* 20:17 James_F: Zuul: [mediawiki/extensions/CentralAuth] Drop UserMerge dependency
* 21:48 hashar: Jenkins: depooled/repooled lanthanum slave, it was no longer processing any jobs.
* 16:39 James_F: Zuul: Mark Math extension as now tarballed in parameter_functions for [[phab:T232948|T232948]]
* 14:09 hashar: Further updated our JJB fork to upstream commit 4bf020e07, which is version 1.1.0-3
* 15:57 James_F: Zuul: [mediawiki/extensions/Math] Add Math to the main gate for [[phab:T232948|T232948]]
* 13:22 hashar: refreshed our JJB fork 7ad4386..8928b66 . No difference in our jobs.
* 13:44 hashar: Jenkins CI: added Logger https://integration.wikimedia.org/ci/log/ProcessTree%20-%20T299995/ to watch `hudson.util.ProcessTree` for [[phab:T299995|T299995]]
* 11:25 hashar: refreshing configuration of all beta* jenkins jobs
* 10:02 hashar: integration: removing usage of `role::ci::slave::labs::docker::docker_lvm_volume` in Horizon following https://gerrit.wikimedia.org/r/c/operations/puppet/+/755948  . Docker role instances now always have a 24G partition for Docker
* 06:18 legoktm: deployed https://gerrit.wikimedia.org/r/197860 & https://gerrit.wikimedia.org/r/197858
* 09:59 hashar: integration-agent-qemu-1001: resized /srv to 100% disk free: `lvextend -r -l +100%FREE /dev/mapper/vd-second--local--disk` # [[phab:T299996|T299996]]
* 05:20 legoktm: deleting 'mediawiki-ruby-api-bundle-*' 'mediawiki-selenium-bundle-*' 'mwext-*-bundle-*' jobs
* 09:59 hashar: integration-agent-qemu-1001: resizing /dev/mapper/vd-second--local--disk (/srv) to 20G : `resize2fs -p /dev/mapper/vd-second--local--disk 20G` # [[phab:T299996|T299996]]
* 05:06 legoktm: deployed https://gerrit.wikimedia.org/r/197853
* 09:51 hashar: integration-agent-qemu-1001: resizing /dev/mapper/vd-second--local--disk (/srv) to 20G : `resize2fs -p /dev/mapper/vd-second--local--disk 20G`
* 00:57 Krinkle: Reloading Zuul to deploy Ie1d7bf114b34f9
* 09:51 hashar: integration-agent-qemu-1003: nuked /dev/vd/second-local-disk and /srv to make room for a docker logical volume. That has fixed puppet  [[phab:T299996|T299996]]
* 09:22 Reedy: unblocked beta again
* 07:32 Krinkle: integration-castor03:/srv/jenkins-workspace/caches$ sudo rm -rf castor-mw-ext-and-skins/


== 2015-03-18 ==
== 2022-01-24 ==
* 17:52 legoktm: deployed https://gerrit.wikimedia.org/r/197674 and https://gerrit.wikimedia.org/r/197675
* 21:44 Reedy: unstick beta ci jobs
* 17:27 legoktm: deployed https://gerrit.wikimedia.org/r/197651
* 21:19 jeena: reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/756523
* 15:20 hashar: setting gallium # of executors from 5 back to 3.  When jobs run on it that slowdown the zuul scheduler and merger!
* 20:36 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/756139
* 15:06 legoktm: deployed https://gerrit.wikimedia.org/r/194990
* 17:28 hashar: Nuke castor caches on integration-castor03 : sudo rm -fR /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/<nowiki>{</nowiki>quibble-vendor-mysql-php72-selenium-docker,wmf-quibble-selenium-php72-docker<nowiki>}</nowiki>  # [[phab:T299933|T299933]]
* 02:02 bd808: Updated scap to I58e817b (Improved test for content preceeding <?php opening tag)
* 17:28 hashar: Nuke castor caches on integration-castor03 : sudo rm -fR /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/<nowiki>{</nowiki>quibble-vendor-mysql-php72-selenium-docker,wmf-quibble-selenium-php72-docker<nowiki>}</nowiki>
* 01:48 marxarelli: memory usage, swap, io wait seem to be back to normal on deployment-salt and kill/start of puppetmaster
* 01:45 marxarelli: kill 9'd puppetmaster processes on deployment-salt after repeated attempts to stop
* 01:28 marxarelli: restarting salt master on deployment-salt
* 01:20 marxarelli: deployment-salt still unresponsive, lot's of io wait (94%) + swapping
* 00:32 marxarelli: seeing heavy swapping on deployment-salt; puppet processes using 250M+ memory each


== March 17 ==
* 21:42 YuviPanda: recreated staging-sca01, let’s wait and see if it just automagically configures itself :)
* 21:40 YuviPanda: deleted staging-sca01 because why not :)
* 17:52 Krinkle: Reloading Zuul to deploy I206c81fe9bb88feda6
* 16:28 bd808: Updated scap to include I61dcf7ae6d52a93afc6e88d3481068f09a45736d (Run rebuildLocalisationCache.php as www-data)
* 16:25 bd808: chown -R trebuchet:wikidev && chmod -R g+rwX deployment-bastion:/srv/deployment/scap/scap
* 16:16 YuviPanda: created staging-sca01
* 14:39 hashar: me versus the Debian packaging toolchain http://xkcd.com/1168/
* 09:24 hashar: deleted operations-puppet-validate
* 09:21 hashar: deleted mwext-Wikibase-lint job, not triggered anymore

== 2022-01-22 ==
* 13:40 taavi: apply [[phab:T299827|T299827]] on deployment-prep centralauth database
* 11:44 taavi: restart varnish-frontend.service on deployment-cache-upload06 to clear puppet agent failure alerts


== March 16 ==
* 21:55 legoktm: deployed https://gerrit.wikimedia.org/r/197213
* 21:25 legoktm: deployed https://gerrit.wikimedia.org/r/#/c/196095/
* 18:50 legoktm: deployed https://gerrit.wikimedia.org/r/197109
* 18:38 legoktm: deployed https://gerrit.wikimedia.org/r/196743 & https://gerrit.wikimedia.org/r/196746
* 18:24 legoktm: deleted rcstream-* jobs
* 18:11 legoktm: deployed https://gerrit.wikimedia.org/r/197094
* 10:02 hashar: restarting Jenkins
* 02:00 legoktm: deleting all 'mwext-*-composer-*' jobs that should never have been used

== 2022-01-21 ==
* 18:12 taavi: resolved merge conflicts on deployment-puppetmaster04
* 15:50 hashar: integration-puppetmaster-02: deleted 2021 snapshot tags in puppet repo and ran `git gc --prune=now`
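The snapshot-tag cleanup followed by `git gc --prune=now` noted above is a common recipe for shrinking a local clone. A dry-run sketch (the tag pattern is hypothetical; the `run` helper only prints commands by default):

```shell
#!/bin/sh
# Prune old snapshot tags from a local repo clone, then garbage-collect.
# DRY_RUN=1 (default) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "DRY-RUN: $*"; else "$@"; fi; }

PATTERN='snapshot-2021*'   # hypothetical tag naming scheme

# List matching tags; empty (and quiet) when run outside a git checkout
for tag in $(git tag -l "$PATTERN" 2>/dev/null); do
  run git tag -d "$tag"
done
# --prune=now drops unreachable objects immediately, skipping the grace period
run git gc --prune=now
```

Deleting the tags first is what makes the snapshot objects unreachable, so the `gc` can actually reclaim their space.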


== March 15 ==
* 07:39 legoktm: deleting non-generic, unused *-rubylint1.9.3lint & *-ruby2.0lint jobs
* 00:56 Krinkle: Reload Zuul to deploy Idb2f15a94a67

== 2022-01-20 ==
* 20:24 James_F: Zuul: [Kartographer] Add parsoid as dependency for CI jobs
* 20:22 James_F: Zuul: [DiscussionTools] Add Gadgets as dependency for Phan jobs
* 20:04 dancy: Jenkins beta jobs are back online, using scap prep auto now.
* 19:19 dancy: Pausing beta Jenkins jobs to make a copy of /srv/mediawiki-staging in preparation for testing
* 19:10 dancy: Unpacking scap (4.1.1-1+0~20220120175448.144~1.gbp517f9d) over (4.1.1-1+0~20220113154148.133~1.gbp6e3a17) on deploy03
* 18:07 hashar: Updating Quibble jobs to have MediaWiki files written on the host's /srv partition (38G) instead of inside the container, which ends up in /var/lib/docker (24G) https://gerrit.wikimedia.org/r/755743 # [[phab:T292729|T292729]]
* 16:31 hashar: Rebalancing /var/lib/docker and /srv partitions on CI agents {{!}} https://gerrit.wikimedia.org/r/755713
* 12:12 hashar: contint2001: deleting all the Docker images (they will be pulled as needed)
* 12:10 hashar: contint2001: docker container prune && docker image prune
* 12:07 hashar: contint1001: deleting all the Docker images (they will be pulled as needed)
* 12:04 hashar: contint1001: `docker image prune`
* 11:51 hashar: Cleaning very old Docker images on contint1001.wikimedia.org
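The contint cleanup entries above boil down to pruning stopped containers and unused images. A dry-run sketch of that sequence (the `run` wrapper is an illustrative addition; on a real CI agent you would only delete all images because builds re-pull what they need):

```shell
#!/bin/sh
# Reclaim /var/lib/docker space on a CI host. DRY_RUN=1 (default) only
# prints the commands; set DRY_RUN=0 to execute them for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "DRY-RUN: $*"; else "$@"; fi; }

run docker container prune --force       # drop stopped containers
run docker image prune --force           # drop dangling layers only
run docker image prune --all --force     # drop every image not in use
```

The `--force` flag skips the interactive confirmation prompt, which matters when running this from a cron job or deploy script.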


== March 14 ==
* 03:52 legoktm: deployed https://gerrit.wikimedia.org/r/196540

== 2022-01-19 ==
* 18:20 hashar: Adding https://integration.wikimedia.org/ci/computer/contint1001/ back to the pool again
* 17:31 hashar: Adding https://integration.wikimedia.org/ci/computer/contint1001/ back to the pool after the machine got powercycled # [[phab:T299542|T299542]]
* 10:38 Reedy: kill some stuck jobs [[phab:T299485|T299485]]


== March 13 ==
* 01:49 legoktm: deleted a bunch of unused *-tox-* jobs
* 01:03 legoktm: deployed https://gerrit.wikimedia.org/r/191063 & https://gerrit.wikimedia.org/r/196505
* 00:17 Krinkle: Reloading Zuul to deploy I46c60d520

== 2022-01-18 ==
* 19:56 hashar: building Docker images for https://gerrit.wikimedia.org/r/754951
* 18:01 taavi: added ryankemper as a member of the deployment-prep project
* 15:00 hashar: Updating Jenkins jobs for Quibble 1.3.0 with proper PHP version in the images # [[phab:T299389|T299389]]
* 11:39 hashar: Rolling back Quibble 1.3.0 jobs due to PHP configuration file issues with at least releng/quibble-buster73:1.3.0 # [[phab:T299389|T299389]]
* 08:07 hashar: Updating Jenkins jobs for Quibble to pass `--parallel-npm-install` https://gerrit.wikimedia.org/r/c/integration/config/+/754569
* 08:02 hashar: Updating Jenkins jobs for Quibble 1.3.0


== March 12 ==
* 23:34 Krinkle: Depooling integration-slave1402 to play with T92351
* 20:26 Krinkle: Re-established Gearman connection from Zuul due to deadlock
* 17:39 YuviPanda: killed deployment-rsync01, wasn’t being used for anything discernible, and that’s not how proxies work in prod
* 15:31 Krinkle: Reloading Zuul to deploy Ia289ebb0
* 15:22 Krinkle: Fix Jenkins UI (was stuck in German)
* 15:05 YuviPanda: Jenkins loves German again
* 07:11 YuviPanda: scap still failing on beta, I'll check when I'm back from lunch
* 07:11 YuviPanda: rebooted puppetmaster, was dead

== 2022-01-17 ==
* 16:28 hashar: Building Quibble 1.3.0 Docker images
* 16:16 hashar: Tagged Quibble 1.3.0 @ {{Gerrit|2b2c7f9a45}} # [[phab:T297480|T297480]] [[phab:T226869|T226869]] [[phab:T294931|T294931]]
* 08:32 hashar: Refreshing all Jenkins jobs with jjb to take into account recent changes related to the Jinja2 docker macro


== March 11 ==
* 19:47 legoktm: deployed https://gerrit.wikimedia.org/r/195990
* 15:11 Krinkle: Jenkins UI in German, again
* 14:05 Krinkle: Jenkins web dashboard is in German
* 11:02 hashar: created integration-zuul-packaged.eqiad.wmflabs to test out the Zuul debian package
* 09:07 hashar: Deleted refs/heads/labs branch in integration/zuul.git
* 09:01 hashar: https://gerrit.wikimedia.org/r/#/c/195287/
* 09:01 hashar: made Zuul clone on labs use the master branch instead of the labs one. There is no point in keeping separate ones anymore

== 2022-01-14 ==
* 15:56 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/753981
* 14:59 hashar: Starting VM integration-agent-docker-1022, which had been in shutdown state since December and is Bullseye based # [[phab:T290783|T290783]]
* 13:49 hashar: Restarting all CI Docker agents via Horizon to apply new flavor settings [[phab:T265615|T265615]] [[phab:T299211|T299211]]
* 01:47 dancy: revert to scap 4.1.1-1+0~20220113154148.133~1.gbp6e3a17 in beta


== March 10 ==
* 15:22 apergos: after update of salt in deployment-prep, git deploy restart is likely broken. Details: https://phabricator.wikimedia.org/T92276
* 14:50 Krinkle: Browsertest job was stuck for > 10hrs. Jobs should not be allowed to run that long.

== 2022-01-13 ==
* 18:02 dancy: Updating scap to 4.1.1-1+0~20220113154506.135~1.gbp523480 on all beta hosts
* 17:54 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/753792
* 16:27 dancy: testing scap prep auto on deployment-deploy03
* 15:52 dancy: Update scap to 4.1.1-1+0~20220113154506.135~1.gbp523480 on deployment-deploy03
* 11:27 hashar: Updating Jenkins jobs to normalize usage of `docker run --workdir` https://gerrit.wikimedia.org/r/c/integration/config/+/753457
* 10:52 hashar: Restarting Jenkins CI for plugins update
* 10:42 hashar: Applied Jenkins built-in node migration to CI Jenkins (`master` → `built-in` renaming) # [[phab:T298691|T298691]]
* 10:14 taavi: cancelled stuck deployment-prep jobs on jenkins


== March 9 ==
* 23:57 legoktm: deployed https://gerrit.wikimedia.org/r/195486
* 22:49 Krinkle: Reloading Zuul to deploy I229d24c57d90ef
* 20:37 legoktm: doing the gearman shuffle dance thing
* 19:42 Krinkle: Reloading Zuul to deploy I48cb4db87
* 19:35 Krinkle: Delete integration-slave1010
* 19:31 Krinkle: Restarted slave agent on gallium
* 19:30 Krinkle: Re-established Gearman connection from Jenkins

== 2022-01-12 ==
* 18:58 hashar: Applied plugins update to https://releases-jenkins.wikimedia.org/


== March 8 ==
* 17:40 Krinkle: Delete integration-slave1006, integration-slave1007 and integration-slave1008
* 00:06 legoktm: deployed https://gerrit.wikimedia.org/r/195072

== 2022-01-11 ==
* 09:18 hashar: Updating all Jenkins jobs following recent "noop" refactorings


== March 7 ==
* 22:10 legoktm: deployed https://gerrit.wikimedia.org/r/195069
* 14:44 Krinkle: Depool integration-slave1008 and integration-slave1010 (not deleting yet, just in case)
* 14:43 Krinkle: Depool integration-slave1006 and integration-slave1007 (not deleting yet, just in case)
* 14:41 Krinkle: Pool integration-slave1404
* 14:35 Krinkle: Reloading Zuul to deploy I864875aa4acc
* 06:28 Krinkle: Reloading Zuul to deploy I8d7e0bd315c4fc2
* 04:53 Krinkle: Reloading Zuul to deploy I585b7f026
* 04:51 Krinkle: Pool integration-slave1403
* 03:55 Krinkle: Pool integration-slave1402
* 03:31 Krinkle: Reloading Zuul to deploy I30131a32c7f1
* 02:59 James_F: Pushed Ib4f6e9 and Ie26bb17 to grrrit-wm and restarted
* 02:54 Krinkle: Reloading Zuul to deploy Ia82a0d45ac431b5

== 2022-01-10 ==
* 17:13 dancy: Update beta scap to 4.1.0-1+0~20220107203309.130~1.gbpcd0ace
* 14:01 James_F: Zuul: Add gate-and-submit-l10n to Isa for [[phab:T222291|T222291]]


== March 6 ==
* 23:30 Krinkle: Pool integration-slave1401
* 22:24 Krinkle: Re-establishing Gearman connection from Jenkins (deployment-bastion was deadlocked)
* 22:16 Krinkle: beta-scap-eqiad has been waiting for 50 minutes for an executor on deployment-bastion.eqiad (which has 5/5 slots idle)
* 21:36 Krinkle: Provisioning integration-slave1401 - integration-slave1404
* 20:14 legoktm: deployed https://gerrit.wikimedia.org/r/194939 for reals this time
* 20:12 legoktm: deployed https://gerrit.wikimedia.org/r/194939
* 18:22 ^d: staging: set has_ganglia to false in hiera
* 16:57 legoktm: deployed https://gerrit.wikimedia.org/r/194892
* 16:40 Krinkle: Jenkins auto-depooled integration-slave1008 due to low /tmp space. Purged /tmp/npm-* to bring it back up.
* 16:27 Krinkle: Delete integration-slave1005
* 09:17 hasharConf: Jenkins: upgrading and restarting. Wish me luck.
* 06:29 Krinkle: Re-creating integration-slave1401 - integration-slave1404
* 02:21 legoktm: deployed https://gerrit.wikimedia.org/r/194340
* 02:12 Krinkle: Pooled integration-slave1405
* 01:52 legoktm: deployed https://gerrit.wikimedia.org/r/194461

== 2022-01-05 ==
* 19:15 taavi: run `sudo chown -R jenkins-deploy:wikidev public/dists/bullseye-deployment-prep/` on deployment-deploy03
* 17:31 hashar: Deploying Zuul change https://gerrit.wikimedia.org/r/c/integration/config/+/751697 to get rid of the wmf-quibble-apache jobs # [[phab:T285649|T285649]]
* 10:48 hashar: CI: switching MediaWiki selenium from php built-in server to Apache # https://gerrit.wikimedia.org/r/751697
* 09:24 hashar: Updating Quibble jobs to use latest image (provides `quibble-with-apache` entrypoint) https://gerrit.wikimedia.org/r/c/integration/config/+/751685/


== March 5 ==
* 22:01 Krinkle: Reloading Zuul to deploy I97c1d639313b
* 21:15 hashar: stopping Jenkins
* 21:08 hashar: killing running browser tests
* 20:48 Krinkle: Re-establishing Gearman connection from Jenkins
* 20:44 Krinkle: Deleting integration-slave1201-integration-slave1204, and integration-slave1401-integration-slave1404.
* 20:18 Krinkle: Finished creation and provisioning of integration-slave1405
* 19:34 legoktm: deploying https://gerrit.wikimedia.org/r/194461, lots of new jobs
* 18:50 Krinkle: Re-creating integration-slave1405
* 17:52 twentyafterfour: pushed wmf/1.25wmf20 branch to submodule repos
* 16:18 greg-g: now there are jobs running on the zuul status page
* 16:16 greg-g: getting "/zuul/status.json: Service Temporarily Unavailable" after the zuul restart
* 16:12 ^d: restarted zuul
* 16:06 greg-g: jenkins doesn't have anything queued and is processing jobs apparently; not sure why zuul is showing two jobs queued for almost 2 hours (one with all tests passing, the other with nothing tested yet)
* 16:04 greg-g: not sure it helped
* 16:02 greg-g: about to disconnect/reconnect gearman per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues
* 00:34 legoktm: deployed https://gerrit.wikimedia.org/r/194421

== 2022-01-04 ==
* 12:49 hashar: Reloading Zuul for "api-testing: rename jobs to shorter forms" https://gerrit.wikimedia.org/r/751422
* 09:48 hashar: Building Quibble Docker images with Apache included https://gerrit.wikimedia.org/r/c/integration/config/+/748104
* 09:47 hashar: Reloading Zuul for "Add CentralAuth to phan dependency list for GrowthExperiments" https://gerrit.wikimedia.org/r/751383


== March 4 ==
* 17:34 Krinkle: Depooling all new integration-slave12xx and integration-slave14xx instances again (See T91524)
* 17:11 Krinkle: Pooled integration-slave1201, integration-slave1202, integration-slave1203, integration-slave1204
* 17:06 Krinkle: Pooled integration-slave1402, integration-slave1403, integration-slave1404, integration-slave1405
* 16:56 Krinkle: Pooled integration-slave1401
* 16:26 Krinkle: integration-slave12xx and integration-slave14xx are now provisioned. Old slaves will be depooled later and eventually deleted.

== 2022-01-03 ==
* 14:37 hashar: Upgraded Java 11 on contint2001 && contint1001. Restarted CI Jenkins.
* 14:35 hashar: Upgraded Java 11 on releases1002 && releases2002


== March 3 ==
* 22:00 hashar: reboot integration-puppetmaster in case it solves a NFS mount issue
* 20:33 legoktm: manually created centralauth.users_to_rename table
* 18:28 Krinkle: Lots of Jenkins builds are stuck even though they're "Finished". All services look up. (Filed T91430.)
* 17:18 Krinkle: Reloading Zuul to deploy Icad0a26dc8 and Icac172b16
* 15:39 hashar: cancelled logrotate update of all jobs since that seems to kill the Jenkins/Zuul gearman connection. Probably because all jobs are registered on each config change.
* 15:31 hashar: updating all jobs in Jenkins based on PS2 of https://gerrit.wikimedia.org/r/194109
* 10:56 hashar: Created instance i-000008fb with image "ubuntu-14.04-trusty" and hostname i-000008fb.eqiad.wmflabs.
* 10:52 hashar: deleting integration-puppetmaster to recreate it with a new image {{bug|T87484}}. Will have to reapply I5335ea7cbfba33e84b3ddc6e3dd83a7232b8acfd and I30e5bfeac398e0f88e538c75554439fe82fcc1cf
* 03:47 Krinkle: git-deploy: Deploying integration/slave-scripts 05a5593..1e64ed9
* 01:11 marxarelli: gzip'd /var/log/account/pacct.0 on deployment-bastion to free space


== March 2 ==
* 21:35 twentyafterfour: <Krenair> (per #mediawiki-core, have deleted the job queue key in redis, should get regenerated. also cleared screwed up log and restarted job runner service)
* 15:39 Krinkle: Removing /usr/local/src/zuul from integration-slave12xx and integration-slave14xx to let puppet re-install zuul-cloner (T90984)
* 13:39 Krinkle: integration-slave12xx and integration-slave14xx instances still depooled due to T90984


== February 27 ==
* 21:58 Krinkle: Ragekilled all queued jobs related to beta and force restarted Jenkins slave agent on deployment-bastion.eqiad
* 21:56 Krinkle: Job beta-update-databases-eqiad and node deployment-bastion.eqiad have been stuck for the past 4 hours
* 21:49 marxarelli: Reloading Zuul to deploy I273270295fa5a29422a57af13f9e372bced96af1 and I81f5e785d26e21434cd66dc694b4cfe70c1fa494
* 18:08 Krenair: Kicked deployment-bastion node in jenkins to try to fix jobs
* 06:42 legoktm: deployed https://gerrit.wikimedia.org/r/193057
* 01:01 Krinkle: Keeping all integration-slave12xx and slave14xx instances depooled.
* 00:53 Krinkle: Finished provisioning of integration-slave12xx and slave14xx instances. Initial testing failed due to "/usr/local/bin/zuul-cloner: No such file or directory"
== February 26 ==
* 23:24 Krinkle: integration-puppetmaster /var disk is full (1.8 of 1.9GB) - /var/log/puppet/reports is 1.1GB - purging
* 23:23 Krinkle: Puppet failing on new instances due to "Error 400 on SERVER: cannot generate tempfile `/var/lib/puppet/yaml/"
* 13:27 Krinkle: Provisioning the new integration-slave12xx and integration-slave14xx instances
* 05:05 legoktm: deployed https://gerrit.wikimedia.org/r/192980
* 03:48 Krinkle: Creating integration-slave1201,02,03,04 and integration-slave1401,02,03,04,05 per T74011 (not yet setup/provisioned, keep depooled)
* 03:39 Krinkle: Cleaned up and re-pooled integration-slave1006 (was depooled since yesterday)
* 03:39 Krinkle: Cleaned up and re-pooled integration-slave1007 and integration-slave1008 (was auto-depooled by Jenkins)
* 01:54 Krinkle: integration-slave1007 and integration-slave1008 were auto-depooled due to main disk (/ and its /tmp) being < 900 MB free
* 01:20 legoktm: actually deployed https://gerrit.wikimedia.org/r/192772 this time
* 01:16 legoktm: deployed https://gerrit.wikimedia.org/r/192772
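The full-/var incident above (puppet reports eating 1.1GB) is typically handled by purging reports older than some cutoff. A dry-run sketch (the path matches the log entry; the 7-day cutoff and `run` helper are illustrative assumptions — check `puppet config print reportdir` on a real host first):

```shell
#!/bin/sh
# Purge old puppet report YAML files when they fill /var. DRY_RUN=1
# (default) prints what would be removed; DRY_RUN=0 actually deletes.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "DRY-RUN: $*"; else "$@"; fi; }

REPORTS=/var/log/puppet/reports

# -mtime +7: only files untouched for more than a week
find "$REPORTS" -type f -name '*.yaml' -mtime +7 2>/dev/null |
while read -r f; do
  run rm -f "$f"
done
```

A cron entry doing the same keeps the disk from filling again between manual purges.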
== February 25 ==
* 23:55 Krinkle: Re-established Jenkins-Gearman connection
* 23:54 Krinkle: Zuul queue is growing. Nothing is added to its dashboard. Jenkins executors all idle. Gearman deadlock?
* 20:38 legoktm: deployed https://gerrit.wikimedia.org/r/192564
* 20:18 legoktm: deployed https://gerrit.wikimedia.org/r/192267
* 17:22 ^d: reloading zuul to pick up utfnormal jobs
* 02:15 Krinkle: integration-slave1006 has <700MB free disk space (including /tmp)
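Low-/tmp entries like the one above (and the npm-* purges elsewhere in this log) amount to deleting stale npm temp directories before Jenkins auto-depools the slave. A dry-run sketch (the one-day age cutoff and `run` helper are illustrative additions):

```shell
#!/bin/sh
# Clear npm temp directories from /tmp on a CI slave. DRY_RUN=1 (default)
# prints the removals; set DRY_RUN=0 to perform them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "DRY-RUN: $*"; else "$@"; fi; }

# Only top-level npm-* entries at least a day old, so running builds
# that still use their temp dirs are left alone.
find /tmp -maxdepth 1 -name 'npm-*' -mtime +0 2>/dev/null |
while read -r d; do
  run rm -rf "$d"
done
```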
== February 24 ==
* 18:41 marxarelli: Running `jenkins-jobs update` to create browsertests-CentralAuth-en.wikipedia.beta.wmflabs.org-linux-firefox-sauce
* 17:55 Krinkle: It seems xdebug was enabled on integration slaves running trusty. This makes errors in build logs incomprehensible.
== February 21 ==
* 03:01 Krinkle: Reloading Zuul to deploy I3bcd3d17cb886740bd67b33b573aa25972ddb574
== February 20 ==
* 07:25 Krinkle: Finished setting up integration-slave1010 and added it to Jenkins slave pool
* 00:54 Krinkle: Setting up integration-slave1010 (replacement for integration-slave1009)
== February 19 ==
* 23:13 bd808: added Thcipriani to under_NDA sudoers group; WMF staff
* 19:45 Krinkle: Destroying integration-slave1009 and re-imaging
* 19:02 bd808: VICTORY! deployment-bastion jenkins slave unstuck
* 19:01 bd808: toggling gearman plugin in jenkins admin console
* 18:58 bd808: took deployment-bastion jenkins connection offline and online 5 times; gearman plugin still stuck
* 18:41 bd808: cleaned up mess in /tmp on integration-slave1008
* 18:38 bd808: brought integration-slave1007 back online
* 18:37 bd808: cleaned up mess in /tmp on integration-slave1007
* 18:29 bd808: restarting jenkins because I messed up and disabled gearman plugin earlier
* 16:30 bd808: disconnected and reconnected deployment-bastion.eqiad again
* 16:28 bd808: reconnected deployment-bastion.eqiad to jenkins
* 16:28 bd808: disconnected deployment-bastion.eqiad from jenkins
* 16:27 bd808: killed all pending jobs for deployment-bastion.eqiad
* 16:26 bd808: disconnected deployment-bastion.eqiad from jenkins
* 16:20 legoktm: updated phpunit for https://gerrit.wikimedia.org/r/188398
== February 18 ==
* 23:50 marxarelli: Reloading Zuul to deploy Id311d632e5032ed153277ccc9575773c0c8f30f1
* 23:37 marxarelli: Running `jenkins-jobs update` to create mediawiki-vagrant-bundle17-cucumber job
* 23:15 marxarelli: Running `jenkins-jobs update` to update mediawiki-vagrant-bundle17 jobs
* 22:56 marxarelli: Reloading Zuul to deploy I3b71f4dc484d5f9ac034dc1050faf3ba6f321752
* 22:42 marxarelli: running `jenkins-jobs update` to create mediawiki-vagrant-bundle17 jobs
* 22:13 hashar: saving Jenkins configuration at https://integration.wikimedia.org/ci/configure to reset the locale
* 16:41 bd808: beta-scap-eqiad job fixed after manually rebuilding git clones of scap/scap on rsync01 and videoscaler01
* 16:39 bd808: rebuilt corrupt deployment-videoscaler01:/srv/deployment/scap/scap
* 16:36 bd808: rebuilt corrupt deployment-rsync01:/srv/deployment/scap/scap
* 16:26 bd808: scap failures only from deployment-videoscaler01 and deployment-rsync01
* 16:25 bd808: scap failing with "ImportError: cannot import name cli" after latest update; investigating
* 16:23 bd808: redis-cli srem 'deploy:scap/scap:minions' i-0000059b.eqiad.wmflabs i-000007f8.eqiad.wmflabs i-0000022e.eqiad.wmflabs i-0000044e.eqiad.wmflabs i-000004ba.eqiad.wmflabs
* 16:16 bd808: 5 deleted instances in trebuchet redis cache for salt/salt repo
* 16:16 bd808: updated scap to 7c64584 (Add universal argument to ignore ssh_auth_sock)
* 16:14 bd808: scap clone on deployment-mediawiki02 corrupt; git fsck did not fix; will delete and refetch
* 01:41 bd808: fixed git rebase conflict on deployment-salt caused by outdated cherry-pick; cherry-picks are merged now so reset to tracking origin/production
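The `redis-cli srem` cleanup above removes deleted instances from trebuchet's set of deploy targets so scap stops contacting them. A dry-run sketch (key name and example hostnames are taken from the log entries; the `run` helper is an illustrative addition):

```shell
#!/bin/sh
# Drop stale minions from the trebuchet deploy-target set in redis.
# DRY_RUN=1 (default) prints the commands instead of running them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "DRY-RUN: $*"; else "$@"; fi; }

KEY='deploy:scap/scap:minions'
for minion in i-0000059b.eqiad.wmflabs i-000007f8.eqiad.wmflabs; do
  run redis-cli srem "$KEY" "$minion"
done
```

`SREM` is idempotent, so re-running with a hostname that was already removed is harmless.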
== February 17 ==
* 17:47 hashar: beta cluster is mostly down because the instance supporting the main database (deployment-db1) is down. The root cause is an outage on the labs infra
* 03:43 Krinkle: Depooled integration-slave1009 (Debugging T89180)
* 03:38 Krinkle: Depooled integration-slave1009
== February 14 ==
* 00:55 marxarelli: gzip'd /var/log/account/pacct.0 on deployment-bastion
* 00:02 bd808: Stopped udp2log and started udp2log-mw on deployment-bastion
== February 13 ==
* 23:25 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/190231/ to deployment-salt for testing
* 14:03 Krinkle: Jenkins UI stuck in Spanish. Resetting configuration.
* 13:05 Krinkle: Reloading Zuul to deploy I0eaf2085576165b
== February 12 ==
* 11:11 hashar: changed passwords of selenium users.
* 10:41 hashar: Removing MEDIAWIKI_PASSWORD* global env variables from Jenkins configuration {{bug|T89226}}
== February 11 ==
* 19:39 Krinkle: Jenkins UI is stuck in French. Resetting..
* 17:56 greg-g: hashar saved Jenkins global configuration at https://integration.wikimedia.org/ci/configure  to hopefully reset the web interface default locale
* 09:57 hashar: restarting Jenkins to upgrade the Credentials plugin
* 09:25 hashar: bunch of puppet failure since 8:00am UTC. Seems to be DNS timeouts.
== February 10 ==
* 09:18 hashar: re-enabling puppet-agent on deployment-salt. Was disabled with no reason nor SAL entry.
* 06:32 Krinkle: Fix lanthanum:/srv/ssd/jenkins-slave/workspace/mediawiki-extensions-zend@3/src/extensions/Flow/.git/config.lock
* 00:50 bd808: Updated integration/slave-scripts to "Load extensions using wfLoadExtensions() if possible" (b532a9a)
== February 9 ==
* 22:40 Krinkle: Various mediawiki-extensions-zend builds are jammed half-way through phpunit execution (filed T89050)
* 21:31 hashar: Deputized legoktm to the Gerrit 'integration' group. Brings +2 on integration/* repos.
* 20:38 hashar: reconnected jenkins slave agents 1006 1007 and 1008
* 20:37 hashar: deleted /tmp on integration slaves 1006 1007 and 1008. Filled with npm temp directories
* 15:51 hashar: integration : allowed ssh from gallium 208.80.154.135/32 to the instances
* 09:20 hashar: starting puppet agent on integration-puppetmaster
== February 7 ==
* 16:23 hashar: puppet is broken on integration project for some reason. No clue what is going on :-( {{bug|T88960}}
* 16:19 hashar: restarted puppetmaster on integration-puppetmaster.eqiad.wmflabs
* 00:42 Krinkle: Jenkins is alerting for integration-slave1006, integration-slave1007 and integration-slave1008 having low /tmp space free (< 0.8GB)
== February 6 ==
* 22:40 Krinkle: Installed dsh on integration-dev
* 05:46 Krinkle: Reloading Zuul to deploy I096749565 and I405bea9d3e
* 01:35 Krinkle: Upgraded all integration slaves to npm v2.4.1
== February 5 ==
* 13:11 hasharAway: restarted Zuul server to clear out stalled jobs
* 12:25 hashar: Upgrading puppet-lint from 0.3.2 to 1.1.0 on all repositories. All jobs are non-voting besides mediawiki-vagrant-puppetlint-lenient, which passes just fine with 1.1.0
* 03:21 Krinkle: Reloading Zuul to deploy I08a524ea195c
* 00:22 marxarelli: Reloaded Zuul to deploy Iebdd0d2ddd519b73b1fc5e9ce690ecb59da9b2db
== February 4 ==
* 10:43 hashar: beta-scap-eqiad job is broken because mwdeploy can no longer ssh from deployment-bastion to deployment-mediawiki01. Filed as {{bug|T88529}}
* 10:30 hashar: piok
== February 3 ==
* 13:55 hashar: ElasticSearch /var/log/ filling up is {{bug|T88280}}
* 09:15 hashar: Running puppet on deployment-eventlogging02 has been stalled for 3d15h.  No log :-(
* 09:08 hashar: cleaning /var/log on deployment-elastic06 and deployment-elastic07
* 00:44 Krinkle: Restarting Jenkins-Gearman connection
== February 2 ==
* 21:39 Krinkle: Deployed I94f65b56368 and reloading Zuul
== January 31 ==
* 20:31 hashar: canceling a bunch of browser test jobs that are deadlocked waiting for SauceLabs. The HTTP request has no timeout {{bug|T88221}}
== January 29 ==
* 01:39 James_F: Restarting Jenkins because deployment-bastion.eqiad isn't depooling even after restart.
* 00:47 Krenair: running instructions at https://www.mediawiki.org/wiki/Continuous_integration/Jenkins#Hung_beta_code.2Fdb_update
* 00:26 Krinkle: integration-slave1007 rm -rf /mnt/jenkins-workspace/workspace/oojs*
* 00:19 Krinkle: Jenkins slave on deployment-bastion.eqiad has been stuck for the past 5 hours
== January 28 ==
* 22:53 Krinkle: integration-slave1007: rm -rf /mnt/jenkins-workspace/workspace/mwext-DonationInterface-np*
* 22:43 Krinkle: /srv/deployment/integration/slave-scripts got corrupted by puppet on labs slaves. No longer has the appropriate permission flags.
* 16:52 marktraceur: restarting nginx on deployment-upload so beta images might work again
== January 27 ==
* 18:54 Krinkle: integration-slave1007: rm -rf mwext-VisualEditor-*
== January 26 ==
* 23:22 bd808: rm integration-slave1006:/mnt/jenkins-workspace/workspace/mediawiki-phpunit-hhvm/src/.git/HEAD.lock (file was timestamped Jan 22 23:55)
* 21:06 bd808: I just merged a scap change that probably will break the beta-recompile-math-texvc-eqiad job -- https://gerrit.wikimedia.org/r/#/c/186808/
== January 24 ==
* 01:05 hashar: restarting Jenkins (deadlock on deployment-bastion slave)
== January 20 ==
* 18:50 Krinkle: Reconfigure Jenkins default language back to 'en' as it was set to Turkish
== January 17 ==
* 20:20 James_F: Brought deployment-bastion.eqiad back online, but without effect AFAICS.
* 20:19 James_F: Marking deployment-bastion.eqiad as temporarily offline to try to fix the backlog.
== January 16 ==
* 23:26 bd808: cherry-picked https://gerrit.wikimedia.org/r/#/c/185570/ to fix puppet errors on deployment-prep
* 12:43 _joe_: added hhvm.pcre_cache_type = "lru" to beta hhvm config
* 12:32 _joe_: installing the new HHVM package on mediawiki hosts
* 11:59 akosiaris: removed ferm from all beta hosts via salt
== January 15 ==
* 17:06 greg-g: turned off the beta-scap-eqiad jenkins job due to the persistent failing (https://phabricator.wikimedia.org/T86901) and the impending labs outage
* 14:50 hashar: beta-scap-eqiad broken since ~ 7:52am UTC. Depends on the mwdeploy user's homedir being fixed in LDAP https://phabricator.wikimedia.org/T86903
* 10:55 hashar: https://integration.wikimedia.org/ci/job/beta-scap-eqiad/ is broken since roughly 7:52am UTC.
== January 14 ==
* 23:22 mutante: cherry-picked I1e5f9f7bcbbe6c4 on deployment-bastion
* 20:37 hashar: Restarting Zuul
* 20:36 hashar: Zuul applied Ori patch to fix a git lock contention in Zuul-cloner {{bug|T86730}} . Tagged wmf-deploy-20150114-1
* 16:58 greg-g: rm -rf'd the Wikigrok checkout in integration-slave1006:/mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm/src/extensions to (hopefully) fix https://phabricator.wikimedia.org/T86730
* 14:56 anomie: Cherry-pick https://gerrit.wikimedia.org/r/#/c/173336/11/ to Beta Labs
* 02:05 bd808: There is some kind of race / conflict with the mediawiki-extensions-hhvm; I cleaned up the same error for a different extension yesterday
* 02:04 bd808: integration-slave1006 IOError: Lock for file '/mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm/src/extensions/WikiGrok/.git/config' did already exist, delete '/mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm/src/extensions/WikiGrok/.git/config.lock' in case the lock is illegal
== January 13 ==
* 22:37 hashar: Restarted Zuul, deadlocked waiting for Gerrit
* 21:38 ori: deployment-prep upgraded nutcracker on mw1/mw2 to 0.4.0+dfsg-1+wm1
* 17:49 hashar: If Zuul status page ( https://integration.wikimedia.org/zuul/  ) shows a lot of changes with completed jobs and the number of results growing, Zuul is deadlocked waiting for Gerrit. Have to restart it on gallium.wikimedia.org with /etc/init.d/zuul restart
* 17:43 hashar: Restarted deadlocked Zuul , which drops ALL events.  Reason is Gerrit lost connection with its database which is not handled by Zuul . See https://wikitech.wikimedia.org/wiki/Incident_documentation/20150106-Zuul
* 17:32 James_F: No effect from restarting Gearman. Getting Timo to restart Zuul.
* 17:30 James_F: No effect. Restarting Gearman.
* 17:26 James_F: Trying a shutdown/re-enable of Jenkins.
* 13:59 YuviPanda: running scap via jenkins, hitting buttons on https://integration.wikimedia.org/ci/job/beta-scap-eqiad/
* 13:58 YuviPanda: scap failed
* 13:58 YuviPanda: running scap, because why not
* 13:58 YuviPanda: modified PrivateSettings.php to make it use wikiadmin user rather than mw user
* 13:51 YuviPanda: created user wikiadmin on deployment-db1
* 04:31 James_F: Zuul now appears fixed.
* 04:29 marktraceur: FORCE RESTART ZUUL (James_F told me to)
* 04:28 marktraceur: Attempting graceful zuul restart
* 04:26 marktraceur: Reloaded zuul to see if it will help
* 04:24 James_F: Took the gallium Jenkins slave offline, disconnected and relaunched; no effect.
* 04:19 James_F: Disabled and re-enabled Gearman, no effect.
* 04:15 James_F: Flagged and unflagged Jenkins for restart, no effect.
* 04:10 James_F: Jenkins/zuul/whatever not working, investigating.
* 01:12 marxarelli: Added twentyafterfour as an admin to the integration project
* 01:08 bd808: Added Dduvall as an admin in the integration project
* 00:55 bd808: zuul is plugged up because a gate-and-submit job failed on integration-slave1006 (ZeroBanner clone problem) and then the patch was force merged
* 00:48 bd808: deleted integration-slave1006:/mnt/jenkins-workspace/workspace/mediawiki-extensions-hhvm/src/extensions/ZeroBanner to try and clear the git clone problem there
* 00:35 bd808: git clone failure in https://integration.wikimedia.org/ci/job/mediawiki-extensions-hhvm/131/console blocking merge of core patch
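hashar's notes above describe the triage order for a wedged Zuul: check the status endpoint, bounce the Gearman connection from the Jenkins UI, and only then restart Zuul itself, since a restart drops all queued events. A dry-run sketch of the command-line half (URL and gallium-era init script path are taken from the log entries; the `run` helper is an illustrative addition):

```shell
#!/bin/sh
# Zuul deadlock triage. DRY_RUN=1 (default) prints the commands;
# set DRY_RUN=0 to execute. Restarting Zuul is the last resort because
# it discards every queued event.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "DRY-RUN: $*"; else "$@"; fi; }

# Symptom check: completed jobs piling up with a growing results count
run curl -sf https://integration.wikimedia.org/zuul/status.json
# Last resort, on the host running the scheduler (gallium in this era)
run /etc/init.d/zuul restart
```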
== January 12 ==
* 21:17 hashar: qa-morebots moved from #wikimedia-qa to #wikimedia-releng  {{bug|T86053}}
* 20:57 greg-g: yuvi removed webserver:php5-mysql role from deployment-sentry2, thus getting puppet on it to unfail
* 20:57 greg-g: test-qa
* 11:41 hashar: foo
* 10:28 hashar: Removing Jenkins IRC notifications from #wikimedia-qa , please switch to #wikimedia-releng
* 09:06 hashar: Tweak Zuul configuration to pin python-daemon <= 2.0  and deploying tag wmf-deploy-20150112-1. {{bug|T86513}}
== January 8 ==
* 19:21 Krinkle: Force restart Zuul
* 19:21 Krinkle: Gearman is back up but Zuul itself still stuck (no longer processing new events, doing "Updating information for .." for the same three jobs over and over again)
* 19:08 Krinkle: Relaunched Gearman from Jenkins manager
* 19:05 Krinkle: Zuul/Gearman stuck
* 18:26 YuviPanda: purged nscd cache on all deployment-prep hosts
* 16:34 Krinkle: Reload Zuul to deploy I9bed999493feb715
* 14:58 hashar: [[Nova_Resource:Contintcloud|contintcloud labs project]] has been created! {{bug|T86170}}. Added Krinkle and 20after4 as project admins.
* 14:44 hashar: on gallium and lanthanum, pushing integration/jenkins.git, which brings: 1b6a290 - Upgrade JSHint from v2.5.6 to v2.5.11
== 2015-01-07 ==
* 10:57 hashar: Taught Jenkins configuration about Java 8. Name: "Ubuntu - OpenJdk 8"  JAVA_HOME: /usr/lib/jvm/java-8-openjdk-amd64/  . Only available on Trusty slaves though
* 10:56 hashar: installed openjdk 8 on CI Trusty labs slaves https://phabricator.wikimedia.org/T85964
* 10:34 hashar: varnish text cache is back up. Had to delete /etc/varnish and reinstall varnish from scratch + rerun puppet.
* 10:25 hashar: deleting /etc/varnish on deployment-cache-text02 and running puppet
* 10:24 hashar: beta varnish text cache is broken. The vcl refuses to load because of undefined probes
* 10:01 hashar: restarted deployment-cache-mobile03 and deployment-cache-text02
* 09:49 hashar: rebooting deployment-cache-bits01
* 00:41 Krinkle: rm -rf slave-scripts and re-cloning from integration/jenkins.git on all slaves (under sudo, just like puppet originally did) - git-status and jshint both work fine now
* 00:40 Krinkle: Permissions of deployment/integration/slave-scripts on labs slave are all screwed up (git-status says files are dirty, but when run as root git-status is clean and jshint also works fine via sudo)
* 00:29 Krinkle: Tried reconnecting Gearman, relaunching slave agents. Force-restarting Zuul now.
* 00:15 Krinkle: Permissions in deployment/integration/slave-scripts on integration-slave1003 are screwed up as well
== 2015-01-06 ==
* 22:13 hashar: jshint complains with:  Error: Cannot find module './lib/node'  :-(
* 22:12 hashar: integration-slave1005 chmod -R go+r /srv/deployment/integration/slave-scripts
* 22:08 hashar: integration-slave1007 chmod -R go+r /srv/deployment/integration/slave-scripts  . cscott mentioned build failures of parsoidsvc-jslint  which could not read /srv/deployment/integration/slave-scripts/tools/node_modules/jshint/src/cli.js
* 02:29 ori: qdel -f'd qa-morebots and started a new instance
== 2014-12-22 ==
* 20:06 bd808: Saved settings in https://integration.wikimedia.org/ci/configure to get jenkins ui language back to english from korean
== 2014-12-21 ==
* 08:31 Krinkle: /var on integration-slave1005 was 93% full (2 GB partition). Removed some large items in /var/cache/apt/archives that seemed unneeded and don't exist on other slaves.
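Deleting items from /var/cache/apt/archives by hand works, but `apt-get clean` is the supported way to drop the cached .deb files. A hedged sketch of the before/after check, shown against a scratch directory so it can be replayed without root:

```shell
cache=$(mktemp -d)                      # stand-in for /var/cache/apt/archives
dd if=/dev/zero of="$cache/stale.deb" bs=1024 count=64 2>/dev/null
du -sk "$cache"                         # before: cached packages eat space

rm -f "$cache"/*.deb                    # what `sudo apt-get clean` does for real
du -sk "$cache"                         # after: the space is reclaimed
```

On the real host you would run `sudo apt-get clean` and confirm with `df -h /var`.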
== 2014-12-19 ==
* 23:01 greg-g: Krinkle restarted Gearman, which got the jobs to flow again
* 20:51 Krinkle: integration-slave1005 (new Ubuntu Trusty instance) is now pooled
* 18:51 Krinkle: Re-created and provisioning integration-slave1005 (UbuntuTrusty)
* 18:23 bd808: redis input to logstash stuck; restarted service
* 18:16 bd808: ran `apt-get dist-upgrade` on logstash01
* 18:02 bd808: removed local mwdeploy user & group from videoscaler01
* 18:01 bd808: deployment-videoscaler01 has mysteriously acquired a local mwdeploy user instead of the ldap one
* 17:58 bd808: forcing puppet run on deployment-videoscaler01
* 07:24 Krinkle: Restarting Gearman connection to Jenkins
* 07:24 Krinkle: Attempt #5 at re-creating integration-slave1001. Completed provisioning per Setup instructions. Pooled.
* 05:33 Krinkle: Rebasing integration-puppetmaster with latest upstream operations/puppet (5 local patches) and labs/private
* 00:06 bd808: restored local commit with ssh keys for scap to deployment-salt
== 2014-12-18 ==
* 23:57 bd808: temporarily disabled jenkins scap job
* 23:56 bd808: killed some ancient screen sessions on deployment-bastion
* 23:53 bd808: Restarted udp2log-mw on deployment-bastion
* 23:53 bd808: Restarted salt-minion on deployment-bastion
* 23:47 bd808: Updated scap to latest HEAD version
* 21:57 Krinkle: integration-slave1005 is not ready. It's incompletely set up due to https://phabricator.wikimedia.org/T84917
* 19:29 marxarelli: restarted puppetmaster on deployment-salt
* 19:29 marxarelli: seeing "Could not evaluate: getaddrinfo: Temporary failure in name resolution" in the deployment-* puppet logs
* 14:17 hashar: deleting instance deployment-parsoid04 and removing it from Jenkins
* 14:08 hashar: restarted varnish backend on parsoidcache02
* 14:00 hashar: parsoid05 seems happy: curl http://localhost:8000/_version: <tt>{"name":"parsoid","version":"0.2.0-git","sha":"d16dd2db6b3ca56e73439e169d52258214f0aeb2"}</tt>
* 13:56 hashar: applying latest changes of Parsoid on parsoid05 via: <tt>zuul enqueue --trigger gerrit --pipeline postmerge --project mediawiki/services/parsoid --change 180671,2</tt>
* 13:56 hashar: parsoid05: disabling puppet, stopping parsoid, rm -fR /srv/deployment/parsoid ; rerunning the Jenkins beta-parsoid-update-eqiad to hopefully recreate everything properly
* 13:52 hashar: making parsoid05 a Jenkins slave to replace parsoid04
* 13:24 hashar: apt-get upgrade on parsoidcache02 and parsoid04
* 13:23 hashar: updated labs/private on puppet master to fix a puppet dependency cycle with sudo-ldap
* 13:19 hashar: rebased puppetmaster repo
* 12:53 hashar: reenqueuing last merged change of Parsoid in Zuul postmerge pipeline in order to trigger the beta-parsoid-update-eqiad job properly. <tt>zuul enqueue --trigger gerrit --pipeline postmerge --project mediawiki/services/parsoid --change 180671,2</tt>
* 12:52 hashar: deleting the workspace for the beta-parsoid-update-eqiad jenkins job on deployment-parsoid04. Some files belong to root, which prevents the job from proceeding
* 09:13 hashar: enabled MediaWiki core 'structure' PHPUnit tests for all extensions.  Will require folks to fix their incorrect AutoLoader and ResourceLoader entries. {{gerrit|180496}} {{bug|T78798}}
== 2014-12-17 ==
* 21:02 hashar: cancelled all browser tests, suspecting them to deadlock Jenkins somehow :(
== 2014-12-16 ==
* 17:17 bd808: git-sync-upstream runs cleanly on deployment-salt again!
* 17:16 bd808: removed cherry pick of Ib2a0401a7aa5632fb79a5b17c0d0cef8955cf990 (-2 by _joe_; replaced by Ibcad98a95413044fd6c5e9bd3c0a6fb486bd5fe9)
* 17:15 bd808: removed cherry pick of I3b6e37a2b6b9389c1a03bd572f422f898970c5b4 (modified in gerrit by bd808 and not repicked; merged)
* 17:15 bd808: removed cherry pick of I08c24578596506a1a8baedb7f4a42c2c78be295a (-2 by _joe_ in gerrit; replaced by Iba742c94aa3df7497fbff52a856d7ba16cf22cc7)
* 17:13 bd808: removed cherry pick of I6084f49e97c855286b86dbbd6ce8e80e94069492 (merged by Ori with a change)
* 17:09 bd808: trying to fix it without losing important changes
* 17:08 bd808: deployment-salt:/var/lib/git/operations/puppet is a rebase hell of cherry-picks that don't apply
* 13:51 hashar: deleting integration-slave1001 and recreating it. It is blocked on boot and we can't console on it https://phabricator.wikimedia.org/T76250
== 2014-12-15 ==
* 23:24 Krinkle: integration-slave1001 isn't coming back (T76250), building integration-slave1005 as its replacement.
* 12:53 YuviPanda: manually restarted diamond on all betalabs hosts, to see if that is why metrics aren’t being sent anymore
* 09:41 hashar: deleted hhvm core files in /var/tmp/core from both mediawiki01 and mediawiki02 {{bug|T1259}} and {{bug|T71979}}
== 2014-12-13 ==
* 18:51 bd808: Running chmod -R g+s /data/project/upload7 on deployment-mediawiki02
* 18:25 bd808: Running chmod -R u=rwX,g=rwX,o=rX /data/project/upload7 from deployment-mediawiki02
* 18:16 bd808: chown done for /data/project/upload7
* 17:51 bd808: Running chown -R apache:apache on /data/project/upload7 from deployment-mediawiki02
* 17:11 bd808: Labs DNS seems to be flaking out badly and causing random scap and puppet failures
* 16:58 bd808: restarted puppetmaster on deployment-salt
* 16:31 bd808: apache user renumbered on deployment-mediawiki03
* 16:23 bd808: apache and hhvm restarted on beta app servers following apache user renumber
* 16:09 bd808: apache and hhvm stopped on beta app server tier. All requests expected to return 503 from varnish
* 16:03 bd808: Starting work on [[phab:T78076]] to renumber apache users in beta
* 08:21 YuviPanda|zzz: forcing puppet run on all deployment-prep hosts
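The chown/chmod sequence logged above is the usual repair for a shared upload tree: normalize ownership, give directories group-write, then set the setgid bit so new files inherit the directory's group. A sketch against a scratch tree (the real target was /data/project/upload7; here g+s is applied only to directories, whereas the logged command was recursive):

```shell
tree=$(mktemp -d)                       # stand-in for /data/project/upload7
mkdir -p "$tree/thumb"
touch "$tree/thumb/example.png"

chmod -R u=rwX,g=rwX,o=rX "$tree"       # X: execute bit only on directories
chmod g+s "$tree" "$tree/thumb"         # new files inherit the dir's group

stat -c '%a' "$tree/thumb"              # prints 2775
stat -c '%a' "$tree/thumb/example.png"  # prints 664
```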
== 2014-12-12 ==
* 22:38 bd808: Fixed scap by deleting /srv/mediawiki/~tmp~ on deployment-rsync01
* 22:27 hashar: Creating 1300 Jenkins jobs to run extension PHPUnit tests under either the HHVM or Zend PHP flavor.
* 18:35 bd808: Added puppet config to record !log messages in logstash
* 17:32 bd808: forcing puppet runs on deployment-mediawiki0[12]; hiera settings specific to beta were not applied on the hosts leading to all kinds of problems
* 17:12 bd808: restarted hhvm on deployment-mediawiki0[12] and purged hhbc database
* 17:00 bd808: restarted apache2 on deployment-mediawiki01
* 16:59 bd808: restarted apache2 on deployment-mediawiki02
== 2014-12-11 ==
* 22:13 hashar: Adding chrismcmahon to the 'integration' Gerrit group so he can +2 changes made to integration/config.git
* 21:47 hashar: Jenkins: re-adding [https://integration.wikimedia.org/ci/computer/integration-slave1009/ integration-slave1009] to the pool of slaves
* 19:45 bd808|LUNCH: I got nerd sniped into looking at beta. Major personal productivity failure.
* 19:43 bd808|LUNCH: nslcd log noise is probably a red herring -- https://access.redhat.com/solutions/58684
* 19:39 bd808|LUNCH: lots of nslcd errors in syslog on deployment-rsync01 which may be causing scap failures
* 07:45 YuviPanda: shut up shinken-wm
== 2014-12-10 ==
* 22:17 bd808: restarted logstash on logstash1001. redis event queue not being processed
* 10:30 hashar: Adding hhvm on Trusty slaves, using depooled integration-slave1009 as the main work area
== 2014-12-09 ==
* 16:33 bd808: restarted puppetmaster to pick up changes to custom functions
* 16:19 bd808: forced install of sudo-ldap across beta with: salt '*' cmd.run 'env SUDO_FORCE_REMOVE=yes DEBIAN_FRONTEND=noninteractive apt-get -y install sudo-ldap'
== 2014-12-08 ==
* 23:45 bd808: deleted hhvm core on mediawiki01
* 23:43 bd808: Ran `apt-get clean` on deployment-mediawiki01
== 2014-12-05 ==
* 22:21 bd808: 1.1G free on deployment-mediawiki02:/var after removing a lot of crap from logs and /var/tmp/cores
* 22:06 bd808: /var full on deployment-mediawiki02 again :(((
* 10:50 hashar: applying mediawiki::multimedia class on contint slaves ( https://phabricator.wikimedia.org/T76661 | https://gerrit.wikimedia.org/r/#/c/177770/ )
* 01:01 bd808: Deleted a ton of jeprof.*.heap files from deployment-mediawiki02:/
* 00:54 YuviPanda: cleared out pngs from mediawiki02 to kill low space warning
* 00:53 YuviPanda: mediawiki02 instance is low on space, /tmp has lots of... pngs?
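Most of the disk-space hunts in this section end the same way: locate the handful of known-junk patterns (hhvm core dumps, jeprof heap files, stray temp files) and delete them. A replayable sketch on a scratch directory standing in for / and /var/tmp/cores:

```shell
scratch=$(mktemp -d)                    # stand-in for / and /var/tmp/cores
touch "$scratch/jeprof.1234.0.heap" "$scratch/core.hhvm.7151" "$scratch/keep.log"

# Delete only the known-junk patterns, leaving real files alone.
find "$scratch" -maxdepth 1 -type f \
    \( -name 'jeprof.*.heap' -o -name 'core.*' \) -delete

ls "$scratch"                           # prints: keep.log
```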
== 2014-12-04 ==
* 22:48 YuviPanda: manually rebased puppet on deployment-prep
* 00:29 bd808: deleted instance "udplog"
== 2014-12-03 ==
* 19:11 bd808: Cleaned up legacy jobrunner scripts on deployment-jobrunner01 (/etc/default/mw-job-runner /etc/init.d/mw-job-runner /usr/local/bin/jobs-loop.sh)
== 2014-12-02 ==
* 23:39 bd808: Cause of full disk on deployment-mediawiki01 was an hhvm core file; fixed now
* 23:35 bd808: /var full on deployment-mediawiki01
* 11:27 hashar: deleting /srv/vdb/varnish* files on all varnish instances ( https://phabricator.wikimedia.org/T76091 )
* 10:23 hashar: restarted parsoid on deployment-parsoid05
* 05:26 Krinkle: integration-slave1001 has been down since the failed reboot on 28 November 2014. Still unreachable over ssh and no Jenkins slave agent.
== 2014-12-01 ==
* 18:54 bd808: Got jenkins updates working again by taking deployment-bastion node offline, killing waiting jobs and bringing it back online again.
* 18:51 bd808: updates in beta are stuck with the "Waiting for next available executor" deadlock again
* 17:59 bd808: Testing rsyslog event forwarding to logstash via puppet cherry-pick
== 2014-11-27 ==
* 12:28 hashar: enabled puppet master autoupdate by setting <tt>puppetmaster_autoupdate: true</tt> in [[Hiera:Integration]] . https://phabricator.wikimedia.org/T75878
* 12:28 hashar: rebased integration puppetmaster : 5d35de4..1a5ebee
* 00:32 bd808: Testing local hack on deployment-salt to switch order of heira backends
* 00:16 bd808: Testing a proposed puppet patch to allow pointing hhvm logs back to deployment-bastion
== 2014-11-26 ==
* 00:51 bd808: cherry-picked patch for redis logstash input from MW {{gerrit|175896}}
* 00:50 bd808: Restored puppet cherry-picks from reflog [[phab:T75947]]
== 2014-11-25 ==
* 23:45 hashar: Fixed upload cache on beta cluster, the Varnish backend had a mmap SILO error that prevented the backend from starting. https://phabricator.wikimedia.org/T75922
* 21:05 bd808: Running `sudo find . -type d ! -perm -o=w -exec chmod 0777 {} +` to fix upload permissions
* 18:01 legoktm: cleared out renameuser_status table (old broken global merges)
* 18:00 legoktm: 4086 rows deleted from localnames, 3929 from localuser
* 17:59 legoktm: clearing out localnames/localuser where wikis don't exist on beta
* 17:10 legoktm: ran migratePass0.php on all wikis
* 17:09 legoktm: ran checkLocalUser.php --delete on all wikis
* 17:08 legoktm: PHP Notice:  Undefined index: wmgExtraLanguageNames in /mnt/srv/mediawiki/php-master/includes/SiteConfiguration.php on line 307
* 17:07 legoktm: ran checkLocalNames.php --delete on all wikis
* 04:37 jgage: restarted jenkins at 20:31
== 2014-11-24 ==
* 17:24 greg-g: stupid https
* 16:40 bd808|deploy: My problem with en.wikipedia.beta.wmflabs.org was caused by a forceHTTPS cookie being set in my browser and redirecting to the broken https endpoint
* 16:33 bd808|deploy: scap fixed by reverting bad config patch; still looking into failures from en.wikipedia.beta.wmflabs.org
* 16:27 bd808: Looking at scap crash
* 15:18 YuviPanda: restored local hacks + fixed 'em to account for 47dcefb74dd4faf8afb6880ec554c7e087aa947b on deployment-salt puppet repo, puppet failures recovering now
== 2014-11-21 ==
* 17:06 bd808: deleted salt keys for deleted instances: i-00000289, i-0000028a, i-0000028b, i-0000028e, i-000002b7, i-000006ad
* 15:57 hashar: fixed puppet cert on deployment-restbase01
* 15:50 hashar: deployment-sca01 regenerating puppet CA for deployment-sca01
* 15:34 hashar: Regenerated puppet master certificate on deployment-salt. It needs to be named  deployment-salt.eqiad.wmflabs  not i-0000015c.eqiad.wmflabs.  Puppet agent works on deployment-salt now.
* 15:19 hashar: I have revoked the deployment-salt certificates. All puppet agent are thus broken!
* 15:01 hashar: deployment-salt cleaning certs with puppet cert clean
* 14:52 hashar: manually switching restbase01 puppet master from virt1000 to deployment-salt.eqiad.wmflabs
* 14:50 hashar: deployment-restbase01 has some puppet error: Error 400 on SERVER: Must provide non empty value. on node i-00000727.eqiad.wmflabs . That is due to puppet pickle() function being given an empty variable
== 2014-11-20 ==
* 15:25 hashar: 15:01 Restarted Jenkins AND Zuul.  Beta cluster jobs are still deadlocked.
* 13:21 hashar: for integration, set puppet master report retention to 360 minutes ( https://wikitech.wikimedia.org/wiki/Hiera:Integration , see https://bugzilla.wikimedia.org/show_bug.cgi?id=73472#c14 )
* 13:20 hashar: rebased puppet master on integration project
* 13:20 hashar: rebased puppet master
== 2014-11-19 ==
* 21:27 bd808: Ran `GIT_SSH=/var/lib/git/ssh git pull --rebase` in deployment-salt:/srv/var-lib/git/labs/private
== 2014-11-18 ==
* 15:32 hashar: Deleting job https://integration.wikimedia.org/ci/job/mediawiki-vendor-integration/  replaced by mediawiki-phpunit. Clearing out workspaces {{bug|73515}}
== 2014-11-17 ==
* 09:24 YuviPanda: moved *old* /var/log/eventlogging into /home/yuvipanda so puppet can run without bitching
* 04:57 YuviPanda: cleaned up coredump on mediawiki02 on deployment-prep
== 2014-11-14 ==
* 21:03 marxarelli: loaded and re-saved jenkins configuration to get it back to english
* 17:27 bd808: /var full on deployment-mediawiki02. Adjusted ~bd808/cleanup-hhvm-cores for core found in /var/tmp/core rather than the expected /var/tmp/hhvm
* 11:14 hashar: Recreated a labs Gerrit setup on integration-zuul-server . Available from http://integration.wmflabs.org/gerrit/ using OpenID for authentication.
== 2014-11-13 ==
* 11:13 hashar: apt-get upgrade / maintenance on all slaves
* 11:02 hashar: bringing back integration-slave1008 to the pool. The label had a typo. https://integration.wikimedia.org/ci/computer/integration-slave1008/
== 2014-11-12 ==
* 21:03 hashar: Restarted Jenkins due to a deadlock with deployment-bastion slave
== 2014-11-09 ==
* 16:51 bd808: Running `chmod -R =rwX .` in /data/project/upload7
== 2014-11-08 ==
* 08:06 YuviPanda: that fixed it
* 08:04 YuviPanda: disabling/enabling gearman
== 2014-11-06 ==
* 23:43 bd808: https://integration.wikimedia.org/ci/job/mwext-MobileFrontend-qunit-mobile/ happier after I deleted the clone of mw/core that was somehow corrupted
* 21:01 cscott: bounced zuul, jobs seem to be running again
* 20:58 cscott: about to restart zuul as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues
* 00:53 bd808: HHVM not installed on integration-slave1009? "/srv/deployment/integration/slave-scripts/bin/mw-run-phpunit-hhvm.sh: line 42: hhvm: command not found" -- https://integration.wikimedia.org/ci/job/mediawiki-core-regression-hhvm-master/2542/console
== 2014-11-05 ==
* 16:14 bd808: Updated scap to include Ic4574b7fed679434097be28c061927ac459a86fc (Revert "Make scap restart HHVM")
== 2014-10-31 ==
* 17:13 godog: bouncing zuul in jenkins as per https://www.mediawiki.org/wiki/Continuous_integration/Zuul#Known_issues
== 2014-10-30 ==
* 16:34 hashar: cleared out /var/ on integration-puppetmaster
* 16:34 bd808: Upgraded kibana to v3.1.1
* 15:54 hashar: Zuul: merging in https://review.openstack.org/#/c/128921/3 which should fix jobs being stuck in queue on merge/gearman failures. {{bug|72113}}
* 15:45 hashar: Upgrading Zuul reference copy from upstream c9d11ab..1f4f8e1 
* 15:43 hashar: Going to upgrade Zuul and monitor the result over the next hour.
== 2014-10-29 ==
* 22:58 bd808: Stopped udp2log and started udp2log-mw on deployment-bastion
* 19:46 bd808: Logging seems broken following merge of https://gerrit.wikimedia.org/r/#/c/119941/24. Investigating
== 2014-10-28 ==
* 21:39 bd808: RoanKattouw creating deployment-parsoid05 as a replacement for the totally broken deployment-parsoid04
== 2014-10-24 ==
* 13:36 hashar: That bumps hhvm on contint from 3.3.0-20140925+wmf2  to 3.3.0-20140925+wmf3
* 13:36 hashar: apt-get upgrade on Trusty Jenkins slaves
== 2014-10-23 ==
* 22:43 hashar: Jenkins resumed activity.  Beta cluster code is being updated
* 21:36 hashar: Jenkins: disconnected / reconnected slave node  deployment-bastion.eqiad
== 2014-10-22 ==
* 20:54 bd808: Enabled puppet on deployment-logstash1
* 09:07 hashar: Jenkins: upgrading gearman-plugin from 0.0.7-1-g3811bb8 to 0.1.0-1-gfa5f083, i.e. bringing us to the latest version + 1 commit
== 2014-10-21 ==
* 21:10 hashar: contint: refreshed slave-scripts 0b85d48..8c3f228  sqlite files will be cleared out after 20 minutes (instead of 60 minutes)  {{bug|71128}}
* 20:51 cscott: deployment-prep _joe_ promises to fix this properly tomorrow am
* 20:51 cscott: deployment-prep turned off puppet on deployment-pdf01, manually fixed broken /etc/ocg/mw-ocg-service.js
* 20:50 cscott: deployment-prep updated OCG to version 523c8123cd826c75240837c42aff6301032d8ff1
* 10:55 hashar: deleted salt master key on deployment-elastic{06,07}, restarted salt-minion and reran puppet.  It is now passing on both instances \O/
* 10:48 hashar: rerunning puppet manually on deployment-elastic{06,07}
* 10:48 hashar: beta: signing puppet cert for deployment-elastic{06,07}.  On deployment-salt ran:  puppet ca sign  i-000006b6.eqiad.wmflabs; puppet ca sign i-000006b7.eqiad.wmflabs
* 09:29 hashar: disregard that: deployment-logstash1 has a puppet agent error, but it is simply because the agent is disabled ("debugging logstash config")
* 09:28 hashar: deployment-logstash1 disk full
== 2014-10-20 ==
* 17:41 bd808: Disabled redis input plugin and restarted logstash on deployment-logstash1
* 17:39 bd808: Disabled puppet on deployment-logstash1 for some live hacking of logstash config
* 15:27 apergos: upgraded salt-master on virt1000 (master for labs)
== 2014-10-17 ==
* 22:34 subbu: live fixed bad logger config in /srv/deployment/parsoid/deploy/conf/wmf/betalabs.localsettings.js and verified that parsoid doesn't crash anymore -- fix now on gerrit and being merged
* 20:48 hashar: qa-morebots is back
* 20:30 hashar: beta: switching Parsoid config file to the one in mediawiki/services/parsoid/deploy.git instead of the puppet maintained config file https://gerrit.wikimedia.org/r/#/c/166610/ for subbu.  Parsoid seems happy :)
* hashar: qa-morebots disappeared :(  {{bug|72179}}
* hashar: deployment-logstash1 unlocking puppet by deleting left over /var/lib/puppet/state/agent_catalog_run.lock
* hashar: logstash1 instance filling up is {{bug|72175}}, probably caused by the Diamond collector spamming /server-status?auto
* hashar: deployment-logstash1 deleting files under /var/log/apache2/ ; need to file a bug to prevent the access log from filling the partition
== 2014-10-16 ==
* 06:14 apergos: updated remaining beta instances to salt-minion 2014.1.11 from salt ppa
== 2014-10-15 ==
* 12:56 apergos: updated i-000002f4, i-0000059b, i-00000504, i-00000220 salt-minion to 2014.1.11
* 12:20 apergos: updated salt-master and salt-minion on the deployment-salt host _only_  to 2014.1.11 (using salt ppa for now)
* 01:08 Krinkle: Pooled integration-slave1009
* 01:00 Krinkle: Setting up integration-slave1009 ({{bug|72014}} fixed)
* 01:00 Krinkle: integration-publisher and integration-zuul-server were rebooted by me yesterday. Seems they only show up in graphite now. Maybe they were shut down or had puppet stuck.
== 2014-10-14 ==
* 21:00 JohnLewis: icinga says deployment-sca01 is good (yay)
* 20:42 JohnLewis: deleted and recreated deployment-sca01 (still needs puppet set up)
* 20:24 JohnLewis: rebooted deployment-sca01
* 09:26 hashar: renamed deployment-cxserver02 node slaves to 03 and updated the ip address
* 06:49 Krinkle: Did a slow-rotating graceful depool/reboot/repool of all integration-slave* instances over the past hour to debug problems whilst waiting for puppet to unblock and set up new slaves.
* 06:43 Krinkle: Keeping the new integration-slave1009 unpooled because setup could not be completed due to {{bug|72014}}.
* 06:43 Krinkle: Pooled integration-slave1004
* 05:40 Krinkle: Setting up integration-slave1004 and integration-slave1009 ({{bug|71873}} fixed)
== 2014-10-10 ==
* 20:53 Krinkle: Deleted integration-slave1004 and integration-slave1009. When {{bug|71873}} is fixed, they'll need to be re-created.
* 19:11 Krinkle: integration-slave1004 (new instance, not set up yet) was broken ({{bug|71741}}). The bug seems fixed for new instances, so I deleted and re-created it. Will set it up as a Precise instance and pool it.
* 19:09 Krinkle: integration-slave1009 (new instance) remains unpooled as it is not yet fully set up ({{bug|71874}}). See [[Nova_Resource:Integration/Setup]]
== 2014-10-09 ==
* 20:17 bd808: rebooted deployment-sca01 via wikitech ui
* 20:16 bd808: deployment-sca01 dead -- Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
* 19:44 bd808: added role::deployment::test to deployment-rsync01 and deployment-mediawiki03 for trebuchet testing
* 19:07 bd808: updated scap to include 8183d94 (Fix "TypeError bufsize must be an integer")
* 09:34 hashar: migrating deployment-cxserver02 to beta cluster puppet and salt masters
* 09:22 hashar: Renamed Jenkins slave deployment-cxserver01 to deployment-cxserver02 and updated IP. It is marked offline until the instance is ready and has the relevant puppet classes applied.
* 09:19 hashar: deleting deployment-cxserver01 (borked since virt1005 outage) creating deployment-cxserver02 to replace it {{bug|71783}}
== 2014-10-07 ==
* 19:19 bd808: ^d deleted all files/directories in gallium:/var/lib/jenkins-slave/tmpfs
* 18:24 bd808: /var/lib/jenkins-slave/tmpfs full (100%) on gallium
* 11:54 Krinkle: The new integration-slave1009 must remain unpooled because Setup failed (puppet unable to mount /mnt, {{bug|71874}}) - see also [[Nova Resource:Integration/Setup]]
* 11:53 Krinkle: Deleted integration-slave1004 because {{bug|71741}}
* 10:16 hashar: beta: apt-get upgraded all instances besides the lucid one.
* 09:57 hashar: beta: deleting old occurrences of /etc/apt/preferences.d/puppet_base_2.7
* 09:53 hashar: apt-get upgrade on all beta cluster instances
* 09:34 Krinkle: Rebase integration-puppetmaster on latest operations-puppet (patches: I7163fd38bcd082a1, If2e96bfa9a1c46)
* 09:32 Krinkle: Apply I44d33af1ce85 instead of Ib95c292190d on integration-puppetmaster (remove php5-parsekit package)
* 09:28 hashar: upgrading php5-fss on both beta-cluster and integration instances. {{bug|66092}} https://rt.wikimedia.org/Ticket/Display.html?id=7213
* 08:55 Krinkle: Building additional contint slaves in labs (integration-slave1004 with precise and integration-slave1009 with trusty)
* 08:21 Krinkle: Reload Zuul to deploy 5e905e7c9dde9f47482d
== 2014-10-03 ==
* 22:53 bd808: Had to stop and start zuul due to NoConnectedServersError("No connected Gearman servers") in zuul.log on gallium
* 22:34 bd808|deploy: Merged Ie731eaa7e10548a947d983c0539748fe5a3fe3a2 (Regenerate autoloader) to integration/phpunit for bug 71629
* 14:01 manybubbles: rebuilding beta's simplewiki cirrus index
* 08:24 hashar: deployment-bastion clearing up /var/log/account a bit {{bug|69604}}. Puppet patch pending :]
== 2014-10-02 ==
* 19:42 bd808: Updated scap to include eff0d01 Fix format specifier for error message
* 11:58 hashar: Migrated all mediawiki-core-regression* jobs to Zuul cloner {{bug|71549}}
== 2014-10-01 ==
* 20:57 bd808: hhvm servers broken because of I5f9b5c4e452e914b33313d0774fb648c1cdfe7ad
* 17:29 bd808: Stopped service udp2log and started service udp2log-mw on deployment-bastion
* 16:21 bd808: Cherry-picked https://gerrit.wikimedia.org/r/#/c/163078/ into scap for beta. hhvm will be restarted on each scap. Keep your eyes open for weird problems like 503 responses that this may cause.
* 14:14 hashar: rebased contint puppetmaster
== 2014-09-30 ==
* 23:47 bd808: jobrunner using outdated ip address for redis01. Testing patch to use hostname rather than hardcoded ip
* 21:45 bd808: jobrunner not running. ebernhardson is debugging.
* 21:38 bd808: /srv on rsync01 now has 3.2G of free space and should be fine for quite a while again.
* 21:37 bd808: I figured out the disk space problem on rsync01 (just as I was ready to replace it with rsync02). The old /srv/common-local directory was still there, which doubled the disk utilization; /srv/mediawiki is the correct sync dir now, following prod changes.
* 21:15 bd808: local l10nupdate users on bastion, mediawiki01 and rsync01
* 21:06 bd808: Local mwdeploy user on deployment-bastion making things sad
* 20:36 bd808: lots and lots of "file has vanished" errors from rsync. Not sure why
* 20:35 bd808: Initial puppet run with role::beta::rsync_slave applied on rsync02 failed spectacularly in /Stage[main]/Mediawiki::Scap/Exec[fetch_mediawiki] stage
* 20:02 bd808: Started building deployment-rsync02 to replace deployment-rsync01
* 19:59 bd808|LUNCH: /srv partition on deployment-rsync01 full again. We need a new rsync server with more space
* 17:44 bd808: Updated scap to 064425b (Remove restart-nutcracker and restart-twemproxy scripts)
* 16:08 bd808: Occasional memcached-serious errors in beta for something trying to connect to the default memcached port (11211) rather than the nutcracker port (11212).
* 15:58 bd808: scap happy again after fixing rogue group/user on rsync01 \o/ Not sure why they were created but likely an ldap hiccup during a puppet run
* 15:56 bd808: removed local group/user mwdeploy on deployment-rsync01
* 15:54 bd808: Local mwdeploy (gid=996) shadowing ldap group gid=603(mwdeploy) on deployment-rsync01
* 15:49 bd808: apt-get dist-upgrade fixed hhvm on deployment-mediawiki03
* 15:45 hashar: Updating our Jenkins job builder fork  686265a..ee80dbc (no job changed)
* 15:44 bd808: scap failing in beta due to "Permission denied (publickey)" talking to deployment-rsync01.eqiad.wmflabs
* 15:39 bd808: hhvm not starting after puppet run on deployment-mediawiki03. Investigating.
* 15:36 bd808: enabling puppet and forcing run on deployment-mediawiki03
* 15:34 bd808: enabling puppet and forcing run on deployment-mediawiki02
* 15:29 bd808:  puppet showed no changes on mediawiki01‽
* 15:27 bd808: enabling puppet and forcing run on deployment-mediawiki01
* 15:13 bd808: Fixed logstash by installing http://packages.elasticsearch.org/logstash/1.4/debian/pool/main/l/logstash-contrib/logstash-contrib_1.4.2-1-efd53ef_all.deb
* 15:02 bd808: Logstash doesn't bundle the prune filter by default any more -- http://logstash.net/docs/1.4.2/filters/prune
* 14:59 bd808: Logstash rules need to be adjusted for latest upstream version: "Couldn't find any filter plugin named 'prune'"
* 12:37 hashar: Fixed some file permissions under deployment-bastion:/srv/mediawiki-staging/php-master/vendor/.git; some files belonged to root instead of mwdeploy
* 00:34 bd808: Updated kibana to latest upstream head 8653aba
== 2014-09-29 ==
* 14:22 hashar: apt-get upgrade and reboot of all integration-slaveXX instances
* 14:07 hashar: updated puppetmaster labs/private on both integration and beta cluster projects ( a41fcdd..84f0906 )
* 08:57 hashar: rebased puppetmaster
== 2014-09-26 ==
* 22:16 bd808: Deleted deployment-mediawiki04 (i-000005ba.eqiad.wmflabs) and removed from salt and trebuchet
* 07:50 hashar: Pooled back integration-slave1006 , was removed because of {{bug|71314}}
* 07:41 hashar: Updated our Jenkins Job Builder fork 2d74b16..686265a
== 2014-09-25 ==
* 23:35 bd808: Done messing with puppet repo. Replaced 2 local commits with proper gerrit cherry picks. Removed a cherry-pick that had been rearranged and merged. Removed a cherry-pick that had been abandoned in gerrit.
* 23:10 bd808: removed cherry-pick of abandoned https://gerrit.wikimedia.org/r/#/c/156223/; if beta wikis stop working this would be a likely culprit
* 22:36 bd808: Trying to reduce the number of untracked changes in puppet repo. Expect some short term breakage.
* 22:21 bd808: cleaned up puppet repo with `git rebase origin/production; git submodule update --init --recursive`
* 22:18 bd808: puppet repo on deployment-salt out of whack. I will try to fix.
* 08:15 hashar: beta: puppetmaster rebased
* 08:10 hashar: beta: dropped a patch that reverted OCG LVS configuration ( https://gerrit.wikimedia.org/r/#/c/146860/ ), it has been fixed by https://gerrit.wikimedia.org/r/#/c/148371/
* 08:04 hashar: attempting to rebase beta cluster puppet master. Currently at 74036376
== 2014-09-24 ==
* 15:30 hashar_: install additional fonts on jenkins slaves for browser screenshots ( https://gerrit.wikimedia.org/r/#/c/162604/ and https://bugzilla.wikimedia.org/69535 )
* 09:57 hashar_: upgraded Zuul on all integration labs instances
* 09:33 hashar_: Jenkins switched mwext-UploadWizard-qunit back to Zuul cloner by applying pending change {{gerrit|161459}}
* 09:19 hashar_: Upgrading Zuul to f0e3688  Cherry pick https://review.openstack.org/#/c/123437/1 which fix {{bug|71133}} ''Zuul cloner: fails on extension jobs against a wmf branch''
== 2014-09-23 ==
* 23:08 bd808: Jenkins and deployment-bastion talking to each other again after six (6!) disconnect, cancel jobs, reconnect cycles
* 22:53 greg-g: The dumb "waiting for executors" bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=70597
* 22:51 bd808: Jenkins stuck trying to update database in beta again with the dumb "waiting for executors" bug/problem
== 2014-09-22 ==
* 16:09 bd808: Ori updating HHVM to 3.3.0-20140918+wmf1 (from deployment-prep SAL)
* 09:37 hashar_: Jenkins: deleting old mediawiki extensions jobs (<tt>rm -fR /var/lib/jenkins/jobs/*testextensions-master</tt>).  They are no longer triggered and are superseded by the <tt>*-testextension</tt> jobs.
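In the cleanup above the glob does the safety work: <tt>*testextensions-master</tt> matches only the superseded job directories, not the newer <tt>*-testextension</tt> ones. A stand-in sketch (hypothetical job names, scratch directory instead of /var/lib/jenkins/jobs):

```shell
jobs=$(mktemp -d)                       # stand-in for /var/lib/jenkins/jobs
mkdir "$jobs/mwext-Foo-testextensions-master"   # old, superseded job
mkdir "$jobs/mwext-Foo-testextension"           # new job, must survive

rm -fR "$jobs"/*testextensions-master           # same pattern as logged

ls "$jobs"                              # prints: mwext-Foo-testextension
```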
== September 20 ==
* 21:30 bd808: Deleted /var/log/atop.* on deployment-bastion to free some disk space in /var
* 21:29 bd808: Deleted /var/log/account/pacct.* on deployment-bastion to free some disk space in /var
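The two deletions above follow a recurring pattern in this log for freeing space in /var; a minimal sketch aimed at a scratch directory so it is safe to run anywhere (on deployment-bastion the real globs were /var/log/account/pacct.* and /var/log/atop.*):

```shell
# Sketch of the /var cleanup pattern, against a scratch directory.
logdir=$(mktemp -d)
touch "$logdir/pacct.0" "$logdir/pacct.1.gz" "$logdir/atop.log.20140919"
rm -v "$logdir"/pacct.* "$logdir"/atop.log.*   # the actual cleanup step
```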
== September 19 ==
* 21:16 hashar: puppet is broken on Trusty integration slaves because they try to install the nonexistent package php-parsekit. WIP; will get it sorted out eventually.
* 14:57 hashar: Jenkins friday deploy: migrate all MediaWiki extension qunit jobs to Zuul cloner.
== September 17 ==
* 12:20 hashar: upgrading jenkins 1.565.1 -> 1.565.2
== September 16 ==
* 16:36 bd808: Updated scap to 663f137 (Check php syntax with parallel `php -l`)
* 04:01 jeremyb: deployment-mediawiki02: salt was broken with a msgpack exception. mv -v /var/cache/salt{,.old} && service salt-minion restart fixed it. also did salt-call saltutil.sync_all
* 04:00 jeremyb: deployment-mediawiki02: (/run was 99%)
* 03:59 jeremyb: deployment-mediawiki02: rm -rv /run/hhvm/cache && service hhvm restart
* 00:51 jeremyb: deployment-pdf01 removed base::firewall (ldap via wikitech)
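The msgpack fix above boils down to moving the corrupt minion cache aside (keeping a `.old` backup) and restarting the minion. A sketch with stand-in paths; no real service is touched here, and the restart/sync steps are shown only as comments:

```shell
# Sketch of the salt cache reset: a scratch path replaces /var/cache/salt.
base=$(mktemp -d)
cache="$base/salt"
mkdir -p "$cache" && echo 'stale bytes' > "$cache/cache.p"
mv -v "$cache" "$cache.old"     # equivalent of: mv -v /var/cache/salt{,.old}
mkdir -p "$cache"               # the minion would recreate this on restart
# real host only: service salt-minion restart && salt-call saltutil.sync_all
```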
== September 15 ==
* 22:53 jeremyb: deployment-pdf01: pkill -f grain-ensure
* 21:36 bd808: Trying to fix salt with `salt '*' service.restart salt-minion`
* 21:32 bd808: the only hosts responding to salt in beta are deployment-mathoid, deployment-pdf01 and deployment-stream
* 21:29 bd808: salt calls failing in beta with errors like "This master address: 'salt' was previously resolvable but now fails to resolve!"
* 20:18 hashar: restarted salt-master
* 19:50 hashar: killed on deployment-bastion  a bunch of <tt>python /usr/local/sbin/grain-ensure contains ... </tt> and <tt>/usr/bin/python /usr/bin/salt-call --out=json grains.append deployment_target scap</tt> commands
* 18:57 hashar: scap breakage due to ferm is logged as https://bugzilla.wikimedia.org/show_bug.cgi?id=70858
* 18:48 hashar: https://gerrit.wikimedia.org/r/#/c/160485/ tweaked a default ferm configuration file which caused puppet to reload ferm. It ends up having rules that prevent ssh from other hosts, thus breaking rsync \O/
* 18:37 hashar: beta-scap-eqiad job is broken since ~17:20 UTC https://integration.wikimedia.org/ci/job/beta-scap-eqiad/21680/console  || rsync: failed to connect to deployment-bastion.eqiad.wmflabs (10.68.16.58): Connection timed out (110)
== September 13 ==
* 01:07 bd808: Moved /srv/scap-stage-dir to /srv/mediawiki-staging; put a symlink in as a failsafe
* 00:31 bd808: scap staging dir needs some TLC on deployment-bastion; working on it
* 00:30 bd808: Updated scap to I083d6e58ecd68a997dd78faabe60a3eaf8dfaa3c
== September 12 ==
* 01:28 ori: services promoted User:Catrope to projectadmin
== September 11 ==
* 20:59 spagewmf: https://integration.wikimedia.org/ci/ is down with 503 errors
* 16:13 bd808: Now that scap is pointed to labmon1001.eqiad.wmnet the deployment-graphite.eqiad.wmflabs host can probably be deleted; it never really worked anyway
* 16:12 bd808: Updated scap to include I0f7f5cae72a87f68d861340d11632fb429c557b9
* 15:09 bd808: Updated hhvm-luasandbox to latest version on mediawiki03 and verified that mediawiki0[12] were already updated
* 15:01 bd808: Fixed incorrect $::deployment_server_override var on deployment-videoscaler01; deployment-bastion.eqiad.wmflabs is correct and deployment-salt.eqiad.wmflabs is not
* 10:05 ori: deployment-prep upgraded luasandbox and hhvm across the cluster
* 08:41 spagewmf: deployment-mediawiki01/02 are not getting latest code
* 05:10 bd808: Reverted cherry-pick of I621d14e4b75a8415b16077fb27ca956c4de4c4c3 in scap; not the actual problem
* 05:02 bd808: Cherry-picked I621d14e4b75a8415b16077fb27ca956c4de4c4c3 to scap  to try and fix l10n update issue
== September 10 ==
* 19:38 bd808: Fixed beta-recompile-math-texvc-eqiad job on deployment-bastion
* 19:38 bd808: Made /usr/local/apache/common-local a symlink to /srv/mediawiki on deployment-bastion
* 19:37 bd808: Deleted old /srv/common-local on deployment-videoscaler01
* 19:32 bd808: Killed jobs-loop.sh tasks on deployment-jobrunner01
* 19:30 bd808: Removed old mw-job-runner cron job on deployment-jobrunner01
* 19:19 bd808: Deleted /var/log/account/pacct* and /var/log/atop.log.* on deployment-jobrunner01 to make some temporary room in /var
* 19:14 bd808: Deleted /var/log/mediawiki/jobrunner.log and restarted jobrunner on deployment-jobrunner01
* 19:11 bd808: /var full on deployment-jobrunner01
* 19:05 bd808: Deleted /srv/common-local on deployment-jobrunner01
* 19:04 bd808: Changed /usr/local/apache/common-local symlink to point to /srv/mediawiki on deployment-jobrunner01
* 19:03 bd808: w00t!!! scap jobs is green again -- https://integration.wikimedia.org/ci/job/beta-scap-eqiad/20965/
* 19:00 bd808: sync-common finished on deployment-jobrunner01; trying Jenkins scap job again
* 18:53 bd808: Removed symlink and make /srv/mediawiki a proper directory on deployment-jobrunner01; Running sync-common to populate.
* 18:45 bd808: Made /srv/mediawiki a symlink to /srv/common-local on deployment-jobrunner01
* 10:20 jeremyb: deployment-bastion /var at 97%, freed up ~500MB. apt-get clean && rm -rv /var/log/account/pacct*
* 10:17 jeremyb: deployment-bastion good puppet run
* 10:16 jeremyb: deployment-salt had an oom-kill recently. and some box (maybe master, maybe client?) had a disk fill up
* 10:15 jeremyb: deployment-mediawiki0[12] both had good puppet runs
* 10:15 jeremyb: deployment-salt started puppetmaster && puppet run
* 10:14 jeremyb: deployment-bastion killed puppet lock
* 03:04 bd808: Ori made puppet changes that moved the MediaWiki install dir to /srv/mediawiki (https://gerrit.wikimedia.org/r/#/c/159431/). I didn't see that in SAL so I'm adding it here.
== September 9 ==
* 03:06 bd808: Restarted jenkins agent on deployment-bastion twice to resolve executor deadlock (bug 70597)
== September 7 ==
* 07:00 jeremyb: testing 1,2,3
__NOTOC__
<noinclude>[[Category:SAL]]</noinclude>

Revision as of 14:17, 5 July 2022

2022-07-05

  • 14:17 dwalden: restarted mathoid service on deployment-docker-mathoid01
  • 11:39 hashar: Reloaded Zuul for `skip selenium for Wikibase repo/rest-api` https://gerrit.wikimedia.org/r/c/integration/config/+/811258
  • 08:49 hauskatze: Diffusion rORES repository. Changed URI settings: enabled SSH push for mirroring; disabled HTTP | T311390

2022-06-30

  • 22:02 TheresNoTime: unstuck beta-mediawiki-config-update-eqiad jobs, will comment at T72597
  • 21:05 TheresNoTime: cancelled beta-code-update-eqiad#398138 to make way for pending beta-scap-sync-world#57641, queued another beta-code-update-eqiad
  • 16:47 taavi: reloading zuul to deploy https://gerrit.wikimedia.org/r/810053

2022-06-29

  • 14:48 ori: Clearing data from incomplete migration on Wikifunctionswiki via sql.php
  • 13:39 TheresNoTime: clearing stuck beta deployment jobs, watching to ensure they catch up :')

2022-06-24

  • 20:52 taavi: added `denisse` as a member

2022-06-22

  • 17:36 taavi: gerrit: add tfellows to the extension-OpenBadges group per request in T308278
  • 17:35 taavi: gerrit: create group extension-JsonData with robla in it, make it an owner of mediawiki/extensions/JsonData per request in T303147
  • 16:19 hashar: Reloaded Zuul for https://gerrit.wikimedia.org/r/807586
  • 09:35 hashar: Switched `gitlab-prod-1001.devtools.eqiad1.wikimedia.cloud` instance to use the project Puppet master `puppetmaster-1001.devtools.eqiad1.wikimedia.cloud`
  • 09:08 hashar: contint1001 , contint2002: deleting `.git/logs` from all zuul-merger repositories. We do not need the reflog `sudo -u zuul find /srv/zuul/git -type d -name .git -print -execdir rm -fR .git/logs \;` # T307620
  • 09:00 hashar: contint1001 , contint2002: setting `core.logallrefupdates=false` on all Zuul merger git repositories: `sudo -u zuul find /srv/zuul/git -type d -name .git -print -execdir git config core.logallrefupdates false \;` # T307620
  • 07:46 hashar: Building operations-puppet docker image for https://gerrit.wikimedia.org/r/c/integration/config/+/807180
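
The two contint commands above (disabling reflogs, then deleting the existing ones) can be tried safely on a scratch tree. This sketch replaces /srv/zuul/git with a temp directory, drops `sudo -u zuul`, and uses an invented `demo-repo` name; the `find` expressions are otherwise the ones logged for T307620.

```shell
# Sketch of the zuul-merger git maintenance, on a scratch directory.
root=$(mktemp -d)
git init -q "$root/demo-repo"            # stand-in for a zuul-merger repo
mkdir -p "$root/demo-repo/.git/logs"     # simulate an existing reflog dir
# -execdir runs each command from the directory containing the matched .git
find "$root" -type d -name .git \
    -execdir git config core.logallrefupdates false \; \
    -execdir rm -fR .git/logs \;
```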

2022-06-21

  • 22:01 brennen: gitlab-runners: re-registering all shared runners
  • 17:55 dancy: Upgrading scap to 4.9.4-1+0~20220621174226.320~1.gbp56e4d4 in beta cluster

2022-06-20

  • 16:30 urbanecm: add sgimeno as a project member (Growth engineer with need for access)
  • 15:50 ori: On deployment-cache-{text,upload}06, ran: touch /srv/trafficserver/tls/etc/ssl_multicert.config && systemctl reload trafficserver-tls.service (T310957)
  • 14:07 ori: restarted acme-chief on deployment-acme-chief03
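
The `touch … && systemctl reload trafficserver-tls.service` idiom above (it recurs on the cache hosts throughout this log) works by bumping the mtime of ssl_multicert.config so the reload re-reads the TLS certificates. A sketch of the mtime-bump half on a scratch file; the reload itself is shown only as a comment:

```shell
# Sketch of the touch-and-reload idiom; a temp file stands in for
# /srv/trafficserver/tls/etc/ssl_multicert.config.
conf=$(mktemp)
touch -d '2000-01-01 00:00:00' "$conf"   # pretend the file is old
before=$(stat -c %Y "$conf")
touch "$conf"                            # bump mtime so a reload re-reads certs
after=$(stat -c %Y "$conf")
# real cache host only: systemctl reload trafficserver-tls.service
```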

2022-06-17

  • 17:15 ori: provisioned deployment-cache-text07 in deployment-prep to test query normalization via VCL
  • 01:08 TimStarling: on deployment-docker-cpjobqueue01 and deployment-docker-changeprop01 I redeployed the changeprop configuration, reverting the PHP 7.4 hack

2022-06-16

  • 12:24 hashar: gitlab: runner-1030: `docker volume prune -f`
  • 12:24 hashar: gitlab: runner-1026: `docker volume prune -f`
  • 10:02 elukey: ran `scap install-world --batch` to allow scap/puppet to work on ml-cache100[2,3]

2022-06-15

  • 22:39 brennen: phabricator: tagged release/2022-06-15/1 (T310742)
  • 16:31 hashar: integration-agent-docker-1035: docker image prune
  • 15:26 dancy: Upgrading scap to 4.9.4-1+0~20220615151557.315~1.gbped3b8d in beta cluster

2022-06-14

  • 21:30 TheresNoTime: clear out stuck `beta-scap-sync-world` jobs (repeatedly, once per queued `beta-mediawiki-config-update-eqiad` job); queued jobs now running. Monitored until each job had run successfully; jobs up to date
  • 17:18 brennen: starting 1.39.0-wmf.16 (T308069) transcript in deploy1002:~brennen/1.39.0-wmf.16.log
  • 13:35 TheresNoTime: clear stuck `beta-scap-sync-world` job, other queued jobs now running. Cancel running `beta-update-databases-eqiad` job, will ensure it runs on the next timer
  • 00:42 TimStarling: on deployment-deploy03 removed helm2, as was done in production

2022-06-13

  • 22:04 TheresNoTime: cleared out stalled Jenkins beta jobs on `deployment-deploy03`, manually started `beta-code-update-eqiad` job & watched to completion. all caught up
  • 04:33 hashar: Restarting Docker on contint1001.wikimedia.org , apparently can't build images anymore

2022-06-10

  • 15:20 James_F: Zuul: [mediawiki/extensions/SearchVue] Add initial CI jobs for T309932
  • 08:28 hashar: Reloaded Zuul to remove mediawiki/services/parsoid from CI dependencies # https://gerrit.wikimedia.org/r/c/integration/config/+/803990
  • 04:27 TimStarling: on deployment-deploy03 running scap sync-world -v with PHP 7.4 for T295578
  • 04:03 TimStarling: on deployment-deploy03 running scap sync-world -v with PHP 7.2 for T295578 sanity check

2022-06-09

  • 22:49 dancy: Upgrading scap to 4.9.1-1+0~20220609211227.304~1.gbpe48c42 in beta cluster
  • 16:39 brennen: gitlab shared runners: re-registering to apply image allowlist configuration

2022-06-08

  • 17:14 hashar: Reloaded Zuul for I393422
  • 15:57 dancy: Set `profile::mediawiki::php::restarts::ensure: present` in deployment-prep hiera config for T237033
  • 09:28 hashar: Reloaded Zuul for "Add doc publish for Translate" https://gerrit.wikimedia.org/r/792134

2022-06-06

  • 14:37 James_F: Zuul: [mediawiki/extensions/ImageSuggestions] Mark as in production for T302711

2022-06-02

  • 15:33 dancy: Upgrading scap to 4.8.1-1+0~20220602153109.295~1.gbp318d9c in beta cluster
  • 11:26 hashar: Restarting Jenkins on contint2001
  • 11:19 hashar: Restarting Jenkins on releases1002

2022-05-31

  • 21:16 dancy: Upgrading scap to 4.8.0-1+0~20220531211114.292~1.gbp8dbbcf in beta cluster
  • 17:40 dancy: Upgrading scap to 4.8.0-1+0~20220531173912.291~1.gbp21a7ef in beta cluster
  • 17:33 dancy: Reverted to scap 4.8.0-1+0~20220524160924.288~1.gbp794a08 in beta cluster
  • 17:07 dancy: Upgrading scap to 4.8.0-1+0~20220531170512.289~1.gbp143729 in beta cluster

2022-05-30

  • 11:47 jelto: apply gitlab-settings to gitlab1004 - T307142
  • 11:46 jelto: apply gitlab-settings to gitlab1003 - T307142

2022-05-28

  • 19:09 TheresNoTime: deployment-deploy04 live, not referenced by anything T309437

2022-05-27

  • 22:55 zabe: zabe@deployment-mwmaint02:~$ mwscript extensions/WikiLambda/maintenance/updateTypedLists.php --wiki=wikifunctionswiki --db # started ~20 min ago
  • 22:49 TheresNoTime: manually running database update script: samtar@deployment-deploy03:~$ /usr/local/bin/wmf-beta-update-databases.py
  • 22:09 TheresNoTime: samtar@deployment-deploy03:~$ sudo keyholder arm
  • 21:44 TheresNoTime: hard rebooted deployment-deploy03 as soft reboot unresponsive
  • 21:44 bd808: `sudo wmcs-openstack role add --user zabe --project deployment-prep projectadmin` (T309419)
  • 21:10 zabe: zabe@deployment-deploy03:~$ sudo keyholder arm
  • 20:53 bd808: `sudo wmcs-openstack role add --user samtar --project deployment-prep projectadmin` (T309415)
  • 20:49 dancy: Initiated hard reboot of deployment-deploy03.deployment-prep

2022-05-26

  • 18:33 dancy: Updated Jenkins beta-* job configs
  • 16:51 TheresNoTime: manually triggered beta-update-databases-eqiad post-merge of 2c7b5825
  • 16:51 brennen: puppetmaster-1001.devtools: resetting ops/puppet checkout to production branch

2022-05-25

  • 18:38 TheresNoTime: (@ ~18:20UTC) samtar@deployment-mwmaint02:~$ mwscript resetUserEmail.php --wiki=wikidatawiki Mahir256 [snip] T309230
  • 15:46 dancy: Restarted apache2 on gerrit1001

2022-05-23

  • 19:21 inflatador: Deleted deployment-elastic0[5-7] in favor of newer bullseye hosts T299797
  • 18:37 dancy: Reverted to scap 4.7.1-1+0~20220505181519.270~1.gbpeb47ae in beta cluster
  • 18:35 dancy: Upgrading beta cluster scap to 4.7.1-1+0~20220523183110.280~1.gbpaa0826
  • 14:49 James_F: Zuul: Enforce Postgres and SQLite support via in-mediawiki-tarball
  • 08:37 elukey: move kafka jumbo in deployment-prep to fixed uid/gid - T296982
  • 08:29 elukey: move kafka main in deployment-prep to fixed uid/gid - T296982
  • 08:06 elukey: move kafka logging in deployment-prep to fixed uid/gid - T296982

2022-05-18

  • 19:31 hashar: Reloaded Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/793028
  • 18:45 brennen: gitlab: created placeholder /repos/mediawiki group for squatting purposes
  • 08:29 hashar: Updating SSH Build agent from 1.31.5 to 1.32.0 on CI Jenkins to prevent an issue when uploading `remoting.jar` # T307339#7937268
  • 07:32 hashar: Deleting Jenkins agent configuration for `integration-castor03` # T252071

2022-05-17

  • 23:26 James_F: Zuul: [mediawiki/extensions/Phonos] Install basic quibble CI for T308558

2022-05-14

  • 23:19 James_F: Zuul: Add Dreamy_Jazz to CI allow list
  • 23:17 James_F: Zuul: [mediawiki/extensions/LocalisationUpdate] Move out of production section
  • 20:25 urbanecm: add TheresNoTime (samtar) as a project member per request

2022-05-12

  • 22:09 inflatador: bking@deployment-elastic05 banned deployment-elastic05 from beta ES cluster in preparation for decom T299797
  • 19:53 hashar: gerrit: triggering full replication to gerrit2001 to test T307137
  • 16:00 hashar: contint2001 and contint1001 now automatically run `docker system prune --force` every day and `docker system prune --force` on Sunday | https://gerrit.wikimedia.org/r/c/operations/puppet/+/773784/
  • 15:05 brennen: gitlab-prod-1001.devtools: soft reboot
  • 00:46 brennen: gitlab: disabling container registries on all existing projects (T307537)

2022-05-11

  • 23:20 brennen: gitlab-prod-1001.devtools: container registry currently enabled
  • 18:58 brennen: gitlab-prod-1001.devtools: setting to use devtools standalone puppetmaster

2022-05-08

  • 12:33 urbanecm: deployment-prep: urbanecm@deployment-mwmaint02:~$ foreachwikiindblist growthexperiments extensions/GrowthExperiments/maintenance/migrateMenteeOverviewFiltersToPresets.php --update # T304057

2022-05-06

  • 12:55 hashar: Migrated Castor service from integration-castor03 to integration-castor05 # T252071

2022-04-26

  • 15:40 brennen: train 1.39.0-wmf.9 (T305215): no current blockers - expect to start train ops after the toolhub deployment window wraps, so some time after 17:00 UTC; taking a pre-train stroll-around-the-block break before that.
  • 13:46 James_F: Deleting deployment-mx02.deployment-prep.eqiad1.wikimedia.cloud for T306068
  • 13:38 James_F: Zuul: [mediawiki/extensions/SimilarEditors] Install basic prod CI for T306897
  • 12:33 hashar: Manually pruned dangling docker images on contint1001 and contint2001
  • 08:30 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/780824
  • 08:09 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/785204

2022-04-20

  • 16:25 zabe: root@deployment-cache-upload06:~# touch /srv/trafficserver/tls/etc/ssl_multicert.config && systemctl reload trafficserver-tls.service

2022-04-18

  • 19:27 brennen: gitlab runners: deleting a number of stale runners with no contacts in > 2 months which are most likely no longer extant
  • 16:49 brennen: phabricator: created phame blog https://phabricator.wikimedia.org/phame/blog/view/22/ for T306329
  • 16:48 brennen: phabricator: adding self to acl*blog-admins
  • 15:33 James_F: Shutting off deployment-wdqs01 from the Beta Cluster project per T306054; it's apparently unused, so this shouldn't break anything.

2022-04-07

  • 06:07 urbanecm: deployment-prep: foreachwiki extensions/GrowthExperiments/maintenance/T304461.php --delete # T304461, output is at P24204
  • 05:54 urbanecm: deployment-prep: mwscript extensions/GrowthExperiments/maintenance/T304461.php --wiki={enwiki,cswiki} --delete # T304461

2022-04-06

  • 20:03 thcipriani: rebooting phabricator
  • 11:44 James_F: Zuul: [mediawiki/extensions/WikiEditor] Add BetaFeatures to phan deps for T304596

2022-03-29

  • 14:20 James_F: Zuul: [mediawiki/extensions/IPInfo] Add EventLogging phan dependency for T304948
  • 12:32 hashar: integration-agent-docker-1039: clearing leftover pipelinelib builds: `sudo rm -fR /srv/jenkins/workspace/workspace/*` T304932 T302477
  • 05:35 hashar: Relocate castor directory on integration-castor03 from `/srv/jenkins-workspace/caches` to `/srv/castor` https://gerrit.wikimedia.org/r/c/operations/puppet/+/774771

2022-03-27

  • 13:23 James_F: Zuul: [releng/phatality] Make the node14 CI job voting T304736

2022-03-26

  • 02:37 Reedy: beta-update-databases-eqiad is back to @hourly

2022-03-25

  • 23:51 Reedy: temporarily turning off periodic building of beta-update-databases-eqiad until it's run to completion
  • 23:21 Reedy: running /usr/local/bin/wmf-beta-update-databases.py manually
  • 20:22 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/773866
  • 20:02 brennen: mediawiki-new-errors: ran check-new-error-tasks/check.sh and cleared "resolved" filters
  • 09:43 hashar: Building Quibble Docker images to rename quibble-with-apache to quibble-with-supervisord

2022-03-24

  • 20:00 hashar: reloading Zuul for Id844e1 # T299320
  • 20:00 James_F: Clearing integration-castor03:/srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/mwgate-node14-docker/_cacache/content-v2/sha512/22/ for T304652
  • 15:00 James_F: Zuul: [design/codex] Publish code coverage reports for T303899
  • 09:37 Lucas_WMDE: killed a beta-scap-sync-world job manually, let’s see if that helps getting beta updates unstuck

2022-03-23

  • 17:35 brennen: restarting phabricator for T304540, brief downtime expected
  • 14:56 dancy: Updating scap to 4.5.0-1+0~20220321191814.216~1.gbp24bc64 in beta cluster

2022-03-21

  • 08:35 hashar: The castor cache for mediawiki/core wmf/1.39-wmf.1 is actually empty!
  • 08:32 hashar: Nuking npm castor cache /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/wmf-quibble-selenium-php72-docker/npm/ # T300203

2022-03-18

  • 14:18 elukey: restart testing of kafka logging TLS certificates (may affect logstash in beta, ping me in case it is a problem)
  • 13:22 hashar: Rolling back Quibble jobs from 1.4.4 T304147
  • 07:41 elukey: experimenting with PKI and kafka logging on deployment-prep, logstash dashboard/traffic may be down (please ping me in case it is a problem)

2022-03-14

  • 23:57 James_F: Zuul: [ooui] Switch from node12 to node14
  • 23:46 James_F: Docker: Publishing node14-test-browser-php80-composer:0.1.0
  • 23:27 James_F: Zuul: Drop legacy node12 templates except the one for Services
  • 23:10 James_F: Zuul: [oojs/router] Drop custom job and just use the generic node14 one
  • 23:08 James_F: Zuul: [oojs/core] Switch from node12 to node14 jobs
  • 22:46 James_F: Zuul: [unicodejs] Switch from node12 to node14
  • 22:25 James_F: Zuul: [VisualEditor/VisualEditor] Switch from node12 to node14
  • 19:51 James_F: Zuul: Migrate almost all libraries and tools from node12 to node14 for T267890
  • 15:36 James_F: Zuul: Switch extension-javascript-documentation from node12 to node14 for T267890
  • 15:21 James_F: Zuul: Switch all mwgate jobs from node12 to node14 for T267890
  • 09:52 hashar: Building Quibble Docker images for https://gerrit.wikimedia.org/r/757867 | T300340
  • 08:54 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/770079

2022-03-11

  • 04:02 zabe: zabe@deployment-mwmaint02:~$ mwscript extensions/CentralAuth/maintenance/populateGlobalEditCount.php --wiki=metawiki

2022-03-08

  • 20:31 brennen: requiring 2fa for all users under /repos

2022-03-07

  • 10:53 zabe: restarted apache on deployment-mediawiki11 # T302699

2022-03-02

  • 19:53 James_F: Zuul: Configure CI for the forthcoming REL1_38 branches for T302908
  • 15:56 dancy: Updating scap to 4.4.1-1+0~20220302155149.192~1.gbpe351d6 in beta
  • 15:27 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/767493
  • 15:04 taavi: resolve merge conflicts on deployment-puppetmaster04

2022-02-28

  • 19:29 brennen: removing mutante (dzahn) as application-level gitlab admin; adding as owner of /repos for the time being to facilitate some migrations
  • 19:22 dancy: Update scap to 4.4.0-1+0~20220228192031.189~1.gbp0a8436 in beta
  • 19:17 brennen: adding mutante (dzahn) as application-level gitlab admin

2022-02-26

  • 20:05 zabe: apply T302658 on deployment-prep centralauth databases
  • 13:24 zabe: apply T302660 on deployment-prep centralauth databases
  • 13:19 zabe: apply T302659 on deployment-prep centralauth databases

2022-02-24

  • 16:02 dancy: Updating beta cluster scap to 4.4.0-1+0~20220224155429.187~1.gbp66c5c2
  • 13:44 hashar: integration/config now fully enforces shellcheck https://gerrit.wikimedia.org/r/756088
  • 13:13 hashar: Built image docker-registry.discovery.wmnet/releng/castor:0.2.5
  • 13:10 hashar: Updating castor-save-workspace-cache job https://gerrit.wikimedia.org/r/764817
  • 11:54 hashar: Built image docker-registry.discovery.wmnet/releng/shellcheck:0.1.1
  • 11:41 hashar: Built image docker-registry.discovery.wmnet/releng/sonar-scanner:4.6.0.2311-2
  • 11:04 hashar: Built image docker-registry.discovery.wmnet/releng/operations-puppet:0.8.6
  • 08:58 hashar: Built image docker-registry.discovery.wmnet/releng/mediawiki-phan-testrun:0.2.1

2022-02-23

  • 23:21 dancy: Update beta cluster scap to 4.3.1-1+0~20220223231645.183~1.gbp8ddb60
  • 20:10 dancy: Updating scap in beta
  • 19:23 hashar: Built docker-registry.discovery.wmnet/releng/logstash-filter-verifier:0.0.3
  • 12:41 hashar: Depooling integration-agent-puppet-docker-1002 , pooling integration-agent-puppet-docker-1003 # T252071
  • 10:21 hashar: Created Bullseye instance integration-agent-puppet-docker-1003 https://horizon.wikimedia.org/project/instances/96cf9ddc-daa3-4c9f-8c21-cdd58e95973e/ # T252071
  • 08:37 hashar: Removing Stretch based integration-agent-qemu-1001 # T284774

2022-02-22

  • 16:41 zabe: zabe@deployment-mwmaint02:~$ foreachwiki migrateUserGroup.php oversight suppress # T112147
  • 13:28 urbanecm: deployment-prep: Create database for incubatorwiki (T210492)

2022-02-21

  • 14:58 hashar: Reverting Quibble jobs from 1.4.0 to 1.3.0 # T302226
  • 07:31 hashar: Switching Quibble jobs from Quibble 1.3.0 to 1.4.0 # T300340 T291549 T225730
  • 07:27 hashar: Refreshing all Jenkins jobs

2022-02-20

  • 10:32 qchris: Manually triggering replication run of Gerrit's analytics/datahub to populate newly created analytics-datahub GitHub repo

2022-02-19

  • 12:19 taavi: restart trafficserver-tls on deployment-cache-text06
  • 02:15 James_F: Zuul: [design/codex] Publish the Netlify preview on every patch for T293705
  • 00:35 James_F: Manually re-triggered a build of the docs of Codex (via `zuul-test-repo design/codex postmerge`) now that we actually set the environment vars for T293705

2022-02-17

  • 21:48 brennen: added Dzahn (mutante) to acl*repository-admins on phabricator
  • 15:58 zabe: root@deployment-cache-upload06:~# touch /srv/trafficserver/tls/etc/ssl_multicert.config && systemctl reload trafficserver-tls.service # T301995
  • 13:35 hashar: Reloading Zuul for https://gerrit.wikimedia.org/r/c/integration/config/+/763207
  • 13:20 Reedy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/763458
  • 11:12 hashar: Bringing deployment-deploy03 back
  • 11:07 hashar: Disabled deployment-deploy03 Jenkins agent in order to revert some mediawiki/core patch and test the outcome

2022-02-16

  • 18:20 hashar: Tag Quibble 1.4.1 @ d4bd2801de # T300301
  • 16:42 dancy: Updating to scap 4.3.1-1+0~20220216163646.173~1.gbp823710 in beta
  • 12:55 jelto: apply gitlab-settings to gitlab-prod-1001.devtools.eqiad1.wikimedia.cloud
  • 10:09 hashar: Reloading Zuul for I997fee
  • 09:59 hashar: Reloading Zuul for I2ffa01

2022-02-15

  • 21:12 dancy: rebooting deployment-mediawiki12.deployment-prep.eqiad1.wikimedia.cloud to try to revive beta wikis
  • 20:59 dancy: Killed runaway puppet agent on deployment-mediawiki11.deployment-prep.eqiad1.wikimedia.cloud
  • 16:24 hashar: Restarting CI Jenkins for plugins updates
  • 16:21 hashar: Upgrading Jenkins plugins on releases Jenkins
  • 16:06 hashar: Rollback fresh-test Jenkins job to the version intended to run on integration-agent-qemu-1001
  • 15:26 hashar: Reloading Zuul for If80b4b

2022-02-14

  • 16:28 dancy: Updating scap in beta cluster to 4.3.1-1+0~20220211225318.167~1.gbp315b2c
  • 16:16 Amir1: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/762471
  • 15:41 hashar: Messing with the fresh-test Jenkins job to polish up Qemu / qcow2 integration
  • 14:26 jnuche: Jenkins upgrade complete T301361
  • 13:54 jnuche: Jenkins contint instances are going to be restarted soon

2022-02-12

  • 18:22 urbanecm: deployment-prep: reboot deployment-eventgate-3 (T289029)

2022-02-09

  • 15:22 taavi: deleted shutoff deployment-mx02

2022-02-08

  • 17:34 taavi: remove scap from deployment-kafka-main/jumbo
  • 16:23 taavi: hard reboot misbehaving deployment-echostore01
  • 13:39 taavi: delete /srv/mediawiki-staging.save on deployment-deploy03

2022-02-03

  • 18:41 taavi: deployment-prep: route /w/api.php to deployment-mediawiki11, trying to reduce load on a single server
  • 14:53 hashar: Building Docker images for Quibble 1.4.0 (prepared by kostajh)
  • 13:51 kostajh: Tag Quibble 1.4.0 @ 4231bc2 # T300340 T291549 T225730

2022-02-02

  • 16:50 dancy: Upgrading scap to 4.2.2-1+0~20220202164708.157~1.gbp376a16 in beta.
  • 16:12 dancy: Upgrading scap to 4.2.2-1+0~20220201161808.156~1.gbp1c1c64 in beta

2022-01-31

  • 19:01 James_F: Re-configured Jenkins job mediawiki-i18n-check-docker to 9e3ea96 for T222216
  • 10:49 hashar: Added integration-agent-qemu-1003 with label `Qemu` # T284774

2022-01-28

  • 21:45 taavi: running recountCategories.php on all beta wikis per T299823#7652496
  • 14:27 hashar: taking heapdump of CI Jenkins `sudo -u jenkins /usr/lib/jvm/java-11-openjdk-amd64/bin/jmap -dump:live,format=b,file=/var/lib/jenkins/202201281527.hprof xxxx`

2022-01-27

  • 20:26 hashar: Successfully published image docker-registry.discovery.wmnet/releng/logstash-filter-verifier:0.0.2 # T299431
  • 19:34 Amir1: Reloading Zuul to deploy 757464
  • 16:00 hashar: Pooling back agents 1035 1036 1037 1038; they could not connect due to an ssh host key mismatch: since yesterday they all got attached to instance 1033 and accepted that host key # T300214
  • 09:16 hashar: integration: cumin --force 'name:docker' 'apt install rsync' # T300236
  • 09:05 hashar: integration: cumin --force 'name:docker' 'apt install rsync' # T300214
  • 00:24 thcipriani: restarting jenkins

2022-01-26

  • 20:29 hashar: Completed migration of integration-agent-docker-XXXX instances from Stretch to Bullseye - T252071
  • 19:55 hashar: deleting integration-agent-docker-1014, which only has the `codehealth` label. A short-lived experiment no longer used since October 2nd 2019 - https://gerrit.wikimedia.org/r/c/integration/config/+/540362 - T234259
  • 18:56 hashar: integration: pooled in Jenkins a few more Bullseye docker agents for T252071
  • 18:17 hashar: integration: pooled in Jenkins a few Bullseye docker agent for T252071
  • 16:45 hashar: integration: creating integration-agent-docker-1023 based on buster with new flavor `g3.cores8.ram24.disk20.ephemeral60.4xiops` # T290783

2022-01-25

  • 20:17 James_F: Zuul: [mediawiki/extensions/CentralAuth] Drop UserMerge dependency
  • 16:39 James_F: Zuul: Mark Math extension as now tarballed in parameter_functions for T232948
  • 15:57 James_F: Zuul: [mediawiki/extensions/Math] Add Math to the main gate for T232948
  • 13:44 hashar: Jenkins CI: added Logger https://integration.wikimedia.org/ci/log/ProcessTree%20-%20T299995/ to watch `hudson.util.ProcessTree` for T299995
  • 10:02 hashar: integration: removing usage of `role::ci::slave::labs::docker::docker_lvm_volume` in Horizon following https://gerrit.wikimedia.org/r/c/operations/puppet/+/755948 . Docker role instances now always have a 24G partition for Docker
  • 09:59 hashar: integration-agent-qemu-1001: resized /srv to 100% disk free: `lvextend -r -l +100%FREE /dev/mapper/vd-second--local--disk` # T299996
  • 09:59 hashar: integration-agent-qemu-1001: resizing /dev/mapper/vd-second--local--disk (/srv) to 20G : `resize2fs -p /dev/mapper/vd-second--local--disk 20G` # T299996
  • 09:51 hashar: integration-agent-qemu-1001: resizing /dev/mapper/vd-second--local--disk (/srv) to 20G : `resize2fs -p /dev/mapper/vd-second--local--disk 20G`
  • 09:51 hashar: integration-agent-qemu-1003: nuked /dev/vd/second-local-disk and /srv to make room for a docker logical volume. That has fixed puppet T299996
  • 09:22 Reedy: unblocked beta again
  • 07:32 Krinkle: integration-castor03:/srv/jenkins-workspace/caches$ sudo rm -rf castor-mw-ext-and-skins/

2022-01-24

  • 21:44 Reedy: unstick beta ci jobs
  • 21:19 jeena: reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/756523
  • 20:36 Krinkle: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/756139
  • 17:28 hashar: Nuke castor caches on integration-castor03 : sudo rm -fR /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/{quibble-vendor-mysql-php72-selenium-docker,wmf-quibble-selenium-php72-docker} # T299933
  • 17:28 hashar: Nuke castor caches on integration-castor03 : sudo rm -fR /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/{quibble-vendor-mysql-php72-selenium-docker,wmf-quibble-selenium-php72-docker}

2022-01-22

  • 13:40 taavi: apply T299827 on deployment-prep centralauth database
  • 11:44 taavi: restart varnish-frontend.service on deployment-cache-upload06 to clear puppet agent failure alerts

2022-01-21

  • 18:12 taavi: resolved merge conflicts on deployment-puppetmaster04
  • 15:50 hashar: integration-puppetmaster-02: deleted 2021 snapshot tags in puppet repo and ran `git gc --prune=now`
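
The tag-deletion-plus-`git gc --prune=now` step above can be reproduced on a throwaway repository; the `snapshot-20211231` tag name here is illustrative, not one of the real puppetmaster snapshot tags.

```shell
# Sketch of the snapshot-tag cleanup, on a scratch repo.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=sal -c user.email=sal@example.org \
    commit -q --allow-empty -m init
git -C "$repo" tag snapshot-20211231
git -C "$repo" tag -d snapshot-20211231      # drop the old snapshot tag
git -C "$repo" gc --quiet --prune=now        # discard unreachable objects now
```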

2022-01-20

  • 20:24 James_F: Zuul: [Kartographer] Add parsoid as dependency for CI jobs
  • 20:22 James_F: Zuul: [DiscussionTools] Add Gadgets as dependency for Phan jobs
  • 20:04 dancy: Jenkins beta jobs are back online, using scap prep auto now.
  • 19:19 dancy: Pausing beta Jenkins jobs to make a copy of /srv/mediawiki-staging in preparation for testing
  • 19:10 dancy: Unpacking scap (4.1.1-1+0~20220120175448.144~1.gbp517f9d) over (4.1.1-1+0~20220113154148.133~1.gbp6e3a17) on deploy03
  • 18:07 hashar: Updating Quibble jobs to have MediaWiki files written on the hosts /srv partition (38G) instead of inside the container which ends in /var/lib/docker (24G) https://gerrit.wikimedia.org/r/755743 # T292729
  • 16:31 hashar: Rebalancing /var/lib/docker and /srv partitions on CI agents | https://gerrit.wikimedia.org/r/755713
  • 12:12 hashar: contint2001 deleting all the Docker images (they will be pulled as needed)
  • 12:10 hashar: contint2001 : docker container prune && docker image prune
  • 12:07 hashar: contint1001 deleting all the Docker images (they will be pulled as needed)
  • 12:04 hashar: contint1001 `docker image prune`
  • 11:51 hashar: Cleaning very old Docker images on contint1001.wikimedia.org
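
The contint cleanup entries above are ordered deliberately: stopped containers are pruned first so the images they reference become unreferenced and therefore prunable. A dry-run sketch (commands printed, not executed; `-f` skips the confirmation prompt the interactive form shows):

```shell
#!/bin/sh
# Dry-run sketch of the contint1001/contint2001 Docker cleanup above.
run() { echo "+ $*"; }  # print the command instead of executing it

# 1. Remove stopped containers.
run docker container prune -f

# 2. Drop dangling (untagged) images; CI re-pulls images as needed.
run docker image prune -f

# To delete all unused images, not just dangling ones, add -a.
run docker image prune -af
```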

2022-01-19

2022-01-18

  • 19:56 hashar: building Docker images for https://gerrit.wikimedia.org/r/754951
  • 18:01 taavi: added ryankemper as a member of the deployment-prep project
  • 15:00 hashar: Updating Jenkins jobs for Quibble 1.3.0 with the proper PHP version in the images # T299389
  • 11:39 hashar: Rolling back Quibble 1.3.0 jobs due to PHP configuration file issues in at least releng/quibble-buster73:1.3.0 # T299389
  • 08:07 hashar: Updating Jenkins jobs for Quibble to pass `--parallel-npm-install` https://gerrit.wikimedia.org/r/c/integration/config/+/754569
  • 08:02 hashar: Updating Jenkins jobs for Quibble 1.3.0

2022-01-17

  • 16:28 hashar: Building Quibble 1.3.0 Docker images
  • 16:16 hashar: Tagged Quibble 1.3.0 @ 2b2c7f9a45 # T297480 T226869 T294931
  • 08:32 hashar: Refreshing all Jenkins jobs with jjb to take into account recent changes related to the Jinja2 docker macro

2022-01-14

  • 15:56 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/753981
  • 14:59 hashar: Starting VM integration-agent-docker-1022, which had been shut down since December and is Bullseye-based # T290783
  • 13:49 hashar: Restarting all CI Docker agents via Horizon to apply new flavor settings T265615 T299211
  • 01:47 dancy: revert to scap 4.1.1-1+0~20220113154148.133~1.gbp6e3a17 in beta

2022-01-13

  • 18:02 dancy: Updating scap to 4.1.1-1+0~20220113154506.135~1.gbp523480 on all beta hosts
  • 17:54 dancy: Reloading Zuul to deploy https://gerrit.wikimedia.org/r/753792
  • 16:27 dancy: testing scap prep auto on deployment-deploy03
  • 15:52 dancy: Update scap to 4.1.1-1+0~20220113154506.135~1.gbp523480 on deployment-deploy03
  • 11:27 hashar: Updating Jenkins job to normalize usage of `docker run --workdir` https://gerrit.wikimedia.org/r/c/integration/config/+/753457
  • 10:52 hashar: Restarting Jenkins CI for plugins update
  • 10:42 hashar: Applied Jenkins built-in node migration to CI Jenkins (`master` → `built-in` renaming) # T298691
  • 10:14 taavi: cancelled stuck deployment-prep jobs on jenkins

2022-01-12

2022-01-11

  • 09:18 hashar: Updating all Jenkins jobs following recent "noop" refactorings

2022-01-10

  • 17:13 dancy: Update beta scap to 4.1.0-1+0~20220107203309.130~1.gbpcd0ace
  • 14:01 James_F: Zuul: Add gate-and-submit-l10n to Isa for T222291

2022-01-05

2022-01-04

2022-01-03

  • 14:37 hashar: Upgraded Java 11 on contint2001 && contint1001. Restarted CI Jenkins.
  • 14:35 hashar: Upgraded Java 11 on releases1002 && releases2002


Archives