Your shell access was granted
Hello, welcome to Wikimedia Labs! I would like to inform you that your shell request was processed and you should be able to log in to the bastion host now at bastion.wmflabs.org. If you get into trouble, please read this page, which contains useful information about accessing instances. You can also ask in our IRC channel at #wikimedia-labs or send an e-mail to our mailing list firstname.lastname@example.org – thank you, and have fun using Labs! Tim Landscheidt (talk) 16:30, 22 May 2015 (UTC)
Welcome to Tool Labs
Hello Kjschiroo, welcome to the Tool Labs project! Your request for access was processed and you should be able to log in now via SSH at login.tools.wmflabs.org. To get started, check the help page and the migrating from the Toolserver page. You can also ask in our IRC channel at #wikimedia-labs or send an e-mail to our mailing list email@example.com – thank you, and have fun using Tools! Tim Landscheidt (talk) 18:11, 22 May 2015 (UTC)
Running process on tools-login
You were running a Python process on tools-login that was using large amounts of CPU and I/O. Please don't run processes directly on the login host; instead, use the computing grid (see Help:Tool_Labs/Grid for how). In addition, please consider how much disk I/O you need: /data/project is on NFS and has limited capacity. I have killed the process for now; please discuss with User:YuviPanda or User:Coren what the best solution for the disk I/O is. Thank you.
- My apologies. I will use the grid from this point forward. Regarding disk I/O, this isn't impacted by database queries, is it? Kjschiroo (talk) 01:52, 10 June 2015 (UTC)
- No, database I/O should not be an issue. If you were not using lots of disk I/O (either by writing large amounts of data, or by flushing often), I probably misread the statistics. In that case, please ignore my comment on disk I/O and just re-submit to the grid :-). valhallasw (Merlijn van Deen) (talk) 08:00, 10 June 2015 (UTC)
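For future reference, a long-running task like the one above can be submitted to the grid with jsub rather than run on tools-login directly, per Help:Tool_Labs/Grid. A minimal sketch; the job name and script name below are hypothetical:

```shell
# Submit a Python script as a grid job instead of running it on tools-login.
# "myjob" and fetch_data.py are placeholder names.
jsub -N myjob -mem 512m python fetch_data.py

# Check on the job and inspect its output:
qstat              # list your pending/running grid jobs
cat myjob.out      # stdout is written to ~/myjob.out by default
cat myjob.err      # stderr is written to ~/myjob.err
```

Memory can be tuned with -mem if the job is killed for exceeding the default allocation.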
Shell scripts as grid jobs need a shebang line
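A minimal sketch of the point in this heading: a shell script submitted to the grid must begin with a shebang line so the grid engine knows which interpreter to run it with; without one, the job may fail or run under an unexpected shell. The script contents here are illustrative only:

```shell
#!/bin/bash
# The first line above is the shebang. The grid engine reads it to decide
# which interpreter executes this script when it is submitted with jsub.
echo "grid job running as $(whoami)"
```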
Wiki Replica c1.labsdb to be rebooted Monday 2017-10-30 14:30 UTC
A tool you maintain is hosting data on labsdb1001.eqiad.wmnet (aka c1.labsdb). This server will be rebooted Monday 2017-10-30 at 14:30 UTC.
Normal usage of the *.labsdb databases should experience only limited interruption as DNS is changed to point to labsdb1003.eqiad.wmnet (aka c3.labsdb). The c1.labsdb service name will not be updated, however, so tools hardcoded to that service name will be interrupted until the reboot is complete.
There is a possibility of catastrophic hardware failure in this reboot. There will be no way to recover the server or the data it currently hosts if that happens. Tools that are hosting self-created data on c1.labsdb will lose that data if there is hardware failure. If you are unsure why your tool is hosting data on c1.labsdb, you can check the database and table names at https://tools.wmflabs.org/tool-db-usage/.
This reboot is an intermediate step before the complete shutdown of the server on Wednesday 2017-12-13. See Wiki Replica c1 and c3 shutdown for more information. --BryanDavis 00:21, 28 October 2017 (UTC)
Wiki Replica c3.labsdb to be shut down Wednesday 2017-12-13
A tool you maintain is hosting data on labsdb1003.eqiad.wmnet (aka c3.labsdb). This server will be taken out of service on Wednesday 2017-12-13.
Normal usage of the *.labsdb databases should experience only limited interruption as DNS is changed to point to the new Wiki Replica cluster. The c3.labsdb service name will not be updated, however, so tools hardcoded to that service name will be interrupted until they are updated to use a new service name.
Tools that are hosting self-created data on c3.labsdb will lose that data if it is not migrated to tools.db.svc.eqiad.wmflabs (also known as tools.labsdb). If you are unsure why your tool is hosting data on c3.labsdb, you can check the database and table names at https://tools.wmflabs.org/tool-db-usage/ or phab:P6313.