Portal:Data Services/Admin/Shared storage
Labstore (cloudstore) is the naming prefix for a class of servers that fulfill different missions. The common thread is off-compute-host storage for use cases that serve the VPS instances and Tools. The majority of labstore clusters provide NFS shares for some purpose.
Clusters
Primary Cluster
Servers: labstore1004, labstore1005
This was previously called the secondary cluster (the old name now survives only in the client mount).
- Tools project share that is used operationally for deployment
- Tools home share that is used for /home in ToolForge
- Misc home and project shares that are used by all NFS enabled projects, except maps
Components: DRBD, NFS, nfs-manage, maintain-dbusers, nfs-exportd, BDSync
Secondary Cluster
Servers: cloudstore1008, cloudstore1009
- An NFS share large enough to be used as general scratch space across projects
- /data/scratch
- Maps project(s) also have tile generation on a share here temporarily.
- (proposed) Quota limited rsync backup service for Cloud VPS tenants (phab:T209530)
- Uses DRBD to stay in sync similar to the primary cluster.
Components: NFS, DRBD, nfs-exportd, nfs-manage
Dumps
Servers: labstore1006, labstore1007
- Dumps customer facing storage
- NFS exports to Cloud VPS projects including Toolforge
- NFS exports to Analytics servers
- Rsync origin server for dumps mirroring
- https://dumps.wikimedia.org (nginx)
- Analytics manages an HDFS client there, which means the servers are kerberized
- Does NOT use nfs-exportd; NFS, rsync, and nginx should remain active on both servers
Components: NFS, nginx, rsync (for mirrors and syncing to stats servers), kerberos, hdfs
Offsite backup
Servers: labstore2003, labstore2004
- labstore2003 acts as a backup server for the "tools-project" logical volume from labstore100[45]
- labstore2004 acts as a backup server for the "misc project" logical volume from labstore100[45]
Components
DRBD
DRBD syncs block devices between two hosts in real time. The upstream docs for the DRBD version found in Buster and later are at https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/. Stretch and earlier used DRBD 8, which is very similar and documented at https://linbit.com/drbd-user-guide/users-guide-drbd-8-4/.
DRBD replication can be configured in several ways. Our system uses protocol B, which is memory-synchronous replication: writes are not considered complete until they have reached the standby host, but the active host does not wait for them to be flushed to disk on the standby. That has never caused problems in the past, but it is possible to corrupt data on the standby with this protocol if the standby host fails. It's a balanced approach.
In any of our DRBD clusters, backups are ideally taken from snapshots on the secondary host because it isn't actively serving NFS. It is only writing blocks to keep up with the active machine.
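For orientation, the protocol is selected in the DRBD resource configuration. A minimal sketch (the resource name below is hypothetical; the real configuration is managed by puppet):
resource misc {
    net {
        protocol B;    # memory-synchronous replication
    }
    # device, disk, and peer address stanzas omitted
}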
Useful commands
DRBD status
cat /proc/drbd
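On hosts running DRBD 9 (Buster and later), /proc/drbd only reports version information; per-resource state is shown by:
drbdadm status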
NFS
NFS volume cleanup
Because the Primary and Secondary NFS clusters lack user quotas, WMCS must occasionally create a task to remove large files and help users clean up their shares. If six months have passed without a clean-up, check the NFS servers (at least on Grafana) to make sure one isn't needed. The tasks generally take a form similar to task T247315: an overall tracking task with administrator work logged on it and a tree of user tasks that we assign to end users to clean up their tool shares or project shares, with some advice and assistance where possible.
If a page has triggered a cleanup task, make sure you downtime the alert for a good long while.
Admin actions include, but are not limited to:
- Checking Grafana for the list of the heaviest users.
- Running the following to find the largest files:
ionice -c 3 nice -19 find /srv/tools -type f -size +100M -printf "%k KB %p\n" | sort -h > tools_large_files_$(date +%Y%m%d).txt
Often they are simply Toolforge-created logs that can be truncated with truncate -s 0 $filename and a SAL log to tools.$toolname (see the example after this list).
- In general, we consider truncating automatically-created *.out and *.err files that were created by Grid Engine to be fair for admins to do unless there is obviously necessary troubleshooting information in there. If there is troubleshooting info, it is probably sufficient to copy a representative sample into a task for the tool maintainer before truncation.
- Log files generated by the webservice command such as access.log and error.log can be treated similarly.
- Other files should probably be checked with the user before deleting unless the situation is very urgent (usually asking the user in the phabricator task is enough).
- If a service is consistently filling up NFS volumes, and users cannot be reached, it could be shut down as a danger to the overall service. We should make our best effort to avoid needing to do that, of course.
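For example, the truncation mentioned above might look like this (the tool name exampletool and the path are hypothetical; adjust to the actual file you found):
user@labstore1004:~$ sudo truncate -s 0 /srv/tools/project/exampletool/bigjob.err
followed by a SAL entry for tools.exampletool.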
NFS client operations
When significant changes are made on an NFS server, the clients that mount it often need actions taken on them to recover from whatever state they are suddenly in. To this end, the cumin host file backend works in tandem with the nfs-hostlist script, which generates a list of VMs by project and specified mounts where NFS is mounted. Currently, you must be on a cloudinfra cumin host to run these commands. The current host is cloud-cumin-01.cloudinfra.eqiad1.wikimedia.cloud
The nfs-hostlist script takes several options (some are required):
- -h Show help
- -m <mount> A space-delimited list of "mounts" as defined in the /etc/nfs-mounts.yaml file generated from puppet (it won't accept wrong answers, so this is a pretty safe option)
- --all-mounts Anything NFS mounted (but you can still limit the number of projects)
- -p <project> A space-delimited list of OpenStack projects to run against. This will be further trimmed according to the mounts you selected. (If you used -m maps and -p maps tools, you'll only end up with maps hosts)
- --all-projects Any project mentioned in /etc/nfs-mounts.yaml, but you can still filter by mounts.
- -f <filename> Without this, the script prints to STDOUT.
Example:
- First, create your host list based on the mounts or projects you know you will be operating on. For example, if you were making a change only to the secondary cluster, which currently serves maps and scratch, you might generate a host list with the command below. Note that root/sudo is needed because this interacts with cumin's query setup to get hostnames. It will take quite a while to finish because it also calls the openstack-browser API to read Hiera settings.
bstorm@cloud-cumin-01:~$ sudo nfs-hostlist -m maps scratch --all-projects -f hostlist.txt
- Now you can run a command with cumin across all hosts in hostlist.txt, similar to:
bstorm@cloud-cumin-01:~$ sudo cumin --force -x 'F{/home/bstorm/hostlist.txt}' 'puppet agent -t'
It is sensible to generate the host list shortly before the changes take place so that you can respond quickly with cumin as needed.
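The same pattern works for sanity checks after the change; for example (a hedged follow-up reusing the host list above), to list the NFS mounts each client currently sees:
bstorm@cloud-cumin-01:~$ sudo cumin --force -x 'F{/home/bstorm/hostlist.txt}' 'mount -t nfs,nfs4'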
nfs-manage
This script is meant as the entry point to bringing up and taking down the DRBD/NFS stack in its entirety.
nfs-manage status
nfs-manage up
nfs-manage down
To actually use it to fail over a cluster, try Portal:Data_Services/Admin/Runbooks/Failover_an_NFS_cluster
nfs-exportd
Dynamically generates the contents of /etc/exports.d every 5 minutes to mirror active projects and shares as defined in /etc/nfs-mounts.yaml.
This daemon fetches project information from OpenStack to know the IPs of the instances and add them to the exports ACL.
See ::labstore::fileserver::exports.
WARNING: there is a known issue: if some OpenStack component is misbehaving (for example, keystone), this will typically return a 401. Please don't allow this to make it past the traceback; we want exceptions and failures in the service rather than letting it remove exports. There is also a cron job that backs up the exports to /etc/exports.bak.
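The generated files contain standard exports(5) lines. An illustrative (entirely made-up) entry for a single project share and two instance IPs might look like:
/srv/misc/shared/exampleproject/project 172.16.1.21(rw,sec=sys,sync,no_subtree_check,root_squash) 172.16.1.22(rw,sec=sys,sync,no_subtree_check,root_squash)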
maintain-dbusers
We maintain the list of accounts used to access the Wiki Replicas on the labstore server that is actively serving the Tools project share (the primary cluster above, formerly named secondary). The script writes out a $HOME/replica.my.cnf file to each user and project home containing MySQL connection credentials. It uses LDAP to get the list of accounts to create.
The credential files are created with the immutable bit set with chattr to prevent deletion by the Tool account.
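The file itself is a plain MySQL option file; the values below are made up:
[client]
user = s12345
password = not-a-real-password
When a credential has to be regenerated, the immutable bit can be toggled with chattr (the path shown is hypothetical):
user@labstore1004:~$ sudo chattr -i /srv/tools/project/exampletool/replica.my.cnf
user@labstore1004:~$ sudo chattr +i /srv/tools/project/exampletool/replica.my.cnf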
The code pattern here is that you have a central data store (the db) that is then read from and written to by various independent functions. These functions are not 'pure' - they could even be separate scripts. They mutate the DB in some way. They are also supposed to be idempotent - if they have nothing to do, they should not do anything.
Most of these functions should be run in a continuous loop, maintaining mysql accounts for new tool/user accounts as they appear.
populate_new_accounts
- Find list of tools/users (From LDAP) that aren't in the `accounts` table
- Create a replica.my.cnf for each of these tools/users
- Make an entry in the `accounts` table for each of these tools/users
- Make entries in `account_host` for each of these tools/users, marking them as absent
create_accounts
- Look through `account_host` table for accounts that are marked as absent
- Create those accounts, and mark them as present.
If we need to add a new labsdb, we can do so the following way:
- Add it to the config file
- Insert entries into `account_host` for each tool/user with the new host.
- Run `create_accounts`
In normal usage, just a continuous process running `populate_new_accounts` and `create_accounts` in a loop will suffice.
TODO:
- Support for maintaining per-tool restrictions (number of connections + time)
BDSync
We use the WMF bdsync package on both source and destination backup hosts. Backup hosts periodically run a job to sync a block device from a remote target to a local LVM device.
Backups
Uses bdsync, with rsync and SSH as the transport, to copy block devices over the network.
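For orientation only, the underlying bdsync pattern looks roughly like this (the hostname, device paths, and the --remdata pull direction are assumptions; the actual jobs are defined in puppet and may differ):
bdsync --remdata "ssh root@labstore1004.eqiad.wmnet bdsync --server" /dev/backup/tools-project /dev/labstore/tools-project > /srv/tools-project.bdsync
bdsync --patch=/dev/backup/tools-project < /srv/tools-project.bdsync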
Mounting a backup
When mounting a backup from a DRBD device, you have to tell the OS what kind of filesystem it is, since it just sees "DRBD".
Example:
$ mount -t ext4 /dev/backup/tools-project /mnt/tools-project/
Additionally, it is very important to unmount the backup as soon as work is done with it, because the backup jobs will fail if the device is mounted.
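When finished with the example above:
$ umount /mnt/tools-project/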
How to enable NFS for a project
Setup labstore1004
1. Find out the GID for the project
$ useldap getent group project-NAME
user@labstore1004:~$ useldap getent group project-wikilink
project-wikilink:*:54031:nskaggs,samwalton9,suecarmol,jsn,novaadmin,crucio
2. Add it to modules/labstore/templates/nfs-mounts.yaml.erb (example, another example)
3. Run sudo puppet agent -tv on labstore1004
4. Create the shared folders on labstore1004 as /srv/misc/shared/<project_name>/home and /srv/misc/shared/<project_name>/project as appropriate. Leave ownership with "root:root". That is normal.
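Continuing the wikilink example from step 1, that amounts to something like the following on labstore1004 (adjust which directories you create to match the shares the project actually needs):
user@labstore1004:~$ sudo mkdir -p /srv/misc/shared/wikilink/home /srv/misc/shared/wikilink/project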
Enabling on the project
Utilize the hiera key mount_nfs to opt in or out (e.g. mount_nfs: true). If the share isn't used to run things from directly, it is also a good idea to add profile::wmcs::nfsclient::mode: soft for the same server.
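Taken together, the project's hiera ends up containing something like:
mount_nfs: true
profile::wmcs::nfsclient::mode: soft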
Then run puppet on the project VMs.