Wikimedia Cloud Services team/Clinic duties

The WMCS team practices a clinic duty rotation: each team member takes a turn, in sequence, performing these duties for one week. Your shift begins after the weekly team meeting and ends with the next one.

In a similar fashion, we have two oncall duty rotations that also run for one week (see the calendar).

= Start of clinic duty =

= 🦄 of the week duties =

== Phabricator ==

== Community ==

=== IRC ===
*#wikimedia-cloud monitoring
**Respond to help requests
**Watch for pings to other team members and intercept if appropriate
**Watch for pings to !help
**Call people out for poor behavior in the channel
**Praise people for helping constructively

=== Community Requests ===
Check for and respond to incoming requests. For new project requests or quota requests, seek and obtain at least one other person's approval before granting the request, and document that approval explicitly on the Phabricator ticket. Bring all floating IP requests, and any request you are unsure about, to the weekly meeting. Requests that would more than double an existing quota (for example, a jump from 16 to 40 cores), or that ask for more than 300GB of storage, should also be reviewed at the weekly meeting.

== Maintenance tasks (probably not all weeks) ==

= End of clinic duty =
* Summarize work reported by the team on the weekly meeting etherpad and add summary to:
** Add outgoing updates to weekly SRE meeting document (put important notes in bold)


= Oncall Duty =
During your shift, you are expected to monitor and react to alerts, and to prioritize work on tasks that improve the alerting, monitoring, and stability of the platforms. See the [[Wikimedia Cloud Services team/EnhancementProposals/Decision record T310598 Team oncall alerting schedules and processes|Decision Record]] for more information.


==Monitoring==
*Monitor the following for alerts:
**phaultfinder
***[[phab:maniphest/query/LXZs.g30DfOi/#R|Open tasks]]
***[[phab:maniphest/query/0PZ2LPsDUKL5/#R|All tasks]]
**[https://alerts.wikimedia.org/?q=team%3Dwmcs&q=%40state%3Dactive alertmanager (team=wmcs)]
***You might also find [https://logstash.wikimedia.org/goto/d5c0cb63bb13bc883685c3d90d87cfb7 this dashboard] useful to browse alert history
**[https://prometheus-alerts.wmcloud.org/?q=%40state%3Dactive vps alertmanager]
**[https://icinga.wikimedia.org/icinga/ icinga], especially [https://icinga.wikimedia.org/cgi-bin/icinga/status.cgi?hostgroup=wmcs_eqiad&style=overview WMCS hardware].
*Watch for wmcs-related emails (cron, puppet failing on our projects, etc.) and fix the causes when possible
*Check the [https://grafana-labs.wikimedia.org/d/000000012/tools-basic-alerts tools grafana board] for trends (for gridengine monitoring, https://sge-status.toolforge.org/ and https://sge-jobs.toolforge.org/ are also helpful)
*Check for broken puppet on VMs (owners get daily emails from [[gerrit:plugins/gitiles/operations/puppet/+/refs/heads/production/modules/base/files/labs/puppet_alert.py|puppetalert.py]], but you can contact them if an instance is un-puppetized for a particularly long time). From a [[Cumin#WMCS_Cloud_VPS_infrastructure|cumin master]]:
<syntaxhighlight lang="shell-session">
cloud-cumin-03:~$ sudo cumin --force --timeout 500 -o json "A:all" "/usr/local/lib/nagios/plugins/check_puppetrun -w 3600 -c 86400" | grep "Failed to apply catalog"
cloud-cumin-03:~$ sudo cumin --force --timeout 500 -o json "A:all" "/usr/local/lib/nagios/plugins/check_puppetrun -w 3600 -c 86400" | grep -i unknown
</syntaxhighlight>
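To get a clean list of affected hostnames rather than raw grep matches, you can post-process the JSON. This is a minimal sketch, assuming cumin's <code>-o json</code> output is a single JSON object mapping each hostname to its command output, and that <code>jq</code> is available on the cumin master; check both assumptions before relying on it:

<syntaxhighlight lang="shell-session">
cloud-cumin-03:~$ sudo cumin --force --timeout 500 -o json "A:all" \
    "/usr/local/lib/nagios/plugins/check_puppetrun -w 3600 -c 86400" \
    | jq -r 'to_entries[] | select(.value | test("Failed to apply catalog")) | .key'
</syntaxhighlight>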
*Platform Health
**[https://grafana.wikimedia.org/d/50z0i4XWz/tools-overall-nfs-storage-utilization?orgId=1 NFS Storage Utilization]
**[https://grafana.wikimedia.org/d/000000579/wmcs-openstack-eqiad1?orgId=1 Openstack]
**[https://grafana.wikimedia.org/d/P1tFnn3Mk/wmcs-ceph-eqiad-health?orgId=1&search=open&folder=current&tag=ceph&tag=health Ceph Cluster]
**[https://grafana.wikimedia.org/d/000000579/wmcs-openstack-eqiad1?search=open&folder=current&orgId=1&refresh=1m WMCS Grafana Dashboards]
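When one of these dashboards looks wrong, it can help to confirm from a shell before digging further. This is a hedged sketch: the host names are illustrative, the <code>ceph</code> commands need to run on a cluster member, and the <code>wmcs-openstack</code> wrapper is assumed to be present on the cloudcontrol hosts (plain <code>openstack</code> with admin credentials works as well):

<syntaxhighlight lang="shell-session">
cloudcephmon1001:~$ sudo ceph health detail   # details on anything that is not HEALTH_OK
cloudcephmon1001:~$ sudo ceph df              # raw and per-pool storage utilization
cloudcontrol1005:~$ sudo wmcs-openstack compute service list   # per-hypervisor nova service state
</syntaxhighlight>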


==Improvements==
If nothing currently requires attention, you should work on improving tooling in this area. Consider:
*Moving alerts from [https://icinga.wikimedia.org/ Icinga] to [https://alerts.wikimedia.org/ Alertmanager] (e.g. [https://gerrit.wikimedia.org/r/c/operations/puppet/+/813275 novafullstack], [https://gerrit.wikimedia.org/r/c/operations/puppet/+/813228 ceph])
*Adding new alerts or removing stale alerts (e.g. [https://gerrit.wikimedia.org/r/c/operations/alerts/+/822319 Adding neutron alert], [https://gerrit.wikimedia.org/r/c/operations/alerts/+/812706 Adding ceph alerts], [https://gerrit.wikimedia.org/r/c/operations/alerts/+/813274 Adding novafullstack alerts]; see the sketch after this list)
*Improving [[Portal:Cloud VPS/Admin/Runbooks|runbooks]] and documentation
*Writing cookbooks to automate tasks (e.g. [https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/774385 remove grid errors], [https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/801785 remove grid node], [https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/810914 ceph_reboot], [https://gerrit.wikimedia.org/r/c/operations/cookbooks/+/806429 increase quotas])
*Cleaning up puppet code
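For the alerting bullets above, rule changes in the operations/alerts repo can be sanity-checked locally before sending them for review. A minimal sketch, with illustrative file paths; the repository's own CI remains the source of truth for how rules are actually validated:

<syntaxhighlight lang="shell-session">
$ promtool check rules team-wmcs/ceph.yaml       # validate rule syntax and semantics
$ promtool test rules team-wmcs/ceph_test.yaml   # run the unit tests for those rules, if any
</syntaxhighlight>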
