Logstash
Logstash is a tool for managing events and logs. When used generically, the term encompasses a larger system of log collection, processing, storage and searching activities.
Overview
File:ELK Tech Talk 2015-08-20.pdf (slides from the Tech Talk on ELK by Bryan Davis)
File:Wikipedia webrequest 2022.png (Wikipedia request flow diagram)
File:Using Kibana4 to read logs at Wikimedia Tech Talk 2016-11-14.pdf (slides from the Tech Talk on Kibana4 by Bryan Davis)
Various Wikimedia applications send log events to Logstash, which gathers the messages, converts them into JSON documents, and stores them in an OpenSearch cluster. Wikimedia uses OpenSearch Dashboards as a front-end client to filter and display messages from the OpenSearch cluster. These are the core components of our ELK stack, but we use additional components as well. Since we utilize more than the core ELK components, we refer to our stack as "ELK+".
(OpenSearch and OpenSearch Dashboards are forks of Elasticsearch and Kibana, created when those projects became non-free; their original names are the source of the "ELK" acronym.)
OpenSearch
OpenSearch is a distributed, multi-node search and analytics engine built on Apache Lucene.
Logstash
Logstash is a tool to collect, process, and forward events and log messages. Collection is accomplished via configurable input plugins including raw socket/packet communication, file tailing, and several message bus clients. Once an input plugin has collected data it can be processed by any number of filters which modify and annotate the event data. Finally logstash routes events to output plugins which can forward the events to a variety of external programs, local files, and several message bus implementations.
OpenSearch Dashboards
OpenSearch Dashboards is a browser-based analytics and search interface for OpenSearch.
Kafka
Apache Kafka is a distributed event streaming platform. In our ELK stack, Kafka buffers the stream of log messages produced by rsyslog (on behalf of applications) for consumption by Logstash. Nothing should output logs to Logstash directly; logs should always be sent by way of Kafka.
Rsyslog
Rsyslog is the "rocket-fast system for log processing". In our ELK stack rsyslog is used as the host "log agent". Rsyslog ingests log messages in various formats and from varying protocols, normalizes them and outputs to Kafka.
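For illustration only (see Logstash/Interface for the log shipping interfaces that are actually supported long-term), a minimal Python sketch of an application handing log lines to the local rsyslog agent over the standard /dev/log syslog socket, which rsyslog then normalizes and forwards to Kafka; the application name is a made-up placeholder.
import logging
import logging.handlers

# Hand records to the local rsyslog daemon via the standard syslog socket;
# rsyslog normalizes them and produces them to Kafka for Logstash to consume.
handler = logging.handlers.SysLogHandler(address="/dev/log")
logger = logging.getLogger("my-example-app")  # hypothetical application name
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("example event handed to the host log agent")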
Systems feeding into logstash
See 2015-08 Tech talk slides
Writing new filters is easy.
Supported log shipping protocols & formats ("interfaces")
Support of logs shipped directly from application to Logstash has been deprecated.
Please see Logstash/Interface for details regarding long-term supported log shipping interfaces.
Kubernetes
Kubernetes-hosted services are handled directly by the Kubernetes infrastructure, which ships their output via rsyslog into the Logstash pipeline. All a Kubernetes service needs to do is log in a JSON-structured format (e.g. bunyan for Node.js services) to standard output/standard error.
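As a hedged illustration, a minimal Python sketch of writing one JSON-structured log event per line to standard output, as a Kubernetes service would; the field names only loosely follow the bunyan convention and are assumptions rather than a schema required by the pipeline.
import json
import os
import socket
import sys
import time

def log(level, msg, **fields):
    # One JSON object per line on stdout; the container runtime and rsyslog
    # take care of shipping it into the Logstash pipeline.
    event = {
        "name": "my-example-service",  # hypothetical service name
        "hostname": socket.gethostname(),
        "pid": os.getpid(),
        "level": level,                # bunyan-style numeric level (30 = info)
        "time": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "msg": msg,
    }
    event.update(fields)
    sys.stdout.write(json.dumps(event) + "\n")

log(30, "request served", status=200, path="/w/api.php")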
Systems not feeding into logstash
- EventLogging (of program-defined events with schemas), despite its name, uses a different pipeline.
- Varnish logs of the billions of pageviews of WMF wikis would require a lot more hardware. Instead we use Kafka to feed web requests into Hadoop. A notable exception to this rule: varnish user-facing errors (HTTP status 500-599) are sent to logstash to make debugging easier.
- MediaWiki logs usually go to both logstash and log files, but a few log channels are not. You can check which in $wmgMonologChannels in InitialiseSettings.php.
Writing & testing filters
When writing new Logstash filters, take a look at what already exists in puppet. Each filter must be tested to avoid regressions; we use logstash-filter-verifier, and existing tests can be found in the tests/ directory. To write or run tests you will need logstash-filter-verifier and logstash installed locally, or you can use docker/podman and the puppet repository:
# From the base dir of operations/puppet.
$ cd modules/profile/files/logstash/
# The Makefile recognizes if one of podman or docker is installed
# and then it uses it.
$ make test-local
/usr/bin/docker run --rm --workdir /src -v $(pwd):/src:Z -v $(pwd)/templates:/etc/logstash/templates:Z -v $(pwd)/filter_scripts:/etc/logstash/filter_scripts:Z --entrypoint make docker-registry.wikimedia.org/releng/logstash-filter-verifier:latest
logstash-filter-verifier --diff-command="diff -u --color=always" --sockets tests/ filters/*.conf
Use Unix domain sockets.
[...cut...]
Each filter has a corresponding test named after it in tests/. Within the test file, the fields map lists the fields common to all test cases; these are used to trigger a specific filter's "if" conditions. The ignore key usually contains only @timestamp, since that field is bound to change across invocations and can be safely ignored. The remainder of a test file is a list of test cases in the form of input/expected pairs. For "input" it is recommended to use YAML's > block syntax to include verbatim JSON, whereas "expected" is usually plain YAML, although it can also be verbatim JSON if more convenient.
Getting logs from misc systems into logstash
Please see Logstash/Interface#Tailing_Log_Files.
Production Logstash Architecture
As of FY2019 Logstash infrastructure is owned by SRE. See also Logstash/SRE_onboard for more information on how to migrate services/applications.
Architecture Diagram
Web interface
- https://logstash.wikimedia.org
Authentication
- wikitech LDAP username and password and membership in one of the following LDAP groups: nda, ops, wmf
Configuration
- The cluster contains two types of nodes, configured by Puppet.
- role::logging::opensearch::collector manages the Logstash "collector" instances. These run Logstash, an OpenSearch indexing node, and an Apache vhost serving OpenSearch Dashboards. The Apache vhosts perform LDAP-based authentication to restrict access to the potentially sensitive log information.
- role::logging::opensearch::data configures an OpenSearch data node providing storage for log data.
- role::kafka::logging configures a Kafka broker for producers to publish log data to and for Logstash to consume from. This is a buffering layer to absorb log spikes and queue log events when maintenance is being performed on the logging cluster.
Hostname Convention
Current
- logstash1NNN Logstash related servers in Eqiad.
- logstash2NNN Logstash related servers in Codfw.
Future
- logstashNNNN - Logstash "collector" hosts
- opensearch-loggingNNNN - Logstash OpenSearch hosts
- kafka-loggingNNNN - Logstash Kafka broker hosts
Operating Systems
All hosts run Debian Buster as the base operating system.
Load Balancing and TLS
The misc Varnish cluster provides TLS termination and load balancing for the OpenSearch Dashboards (Kibana) web application.
OpenSearch Dashboards quick intro
- Start from one of the blue Dashboard links near the top; more are available from the Load icon near the top right.
- In "Events over time" click to zoom out to see what you want, or select a region with the mouse to zoom in.
- smaller time intervals are faster
- be careful: you may see no events at all... because you're viewing the future
- When you get lost, click the Home icon near the top right
- As an example query, wfDebugLog( 'Flow', ...) in MediaWiki PHP corresponds to type:mediawiki AND channel:flow
- Switch to using mw:Structured logging and you can query for ... AND level:ERROR
Read slide 11 and onwards in the TechTalk on ELK by Bryan Davis; those slides highlight features of the Kibana web page.
Common Logging Schema
See: Logstash/Common Logging Schema.
API
Note: This information is outdated and most likely does not work anymore. It is due to be evaluated and cleaned up as necessary.
The Elasticsearch API is accessible at https://logstash.wikimedia.org/elasticsearch/
Note: The _search endpoint can only be used without a request body (see task T174960). Use _msearch instead for complex queries that need a request body.
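As a hedged sketch of the _msearch approach (the /elasticsearch/ base URL comes from above, the credential handling mirrors the Python example in the next section, and the query is purely illustrative), _msearch takes newline-delimited JSON with a header line followed by a body line:
import json
import requests

query = {
    "size": 100,
    "query": {"query_string": {"query": "logger_name:varnishfetcherr AND layer:backend"}},
}
# _msearch expects alternating header/body lines, terminated by a final newline.
# An empty header object targets the default indices.
payload = json.dumps({}) + "\n" + json.dumps(query) + "\n"
resp = requests.post(
    "https://logstash.wikimedia.org/elasticsearch/_msearch",
    data=payload,
    headers={"Content-Type": "application/x-ndjson"},
    auth=("LDAP_USER", "LDAP_PASS"),  # placeholders for real LDAP credentials
)
for hit in resp.json()["responses"][0]["hits"]["hits"]:
    print(json.dumps(hit["_source"]))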
Extract data from Logstash with Python
To get the last 100 log entries matching the Lucene query logger_name:varnishfetcherr AND layer:backend
#!/usr/bin/env python3
import os
import sys
import json
import requests

query = "logger_name:varnishfetcherr AND layer:backend"
results = 100

ldap_user = os.getenv("LDAP_USER")
ldap_pass = os.getenv("LDAP_PASS")
if ldap_user is None or ldap_pass is None:
    print("You need to set LDAP_USER and LDAP_PASS")
    sys.exit(1)

url = "https://logstash.wikimedia.org/elasticsearch/_search?size={}&q={}".format(
    results, query
)
resp = requests.get(url, auth=requests.auth.HTTPBasicAuth(ldap_user, ldap_pass))
if resp.status_code != 200:
    print("Something's wrong, response status code={}".format(resp.status_code))
    sys.exit(1)

data = resp.json()
for line in data["hits"]["hits"]:
    print(json.dumps(line["_source"]))
Note: Certain queries with whitespace characters may require additional url-encoding (via urllib.parse.quote or similar) when using python requests. If requests to the logstash API consistently return 504 http status codes, even for relatively lightweight queries, this may be the issue.
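A minimal sketch of that url-encoding, applied to the query used in the example above:
import urllib.parse

query = "logger_name:varnishfetcherr AND layer:backend"
# Percent-encode spaces and other reserved characters before building the URL.
encoded = urllib.parse.quote(query)
url = "https://logstash.wikimedia.org/elasticsearch/_search?size=100&q=" + encoded
print(url)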
Extract data from Logstash (OpenSearch) with curl and jq
logstash-server:~$ cat search.sh
curl -XGET 'localhost:9200/_search?pretty&size=10000' -d '
{
  "query": {
    "query_string" : {
      "query" : "facility:19,local3 AND host:csw2-esams AND @timestamp:[2019-08-04T03:00 TO 2019-08-04T03:15] NOT program:mgd"
    }
  },
  "sort": ["@timestamp"]
} '
logstash-server:~$ bash search.sh | jq '.hits.hits[]._source | {timestamp,host,level,message}' | head -20
{
"timestamp": "2019-08-04T03:00:00+00:00",
"host": "csw2-esams",
"level": "INFO",
"message": " %-: (root) CMD (newsyslog)"
}
{
"timestamp": "2019-08-04T03:00:00+00:00",
"host": "csw2-esams",
"level": "INFO",
"message": " %-: (root) CMD ( /usr/libexec/atrun)"
}
{
"timestamp": "2019-08-04T03:01:00+00:00",
"host": "csw2-esams",
"level": "INFO",
"message": " %-: (root) CMD (adjkerntz -a)"
}
$ bash search.sh | jq -r '.hits.hits[]._source | {timestamp,host,level,program,message} | map(.) | @csv' > asw2-d2-eqiad-crash.csv
Plugins
Logstash plugins are fetched and compiled into a Debian package for distribution and installation on Logstash servers.
The plugin git repository is located at https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/logstash/plugins
Plugin build process
The build can be run on the production builder host. See README for up-to-date build steps.
Deployment
- Add package to reprepro and install on the host normally.
Note: Package installation will not restart Logstash. This must be done manually in a rolling fashion, and it's strongly suggested to perform this in step with the plugin deploy.
Beta Cluster Logstash
- Web interface
- https://beta-logs.wmcloud.org/
- Access control
- Credentials for Beta's Logstash can be found on officewiki, or by connecting to deployment-deploy03.deployment-prep.eqiad1.wikimedia.cloud and reading /root/secrets.txt. Unlike production services, Beta Cluster may not use Developer accounts (LDAP) for authentication.
$ ssh deployment-deploy03.deployment-prep.eqiad1.wikimedia.cloud -- sudo cat /root/secrets.txt
service: https://beta-logs.wmcloud.org
user: ************
password: ************
- Kafka access to deployment-prep
- The security group kafka-logging must allow ingress from the logging collector on port 9093.
- When commissioning a new logging collector, the certificate authority keystore (/etc/ssl/localcerts/wmf-java-cacerts) must be manually copied onto the new logging collector, otherwise logstash will not start (File does not exist or cannot be opened /etc/ssl/localcerts/wmf-java-cacerts).
Gotchas
GELF transport
Make sure logging events sent to the GELF input don't have a "type" or "_type" field set, or if set, that it contains the value "gelf". The gelf/Logstash config discards any events that have a different value set for "type" or "_type". The final "type" seen in OpenSearch/Dashboards will be taken from the "facility" element of the original GELF packet. The application sending the log data to Logstash should set "facility" to a reasonably unique value that identifies your application.
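A purely illustrative Python sketch of a well-formed event for a GELF input; the endpoint host and port are made up (12201 is just the conventional GELF UDP port), and zlib compression is one of the encodings the GELF spec allows. Note that "facility" is set and no "type" or "_type" field is included.
import json
import socket
import zlib

event = {
    "version": "1.1",
    "host": socket.gethostname(),
    "short_message": "example event",
    "facility": "my-example-app",  # hypothetical; should identify your application
    # deliberately no "type" or "_type" field
}

payload = zlib.compress(json.dumps(event).encode("utf-8"))
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("gelf-input.example.invalid", 12201))  # hypothetical endpoint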
Documents
See the subpages of Logstash/ for further documentation.
Troubleshooting
Kafka consumer lag
For a host of reasons it might happen that there's a buildup of messages on Kafka. For example:
- OpenSearch is refusing to index messages, so Logstash can't consume properly from Kafka.
- The reason for index failure is usually conflicting fields; see bug T150106 for a detailed discussion of the problem. The solution is to find which programs are generating the conflicts and drop them on Logstash accordingly; see also bug T228089.
Using the dead letter queue
The Logstash DLQ is not normally enabled; however, it comes in handy when debugging indexing failures where the problematic log entries don't show up in the Logstash logs.
Enable (with puppet disabled) the DLQ in /etc/logstash/logstash.yml:
dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue/"
And systemctl restart logstash. The DLQ will start filling up as soon as unindexable logs are received. At a later time the DLQ can be dumped with (running as the logstash user):
$ /usr/share/logstash/bin/logstash -e '
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue/"
    commit_offsets => false
    pipeline_id => "main"
  }
}
output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}
' 2>&1 | less
Once debugging is complete, clear the queue with rm /var/lib/logstash/dead_letter_queue/main/*.log, and re-enable puppet.
Operations
Configuration changes
After merging your configuration change it is usually enough to run cumin in batches of one with >60s of sleep:
cumin -b1 -s60 'O:logging::opensearch::collector' 'run-puppet-agent -q && systemctl restart logstash'
Test a configuration snippet before merge
Copy your ready-to-merge snippet (e.g. modules/profile/files/logstash/filter-syslog-network.conf) to a Logstash host. Then run
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f <myfile>
It should return "Configuration OK".
Indexing errors
Have a look at the Dead Letter Queue Dashboard. The original message that caused the error is in the log.original field.
We're alerting on errors that Logstash gets from OpenSearch whenever there's an "indexing conflict" between fields of the same index (see also bug T236343). The usual reason is that two applications send logs with the same field name but two different types, e.g. response is sent as a string in one case but as a nested object in another. Bug T239458 is a good example of this, where different parts of MediaWiki send logs formatted in different ways.
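A purely illustrative sketch of such a conflict; the field name response comes from the example above, and the event shapes are made up:
import json

# Two log events that map the same field to incompatible types. Whichever
# shape is indexed first wins the field mapping for that index; events with
# the other shape are then rejected and land in the dead letter queue.
event_a = {"program": "app-a", "response": "200 OK"}                  # string
event_b = {"program": "app-b", "response": {"status": 200, "ms": 5}}  # nested object

print(json.dumps(event_a))
print(json.dumps(event_b))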
No logs indexed
This alert is based on the incoming logs per second indexed by OpenSearch. During normal operation there is a baseline of ~1k logs/s (July 2020), and anything significantly lower than that is an unexpected condition. Check the Logstash dashboard attached to the alert for signs of root causes. Most likely Logstash has stopped sending logs to OpenSearch.
Drop spammy logs
Occasionally producers will outpace Logstash's ingestion capabilities, most often with what's considered "log spam" (e.g. dumping whole request/response bodies in debug logs). In these cases one solution is to drop the offending logs in Logstash, and ideally the producer has already stopped spamming. The simplest such filter is installed before most/all other filters, matches a few fields and then drops the message:
filter {
  if [program] == "producer" and [nested][field] == "offending value" {
    drop {}
  }
}
See also this Gerrit change for a real-world example.
UDP packet loss
Logstash 5 locks up from time to time, causing UDP packet loss on the host it is running on. The fix in this case is to restart logstash.service on the host in question.
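A small diagnostic sketch, assuming you want to confirm the symptom from the kernel's UDP counters in /proc/net/snmp (rising InErrors / RcvbufErrors between two runs indicate receive-side drops):
# Print the host's UDP counters from /proc/net/snmp.
with open("/proc/net/snmp") as f:
    udp_lines = [line.split() for line in f if line.startswith("Udp:")]

header, values = udp_lines[0][1:], udp_lines[1][1:]
for name, value in zip(header, values):
    print(name, value)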
Replace failed disk and rebuild RAID
The storage drives on Logstash data nodes are configured in an mdraid RAID0. OpenSearch handles data redundancy, so the rest of the cluster will absorb the impact of the downed node.
Once the disk is replaced, the RAID will have to be rebuilt:
First stop opensearch and disable puppet.
Copy disk partition layout from good disk to new disk
sfdisk -d /dev/sdb | sfdisk /dev/sdi
determine md device mounted at /srv (/dev/md2 for example) and check mdstat
cat /proc/mdstat
Get array information. Make a note of the remaining array members; we'll need this information when rebuilding.
mdadm --query --detail /dev/md2
stop and remove the raid0 array
mdadm --stop /dev/md2 && mdadm --remove /dev/md2
remove traces of the previous array on the old partitions
mdadm --zero-superblock /dev/sdb4
mdadm --zero-superblock /dev/sdc4
# ... etc
mdadm --zero-superblock /dev/sdh4
create new raid0 array (WARNING: DISKS MAY BE DIFFERENT)
mdadm --create --verbose /dev/md/2 --level=0 --raid-devices=8 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4 /dev/sdf4 /dev/sdg4 /dev/sdh4 /dev/sdi4
make filesystem
mkfs.ext4 /dev/md2
workaround systemd mount management by commenting out the old array mount in fstab and issuing a daemon reload
vim /etc/fstab
systemctl daemon-reload
add mount point back in with new uuid and mount
vim /etc/fstab
mount /srv
check to make sure the disk is mounted and add new array definition to /etc/mdadm/mdadm.conf - also remove old definition
mdadm --detail --scan
vim /etc/mdadm/mdadm.conf
update initramfs
update-initramfs -u
check other arrays for failed partitions and add partitions to them
mdadm --manage /dev/md0 --add /dev/sdi2
mdadm --manage /dev/md1 --add /dev/sdi3
make opensearch data directory
mkdir /srv/opensearch && chown opensearch:opensearch /srv/opensearch
re-enable puppet and run puppet. OpenSearch should start up, join the cluster, and immediately start rebalancing shards.
Restore Dashboards from backup
From a single collector node, delete all .kibana indexes and restart opensearch-dashboards. Check that the restart created .kibana_1 and aliased it with .kibana. Fetch and unzip the backup and run:
BACKUP_FILE=<myfile>.ndjson curl -s -X POST http://localhost:5601/api/saved_objects/_import?createNewCopies=false -H "osd-xsrf: true" --form file=@$BACKUP_FILE > response.json
Check the response for problems and navigate into OpenSearch Dashboards to ensure expected saved objects are present.
Stats
Documents and bytes counts
The OpenSearch cat API provides a simple way to extract general statistics about log storage, e.g. total logs and bytes (not including replication)
logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ {b+=$10; d+=$7} END {print d; print b}'
Or logs per day (change $7 to $10 in the sum to get bytes sans replication):
logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort
Or logs per month:
logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); gsub(/\.[0-9][0-9]$/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort
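As a more readable alternative to the awk one-liners above, a Python sketch using the cat API's JSON output; it assumes the same local access to port 9200, and the docs.count / pri.store.size keys mirror the columns summed above:
import collections
import json
import re
import urllib.request

url = "http://localhost:9200/_cat/indices?format=json&bytes=b"
indices = json.load(urllib.request.urlopen(url))

docs = collections.Counter()
size = collections.Counter()
for idx in indices:
    name = idx["index"]
    if not name.startswith("logstash-"):
        continue
    day = re.sub(r"^logstash-[^0-9]*", "", name)  # e.g. "2020.07.15"
    docs[day] += int(idx["docs.count"])
    size[day] += int(idx["pri.store.size"])       # bytes, not including replication

for day in sorted(docs):
    print(day, docs[day], size[day])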
Data Retention
Logs are retained in Logstash for a maximum of 90 days by default in accordance with our Privacy Policy and Data Retention Guidelines.
Extended Retention
See Logstash/Extended Retention
See also
- mw:Manual:Structured logging (MediaWiki part of the job to feed into Logstash)
- Logs#mw-log (the old method of viewing logs)
- Introducing Phatality (a Kibana plugin to streamline the process of reporting production errors on Phabricator)
- Kubernetes/Logging (how logs flow into Logstash from the Kubernetes components)