{{Navigation Wikimedia infrastructure|expand=logging}}
{{See|For the frontend at logstash.wikimedia.org, see [[OpenSearch Dashboards]].}}
'''Logstash''' is a tool for managing events and logs. When used generically, the term encompasses a larger system of log collection, processing, storage and searching activities.


=={{anchor|Overview ("ELK+")}} Overview==
[[File:ELK_Tech_Talk_2015-08-20.pdf|thumb|Slides from TechTalk on ELK by Bryan Davis]]
[[File:Wikipedia webrequest 2022.png|thumb|290px|Wikipedia request flow]]
[[File:Using_Kibana4_to_read_logs_at_Wikimedia_Tech_Talk_2016-11-14.pdf|thumb|Slides from TechTalk on Kibana4 by Bryan Davis]]
 
Various Wikimedia applications send log events to '''[[Logstash]]''', which gathers the messages, converts them into JSON documents, and stores them in an '''[[OpenSearch]]''' cluster. Wikimedia uses '''[[OpenSearch Dashboards]]''' as a front-end client to filter and display messages from the OpenSearch cluster. These are the core components of our '''ELK stack''', but since we use additional components as well, we refer to our stack as "'''ELK+'''".
 
(OpenSearch and OpenSearch Dashboards were forked from Elasticsearch and Kibana when those became non-free, hence the name "ELK".)
 
===OpenSearch===


[https://opensearch.org/docs/latest/opensearch/index/ OpenSearch] is a multi-node [https://lucene.apache.org/ Lucene] implementation.


===Logstash===


[http://logstash.net/ Logstash] is a tool to collect, process, and forward events and log messages. Collection is accomplished via configurable input plugins, including raw socket/packet communication, file tailing, and several message bus clients. Once an input plugin has collected data, it can be processed by any number of filters which modify and annotate the event data. Finally, Logstash routes events to output plugins, which can forward them to a variety of external programs, local files, and several message bus implementations.


===OpenSearch Dashboards===


[https://opensearch.org/docs/latest/dashboards/index/ OpenSearch Dashboards] is a browser-based analytics and search interface for OpenSearch.


=== Kafka ===
[https://kafka.apache.org/intro Apache Kafka] is a distributed streaming system. In our ELK stack, Kafka buffers the stream of log messages produced by rsyslog (on behalf of applications) for consumption by Logstash. Nothing should output logs to Logstash directly; logs should always be sent by way of Kafka.


=== Rsyslog ===
[https://www.rsyslog.com Rsyslog] is the "rocket-fast system for log processing". In our ELK stack, rsyslog is used as the host "log agent". Rsyslog ingests log messages in various formats and over various protocols, normalizes them, and outputs to Kafka.


==Systems feeding into logstash==
See the 2015-08 Tech Talk slides.


Writing new filters is easy.


=== Supported log shipping protocols & formats ("interfaces") ===
'''''Support for logs shipped directly from applications to Logstash has been deprecated'''''.
 
Please see [[Logstash/Interface]] for details regarding long-term supported log shipping interfaces.
 
==== Kubernetes ====
 
Kubernetes-hosted services are handled directly by the Kubernetes infrastructure, which ships their logs via rsyslog into the logstash pipeline. All a Kubernetes service needs to do is log in a JSON structured format (e.g. bunyan for Node.js services) to standard output/standard error.
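As an illustration (the field names here are hypothetical, not a mandated schema), a service can emit structured logs simply by writing one JSON object per line to standard output:

```python
import json
import sys
import time


def log_json(level, message, **fields):
    """Write one JSON log record per line to stdout, where the
    Kubernetes infrastructure can pick it up (illustrative sketch;
    field names are not a mandated schema)."""
    record = {
        "@timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "level": level,
        "message": message,
    }
    record.update(fields)
    sys.stdout.write(json.dumps(record) + "\n")


log_json("INFO", "request served", status=200, path="/wiki/Main_Page")
```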
 
===Systems not feeding into logstash===
 
*[[mw:Extension:EventLogging|EventLogging]] (of program-defined events with schemas), despite its name, uses a different pipeline.
*[[Varnish]] logs of the billions of [[Analytics/Pageviews|pageviews]] of WMF wikis would require a lot more hardware. Instead we use [[Kafka]] to feed [[Analytics/Data Lake/Traffic/Webrequest|web requests]] into [[Hadoop]]. A notable exception to this rule: Varnish user-facing errors (HTTP status 500-599) are sent to logstash to make debugging easier.
*MediaWiki logs usually go to both logstash and log files, but a few log channels are excluded. You can check which in <code>$wmgMonologChannels</code> in [https://noc.wikimedia.org/conf/InitialiseSettings.php.txt InitialiseSettings.php].
 
=== Writing & testing filters ===
 
When writing new logstash filters, take a look at what [https://phabricator.wikimedia.org/source/operations-puppet/browse/production/modules/profile/files/logstash/ already exists] in puppet. Each filter must be tested to avoid regressions. We use [https://github.com/magnusbaeck/logstash-filter-verifier logstash filter verifier], and existing tests can be found in the <tt>tests/</tt> directory. To write tests or run existing tests you will need logstash-filter-verifier and logstash installed locally, or you can use docker/podman and the puppet repository:<syntaxhighlight lang="bash">
# From the base dir of operations/puppet.
$ cd modules/profile/files/logstash/
 
# The Makefile recognizes if one of podman or docker is installed
# and then it uses it.
$ make test-local
/usr/bin/docker run --rm --workdir /src -v $(pwd):/src:Z -v $(pwd)/templates:/etc/logstash/templates:Z -v $(pwd)/filter_scripts:/etc/logstash/filter_scripts:Z --entrypoint make docker-registry.wikimedia.org/releng/logstash-filter-verifier:latest
logstash-filter-verifier --diff-command="diff -u --color=always" --sockets tests/ filters/*.conf
Use Unix domain sockets.
[...cut...]
 
</syntaxhighlight>Each filter has a corresponding test named after it in <tt>tests/</tt>. Within the test file, the <tt>fields</tt> map lists the fields common to all tests; they are used to trigger a specific filter's "if" conditions. The <tt>ignore</tt> key usually contains only <tt>@timestamp</tt>, since that field is bound to change across invocations and can be safely ignored. The remainder of a test file is a list of testcases in the form of input/expected pairs. For "input" it is recommended to use yaml <tt>&gt;</tt> to include verbatim JSON, whereas "expected" is usually yaml, although it can also be verbatim JSON if more convenient.
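A minimal test file might look like this (the filter name and field values are hypothetical; see the existing files in <tt>tests/</tt> for authoritative examples):

```yaml
# tests/filter-myapp.yaml -- hypothetical example
fields:
  program: "myapp"          # common field, triggers the filter's "if" condition
ignore:
  - "@timestamp"            # changes on every run, safe to ignore
testcases:
  - input:
      - >
        {"program": "myapp", "message": "request failed", "level": "ERROR"}
    expected:
      - program: "myapp"
        message: "request failed"
        level: "ERROR"
```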
 
==Production Logstash Architecture==
 
As of FY2019 Logstash infrastructure is owned by SRE. See also [[Logstash/SRE_onboard]] for more information on how to migrate services/applications.
 
===== Architecture Diagram =====
<br />
[[File:Logging Pipeline Arch Diag.jpg|center]]
<br />
 
===== Web interface =====
 
:https://logstash.wikimedia.org
 
===== Authentication =====
 
:wikitech LDAP username and password and membership in one of the following LDAP groups: nda, ops, wmf
 
===== Configuration =====
 
:The cluster contains two types of nodes, configured by Puppet.
:*role::logging::opensearch::collector manages the Logstash "collector" instances.  These run Logstash, an OpenSearch indexing node, and an Apache vhost serving OpenSearch Dashboards. The Apache vhosts perform LDAP-based authentication to restrict access to the potentially sensitive log information.
:*role::logging::opensearch::data configures an OpenSearch data node providing storage for log data.
:*role::kafka::logging configures a Kafka broker for producers to publish log data to and for Logstash to consume from.  This is a buffering layer to absorb log spikes and queue log events when maintenance is being performed on the logging cluster.
 
===== Hostname Convention =====
 
====== Current ======
 
:logstash1NNN Logstash-related servers in [[Eqiad]].
:logstash2NNN Logstash-related servers in Codfw.
 
====== Future ======
 
:logstashNNNN - Logstash "collector" hosts
:opensearch-loggingNNNN - Logstash OpenSearch hosts
:kafka-loggingNNNN - Logstash Kafka broker hosts
 
===== Operating Systems =====
All hosts run Debian Buster as a base operating system.
 
===== Load Balancing and TLS =====
The misc Varnish cluster provides TLS termination and load balancing for the OpenSearch Dashboards web interface.


==OpenSearch quick intro==


*Start from one of the blue Dashboard links near the top; more are available from the Load icon near the top right.
*In "Events over time", click to zoom out to see what you want, or select a region with the mouse to zoom in.
**smaller time intervals are faster
**be careful: you may see no events at all because you're viewing the future
*When you get lost, click the Home icon near the top right
*As an example query, <code>wfDebugLog( <nowiki>'</nowiki>''Flow''<nowiki>'</nowiki>, ...)</code> in MediaWiki PHP corresponds to <code>type:mediawiki AND channel:flow</code>
**switch to using [[mw:Structured logging]] and you can query for ... <code>AND level:ERROR</code>


Read [{{fullurl:commons:File:ELK_Tech_Talk_2015-08-20.pdf|page=11}} slide 11 and onwards in the TechTalk on ELK by Bryan Davis]; they highlight features of the Kibana web page.


==Common Logging Schema==
See [[Logstash/Common Logging Schema]].
 
==API==
{{outdated-inline|note=and most likely does not work anymore.  It is due to be evaluated and cleaned up as necessary.}}
 
The [https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html Elasticsearch API] is accessible at https://logstash.wikimedia.org/elasticsearch/
 
Note: The [https://www.elastic.co/guide/en/elasticsearch/reference/current/search-search.html _search] endpoint can only be used '''without''' a request body (see {{Phabricator|T174960}}). Use [https://www.elastic.co/guide/en/elasticsearch/reference/current/search-multi-search.html _msearch] instead for complex queries that need a request body.
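An <code>_msearch</code> request body is newline-delimited JSON: a header line (naming the index) followed by a query line, with a required trailing newline. A sketch of building such a payload (the index pattern and query are illustrative):

```python
import json


def msearch_body(queries, index="logstash-*"):
    """Build a newline-delimited _msearch payload: one header line
    and one query line per search, plus the required trailing newline."""
    lines = []
    for query in queries:
        lines.append(json.dumps({"index": index}))
        lines.append(json.dumps(query))
    return "\n".join(lines) + "\n"


body = msearch_body([
    {"query": {"query_string": {"query": "level:ERROR"}}, "size": 10},
])
# POST this to https://logstash.wikimedia.org/elasticsearch/_msearch
# with Content-Type: application/x-ndjson and LDAP basic auth.
```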
 
===Extract data from Logstash with Python===
To get the last 100 log entries matching the Lucene query '''logger_name:varnishfetcherr AND layer:backend'''
 
<syntaxhighlight lang="python">
#!/usr/bin/env python3
import os
import sys
import json
import requests
 
query = "logger_name:varnishfetcherr AND layer:backend"
results = 100
 
ldap_user = os.getenv("LDAP_USER")
ldap_pass = os.getenv("LDAP_PASS")
 
if ldap_user is None or ldap_pass is None:
    print("You need to set LDAP_USER and LDAP_PASS")
    sys.exit(1)
 
url = "https://logstash.wikimedia.org/elasticsearch/_search?size={}&q={}".format(
    results, query
)
 
resp = requests.get(url, auth=requests.auth.HTTPBasicAuth(ldap_user, ldap_pass))
if resp.status_code != 200:
    print("Something's wrong, response status code={}".format(resp.status_code))
    sys.exit(1)
 
data = resp.json()
for line in data["hits"]["hits"]:
    print(json.dumps(line["_source"]))
</syntaxhighlight>
 
'''Note:''' Certain queries with whitespace characters may require additional url-encoding (via <code>urllib.parse.quote</code> or similar) when using the Python <code>requests</code> library. If requests to the logstash API consistently return 504 http status codes, even for relatively lightweight queries, this may be the issue.
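A sketch of that workaround: percent-encode the query yourself before appending it to the URL, rather than relying on the library's own encoding.

```python
from urllib.parse import quote

# Percent-encode the Lucene query (spaces, colons, etc.) ourselves
# before building the URL.
query = "logger_name:varnishfetcherr AND layer:backend"
encoded = quote(query, safe="")
url = "https://logstash.wikimedia.org/elasticsearch/_search?size=100&q=" + encoded
```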
 
=== Extract data from Logstash (OpenSearch) with curl and jq ===
<syntaxhighlight lang="bash">
logstash-server:~$ cat search.sh
curl -XGET 'localhost:9200/_search?pretty&size=10000' -d '
{
    "query": {
        "query_string" : {
            "query" : "facility:19,local3 AND host:csw2-esams AND @timestamp:[2019-08-04T03:00 TO 2019-08-04T03:15] NOT program:mgd"
        }
    },
    "sort": ["@timestamp"]
} '
logstash-server:~$ bash search.sh | jq '.hits.hits[]._source | {timestamp,host,level,message}' | head -20
{
  "timestamp": "2019-08-04T03:00:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (newsyslog)"
}
{
  "timestamp": "2019-08-04T03:00:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (  /usr/libexec/atrun)"
}
{
  "timestamp": "2019-08-04T03:01:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (adjkerntz -a)"
}
$ bash search.sh | jq -r '.hits.hits[]._source | {timestamp,host,level,program,message} | map(.) | @csv' > asw2-d2-eqiad-crash.csv
</syntaxhighlight>
 
==Plugins==
Logstash plugins are fetched and compiled into a Debian package for distribution and installation on Logstash servers.
 
The plugin git repository is located at https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/logstash/plugins
 
===Plugin build process===
 
The build can be run on the production builder host.  See [https://gerrit.wikimedia.org/r/plugins/gitiles/operations/software/logstash/plugins/+/refs/heads/master/README README] for up-to-date build steps.
 
=====Deployment=====
 
* Add package to [[Reprepro|reprepro]] and install on the host normally.
 
{{Note|Package installation will not restart Logstash. This must be done manually in a rolling fashion, and it's strongly suggested to perform this in step with the plugin deploy.}}
 
{{anchor|Prototype (Beta) Logstash}}
 
==Beta Cluster Logstash==
 
; Web interface
: https://beta-logs.wmcloud.org/
; Access control
: Credentials for Beta's Logstash can be found on [https://office.wikimedia.org/wiki/User:BDavis_(WMF)/logstash officewiki], or by connecting to <code>deployment-deploy03.deployment-prep.eqiad1.wikimedia.cloud</code> and reading <code>/root/secrets.txt</code>. Unlike production services, Beta Cluster may not use [[mw:Developer account|Developer accounts]] (LDAP) for authentication.
: <syntaxhighlight lang=shell-session>
$ ssh deployment-deploy03.deployment-prep.eqiad1.wikimedia.cloud -- sudo cat /root/secrets.txt
service: https://beta-logs.wmcloud.org
user: ************
password: ************
</syntaxhighlight>
 
==Gotchas==
===GELF transport===
Make sure logging events sent to the GELF input don't have a "type" or "_type" field set, or if set, that it contains the value "gelf". The gelf/Logstash config discards any events that have a different value set for "type" or "_type". The final "type" seen in OpenSearch/Dashboards will be taken from the "facility" element of the original GELF packet. The application sending the log data to Logstash should set "facility" to a reasonably unique value that identifies your application.
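For illustration, a minimal GELF-style payload satisfying these constraints (the host and facility values are hypothetical):

```python
import json

# Minimal GELF-style payload: no "type"/"_type" field, and "facility"
# set to a unique application name, which ends up as the "type" seen
# in OpenSearch Dashboards (values here are hypothetical).
packet = {
    "version": "1.1",
    "host": "myapp01",
    "short_message": "request failed",
    "level": 3,           # syslog severity: error
    "facility": "myapp",  # identifies the sending application
}
payload = json.dumps(packet)
```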
 
==Documents==
{{Special:Prefixindex/Logstash/|hideredirects=1|stripprefix=1}}
 
== Troubleshooting ==
 
=== Kafka consumer lag ===
 
For a host of reasons it might happen that there's a buildup of messages on Kafka. For example:
; OpenSearch is refusing to index messages, thus Logstash can't consume properly from Kafka.
: The reason for index failure is usually conflicting fields; see {{bug|T150106}} for a detailed discussion of the problem. The solution is to find which programs are generating the conflicts and drop their messages in Logstash accordingly; see also {{bug|T228089}}.
 
=== Using the dead letter queue ===
 
The Logstash DLQ is not normally enabled; however, it comes in handy when debugging indexing failures where the problematic log entries don't show up in the logstash logs.
 
Enable (with puppet disabled) the DLQ in <tt>/etc/logstash/logstash.yml</tt>
 
<pre>
dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue/"
</pre>
 
And <tt>systemctl restart logstash</tt>. The DLQ will start filling up as soon as unindexable logs are received. At a later time the DLQ can be dumped with (running as <tt>logstash</tt> user)
 
<pre>
$ /usr/share/logstash/bin/logstash -e '
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue/"
    commit_offsets => false
    pipeline_id => "main"
  }
}
 
output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}
' 2>&1 | less
</pre>
 
Once debugging is complete, clear the queue with <tt>rm /var/lib/logstash/dead_letter_queue/main/*.log</tt>, and re-enable puppet.
 
== Operations ==
 
=== Configuration changes ===
 
After merging your configuration change it is usually enough to run cumin in batches of one with >60s of sleep:
 
  cumin -b1 -s60 'O:logging::opensearch::collector' 'run-puppet-agent -q && systemctl restart logstash'
 
==== Test a configuration snippet before merge ====
Copy your ready-to-merge snippet (e.g. modules/profile/files/logstash/filter-syslog-network.conf) to a Logstash host.
Then run:
  sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f <myfile>
 
It should return "Configuration OK".
 
=== Indexing errors ===
Have a look at the [https://logstash.wikimedia.org/app/discover#/view/6086dd90-85dd-11eb-99a9-c1243d7de186 Dead Letter Queue Dashboard].  The original message that caused the error is in the <code>log.original</code> field.
 
We alert on errors that Logstash gets from OpenSearch whenever there is an "indexing conflict" between fields of the same index (see also {{Bug|T236343}}). The usual cause is two applications sending logs with the same field name but two different types; e.g. <code>response</code> is sent as a string in one case but as a nested object in another. {{Bug|T239458}} is a good example of this, where different parts of MediaWiki send logs formatted in different ways.
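The conflict can be illustrated with two hypothetical events that collide on the same field name:

```python
# Two events indexed into the same daily index: "response" is a plain
# string in one and a nested object in the other (payloads are made up).
event_a = {"program": "app-a", "response": "500 Internal Server Error"}
event_b = {"program": "app-b", "response": {"status": 500, "body": "..."}}

# OpenSearch derives the mapping for "response" from whichever document
# arrives first; the other document is then rejected with a mapping
# conflict, and the rejection surfaces in the dead letter queue.
conflict = type(event_a["response"]) is not type(event_b["response"])
```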
 
=== No logs indexed ===
 
This alert is based on the rate of incoming logs per second indexed by OpenSearch. During normal operation there is a baseline of ~1k logs/s (July 2020), and anything significantly lower than that is an unexpected condition. Check the Logstash dashboard attached to the alert for signs of root causes. Most likely Logstash has stopped sending logs to OpenSearch.
 
=== Drop spammy logs ===
Occasionally producers will outpace Logstash's ingestion capabilities, most often with what's considered "log spam" (e.g. dumping whole requests/responses in debug logs). In these cases one solution is to drop the offending logs in Logstash (ideally the producer has also been fixed to stop spamming). The simplest such filter is installed before most/all other filters; it matches a few fields and then <code>drop</code>s the message:
 
<pre>
filter {
  if [program] == "producer" and [nested][field] == "offending value" {
    drop {}
  }
}
</pre>
 
See also [https://gerrit.wikimedia.org/r/c/operations/puppet/+/713853 this Gerrit change] for a real-world example.
 
=== UDP packet loss ===
Logstash 5 locks up from time to time, causing UDP packet loss on the host it is running on. The fix in this case is to restart <code>logstash.service</code> on the host in question.
 
=== Replace failed disk and rebuild RAID ===
The storage drives on Logstash data nodes are configured in an mdraid RAID0. OpenSearch handles data redundancy, so the rest of the cluster will absorb the impact of the downed node.
 
Once the disk is replaced, the RAID will have to be rebuilt:
 
First stop opensearch and disable puppet.
 
Copy disk partition layout from good disk to new disk
  sfdisk -d /dev/sdb | sfdisk /dev/sdi
determine md device mounted at /srv (/dev/md2 for example) and check mdstat
  cat /proc/mdstat
get array information and make a note of the remaining array members; we'll need this when rebuilding
  mdadm --query --detail /dev/md2
stop and remove the raid0 array
  mdadm --stop /dev/md2 && mdadm --remove /dev/md2
remove traces of the previous array on the old partitions
  mdadm --zero-superblock /dev/sdb4
  mdadm --zero-superblock /dev/sdc4
  # ... etc
  mdadm --zero-superblock /dev/sdh4
create new raid0 array (WARNING: DISKS MAY BE DIFFERENT)
  mdadm --create --verbose /dev/md/2 --level=0 --raid-devices=8 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4 /dev/sdf4 /dev/sdg4 /dev/sdh4 /dev/sdi4
make filesystem
  mkfs.ext4 /dev/md2
workaround systemd mount management by commenting out the old array mount in fstab and issuing a daemon reload
  vim /etc/fstab
  systemctl daemon-reload
add mount point back in with new uuid and mount
  vim /etc/fstab
  mount /srv
check to make sure the disk is mounted and add new array definition to /etc/mdadm/mdadm.conf - also remove old definition
  mdadm --detail --scan
  vim /etc/mdadm/mdadm.conf
update initramfs
  update-initramfs -u
check other arrays for failed partitions and add partitions to them
  mdadm --manage /dev/md0 --add /dev/sdi2
  mdadm --manage /dev/md1 --add /dev/sdi3
make opensearch data directory
  mkdir /srv/opensearch && chown opensearch:opensearch /srv/opensearch
 
re-enable puppet and run puppet.  OpenSearch should start up, join the cluster, and immediately start rebalancing shards.
 
== Stats ==
 
=== Documents and bytes counts ===
 
The OpenSearch ''cat'' API provides a simple way to extract general statistics about log storage, e.g. total logs and bytes (not including replication)
 
  logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ {b+=$10; d+=$7} END {print d; print b}'
 
Or logs per day (change $7 to $10 to get bytes sans replication)
 
  logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort
 
Or logs per month:


  logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); gsub(/\.[0-9][0-9]$/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort


== Data Retention ==
Logs are retained in Logstash for a maximum of 90 days by default in accordance with our [https://foundation.wikimedia.org/wiki/Privacy_policy Privacy Policy] and [https://meta.wikimedia.org/wiki/Data_retention_guidelines Data Retention Guidelines].


=== Extended Retention ===
See [[Logstash/Extended Retention]].
 
==See also==


*[[mw:Manual:Structured logging]] (MediaWiki part of the job to feed into Logstash)
*[[Logs#mw-log]] (the old method of viewing logs)
*[[phabricator:J177|Introducing Phatality]] (a Kibana plugin to streamline the process of reporting production errors on Phabricator)
*[[Kubernetes/Logging]] (how logs flow into Logstash from the Kubernetes components)


{{Ptag|Wikimedia-Logstash}}
[[Category:SRE Observability]]
[[Category:Services]]

Latest revision as of 23:12, 26 September 2022

Logstash is a tool for managing events and logs. When used generically, the term encompasses a larger system of log collection, processing, storage and searching activities.

Overview

File:ELK Tech Talk 2015-08-20.pdf

Wikipedia request flow

File:Using Kibana4 to read logs at Wikimedia Tech Talk 2016-11-14.pdf

Various Wikimedia applications send log events to Logstash, which gathers the messages, converts them into JSON documents, and stores them in an OpenSearch cluster. Wikimedia uses OpenSearch Dashboards as a front-end client to filter and display messages from the OpenSearch cluster. These are the core components of our ELK stack, but we use additional components as well. Since we utilize more than the core ELK components, we refer to our stack as "ELK+'".

(OpenSearch and OpenSearch Dashboards were forked from Elasticsearch and Kibana when those became non-free, hence the name "ELK".)

OpenSearch

OpenSearch is a multi-node Lucene implementation.

Logstash

Logstash is a tool to collect, process, and forward events and log messages. Collection is accomplished via configurable input plugins including raw socket/packet communication, file tailing, and several message bus clients. Once an input plugin has collected data it can be processed by any number of filters which modify and annotate the event data. Finally logstash routes events to output plugins which can forward the events to a variety of external programs, local files, and several message bus implementations.

OpenSearch Dashboards

OpenSearch Dashboards is a browser-based analytics and search interface for OpenSearch.

Kafka

Apache Kafka is a distributed streaming system. In our ELK stack Kafka buffers the stream of log messages produced by rsyslog (on behalf of applications) for consumption by Logstash. Nothing should output logs to logstash directly, logs should always be sent by way of Kafka.

Rsyslog

Rsyslog is the "rocket-fast system for log processing". In our ELK stack rsyslog is used as the host "log agent". Rsyslog ingests log messages in various formats and from varying protocols, normalizes them and outputs to Kafka.

Systems feeding into logstash

See 2015-08 Tech talk slides

Writing new filters is easy.

Supported log shipping protocols & formats ("interfaces")

Support of logs shipped directly from application to Logstash has been deprecated.

Please see Logstash/Interface for details regarding long-term supported log shipping interfaces.

Kubernetes

Kubernetes hosted services are taken care of directly by the kubernetes infrastructure which ships via rsyslog into the logstash pipeline. All a kubernetes service needs to do is log in a JSON structured format (e.g. bunyan for nodejs services) to standard output/standard error.

Systems not feeding into logstash

  • EventLogging (of program-defined events with schemas), despite its name, uses a different pipeline.
  • Varnish logs of the billions of pageviews of WMF wikis would require a lot more hardware. Instead we use Kafka to feed web requests into Hadoop. A notable exception to this rule: varnish user-facing errors (HTTP status 500-599) are sent to logstash to make debugging easier.
  • MediaWiki logs usually go to both logstash and log files, but a few log channels aren't. You can check which in $wmgMonologChannels in InitialiseSettings.php.

Writing & testing filters

When in the process of writing new logstash filters, take a look at what's existing already in puppet. Each filter must be tested to avoid regressions, we are using logstash filter verifier and existing tests can be found in the tests/ directory. To write tests or run existing tests you will need logstash-filter-verifier and logstash installed locally, or you can use docker/podman and the puppet repository:

# From the base dir of operations/puppet.
$ cd modules/profile/files/logstash/

# The Makefile recognizes if one of podman or docker is installed
# and then it uses it.
$ make test-local
/usr/bin/docker run --rm --workdir /src -v $(pwd):/src:Z -v $(pwd)/templates:/etc/logstash/templates:Z -v $(pwd)/filter_scripts:/etc/logstash/filter_scripts:Z --entrypoint make docker-registry.wikimedia.org/releng/logstash-filter-verifier:latest
logstash-filter-verifier --diff-command="diff -u --color=always" --sockets tests/ filters/*.conf
Use Unix domain sockets.
[...cut...]

Each filter has a corresponding test after its name in tests/. Within the test file the fields map lists the fields common to all tests and are used to trigger a specific filter's "if" conditions. The ignore key usually contains only @timestamp since that field is bound to change across invocations and can be safely ignored. The remainder of a test file is a list of testcases in the form of input/expected pairs. For "input" it is recommended to use yaml > to include verbatim JSON, whereas "expected" is usually yaml, although it can be also verbatim JSON if more convenient.

Production Logstash Architecture

As of FY2019 Logstash infrastructure is owned by SRE. See also Logstash/SRE_onboard for more information on how to migrate services/applications.

Architecture Diagram


Logging Pipeline Arch Diag.jpg


Web interface
https://logstash.wikimedia.org
Authentication
wikitech LDAP username and password and membership in one of the following LDAP groups: nda, ops, wmf
Configuration
The cluster contains two types of nodes, configured by Puppet.
  • role::logging::opensearch::collector manages the Logstash "collector" instances. These run Logstash, an OpenSearch indexing node, and an Apache vhost serving OpenSearch Dashboards. The Apache vhosts perform LDAP-based authentication to restrict access to the potentially sensitive log information.
  • role::logging::opensearch::data configures an OpenSearch data node providing storage for log data.
  • role::kafka::logging configures a Kafka broker for producers to publish log data to and for Logstash to consume from. This is a buffering layer to absorb log spikes and queue log events when maintenance is being performed on the logging cluster.
Hostname Convention
Current
logstash1NNN Logstash related servers in Eqiad.
logstash2NNN Logstash related servers in Codfw.
Future
logstashNNNN - Logstash "collector" hosts
opensearch-loggingNNNN - Logstash OpenSearch hosts
kafka-loggingNNNN - Logstash Kafka broker hosts
Operating Systems

All hosts run Debian Buster as a base operating system

Load Balancing and TLS

The misc Varnish cluster provides TLS termination and load balancing for the Kibana application.

OpenSearch quick intro

  • Start from one of the blue Dashboard links near the top; more are available from the Load icon near the top right.
  • In "Events over time" click to zoom out to see what you want, or select a region with the mouse to zoom in.
    • smaller time intervals are faster
    • be careful: you may see no events at all because the selected time range extends into the future
  • When you get lost, click the Home icon near the top right
  • As an example query, wfDebugLog( 'Flow', ...) in MediaWiki PHP corresponds to type:mediawiki AND channel:flow

Read slide 11 onwards in the TechTalk on ELK by Bryan Davis; the slides highlight features of the Kibana web interface.

Common Logging Schema

See: Logstash/Common Logging Schema.

API

The Elasticsearch API is accessible at https://logstash.wikimedia.org/elasticsearch/

Note: The _search endpoint can only be used without a request body (see task T174960). Use _msearch instead for complex queries that need a request body.
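As a sketch of how _msearch can be used from Python (the query string below is illustrative, and build_msearch_body is a hypothetical helper, not an existing API):

```python
import json
import os


def build_msearch_body(queries, size=10):
    """Build an ndjson _msearch body: for each query, an empty header line
    (selecting the default index) followed by the search request itself."""
    lines = []
    for q in queries:
        lines.append(json.dumps({}))
        lines.append(json.dumps(
            {"size": size, "query": {"query_string": {"query": q}}}
        ))
    # _msearch bodies must be newline-delimited and end with a newline.
    return "\n".join(lines) + "\n"


body = build_msearch_body(["type:mediawiki AND channel:flow"])

# Only issue the request when LDAP credentials are available in the environment.
if os.getenv("LDAP_USER") and os.getenv("LDAP_PASS"):
    import requests

    resp = requests.post(
        "https://logstash.wikimedia.org/elasticsearch/_msearch",
        data=body,
        headers={"Content-Type": "application/x-ndjson"},
        auth=(os.environ["LDAP_USER"], os.environ["LDAP_PASS"]),
    )
    for result in resp.json()["responses"]:
        for hit in result["hits"]["hits"]:
            print(json.dumps(hit["_source"]))
```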

Extract data from Logstash with Python

To get the last 100 log entries matching the Lucene query logger_name:varnishfetcherr AND layer:backend

#!/usr/bin/env python3
import os
import sys
import json
import requests

query = "logger_name:varnishfetcherr AND layer:backend"
results = 100

ldap_user = os.getenv("LDAP_USER")
ldap_pass = os.getenv("LDAP_PASS")

if ldap_user is None or ldap_pass is None:
    print("You need to set LDAP_USER and LDAP_PASS")
    sys.exit(1)

url = "https://logstash.wikimedia.org/elasticsearch/_search?size={}&q={}".format(
    results, query
)

resp = requests.get(url, auth=requests.auth.HTTPBasicAuth(ldap_user, ldap_pass))
if resp.status_code != 200:
    print("Something's wrong, response status code={}".format(resp.status_code))
    sys.exit(1)

data = resp.json()
for line in data["hits"]["hits"]:
    print(json.dumps(line["_source"]))

Note: Certain queries with whitespace characters may require additional URL-encoding (via urllib.parse.quote or similar) when using Python requests. If requests to the Logstash API consistently return HTTP 504 status codes, even for relatively lightweight queries, this may be the issue.
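For example, URL-encoding the query before building the URL (a minimal sketch; the query itself is illustrative):

```python
from urllib.parse import quote

query = "logger_name:varnishfetcherr AND layer:backend"
# quote() percent-encodes spaces and other reserved characters so the
# query survives intact inside the URL's q= parameter.
encoded = quote(query, safe="")
url = "https://logstash.wikimedia.org/elasticsearch/_search?size=100&q=" + encoded
```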

Extract data from Logstash (OpenSearch) with curl and jq

logstash-server:~$ cat search.sh
curl -XGET 'localhost:9200/_search?pretty&size=10000' -H 'Content-Type: application/json' -d '
{
    "query": {
        "query_string" : {
            "query" : "facility:19,local3 AND host:csw2-esams AND @timestamp:[2019-08-04T03:00 TO 2019-08-04T03:15] NOT program:mgd"
        }
    },
    "sort": ["@timestamp"]
} '
logstash-server:~$ bash search.sh | jq '.hits.hits[]._source | {timestamp,host,level,message}' | head -20
{
  "timestamp": "2019-08-04T03:00:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (newsyslog)"
}
{
  "timestamp": "2019-08-04T03:00:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (   /usr/libexec/atrun)"
}
{
  "timestamp": "2019-08-04T03:01:00+00:00",
  "host": "csw2-esams",
  "level": "INFO",
  "message": " %-: (root) CMD (adjkerntz -a)"
}
$ bash search.sh | jq -r '.hits.hits[]._source | {timestamp,host,level,program,message} | map(.) | @csv' > asw2-d2-eqiad-crash.csv


Plugins

Logstash plugins are fetched and compiled into a Debian package for distribution and installation on Logstash servers.

The plugin git repository is located at https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/logstash/plugins

Plugin build process

The build can be run on the production builder host. See README for up-to-date build steps.

Deployment
  • Add package to reprepro and install on the host normally.

Beta Cluster Logstash

Web interface
https://beta-logs.wmcloud.org/
Access control
Credentials for Beta's Logstash can be found on officewiki, or by connecting to deployment-deploy03.deployment-prep.eqiad1.wikimedia.cloud and reading /root/secrets.txt. Unlike production services, the Beta Cluster does not use Developer accounts (LDAP) for authentication.
$ ssh deployment-deploy03.deployment-prep.eqiad1.wikimedia.cloud -- sudo cat /root/secrets.txt
service: https://beta-logs.wmcloud.org
user: ************
password: ************

Gotchas

GELF transport

Make sure logging events sent to the GELF input don't have a "type" or "_type" field set, or, if set, that it contains the value "gelf". The GELF/Logstash config discards any events that have a different value for "type" or "_type". The final "type" seen in OpenSearch/Dashboards is taken from the "facility" element of the original GELF packet, so the application sending the log data to Logstash should set "facility" to a reasonably unique value that identifies it.
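For illustration, a GELF payload following these rules might look like this (the host, message, and facility values are hypothetical):

```json
{
  "version": "1.1",
  "host": "app-host-01",
  "short_message": "request failed",
  "level": 3,
  "facility": "my-application"
}
```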

Documents

Troubleshooting

Kafka consumer lag

For a host of reasons it might happen that there's a buildup of messages on Kafka. For example:

OpenSearch is refusing to index messages, so Logstash can't consume properly from Kafka. The usual reason for index failures is conflicting fields; see bug T150106 for a detailed discussion of the problem. The solution is to find which programs are generating the conflicts and drop their logs in Logstash accordingly; see also bug T228089.

Using the dead letter queue

The Logstash dead letter queue (DLQ) is not normally enabled, but it comes in handy when debugging indexing failures where the problematic log entries don't show up in the Logstash logs.

Enable the DLQ (with Puppet disabled) in /etc/logstash/logstash.yml:

dead_letter_queue.enable: true
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue/"

Then restart Logstash with systemctl restart logstash. The DLQ will start filling up as soon as unindexable logs are received. Later, the DLQ can be dumped with (running as the logstash user):

$ /usr/share/logstash/bin/logstash -e '
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue/" 
    commit_offsets => false 
    pipeline_id => "main" 
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}
' 2>&1 | less

Once debugging is complete, clear the queue with rm /var/lib/logstash/dead_letter_queue/main/*.log and re-enable Puppet.

Operations

Configuration changes

After merging your configuration change, it is usually enough to run Cumin in batches of one with a sleep of at least 60 seconds:

 cumin -b1 -s60 'O:logging::opensearch::collector' 'run-puppet-agent -q && systemctl restart logstash'

Test a configuration snippet before merge

Copy your ready-to-merge snippet (e.g. modules/profile/files/logstash/filter-syslog-network.conf) to a Logstash host, then run:

 sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f <myfile>

It should return "Configuration OK".

Indexing errors

Have a look at the Dead Letter Queue Dashboard. The original message that caused the error is in the log.original field.

We alert on errors that Logstash gets from OpenSearch whenever there's an "indexing conflict" between fields of the same index (see also bug T236343). The usual cause is two applications sending logs with the same field name but different types, e.g. response sent as a string in one case but as a nested object in another. Bug T239458 is a good example of this, where different parts of MediaWiki send logs formatted in different ways.
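For illustration, two log events like these (hypothetical) cannot both be indexed into the same field of one index, because response is a string in the first and a nested object in the second:

```json
{"program": "app-a", "response": "200 OK"}
{"program": "app-b", "response": {"status": 200, "body_length": 512}}
```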

No logs indexed

This alert is based on the rate of incoming logs per second indexed by OpenSearch. During normal operation there is a baseline of ~1k logs/s (July 2020), and anything significantly lower is an unexpected condition. Check the Logstash dashboard attached to the alert for signs of root causes. Most likely Logstash has stopped sending logs to OpenSearch.

Drop spammy logs

Occasionally producers will outpace Logstash's ingestion capability, most often with what is considered "log spam" (e.g. dumping a whole request/response into debug logs). In these cases one mitigation is to drop the offending logs in Logstash; ideally the producer is also fixed so it stops spamming. The simplest such filter is installed before most/all other filters; it matches a few fields and then drops the message:

filter {
  if [program] == "producer" and [nested][field] == "offending value" {
    drop {}
  }
}

See also this Gerrit change for a real-world example.

UDP packet loss

Logstash 5 locks up from time to time, causing UDP packet loss on the host it runs on. The fix is to restart logstash.service on the affected host.

Replace failed disk and rebuild RAID

The storage drives on Logstash data nodes are configured in an mdraid RAID0 array. OpenSearch handles data redundancy, so the rest of the cluster will absorb the impact of the downed node.

Once the disk is replaced, the RAID will have to be rebuilt:

First, stop OpenSearch and disable Puppet.

Copy the disk partition layout from a good disk to the new disk:

 sfdisk -d /dev/sdb | sfdisk /dev/sdi

Determine the md device mounted at /srv (/dev/md2, for example) and check mdstat:

 cat /proc/mdstat

Get the array information. Make a note of the remaining array members; this information is needed when rebuilding:

 mdadm --query --detail /dev/md2

Stop and remove the RAID0 array:

 mdadm --stop /dev/md2 && mdadm --remove /dev/md2

Remove traces of the previous array from the old partitions:

 mdadm --zero-superblock /dev/sdb4
 mdadm --zero-superblock /dev/sdc4
 # ... etc
 mdadm --zero-superblock /dev/sdh4

Create the new RAID0 array (WARNING: the disks on your host may differ from this example):

 mdadm --create --verbose /dev/md/2 --level=0 --raid-devices=8 /dev/sdb4 /dev/sdc4 /dev/sdd4 /dev/sde4 /dev/sdf4 /dev/sdg4 /dev/sdh4 /dev/sdi4

Make the filesystem:

 mkfs.ext4 /dev/md2

Work around systemd mount management by commenting out the old array's mount entry in fstab and issuing a daemon reload:

 vim /etc/fstab
 systemctl daemon-reload

Add the mount point back with the new UUID and mount:

 vim /etc/fstab
 mount /srv

Check that the disk is mounted, then add the new array definition to /etc/mdadm/mdadm.conf and remove the old definition:

 mdadm --detail --scan
 vim /etc/mdadm/mdadm.conf

Update the initramfs:

 update-initramfs -u

Check the other arrays for failed partitions and add the replacement disk's partitions to them:

 mdadm --manage /dev/md0 --add /dev/sdi2
 mdadm --manage /dev/md1 --add /dev/sdi3

Make the OpenSearch data directory:

 mkdir /srv/opensearch && chown opensearch:opensearch /srv/opensearch

Re-enable Puppet and run it. OpenSearch should start up, join the cluster, and immediately start rebalancing shards.

Stats

Documents and bytes counts

The OpenSearch cat API provides a simple way to extract general statistics about log storage, e.g. total log documents and bytes (not including replication):

 logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ {b+=$10; d+=$7} END {print d; print b}'

Or logs per day (change $7 to $10 to sum bytes sans replication instead of document counts):

 logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort

Or logs per month:

 logstash1010:~$ curl -s 'localhost:9200/_cat/indices?v&bytes=b' | awk '/logstash-/ { gsub(/logstash-[^0-9]*/, "", $3); gsub(/\.[0-9][0-9]$/, "", $3); sum[$3] += $7 } END { for (i in sum) print i, sum[i] }' | sort

Data Retention

Logs are retained in Logstash for a maximum of 90 days by default in accordance with our Privacy Policy and Data Retention Guidelines.

Extended Retention

See Logstash/Extended Retention

See also