
Logstash is a tool for managing events and logs. When used generically, the term encompasses a larger system of log collection, processing, storage and searching activities.

Overview ("ELK")

File:ELK Tech Talk 2015-08-20.pdf (slides)

Diagram: Wikipedia request flow

File:Using Kibana4 to read logs at Wikimedia Tech Talk 2016-11-14.pdf (slides)

Various Wikimedia applications send log events to Logstash, which gathers the messages, converts them into JSON documents, and stores them in an Elasticsearch cluster. Wikimedia uses Kibana as a front-end client to filter and display messages from the Elasticsearch cluster.


Logstash is a tool to collect, process, and forward events and log messages. Collection is accomplished via configurable input plugins including raw socket/packet communication, file tailing, and several message bus clients. Once an input plugin has collected data, it can be processed by any number of filters which modify and annotate the event data. Finally, Logstash routes events to output plugins which can forward the events to a variety of external programs including Elasticsearch, local files, and several message bus implementations.
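As an illustration of this input → filter → output flow, here is a minimal, hypothetical pipeline configuration; the file path, host address, and index name are examples only, not the production settings:

# Minimal illustrative pipeline; not the production configuration.
input {
  file {
    path => "/var/log/myapp.log"          # hypothetical application log
  }
}
filter {
  grok {
    # parse a syslog-style line into structured fields
    match => { "message" => "%{SYSLOGLINE}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]           # example address, not the real cluster
    index => "logstash-%{+YYYY.MM.dd}"    # one index per day
  }
}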


Elasticsearch is a multi-node search and analytics engine built on Lucene. The same technology powers CirrusSearch on WMF wikis.


Kibana is a browser-based analytics and search interface for Elasticsearch that was developed primarily to view Logstash event data.

Systems feeding into logstash

See the 2015-08 Tech Talk slides (linked above).

Writing new filters is easy; a sketch of a typical filter follows.
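As a hedged sketch (the field values and tag name here are invented for illustration), a filter that annotates events from a new source could look like:

# Hypothetical filter: tag and annotate events from a new source.
filter {
  if [program] == "myapp" {               # "myapp" is a hypothetical program name
    mutate {
      add_field => { "channel" => "myapp" }
      add_tag   => [ "annotated" ]
    }
  }
}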

Systems not feeding into logstash

  • EventLogging (of program-defined events with schemas), despite its name, uses a different pipeline.
  • Varnish logs of the billions of pageviews of WMF wikis would require a lot more hardware. Instead we use Kafka to feed web requests into Hadoop. A notable exception to this rule: varnish user-facing errors (HTTP status 500-599) are sent to logstash to make debugging easier.
  • MediaWiki logs usually go to both Logstash and log files, but a few log channels are excluded from one or the other. You can check which in $wmgMonologChannels in InitialiseSettings.php (a sketch of the format follows this list).
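As a hedged illustration only (the exact schema may differ; consult the real InitialiseSettings.php), a $wmgMonologChannels entry that keeps a channel out of Logstash might look like:

// Hypothetical excerpt; the real file defines many more channels.
'wmgMonologChannels' => [
    'default' => [
        'Flow' => 'debug',                        // string value: a plain log level
        'noisy-channel' => [                      // hypothetical channel name
            'udp2log'  => 'debug',
            'logstash' => false,                  // excluded from Logstash
        ],
    ],
],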

Production Logstash

As of FY2019, the Logstash infrastructure is owned by SRE. See also Logstash/SRE_onboard for more information on how to migrate services/applications.

Web interface: runs Kibana
Authentication: Wikitech LDAP username and password, plus membership in one of the following LDAP groups: nda, ops, wmf
Servers: logstash100[1-6] in eqiad
The cluster contains two types of nodes:
  • logstash100[1-3] each provide a Logstash instance, a no-data Elasticsearch node, and an Apache vhost serving the Kibana application. The Apache vhosts also act as reverse proxies to the Elasticsearch cluster and perform LDAP-based authentication to restrict access to the potentially sensitive log information.
  • logstash100[4-6] provide the Elasticsearch nodes that form the storage layer for log data.
All hosts run Debian Jessie as the base operating system. The misc Varnish cluster provides SSL termination and load balancing for the Kibana application.


Kibana quick intro

  • Start from one of the blue Dashboard links near the top; more are available from the Load icon near the top right.
  • In "Events over time", click to zoom out until you see the range you want, or select a region with the mouse to zoom in.
    • Smaller time intervals are faster.
    • Be careful: you may see no events at all because the selected time range extends into the future.
  • When you get lost, click the Home icon near the top right.
  • As an example query, wfDebugLog( 'Flow', ... ) in MediaWiki PHP corresponds to type:mediawiki AND channel:flow.

Read slide 11 onwards in the Tech Talk on ELK by Bryan Davis; it highlights features of the Kibana web interface.


The Elasticsearch API is accessible at

Note: The _search endpoint can only be used without a request body (see task T174960). Use _msearch instead for complex queries that need a request body.
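For example, a request-body query can be sent through _msearch as newline-delimited JSON. This is a sketch only: the URL is a placeholder for the API address above, and the index pattern is an assumption.

# Hypothetical example; substitute the real API address and index pattern.
curl -s -H 'Content-Type: application/x-ndjson' \
    'https://logstash.example.org/elasticsearch/_msearch' \
    --data-binary $'{"index":"logstash-*"}\n{"query":{"match":{"channel":"flow"}},"size":5}\n'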


We maintain a WMF-specific process to build and distribute Logstash plugins to servers.

The build script and plugin git repository is located at

Plugin build process

Prerequisites:
  • An up-to-date Debian stable host with the Logstash package installed
  • Membership in the LDAP group cn=archiva-deployers,ou=groups,dc=wikimedia,dc=org

Build Process

First, create a ~/.m2/settings.xml file containing the following (see Archiva#Deploy to Archiva for additional details):

# ~/.m2/settings.xml
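The actual contents come from Archiva#Deploy to Archiva; as a rough sketch (the server id here is an assumption and must match the repository id the build uses, and the credential placeholders are obviously not real values):

<!-- Sketch of ~/.m2/settings.xml; see Archiva#Deploy to Archiva for the real values. -->
<settings>
  <servers>
    <server>
      <id>archiva.releases</id>                  <!-- assumed repository id -->
      <username>YOUR_LDAP_USERNAME</username>
      <password>YOUR_ARCHIVA_PASSWORD</password>
    </server>
  </servers>
</settings>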

Then, check out the plugins repository:

git clone

In the newly created 'plugins' directory, initialize git-fat:

cd plugins
git-fat init

Then execute the build script, providing the Logstash version for which plugins should be built as the first argument:

./ 5.6.15    # 5.6.15 is an example; substitute the desired Logstash version

Then git add and git commit the changes, and send them for review:

git add .
git commit -m "Upgrade logstash plugin to 5.X.X"
git review -R

After the change is reviewed and merged, pull it onto the deployment server and run a scap deploy. Currently the deployment server is deploy1001:

# ssh to deploy master, currently deploy1001

cd /srv/deployment/logstash/plugins/

# have a look at what is in the local working copy. you'll want to bring this up to date
# with the gerrit repository but also not inadvertently clobber manual changes

git pull

# if you wish to see what would happen without actually running the deploy
scap deploy --dry-run --no-log-message

# deploy the updated plugins
scap deploy "updating logstash plugins to $version"

After a successful scap deploy, Puppet should do the right thing on the next agent run to fetch and install the updated plugins.

Puppet will not restart Logstash. This must be done manually in a rolling fashion, and it is strongly suggested to do so in step with the plugin deploy.
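A rolling restart might look like the following sketch; the host list is taken from the cluster description above, while the service name and the pacing between hosts are assumptions:

# Sketch only: restart the Logstash service one collector host at a time.
for host in logstash1001 logstash1002 logstash1003; do
    ssh "$host" sudo systemctl restart logstash
    sleep 60    # let the instance settle before moving to the next host
done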

Prototype (Beta) Logstash

Web interface
A functional Logstash + Elasticsearch + Kibana stack aggregates log data produced by the beta cluster. Credentials for this can be found on deployment-deploy01.deployment-prep.eqiad.wmflabs in /root/secrets.txt.


GELF transport

Make sure logging events sent to the GELF input don't have a "type" or "_type" field set, or, if set, that it contains the value "gelf". The gelf/logstash config discards any events that have a different value for "type" or "_type". The final "type" seen in Kibana/Elasticsearch will be taken from the "facility" element of the original GELF packet, so the application sending log data to Logstash should set "facility" to a reasonably unique value that identifies the application.
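For illustration, a single GELF event with "facility" set could be sent like this. The hostname is a placeholder, 12201 is the conventional GELF UDP port rather than a confirmed WMF setting, and the payload is gzip-compressed, which the GELF input accepts:

# Sketch only: send one GELF event over UDP; hostname and port are assumptions.
echo '{"version":"1.1","host":"myapp01","short_message":"something happened","level":6,"facility":"myapp"}' \
    | gzip -c | nc -u -w1 logstash.example.org 12201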


See also