Revision as of 07:39, 7 August 2019
Varnishkafka is a daemon that runs on all the Wikimedia frontend caching hosts. Varnish logs HTTP requests in its own format in shared memory; Varnishkafka uses the Varnish Log API to read that data, formats it according to a user-supplied configuration, and finally sends the result to a specific Kafka topic (using librdkafka).
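To make the read-format-produce flow concrete, a minimal Varnishkafka configuration might look like the sketch below. The key names follow the varnishkafka configuration format described in its README, but the broker, topic and format values are made-up placeholders, not the ones used in production:

```ini
# Sketch of a varnishkafka configuration (illustrative values only).

# Send the formatted log lines to Kafka (instead of stdout).
output = kafka

# Kafka brokers to bootstrap from; librdkafka properties are passed
# through with the "kafka." prefix. Placeholder hostname.
kafka.metadata.broker.list = kafka-broker.example.org:9092

# Kafka topic to produce to (placeholder name).
topic = example_webrequest

# Emit each request as a JSON object built from Varnish log fields.
format.type = json
# Abridged, illustrative format string: sequence number, timestamp,
# HTTP status, bytes sent, Host request header.
format = %n %t %s %b %{Host}i
```

The production configurations are managed by Puppet, one file per instance.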
== Where is the code? ==

== Testing a code change ==
Use the Docker image as outlined in https://gerrit.wikimedia.org/r/#/admin/projects/operations/software/varnish/varnishkafka/testing. The wise reader might think "and what about unit tests? These are integration tests!". In T147432 the Analytics team explored the possibility of adding unit tests to Varnishkafka, but the estimated amount of time for code refactoring, testing, releasing, etc. (this software is critical for the team) was not worth the benefit of having some code tests. The team preferred instead to spend that time on creating a flexible and simple integration testing suite.
You might see references in Puppet to multiple Varnishkafka instances; this is because the same Varnish request data can be sliced and formatted in different ways for different purposes:
On all the cache segments (text, upload, misc and maps) we run the Webrequest instance, while the Statsv and Eventlogging instances only run on text.
== Monitoring ==
The varnishkafka grafana dashboard uses Prometheus metrics, and hence follows the Prometheus convention of grouping metrics by datacenter. This means there is a separate data source for each of eqiad, codfw, eqsin, ulsfo and esams; you can switch between them via the datasource dropdown in the top left corner of the grafana dashboard.
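As an illustration, a per-host delivery error rate over one datacenter's data source could be graphed with a Prometheus query along these lines. The metric name here is hypothetical; check the dashboard's panel definitions for the real ones:

```promql
# Hypothetical metric name, for illustration only: delivery error
# counter per caching host, turned into a 5-minute rate.
sum by (instance) (rate(varnishkafka_delivery_errors_total[5m]))
```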
This error means that Varnishkafka failed to deliver messages to Kafka Jumbo, and hence data has been lost. Usually it means that something is not working properly at the caching layer, or that Kafka Jumbo itself is in trouble for some reason. Start by checking the Kafka Jumbo dashboard and see if anything there correlates with the Varnishkafka metrics. If everything looks good, check with the SRE/Traffic team whether anything is ongoing at the caching layer.
Important note: the delivery alerts are per datacenter, so they can give you some information about the source of the problem. If you get alerts for webrequest text in all datacenters, something global is probably happening to Kafka or Varnish. If you get an alert only for, say, esams, the problem is probably local to that datacenter.