
Wikidata Query Service/Streaming Updater


The WDQS Streaming Updater is an Apache Flink application whose purpose is to create a stream of diffs of RDF triples, meant to be fed into Blazegraph. It uses the available change streams to compute the diffs and pushes them to a Kafka topic.

Design

[Figure: Wikidata Query Service Streaming Updater Design.svg]

The application reads some of the topics populated by mw:Extension:EventBus and builds a diff of the RDF content produced by mw:Wikibase/EntityData, comparing the last revision seen for an entity with the new revision announced in the mediawiki.revision-create topic. It is meant to integrate as a stream processor within the Modern Event Platform.
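
To make this concrete, here is a rough command-line sketch of the comparison performed (the revision numbers are placeholders; the real pipeline diffs parsed triples rather than raw Turtle text):

$ curl -s 'https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl?flavor=dump&revision=1000000' > old.ttl
$ curl -s 'https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl?flavor=dump&revision=1000001' > new.ttl
$ diff old.ttl new.ttl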

It relies on Flink to provide the following functionality:

  • event-time semantics to re-order events coming from multiple Kafka topics
  • state management consistent with the output of the stream
  • scalability

The Flink application (code name streaming-updater-producer) is responsible for producing its data to a Kafka topic; a client (named streaming-updater-consumer) running on the same machines as the triple store (the wdqs hosts) is responsible for reading this topic and applying the updates.
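
To get a quick look at what the producer emits, the output topic can be inspected with kafkacat. The broker and topic names below are examples only, not authoritative; check the current configuration for the actual values:

$ kafkacat -C -b kafka-main1001.eqiad.wmnet:9092 -t eqiad.rdf-streaming-updater.mutation -o -5 -e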

The dependencies of the Flink application are:

Deployment strategy

The Flink application is active/active: it runs in both eqiad and codfw on the Kubernetes clusters hosting services. The WDQS machines in eqiad consume the output of the Flink application running in eqiad, and likewise for codfw.

The benefits of this approach are:

  • simple to put in place in our setup: no need for a failover strategy that would bring up Flink in the spare DC and resume operations from where the failed one left off (this would require offset replication between the two Kafka clusters)
  • symmetry of the k8s-deployed services

Drawbacks:

  • No guarantee that the output of the two Flink pipelines will be identical
  • Double compute

See this presentation for a quick overview of the two strategies that were evaluated.

Operations

Kubernetes setup

Kubernetes only hosts the Flink session cluster responsible for running the Flink job; the session cluster is managed with the flink-session-cluster chart using the rdf-streaming-updater values.

Deploying the chart to staging (on deployment.eqiad.wmnet):

$ cd /srv/deployment-charts/helmfile.d/services/rdf-streaming-updater/
$ helmfile -e staging -i apply
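
Before applying, the pending changes can be previewed (standard helmfile behavior, not specific to this chart):

$ helmfile -e staging diff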

Looking at the jobmanager and taskmanager logs in staging:

$ kube_env rdf-streaming-updater staging
$ kubectl logs -l component=jobmanager -c flink-session-cluster-main -f
$ kubectl logs -l component=taskmanager -c flink-session-cluster-main-taskmanager -f
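
Checking that the session cluster pods are running (plain kubectl, assuming the same kube_env as above):

$ kubectl get pods -l component=jobmanager
$ kubectl get pods -l component=taskmanager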

The Flink jobmanager UI and REST endpoint are exposed on port 4007.

This endpoint has no LVS setup and is only used for internal management (main application deploys):

Note that the k8s cluster cannot yet be accessed via IPv6, so IPv4 must be forced in your HTTP client (e.g. curl -4).
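
For example, to list the jobs known to the session cluster through Flink's monitoring REST API (replace the placeholder with the actual jobmanager endpoint for the DC you are targeting):

$ curl -4 -s https://<jobmanager-endpoint>:4007/jobs/overview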

Logs

Flink logs are collected in Logstash and can be filtered using: kubernetes.master_url:"https://kubemaster.svc.codfw.wmnet" AND kubernetes.namespace_name:"rdf-streaming-updater". Append kubernetes.labels.component:jobmanager to filter the jobmanager's logs, or kubernetes.labels.component:taskmanager for the taskmanagers' logs.

Managing the streaming-updater-producer

TODO

Monitoring

The Flink session cluster activity can be monitored using the flink-session-cluster and wdqs-streaming-updater Grafana dashboards.

Important metrics:

  • flink job uptime in the flink-session-cluster dashboard (flink_jobmanager_job_uptime) indicates how long the job has been running (a REST cross-check is sketched after this list)
    • a constantly low uptime (below 10 minutes) might indicate that the job is restarting in a loop; lag may start to rise
  • Triples Divergences on the wdqs-streaming-updater dashboard gives an indication of the divergences detected when applying the diffs; a sudden surge might indicate one of the following problems:
    • on a single machine: the Blazegraph journal was corrupted or copied from another source, or there is a serious bug in the streaming-updater-consumer
    • on all the machines in one or both DCs: likely a problem in the streaming-updater-producer
  • Consumer Poll vs Store time on the wdqs-streaming-updater dashboard gives an indication of write saturation in the streaming-updater-consumer: poll time is how much time is spent polling/waiting on Kafka, store time is how much time is spent writing to Blazegraph
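
As a quick cross-check of the uptime metric outside the dashboards, Flink's monitoring REST API reports each job's state and duration (same placeholder endpoint as in the Kubernetes setup section):

$ curl -4 -s https://<jobmanager-endpoint>:4007/jobs/overview | jq '.jobs[] | {name, state, duration}'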

Troubleshooting

TODO