Wikidata Query Service/Streaming Updater
The WDQS Streaming Updater is an Apache Flink application whose purpose is to create a stream of diffs of RDF triples, meant to be fed into Blazegraph. It uses the available change streams to calculate the diffs and pushes them to a Kafka topic.
The application reads some of the topics populated by mw:Extension:EventBus and builds a diff of the RDF content produced by mw:Wikibase/EntityData, comparing the last seen revision of an entity with the new revision seen in the mediawiki.revision-create topic (a rough command-line illustration is given after the list below). It is meant to integrate as a stream processor within the Modern Event Platform.
It relies on Flink to provide the following functionalities:
- event-time semantics to re-order the events coming from multiple Kafka topics
- state management consistent with the output of the stream
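As a rough command-line illustration of the diff computation (not the actual implementation, which parses the RDF properly inside Flink), one can fetch two revisions of an entity from Special:EntityData and compare them line by line; the entity (Q42) and the revision numbers OLD_REV/NEW_REV are placeholders:
$ curl -s 'https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl?flavor=dump&revision=OLD_REV' | sort > old.ttl
$ curl -s 'https://www.wikidata.org/wiki/Special:EntityData/Q42.ttl?flavor=dump&revision=NEW_REV' | sort > new.ttl
$ comm -13 old.ttl new.ttl   # lines only in the new revision: triples to add
$ comm -23 old.ttl new.ttl   # lines only in the old revision: triples to delete
Since Turtle is not line-oriented, this is only an approximation of a real triple diff.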
The Flink application (code name streaming-updater-producer) is responsible for producing its data to a Kafka topic; a client (named streaming-updater-consumer) running on the same machines as the triple store (known as wdqs hosts) is responsible for reading this topic and applying the updates.
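To peek at the resulting diff stream, a console consumer can be pointed at the output topic (the broker and topic names below are illustrative; check the stream configuration for the actual values):
$ kafkacat -C -b kafka-main1001.eqiad.wmnet:9092 -t eqiad.rdf-streaming-updater.mutation -o end -c 5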
The dependencies of the flink application are:
- The mediawiki application servers for mw:Wikibase/EntityData
- Kafka (main) for consuming MediaWiki changes and for producing its output
- Swift (thanos) for object storage, though the aim is to move to the future MOS
- K8S services cluster to run flink as a session cluster
- schema.wikimedia.org for validating the events it emits against their Event_Platform/Schemas
- meta.wikimedia.org for fetching the stream configurations
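The stream configurations can be inspected with the streamconfigs API action (provided by the EventStreamConfig extension), for example:
$ curl -s 'https://meta.wikimedia.org/w/api.php?action=streamconfigs&format=json' | jq .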
The Flink application is active/active and runs in both eqiad and codfw through the Kubernetes clusters hosting services. The WDQS machines in eqiad consume the output of the Flink application running in eqiad, and likewise for codfw.
The benefits of this approach are:
- simple to put in place in our setup: no need for a failover strategy that brings up Flink in the spare DC and resumes operations from where the failed one left off (which would require offset replication between the two Kafka clusters)
- symmetry of the k8s deployed services
The drawbacks are:
- no guarantee that the output of both Flink pipelines will be the same
- double compute
See this presentation for a quick overview of the two strategies that were evaluated.
Kubernetes only hosts the Flink session cluster responsible for running the Flink job; it is deployed using the flink-session-cluster chart with the rdf-streaming-updater values.
Deploying the chart to staging (on deployment.eqiad.wmnet):
$ cd /srv/deployment-charts/helmfile.d/services/rdf-streaming-updater/
$ helmfile -e staging -i apply
Looking at the jobmanager and then the taskmanager logs in staging:
$ kube_env rdf-streaming-updater staging
$ kubectl logs -l component=jobmanager -c flink-session-cluster-main -f
$ kubectl logs -l component=taskmanager -c flink-session-cluster-main-taskmanager -f
The Flink jobmanager UI and REST endpoint are exposed via port 4007.
This endpoint has no LVS endpoint set up and is only used for internal management (main application deploys):
https://kubernetes1003.eqiad.wmnet:4007 (beware: TLS host verification must be disabled here)
https://kubernetes2003.codfw.wmnet:4007 (ditto regarding TLS host verification)
Note that the k8s cluster cannot yet be accessed via IPv6, so IPv4 must be forced on your HTTP client (see the example below).
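For example, with curl (-k disables TLS host verification, -4 forces IPv4), the standard Flink REST API can be queried for the list of running jobs:
$ curl -s -k -4 https://kubernetes1003.eqiad.wmnet:4007/jobs/overview | jq .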
Flink logs are collected in logstash and can be filtered using:
kubernetes.master_url:"https://kubemaster.svc.codfw.wmnet" AND kubernetes.namespace_name:"rdf-streaming-updater"
Append kubernetes.labels.component:jobmanager to filter the jobmanager's logs, or kubernetes.labels.component:taskmanager for the taskmanagers' logs.
Managing the streaming-updater-producer
- Flink job uptime on the flink-session-cluster dashboard (flink_jobmanager_job_uptime) indicates how long the job has been running.
- A constantly low uptime (below 10 minutes) might indicate that the job keeps restarting; lag may start to rise.
- Triples Divergences on the wdqs-streaming-updater dashboard gives an indication of the divergences detected when applying the diffs; a sudden surge might indicate the following problems:
- on a single machine: the blazegraph journal was corrupted or copied from another source, or there is a serious bug in the streaming-updater-consumer.
- on all the machines in one or both DCs: a likely problem in the streaming-updater-producer.
- Consumer Poll vs Store time on the wdqs-streaming-updater dashboard gives an indication of how saturated the streaming-updater-consumer's writes are. Poll time is the time spent polling/waiting on Kafka; store time is the time spent writing to blazegraph.
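To check the uptime metric outside of Grafana, the Prometheus HTTP API can be queried directly (the host below is a placeholder for whichever Prometheus instance scrapes the k8s services clusters):
$ curl -sG 'http://PROMETHEUS_HOST/api/v1/query' --data-urlencode 'query=flink_jobmanager_job_uptime' | jq .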