Machine Learning/LiftWing

Lift Wing

A scalable machine learning model serving infrastructure on Kubernetes using KServe.


Software versions:

  • Kubernetes v1.16.5
  • Istio v1.9.5
  • Knative v0.18.1
  • KServe v0.8.0


Istio

Istio is a service mesh in which our ML services run. It is installed using the istioctl package, which has been added to the WMF APT repository (Debian buster). See packages; we are currently running Istio 1.9.5 (istioctl: 1.9.5-1).


Knative

We use Knative Serving to run serverless containers on Kubernetes using Istio. It also allows for various deployment strategies such as canary, blue-green, and A/B testing.




KServe

We use KServe for its InferenceService custom resource, which enables us to expose our ML models as asynchronous micro-services.
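As an illustration, a predictor for an InferenceService can be written with the KServe Python SDK by subclassing kserve.Model. The sketch below is a minimal, hypothetical example, not one of the production LiftWing services; the model name, file name, and feature handling are placeholders.

from typing import Dict
import pickle

from kserve import Model, ModelServer

class ExamplePredictor(Model):
    """A minimal, hypothetical KServe predictor."""

    def __init__(self, name: str):
        super().__init__(name)
        self.model = None
        self.load()

    def load(self):
        # The storage-initializer mounts the model binary under /mnt/models/;
        # the file name here is a placeholder.
        with open("/mnt/models/model.pkl", "rb") as f:
            self.model = pickle.load(f)
        self.ready = True

    def predict(self, request: Dict) -> Dict:
        # KServe V1 protocol: {"instances": [...]} in, {"predictions": [...]} out.
        features = request["instances"]
        return {"predictions": self.model.predict(features).tolist()}

if __name__ == "__main__":
    ModelServer().start([ExamplePredictor("example-model")])

KServe builds the HTTP serving layer around this class, so a service only needs to implement load() and predict().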





Clusters

Eqiad cluster:

  • ml-serve1001-4

Codfw cluster:

  • ml-serve2001-4
  • ml-staging200[12]




Inference Services

We host our machine learning models as Inference Services (isvcs), which are asynchronous micro-services that can transform raw feature data and make predictions. Each inference service has production images that are published in the WMF Docker Registry via the Deployment Pipeline. These images are then referenced in the isvc configurations in our ml-services helmfile in the operations/deployment-charts repo.


Model storage

We store model binary files in Swift, an open-source S3-compatible object store that is widely used across the WMF. The model files are downloaded by the storage-initializer (an init container) when an Inference Service pod is created. The storage-initializer mounts the model binary in the pod at /mnt/models/, where it can be loaded by the predictor container.
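Since the store is S3-compatible, model binaries can be uploaded with any S3 client. Below is a minimal sketch using boto3; the endpoint URL, credentials, bucket, and object key are all placeholders, not the production values.

import boto3

# All values below are placeholders for illustration.
s3 = boto3.client(
    "s3",
    endpoint_url="https://swift.example.wmnet",  # Swift's S3-compatible endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# Upload a model binary to the object store; the storage-initializer later
# downloads it when an Inference Service pod is created.
s3.upload_file("model.bin", "example-ml-models", "articlequality/enwiki/model.bin")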


Development

We develop inference services with Docker and test them on the ML Sandbox using our own WMF KServe images and charts.

We previously used multiple sandbox clusters running MiniKF.


Source code

We serve ML models as Inference Services, which are containerized applications. The code is currently hosted on Gerrit.

Gerrit mono-repo:

GitHub mirror:

Current Inference Services

  • Revscoring models
Model name | Kubernetes namespace | Images | Supported wikis
articlequality | revscoring-articlequality | articlequality | en, eu, fa, frwikisource, fr, gl, nl, pt, ru, sv, tr, uk, wikidata
draftquality | revscoring-draftquality | draftquality | en, pt
damaging | revscoring-editquality-damaging | editquality | ar, bs, ca, cs, de, en, eswikibooks, es, eswikiquote, et, fa, fi, fr, he, hi, hu, it, ja, ko, lv, nl, no, pl, pt, ro, ru, sq, sr, sv, uk, wikidata, zh
goodfaith | revscoring-editquality-goodfaith | editquality | ar, bs, ca, cs, de, en, eswikibooks, es, eswikiquote, et, fa, fi, fr, he, hi, hu, it, ja, ko, lv, nl, no, pl, pt, ro, ru, sq, sr, sv, uk, wikidata, zh
reverted | revscoring-editquality-reverted | editquality | bn, el, enwiktionary, gl, hr, id, is, ta, translate, vi
articletopic | revscoring-articletopic | topic | ar, cs, en, eu, hu, hy, ko, sr, uk, vi, wikidata
drafttopic | revscoring-drafttopic | topic | ar, cs, en, eu, hu, hy, ko, sr, uk, vi
  • Outlink topic model
Model name | Kubernetes namespace | Images | Model Card
outlink-topic-model | articletopic-outlink | outlink, outlink-transformer | Language_agnostic_link-based_article_topic_model_card
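A deployed inference service can be queried over HTTP with the KServe V1 protocol, i.e. a POST to /v1/models/<model-name>:predict. The sketch below assumes a revscoring-style service that scores a single revision by its ID; the hostname and model name are placeholders.

import requests

# Placeholder hostname; the real services sit behind the Istio ingress gateway.
url = "https://inference.example.wmnet/v1/models/enwiki-goodfaith:predict"

# Revscoring-style services take a revision ID as input.
response = requests.post(url, json={"rev_id": 12345})
print(response.json())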