Lift Wing
A scalable machine learning model serving infrastructure on Kubernetes using KServe.
- Phabricator MVP Task: T272917
Stack
Software | Version |
---|---|
Kubernetes | v1.16.5 |
Istio | v1.9.5 |
Knative | v0.18.1 |
KServe | v0.8.0 |
Istio
Istio is a service mesh in which we run our ML services. It is installed using the istioctl package, which has been added to the WMF APT repository (Debian buster). See packages; we are currently running Istio 1.9.5 (istioctl: 1.9.5-1).
Knative
We use Knative Serving to run serverless containers on Kubernetes with Istio. It also enables various deployment strategies such as canary, blue-green, and A/B testing.
Charts
Images
KServe
We use KServe for its custom InferenceService resource. It enables us to expose our ML models as asynchronous micro-services.
Charts
Images
Hosts
eqiad
- ml-serve1001-4
codfw
- ml-serve2001-4
- ml-staging200[12]
Components
Monitoring
- Grafana - KServe
- Grafana - Knative Serving
Serving
We host our Machine Learning models as Inference Services (isvcs): asynchronous micro-services that can transform raw feature data and make predictions. Each inference service has production images that are published in the WMF Docker Registry via the Deployment Pipeline. These images are then referenced in an isvc configuration in our ml-services helmfile in the operations/deployment-charts repo.
- Model Deployment Guide: Machine Learning/LiftWing/Deploy
- Inference Service Docs: Machine_Learning/LiftWing/Inference Services
Storage
We store model binary files in Swift, an open-source, S3-compatible object store that is widely used across the WMF. The model files are downloaded by the storage-initializer (init container) when an InferenceService pod is created. The storage-initializer then mounts the model binary in the pod at /mnt/models/, where it can be loaded by the predictor container.
- Model Upload info: Machine_Learning/LiftWing/Deploy#How_to_upload_a_model_to_Swift
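As a rough sketch of what the upload side looks like, Swift's S3-compatible API can be driven from Python with boto3. The endpoint, credentials, bucket, and object key below are placeholders, not the actual production values; see the upload guide above for the real procedure.
import boto3

# Placeholder endpoint, credentials, bucket, and key -- the actual values
# are documented in the model upload guide linked above.
s3 = boto3.client(
    's3',
    endpoint_url='https://swift.example.wmnet',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

# Once uploaded, the storage-initializer fetches the binary from the s3://
# URI configured for the InferenceService and mounts it at /mnt/models/.
s3.upload_file('model.bin', 'some-models-bucket', 'articlequality/enwiki/model.bin')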
Development
We develop inference services with Docker and test them on the ML Sandbox using our own WMF KServe images and charts.
- KServe Guide: Machine Learning/LiftWing/KServe
- Production Image Development Guide: Machine Learning/LiftWing/Inference Services/Production Image Development
- ML-Sandbox Guide: Machine Learning/LiftWing/ML-Sandbox
We previously used multiple sandbox clusters running MiniKF.
Services
We are serving ML models as Inference Services, which are containerized applications. The code is currently hosted on Gerrit.
- Gerrit mono-repo: https://gerrit.wikimedia.org/r/plugins/gitiles/machinelearning/liftwing/inference-services
- GitHub mirror: https://github.com/wikimedia/machinelearning-liftwing-inference-services
Current Inference Services
- Revscoring models (migrated from ORES)
Model type | Kubernetes namespace | Images | Supported wikis |
---|---|---|---|
articlequality | revscoring-articlequality | articlequality | en, eu, fa, frwikisource, fr, gl, nl, pt, ru, sv, tr, uk, wikidata |
draftquality | revscoring-draftquality | draftquality | en, pt |
damaging | revscoring-editquality-damaging | editquality | ar, bs, ca, cs, de, en, eswikibooks, es, eswikiquote, et, fa, fi, fr, he, hi, hu, it, ja, ko, lv, nl, no, pl, pt, ro, ru, sq, sr, sv, uk, wikidata, zh |
goodfaith | revscoring-editquality-goodfaith | editquality | ar, bs, ca, cs, de, en, eswikibooks, es, eswikiquote, et, fa, fi, fr, he, hi, hu, it, ja, ko, lv, nl, no, pl, pt, ro, ru, sq, sr, sv, uk, wikidata, zh |
reverted | revscoring-editquality-reverted | editquality | bn, el, enwiktionary, gl, hr, id, is, ta, translate, vi |
articletopic | revscoring-articletopic | topic | ar, cs, en, eu, hu, hy, ko, sr, uk, vi, wikidata |
drafttopic | revscoring-drafttopic | topic | ar, cs, en, eu, hu, hy, ko, sr, uk, vi |
- Language agnostic models
Model name | Kubernetes namespace | Images | Model Card |
---|---|---|---|
outlink-topic-model | articletopic-outlink | outlink, outlink-transformer | Language_agnostic_link-based_article_topic_model_card |
revert-risk-model | experimental | revertrisk | |
Usage
Internal discovery endpoint
Once an InferenceService is deployed, it should become available internally via:
- the discovery endpoint:
https://inference.discovery.wmnet:30443/v1/models/{MODEL_NAME}:predict
- with the HTTP Host header:
{MODEL_NAME}.{KUBERNETES_NAMESPACE}.wikimedia.org
You can find MODEL_NAME and KUBERNETES_NAMESPACE in the tables in the previous section.
Note that the revscoring model group has its own model for each supported wiki, so the MODEL_NAME combines the wiki code and the model type, i.e. {wiki_code}wiki-{model_type}. For example: enwiki-articlequality, arwiki-damaging, bnwiki-reverted, eswikibookswiki-goodfaith.
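As an illustration, a small helper (hypothetical, not part of any WMF library) that assembles the endpoint URL and Host header from the table entries above:
def isvc_target(model_name, namespace):
    """Return the discovery URL and Host header for an InferenceService."""
    url = f'https://inference.discovery.wmnet:30443/v1/models/{model_name}:predict'
    host = f'{model_name}.{namespace}.wikimedia.org'
    return url, host

# Revscoring models combine the wiki code and the model type:
url, host = isvc_target('enwiki-goodfaith', 'revscoring-editquality-goodfaith')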
Curl
Let's say you want to query the enwiki-goodfaith model via curl:
aikochou@stat1004:~$ cat input.json
{ "rev_id": 1083325118 }
aikochou@stat1004:~$ curl "https://inference.discovery.wmnet:30443/v1/models/enwiki-goodfaith:predict" -X POST -d @input.json -i -H "Host: enwiki-goodfaith.revscoring-editquality-goodfaith.wikimedia.org" --http1.1
HTTP/1.1 200 OK
content-length: 209
content-type: application/json; charset=UTF-8
date: Mon, 31 Oct 2022 16:51:54 GMT
server: istio-envoy
x-envoy-upstream-service-time: 361
{"enwiki": {"models": {"goodfaith": {"version": "0.5.1"}}, "scores": {"1083325118": {"goodfaith": {"score": {"prediction": true, "probability": {"false": 0.033641298577500645, "true": 0.9663587014224994}}}}}}}
Python
If you want to query the outlink-topic-model via Python:
import json

import requests

inference_url = 'https://inference.discovery.wmnet:30443/v1/models/outlink-topic-model:predict'

# The Host header routes the request to the right InferenceService behind
# the shared discovery endpoint; the body is JSON, so set the matching
# Content-Type.
headers = {
    'Host': 'outlink-topic-model.articletopic-outlink.wikimedia.org',
    'Content-Type': 'application/json',
}

data = {"lang": "en", "page_title": "Wings of Fire (novel series)"}
response = requests.post(inference_url, headers=headers, data=json.dumps(data))
print(response.text)
Run the Python script:
aikochou@stat1004:~$ python inference.py
{"prediction": {"article": "https://en.wikipedia.org/wiki/Wings of Fire (novel series)", "results": [{"topic": "Culture.Literature", "score": 1.0000100135803223}, {"topic": "Culture.Media.Books", "score": 0.9926641583442688}, {"topic": "Culture.Media.Media*", "score": 0.8774868249893188}]}}
Troubleshooting
Proxy error
If you get an error like curl: (56) Received HTTP code 403 from proxy after CONNECT from curl, or the following error from Python:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='inference.discovery.wmnet', port=30443): Max retries exceeded with url: /v1/models/outlink-topic-model:predict (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 403 Forbidden')))
This is probably because you have HTTP proxy environment variables set, so the request to the internal .discovery.wmnet domain is being routed through the HTTP(S) proxy. Run unset https_proxy and try again. (T287056#8138803)
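If you are calling the endpoint from Python, you can also tell requests to ignore the proxy environment variables for that session instead of unsetting them shell-wide. A minimal sketch, reusing the outlink-topic-model example above:
import requests

# trust_env=False makes requests ignore http_proxy/https_proxy (and other
# environment settings), so the call to the internal .discovery.wmnet
# endpoint goes out directly instead of through the proxy.
session = requests.Session()
session.trust_env = False

response = session.post(
    'https://inference.discovery.wmnet:30443/v1/models/outlink-topic-model:predict',
    headers={'Host': 'outlink-topic-model.articletopic-outlink.wikimedia.org'},
    json={"lang": "en", "page_title": "Wings of Fire (novel series)"},
)
print(response.text)
Note that trust_env=False also makes requests ignore REQUESTS_CA_BUNDLE, so combine it with an explicit verify= argument if you also need the CA bundle fix described below.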
SSL error
If you're using a conda environment and get the following error from Python:
requests.exceptions.SSLError: HTTPSConnectionPool(host='inference.discovery.wmnet', port=30443): Max retries exceeded with url: /v1/models/outlink-topic-model:predict (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1091)')))
Try running export REQUESTS_CA_BUNDLE=/etc/ssl/certs/wmf-ca-certificates.crt before executing the code. (T317328)
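Alternatively, the CA bundle can be passed directly to requests via its verify parameter, which avoids relying on the environment variable. A sketch, again using the outlink-topic-model example:
import requests

response = requests.post(
    'https://inference.discovery.wmnet:30443/v1/models/outlink-topic-model:predict',
    headers={'Host': 'outlink-topic-model.articletopic-outlink.wikimedia.org'},
    json={"lang": "en", "page_title": "Wings of Fire (novel series)"},
    # Validate the server certificate against the WMF CA bundle instead of
    # the bundle shipped with the conda environment.
    verify='/etc/ssl/certs/wmf-ca-certificates.crt',
)
print(response.text)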