Portal:Cloud VPS/Admin/Devstack magnum/PAWS dev devstack
Note
This is for dev only. Do not use this install in any production setting.
Installation
Have helm and kubectl set up on your system; kubectl will need to be configured to access your cluster. Follow the instructions for installing a more recent Kubernetes in devstack at Portal:Cloud_VPS/Admin/Devstack_magnum/Stable_xena, with the following details: deploy the cluster with a single worker node, and create the cluster template as:
<syntaxhighlight lang="bash">
openstack coe cluster template create my-template --image Fedora-CoreOS --external-network public --fixed-network private --fixed-subnet private-subnet --dns-nameserver 8.8.8.8 --network-driver flannel --docker-storage-driver overlay2 --volume-driver cinder --docker-volume-size 30 --master-flavor m1.small --flavor m1.medium --coe kubernetes --labels kube_tag=v1.20.14-rancher1-linux-amd64,hyperkube_prefix=docker.io/rancher/,cloud_provider_enabled=true
</syntaxhighlight>
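The cluster itself is then created from this template and must finish before continuing. A minimal sketch, assuming a keypair named admin was already uploaded (the cluster name here is illustrative):
<syntaxhighlight lang="bash">
# Create a cluster with one master and one worker from the template above
openstack coe cluster create my-cluster --cluster-template my-template --master-count 1 --node-count 1 --keypair admin
# Wait until the status reaches CREATE_COMPLETE
openstack coe cluster show my-cluster -c status
</syntaxhighlight>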
In /etc/magnum/magnum.conf add:
<syntaxhighlight lang="ini">
[trust]
cluster_user_trust = true
</syntaxhighlight>
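For this to take effect the magnum services have to be restarted. Under devstack they typically run as systemd units; the unit names below are an assumption and may differ on your setup:
<syntaxhighlight lang="bash">
# Restart the magnum API and conductor (devstack unit names assumed)
sudo systemctl restart devstack@magnum-api devstack@magnum-cond
</syntaxhighlight>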
On the worker node: the PAWS dev env needs its excessive access, so we set SELinux to permissive (scary; this should not be needed in an actual production deploy):
<syntaxhighlight lang="bash">
ssh -i admin core@<worker node ip>
sudo setenforce 0
</syntaxhighlight>
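To confirm SELinux is now permissive:
<syntaxhighlight lang="bash">
# Should print "Permissive"; note that setenforce does not survive a reboot
getenforce
</syntaxhighlight>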
Set up the Cinder CSI plugin, following: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md#using-the-manifests
Example cloud.conf:
<syntaxhighlight lang="ini">
[Global]
username = admin
password = secret
domain-name = default
auth-url = http://192.168.122.11/identity
tenant-id = dd808b5873f84b81b16816012e07479d
region = RegionOne
</syntaxhighlight>
The tenant-id can be found from:
<syntaxhighlight lang="bash">
openstack server show <id of a k8s cluster node>
</syntaxhighlight>
It will be listed as project_id.
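To print just that field, the openstack client's output filters can be used:
<syntaxhighlight lang="bash">
# Emit only the project_id value, suitable for pasting into cloud.conf
openstack server show <id of a k8s cluster node> -f value -c project_id
</syntaxhighlight>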
<syntaxhighlight lang="bash">
base64 -w 0 cloud.conf
git clone https://github.com/kubernetes/cloud-provider-openstack.git
cd cloud-provider-openstack
vim manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml  # replace the secret data with the base64 output above
kubectl create -f manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
kubectl apply -f manifests/cinder-csi-plugin/
</syntaxhighlight>
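To check that the plugin came up (the pod names are an assumption based on the upstream manifests):
<syntaxhighlight lang="bash">
# The controllerplugin and nodeplugin pods should be Running
kubectl -n kube-system get pods | grep csi-cinder
# cinder.csi.openstack.org should appear as a registered driver
kubectl get csidrivers.storage.k8s.io
</syntaxhighlight>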
Set up cinder as the default storage class:
sc.yaml:
<syntaxhighlight lang="yaml">
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
  availability: nova
</syntaxhighlight>
<syntaxhighlight lang="bash">
kubectl apply -f sc.yaml
</syntaxhighlight>
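To confirm dynamic provisioning works end to end, a throwaway claim can be created against the new default class; a minimal sketch (the claim name is illustrative):
<syntaxhighlight lang="bash">
# standard should be marked (default)
kubectl get storageclass
# Create a 1Gi test claim; it should reach Bound once cinder provisions a volume
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc
</syntaxhighlight>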
<syntaxhighlight lang="bash">
git clone https://github.com/toolforge/paws.git --branch openstack-dev
cd paws
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm dep up paws/
kubectl create namespace prod
helm -n prod install dev paws/ --timeout=50m
kubectl apply -f manifests/psp.yaml
</syntaxhighlight>
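The install takes a while; progress can be watched with:
<syntaxhighlight lang="bash">
# All pods in the prod namespace should eventually be Running
kubectl -n prod get pods -w
</syntaxhighlight>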
Install ingress-nginx as a NodePort DaemonSet:
<syntaxhighlight lang="bash">
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace --set controller.service.type=NodePort --set controller.kind=DaemonSet
</syntaxhighlight>
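The haproxy backend below needs the HTTP node port that ingress-nginx was assigned. It can be read from the controller service (the service name is the chart default for this release name):
<syntaxhighlight lang="bash">
# The node port mapped to 80 is the <http node port> used in the haproxy config below
kubectl -n ingress-nginx get svc ingress-nginx-controller
</syntaxhighlight>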
Add a virtual IP (the virtual IP used in this example is 192.168.122.109; if your virtual machine is on a different subnet, you may need to change this):
<syntaxhighlight lang="bash">
sudo ip addr add 192.168.122.109/24 brd 192.168.122.255 dev enp1s0
</syntaxhighlight>
Add the following to the haproxy configuration:
<syntaxhighlight lang="bash">
frontend k8s-ingress-http
    bind 192.168.122.109:80
    mode http
    default_backend k8s-ingress

frontend k8s-ingress-https
    bind 192.168.122.109:443
    mode http
    option httplog
    timeout client 1h
    acl is_public hdr(host) -i public.paws.wmcloud.org
    default_backend k8s-ingress

backend k8s-ingress
    mode http
    option httplog
    option tcp-check
    balance roundrobin
    timeout server 1h
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server cluster <worker node ip>:<http node port>
</syntaxhighlight>
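After editing, validate and reload haproxy (assuming the standard config path):
<syntaxhighlight lang="bash">
# Check the config for errors, then reload without dropping connections
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl reload haproxy
</syntaxhighlight>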
Update apache's ports.conf so it binds only to the host address and does not conflict with the haproxy frontend on the virtual IP:
<syntaxhighlight lang="diff">
-Listen 80
+Listen 192.168.122.10:80
</syntaxhighlight>
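Then restart apache so it rebinds (on a devstack Ubuntu host the unit is typically apache2):
<syntaxhighlight lang="bash">
sudo systemctl restart apache2
</syntaxhighlight>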
Add the following to /etc/hosts:
<syntaxhighlight lang="bash">
192.168.122.109 wiki.mediawiki.local
192.168.122.109 mediawiki.local
192.168.122.109 hub.paws.local
</syntaxhighlight>
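At this point the hub should answer through haproxy and the ingress; a quick smoke test against JupyterHub's standard login endpoint:
<syntaxhighlight lang="bash">
# Should return a response from JupyterHub via the virtual IP
curl -i http://hub.paws.local/hub/login
</syntaxhighlight>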