Portal:Cloud VPS/Admin/Devstack magnum/PAWS dev devstack



This is for dev only. Do not use this install in any production setting.


Have helm and kubectl set up on your system; kubectl will need to be configured to access your cluster. Follow the instructions for installing a more recent k8s in devstack at Portal:Cloud_VPS/Admin/Devstack_magnum/Stable_xena, with the following details: deploy the cluster with a single worker node, and create the cluster template as:

openstack coe cluster template create my-template --image Fedora-CoreOS --external-network public --fixed-network private --fixed-subnet private-subnet --dns-nameserver --network-driver flannel --docker-storage-driver overlay2 --volume-driver cinder --docker-volume-size 30 --master-flavor m1.small --flavor m1.medium --coe kubernetes --labels kube_tag=v1.20.14-rancher1-linux-amd64,,cloud_provider_enabled=true

In /etc/magnum/magnum.conf add:

cluster_user_trust = true
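For context, cluster_user_trust belongs in magnum's [trust] option group, so the edit looks like the sketch below (restart the magnum services afterwards so the change takes effect):

```ini
# /etc/magnum/magnum.conf
# cluster_user_trust lives in the [trust] section; it lets cluster
# services act on behalf of the user via a Keystone trust, which the
# cinder volume driver relies on.
[trust]
cluster_user_trust = true
```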

On the worker node: in order to give the PAWS dev env its excessive access (scary; the host needs this, though likely not needed in an actual prod deploy) we put SELinux into permissive mode on the k8s worker node:

ssh -i admin core@<worker node ip>
sudo setenforce 0
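Note that setenforce 0 does not survive a reboot. If the worker node restarts, the persistent switch is the SELINUX= line in the SELinux config; a sketch of that edit, dry-run here against a scratch copy (on the node the target would be /etc/selinux/config, edited with sudo):

```shell
# Dry-run against a scratch copy; on the worker node the target is
# /etc/selinux/config and the edit needs sudo.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > selinux-config.demo
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' selinux-config.demo
grep '^SELINUX=' selinux-config.demo   # SELINUX=permissive
```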

Setup cinder csi:

example cloud.conf (the cinder CSI plugin reads these keys from a [Global] section; point auth-url at your keystone endpoint):

[Global]
username = admin
password = secret
domain-name = default
auth-url =
tenant-id = dd808b5873f84b81b16816012e07479d
region = RegionOne

tenant-id can be found from:

openstack server show <id of a k8s cluster node>

It will be listed as project_id.

base64 -w 0 cloud.conf
git clone
cd cloud-provider-openstack
vim manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml # replace with base64 above output
kubectl create -f manifests/cinder-csi-plugin/csi-secret-cinderplugin.yaml
kubectl apply -f manifests/cinder-csi-plugin/
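The base64 step above can be sanity-checked with a round trip before pasting the value into csi-secret-cinderplugin.yaml (the cloud.conf contents here are a stand-in for your real file; -w 0 disables line wrapping so the value pastes cleanly into YAML):

```shell
# Stand-in cloud.conf; substitute your real file.
printf '[Global]\nusername = admin\n' > cloud.conf.demo
encoded=$(base64 -w 0 cloud.conf.demo)
# Decoding the value should reproduce the file exactly.
echo "$encoded" | base64 -d
```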

Set up cinder as the default storage class. sc.yaml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
  availability: nova
kubectl apply -f sc.yaml
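To check that the default class works end to end, a small PVC with no storageClassName should bind against it (the name and size here are illustrative):

```yaml
# pvc-test.yaml - omits storageClassName, so the default
# cinder-backed "standard" class is used.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply it with kubectl apply -f pvc-test.yaml, confirm the claim reaches Bound, then delete it.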
git clone --branch openstack-dev
cd paws
helm repo add jupyterhub
helm dep up paws/
kubectl create namespace prod
helm -n prod install dev paws/ --timeout=50m
kubectl apply -f manifests/psp.yaml

helm upgrade --install ingress-nginx ingress-nginx --repo --namespace ingress-nginx --create-namespace --set controller.service.type=NodePort --set controller.kind=DaemonSet

add a virtual IP (if your virtual machine is on a different subnet, you may need to change the IP used in this example):

ip addr add <virtual ip>/32 dev enp1s0

add following to haproxy:

frontend k8s-ingress-http
    mode http
    default_backend k8s-ingress

frontend k8s-ingress-https
    mode http
    option httplog
    timeout client 1h
    acl is_public hdr(host) -i
    default_backend k8s-ingress

backend k8s-ingress
    mode http
    option httplog
    option tcp-check
    balance roundrobin
    timeout server 1h
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server cluster <worker node ip>:<http node port>

update apache ports.conf, removing the Listen 80 line so haproxy can bind port 80:

-Listen 80

add the following to /etc/hosts, pointing the names at the virtual IP:

<virtual ip> wiki.mediawiki.local mediawiki.local hub.paws.local