Portal:Cloud VPS/Admin/Devstack magnum/PAWS dev devstack

Revision as of 13:22, 11 January 2022 by imported>Michael DiPietro (→‎Installation)


This is for dev only. Do not use this install in any production setting.


Have helm and kubectl set up on your system; kubectl will need to be configured to access your cluster. Follow the instructions for installing a more recent Kubernetes in devstack at Portal:Cloud_VPS/Admin/Devstack_magnum/Stable_xena, with the following details: deploy the cluster with a single worker node, and create the cluster template as:

openstack coe cluster template create my-template \
  --image Fedora-CoreOS \
  --external-network public \
  --fixed-network private \
  --fixed-subnet private-subnet \
  --dns-nameserver \
  --network-driver flannel \
  --docker-storage-driver overlay2 \
  --volume-driver cinder \
  --docker-volume-size 30 \
  --master-flavor m1.small \
  --flavor m1.medium \
  --coe kubernetes \
  --labels kube_tag=v1.20.14-rancher1-linux-amd64,cloud_provider_enabled=true
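The template above can then be used to deploy the single-worker cluster. A minimal sketch, assuming a keypair named admin already exists (the cluster name my-cluster is arbitrary):

openstack coe cluster create my-cluster --cluster-template my-template --master-count 1 --node-count 1 --keypair admin

Once the cluster reaches CREATE_COMPLETE, openstack coe cluster config my-cluster will write out a kubeconfig that kubectl can use.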

In /etc/magnum/magnum.conf, under the [trust] section, add:

[trust]
cluster_user_trust = true
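Magnum only reads its configuration at startup, so restart the magnum services after the change. In a devstack deployment the systemd units are typically named as below (an assumption about your devstack setup; check with systemctl list-units 'devstack@magnum*'):

sudo systemctl restart devstack@magnum-api devstack@magnum-cond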

On the worker node: to give the PAWS dev environment the elevated access it needs, set SELinux to permissive on the k8s worker node:

ssh -i admin core@<worker node ip>
sudo setenforce 0
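You can confirm the change took effect; note that setenforce does not persist across reboots:

getenforce

This should print Permissive.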

Set up the Cinder CSI plugin.

Example cloud.conf (the [Global] section header is required):

[Global]
domain-name = default
auth-url =
tenant-id = dd808b5873f84b81b16816012e07479d
region = RegionOne

The tenant-id can be found from:

openstack server show <id of a k8s cluster node>

It will be listed as project_id.
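The value can also be pulled out directly with the client's output-format flags, and the finished cloud.conf is then typically loaded into the cluster as a secret for the CSI manifests to consume. A sketch, assuming the upstream cloud-provider-openstack convention of a cloud-config secret in kube-system:

openstack server show <id of a k8s cluster node> -f value -c project_id
kubectl create secret -n kube-system generic cloud-config --from-file=cloud.conf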

Set up Cinder as the default storage class, sc.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
  availability: nova

kubectl apply -f sc.yaml
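To confirm the class was created and marked as the default:

kubectl get storageclass

The standard class should be listed with a (default) marker next to its name.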
git clone --branch openstack-dev
cd paws
helm repo add jupyterhub
helm dep up paws/
kubectl create namespace paws-dev
helm -n paws-dev install dev paws/ --timeout=50m
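Once the install finishes, check that the release deployed and its pods come up, using the namespace and release names from the commands above:

helm -n paws-dev status dev
kubectl -n paws-dev get pods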