
User:Jeena Huneidi/Kubernetes Migration

Revision as of 21:28, 26 March 2020 by Jeena Huneidi (adding content)

Migrating a service to Kubernetes

A Guide Using HelloWorldOid


  1. Create .pipeline/blubber.yaml
  2. Generate a Dockerfile using Blubber
  3. Create a Docker image
  4. Create a Helm deployment chart
  5. Test in Minikube (try local-charts if you want to test integrations with other services/apps or do more development!)
  6. Create .pipeline/config.yaml
  7. Update integration/config to run the pipeline you created for testing and publishing your service
  8. Run benchmarks and update the deployment chart
  9. Talk to SRE about deploying to production

Set Up

We’re going to migrate HelloWorldOid to Kubernetes!


Clone the Repositories:
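This guide touches several repositories: helloworldoid itself, integration/config, deployment-charts, and local-charts. As a sketch of the clone step, assuming the repositories live on Wikimedia's Gerrit (the exact paths below are assumptions — check each project's page for its canonical clone URL):

```shell
# Clone the repositories used in this guide. The Gerrit paths are
# assumptions; substitute the canonical clone URLs for your copies.
git clone "https://gerrit.wikimedia.org/r/blubber-doc/example/helloworldoid"
git clone "https://gerrit.wikimedia.org/r/integration/config"
git clone "https://gerrit.wikimedia.org/r/operations/deployment-charts"
git clone "https://gerrit.wikimedia.org/r/releng/local-charts"
```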

Creating a Docker Image

Services running in production need a Docker image built and pushed to the Wikimedia Docker registry by CI. Notice the .pipeline/blubber.yaml file in the helloworldoid directory:


[image or code here]

blubber.yaml tells the Blubber service what operating system, packages, libraries, and files are needed in your Docker image. We need a Docker image because services in Kubernetes must run in a container. Blubber outputs a Dockerfile that can be used to build your image. Some tutorials can be found here: [[1]]
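As a rough sketch of the file's shape (the actual file in the repo is authoritative — the base image, variant names, and entrypoints below are assumptions for illustration, using Blubber's v4 config format):

       # .pipeline/blubber.yaml -- illustrative sketch only; base image,
       # variant names, and entrypoints are assumptions, not the repo's config.
       version: v4
       base: docker-registry.wikimedia.org/nodejs-slim
       lives:
         in: /srv/service
       variants:
         test:
           node: {requirements: [package.json]}
           entrypoint: [npm, test]
         production:
           node: {requirements: [package.json]}
           entrypoint: [node, server.js]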

1. Use the blubberoid service to create our Dockerfile from the Blubber configuration. Switch to the root directory of the helloworldoid repo:

       curl -s "" \
               -H 'content-type: application/yaml' \
               --data-binary @".pipeline/blubber.yaml" > Dockerfile

2. Build the Docker image from the saved Dockerfile (the next step assumes the tag hwoid): docker build -t hwoid -f Dockerfile .

3. Test the docker image:

       docker run -d -p 8001:8001 hwoid
       curl localhost:8001

4. Clean up:

       docker ps
       docker stop <container id>
       docker rm <container id>

Publishing Docker Images

It's great that our docker image runs, but we should take advantage of the continuous integration pipeline to build our images and publish them to a public repository so that others can use them too!

1. Switch over to the helloworldoid repo's .pipeline folder and notice config.yaml:


[image or code here]

config.yaml describes what actions run in the continuous integration pipeline and what gets published; for example, tests and lint must pass before a Docker image is published. If you want to create your own pipeline configuration, some tutorials can be found here: [[2]]
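As a sketch of the shape such a file takes (pipeline and stage names here are assumptions, as are the Blubber variants they reference — the repo's actual config.yaml is authoritative):

       # .pipeline/config.yaml -- illustrative sketch; pipeline/stage names
       # and the blubber variants referenced are assumptions.
       pipelines:
         test:
           blubberfile: blubber.yaml
           stages:
             - name: test
               build: test
               run: true
         publish:
           blubberfile: blubber.yaml
           stages:
             - name: production
               build: production
               publish:
                 image: true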

2. Edit config.yaml:

3. Switch to the integration/config repo.

4. Edit jjb/project-pipelines.yaml:


Edit the helloworldoid project to include a publish pipeline
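The stanza might look roughly like this (the job-template names are assumptions, copied from the pattern other pipeline projects in this file follow):

       # jjb/project-pipelines.yaml -- sketch only; template names are
       # assumptions based on neighboring pipeline projects.
       - project:
           name: helloworldoid
           pipeline:
             - test
             - publish
           jobs:
             - 'trigger-{name}-pipeline-{pipeline}'
             - '{name}-pipeline-{pipeline}'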

5. Edit zuul/layout.yaml:


Edit the helloworldoid project to have a publish pipeline
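As a sketch (the job names are assumptions following the trigger-&lt;project&gt;-pipeline-&lt;name&gt; convention; check neighboring entries in the file):

       # zuul/layout.yaml -- sketch only; job names are assumptions.
       - name: helloworldoid
         test:
           - trigger-helloworldoid-pipeline-test
         gate-and-submit:
           - trigger-helloworldoid-pipeline-test
         postmerge:
           - trigger-helloworldoid-pipeline-publish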

Congratulations! After these changes are merged, your images will be published to the Wikimedia Docker registry! The images in the registry can be seen here: [[3]]

Our docker image has been built, but we still need a way to run it in Kubernetes.

Creating a Helm Chart

We use Helm charts to configure our Kubernetes deployments.

1. Switch to the deployment-charts repo.

2. Use the script to create our initial chart.

3. Edit the files created by the script with specific configuration for our service. Let's take a look:


The script creates a chart skeleton: typically a Chart.yaml with chart metadata, a values.yaml holding the default configuration, and a templates/ directory containing the Kubernetes resource templates (deployment, service, and so on).
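As a sketch, the generated values.yaml might be edited to look something like this (the image path and port are assumptions carried over from the earlier Docker steps, not the scaffold's actual contents):

       # charts/helloworldoid/values.yaml -- illustrative; image path and
       # port are assumptions based on the docker run example above.
       docker:
         registry: docker-registry.wikimedia.org
       main_app:
         image: wikimedia/helloworldoid
         port: 8001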

Testing the Helm Chart

Let's test that our chart works using the local-charts environment. Add HelloWorldOid to local-charts:

1. In the local-charts repo, update requirements.yaml:

2. Add the configuration necessary for HelloWorldOid in values.yaml:

3. Try running HelloWorldOid in Kubernetes: Type make deploy in the terminal to deploy to Minikube.
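The two edits above are in separate files; roughly, they might look like this (chart version and repository are assumptions — match whatever the chart's Chart.yaml actually declares):

       # local-charts/requirements.yaml -- add the chart as a dependency
       dependencies:
         - name: helloworldoid
           version: 0.0.1          # assumption: match the chart's Chart.yaml
           repository: "https://helm-charts.wikimedia.org/stable/"

       # local-charts/values.yaml -- enable and configure the service
       helloworldoid:
         enabled: true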

Whoops, I forgot to add something to our deployment chart. Let's change it now and run make update to update our deployment.

Getting Deployed to Production

We have a deployment chart. What does it take to get our app deployed to production?

Running Benchmarks

Now that we know our HelloWorldOid runs in Kubernetes, we can run benchmarks to determine how many resources it needs. This is required for deployment to production. Follow this tutorial to benchmark:
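As a minimal sketch of the idea (the load tool and the pod label are assumptions — any load generator works, and your chart determines the labels): drive traffic at the service and watch its resource usage.

```shell
# Generate load against the service (ApacheBench is an arbitrary choice).
ab -n 10000 -c 50 http://localhost:8001/

# While the load runs, observe CPU and memory to choose requests/limits.
# The label selector is an assumption; check what your chart sets.
kubectl top pod -l app=helloworldoid
```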

Update the deployment-charts chart with the values discovered during the benchmark tests.
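Concretely, that usually means setting the container's resource requests and limits in the chart's values; the figures below are placeholders, not recommendations:

       # charts/helloworldoid/values.yaml -- placeholder figures; substitute
       # the values observed during benchmarking.
       resources:
         requests:
           cpu: 100m
           memory: 128Mi
         limits:
           cpu: 500m
           memory: 256Mi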