
Kubernetes/Kubernetes Workshop/Step 8

Revision as of 06:22, 11 March 2021 by imported>Wolfgang Kandek (Wolfgang Kandek moved page Kubernetes/Kubernetes Workshop Step 8 to Kubernetes/Kubernetes Workshop/Step 8)

Step 8: K8s Manageability and Security

Kubernetes is a new execution environment for code, in that sense quite similar to a new operating system, albeit one with familiar roots. It is powerful, flexible and complex. Like other software, it becomes more mature with each new version. A new version typically also addresses vulnerabilities that an attacker could use to abuse the system.

As k8s became more successful, guidelines came out on how to use and configure it safely. So far we have not looked at or followed these guidelines, which is roughly the equivalent of running a Linux system as root off a not-too-old CD/DVD.

Typical guidelines:

  • Container images should be kept up to date
  • Containers should run as a normal user, not as root
  • Containers should run in a separate namespace, not in the shared default namespace
  • The container registries that images may be pulled from should be limited
  • Kubernetes itself should be kept updated
  • The underlying OS that runs kubernetes should be managed and updated
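
Several of these guidelines map directly onto fields of a pod specification. The sketch below illustrates the idea; the names and the registry are made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
  namespace: myapp-dev              # a dedicated namespace, not "default"
spec:
  containers:
  - name: app
    # image pinned to an explicit tag, pulled from a vetted registry (hypothetical)
    image: docker-registry.example.org/app:1.2.3
    securityContext:
      runAsNonRoot: true            # the kubelet refuses to start the container as uid 0
      runAsUser: 65534              # "nobody"
```

We will look at the non-root and namespace pieces hands-on below.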

Hands-on: Container image updates

Over time, the software installed in a container becomes outdated. Even the base images that we have used so far can already be out of date when we download them.

We have been using base images from Docker Hub rather carelessly, without much scrutiny of their origin or patch level. That is ok for learning and exploration, but not for a service in production. We have at least been using quite basic images that are under the control of trustworthy organizations such as Ubuntu and Debian, which lends some additional confidence. But there are many (millions of) docker images on Docker Hub, and most contain vulnerabilities, many of them high severity (see http://dance.csc.ncsu.edu/papers/codaspy17.pdf). Since using an available image is often the quickest way to get an application up and running, it is tempting to do so, but it is quite possibly a bad idea for a production system. In addition, attackers have started to infiltrate docker registries - see this news article for a recent example.

If you do use any external images, be sure to exercise some caution as to their purpose and as to what data you store in those systems.
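
One simple safeguard when consuming an external image is to pin it by content digest rather than by a mutable tag, so that every rebuild uses the exact image you inspected. The sha256 value below is a placeholder, not a real digest; `docker pull` prints the digest of the image it fetched:

```dockerfile
# Pin the base image to the digest you vetted. A tag like "ubuntu:latest"
# can silently change underneath you; a digest cannot.
FROM ubuntu@sha256:<digest-you-verified>
```
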

Let’s perform a quick check on the images we have been using:

Dockerfile:

FROM ubuntu
ENV DEBIAN_FRONTEND=noninteractive
# -s simulates the upgrade, only listing what would change
RUN apt-get update && apt-get upgrade -s

By reducing the /etc/apt/sources.list file to only security sources we get:

The following packages will be upgraded:
 gcc-10-base libgcc-s1 libgnutls30 libseccomp2 libstdc++6 perl-base
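
A security-only /etc/apt/sources.list can be as short as a single line. The following assumes an Ubuntu focal base; adjust the release name to whatever your image actually uses:

```
deb http://security.ubuntu.com/ubuntu focal-security main restricted universe
```
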

It makes sense to include OS updates in the Dockerfile and to rebuild and redeploy application images in accordance with the standards set by your security team.
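
If rebuilds are triggered automatically, the output of a simulated upgrade can be checked by a small script. Here is a sketch in Python, assuming apt's usual `Inst <package> ...` simulation lines; the helper name and the sample output are made up for illustration:

```python
def pending_upgrades(apt_simulate_output: str) -> list:
    """Return the package names that 'apt-get upgrade -s' would upgrade."""
    pkgs = []
    for line in apt_simulate_output.splitlines():
        # apt prefixes each simulated install/upgrade action with "Inst"
        if line.startswith("Inst "):
            pkgs.append(line.split()[1])
    return pkgs

sample = """\
Inst libgcc-s1 [10.2.0-5ubuntu1] (10.2.0-5ubuntu2 Ubuntu:20.04/focal-security [amd64])
Inst perl-base [5.30.0-9ubuntu0.2] (5.30.0-9ubuntu0.3 Ubuntu:20.04/focal-security [amd64])
Conf libgcc-s1 (10.2.0-5ubuntu2 Ubuntu:20.04/focal-security [amd64])
"""
print(pending_upgrades(sample))  # a rebuild is warranted when this list is non-empty
```
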

Hands-on: Run the container as non-root

Containers run as root by default, and all the applications that we have run so far have been running as root. Let’s check.

Dockerfile:

FROM ubuntu
RUN apt-get update && apt-get install -y ca-certificates python3
ADD printuid.py /
CMD ["/printuid.py"]

printuid.py:

#!/usr/bin/python3
import os
print(os.getuid())
  • docker build --tag printuid .
  • docker run printuid

This should give you “0” as output, which means you are running as root.

Now run it on minikube

Quick sidebar: we have been running images on minikube by uploading them to Docker Hub and pulling them down again. But minikube can use your local images as well. The catch is that it runs a separate docker installation that keeps its own images. In order to build for that docker we have to switch environments:

  • eval $(minikube docker-env)
    • To undo this: eval $(minikube docker-env --unset)
  • docker images # to check on images present in minikube

Rebuild the image and retag it so that minikube can access it locally. Remember that in the past we avoided this step by uploading the image to Docker Hub and pulling it from there, but this way we can save some cycles and work entirely locally.

  • docker build --tag printuid .
  • docker images

Now run the image as a job and check the output with:

  • kubectl apply -f job.yaml
  • kubectl get pods
  • kubectl logs <pod in question>

Note the image is just specified as “printuid” and the imagePullPolicy is “Never”.

job.yaml:

apiVersion: batch/v1
kind: Job
metadata:
 name: printuid
spec:
 template:
   spec:
     containers:
      - name: printuid
        image: printuid:latest
        imagePullPolicy: Never
     restartPolicy: Never

Ok, we are running as root here as well. The user can be specified in the Dockerfile such as:

FROM ubuntu
RUN apt-get update && apt-get install -y ca-certificates python3
ADD printuid.py /
USER nobody
CMD ["/printuid.py"]

Rebuild the image and rerun it to check. Sample output (redacted for length):

/k8s/security/root$ docker build --tag printuid .
Sending build context to Docker daemon  4.096kB
Step 1/5 : FROM ubuntu
---> adafef2e596e
…
Successfully built 6438746fda15
Successfully tagged printuid:latest
/k8s/security/root$ docker run printuid
65534
/k8s/security/root$ kubectl apply -f job.yaml
job.batch/printuid created
/k8s/security/root$ kubectl get pods
NAME                                            READY   STATUS      RESTARTS   AGE
printuid-djbgd                                  0/1     Completed   0          6s
/k8s/security/root$ kubectl logs printuid-djbgd

65534
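
The uid 65534 in the output is the conventional “nobody” user on Debian-based systems. For illustration, a small lookup that maps a numeric uid back to a name (the helper is made up; `pwd` is the standard Unix password-database module):

```python
import os
import pwd

def uid_to_name(uid: int) -> str:
    """Map a numeric uid to a user name, falling back to the number itself."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        # uid has no entry in /etc/passwd
        return str(uid)

print(uid_to_name(os.getuid()))  # e.g. "nobody" inside the container above
```
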

Now let’s adapt the simpleapache image to run as non-root. Before looking at the Dockerfile below, give it a try and see what needs to change for it to work. Dockerfile:

FROM ubuntu
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_RUN_DIR /var/run
ENV APACHE_PID_FILE /var/run/apache2.pid
RUN echo 'Hello Apache in Docker' > /var/www/html/index.html
RUN sed -r -i 's?isten 80?isten 8080?' /etc/apache2/ports.conf
RUN sed -r -i 's?:80?:8080?' /etc/apache2/sites-available/000-default.conf
RUN chown -R nobody /var/log/apache2
RUN chown -R nobody /var/run
RUN chown -R nobody /var/lock
USER nobody
EXPOSE 8080
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]

To return to the local docker:

  • eval $(minikube docker-env --unset)

Hands-on: Namespaces

Namespaces are used to logically partition a kubernetes cluster, for example into environments for development, staging and production or for different teams or applications. So far all of our containers have been running in the same namespace called default. Using a namespace will provide separation between these environments, minimizing the possibility of name collisions and enhancing manageability of the cluster.

Namespaces are a kubernetes construct; they need to be created and then referenced in the deployment files. Let’s run our initial example in two separate namespaces, one used for development and one for production.

cronpywpchksumbot-dev.yaml:

apiVersion: v1
kind: Namespace
metadata:
 name: cronpywpchksumbot-dev

cronpywpchksumbot-prod.yaml:

apiVersion: v1
kind: Namespace
metadata:
 name: cronpywpchksumbot-prod

cronpywpchksumbotdeployment1.yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
 name: cronpywpchksumbot
 namespace: cronpywpchksumbot-dev
spec:
 schedule: "*/5 * * * *"
 jobTemplate:
   spec:
     template:
       spec:
         containers:
         - name: pywpchksumbot
           image: <userid>/pywpchksumbot
           imagePullPolicy: IfNotPresent
         restartPolicy: OnFailure

cronpywpchksumbotdeployment2.yaml:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
 name: cronpywpchksumbot
 namespace: cronpywpchksumbot-prod
spec:
 schedule: "*/15 * * * *"
 jobTemplate:
   spec:
     template:
       spec:
         containers:
         - name: pywpchksumbot
           image: <userid>/pywpchksumbot
           imagePullPolicy: IfNotPresent
         restartPolicy: OnFailure

You now have to add --namespace=<name of namespace> to kubectl commands to get information on the cronjobs. Alternatively, use the --all-namespaces option to get information across all namespaces. Sample output:

/security/namespace$ kubectl get cronjobs
No resources found in default namespace.
/security/namespace$ kubectl get cronjobs --namespace=cronpywpchksumbot-dev
NAME                SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
cronpywpchksumbot   */5 * * * *   False     0        <none>          85s

Note that using namespaces in that way is not enforced by kubernetes, but is simply a convention that developers and SRE/devops have to follow.

There are many more management and security topics in kubernetes that we can take a look at:

  • White-list registries: can be done through the Open Policy Agent
  • Kubernetes releases a new minor version roughly quarterly and maintains the most recent minor versions (roughly a year’s worth of releases) with security updates
  • OS updates
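
As a closing illustration, the registry white-listing idea boils down to a check like the following. In a real cluster this would be enforced by an admission controller such as the Open Policy Agent, not by application code, and the registry names here are made up:

```python
# Hypothetical allow-list of registries images may be pulled from.
ALLOWED_REGISTRIES = ("docker-registry.example.org/", "registry.internal/")

def image_allowed(image: str) -> bool:
    """True if the image reference comes from an approved registry."""
    return image.startswith(ALLOWED_REGISTRIES)

print(image_allowed("docker-registry.example.org/pywpchksumbot:1.0"))  # True
print(image_allowed("ubuntu"))  # False: bare names resolve to Docker Hub
```
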