Kubernetes/Kubernetes Workshop/Build a service application on Kubernetes

Overview

At the end of this module, you should be able to:

  • Run a web server application on Kubernetes (minikube).
  • Understand service networking on k8s.
  • Run replicas of an application.
  • Access and create logs on a Pod.

Run and Access Your Service Applications on a Web Browser

Kubernetes (k8s) has extensive support for running service applications such as a web server. k8s takes care of scaling and failover for your application, and provides deployment patterns, automated restarts, load balancing, and dynamic scaling, among other features.

Build and Access the Application via the Internet (Dockerfile)

In this step, you will:

  • Set up an apache web server using a Dockerfile.
  • Build your web server on Docker's Ubuntu image.

Dockerfile

You can reference the previous module for a Dockerfile refresher.

  • Create an empty directory and a Dockerfile using your preferred text editor, e.g. vim:
# Build on the official Ubuntu base image
FROM ubuntu
# Prevent apt-get from prompting for input during the image build
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y apache2
# Serve a minimal "Hello World" page
RUN echo '<html><body>Hello World</body></html>' >/var/www/html/index.html
# Suppress Apache's "Could not reliably determine the server's fully qualified domain name" warning
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN service apache2 restart
EXPOSE 80
# Run Apache in the foreground so the container keeps running
CMD ["apachectl","-DFOREGROUND"]
  • Build and run your image on a public port:
$ docker login

$ docker build -t <your-dockerhub-username>/<given-image-name>:<tag> .
---> b9818132e617
Successfully built b9818132e617
Successfully tagged <your-dockerhub-username>/<given-image-name>:<tag>

$ docker image ls

$ docker run -p 80:80 <image_id>

Note:

  • ARG DEBIAN_FRONTEND=noninteractive is there to prevent apt-get from prompting the user during installs.
  • If you have previously started a container using port 80, you will get an error stating that the port is already allocated. In that case, find and stop the running container:
$ docker ps
$ docker stop <id>
  • Lastly, access your web server by running:
$ curl http://localhost
<html><body>Hello World</body></html>
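
The kubectl run command in the next section references the image by a Docker Hub path, so the image needs to be available there (or otherwise reachable by minikube). If you haven't pushed it yet, a push after docker login would look like this (using the placeholder names from above):
$ docker push <your-dockerhub-username>/<given-image-name>:<tag>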

Step 1 - Working with Kubernetes Pods

You can now run the service on minikube following the steps listed in Module 1, and return to this module to expose the pod on a public port by creating a service.

  • Run the image on the cluster and expose the running pod on port 80:
$ kubectl run <pod_name> --image=<your_username>/<image_name>:<tag> --port=80
pod/<pod_name> created
$ kubectl expose pod <pod_name> --type=LoadBalancer --port=80
service/<pod_name> exposed
  • Get the cluster’s URL:
$ minikube service <pod_name> --url
http://192.168.49.2:31688 # This is sample output, yours may vary

$ curl <service_url>
<html><body>Hello World</body></html>

You can also open the service URL in a web browser.
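
At this point you can also look at the Pod's logs, one of the objectives listed above. kubectl logs shows whatever the container writes to stdout and stderr; with this Ubuntu Apache image, request logs typically go to files under /var/log/apache2 inside the container, so the sketch below also tails the access log directly:
$ kubectl logs <pod_name>
$ kubectl exec <pod_name> -- tail /var/log/apache2/access.log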

Note:

  • To find out more information about the Pods and the Service, check out the cheat sheet.
  • The --port=80 parameter in the command kubectl run <pod_name> is the port the application inside the container listens on (Apache listens on port 80 by default). In contrast, in Module 1 the application did not listen on any port.
  • The --port=80 parameter in the command kubectl expose pod <pod_name> is the port that the Kubernetes service will listen on. A Kubernetes Service fronts the Pod (or Pods) with a stable IP, DNS name, and port. It also load-balances traffic to Pods.
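
To see the Service object that kubectl expose created, you can inspect it directly. kubectl expose names the Service after the resource it exposes, so the placeholder below is the same <pod_name> used above:
$ kubectl get service <pod_name>
$ kubectl describe service <pod_name>   # shows the Service type, cluster IP, port, node port, and endpoints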

Create a Deployment

In the previous section, we packaged our application as a container, ran it as a Pod, and exposed it through a Service. While you can run bare Pods as-is on Kubernetes, this is not very useful: they don't give you any of the benefits associated with cloud-native applications, such as self-healing, scaling (replication), and easy updates and rollbacks. In practice, you will most often wrap Pods inside a Deployment object. Under the hood, Deployments use another object called a ReplicaSet. ReplicaSets are responsible for self-healing and scaling, while Deployments manage ReplicaSets and add rollouts and rollbacks.

Let's now create a Deployment with a single Pod replica to see this in action. We will then scale up this Deployment from 1 to 5 replicas.
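
If you would like to preview the Deployment manifest that kubectl create deployment generates, without actually creating anything, you can do a client-side dry run first (a sketch using the same placeholder image name):
$ kubectl create deployment <deployment_name> --image=<your_username>/<image_name>:<tag> --dry-run=client -o yaml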

  • Create a Deployment using the Docker image from the previous section:
$ kubectl create deployment  <deployment_name> --image=<your_username>/<image_name>:<tag> 
deployment.apps/<deployment_name> created
  • List and describe your Pods and Deployments e.g. if you specified “vol2” as <deployment_name>:
$ kubectl get pods
NAME                    READY   STATUS             RESTARTS         AGE
vol2-779c5f4b96-7vg8q   1/1     Running            0                4s

$ kubectl get deployments
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
vol2   1/1     1            1           105s

$ kubectl describe deployment <deployment_name>
Name:                   vol2
Namespace:              default
CreationTimestamp:      Tue, 26 Jul 2022 11:39:05 +0000
Labels:                 app=vol2
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=vol2
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
<...>

After the initial rollout, we have one replica of the app running. We will now perform a scaling operation.
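
Before scaling, you can also look at the ReplicaSet the Deployment created to manage its Pods. Its name is the Deployment name plus a pod-template hash, which is the same hash that appears in the Pod name above (779c5f4b96 in the sample output):
$ kubectl get replicaset   # e.g. vol2-779c5f4b96 in the sample above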

  • Run the following command to scale up to 5 and verify the operation:
$ kubectl scale --current-replicas=1 --replicas=5 deployment <deployment_name>
deployment.apps/<deployment_name> scaled

$ kubectl get deployment <deployment_name>
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
vol2           5/5     5            5           56m

Note: --current-replicas is a safeguard: the scale operation is only applied if the actual number of replicas matches the value passed to --current-replicas.
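
For example, to scale back down only if the Deployment is currently running 5 replicas (a sketch using the sample name vol2), you could run the following; if the live count differs, kubectl reports the mismatch and leaves the Deployment unchanged:
$ kubectl scale --current-replicas=5 --replicas=1 deployment vol2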

  • Expose the Deployment on port 80 through a Service:
$ kubectl expose deployment <deployment_name> --type=LoadBalancer --port=80
service/<deployment_name> exposed
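
You can confirm that this new Service targets all five replicas by listing its endpoints; the Service created by kubectl expose shares the Deployment's name:
$ kubectl get endpoints <deployment_name>   # lists the Pod IPs the Service balances across
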
  • Get your Deployment’s URL:
$ minikube service <deployment_name> --url

$ curl <service_url>
<html><body>Hello World</body></html>
  • You can verify that the Service load-balances across the replicated Pods by changing the page served by one of them and then requesting the Service URL again:
$ kubectl get pods
$ kubectl exec -it <chosen_pod_name> -- /bin/bash
cd /var/www/html                      # the next commands run inside the Pod's container
echo "<html><body>Hello 2nd World</body></html>" > ./index.html
exit
$ minikube service <deployment_name> --url
$ curl <service_url>

Note: Run the curl command several times; requests are distributed across the replicas, so some responses will return the modified page while others still return the original.
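
A quick way to send several requests in a row is a small shell loop (a sketch; <service_url> is the URL printed by minikube service):
$ for i in $(seq 1 10); do curl -s <service_url>; done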

  • Shut down minikube after you have practiced to your satisfaction:
$ minikube stop

Next Module

Module 3: Setting up Infrastructure as Code (IaC) in Kubernetes

Previous Module

Module 1: Set up a batch application on Kubernetes