Google Associate Cloud Engineer Topic: Getting Started with Google Kubernetes Engine Part 2
December 20, 2022

6. Step 06 – Kubernetes Journey – Autoscaling, Config Map and Secrets

Welcome back. The resizing of the cluster took some time, but finally it was done; it took about 10 to 15 minutes. And now, through Cloud Shell and the Cloud Console, I'm able to see that my cluster has just two nodes. Let it refresh. Yeah, you can see that there are just two nodes. As we discussed in the last step, we are not really happy about manually increasing the number of instances and nodes. That's where you can go for auto-scaling. You can set up auto-scaling both for your microservice and for your cluster. How do we set up auto-scaling for our microservice? Again, it's a very, very simple command. And since we are playing with the microservice, or the REST API, we need to use kubectl. So let's go in and say kubectl autoscale deployment. We want to auto-scale the deployment. Which deployment? The hello-world-rest-api deployment. And what do we want to do? We want a maximum of, let's say, four instances. How do I decide when to auto-scale? I would recommend doing it based on CPU percentage. So cpu-percent is equal to, let's say, 70. You can go up to a maximum of four instances, scaling up and down, trying to achieve a CPU utilisation of 70%. Let's press enter. Cool. That's kubectl autoscale deployment hello-world-rest-api, with max equal to 4 and cpu-percent equal to 70. And how does this auto-scaling work internally? When I execute this command, the kubectl autoscale command, what gets created inside Kubernetes is something called a "horizontal pod autoscaler configuration." This is also called an HPA. And if you do a kubectl get hpa, you should see something of this kind. You can see the name, that it refers to this deployment, and the target.
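For reference, here is the shape of those commands, with the deployment name hello-world-rest-api as used in this course (the sample output below is illustrative, not an exact capture):

    # Autoscale the deployment: up to 4 pods, targeting 70% average CPU utilisation
    kubectl autoscale deployment hello-world-rest-api --max=4 --cpu-percent=70

    # Inspect the horizontal pod autoscaler (HPA) that this creates
    kubectl get hpa
    # NAME                   REFERENCE                         TARGETS         MINPODS   MAXPODS   REPLICAS
    # hello-world-rest-api   Deployment/hello-world-rest-api   <unknown>/70%   1         4         3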

The target is 70%. Right now, the CPU utilisation is unknown, the minimum number of pods is one, the maximum number of pods is four, and the current number of replicas is three. So, when we run kubectl autoscale deployment, we get this horizontal pod autoscaler, and this is how we scale our microservice. Now, auto-scaling microservices is good, but you'd also want to auto-scale your Kubernetes cluster. The microservice deployments can only scale up to the capacity of the nodes in the Kubernetes cluster; beyond that level, you might not have sufficient nodes, and then you'd have to autoscale, or increase the number of nodes in, your Kubernetes cluster. How can you do autoscaling on a Kubernetes cluster? That can be done by using the command gcloud container clusters update. You update a specific cluster and enable auto-scaling, and you can set the minimum number of nodes and the maximum number of nodes. So you can autoscale both your applications and your Kubernetes cluster.
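A sketch of that command, assuming the cluster is named my-cluster in zone us-central1-c (the names used in this course) and using illustrative node limits:

    # Enable cluster autoscaling with a minimum and maximum node count
    gcloud container clusters update my-cluster \
        --enable-autoscaling --min-nodes=1 --max-nodes=10 \
        --zone=us-central1-c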

As your applications or microservices keep autoscaling, they need more resources, and to get more resources, you can autoscale your Kubernetes cluster as well. In addition to auto-scaling, let's also look at configuration in this specific step. Let's say you have a microservice and you want to configure a connection to a database or something like that. Where do you configure that? In Kubernetes, the place where you would configure it is something called a "config map." You can create a config map, set values in it, and have your microservice or your deployment pick up the values from the config map. The command is kubectl create configmap, and you can give it a name. You can give the config map whatever name you want, and you can specify the values, such as the DB name or something along those lines. So let's go ahead and try that: kubectl create configmap, and then a name. The todo-web-application-config name that I have here from an earlier command is not the right one.

Let's call it hello-world-config instead. I want to store the configuration for hello-world-config with the database name in there. And that's it. You have created a config map, and you can use this config map from your microservice. kubectl get configmap would list the config maps, so you can see that there is a config map called hello-world-config. And this is what I would get if I ran kubectl describe configmap hello-world-config: you can see the data inside that config map. Over here, you can see the name, and you can see all of the data that is present, including the RDS DB name with the value of todos.
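Here are those config map commands in one place (key and value as created above):

    # Create a config map holding the database name
    kubectl create configmap hello-world-config --from-literal=RDS_DB_NAME=todos

    # List config maps and inspect the data stored in ours
    kubectl get configmap hello-world-config
    kubectl describe configmap hello-world-config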

In addition to configuration, you might also want to store secrets. Let's say you want to store a database password. You can store that in Kubernetes as a secret. So instead of creating a config map, what we would do is create a secret. The command is kubectl create secret; the type of secret is generic, we give it a name, and we specify the values that should be stored in the secret. So let's do a clear and execute that command: kubectl create secret generic. Let's call it hello-world-secrets here instead of the to-do web application name, and we pass the password in there. Now the secret is created, so I can say kubectl get secret, and you can see that there is a default one that was already created by the Kubernetes cluster, and the one that we have created, hello-world-secrets, is also here. And if I say kubectl describe secret with the name we've given it, hello-world-secrets, what would we get back? You can see that you are getting the RDS password, but you won't be able to see the password in plain text anymore, because it's a secret: it is stored in an encoded form rather than as plain text. So if you have a microservice and you want to store configuration, you would use a config map. If you want to store a password, you would use a secret. In both the config map and the secret, you can have any number of values. So you can store all the configuration that is needed by your microservice, or even a set of microservices, in a single config map and a single secret. In this step, we discussed auto-scaling and configuration. Kubernetes makes it very, very easy to do auto-scaling as well as centralised configuration for your microservices. I'm sure you're having a wonderful time, and I'll see you in the next step.
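As a quick reference before moving on, the secret commands from this step (the password value here is a placeholder; use your own):

    # Create a generic secret holding the database password
    kubectl create secret generic hello-world-secrets --from-literal=RDS_PASSWORD=dummytodos

    # List secrets and inspect ours; the value is not shown in plain text
    kubectl get secret
    kubectl describe secret hello-world-secrets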

7. Step 07 – Exploring Kubernetes Deployments with YAML Declarative Configuration

Welcome back. Let's switch over to the console and pick my Kubernetes project. That's the project where we created the Kubernetes cluster. And let's go to Kubernetes Engine; that's where you'll be able to see your cluster. Over here, let's look at a few details. I'll go into the cluster. Once you go into the cluster, you'll be able to see the details of the cluster. You can see the name and what type of cluster it is. Whenever we're talking about Kubernetes, we create a cluster, and as part of the cluster there are master nodes and worker nodes, or just nodes.

So you can see where the master node is running and where the default nodes are running. You can also see the version and the size of the cluster, which has three nodes. Right now, if you go to the nodes, you can see all the nodes that are present here. You can see that we created three nodes, and these three nodes are created as part of one pool. A Kubernetes cluster can have multiple node pools. When we create the default Kubernetes cluster, it creates one node pool with three nodes. However, if I want a specific kind of node, let's say a set of nodes with GPUs, I can go ahead and add node pools in here. So you can say "add node pool."

And you can say, "I would want to create a new pool." You can specify a size for that specific pool, and you can configure everything related to it. So you can go to nodes and say, "I want a specific type of node." You can configure what image you'd like to use and what type of node you want; let's say you want to run something that needs a GPU, you can go for the GPU machine family. I'm not going to create this node pool, but it's important for you to be aware that if you have specific workloads that require specific types of nodes, you can add node pools to your Kubernetes clusters. By default, there is one node pool that is created, but you can always add more. If you go to logs, you can see the logs from the Kubernetes cluster. Now, if you want to look at what is running as part of the cluster, that's basically the workloads, so you can go to Workloads. The workloads page is where you can see the deployment which we have created. We have created a deployment, and we have one instance of it running. Each instance that is running is called a "pod." In Kubernetes terminology, each instance that runs as part of a deployment is called a pod. If I increase the number of instances of the deployment to three, then you'll have three pods for that deployment. If you click this and go further inside, you'll be able to see the details of that specific deployment.

You can see how much CPU is being used, how much memory is available, and how much disk space is available. You can see that all the monitoring and logging are on by default. You can also see how many revisions are present for that specific deployment; we made just one, and you can see that there is one pod running as part of that deployment. And you can also see that there is an exposed service, hello-world-rest-api. It's a load balancer, and you can find the URL for it right here. So if you click this, it takes you to that page, and you can see healthy returning true. And if you append /hello-world to the URL, you'll be able to see a response come back. You can always edit the deployments from the UI, but the recommended way of editing a deployment is through the command line. You can go in here and see the details regarding the specific deployment. You can view its revision history. You can see what events were associated with that specific deployment. You can also see events by saying kubectl get events. This gets you a list of events around what's happening with your specific cluster.
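For reference, the events command is simply:

    # List recent cluster events (scheduling, scaling, image pulls, and so on)
    kubectl get events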

You can see that the last few items match whatever is present in here. You can see that it says that this specific REST API was scaled up to one instance, and you can go to the logs and see the logs related to that specific deployment. One interesting thing is that, in addition to using commands to deploy, you can also use YAML. Earlier, we used a command to create the deployment; however, you can also take this YAML and deploy this specific service to Kubernetes. We'll take a deeper look at YAML a little later.
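Even so, as a preview, a minimal deployment manifest for a service like this might look like the sketch below. The image name is an assumption about the container image used for this course's REST API; substitute your own, and note that the YAML shown in the console has many more fields:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world-rest-api
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello-world-rest-api
      template:
        metadata:
          labels:
            app: hello-world-rest-api
        spec:
          containers:
            - name: hello-world-rest-api
              # Assumed image name for illustration only
              image: in28min/hello-world-rest-api:0.0.1.RELEASE

You would apply a manifest like this with kubectl apply -f deployment.yaml.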

If you go to Services and Ingress, this is where you can see the service that we have created: hello-world-rest-api, an external load balancer, along with its endpoint. There is one interesting concept in here: something called an ingress. What is an ingress? And what is a service? A service is a set of pods with a network endpoint that can be used for discovery and load balancing. So over here, what we are doing is exposing a deployment to the outside world by using a service. An ingress, on the other hand, is a collection of rules for routing external HTTP traffic to services. So if you have a number of internal services, you can create an ingress and redirect traffic to those services. So if you have a microservices architecture, instead of creating individual load balancers for each of the microservices, you can create just one ingress. We'll also talk about ingresses and services a little later as well. In this step, we took a quick tour of Kubernetes Engine in the cloud console. I'll see you in the following step.
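As a quick reference, here is a minimal sketch of the two resource types discussed above; the names, the path, and port 8080 are illustrative assumptions, not values confirmed in this course:

    # A Service of type LoadBalancer exposing the deployment's pods
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world-rest-api
    spec:
      type: LoadBalancer
      selector:
        app: hello-world-rest-api
      ports:
        - port: 8080
          targetPort: 8080

    # An Ingress routing external HTTP traffic to services by path
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
    spec:
      rules:
        - http:
            paths:
              - path: /hello-world
                pathType: Prefix
                backend:
                  service:
                    name: hello-world-rest-api
                    port:
                      number: 8080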

8. Step 08 – Kubernetes Journey – The End

Welcome back. In the last few steps, we learned a lot of concepts, and now it's time to get to the end of the journey that we started. There are a lot more slides about Kubernetes that we will be talking about, but let's end the hands-on path here with one last, important task: deploy a new microservice that needs nodes with a GPU attached. How can we do that? We discussed this, right? You already have existing deployments, and you want to deploy a new microservice. How is this microservice deployed on GPU-enabled nodes? How do you do that? You'd create a new node pool: you'd attach a new node pool with GPUs to our cluster. We saw how we could do that from the cloud console. You can also do it using gcloud. Keep in mind that we are changing the node pools, we are changing the cluster, so we'd use gcloud: gcloud container node-pools create, with the pool you'd want to create and the cluster you want to create the pool in.

In addition to this, you can also add a little bit of configuration about what kind of nodes you want to create as part of the pool. The command to list the node pools is gcloud container node-pools list. Let's go to the cloud shell and try it; it's not in my history, so let's type it again: gcloud container node-pools list. It's saying, "Okay, tell me which zone or which region." So which zone is this cluster in? Let's check: it's in us-central1-c. So let's pass that zone, us-central1-c. Then it asks for the cluster: which cluster is it?

So let's do that as well. You can also perform a configuration set: gcloud config set can set container/cluster to a value, and that would then be the default cluster. However, you can also specify the cluster as part of your command. So which cluster? What's the name of our cluster? It's my-cluster (my, hyphen, cluster). Let's get that done. So you can see that right now we have just one default pool. If we want, we can add another pool by using gcloud container node-pools create, giving it a name and saying which cluster we want to attach it to. Now, once we've created a node pool, what do we want to do? We need to tell Kubernetes that the new microservice, which needs a GPU, should be deployed to the new node pool, not the old node pool. How can you do that? You can do that by setting up something called a node selector in the deployment YAML file. In the deployment YAML file, you can configure the node selector with the label cloud.google.com/gke-nodepool and say: use this specific node pool. This is the pool that needs to be used to deploy this specific deployment.
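Putting those pieces together, a sketch (the pool name my-gpu-pool is illustrative; cluster and zone as used in this course):

    # Create a new node pool attached to the existing cluster
    gcloud container node-pools create my-gpu-pool --cluster=my-cluster --zone=us-central1-c

    # List the node pools of the cluster
    gcloud container node-pools list --cluster=my-cluster --zone=us-central1-c

And in the deployment YAML, under the pod template spec, the node selector would look like this:

    nodeSelector:
      cloud.google.com/gke-nodepool: my-gpu-pool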

So to deploy a new microservice that has specific needs, you can create a new node pool, and then, in the deployment of the microservice, you can say: make use of that node pool. Now we are ready to get to the end of our journey. How do we delete the microservices? You can first delete the service: kubectl delete service, with the name of the service. Then delete the deployment: kubectl delete deployment, with the name of the deployment. We are using kubectl to delete the service and the deployment. And if you want to delete the cluster, is it kubectl? No, not kubectl, right? You are playing with the cluster, so it's gcloud container clusters delete, with the name and details of that specific cluster. We are not going to delete the microservices and cluster right now; I just wanted to mark the end of this journey. What we'll do is play around with the clusters and deployments a little bit more as we go further, and at the end of the section, we will delete the Kubernetes cluster. Kubernetes is such a complex topic, and we designed this journey so that you actually have something to fall back on. You can refer to this journey whenever you have a question in the exam, and it will make things easy for you. I'm sure you're having a wonderful time, and I'll see you in the next step.
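For reference, the cleanup commands for when you do delete everything at the end of the section (names and zone as used in this course):

    # Delete the service and the deployment with kubectl
    kubectl delete service hello-world-rest-api
    kubectl delete deployment hello-world-rest-api

    # Delete the cluster itself with gcloud
    gcloud container clusters delete my-cluster --zone=us-central1-c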

9. Step 09 – Understanding Kubernetes Clusters – Google Kubernetes Engine GKE

Welcome back. In the last few steps, we learned a lot about Kubernetes. Kubernetes is really vast; we have a separate course of around 13 hours that is just a deep dive into Kubernetes in various clouds (AWS, Azure, and GCP). Trying to discuss Kubernetes in a few hours is a very tough task, and that's what we are trying to do as part of this certification course. This certification is not just about Kubernetes but about everything related to GCP, so we are trying to clarify the most important concepts from the perspective of the examination. In the next few steps, let's review some of the important things that we have learned as part of our Kubernetes journey. The first thing that we'll talk about is the cluster. A cluster is where your workloads run; whenever you want to deploy something in Kubernetes, you must first create a cluster. And a cluster is nothing but a group of compute engine instances. There is a master node that manages the cluster, and there are worker nodes where you run your workloads. The master node is also called the control plane, because this is what manages everything. Whenever we execute a kubectl command, where are we sending the command to?

We are sending it to the master node, and the master node would do whatever is necessary to execute that command. It manages the cluster; it says, "Hey, I need a new node," or, "Hey, deployment, I need a new instance for you." So the master node is the one that manages the cluster, and the worker nodes are where our workloads, our applications, or our microservices run. As part of the master node, there are a number of components. The first one is the API server. There is a lot of communication that happens within a Kubernetes cluster, and there is a lot of communication that happens from outside the Kubernetes cluster. When you execute a command, the part of the master node that receives the request is called the API server. It manages all communication for a Kubernetes cluster from outside, and the communication from the master to the worker nodes. The next one is the scheduler. When I say I want to create a deployment with three pods, what does the master node need to decide? Let's say there are ten worker nodes. It needs to decide on which worker nodes the instances of this deployment need to run, and the component in the master node that does that is called the scheduler.

Is it sufficient if you just deploy things? You'd also need to ensure that they are healthy, and if they are not healthy, you'd need to actually replace them. That's why you have a controller manager. The controller manager manages deployments and replica sets. A little later, we'll discuss the difference between a deployment and a replica set. The fourth important master node component is etcd. It is a distributed database storing the state of the cluster. You need to store the state of the cluster somewhere; that's how you can provide high availability, and the database in question is etcd, a distributed database. We looked at the different components that are present as part of the master node. Now, what happens within the worker nodes? What are the worker node components? The worker nodes are where our workloads run and where our deployments run, and the instances of a deployment are nothing but pods. So the worker nodes are where our pods run. And in the worker nodes, there is a specific component related to Kubernetes, which is called the kubelet. The worker nodes need to talk to the master node, and that's what the kubelet does: it manages communication with the master node. When it comes to GKE, there are a few different types of clusters. What are the different types of clusters? The first one is the zonal cluster. This is a single zone, so there is just one control plane.

There is just one master node, and all the other nodes also run in the same zone. In the example that we used, we created a zonal cluster: we saw that the control plane, the master node, ran inside one zone, and the nodes were also running in the same zone. There is also a multi-zonal possibility for the zonal cluster. Over here, you have a single control plane; your control plane is inside one zone, but nodes run in multiple zones. There is also a regional cluster. In a regional cluster, replicas of the control plane run in multiple zones of a given region. So you will run multiple instances, even of your master node, and in the zones where your master node runs, you also have worker nodes. Why would you go for a regional cluster? Why would you want master nodes to be deployed to multiple zones? Because even the master nodes can fail. If there is a failure in the master node, you don't want to lose the entire cluster; you can have high availability by distributing even your master nodes. There is also something called a "private cluster," which is specific to a virtual private cloud. A virtual private cloud is nothing but an internal network that you can create in Google Cloud, and a private cluster is something that lives within a VPC. And there are also clusters called "alpha clusters." These alpha clusters are created with early feature APIs, which are nothing but alpha APIs. So whenever you want to test brand-new Kubernetes features, you can try alpha clusters. In this step, we looked at what a cluster is, what the components of a cluster are, and what the different types of clusters are. We talked about worker nodes and master nodes, and we talked about zonal, regional, private, and alpha clusters. I'm sure you're having a wonderful time, and I'll see you in the next step.
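As a quick reference, creating a zonal versus a regional cluster differs mainly in the location flag; the names and locations below are illustrative:

    # Zonal cluster: one control plane in a single zone
    gcloud container clusters create my-cluster --zone=us-central1-c

    # Regional cluster: control plane replicated across the zones of a region
    gcloud container clusters create my-cluster --region=us-central1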

10. Step 10 – Understanding Pods in Kubernetes

Welcome back. In this step, let's talk about pods. What is a pod? A pod is the smallest deployable unit in Kubernetes. A pod contains one or more containers, and each pod is assigned an ephemeral IP address. Earlier, we deployed our application to Kubernetes. And if I say kubernetes get pods — oops, it should be kubectl get pods. If you receive an error, ensure that you first connect to the Kubernetes cluster. You can get the connect command by actually going to the cluster in the console; over here, you can get the connect command and run that. So, kubectl get pods: you can see that there are two pods running right now. A pod is where our microservices run. In a single pod, you can actually have multiple containers; most pods, however, will contain just one container. And when we create a deployment with three instances, we'll have three pods. Over here, our deployment has two instances. If I say kubectl get deployment, how many instances does it have? It has two instances, and that's why it has two pods. You can see more details about a pod if I run kubectl get pods -o wide. You can see that each pod has an IP address of its own. This IP address is an ephemeral IP address.
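For reference, the commands used above (the cluster and zone names are as used in this course; the project name is a placeholder for your own):

    # Connect to the cluster first (this is the "connect" command shown in the console)
    gcloud container clusters get-credentials my-cluster --zone=us-central1-c --project=my-project

    # List pods, then list them with extra details such as pod IP and node
    kubectl get pods
    kubectl get pods -o wide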

If you have multiple containers in the same pod, they all share networking (the IP address and ports) and storage (any volumes or shared persistent disks that you have attached to the pod). So if you have a deployment and you are creating a volume for the deployment, the instances of your deployment, which are nothing but pods, can also access your volumes, and all the containers that are part of the pod can access everything that is part of the pod. A pod can be in different phases: running, pending, succeeded, failed, or unknown. Running is when a pod is running. Pending is when you are waiting for a pod to be deployed to one of the nodes. Succeeded is when its job is done. Failed is when a pod was not started successfully. Unknown is when the master is unable to determine the status of a pod. In this quick step, we looked at pods. A pod is the smallest deployable unit in Kubernetes; each pod has an individual IP address, and a pod can contain more than one container. Each instance of a deployment is nothing but a pod. Let's talk more about deployments and replica sets in the next step.
