Google Associate Cloud Engineer Topic: Getting Started with Google Kubernetes Engine
December 20, 2022

1. Step 01 – Getting Started with Google Kubernetes Engine (GKE)

Welcome back. In this section, let’s look at one of the most important services as far as this exam is concerned: the Google Kubernetes Engine. We have talked about container orchestration earlier, and Kubernetes is the most popular container orchestration tool. Google Kubernetes Engine is the managed service offered by Google Cloud Platform for Kubernetes. Let’s get started with GKE, or the Google Kubernetes Engine. Kubernetes is the most popular open-source container orchestration solution. It provides cluster management. Basically, whatever workloads you want to deploy, you deploy them onto a cluster. Kubernetes allows you to create clusters and manage them, including performing upgrades on the cluster. In a single cluster, you can have different types of virtual machines. Different nodes in the cluster can have different hardware and software configurations.

Kubernetes provides you with all the important features that you expect from a container orchestration tool: auto scaling, service discovery, load balancing, self-healing, and zero-downtime deployments. And what is the managed service provided by Google Cloud Platform for Kubernetes? It’s called GKE, or the Google Kubernetes Engine. It’s a managed Kubernetes service. With Google Kubernetes Engine, you can optimize your operations with auto repair: whenever a node fails, it is automatically repaired. And with auto upgrade, the entire cluster is automatically upgraded so that you are always using the latest version of Kubernetes. The Google Kubernetes Engine also provides pod and cluster auto scaling. Why is this important? You might be running multiple microservices in a Kubernetes cluster, and these microservices might be running on different nodes of the cluster. Pod auto scaling deals with increasing the number of instances of a specific microservice. So if I want to increase the number of instances of microservice A, I would need to increase the number of pods belonging to microservice A that are running.

However, beyond a certain point, increasing the number of pods also requires increasing the number of nodes in the Kubernetes cluster. And that’s where we would need cluster auto scaling. As we keep increasing the number of instances of microservices, we would need pod auto scaling and cluster auto scaling as well. Google Kubernetes Engine integrates very well with Cloud Logging and Cloud Monitoring. You can easily enable them with very simple configuration, so you can look at logs and metrics around your Kubernetes cluster. Container-Optimized OS is the operating system used by Google Kubernetes Engine. This is a hardened operating system, built by Google, that is optimized for running containers. You can attach persistent disks and local SSDs to the nodes that are part of the cluster. In this step, we got a 10,000-foot overview of Kubernetes and the Google Kubernetes Engine. In the next step, let’s get our hands dirty. Let’s start playing with Kubernetes.
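As a quick preview of how these two kinds of auto scaling are switched on, here is a minimal sketch; the deployment name, cluster name, zone, and limits are placeholder assumptions, not values from this lesson:

# Pod auto scaling: scale a deployment between 1 and 10 pods based on CPU usage
kubectl autoscale deployment my-microservice --min=1 --max=10 --cpu-percent=70

# Cluster auto scaling: let GKE add or remove nodes between 1 and 10
gcloud container clusters update my-cluster --enable-autoscaling --min-nodes=1 --max-nodes=10 --zone=us-central1-c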

2. Step 02 – Kubernetes Journey – Creating a GKE Cluster

Welcome back. Are you excited to start playing with Kubernetes? Let’s get started on a journey to deploy a simple microservice to Kubernetes. So let’s have some fun. Let’s create a cluster, deploy a microservice, and play with it in a number of steps. Step one. What is step one? Let’s create a Kubernetes cluster with a default node pool and a set of nodes. You have two options: use “gcloud container clusters create” from the command line, or use the Cloud Console. Let’s go to the Cloud Console first. So I’m inside my first project, and what I would do is create a new project for Kubernetes.

So I’ll name it “my kubernetes project” and say “Create.” So this would create a new project for us. And inside this, we would want to create a Kubernetes cluster. So now the project is created. Let’s go inside the project and search for Kubernetes Engine. Kubernetes Engine? Yeah, that’s what we are looking for. Before we are able to use the Kubernetes Engine, we need to enable the Kubernetes Engine API. So that’s the first thing that would happen. Now you can see that it goes to the Kubernetes Engine API, and I can go ahead and enable it. As it says here, Kubernetes Engine builds and manages container-based applications, powered by open-source Kubernetes technology. So whenever we perform any operations with Kubernetes, whether we are using the Console or Cloud Shell, in the background we are making calls to the Kubernetes Engine API. The same is the case with everything that we did earlier: Compute Engine talks to the Compute Engine API, and App Engine talks to the App Engine API. Inside Google Cloud Platform, there are a number of APIs like this, and whenever you want to use a specific service, you need to first enable the APIs for it. The enabling of APIs is taking a little bit of time. Let’s wait for it to be completed.
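If you prefer the command line, the API can also be enabled with a single command; container.googleapis.com is the service name of the Kubernetes Engine API:

# Enable the Kubernetes Engine API for the current project
gcloud services enable container.googleapis.com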

You can wait for a few minutes and then go into Kubernetes Engine. You can just type in “Kubernetes Engine” and go to it. You can go in here and say “Create Cluster.” When you click Create Cluster, you should see a pop-up like this: select the cluster mode that you want to use. Standard and Autopilot are the two options available to you. In Standard, you take complete ownership of the cluster. In Autopilot, you delegate cluster management completely to GKE. Let’s quickly look at what Autopilot is all about. The Autopilot mode is actually a new mode of operation for GKE. Earlier, the Autopilot mode was not present; the only mode that was present was Standard. This is where you can say, “I want five nodes, ten nodes, fifteen nodes.” So you are responsible for managing the complete cluster. But with Autopilot, you don’t need to worry about it. The goal of the Autopilot mode is to reduce your operational costs when running Kubernetes clusters. It provides you with a hands-off experience. You don’t really need to worry about managing cluster infrastructure like nodes or node pools; GKE completely manages the cluster for you. However, let’s start with Standard as the configuration. So let’s click Configure over here, right beside Standard. Let’s go ahead and configure the nodes and everything. You can do the same thing by using “gcloud container clusters create”.

It’s important to remember that we previously discussed “gcloud app” for App Engine and “gcloud compute” for Compute Engine. And now, when we are playing with Kubernetes, it’s “gcloud container”, because you want to create a cluster for containers. It says “New cluster creation experience” in here. This screen changes very, very often. So this cluster creation experience changes a lot. Don’t really worry about it; whatever defaults it offers you, just take them. The only thing I would need to change is the name of the cluster. I’ll call this my-cluster. You can create a zonal cluster or a regional cluster. I’m okay with creating just a zonal cluster, and I’ll take the defaults for most of the other things, so I would just say Create. In this step, we got started with creating the Kubernetes cluster. The creation of the Kubernetes cluster will take a while. I’ll see you at the next step.
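For reference, a command-line equivalent of what we just did in the console would look something like this; the zone is an assumption based on the zonal cluster we inspect later in this section:

# Create a zonal GKE cluster named my-cluster with the default three nodes
gcloud container clusters create my-cluster --zone=us-central1-c --num-nodes=3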

3. Step 03 – Kubernetes Journey – Create a Deployment and a Service

Welcome back. The creation of the cluster took about five minutes, and after that, I can see that my cluster is now up and running. You can see that by default, the cluster is created with three nodes, and each of these nodes has two vCPUs. So we have six vCPUs and 12 GB of memory in total. We would now want to deploy microservices to this specific cluster. What we can do is connect to this cluster from our Cloud Shell. So let’s get the project ID for this specific project so that we can actually set the project ID.

So this is the project ID for my Kubernetes project. Let’s go into Cloud Shell and say “gcloud config set project” followed by the project ID. So this would be the default project from now on. Let’s click Authorize. The next thing we would want to do is connect to this cluster. So we have created a Kubernetes cluster, and we would want to deploy microservices and containers to it. The first thing that I would need to do is connect to it. How can I connect to it? You can get the command to connect to it by clicking this icon in here and saying “Connect.” So this gives you the command. As far as the exam is concerned, it’s very important that you remember the command. It starts with “gcloud container clusters”, which is the same starting point as creating a cluster. When we were creating a cluster, we used “create”; now that we want to get the credentials of the cluster, we use “get-credentials”. Which cluster? We want to get the credentials of my-cluster. You are specifying the zone and the project as well. So what I can do is copy this and paste it in here: gcloud container clusters get-credentials my-cluster, with the zone and the project. That’s the command, and you can see that it fetches the cluster endpoint and authorization data. And now the kubeconfig entry for my-cluster has been created. We’ll be using something called kubectl to run commands against the Kubernetes cluster.
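Putting the two commands from this step together; the project ID shown is a placeholder for your actual project ID:

# Make the Kubernetes project the default for subsequent gcloud commands
gcloud config set project my-kubernetes-project-12345

# Fetch the cluster endpoint and auth data, and write a kubeconfig entry for my-cluster
gcloud container clusters get-credentials my-cluster --zone=us-central1-c --project=my-kubernetes-project-12345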

When we run commands, kubectl checks kubeconfig for the cluster configuration and then executes the commands against that particular cluster. So what we are doing right now is step two, which is logging in to Cloud Shell. That’s done. Step three is to connect to the Kubernetes cluster. And the way we connect to the Kubernetes cluster is by executing the command “gcloud container clusters get-credentials”. So that’s the command that we just executed. Now I would want to deploy a microservice on this specific cluster. To deploy microservices to Kubernetes, we need to create something called a “deployment” and a “service”, and we’d be using kubectl. So the command is something of this kind: kubectl create deployment, followed by its name and the image you want to deploy. This Docker image has already been created for you.
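Reconstructed from the narration in the next paragraph, the complete command looks like this; the image lives under the Docker ID in28min on Docker Hub:

# Deploy the hello-world-rest-api image from Docker Hub as a Kubernetes deployment
kubectl create deployment hello-world-rest-api --image=in28min/hello-world-rest-api:0.0.1.RELEASE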

A little later, in a separate step, we will see how this image was created. For the time being, the most important thing to remember is that this Docker image is available on Docker Hub, and we will download it from there and use it in Kubernetes. And the complete command would be something of this kind. So I’ll try to type this command. It’s “kubectl create deployment”, because you want to deploy a microservice. The name is hello-world-rest-api. And I need to provide the image: --image= followed by in28min, which is my Docker ID, then the image name of this specific microservice, or REST API, which is hello-world-rest-api, and then the image tag, 0.0.1.RELEASE. What I recommend you do is actually take the command from the presentation and run it as is. You don’t really want to make a typo in this. Now, one important thing that you need to remember is that, until now, we had been using gcloud container commands. If you want to create a cluster, use “gcloud container clusters create”. If you want to get the credentials of a cluster, “gcloud container clusters get-credentials”.

So if you want to directly play with the cluster, if you want to increase the number of nodes in the cluster, or if you want to add a node pool to the cluster, what you would need to do is use the “gcloud container clusters” commands. However, if you want to deploy something to the cluster, such as a microservice, or expose the microservice to the outside world, in those kinds of situations you’d use a kubectl command. So kubectl is a Kubernetes-specific command line. Whether you have a Kubernetes cluster deployed in AWS, Azure, or Google Cloud, or even on your local machine or in your own data center, wherever Kubernetes is present, if you want to deploy a microservice, you can use kubectl. So gcloud is Google Cloud-specific: we use Google Cloud-specific commands to create and manage the cluster. However, to deploy something to the cluster, we go with kubectl, which is a cloud-neutral solution. Now, once we’ve created a deployment, you can look at the deployment details. You can say “kubectl get deployment”. You can see hello-world-rest-api listed: one instance is ready, it’s up to date, it’s available, and it was created almost two minutes ago.
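Checking on the deployment looks like this; the output shown is illustrative of kubectl’s standard format rather than a verbatim capture:

kubectl get deployment

# NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
# hello-world-rest-api   1/1     1            1           2m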

Now, I would want to access whatever is inside this particular deployment. How can I do that? You need to expose this deployment to the outside world. So the command is “kubectl expose deployment hello-world-rest-api”. We want to expose it using a load balancer, so --type=LoadBalancer, and the port is 8080, so --port=8080. The container runs on port 8080, so we’re pointing the port at that. What is internally happening is that when you expose a deployment, something called a Kubernetes service is created. So we can look at the Kubernetes service status by saying “kubectl get services”. You can use “service” or “services” without a problem. The output shows that there is a service exposed with the type LoadBalancer, and you can see that the external IP for it right now is pending. There is also a default service, called kubernetes, that is always running whenever we create a cluster. We don’t really need to worry about that one. What we are interested in is the hello-world-rest-api service that we have just created. So when we expose a deployment, what we are creating is a service. You create a deployment to deploy a microservice. You create a service to expose the deployment to the outside world. And the type of service that we are creating here is a load balancer.
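The two kubectl commands from this step together; this matches the narration above:

# Expose the deployment via a service of type LoadBalancer on port 8080
kubectl expose deployment hello-world-rest-api --type=LoadBalancer --port=8080

# List services; EXTERNAL-IP shows <pending> until the load balancer is provisioned
kubectl get services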

And what we are waiting for is the load balancer’s external IP to be assigned. I can say “kubectl get services --watch” to watch the progress. So you can keep looking at what’s happening in here. I can see that the external IP is now assigned. So that’s cool. I can press Ctrl+C to terminate the watch. Now I can send a curl request to this specific IP on port 8080. So we are sending a GET request to that URL. You can see that it returns healthy: true, and if I add /hello-world to the URL, I get back “Hello World v1”. Now, a simpler way to actually do that would have been to run it in the browser: I can pick up the URL, which is the IP address of the load balancer plus the port on which we are running it plus /hello-world, and run it in the browser. So the hello-world microservice is now exposed to the external world. In this step, we created a deployment, and then we created a service by exposing the deployment. And we saw that we were able to access it at a specific URL; we saw that the service was making use of a load balancer. The type of service that we created was a load balancer. Earlier in the course, we created a load balancer directly. In this case, however, Kubernetes created the load balancer for us in the background when we created the service. Let’s look at all these details.
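Before moving on, here are the commands from this step in one place; <EXTERNAL-IP> is a placeholder for the address assigned to your service:

# Watch the service until EXTERNAL-IP changes from <pending> to an address, then Ctrl+C
kubectl get services --watch

# The root path reports health; /hello-world returns the greeting
curl http://<EXTERNAL-IP>:8080
curl http://<EXTERNAL-IP>:8080/hello-world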

4. Step 04 – Exploring GKE in GCP Console

Welcome back. Let’s switch over to the console and pick up my Kubernetes project. That’s the project where we created the Kubernetes cluster. And let’s go to the Kubernetes Engine. That’s where you’ll be able to see your cluster. And over here, let’s see a few details. I’ll go into the cluster. Once you go into the cluster, you’ll be able to see the details of the cluster. You can see from the name what type of cluster it is. Whenever we’re talking about Kubernetes, we create a cluster, and as part of the cluster there are master nodes and worker nodes, or just nodes. So you can see where the master node is running and where the default nodes are running. You can also see the version and the size of the cluster, which has three nodes right now. If you go to the nodes, you can see all the nodes that are present here. You can see that we created three nodes. And these three nodes are created as part of one pool. A Kubernetes cluster can have multiple node pools. When we create the default Kubernetes cluster, it creates one node pool with three nodes. However,

if I want, let’s say, a specific type of node, say a set of nodes with GPUs, I can go ahead and add node pools in here. So you can say “Add node pool,” and you can say, “I would want to create a new pool.” You can specify a size for that specific pool, and you can configure everything related to it. So you can go to nodes and say, “I want a specific type of node.” You can configure what image you’d like to use and what type of node; let’s say you want to run something that needs a GPU, you can go for the GPU machine family. I’m not going to create this node pool. But it’s important for you to be aware that if you have specific workloads that require specific types of nodes, you can add node pools to your Kubernetes clusters. By default, there is one node pool that is created, but you can always add more. If you go to logs, you can see the logs from the Kubernetes cluster. Now, if you want to look at what is running as part of the cluster, that’s basically the workloads. So you can go to workloads. The workloads page is where you can see the deployment which we have created. We have created a deployment, and we have one instance of it running. And each instance that is running is called a “pod.” In Kubernetes terminology, each instance that runs as part of a deployment is called a “pod.” If I increase the number of instances of the deployment to three, then you’ll have three pods for that deployment. If you click this and go further inside, you’ll be able to see the details of that specific deployment.
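Coming back to node pools for a moment: a pool like the GPU pool described above can also be added from the command line. A minimal sketch; the pool name, machine type, and GPU type here are hypothetical examples, not values from this lesson:

# Add a one-node GPU pool to the existing cluster
gcloud container node-pools create my-gpu-pool --cluster=my-cluster --zone=us-central1-c --machine-type=n1-standard-4 --accelerator=type=nvidia-tesla-t4,count=1 --num-nodes=1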

You can see how much CPU is being used, how much memory is available, and how much disk space is available. You can see that all the monitoring and logging are on by default. You can see how many revisions are present for that specific deployment; we made just one. And you can see that there is one pod that is running as part of that deployment. And you can also see that there is an exposed service: hello-world-rest-api. It’s a load balancer, and you can find the URL for it right here. So if you click this, you’d be able to go to that page. You can see healthy returning true. And if you add /hello-world to the URL, you’ll be able to see a response come back. You can always edit the deployments from the UI, but the recommended way of editing a deployment is through the command line. You can go in here and see the details regarding the specific deployment. You can view its revision history. You can see what events were associated with that specific deployment. You can also see events by saying “kubectl get events”. This gets you a list of events around what’s happening with your specific cluster. You can see that the last few items match whatever is present in here.
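That events command, for reference; the event message is paraphrased from what the console shows:

# List recent events, e.g. a "Scaled up replica set ... to 1" entry for our deployment
kubectl get events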

So you can see that it says it scaled up this specific REST API deployment to one instance, and you can go to the logs and see the logs related to that specific deployment. One interesting thing is that, in addition to using commands to deploy, you can also use YAML to deploy. Earlier, we used a command to create the deployment. However, you can also use this YAML and deploy this specific service to Kubernetes. We’ll take a deeper look at YAML a little later. If you go to Services & Ingress, this is where you can see the service that we have created. So we have created a service that is an external load balancer: hello-world-rest-api, with its endpoint. There is one intriguing concept in here: an ingress. What is an ingress? What is a service? A service is a set of pods with a network endpoint that can be used for discovery and load balancing. So over here, what we are doing is exposing a deployment to the outside world by using a service. On the other hand, ingresses are a collection of rules for routing external HTTP traffic to services. So if you have a number of internal services, you can create an ingress and redirect traffic to those services. So if you have a microservices architecture, instead of creating individual load balancers for each of the microservices, you can create just one ingress. We’ll talk about ingresses and services a little later as well. In this step, we took a quick tour of the Kubernetes Engine in the Cloud Console. I’ll see you in the following step.
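Before moving on, to give a flavor of the YAML route mentioned above, here is a minimal, assumption-laden sketch of a deployment manifest applied with kubectl; the actual YAML shown in the console contains many more generated fields:

# Apply a minimal deployment manifest from stdin
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-rest-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-rest-api
  template:
    metadata:
      labels:
        app: hello-world-rest-api
    spec:
      containers:
      - name: hello-world-rest-api
        image: in28min/hello-world-rest-api:0.0.1.RELEASE
EOF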

5. Step 05 – Kubernetes Journey – Scaling Deployments and Resizing Node Pools

Welcome back. We made significant progress on our microservices journey in the last few steps. We started with creating a Kubernetes cluster. We logged into the Cloud Shell. Until now, we had been using “gcloud container clusters” commands to connect to the Kubernetes cluster.

And after that, we started deploying our microservice to Kubernetes. We created a deployment and a service using kubectl. So we created a deployment and exposed it; when we exposed the deployment, a service was created. Let’s now see how you can actually increase the number of instances of your microservice. How can you do that? Do you think it would be difficult? The answer is that it’s very, very easy. If I want multiple instances for my deployment, all I need to say is “kubectl scale deployment” with the name of the deployment, hello-world-rest-api, and specify --replicas=3 for three instances. You can see that it says it’s scaled, and if you say “kubectl get deployment”, you can see that there are three available right now. And now I would go to the browser and refresh. When I refresh, I can see that the requests are coming from different instances. Let’s actually take this URL, and I’ll go back to Cloud Shell and say “watch curl” followed by the URL. Let’s see what happens.
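A sketch of this scaling-and-watching sequence; <EXTERNAL-IP> stands in for the address of the LoadBalancer service:

# Scale the deployment from one pod to three
kubectl scale deployment hello-world-rest-api --replicas=3

# Confirm: READY should show 3/3
kubectl get deployment

# Repeatedly curl the service and watch responses arrive from different pods
watch curl http://<EXTERNAL-IP>:8080/hello-world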

You can see that the responses are coming in, and the last few letters of the response keep changing. What’s happening there? If you do a “kubectl get deployment”, you can see that there are three instances, and each of these instances is called a pod. So if you do a “kubectl get pod”, or “get pods”, you can see all the pods that are part of that specific deployment, hello-world-rest-api. And those last few letters? The microservice that we have written actually picks up the pod name and returns it in the response. And this is why, when I refresh, you’ll notice that the load is being distributed among the various pods that are present in here. So one pod sends a response back, and after a few tries another pod handles the request. If you keep executing the request, you’ll see that you get responses back from the other pods as well. The important thing that you need to observe is the fact that we were not only able to scale the deployment up, but the Kubernetes service that we created is automatically load balancing between all active instances. Now you can actually try to scale it up to, say, two instances, three instances, or four instances. You would see that everything is handled automatically. So Kubernetes provides scaling and load balancing in a very, very simple way.

Now we know how to increase the number of instances of our microservice. Let’s say I want to run 100 instances of this microservice, or 1,000 instances. Do you think we’ll be able to run them? We have only created three nodes. So the default cluster that we have created has only three nodes. If I wanted to create more instances of this deployment beyond a certain limit, I would need to actually scale up my cluster. So I would want to increase the number of nodes in my cluster. How can I do that? I can do this with “gcloud container clusters resize”, and you must provide the name of the cluster, the node pool, and the number of nodes. So what I’ll do is try to execute that command right now. Remember, we are going back to the cluster itself; that’s why we are going back to gcloud. gcloud commands are used for the cluster and its infrastructure. So: gcloud container clusters resize. Next, you need the name of the cluster. Which cluster do you want to resize? It’s my-cluster. And after that, whenever we are resizing, we are resizing a node pool inside a cluster. We are not really resizing the cluster directly; we are resizing a node pool, and you might have multiple node pools. So you need to specify which node pool you want to change the number of nodes for. So I’ll go to nodes, and you can see that the name of the node pool is “default-pool”. So: resize the cluster, then --node-pool. Which pool?

It’s the default pool, so default-pool. So I would want to resize this specific cluster. How many nodes do I want? We already have three nodes running, and I don’t really want to increase the number of nodes. So what I’ll do is actually decrease the number of nodes. So: gcloud container clusters resize my-cluster, specifying the node pool and the number of nodes as two. What is the cluster zone? Let’s go back to the cluster details. We have created a zonal cluster, and we have the cluster running in the zone us-central1-c. So I’ll go back in here and add --zone=us-central1-c. Let’s see if this works. You can see that it says pool “default-pool” for “my-cluster” will be resized to 2. Do you want to continue? Yes, I do. So it’s now reducing the cluster size to two. You can increase it or decrease it. And this is manual, right? We are deciding what the size should be. The same thing happened with the pod earlier; we specified a specific size.
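The assembled command, for reference; the zone matches the zonal cluster created earlier in this section:

# Shrink the default node pool of my-cluster from three nodes to two
gcloud container clusters resize my-cluster --node-pool=default-pool --num-nodes=2 --zone=us-central1-c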

So this is called manual scaling. Manual scaling of the cluster is done by resizing it, and manual scaling of pods is done by scaling the deployment using kubectl. What we’re looking at is one of Kubernetes’ most important concepts, and that’s the reason why we designed this journey in such a way that we can go through it step by step. Let’s wait for the cluster to be resized. The values shown below will be updated once the operation finishes. So we are still waiting for the operation to be completed. The resizing of the node pool is taking a long time, so what we’ll do is take a break. One of the important things to remember is that we are not really happy doing all of this manually, right? We are manually increasing the number of instances and nodes. What we would want is auto scaling. Let’s see how to do that in the next steps.
