Google Associate Cloud Engineer Topic: Getting Started with Google Kubernetes Engine Part 3
December 20, 2022

11. Step 11 – Understanding Deployments and Replica Sets in Kubernetes

Welcome back. In this step, let’s talk about deployments and replica sets. We have been talking about deployments a lot. Now, why do we need a replica set? A deployment is created for each microservice. So we run kubectl create deployment. In place of m1, we used hello-world-rest-api, and we specified the image. We said this is the hello-world-rest-api 0.0.1.RELEASE image that we would want to deploy. Similarly, if I want to deploy a microservice, I can say m1 and put in the image for that microservice. A single deployment represents a microservice with all its releases. So you might have 2, 3, 5, or 10 versions of your microservice. A deployment represents all the versions of a microservice. One of the most important roles of a deployment is to manage new releases, ensuring zero downtime. Let’s go in here, start with a clear, and say kubectl set.

kubectl set image deployment. Which deployment? hello-world-rest-api. And within hello-world-rest-api, we would want to update the image. Which container do you want to update the image for? We want to update the container named hello-world-rest-api. And we want to update the image to what? in28min/hello-world-rest-api:0.0.2.RELEASE. So we earlier created a deployment. Now what we are doing is updating that deployment. We are saying that the hello-world-rest-api image should be updated to this. So this is the image I would want to use to upgrade from the 0.0.1 release to the 0.0.2 release. I’ve already created this container image, and it is already part of Docker Hub, so I can go ahead and execute this command. This command might be tough to type out, so make sure that you pick it up from the backup of commands that we have provided you already. So let’s go ahead and run this. You can see what happened: deployment.apps/hello-world-rest-api image updated. Now, this is the hello-world-rest-api URL that we used earlier. So if I execute this, you can see that it now returns Hello World Version Two. How is it returning V2? Because we have automatically upgraded to V2.
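
Written out in full, the update command would look roughly like this (a sketch; the image tag assumes the in28min Docker Hub repository that this course’s sample application comes from):
kubectl set image deployment hello-world-rest-api hello-world-rest-api=in28min/hello-world-rest-api:0.0.2.RELEASE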

The image is updated to version 2. So the same URL is now serving version two of the microservice. How can you get this URL? One of the ways you can do that is by saying kubectl get services. We had earlier created a service around this deployment, and this is the external IP. So you can take that external IP, add the port, and append the path. That’s the URL that we used earlier as well. So the same deployment now represents the 0.0.1 and 0.0.2 releases. A deployment manages new releases, ensuring zero downtime. We were able to easily upgrade to a new release without any downtime.
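
If you want to check this yourself, a minimal sketch (the port and path shown assume the defaults used for this sample application earlier in the course):
kubectl get services
# note the EXTERNAL-IP of the hello-world-rest-api service
curl http://EXTERNAL-IP:8080/hello-world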

What is the role of the replica set? The replica set, on the other hand, ensures that a specific number of pods are running for a specific microservice version. Earlier, we were using V1; now we are using V2. Earlier, there was a replica set for V1, and now that replica set has been replaced by a V2 replica set. So a replica set represents a specific version, a specific deployment release, and its instances, the pods. Now, how can I find that out? I can say kubectl get replicasets. What’s present in here? You can see two replica sets. This replica set is from the previous version, and for the old version, we don’t have any pods running right now. For the new version, there are two pods running. Now, I’d say kubectl get pods and then kubectl delete pod, pick one of the pods, and delete it. So the pod is deleted. And then I’ll do kubectl get pods again.
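
That sequence of commands would look something like this (the pod name is a placeholder; copy the actual name from the kubectl get pods output):
kubectl get replicasets
kubectl get pods
kubectl delete pod POD-NAME
kubectl get pods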

You can see that there is a new one that was automatically launched. So as soon as an existing pod is deleted, the replica set says, “Hey, I actually want two instances to be running. Every time somebody kills a pod, I need to launch another one immediately.” And what does the replica set do? It ensures that two pods are always running. So the duty of a replica set is to ensure that a specific number of pods are always running for a specific microservice version. A deployment is responsible for multiple versions of your microservice. So the deployment is responsible for shifting from one release to another, making sure that there is no downtime when you are actually moving from one release to another. Who is responsible for each specific release and for ensuring that the specific number of instances is running? That is the replica set. So when we run kubectl scale deployment m1 --replicas=2, what we are updating is the replica set. We are telling the replica set, “Hey, you need to have two instances.” Let’s try that: kubectl scale deployment hello-world-rest-api, and let’s say I just want one instance, so replicas equals one. What would happen now?
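
The scale command would look roughly like this:
kubectl scale deployment hello-world-rest-api --replicas=1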

Let’s do a kubectl get deployments. Will there be a change in the deployment? No; the thing that changes is the replica set. So kubectl get replicasets, and you can see that the desired count for the new V2 replica set is now updated to one. So we only want one pod to be running at all times. Even if that pod is killed, the replica set will launch a new one. If you deploy a new version of the microservice, it creates a new replica set. So if we update the image, which is what we already did earlier with kubectl set image deployment, what happened? A new replica set was created; a V2 replica set was created. The deployment updates the V1 replica set and the V2 replica set based on the release strategy. So if you want to slowly move from one release to another, the deployment can do that for you as well. These are called rolling deployments. In this quick step, we looked at the difference between a deployment and a replica set. I’m sure you are having an interesting time, and I’ll see you in the next step.

12. Step 12 – Understanding Services in Kubernetes

Welcome back. We have been talking about services for a long time. Let’s quickly review the most important things that you need to remember about a service. Why do we need a service? Each pod has its own IP address. How do you ensure that external users are not impacted when a pod fails and is replaced by the replica set? We killed a pod, and it was immediately replaced by the replica set. However, our URL was still working. So our service URL continues to work even though things change internally; there is no change for external users. A new release happens, and all existing pods of the old release are replaced by the new release. We saw that happen, so we replaced all instances of V1 with V2, and even then the URL continued to work. So how do you ensure that external users are not impacted when there are changes internally?

That’s where we create a service. The service exposes your deployment to the outside world and also ensures that external users are not impacted if something changes internally. We previously created a service using kubectl expose, with the type set to LoadBalancer and the port set to 8080, or whichever port you want to make use of. So a service exposes your pods to the outside world using a stable IP address, ensuring that the external world is not impacted as pods go down and come up. There are three types of Kubernetes services. The one that we created was a LoadBalancer.
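
The expose command we used would have looked roughly like this (the deployment name and port match the ones used earlier in this course):
kubectl expose deployment hello-world-rest-api --type=LoadBalancer --port=8080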

In addition, there is something called ClusterIP. Sometimes you don’t really need to expose services outside your cluster. There might be an internal microservice that you are making use of; it’s specific to whatever you’re doing inside the cluster, and you don’t want it exposed outside your Kubernetes cluster. In those kinds of situations, you can go for a ClusterIP. It exposes the service on a cluster-internal IP address. A use case is when you want your microservices to be available only inside a cluster, for intracluster communication. The next one is what we have been using, a LoadBalancer. It exposes the service externally using a cloud provider’s load balancer. Let’s see what’s happening here. So let’s actually go to Load balancing in the console. Is there a load balancer created for us? You’d see that when we created a service, a load balancer was also created for us. Internally, we created a service with the type LoadBalancer, and externally, a cloud provider’s load balancer was created for us. This would work in any cloud. If you try it in AWS or Azure with the respective Kubernetes service, you’ll notice that a load balancer is automatically provisioned. You’d go for a LoadBalancer if you want to create individual load balancers for each microservice. So for each deployment, you create a separate load balancer. This is not always a good idea because you don’t want to use too many load balancers. The other option is a NodePort.

The NodePort type exposes the service on each node’s IP at a static port. So there’s a port called the node port, and you are exposing the service at the node’s IP address. A use case for going with node ports is when, let’s say, you don’t want to create an external load balancer for each microservice. What you can do is expose all the services using a NodePort, and then you can create an ingress. You can create one ingress and route to all the microservices. Where did we see ingress earlier? If you search for ingress in the console, it actually points you to Kubernetes Engine services. That’s where we need to go, and this is where you can see Services and Ingress. So this is where you can actually create an ingress. What you can do is expose your service as a NodePort. So if you have ten microservices, you can expose all of them as node ports, and then you can create an ingress.
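
Exposing a deployment as a NodePort instead would look roughly like this (same deployment name; a sketch for contrast with the LoadBalancer command shown earlier):
kubectl expose deployment hello-world-rest-api --type=NodePort --port=8080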

So you can go in here and create an ingress that can route traffic to load balancers and node ports. Typically, the pattern you would use is to expose the services as node ports and then have an ingress to expose them to the outside world. In this step, where we talked about Kubernetes services, we saw that you create a service so that the external world does not need to know about what’s happening inside your cluster. We discussed three kinds of services: ClusterIP, LoadBalancer, and NodePort. A ClusterIP will be useful if the microservice or the service you are exposing is used only within your cluster. When you want to use a cloud provider’s load balancer to expose your service to external users, either on the internet or within a specific network in your intranet, you use a LoadBalancer. A NodePort can be used when you want to expose services at the node’s IP over a static port. You can expose all of your microservices as node ports and have an ingress that routes to multiple NodePort services, which is one of the possibilities with NodePort. I’m sure you’re having a wonderful time, and I’ll see you in the next step.

13. Step 13 – Getting Started with GCR – Google Container Registry

14. Step 14 – Important Things to Remember – Google Kubernetes Engine GKE

Welcome back. In this step, let’s look at some of the important things that you need to remember about Google Kubernetes Engine. Always replicate master nodes across multiple zones. That ensures high availability, even for your control plane. One of the important things to remember is that some of the CPU on the nodes is also reserved by the control plane.

The nodes need to communicate back to the master, and that’s why some CPU is reserved. So if you have multiple cores, then on the first core, 6% is reserved; on the second core, 1% is reserved; on the third and fourth cores, 0.5% is reserved; and on the rest, 0% is reserved. Earlier, we talked about creating a Docker image for your microservices. In this section, we made use of pre-created Docker images. However, if you want, you can actually create your own Docker images as well. All the Docker image creation-related configuration is typically stored in a file called the Dockerfile. So you create a Dockerfile and say, “This is the content I would want in the Docker image.” And you can build a Docker image by using docker build -t and saying what tag you want to attach to that specific image. Once you build the Docker image, you can run it locally.

So first test it locally using the command docker run, where you can specify the port on which it needs to run and specify your image. Once you have tested it locally, you can push it to a container registry. You can put it on something like Docker Hub; Docker Hub is a centralized container registry. If you do a docker push, this image will be available on that centralized registry. If you are part of an enterprise, then your enterprise might be using a private container registry, and you can also push it there. If you are in the cloud, each of the cloud providers also offers its own container registry service. In Google Cloud, there is Google Container Registry. So if you’re using Google Cloud, you can push this image to GCR, the Google Container Registry. Once you push the image to Google Container Registry or Docker Hub, you can use it to create containers in Kubernetes.
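
As a sketch of that flow, assuming a hypothetical image called my-service and a project with the ID my-project-id:
# build the image from a Dockerfile in the current directory
docker build -t my-service:0.0.1.RELEASE .
# test it locally, mapping port 8080
docker run -p 8080:8080 my-service:0.0.1.RELEASE
# tag the image for Google Container Registry and push it
docker tag my-service:0.0.1.RELEASE gcr.io/my-project-id/my-service:0.0.1.RELEASE
docker push gcr.io/my-project-id/my-service:0.0.1.RELEASE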

The next important thing to remember is that Kubernetes also supports stateful deployments. Typically, most of the microservices we would be using are stateless. However, you might want to run things like Kafka, Redis, or ZooKeeper, which have state. In that case, you can use a StatefulSet. So instead of deployments, you would make use of stateful sets. Separately, if you want to run services on the nodes for log collection and monitoring, for example, you want to get some metrics or logs from all the nodes, then you need to create a pod on each of the nodes. So you want to create one pod per node. How can you do that in Kubernetes? The way you can do that is a DaemonSet. A DaemonSet helps you create one pod on every node. So if you create something as a DaemonSet, then one pod of that DaemonSet is created on every node. You can control which nodes the DaemonSet is deployed to, but by default it will be on every node. So if you want to run background services for log collection or monitoring, you can create DaemonSets.
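
To see what is already running as a StatefulSet or a DaemonSet in your cluster, a minimal sketch:
kubectl get statefulsets
kubectl get daemonsets --all-namespaces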

The last thing that you need to remember is that Cloud Monitoring and Cloud Logging can be used to monitor your Kubernetes Engine clusters. This is enabled by default. Earlier, we saw logs and metrics for our Kubernetes cluster; all those logs and metrics are coming from Cloud Monitoring and Cloud Logging. Cloud Logging is responsible for logs; Cloud Monitoring is responsible for metrics. They are the managed services in Google Cloud for monitoring and logging-related activities. If you want, you can also take the logs from Cloud Logging, the system logs, which are basically the Kubernetes logs, and the application logs, which are basically the deployment or pod logs, and export them to either BigQuery or Pub/Sub. BigQuery is the relational big data database in Google Cloud, and Pub/Sub is kind of a queue where you can stream your logs to. In this step, we looked at some of the things that you need to remember about Google Kubernetes Engine. I’m sure you’re having a wonderful time, and I’ll see you in the next step.

15. Step 15 – Scenarios – Google Kubernetes Engine GKE

Welcome back. In this step, let’s look at some of the important scenarios related to Google Kubernetes Engine (GKE). Let’s get started with the first one. You want to keep your costs low and optimize your GKE implementation. What are the options that you can think of? One of the options you can use is preemptible VMs.

Make sure you select the right region, and opt for committed-use discounts if your workloads are continuous and run for an extended period of time. Also remember that E2 machine types are cheaper than N1 machine types for most workloads, so experiment with these two machine types and see which works out to be less expensive for you. Because Google Kubernetes Engine makes use of Compute Engine virtual machines, the first set of optimizations is really about optimizing your virtual machine usage. The next important thing is to make sure that you choose the right environment to fit your workload type. If the workload you are running needs GPUs, you need to ensure that the virtual machines in your node pools have GPUs. One of the flexibilities that Kubernetes provides is that you can have multiple node pools. So you can have a node pool of normal machines and another node pool with GPU-attached machines, and workloads or deployments that require GPUs can be deployed to the node pool to which the GPUs are attached.
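
Creating node pools for the cost and GPU cases above might look roughly like this (the cluster name, node pool names, zone, machine type, and accelerator type are all placeholders):
# preemptible node pool for cost savings
gcloud container node-pools create preemptible-pool --cluster my-cluster --zone us-central1-c --preemptible --machine-type e2-standard-4
# separate node pool with GPUs attached
gcloud container node-pools create gpu-pool --cluster my-cluster --zone us-central1-c --accelerator type=nvidia-tesla-t4,count=1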

The next scenario: you want an efficient, completely auto-scaling GKE solution. Inside a Google Kubernetes cluster, you have a number of node pools, and the node pools contain nodes. Nodes are where the instances of your applications, the pods, are deployed. If you want complete auto scaling, you need to use a horizontal pod autoscaler for your deployments. So you want to be able to increase the number of instances of your deployments automatically based on the load, and that is done by using a horizontal pod autoscaler; kubectl autoscale deployment was the command we saw earlier. But it is not sufficient to autoscale the instances of your deployment; the deployment should also have sufficient nodes. And how do you autoscale node pools? The way you can do that is by using the cluster autoscaler.
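
Together, the two levels of autoscaling might look roughly like this (the deployment, cluster, node pool names, and thresholds are placeholders):
# autoscale the pods of a deployment based on CPU
kubectl autoscale deployment hello-world-rest-api --min=2 --max=10 --cpu-percent=70
# autoscale the nodes in a node pool
gcloud container clusters update my-cluster --zone us-central1-c --enable-autoscaling --min-nodes 1 --max-nodes 5 --node-pool default-pool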

The next scenario: you want to execute untrusted third-party code in a Kubernetes cluster. Let’s say you are doing some testing with some untrusted third-party code and you want to run it as part of your Kubernetes cluster. One of the most important things to remember whenever you are running anything in a Kubernetes cluster is that if you have a different kind of workload, the best option is to create a separate node pool and run it there, so that it does not impact the other workloads that are running as part of your Kubernetes cluster. And here as well, the best option would be to create a new node pool with something called GKE Sandbox. GKE Sandbox allows you to run untrusted code inside a sandbox. Once you have the new node pool, you can deploy the untrusted code to the sandbox node pool.
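
Creating a sandbox node pool uses the gVisor sandbox type; a minimal sketch (the cluster, node pool name, and zone are placeholders):
gcloud container node-pools create sandbox-pool --cluster my-cluster --zone us-central1-c --sandbox type=gvisor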

The next scenario: you want to enable only internal communication between your microservice deployments in a Kubernetes cluster. If you want to expose your deployment, you need a service. So the question is, what kind of service would you go for? Because I want only internal communication here, the service type I would choose is ClusterIP. Whenever you choose ClusterIP, only the pods and services that are part of the Kubernetes cluster can talk to this service; anything outside the cluster will not be able to talk to it. The next scenario: my pod stays pending. So I am trying to create a deployment, and I see that my pod status remains Pending. Most probably, the pod cannot be scheduled onto a node. If I specify that I require 100 instances of a pod and there are insufficient resources and nodes available, the pod cannot be scheduled onto a node, and that’s why your pod might stay pending. So what you would do in this situation is increase the number of nodes in your node pool. The next scenario: my pod stays waiting. This is a different status; earlier it was Pending, now it is Waiting. What could be the possible reason? The most likely cause is a problem pulling the image. Whenever we create a deployment, we specify the container image to make use of, and if the path of the container image is not correct, then you’ll not be able to pull the image. The other problem might be related to access: I don’t have enough permissions to pull the image from the container registry. In both those situations, your pod would stay waiting. In this step, we looked at some of the most important scenarios related to Google Kubernetes Engine. I’m sure you’re having a wonderful time, and I’ll see you in the next step.
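
To investigate either the pending or the waiting situation, the events on the pod usually tell you why; a minimal sketch (the pod name is a placeholder):
kubectl get pods
kubectl describe pod POD-NAME
# the Events section at the end of the output shows scheduling or image pull errors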

16. Step 16 – Quick Review – Command Line – gcloud container clusters

Welcome back. In this step, let’s look at how you can manage your GKE cluster from the command line. Most of these operations have already been looked at; the idea is just to have one consolidated place where all the options are listed. gcloud container clusters create is used to create a cluster. To resize a cluster, you want to increase the number of nodes in a node pool; that’s when you go for gcloud container clusters resize my-cluster, specifying the node pool and the number of nodes. Auto-scaling a cluster: what are you trying to do? When auto-scaling a cluster, you want to increase the number of nodes in the cluster when there is more load. How do you do that? gcloud container clusters update with the cluster name.

You want to enable auto scaling, and you specify the minimum number of nodes and the maximum number of nodes. You can delete a cluster with gcloud container clusters delete my-cluster. You can also add a node pool when the existing node pools are not sufficient or when you want a new type of node; for example, you want a node type with GPUs, or you want a gVisor sandbox. In those kinds of situations, you can run gcloud container node-pools create, specify the node pool name, and specify your cluster. In all these commands, you might need to specify a zone or a region depending on the type of the cluster. You can also list the images that are present in your container registry. So if you want to find out what container images are present in Google Container Registry, you can say gcloud container images list. In this quick step, we talked about the different commands related to cluster management. Importantly, remember that if you are playing with clusters or node pools in Google Cloud, the command is gcloud container. I’ll see you in the next step.
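
Put together, the cluster-management commands from this step might look like this (the cluster name, node pool names, node counts, and zone are placeholders):
gcloud container clusters create my-cluster --zone us-central1-c
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 3 --zone us-central1-c
gcloud container clusters update my-cluster --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone us-central1-c
gcloud container node-pools create my-node-pool --cluster my-cluster --zone us-central1-c
gcloud container images list
gcloud container clusters delete my-cluster --zone us-central1-c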

17. Step 17 – Quick Review – Command Line – kubectl workload management

Welcome back. In this step, let’s look at workload management. Workload management is nothing but managing pods, deployments, replica sets, and things like that. If you want to list the pods, services, and replica sets, how do you do that? kubectl get pods, kubectl get services, kubectl get replicasets, kubectl get events.

kubectl get hpa lists the horizontal pod autoscalers. So listing things is done with the kubectl get command. If you want to create a deployment, you can either do a kubectl create deployment or, if you have a deployment YAML file, you can do a kubectl apply -f deployment.yaml. If you want to create a service, it can also be created by using YAML files; however, if you want to use a command, you can use kubectl expose deployment and specify the type of service you’d like to create and the port information. If you want to scale a deployment: kubectl scale deployment hello-world --replicas=5. If you want to auto-scale a deployment, use kubectl autoscale deployment and specify the max, the min, and the CPU percentage, or whatever criteria you want to auto-scale on. If you want to delete a deployment, kubectl delete deployment, and you can do the same for a service, a pod, and other things.

If you want to update a deployment, you can do kubectl apply -f deployment.yaml. If you want to roll back a deployment, you can do a kubectl rollout undo on the hello-world deployment and specify the revision you’d like to roll back to. As you can see, when you’re playing with pods, services, replica sets, scaling deployments, or autoscaling deployments, the command you’re using is kubectl. As far as the exam is concerned, learn to distinguish between kubectl and gcloud container. Be very clear about when to use kubectl and when to use gcloud container.
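
Put together, the workload-management commands from this step might look like this (the deployment names, image, port, thresholds, and revision number are placeholders; the image shown assumes the sample application used earlier in this course):
kubectl get pods
kubectl get services
kubectl get replicasets
kubectl get events
kubectl get hpa
kubectl create deployment hello-world --image=in28min/hello-world-rest-api:0.0.1.RELEASE
kubectl expose deployment hello-world --type=LoadBalancer --port=8080
kubectl scale deployment hello-world --replicas=5
kubectl autoscale deployment hello-world --min=2 --max=10 --cpu-percent=70
kubectl apply -f deployment.yaml
kubectl rollout undo deployment hello-world --to-revision=1
kubectl delete deployment hello-world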

18. Step 18 – Delete GKE Service, Deployment and Cluster

Welcome back. Let’s now delete all the resources that we have created in this specific section. To make it really interesting, what we’ll do is actually delete everything step by step. So let’s start with deleting the service: kubectl delete service hello-world-rest-api, which is the name of the service that we created. This would delete the service and the load balancer associated with that specific service. Once we’ve deleted the service, we need to delete the deployment. After the deployment, we can delete the cluster, and after that, we can delete the project. To delete the service and the deployment, we’d be using kubectl; to delete the cluster, we’d be using gcloud. So, kubectl delete deployment hello-world-rest-api. Let’s do that. The deployment is also now deleted. The next thing I would want to do is delete the cluster: gcloud container clusters delete my-cluster, with the zone flag set to the zone where I created the cluster, us-central1-c.

I have not set the project yet. So I’ll do a gcloud projects list to list the various projects, pick up the project ID, and set it here with gcloud config set project. And now I can go ahead and delete my cluster. It says the following clusters will be deleted: my-cluster in us-central1-c. Yes, I would want to do that, and this would go ahead and delete my cluster. The deletion of the cluster takes a long time, and I would not really wait for it here. What I would recommend you do is actually wait for the deletion of the cluster, and if you want, you can even shut down the project under which we created the cluster. In this step, we deleted all the things that we had created while learning about Kubernetes. I’m sure this was an interesting Kubernetes journey, and I look forward to seeing you in the next section. Until then, bye.
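
The cleanup sequence from this step would look roughly like this (the cluster name, zone, and project ID are placeholders matching what was used earlier in the course):
kubectl delete service hello-world-rest-api
kubectl delete deployment hello-world-rest-api
gcloud projects list
gcloud config set project my-project-id
gcloud container clusters delete my-cluster --zone us-central1-c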
