VMware 2V0-21.20 – vSphere 7 Kubernetes
May 30, 2023

1. Introduction to Microservices

And what I mean by a monolithic application is basically this: here I've got an application that includes a web interface. The application also relies upon a database. It's got a Customer Relationship Manager tool built into it, and it's got a reporting service built into it as well. So think of this as a really big and complex machine that does a lot of different jobs. And because it's so big and so complicated, it's very hard to write and it's very hard to debug. If there's something wrong with one aspect of this monolithic application, that can be problematic for the entire application. Or if we want to upgrade only the CRM tool, for example, that could be really challenging in a monolithic application. So this microservices approach has been really embraced by the development community.

 And essentially, instead of having all of these different features baked into one monolithic application, let's break them out into separate microservices. Let's create an API that allows all of these application components to communicate with each other and to work together. And if you're not really familiar with an API, just think of it essentially as a common language that all of these microservices speak. So let's think about some of the benefits of taking this monolithic application and breaking it down into smaller microservices. Well, number one, it's more highly resilient. So let's focus on the reporting part of this application. Now I've got this basically standalone microservice.

It communicates with all of the other microservices for this application, but it's its own self-contained microservice. So if that one particular microservice fails, the rest of the application should be able to continue to function, just missing that one specific feature. And with a monolithic application, that often wasn't the case. If part of the application broke, the entire application was unusable. Another benefit of microservices is the ability to flexibly scale. So, for example, let's say that the Customer Relationship Manager, or CRM, part of this application is starting to lag behind. Well, if I'm running that CRM tool as a microservice, I could simply create more instances of that microservice or give that microservice more resources in some other way. And so now I can really focus my scaling efforts on the microservices that need the additional resources.

 A third benefit of microservices is the fact that they are easy to deploy. So let’s focus again on the CRM tool here. But in this situation, we’ve got a new version of that CRM tool that we’ve rewritten, so we didn’t have to rewrite our entire application. The Web interface, database and reporting parts, they can stay exactly the same, but maybe they’ll start communicating with a new upgraded CRM tool. I can replace the existing CRM microservice with a new version, and as long as it conforms to the same API, it’s no problem. The other components of my application, the other microservices can just communicate with the new CRM tool. So it makes it easy to deploy new iterations of these very focused and very specialized pieces of code.
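
To make that API contract idea a little more concrete, here is a minimal, purely illustrative sketch in Python. The service address, endpoint, and JSON fields are hypothetical; the point is that the other microservices depend only on the contract, so a rewritten CRM service that honors the same API can be swapped in without touching its callers.

```python
# Hypothetical illustration: the reporting microservice calling the CRM
# microservice over its HTTP API. The URL and JSON fields are made up;
# only the API contract matters, not which CRM implementation answers.
import requests

CRM_BASE_URL = "http://crm-service.internal:8080/api/v1"  # assumed service address


def get_customer_name(customer_id: int) -> str:
    # Any CRM version that honors GET /customers/<id> and returns the same
    # JSON shape can sit behind this URL -- callers never need to change.
    response = requests.get(f"{CRM_BASE_URL}/customers/{customer_id}", timeout=5)
    response.raise_for_status()
    return response.json()["name"]


if __name__ == "__main__":
    print(get_customer_name(42))
```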

So how does this fit into our VMware and our vSphere 7 overall picture? The big question is, can we roll out these containerized applications, these microservices, fast enough? Can we create an operating environment with the right storage config, the right firewall rules, all of those necessary infrastructure configurations that we have to have in place? What is the workflow for a containerized microservice versus a traditional VM, and how do those vary? Those are some of the questions that we're going to answer over the next few lessons as we learn more about running Kubernetes containers on vSphere 7.

2. Introduction to Containers

We're going to start by thinking about why these are important. What's been the driving force that has made containers so popular? So let's say, for example, we have a developer, and here's the developer's laptop, and they are creating a software application, and as they're creating it, they're running it on their laptop. When they're finished writing this application, they may need to move it to a test environment. And then from the test environment, it might go to a staging environment, and then eventually it's going to go into production. Or maybe it starts out on a physical machine and eventually moves to a virtual machine, or eventually it's going to end up running in a public cloud environment.

 Well, all of these environments have small differences. They may have different operating systems, they may have different security policies, they may have different network configurations and firewall rules surrounding them. They may have different application dependencies, different versions of things like Java or .NET. And all of these little issues can cause the application to malfunction as it's moved from one environment to another. So ideally, what we'd like to have is a little environment around our application that does not change. And that's where the concept of a container came from. A container includes the entire runtime environment for your application, including all of its dependencies, libraries, other binaries, and configuration files. It's all bundled into this one package, this container.

 And now the application is running in its own little bubble. So if you think about a shipping container, you can take a shipping container off of a boat, you can put it on a train, you can drive it across the country, you can put it on the back of a tractor trailer, you can move it around. But the contents of that container are not changing. It doesn't matter which physical platform we're putting it on; it's self-contained. And with a containerized application, the differences in operating system distributions and the differences in underlying infrastructure are all basically hidden from the application running in the container.

 So the big benefit is that now we can take this and move it around, from one hardware platform to another, from one container host to another. And you may be thinking, well, wait a minute, that's what virtual machines do. We have an ESXi host, and we can create multiple virtual machines on that ESXi host, each of which has its own distinct operating system. And then within each of those virtual machines we can install applications and all of the application dependencies and so on. Then we can move our virtual machines from host to host. So what's the big difference between a VM and an application running in a container? Well, you can see in our diagram here, we've got four instances of our operating system. This is an ESXi host with a very small number of virtual machines, but that's four instances of the operating system running, and it's probably the same operating system.

 And there are some efficiencies in the way that the ESXi host works. Memory can be shared. We can do things like transparent page sharing. We can do things like not configuring reservations on these machines. But at the end of the day, you're still running four operating systems. So what if we could instead run multiple containerized applications on a single operating system instance? That could make things much more efficient. So let's zoom back a little bit here and think about this from the perspective of a software developer. I've just written this great new application, and now I want to deploy my new application.

 I basically have to count on the operations team to get all of the necessary dependencies ready to go for me. So maybe this application requires a certain version of Java. If the operations team doesn't have that set up correctly on my virtual machine, the application isn't going to work, right? So this can become frustrating, and it can create delays. And then what if I write another application, and the other application requires yet a different version of Java? Now I need to coordinate with whoever is managing those virtual machines and their operating systems to make sure that that other dependency is satisfied as well. And maybe they're compatible and maybe they're not. So these things can create some headaches in getting our software actually rolled out and live. So now let's contrast that with containers. Here we have a single operating system. We're going to call this our container host. And our container host is running some kind of operating system. This could be a physical machine, it could be a virtual machine; as a matter of fact, it could be an ESXi host.

 But it's got this one operating system. And running on top of that operating system is our container engine. And now we've got multiple containers running multiple applications, all sharing the same operating system. There are shared parts of the operating system that are read-only, and each container has its own mount that it can use to write data to. And that means that these containers are much more lightweight and they use far fewer resources than a virtual machine would use. Each container also takes up much less space than a virtual machine, so you can run many more containers on the same storage. There's less CPU overhead, there's less memory overhead. But we do need a compatible operating system under the surface here.

 We need the right base operating system. So we're going to call this the container host operating system. And we also need a registry where developers can put images of their applications. When they're done, they'll take their apps, they'll put them in the registry, and we can deploy them on the spot. The platform to run those applications is already there. So once they've created an application and put an image of it in the registry, we can run that image in a container and start up a container. There's no operating system to boot, so the process to start that container is extremely fast. So what about the container host? We have to have a standard operating system, and if we're deploying Photon OS, it's got a container runtime baked right into it. So any VM with the correct operating system could potentially be a container host for us.

 I can run a virtual machine with the appropriate operating system, and that VM could be a container host. As a matter of fact, that's what we're going to use with a Tanzu Kubernetes Grid cluster. So what are the pieces of my application that are going to go into a container? Well, number one, of course, I need the source code. This is kind of the core of my application. It's the source code, and then any other dependencies that that application is going to need. So what files does it need? Are there other components that it needs? What operating system should it be running on? All that sort of stuff. And now the developer gets everything working perfectly.

They get their containerized application working exactly the way that they want, and they create an image based on what they've created. And this is sort of like what we do with vSphere, right? We create a perfect virtual machine and then we create a template of that perfect virtual machine; that's kind of like an image for a container. We create this perfect image and then we publish that image to a registry. And the registry is kind of like a warehouse full of these container images. For VMware, that registry is called VMware Harbor. And VMware Harbor has some pretty cool security features like image signing and vulnerability scanning to make sure that our containers have not been tampered with in any way and to make sure that they don't contain known vulnerabilities. We can also have different versions of containerized applications, and we can always go back to an old version if we start running containers based on a new image and there's something wrong with it.
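
To make that build-and-publish workflow a bit more concrete, here is a minimal sketch using the Docker SDK for Python. The registry address, project name, image tag, and credentials are all placeholders, and a real Harbor instance would have its own projects and access controls; treat this as an outline of the flow, not a reference.

```python
# Sketch of the developer workflow: build an image from the application's
# source directory (which contains a Dockerfile), tag it for a registry
# such as a Harbor instance, and push it. All names are placeholders.
import docker

client = docker.from_env()  # talks to the local container engine

# Build the image from the app directory and give it a local tag.
image, build_logs = client.images.build(path="./my-app", tag="my-app:1.0")

# Log in to the registry, retag the image for it, and push.
client.login(registry="harbor.example.com", username="dev", password="***")
image.tag("harbor.example.com/demo/my-app", tag="1.0")
client.images.push("harbor.example.com/demo/my-app", tag="1.0")

# Anyone with access to the registry can now start a container from that
# image; with no guest OS to boot, startup is nearly instantaneous.
container = client.containers.run("harbor.example.com/demo/my-app:1.0", detach=True)
print(container.short_id, container.status)
```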

3. vSphere 7 and Kubernetes

So before we even get started on this, I do want to set some expectations. This isn't a Kubernetes course; this is a what's-new-in-vSphere-7 course. At some point I might put out a full course covering everything you need to know about vSphere 7 with Kubernetes, but this is not that course. So we're going to get into some of the basic elements of how Kubernetes can be run on a vSphere 7 cluster, but this course is not meant to give you a full understanding of Kubernetes in any way. I just wanted to mention that so you have a good idea of what to expect. So let's take a look at today's application stack, and we're going to move over to some documentation. This is the vSphere with Kubernetes Configuration and Management document.

 And I want to take a look at a little diagram here where it breaks down the challenges of today's application stack as it exists now. So right now we have these distributed systems, and a typical stack that's not based on vSphere with Kubernetes has a certain level of separation between these layers. First, as an application developer, I don't have visibility or control over anything going on at the vSphere level. I can run Kubernetes pods, but I don't see the entire stack that is running hundreds of applications. Then I've got a cluster administrator who's in charge of the Kubernetes piece of this. And then I've got a VMware or vSphere administrator who covers the actual virtual environment. So each of these three individuals has a limited amount of visibility, and that can make it really challenging to roll out new applications quickly. If I'm trying to get an application rolled out properly, it's going to require some level of coordination and cooperation between those three teams.

 Okay, so let's talk a little bit about Kubernetes and the orchestration that Kubernetes provides. So here you can see I have some containerized applications. What if I want to take those containerized applications, distribute them across multiple container hosts, and load balance them? Now remember, when I use the term container host, we're not necessarily talking about an ESXi host here; we could be talking about a virtual machine that has Linux running on it. But anyway, what if I want to distribute containerized applications across multiple container hosts and then load balance across those applications? What happens if one of my container hosts fails? Is that container going to be restarted on some other host? What happens if my application requires more resources and I want to scale out, meaning I want to launch additional containers and load balance the workload across them? vSphere with Kubernetes can help with this.

 It transforms our ESXi hosts. It transforms vSphere into a platform that can run Kubernetes, and it can run Kubernetes containers natively on the hypervisor itself, right on the ESXi host. So let's talk about a few Kubernetes concepts before we go any further down that road. Here we see worker nodes. These are my container hosts. This is the surface that the containers actually run on. And then I've got master nodes. There can be one master node, or there can be multiple master nodes, depending on the declarations that exist for the container. So, for example, a developer could create a container whose declaration states a requirement for a cluster of three master nodes for fault tolerance. The developers specify the requirements for their applications through something called declarations, basically stating what each application requires.
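
As a rough, hedged sketch of what a declaration can look like in practice, here is the open source Kubernetes Python client being used to declare a Deployment with three replicas of a hypothetical application. The names and image are made up, and a requirement like three master nodes would actually live in a cluster specification rather than here; the idea is simply that the developer states the desired end result and the control plane works out placement.

```python
# Illustrative declaration: "run three replicas of this containerized app."
# Names and image are hypothetical; the control plane decides where the
# resulting pods actually run and keeps the replica count at three.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the cluster

container = client.V1Container(name="webapp", image="harbor.example.com/demo/my-app:1.0")
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "webapp"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="webapp"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the declared desired state
        selector=client.V1LabelSelector(match_labels={"app": "webapp"}),
        template=template,
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```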

 And so in this example, the master nodes are going to control placement of these containers across the cluster nodes, and they're going to handle things like high availability. Within these worker nodes, the containers run as pods, and in vSphere with Kubernetes, a vSphere Pod is a lightweight virtual machine that runs one or more containers. A vSphere Pod is sized based on the workload it contains, and it has explicit resource reservations for that workload. Usually inside each of these pods we have one container, but you may have little sidecar containers that do things like logging for the main container. So if you're going to have multiple containers running in the same pod, those containers pertain to that main container in some way. So think of it this way: I've got a pod where I want to run a container.

 That pod is going to have a network interface. It's going to have an IP address. It's going to basically serve as a surface for that container to run on. And I may want to create a pod with one container application that serves my main purpose, but maybe there's another container in there that's performing logging functions. And by the way, you can create distributed firewall rules for these as well, so we can set up micro-segmentation and control all of the traffic coming in and out of these pods. The master node is also going to serve as an API endpoint for our development team. So each of these pods, and the containers that run within them, are relatively lightweight. They don't require a full operating system, and we may have many of them running at the same time.
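
Here is a minimal sketch of that main-container-plus-logging-sidecar pattern, again using the open source Kubernetes Python client with hypothetical image names. Both containers are declared in the same pod, so they share the pod's network identity:

```python
# Hypothetical pod: one main application container plus a sidecar container
# that only handles logging for it. Both containers share the pod's network
# interface and IP address.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="webapp-pod"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(name="webapp", image="harbor.example.com/demo/my-app:1.0"),
        client.V1Container(name="log-shipper", image="harbor.example.com/demo/log-shipper:1.0"),
    ]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```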

And we may be load balancing work across them. And when we don't need them, we may want them to stop running so that we're not consuming resources unnecessarily. That gets into the temporary nature of containers. Here you can see we've got our code on the left. Our developer writes all this code; this is part of creating the image. There are no servers running, there's nothing like that at this point. We've just taken this code and created an image, and then something triggers this container to deploy. Maybe there is an API call, maybe a mobile application has launched and made a call, or maybe someone clicks a link on a website. And at that point, our container can start to run.

 We can spin up a new container based on the image that we've created, and when it no longer needs to be running, it can stop running. At that point, we've still got the image, so we can instantiate another container running that exact same application at any time. That's one of the really nice aspects of containers. You would never do that with a virtual machine web server, because VMs take too long to boot and too long to start up. Containers aren't that way; the boot and start process is much, much faster, so we can use them in more of an on-demand fashion.

So now what we can do is start rolling out ESXi hosts. Here you can see we've got three ESXi hosts, and under the surface we've got VMware Cloud Foundation, meaning we're running vSphere, vSAN, and NSX. A cluster that's enabled for vSphere with Kubernetes is called the Supervisor Cluster. We're going to have a vSAN datastore that's going to be used as persistent storage for all of the vSphere Pods, and our containers are going to run in those vSphere Pods. The vSAN datastore can also be used as storage for our virtual machines as well, and we can have regular old virtual machines running on these ESXi hosts, same as we always have. But we can also have containers running natively on these ESXi hosts too. So now that we have a cluster enabled for vSphere with Kubernetes, called a Supervisor Cluster, let's break down the architecture of the Supervisor Cluster and the architecture of vSphere with Kubernetes.

 Running on each ESXi host is something called the Spherelet. This is based on the Kubernetes kubelet, and basically it allows the ESXi host to run Kubernetes containers. If we make changes to our Kubernetes pods or volumes or services or other configurations, the Spherelet is polling for those changes. In my mind, I sort of equate this to FDM. If you're familiar with vSphere High Availability, we've got something running on all the ESXi hosts called FDM, the Fault Domain Manager, that's always reaching out and finding out the latest configuration of the cluster from vCenter. Well, this is kind of similar to that. We've got a component running on each ESXi host, called the Spherelet, that is polling and finding out the changes that we've made to Kubernetes pods and other configurations. And then we've also got some control plane virtual machines.

By the way, if you notice the term K8s: K8s stands for Kubernetes. There are eight letters between the K and the S, and that's where K8s comes from. But anyway, we've got Kubernetes control plane virtual machines running on these ESXi hosts, and there are three total control plane VMs created on the cluster. These can be moved around by the Distributed Resource Scheduler as resource requirements necessitate. So those are our control plane VMs. And then we've got just regular old virtual machines that can run on the Supervisor Cluster as well. And of course, we've got these vSphere Pods, and each vSphere Pod runs one, or maybe more than one, closely related containers. And they have something inside them called the CRX.

 That's the container runtime for ESXi. The CRX is a lightweight Linux kernel that exists within these vSphere Pods and acts as an operating system for the containers to run on top of. So this starts to open the door for things like automation. We saw that we had vSAN underneath the surface: we've got a cluster of ESXi hosts with local storage, that local storage is being used to create a vSAN datastore, and our containers can utilize that vSAN datastore capacity. Our vSphere Pods have network connections, so we can do things like create firewall rules, we can create micro-segmentation, and we can isolate certain pods if necessary.

 We can control exactly what traffic can come in and out of them. And DRS can migrate both the control plane virtual machines and the pods; it can move both of those things. So DRS can maintain the overall performance of my cluster by load balancing, moving pods, regular virtual machines, and control plane virtual machines around. Now, the final concept that I want to cover in this lesson is the concept of a Kubernetes namespace. A Kubernetes namespace is kind of similar to a resource pool. So if you're familiar with resource pools in vSphere, there are definitely some similarities that you can draw here. The purpose of a namespace is to give us a way to control and share resources within a vSphere with Kubernetes cluster.

 So we've had resource pools around for a long time, and what we want now is to do with these containers something very similar to what we've done with resource pools. With namespaces, I can give a project or a team or a customer their own little sandbox that they can play in. They can't see inside the other sandboxes, and they can't expand past the limits of their sandbox. So it's a way for me to create policies as a vSphere administrator, and it's also a way for me to have things ready in advance so that teams aren't waiting on me to give them resources. And I can use NSX-T micro-segmentation along with this, which gives me the ability, on a per-vSphere Pod basis, to create lists of firewall rules that control what traffic is allowed in and out.

And I can also do things like set up resource limits. It sounds a lot like a resource pool at this point, and it looks very similar to a resource pool as well. So let's take a look at some documentation here; you will find a link to this documentation in the Udemy course resources. Here you can see in the Hosts and Clusters view, we've got namespaces, and a namespace looks almost exactly the same as a resource pool. Very similar. So the author who wrote this blog created a namespace, and you can go into it and do things like set CPU, memory, and storage limits. That gives me a great way to control the amount of resources that are dedicated to any particular team or project. And then also, just like resource pools, I have the ability to set permissions here.
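
Before we get to permissions, here is a rough, vendor-neutral illustration of those limits using the open source Kubernetes Python client. In vSphere you would normally set namespace limits through the vSphere Client, so the names and values below are hypothetical; this just shows the general shape of capping CPU, memory, and storage for a namespace:

```python
# Illustration only: create a namespace and attach a resource quota that caps
# CPU, memory, and storage, similar in spirit to limits on a resource pool.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespace(client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a")))

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "limits.cpu": "8",           # cap on total CPU limits in the namespace
        "limits.memory": "16Gi",     # cap on total memory limits
        "requests.storage": "200Gi", # cap on total persistent storage requests
    }),
)
core.create_namespaced_resource_quota(namespace="team-a", body=quota)
```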

So at the namespace level, I can establish permissions that will apply to everything within that namespace. And then, like I mentioned a little bit earlier, we have a registry service as well. This allows me to register my container images, and again, we call this VMware Harbor. So let's take a moment to recap some of what we've learned about here. Kubernetes is a widely utilized open source solution for orchestrating containers. What we're now getting with vSphere 7 is the ability to run those containers natively, directly on the ESXi hosts, the ability to move them around with things like DRS, and the ability to have a registry where we can store all of those container images and keep everything organized.

The big leap forward with ESXi 7 is having this little Photon OS that acts as the CRX and being able to run those Kubernetes containers without all of the extra work of deploying virtual machines, installing an operating system on those virtual machines, and creating a surface within VMs to run Kubernetes. Now, there may be certain circumstances in which we still want to do that, where we want full, 100% control of our Kubernetes environment. And that's what we'll learn about in the next lesson when we take a look at Tanzu.

4. Tanzu Kubernetes Grid Cluster

Now, a Tanzu Kubernetes Grid cluster is a way that you can run a traditional Kubernetes cluster using virtual machines and a container host operating system. The Tanzu Kubernetes Grid Service automatically deploys the cluster for you, and you get an enterprise-grade Kubernetes cluster that's very quickly and easily deployed. Within it, you've got virtual machines running the open source Photon OS, VMware supports the entire stack, and you can have multiple Tanzu Kubernetes Grid clusters associated with a single Supervisor Cluster.

 So what you're doing here is running a traditional Kubernetes cluster using virtual machines and a container host operating system, and the Kubernetes cluster runs inside VMs on your ESXi hosts. That's very different from what we learned about in the last lesson. In the last lesson, we were learning about pods running directly on top of ESXi. In this scenario, we have a fully upstream-compliant Kubernetes cluster that's compatible with open source Kubernetes, so it's guaranteed to work with your Kubernetes applications and tools. This is more of a traditional Kubernetes approach, running on a Kubernetes cluster of virtual machines, and it doesn't have that same level of integration; we're not running our containers right on the hypervisor. So what are some of the use cases for this? The primary place that most customers will run pods is in clusters deployed through the Tanzu Kubernetes Grid Service. The vSphere Pod Service that we talked about in the previous lesson complements this for use cases in which the application components need the security and performance isolation that you would traditionally get with virtual machines, but in a pod form factor.

So if you need an open source Kubernetes deployment, if you need complete control over the Kubernetes cluster, including root-level access to the control plane and the worker nodes, or if you want to stay current with Kubernetes versions without upgrading your ESXi hosts, those are all great use cases here. Also, if you have short-lived Kubernetes clusters that you may want to create and destroy in a relatively short time period, or if you want to create Kubernetes namespaces using the kubectl CLI, those are all use cases for the Tanzu Kubernetes Grid Service.
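
To give a feel for how a developer might request one of these clusters, here is a rough sketch using the open source Kubernetes Python client against a supervisor namespace. The API group, version, class names, storage class, and field layout of the TanzuKubernetesCluster resource vary by release, so treat everything below as an approximate outline rather than a reference, and check the vSphere with Kubernetes documentation for the exact schema:

```python
# Rough sketch only: asking the Tanzu Kubernetes Grid Service for a cluster by
# creating a TanzuKubernetesCluster object in a supervisor namespace. The API
# group/version, class names, storage class, and field layout are approximate
# and vary by release -- check the documentation for the exact schema.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig obtained by logging in to the Supervisor Cluster

cluster_request = {
    "apiVersion": "run.tanzu.vmware.com/v1alpha1",
    "kind": "TanzuKubernetesCluster",
    "metadata": {"name": "tkg-cluster-1", "namespace": "team-a"},
    "spec": {
        "distribution": {"version": "v1.18"},
        "topology": {
            "controlPlane": {"count": 3, "class": "best-effort-small",
                             "storageClass": "vsan-default-storage-policy"},
            "workers": {"count": 3, "class": "best-effort-small",
                        "storageClass": "vsan-default-storage-policy"},
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="run.tanzu.vmware.com", version="v1alpha1", namespace="team-a",
    plural="tanzukubernetesclusters", body=cluster_request,
)
```

The takeaway is simply that, just like a pod or a deployment, the cluster itself is requested declaratively, and the Tanzu Kubernetes Grid Service builds and manages the virtual machines needed to satisfy that request.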
