Google Associate Cloud Engineer Topic: Managed Services in Google Cloud Platform
December 19, 2022

1. Step 01 – What are Managed Services?

Welcome back. In the previous steps, we manually configured virtual servers, manually set up load balancing, and had to do a lot of things by hand to get our applications up and running. Is that the only way to run applications in the cloud? The answer is no. All the cloud providers offer a number of managed services. In this section, let's get a 10,000-foot view of managed services. Let's get started. Do you want to continue running applications in the cloud the same way you ran them in your data center, or are there other approaches? To understand this better, you should be familiar with some of the terminology associated with cloud services, such as infrastructure as a service (IaaS), platform as a service (PaaS), function as a service (FaaS), container as a service (CaaS), serverless, and many others. Don't be concerned if some of the terminology in this section is unfamiliar to you. Let's go on a quick journey to understand all this terminology. I'll see you in the following step.

2. Step 02 – Understanding IAAS and PAAS

Welcome back. Let's get started with infrastructure as a service. IaaS means that you are only using infrastructure from the cloud provider. A good example of this is using a virtual machine to deploy your applications or databases. If you are using AWS, you might be using EC2.

If you are using Azure or Google Cloud, you might be creating virtual machines, and then you would manually deploy your applications or manually set up your database on that virtual machine or EC2 instance. When you're using infrastructure as a service, you are responsible for the application code and the application runtime. You are responsible for configuring load balancing among your virtual machines. You are responsible for scaling your application. You are responsible for the operating system and for making sure that it is upgraded and properly patched.
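To make this division of responsibility concrete, here is a minimal sketch of the IaaS starting point, assuming the google-cloud-compute Python client library; the project ID, zone, and instance name are placeholders. Notice that all you get is a bare virtual machine, and everything above it is still your job.

```python
from google.cloud import compute_v1

def create_vm(project_id: str, zone: str, name: str) -> None:
    """Create a bare Debian VM; with IaaS, everything on top of it is our job."""
    instance_client = compute_v1.InstancesClient()

    # Boot disk based on a public Debian image.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=10,
        ),
    )

    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-micro",
        disks=[boot_disk],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )

    # The provider is responsible for hardware, networking, and virtualization.
    operation = instance_client.insert(
        project=project_id, zone=zone, instance_resource=instance
    )
    operation.result()  # Wait for the create operation to finish.
    # Installing the runtime, deploying the app, patching the OS, load
    # balancing, and scaling are all still our responsibility from here on.

create_vm("my-project", "us-central1-a", "my-iaas-vm")
```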

You are responsible for the availability of your application. Take the case of deploying a Python, Java, or Node.js application. When you are doing infrastructure as a service, you are responsible for taking the virtual machine, choosing the right operating system for it, and installing the required application runtime; if you want Java, Python, or Node.js, you need to install it. Then you need to install your application code and set up its configuration. So you are responsible for everything above the virtualization layer. The cloud provider is responsible for providing the physical hardware and networking and for making sure that you have access to virtualization software so that you can create your virtual machine instances. For everything else, you are responsible. This is how we deployed applications in our data centers until a few years ago. Platform as a Service offerings from the cloud providers are an alternative. When you're using Platform as a Service, the cloud provider is responsible for the operating system, including making sure that it is upgraded and patched. The cloud provider is responsible for the application runtime. If you want Java or Python installed, the cloud provider takes care of that for you.

The cloud provider is also responsible for auto scaling, availability, and load balancing. You are only responsible for the configuration of the application and the service, and for the application code, if there is code involved. Good examples of Platform as a Service include Elastic Beanstalk from AWS, App Service from Azure, and Google App Engine. When you're using any of these services, all you need to focus on is the application code. If you are deploying a Java application to any of these, you just provide the code to the specific service. The cloud service ensures that the application runtime is properly configured and the OS is properly patched, and you get features like auto scaling, availability, and load balancing for free.
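To contrast with the IaaS sketch above, here is roughly the entire deployment unit you bring to a platform-as-a-service offering. This is a minimal sketch, assuming App Engine's Python standard environment, where a small app.yaml file declares the runtime and the platform handles everything else.

```python
# main.py: the entire deployment unit for a PaaS such as App Engine.
# A small app.yaml (e.g. "runtime: python312") tells the platform which
# runtime to provision; the OS, patching, scaling, and load balancing
# are the platform's responsibility, not ours.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from a PaaS-managed runtime!"

if __name__ == "__main__":
    # For local testing only; in production the platform runs the app for us.
    app.run(host="127.0.0.1", port=8080)
```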

Now, let's look at a few varieties of platform as a service. There is Container as a Service (CaaS), where you run containers instead of applications, and there is FaaS, or Function as a Service, where you run functions instead of applications, along with a variety of other platform-as-a-service offerings. Platform as a service is not limited to compute services alone. Database platform-as-a-service offerings exist for both relational and NoSQL databases: Amazon RDS, Google Cloud SQL, and Azure SQL Database are good examples of relational database offerings in the cloud. The same is true for queues: Amazon SQS, Azure Queue Storage, and Google Cloud Pub/Sub, for example. The cloud providers also offer a number of platform-as-a-service offerings for artificial intelligence, machine learning, and operations. In this step, we got started with infrastructure as a service and platform as a service. When you're using Infrastructure as a Service, the cloud provider only provides the physical hardware, networking, and virtualization layers. You are responsible for everything on top of that: making sure that the operating system is patched, making sure that the application runtime is installed, making sure that your application is properly deployed, and taking care of auto scaling, availability, and load balancing. In the cloud, we usually go with Platform as a Service offerings. When you choose a platform-as-a-service offering, the cloud provider is responsible for everything up to the application runtime. You are only responsible for the configuration of the application and the managed service, and if there is code involved, you are also responsible for the application code. And we talked about the fact that cloud providers offer a variety of platform-as-a-service offerings: containers as a service, functions as a service, databases, queues, operations, and a lot of other things. In the next step, let's look a little deeper into containers as a service. Why do we need containers? And what are the typical features that Container as a Service offerings provide? I'll see you in the next step.

3. Step 03 – Understanding Evolution to Containers and Container Orchestration

Welcome back. In this step, let's look at containers. Why do we need containers? What is Container as a Service? To understand that, we will get started with microservices. Enterprises are heading toward microservices architectures. Instead of building a large monolithic application, the focus is on building small microservices.

The biggest advantage of microservices is the flexibility to innovate. You can build applications in different programming languages: for example, you can build the movie service in Go, the customer service in Java, and the review service in Python. You would not necessarily want applications built in a variety of languages in the same enterprise, but you have the flexibility to do that. A byproduct of moving toward a microservices architecture, however, is that your deployments become more complex. The way you deploy your Go application might be different from the way you deploy your Java application.

How do you ensure that there is one way of deploying microservices built in different languages? That's where containers come in. One of the most popular container tools is Docker. When you're using containers, you create a Docker image for each of your microservices. The Docker image contains everything you need to run a microservice. It contains the application runtime: if you are running a Java application, you'd need a JRE or JDK; if you have a Python or Node.js application, you need the corresponding runtime. The Docker image also contains the application code and the dependencies that the application needs. So the Docker image contains everything you need to run a microservice. You can create Docker images for Python, Java, and Node.js applications, and once you have a Docker image, you can run it the same way on any infrastructure, whether on your local machine, in your corporate data center, or in the cloud. You've probably heard a developer say, "But it works on my local machine!" Because you use the same Docker image on your local machine and in your deployment environments, you won't have those problems.
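As a small illustration of "build once, run anywhere," here is a sketch using the Docker SDK for Python (the docker package). It assumes a Docker engine is running locally and that the current directory contains a Dockerfile for a hypothetical movie-service.

```python
import docker

client = docker.from_env()  # Talk to the local Docker engine.

# Build an image that bundles the runtime, dependencies, and application code.
image, build_logs = client.images.build(path=".", tag="movie-service:1.0")

# The same image now runs identically on a laptop, in a data center,
# or on any cloud that has a container runtime.
container = client.containers.run(
    "movie-service:1.0",
    detach=True,
    ports={"8080/tcp": 8080},  # Map the service port to the host.
)
print(container.short_id, container.status)
```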

Now, what are the advantages of going with containers? Docker containers, and containers in general, are lightweight. They are lightweight because, compared to virtual machines, they do not have a guest OS. Before the emergence of Docker, if you wanted to do any virtualization, you used virtual machines, and in virtual machines, in addition to the host OS, there was a guest OS present. When you're using Docker, however, you don't need a guest OS. You have the cloud infrastructure, or any kind of infrastructure, with a host operating system installed on top of it, and on top of that you install your Docker engine, or container runtime. Once you have that, you can run Java containers, Python containers, or Node.js containers on top of it. Because no guest OS is needed, Docker containers are generally considered lightweight. Another advantage is that containers are all isolated from one another: if there is a problem with container 1, it will not impact containers 2 and 3.

Containers are isolated from each other, and the most important advantage is cloud neutrality. Once you have a container image, you can run it on your local machine, in your data center, or with any of the cloud providers: on AWS, Azure, or Google Cloud, for example. Each of the cloud providers offers a variety of options to run your containers. Now, suppose you have built container images for your microservices and are able to easily run containers from them. You have built containers for Microservices A, B, C, and D, and now you want to actually manage the deployment of these containers. You'd want to be able to auto-scale the number of container instances that are running based on the load. You might have requirements saying: I want ten instances of the Microservice A container, 15 instances of the Microservice B container, and so on. Just having containers is not sufficient. You need some kind of orchestration around these containers, and that's where container orchestration solutions come in. If you have heard about Kubernetes, it is the most popular open source container orchestration solution. When you're using these container orchestration solutions, you can say, "This is the container image for Microservice A, and I want ten instances of it," or "This is Microservice B; I want five instances, or 15 instances, of it," and the container orchestrator manages the deployment of these containers into clusters. Each of these clusters can have multiple servers, and these container orchestrators typically offer a number of features.

Auto scaling: you can say, "This is the container image for Microservice A, and I expect a lot of load on it, so I want to auto scale." Based on the number of requests coming into Microservice A, the container orchestrator can scale the number of instances of that specific container. Service discovery is a very important feature for microservices. You might have 10, 15, 20, or 100 microservices, and you don't want to hard-code the URLs of each microservice in the other microservices. That's where the concept of service discovery comes into play: each microservice can ask the container orchestrator for the location of other microservices, so you don't need to hard-code the URLs. As soon as we start talking about multiple containers, we also need to talk about load balancing. Once I have multiple containers, I want to distribute the load among them, and container orchestrators provide load balancing as well. You also want resilience: if one of the instances of a microservice is not working properly, you'd want the container orchestrator to identify that and replace it with a working instance. That's where you can configure health checks; the container orchestrator can execute frequent health checks and replace failing instances. This is also called self-healing.
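To make health checks and service discovery a little more concrete, here is a sketch of what a microservice might expose to an orchestrator. The /health endpoint and the review-service hostname are illustrative, not part of any particular product; in Kubernetes, for example, other services are typically reached by their DNS names rather than hard-coded URLs.

```python
import requests
from flask import Flask

app = Flask(__name__)

@app.route("/health")
def health():
    # The orchestrator probes this endpoint frequently; if it starts
    # failing, the instance is replaced with a healthy one (self-healing).
    return {"status": "UP"}

@app.route("/reviews/<movie_id>")
def reviews(movie_id: str):
    # Service discovery: reach another microservice by its service name,
    # resolved by the orchestrator, instead of a hard-coded URL.
    response = requests.get(f"http://review-service:8080/reviews/{movie_id}")
    return response.json()

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```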

Not only that: you also want zero downtime during deployments. You might want to go from version 1 to version 2 of Microservice A without any downtime, and container orchestrators provide a number of strategies to release new versions of your software without downtime. Kubernetes is the most popular container orchestration tool, and all the cloud providers offer managed Kubernetes services: EKS, the Elastic Kubernetes Service, provided by AWS; AKS, the Azure Kubernetes Service, on Azure; and GKE, Google Kubernetes Engine, provided by GCP. In addition to the services around Kubernetes, cloud providers also provide other services around containers. AWS provides ECS (Elastic Container Service), and Google provides Cloud Run, which is a simple service to run simple containers. In this quick step, we got a high-level overview of containers, container orchestration, and some of the important container-as-a-service offerings across different cloud providers. We'll talk about this a little more as we go further in the course. I'm sure you're having an interesting time, and I'll see you in the next step.

4. Step 04 – Understanding Serverless

Welcome back. In this step, let's get started with serverless computing. You've probably heard this phrase before. So what is serverless? When do you go for serverless? What do we think about when we develop an application? In addition to selecting the language or framework with which to develop it, you would consider where to deploy the application: what type of operating system should I deploy my application onto?

How should I take care of scaling, availability, and all the non-functional features around my application? What if you didn't have to worry about servers, server configuration, scaling, or availability and could instead concentrate on your code and your application? That's serverless. It is critical to remember that serverless does not imply no servers. Even when you're using serverless, servers are used in the background; the only difference is that the servers are not visible to you. You don't have any visibility into the servers on which your code is running. Now, what is serverless for me? The important characteristics of serverless for me are, number one, that you don't worry about infrastructure: you have zero visibility into the infrastructure where your application is running, you don't know which server is being used to run your application, and you get flexible scaling and automated high availability for free. Another important characteristic of serverless for me is pay-per-use: ideally, if there are no requests to your service, you should pay zero cost.
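Here is a sketch of what "just focus on the code" looks like, assuming Google's open-source Functions Framework for Python; the function name and parameter are arbitrary.

```python
import functions_framework

@functions_framework.http
def hello(request):
    # No servers, scaling, or availability configuration in sight: the
    # platform runs zero, one, or many instances of this function based
    # on traffic, and billing is per invocation.
    name = request.args.get("name", "World")
    return f"Hello, {name}!"
```

Locally, you could exercise this with the framework's own CLI (functions-framework --target=hello); deployed to a FaaS offering, no server is ever visible to you.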

So serverless is all about allowing you to focus on code while the cloud-managed service handles everything required to scale your code to serve millions of requests. In addition, when you're using serverless, you pay for requests and not for servers. You pay for the number of invocations of your function or application, not for how many servers are used to deploy it. Now, what are examples of serverless services? All of the Function-as-a-Service offerings are good examples: AWS Lambda in AWS, Azure Functions in Azure, and Cloud Functions in Google Cloud Platform. In this step, we got a 10,000-foot overview of serverless. We'll talk more about serverless as we go further in the course. I'm sure you're having an interesting time, and I'll see you in the next step.

5. Step 05 – Getting my perspective on Serverless

Welcome back. In the previous step, we got a 10,000-foot overview of serverless. In this step, let's get a little bit of perspective on the terminology around serverless. There is specific terminology that cloud providers use, and there are specific expectations we typically have when we call something serverless, so there is a little bit of ambiguity here. That's what we will be discussing in this specific step. As we discussed earlier, the important features of serverless for me are: zero worry about infrastructure, scaling, and availability, which is what I call number one. Zero invocations should mean zero cost: can you scale down to zero instances if there is no load on your application, and automatically scale up once there is some load? That's number two. Number three is very, very important: pay for invocations, not for instances, nodes, or servers. Now, what is the difference between these?

When we pay for instances, nodes, or servers, we are paying for infrastructure. When we pay for invocations, however, we are looking at it from the perspective of an application or a function: you are paying for the number of calls to that application or function. So I would call serverless level one "one plus two" and serverless level two "one plus two plus three." Typically, when I refer to serverless, I am referring to level two, which has all three features. However, when cloud providers refer to serverless, they also include managed services that are at level one. So when cloud providers say serverless, they include managed services at both level one and level two, and there are certain services where you pay for instances, nodes, or servers, and yet the cloud providers refer to them as serverless. Let's take a few examples. Consider Google App Engine: Google calls App Engine a fully managed serverless platform. Or take AWS Fargate: AWS refers to it as a serverless compute engine for containers.

With both of these services, the great thing is that you don't need to worry about infrastructure, scaling, or availability. You can just configure them, and the service takes care of it for you. You can also scale down to zero instances when there is no load. So features one and two are provided, but you'd pay for the number and type of instances that are running. There are also a number of managed services at level two. Google Cloud Functions, AWS Lambda, and Azure Functions are great examples of complete serverless services where you get all three features. You don't need to worry about infrastructure, scaling, and availability; they can scale down to zero instances, so you pay nothing if there are no invocations at all; and you pay for the number of invocations. If you call a Lambda function a thousand times, you pay for a thousand invocations. The idea behind this step is to give you a high-level, 10,000-foot overview of this ambiguity around serverless. We'll come back to this again as we talk about a number of serverless services later in the course. I'm sure you're having a wonderful time, and I'll see you in the next step.

6. Step 06 – Exploring Google Cloud Platform GCP Compute Services

Welcome back. In this step, let's get a 10,000-foot overview of the different managed services for compute that are offered by GCP, the Google Cloud Platform. We already talked about Compute Engine earlier. It is used to create virtual machines that can scale globally. And where does it fit in? It fits into the infrastructure-as-a-service category.

If you use Compute Engine, you are responsible for selecting an operating system, installing the application runtime, and installing your application, database, and other components. So Compute Engine is infrastructure as a service. The next managed service is Google Kubernetes Engine. Earlier, we talked about container orchestration, and Kubernetes is the most popular container orchestration tool.

Google Kubernetes Engine, or GKE, is Google Cloud's managed Kubernetes service. One important thing about Kubernetes is that when you're using it, you need a cluster: you create a cluster, the cluster contains a number of nodes, or instances, and you deploy your microservices into the cluster using Kubernetes. GKE gives you advanced cluster configuration and monitoring. So Google Kubernetes Engine is a container-as-a-service offering from Google Cloud.
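As a small illustration of this cluster-centric model, here is a sketch using the official Kubernetes Python client. It assumes your kubeconfig already points at a cluster, for example one set up with gcloud container clusters get-credentials.

```python
from kubernetes import client, config

# Assumes kubeconfig already points at a cluster, for example one set up
# with "gcloud container clusters get-credentials".
config.load_kube_config()

v1 = client.CoreV1Api()

# The cluster is a set of nodes (instances) that GKE manages for us.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

# Our microservices run as pods scheduled across those nodes.
for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name)
```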

The next important compute service is App Engine. If you have a Java, Python, or Node.js application and want to easily deploy it to GCP without even worrying about creating a container, you can go with App Engine. You can build highly scalable applications on a fully managed platform using open and familiar languages and tools. App Engine fits into the platform-as-a-service category. However, if you have a container, you can also deploy simple containers to App Engine. It does not provide complex container orchestration features like Kubernetes, but it does provide a few simple features to run your containers. So App Engine provides a few container-as-a-service characteristics, and you'd also see App Engine being referred to as a serverless platform by Google, because if there is no load, App Engine Standard can drop to zero instances. For example, if you have a Java application with an occasional load, where you get a few requests, it is idle for a while, and then you get a few requests again, you can go for App Engine Standard, because when you don't have requests, App Engine Standard can go down to zero instances. The next important managed service is Cloud Functions. This is an example of Function as a Service, and it is a true serverless offering from Google Cloud Platform. You can build simple event-driven applications using the functions that you create with Cloud Functions.
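For instance, reacting to a file upload might look like the following sketch, which assumes a CloudEvent-style function built with the Functions Framework for Python and wired to a Cloud Storage trigger; the field names follow the storage event payload.

```python
import functions_framework

@functions_framework.cloud_event
def on_upload(cloud_event):
    # Runs as soon as an object is finalized in a Cloud Storage bucket;
    # the event payload carries the bucket and object names.
    data = cloud_event.data
    print(f"New object {data['name']} uploaded to bucket {data['bucket']}")
```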

As the sketch above suggests, if you want to do something right away, as soon as a message arrives on a queue, you can create a cloud function that listens to the queue and reacts to it; if you want to do something as soon as an object is uploaded into Cloud Storage, you can do that with a cloud function as well. The next service is a relatively new offering from Google Cloud Platform: Cloud Run. Cloud Run can be used to develop and deploy highly scalable containerized applications, so Cloud Run is also a container-as-a-service offering. The difference between Kubernetes and Cloud Run, however, is that Cloud Run does not need a cluster. Kubernetes is recommended for complex microservices architectures, while Cloud Run is recommended for much simpler architectures. So if you have simple container applications that you want to run, you can try Cloud Run. In this step, we got a 10,000-foot overview of the different compute managed services offered by GCP. I'm sure you're having an interesting time, and I'll see you in the next step.
