Pass Google Professional Cloud Network Engineer Exam in First Attempt Easily
- Premium File 153 Questions & Answers
Last Update: Jun 6, 2023
- Training Course 57 Lectures
- Study Guide 500 Pages
1. Google Cloud: Network Engineer Certification
This is a high-level course outline for the Professional Cloud Network Engineer certification on Google Cloud Platform — what they expect you to understand in order to clear the exam. First, designing, planning, and prototyping a GCP network: this is where you should know the bits and pieces of how you create a network inside Google Cloud Platform. Then, how to implement a VPC (Virtual Private Cloud), which you can think of as a private data center inside the cloud — how you isolate your cloud resources from the outside world. You configure different network services like DNS or CDN. You should know how to implement hybrid connectivity, which means connecting your data center with Google Cloud Platform. And lastly, network security — how it improves or impacts your ability to do certain tasks on the network.

Let's get into the details of what they expect you to understand, section by section. The first section is designing, planning, and prototyping a GCP network. In that section they expect you to understand designing the overall network architecture, designing a VPC, designing a hybrid network, and designing a container IP addressing plan for Google Kubernetes Engine. You can think of this as the overview syllabus they expect you to understand. If I get into the details — this is the home page for the Google Professional Cloud Network Engineer certification; registration is not currently open as of March 8th. Looking at the outline in detail: in the first section you should know designing the overall network architecture, failover options for high availability, and so on — all basic concepts around networking. The second is the VPC, where you should know how to configure a VPC and what peering, routes, firewalls, and so on mean. The third is hybrid connectivity, where you connect your own data center with the Google Cloud platform.
That uses Interconnect, peering, IPsec VPN, and Cloud Router — their purposes, failover and disaster recovery strategy, standalone versus shared VPC interconnect, cross-organization access, and bandwidth. And lastly, designing a container IP addressing plan for Google Kubernetes Engine. Why does this make such a big difference? Because the way Kubernetes — the container engine — does its IP addressing is totally different from virtual machine addressing, so we cover it as a special section in this first part. That's the first section, where we get an overview of all the bits and pieces involved in the network.

The second section is implementing a GCP Virtual Private Cloud (VPC). This is where you actually do VPC configuration: configuring routing, configuring and maintaining Google Kubernetes Engine clusters, and configuring and managing firewall rules — all of it related to VPC. The majority of the theory is covered in the earlier section; this part is the actual implementation and is more demo-oriented. We'll go ahead and configure VPC peering and so on. Then, how you configure routing — static versus dynamic — and configuring a Google Kubernetes Engine cluster. I will give you a high-level overview of Kubernetes Engine first, and then we'll get into how networking works in a Google Kubernetes cluster.

The third section is configuring network services. These are the additional network services that are available: load balancing, CDN, and DNS are the major ones, plus enabling other network services. Looking at the detailed syllabus, you configure different kinds of load balancers and learn the purpose of each. We'll get into a demo section as well, where we will configure an HTTP(S) load balancer.
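Before moving on, a quick note on the container IP addressing point from earlier: in GKE, each node is carved a fixed slice of the cluster's pod range, which is why IP planning matters more than for plain VMs. Here is a minimal sketch using Python's standard `ipaddress` module, assuming the common GKE defaults of a /14 pod range and a /24 slice per node — verify these against your own cluster's configuration:

```python
import ipaddress

# Assumed values: a /14 secondary range for pods and a /24 per node,
# which are common GKE defaults -- check your cluster's actual settings.
pod_range = ipaddress.ip_network("10.4.0.0/14")
per_node_prefix = 24

# Each node consumes one /24, so the node count is bounded by how many
# /24 subnets fit inside the /14.
max_nodes = 2 ** (per_node_prefix - pod_range.prefixlen)
print(max_nodes)  # 1024

# A /24 holds 256 addresses; GKE allocates roughly twice as many pod IPs
# as a node can run pods, which is why a /24 comfortably covers the
# default of 110 pods per node.
addresses_per_node = 2 ** (32 - per_node_prefix)
print(addresses_per_node)  # 256
```

The takeaway: shrinking the per-node range to pack in more nodes directly cuts the pods each node can run, so the two knobs have to be planned together.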
We'll understand how intelligent routing works inside the load balancer, how session affinity works, and how capacity scaling works. The second is CDN, where you implement a content delivery network. It is a simple configuration, but we will get into a demo and try to understand the overall foundation of CDN — how it works, how you create cache keys, how you do cache invalidation, and how you generate signed URLs. The next one is DNS, which comes with a 100% uptime SLA. We will have a demo where we configure different zones and records, and look at security and global serving with anycast IPs. We will also go into the limitations of Cloud DNS and where you use internal DNS. The last part of this section is enabling other networking services, which covers health checks (already part of the load balancer, but discussed here in the context of instance groups), canary and A/B releases, distributing backend instances using regional managed instance groups, and enabling private API access. That's section three — you can think of it as all the miscellaneous network services beyond hybrid connectivity and VPC.

The next section is implementing hybrid connectivity. This is where you connect your on-premises data center with Google Cloud Platform: configuring Interconnect, configuring site-to-site IPsec VPN, and configuring Cloud Router for reliability. The next one is network security. That is where we will look at IAM — how you configure it, with some examples — then Cloud Armor and how you can protect your load balancer from outside attacks, configuring third-party device insertion into a VPC using multi-NIC instances, and managing keys for SSH access. The sixth section is managing and monitoring network operations.
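On the CDN signed URLs mentioned above: the general scheme Cloud CDN uses is an HMAC-SHA1 signature over the URL (including `Expires` and `KeyName` query parameters), base64url-encoded. Below is a rough sketch of that scheme — the key name and key bytes are made-up placeholders, and you should confirm the exact format against the Cloud CDN documentation before relying on it:

```python
import base64
import hashlib
import hmac
import time

def sign_url(url, key_name, key, ttl_seconds, now=None):
    """Append Expires/KeyName and a base64url HMAC-SHA1 signature.

    Sketch of the Cloud CDN signed-URL scheme; key_name/key are
    placeholders, not real credentials.
    """
    expires = (int(time.time()) if now is None else now) + ttl_seconds
    sep = "&" if "?" in url else "?"
    to_sign = "%s%sExpires=%d&KeyName=%s" % (url, sep, expires, key_name)
    digest = hmac.new(key, to_sign.encode(), hashlib.sha1).digest()
    return to_sign + "&Signature=" + base64.urlsafe_b64encode(digest).decode()

signed = sign_url("https://cdn.example.com/photo.jpg", "demo-key",
                  b"0123456789abcdef", ttl_seconds=3600, now=1700000000)
print(signed)
```

Anyone holding the signed URL can fetch the object until `Expires` passes; the CDN recomputes the HMAC with its copy of the key and rejects any URL whose signature or expiry has been tampered with.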
That's where we get into the details of the Stackdriver offering: managing and maintaining security (for example, firewall rules and routers), maintaining and troubleshooting connectivity issues, and monitoring, maintaining, and troubleshooting latency and traffic flow using different components. Under managing and monitoring network operations, the first item is logging and monitoring with Stackdriver — how you connect Stackdriver to your cloud resources and start monitoring and logging the network. Then we get into firewall diagnostics and reviewing IAM rules. Maintaining and troubleshooting connectivity issues includes every bit of your connections: identifying traffic flow and topology, draining and redirecting traffic flows, cross-connect handover for Interconnect, monitoring ingress and egress traffic flows, and monitoring firewall logs. And lastly, monitoring, maintaining, and troubleshooting latency issues: network throughput and latency testing, router issues, and tracing the traffic flow — you can also use Stackdriver Trace for your application latency.

The last section is optimizing your network resources, which covers traffic flow optimization as well as cost and efficiency. For traffic flow, that means load balancers and CDN locations, global versus regional routing, expanding subnet ranges, and accommodating workload increases with auto scaling or manual scaling. Cost efficiency means controlling cost using the appropriate network tiers, Cloud CDN, or the autoscaler; you can use automation, and you can choose between VPN and Interconnect depending on the use case. We are going to see a table there to identify which one is the most cost-effective solution for your use case and for the bandwidth utilization of your resources.
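On the "expanding subnet ranges" item above: GCP lets you widen a subnet's primary range in place (the network address stays the same and the prefix gets shorter), which is what `gcloud compute networks subnets expand-ip-range` does. The arithmetic behind it can be sketched with Python's `ipaddress` module — the ranges below are hypothetical examples:

```python
import ipaddress

# Hypothetical subnet being expanded from /24 to /20. The network
# address is unchanged; only the prefix length shrinks, so existing
# addresses remain valid.
current = ipaddress.ip_network("10.128.0.0/24")
expanded = current.supernet(new_prefix=20)

print(expanded)                     # 10.128.0.0/20
print(current.subnet_of(expanded))  # True: old addresses stay inside
print(expanded.num_addresses - current.num_addresses)  # 3840 new addresses
```

This is why expansion is a safe, non-disruptive operation: every address already in use in the /24 is still inside the /20, so running VMs keep their IPs.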
What we are going to do is a regrouping. Even though the syllabus, if you look at it, spreads the same concepts — the same service — across multiple sections, we will still cover everything, but I'm going to consolidate the theory together. First, setting up IAM, which includes going through the interfaces — console, SDK — permissions, and all that. Second, going through the foundations: understanding at a high level what the compute, database, and storage services are. Third, the network service itself, which is the VPC — your private network on Google Cloud Platform. And last, the interconnect options — how you connect your on-premises environment with Google Cloud Platform. This is how I'm going to restructure it. We are going to cover theory as well as practice — the majority of the practicals sit in those respective sections — but I'm going to give you a complete understanding of the syllabus in this course itself. We are going to see the different options for the same thing — for example, the connectivity between your premises and GCP and what the available options are. We'll learn the basic concepts behind those options, run through the console as well as the CLI whenever required, and have some samples so that you can try them yourself.

So that's all about the syllabus of the Professional Cloud Network Engineer certification. You can go to this particular link to understand more about the exam syllabus. That's it, guys. If you have any questions on the syllabus, let me know. Otherwise, you can move to the next lecture. Thanks.
2. Google Cloud Platform Introduction Part 1
Hello and welcome to this lecture, which is part of a series of Google Cloud Platform certification training lectures for cloud architects, cloud designers, cloud developers, and system operators. We will talk about the platform overview, and I believe it will get you sufficiently excited about Google Cloud Platform. So let's get started. This is the first of two platform-overview lectures. In this one, we will talk about Google data centers, PoP (point of presence) locations, the network backbone, regions and zones, the services offered, and how you interact with GCP. Why do we talk about this? Because Google Cloud Platform, as you know, is a public cloud offering, so it helps to understand what the network backbone behind it is.

First, let's talk about Google data centers. Google data centers are somewhat different from other data centers in the world right now, and these are some of the highlights that set them apart. First, renewable energy: Google has signed long-term contracts with renewable energy companies to power their data centers from renewable sources. They also have considerable experience in data center operations. Using their machine learning experience, they learned that raising the internal temperature up to around 70°F causes no problem — hardware performance goes up, not down, and hardware lifetime remains good — and at 70°F and above you no longer need heavy air cooling inside; you can cool the hardware with outside air, and that's what they do. They also build custom servers — they don't just procure servers from the market. They are not a hardware manufacturer, but they purchase CPUs, RAM, and whatever else is required to build a server.
They know what type of servers they need for their own data centers, and based on their research they build their own custom servers — in fact, they build their own racks and the way they organize the data centers, and they build their own network infrastructure. That's how they take full advantage of the efficiency of those servers and the network. Then there's data security, one of the most prominent aspects enterprises typically look for: where the data is stored, what about encryption, and whether someone who gets access to that data can read or decrypt it. Encryption for data at rest and data in transit — both — is very important for any enterprise, and that's what Google provides, along with physical data center security. There are some data center locations that people don't even know exist; Google sometimes chooses not to disclose a data center's location to anyone.

If you look at the map — as of today, January or February 2019 — there are a number of data centers across the world. These are the locations that currently exist, and they are continuously being enhanced: Google keeps adding data center locations across the globe, with Zurich, Osaka, and Jakarta among the additional data centers planned for 2019. They have divided the whole geography into multiple regions — 18 regions at this point. The services are accessible in 25 countries, through around 100-plus points of presence; we'll talk about that later. But this is the overall map of their data centers. Now let's talk about points of presence.
A point of presence is a location at which a connection is available — where you can connect to Google's fiber-optic network. If you look here, the blue lines are Google's own network; you can think of it as exclusively used for their existing services like YouTube, Google Search, and so on. At the same time, they have invested in shared submarine cables — partial ownership — while the blue ones are fully owned by Google. Looking at this network, you get a sense of the scale of their fiber-optic footprint: if services are hosted in one location, say the US, and they are accessed from Asia or Europe, the traffic travels over Google's own fiber to the nearest point of presence. The connected dots on the map are the fiber-optic connections, and these locations are the PoPs where any customer can connect to the Google cloud platform.

Some of these locations also support CDN, and this is another part of the network, where caching of customer data occurs. The concept of a CDN is basically this: if you have images or static content that doesn't change frequently and is served from your backend, rather than hitting the backend on every request, you have that content cached nearer to your customer. If your server is in the US and the user is in India or elsewhere in Asia, the request does not go all the way to the backend to fetch, say, a profile photo every time — it is served from the location nearest to the user. That's how a CDN works, and Google has support for around 80-plus CDN locations across the globe.
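The caching behaviour just described can be sketched as a tiny TTL cache — purely illustrative of the idea, not how Cloud CDN is actually implemented:

```python
import time

class TTLCache:
    """Toy edge cache: serve from cache while fresh, else fetch from origin."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}       # key -> (value, fetched_at)
        self.origin_hits = 0  # how often we had to go to the backend

    def get(self, key, fetch_from_origin, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(key)
        if entry and now - entry[1] < self.ttl:
            return entry[0]        # cache hit: served near the user
        self.origin_hits += 1      # cache miss: go to the backend
        value = fetch_from_origin(key)
        self.store[key] = (value, now)
        return value

    def invalidate(self, key):
        self.store.pop(key, None)  # like a CDN cache invalidation

cache = TTLCache(ttl_seconds=60)
cache.get("/profile.jpg", lambda k: "photo-bytes", now=0)   # miss -> origin
cache.get("/profile.jpg", lambda k: "photo-bytes", now=30)  # hit -> cached
print(cache.origin_hits)  # 1
```

The two fetches hit the origin only once; `invalidate` is the equivalent of the cache invalidation operation discussed in the CDN section.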
And they have partners — it's not only Google providing CDN; they have partners like Equinix and other companies providing CDN locations, which takes the footprint to 500 or 600-plus locations.

Regions and zones. As I said, they have divided the complete geography into multiple regions. A region is an independent geographic area that consists of zones. The main purpose of a region is to host your application near your users — it is about latency and availability. A region is an independent unit, you could say, of the entire geography of the world, and if you are hosting services in a particular region, they should be redundant enough that if one location goes down, you can switch to another location while your traffic remains within that geographic area. That's where the concept of a zone comes in: zones are independent physical locations within a region. The region is a boundary defined so that your services are load balanced and high availability is maintained within it. There is a list of the regions and zones as of January 2019; zones are named after their region, for example asia-east1-a within asia-east1.

Resources — like your virtual machines, database instances, or IP addresses — are network and compute components, and these components are either global in nature, meaning accessible and available across all regions, or regional, meaning specific to one region. There are also zonal resources, like the actual virtual machine that sits inside a physical server in one zone. If you look at a static IP address, that address is specific to a region — it is not a global resource. And then you have the network: the complete virtual network.
If you create a virtual private cloud network in GCP, it spans multiple regions, so it is a global resource. Disk images are global resources, because an image is something you can access from anywhere, and a snapshot, wherever you store it, is also global. Firewall rules, which are defined on the network, are global as well. So each resource is identified as either a global, regional, or zonal resource.

GCP services at a high level: they offer services from infrastructure as a service through software as a service. Let's talk about what each means. Infrastructure as a service is the traditional data center model, where you have servers, machines, CPUs, network interfaces, and everything, and you manage the software and platform on top. A virtual machine is essentially a computer in the cloud, and you can think of it as infrastructure as a service. If you provision a disk to store your backups, that disk is infrastructure. If you want 32 CPUs plus 128 GB of RAM in a particular virtual machine, that is also infrastructure as a service — you provision those resources and control their usage, the disk and everything. As you go from left to right, your operational overhead gets reduced. With platform as a service, you have a managed platform that you can provision, and you just deploy your applications onto it. In that case you don't manage the actual hardware, but you do manage the instance or the cluster — not the physical hardware, but what runs on top of it.
Google takes care of managing the actual hardware for the cluster, but you manage the performance of the cluster and provision the cluster resources. If you go toward software as a service, you move to a more and more serverless environment: you don't even care about provisioning clusters or the resources your applications run on. You typically just push your code or application to the platform, and the platform runs it. You are not worried about cluster sizes, how many virtual machines you are using, the CPUs, and all that — you don't care about it. You just use the resources and handle application-level access permissions. Moving from left to right is moving toward a no-ops kind of operation. On the left, you are managing many resources, and that's where operations come in. System operations sits between infrastructure as a service and platform as a service, because it offers major benefits over owning your own data center: the resources are available, and you are just connecting them and managing operations around them. From there up to no-ops is the public cloud environment — DevOps means utilizing the existing cluster management and provisioning resources as required. Ultimately, no-ops means something like App Engine: you simply push your code and the application starts running; you don't even know what resources are running underneath — it scales based on the traffic or the consumption of hardware resources by your services.

So, in a nutshell, we are looking at different kinds of resources. If you look at what a typical IT organization uses, they use compute resources — typically a virtual machine.
The physical server is your compute resource — it does the manipulation and calculation, accesses the database, and serves your customers. The second resource is storage; compute and storage are typically the two core resources a business needs, and on top of them sit many other services, which we'll talk about. The serving is handled by the compute services, but for the data or the compute resource to be accessible to users, you need connectivity — and control over that connectivity — and that's where Google's networking services come into play, with multiple options to choose from. On top of these three core services, you need identity and access management, so that resources are accessible to the people who need access and to no one else. There are also the big data services — another strength of Google Cloud Platform; they are very innovative in developing data solutions and big data tooling, and they are really good at machine learning; we'll see more about how you can use it. These are the core services you can think of enterprises otherwise running in their own data centers — installing the hardware and the software themselves.

Besides that, there are management tools available for anyone to use — the Stackdriver offering, for example: monitoring of IT and application resources, logging, error reporting, and tracing — as well as developer tools. With all that infrastructure available, how will you use the APIs inside your applications? That's where the developer tools come into play, helping any enterprise build services or use Google services inside their applications. Besides that, they have come up with another service offering.
Google Cloud Endpoints was the only API management offering they had earlier. We'll talk about what API management is and how it benefits customers. Last year they acquired a company called Apigee, which was very prominent in API management; using Apigee you can do API management, monetization, analytics, and more. This isn't a marketing pitch, but we'll look at what it is. And if you are moving to a public cloud environment for the first time, or you are keeping some services in your own data center and connecting to Google for elasticity, there are data transfer services they have created — the Cloud Storage transfer service and the BigQuery Data Transfer Service. I had not seen these two years back, but they are available now for customers to use.

That's it, guys, for this particular lecture. We will get into more detail on the resource hierarchy, projects, quotas, infrastructure services, the different types of accounts, and pricing in the next lecture. Thank you.
3. Google Cloud Platform Introduction Part 2
Part Two: Google Platform Overview. In this lecture, we will look at resources and their hierarchy, projects, quotas and limits, infrastructure services, GCP accounts (there are different types of accounts — what are they?), and pricing. A resource is any component: a virtual machine is a resource, and so is the disk attached to that virtual machine, or the network in which the virtual machine is hosted. Your firewalls are resources. All of these are resources in GCP, and they are organised in a hierarchy. What is that hierarchy? Typically, if you look at any organization, the company has multiple departments, and each department has products — product one, product two — but it could be anything: whether it is a team within a department or the department itself that requires the IT resources. You can mix and match and create any hierarchy. The organisation and folders are maintained as part of G Suite — we'll talk about that in subsequent lectures — while projects and everything below them are controlled in Google Cloud Platform. If you already have an organisation built in G Suite, you can use it, but it is not mandatory; you can have individual accounts for different projects and departments, each maintaining their own resources. There are some restrictions, and we'll talk about how you can share the network and so on. But typically your resources are allocated to a project, and through the project you provision resources — the project is the container for your cloud resources. The hierarchy is how you give access to different teams or different applications.
That is built out of identity and access management. You also decide where billing happens: whether you bill at the project level or create one billing account covering four or five departments together — you can define that, and the roll-up for billing happens from the bottom up. Identity and access management, on the other hand, flows from top to bottom: if a person has access at the project level, that person has access to all resources inside the project, though there are fine-grained access controls you can enable per person. So IAM is top-down, and billing rolls up bottom-up from wherever you define it.

There are policies in IAM — we'll talk about them later — but a policy is a set of rules applied to members over resources, and a resource inherits policies from its parent. So all resources inherit permissions from the parent project, and the effective policy is the union of the parent's policy and the resource's own policy. For example, if one person has access to App Engine, the effective permissions are the union of what the project grants plus the roles granted on the resource itself. A less restrictive parent policy overrides a more restrictive resource policy: if a person has a "create resources" permission at the project level — just as an example — a more restrictive policy on an individual resource will not take it away, though you can still control whether they can create App Engine resources specifically. That's how the IAM hierarchy works. For billing, the organisation is the full container of all resources; billing typically happens at the project level, not the product level, but you can create one billing account that aggregates the payments for multiple projects.

Now, IAM roles.
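Before moving to role examples, the "union of parent and resource policy" rule just described can be sketched with plain sets — the member and permission names below are made up for illustration:

```python
# Effective IAM permissions = union of what the parent (project) grants
# and what the resource itself grants. A more restrictive resource
# policy cannot subtract permissions granted higher up the hierarchy.
project_policy = {
    "alice@example.com": {"compute.instances.list", "compute.instances.create"},
}
resource_policy = {
    "alice@example.com": {"compute.instances.start"},
}

def effective_permissions(member):
    """Union of the parent policy and the resource's own policy."""
    return (project_policy.get(member, set())
            | resource_policy.get(member, set()))

perms = effective_permissions("alice@example.com")
print(sorted(perms))
```

Note what this implies: to actually take a permission away from someone, you have to remove it at the level where it was granted — narrowing the policy further down the tree has no effect.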
As we said, it's top to bottom. To elaborate with examples: an organisation admin has full access to all resources; an organisation viewer has view access to all projects; a folder admin can create and manage folders; a folder viewer can only view folders; a project creator can create projects; and then there are individual resource-level roles. We saw that resources are zonal, regional, or global: typically instances and disks are zonal, external IPs are regional, and images, snapshots, and networks are global. Billing and reporting for all of them happen at the project level. GCP Resource Manager is the management service that maintains this hierarchy: it centrally manages and tracks all your projects, manages IAM across your organisation, manages the organisation and organisation policies, creates and manages Cloud IAM policies, controls Cloud Console and IAM access, and manages cloud folders.

Service accounts. You have two types of accounts in the Google cloud platform, and we'll talk about service accounts later in this section, but at a high level a service account is used for application access. For example, an application in a virtual machine wants to access GCP resources — say, read images from or write images to Cloud Storage. We create a service account that is used by the applications inside the virtual machine, and that account has access to Cloud Storage. This is fine-grained: you can define exactly what that particular service account can and can't do.
This also isolates application access from user-level permissions. If an employee leaves the organization, or moves from one department to another within it, you don't want to go and transfer all those permissions to a new employee — that's additional work. Instead, if you use service accounts for all application-to-application communication, based on a secure token, you don't have to care whether an employee joins or leaves the organisation. There are three types of service accounts, in a nutshell: you can create your own service accounts, you can use the built-in service accounts for virtual machines and App Engine, and there are accounts Google APIs use internally.

GCP projects. As we already know, a project is a container for all of your resources, and the project handles billing for all the resources within it. In a nutshell, a project tracks resource and quota usage, carries the billing for those resources, manages permissions and credentials, and enables or disables APIs and services. A project has three identifying attributes: the project name, the project ID, and the project number.

Quotas and limits. You don't want to get a cloud bill every month where — hypothetically — your revenue is $100 and your cloud bill is $120, right? Because you can go and use as much as you want, you need to make sure your expenditure is controlled, and it helps that something is monitoring your usage. That's where you have quotas and limits on resources. Quotas typically control the budget: they limit your resource utilisation at different levels, and you can definitely increase a quota. If the organisation is big and you need more resources, you can request a quota increase. There are also limits enforced in GCP that you cannot increase — that's the difference between quotas and limits.
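The quota behaviour above can be sketched as a simple pre-flight check — the quota numbers are the lecture's examples, not current GCP values:

```python
# Per-project quota check before creating resources. Values mirror the
# lecture's examples (5 networks per project, 32 CPUs per region) and
# may differ from the quotas on a real project.
QUOTAS = {"networks_per_project": 5, "cpus_per_region": 32}
usage = {"networks_per_project": 4, "cpus_per_region": 24}

def can_create(kind, count=1):
    """True if creating `count` more resources stays within quota."""
    return usage[kind] + count <= QUOTAS[kind]

print(can_create("networks_per_project"))  # True: 4 + 1 <= 5
print(can_create("cpus_per_region", 16))   # False: 24 + 16 > 32
```

A quota increase request effectively raises a value in `QUOTAS`; a hard platform limit is one Google will not raise no matter what you request.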
So typically you can think of limits as platform limitations, while a quota is a constraint put forward as a recommendation by Google; you can still request for a quota to be increased, and Google will look at your use case and grant an additional allowance, i.e. increase the quota for you. Project quotas: typically, resources are subject to project quotas, such as how many resources you can create per project and how quickly you can make API requests in a project (a rate limit). Some quotas apply per region and per zone as well, and you can increase these too. As I said, examples of such quotas are five networks per project and 32 CPUs per region. However, these figures may have changed by the time you see this particular slide deck or training. GCP Infrastructure Management Services: whether you look at any cloud offering, your own on-premises data center, or data center services procured from some other service provider, you will need to organize these resources, you will need the network, and you will need some way of interfacing with your resources, right? That's where the infrastructure management services come into play. You have Resource Manager, which organizes your cloud resources into projects, folders, and ultimately individual resources. You have IAM, with which you can apply fine-grained access control across all the resources. You can also set up auditing and logging on IAM, so that you can see who is accessing which services. There are services provided under the Stackdriver umbrella for monitoring, logging, tracing, debugging, and error reporting. There are some CI/CD offerings to make your deployments easy: with Deployment Manager you can just set up a template and create your own infrastructure. And then you have storage and scaling, which are additional services.
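To make the quota idea concrete, here is a small sketch of how a per-project quota check behaves. The figures are the ones quoted in the lecture (five networks per project, 32 CPUs per region) and are hard-coded placeholders, not authoritative values:

```python
# Hypothetical quota figures from the lecture; real quotas vary per
# project and can be raised on request (limits, by contrast, cannot).
PROJECT_QUOTAS = {"networks_per_project": 5}
REGIONAL_QUOTAS = {"cpus_per_region": 32}

def can_create(resource, current_usage, requested, quotas):
    """Return True if the request stays within the quota for `resource`."""
    return current_usage + requested <= quotas[resource]

# A project that already has 4 networks can create 1 more, but not 2.
print(can_create("networks_per_project", 4, 1, PROJECT_QUOTAS))  # True
print(can_create("networks_per_project", 4, 2, PROJECT_QUOTAS))  # False
```

A quota increase simply raises the number in the table; an enforced platform limit is the point beyond which no increase is granted.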
Autoscaling actually makes services elastic: when virtual machine CPU usage exceeds a certain threshold, say 80% or 60%, you can configure a rule to spin up additional virtual machines so that your service SLA is not harmed. So these are the infrastructure services that are available on Google Cloud Platform, or for that matter on any other cloud platform. There are typically three ways to interact with Google Cloud Platform. The first is through the user interface, which means you can simply go to the GCP Console, log in, and start provisioning or viewing resources. The second is through the command line interface (CLI), which you can install on your computer, or you can go to Google Cloud Platform and spin up Cloud Shell to interface with the cloud. The third is the REST endpoints: each and every activity that you can perform using the CLI or the console, you can also perform with the REST APIs. You can use a simple curl command or Postman to call a REST API, and you can provision and monitor resources that way as well. So that's all around the different aspects of the Google Cloud Platform. Please let me know if you have any questions. Otherwise, we'll get into the next lecture, which covers the different certifications available on the Google Cloud Platform, so that we can concentrate on what we need to for this particular training. Thank you. Bye.
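As an illustration of the REST route, every console or CLI action maps to an endpoint URL. The sketch below only builds the Compute Engine instances-list URL for a hypothetical project and zone, without sending the request; an actual call would also need an OAuth access token in the Authorization header:

```python
# Base URL of the Compute Engine v1 REST API.
BASE = "https://compute.googleapis.com/compute/v1"

def instances_list_url(project, zone):
    """Build the REST endpoint for listing VM instances in one zone."""
    return f"{BASE}/projects/{project}/zones/{zone}/instances"

# Hypothetical project and zone, for illustration only.
url = instances_list_url("my-demo-project", "us-central1-a")
print(url)
# A real request would add an auth header, e.g. with curl:
#   curl -H "Authorization: Bearer $TOKEN" "<url>"
```

The same URL is what the console and the gcloud CLI ultimately call on your behalf.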
4. Subscribing for $300 Free Trial
In this lecture we are going to see how you can get the free trial and the $300 credit for Google Cloud Platform, so you can try different services while learning. If you go to cloud.google.com/free, you will see this particular UI, and you can find the $300 credit which will be assigned to your account. Once you enable it here, you will be taken to your console, or you just need to log in. Once you log in, you subscribe for the twelve-month credit and you will be able to use the $300. If you are using it for this particular course, or indeed any course, $300 is almost always sufficient. You just need to make sure that when your lab is done, you delete the instances and services, whatever you provisioned. If you have stored data on Google Cloud Storage, you need to delete that data as well; that's how you avoid being charged continuously. So you'll get $300 as a free credit, which you can use for any paid service on Google Cloud Platform. There are also always-free products, and you can explore those as well, like instance hours per day: you get 28 instance hours per day, so you can launch two different instances and use them for, say, 14 hours each, and it is going to be free. There is also 5 GB of Cloud Storage, shared memcache, search, and so on; these are different services, each with a free tier associated with it. Usage within these free tiers will not be counted against the $300. The $300 is used only for services which do not have a free tier, or once you have exhausted a free tier and are using more than it allows. In the majority of cases you will be charged up to, say, $100 in one year. But just keep in mind that this credit is for twelve months; after twelve months it will not be carried forward. So if you want to try it, use it within twelve months.
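The instance-hours arithmetic above can be sketched as follows. The 28-hours-per-day figure is the one quoted in the lecture; the actual free tier changes over time, so treat it as a placeholder:

```python
FREE_INSTANCE_HOURS_PER_DAY = 28  # figure quoted in the lecture

def billable_hours(instances, hours_each):
    """Instance-hours beyond the daily free allowance; these draw on the $300 credit."""
    used = instances * hours_each
    return max(0.0, used - FREE_INSTANCE_HOURS_PER_DAY)

print(billable_hours(2, 14))  # 0.0 -> two instances for 14 hours each stay free
print(billable_hours(2, 16))  # 4.0 -> 32 hours used, 4 of them billable
```

This is why deleting instances when a lab is done matters: hours keep accumulating against the allowance (and then against the credit) for as long as the instances run.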
But just keep in mind that these free tiers remain available to you even beyond your free-trial period. Okay, that's the $300 free trial subscription on Google Cloud Platform. If you face any challenges, there is a good amount of help available, but you can also let me know if you have any questions while subscribing. Otherwise, you can just move on to the next lecture.
Google Cloud Platform Interfaces
1. Google Cloud Platform Interfaces
In this section, let's go ahead and understand the GCP interfaces which are available for you to use. These interfaces determine how you interact with the Google Cloud Platform, and with which tools. The first one is the Google Cloud Console; this is the UI or front end, a web UI or mobile app UI, through which you can interact with the Google Cloud Platform. We are going to get into its details in this section, but that's the first one. The second one is the command line interface, where you use the gcloud command, the bq (BigQuery) command, or the gsutil command. Within the command line interface you have two different utilities you can use: the first one is the Cloud SDK and the second one is Cloud Shell. The third one is the API libraries; these are REST endpoints which you can call from a programming interface. You can include them in Python, Java, or any other programming language, or you can use Postman to hit a REST endpoint URL, or you can use the simple utility called curl to interface with the Google Cloud Platform. So this is typically the API route, primarily used inside your programming language. If you divide the whole landscape into a list: the first one is the Console, and the Console also has a mobile app, so that is another interface; there are pluses and minuses to each of those two. Then there is Cloud Shell, again a command line interface, and the SDK, also a command line interface, but this one gets installed on your laptop or computer, and we need to go through the installation steps for the SDK; we are going to do that in the next section. And finally the Cloud API. You don't typically interact with the Cloud API directly from your command line; you can do that with a utility, but that's not the main purpose of the Cloud API. So let's go ahead and understand the Cloud Console in the next chapter. Thanks.
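Because every interface ultimately performs the same operations, a script can also shell out to the Cloud SDK. The sketch below only assembles a hypothetical gcloud invocation as an argument list (the form subprocess.run expects) without executing it, since actually running it would require the SDK to be installed and authenticated:

```python
import shlex

def gcloud_cmd(service, resource, verb, **flags):
    """Assemble a gcloud CLI invocation as an argv list, e.g. for subprocess.run()."""
    cmd = ["gcloud", service, resource, verb]
    for name, value in flags.items():
        # Python keyword names use underscores; gcloud flags use hyphens.
        cmd.append(f"--{name.replace('_', '-')}={value}")
    return cmd

# Hypothetical project name, for illustration only.
cmd = gcloud_cmd("compute", "instances", "list", project="my-demo-project")
print(shlex.join(cmd))  # gcloud compute instances list --project=my-demo-project
```

The same listing could be done by clicking through the Console's VM instances page or by calling the corresponding REST endpoint; the three interfaces are different doors into the same operations.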
Google Professional Cloud Network Engineer Exam Dumps, Google Professional Cloud Network Engineer Practice Test Questions and Answers
Do you have questions about our Professional Cloud Network Engineer practice test questions and answers, or any of our products? If you are not clear about our Google Professional Cloud Network Engineer exam practice test questions, you can read the FAQ below.
Purchase Google Professional Cloud Network Engineer Exam Training Products Individually