MCPA - Level 1: MuleSoft Certified Platform Architect - Level 1 Certification Video Training Course Outline
Non-Functional Requirements of APIs
Designing Effective APIs
Implementing Effective APIs
Event Driven Architecture
Getting Production Ready
Monitoring and Analytics
MCPA - Level 1: MuleSoft Certified Platform Architect - Level 1 Certification Video Training Course Info
Gain in-depth knowledge for passing your exam with Exam-Labs MCPA - Level 1: MuleSoft Certified Platform Architect - Level 1 certification video training course. The most trusted and reliable name for studying and passing with VCE files which include Mulesoft MCPA - Level 1 practice test questions and answers, study guide and exam practice test questions. Unlike any other MCPA - Level 1: MuleSoft Certified Platform Architect - Level 1 video training course for your certification exam.
4. Ownership and Focus
Now that we understand the new IT operating model, let us see who should be the owners of what, and what each group should focus on. We have introduced different roles, such as Central IT, LoB (line of business) IT, and the app developers, so let us see who does what and where their focus lies. As we discussed in the previous section, in this IT operating model, Central IT creates reusable assets that are consumed by LoB IT, which in turn creates the business APIs for business delivery. In some of the previous videos, we also looked at API-led connectivity, which is a three-layered architecture with Experience APIs, Process APIs, and System APIs. So let us now see how to map the roles we learned in the IT operating model onto each of these layers, and how they fit in. What you see here is how each of the LoBs, Central IT, and the app developers in the organisation map to a layer in API-led connectivity. We will begin at the bottom, where there are the System APIs. Like we discussed before, System APIs are the APIs that actually unlock the key systems, which are the back-end systems, including legacy systems, data sources, and any other applications, and they decentralise and democratise access to company data. So access is no longer tightly coupled to the back end. These assets are created as part of the project delivery process, not as a separate exercise. Central IT will therefore focus on these particular systems and expose all the back-end functionality through the System API layer. Once that is done, next comes the Process API layer. In the Process API layer, it is LoB IT that reuses the underlying assets of the back-end systems via the System APIs and composes them into a particular orchestration. Like we discussed, the Process API layer is where the business orchestration resides.
And once the business orchestration is understood, they can start building the Process APIs. Because LoB IT is in charge of that line of business, they hold the business knowledge. So, once they have the design of how the flow should go, with assistance from the LoB business side to understand the orchestration, they make use of the System APIs and create the Process APIs. And finally, what is left is the Experience API layer. Now, all app developers can discover and self-serve on the business APIs that are available via the Process API layer, and create the Experience APIs on top of them. They could be mobile app developers, web app developers, or whoever belongs to that particular LoB; those app developers take the Process APIs and write their own Experience APIs. I hope you got that. One last thing I want to explain: please do not think of API-led connectivity as just this three-layered architecture of Experience, Process, and System APIs. That is not all API-led connectivity is. It is not an architecture of three layers by itself; it is an approach that increases the benefits of working this way, like what we just discussed. It is an approach to achieving this particular IT operating model via the Experience, Process, and System APIs, and that whole approach is part of the definition of API-led connectivity. It is also not just about technology. It is not the case that if we implement new software and a new architecture, we have API-led connectivity. No, it also means organising the people and the processes to make delivery efficient within the organisation. So, as you can see, we are not just talking about technology here; we are talking about the SDLC and the processes in the organisation as well.
Because if this were purely a technical topic, why would we discuss alignment within the organisation, who should be working on what, and how the teams should be organised? Right? So this too is part of API-led connectivity, or the application network architecture. It is not merely a technical term; it is about aligning the entire organisation in the right way and building the assets in the right manner, so that delivery within the organisation is as efficient as possible. The APIs depicted in those tiers, as you saw with Experience, Process, and System, are the building blocks that encapsulate the connectivity, the business logic, and the interface with which others interact. These are all encapsulated in the three layers. These building blocks are productised, fully tested, automated, governed, and fully managed, with policies applied in API Manager. And that is how you make them discoverable, get the most out of them, and make your APIs very powerful. I hope you understand the concept. If you do not, then please, like I said in the previous part, post your questions in the Q&A section so that I can answer them and others can benefit as well. Happy learning.
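To make the layering concrete, here is a minimal sketch in Python. This is not MuleSoft code, and every function and field name is invented for illustration; it only shows the calling discipline: an Experience API calls a Process API, which orchestrates System APIs, and each layer only reaches down one level.

```python
# Hypothetical three-layer composition: each layer only calls the layer below.

# --- System API layer (Central IT): unlocks back-end systems ---
def system_get_customer(customer_id: int) -> dict:
    # In reality this would wrap a CRM or database lookup.
    return {"id": customer_id, "name": "Ada"}

def system_get_orders(customer_id: int) -> list:
    # In reality this would wrap an ERP/order-system call.
    return [{"order_id": 1, "total": 40.0}, {"order_id": 2, "total": 60.0}]

# --- Process API layer (LoB IT): business orchestration over System APIs ---
def process_customer_summary(customer_id: int) -> dict:
    customer = system_get_customer(customer_id)
    orders = system_get_orders(customer_id)
    return {
        "customer": customer["name"],
        "order_count": len(orders),
        "lifetime_value": sum(o["total"] for o in orders),
    }

# --- Experience API layer (app developers): tailor the data for one channel ---
def experience_mobile_summary(customer_id: int) -> dict:
    summary = process_customer_summary(customer_id)
    # The mobile app only needs a compact payload.
    return {"name": summary["customer"], "ltv": summary["lifetime_value"]}

print(experience_mobile_summary(42))
```

Note how the mobile developer never touches the back-end systems directly; a web team could add its own Experience function over the same Process API without any new back-end work.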
5. Platform Capabilities
In this video, let us have a look at some of the important delivery capabilities for the new IT operating model. To deliver API-led connectivity effectively, an organisation has to have certain important capabilities, and these are provided by the Anypoint Platform. So let's have a look at them. The first one is API design and development. This is about the design of the APIs and, once designed, the development of the API clients and the API implementations. Now that you are familiar with the API terminology: when we say API clients and API implementations, we are talking about two parties. The API clients are the caller-side programs, pieces of code, or any client that actually calls the API we provide. The API implementation is the other piece, where the functionality of the back-end system is exposed. I am sure you are well aware of this terminology by now. So, API design and development is one of the capabilities provided by the Anypoint Platform. The second one is API runtime execution and hosting. What is this? Here we are talking about the place where the designed and developed APIs are actually hosted and executed. In the first part we designed the API and developed it; fine. Now we have to host it somewhere and make it executable at runtime, correct? It has to run on a runtime so that when the hits or invocations come, they are executed in that runtime environment. This capability is also provided by the Anypoint Platform. And the third one is API operations and management. This is the third phase: once the API is designed, developed, and hosted on a runtime, that is still not enough to maintain the API. Operations follow, in the same manner as in any SDLC process where the project goes live into production. Right? So how do we maintain this API?
We could apply a variety of policies to the API, such as security policies, lifecycle policies, or runtime policies, to actually make it operational. Such operations and management capabilities are also provided by the Anypoint Platform. And the fourth one is API consumer engagement. This is about emphasising the consumption of those APIs: it enables us to engage the developers of API clients and manage the API clients they develop. So here we are concentrating on the consumer side, the term we emphasised in the previous video. These, then, are the four important capabilities that the Anypoint Platform provides to make delivery under the new IT operating model efficient. Now, among these four, I would like to group them further into two main core areas. The first two, API design and development, and API runtime execution and hosting, deal with the API clients and API implementations; the reason being that these two deal with code. If you look closely, this is the part where the code or logic comes into the picture, whether it is an API client or an API implementation. Both involve writing code: somebody has to write code or logic on the client side to make a call to an API, and similarly, on the API implementation side, we have to write code to expose the back-end functionality. So these two capabilities are code-related: developing the code, hosting the code, and executing it. The second two, API operations and management, and API consumer engagement, are logical concerns where nobody writes application code. Operations is more of a management activity, applying API policies, configuring SLA tiers on those policies, rate limiting, and so on. This is not writing code; this is where process comes into the picture.
Similarly, consumer engagement is also, as we saw, a role- or management-oriented activity. So these two are process- or management-related and concern the APIs and the API consumers. Now, given these two core groups, what capabilities are provided for the API clients and implementations group, and what capabilities are provided for the APIs and API consumers group? Let us first look at the important capabilities for the API clients and implementations group. API clients and implementations are the actual application components, as discussed in the previous slide: they are about the logic or code that is written. So, for these application components, what are the important capabilities? These are some of the capabilities the Anypoint Platform provides to enable the essential needs of implementations and clients. Back-end system integration is the first. This is mostly about the API implementation, because the implementation is where we write the logic to integrate with the back-end system and expose its functionality. Right? So the platform provides the capability to perform this back-end system integration. The second is fault-tolerant API invocation. What is this? This is mostly about the API client side. When logic is written to call a particular API that is provided by somebody else, it has to be written in a fault-tolerant way. What do we mean by "fault-tolerant"? You will understand it much better in future sections and videos, but I would like to give a short description here. Sometimes the API we are calling could be down, perform slowly, or not respond at all. In those situations, there are two ways to handle it: just live with it, passing the slow responses on to your own clients, or, if the call fails, passing the error back.
Or, if you want to keep your business effective and attract customers and clients, you may write the client code in a fault-tolerant way, with strategies that can still make your API invocation successful. Such flexibility is given by the Anypoint Platform. The third one is high availability and scalable execution. Again, this applies to either the API implementation or the client. What are we talking about here? Under runtime load, sudden ad hoc spikes, or load testing, your application should still be capable of handling the load and scaling accordingly. If there is a failed instance, high availability means a second instance comes up to take over, and the application should be scalable on demand so that it caters to the sudden load. Right. So this too is a capability provided by the Anypoint Platform. The last one is API client and implementation monitoring and alerting. As you must have seen in many projects, this is a practically compulsory, important thing: successful execution is one part, but monitoring, and alerting on the important metrics, is just as important. This is also a capability provided by the Anypoint Platform. Now, let us have a look at the next group, which is APIs and API invocations. As we discussed, the APIs and API invocations sit on top of the underlying application components, the API clients and implementations, as logical entities: the invocations refer to the consumer side, which is a logical role, and the APIs are nothing but the interface, the logical specification. So let us see what capabilities the Anypoint Platform gives us to manage these. The first, for the API interface, is API Designer, which is available in the browser on the Anypoint Platform.
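As a concrete illustration of fault-tolerant invocation, here is a minimal Python sketch. This is not MuleSoft code; the `call_api` callable and the simulated flaky back end are invented for illustration. It combines two common strategies the lecture alludes to: retrying with a short back-off, and degrading to a fallback response instead of propagating the error.

```python
import time

def fault_tolerant_invoke(call_api, fallback, retries=3, delay=0.1):
    """Try an API call a few times; on repeated failure, return a fallback.

    call_api: zero-argument callable that performs the invocation.
    fallback: value to return if all attempts fail.
    """
    for attempt in range(retries):
        try:
            return call_api()
        except Exception:
            time.sleep(delay)  # back off briefly before retrying
    return fallback  # degrade gracefully instead of surfacing the error

# Simulated flaky API: fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_api():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("back end unavailable")
    return {"status": "ok"}

print(fault_tolerant_invoke(flaky_api, fallback={"status": "cached"}))
```

A real client would retry only on transient errors (timeouts, 5xx responses) and might serve a cached or default payload as the fallback, which is exactly the "still make your invocation successful" idea described above.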
Then the second one is API policy enforcement and alerting. This comes as part of API Manager. This is where operations can apply different policies to the API, to restrict it for security or threat protection, among other reasons. The third is monitoring and alerting on API invocations. This is a little different from the monitoring and alerting of API clients and implementations. On the technical side, in the code, monitoring and alerting is about how things are behaving, how the performance is, and about bug and error alerts. On the operations side, the monitoring and alerting are more about whether the SLAs are being met for a given API, or whether there are denial-of-service attacks; such operational or business-related conditions are alerted on and analysed. The fourth one is the capability to make the assets discoverable, so that any interested parties can come in, look for the APIs they want, find them easily, and consume them. The final one is documentation that is simple enough for consumers to understand and use with ease. This comes under the self-service aspect of the API that we discussed in the previous video. When the documentation is elaborate and easy to understand, it helps people self-serve: they do not rely on the provider to send an email or documentation back; they come, consume the API, and whenever they have issues, they can help themselves using the documentation. So I hope you now understand these core capabilities provided by the Anypoint Platform for an effective operating model. Thank you. Bye.
6. Platform Demo
In this video, let's have a look at the Anypoint Platform components and how they are organised. What you are seeing in front of you are the components of the Anypoint Platform, grouped by category. On the left is Anypoint Design Center, and the components under the Design Center category are API Designer, Flow Designer, and Anypoint Studio. Then there is Anypoint Management Center, and under this category you see API Manager, Runtime Manager, Analytics, and Access Management. Sitting between Design Center and Management Center, sharing components with both, is Anypoint Exchange. This is where all the assets are published: code snippets, RAML fragments, and any other specifications; Mule applications can also be published to Anypoint Exchange. And then there is the set of runtime services, such as Anypoint MQ, CloudHub, Runtime Fabric, and VPC. We are going to have a look at these components. I am going to take you to the Anypoint Platform screen and demonstrate these individual categories shortly. What we are not going to cover in the demonstration is the hybrid deployment model. Unfortunately, I cannot demonstrate hybrid because, as the name implies, and as you should be well aware, hybrid is a mix of on-premises and cloud. But we will look at the other categories that you see here. Okay, let's move on. I have now logged into the Anypoint Platform using my trial account, so what you are seeing is the Anypoint Platform landing page. Let us look at the Anypoint Design Center components first. Go to the navigation bar; you can either reach Design Center from there, or go to it right from the landing page, whichever is convenient. Let's go to Design Center. In Design Center, you usually have two different parts.
You can create two different kinds of assets. If you click on Create New, you will see one type called Create a New Application, which opens the Flow Designer, and another called Create an API Specification, which opens the API Designer. What you create as an API specification is the RAML spec; remember, we looked at this for the Math API in the previous demo. You create the RAML specification, and everything that goes with it, as the API specification. And do not be confused between a fragment and a specification. An API specification is the complete specification of an API, a complete structure, whereas a fragment is just a portion of an API specification. For example, you can create data types, a request schema with a sample request example, and a response schema with a sample response example, as JSON schemas. These kinds of snippets, the modularised units of an API specification, are called "fragments." So you can create either of them. With the Flow Designer, you can actually create a Mule application itself. You cannot build a very complex application in this visual designer, but if it is just a simple integration with a few calls and connectors, you can definitely do it in the visual editor, or in fact import an existing application. For the complex integration flows, of course, you will use Anypoint Studio, which you are already familiar with. Now, let's move on to Anypoint Management Center. Under the Management Center are Access Management, API Manager, and other components such as monitoring and visualisation. So what is Access Management? Access Management is where all access to the Anypoint Platform, the ACLs, is controlled. In your organisation, you could have different sets of teams, right?
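To show what a fragment looks like next to a full specification, here is a minimal RAML 1.0 data-type fragment (a hypothetical example; the type name, fields, and values are invented for illustration). A fragment like this can be published to Exchange and reused across many API specifications:

```raml
#%RAML 1.0 DataType
# A fragment: one reusable piece of a larger API specification,
# not a complete API definition by itself.
type: object
properties:
  id:
    type: integer
    required: true
  name:
    type: string
example:
  id: 1
  name: Ada
```

A full API specification would instead begin with `#%RAML 1.0`, declare a `title` and resources, and could pull this fragment in via `uses` or `!include`.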
As previously discussed, there will be the various LoB IT teams, Central IT, and other teams who are not developing but may still require access to the Anypoint Platform. You control all such access privileges here, using different roles, each with dedicated permissions. And not just that: it covers client management as well. For example, once the consumers are created and their applications registered, you can see the client applications here, and you can integrate with an external identity provider, not just the MuleSoft authentication and client identity provider. You also manage environments here. These tabs are straightforward and self-explanatory, so I need not go into each individual one. So, as the name suggests, all of this is managed through Access Management. Then comes API Manager. API Manager is where the actual operational controls are enforced. Remember that we talked about the API clients and API implementations as one category or group, and about the APIs and API invocations as another; the latter is where API Manager's operations and monitoring come into play. Here you can apply various policies to the APIs. For example, let's take an existing API, which we had before, and say that I, or my organisation, decide that this API needs certain policies enforced on it. I can apply a CORS policy so that it is not openly accessible over the Internet and is allowed only for particular origins. Or I can throttle the calls that the clients make on the API, say I only want 100 calls per minute, or do some client enforcement, whitelist some IPs, and so on. All such kinds of policies can be enforced on your API here.
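To see what a throttling policy such as "100 calls per minute" does conceptually, here is a minimal fixed-window rate limiter sketched in Python. This only illustrates the idea; it is not how API Manager implements its rate-limiting policy.

```python
import time

class FixedWindowRateLimiter:
    """Allow at most `limit` calls per `window` seconds (fixed windows)."""

    def __init__(self, limit: int, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new window has begun: reset the counter.
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # caller would respond with HTTP 429 Too Many Requests

# A "100 calls per minute" policy:
limiter = FixedWindowRateLimiter(limit=100, window=60.0)
results = [limiter.allow() for _ in range(101)]
print(results.count(True), results.count(False))  # 100 allowed, 1 rejected
```

Real gateways often use sliding windows or token buckets to smooth out bursts at window boundaries, but the contract toward the client is the same: excess calls are rejected rather than forwarded to the implementation.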
Also, you can create SLA tiers, saying, for example, "for this particular silver tier, these are the rate limits and policies applicable, and for the gold tier, these are the criteria." You can go ahead and set all that up. Similarly, you can run analytics on your particular API, though unfortunately I do not have enough data to show in Analytics. But I can give you a general idea of how it appears: the number of API calls received, which regions of the world they came from, and any violations of the SLAs or policies you have enforced. All this information is shown as a dashboard here. You can in fact create your own custom dashboards as well, using the platform feature where you click "create a chart", choose the kind of chart you would like to show, and then select the criteria to display. As you can see, there are maps, different sizes, bar charts, and so on; it all depends on which chart you choose and which metric you select to represent on it. Once that is done, the chart is displayed here as soon as the data is available. Like I said, I unfortunately do not have the data right now, so I cannot show it, but I am sure you can visualise how these dashboards would look. Having said this, we have covered the Management Center with API Manager, Analytics, and Access Management. Next comes Runtime Manager, so let's go there. Runtime Manager is where the actual hosting and execution are done. This is where, if you remember, we hosted our Math API in the previous demonstration. The application is hosted and executed here; all the hits that come to API Manager are ultimately forwarded to the runtime to handle the request. So this is Runtime Manager, where we host our apps. You can manage the apps here; if you go in, there are many options, such as setting runtime properties.
You can enable certain features for your specific application, enable insights on the application, change the logging levels, and do other things in Runtime Manager. Now, what about the "fabric" among the runtime services? The word "fabric" here refers to scalability and elasticity. Okay? So how can that be achieved from Runtime Manager? You develop your code once as one application, deploy it onto Runtime Manager, and your job is done; from the operations or administration side, the administrator or platform manager takes care of the elastic behaviour. If demand arises on the fly and the application needs to be scaled, the vCores and worker size can be changed. You can either scale vertically or horizontally, meaning you can scale up or scale out. How do you scale up? You select a larger worker size in vCores (virtual cores). For example, if you select 0.2 vCores, you increase the memory, and if you select 1 vCore, you increase it to 1.5 gigabytes. This is vertical scaling, scaling up. Then, if you want to scale out, you add more workers. As this is a trial version, I was unfortunately only able to scale to a single worker; with a proper subscription, I would be able to select as many workers as the subscription allows. If I select two or three, there will be that many workers scaled out, and that many instances of the application running in high availability. So this is the fabric-like elasticity of the runtime services. And what about the VPC? VPC stands for "virtual private cloud." Sometimes your applications have to connect to on-premises systems, right? So how do you establish connectivity from CloudHub to the on-premises system? Or you might have another VPC on AWS, not from MuleSoft, where you use AWS and its own VPCs.
But say you have your own hosting on AWS in your own virtual private cloud; you may have to link the two up, right? These kinds of things require a virtual private cloud to be created. Just as you create a segregated network in your physical on-premises environment in the organisation, in the same way, for all your apps hosted on the Mule CloudHub, you would still create a VPC. All your IP addresses are then arranged and allocated according to the range, in CIDR notation, given in the allocation for that particular VPC. Again, this is a trial version, so I cannot actually demonstrate it; and even on a subscribed account, VPC setup is something the network administrators would do after careful planning. The last category we have to look at is the Anypoint Connectors, which fall under the Anypoint Exchange category. So what are these connectors? If you come to the landing page of Anypoint Exchange and click on "Provided by MuleSoft", you will see many connectors available out of the box from MuleSoft. I am going to filter on the types, because as we discussed, Exchange can hold different published assets, right? It could be a RAML specification, an API, a fragment, an example, anything. I am interested in the connectors, so if I filter on connectors, you will see a variety of out-of-the-box connectors, including the Salesforce connector and Amazon S3. You can read about each connector here, and you can see there are lots of them. These connectors enable integrations to quickly connect to an external system, which could be an enterprise system or an ordinary one, and access its data seamlessly; it hardly takes any time.
You need not worry about the authentication mechanisms, the different ways or best practices to establish a session with your enterprise system, or the best way to read the data; all of that is abstracted into these connectors. All you need are the proper permissions or access credentials, such as a username and password or token credentials, and you are connected. Everything else is handled by these connectors. This is the power that comes with using the Anypoint Platform. I hope you understood the demo. For any questions, please use the Q&A section. Happy learning.
7. Platform Automation
In this video, let us see what kind of automation we can do with the Anypoint Platform, or rather, how automation can be done on the Anypoint Platform. As you have seen in the previous demonstration video, you now know how the Anypoint Platform looks and what components it has. That web UI is there for easy interaction, for users who want to log in and perform tasks one by one. But let's say we want DevOps automation, where we automate some of the routine tasks that happen frequently. What I would like to point out here is that all the functionality you saw on the web UI in the previous demo is also available via the Anypoint Platform APIs. Basically, even in the UI demonstration we saw, every click or action we performed actually calls REST/JSON APIs behind the scenes, and MuleSoft exposes those same JSON APIs as the Anypoint Platform APIs. So we can use those APIs to write extensive automation tasks for our DevOps integration. And MuleSoft does not just provide the JSON APIs for high levels of automation; there are more tools as well. There is the Anypoint CLI, a command-line interface. It provides a user-friendly, interactive layer on top of the Anypoint Platform APIs. After installing the Anypoint CLI on your machine, if you prefer scripting over writing code or a program to invoke JSON APIs, and if you enjoy writing scripts, the Anypoint CLI is the best fit for you. Everything that you can do on the UI, or via the JSON REST APIs, can be done via the Anypoint CLI. And there is a third way as well, which is the Mule Maven plugin. Mule projects, from Mule 3 onwards, can be Mavenized, and if you have such Maven-managed projects, the Maven plugin comes in handy for certain levels of automation, like packaging and deployment.
On the packaging and deployment side, the Mule Maven plugin helps you integrate your DevOps packaging and deployment model very easily, without writing any custom scripts or programs, and it can deploy to all kinds of runtimes. This is more on the CI/CD side. So these are the three different kinds of tools provided around the Anypoint Platform. Based on your needs and preferences, you can choose whichever you want: the JSON APIs, the Anypoint CLI, or the Mule Maven plugin. And with a combination of these three, you can automate anything related to the Anypoint Platform modules that can be done on the UI. While we are discussing this, there is one more point I would like to make you aware of, since we spoke about the Maven plugin. When packaging is completed, the built artifacts are generally published to an artifact repository; the well-known ones are JFrog Artifactory and Nexus. That is where packaged artifacts are usually published, and that is where they are retrieved from the next time a build runs, whenever they are used as dependencies in your projects. But it is not necessary to have a third-party Maven repository like Nexus or JFrog: you can use Anypoint Exchange itself as a Maven repository, which is a useful feature of the MuleSoft platform. So you need not maintain a separate skill set or a special tool for a third-party Maven repository. In any case, as a best practice, whatever artifacts we build, like the API RAML from the design phase, we publish to Anypoint Exchange anyway; likewise, any small reusable template code we write to share within the organisation gets published to Exchange as a snippet, right? So in the same way, we can use Anypoint Exchange as the artifact repository for all the build assets.
So, in the Maven POM that we use in Mavenized projects, you can refer to your Anypoint Exchange as the repository, so that packaging and deployment publish the artifacts to that specific location. This is something you should know, so that when such a requirement comes from a customer and you, on the platform side, are asked, "Do we have to go with a third-party repository?", then, depending on the budget or whatever reasons the customer has, you can very well suggest sticking with Anypoint Exchange alone if you do not want a third-party repository. I hope you understood it. Thank you. Bye.
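As a sketch of what that POM change might look like, here is a hypothetical `distributionManagement` fragment. The repository `id`, name, and URL are illustrative placeholders only; check your organisation's actual Exchange Maven facade URL in the MuleSoft documentation, and put the matching credentials in Maven's `settings.xml` under the same server `id`:

```xml
<!-- Illustrative only: publish build artifacts to Anypoint Exchange
     instead of a third-party repository such as Nexus or JFrog. -->
<distributionManagement>
  <repository>
    <id>anypoint-exchange</id>
    <name>Anypoint Exchange (assumed Maven facade URL)</name>
    <url>https://maven.anypoint.mulesoft.com/api/v3/organizations/YOUR_ORG_ID/maven</url>
  </repository>
</distributionManagement>
```

With this in place, a plain `mvn deploy` publishes the artifact to Exchange, and other projects can pull it back as a dependency by declaring the same location as a `<repository>`.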