PL-200 Microsoft Power Platform Functional Consultant Topic: AI Builder
December 19, 2022

1. Welcome

Congratulations on completing the Power BI section of the course! We are almost done. So far you have looked at Dataverse and security, Power Apps, Power Automate, Power Virtual Agents, and Power BI. Now we'll cover the AI and ML parts of the course, and then we'll have a few lectures on integration with Microsoft 365. And that's it. Thank you.

2. First AI App – Invoice Form Processing

Welcome to the first lecture on AI Builder. In this lecture, we'll introduce AI Builder and create a sample application that processes invoice forms. So let's go to the app. In Power Apps, the AI Builder options appear when you click Build. You now have a variety of models at your disposal. Category classification categorizes text based on its meaning, and entity extraction pulls entities out of your data. With form processing, you can take any PDF or image file, process the form, and extract information out of it.

Object detection can recognise and count things, and there is prediction. Then there are options like the business card reader and many others. Under Models, you can see the models you have defined. So let us build a model; we'll use form processing and name it Form. To build a model, you need at least five documents with the same layout, and if you click on Examples here, you can get sample layouts. So I'll go ahead and create this model. Once it is created, I need to tell it which fields to extract from the PDF files, so I'll add these four fields. Then I will create a new collection, which will contain all the files that share these fields. So if I click on plus, then Add documents, and then From local storage, I will add these five PDF files.

So I have these six PDF files and six PNG files. I will use the first five PDF files to train my model and the sixth to test it, because you have to upload at least five. It is uploading. Now that I have all of these files here, I'll analyse them. While it's analyzing, we'll go back to the slides. This is the Build screen, with the models on the left, and we used form processing. These are all the options available to us. We gave the model a name. Here we can get sample documents, which I'm using for this example. We added these four fields. Then we created a collection and added documents to it. We uploaded the five documents and clicked on Analyze. You see that we have used a collection. A collection is a group of documents that share the same layout. If we have documents with different layouts, we can create different collections that contain the same information.

So, with collections, a single AI model can extract the same information from documents with different layouts; you create one collection for each unique layout. So if you have two different types of documents, you make another collection and add the second type of document to that collection. You must add at least five examples per collection, and adding up to 20 examples will give better results. Now I have all of these. In each of the documents, I have to identify these four fields. For example, this field is Bill To, this is the Contact field, this field is Date, and this field is Total. That is how I tell the model where these fields are in these five documents. And it is recognizing some of the fields automatically; it has already recognised Date and Total.

I just have to tell it where Bill To and Contact are. In this manner, I will quickly go through all five documents and mark Bill To and Contact. Once I've done all five documents, I'll click on Next, and it will use these five documents to train my model. Now my model is training, and it will take some time. We'll go to Models, and I see all the models here. This is the model I've just created; it is now being trained. Once it is trained, I can click on this model and see all my model details. I can now use this model in Power Automate or Power Apps. These are the four fields I have selected, and these are the documents. Here I can publish and do a quick test. Let me do a quick test: I'll upload the sixth file and see if it's able to recognise it.

Now it is analysing my document, and it will show me the results. So let's wait. Yes, it has given me Contact with 100% confidence, Date with 99% confidence, Bill To with 99% confidence, and Total with 100% confidence. So my model is done. On the slides: here we are identifying fields for the model. Here we have identified the Date field, and then we have identified all the fields in all five documents. This is my model summary. Then I click on Train. It goes to the Models screen, which shows my model being trained. Then on this screen, I open up my model, and I can link it to Power Automate and Power Apps. I can publish it and do a quick test.

After the test, it identifies all the fields for me, along with the confidence and their coordinates. You can enhance this model using flows: whenever an email comes in to your invoice processing department, you can trigger this AI model from the incoming email and pass it the attached file. The model will process the attachment and extract the information, and you can then send that information by email or push it into any system you require. So this shows how easy it is to create a sample AI model. That's all for this lecture. See you in the next one. Thank you.

3. Integrating With Power Apps & Power Flows

In this lecture, we'll talk about integrating AI Builder with Power Apps and flows. First, we'll look at Power Apps. Let me create a canvas application from scratch; I've created this sample application. Here I will insert a model that I've created in AI Builder. Once the app is created, I select the screen and go to Insert. There I have the option of using AI Builder, and I can insert my form processor, which I created earlier.

So I'm using this form processor that I created earlier. I can make it a little bigger and then run it to see how it works. If I click on Analyze, I can again give it my PDF file. It will analyse the PDF file and show me the results. While we are here: we clicked on Insert, clicked on AI Builder, and we have all these options. We used a form processor, and we can use the other options as well. Then these are the results. So let's see if the results are there. Yes, I can maximize it. It has identified the Date with 99% confidence, this field with 100% confidence, and the Total with 98% confidence. The screenshots show the same thing: which fields it has identified and with what confidence. Now let's look at flows.
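As a quick aside, once the form processor component is on the canvas you can bind labels to its output with Power Fx, just like the other AI Builder components later in this course. The sketch below is only illustrative: FormProcessor1 is an assumed control name, and the Results.total path is a hypothetical property path for a trained field called "total" — check the actual output properties your form processor control exposes before relying on it.

```
// Hypothetical bindings – FormProcessor1 and the Results.total path are
// assumptions; verify the output properties of your form processor control.
LabelTotal.Text:       FormProcessor1.Results.total.value
LabelConfidence.Text:  Text(FormProcessor1.Results.total.confidence * 100) & "% confidence"
```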

So if I go here and create a new flow: this is my test flow, and I'll trigger it manually. As an input, I will give it a file, and then I will use my AI model. The Predict action is used to call the AI model. It will list all the models here, and I have all these options. I'll use the form processing model, set the document type to PDF, and give it the file content from the input I just defined, since it needs a document. I will save my flow, check that there are no errors, and test it by running it manually and giving it my PDF file to see if it works. So it has saved. I have to give it a file, so I'll pick this file and then run the flow. The flow is running, and it shows that it has run successfully. If I navigate to the Runs page, it shows me all the run history. I can go to this run and see the details. It is still running... and now it has run successfully. The input is my PDF file, and the output is the fields identified in the file. So I got a Total value of 457 with a confidence of 1.

Then, with a confidence of 98%, it gives me the Bill To, Contact, and Date values, and that's all there is to it. So we used the Predict action to connect to our AI model. Out of all these options, we used form processing. We passed it the file content from the flow, and the Predict action was applied to it. Then we ran it: this is the file we gave it, and then it ran. This is my run history; all run instances are listed here, along with my output values. So we saw how easy it is to integrate AI Builder with Power Apps and flows. Thank you.

4. Form Processing – Tables

We looked at form processing in an earlier lecture. Forms can have data in different types of fields, and they can also have data in tables. So let us look at how to process data when it is in tables. When we are building our model, we have the screen where we added fields, but we can also click on the Tables tab and add a table. Say I add an Items table; I can then add columns to that table, such as Description and Item Total. Once I do that, I have to train my model to identify this table and get the values from it.

Let's say I add a new collection and add these five files. Once I upload these files, I will identify the Items table in them and map the Description and Item Total columns. Now that the files have been added, I'll go over them; it will just take a minute. While that happens, on the slides: we clicked on the Tables tab, added the Items table, and then added columns to that table. Then we select the table in the document, link it to Items, and mark each cell: we click here to say we are selecting a Description and select the description, then click here to say we are selecting an Item Total and select the item total, and we do that for all the rows present. This is how we train the model to identify tables. So let's go back. All right, this is again the same thing: we have selected our table and identified all the descriptions and item totals, and then we can see the results. So let's go to the results.

Here I have my model already built, I can do a quick test, and I can upload a file from my device. Okay, it has been analyzed and is loading the fields, so let me pause. Okay, now it is done. You see, I have this table here. What I did during training was select the whole table and link it to Items. Within Items I have a Description and an Item Total. I first say I'm identifying the Description and choose the description, then I identify the Item Total and select the item total. Then I add a row and mark the second description and the second item total, and then the third description and the third item total, and I'm done. So this is my defined table. Now, I have already trained this model, and we saw that I uploaded the test file. It has identified a table here, and if I click on it, I can see that it is able to identify the Description and the Item Total.

I just ran the PDF file; if I run the PNG file, it will be able to analyse that as well and give me the items and amounts. So we saw this: we are able to select the table, identify it, and train the model. This is training the model on another PDF, and this is the result you saw. This is the other file, and we have this result, with the table shown in table format. This one is not very accurate: you can see that it has picked up two columns and put both values here. Let us see if it is able to recognise this one. It is done, and if I click, I get this result. If we don't get confidence scores for a table, that's because tables don't currently return confidence scores. That is the difference between tables and individual fields. So that's about it. Thank you.

5. App Versions & Sentiment Analysis App

In this lecture, we will talk about versions of your AI model and sentiment analysis. So let us go here. Now, this is my model, which I have published, and I can edit it to make any necessary changes. It gives me two tabs: this is the published version, and this is the last trained version, so I can edit my model without touching the published version. For sentiment analysis, I have a flow that takes a text input and passes it to the sentiment analysis model. It saved the flow and reports no errors. I'll test it manually by giving it the text "Good day today" and running the flow, and then going to the flow runs page, where it shows me all the runs and their histories. I'll go to this specific instance, verify that the flow has completed successfully, and check the text I've provided. Then, in the model output, it tells me that the probability of positive sentiment is one, negative is zero, neutral is zero, and the overall sentiment is positive. So you see how easy it was: just give your input to the model, and it will give you the result. This is the example we just saw: this is the input, and this is the output. That's all. Thank you.

6. Extract Text From Image File

In this module, we'll talk about the text recognizer. Basically, what it does is look at an image and extract the text from it. I made a table called Text Recognizer Results; the table automatically adds a Name column, and I then made a flow for this table. If I look at this flow, the first step is a trigger for when a file is created on my SharePoint site. So this is my SharePoint site.

Then I added an AI Builder step with the text recognition model. It takes the content of the file and returns the text identified in that file as a results collection. Then, for each of the lines it has identified in the results collection, the flow adds a new row to my Text Recognizer Results table, and the detected text is what it inserts into the table. I have already uploaded this image, image seven, and this is my image. And in Power Apps, this is the data.

So, take a look at this image — "Hello" — this is the information I'm receiving. Let me do one more thing: let me go back here and upload one more file. So it will be this file. It is uploading, and once the file is uploaded, my Power Automate flow should trigger automatically, so there should not be just one successful run but another successful one. While it runs, let us look at the presentation. I created a table for the recognizer results. Then this is the flow: whenever a file is created, I run a prediction model on it and pass the file to it. The prediction model returns all the lines of recognised text in a results collection, and for each line, I add a new row to this table, with the detected text as the value of the Name field. So I gave the first file a run, and after running it, it showed me the text it was able to recognize. Now I have uploaded this second file, and we'll see what happens. If I click on all runs here, I will see that my second run has also succeeded.

And if I click on my run to look at its details: the file is created, it has run the prediction model, and it has completed. So what I have to do is go to my tables and refresh the data to see if it has added the rows. And it has: the new rows contain the text recognised from the image, phrases like "introducing modern calligraphy" and "handwriting font collection, blended perfectly", matching the wording in the image file. Because I ran this just now, I will add the results here and put them in my lecture slides. So this is how AI Builder is able to recognise text from an image. Thank you.
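As an aside, besides calling this model from a flow, the same text recognition capability is also available as a Text recognizer component for canvas apps. A minimal Power Fx sketch, assuming the component is named TextRecognizer1; Results and its Text column follow the component's documented output, but verify the names in your environment.

```
// Items property of a gallery or data table – one record per recognised line.
// TextRecognizer1 is an assumed control name.
TextRecognizer1.Results

// Text property of a label – join all recognised lines into one string.
Concat(TextRecognizer1.Results, Text & Char(10))
```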

7. Business Card Reader

In this lecture, we'll talk about the AI Builder business card reader model. So let's have a look at it. First, we'll create a new canvas app. We create a blank app, and into it we insert the AI Builder business card reader. Once we have that, we can insert a few labels.

This label will have a value of "business card reader dot full name", and then there are other labels for the email, business phone, and website. So this is the business card reader, and we've added labels to see the information extracted from a business card. If I just play the app and upload a business card — this is the business card I have loaded — it should provide me with all four pieces of data. So Hannah Wilcox is the name, this is the email ID, this is the business phone number, and this is the website. It is able to extract the information from the card and show it here. I have also created a cleaner version of the application, where I can give it another image. This one has other items along with the business card; however, it will still recognise the business card and provide me with the four pieces of information.
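In Power Fx terms, those label bindings look roughly like the sketch below. The FullName, Email, BusinessPhone, and Website properties come from the field list later in this lecture; BusinessCardReader1 is an assumed control name (it is whatever name the component gets when you insert it).

```
// Text properties of the four labels – BusinessCardReader1 is the assumed
// name of the inserted business card reader component.
LabelName.Text:     BusinessCardReader1.FullName
LabelEmail.Text:    BusinessCardReader1.Email
LabelPhone.Text:    BusinessCardReader1.BusinessPhone
LabelWebsite.Text:  BusinessCardReader1.Website
```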

So it has identified the information. I have put labels here so that it's easy to read. That is how easy it is to integrate the business card reader into a canvas app. If you want to do the same thing in flows, you manually trigger the flow, add an input for a file, and then use the AI Builder Predict action. In the list of models, you have the business card model; you just tell it the image type and pass it the image from the manual trigger. We can also use model-driven applications. Let me create a new model-driven application now. What I will do is link the Appointment entity to this model-driven application, because there has to be an entity. So this is where I edit the appointment, and... this is not showing up.

But I've already made this so that we can look at it. I linked the app to an entity of type Appointment, and going back to the forms, I edited the quick create form to include a business card reader. If I edit the quick create form, I have added a business card reader in the middle so that we will be able to upload the image. So this is the quick create form, and what I have done is added the component with the plus sign. Now I'll come back to my test application, which I just created; let me edit it and try one more time. I go and edit the site map here. If I click on the subarea, I have to click on the entity and select the Appointment entity, and then I can also edit it — this is Appointment. Then I have to save and close it.

Then, if I go to Forms and check all the forms, I can see the main form, the quick create form, and so on, and I edit the quick create form. Here again, it shows me the appointment form, and here I have added the business card reader: I've clicked on the plus component and added this control here. So that's all. This is already added, and what I can do now is run my model-driven application. Once I run it, I can see all the appointments here. And if I click the plus under Activities, it shows me the quick create form. I can scan my business card once I have uploaded it, and it will extract the subject and location information from it. If I click Save and Close, it adds an appointment for me really quickly and easily. So, the business card reader: it's a prebuilt AI model that allows you to extract information from business card images. If it detects a business card in the image, it will extract all the information, such as name, job title, address, email, company, and phone numbers.

So here we are creating a canvas app, giving it a name, and then inserting a business card reader. Then we have added these labels and linked them to the business card information; we have used these four fields: full name, email, business phone, and website. But there are many other available fields, like address city, address country, address postal code, address street, business phone, cleaned image, company name, department, email, fax, first name, full address, full name, job title, last name, mobile phone, original image, and website. This is an example screenshot where we have tested it, uploaded a business card, and can see all this information in Power Apps. In flows, what we did is add the Predict action, use the business card model, give it an image type, and link the image from the manual trigger. So whenever an image file is uploaded, we can read the business card and then use that information, for example to insert rows into a table, the same thing we did with the text recognizer in the previous lecture.

Then, coming to the model-driven app, we created a new model-driven app and linked it to the Appointment entity. This is the Appointment entity, and I linked it here. Then we went to Forms and edited the quick create form. This is the original form before editing; we didn't have a business card reader here, so we added the business card reader component, and these are the settings we used: for Full Name we mapped Subject, so that is how the subject was populated with the name, and for Address we mapped Location, so the location gets the address. Then we ran the application, clicked Plus to create a quick appointment, uploaded the business card, and we were able to see the subject and location. When we clicked Save and Close, we could see the appointment record that was created. So that is how we use the business card reader. Thank you.

8. Object Detection

In this lecture, we'll talk about object detection, where AI is able to detect particular objects in an image. Let's come back to Power Apps first. In AI Builder, you can create a model and choose object detection. In our earlier model, we had to provide five documents, but for this one we have to provide at least 15 images of each object. Once we create it, we first have to tell it which objects we want to detect in the images. There are various types of domains: common objects, like a machine, a drone, or a skateboard; objects on retail shelves like these; or brand logos like these. We'll use common objects and move on.

Then we choose the objects that we need to identify. I will add green tea cinnamon, green tea mint, and green tea rose, so we have put in three objects. Then we add images to our model and train it. Let us say we have added just one image; we'll see how tagging works using this one image. Once it has uploaded the image, I can go and tag it. It tells me that for all three objects, I don't have any tags applied yet. If I move my mouse over the image, it will start detecting objects; I can also draw a square like this, but it will auto-detect as well.

So this is Rose, this is Mint, and this is Cinnamon. As you can see, as I tag, these counts increase, and I need at least 15 tags applied to each of these objects. So I have done all the tagging. Let me go back to my Power Apps models.

I have already created this object detection model, so we can go to this one after training. Training took some time, and this is how it shows up. Then I can do a quick test and upload an image from my library. I have uploaded this image, it is analyzing it, and it has shown that this is green tea rose with 97% probability, green tea mint with 93% probability, and green tea cinnamon with 96% probability. We can also build a canvas app. So let me look at my test — okay, this is model three. Let me create a new canvas app, again with a blank tablet layout, and I will use an object detector and a data table in this canvas app.

First I insert an object detector and link it to the model I've already created. Then I will add a data table. I won't connect it to a data source; instead, on its Items property I'll use the object detector's grouped results. If I run this application and upload an image in my test, it analyses the image and should show me the expected results for these three objects. It has shown me green tea rose, green tea mint, and green tea cinnamon, and if I come to the data table, I can see all three objects with an object count of one. So they are all there. To recap: we select the object detection model, give it a name, and then select the domain.

We selected common objects, and there are also domains for objects on retail shelves and for brand logos. Then you choose the objects that you need to identify and add at least 15 images for each object. Here we have around 30 images in total, with 15 or more per object. You tag your objects in each of the images, and once each object has 15 or more tags, you are done tagging. Then you see a summary of what you have tagged, and you can train the model. After it gets trained, you get this type of screen where you can publish your model or do a quick test; in a quick test, you upload an image and it identifies the objects. Then, in the canvas app, you insert the AI Builder object detector and add a data table. You connect the two by putting the object detector's grouped results on the data table's Items property, and the results appear on the screen, as sketched below and as we saw when we went back to the app. Thank you.
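A minimal Power Fx sketch of that wiring, assuming the object detector component is named ObjectDetector1 and the data table is DataTable1; GroupedResults is the grouped output the lecture refers to.

```
// Items property of the data table – one row per detected object type,
// which is why each object shows up with its count.
DataTable1.Items:  ObjectDetector1.GroupedResults

// Optional label showing how many distinct object types were detected.
LabelCount.Text:   CountRows(ObjectDetector1.GroupedResults) & " object types detected"
```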

9. Language Detection

In this lecture, we'll talk about language detection. So let's go. We'll go to Power Automate and create an instant cloud flow; we name it Test Language and choose a manual trigger. Here we'll add a text input, and then we'll use the AI Builder action to detect the language, giving it the input that we manually entered. And this is our flow.

Now that we have saved it, once we test it, it should tell us what language the text is in. For example, suppose a customer is asking something; based on the language, you can route that query to an agent with the right language skills. So you can add conditions to the flow and, based on the detected language, perform some actions.

Now it is saved, so let me test it. First I will give it an English phrase. It returns a two-letter code that tells you what the language is. I ran it, it ran successfully, and you can see that this is my input text and the output language is en, English. If I run it again and give it another language — I have given it a German text; "good morning" in German is "Guten Morgen" — this has also run successfully, this is my input, and de is German. So this is our flow: we give it input and run it through the Predict action for language detection. The input is a text, and for whatever text I give it, I get back a language and a score showing the confidence level. So that's it for language detection. Thank you.

10. Key Phrase Extraction

This module is about key phrase extraction. What it does is take a piece of text and extract the key phrases from it. Imagine you want to extract key phrases from customer service requests or customer queries and see what they are about. So, once again, let's make an instant flow. We'll add a text input and then use the AI Builder action to extract key phrases. In this model, we also give it a language: the text is in English, so the language is English.

We have entered the input described above. So this is the text input, and we have given the same input here, and we have saved our flow. What we have to do now is give it some text, so I'll just copy and paste from my slide here and then test it with that text. It has run successfully. Let me go to the Runs page; it has completed successfully. The input is the text I have given, and these are the key phrases it has extracted: pre-built AI models, custom AI models, Microsoft, and things like that. Key phrase extraction identifies key phrases in text by using Azure Cognitive Services Text Analytics, a cloud-based service that provides natural language processing features for text mining and text analysis, including sentiment analysis, opinion mining, and key phrase extraction.

So we have looked at key phrase extraction here, along with language detection and entity recognition. This was our flow: we used the Predict action for key phrase extraction, we gave it this text as input, and this is the output — "prebuilt AI models", "custom AI models", and so on. I have marked whatever phrases it identified, and you can see they look very relevant. AI Builder key phrase extraction is a prebuilt AI model that identifies the key talking points in unstructured text. Custom AI models require that you provide data samples to train them before they can be used; prebuilt models are pretrained with data from Microsoft, so they are ready to use right away. So that's it. Thank you.

11. Receipt Processing

This lecture is about receipt processing. Say you get a receipt from a grocery store or a restaurant; this model can look at that image and process the information on that receipt automatically. In Power Apps, let us create a canvas application from blank, name it Receipt, and create it. In it, we will add labels, a data table, and the receipt processing AI Builder component. So there you have it: we insert AI Builder, and we have this receipt processor, so we'll add a receipt processor. Then we add a data table and move it into place, and for its Items, we'll link it to the receipt processor's purchased items. Then we add a label to show the total amount from the receipt processor, followed by another label for the confidence of the detected total, as sketched below.
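A minimal Power Fx sketch of those bindings, assuming the component is named ReceiptProcessor1 and the data table is DataTable1. PurchasedItems, Total, and TotalConfidence mirror the properties described in this lecture, but treat the exact names as assumptions and confirm them against the control's output properties.

```
// Data table Items – the line items detected on the receipt.
// ReceiptProcessor1 / DataTable1 are assumed control names.
DataTable1.Items:  ReceiptProcessor1.PurchasedItems

// Labels for the detected total and its confidence score.
LabelTotal.Text:   ReceiptProcessor1.Total
LabelConf.Text:    Text(ReceiptProcessor1.TotalConfidence * 100) & "% confidence"
```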

Now we have it ready, so we can run it; let us upload a sample receipt. Let me upload this one. It is processing it and has given a total of 54.5, which it picked up from here, with 93% confidence. It has drawn circles around the other fields it has identified. For the items displayed, it has identified this item, so although the receipt has more than one item, it has identified only one. You can also use this in flows as well as in apps. So let me create an instant cloud flow. I will add an input for a file, then use the Predict action with the receipt processing model once more, and in this action I'll give it the file from the input I just added. So this is my simple flow. I will save it, test it, and give it a receipt. It is still saving... and now it is saved, so let me give it a receipt and run it. It takes a little while to finish, and then it says it has run successfully.

So you see, it has given me outputs like the transaction time and its confidence, the transaction date, the total, the subtotal, tax, tip, and so on. All of the line-item information is also present under items. So we used AI Builder with the receipt processor, then inserted a data table and linked it to the receipt processor's purchased items. We added the total label and linked it to the receipt processor's total, and the probability label linked to the total confidence. This is the output we just saw: it identified the total and its probability from here. It identified only the first item from the list, not the other three; here too, it has marked in blue only the topmost item. This is what happened when I gave the same application a second receipt: the total of 64 comes from here, and you can see the confidence on this receipt is 98.4%. It has identified the items — bloody marys and fresh oysters — and also the quantity from here and the price from there.

Then this is the third receipt that I gave. It was not able to identify the total, but it was able to identify all three items, with their quantity from here and their price from here. This is our flow: we gave it a file, and I ran the flow as well. When we run the flow, it gives me the merchant name from here, the merchant phone number here, and the total here. Then on the second receipt, it gave me the transaction time and date from here, and the total from here. It also provided a subtotal, tax, and the tip. Then on the third receipt, it gave me the merchant name, the address, the phone number, and the date and time from here — these are the dates and times — and it gave me a subtotal and tax. So you can see how it basically reads a receipt and extracts the information from it. So that's all for this lecture. Thank you.
