AZ-204 Microsoft Azure Developer Associate Topic: Develop for Azure Storage Part 2
December 16, 2022

15. AZ-203/204 – Lab – Storage Accounts – Blob snapshots

Hi, and welcome back. Now in this chapter, we are going to look at something known as "snapshots," which are available for your blobs. Let me go on to an existing storage account and on to my containers. Let me go ahead and create a new container, so I have the container in place. Now let me go on to the container and upload a simple file from my local workstation. So I'll browse for the file; it's a very simple sample TXT file. Let me go ahead and click on "Upload." So I have the file in place. Now, if I go on to the Edit section, you can see the contents of the file.

Now if I go back, I can choose the file and create a snapshot out of this particular blob. That means it's taken a copy of this file at this particular point in time. Now, let's say I go on to the file, edit it, and make a change. Let me go ahead and click the Save button. So now this is the content of this particular file. Now if I go back onto the file and view the snapshots for this particular blob, I can choose the snapshot that I took earlier. You can go ahead and delete the snapshot, or you can promote the snapshot, which will overwrite the existing blob. Please note that you can also download the snapshot at any point in time; that's basically a copy of your blob at that point in time. So if I go ahead and click on Promote Snapshot and then OK, and then go back onto my file and on to Edit, you can see we have the original version of the file, right? So in this chapter, we've seen the snapshots that you have for your blobs.
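If you want to do the same thing from code rather than the portal, here is a minimal sketch, assuming the Microsoft.Azure.Storage.Blob client library and hypothetical container and blob names (promoting a snapshot is effectively copying the snapshot back over the base blob):

```csharp
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

class Program
{
    static void Main()
    {
        // Hypothetical connection string, container, and blob names
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("democontainer");
        CloudBlockBlob blob = container.GetBlockBlobReference("sample.txt");

        // Take a point-in-time snapshot of the blob
        CloudBlockBlob snapshot = blob.CreateSnapshot();

        // ... the base blob gets modified here ...

        // "Promote" the snapshot by copying it back over the base blob
        blob.StartCopy(snapshot);
    }
}
```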

16. AZ-203/204 – Lab – Blob – Properties and Metadata

Hi, and welcome back. Now in this lab, let’s look at blob properties and metadata. So a blob in a container in a storage account has system-defined properties. You can also define user-defined metadata for the blob object itself.

So these are additional name-value pairs that you can assign to the blob. So let's go on to Azure and see how we can view the properties and metadata for a blob object. So here we are in Azure. If I go on to the object itself, I have a blob object in place, and these are all the properties of the object. So you have the URL, the last modified date, and the creation time. These are all the system-defined properties for the blob object itself. But at some point in time, you might also want to add additional information for each blob object in your container. You can do that by using the metadata that's available for each blob object. These are nothing but key-value pairs. So let's say you want to ensure that each object in a container has a key that specifies which department the blob object belongs to. You can specify a key and a value, and then click on Save. Please keep in mind that you can also do this from a program. So, very straightforward: that's it when it comes to the properties and metadata for an object in a container in Azure Storage.
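As a quick sketch of doing this from a program, assuming the Microsoft.Azure.Storage.Blob client library and hypothetical names, the metadata is just a string dictionary on the blob object:

```csharp
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

class Program
{
    static void Main()
    {
        // Hypothetical connection string, container, and blob names
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("democontainer");
        CloudBlockBlob blob = container.GetBlockBlobReference("sample.txt");

        // Set a user-defined key-value pair and persist it to the service
        blob.Metadata["department"] = "HR";
        blob.SetMetadata();

        // Fetch the system-defined properties and metadata from the server
        blob.FetchAttributes();
        Console.WriteLine(blob.Properties.LastModified);
        Console.WriteLine(blob.Metadata["department"]);
    }
}
```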

17. AZ-203/204 – Lab – Blob lease

Hello and welcome back! In this lab, let's look at the blob lease feature. The lease operation is used to establish a lock on a blob object for write and delete operations. At this point in time, you can still read the blob, but no other client can perform another write or delete operation on the blob itself. Now in Azure, I have a blob object in a specific container in a specific storage account. If you go ahead and select the object, you can acquire a lease right here. So if I click on "Acquire lease," it will just confirm that I'm acquiring a lease on this blob object. Once that is done, if you now go on to the object itself, it mentions that this blob has an active lease and can't be modified without the lease ID. Now let's see how to do the same from code. So here I am specifying a time period; I'm then using the handle that I have on the blob itself and then acquiring a lease.

And then I am fetching the lease state and the lease duration from the properties of the blob itself. Now, remember that, using the properties, you can also get the system-defined properties and the metadata for your blob object. So remember, in an earlier chapter, we had looked at the system-defined properties and the metadata; you can also get those using the properties over here. So let's go ahead and just quickly run the program. So here it's telling me that the lease state is "leased" and the lease duration is "fixed." Now, just one quick note over here: I am also fetching the attributes. This is a method that you can use with your blob. When trying to get the properties of the blob object itself, please ensure that you first fetch the attributes, so that the attributes are fetched from the server onto your client. Sometimes, when you try to get the lease state, you might not get the actual state of the lease if you have not fetched the attributes. Just a quick note, right? So this marks the end of this lab.
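Here is a minimal sketch of what that code might look like, again assuming the Microsoft.Azure.Storage.Blob library and hypothetical names; note the FetchAttributes call before reading the lease properties:

```csharp
using System;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;

class Program
{
    static void Main()
    {
        // Hypothetical connection string, container, and blob names
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudBlockBlob blob = account.CreateCloudBlobClient()
                                     .GetContainerReference("democontainer")
                                     .GetBlockBlobReference("sample.txt");

        // Acquire a lease for a fixed time period (15-60 seconds, or null for infinite)
        string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(30), null);

        // Fetch the attributes so the properties reflect the server-side state
        blob.FetchAttributes();
        Console.WriteLine($"Lease state: {blob.Properties.LeaseState}");
        Console.WriteLine($"Lease duration: {blob.Properties.LeaseDuration}");
    }
}
```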

18. AZ-203/204 – Lab – Copying Blobs

Hi, and welcome back. Now, in this chapter, let's look at a lab on copying blobs. There are various tools available for copying blobs between storage containers or even between storage accounts. So you have the Azure command-line interface (CLI), you have the AzCopy command, and you have the .NET Storage Client Library. First, let's look at using the Azure CLI. You can use this utility to upload and download blobs between storage accounts and the local file system. Here, the transfer is synchronous. So the transfer between your local system and the storage account is synchronous in nature.

Now, if you are transferring large objects, there could be an issue: if the transfer fails, there is no way of recovering it. Remember, you can also directly transfer blobs between containers and storage accounts using the Azure CLI. The operation in this case is asynchronous in nature. So you could basically start one copy command and then start another at the same time. Next, looking at the other utility, AzCopy: this is a specific utility that's used for copying blobs between storage containers and storage accounts, and between your local hard disk and storage accounts. So this is, as I said, a tool specifically written for working with blobs. You can use this tool to transfer data into, out of, and between storage accounts. Here, every transfer operation happens asynchronously. And one of the biggest advantages is that if you're transferring an object and that operation fails, you can restart the failed operation. It will always try to restart at the closest point to where the failure happened. And then you have the .NET Storage Client Library.

So this is from within a program. The advantage is that you can build your own custom code to work with blobs in Azure Storage. And you can also get access to the metadata and the properties of the blob using the .NET Storage Client Library. So in this lab, let's look at how to work with blobs using the Azure CLI and the AzCopy command. So here I am in Azure. Now, I have a couple of storage accounts in place. I have a storage account named appstore2020. In this, if I go on to the Blob service, I have a container called demo. Let me go ahead and delete the current file in this container, so I don't have any files in this container. Now here, I've gone ahead and opened the cloud shell. Let's see how we can upload a file; it could come from our local system, or it could be in the Azure cloud shell. So let me go ahead and upload a file onto the cloud shell. I just uploaded a sample audio log, so if I do an ls, I can see my audio log file. Now, to upload the file to the container, I'm going to be using the az storage blob upload command. So this is the Azure command-line interface. I'm specifying the name of the container, and I'm specifying what should be the name given to the blob object.

I'm giving the name of the local file that needs to be uploaded, I'm giving the account name, and I'm giving the account key. So remember, the key is used for authorizing Azure Cloud Shell to actually work with the account itself. So you could go to the appstore2020 access keys and copy either key one or key two from your storage account. So let's go ahead and run this command. Right, so that's done. If I now go on to the Blob service and on to our demo container, you can now see our audio log file. Please note that it's showing up as archived here because I have an Azure Logic App in place that automatically archives objects that are uploaded to this container. Now, we could also go ahead and copy objects between containers and storage accounts. So I have a separate storage account for the destination. Currently, if I go on to the Blob service there, I don't have any containers as of yet. So I want to copy the audio log file from this demo container onto the destination storage account using the Azure command-line interface.

So I've got the command in place: it's az storage blob copy start. Here I'm giving the account name of the destination account, and I'm giving the account key, again of the destination account. So go to that storage account, go to the access keys, and take key one from there. Then you have the destination container and the destination blob name. Then you have the source account name, the source key, the source container, and the source blob. Before running it, let me move on to blobs and create a container named destination. So this is the container in our destination account. Let me click on OK, so the container is in place. Let's go ahead and now run the command. So it's saying it's done. Let's go on to the container, and now you can see the file in place. So you can use the Azure command-line interface to copy blobs between storage accounts; a sketch of both commands is given below.
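For reference, here is a sketch of the two commands used above, with placeholder account names and keys (the exact names in your environment will differ):

```bash
# Upload a local file to the demo container (synchronous transfer)
az storage blob upload \
    --container-name demo \
    --name audio.log \
    --file audio.log \
    --account-name appstore2020 \
    --account-key "<key1>"

# Server-side (asynchronous) copy of the blob to a destination storage account
az storage blob copy start \
    --account-name deststore \
    --account-key "<destination key1>" \
    --destination-container destination \
    --destination-blob audio.log \
    --source-account-name appstore2020 \
    --source-account-key "<source key1>" \
    --source-container demo \
    --source-blob audio.log
```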

Now you could use the other tool, known as the AzCopy tool. With AzCopy, you first have to go ahead and download the executable. I've done it for Windows; it's a simple zip file, and I've gone ahead and placed the executable in a folder named C:\work\temp. Now, in order to authenticate ourselves from our local machine, I'm going to be using something known as a shared access signature to authorise myself to use the Blob service in our Azure storage account. Again, if you haven't seen shared access signatures yet, we have a number of chapters on the subject. So if I go on to our storage account, appstore2020, let me go on to Shared access signature. I can now close the cloud shell, since I don't require it. So this is a secure way of giving access to the services that are available in your storage account. Now, since I want to work with the Blob service, I'll mark the allowed services as Blob. I'll leave all the allowed resource types selected. I will grant the permissions to read, write, list, add, and create. Let me go ahead and generate the SAS and the connection string. And here's my Blob SAS URL, so let me go ahead and take that.

So now I'm going to be using the AzCopy command, which I have in my work temp folder, to upload the audio log file onto our storage account. If I go back to my storage account, let me go on to the Blob service and on to the demo container, and let me again delete this file just to make sure it's not there. So now I'm going to go ahead and issue the AzCopy command to copy the audio log file onto the demo container in our storage account, using the shared access signature. So it's saying the total number of transfers is one. If I go on to my storage account and click on Refresh, you can now see the audio log file. So you can use the simple AzCopy command to copy blob objects between containers and storage accounts, the same as the Azure command-line interface. I've attached a commands file as part of the resources for this chapter so that you can practise these commands. Right? So this marks the end of this lab.
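A sketch of the AzCopy (v10) command, with a placeholder SAS token and the file paths used in this lab:

```bash
# Upload the local file to the demo container, authorising with the SAS token
azcopy copy "C:\work\temp\audio.log" \
    "https://appstore2020.blob.core.windows.net/demo/audio.log?<SAS token>"
```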

19. AZ-203 – Data Movement Library

Hi, and welcome back. So in this chapter, let's go through using the Microsoft Azure Storage Data Movement Library; this is just something that's important from an exam perspective. This is a cross-platform, open-source library that can be used for copying Azure Storage blobs and files. Now, in this, you have two main classes. First, there is the DirectoryTransferContext class. This is used to transfer an entire directory of files. As part of this class, you have something known as the ShouldTransferCallbackAsync property, which is used to specify whether a transfer should be done or not. Then you have the SingleTransferContext class. This is used to transfer a single file. As a member of this class, you have the ShouldOverwriteCallbackAsync property.

This is used to specify whether an overwrite should be performed on the destination. Then you have the TransferManager class. This contains the methods to copy the data, such as CopyAsync and UploadAsync. Now, something from an exam perspective: in the versions of these copy methods used here, there is something known as the isServiceCopy flag. This is a flag that is used to indicate whether a copy is a server-side asynchronous copy or not. So if the flag is set to true, then an asynchronous copy will be done on the server side. If it is set to false, then the file from the source will first be downloaded locally and then be uploaded to the destination. So, let's go ahead quickly and see how we can use the Storage Data Movement Library. So, here we are in Azure. Now, I have two storage accounts in place.

So in my source storage account, I have a container named demo, in which I just have an audio log file. If I go on to the destination storage account, I have a container, and I don't have any blobs in this particular container. Now, if I go on to the code itself, or if you go on to the NuGet package manager, you can see I've gone ahead and installed Microsoft.Azure.Storage.DataMovement in the program. Very simple. Again, I'm getting a reference to the container and the file, the blob object, in our source container. And then I'm getting a reference to the destination as well. I'm then using the SingleTransferContext class, and then I'm going ahead and using TransferManager.CopyAsync. Here you give the handle to the source blob and the destination blob; as simple as that. Let me go ahead and run the program. After I run the program, if I go to the destination and click on Refresh, you can now see the audio log file. So you can copy objects using the Data Movement Library as well. So this marks the end of this chapter.
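Here is a minimal sketch of that program, assuming the Microsoft.Azure.Storage.DataMovement package (a release in which CopyAsync still takes the isServiceCopy flag) and hypothetical connection strings:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Storage;
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.Storage.DataMovement;

class Program
{
    static async Task Main()
    {
        // Hypothetical connection strings, container, and blob names
        CloudBlockBlob source = CloudStorageAccount.Parse("<source connection string>")
            .CreateCloudBlobClient()
            .GetContainerReference("demo")
            .GetBlockBlobReference("audio.log");

        CloudBlockBlob destination = CloudStorageAccount.Parse("<destination connection string>")
            .CreateCloudBlobClient()
            .GetContainerReference("destination")
            .GetBlockBlobReference("audio.log");

        SingleTransferContext context = new SingleTransferContext();
        // Overwrite the destination blob if it already exists
        context.ShouldOverwriteCallbackAsync = (src, dst) => Task.FromResult(true);

        // true = server-side asynchronous copy; false = download locally, then upload
        await TransferManager.CopyAsync(source, destination, true, null, context);

        Console.WriteLine("Copy complete");
    }
}
```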

20. AZ-203/204 – Lab – Azure Table Storage

So now in this chapter, we are going to look at a lab on Azure Table Storage. Basically, this is a service that is used to store structured NoSQL data. It's a key-attribute store, and the design is schema-less in nature; we're going to see this in our lab. The cost is typically lower than a traditional SQL database for the same amount of data. Now, use Table Storage for use cases such as storing user data and application data. But if you need complex joins between tables, if you need foreign keys, or if you need stored procedures, then look towards using traditional SQL databases instead of Azure Table Storage. Now, in Azure Table Storage, let's take an example where you have a customer table.

So Customers is the name of the table, and then you have entities that you define as part of the table. An entity is like a row in a SQL table. Each entity has a set of properties, and each property is a name-value pair. Each entity also has three system properties, known as the partition key, the row key, and the timestamp. Each entity has a maximum size of 1 MB. So let's go on to Azure and see how we can work with Azure Table Storage. Now, if you go on to resources, I only have a storage account in place. If you go on to the Table service, I already have a table in place. Adding a table is very easy: you can click on the "Plus Table" sign, give a table name, click OK, and then you have your table in place. As you can see, the table has a URL. Now, to add data to the table, you can go ahead and download the Storage Explorer. There's also the Storage Explorer in preview mode within the Azure Portal itself.

So if you go on to tables, you can go on to your customer table, and you can go ahead and add an entity from here itself. Now, when you add an entity, by default there are two properties that are mandatory as part of the entity: one is the partition key, and the other is the row key. These are similar concepts to those in the Table API in Cosmos DB. I'll go through what a partition key is and what a row key is in a subsequent chapter. For now, let's try to logically map our partition key and our row key onto the properties of a class. So let's say that I'm trying to have data in place that is going to store information about the prices paid for courses by various customers. Let's say that I decide that the course name is going to be our partition key. So let's say I have a user who's taking a big data course; for the row key, let me enter the username. And I'll add another property: the price that is paid for the course. So let me just enter a value of ten; we can add it as an integer and click on Insert. I can go ahead and add another entity by adding the values accordingly and clicking on Insert, and I can add one more.

So here I am giving the same partition key to another entity, but a different row key; keep this in mind. Now, since the table is schema-less in nature, we can add another property if necessary. So you don't have to have a predefined schema for your table: you can actually add different properties for different entities in the table. For now, let me just insert this entity onto the table, right? So now we've got three entities, each with the three system properties: the partition key, the row key, and the timestamp. So we've seen now how we can create a table and add entities to it using the Storage Explorer, which is available in the Azure Portal. Later, we'll also look at the code that can be used to query data from an Azure table. But first, let's go ahead and execute some REST APIs to see how we can work with tables. So for that, let me go on to another tab on our same storage account and create a shared access signature. This will allow Postman to be authorised as a user to work with the Table service. So I'm going to go ahead and just ensure that I allow the Table service. I have all the allowed resource types, and I'll make sure to add the permission to read the data from the table. Let me go ahead and generate the SAS and the connection string. Over here, I can take the Table service SAS URL. Let me go ahead and copy that, go on to the Postman tool, and add the URL.

And over here, let me add the name of the table, since we want to get all of the entities from the customer table. Let me go ahead and click on "Send." So now we are getting all of the entities, with the partition key and the row key, from the table using this URL. Now, of the different authentication and authorization techniques, this is just one, using the shared access signature. Now assume I want to get a specific entity based on the partition key and the row key. So I can enter the partition key and the value that I want, and for the row key, the value that I want. Let me go ahead and click on "Send." So now you can see that I'm only getting the entity that has that partition key and that row key. So again, it's very important to understand how the URL is formed. The shared access signature is just a way to authorise yourself; what is important is the way that you construct the URL to access the Table service, to access the table in the Table service, and to access an entity from the table in the Table service. Right. So this marks the end of this chapter.
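For reference, here is roughly what those two requests look like with curl; the account name is from this lab, while the table name, key values, and SAS token are placeholders for whatever you created:

```bash
# List all entities in the Customers table
curl "https://appstore2020.table.core.windows.net/Customers?<SAS token>" \
    -H "Accept: application/json;odata=nometadata"

# Fetch one entity by partition key and row key
curl "https://appstore2020.table.core.windows.net/Customers(PartitionKey='BigData',RowKey='user1')?<SAS token>" \
    -H "Accept: application/json;odata=nometadata"
```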

21. AZ-203/204 – Azure Table Storage – Partition and Row Key

Hi, and welcome back. Now, we've seen these terms, partition key and row key, when defining a table. So what exactly are the partition key and the row key? When you have a large amount of data, say in a database table, it's usually a good idea to split it up into multiple partitions, either logical or physical. Now, what's the advantage of having different logical partitions for your data? It makes it much easier to find a record. So let's say you have a SQL table: finding a record is much easier when your data is split across multiple partitions. So it's like indexing at a different level. If you want to find a record in that data, you can use the partition where the data is residing to find that record. And that's the concept behind Azure Table Storage partition keys. So in Azure Table Storage, if you have a table, say Customers, when you start adding entities to that table, those entities are spread across multiple logical partitions. Now, on the underlying infrastructure, these partitions could either be stored on the same Azure physical server or on different servers.

So, internally, an algorithm is used to determine the best way to store these logical partitions. But when it comes to defining the partitions themselves, they are defined as logical partitions. Now, the way these partitions are determined is based on your partition key. So let's say you map your partition key for your entity. Assume you decide that the city in which the customer is located is the partition key. Say you have an entity with a value of New York: all the New York-based records will come in one partition. If you have Chicago, all of its entities will appear in another partition. So how does this make searching easier? Well, if you are looking for a particular record wherein the customer city is equal to New York, the engine can pinpoint it, drill down, and just go to the partition that consists of the records, or the entities, where the partition key is equal to New York. Instead of scanning all of the data and trying to find your entity, it has now drilled down onto just that partition, and it can go on to each entity there and satisfy your query.

So it's very good when it comes to searching for data, and that's the concept behind the partition key. Next, you have the row key. Since you can have multiple entities with the same partition key, to uniquely identify an entity within the partition itself, you have the row key. So maybe the customer name can be the row key. And the combination of the partition key and the row key basically forms the primary key for your table. So, similar to having a primary key for a SQL-based table, the combination of the partition key and the row key is the primary key. Now, remember that how you query your data is very important. If you're going to do frequent queries based on the customer city, then that's good; that is a good candidate for your partition key. But let's say you're querying on other properties of the table that are not part of the partition key; then it's an issue, because the engine has to scan the entire table just to satisfy your query, right? So that's the concept behind the partition and the row key. Please keep in mind that these system properties are important: in .NET, when creating a table, you can actually map these properties onto the properties of your class.

So if you have a customer class, you can map them accordingly, so that when you insert the object, or the entity, into the table, they are mapped to the partition and the row key. So, just a quick recap on Azure Table Storage entities. We have the partition key: these are string values, and they define the partition the entity is stored in. Then we have the row key: these are also string values, and they are used to uniquely identify entities within each partition. We then have the timestamp: this is the last time the entity was modified. So the table's primary key is a combination of the partition key and the row key. Now, some examples of partition and row key candidates are given in the Microsoft documentation. For example, let's say you're defining a customer information table: you can use the city as the partition key and the customer ID as the row key. So all the partitions will be sorted based on the city, and within the partition itself, you can have each row correspond to a customer ID. Another example is a product information table: here, you can have the product category as the partition key and the product number as the row key. Now, you always have to ensure that you choose the candidates for the partition key and the row key based on your table queries.

Efficient queries will make use of the partition key and the row key when searching for entities. Consider using a property with a slower changing rate as the partition key; if you choose a property whose value changes very quickly, this will cause a lot of performance issues on the underlying table. Now, a caveat if you choose a contiguous set of values for the partition key. So let's say you're choosing the customer ID as the partition key: your partition key values will be 1, 2, 3, and so on. Now, because your table storage may group these partitions on the same server, and because your ID is only incremental in nature, insertions to this table will always happen at the end. These insertions can lead to something known as a "hot spot," wherein all of the writes are just happening on a certain partition, or a certain part of the same server where the partitions are being stored. So to avoid this, what you can do is create a hash value out of that customer ID in your program and then insert that hash value as the partition key, as shown in the sketch below. Right? So this marks the end of this chapter; just a quick note on your table storage entities.
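As a rough illustration of that last point, here is a hypothetical helper that hashes a sequential customer ID so that writes are spread across partitions rather than landing on one hot partition:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class PartitionKeyHelper
{
    // Hypothetical helper: derive an evenly distributed partition key
    // from a sequential customer ID by hashing it
    public static string ToPartitionKey(int customerId)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(customerId.ToString()));
            // Use the first two bytes of the hash as a short hex prefix
            return BitConverter.ToString(hash, 0, 2).Replace("-", "");
        }
    }
}
```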

22. AZ-203 – Lab – Azure Table Storage – .Net

So here we are in the code that will be used to work with Azure Table Storage. So, again, let's go on to our NuGet package manager. What I've gone ahead and done is install Microsoft.Azure.Cosmos.Table. Now, the Cosmos DB service itself has something known as a Table API, so you could use the table service in Cosmos DB as well as in Azure Storage accounts. But when it comes to the API calls, the API calls are the same whether your data is stored in an Azure Storage account or in Azure Cosmos DB. That's why you can use the same API even with table storage in Azure Storage accounts. So, going back to our program, what do we have first? First is the connection string. Again, very simple: if you go back onto the Azure Portal and on to all resources, if you go on to your storage account and on to access keys, you can take either the connection string of key one or the connection string of key two and paste it into the program.

Now, going back to the program, here I'm mentioning the table name, since we're going to be interacting with the customer table in our storage account. I am setting the partition key and the row key; this will be used to insert an entity. And then in the main program, I'm first making sure that I parse our connection string from our storage account. I then create a cloud table client. Finally, I create an object of the type CloudTable, and I make sure that I use the table client to get a reference to our table. And then I'm going ahead and calling two methods: one is to insert the entity, and the other is to read the entity. The method to insert the entity is very simple. So let's go ahead, run the program, and see how it works. Now, the first thing I'll do in the program is insert an entity onto our table. The entity I'm going to insert has "AzureDeveloper" as the partition key, "userC" as the row key, and a price of 100. Let me go ahead and run this program, so I can see the entity is added. If I go on to Azure Storage and click on Refresh, I can now see the entity has been added. Now let me go ahead and run the program to read an entity from the table. So when I run the program, I'm getting the output as desired. So this marks the end of this chapter.
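Here is a minimal sketch of that program, assuming the Microsoft.Azure.Cosmos.Table package, a hypothetical connection string, and the entity values used in this run:

```csharp
using System;
using Microsoft.Azure.Cosmos.Table;

// Entity class: PartitionKey, RowKey, and Timestamp come from TableEntity
public class Customer : TableEntity
{
    public Customer() { }
    public Customer(string courseName, string userName)
    {
        PartitionKey = courseName;
        RowKey = userName;
    }
    public int Price { get; set; }
}

class Program
{
    static void Main()
    {
        // Hypothetical connection string and table name
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudTableClient tableClient = account.CreateCloudTableClient();
        CloudTable table = tableClient.GetTableReference("Customers");

        // Insert an entity
        Customer customer = new Customer("AzureDeveloper", "userC") { Price = 100 };
        table.Execute(TableOperation.Insert(customer));

        // Read the entity back via a point query on partition key + row key
        TableResult result = table.Execute(
            TableOperation.Retrieve<Customer>("AzureDeveloper", "userC"));
        Customer fetched = (Customer)result.Result;
        Console.WriteLine($"{fetched.PartitionKey} / {fetched.RowKey} : {fetched.Price}");
    }
}
```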

23. AZ-203 – Lab – Azure Table Storage – .Net – Part 2

Hi, and welcome back. Now, in continuation of seeing how to work with the Azure Storage account's Table service in .NET: earlier, we saw how to insert an entity in the table and how to query based on the partition key and the row key. Now, I want to show you two more methods that you can use when working with the Table service. First, let's look at querying based on the partition key. Let me go into the method to explain exactly what you can do. So over here I'm creating a query of the type Customer; that's the type of class I have. I'm then specifying a where condition, saying to generate a filter condition based on the partition key.

Then I'm using the Equal operator from the query comparisons. With this, I can now get all the entities where the partition key is equal to "AzureDeveloper." Please keep in mind that you can combine multiple filter conditions in a table query. When I then execute my query, I'm getting all the returned entities and writing them to the console. So this is how you can create a query based on the partition key. Now, apart from that, another operation is to insert a batch of records. We've seen how to insert one entity into a table; now let's see how to insert a batch of entities into the table. Now, when you're inserting a batch of entities, what's important to understand is that you can only insert a batch where they all have the same partition key.

So, over here, you can see that I am creating three objects of my customer class, but all of them have the same partition key; please remember this. I am then creating something known as a TableBatchOperation, and I am inserting all of my objects into the TableBatchOperation object. Then I call ExecuteBatch, and this will go ahead and add all the entities to my table. So let's go ahead, run the program, and see how it works. So, here we are in the program. What I'll do first is execute the method to insert a batch of entities. So let me go ahead and run the program. If I now go to my Storage Explorer, I can see that the records have been inserted into my customer table; you can see that I have the three entities in place. Now, let me go ahead and run the program to query the table based on the partition key. And now you can see all three entities wherein the partition key is equal to "AzureDeveloper" being returned. Right? So this marks the end of this lab.
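A sketch of both methods, reusing the Customer entity class from the previous chapter's sketch, with a hypothetical connection string and row key values:

```csharp
using System;
using Microsoft.Azure.Cosmos.Table;

class Program
{
    static void Main()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudTable table = account.CreateCloudTableClient().GetTableReference("Customers");

        // Batch insert: every entity in the batch must share the same partition key
        TableBatchOperation batch = new TableBatchOperation();
        batch.Insert(new Customer("AzureDeveloper", "userD") { Price = 20 });
        batch.Insert(new Customer("AzureDeveloper", "userE") { Price = 30 });
        batch.Insert(new Customer("AzureDeveloper", "userF") { Price = 40 });
        table.ExecuteBatch(batch);

        // Query all entities whose partition key equals "AzureDeveloper"
        TableQuery<Customer> query = new TableQuery<Customer>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey",
                QueryComparisons.Equal, "AzureDeveloper"));

        foreach (Customer entity in table.ExecuteQuery(query))
        {
            Console.WriteLine($"{entity.RowKey} : {entity.Price}");
        }
    }
}
```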

24. AZ-203 – Exam Extra – Dynamic Table Entity

Hi, and welcome back. Now in this chapter, I'm just going to talk about something known as the "dynamic table entity" that's available with tables. So, once again, this is something important to understand from an exam standpoint.

So over here, I have a table that's defined in one of my Azure storage accounts. I have a product ID that's basically mapping to the partition key, and the customer ID is mapping to the row key. And then I have a quantity, and then I have a price. Now, we've already seen how we can get the entities from the table. We can also get entities from the table using something known as a DynamicTableEntity. So if you want to formulate the entity on the fly, you can go ahead and use the DynamicTableEntity.

So over here, what I've gone ahead and done is create a query using the DynamicTableEntity. Here I'm getting all the entries from the table based on the partition key, and then I'm going ahead and writing out the partition key and the row key. But over here, I'm essentially deciding what kind of entity I want to return from the table.

So instead of going ahead and getting all the attributes of the entities from the underlying table, what I'm saying is that, apart from the partition key and the row key, I only want to fetch the quantity attribute of the entities from the underlying table. That's where the select clause comes in. I have my normal where condition with the generate filter condition over here, so I'm selecting those entities wherein the quantity is greater than ten. I'm also using Take, which means that I only want to get the first two entities that match this condition. And in the entity resolver, I am saying that I only want to get the row key and the quantity property, and then I'm going ahead and writing them to the console. So let me go ahead and run this program. As you can see, running the program yields only two results: I'm getting the key and the value, that is, the row key and the quantity, right? So this marks the end of this chapter, where I just wanted to talk about the DynamicTableEntity.
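Here is a minimal sketch of that query, assuming the Microsoft.Azure.Cosmos.Table package and a hypothetical table name:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.Cosmos.Table;

class Program
{
    static void Main()
    {
        // Hypothetical connection string and table name
        CloudStorageAccount account = CloudStorageAccount.Parse("<connection string>");
        CloudTable table = account.CreateCloudTableClient().GetTableReference("Orders");

        // Project only the Quantity column for entities with Quantity > 10,
        // and take just the first two matches
        TableQuery query = new TableQuery()
            .Where(TableQuery.GenerateFilterConditionForInt(
                "Quantity", QueryComparisons.GreaterThan, 10))
            .Select(new List<string> { "Quantity" })
            .Take(2);

        // The entity resolver shapes each result on the fly: here, row key + quantity
        EntityResolver<KeyValuePair<string, int>> resolver =
            (pk, rk, ts, props, etag) =>
                new KeyValuePair<string, int>(rk, props["Quantity"].Int32Value ?? 0);

        foreach (var item in table.ExecuteQuery(query, resolver))
        {
            Console.WriteLine($"{item.Key} : {item.Value}");
        }
    }
}
```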

25. AZ-203/204 – Lab – Queue Storage


26. AZ-203/204 – Lab – Azure Functions – Queue binding

Hi, and welcome back. Now in this chapter, we are going to see how we can invoke an Azure function via an Azure Queue Storage trigger. So in Azure, you can go ahead and create an Azure function that's based on an Azure queue trigger.

That's what we're going to do in our particular lab. So we're going to go ahead and create a new Azure function. When you create an Azure function of the type Azure Queue Storage trigger, you must also create a connection to an Azure Storage account. So remember, if you have a general-purpose V1 storage account or a general-purpose V2 storage account, you can make use of the Azure Queue storage service. So first, you have to go ahead and ensure that the Azure function has a connection to the storage account, so that it can go ahead and access that particular queue. Now, when we go ahead and add a message to the queue, the Azure function will be triggered, and the Azure function will have access to the details of the queue message. Now, depending on what you define as the underlying input object, you can get access to the queue message in different ways. So over here, I'm giving an example of a JSON object: you can have the data passed as a JSON object to your Azure function. You could also have your data passed as a simple string to your Azure function. So let's go ahead and see how we can work with Azure queue triggers for Azure functions. So here I am in Azure.

Now I have a storage account in place; this is a general-purpose V2 storage account. Let me go on to the queues. I don't have any queues over here, so let me just go ahead and add a new queue and click on OK. Let me go on to the queue; there are currently no messages in this queue. Now let me go on to my function app and create a new function. This time I'm going to go ahead and choose an Azure Queue Storage trigger. Let me give it a name. Now we have to go ahead and give the name of the queue; the name of the queue is "appqueue." And here we have to go ahead and create a new connection to our storage account. So let me hit "new." I'll choose my storage account; that's the demo store.

So once we have that storage account connection in place, we can go ahead and now hit "Create." Now, once we have the function in place, we have some boilerplate code. So again, we have our Run method. This time it's not returning anything; instead, our queue message is being passed in as a string. So remember, the data can be given in different forms to your Run method. Over here, we are just logging the information on what is in the queue message itself. So let me go ahead and open the logs over here. Let me go ahead on to my app queue and add a message. So I'm adding a JSON message over here: I have my keys, my ID, my name, and my rating. Let me go ahead and click on OK. So we have the message. If I go ahead and click on Refresh, you can see the message is no longer there, and that's because the message has now been picked up by our Azure function. If I go on to the function over here, you can see the details of the message: the ID, the name, and the rating.

So now a queue message has been sent from our queue to your function. Now, let me go ahead and just change the code for the function. So over here, what I'm doing is defining my normal Course class. This time, I am ensuring that my data is made available as a JSON-based object, and I'm then going ahead and converting that object onto my Course class type. And then I'm going ahead and logging that information. So again, let me go ahead and open the logs; we can go ahead and click on "Clear." Let me go ahead and click Save as well, to ensure that our function is saved; it will also go ahead and compile the function. Now over here, you can see we are getting an error message: there is a missing trigger argument named "myQueueItem." So what does this mean? For that, let me go ahead and move on to the other file. So, remember, over here I have the name of my object, which is being passed as "obj" of the type JObject. Now, if I go on to view files, let me go on to the function.json file. So, remember, in the function.json file, we have our different binding information. Now in our binding information, we have an input binding. Remember, this is an input binding because the data is being sent from our queue on to the Azure function.

So we have our direction, we have the name of the queue, the name of the connection, and the type of trigger, which is the queue trigger. And we have the name of the parameter of the object that will be sent from our queue to our Azure function over here. Now, since in our Azure function we have named the parameter that is being sent onto the function as "obj," we can go ahead and change the name over here and click on Save. So now you understand the correlation between the function.json file, which has your bindings, and your Azure function. So let me go ahead and hit "Clear." Now that we have gone ahead and changed the function.json file, let me go on to my app queue and add a message. So I'll go ahead and add the same message and click on OK. If I click on "Refresh," the message is gone. If I go on to my function over here, you can see the information message. But remember, this time I'm getting it as an object, which is much better: I'm able to convert it onto my custom class, and now I'm able to access the properties of the class. Right, so this marks the end of this chapter, wherein we have looked at the queue trigger.
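Here is roughly what the finished function looks like, as a C# script (run.csx) sketch; the connection setting name in function.json is a hypothetical placeholder for whatever the portal generated for your storage account:

```csharp
#r "Newtonsoft.Json"

using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public class Course
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Rating { get; set; }
}

// "obj" must match the binding name in function.json
public static void Run(JObject obj, ILogger log)
{
    // Convert the incoming JSON object onto our custom class
    Course course = obj.ToObject<Course>();
    log.LogInformation($"Id: {course.Id}, Name: {course.Name}, Rating: {course.Rating}");
}
```

And the matching function.json binding:

```json
{
  "bindings": [
    {
      "name": "obj",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "appqueue",
      "connection": "demostore_STORAGE"
    }
  ]
}
```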
