AZ-500 Microsoft Azure Security Engineer Associate Topic: Storage Security
December 14, 2022

1. Lecture: Storage Account Overview

Let's get started with Blob storage by looking at it from the storage account perspective. First of all, we have our storage account, and you'll need one that supports Blob storage. Inside the storage account we have containers, which allow us to organise our various blobs, images, and data files, and then we have the blobs themselves, which go inside the containers. Now, you may be thinking, "Okay, what is a blob? How is this different?" A blob is a binary large object. Think of it as your ones and zeros. These could be images, video files, random text files, or whatever they might be. But don't confuse them with files in, say, an SMB file share, which has permissions and other things directly associated with it from a systems perspective. Blob storage is for storing large amounts of images, videos, and so on.

Now, if we look at the storage account types, there are three major types: GPv1 (general-purpose v1), Blob accounts, and GPv2 (general-purpose v2). GPv2 is the main choice you'll make when you create storage accounts in Azure, because it supports both page blobs and block blobs: we can store all of our images and videos, and we can store VM disks in there as well. We can mix everything into a GPv2 account. GPv1 allowed us to store blobs and disks too, but a Blob account was specific to blob storage only, and ultimately that was confusing for everybody, so it's going away.

With GPv2 accounts being the future direction, you may ask, "Okay, well, what is the difference between a block blob and a page blob?" A block blob is ideal for storing text or binary files; those are, once again, the image and video files. A single block blob can contain up to 50,000 blocks of up to 100 MB each, for a total size of roughly 4.75 TB. That's a single blob that ultimately could be about 4.75 TB. Append blobs are also there, and they're optimised for append operations, for example logging. Note the difference between block blobs and append blobs, because that distinction is just as important from an exam perspective. Page blobs, on the flip side, are optimised for random read-write operations.

Think about your VM disks here. These are used by Azure VMs, and they can be up to eight terabytes in size. This brings us to the next thing we need to cover, which is storage tiers. We've decided now that we want to use Blob storage and store a bunch of images, but what's the right tier to go for? Well, the first one we look at is hot storage. This has our highest storage cost but our lowest access cost. Consider this for frequently accessed images, such as application or website content that is accessed on a regular basis. Then we have cool storage. This has a lower storage cost but higher access costs, and it is possible to move from cool to hot and from hot to cool, as you'll see in some of the demos. However, you will incur a cost penalty if you do so.

Because if you move from cool to hot storage, you'll have to pay those higher cool-tier access costs at the moment you move the data to hot storage. So even though you can move blobs around, it's best to aim to use the right tier of storage for your purpose, and if you look at Cool, it's really intended for data that will remain in that tier for 30 days or more to be cost-effective. Finally, we have archive storage as well. This has our lowest storage cost but our highest retrieval cost. When a blob is in archive storage, it is offline and cannot be read, because it's pushed off to tape or some other archive-type storage that Microsoft uses on the back end. So if you decide to access something in archive storage, it will take some time for it to be rehydrated before you can read it. Ultimately, just pick the right storage tier for your use case. And you may decide, okay, sometimes things go into hot storage, and then eventually I want them to move to cool storage or eventually to archive.
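To make that concrete, here is a minimal, hedged PowerShell sketch of moving an individual blob between tiers. It assumes the Az.Storage module and an existing account and container; the account name, key placeholder, and blob name are only placeholders for this course, and in newer module versions the tier is set through the blob's BlobClient rather than ICloudBlob.

# Data-plane context for the account (illustrative names; supply your own key).
$ctx = New-AzStorageContext -StorageAccountName "slstoragedemo" `
                            -StorageAccountKey "<storage-account-key>"

# Fetch a blob and move it down to the Cool tier; "Archive" works the same way,
# but remember an archived blob must be rehydrated before it can be read again.
$blob = Get-AzStorageBlob -Container "images" -Blob "image.jpg" -Context $ctx
$blob.ICloudBlob.SetStandardBlobTier("Cool")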

Rather than moving blobs manually, it is also possible to implement auto-tiering, which will gradually move unaccessed blobs from Hot to Cool to Archive, and there are various automation techniques in Azure that allow this to take place (a sketch follows at the end of this lecture). Finally, let's just take a look at the decision criteria as we try to choose between blobs, files, and disks, because this often gets very confusing. If we start with blobs, these are for storing large amounts of objects such as images, videos, and text files, possibly accessed from anywhere.

Then we've got files. Think of these for "accessing from multiple machines" scenarios, such as a jumpbox, or maybe a remote desktop deployment where somebody wants to store a roaming user profile; that's a good scenario. And then finally we have disks. These are really attached to our virtual machines and their operating systems. Think of these for lift-and-shift scenarios when we move in from on premises, where we might be doing disk expansion for application installations and lots and lots of reads and writes. That's when we would ultimately use disks. Now, that concludes the overview, and I encourage you to check out the subsequent demo, especially as we build blob storage accounts and move images around. You'll get a lot of mileage out of everything that we've covered so far.
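Before moving on to the demos: one of the "various automation techniques" mentioned above for auto-tiering is a lifecycle management policy on the storage account. The sketch below is hedged; it assumes the Az.Storage management-policy cmdlets, a GPv2 account, and illustrative names and day thresholds.

# Hypothetical rule: blobs under the "images/" prefix move to Cool after 30 days
# without modification, to Archive after 90 days, and are deleted after 365 days.
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool -DaysAfterModificationGreaterThan 30
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 90
$action = Add-AzStorageAccountManagementPolicyAction -InputObject $action -BaseBlobAction Delete -DaysAfterModificationGreaterThan 365

# Scope the rule to a prefix and apply it to the account (lifecycle management needs GPv2).
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "images/"
$rule   = New-AzStorageAccountManagementPolicyRule -Name "tier-down-images" -Action $action -Filter $filter
Set-AzStorageAccountManagementPolicy -ResourceGroupName "SLStorage" -StorageAccountName "slstoragedemo" -Rule $rule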

2. Demo: Create a Storage Account via Portal

Here we are in Azure, and the first thing we're going to do is, on the left, go ahead and create a new resource group. So select Resource groups, select Add at the top, and type in your resource group name. In my case, I'm going to use SLStorage. Choose your subscription, and choose the resource group location; again, this is just for the metadata of the resource group itself.

You can still put your storage account in any region you like when you go in and create it. The reason we create our own resource group here is so we can put all our storage accounts in it for demonstration purposes; just delete the resource group when you're done, and you've wiped everything out right away. Within a few seconds, that should be created if we hit refresh here, and there we go: SLStorage. Now let's go into that resource group and create a new storage account. Go ahead and click Add, type in "storage account", and you'll see "Storage account - blob, file, table, queue" come up, so go ahead and select it. Now we'll get the blade for creating our storage account. First of all, let's give it a name. In my case, I'm going to use slstoragedemo, and as you can see, this is a publicly addressable namespace, so the name has to be unique.

Then we choose the deployment model. In this course, we'll be using the Resource Manager model you've probably seen so far, and then we get to the storage account kind. If we hit the drop-down here, we can see we have Storage, StorageV2, and Blob Storage, and ultimately we'll be using a StorageV2 account for everything. Notice that some of the options below change with the kind: if we go back and choose Storage (general-purpose v1), we don't get the option for the access tier; switch back to StorageV2 again, and you can see there is a default access tier. This is because we can set a tier of storage at the storage account level that will apply to all the objects underneath it. We can still change individual blobs to have their own access tier, but this is quite an interesting feature because we can say, "Okay, everything under this account should default to a given tier"; we might want cool, we might want hot.

In my case, I'm going to choose Hot, and that will apply to everything underneath it. Then we can choose our performance. We've got Standard and Premium, much like when we talked about virtual machine disks; in this case, we have the option of using SSD storage or a standard spinning disk, and we'll stick with the latter for the sake of this demonstration. Next, we need to look at replication options, and we have the same four that we covered in the virtual machine section: LRS, which is locally redundant storage, giving three copies locally in the same data centre; ZRS, which is zone-redundant storage, currently in preview, still three copies but spread across a zone-based region, so a couple of data centres in the same region; GRS, which is geo-redundant storage, giving six copies across multiple regions; and finally RA-GRS, which is read-access geo-redundant storage, which allows us to replicate to another region and actually read the data there, so it's basically warm and ready for us to access. In our case here, we'll simply select LRS.

Next, we choose our access tier; as we already discussed, we're going to use the Hot tier. Then we have the option for secure transfer: if we open up the information on this one, it allows us to enforce encrypted connections when we access data in the storage account. For the purposes of this demo, we'll just choose Disabled. Choose your subscription and select the resource group we created previously. Finally, scroll down and select your location; in my case, I'll stick with North Central US, and select Disabled for virtual networks. That option, which will be covered in the networking section, is around virtual network service endpoints, which allow us to take a storage account and, instead of exposing it over the Internet, expose it directly through a virtual network. This is a way to satisfy security teams by making Blob storage available only within the Azure private network.

With all those options selected, go ahead and click Create, and then we'll fast-forward; it should take about a minute for your account to be created, and we'll transition to it once that's completed. As you can see, after a period of time, the storage account has now been created, and we can go ahead and select it. Once it opens up, you can see the performance that we chose, which is Standard, replication set to LRS, and our account kind, which is set to StorageV2. Then, at the bottom, you can see our various services: blobs, files, tables, and queues. If we wanted to upload blobs, we could go into the blob section, create our container, and upload from here. But you'll see in a subsequent demo, as we use Storage Explorer, how you can upload lots of blob images and manage your storage account from there. And with that, this concludes the demonstration.
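For reference, the same setup can also be scripted. This is only a sketch using the Az PowerShell module rather than the portal; the resource group, account name, and region mirror the demo and are illustrative, and the account name must still be globally unique.

# Resource group to hold the demo storage accounts (delete it later to clean everything up).
New-AzResourceGroup -Name "SLStorage" -Location "northcentralus"

# GPv2 account, standard (HDD) performance, locally redundant, Hot default access tier.
New-AzStorageAccount -ResourceGroupName "SLStorage" `
                     -Name "slstoragedemo" `
                     -Location "northcentralus" `
                     -SkuName Standard_LRS `
                     -Kind StorageV2 `
                     -AccessTier Hot `
                     -EnableHttpsTrafficOnly $false    # secure transfer was left Disabled in the demo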

3. Demo: Use Storage Explorer with Azure Storage

Storage Explorer can be found at Azure.Microsoft.com/en-US, or simply Google "Azure Storage Explorer", and you'll be presented with this page. There's a Windows client, a macOS client, and a Linux client. Go ahead and download Storage Explorer for free by selecting the link at the bottom after you select your OS, and once that's installed, we'll continue with this demonstration. Okay, so once Storage Explorer is installed, you'll be prompted to run it; do so, and you'll be presented with this screen. You have a number of ways that you can log in and browse your storage.

You can simply add an Azure account directly, you can use a connection string or a shared access signature (which we'll learn about later in the module), and you can also use your storage account name and key. In our case, we're just going to sign in by selecting Sign in at the bottom here, and this will prompt us to sign in to our Azure account. Okay, once that's added, simply select Apply, and then you can see all of the various storage accounts. In our case, we first created a storage demo account via the portal, and then we created one via PowerShell. We can expand these out and see what's inside; again, we really don't have anything in them at the moment, but you get the idea. Now what we can do is add some files directly. We can take some images, for example: go ahead and select the Blob Containers section of your storage account, right-click it, select Create Blob Container, and name it images. On the right-hand side, we have the Upload option, so go ahead and select that and click Upload Files. Now I'm simply going to select a folder full of files; I've got an image sample set on my desktop, so I'm going to select all of these, select Open, and then go ahead and select Upload.

What you'll see is that the upload begins here in a moment. At the bottom, if we expand this out, we can see the group of images it's uploading and how many are complete, and you can see them appearing in the top pane as each one is uploaded. So you can see the flexibility: it's similar to managing a file share on your own machine, where you can create and delete blobs, create new folders, get a URL for an item, or rename it. You've got this whole series of options available up there. The other thing as well: if we go over to the Azure portal and look at the same storage account, slstoragedemo, we also have an option there to "Open in Explorer". If we go ahead and select this and click Yes, you'll see it switch apps and open directly in Storage Explorer as well. So this is another quick shortcut: if you're in Azure and you want to open up Storage Explorer, you can simply do that, and it will take you right across. And with that, this concludes the demonstration.
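The Storage Explorer steps above can also be scripted. Here is a hedged sketch with the Az.Storage module, assuming a local folder of sample images; the paths and names are illustrative.

# Authenticate at the data plane with the account key (Get-AzStorageAccountKey retrieves it).
$key = (Get-AzStorageAccountKey -ResourceGroupName "SLStorage" -Name "slstoragedemo")[0].Value
$ctx = New-AzStorageContext -StorageAccountName "slstoragedemo" -StorageAccountKey $key

# Create the "images" container (private by default) and upload every file in a local folder.
New-AzStorageContainer -Name "images" -Context $ctx
Get-ChildItem -Path "C:\ImageSamples" -File |
    Set-AzStorageBlobContent -Container "images" -Context $ctx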

4. Lecture: Manage Permissions

Alright, so we've discussed a lot about the different storage types that are out there, and we've created blobs and containers in the various demonstrations. But how do we control access to them? The first thing I want to bring to your attention is the access level, specifically for the container: on a container inside your blob storage, you can set the public access level.

Now the default is Private, so there is no anonymous access; nobody can access your images and so on. The next level is Blob, which I used for some of the demonstrations, where we have anonymous read access for blobs only. And the third is Container, where we have public access to the container as a whole, so that's read access for the container and the blobs inside it. A quick sketch of setting these levels from PowerShell follows below.
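As a rough sketch of those three levels with the Az.Storage module (again using the illustrative demo names and a key placeholder), the container's public access setting maps to the -Permission values Off, Blob, and Container:

$ctx = New-AzStorageContext -StorageAccountName "slstoragedemo" -StorageAccountKey "<storage-account-key>"

# Off = private (no anonymous access), Blob = anonymous read on blobs only,
# Container = anonymous read on the container listing and the blobs.
# You would pick one of these; they are shown together only for comparison.
Set-AzStorageContainerAcl -Name "images" -Permission Off       -Context $ctx
Set-AzStorageContainerAcl -Name "images" -Permission Blob      -Context $ctx
Set-AzStorageContainerAcl -Name "images" -Permission Container -Context $ctx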

If we want to get more granular than container-level access, we can use SAS, or shared access signatures. A SAS is a query string that we add on to the URL of a storage resource, and this string informs Azure what access should be granted. We'll get a little bit deeper into shared access signatures and how you construct them in a little while, but first, note that there are two types. There are account SAS tokens, which are granted at the account level to grant permissions to services within the account. And then there are service SAS tokens, which grant access to a specific service within a storage account. It should also be noted that these are all signed; they utilise hash-based message authentication codes (HMAC) to keep everything nice and secure. So what does a SAS token really look like? When you copy it from Azure and say, "Hey, I want my SAS token," it's going to give you a very long URI, a uniform resource identifier, with the token appended as a query string.

And there are a couple of things to keep in mind here. You've got your storage resource URI; in my case that's sldemo.blob.core.windows.net, which is my storage account, then the images container and image.jpg, which is the blob I'm referring to. Then I have this giant string appended to it, which is the SAS token as a whole, and it helps if we break that piece down further. To begin, we have the blob itself: image.jpg, which is the first part of the entire URI. Then we have the signed version (sv), the storage service version; at the moment the documentation states this must be set to version 2015-04-05 or later for it to work. Then there's ss=bfqt. This is actually very simple: b stands for blob, f stands for file, q stands for queue, and t stands for table. These are the signed services that are accessible with the account SAS. Then there's srt, the signed resource types, which in my case is sco. The s in sco stands for service, so that's access to service-level APIs; the c is for container, access to container-level APIs; and finally the o is for the object itself, which could be the blob, or in the case of a table the table entities, or queue messages, and so on. Then we go down a little further, and we have the signed permissions (sp), which is a bit of a mouthful.

Here we go: rwdlacup. Let's break this down. The r is for read, effectively giving us read access to the object. The w is for write access, and the d is for delete. The l is for list, a is for add, c is for create, u is for update, and p is for process; update and process are specific to queue messages. Going a little further, we have the signed start and expiry times: se is the expiration date and time for the SAS, and st is the start time. Then we have the signed protocol, spr; in this case we've got HTTPS as the signed protocol. And finally, as you can see, we have the signature, sig=, followed by the hash, and that's really important because it is used to authenticate the request made with the shared access signature. So hopefully you can see that when you look at the token as a whole it seems like "Oh my God, what am I getting into?", but once you break it down, these are really just the permissions.

The token identifies what you can do with the object or the container, and the signature is what really authenticates the whole thing at the end. Now you might be thinking, "Well, that's great, but it could get a little cumbersome managing all of these shared access signatures." So Azure provides something called stored access policies, which are a method for controlling SAS: they group shared access signatures and provide additional restrictions. The good thing is that they can be used to change the start and expiry times or the permissions, or even to revoke a SAS after it has been issued. However, they are only supported for a service SAS, so for blob containers, file shares, queues, tables, et cetera, but not for an account SAS on the storage account as a whole. You'll see how we configure that in the subsequent demonstration. Hopefully this gives you an idea of some of the ways you can manage access; check out the demo, and that should solidify the knowledge you need for the exam.
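To tie that field breakdown to something runnable, here is a hedged sketch that generates an account SAS with the Az.Storage module; the resulting query string contains the same ss, srt, sp, st, se, spr, and sig fields discussed above. The account name, times, and key placeholder are illustrative.

$ctx = New-AzStorageContext -StorageAccountName "slsasdemo" -StorageAccountKey "<storage-account-key>"

# ss=b (Blob service only), srt=sco (service, container, object), sp=rwdlacup (full permissions),
# st/se from the start and expiry times, spr=https from -Protocol, sig= appended automatically.
$sas = New-AzStorageAccountSASToken -Service Blob `
                                    -ResourceType Service,Container,Object `
                                    -Permission "rwdlacup" `
                                    -Protocol HttpsOnly `
                                    -StartTime (Get-Date).AddHours(-1) `
                                    -ExpiryTime (Get-Date).AddDays(7) `
                                    -Context $ctx

# Append the token to a blob URL to test it, for example:
"https://slsasdemo.blob.core.windows.net/images/image.jpg$sas"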

5. Demo: Create and Manage SAS

We'll use the storage account slsasdemo in the portal, so select that one, and you can see Access keys and Shared access signature on the left-hand side. I'm going to show you access keys first of all. This is very important because you have two access keys associated with your storage account, and at any point you can regenerate them. The reason they give you two is that if you need to regenerate a key, you can move everything over to key 2, make sure everything's working, and then regenerate key 1, because when you regenerate a key, all of the shared access signatures signed with it will no longer function. Now if we go over to Shared access signature, we can create one for, let's say, our blob store. We can say, "Okay, we want to allow the Blob service, but we don't want file, queue, and table," and we're going to give full permissions to the resource. If we scroll down, we can select the start and expiration dates.

As you may have noticed in the previous lecture's breakdown of shared access signatures, this is where the start and end times come from, and sometimes it helps to set your start time an hour in the past, so you know the SAS is immediately accessible (or even further back, depending on your time zone, if you're worried about that). Next you'll find a section on allowed IP addresses, so if we want to restrict access to specific IP addresses, we can do that here; it could be a single IP address or a range. Then we have allowed protocols.

HTTPS is the most common. Then you can see at the bottom that we have the signing key: we can choose to sign this shared access signature with key 1 or key 2, and as long as we don't regenerate that key, the shared access signature will continue to work. Let's use key 1 and generate the SAS. If we scroll further down, you can see our SAS token and the Blob service SAS URL. We can grab this token, take the URL of one of the images we uploaded earlier, paste our token on the end, and the image comes up using the SAS token that we've created. But this can obviously get a little cumbersome, because we now have to manage this shared access signature, and we might have created 10, 20, 30, 40, 50 of these. How do we deal with that?

The stored access policies that I mentioned previously are a great way to do this. If we go to Storage Explorer, we can right-click our container and select Manage Access Policies, from which we can create a new access policy. We can give it a name; let's call it "images" as an example, starting today and ending on December 24, 2018. I select the permissions and then select Save. Now, by itself, the policy doesn't actually do anything; when I create my shared access signature from Storage Explorer, I must refer to that specific policy. So now I can return to the container, right-click it, and select Get Shared Access Signature. At this point, I can drop down and select my access policy, and it will fill in the start and expiration times as well as the permissions that I selected in the policy, and I can go ahead and create that. Now I have a shared access signature that ultimately refers to the policy, and the policy name is visible in the query string. The good thing now is that if I want to change the expiration date, I can simply change the policy, and because the shared access signature is based on that policy, it will expire accordingly. And with that, hopefully this gives you an idea of how you can control access to your containers, images, and so on using shared access signatures in combination with policies.
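The same policy-plus-SAS flow can be done from PowerShell as well. This is a hedged sketch with the Az.Storage module; the policy name, dates, and permissions mirror the demo and are only examples.

$ctx = New-AzStorageContext -StorageAccountName "slsasdemo" -StorageAccountKey "<storage-account-key>"

# Create a stored access policy called "images" on the images container.
New-AzStorageContainerStoredAccessPolicy -Container "images" -Policy "images" `
    -Permission rl -StartTime (Get-Date) -ExpiryTime (Get-Date "2018-12-24") -Context $ctx

# Issue a SAS that references the policy; start, expiry, and permissions come from the policy.
$sas = New-AzStorageContainerSASToken -Name "images" -Policy "images" -Context $ctx

# Changing (or deleting) the policy later changes every SAS issued against it.
Set-AzStorageContainerStoredAccessPolicy -Container "images" -Policy "images" `
    -ExpiryTime (Get-Date "2019-01-31") -Context $ctx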

6. Lecture: Encryption Keys and Key Vault

The last thing we want to cover briefly in this module is encryption keys and Key Vault. Key Vault is a service in Azure that you can use to store secrets, keys, and so on that you might want to use. You can certainly take the key from the storage account, put it in the key vault, and then, when you want to use it, open up the key vault, retrieve the key, and use it from there. For added assurance, when you use Azure Key Vault, you can also import or generate keys in hardware security modules (HSMs). On the bottom right of the slide here, what you see are the customer-managed keys that can be put in Azure Key Vault.

They can be used to wrap and unwrap the account encryption key for Storage Service Encryption, which ultimately runs under the covers for the storage account, and you'll see a little more of that later. But for the purposes of the exam right now, know that you can use Azure Key Vault to store your keys, and know that it is possible to bring your own customer-managed key as well.
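As a rough sketch of that flow with the Az modules (the names are placeholders, and this assumes a key vault that meets the requirements for this scenario, such as soft delete and purge protection in current Azure), you can store a customer-managed key in Key Vault and point Storage Service Encryption at it:

# Key vault and key that will act as the customer-managed key (CMK).
$vault = New-AzKeyVault -VaultName "sl-demo-kv" -ResourceGroupName "SLStorage" -Location "northcentralus"
$key   = Add-AzKeyVaultKey -VaultName $vault.VaultName -Name "storage-cmk" -Destination Software

# Give the storage account a managed identity and allow it to wrap/unwrap with the key.
$sa = Set-AzStorageAccount -ResourceGroupName "SLStorage" -Name "slstoragedemo" -AssignIdentity
Set-AzKeyVaultAccessPolicy -VaultName $vault.VaultName `
    -ObjectId $sa.Identity.PrincipalId `
    -PermissionsToKeys wrapKey, unwrapKey, get

# Point Storage Service Encryption at the customer-managed key in Key Vault.
Set-AzStorageAccount -ResourceGroupName "SLStorage" -Name "slstoragedemo" `
    -KeyvaultEncryption `
    -KeyName $key.Name `
    -KeyVersion $key.Version `
    -KeyVaultUri $vault.VaultUri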
