CompTIA CASP+ CAS-004 Topic: Cloud and Virtualization (Domain 1)
December 14, 2022

1. Cloud and Virtualization (OBJ 1.6)

In this section of the course, we’re going to discuss how to implement secure cloud and virtualization solutions. Throughout this section, we’re going to be focused on Domain 1, Security Architecture, and specifically Objective 1.6: given a set of requirements, implement secure cloud and virtualization solutions. We’re going to begin this section by discussing the cloud and the different types of cloud deployment models and cloud service models that are out there.

Then we’re going to discuss some deployment considerations, such as cost, scalability, resourcing, data protection, and the difference between multitenant and single-tenant solutions. Then we’re going to move into discussions around the storage models used with these different solutions and how we provision and deprovision these resources. Finally, we’ll discuss virtualization, which is at the heart of these different cloud-based solutions. We’re going to cover the differences between a Type 1 and Type 2 hypervisor, the rising use of containers and application virtualization, and the use of emulation in today’s enterprise networks. So let’s get started in this section with our discussion of the cloud and virtualization.

2. Cloud Deployment Models (OBJ 1.6)

These days, cloud computing seems to be the big trend within our industry. With the promise of increased availability, higher resilience, and unlimited elasticity, the cloud definitely provides our organisations with a lot of advantages over traditional network architectures. But cloud computing can also bring numerous unique security challenges into our environments that we must be aware of. To better understand these, we first have to look at the different types of cloud solutions and architectures that are currently available in these environments. There are four cloud deployment models available (public, private, hybrid, and community), along with two tenancy models you need to understand: multitenancy and single tenancy. The most common type of cloud architecture is the public cloud. Under this model, a service provider makes resources available to end users over the Internet. There are numerous public cloud solutions available today, including those from Google, Microsoft, and Amazon.

For example, Google Drive is a public cloud service that’s offered both for free and on a pay-per-use basis. Now, public clouds can often be an inexpensive way for an organisation to gain a required service both quickly and efficiently. The second option is a private cloud. This service requires that a company create its own cloud environment that only it can utilise as an internal enterprise resource. With a private cloud, the organisation is responsible for the design, implementation, and operation of the cloud resources and the servers that host them. For example, the United States government runs its own private cloud known as GovCloud, which is used by different organisations within the government. But your company and mine can’t get access to it and use it like we would with Google Drive, AWS, or Azure. Generally, a private cloud is going to be chosen when security is more important to your organisation than having a lower cost. A hybrid cloud solution can combine the benefits of both public and private cloud options.

Under this architecture, some resources are going to be developed and operated by the organisation itself, much like a private cloud would be. But the organisation can also utilise some publicly available resources or outsource services to another service provider, like the public cloud does. Because of the mixture of public and private cloud resources, strict rules should be applied for whatever type of data is being hosted in each portion of the hybrid cloud. For example, any confidential information should be stored in the organization’s private cloud portion. The fourth option is a community cloud. Under this model, the resources and costs are shared among several different organisations that all have a common service need. This is analogous to taking several private clouds and connecting them all together to save money. The security challenges here are going to be that each organisation may have their own security controls, and we have to mitigate that as we combine these things together.

Remember, if you connect your network to another network, you’re inheriting their security risks as well. This does not change simply because we have moved to the cloud. Now, in addition to the four cloud deployment models, we also have to look at the other two models that you need to be aware of: the difference between multitenancy and single tenancy. The first one here is the multitenancy model. Under this model, the same resources are used by multiple organizations. This allows for a large gain in efficiency because most organisations don’t use all the capacity of a single server or set of servers. But when two or more organisations share the same physical resource, you’re going to have some security concerns here. For example, if your website is hosted on a shared server with 20 other customers and one of those customers is the victim of a denial of service attack, that entire server will be undergoing the same attack.

And this can also make your services go offline as collateral damage during the denial of service against that other customer. Now, this is just one of the dangers and risks assumed under a multitenancy model. To combat the risks assumed under a multitenancy model, there’s also a single-user model known as single tenancy. Under this model, a single organisation is assigned to a particular resource. Because of this, single-tenancy solutions tend to be less efficient than multitenancy solutions, and they’re also more expensive because they require more hardware to run properly. So which of these models, or which combination of models, is going to be right for your organization? Well, that really depends on your security needs, your cost restrictions, and your risk tolerance. It will be less expensive for you to combine a multitenancy model with a public cloud model. However, this increases the risk to the confidentiality and availability of your information. As with many things we consider as security practitioners, there is no single right answer here. Instead, it’s our job to weigh the benefits and drawbacks of each of these models and then decide which is the right one based upon our organization’s specific needs and concerns.

3. Cloud Service Models (OBJ 1.6)

Once an organisation has decided that using the cloud is the right solution for them, the next decision is whether to host it on site or to contract it as a hosted solution from a third party. Now, when hosting a solution on site, this is often referred to as “on-premises.” While on-premises solutions are great from a security standpoint, they are extremely costly. If you’ve decided to use an on-premises solution, this means you need to procure all the hardware, software, and personnel necessary to run your organization’s cloud solution. In addition to this, you also need to have a facility for the data centre that can hold all the equipment while providing adequate power, space, and cooling. Because of this, many organisations instead use a hosted solution.

With a hosted solution, a third-party service provider will provide all the hardware and facilities needed to maintain your cloud solution. This is often done in a multitenancy environment with multiple organisations having their cloud solutions hosted within a single third-party provider’s facility. For example, we have services from Google, Amazon, and Microsoft that all provide hosted solutions for organisations to utilize. Consider the example of Amazon Web Services, or AWS. This is a multitenant solution that utilises the same physical hardware located at the same physical facility to support a large number of diverse organizations. Of course, there are logical separations in place to keep your data secure and from being exposed to other organisations within this hosting platform, but this is a risk that you’re dealing with now. If you have information that you want to remain strictly confidential, you’re better off using an on-premises solution where you control all the physical and logical access to that server. When using a multitenant solution, residual data from your organisation could be exposed to another tenant because of server elasticity, which expands upward and downward to provision and deprovision excess server capacity, and because you’re both utilising the same shared resources.

If you decide to use a hosting provider, it’s always important to understand their authentication and authorization mechanisms to ensure that you are adequately meeting your requirements. Also, you should inquire about their redundancy and fault-tolerance measures to ensure that they are up to the level that you’re going to need as well. Another concern with hosted providers is their location. Where exactly is your data being stored in the world? This is important because, based on that location, there are going to be different laws that affect your organisation and your data. These are things you have to understand when choosing a hosted service provider. Now that you’ve made the decision on whether to use an on-premises or hosted service provider, the last decision revolves around what type of service you want to purchase. There are three cloud service models to choose from: Software as a Service, Infrastructure as a Service, and Platform as a Service. Under the Software as a Service model, the service provider is going to give your organisation a complete solution.

This includes the hardware, the operating system, and the software applications needed for that service to be delivered. For example, your organisation may be using Office 365 from Microsoft. This is considered a Software as a Service solution, and it allows your end users to access their email, Word documents, and PowerPoint presentations directly within their web browser. Sometimes, though, you’re going to have to build a customised piece of software to meet your service needs. In this case, you might only need the service provider to give you the hardware, the operating system, and the backend server software. This is known as Infrastructure as a Service, or IaaS. You’re going to get the benefit of the dynamic allocation of additional resources, known as elasticity, but you don’t have to deal with the headache of a long-term commitment that involves buying a certain amount of hardware and the underlying operating systems. For example, you might contract for a new cloud-based web server to host your company’s website on. The server might be built and hosted by a cloud service provider and come preinstalled with a Linux operating system and an Apache web server.

Now, your programmers can simply create a custom application for your customers that’s going to be run by this web server without having to worry about the underlying operating system and all the underlying hardware. The final type of service is known as Platform as a Service, or PaaS. Under this model, the third-party vendor will provide your organisation with the hardware and software needed for a specific service to operate. For example, if the company is developing a new piece of software, they might have a development platform provided by a third-party cloud provider. This would be a great example of Platform as a Service. So, in summary, Infrastructure as a Service provides you with everything needed to run a service, including power, space, cooling, network, firewalls, physical servers, and the virtualization layer. With Platform as a Service, the operating system and infrastructure software are going to be added to that list. Infrastructure software includes things like Apache web servers, MySQL databases, programming languages, and much more. Now, with Software as a Service, we’re going to have the hosted application software added on top of the infrastructure and platform portions. As you can see, Software as a Service is much closer to your end user than either Platform as a Service or Infrastructure as a Service. As a security practitioner, it is always important for you to be able to determine which type of service is going to be right for your organisation and its requirements.
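To make that IaaS example a bit more concrete, here is a minimal, hedged sketch using Python and boto3 (the AWS SDK for Python) to request a single Linux virtual machine and have it install Apache on first boot. The AMI ID, key pair name, and security group are placeholders you would replace with values from your own account; this illustrates the idea, not a production-ready deployment.

import boto3

# A hypothetical IaaS provisioning sketch: ask the provider for one small Linux
# server and use user data to install the Apache web server on first boot.
ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
yum install -y httpd
systemctl enable --now httpd
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # placeholder Linux AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="example-keypair",                    # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder security group
    UserData=user_data,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

Everything below the operating system (the physical servers, storage, networking, power, space, cooling, and the virtualization layer) is the provider’s responsibility under IaaS; the Apache configuration and your custom application remain yours to manage and secure.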

4. Deployment Considerations (OBJ 1.6)

As you’re likely aware, there are many different deployment models, hosting models, and service models used when an organisation decides they’re going to implement a cloud-based architecture. Now, as you design your cloud architecture, it’s important to consider the cost, scalability, resources, location, and data protection requirements and limitations that are going to affect the system that you’re attempting to design. Since most organisations you might work for don’t have unlimited budgets, cost is definitely going to play a big factor in your deployment choices and designs. When you consider the four different deployment models (public, private, hybrid, and community), you need to be aware of the tradeoffs. In terms of cost, a public cloud deployment on a multitenancy solution is going to be your least expensive option, but the trade-off is that you’re losing some physical segmentation between your data and other users that may exist in the same shared infrastructure.

If you move to a public cloud deployment with a single-tenancy solution, though, you’re going to have a bit more segmentation, but it’s also going to cost you more money because you’re no longer sharing the physical hardware with another organization. Similarly, you may opt to work as part of a community cloud, and this will be more expensive since there are fewer organisations involved to split the cost of the resources, but it’s still going to be cheaper than moving to something like a full private cloud deployment using a single-tenancy model. As with most things in design, there are going to be tradeoffs that have to be made between the most secure or most segmented solution and the amount of money that you can afford to spend on a given solution. Another factor to consider is scalability. Scalability is defined as the ability of a system to handle a growing amount of work by adding resources to the system. In terms of cloud computing, a public cloud deployment tends to be the most scalable option because a provider like Amazon Web Services has a huge amount of extra capacity to allow you to scale upwards at any time using their public cloud multitenancy infrastructure. Conversely, if you’re developing your own private cloud as part of an on-premises solution, you’re likely going to be constraining your ability to quickly scale up because you’d have to procure, install, and configure new servers to handle all the additional loads as they come up. Resources are another big factor in determining which solution you’re going to implement. Are you working for a multinational Fortune 500 company or a small Silicon Valley startup?

Depending on your answer, you may have more or fewer resources available to support your intended designs. For example, my company, Dion Training, is a small company with fewer than 20 people in it. Therefore, we are resource-limited when it comes to supporting and building out complex cloud-based solutions. We are limited not only in terms of the amount of money we can spend on a given solution, but also in terms of the amount of human capital we can invest in it. Because of those limitations in terms of resources, both in terms of money and human capital, we’ve decided to build out most of our infrastructure using serverless technologies like AWS Lambda functions (a minimal sketch of one appears after this paragraph), as well as utilising public cloud multitenancy solutions for most of our custom solutions and automations. If I were working for a Fortune 500 company with a large IT department and teams of hundreds of programmers, I might choose to use a private cloud single-tenant solution for added security. Another thing to consider is the location of the physical servers. Do you need servers in multiple regions or countries? Do you need your servers to be located within a specific country because of legal requirements and regulations? All these are things you need to think about as you’re designing your solution.
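Just to make the serverless idea concrete, here is a minimal, hedged sketch of an AWS Lambda handler written in Python. The event fields and the function’s purpose are hypothetical; the point is simply that with serverless, you supply only this small piece of code and the provider handles provisioning, scaling, and patching the servers that actually run it.

import json

def lambda_handler(event, context):
    # Hypothetical example: echo back a student's name from the incoming event.
    # The cloud provider invokes this function on demand; no server is provisioned by us.
    name = event.get("student_name", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }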

For example, in one of my previous positions as an IT director, we were working towards consolidating all of our email servers into a single location as part of a private cloud, on-premises, single-tenancy solution. This worked well for most of our end users, but we had one department within the organisation that wouldn’t let us remove their on-premises email server because of a legal requirement that their users’ sensitive and private data could not be moved outside of the country where those users lived and worked. So, due to this privacy regulation, we had to keep a single on-premises email server to support about 100 users inside of that country, while every other user from around the world was able to be migrated to our newer, centralised, private cloud-based solution. This is just one example of how location can significantly influence the types of solutions that must be built and supported within your environments. Finally, you need to consider the type of data protection that you’re going to require for your designs. If you’re going to be hosting public information, like your corporate website, then you may want to consider using a public cloud multitenancy solution. But on the other hand, if you’re going to be handling private and sensitive employee or financial data, you may want to have a private cloud single-tenancy solution based on the sensitivity of that data. Remember, all these considerations surrounding cost, scalability, resources, location, and data protection are going to be important. As you design your enterprise architecture and determine if you’re going to move into the cloud, think about what type of cloud deployment and service model you’re going to use to support your organisational requirements.

5. Provider Limitations (OBJ 1.6)

When you’ve determined that you want to migrate to the cloud, you’re often going to find that there are certain cloud provider limitations that you’re going to have to deal with. One of the biggest limitations is in the area of Internet Protocol, or IP, addresses that you can utilize. Typically, the service provider is going to limit the IP address ranges and availability that you’re going to be able to utilize. So if you’re trying to migrate all of your in-house services, like your web servers, to a public cloud infrastructure such as Microsoft Azure or Amazon’s AWS platform, you’re going to find that you’re going to have to modify your IP assignments to make your deployments work properly. If you’re migrating a public-facing server, like a web server, then you’re going to need to get a static IP address assignment from your cloud service provider and link that to your domain name using the proper DNS records. But if you’re using a highly scalable cloud-based infrastructure and not just a single server, this is going to become much more complicated.

In these cases, you’re going to need to assign the IP address to a content switch or load balancer and then have that device perform a variation of address translation to the different horizontally scaling virtual machines that will perform the actual hosting of the website for your end users. To best support this type of limitation, you need to ensure that you’re properly configuring the DHCP scope that’s being used by your virtual machines so that your load balancers can reach those devices without any issues. If you have multiple different sets of cloud resources, you’re also going to need to be able to route traffic between those different virtual private clouds, or VPCs. To do this, you need to configure VPC peering, which is a networking connection between two different VPCs that enables you to route traffic between them using private IPv4 or IPv6 addresses. Basically, this VPC peering is going to allow traffic to communicate between two VPCs as if they were on the same network, even if they’re in different regions or data centres located around the world. VPC peering is often used to transfer data from region to region for redundancy and fallback purposes as well.
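As a rough illustration of what that configuration looks like in practice, here is a hedged sketch using Python and boto3 to request a peering connection between two VPCs and add a route that sends traffic for the peer’s address range across it. The VPC IDs, route table ID, regions, and CIDR block are all placeholders for this example.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request a peering connection from our VPC to a peer VPC (placeholder IDs);
# peering can cross regions, as described above.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-11111111",
    PeerVpcId="vpc-22222222",
    PeerRegion="eu-west-1",
)
peering_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the peer VPC accepts the request from their side (the peer region/account).
ec2_peer = boto3.client("ec2", region_name="eu-west-1")
ec2_peer.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)

# Add a route so traffic destined for the peer's private range uses the peering link.
ec2.create_route(
    RouteTableId="rtb-33333333",
    DestinationCidrBlock="10.20.0.0/16",
    VpcPeeringConnectionId=peering_id,
)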

When you configure VPC peering, that traffic never actually touches the Internet itself, but instead remains within your cloud service provider’s global network as it connects to the different VPCs from region to region.

Another thing you need to consider is the use of middleware with your cloud service providers. Middleware is the software that connects computers and devices to other applications and networks. Essentially, middleware is the glue between all of our disparate cloud-based solutions and on-premises solutions. If your organisation is working towards a cloud-first design approach, it is imperative that you consider whether your cloud service provider will support your middleware and have the ability for everything to communicate properly. Middleware handles the integration of your different services, and it gives you the functionality for data transformations, monitoring, and administration that your cloud-based solutions and architecture are going to rely upon as you move from the tightly coupled integrations of the past into a more loosely coupled, cloud-native integration. Middleware is the secret weapon that allows everything to happen within your enterprise architecture.
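To show what “loosely coupled” integration can look like, here is a small, hedged sketch using Python and boto3 with a message queue (Amazon SQS is used purely as an example of this kind of middleware). One service drops a message on the queue and another picks it up later, so neither side needs to know about the other’s servers or uptime. The queue URL and message contents are placeholders.

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-orders"  # placeholder

# Producer side: publish an event without knowing who will consume it.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": 42, "status": "created"}),
)

# Consumer side: poll for work whenever this service happens to be running.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in messages.get("Messages", []):
    print("Processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])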

6. Extending Controls (OBJ 1.6)

These days, we have the ability to extend our on-premises controls into the cloud to augment our security services. Anti-malware products were among the first security services to be offered in the cloud. Instead of installing a traditional antivirus or antimalware programme on your desktop computer or server, the client was configured to use the cloud to provide these protections. Now, there are a couple of advantages to doing this. First, it removes the need for installing an antivirus or antimalware solution on your end client. Instead, a small utility is installed. This utility uses very little processing power and is always up to date with its signatures and definitions. This is the true power of a cloud-based solution, because as soon as the provider creates a signature for a piece of malware, all of its registered clients immediately have access to that new signature and that better protection. Unfortunately, there are also some downsides to this approach.

The main one is that it is highly dependent on a good internet connection. Because the scanning engine resides in the cloud and relies on its processing power, your machine may be vulnerable if it’s offline. Now, some of these cloud services can also be configured to only scan portions of a computer as well. For example, it might only look at your core Windows files but not all of your user documents and file storage. One of the most effective antimalware solutions is really found in the form of antispam services. These services have an organization’s email routed through the provider’s servers first in order to detect any malware or spam. Then any suspected emails are placed in a quarantine area that’s accessible through a web browser by the local administrator for review and possible release to the end user. Another great cloud-based security solution that you can utilise is vulnerability scanning. Now, with traditional vulnerability scanners, you perform the scan from within your own network.

But with cloud-based scanners, you have the option of scanning your network from the Internet, simulating an attacker’s perspective. Now, there are many advantages to this approach. The installation costs are lower, the maintenance costs are lower, and because the cloud service provider is responsible for providing all the hardware and software necessary to conduct these scans, your overall cost of ownership is much lower as well. Also, the vulnerability scanners always remain up to date under this paid service subscription model. And in most organizations, vulnerability scanners aren’t going to be used 24 hours a day, seven days a week, so you have a lot of wasted time there. Instead, by using a cloud service provider model, the equipment can be shared across multiple organizations, and that means those operational costs are distributed among all the clients, again saving you money. Unfortunately, there is one major disadvantage. Because the scanning is being conducted from the cloud provider’s systems, the vulnerability data will also be stored on their systems. This is the same data that shows all the vulnerabilities you’re open to and that could be exploited by an attacker.

And therefore, the security of this data needs to be a top concern for your organization. Another security technique that can be provided by cloud services is the use of sandboxing. Now, sandboxing utilises separate virtual networks to allow security professionals to test suspicious or malicious files. For example, let’s say your organisation is conducting an incident response. Your responders could take a piece of malware that they found on a system, put it into a cloud-hosted sandbox environment, and then run it to see what the effects of that malware are in real time without affecting the rest of your corporate network. Content filtering is another example of a possible security service that a cloud provider can offer your organization. When your organisation signs up for this type of service, all of your organization’s traffic will be diverted to the cloud provider through a VPN before going out to the larger Internet. This way, the provider can act as your content filter. The provider can also allow you to create policies, such as limits on the categories of content that should be blocked, or report on which users are accessing which websites and provide all that data to you as well. As you can see, there are a lot of different options that you should consider when dealing with cloud-based security offerings. To simplify this, some companies have gotten into the business of acting as cloud security brokers.

These companies will provide you with a type of software or system that’s placed between your organisation and the cloud environment. This acts as a middleman, and it consolidates all the different cloud security services into one suite of tools that your organisation can configure based on your security policies. To further simplify security, there is also the managed service provider option, which can provide security as a service. Now, security as a service allows organisations that don’t have the necessary security skills in-house to essentially outsource those to this type of provider. This gives them a lower cost than trying to hire security professionals to work directly for the organization, along with immediate security expertise, the ability to outsource common tasks, and a simple interface that the organization’s IT staff can use to provide better security for the organization. With a managed service provider, the organization’s entire cybersecurity and information assurance programme can be outsourced to this third party. This can include your identity and access management, your data loss prevention, your threat protection, and even large portions of your compliance requirements.

7. Provision and Deprovision (OBJ 1.6)

Users, servers, virtual devices, and applications all need to be provisioned in order to provide services to our end users. Now, provisioning simply means setting aside a certain amount of resources in order to provide an established service. Every time we need a new server, though, this could mean that we need to create a purchase request, order the server, and wait for it to arrive before we can even begin the traditional work of networking, installing, configuring, and operating that physical server.

This can make provisioning and deprovisioning time-consuming and challenging. Thankfully, with virtual devices and cloud computing, we can instantly provision new servers on demand by using our own personnel, a cloud service’s personnel, or even automation. In the information technology world, we provision things all the time. For example, we may provision a new user account when our organisation hires a new employee. Another example might be provisioning a new desktop for an employee, in which case we take a computer out of storage and connect it to the network. Based on our organisational needs, we may need to provision additional servers to increase our capacity and our ability to serve our end users in a cloud environment.

This can be done automatically based on certain utilisation thresholds within our network. If we have a more traditional network, it’s going to be extremely important to monitor our utilisation and start the change request and resource allocation process to acquire additional servers before we actually need them. On the other hand, if utilisation decreases, we may need to deprovision some of our servers. In terms of virtual devices and cloud computing, provisioning doesn’t necessarily require physical hardware like it used to, but there’s still a cost to using that virtual resource. Virtual devices still require physical resources on a host server, but that host server may be owned by a cloud service provider instead of our own internal IT department. Because each new virtual device requires some memory and processing capability from the physical machine, most cloud services have a per-resource cost associated with them.

As we design our architecture, deploy our servers, and provision new accounts and services, we increase our organization’s financial costs. So, if these devices are no longer needed, we should deprovision them to free up those resources and release them back to the host server. If we provision a new server on Amazon Web Services’ cloud, for example, we’re going to be charged for the amount of processing, memory, and storage that’s utilised by that virtual server. If the server is no longer needed, it’s important that we have a process in place to deprovision it, thereby releasing that resource back to Amazon and preventing us from incurring further charges. Now, another thing to consider is that when a server is deprovisioned, there may be data remnants left behind on the cloud service provider’s server. For this reason, we need to look over our service level agreement with the cloud service provider to determine how they’re protecting our data and destroying these data remnants. Based upon the classification of our data, we determine the appropriate level of destruction that the cloud provider should be utilising to prevent inadvertent access to those data remnants.
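To tie the provisioning and deprovisioning ideas together, here is a hedged sketch in Python with boto3 that finds virtual servers carrying a hypothetical "Lifecycle: temporary" tag and terminates them so they stop incurring charges. The tag name and filter convention are assumptions made for this example; terminating the instance releases the compute resource, but any data-remnant handling is still governed by your agreement with the provider.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances tagged as temporary (a hypothetical tagging convention).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Lifecycle", "Values": ["temporary"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if instance_ids:
    # Deprovision: terminate the servers so we stop paying for them.
    ec2.terminate_instances(InstanceIds=instance_ids)
    print("Deprovisioned:", instance_ids)
else:
    print("Nothing to deprovision.")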

8. Storage Models (OBJ 1.6)

Our systems need a place to store their data. And this brings us to the concept of a storage model. Now, a storage model describes the method used by a cloud computing infrastructure to store data. This includes object storage, database storage, block storage, blob storage, and key-value pairs. Object storage is a data storage architecture component that manages data as distinct units known as objects. Object storage is ideal for storing, archiving, backing up, and managing large volumes of unstructured data in a reliable, efficient, and affordable manner. The data stored in an object storage system is usually going to be unstructured, which includes things like emails, videos, photos, audio files, and other types of media or data.
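As a quick illustration of the object storage concept, here is a hedged Python and boto3 sketch that stores and retrieves a single unstructured object (a photo, in this hypothetical case) using only a flat key, with no directory hierarchy involved. The bucket name, file, and key are placeholders.

import boto3

s3 = boto3.client("s3")

# Store an unstructured object (a photo) under a flat key in a placeholder bucket.
with open("vacation.jpg", "rb") as photo:
    s3.put_object(Bucket="example-media-bucket", Key="vacation-2022-photo-001.jpg", Body=photo)

# Retrieve the same object later by its key.
obj = s3.get_object(Bucket="example-media-bucket", Key="vacation-2022-photo-001.jpg")
photo_bytes = obj["Body"].read()
print("Downloaded", len(photo_bytes), "bytes")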

With object storage, there are no folders, directories, or complex hierarchies; instead, objects are stored in a flat data structure environment. File-based storage is similar to object storage in that it can store any type of unstructured data, but with file-based storage, there is a hierarchical file and folder structure imposed on that data. This hierarchical use of file-based storage does make it slower and less efficient than using object storage. For cloud-based solutions, though, the next type of storage we need to discuss is known as database storage. Now, database storage is an organised and structured format for storage. Database storage is usually created as a series of interlinked tables that consist of rows and columns. The benefit of database storage is that it is extremely fast when it comes to searching and finding data stored within the database because everything is stored, structured, indexed, and easily accessible. The next type of storage we have is known as block storage. Block storage is used to store data files on storage area networks (SANs) and in cloud-based storage environments.

Block storage breaks up the data into equivalent-sized blocks and then stores those blocks as separate pieces, each with a unique identifier. Block storage is considered to be fast, efficient, and reliable when you need to transport that data as well. Block storage is used to decouple the data from the user environment, and it allows the data to work with different operating systems, too. The next type of storage is known as blob storage. Blob storage, or Binary Large Object storage, is considered a collection of binary data that is stored as a single entity. Typically, blobs are used to represent an image, audio, or multimedia object. Blob storage is a type of unstructured data storage, and these blobs are then grouped into containers that act a lot like a directory or folder. Blobs are popular with those using Microsoft Azure’s cloud platform, and they provide a high-speed, scalable, and easy-to-use solution for large amounts of user data. The final type of data storage mechanism you’re going to find is known as a key-value pair. Now, a key-value pair is a type of non-relational database that uses a simple key-value method to store its data. These pairs are groupings of data that are then used by other applications. For example, our learning management system maintains several key-value pairs about our different users, such as their first name, their last name, their student ID, their email address, and other things like that. If I’m going to look for a particular student, I would find that the first name equals John, the last name equals Smith, the student ID equals 123, and the email equals that student’s email address. Each of these key-value pairs could then be passed to our different automations to reference this particular user.
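Here is a minimal, hedged sketch of that student record expressed as key-value pairs in Python, first as a plain dictionary and then written to a non-relational, key-value style table using boto3 and DynamoDB. The table name, attribute names, and email address are hypothetical placeholders.

import boto3

# The student record from the example above, expressed as simple key-value pairs.
student = {
    "student_id": "123",               # the key: a unique identifier
    "first_name": "John",
    "last_name": "Smith",
    "email": "jsmith@example.com",     # placeholder address
}

# Storing and retrieving the same record in a key-value style NoSQL table.
table = boto3.resource("dynamodb").Table("Students")   # hypothetical table name
table.put_item(Item=student)
result = table.get_item(Key={"student_id": "123"})
print(result.get("Item"))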

Each key serves as a unique identifier, and the value can be used to represent simple objects or complex and compound objects, whichever is needed. The final thing we need to discuss in terms of storage is the idea of metadata and tags. Now, metadata is information about data, or a set of data that describes and gives information about other data. All the data we’ve spoken about so far in this lesson can also contain metadata about it. For example, if I uploaded a digital photo from my iPhone to my blob storage, there are going to be a lot of ones and zeros that make up the photo, and that’s part of the file itself. But there’s also metadata about the photo, things like the GPS coordinates of where I took the photo, the time of day it was taken, the focal length of the lens of my camera when I took that picture, and other details like that.

All this metadata is usually going to be created automatically whenever you create that photo or that file. Now, another type of data we have is known as a tag. A tag is added to indicate the type of information and to dictate its importance among other pieces of information. For example, if you send an email to our support address, your email is tagged and categorized based on keywords within your email. For example, if you were to send an email that said, “Jason, I have a question about blobs in your course,” our system would automatically tag your email with the tag “blobs” since it realizes that’s the topic you’re asking about.
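To show what applying a tag can look like in practice, here is a hedged Python and boto3 sketch that attaches a “topic” tag to an object already sitting in object storage. The bucket, key, and tag values are placeholders for this example.

import boto3

s3 = boto3.client("s3")

# Attach a categorisation tag to an existing stored object (placeholder names).
s3.put_object_tagging(
    Bucket="example-support-bucket",
    Key="emails/question-4821.txt",
    Tagging={"TagSet": [{"Key": "topic", "Value": "blobs"}]},
)

# Later, read the tags back to route or report on the data by category.
tags = s3.get_object_tagging(Bucket="example-support-bucket", Key="emails/question-4821.txt")
print(tags["TagSet"])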

Now, this allows us to know something more about this piece of data. This email in this example belongs to the same category as other emails and questions about the concept of blobs. And if I see a lot of questions on that particular topic, it tells me that I may need to go back and create a new lesson that covers blobs in more depth. Tags are a really helpful way to categorize your data, even if that data comes in a variety of unstructured forms, like an email, a chat request, a voicemail, or even a video upload. So, as you can see, there are a lot of different ways to store data in cloud-based architectures, including object storage, database storage, block storage, blob storage, and key-value pairs. And then there are the additional metadata and tags that help us categorize this data even further.

9. Virtualization (OBJ 1.6)

For cloud computing to gain its intended cost savings and efficiencies, it relies heavily on virtualization. Now, by using virtualization, numerous logical servers can be placed on a single physical server. This, in turn, reduces the physical amount of space, power, and cooling that’s required inside our data centers. Additionally, by using virtualization, we can achieve higher levels of availability by spinning up additional virtual servers whenever we need them.

This ability to dynamically provision memory and CPU resources is one of the key benefits of cloud computing. Now, while there are many different benefits to cloud computing, there are still numerous security issues that we need to consider. Most of the same security issues that we have with physical servers also get carried over into the cloud environment. Oftentimes, executives think that moving to the cloud will solve all their problems. But this is never really the case. When using virtualization, one or more logical servers reside on a single physical server. We do this by utilizing specialized software known as a hypervisor.

The hypervisor controls the distribution of all the resources, such as the processor, the memory, and the hard disk availability. Essentially, the hypervisor emulates a physical machine so that the operating system and all of its applications don’t even realise that they’re operating inside a virtual environment. Now, hypervisors are divided into two categories: Type 1 and Type 2. A Type 1 hypervisor is known as a bare metal hypervisor. With this type, the hypervisor replaces the operating system on the physical server. This allows the hypervisor to interact directly with the physical hardware without any intermediaries. Products such as VMware vSphere/ESXi, Microsoft Hyper-V, and XenServer are all examples of Type 1, or bare metal, hypervisors. Now, the second type of hypervisor is known as a Type 2, or hosted, hypervisor. This type is installed after an operating system is placed on the server. For example, on my laptop, I have a Mac OS X system and an application called VirtualBox on it. VirtualBox is virtualization software and is considered a Type 2 hypervisor. It allows me to install additional operating systems inside of virtual machines on my MacBook, and it operates them just like it would any other application. This allows me to install Windows, Linux, and even Android inside of a window on my Macintosh computer. Now, when comparing bare metal versus hosted hypervisor types, the main difference from a security perspective is that with hosted types, we have to ensure the underlying operating system is also properly secured and patched. In terms of performance, bare metal will be faster and more efficient than its hosted counterpart. Another virtualization option we have is known as container-based virtualization, or containerization.

With this type, a hypervisor isn’t utilized at all. Instead, each container relies on a common operating system as the base for each of those containers. While each container shares the same underlying operating system, each container can have its own binaries, libraries, and applications that can be customised for its needs. Currently, container-based virtualization is almost exclusively used with Linux as the underlying operating system. Docker is a great example of a container-based virtualization solution, and it is extremely popular and widely utilized. Container-based virtualization uses fewer resources than Type 1 or Type 2 virtualization because it doesn’t require its own copy of the operating system for each individual container.
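Since Docker was just mentioned, here is a minimal, hedged sketch using Docker’s Python SDK (the “docker” package, which would need to be installed alongside a running Docker engine) to launch a small container that shares the host’s kernel rather than booting a full guest operating system. The image and command are arbitrary examples.

import docker

# Connect to the local Docker engine (assumes Docker is installed and running).
client = docker.from_env()

# Launch a lightweight container; it reuses the host kernel instead of booting
# a full guest OS, which is why it starts in seconds and uses few resources.
output = client.containers.run("alpine:latest", "echo Hello from a container", remove=True)
print(output.decode().strip())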

Many cloud service providers have taken virtualization a step further with hyperconverged infrastructure, which allows the provider to fully integrate the storage, networks, and servers without having to perform hardware changes. Instead, they rely on software and virtualization technology to perform all the integrations. We can manage all of this from a single interface or device without having to worry about all the underlying vendor solutions. Application virtualization is another type of virtualization that’s commonly used to create additional security for an underlying host. Application virtualization is a software technology that isolates computer programmes from the operating system on which they run. A fully virtualized application is not even installed in the traditional sense, although it’s still executed as if it were. With application virtualization, you can run legacy applications that were designed for an end-of-life operating system like Windows XP or Windows 7 on top of a more modern operating system. Or you can even run cross-platform software, such as Android applications, on a Windows machine.

Many cloud providers also offer VDI, or virtual desktop infrastructure. Now, VDI allows a cloud provider to offer a full desktop operating system to your end users from a centralised server. There are a lot of security benefits to using this approach. For example, one organisation I worked with created a new virtual desktop image for each user when they logged on in the morning. This desktop was non-persistent, so even if it were exploited by an attacker, it would be destroyed as soon as the user logged off or at midnight each day. This effectively destroyed the attacker’s ability to remain persistent on the end user’s desktop, even if malware had been installed on it. Alright, that’s a lot of different information about virtualization, but we have a little bit more to cover, because one physical server is going to store lots of different logical servers, and we need a way to keep that data confidential and separate from the other logical servers. To do this, we can use secure enclaves and secure volumes. Secure enclaves utilise two distinct areas to store and access the data, and only the proper processor can actually access each enclave. This technique is heavily used by Microsoft Azure and many other cloud service providers. Secure volumes, on the other hand, are a method of preventing prying eyes from accessing data at rest.

When data on the volume is needed, the secure volume is mounted and properly decrypted to allow access to that data. Once that volume is no longer needed, it’s going to be encrypted again and unmounted from the virtual server. This is the same basic concept used by BitLocker on a Windows laptop or FileVault on a MacBook. The last thing we need to discuss is the concept of emulation, because many people confuse it with virtualization. Now, emulation involves using a system that imitates another system. With virtualization, a virtual instance of a particular piece of hardware is created and used. So, in reality, you’re using what acts like a new physical machine that’s represented by software. With an emulator, though, a piece of software is translating the environment in real time to pretend that it is something else. For example, if I wanted to play an old Super Nintendo game on my MacBook Pro, I could download a Super Nintendo emulator, like OpenEmu, and then it would translate the game’s code in real time into instructions that my Mac could understand. With virtualization, the software being run can directly access the hardware of your machine, making it much, much faster than using an emulator.

So when might you want to use an emulator instead of virtualization? Well, if you need to run an operating system that’s meant for some other type of hardware, such as a Super Nintendo, on a Mac or Windows machine, then you’re going to have to use an emulator. If you want to run software that’s meant for a different platform, like an Android app that’s designed to run on an ARM processor, but you have an Intel processor and a Windows operating system, then again, emulators are going to be the right choice for you because you’re dealing with different underlying hardware. On the other hand, if you need high speed and better performance, you want to use a virtualization solution. Remember, though, that with virtualization you are limited to running software that is coded for the particular underlying hardware of your processor. Normally, most of us are using x86- or x64-based processors. So whether you’re running Windows, Linux, or macOS on these types of systems, virtualization will work just fine. But if you’re trying to run an operating system designed for an ARM processor, it’s not going to work with virtualization; you’d have to use an emulator. Now, in general, you’re going to use virtualization most of the time instead of an emulator because it’s faster and more efficient, and most things are written to work on an x86 or x64 processor. But again, if you need to run something built for a different processor, that’s when you’re going to have to use an emulator.
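As a small, hedged sketch of that decision logic, the following Python snippet checks the host’s processor architecture and reports whether virtualization or emulation would be the appropriate choice for a given target architecture. The architecture strings and the decision rule are simplified for illustration; real tooling would be more nuanced.

import platform

def virtualize_or_emulate(target_arch: str) -> str:
    """Return 'virtualization' if the target matches the host CPU family, else 'emulation'."""
    host = platform.machine().lower()        # e.g. 'x86_64', 'amd64', or 'arm64'
    x86 = {"x86", "i386", "i686", "x86_64", "amd64"}
    arm = {"arm", "armv7l", "arm64", "aarch64"}
    host_is_x86 = host in x86
    target_is_x86 = target_arch.lower() in x86
    host_is_arm = host in arm
    target_is_arm = target_arch.lower() in arm
    if (host_is_x86 and target_is_x86) or (host_is_arm and target_is_arm):
        return "virtualization"   # same instruction set family: run it natively in a VM
    return "emulation"            # different hardware family: translate instructions instead

print(virtualize_or_emulate("arm64"))   # on an x86_64 host this prints 'emulation'

In short, match the tool to the underlying hardware: virtualization when the architectures match, emulation when they don’t.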
