CompTIA CASP+ CAS-004 Topic: Securing Architectures (Domain 1) Part 2
December 15, 2022

7. Network Segmentation (OBJ 1.1)

When building the architecture of our enterprise networks, we should consider how to use network segmentation to increase our security. We can achieve this segmentation by using access control lists (ACLs), virtual local area networks (VLANs), or even physical routers, switches, and firewalls. Many organizations create zones or segments for different trust levels throughout their networks. The three most common zones are the internal or trusted zone, the external or untrusted zone, and the demilitarized zone (DMZ), which is the semi-trusted zone. In large organizations, there may be additional segments or zones created inside of these three larger zones as well. For example, in the internal zone, there may be a segment for users, another for the data center, and another for all of the systems involved with taking credit card payments. By segmenting into zones, we can apply more granular security controls to each one. Because we often create these segments or zones for a specific functional need, specific administrators can be assigned to each zone.

This delegation of responsibility allows the administrators to focus more specifically on the policies, procedures, and regulations associated with their portion of the network and their specific zone. So when considering the architecture of your internal network, it's important to consider breaking up your network into multiple security zones. These zones can be further broken down into subzones through subnetting, ACLs, firewall rules, and other isolation methods that help prevent or shape the flow of data between different portions of our networks. One of the most common security zones is known as the DMZ, which is a specialized type of screened subnet. This zone is focused on providing controlled access to publicly available servers that are hosted within our organization's network. For example, if we're self-hosting our web and email servers on our organizational network, it is a best practice to place them in a tightly controlled demilitarized zone, or DMZ. This allows us to maintain precise control over the traffic that's allowed between the inside, outside, and DMZ portions of our network. To create a DMZ, multiple interfaces are used on our organization's firewall, with a strict set of access control list rules applied and a public IP address required for each server hosted within that DMZ. The goal here is to create security zones, like the DMZ, in order to separate critical assets. Not all the devices in a network require the same level of protection. Some resources, such as file servers that contain confidential employee data, might require additional security measures. Instead of protecting every device to the same high level, we can divide our network into subzones based on the levels of protection they require. In addition to subzones such as the DMZ, we can also create additional external zones, such as an extranet. An extranet is a type of DMZ created for our partner organizations to access via the wide area network. It acts a lot like a DMZ, but it's not publicly accessible to everyone; it's only available to our partners. The extranet is also a place where additional security monitoring and additional scrutiny can take place to protect those devices.
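To make these zone definitions concrete, here is a minimal Python sketch using the standard ipaddress module; the zone names and subnet ranges are hypothetical examples, not values from this course:

```python
import ipaddress

# Hypothetical zone-to-subnet mapping for a segmented network
ZONES = {
    "internal-users": ipaddress.ip_network("10.0.10.0/24"),
    "data-center": ipaddress.ip_network("10.0.20.0/24"),
    "payment-systems": ipaddress.ip_network("10.0.30.0/24"),  # e.g., PCI DSS scope
    "dmz": ipaddress.ip_network("192.168.100.0/24"),
}

def zone_for(host: str) -> str:
    """Return the security zone a host address falls into."""
    addr = ipaddress.ip_address(host)
    for zone, subnet in ZONES.items():
        if addr in subnet:
            return zone
    return "untrusted"  # anything outside our defined zones

print(zone_for("10.0.30.17"))   # payment-systems
print(zone_for("8.8.8.8"))      # untrusted
```

Once each host maps to a zone like this, the more granular controls described above can be applied per zone rather than per device.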

Now, to enforce these different network segments, we're going to use five different services known as boundary control, access control, integrity, cryptography, and auditing and monitoring services. As applications and systems are designed, their data flow is controlled by each of these services. Boundary control services focus on placing devices at the boundaries of the security zones to control the ingress and egress of data going into or out of those portions of the network. For example, a firewall could be placed at the border of a network and configured with ACLs to tightly control the data flowing into or out of that network. Access control services are used to control access to sensitive materials. This can be done with mandatory access control, rule-based access control, or discretionary access control, depending on the particular application and system in use. The focus of integrity services is on ensuring that data is not changed, damaged, or corrupted during its transfer in the network, system, or application. Usually this is performed by some sort of checksum, hash, or digital signature of the data as it's accessed or transferred. Cryptographic services focus on maintaining the confidentiality of that data. This ensures that the system is designed to scramble or encrypt the data as it is placed in transit over the network. This could be through a software solution on a particular system or a hardware-based solution like a VPN or a bulk encryption device at a network segment. Finally, auditing and monitoring services track all the activities of an agent or system in the network. These controls can be placed within each security zone or at the border of the security zone to identify, track, and better understand what users and processes are able to access data within the security zones.
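As a small illustration of those integrity services, here is a minimal Python sketch (the file name is hypothetical) that hashes data before a transfer and verifies it again afterward:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute a SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# The sender records the digest before the transfer...
digest_before = sha256_of("payroll_export.csv")

# ...and the receiver recomputes it afterward; a mismatch means the
# data was changed, damaged, or corrupted in transit.
digest_after = sha256_of("payroll_export.csv")
assert digest_before == digest_after, "Integrity check failed"
```

In practice a digital signature adds authentication of the sender on top of this basic change detection.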

Now, up to this point in the course, we have been discussing numerous different types of firewalls that our organizations can purchase for their networks. Let's focus on how these firewalls can be deployed into our network to create a more secure firewall architecture. First, let's discuss the concept of a bastion host. A bastion host may or may not be a firewall. In fact, a "bastion host" refers to any device that is directly exposed to the Internet or another untrusted network. Due to their placement, firewalls are almost always considered bastion hosts, but they are not the only kind. Because of their connection to untrusted networks, bastion hosts must be hardened to ensure that they have the latest software and firmware patches, that all unnecessary services are disabled, and that all unneeded ports are closed. A bastion host can be any server or host that is connected directly to the Internet or an untrusted network. For these devices, we should also ensure a separate authentication service is being used and that any unneeded software utilities are removed or disabled. These servers and hosts should be placed between an internal and an external firewall, in the location we call the DMZ, or demilitarized zone. Now, most firewalls have at least two interfaces. In a dual-homed firewall, one interface is connected to the internal or trusted network, while the other interface is connected to the external or untrusted network. This is the simplest firewall architecture to configure, but it also creates a single point of failure for our network. This configuration only requires a single firewall device using the appropriate access control lists, making it the least costly architecture while still providing some level of security and the ability to conduct network address translation, or NAT, at your network boundary. A multihomed firewall architecture is similar to a dual-homed architecture, except it uses more than two interfaces. For example, if we wanted a simple architecture that would support an internal or trusted zone, an external or untrusted zone, and a demilitarized zone, we could create this with a single multihomed firewall. A screened host firewall architecture works similarly to a dual-homed firewall, except that it is not physically located between a trusted and untrusted network. Instead, the external router of the network logically forwards all the traffic to the screened host firewall over a single interface, and the firewall then forwards the approved traffic back to the router for routing into the internal network. This provides a more flexible solution than a dual-homed firewall because it uses rules instead of interfaces to create the separation, but the configuration is much more complex. A screened subnet takes the concept of a screened host architecture a step further. By utilizing two firewalls, traffic is inspected by both firewalls prior to being allowed into the internal network, which allows for the creation of a DMZ. This architecture places one firewall logically between the external, untrusted network and the DMZ, and the other logically between the DMZ and the internal, trusted network. Screened subnet architectures have additional security built into their design, but they cost more than the other architectures we talked about because you need two firewalls, which also adds to the complexity of the architecture. With that said, it's not as simple in the real world as it is in this lesson.
These approaches are often mixed together to create hybrid architectures based on the particular security requirements of your organization. Now, when it comes time to configure our devices inside the DMZ, we're going to use something known as a jump box. A jump box is a hardened server that provides access to other hosts located within the DMZ. Essentially, we have this one server to which we can connect in order to communicate with any other servers within the DMZ, and we use it as a pivot point, as shown in the sketch below. By using a jump box, we can create segmentation between the internal network, the DMZ, and the servers within that DMZ.
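To sketch what that pivot looks like in practice, here is a minimal Python example using the paramiko SSH library; the hostnames, addresses, and key paths are hypothetical:

```python
import paramiko

# Step 1: authenticate to the hardened jump box from the management workstation.
jump = paramiko.SSHClient()
jump.load_system_host_keys()  # rely on known, verified host keys
jump.connect("jumpbox.example.com", username="admin",
             key_filename="/home/admin/.ssh/id_ed25519")

# Step 2: open a tunneled channel from the jump box to a server in the DMZ.
dmz_server = ("192.168.100.20", 22)
channel = jump.get_transport().open_channel(
    "direct-tcpip", dest_addr=dmz_server, src_addr=("127.0.0.1", 0))

# Step 3: SSH to the DMZ server through that channel: the "jump".
target = paramiko.SSHClient()
target.load_system_host_keys()
target.connect(dmz_server[0], username="admin", sock=channel,
               key_filename="/home/admin/.ssh/id_ed25519")
_, stdout, _ = target.exec_command("hostname")
print(stdout.read().decode())
```

The same pattern is what OpenSSH's ProxyJump option automates; the firewall only needs to permit the workstation-to-jump-box and jump-box-to-DMZ flows.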

Using a jump box this way allows us to configure all the access controls needed to make sure that only the jump box is able to communicate from the internal network to that DMZ. Because of this, the jump box has to be heavily hardened and protected. Since the jump box is heavily hardened and monitored, the administrator connects to it first, and then the jump box connects to the host in the semi-trusted DMZ. This is why we call it a jump box: we're essentially pivoting or jumping off of it into the other servers and devices within that DMZ. So in the real world, the administrator will go from their desktop or laptop to the jump box, and then from the jump box to the server or device that they want to configure. Now, this jump box can be a physical PC or it can be a virtual machine; either one is fine, depending on your architecture. A lot of people like to use virtual machines as jump boxes because you can keep them hardened and secured, use one for the time you need it, and then destroy it and rebuild a new one very quickly from a virtual machine image that already has a known good baseline. Both the jump box and the management workstation that you're using to connect to it should always have only the minimum required software to perform their jobs, and they should be very well hardened. Again, this is the one box that has permission to go through the firewall and touch the DMZ from your internal network, so you want to make sure that it is well protected. If we want to take segmentation to the extreme, we can also use something known as an "air gap." An air gap is a type of network isolation that physically separates a network from all other networks. Essentially, what you're trying to do here is provide a physical space, a lack of connection, between two different networks. That's why we call it an air gap: because there's air between them. So if I wanted to take something from one network to another and they're air-gapped, I would have to physically take that data, burn it to a CD, put it on a USB drive or a hard drive, and then carry it over to the other network and plug it in. That's the idea of an air gap. Now, the big problem with an air gap is that it can create management issues, because you don't want to have to do this cross-network transfer every time you want to bring things over.

For example, if I wanted to work on a network that was air-gapped, I would have to physically walk over to that network, plug in my laptop, and start working on it. This is not as easy as sitting behind my desk and reaching out to it over the network, or connecting through a jump box. So why would we want to use an air-gapped network or this type of physical isolation? Well, let's say you work at a nuclear power plant. Do you think you'd want to have that nuclear reactor control system on the Internet so that anybody could touch it? Obviously not. This is a great case for enforcing isolation in a physical manner. By using an air gap this way, we can create a gap between our corporate networks and our reactor control networks, keeping the reactor plant much safer. That's why we would use an air gap in some of our production networks that deal with manufacturing or other real-world environments. Remember, though, that when you choose to use an air gap, you're going to have a management headache on your hands, because every time you need to do a software update, a firmware update, or an antivirus update, you have to physically bring the update over and connect to the network to get those things into it. And that brings its own vulnerabilities, because if I connect a laptop to this network, it can bring any malware on that laptop into the standalone network that we had air-gapped. If you look back at the 2010-2012 timeframe, when Stuxnet was out there, a nuclear facility got infected with malware even though it was on an air-gapped network. How did that happen? It happened because somebody carried information from the Internet and bridged the air gap by plugging a device into the plant's network, which caused the infection. So to keep your network secure when you're using an air gap, you need to maintain proper isolation and segmentation and make sure that anything you're going to plug into that network is checked and double-checked for malware prior to actually putting it on that network. You must ensure that it is completely clean, because bringing something malicious into that network will let it spread just like it would on any other network. Now, I know this was a lot of information. Remember, segmentation is one way to help protect our networks. By creating these various types of segmentation, we can apply more or fewer security protections to each area of the network based on its risk profile and your needs.
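As a simple illustration of that screening step, here is a minimal Python sketch, with hypothetical paths and a hypothetical allowlist file, that verifies every file on a removable drive against known-good SHA-256 hashes before it is carried across the air gap:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large update files do not exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical allowlist of approved update files: {"filename": "sha256hex", ...}
allowlist = json.loads(Path("approved_hashes.json").read_text())

# Scan everything on the removable drive before it crosses the air gap.
for file in Path("/media/usb").rglob("*"):
    if file.is_file():
        ok = allowlist.get(file.name) == sha256_of(file)
        print(f"{file.name}: {'APPROVED' if ok else 'REJECTED, do not transfer'}")
```

A hash allowlist like this is only one layer; dedicated scanning stations and antivirus checks would normally back it up before media touches the isolated network.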

8. Implement Network Segmentation (OBJ 1.1)

Observe the rule at the top for regular HTTP browsing. This traffic came from the internal client at 192.168.2.192 and went to the web server in the DMZ over port 80. Now observe the default blocking rule that we see here: the web server in the DMZ is not permitted to initiate connections back to 192.168.2.192 on the internal network over port 80. This is what happened when I used that curl command. Essentially, I was attempting to transfer data from the web server to a client PC on the internal network, and because of this DMZ rule, it gets blocked.

Now, if I click on that rule, I can see additional details about this hidden default rule. As you can see, we have successfully isolated our web server from the other hosts on the internal network, still allowing those hosts to access the web server when they need to while preventing the web server from initiating connections back to those hosts. So you can see how we can add segmentation here to make sure that the devices on the internal network are protected from the external network and from that untrusted DMZ. Now, we've talked about jump boxes as well, and you could configure a jump box in the DMZ as a single point of entry. Essentially, you would get SSH access into that jump box, and from there you could access all of the other hosts inside the DMZ, because the jump box is already in that environment. This would be a good way to set things up, especially if you configure a forward proxy connection to those different application servers. Okay, I hope you enjoyed this lesson, because we got a little hands-on with unified threat management firewalls and went over segmentation configurations.
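Before moving on, the rule logic from this demo can be modeled in a few lines of Python. This is a minimal first-match rule evaluator; the zones and rule set are hypothetical, but they mirror the intent of the demo:

```python
# Each rule: (source zone, destination zone, destination port, action).
# Rules are evaluated top-down; the first match wins. None is a wildcard.
RULES = [
    ("internal", "dmz", 80, "allow"),   # internal clients may browse the web server
    ("internal", "dmz", 443, "allow"),
    ("dmz", "internal", None, "deny"),  # DMZ hosts may never initiate inbound
    (None, None, None, "deny"),         # implicit default deny
]

def evaluate(src_zone: str, dst_zone: str, dst_port: int) -> str:
    for rule_src, rule_dst, rule_port, action in RULES:
        if rule_src in (None, src_zone) and rule_dst in (None, dst_zone) \
                and rule_port in (None, dst_port):
            return action
    return "deny"

print(evaluate("internal", "dmz", 80))   # allow: the browsing rule
print(evaluate("dmz", "internal", 80))   # deny: the curl attempt from the demo
```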

9. Server Segmentation (OBJ 1.1)

In this lesson, we're going to continue our discussion of segmentation by covering group policies and security groups, microsegmentation, data zones, and other server-based segmentation solutions. The first method of server segmentation used in our networks involves group policies and security groups. Group policies and security groups are heavily used to enforce standardized operating system settings in a Windows domain or other environments. Even for Windows machines that are not part of a domain, we can create local security policies to enforce settings on those workstations.

Now, group policies and local security policies are a powerful and welcome addition to our segmentation techniques. Active Directory utilizes a hierarchical structure that allows a single group to contain multiple other groups or machines inside of it. This makes it very simple for us to create Group Policy Objects, or GPOs, that can be applied to all systems in the domain, and to add or remove specific security policies based on machine group membership. Each time a workstation in the domain is booted up, shut down, logged into, or logged off of, it receives and applies the latest GPOs based on its group membership, providing it with the most up-to-date security settings. Additionally, a domain administrator can force a refresh of the group policy from a centralized server at any time it becomes necessary. To tweak these policies and see how they affect things, the administrator uses the Group Policy Management Console, or GPMC, within the Windows environment. This gives them granular control to allow or disallow the inheritance of a policy from one group or container to another.

For example, the parent sales group might contain two groups beneath it, known as permanent sales and temporary sales. If I create a policy disabling an application inside the sales group, that policy will be inherited downward into the permanent sales and temporary sales groups, but I can also break that inheritance and allow a group like sales managers to use that application even though I've blocked it for permanent sales and temporary sales. There are a lot of ways you can configure this within your group policies. Now, the GPMC can also be used to filter out specific computers or users from an enabled policy. You can delegate administration across the Active Directory domain and even use Windows Management Instrumentation (WMI) filters to apply or exempt a policy for a specific computer based on its installed hardware. For example, let's say we want to disable all the wireless cards in our organizational laptops with the GPMC. We can do that by applying a group policy based on the brand of wireless adapter in those laptops. There are many security policies available for configuration under the GPMC, including account policies, local policies, event log policies, restricted groups, system services, registry and file system policies, public key policies, and IPsec policies on Active Directory.
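To illustrate that inheritance behavior, here is a minimal Python sketch of how a policy set on a parent group flows down to child groups unless inheritance is deliberately blocked; the group names and the policy itself are hypothetical:

```python
# Hypothetical group tree: parent -> children, plus per-group policy settings.
TREE = {"sales": ["permanent_sales", "temporary_sales", "sales_managers"]}
POLICIES = {"sales": {"block_app": True}}   # policy set once at the parent
BLOCK_INHERITANCE = {"sales_managers"}      # managers opt out of the parent policy

def effective_policy(group: str, parent: str | None = None) -> dict:
    """Merge the parent's policy into the child's unless inheritance is blocked."""
    policy = {}
    if parent and group not in BLOCK_INHERITANCE:
        policy.update(effective_policy(parent))
    policy.update(POLICIES.get(group, {}))
    return policy

for child in TREE["sales"]:
    print(child, effective_policy(child, parent="sales"))
# permanent_sales {'block_app': True}
# temporary_sales {'block_app': True}
# sales_managers {}    <- inheritance blocked, so the app stays enabled
```

This is only a model of the resolution logic; in a real domain the GPMC performs this merge across the full OU hierarchy.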

Microsegmentation is the next segmentation method we're going to discuss. Microsegmentation is a method of creating zones in data centers and cloud environments to isolate workloads from one another and secure them individually. System administrators can use microsegmentation to create policies that limit network traffic between workloads using a zero-trust approach. Microsegmentation provides the benefits of reusable server roles, environment and application tags, reusable security policy templates, platform-agnostic separation, an automatic audit trail for every action, and a zero-trust network with complete visibility and control.
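A minimal Python sketch of that zero-trust, default-deny posture between workloads might look like this; the workload names and allowed flows are hypothetical:

```python
# Explicit allow-list of workload-to-workload flows.
# Anything not listed is denied by default: the zero-trust posture.
ALLOWED_FLOWS = {
    ("web-frontend", "api-server", 8443),
    ("api-server", "postgres-db", 5432),
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    return (src, dst, port) in ALLOWED_FLOWS

print(is_allowed("web-frontend", "api-server", 8443))   # True
print(is_allowed("web-frontend", "postgres-db", 5432))  # False: no direct DB access
```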

Essentially, we can create more and more granular security zones with microsegmentation by creating multiple mini-DMZs across all of our data centers. This prevents an attacker from gaining access to one system on our network and then being able to reach every other system on the network. Instead, they're limited to the microsegment that the compromised machine happens to be located within. This helps reduce the total attack surface that could be exploited during an attack or security incident. Now, before you start to segment your networks and servers, it's important to determine what data zones you have by conducting a data topology mapping for your enterprise network. Data topologies define, classify, cluster, and manage data based on the users of that data, its constraints, its flow, and its importance. Data zones are a representation of data with a shared purpose, a shared need, and shared users. Categorizing data into data zones helps define what protections will be needed for each of those pieces of data in our environment. For example, I may create a few simple data zones within my company: one for financial data, one for student data, and another for proprietary data, like the custom software we developed to make our business operations more efficient and effective. Now that I have these three data zones, I can define the protection levels for each. For our financial data, we need to ensure we're compliant with the regulations and laws that might affect us, things like PCI DSS for our credit card data. For our student data, I have different protection requirements; for example, I need to meet GDPR compliance requirements for any student data I collect. Finally, we have my proprietary software data, and for this zone I get to make up the requirements, because there is no regulation that dictates my protection levels.

Instead, I'm going to create my own based on how much time, money, and resources I want to spend protecting this type of data. By segmenting our data into these different data zones, I can then secure each one to a higher or lower level based on what's necessary for that particular type of data and the systems that process it, allowing me to make better use of my resources and my time in securing that data. Another method of segmentation you may use, especially in the world of cloud computing, is known as region-based segmentation. Most cloud service providers give you the ability to dictate where your data will be stored based on different geographic regions. You can then use this as a method of segmentation in your cloud-based networks and servers by storing different types of data in different regions. For example, one organization I worked for was a multinational organization with employees all over the world. They were concerned with storing their European employees' data outside of the EU, or European Union, so we had to configure our databases to store data on European employees only on servers that were hosted in the EU. Our other employees' data could be stored in Europe, the United States, Asia, or anywhere else; it really didn't matter, because there was no regulation that prevented it. This was an example of regional segmentation: making sure we put data into the right region based on the user, in this case European employees' data being stored in the EU. Another cloud-based segmentation technique is known as an availability zone. Availability zones, or AZs, are isolated locations within the data center regions from which public cloud services originate and operate. When we use regions, these are geographic locations that we can use for segmentation.

But many of these regions have multiple availability zones located within them. For example, within the Amazon Web Services cloud, there is us-east-1, a region located in Northern Virginia; inside of that, there are six availability zones. So I could segment my workloads across some or all of these availability zones to provide additional redundancy and minimize my cost, as opposed to sending all my data to another region. Each availability zone has the ability to remain online using its own redundancy systems, even if another availability zone in the same region goes down.

Next, let's discuss VPCs and VNets. VPC is the term used by Amazon Web Services, and it stands for "virtual private cloud." VNet is the term used by Azure, and it stands for "virtual network," but both are similar segmentation offerings. VPCs and VNets allow you to deploy private networks in the cloud or extend your on-premises network into the cloud. Each allows you to create subnets to further segment your networks and use non-routable private IPv4 addresses. You can then create security groups and network access control lists, or NACLs, to protect the VPCs and VNets that you're creating and increase your segmentation. A network ACL is a stateless filtering rule that is applied at the subnet level and to every resource deployed to that subnet. Within the VPC or VNet, network ACLs examine the traffic, allowing or denying it based on the allow and deny rules that you create, essentially just like a traditional ACL would.

Another method of segmentation is the use of production, staging, and guest environments. When we're building some kind of system, whether it's a new server, website, or other feature, our whole goal is to get things into production. "Production" is what we call any service that's deployed and being used by our intended audience. For example, if you go to diontraining.com right now, you're accessing our production web server and all of the production features that we've released. On the other hand, we have an entire set of servers that mirror each other that are set up and used by our development team to build the next generation of diontraining.com and its new features. This is our development environment, and work then moves to a staging server. So when we built our new exam voucher tool for our students, we first built it on our development platform, then we moved it to our staging area. Once we were happy with it and it passed all of our quality assurance and testing, we deployed it into our production environment. Once deployed to production, our students, also called our end users, were able to access and use this new feature. In general, your staging environment should look and act just like your production environment, but it's usually scaled down in size and scope to save money and resources. You develop something, move it to staging for final testing and design, and then push it into production when you're ready to go live. The other environment I mentioned is known as a guest environment. This is commonly used in corporate networks that allow visitors to access the network, but only within a limited scope. In these cases, the visitor connects to the guest environment, which is a segmented portion of the network with limited access to other functions on the network.

For example, at my office, we have a guest environment that allows visitors to connect to our wireless network. It puts them into a guest segment of that network that directs their traffic back out to the Internet without giving them access to any of our internal resources, such as our file servers, databases, or even our printers. Finally, let's quickly talk about peer-to-peer segmentation. Peer-to-peer networking occurs when two devices connect directly to each other. In general, peer-to-peer networking should be avoided due to the lack of segmentation involved. If peer-to-peer traffic is going to be on your network, it should be segmented off into a separate VLAN on local networks, or a separate VPC or VNet if you're using cloud-based networks. Like I said, peer-to-peer is less secure because there's less segmentation, but it does allow for direct connections between two devices. In general, you're not going to use it widely inside enterprise networks, except in some specialized use cases.
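To tie together the cloud segmentation concepts from this lesson, here is a minimal sketch using AWS's boto3 library that creates a VPC, adds a subnet pinned to one availability zone, and attaches a stateless NACL rule; the region, CIDR blocks, and rule values are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a private network in the cloud with a non-routable IPv4 range.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# Segment it further with a subnet pinned to a single availability zone.
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a")["Subnet"]

# NACLs are stateless: this inbound rule is evaluated on every packet.
nacl = ec2.create_network_acl(VpcId=vpc["VpcId"])["NetworkAcl"]
ec2.create_network_acl_entry(
    NetworkAclId=nacl["NetworkAclId"],
    RuleNumber=100, Protocol="6",       # protocol 6 is TCP
    RuleAction="deny", Egress=False,    # an inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 23, "To": 23})   # e.g., block inbound telnet
```

Spreading subnets like this one across multiple availability zones gives the redundancy benefit described above, while the NACL and security groups provide the traffic filtering.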

10. Zero Trust (OBJ 1.1)

In this lesson, we're going to talk about deperimeterization and the need to implement "zero trust" in order to secure your networks. First, what is deperimeterization? Deperimeterization is the removal of the boundary between an organization and the outside world. In the old days of computer networks, we would have all of our laptops, desktops, servers, and network devices contained within our own office buildings.

Over time, though, we added things like personal digital assistants and smartphones, and this allowed our networks to expand beyond the walls of our own buildings. These days, we cannot trust that all of our devices are going to be connecting to our networks from within our trusted office spaces. This is known as the "deperimeterization" of our networks. Because of this, we cannot rely solely on perimeter-based defenses like firewalls, intrusion prevention systems, and other network appliances. Instead, we must protect our systems and data using multiple levels of encryption, secure protocols, data-level authentication, and other host-based protection mechanisms. Now, deperimeterization is a wonderful thing from an operational perspective. It allows us to reduce our costs, conduct business-to-business transactions from anywhere in the world, and become a more agile organization.

The move to the cloud has rapidly increased our ability to conduct secure operations within a deperimeterized architecture. Deperimeterization has occurred due to the migration into the cloud, the increase in remote work, the embrace of mobile technologies and wireless networks, and the move to outsourcing and contracting. However, if we are not cautious, it poses a significant risk to us. Let's start by looking at mobile devices. When I began working in the IT field nearly two decades ago, networks were mostly comprised of servers, desktops, and a few laptops thrown in here and there. Over the past ten years, though, the types of devices that make up our networks have expanded to include mobile devices such as tablets, smartphones, and countless other devices that make up the Internet of Things. Due to the ever-increasing scope of devices on our networks, it's become very important to consider the unique challenges that each of these devices brings to our organizations.

Our organization must first consider whether it's going to allow these devices to connect to the network, and if so, what security policies will reduce the risk that these various devices introduce. Each device can connect to multiple networks, including our organizational wireless networks, the untrusted Internet over cellular, the user's own home network, and even hotel or coffee shop wireless networks when our employees are traveling for business. While we cannot control all of these networks, each one introduces security risks onto those devices, which then carry those risks back into our organizational networks. Also, with the move to the cloud, many organizations are placing their critical data either in public cloud offerings like AWS, Azure, or Google Cloud Platform, or into software-as-a-service offerings. Either way, the data now goes beyond the perimeter of your network, and this too is considered a form of deperimeterization. Increasingly, our employees are working remotely, whether that's from home, from a hotel room, or from a coworking space.

Because of this increase in the number and type of locations being used, these employees and their data are also outside the traditional perimeter of the corporate network. When they're working outside of our corporate offices, our employees may also be using different types of connections back into our networks. For example, when they're working from a hotel or their home office, they might be using a wireless network, a cellular modem, or a direct Ethernet connection over a Cat 5 or Cat 6 cable. As the owner of the office network, you really won't have much control over which type of network they're connecting back to you over. So it's always best practice to implement a concept known as "zero trust" to ensure the security of your corporate network and your corporate data.

Now, you see, in traditional networks, we used to believe that our networks and our users could be trusted, so we gave them access to our data. Under a "zero trust" model, that is not the case, and in the modern world that is considered a good thing. Zero trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside of their perimeters and instead must verify everything that's trying to connect to their systems before granting access.

This all comes down to the fact that everyone is treated as suspect. Just because someone is using a username and password that was assigned to one of your users, you really don't know if that user is who they say they are or whether they can be trusted. Zero trust is a strategic initiative that helps prevent successful data breaches by eliminating the concept of trust from an organization's network architecture. Instead, zero trust protects modern digital environments by leveraging network segmentation, preventing lateral movement, providing Layer 7 threat prevention, and simplifying granular user access control. Now, by using the concepts of microsegmentation, we can create micro-perimeters within our networks, and every time someone tries to cross one of those perimeters, either inbound or outbound, they're going to be checked again to see if they have the right access.
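A minimal Python sketch of such a micro-perimeter check might look like this; every policy value here is hypothetical, and the point is simply that the decision weighs more than a username and password:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool   # is this a patched, managed endpoint?
    location: str            # where the request originates
    action: str              # what they want to do
    resource_zone: str       # which micro-perimeter they are crossing

def verified_identity(user: str) -> bool:
    # Stand-in for MFA / identity-provider verification.
    return user in {"alice", "bob"}

# Hypothetical policy: every crossing of a micro-perimeter is re-verified.
def authorize(req: AccessRequest) -> bool:
    if not req.device_compliant:
        return False                      # never trust an unmanaged device
    if req.resource_zone == "payment-systems" and req.location != "corporate":
        return False                      # sensitive zone: on-site access only
    if req.action == "bulk-export":
        return False                      # placeholder for data-exfiltration checks
    return verified_identity(req.user)    # identity must still be verified

print(authorize(AccessRequest("alice", True, "corporate", "read", "payment-systems")))   # True
print(authorize(AccessRequest("alice", True, "hotel-wifi", "read", "payment-systems")))  # False
```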

These checks are not focused simply on authentication, which establishes who is conducting the action, but also on what they want to do, where they're doing it from, why they're doing it, and how they're doing it. This policy of zero trust determines who can transit the micro-perimeter at any given point in time. It prevents access to your protected area by unauthorized users and prevents them from exfiltrating sensitive data from your network and services, regardless of their actual location. The final consideration in this move toward deperimeterization and the implementation of zero trust revolves around the world of outsourcing and contracting, which is ever on the increase inside modern organizations.

When we outsource functions within our company to a contractor, we must include methods for them to authenticate back into our network. For example, I have a video editor on my staff, but I also have additional video editors that I contract whenever we're producing a lot of courses at the same time. Both of these video editors need to access our file servers to get to the videos that I recorded, so they can do their job and make the final videos that you're going to see.

My on-staff video editor can simply access the files from our local intranet, but my contracted video editor can't do that, because they're not located in or near our offices. Instead, we have to embrace deperimeterization to allow them to securely access cloud-based file shares under our zero-trust policy, so they can get to those files and perform their necessary job functions. As you begin to work with outsourcing and contractors, it's always important to identify what data they need access to, where it is currently being stored, and how you're going to move it across the perimeter to give them access while keeping it secure.
