CompTIA Network+ N10-008 – Module 12 – Virtualizing Network Devices Part 1
February 25, 2023

1. 13.0 Virtualizing Network Devices

A game-changing technology that we hear a lot about is virtualization. It allows us to take multiple physical devices, such as servers, routers, and switches, and have them virtually reside on a single physical server. That can be an enormous cost savings for us.

So in this module, we're going to see how virtualization happens, and we'll consider some related technologies, such as storage area networks and even cloud technologies, where almost everything is virtualized. Now, let's begin our look at virtualized devices in our next video.

2. 13.1 Virtualized Devices

In this video we want to take a look at server virtualization. I'm reminded of when I used to work at a local university. We had a server farm, and we purchased these really big, expensive servers for the different colleges within the university. The College of Business might have a server, the College of Computer Science might have a server, the College of Natural Sciences might have a server, and on and on. We had all these different physical servers, and they were really underutilized. There was lots of unused processing power, lots of unused storage. We spent a lot more money than was really needed for how much they were used. The great news is that today we can take all of those different servers and, instead of dedicating hardware to each one, consolidate them on a single physical server.

As another example, let's say that I've got a Microsoft Windows server, a Linux server, and an Oracle Solaris server. Instead of having three physical boxes running all this, what I can do is take their storage, their processing, all their different resources, and combine them on a single physical server. By doing that, we can dramatically save on our expense. Then that physical server can connect out to the network through a regular Ethernet switch. Now, if I had three physical servers, each server would have its own network interface card with its own MAC address. Same thing here: each virtual server has its own virtual network interface card, and each has its own MAC address. We'll talk more about this in just a moment.

But all of those servers could share a single network interface card leaving the physical server and getting out to the rest of the world. Now, a lot of these physical servers that we host virtual machines on do have multiple network interface cards, but I wanted to show you that it's possible for them all to share a single one. We'll talk more about the virtual NICs and the virtual switches in just a moment. But first, let's talk about the software running on this physical server that makes all this possible. How can I virtualize different machines and have them not interfere with one another? Well, we run software called a hypervisor.

That software is able to create a virtual machine. We can use it to start a VM, stop it, and monitor its usage, and we can be doing that simultaneously for multiple virtual machines. There are two types of hypervisors I want you to know. A Type 1 hypervisor runs right on the hardware. So let's say I've got this physical server; instead of installing something like Microsoft Windows first and then installing the hypervisor on top of that, I install the hypervisor software directly on the hardware. That's why it's called a native, or bare metal, hypervisor.

It runs directly on the server hardware. An example of this is VMware's ESXi hypervisor. The other type, a Type 2 hypervisor, is hosted. In other words, it runs on top of a traditional operating system. In the Microsoft Windows world, we might be using VMware's Workstation product; on an Apple Mac, I might be using VMware's Fusion, or I might be using VirtualBox, which is a free option.

But a Type 2 hypervisor runs on top of an existing operating system, and then we can install virtual servers on top of that Type 2 hypervisor. That means we might have, for example, a virtualized Linux server running on top of VMware Fusion, which is running inside of Apple's macOS. Now let's dive into how all these different virtual machines, or VMs, interconnect. We could have more than one virtual network interface card for each virtual machine, but oftentimes we just say this server gets its own virtual NIC, its own virtual network interface card, and each virtual NIC is going to have its own MAC address. They're going to be unique.

And instead of requiring each virtual NIC to have its own physical NIC that it maps to, we can have all of these virtual NICs share a single physical network interface card on this physical server. To do that, we can have a virtual switch. That's a piece of software that acts like an Ethernet switch, where we could attach all those virtual servers to that vSwitch, or virtual switch, which could then go out our single physical NIC on the server. In large deployments you typically will have multiple physical network interface cards on the physical server, but I wanted to show you that, in software, we can have all these virtual NICs interconnect through a virtual switch. We can even have multiple virtual switches inside of this physical server.
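To make the virtual switch idea concrete, here is a minimal sketch (not real hypervisor code; the class and port names are hypothetical) of the behavior described above: like a physical Ethernet switch, a vSwitch learns which port each MAC address lives on and forwards frames accordingly, flooding frames with unknown destinations.

```python
class VirtualSwitch:
    """A toy model of a vSwitch's MAC learning and forwarding."""

    def __init__(self):
        self.mac_table = {}  # MAC address -> port (e.g., a virtual NIC)

    def receive(self, src_mac, dst_mac, in_port, all_ports):
        # Learn the source MAC on the ingress port
        self.mac_table[src_mac] = in_port
        # Forward out the known port, or flood to all other ports
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in all_ports if p != in_port]


vswitch = VirtualSwitch()
ports = ["vnic1", "vnic2", "uplink"]  # hypothetical port names

# First frame: destination unknown, so it is flooded
out = vswitch.receive("AA:AA:AA:AA:AA:01", "AA:AA:AA:AA:AA:02", "vnic1", ports)
print(out)  # ['vnic2', 'uplink']

# Reply frame: the switch has now learned where ...:01 lives
out = vswitch.receive("AA:AA:AA:AA:AA:02", "AA:AA:AA:AA:AA:01", "vnic2", ports)
print(out)  # ['vnic1']
```

This is the same learn-and-forward logic a physical Ethernet switch uses; the only difference in a hypervisor is that it happens entirely in software.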

I recently set up a virtualized environment where I had multiple switches, and in between some of those virtual switches I had one or two virtual routers. That's right, we can virtualize our router in this physical server as well. And all this virtualization might be happening at our site, maybe in our data center, or, as many people are now doing, those resources might be migrated to the cloud. As just one example of a cloud provider, we have Amazon AWS. There are lots of others out there as well, but that's an example a lot of people have heard of. Let's say that we have all these VMs running in Amazon AWS or some other provider. Those are our virtualized servers. But maybe we want to protect those servers a bit.

We would love to have a firewall. How do we get a firewall in the cloud? We could install a virtual firewall on a virtual machine inside of our cloud provider's platform. Maybe we wanted a router to do some routing functions in that cloud; well, as already mentioned, we could have a virtualized router. And let's say that all these different VMs are there for load balancing; they contain the same content. I could have a virtualized SLB, a server load balancer, which distributes the traffic across those different virtual machines so I don't overload a single virtual machine. That also allows me to take one of those virtual machines out of the rotation, maybe for maintenance. Or if I think I'm going to have a big demand coming in, I can spin up some additional virtual machines temporarily. That's going to be a lot more flexible and cost efficient than having multiple physical servers that we might only need for a brief period of time. And that's an overview of how we can virtualize lots of different things, including servers, network interface cards, switches, routers, and on and on, all on top of hypervisor software, which might be running locally or in the cloud.
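The load-balancing behavior just described can be sketched in a few lines. This is a hypothetical round-robin SLB (the simplest distribution strategy; real load balancers offer several), showing both spreading requests across VMs and taking one VM out of rotation:

```python
from itertools import cycle


class ServerLoadBalancer:
    """Toy round-robin SLB; VM names here are hypothetical."""

    def __init__(self, vms):
        self.vms = list(vms)

    def remove(self, vm):
        # Take a VM out of rotation, e.g., for maintenance
        self.vms.remove(vm)

    def distribute(self, requests):
        # Hand each incoming request to the next VM in the pool
        pool = cycle(self.vms)
        return [(req, next(pool)) for req in requests]


slb = ServerLoadBalancer(["vm1", "vm2", "vm3"])
print(slb.distribute(["req1", "req2", "req3", "req4"]))
# [('req1', 'vm1'), ('req2', 'vm2'), ('req3', 'vm3'), ('req4', 'vm1')]

slb.remove("vm2")  # pull vm2 from the rotation for maintenance
print(slb.distribute(["req1", "req2"]))
# [('req1', 'vm1'), ('req2', 'vm3')]
```

Spinning up extra capacity for a demand spike would just be the reverse operation: appending newly created VMs to the pool.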

3. 13.2 Virtual IP

Let's say that we've got a PC that's pointing out to the Internet. As part of that PC's configuration, it has an IP address. So on screen we have PC1 with an IP address of 10.1.1.100. It's got a subnet mask, it's got maybe the IP address of a DNS server, and it has the default gateway; that's what DG represents on screen. This PC's default gateway is 10.1.1.1. In other words, if it wants to go somewhere other than its local subnet, such as the rest of the corporate network or the Internet, it needs to go to a next-hop IP address of 10.1.1.1. That's its default gateway. That's the way it gets off of its local subnet, and that's the IP address of a router.
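The forwarding decision the PC makes can be sketched with Python's standard `ipaddress` module, using the addresses from the example (the subnet mask is assumed to be /24, which fits the 10.1.1.x addressing shown):

```python
import ipaddress

# PC1's configuration from the example above (/24 mask is an assumption)
local_net = ipaddress.ip_network("10.1.1.0/24")
default_gateway = ipaddress.ip_address("10.1.1.1")


def next_hop(destination):
    """On-subnet destinations are delivered directly;
    everything else goes to the default gateway."""
    dst = ipaddress.ip_address(destination)
    return dst if dst in local_net else default_gateway


print(next_hop("10.1.1.50"))  # 10.1.1.50 (local subnet: deliver directly)
print(next_hop("8.8.8.8"))    # 10.1.1.1  (off-subnet: use the default gateway)
```

This is the essence of "getting off the local subnet": any destination outside 10.1.1.0/24 is handed to the router at 10.1.1.1.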

However, if I'm pointing to a single router, do you see that that could potentially be a single point of failure? What if I'm pointing to R1 in this topology and R1 goes down? What happens then? Well, some operating systems do allow you to have multiple default gateways, but in my experience, that doesn't work really well and certainly does not work quickly. We would like to make this fairly transparent to the end user: if we were using R1 to get to the Internet and it goes down, we'd love to start using R2 very, very quickly. And that's what we can do with Virtual Router Redundancy Protocol, or VRRP. Specifically, we get to point to a virtual IP address. That 10.1.1.1 can be the IP address of our interface, as we see here on R1.

It's the IP address of interface Gigabit 0/1. But it does not have to be; it could be 10.1.1.3, as an example. Either way, it's a virtual IP address, and it's going to be represented by this virtual router that's sort of ghosted out on screen. So I'm really pointing to this virtual router, and you might say, how does the PC know the difference between the virtual router and R1? They seem to have the same IP address. Well, the difference is the MAC address. We're going to have a MAC address associated with that virtualized router, and we have R1, let's say, servicing that virtual router most of the time; we call it the master. But if it goes down, R2 is acting as the backup.

You might wonder, how does R2 know if R1 goes down? Well, R1 is going to be sending out an advertisement, by default every 1 second, to R2, saying, hey, I'm still here, I'm still the master, I'm still forwarding traffic off of this subnet, so you don't need to do anything. But if R1 were to go down, for whatever reason, then R2 would notice after an extended period of time. If it did not hear those advertisements, it would say, I think something has happened, and I probably need to take over. So let's imagine R1 goes down. In a fairly short amount of time, R2 then becomes the master, and traffic now is going to go out via R2 to the Internet.

However, from the perspective of the PC, we're still pointing to the same default gateway, and we're still pointing to the same MAC address to get to that default gateway, because we're really going to a virtual router. Let's take a look at some other characteristics of VRRP. The first thing I would like you to know is that it is a standard, and when I say it's a standard, I mean it's a standard first hop redundancy protocol, an FHRP, as opposed to something that is proprietary to a vendor. For example, Cisco Systems has a couple of proprietary first hop redundancy protocols: HSRP, the Hot Standby Router Protocol, and GLBP, the Gateway Load Balancing Protocol. However, Cisco also supports VRRP, which is an industry standard; in fact, it's defined in RFC 3768. We call the router that's actively forwarding traffic the master and the router that's standing by the backup. If you've worked with Cisco's HSRP, those roles go by different names: instead of master, it's called active, and instead of backup, it's called standby. And I mentioned that the way we distinguish between the virtual router and R1, the physical router, since they both have the same IP address, is the MAC address. When we sent out an ARP saying, hey, can somebody tell me the MAC address for 10.1.1.1, R1 responded on behalf of the virtual router, saying, yeah, here's the MAC address, and it's in this format: 0000.5E00.01XX. We've got four zeros and a 5E, then three more zeros and a one, and those last two Xs represent the hexadecimal value we would plug in to match the VRRP group.

You see, when we're setting up VRRP, we can have multiple instances running on our router, maybe one instance running for one interface, another instance running on another interface, and we separate those instances by giving them group numbers. Let's say that I gave this subnet a VRRP group number of 10. Well, in hexadecimal, we would write that as 0A; that would go in the place of those two Xs you see on screen. Another feature you may have come across if you've worked with Cisco's HSRP solution is a feature called preemption. Preemption means if R1 is the master and it goes down for whatever reason, maybe because somebody kicked the cord out of the power strip.

If it goes down and comes back up, even though it has the higher priority that we've configured (that's how it became the master in the first place), in order for it to take its old job back and regain its master role, it has to have this feature called preemption enabled. And here's a distinction: with Cisco's HSRP, preemption is disabled by default, but with VRRP it is enabled by default. I mentioned that the way R2 knows R1 is still there is that, every 1 second by default (we can adjust that), R1 sends an advertisement over to R2 saying, hey, I'm still here. Now, it's not that uncommon that we might occasionally, somewhat randomly, drop a packet. What if we dropped one of those advertisements? Does R2 immediately switch over and say, all right, I'm now the master? No, it's going to wait for a while.

And that time it's going to wait is called the master down interval. We calculate that interval with this somewhat convoluted formula: it's three times the master advertisement interval, which is three times one, so 3 seconds. Then, remember that we have a priority assigned to these routers, and the router with the highest priority value is elected as the master. The default priority is 100, by the way. So the rest of the formula is 256 minus the priority, divided by 256. That element of the equation is called the skew time. So by default, it's 3 seconds plus the skew time; in other words, it's a little over 3 seconds that R2 is going to wait before it says, I don't think R1 is there anymore, I want to take over the master role. And those advertisements are going out not to R2's specific IP address.

They're going out to a multicast address of 224.0.0.18, and R2 has joined that multicast group. That's how it sees those advertisements. And as I pointed out, with VRRP it is possible for the virtual IP address to be the same IP address that we have assigned to a physical interface on a router. The reason I make a point of that is that it's not possible with Cisco's HSRP. But that's a look, in theory, at how we can have a virtual IP address, and a virtual MAC address as well, that reference a virtualized router: a router that can be serviced by more than one physical router for redundancy purposes.
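The two bits of VRRP arithmetic covered above, the virtual MAC format and the master down interval, can be sketched in code as a quick sanity check:

```python
def vrrp_virtual_mac(group):
    """Virtual MAC is 0000.5E00.01XX, where XX is the
    VRRP group number (VRID) written in hexadecimal."""
    if not 1 <= group <= 255:
        raise ValueError("VRRP group number must be between 1 and 255")
    return "0000.5E00.01{:02X}".format(group)


def master_down_interval(advertisement_interval=1, priority=100):
    """Master down interval = 3 * advertisement interval + skew time,
    where skew time = (256 - priority) / 256. Defaults are the
    1-second advertisement interval and priority 100."""
    skew_time = (256 - priority) / 256
    return 3 * advertisement_interval + skew_time


print(vrrp_virtual_mac(10))    # 0000.5E00.010A (group 10 -> 0A in hex)
print(master_down_interval())  # 3.609375 -> "a little over 3 seconds"
```

With the defaults, the skew time works out to 156/256 = 0.609375 seconds, which is exactly the "little over 3 seconds" mentioned above.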

4. 13.3 Storage Area Network (SAN) Technologies

In this video, we want to talk about some different options for having data storage on a network. First, let's consider a traditional server with an internal hard drive or even an externally connected hard drive. Here we have Server 1 and Server 2, and they each have their own independent hard drive. This is called DAS, for direct attached storage. In other words, the hard drive is directly attached to its server. The downside of this is, let's say Server 2's hard drive has extra capacity; Server 1 cannot easily write to Server 2's hard drive. It's writing to its own local hard drive. But the type of storage we're doing here locally is very efficient. It's using block storage as opposed to file storage. You see, block storage is where we're dealing with bits and bytes.

We're not having to transfer an entire file at one time; we're just transferring a block of data. That's as opposed to the file storage we might have if we go out to a network server. You might say backslash, backslash, then the server's name, another backslash, then the shared drive name, and that's how you get to a shared drive containing a bunch of files. If you pull one of those files down, you're using file storage, not block storage, and it's not as efficient: you're having to transfer entire files at one time. The local approach is more efficient because we're using block storage, and we're doing it over a SCSI connection; that's Small Computer System Interface.
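The block-versus-file distinction can be illustrated with an ordinary file standing in for a disk (a simplified sketch; real block storage happens at the driver level, and the 512-byte block size is just a common convention):

```python
import os
import tempfile

BLOCK_SIZE = 512  # a common disk block size, used here for illustration

# Create a tiny 8-block "disk" image full of zeros
path = os.path.join(tempfile.mkdtemp(), "disk.img")
with open(path, "wb") as f:
    f.write(b"\x00" * BLOCK_SIZE * 8)

# Block-level style: seek to block 3 and write just that one block,
# leaving the other 7 blocks untouched
with open(path, "r+b") as disk:
    disk.seek(3 * BLOCK_SIZE)
    disk.write(b"A" * BLOCK_SIZE)

# File-level style: to read anything, the whole file is transferred
with open(path, "rb") as f:
    data = f.read()

print(len(data))                                 # 4096 (the entire file)
print(data[3 * BLOCK_SIZE:3 * BLOCK_SIZE + 4])   # b'AAAA' (just block 3 changed)
```

The point of the sketch: block-level access touches only the blocks it needs, while file-level access moves whole files, which is why block storage is the more efficient of the two.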

But since we're not able to share resources easily or efficiently between these servers, another option is to have a hard drive on the network. This is called a NAS, network attached storage. This is a device that might have multiple hard drives, and they might be connected in some sort of redundant configuration like RAID. We can have multiple devices, such as servers or even clients, accessing this shared repository of files on the NAS server. Now, this NAS server acts like a file server; we're doing file-level storage, so it's not quite as efficient as the block-level storage we would be doing with a local hard drive. Something that can give us that more efficient block-level storage over the network, though, is a technology called Fibre Channel. Fibre Channel is going to involve a Fibre Channel switch that connects out to our servers, and it's also going to connect down to the Fibre Channel storage array. But you'll notice that each of our servers has two connections. Instead of having just a single network interface card, they have an Ethernet network interface card, and they also have to have a card for Fibre Channel.

So they have a network interface card, a NIC, and they have an HBA, a host bus adapter, for connecting in to the Fibre Channel network. This is going to be more efficient; it gives us that block-level storage. But we've got to have a couple of adapters per server, so it's going to be a little bit more expensive. However, there is a way to do this block-level storage over an Ethernet network. We can use something called Fibre Channel over Ethernet, or FCoE for short. Here, the servers do not have to have an HBA, a host bus adapter. Instead, they've just got their Ethernet network interface card that connects to an Ethernet switch. But connected to that Ethernet switch is an FCoE switch, and that's going to convert between the Ethernet environment and the Fibre Channel environment to get us down to the Fibre Channel storage array.

And you might see this in large-scale deployments in large businesses, because it's an expensive solution to have the FCoE switch and the Fibre Channel storage. Another way of getting that more efficient block-level storage is to use something called iSCSI. Here we have what seems to be a NAS, or network attached storage device, except instead of using file-level storage, we have a SCSI connection running over IP, and it's that SCSI connection that lets us do the more efficient block-level storage. So when you go to a computer that is pointing to an iSCSI storage array, that storage array does not look like a shared drive on the network; it looks like a locally attached drive, and that's why we get to do that block storage. But that's a look at some different ways that we can have storage on our network.
