N10-008: CompTIA Network+ Certification Video Training Course
18h 32m
110 students
4.0 (70)

Do you want efficient and dynamic preparation for your CompTIA exam? The N10-008: CompTIA Network+ certification video training course is a superb tool in your preparation. The CompTIA N10-008 certification video training course is a complete batch of instructor-led, self-paced training that can serve as a study guide. Build your career and learn with the N10-008: CompTIA Network+ certification video training course from Exam-Labs!




N10-008: CompTIA Network+ Certification Video Training Course Outline

Module 1: Introducing Reference Models and Protocols

N10-008: CompTIA Network+ Certification Video Training Course Info

Gain in-depth knowledge for passing your exam with the Exam-Labs N10-008: CompTIA Network+ certification video training course. Exam-Labs is a trusted and reliable name for studying and passing with VCE files that include CompTIA N10-008 practice test questions and answers and a study guide. It is unlike any other N10-008: CompTIA Network+ video training course for your certification exam.

Module 2: Network Pieces and Parts

10. 2.9 Load Balancers

Let's say we have a server of some kind, perhaps a web server that handles e-commerce traffic for our company, and it would be great to have a backup and some load balancing to ensure that it doesn't go down and cost us all of those sales. It would be great to have other servers with identical content. But the question is: if I've got multiple servers containing the same content, and let's pretend they're virtual servers, how do I spread the load across all of those servers? Well, one option is to use a network appliance called a load balancer. Let's say these virtual servers on screen contain identical content for a website. Maybe the first packet that comes in goes to the top server, the next packet goes to the middle server, and the next packet goes to the bottom server. And again, these servers have identical content, so it really doesn't matter which server the packet gets forwarded to. And by doing this load balancing, we're going to take a load off the hard drive and the processor on any single server. And the load balancer can also help out with maintenance. Let's imagine that I want to take one of those servers offline to do an upgrade on it. I need to swap out a hard drive, for example. Well, is that going to impact our performance? Maybe not. We can simply go into the load balancer and remove one of the servers from the pool of servers that we're load balancing across. And if we run a commercial during the big game, for example, and we're expecting an influx of orders to our e-commerce site, having a load balancer will help us adjust our capacity. Sometimes this is called elastic server capacity because we can stretch that server capacity by spinning up some additional virtual servers. And then, after the demand dies down, maybe a few hours after the big game, we can take some of those virtual servers offline and stop paying our service provider for those servers. And here I'm showing you that load balancer as an appliance.
It doesn't necessarily have to be an appliance. It could be a router that's configured to do load balancing, or there are even virtual load balancers. So if you have these virtual servers in the cloud, like Amazon AWS, you could have a virtual load balancer installed in Amazon AWS and use it to load balance across your virtual servers.
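To make the round-robin idea concrete, here is a minimal Python sketch of the behaviour just described: requests rotate through a pool of identical servers, one server can be drained for maintenance, and extra capacity can be added before the big game. The class and method names are made up for illustration; they don't correspond to any particular vendor's product.

```python
class RoundRobinBalancer:
    """Toy round-robin load balancer over a pool of identical servers."""

    def __init__(self, servers):
        self.pool = list(servers)
        self._index = 0

    def next_server(self):
        # Hand each new request to the next server in rotation.
        server = self.pool[self._index % len(self.pool)]
        self._index += 1
        return server

    def drain(self, server):
        # Remove a server from the pool, e.g. to swap out a hard drive.
        self.pool.remove(server)

    def add(self, server):
        # "Elastic" capacity: spin up an extra virtual server for peak demand.
        self.pool.append(server)
```

For example, `RoundRobinBalancer(["top", "middle", "bottom"])` would hand out "top", "middle", "bottom", "top", ... on successive calls to `next_server()`, and `drain("middle")` takes that server out of rotation without interrupting the others.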

11. 2.10 Advanced Filtering Appliances

In this video, we're going to examine some networking appliances that can be used to filter out unwanted traffic. And you might already know about some of these appliances or some of the features on these appliances. First, let's think about a firewall. We know that a firewall tries to protect one area, like our enterprise network, from another area, like the Internet. And a basic firewall does what's called stateful inspection. We send traffic out to the Internet. The firewall memorises the source and destination IP addresses and port numbers. And when the return traffic comes back in, it's able to recognise it and allow that traffic back in, because it originated on the inside of our network. But some firewalls have more features than just that basic service, and those firewalls might be called next-generation firewalls. Or you might see it written as a Layer 7 firewall or an application layer firewall. As an example of what a next-generation firewall might do, it might have knowledge of the nature of different applications. If I'm setting up a Voice over IP phone call between a couple of IP phones, I might be using the protocol called SIP, the Session Initiation Protocol, to get the call set up. But then once the call is set up, we start using RTP, the Real-time Transport Protocol, to stream the actual voice back and forth between the IP phones. When we switch from SIP to RTP, we may have a firewall that knows that RTP frequently follows SIP and can determine that these RTP packets it's now seeing are part of the same voice session that it began inspecting when it was using the SIP protocol. This firewall might also be able to do deep packet inspection, filtering the data that it finds at Layer 7, the application layer, and preventing sensitive information from being leaked out to the Internet.
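The stateful inspection behaviour described above can be sketched in a few lines of Python. This is only an illustration of the idea, with a made-up class and a 4-tuple state table; it is not how any specific firewall product is implemented.

```python
class StatefulFirewall:
    """Toy stateful inspection: remember outbound flows, allow matching replies."""

    def __init__(self):
        self.state_table = set()

    def outbound(self, src_ip, src_port, dst_ip, dst_port):
        # Memorise the source/destination IPs and ports of outgoing traffic.
        self.state_table.add((src_ip, src_port, dst_ip, dst_port))

    def allow_inbound(self, src_ip, src_port, dst_ip, dst_port):
        # Return traffic is the mirror image of a remembered outbound flow;
        # anything else did not originate inside our network and is dropped.
        return (dst_ip, dst_port, src_ip, src_port) in self.state_table
```

So a reply from the web server we contacted is recognised and allowed back in, while an unsolicited packet from some other host on the Internet finds no matching state and is blocked.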
Perhaps we could have an intrusion prevention system sensor or an IPS sensor built into this firewall, where the firewall itself keeps a database of known attacks and can analyse inbound traffic as it enters the network. And if the traffic coming into the network matches the signature of a well-known attack, we can drop it in its tracks before it ever gets an opportunity to get inside the network and do damage. And one type of attack that has made the news a lot is a ransomware attack, where perhaps you get this message on your screen that your data on your hard drive has been encrypted and you have to pay so much bitcoin to this person in order to get the key to decrypt all of your data. It's like they're holding your data hostage. That's why it's called ransomware. That's a type of malware, by the way. And malware can also get on your system and do bad things. And what these malicious people on the internet have been doing lately is sending their malicious traffic—their malware, their ransomware traffic—in to infect our clients. They send that over an encrypted TLS tunnel. Now, does that just destroy our ability to analyse that traffic? After all, if I cannot read the traffic because it is all encrypted, how can I know that it's malicious? I'm just going to let it run through. Well, actually, there is something called encrypted traffic inspection, which allows us with a very high degree of certainty to identify malicious traffic even though it's encrypted. And when I first heard about this, it just seemed nearly impossible. How can we recognise malicious traffic if it's all scrambled up? Well, one way that some companies used to do this is that they would decrypt the traffic and inspect it before they sent it on. That could pose a couple of issues. It's going to take time and resources to do all that decryption, and it might break some privacy policies to be decrypting this encrypted traffic. 
But with encrypted traffic inspection, what we can do is use statistical analysis with these huge data sets to recognise what is probably malicious traffic. And the best metaphor I've heard to explain this is going to a doctor's office. Maybe you're not feeling well. You go to a doctor's office, and you give the doctor your symptoms. And after they hear your symptoms, they think, "All right, based on this and this, and the fact that you're running a fever here, you probably have this particular illness." That's kind of what encrypted traffic inspection is doing. It's going to take the characteristics of this traffic coming in and compare them against huge data sets. And these data sets have characteristics of benign traffic as well as infected traffic. For example, in a TLS session, we start off with a Hello message. And that's not encrypted. It contains what are called cipher suites. A cipher suite is a list of parameters that we're going to use to negotiate the TLS connection. Well, one thing encrypted traffic inspection might be able to do is recognise certain cipher suite listings known to have accompanied malicious traffic. And it can also examine the lengths of the packets and the intervals between packets. If it measures the time between the arrival of two packets, and that time is significantly different statistically, that could make the traffic suspect. Encrypted traffic inspection can often have more than a 99% accuracy rate. Another appliance I want you to know about is a content filter. And the main purpose of a content filter is to filter out traffic that might be considered objectionable. This might be something like pornography or violence or hate speech. Parents might install software on a computer, and that could protect their home computer from things like that. Or in a large enterprise, we might have an appliance dedicated to filtering out what we consider to be objectionable traffic.
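As a toy illustration of that idea, and nothing like a production classifier, here is a Python sketch that scores only unencrypted metadata: the offered cipher-suite list, the packet lengths, and the inter-arrival intervals. The fingerprint list, the thresholds, and the scoring rule are all invented for the example; real products derive these from the huge data sets described above.

```python
from statistics import mean

# Hypothetical fingerprint: a cipher-suite listing previously seen
# accompanying malicious traffic (purely illustrative).
KNOWN_BAD_SUITE_LISTS = {("TLS_RSA_WITH_RC4_128_SHA",)}

def looks_suspicious(offered_suites, packet_lengths, intervals_ms):
    """Score unencrypted metadata only; the payload itself stays encrypted."""
    score = 0
    if tuple(offered_suites) in KNOWN_BAD_SUITE_LISTS:
        score += 2  # cipher-suite listing matches a known-bad fingerprint
    if packet_lengths and mean(packet_lengths) < 100:
        score += 1  # unusually short packets, e.g. malware beaconing
    if intervals_ms and max(intervals_ms) - min(intervals_ms) < 5:
        score += 1  # machine-regular timing, unlike human-driven traffic
    return score >= 2
```

Tiny, regular beacons offering a known-bad cipher suite would score as suspicious, while ordinary browsing traffic with varied packet sizes and timing would not.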
And finally, I want you to know about a UTM appliance, a unified threat management appliance. This is a dedicated appliance that can combine multiple filtering functions. And different UTM appliances might have different sets of features. But as an example, we might have a UTM appliance that acts as a firewall. It acts as an IPS sensor, it protects us against malware, it can terminate a VPN connection coming in, and it can do content filtering. And that's a look at some advanced filtering appliances.

12. 2.11 Proxy Server

In this video, we want to consider a proxy server. A proxy server may still appear on some networks you work with. However, it's not as needed as it once was. Back in the 1990s, I administered a proxy server at a university where I was the network administrator, and it helped us out there because we had limited bandwidth going out to the Internet, and it could add a layer of security as well. But today it's more common to have high-speed Internet connectivity, and we have more advanced security appliances that can scan content coming in from the Internet. But here's what a proxy server does, if you come across one. A proxy server will receive traffic from an internal client and will terminate the connection. And then, just as if it were placing the connection itself, it's going to originate that same connection going out to the destination on the Internet, like a web server. So if the web server asked, "Who is sending me this packet?", the source IP address would be the IP address of the proxy server, not that of the PC inside the enterprise. Now, what's the advantage of doing that? Termination and then origination of this packet flow could, for one thing, allow that proxy server to filter the content. Maybe we want to block particular URLs, as an example. A proxy server might be able to do that. And when I used one back at the university, the main reason we were using it was for caching, because we did have limited connectivity out to the Internet. If we had lots of clients booting up their computers and going to yahoo.com, for example, whatever the graphic was for the day on yahoo.com could be stored on the proxy server. That way, when the second person went to yahoo.com, or wherever they were going, that graphic could be served up locally from the hard drive of the proxy server without pulling it down again from the Internet. That process was called caching, and at the time it saved a decent percentage of bandwidth. It was definitely worth having a proxy server.
However, maybe not so much today if we do have a network with fairly high-speed connectivity out to the Internet. And in order to use a proxy server, we would traditionally have to go into the browser and point it to the proxy server. So some applications may need to be configured in order to use the proxy server. However, there have been some advancements in proxies, and you may have a proxy that can operate transparently, where the apps do not have to know that there is a proxy server sitting in the middle. The proxy server can just transparently terminate and then originate that packet flow going out to the Internet.
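The terminate, filter, cache, and originate behaviour described above can be sketched in Python. The names here are invented for illustration, and a real proxy such as Squid is far more involved; this just shows the flow of a request through the proxy.

```python
class CachingProxy:
    """Toy forward proxy: terminates the client's request, filters URLs,
    serves cached content locally, and otherwise fetches on the client's
    behalf (so the origin server sees the proxy's IP, not the client's)."""

    def __init__(self, fetch, blocked_urls=()):
        self.fetch = fetch            # function that actually pulls from the origin
        self.blocked = set(blocked_urls)
        self.cache = {}

    def get(self, url):
        if url in self.blocked:
            return "403 Forbidden"    # content filtering at the proxy
        if url in self.cache:
            return self.cache[url]    # served locally: no Internet bandwidth used
        body = self.fetch(url)        # proxy originates the outbound connection
        self.cache[url] = body
        return body
```

The first client to request a page pays the cost of pulling it from the Internet; every later request for the same URL is answered from the proxy's local cache, which is exactly the bandwidth saving described above.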

Module 3: Stay on Top of Your Topologies

1. 3.1 Star Topology

Let's consider in this video one of the most popular types of network topologies, and that's a star topology. And we commonly see an Ethernet switch as the central point of that star. And then radiating out from that central point, we've got different network devices, such as a laptop. We might have an access point for wireless communication. We might have a printer or an IP phone. And the Ethernet switch sits in the middle, and it kind of looks like a star. However, this is not the only device that we might find at the centre of a star topology. Before Ethernet switches were widespread, we had Ethernet hubs. Now, a hub is not nearly as intelligent as an Ethernet switch. It doesn't do a great job of forwarding traffic. You send a packet into that Ethernet hub, and it just regenerates it out of all the other ports because it doesn't really know where the packet is supposed to go. A switch is much better. Say the laptop is talking to the printer. We send the packet into the switch, and the switch knows that this printer lives off of this port, and it only sends the packet where it needs to go. But back to our discussion of the topology, there are a few characteristics I want you to know about a star topology. First, if one of the links were to fail, that wouldn't impact the other links. If the laptop's link were to go down, the access point, the printer, and the IP phone would continue to function. That's not the case with some networks, such as a bus topology. Perhaps we were using 10BASE2 or 10BASE5 Ethernet technologies in those networks, and we had an end-user device that tapped into a coaxial cable. And if that coaxial cable were to fail, or in other words, if that link were to fail, every device connected to that coaxial cable would lose connectivity. But here, a single link failure does not bring everybody down. But that's not to say there's not a single point of failure.
That central point of the star, the Ethernet switch in our example, is a potential single point of failure. Because if that switch goes down, what happens to network connectivity for all the devices we talked about? They don't have a way to get to the network. That's why, in high-availability environments such as a data center, we might have a server with more than one network interface card for redundancy. One network interface card might connect to one Ethernet switch, and another network interface card might connect to a different Ethernet switch, again for redundancy. That way, we could lose a switch and still have connectivity out to the rest of the network. However, with most user-facing devices, such as a laptop, the switch is probably going to be a single point of failure. And there's one more thing you should remember about the star topology: as we've already mentioned, it is very popular in modern networks in the form of an Ethernet switch.
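The hub-versus-switch difference described above comes down to a MAC address table, which can be sketched in a few lines of Python. This is purely illustrative (ports and MACs are simplified to plain values), not a model of any real switch.

```python
class EthernetSwitch:
    """Toy switch: learn which MAC address lives off which port, and
    forward a frame only where it needs to go. An unknown destination
    is flooded out every other port, which is all a hub ever does."""

    def __init__(self, ports):
        self.ports = list(ports)
        self.mac_table = {}

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]     # known: forward out one port
        # Unknown destination: flood like a hub (every port but the source).
        return [p for p in self.ports if p != in_port]
```

The first frame from the laptop to the printer gets flooded, but once the printer has replied, the switch has learned both ports and later frames go out exactly one port.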

2. 3.2 Mesh Topology

A wide area network, or WAN, has multiple geographically dispersed locations, such as the one shown on the screen. With these different offices, we may see some sort of mesh topology. And if we have a full mesh topology, that means that every site is connected to every other site directly. For example, if office B wants to go to office A, it doesn't have to go to office D and then get rerouted; it's going to go directly to office A. Office C can go directly to office B, office D can go directly to office E, and so on. However, with five sites, we've got a lot of links to set up and maintain and pay for. How many links do we have? Let's count them: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 links. So this is not going to scale very well. In fact, here's a formula that tells us how this works. The formula is n times (n minus 1) divided by 2, where n is the number of sites that we have. And this is going to calculate for us the number of links we need for a full mesh topology. In our case, we had five offices: A, B, C, D, and E. So n equals five. That's going to give us five times (five minus one), which is five times four, or twenty; divided by two, that equals ten. With five sites, we have to have ten links, and if we had 20 sites, that would be 20 times 19 divided by 2, so that's 190 links. This is not going to scale very well. So what can we do instead? Well, one option besides a full mesh, where everybody connects to everybody else, is that we could observe the traffic patterns and say there really isn't a need for office D to go directly to office E. There's not that much traffic that goes directly between those two offices. So we could prune that one off. But notice that Office D still has a path to get to Office E. One path, and it has multiple paths, is to go from D to A to E. So we can still get there, but since we don't have a lot of traffic, it's probably not going to justify an extra WAN link.
Maybe we also prune off that link between B and C, maybe the link between B and D, and B and E, and maybe we're left with this topology. This is going to be called a partial mesh topology; it's not a full mesh, where everybody is connected to everybody else. But we have strategically put in links where we have the most traffic flow. Maybe there is a lot of traffic flow between offices C and E. So yeah, let's give them a link. Now let's do a side-by-side comparison of some of the characteristics of a full mesh topology versus a partial mesh topology. With full mesh, we have an optimal path; we can go directly from one site to another site. With partial mesh, it might be optimal, but it might not be. It might be suboptimal. In the case of D to E, we had to go through another site first, but that was an acceptable trade-off based on the cost of having that extra link and the low traffic flow between those offices. Full mesh is not very scalable. We saw that with only five sites: we needed ten links to have a full mesh, and I said with 20 sites, we would need 190 links. Partial mesh is going to be more scalable because we don't have to have everybody connecting to everybody else. And because a full mesh has more links, obviously it's going to be more expensive compared to partial mesh. And those are some characteristics I want you to know about mesh topologies.
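The full-mesh link-count formula is a one-liner in Python, and a quick check reproduces the numbers from the video:

```python
def full_mesh_links(n):
    """Links needed to fully mesh n sites: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# 5 offices (A through E) need 10 links; 20 sites would need 190.
print(full_mesh_links(5))   # 10
print(full_mesh_links(20))  # 190
```

The quadratic growth is what makes full mesh so expensive: doubling the number of sites roughly quadruples the number of links.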

3. 3.3 Ring Topology

A ring topology allows the different devices on a network to take turns transmitting data. This is as opposed to the old Ethernet hub, where we could have only one device transmitting at a time. Those Ethernet devices use something called CSMA/CD, carrier sense multiple access with collision detection. They listen to the network to make sure nobody else is speaking, and if nobody else is speaking, then they transmit. However, if we had two devices listening during the same period of silence, they could transmit simultaneously, and that's not allowed. Their packets would collide, resulting in a collision and a retransmission. A ring topology avoids that by taking turns. Assume a laptop has some data to send to the server. In a ring topology such as token ring, which is the classic example, we're going to be passing a virtual token around the ring. Laptop One is going to send it to Laptop Three, which is going to send it to the server, which is going to send it to Laptop Two, which will send it back to Laptop One. And right now, let's say Laptop One is in possession of the token, and it's empty. It was empty when Laptop One received it, but Laptop One puts some data in that token, says this is destined for the server, and sends the now-populated token over to Laptop Three. When Laptop Three gets it, it's going to examine that token. It's going to say, there's some data in here, but it's not for me. So it's going to send that token down to the server. The server examines it and says, there is data, and it is for me. The server is then going to remove the data from the token, leaving an empty token. If the server wanted to send something, it could populate the token; after all, it's currently in possession of the token. But let's say it didn't. At this point we have an empty token, and we're going to send that over to Laptop Two, which is going to examine it. We'll say that Laptop Two has nothing to send right now.
So this empty token takes us back to the beginning, to Laptop One. Now logically, that's the way that a ring topology works. Another example of a ring topology is FDDI. That's where we have a couple of rings, and the physical media is fibre optics running at 100 megabits per second. It's still more of a legacy technology, just as token ring is a legacy technology. But sticking with the token ring example, that's the classic example that most people think of with a ring topology. Even though we're logically in a circle, physically we did not really run a cable from Laptop One to Laptop Three and then over to the server. Physically, it was connected much like an Ethernet network, in what looks like a star topology. In other words, we had a device at the centre, and that device was called an MAU, a multistation access unit, and the devices would be linked to that MAU. So think about this for a second. This is a big distinction I want you to make. There's a difference here between the logical topology and the physical topology. Logically, we were a ring. We logically passed a token from one device to the next, to the next, to the next. But physically, we're a star. So, in some cases, we do have a difference between physical and logical topologies.
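The logical token-passing sequence can be simulated in a few lines of Python. This is a toy model of the idea, not the actual IEEE 802.5 token ring protocol; the function and station names are made up for the example.

```python
def pass_token(stations, sender, receiver, data):
    """Circulate a token around the ring: the sender populates it when the
    token reaches it, each station examines and passes it on, and the
    receiver strips the data out, leaving the token empty again.
    Returns the order in which stations handled the token."""
    token = {"data": None, "dst": None}
    visited = []
    i = 0
    while True:
        station = stations[i % len(stations)]
        visited.append(station)
        if station == sender and token["data"] is None:
            token["data"], token["dst"] = data, receiver   # populate the token
        elif station == token["dst"]:
            token["data"], token["dst"] = None, None       # data removed: token empty
            return visited
        i += 1
```

With the ring ordered Laptop One, Laptop Three, server, Laptop Two, a frame from Laptop One to the server passes through Laptop Three on the way, exactly as in the walkthrough above.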

4. 3.4 Bus Topology

A bus topology was one of the early topologies used in the Ethernet world, where we had a coaxial cable, and that coaxial cable could run from room to room to room. That's actually the first type of network that I worked on, and the network I was on was a 10BASE2 network. There was also a 10BASE5 network that had a thicker coaxial cable. But with a bus topology, all of the network devices tapped into this single coaxial cable that was the bus. And on an Ethernet bus, we can only have one packet at any one time. That means that we cannot have two devices transmitting at the same time. How do we avoid that? Well, Ethernet uses something called CSMA/CD: carrier sense multiple access with collision detection. What's going to happen is, before a device communicates on the network, it's going to listen for a brief period of time to see if anybody else is talking on the network. And if they're not, if the coast is clear, in other words, then it'll transmit. The challenge comes up when two different devices are listening during the same period of silence. They both simultaneously conclude that nobody's talking, so it must be safe to talk. And when that happens, let's say that the two bottom devices on screen listen during the same period of silence. They simultaneously transmit. What happens? Well, those packets collide, and that collision is detected on a bus by a spike in voltage that the computer's network interface card senses, and it knows: something bad happened to my packet; I've got to retransmit. But what's to keep it from happening again? Well, what happens with CSMA/CD is that the PCs that transmitted and had a collision are going to set a random back-off timer. And by adding that element of randomness, hopefully they're now going to transmit at different times. Let's say the PC on the bottom left selected a random back-off timer of ten milliseconds, while the bottom PC on the right selected a random back-off timer of 20 milliseconds.
That means that the bottom-left PC transmits before the bottom-right PC transmits. And the result is, hopefully, no collision this time. Now, this is the way that Ethernet began, with devices tapping into a coaxial cable. Later, Ethernet adopted a star topology where there was a hub in the middle. And that brings up the topic of a physical topology versus a logical topology. For example, here we see how an Ethernet hub logically operates. Even though everything is plugged into a centralised hub, logically it still acts like an Ethernet bus. However, physically, yeah, it looks like a star. We have an Ethernet hub in the middle, and all our devices radiate out from that centralised Ethernet hub. And that's a look at the bus topology.
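The random back-off recovery can be sketched in Python. This is a toy model of the idea: each collided station draws a random delay, and whoever draws the shorter delay transmits first. Real Ethernet uses truncated binary exponential back-off, where the range of possible delays doubles after repeated collisions; here we just use a fixed small range for illustration.

```python
import random

def backoff_and_transmit(stations, rng=None):
    """Toy CSMA/CD recovery: after a collision, every station draws a
    random back-off (in slot times) and retransmits in order of its draw.
    Equal draws mean another collision, so everyone draws again."""
    rng = rng or random.Random()
    while True:
        delays = {s: rng.randrange(0, 4) for s in stations}
        if len(set(delays.values())) == len(stations):
            # No tie: stations now transmit at different times, no collision.
            return sorted(stations, key=lambda s: delays[s])
        # Tie: the stations collided again; loop and draw fresh delays.
```

In the example above, the bottom-left PC's 10 ms draw beat the bottom-right PC's 20 ms draw, so it transmits first; the randomness is what makes a repeat collision unlikely.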

Pay a fraction of the cost to study with the Exam-Labs N10-008: CompTIA Network+ certification video training course. Passing the certification exams has never been easier. With the complete self-paced exam prep solution, including the N10-008: CompTIA Network+ certification video training course, practice test questions and answers, exam practice test questions, and a study guide, you have nothing to worry about for your next certification exam.

