Module 1 - Introducing Reference Models and Protocols
6. 1.5 Maximum Transmission Unit (MTU)
In this video, we want to better understand an MTU, or maximum transmission unit. An MTU is the largest frame or packet that can be transmitted or received on an interface, and that interface could be the interface of most any network device. And notice that I say the largest frame or packet, because we can identify an MTU at layer 2, where we have frames, or at layer 3, where we have packets. Let me give you an example. On screen, let's say that router R1 has an MTU, a maximum transmission unit, of 1500 bytes, and this is a packet size; this is the layer 3 MTU. At layer 2, where we have frames, we have an MTU of 1518 bytes, because an Ethernet header at layer 2 is 18 bytes in size. So we're adding on that 18-byte Ethernet header at layer 2 to account for the extra size of a frame as compared to a packet. And in order for communication to work properly between R1 and SW1, the MTUs on each side of that link should match, as they do here. In this case, R1's interface is on the same segment that connects over to switch SW1, they do have the same MTUs, and we're able to send traffic from R1 through SW1 and out to its destination. But consider a situation where R1 has a smaller MTU than SW1 does. Let's say that R1's packet MTU is 1492 bytes, while the switch has a packet MTU of 1500 bytes. Why would we reduce it to 1492, by the way? Well, as an example, we might be running something called Point-to-Point Protocol over Ethernet (PPPoE). PPPoE adds an eight-byte header, because we have a PPP frame that's encapsulated inside of an Ethernet frame. So we might want to reduce the MTU by eight bytes to accommodate that eight-byte header. That's why it's considered a best practice on a PPPoE link to have a packet MTU of 1492 bytes. However, let's say that we did not change anything on SW1. What's going to happen then?
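To make the arithmetic concrete, here's a short Python sketch of the numbers from this example (1500-byte packets, an 18-byte Ethernet header, an 8-byte PPPoE overhead). Python isn't part of the course, and the helper names are made up for illustration:

```python
ETHERNET_OVERHEAD = 18   # bytes of layer-2 Ethernet header, per the example
PPPOE_OVERHEAD = 8       # bytes of PPPoE/PPP header, per the example

def frame_mtu(packet_mtu: int) -> int:
    """Layer-2 (frame) MTU = layer-3 (packet) MTU + Ethernet overhead."""
    return packet_mtu + ETHERNET_OVERHEAD

def pppoe_packet_mtu(ethernet_packet_mtu: int = 1500) -> int:
    """Recommended packet MTU on a PPPoE link: shrink by the PPPoE header."""
    return ethernet_packet_mtu - PPPOE_OVERHEAD

print(frame_mtu(1500))       # 1518
print(pppoe_packet_mtu())    # 1492
```

Running it reproduces the two values from the slide: a 1518-byte frame MTU and a 1492-byte PPPoE packet MTU.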
Well, SW1 might send traffic into R1, and R1 is going to say, "Whoa, this packet that you're sending me is too big. You're sending me a 1500-byte packet. I cannot handle that. My MTU is 1492 bytes." Now, that's not to say that SW1 is always going to be sending 1500-byte packets; it could certainly send smaller packet sizes. But let's just assume it's sending out the max. That maximum packet size is not going to make it into R1, because it exceeds R1's MTU on that interface. So what is R1 going to do? Well, it could fragment that packet. In other words, it can cut it up into a couple of different packets, each with its own header, and those two smaller packets could be sent on their way. However, let's imagine that in the packet SW1 sent over to R1, the DF field in the IP version 4 header was set to one. There's a bit in an IP version 4 header called the "don't fragment" bit, or the DF bit, that disallows a packet from being fragmented. In a case like that, when SW1 tries to send the traffic into R1, R1 is going to say, "I cannot fragment it, so I have to drop it, because it exceeds my MTU." And we need to let SW1 know that the packet was dropped. So in response, we send back an ICMP message; specifically, it's a "fragmentation needed and DF set" message. Now, this is with IP version 4. Things are a bit different with IP version 6. With IP version 6, there is no DF bit. So what happens? Well, IPv6 still needs to let the sender know about a packet being dropped. As a result, the receiver will return to the sender a "Packet Too Big" ICMP version 6 message. And something else about IPv6 that I want you to know is that it supports a feature called Path MTU Discovery, and that's going to allow the sender to say, "Whoa, it looks like I was sending a packet that was too large. Let me see if I can dynamically adjust that."
And that means the IP version 6 sender will dynamically reduce its packet size, and then it's going to try sending a smaller packet. And if it gets another ICMP version 6 "Packet Too Big" message back, it's going to realise it's still too big, and it will try to reduce it again. It's going to repeat that process until it finally arrives at a packet size that does not result in a "Packet Too Big" message coming back. Now, it's interesting that Path MTU Discovery can be supported on IP version 4, but it's on by default with IP version 6. And that's a look at the maximum transmission unit.
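That discovery process can be sketched as a toy Python model. This is not how any real IP stack is written; the list of per-hop link MTUs is a made-up stand-in for the path, and rather than literally retrying from the start after each "Packet Too Big" message, the sketch walks the path once, lowering the size at each too-small hop, which converges to the same final answer:

```python
def path_mtu_discovery(initial_size: int, link_mtus: list[int]) -> int:
    """Toy model of IPv6 Path MTU Discovery. Any hop whose link MTU is
    smaller than the packet would drop it and report its MTU back in an
    ICMPv6 'Packet Too Big' message; the sender then uses that smaller
    size. Here we fold that retry loop into a single walk of the path."""
    size = initial_size
    for mtu in link_mtus:        # each hop along the path
        if size > mtu:           # this hop would drop the packet...
            size = mtu           # ...so adopt the MTU it reports back
    return size

# A 1500-byte packet crossing a path with a 1492-byte PPPoE hop:
print(path_mtu_discovery(1500, [1500, 1492, 1500]))   # 1492
```

The discovered size ends up being the smallest link MTU along the path, which is exactly the point of the feature.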
7. 1.6 Ports and Protocols
A really common network activity is to go from a client machine, like a laptop, to a web server. Maybe that web server is local to your company; maybe that web server is out on the Internet somewhere. But when we reach out to the web server, not only do we need to know the web server's IP address, we also need to know the well-known port number of web services. We're using HTTP, the Hypertext Transfer Protocol. If this is an unsecured web connection, and we'll talk about a secure way of doing this in just a few moments, we're going to be using the well-known port number of 80. So if we take a look at the header information in the IP packet, the source IP address is the IP address of the client, and the destination IP address is that of the web server. And because we're going to web services on the web server, we're specifying the well-known port number of 80 as our destination port. Now, specifically, it's TCP port 80. Remember, we have TCP and UDP as a couple of our different transport layer protocols. Well, we're going to be using TCP port 80 when we're connecting to a traditional web server. But what about the return traffic? The web server has to get back to us. It needs to transpose those source and destination IP addresses and port numbers. Where did the client get its port number of 49,158? It picked that somewhat randomly. That's called an ephemeral port, or you might hear it called a dynamic port or a private port. But it's going to pick, somewhat randomly, this high number that can be used for the duration of this connection. And I'd like you to know the different ranges of port numbers that we have. Well-known ports are in the range of 0 to 1023.
This is where we commonly have our network services like HTTP and HTTPS, DNS, and DHCP, the big protocols that we typically think of. There's another range for registered ports. This is the range from 1024 to 49,151. And when I say these are registered ports, this is where a vendor or an organisation registers a protocol with the IANA, the Internet Assigned Numbers Authority. This is an attempt to keep different vendors and organisations from overlapping one another as they're assigning ports to their various protocols. And the ephemeral ports, also known as the dynamic or private ports, are in the range of 49,152 to 65,535. That's where this client picked its return port: it picked 49,158, which is in that range. And by the way, you may be doing a packet capture one day and notice that the client has picked a much lower port number, a number that appears to be in that registered port range. Well, what is going on is that some operating systems don't strictly adhere to this ephemeral port range. They might pick a much lower ephemeral port number for the return traffic; they might pick 1500, as an example. So don't be shocked by that. But technically, these are the different port number ranges. And in this video, we're going to go through some common protocols you might run into, and these would be good for memorisation. The way I recommend you memorise these is through the use of flashcards, or maybe you have a flashcard app you could use. But I'm going to be showing you the name of a protocol, I'll give a brief description, and we can see what port number it uses and whether that port number is UDP or TCP or both. So let's go through some of these common protocols we might run into. The first one is FTP, the File Transfer Protocol. This has been popular for many years: somebody has an FTP server out on the Internet, and you can connect to that FTP server. You use your credentials to log in, you could upload files, and you could download files.
When I used to write books for Cisco Press, they had an FTP server, and I would upload my chapters and my graphics to their FTP server. And it uses TCP ports 20 and 21. FTP by itself, though, is not secure; the data is sent in clear text. Now, here's a secure protocol: SSH, Secure Shell. That's what we might use to connect to a remote system, maybe using a terminal emulator. It's going to protect our traffic over an untrusted network, and it's going to be using TCP port 22. But what if we combine these two?
If we want to secure FTP, we could use Secure Shell to do that. That gives us SFTP, Secure FTP. We're still using FTP, but we're sending all that FTP information inside a Secure Shell session, so we're using the Secure Shell port of TCP port 22. Now, here's one that many people get mixed up. We said SFTP is Secure FTP, but we also have FTPS, which is FTP Secure. What's the difference here? Instead of using Secure Shell to protect our FTP data, we're using Secure Sockets Layer (SSL), or a more secure version of that called TLS, Transport Layer Security. And you see on screen that it's going to be using ports 989 and 990, and those could be either TCP or UDP ports. And I mentioned that Secure Shell was a secure way of communicating with a host, maybe a Linux host that we're connecting to remotely, or maybe one of our network devices that we're connecting to for administration. That's a great way to do it, to use Secure Shell to get to a command line. However, an unsecured way of doing that is to use a program called Telnet, which uses TCP port 23. That will give us a plaintext connection with a remote device. So if I'm telnetting into a router, for example, to do some administration, and you were to do a packet capture of my Telnet session as I'm giving the username and password to log in, you could capture that and actually see my username and password. So it's recommended that we use Secure Shell as opposed to Telnet. Another protocol I want you to know is SMTP, the Simple Mail Transfer Protocol. There are a few different protocols we can use for email, and I want you to understand the distinction between each. SMTP is what we use to send mail to an email server.
So you've got an email client on your desktop. You might be using SMTP to send that email, and you're going to be using port 25, specifically TCP port 25, to send that email to the email server. However, SMTP by itself is not secure. You could secure it using SSL or TLS by doing SMTP over SSL or TLS, remembering that TLS is more secure than SSL. Something we use pretty much on a daily basis is DNS, the Domain Name System. When we're going out to a web server somewhere, I don't know about you, but I'm not going to memorise the IP address of all those web servers. I'm going to memorise the name. I'm going to go to KWTrain.com, for example. I have no idea what the IP address is, but I can communicate with a DNS server, and it's going to resolve a domain name like KWTRAIN.COM into the corresponding IP address, and it's going to use either TCP or UDP port 53. And we mentioned FTP was a way to transfer files. We had to log in and give credentials, and then we could upload or download files. Well, there's a lighter-weight version of FTP, and it's called TFTP, the Trivial File Transfer Protocol. And I say it's trivial because it doesn't require that we give user credentials. You don't have to log in or provide a password. If you know the name of the file you want to download, you can just request that it be downloaded from this non-secured TFTP server, or you could upload to this TFTP server, and it's going to use UDP port 69. Let's take a look at a few other protocols. Next up is DHCP, the Dynamic Host Configuration Protocol. This is very, very common: when a device boots up, it needs IP address information to communicate on the network. We could go in and manually configure all of those IP address settings, or we could just use DHCP to say, hey, can somebody give me some IP address information?
And maybe we've got a DHCP server out there that says, "Sure, here's your IP address and your subnet mask and your DNS server and your default gateway's IP address." There are lots of parameters that a DHCP server can provide to us, using UDP port 67. And when we go out to a traditional web server that's not secured, and we talked about this at the beginning of this video, we're going to be using HTTP, and that stands for Hypertext Transfer Protocol. This is how we exchange information with an unsecured web server. But we probably don't want to use HTTP if we're sending something like credit card information. Instead, we want to use a secure version of HTTP. So we could use HTTP Secure, or HTTPS. This is going to secure that web transmission using either Secure Sockets Layer or, today, more likely the more secure TLS, Transport Layer Security. And HTTPS is going to be using TCP port 443. And we talked about SMTP, the Simple Mail Transfer Protocol, which would be used to send mail from our client to an email server. How do we get email from that server and onto our client? Well, we could use POP3, the Post Office Protocol version 3. This is going to retrieve email from an email server. However, one of the downsides that I've run into with POP3 is that if I have multiple clients, they might each be retrieving email from that email server. Let's say I download it from my laptop.
Then I'll try to check that email later from my smartphone, and it's not going to work, because the email has actually been moved off of the email server and down to my laptop. So it's no longer available to see with my smartphone. I'm a big fan of something we'll talk about in a few moments called IMAP instead of POP3. And POP3 by itself is unsecured. If we want to secure that transmission and encrypt it, so that if somebody intercepted it they couldn't make any sense out of it, we could send POP3 over SSL or TLS, and that could be using either a UDP or TCP port. And in the network, it's important that we agree on what time it is, and authoritative time can be communicated to network devices using NTP, the Network Time Protocol. That will use UDP port 123. As a memory aid, when I think of NTP, I think of the old Jackson Five song, and I won't sing it for you, but it goes "ABC, easy as one, two, three." Well, I think NTP is as easy as one, two, three, because that is its port number. Now, I talked a moment ago about how I was not a huge fan of POP3 and that I preferred IMAP. Let's talk about that. IMAP stands for Internet Message Access Protocol, and when we talk about IMAP, we're probably talking about IMAP version 4. This is a way for us to view email on the email server, but we're not necessarily removing it from the server and bringing it down to our client. So I could view the email from my laptop, and then if I'm out of the office, I could view that email later from my smartphone, because it still resides on the email server. It didn't remove it like POP3 did. But like POP3, IMAP by itself is not secure.
We can secure it, as we've been doing with several of these protocols, using either SSL or, more likely, TLS, and that's going to be using a TCP port. Another protocol we see in large enterprise environments where we have lots and lots of users is a directory services protocol called LDAP. That stands for Lightweight Directory Access Protocol. This is where you can have a big database of users and lots of user information, like their username and their password, maybe their phone number, maybe their email address: lots of information about these users. And you could point to this directory services server, like an LDAP server, from multiple systems. Maybe you're trying to log into your email; you could be validated against an LDAP server. Maybe you're trying to log into a web host; that could be validated against an LDAP server. And I've asked my students for years what LDAP server they use in their environment, and well over 90% have told me that they use Microsoft Active Directory as their LDAP solution. So if you hear about Microsoft Active Directory, you realise that that is a very popular LDAP server, and it uses TCP port 389. And yes, there is a way to secure it, and that's to do LDAP over SSL or TLS, which will be using TCP port 636. Next up, we have SNMP, the Simple Network Management Protocol. This is a way for us to manage our network devices by querying those devices for information. We might say, "I want to know what the bandwidth utilisation is on this interface, on this router." If it's running SNMP, we can go ask it. We can request information from these devices, which are running SNMP agents. And if we want to make a change, we can even send out configuration information; we can push that out using SNMP. And when we have that communication coming from the SNMP server, that's going to be using port 161, either TCP or UDP. But we empower the SNMP agents themselves to keep an eye on a few statistics.
Let's say we have a device that keeps track of the temperature in the room.
I've had more than one occasion where the air conditioning went out in something like a server room and equipment started to fail because it was getting so hot in there. We can have equipment proactively monitor its temperature, and if it exceeds a certain threshold, it can alert us about that by sending what's called an SNMP trap. A trap is communication initiated by the agent, not the server. Those traps are going to use port 162, again either TCP or UDP. A Microsoft protocol that can be super handy if you want remote access to a system is RDP, the Remote Desktop Protocol. This allows you to take control of a desktop on some remote computer, and it's going to use TCP port 3389. Now, notice 3389: that's out of the well-known port number range. We're now in the registered port range. So Microsoft has registered that as a port in that registered port number range. A protocol that we use a lot in Voice over IP or video over IP is SIP, the Session Initiation Protocol. We're inviting the other phone, as an example, into a session, and SIP, which is a session layer protocol in the OSI model, is going to help us set up, maintain, and then tear down that session. Again, that session could be a phone call, it could be a video call, and it could be a variety of other sessions as well. But we're going to use port 5060 if it's not secured, and port 5061 if it is secured.
Now, another way we can set up audio or video calls is to use an ITU standard called H.323. This protocol has been around for a very long time, and it uses TCP port 1720. You commonly see it when you're trying to control some sort of voice or video gateway to set up a voice or video call, similar to SIP. Another protocol that comes from Microsoft is SMB, which stands for Server Message Block. It uses TCP port 445, and you often use SMB for file sharing. I remember SMB back in the 1990s, when I was using Microsoft's LAN Manager network operating system; SMB was used to transfer files between a client and a server. And speaking of servers, one of the popular types of servers we have in enterprise networks is a database server, like a relational database. You might have heard the term SQL, which stands for Structured Query Language. Let's take a look at some ports used by SQL servers. Microsoft has its own Microsoft SQL Server, and an application called the Microsoft SQL Server Management Studio (SSMS) can manage Microsoft SQL Servers scattered around a network using TCP or UDP port 1433. Oracle has its own database networking protocol; it's called SQL*Net, and it speaks on TCP port 1521. And there's an open-source SQL server as well, called MySQL, and it's going to talk to MySQL clients using TCP port 3306. Again, to memorise these ports and protocols, I recommend you use flashcards, either physical flashcards or a flashcard app, where maybe you see on one side of the flashcard the name of the protocol, like SNMP, and then on the back you have a brief description like we've presented in this video, along with what port number or port numbers are used for that protocol.
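If you'd rather drill these with a script than with paper flashcards, here's a small Python sketch. Most of the port numbers come straight from this video; a few that the video doesn't state explicitly (POP3 on 110, IMAP4 on 143, the DHCP client side on 68, MySQL on 3306) are the standard IANA assignments, and the `port_range` helper encodes the three ranges from earlier in this lesson:

```python
import random

# Protocol-to-port "flashcards" for the protocols covered in this module.
PORTS = {
    "FTP": (20, 21), "SSH": 22, "SFTP": 22, "Telnet": 23, "SMTP": 25,
    "DNS": 53, "DHCP": (67, 68), "TFTP": 69, "HTTP": 80, "POP3": 110,
    "NTP": 123, "IMAP4": 143, "SNMP": 161, "SNMP trap": 162, "LDAP": 389,
    "HTTPS": 443, "SMB": 445, "LDAPS": 636, "FTPS": (989, 990),
    "H.323": 1720, "MS SQL": 1433, "SQL*Net": 1521, "MySQL": 3306,
    "RDP": 3389, "SIP": (5060, 5061),
}

def port_range(port: int) -> str:
    """Classify a port into the IANA ranges described in this lesson."""
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "ephemeral"

# Quiz yourself, flashcard-style:
proto, ports = random.choice(list(PORTS.items()))
print(f"Q: What port(s) does {proto} use?  A: {ports}")
```

Note how `port_range` confirms the observation about RDP: 3389 falls in the registered range, not the well-known one.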
Module 2: Network Pieces and Parts
1. 2.0 Network Pieces and Parts
Welcome to Module 2 of the course, where we're going to be examining some of the different pieces and parts that make up our networks. Specifically, we're going to be taking a look at things such as switches, routers, wireless access points, firewalls, intrusion detection and prevention for clients, and many, many more. And we're going to get started in our next video by going back in time a little bit to relive the glory days of analogue modems. I'll see you there. Peace.
2. (N10-007 ONLY) 2.1 Analog Modems
In this video, we're going to take a look at a legacy way of transmitting data across the PSTN, the public switched telephone network. And I say it's a legacy technology; it is still in very limited use today. What I'm speaking of is an analogue modem. My first analogue modem, back in the early 1990s, was a 300-baud modem. It transmitted data at 300 bits per second, and speeds have come a long way since then. From there, we progressed to modems capable of 2400 and 9600 bits per second. Let's take a look in this video at how that works. When the PC is transmitting data to its modem, be it an internal modem or an external modem, it's doing it digitally. It's sending ones and zeroes, where maybe a binary one is the presence of voltage and a binary zero is the absence of voltage. And once those binary bits are sent into the modem, the modem needs to modulate them into tones that can be sent over the PSTN. These analogue tones will be sent from one modem to the far-end modem. Then that far-end modem does the demodulation half of modulation/demodulation, which is where the name "modem" comes from: it converts this analogue waveform back into ones and zeroes and sends them to their destination. Now, I've used a couple of different terms here in this video, and I want us to distinguish between them, because there is a lot of confusion about this. I mentioned both baud and bits per second. Those are not necessarily the same things. Baud represents the number of tone changes per second; that's how, with an analogue waveform, we can represent binary ones and zeroes. And I said that I had a 300-baud modem that transmitted at 300 bits per second. So in that case, the baud and the bits per second were equal. But they're not necessarily going to be equal, because bits per second is the number of ones and zeroes that we can transmit over a line. The challenge comes when we try to have more and more tone changes per second.
If that number gets too high, the tone changes might not be generated correctly by the source modem such that they can be interpreted by the destination modem. So let's take a look at how we get some of those higher speeds. In the example I gave you, I said I had a 300-baud modem, and it transmitted data at 300 bits per second. In that case, there were 300 tone changes per second, but notice we're only using one channel, even though we could send multiple frequencies over the PSTN. Here I'm just using one channel, alternating between this tone and that tone; that's one channel. We could bump it up such that we have 2400 tone changes per second. That would be 2400 baud, and it would give us 2400 bits per second if we were using one channel. But what if we used more channels? We could stay at 2400 baud, with 2400 changes in tone per second, but use multiple channels. One channel might be using these two frequencies that it goes back and forth between; another channel could be using different frequencies. So if we had four channels, with each of those channels doing 2400 tone changes per second, that's still 2400 baud, but on four channels, it gives us 9600 bits per second of throughput. Let's jump up a bit more. Maybe we're still at 2400 baud, but we're using twelve channels. That would let us have a throughput of 28.8 kbps: still fairly slow by today's standards, but this is a technology I want you to know about, the traditional analogue modem.
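The baud-versus-bits-per-second arithmetic above boils down to one multiplication when each channel carries one bit per tone change. Here's a quick Python check of the three examples from the lesson (the function name is my own):

```python
def throughput_bps(baud: int, channels: int) -> int:
    """Throughput when each channel carries one bit per tone change:
    bits per second = tone changes per second (baud) x channels."""
    return baud * channels

print(throughput_bps(300, 1))     # 300 bps: the early 300-baud modem
print(throughput_bps(2400, 4))    # 9600 bps: 2400 baud on four channels
print(throughput_bps(2400, 12))   # 28800 bps, i.e. 28.8 kbps
```

So the modem never exceeded 2400 tone changes per second; the speed gains came entirely from running more channels in parallel.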
3. 2.2 CSMA-CD vs CSMA-CA
In this video, we're going to distinguish between a couple of terms that could get confused, and those are CSMA/CD and CSMA/CA. First, let's consider CSMA/CD, which stands for "carrier-sense multiple access with collision detection." To illustrate what's going on here, consider a really old Ethernet network. This is representative of the very first network that I worked on back in the 1980s, where we had this coaxial cable running around the floor of a building. It would go from one room to the next, and if a device wanted to get on the network, it would tap into this coaxial cable. That was called an Ethernet bus. We didn't have RJ-45 connectors that we plugged into a switch or even a hub; we actually tapped into this cable. And the rule for transmitting on the network was that on this bus, on this Ethernet segment, we could only have one packet at a time, because if two packets were transmitted simultaneously, they would collide with each other. In other words, they would corrupt each other. And in order to prevent that, these devices ran CSMA/CD. Here's what that means. The carrier sense means that we can listen to the wire to see if anybody else is transmitting, because if they are, we know we cannot transmit; only one packet is allowed at any one time. The multiple access means that we can have multiple devices connected to this wire, and any device can talk at any time, as long as nobody else is talking right then. And if a collision does happen, we can detect that. So let's say that these two PCs on the bottom of the wire both want to transmit, and they both listen to the wire at the same time. They hear the same period of silence, and they both simultaneously conclude it must be safe to transmit. And they transmit, and their packets collide, and they're corrupted.
Well, the way they would detect that collision is that on an Ethernet bus, there would be a spike in voltage that would be detected by the PCs, and each would realize, "Oh, I tried to transmit, and somebody else tried to transmit at the same time; our packets collided. And knowing that we both still need to transmit, what I'm going to do is set a random back-off timer, and hopefully the other PC is going to set a different random back-off timer. We will each transmit when our timer expires, and hopefully this time avoid a collision." So let's say the PC on the bottom left sets a random back-off timer of ten milliseconds, and the PC on the bottom right sets a random back-off timer of twenty milliseconds. Now they should not collide, because they're going to wait these random amounts of time, and assuming those random times are not equal, they should not collide with one another. Now, that's the way it works on an Ethernet bus. But as time went on, in the early 1990s, Ethernet hubs became very popular. We were still using CSMA/CD; however, this was a star topology, with everything connecting back to a centralised hub. But logically, it still worked like that Ethernet bus. We still had the rule that only one packet was allowed on this infrastructure at any one time. And when a laptop sent a packet, maybe to the printer, it would go everywhere, because the hub did not know how to intelligently forward packets. But let's say both laptops tried to transmit at the same time. Again, we would have a collision. The thing that's slightly different here with a hub, though, is that instead of the PCs seeing a voltage spike on the wire like they would on an Ethernet bus, the hub would notice there was a collision, and it would alert everybody connected to the hub by sending out a jam signal. And this jam signal said, "Hey, there was just a collision on the wire."
"If you just transmitted, your packet probably did not make it; you might want to set a random back-off timer and try again." And CSMA/CD required that devices be connected using half duplex, meaning they could not transmit and receive simultaneously. They could do one or the other, because if they were doing both simultaneously, that could be two packets on the wire. We don't use CSMA/CD very much these days, because we're typically connecting with full-duplex connections into an Ethernet switch, where we don't have that collision issue. But we do see CSMA/CA in today's networks. That's carrier-sense multiple access with collision avoidance, and we might see it in a wireless network. The goal is essentially the same: on a wireless access point, we may have a limit of one transmission at any time. That's certainly developed over the years, and there are now technologies that allow us to have multiple antennas and multiple streams at one time. But in its basic form, a wireless access point can only talk to one device at any one time. So what CSMA/CA does is allow, let's say, client 2 to listen to the airwaves before it transmits. Let's say client 1 is transmitting to that wireless access point. When client 1 sends out its radio waves, they don't just go to the access point; hopefully client 2 hears those radio waves as well. Client 2, which wants to transmit, is going to see that signal coming in from client 1. It's going to say, "Somebody's talking. I need to set a random back-off timer, and then I'll try again after that timer expires." A challenge that can happen, though, is if, say, client 1 is to the left of this access point, such that it's just barely in range. It's very far from this access point, but the access point can receive the signal.
But client 2 is very far to the right of this access point. So just because the access point can see client 1's transmission, that does not mean that client 2 can see it; the signal might fade out before it gets to client 2. This is called the hidden node problem. This is where a client that listened to the airwaves incorrectly concluded that nobody else was talking at that moment, and then it transmitted. That could lead to a collision. And we're not going to be able to detect that there was a collision by listening to something like the voltage on the wire; there is no wire. Instead, we try to avoid a collision by listening to the airwaves before transmitting. And we can have some upper-layer protocols that can let us know if there were some missed packets and then retransmit any packets that were corrupted.
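The random back-off idea that both CSMA/CD and CSMA/CA rely on can be sketched in a few lines of Python. The lesson describes a simple random timer in milliseconds; classic Ethernet actually formalises this as truncated binary exponential back-off (wait longer, on average, after repeated collisions), which is what this sketch implements:

```python
import random

def backoff_slots(collisions: int) -> int:
    """Truncated binary exponential back-off, as used by classic
    Ethernet: after the Nth collision in a row, a station waits a
    random number of slot times in the range 0 .. 2^min(N, 10) - 1."""
    k = min(collisions, 10)
    return random.randrange(2 ** k)

# Two stations that just collided each pick an independent delay;
# if the delays differ, the retransmissions no longer collide.
left_pc, right_pc = backoff_slots(1), backoff_slots(1)
print(left_pc, right_pc)
```

Notice that after the first collision the range is tiny (0 or 1 slot), so a repeat collision is quite possible, but each further collision doubles the range and makes another clash less and less likely.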
4. 2.3 Hubs, Switches, and Routers
5. 2.4 Collision and Broadcast Domains
In this video, we want to distinguish between a collision domain and a broadcast domain. First, consider a collision domain. This is a network segment, and on that network segment, we can only have one frame at any one time. To visualize this, let's go way back in time to the original Ethernet bus, where we had stations connecting into a coaxial cable that ran around an office floor, and we would tap into that coaxial cable. All of these devices live on the same collision domain. You may recall that we're using something called CSMA/CD, carrier-sense multiple access with collision detection, on this Ethernet bus. We're listening to the wire in the hopes that nobody else is transmitting at that moment. Because we're all on the same collision domain, we can only have one frame on the wire at any one time. But let's say those two bottom PCs are listening during the same period of silence, and they simultaneously transmit. That creates a collision. And the reason that collision was possible was that those devices were on the same collision domain. These Ethernet bus networks were replaced in large part back in the 1990s by Ethernet hubs. However, even though we are physically connected in a star topology with an Ethernet hub, logically we're still acting like that bus. We're still running CSMA/CD, and I want you to know that all ports on a hub belong to one and only one collision domain. A hub does not make intelligent forwarding decisions, because it does not know where anybody lives. It just replicates any bits that it receives on one port out all other ports, and as a result, only one device can communicate at any one time. So a bus is one collision domain; a hub is one collision domain. What about an Ethernet switch? Well, it gets a lot better with an Ethernet switch. With an Ethernet switch, each port is in its own collision domain.
For example, if laptop one wanted to talk to the printer while laptop two was talking to the server, that's perfectly fine, because the switch will not take that frame from laptop one going to the printer and flood it out to laptop two and the server, as long as it knows where that printer is. So each port on a switch, I want you to know, belongs to its own collision domain. As a benefit, besides more traffic flowing simultaneously on the network, we can run these devices in full-duplex mode, where we can transmit and receive simultaneously. If we're using a hub, we can only use half-duplex mode, where we can transmit or receive, but not at the same time. Next, let's consider a broadcast domain. A broadcast domain is the portion of the network through which a broadcast will travel. So let's say that a PC boots up and wants to get an IP address. One way it can do that is by sending out a DHCP discover broadcast to say, "Hey, are there any DHCP servers out there that can give me some IP address information?" Since we might not know where that DHCP server lives, we send the broadcast throughout this broadcast domain. On a hub, if a broadcast arrives on one port, it is replicated out all other ports. No surprise; that's what hubs do. Hubs have no idea where the end stations are located; they receive bits on one port and send them out all other ports. But what about a switch? By default, all of the ports on a switch belong to one broadcast domain. If a laptop sends out a broadcast, maybe looking for a DHCP server, the switch will flood it out all other ports. Does that seem a little unswitchlike? Well, what's going on here is that a broadcast has a destination MAC address of all Fs in hexadecimal, and the switch is never going to learn a MAC address of all Fs. So it's always going to appear as an unknown destination. As a result, the switch floods it throughout the broadcast domain, which is a good thing.
We want a broadcast to be able to reach the devices it needs to reach within the subnet. But this is not going to be terribly scalable. If there were a lot of broadcast traffic, that could start to cause a problem, because every device in this broadcast domain gets the broadcast. Even if it's not destined for that device, the device still has to take time out of its day, look at the packet, and say, "Oh, this is a broadcast. It's not for me. Discard." We don't want too much broadcast traffic on the network. So the question is, what breaks up a broadcast domain? And the answer is a router. A router interconnects broadcast domains, also known as subnets or VLANs. With a router, each port is connected to a different subnet, and each subnet is a broadcast domain. So let's say that the laptop on the left tries to reach a DHCP server, and it sends out a DHCP discover broadcast. Here's what happens by default: the router says, "You're sending a broadcast. I'm not going to forward a broadcast," and the router discards that packet, because a router breaks up broadcast domains. So in this example, we have three Gigabit Ethernet ports on a router, and that means we have three broadcast domains on this router.
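The switch behavior described here, learning source MAC addresses and flooding broadcasts or unknown destinations, can be sketched roughly like this in Python. The port numbers and shortened MAC addresses are made-up examples:

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class ToySwitch:
    """Minimal model of transparent-switch forwarding logic."""
    def __init__(self, num_ports):
        self.ports = range(num_ports)
        self.mac_table = {}              # learned MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        # Learn: the source MAC must live off the ingress port.
        self.mac_table[src_mac] = in_port
        # A broadcast (all Fs) is never learned as a source, so it is
        # always flooded out every port except the one it arrived on.
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return [p for p in self.ports if p != in_port]
        return [self.mac_table[dst_mac]]  # known unicast: one port only

sw = ToySwitch(4)
print(sw.receive(0, "aa:aa", BROADCAST))  # flooded: [1, 2, 3]
print(sw.receive(1, "bb:bb", "aa:aa"))    # learned destination: [0]
```

The first frame floods because the destination is the broadcast address; the second goes out a single port because the switch already learned where "aa:aa" lives.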
6. 2.5 Wireless Access Points
We might have several different types of mobile devices in our network, such as a smartphone connecting to our network, or maybe a laptop that we're carrying from one office to another. How is that mobility possible? Well, it's thanks to wireless technologies: wireless clients can connect to wireless access points, and this is their pathway out to the rest of the world. A wireless access point contains one or more antennas for communicating with these wireless devices. And even though we won't get into it in this video, there are different bands of frequencies that can be used by those wireless access points. Each band has different channels, and we want to avoid overlapping channels so that one access point doesn't interfere with another access point, because in larger environments you probably want to have lots of access points spread throughout your building or campus, so someone can be mobile and never lose network connectivity. The wireless client gets to the rest of the world through the wireless access point, because access points typically have one or more connections out to an Ethernet network. There are various types of wireless access points available, each with a different number of antennas. But just as one reference, there's a Cisco access point that's pretty popular out there, which you might have seen mounted to the ceiling in different buildings; I've seen these types of access points mounted to the ceilings in buildings like hotels, for example. And that's a wireless access point, which can give a mobile device connectivity to the rest of the world.
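To make the non-overlapping-channels point concrete: in the 2.4 GHz band, channel center frequencies are spaced only 5 MHz apart, but each transmission is roughly 22 MHz wide (the traditional 802.11b figure), which is why channels 1, 6, and 11 are the classic non-overlapping choices. A quick sketch of that arithmetic:

```python
def center_mhz(channel):
    """Center frequency of a 2.4 GHz Wi-Fi channel (valid for 1-13)."""
    return 2407 + 5 * channel

def channels_overlap(a, b, width_mhz=22):
    # Two channels overlap if their center frequencies are closer
    # together than one channel width.
    return abs(center_mhz(a) - center_mhz(b)) < width_mhz

print(channels_overlap(1, 6))    # False: centers 25 MHz apart
print(channels_overlap(1, 3))    # True: centers only 10 MHz apart
```

This is why adjacent access points are typically assigned 1, 6, and 11 rather than consecutive channel numbers.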
7. 2.6 Firewalls
The purpose of a network firewall is, in general, to protect our inside network, like our company, from an outside threat like the Internet. The term "firewall" comes from a structural firewall: in a building, if there is a fire in one area, the firewall is there to prevent that fire from spreading into another area of the building. That's basically what a network firewall is doing. It's trying to protect one area of our network, such as our company's network, from another area of the network that might present danger, such as the Internet. We could have an attacker on the Internet attempting to attack our network and send malicious traffic into it. We want the firewall to intercept that malicious traffic and drop it. And there are different types of firewalls that I want you to know about. One type of firewall, and I hesitate to even call it a firewall because it's very rudimentary, is called a packet filter. A packet filter is essentially a rule, or set of rules, that says this source is allowed to go to this destination using this particular protocol. We might have a rule that says anybody on the inside of our network can go out to the Internet using any protocol, but we don't want people on the Internet to come back into our company. Now, at first glance, that might seem like a good thing, but let's say that PC One on screen is trying to go to a web server on the Internet. It sends that request out to the Internet for a web page. The firewall allows that request to go through, because the source was PC One, on our trusted inside network. But what happens when that web server tries to send back the requested web page? The firewall looks at that packet and says, "No, the source IP address on this packet is on the Internet. I don't trust the Internet. I'm going to drop that." So PC One's traffic to the web server gets to the web server, but the return traffic is dropped. How do we fix that?
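A stateless packet filter of the kind just described can be sketched like this. The rule fields, inside prefix, and addresses are invented examples (203.0.113.x is a documentation range):

```python
# Each rule: (source prefix, destination prefix, action); first match wins.
RULES = [
    ("10.0.0.", "", "allow"),   # inside hosts (10.0.0.x) may go anywhere
    ("",        "", "deny"),    # everything else is dropped
]

def filter_packet(src_ip, dst_ip):
    """Stateless check: the filter looks at each packet in isolation."""
    for src_prefix, dst_prefix, action in RULES:
        if src_ip.startswith(src_prefix) and dst_ip.startswith(dst_prefix):
            return action
    return "deny"

# PC One's request gets out, but the web server's reply is dropped,
# because the filter has no memory of the outbound request:
print(filter_packet("10.0.0.5", "203.0.113.80"))   # allow
print(filter_packet("203.0.113.80", "10.0.0.5"))   # deny
```

The second result is exactly the problem described above: legitimate return traffic is indistinguishable from an unsolicited inbound connection.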
With a stateful firewall. A stateful firewall performs what is referred to as stateful packet inspection. It still trusts the inside network and distrusts the outside network, but if traffic originated on the inside of the network, like from PC One going out to the Internet, the firewall inspects that traffic and makes a note of things like: this source IP address on the inside network is going to this destination IP address on the Internet, using this particular protocol with these particular port numbers. It remembers that information. So in my example, when the web server sends the return traffic back, the firewall says, "I think this is return traffic. It didn't originate on the Internet; it originated on the inside of the network, because I inspected a packet that had these IP addresses and these protocol port numbers transposed. I conclude this is safe return traffic." That's a stateful firewall. And we might hear a modern-day firewall referred to as a "next-generation firewall," or NGFW, sometimes called a "Layer 7" or "application-layer" firewall. Here we can perform deep packet inspection and identify data types that we do not want to allow to leave our company, such as sensitive employee information. Under no circumstances do we want to send that out to the Internet, and we can block it by doing deep packet inspection. We can also check an Internet-based database of current threats, and based on that database we're going to be better able to recognise malicious traffic coming in from the Internet. We'll say this sequence of packets matches the signature of a known malicious attack, and we're going to block that traffic. And in this example, notice that the firewall only has a couple of interfaces: one going to the Internet, and one going to our inside network.
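The return-traffic logic can be sketched by remembering connections that originated inside and allowing inbound packets only when their addresses and ports are the mirror image of a remembered flow. A minimal illustration, with invented addresses and port numbers:

```python
class StatefulFirewall:
    def __init__(self, inside_prefix):
        self.inside = inside_prefix
        self.flows = set()          # remembered outbound (src, sport, dst, dport)

    def check(self, src_ip, src_port, dst_ip, dst_port):
        if src_ip.startswith(self.inside):
            # Outbound from the trusted side: remember the flow, allow it.
            self.flows.add((src_ip, src_port, dst_ip, dst_port))
            return "allow"
        # Inbound: allow only if it mirrors a remembered outbound flow,
        # i.e. the addresses and ports are transposed.
        if (dst_ip, dst_port, src_ip, src_port) in self.flows:
            return "allow"          # safe return traffic
        return "deny"               # unsolicited traffic from outside

fw = StatefulFirewall("10.0.0.")
print(fw.check("10.0.0.5", 49152, "203.0.113.80", 80))  # allow: outbound
print(fw.check("203.0.113.80", 80, "10.0.0.5", 49152))  # allow: return
print(fw.check("198.51.100.9", 80, "10.0.0.5", 49152))  # deny: unsolicited
```

Real firewalls also track protocol, TCP connection state, and timeouts, but the transposed-tuple comparison is the core idea the video describes.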
However, we might have some servers at our site that we want to be accessible from the Internet. Maybe we have a web server; maybe we have a corporate email server. And we want both of those servers to be accessible to Internet users. So what can we do? We can have another zone. We can have an outside zone that connects to the Internet and an inside zone that connects to our inside corporate network. But for those devices that we do want to be accessible from the Internet, we can put them in a zone called a DMZ, a demilitarised zone. That way, even if somebody from the Internet did compromise the web server, they would not be able to use it as a hopping-off point to get into our inside network, because the inside network is protected from the DMZ. And this is when we have servers at our site. But more and more people are putting their servers in the cloud. Is there a way to have firewall protection for servers that live in the cloud? There absolutely is, because we can have virtualized firewalls. Just as we can install a virtual server on Amazon Web Services, we can install a virtual firewall that logically sits in front of that server and protects it from malicious traffic. And that's a look at a network firewall.
8. 2.7 Intrusion Detection and Prevention
There are so many different types of attacks that someone could launch against our network from the Internet. So for security purposes, it would be super useful if we could maintain some sort of database of well-known attacks, where we could recognise the signature of a well-known attack based on the pattern of packets coming in. And we have a couple of network appliances, which we're going to discuss in this video, that can do that. They can analyse packets and see if they match a recognised signature, and if we do have a match, we can take action to stop that attack. Those two appliances are an IDS sensor, an intrusion detection system sensor, and an IPS sensor, an intrusion prevention system sensor. First, let's consider the IDS sensor. The IDS sensor obtains a copy of the traffic and analyses it, whereas the IPS, as we will see, sits in line with the traffic. In this case, we have a multilayer switch configured to mirror, or make a copy of, traffic destined for that Layer 2 switch and send those copies to the IDS sensor. The IDS sensor analyses those packets against its database of well-known attacks, which does need to be updated periodically. But let's say that somebody on the Internet happens to be launching a malicious attack against the client, and it wasn't the type of attack that would be recognised by just a basic stateful firewall. So the attack makes it through the firewall, the multilayer switch makes a copy, and it sends one copy to the client, which suffers the potential impact of that attack. The IDS sensor also gets a copy. The IDS sensor analyses those packets and says, "I think we're under attack." And it can send a message to the firewall to say, "I want you to start blocking this IP address out on the Internet, because we have reason to believe it is sending malicious traffic into the network." So, dynamically, the firewall can create a rule.
And when the next packet tries to come in from the Internet destined for that client, the firewall says, "No, you're going to be denied," and it drops that traffic. While this is a great solution, protecting us against many, many different types of attacks, the client actually does get hit with maybe one or two or a few packets of that attack. And there are some attacks, called atomic attacks, that can do damage with just a very small number of packets, maybe even one packet. So even better than an IDS sensor, many would argue, is an IPS sensor, because an IPS sensor doesn't just receive a copy of the traffic; it sits in line with the traffic coming in from the Internet. It can analyse a packet and say yes, it's allowed, or no, it's not allowed, before that packet is even sent onto the network. Here, let's say that we're getting traffic from the Internet again; somebody is trying to attack our client. The IPS sensor analyses it immediately and discards it. It never gets onto the network. And here I did not include a firewall in the topology, but we could have had the IPS sensor send updated configuration instructions to a firewall, much like the IDS sensor did. In this case, though, the IPS sensor itself took care of blocking that traffic. So here's the difference: an IDS sensor inspects and reacts to a copy of the received traffic, after that traffic may have already reached a client, while an IPS sensor inspects and reacts to traffic in line, before it ever gets an opportunity to reach a client.
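The signature matching both sensors perform can be sketched very simply: compare each payload against a database of known byte patterns. The signatures below are invented placeholders; real sensors use far richer rules (Snort-style options, protocol decoding, anomaly scoring):

```python
# Hypothetical signature database: attack name -> byte pattern.
SIGNATURES = {
    "example-shellcode": b"\x90\x90\x90\x90",      # NOP-sled fragment
    "example-traversal": b"GET /../../etc/passwd", # path-traversal probe
}

def match_signatures(payload):
    """Return the names of any known-attack patterns found in the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in payload]

def ips_inline(payload):
    # IPS behavior: inspect BEFORE forwarding, drop on a match.
    # An IDS would run the same match on a *copy* and only alert/react.
    return "drop" if match_signatures(payload) else "forward"

print(ips_inline(b"GET /index.html HTTP/1.1"))        # forward
print(ips_inline(b"GET /../../etc/passwd HTTP/1.1"))  # drop
```

The IDS/IPS distinction lives entirely in where this check runs: on a mirrored copy after delivery, or in the forwarding path before delivery.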
9. 2.8 VPN Concentrators
A virtual private network, or VPN, can allow us to communicate securely across an untrusted network like the Internet. Maybe we're working from home as a remote client, or maybe we're travelling and working from a hotel room, and we want to connect securely to our corporate office. Well, we could install some software on that client and have the router back at the corporate office, on the left-hand side of the screen, handle all of the encryption, decryption, and authentication of this incoming traffic. The router can probably handle that just fine for one remote client, or maybe a few remote clients, but it's going to put a processing burden on the router; all that encryption and decryption can be very processor-intensive. That suggests that if this needs to scale to handle many remote clients, instead of relying on the router to do all of that heavy processing, maybe we should have a separate appliance whose job is to handle it. That's what we've got with a VPN concentrator, and it might look something like this. This remote client is going to try to talk to that accounting server, and it's the VPN concentrator that does the encryption and decryption as the traffic comes off of the Internet and goes back out to the Internet. Because it's doing such heavy processing, this device is typically a dedicated hardware appliance. It can be the termination point of the VPN, as in my example, or it could initiate the VPN; it could be the originator. And as one end of the VPN connection, it's going to be responsible for doing the encryption and decryption of the traffic we're trying to protect.
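To make "encryption, decryption, and authentication" concrete, here is a deliberately toy sketch of what happens to each packet entering a tunnel: the payload is scrambled with a shared key, and an authentication tag is attached so the far end can detect tampering. The XOR "cipher" and the pre-shared key are purely illustrative; real VPNs use ciphers like AES inside protocols like IPsec or TLS:

```python
import hashlib
import hmac

KEY = b"shared-secret-key"   # hypothetical pre-shared key for the sketch

def xor_scramble(data, key):
    # Toy stand-in for a real cipher: XOR each byte with the key.
    # Applying it twice with the same key restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def tunnel_encapsulate(inner_packet):
    ciphertext = xor_scramble(inner_packet, KEY)
    tag = hmac.new(KEY, ciphertext, hashlib.sha256).digest()  # 32 bytes
    return tag + ciphertext          # authenticated, scrambled payload

def tunnel_decapsulate(wire_bytes):
    tag, ciphertext = wire_bytes[:32], wire_bytes[32:]
    expected = hmac.new(KEY, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: packet was tampered with")
    return xor_scramble(ciphertext, KEY)

wire = tunnel_encapsulate(b"payroll query for accounting server")
print(tunnel_decapsulate(wire))      # the original payload comes back
```

Even this toy version shows why the work is per-packet and CPU-bound, which is exactly the load a dedicated concentrator takes off the router.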