CompTIA CASP+ CAS-004 Topic: Securing Architectures (Domain 1)
December 14, 2022

1. Securing Architectures (OBJ 1.1)

In this section of the course, we’re going to continue discussing ways to secure our network architectures. But this time we’re going to be focused on different services, segmentation, and zero trust in conjunction with deperimeterization and software-defined networking. In this section, we are again going to discuss Domain 1, Security Architecture, and specifically objective 1.1: given a scenario, analyze the security requirements and objectives to ensure an appropriate, secure network architecture for a new or existing network. So as we move through this section, we’re going to start out by discussing how to conduct traffic mirroring so that we can add sensors to our networks. Then we’re going to discuss the different types of sensors that we’re going to use in conjunction with that traffic mirroring. Then we’re going to move into segmentation of our networks, covering microsegmentation, screened subnets, security zones, and much more. After that, we’re going to discuss the concepts of deperimeterization and zero trust before we move into the concepts surrounding the merging of networks from various organizations that we may be conducting business with. Finally, we’re going to talk about software-defined networking, or SDN. This includes open SDN, hybrid SDN, and SDN overlays. So let’s get started in this section on securing architectures.

2. Traffic Mirroring (OBJ 1.1)

When we talk about securing our enterprise architecture, we must start with the concept of traffic mirroring. This is important because in order to secure our networks, we have to be able to add sensors to and monitor our networks. And in order to do that, we have to be able to see all the network traffic going into or out of our networks. This is where traffic mirroring comes into play. Now, “traffic mirroring” is a generic term for techniques that enable you to monitor network traffic passing into or out of a network. When you conduct traffic mirroring, there are four different methods we utilize: SPAN ports, port mirroring, VPC traffic mirroring, and network taps. The Switched Port Analyzer port, or SPAN port, as most people like to call it, is a Cisco-proprietary method of conducting port mirroring. In the industry, most people use SPAN ports and mirrored ports to mean the exact same thing, but technically, it’s only correct to call it a SPAN port if you’re using a Cisco device. For our purposes, though, the terms are going to be used interchangeably within this course and on the exam. Essentially, when you set up a SPAN port or a mirrored port, you can configure the router or switch to make a copy of every packet that the device processes and then send it out the SPAN or mirrored port. This allows you to connect a device to it and capture, monitor, or analyze all packets sent to or from this segment of the network. For example, if you have a small network, you could simply set up traffic mirroring on your external router, and then you’ll see all the traffic entering and leaving your network. Different router brands have different features for traffic mirroring, but most of them will support three types: local traffic mirroring, remote traffic mirroring, and ACL-based traffic mirroring.
Local traffic mirroring is the most basic form, and it allows you to connect a monitoring device or network analyzer to a local port and then receive a copy of every piece of traffic going into or out of that network device on that port. Remote traffic mirroring, on the other hand, is going to allow you to create a GRE tunnel over an IP network, and this will allow you to connect a network analyzer to that network device remotely. This means you don’t have to be directly cabled to the router or switch that’s being monitored; instead, you can do it remotely over the network. ACL-based traffic mirroring is going to allow you to monitor traffic based on the configuration of the interface’s ACL. This means instead of seeing everything, you can configure it to only mirror traffic matching certain permit or deny statements in your ACLs. Now, SPAN ports and mirrored ports work great for physical networks, but once things start moving to the cloud, we have to figure out a way to monitor those parts of our network as well. This brings us to traffic mirroring in the Virtual Private Cloud, or VPC. Traffic mirroring can be configured to copy all inbound and outbound traffic to the network interfaces that are attached to your cloud-based servers within a single virtual private cloud. The traffic can then be routed to a mirror target, which could be a monitoring appliance in the same VPC or a different VPC linked via intra-regional peering or a transit gateway as part of your cloud infrastructure. Basically, you’re going to configure a traffic mirror session and then configure a set of filter rules that will be applied to that session.

If there’s any traffic that matches your filter rules, that traffic is going to be encapsulated in a VXLAN header and sent to your mirror target for monitoring and analysis. Now, the fourth way to capture network data is to use a network tap. A network tap is a physical hardware device that connects to your network. This tool will usually have three ports. One port is going to be dedicated to mirroring, and the other two are going to be used to connect to two different parts of your network. For example, you could tap your network between your ISP’s modem and your border router. In this case, one port would be plugged into the modem, the second port would be plugged into your border router, and the third port would be plugged into your monitoring or capture device. As data goes through that network tap, a copy is always sent to the monitoring device in real time. So which of these four options should you use? Well, many people would like a hardware tap to be permanently installed on all their network devices. For example, if you want to install a network-based IDS, you can install a network tap and then connect your network-based IDS to the monitoring port of it and be able to see all the network traffic in real time. On the other hand, if you’re just troubleshooting a network, you might want to use a SPAN or mirror port, and that can work well for your purposes. If you’re using a cloud solution, it’s going to have to be VPC traffic mirroring, and that’s going to be the best way for you to monitor your cloud architecture.
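To make the VXLAN encapsulation concrete, here is a minimal sketch (in Python, as an illustration only) of how a monitoring appliance might strip the 8-byte VXLAN header defined in RFC 7348 off a mirrored packet to recover the VNI and the inner frame. The function name is hypothetical, not from any real mirroring product:

```python
import struct

VXLAN_HEADER_LEN = 8  # RFC 7348: 1 flag byte, 3 reserved, 3-byte VNI, 1 reserved

def parse_vxlan_header(packet: bytes):
    """Split a VPC-mirrored packet into its VNI and the inner frame."""
    if len(packet) < VXLAN_HEADER_LEN:
        raise ValueError("packet too short for a VXLAN header")
    # "!B3xI": flags byte, skip 3 reserved bytes, then 4 bytes holding the VNI
    flags, vni_and_reserved = struct.unpack("!B3xI", packet[:VXLAN_HEADER_LEN])
    if not flags & 0x08:  # the I flag must be set for the VNI to be valid
        raise ValueError("VXLAN I flag not set")
    vni = vni_and_reserved >> 8  # VNI is the top 24 bits; low byte is reserved
    inner_frame = packet[VXLAN_HEADER_LEN:]
    return vni, inner_frame
```

A real mirror target would then hand the inner frame to its capture or analysis engine.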

3. Network Sensors (OBJ 1.1)

In this lesson, we’re going to talk about network sensors, because sensors are a big part of protecting our network. After all, if you don’t properly place sensors on your network and collect the relevant security data, you won’t be able to analyze whether or not your network is truly secure. So in this lesson, we’re going to discuss security information and event management (SIEM) systems, Simple Network Management Protocol (SNMP) traps, NetFlow, and other network monitoring data. First, let’s talk about SIEM, or security information and event management. A SIEM is a system of utilities that consolidates log files from various systems, servers, and devices into a centralized collection to allow for easier analysis of that data. Before SIEMs existed, a security analyst would have had to manually locate and retrieve all the log and event data from each piece of hardware or operating system that they wanted to analyze. Over time, this process began to get automated, first with systems built on things like syslog for log collection, but then with security information management, or SIM, systems and security event management, or SEM, systems. The natural evolution of these two disparate systems was to create a single solution called a SIEM, which combined the information management and event management systems into one. Security information and event management systems include logs from many different sources, including your application logs, antivirus logs, operating system logs, malware detection logs, router logs, firewall logs, NetFlow data, file integrity monitoring logs, data loss prevention logs, and many others. While these systems can collect nearly any kind of log or event data, we must be careful to properly scale our data collection efforts. While it may be easy to say “just collect everything,” this is extremely resource intensive, and it costs a lot of money to actually perform this.
Therefore, we need to determine what we really need to collect within our SIEM.

This will ensure that we get the right amount of visibility into our network events and faster correlation of events, and that we maintain enough logs for compliance, reporting, and monitoring, as well as to help prioritize our security concerns. When we install a SIEM, we need to place it in a centralized location in order to have all the reporting systems be able to reach that device in a timely manner with relatively low latency. Due to the type of information it contains, the SIEM needs to be placed in a secure portion of our network, and it should be tuned to collect only the necessary amount of data that’s needed for it to provide efficient and reliable operations. Now let’s dive a little bit deeper into some of the different systems that are going to feed into the SIEM with all their different data sources. There are numerous devices that make up our complex enterprise networks, and most of these are going to feed data back into our SIEM. First, we have all of our network devices, such as our switches, our routers, and our firewalls. These devices are normally configured to send data to the SIEM using SNMP traps. SNMP stands for Simple Network Management Protocol. It’s an application-layer protocol that’s widely used in network monitoring to send data to our syslog server or SIEM; queries to agents are sent over port 161 by default, while traps are sent to the manager over port 162. The latest and most secure version of SNMP is version 3. SNMP relies on a three-part architecture that consists of managed devices, agents, and a network management station. Devices like routers, switches, firewalls, and even printers and workstations can all be configured as managed devices. Now, the agent is the SNMP software that’s going to be on a locally managed device. The agent’s purpose is to collect, store, and signal the presence of data on a given device.

The network management station, or SNMP manager, is the base system that provides the memory and processing functionality between the different devices and agents for the system to work. The network management station essentially queries the managed devices by sending out a Get request to those devices, and the agent on those devices will respond with the requested information. Now, Get is used for a single piece of data, but there’s also a GetBulk request, where all the information about a managed device can be sent to the SNMP manager at once. The problem with both a Get and a GetBulk is that the manager has to request the information for network monitoring, though it would be preferable if the device simply sent the data to the SNMP manager without being asked. Well, that’s exactly what an SNMP trap does. SNMP traps are initiated by the managed device’s agent. An SNMP trap is going to signal to the SNMP manager the occurrence of a specific event. For example, if your router lost its WAN connection, it could send an SNMP trap message to the SNMP manager to let it know it is effectively cut off from the wide area network or the Internet. Now, if you use a product like SolarWinds to keep an eye on your routers and switches to know exactly if they’re up or down at any given time, you’re likely using SNMP traps. Another crucial technology for our network security is a network intrusion detection system (NIDS) or a network intrusion prevention system (NIPS). These devices are going to be used to monitor your network traffic and report on that traffic. And in the case of a NIPS, it’s going to block or react to suspicious and malicious traffic. Now, NIDS and NIPS are categorized based on their method of detection, such as signature-based, statistical anomaly-based, or stateful protocol analysis detection. Signature-based detection compares network traffic to a preconfigured attack pattern known as a signature.
For example, there might be a signature that looks for a certain series of bits that represents the execution of the cmd.exe program over the network that you want to be able to log. Statistical anomaly-based detection, on the other hand, is going to utilize a normalized baseline of all of our network traffic and then compare it against our current traffic. If the current traffic is considered abnormal or anomalous, then that traffic is going to be flagged and reported upon. Stateful protocol analysis detection is going to be used to identify any deviations in the current network traffic against what is considered acceptable for a certain network protocol. This is done by comparing it to predefined profiles for different network protocols based upon activity that is defined as not harmful. So, for example, if somebody’s trying to transmit a file using NTP instead of FTP, that would be abnormal, and it would get flagged. But there’s a challenge when we’re trying to do this type of analysis, and that is TLS and SSL. As security professionals, we are constantly encouraging our users to ensure they’re visiting websites using TLS or SSL because it creates an encrypted tunnel.

But what happens when one of our users connects to their Gmail account using their corporate workstation? That user session is going to use TLS or SSL to create an encrypted session between their client and the Gmail web servers. Now, this is great on one hand because it means that nobody can see the users entering their information, like their usernames, their passwords, or even the contents of their emails. But from a security perspective, this creates a challenge for us as network defenders because our NIDS and NIPS sensors are now blinded to the information received and transmitted through that session. To overcome this limitation, security architects have implemented a technique known as “break and inspect,” which allows the organization to use its proxy server to perform a form of “man in the middle” or “on-path” attack on these sessions. So when a client attempts to connect to gmail.com, they’re actually going to be getting a digital certificate from the proxy server, and that way they can create a secure connection between the client and the proxy server. The proxy server is then going to establish a secure tunnel between itself and the Gmail server using TLS or SSL. All the data that’s sent and received between the client and Gmail is now going to be inspected by that proxy server for malware and malicious attacks. The client’s data is encrypted between the client and the proxy server, and again between the proxy server and the Gmail server, but the proxy server can see everything in this connection. Break and inspect, which is also known as TLS or SSL inspection, is something that we strongly need to consider as part of our organization’s security architecture. After all, nearly 70% of all web traffic is sent through a TLS or SSL-encrypted tunnel in our contemporary networks, and without it, we’re going to be blind.
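Of the detection methods described earlier, statistical anomaly-based detection is the easiest to illustrate in code. The sketch below is a simplified illustration, not how any particular NIDS is implemented: it flags an observation, such as bytes per minute from one host, that deviates more than three standard deviations from a normalized baseline.

```python
from statistics import mean, stdev

def is_anomalous(baseline, observation, threshold=3.0):
    """Flag an observation that deviates more than `threshold` standard
    deviations from the baseline of normal traffic measurements."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return observation != mu  # flat baseline: any change is anomalous
    return abs(observation - mu) / sigma > threshold
```

For example, against a baseline of roughly 100 bytes per minute, a sudden burst of 500 bytes per minute would be flagged, while 108 would not.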

In addition to NIDS and NIPS, the next most important thing that we need to do is log the traffic inside our network and on our devices using audit logs. These logs provide the digital evidence that we need when we’re investigating anomalous issues within our networks. Now, the amount of data that’s logged is going to depend on our organization’s security policy, since the details of what should be logged are going to be included in that policy, including what levels of events should be logged and how long these logs need to be retained. Audit logs, however, are going to be completely useless if you never review them. When you review audit logs, you need to ask some really tough questions. Is anyone accessing information or performing tasks that aren’t required for their position? Asking this question will help you identify if you have an insider threat in your organization. Are mistakes occurring repeatedly? If so, this could be a signal of an incompetent person or, again, an insider threat. Do too many people have escalated privileges? Asking this ensures that we’re using the principle of least privilege. All of these are good questions to ask as you’re digging through your audit logs. Remember, audits can be conducted internally within the organization, which is known as a “self audit,” or by bringing in an independent third party to conduct your audits. When a third party is brought in, they often look to ensure that the audit logs are being properly maintained, that the logs are controlled and not being modified by unauthorized personnel, and that there is a clear separation of duties between the security staff and the IT staff in the organization. It’s also important to understand the role that audit logs play in an organization’s overall security.
For example, if we see multiple unsuccessful login attempts, this could be an indication of an authentication attack, and it should be mitigated by disabling an account after three failed login attempts. If multiple drop, reject, and deny events are being captured in our logs for the same IP address, then we need to be on the lookout for a firewall attack such as firewalking, or somebody trying to evade our NIDS or NIPS. To mitigate against these types of attacks, we should have an alert sent to the network monitoring console any time ten or more of these events occur within a minute or less. Protocol analyzers are another tool we use for network monitoring. These are also known as “network sniffers” because they capture the raw data frames that are being transmitted over the network. Tools such as Wireshark are commonly used to conduct network protocol analysis. This tool, in conjunction with our SPAN or mirrored ports, allows us to analyze traffic entering and exiting our network. Now, in order to understand the security of our networks, we must start by understanding the network flows on them.

A network flow is a single session of information that shares certain characteristics between two devices. Different network flow tools define a session based on different characteristics, such as the interface through which the traffic enters the network, the source IP, the destination IP, the protocol used, the source port, the destination port, or the IP type of service being used. Utilities such as the nfdump command-line tool and Cisco’s NetFlow Analyzer are going to be used to capture and analyze these different network flows, also known as “netflows.” These tools can be used by network administrators and security professionals to identify the top protocols being used in the network, the most commonly accessed servers, or the top users of the network. All this helps security analysts better understand their networks and the patterns on them. Now, NetFlow doesn’t contain information about what’s in the packets themselves; instead, it’s only going to capture and display information from the headers of those packets. So it’s more data about the packets than the data within the packets. Now, if we want a deeper dive into the traffic on our networks and to see what’s actually in those packets, we would use a packet capture and analysis tool like Wireshark, which allows us to read the contents of those sessions and those individual packets. This is one of the key roles of a security analyst when you’re trying to recreate a malicious cyber attack on your network and figure out exactly what happened, and you do that by looking through these packet captures.
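The header-only nature of flow data can be illustrated with a short sketch. Assuming packet header records as simple dictionaries (a stand-in for real NetFlow export, which this is not), the function below groups them by the common 5-tuple and reports the flows carrying the most bytes, much like a "top talkers" view in a flow analyzer:

```python
from collections import Counter

def top_talkers(packets, n=3):
    """Aggregate header records into flows keyed by the classic 5-tuple
    (src IP, dst IP, protocol, src port, dst port) and return the flows
    carrying the most bytes. Only header fields are used, never payload."""
    byte_counts = Counter()
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["proto"], pkt["sport"], pkt["dport"])
        byte_counts[key] += pkt["bytes"]
    return byte_counts.most_common(n)
```

Note that nothing in the flow key or the byte counter touches packet contents; that is exactly why a packet capture tool like Wireshark is still needed for deeper inspection.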

4. Host Sensors (OBJ 1.1)

In addition to all the network-based tools and sensors, there are also a few host-based sensors that we need to discuss and that we use inside of our security assessments: things like file integrity monitoring, antivirus or antimalware solutions, and data loss prevention sensors. Now, file integrity monitoring, or FIM, is a host-based intrusion detection technique that creates a hash digest for every file that’s being monitored. If the file is changed or altered, its hash digest is also going to change, and this will create an alert in the system. We usually conduct this technique on operating system and application files. File integrity monitoring is also a requirement of PCI DSS, Sarbanes-Oxley, the Federal Information Security Management Act, the Health Insurance Portability and Accountability Act, and the SANS critical security controls. As you already know, it’s also imperative that we have antivirus or antimalware protection installed on our hosts as a protection mechanism. But this antivirus can also be useful during a security assessment because it provides another source for logging potential infection vectors into the system and the network. This specialized software can help us detect and stop problems such as adware, spyware, viruses, worms, and other destructive types of software. In the past, antimalware protection was performed by individual tools like antivirus, antispyware, and spam filters, but today, much of this functionality is combined into a single piece of software called antimalware. For example, Windows Defender provides antivirus and antispyware protections in a single piece of software on most of our hosts. To prevent malware, it’s important that antimalware solutions remain up to date by continually updating their signature files, either daily or weekly. We should also scan the computer to ensure no unidentified malware is on that machine.
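The core of file integrity monitoring is just hash comparison. Here is a minimal sketch (illustrative only; real FIM products add scheduling, secure baseline storage, and alerting) that flags any monitored file whose SHA-256 digest no longer matches the recorded baseline:

```python
import hashlib

def hash_content(data: bytes) -> str:
    """Return a SHA-256 digest for the monitored file content."""
    return hashlib.sha256(data).hexdigest()

def check_integrity(baseline: dict, current: dict) -> list:
    """Compare current file digests against the recorded baseline and
    return the names of files whose digests no longer match."""
    return [name for name, digest in current.items()
            if baseline.get(name) != digest]
```

Any file name returned by `check_integrity` would generate an alert in the system, since a changed digest means the file itself was altered.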

The operating system should also be set up to prevent auto-run or auto-play features from working so that new malware cannot be introduced into the system. Another infection vector is email, so our email clients need to be set up to not allow automatic previews to be displayed, because previews can load images that could have malware inside them. Users must be trained to think carefully about which links they click and which emails they open, because phishing is another major attack vector that brings malware into our systems. Most major antimalware solutions also provide an extension that can be installed into your web browser. This extension will warn users when they’re viewing websites that are suspicious or malicious. This is another layer of protection to help keep users from installing malware by mistake. Our browsers should be set up with various levels of trust based on the different security zones in which we’re going to be using them, such as the Internet or the intranet. Now, spyware attempts to track and monitor our activities and steal our personal information. If our antimalware solution doesn’t also provide antispyware protection, we should invest in a standalone antispyware product for our different hosts and servers. This will help detect and prevent keyloggers and other spyware from capturing our personal information and sharing it with attackers. In addition to host-based antivirus or antimalware solutions, we also have network appliances that can provide this functionality, and many unified threat management devices will also have antimalware built into them. Whether you’re relying on host-based or network-based protection, or a combination of both, you should configure them to forward or redirect their logs into your SIEM for correlation and analysis with other devices across the network.
All this information is extremely useful when you’re conducting your security assessments and during an incident response. Another tool that can feed its logs into our SIEM is our data loss prevention system, or DLP. Now, DLPs can use endpoint software or a network appliance that’s configured to monitor and prevent data leakage. Data leakage occurs when sensitive data is disclosed to personnel who don’t have a need to know. This data loss could be intentional or inadvertent, but either way, it could have been prevented if we had a proper DLP solution in place. Data loss prevention solutions can utilize filters to help identify when files are being uploaded to a site like Dropbox or Google Drive. They can also put restrictions on documents so that they can only be printed when you’re connected to the office network, and not when you’re working from a home office. This software can also prevent people from sending files through email if we so desire.

For example, in my organization, we have a data loss prevention system in place. Whenever an employee attempts to send an email to somebody outside of our organization and attaches a file, it brings up a prompt asking them to confirm that they understand that they’re sending a file to somebody outside of our organization. This little warning is just a way to tell our staff to think twice before they send something. This is an attempt to prevent inadvertent data leaks, but this setup won’t prevent a malicious insider from actually sending out those files. To do that, we’d have to set it to a higher security level, which we chose not to do. Now, data loss prevention can be installed as either network-based protection or endpoint protection. When you’re using network DLPs, they’re going to be installed at your network boundary, and they’re going to analyze all the traffic that is leaving your network. Endpoint DLPs are going to be agents installed on a server or workstation within your organization, and they only protect data on that particular asset. Besides deciding whether to install network or endpoint protection, we also need to decide whether to use precise or imprecise methods for data loss prevention. Now, precise methods involve registering all the content that’s considered sensitive, and then each of those registered files would be hashed, and that hash would be used to create a unique signature to test all outbound files against.

This method results in a very precise identification of protected data and virtually no false positives. Imprecise methods instead rely on regular expressions, metadata tags, Bayesian analysis, and statistical analysis to guess what files should be protected under your DLP program. This is a much less precise method, as its name implies, and leads to more false positives, causing files to be blocked when they should have been sent. For example, you may use a regular expression that says any number in the form of three digits, a dash, two digits, a dash, and four digits should be blocked, because that’s the format of a Social Security number. But if I use that same format of number as a unique identifier in my system, now I’m going to have false positives, because the system thinks those identifiers are Social Security numbers, and it’s going to block them from being sent.
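As a sketch of the imprecise method just described, here is the Social Security number layout expressed as a regular expression, along with the false positive problem it creates. This is illustrative only, not a production DLP rule:

```python
import re

# The SSN layout described above: three digits, a dash, two digits,
# a dash, and four digits.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn_like(text: str) -> bool:
    """Imprecise DLP check: flags anything matching the SSN layout,
    including internal identifiers that merely share the format --
    which is exactly how false positives arise."""
    return SSN_PATTERN.search(text) is not None
```

An outbound message containing a real SSN and one containing an internal order ID in the same format would both be blocked, which is the false positive the precise (hash-based) method avoids.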

5. Layer 2 Segmentation (OBJ 1.1)

In this lesson, we’re going to talk about layer 2 segmentation. So we’re going to take a look at segmentation within our local area networks, or LANs. And we’re first going to do this by looking at a concept known as VLANs, or virtual local area networks. Many switches have the ability to create these VLANs to add an additional layer of separation and segmentation to our network without requiring us to purchase, configure, and operate additional hardware switches. Now, when VLANs are created, this reduces the background traffic and allows the network to grow while providing different security protections to each of the different parts of our network. For example, let’s say I wanted to create a VLAN for the accounting department, another one for human resources, another for the IT department, another one for my printers, and another one for my general users. By doing this, I’ve essentially created logical groupings for each type of user or device that I have on my network, and I can isolate each of them into separate broadcast domains. When you create a VLAN, you’re essentially creating a virtual switch.

So if I want to move traffic from one VLAN to another, I’m going to need to route that traffic using either a router or a layer 3 switch. Now, because of this requirement to route traffic between VLANs, I have the opportunity to add ACLs and check the traffic as it enters or leaves each VLAN, again adding to the network segmentation and providing us with more security in our network. Remember, access control lists, or ACLs, are used to control what traffic is going to be allowed inbound or outbound on a certain interface on these devices. These decisions are made at layers 3 and 4 of the OSI model using IP addresses and port numbers. These ACLs can be applied to a switch interface, a router interface, or a network firewall. Either way, this gives us the opportunity to block or allow traffic. One of the most common uses of VLANs in networks today is a management VLAN. The management VLAN is going to be used to pass inter-switch traffic across your network, and this gives us separation of network devices based on their types, which gives us additional security. So our management VLAN is going to be isolated from the other devices and require its own credentials for us to access and configure the devices that are touching this management VLAN. If you want to access the management VLAN to perform some management, you’re going to need to configure the network interface card on a server or workstation to allow it to operate on this specific VLAN. Especially on servers, we may find that we need a more tightly controlled and restricted interface configuration, because they may be connected to a management VLAN or some other sensitive network. By using out-of-band management ACLs, management interfaces, and data interfaces, our network interface cards can be configured for higher levels of security and monitoring. Out-of-band, or OOB, NICs can connect to a separate and isolated network that is not accessible from the Internet or the rest of the LAN.

These out-of-band interfaces can be a standard Ethernet card or a serial interface card, depending on how you want to configure them. OOB interfaces should always be in a separate subnet from your production network and run on a separate VLAN. If our management VLAN, for example, needs to connect over a WAN, it should be on a different logical WAN connection than our standard Internet connection. Also, the management VLAN needs to have its quality of service set to high to ensure that you always have access to the management of the devices whenever you need them. Some newer systems with an Intel vPro chipset and Active Management Technology will allow machines to be managed over the out-of-band network even if they’re powered off, because you can use the wake-on-LAN feature, along with some other features inside Windows Server 2012 or newer environments, to wake up the devices and allow you to configure them. Now, a management interface is used for remotely accessing devices. This can be used over out-of-band or in-band networks. For best security, though, we should disconnect the management interface from the in-band regular production corporate network and instead use an out-of-band management interface.

Through this management interface, we can connect to the network using SSH and monitor the network using SNMP, the Simple Network Management Protocol. This management interface can be a physical port on a switch or a router, or it can simply be a regular port that is logically configured to act as a management interface. The entire point is to keep the management of the network in a secure and separated channel instead of using a regular network connection. Each management interface should be configured for security with a long and strong password at a minimum. Another type of network interface that we find in our networks is the standard interface that passes business traffic. Known as data interfaces, these make up our typical production network. Now, these interfaces are not used for local or remote management; instead, they provide the necessary network services to our end users and their devices. Each interface can have security added to it, and it can be placed into an appropriate VLAN based upon your security needs. Like I mentioned earlier, I can put it in the accounting area, the human resources area, the IT area, or the general user area. If we add security to a router’s data interface, this is known as an ACL, and it operates at layer 3 by filtering on IP addresses. If instead we add this type of security to a switch’s interface, we call this port security, and it operates at layer 2 by filtering based on MAC addresses.
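The two filtering behaviors described above can be sketched in a few lines of Python (a conceptual illustration, not actual switch or router code): port security permits frames by source MAC address at layer 2, while an ACL is evaluated top-down against IPs and ports, with the implicit deny that ends every ACL.

```python
def port_security_check(allowed_macs: set, frame_src_mac: str) -> str:
    """Layer 2 port security sketch: forward a frame only when its
    source MAC address is on the switch port's allow list."""
    return "forward" if frame_src_mac.lower() in allowed_macs else "drop"

def acl_check(rules, src_ip: str, dst_port: int) -> str:
    """Layer 3/4 ACL sketch: rules are evaluated top-down and the first
    match wins; an implicit deny applies when nothing matches."""
    for action, ip, port in rules:
        if ip in ("any", src_ip) and port in ("any", dst_port):
            return action
    return "deny"  # implicit deny at the end of every ACL
```

The key design point is the same in both cases: the decision is made per interface, using only the fields available at that layer (MAC addresses at layer 2; IP addresses and ports at layers 3 and 4).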
