Pass Cisco CBROPS 200-201 Exam in First Attempt Easily
Latest Cisco CBROPS 200-201 Practice Test Questions, CBROPS Exam Dumps
Accurate & Verified Answers As Experienced in the Actual Test!
Check our Last Week Results!
- Premium File: 400 Questions & Answers (Last Update: Oct 1, 2024)
- Training Course: 21 Lectures
- Study Guide: 965 Pages
Download Free Cisco CBROPS 200-201 Exam Dumps, CBROPS Practice Test
File Name | Size | Downloads
---|---|---
cisco | 1.5 MB | 1423
cisco | 4 MB | 1223
cisco | 1.5 MB | 1252
cisco | 3.2 MB | 1474
cisco | 1.4 MB | 1542
cisco | 1.8 MB | 1730
cisco | 1.7 MB | 1832
Free VCE files for Cisco CBROPS 200-201 certification practice test questions and answers, exam dumps are uploaded by real users who have taken the exam recently. Download the latest 200-201 Understanding Cisco Cybersecurity Operations Fundamentals (CBROPS) certification exam practice test questions and answers and sign up for free on Exam-Labs.
Cisco CBROPS 200-201 Practice Test Questions, Cisco CBROPS 200-201 Exam dumps
Section 1
1. Introduction
Being in cybersecurity can be a lot like being a detective. Sometimes you need to know how to handle incidents and properly analyse evidence. The 210-255 exam primarily focuses on these aspects of cybersecurity. In this course, you will learn things like how to analyse malware and investigate security breaches. Let's get started with the first section of this course, Threat Analysis and Computer Forensics.
Section 2
1. Malware Analysis Tool Report
Interpreting the output report of a malware analysis tool can be extremely helpful when you are trying to find the root cause of an attack. AMP Threat Grid and Cuckoo Sandbox are both tools that can be used to analyse malware. AMP Threat Grid analyses suspicious behaviour in your network against more than 450 behavioral indicators and a malware knowledge base sourced from around the world. As a result, AMP Threat Grid provides accurate, context-rich analytics on malware. It can be delivered as either a cloud-based or on-premises solution to help organizations understand what malware is doing or attempting to do, how large a threat it poses, and how to defend against it. Let's take a look at how to analyse a malware analysis report using AMP Threat Grid. I'm just going to pick a random file out of the global list of files that have already been analyzed. As you can see, there is all sorts of good info displayed in this report, starting with behavioral indicators, which is a list of all the suspicious activity found while running the file in the sandbox environment. Network activity information can be used to see if the file triggered external command-and-control (CnC) connections or scans to other devices on its network. You can also see the processes that were active after the file was launched to help pinpoint what it was trying to accomplish, such as changing firewall rules on the machine it was installed on. Registry changes and file system changes are both key pieces of information. Here I can see which registry entries were modified, as well as file system activity for both read and modified content. Taking the time to analyse files that have been involved in a security incident can not only help you identify what the file was trying to do locally, but it can also show you other places in the system or network that could have been affected.
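If your analysis tool can export its report as JSON, this kind of triage can also be scripted. Below is a minimal Python sketch that summarizes the same categories discussed above; the JSON key names (behavioral_indicators, network, registry_modified) are hypothetical placeholders, not the actual AMP Threat Grid or Cuckoo schema, so adjust them to whatever your tool exports.

```python
import json

# Minimal sketch: summarize a sandbox report exported as JSON.
# The key names below are assumptions for illustration only.
def summarize_report(path, min_severity=70):
    with open(path) as f:
        report = json.load(f)

    print("== Behavioral indicators ==")
    for bi in report.get("behavioral_indicators", []):
        if bi.get("severity", 0) >= min_severity:
            print(f"  [{bi['severity']}] {bi['title']}")

    print("== Network activity (possible CnC or scanning) ==")
    for conn in report.get("network", []):
        print(f"  {conn['protocol']} -> {conn['dst_ip']}:{conn['dst_port']}")

    print("== Registry keys modified ==")
    for key in report.get("registry_modified", []):
        print(f"  {key}")

if __name__ == "__main__":
    summarize_report("sample_report.json")  # placeholder file name
```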
2. CVSS 3.0
In this section, I'm going to talk about the Common Vulnerability Scoring System, version 3.0. The CVSS provides a way to capture the characteristics of a vulnerability and produce a numerical score reflecting its severity. The numerical score can then be translated into a qualitative rating such as low, medium, high, or critical to help organisations properly assess and prioritise their vulnerability management processes. You can find information about the CVSS on the FIRST website at www.first.org/cvss. Let's take a look at the factors used in the calculations that determine a vulnerability score. An attack vector is the path that an attacker uses to access a system for malicious activity. This could be an email, an open Layer 4 port on the network, or a web page. This score increases the more remote an attacker can be, logically or physically, when using the attack vector. Attack complexity describes the conditions outside of an attacker's control that must exist to exploit a vulnerability. So if an attacker had access to a network all the time, then the attack complexity would be low. But if an attacker had to wait for a certain condition to arise, like a pivot point, then the attack complexity would be high. Privileges Required describes the level of privilege that an attacker must have before exploiting a vulnerability. So if a system requires admin credentials for access, then the score would be lower, but if a system only required basic user credentials or no login at all, then the score would be higher. User interaction describes the requirement for some type of user activity before an attack can be launched, such as a user clicking a link. This score is highest when no user interaction is required. The scope of an attack refers to the ability of an attack to affect resources separate from the original target. An example would be an attacker compromising a web server and then corrupting the database server by pivoting from the web server. Then finally, we have the CIA triad of confidentiality, integrity, and availability. You've probably learned about these topics in earlier security studies, but just as a refresher, confidentiality refers to limiting information access; integrity refers to the trustworthiness of information; and availability refers to the loss of availability of the impacted component. To calculate your vulnerability score, you simply have to select the values for the base metrics, and the website will generate a score for you. So let's say my attack vector requires local access, the attack complexity is high, high privileges are required, user interaction is required, we'll say the scope is changed, and then our CIA impact will be high all the way through. With these values, we have a score of 7.2, which is in the high range. One of the coolest things on this website is the example scores. If you go over to the CVSS version 3.0 examples, it actually takes you through some vulnerability examples and how the scores were determined based on the criteria for the vulnerability. So let's take a look at this one. We have a reflected cross-site scripting vulnerability here, and I'll just skip down to the CVSS 3.0 base score and read through each metric and the comments to see why that value was chosen. For the attack vector, it shows Network because the vulnerability is in a web application and reasonably requires network interaction with the server. Attack complexity is low, even though the attacker would need to perform some type of reconnaissance on the targeted system.
Because an attacker requires no privileges to mount an attack, the Privileges Required value is None. User interaction is Required because a successful attack requires the victim to visit the vulnerable component, for example by clicking a malicious URL. Scope is Changed because the vulnerable component is the web server running the phpMyAdmin software, while the impacted component is the victim's browser. The confidentiality impact is Low because even though information in the victim's web browser can be read and sent to an attacker, it is restricted to certain information. The integrity impact is Low as well because the information maintained in the victim's web browser can be modified, but only the information associated with the website running phpMyAdmin. And then finally, we have None for the availability impact because the attack does not have a major impact on the availability of the victim's system. So take some time and look through these different examples. They really help to provide some context for each metric and what type of value should be chosen in each scenario.
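The arithmetic behind the calculator is published in the CVSS v3.0 specification, so you can reproduce the 7.2 result from the earlier example yourself. Below is a minimal Python sketch of the base-score formula with the metric weights from the spec; it is meant to illustrate how the pieces combine, not to replace the official calculator.

```python
import math

# CVSS v3.0 metric weights from the specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}
PR_CHANGED   = {"N": 0.85, "L": 0.68, "H": 0.5}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    # Round up to one decimal place, as defined by the CVSS spec.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    pr_weight = (PR_CHANGED if scope_changed else PR_UNCHANGED)[pr]
    exploitability = 8.22 * AV[av] * AC[ac] * pr_weight * UI[ui]

    isc_base = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (isc_base - 0.029) - 3.25 * (isc_base - 0.02) ** 15
    else:
        impact = 6.42 * isc_base

    if impact <= 0:
        return 0.0
    total = impact + exploitability
    if scope_changed:
        total *= 1.08
    return roundup(min(total, 10))

# The lecture example: AV:L / AC:H / PR:H / UI:R / S:C / C:H / I:H / A:H
print(base_score("L", "H", "H", "R", True, "H", "H", "H"))  # -> 7.2
```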
3. Microsoft Windows File System
To be able to obtain forensic evidence during a cyber breach investigation, you should understand how endpoint file systems work. So what is a file system? Think of it as a digital filing cabinet. When you store data on your computer's hard drive or a flash drive, the file system is used to divide it into organised units that can be accessed later on. This can be accomplished in many ways, depending on the type of file system that is used. Back in the day, the File Allocation Table, or FAT, was the default file system used by Microsoft operating systems. FAT organised data by splitting disks into clusters, with a table to reference what was stored where. Over time, newer versions of FAT were introduced, such as the more recent FAT32. FAT32 overcame some of the limitations of earlier FAT versions, but it still had file and volume size limits, making it unsuitable for newer systems. As an alternative to FAT, Microsoft introduced the New Technology File System, or NTFS. NTFS is more secure and scalable than FAT, making it the clear winner, and it has been used as the Microsoft OS file system since Windows XP. NTFS keeps track of timestamps for any changes to the file system. Each file has a timestamp for created, modified, accessed, and entry modified. As one might expect, time is critical in all aspects of cybersecurity. If there is any file activity on a compromised device, the time of the activity could be the smoking gun in a breach investigation. If you go to Computer Management on a Windows device and then go to Storage and Disk Management, you can see what file system is used for each drive. As you can see here, I have my main hard drive, which uses NTFS as its file system, and then I have an external storage device that uses exFAT, which is another version of the FAT file system. Next, I want to open up a file that has been modified on my system to show you where you can look at timestamp information for files on a Windows computer. So I'll go to my desktop, where I have this test text file. Just by hovering over it, I can see that it was last modified on October 30 at 10:32 p.m. So if I was doing an investigation and found new files on the system, or wanted to see if critical documents were accessed or not, I could right-click on that file, go to Properties, and then I could see the last time it was created, modified, and accessed. So I'm going to open up that file and make a change, and we should see an update to today's date, which is November 21 of 2017. So I modify the file, click Save, X out of there, and then let's go back to the properties of that file. And now you can see that the modified date has been changed. So that's just a really basic thing to know if you're not that familiar with file systems on Windows computers, but it can be very helpful for computer forensics.
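If you prefer the command line to Explorer's Properties dialog, the same timestamps can be pulled programmatically. Here is a small Python sketch; the path to test.txt is just an assumed example, and note that st_ctime means creation time on Windows but inode-change time on Linux.

```python
import os
from datetime import datetime
from pathlib import Path

# Hypothetical path used for illustration -- point this at any file you want to inspect.
target = Path.home() / "Desktop" / "test.txt"

stats = os.stat(target)
fmt = "%Y-%m-%d %H:%M:%S"

# On Windows, st_ctime is the creation time; on Linux it is the inode-change time.
print("Created :", datetime.fromtimestamp(stats.st_ctime).strftime(fmt))
print("Modified:", datetime.fromtimestamp(stats.st_mtime).strftime(fmt))
print("Accessed:", datetime.fromtimestamp(stats.st_atime).strftime(fmt))
```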
Section 3
1. Common Artifact Elements and Protocol Headers
When an intrusion event occurs on the network, it needs to be analysed to determine who, what, where, and when things happened. Network intrusion analysis primarily uses information like IP addresses and port numbers to track security events. Here is a list of some of the common types of information that we are looking for while analysing network intrusions. Here is the intrusion analysis page used by Cisco's Firepower Management Center. As you can see, there is detailed information for each intrusion event. Let's click on one to take a closer look and see what types of security event artifacts can be found. Here on the detailed network intrusion analysis page, I can see when events occurred and the source and destination hosts that were involved in the event, as well as Layer 4 port numbers and application protocols. Collectively, these common security artifacts can be used to analyse intrusion events. Intrusion devices also have the ability to analyse more than just data payloads; they can analyse protocol headers as well. Protocol headers carry the control information for network model layer protocols like IP, TCP, and HTTP. Preprocessors that analyse header information can detect attacks that exploit things like IP fragmentation, checksum validation, and TCP or UDP session handling. In the next video, we will analyse an intrusion event with a PCAP file and drill into protocol headers and some of the other common security artifacts.
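To make "protocol header" concrete, here is a minimal Python sketch that unpacks the fixed 20-byte IPv4 header from raw packet bytes using struct. The sample bytes are fabricated for illustration (the checksum is not validated), but the field layout follows RFC 791, and these are exactly the fields an analyst or preprocessor inspects.

```python
import socket
import struct

# Fabricated sample bytes for illustration; in practice they would come from a
# raw socket or a packet-capture library.
sample = bytes.fromhex("4500003c1c46400040069f64c0a80068ac100a0c")

(ver_ihl, tos, total_len, ident, flags_frag,
 ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", sample[:20])

print("Version        :", ver_ihl >> 4)
print("Header length  :", (ver_ihl & 0x0F) * 4, "bytes")
print("Total length   :", total_len)
print("TTL            :", ttl)
print("Protocol       :", proto, "(6 = TCP, 17 = UDP)")
print("Source IP      :", socket.inet_ntoa(src))
print("Destination IP :", socket.inet_ntoa(dst))
```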
2. Security Analysis with Wireshark
Packet captures are the most detailed type of analysis data. With packet captures, you can see all of the network layer information, plus actual data payloads. You can analyse information like web page images, passwords, and even voice call conversations. Cisco's Firepower Management Center has a sweet feature that actually allows you to download packet capture files for intrusion events. So I'll find the event I want to analyze, left-click the drill-down arrow, and then if I scroll to the bottom, I have the option to download all packets. So I'll download this, and then we'll use our Wireshark software to open up the PCAP file to analyse the security artifacts. While that's downloading, if we look here in this packet information section right in the Firepower Management Center, we can actually look at some of the data that we would see in our PCAP file. So I guess in this case, you really could probably get by just by using the data that's provided in the Firepower Management Center, but you're not always going to have that luxury. Plus, Wireshark has some additional features that can be used to help you analyse the data. Okay, so it looks like the download is done. I'll open up the zip file, double-click it, and it should open the file with my Wireshark software. So we can see that the first packet that was collected in the security event was the acknowledgement that opened up the TCP session on port 80 to the web server. And then once the connection was established, we could see that there was an HTTP GET, and we could see our protocol header information and all of our addressing for each network model layer. And then the most interesting part is the HTTP section. It can provide information such as images that were pulled down in the HTTP GET, as well as the full destination URI and the DNS hostname information. So as you can see, having a PCAP file of a security event can give you all the information that you need to properly analyse it at a network level.
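Outside of Wireshark, you can script the same kind of triage against a downloaded PCAP. The sketch below uses the third-party scapy library to list HTTP GET requests and the hosts involved; the file name is just a placeholder for the capture downloaded from the FMC.

```python
# Requires: pip install scapy
from scapy.all import rdpcap, IP, TCP, Raw

# Placeholder file name -- point this at the PCAP downloaded from the FMC.
packets = rdpcap("intrusion_event.pcap")

for pkt in packets:
    # Look for client-to-server traffic on port 80 that carries a payload.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport == 80:
        payload = bytes(pkt[Raw].load)
        if payload.startswith(b"GET"):
            request_line = payload.split(b"\r\n", 1)[0].decode(errors="replace")
            print(f"{pkt[IP].src} -> {pkt[IP].dst}  {request_line}")
```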
3. NetFlow v5 and Security Events
NetFlow version 5 record data is commonly used to analyse security events. NetFlow v5 records provide traffic flow data such as source and destination IP addresses, source and destination ports, the Layer 4 protocol, type of service, input and output interfaces, packet and byte counts, flow start and end times, and TCP flags. This type of information can help to identify traffic anomalies and many cyberattacks.
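As a sketch of what those records look like on the wire, the following Python snippet unpacks a single 48-byte NetFlow v5 flow record using Cisco's published v5 layout. In a real collector the record bytes would be sliced out of a UDP datagram after the 24-byte v5 header; that plumbing is omitted here.

```python
import socket
import struct

# NetFlow v5 flow record: 48 bytes, big-endian, per Cisco's published format.
V5_RECORD = struct.Struct("!4s4s4sHHIIIIHHBBBBHHBBH")

def parse_v5_record(record: bytes) -> dict:
    (src, dst, nexthop, in_if, out_if, packets, octets, first, last,
     src_port, dst_port, _pad1, tcp_flags, proto, tos,
     src_as, dst_as, src_mask, dst_mask, _pad2) = V5_RECORD.unpack(record)
    return {
        "src": f"{socket.inet_ntoa(src)}:{src_port}",
        "dst": f"{socket.inet_ntoa(dst)}:{dst_port}",
        "protocol": proto,            # 6 = TCP, 17 = UDP
        "packets": packets,
        "bytes": octets,
        "tcp_flags": tcp_flags,
        "duration_ms": last - first,  # SysUptime values at flow start/end
    }
```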
Section 4
1. NIST SP 800-61 r2
Incident response is necessary for rapidly detecting incidents and restoring computing services after a security event. To assist organisations in establishing computer security incident response and handling capabilities, the National Institute of Standards and Technology created NIST SP 800-61 Revision 2, a publication that serves as a guide for computer security incident handling. If you follow the link in the resources for this lecture, it will take you to a PDF version of this guide. Now, let's take a look at the incident response process. Handling an incident should include four major phases: preparation; detection and analysis; containment, eradication, and recovery; and post-incident activity. The preparation phase is used to make sure that you have your ducks in a row in case there is an incident. Some key examples would be equipment for analysing compromised devices and the proper communication channels. Jump kits can be built with all of the necessary tools so that you and your team are ready to quickly respond to an incident. The detection and analysis phase can be difficult because there are so many event types and source technologies. Here are some NIST recommendations for making incident analysis easier and more effective: things like network profiling, understanding normal behaviours with anomaly detection, log retention policies, event correlation, and, among other things, keeping accurate time to make sure that you can properly correlate events. Next, we have the containment, eradication, and recovery phase. This phase is primarily used to prevent additional damage, preserve evidence, and maintain network availability. As you'd expect, the faster you can discover the attacking host and contain it, the better off you are. The longer a compromised host is on the network, the more time the attacker has to spread and create backdoors on the network. You'll want to look at logins and logs throughout the network during the time of the attack to make sure that the attacker didn't compromise other hosts on the network. Once an incident has been contained, evidence should be collected and documented with information such as computer identification and collection details, especially since you may need it for legal proceedings after the storm clears. Eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, as well as identifying and mitigating all vulnerabilities that were exploited. For some incidents, eradication is either not necessary or is performed during recovery. In recovery, administrators restore systems to normal operation, confirm that the systems are functioning normally, and remediate vulnerabilities to prevent similar incidents. The last step in the incident response process is the post-incident activity phase. This phase is used to take a step back and assess why the incident happened and what your organisation could have done differently to improve its incident response process. This can be accomplished by holding internal meetings to discuss lessons learned and updating processes accordingly.
2. CSIRT
In the last video, we covered the fundamentals of creating an incident response plan. The next step after your incident response plan has been created is to form an internal Computer Security Incident Response Team (CSIRT). An incident response team is responsible for providing incident response services to part or all of your organization. The team receives information on possible incidents, investigates them, and takes action to ensure that the damage caused by the incidents is minimized. This team could be made up of in-house members from your organization's security team, or it could even be outsourced to an external group like a managed security service provider. MSSPs are companies that provide incident response and managed security services. Some ISPs like AT&T and CenturyLink offer these services, and even Cisco has an incident response service. At the national level, CSIRTs and CERTs (Computer Emergency Response Teams) work together to help protect partners and citizens from cybersecurity threats. US-CERT, for example, works to protect all Americans by responding to major incidents, analyzing threats, and exchanging critical cybersecurity information with trusted partners around the world. US-CERT's critical mission activities include providing cybersecurity protection to federal civilian agencies, developing timely and actionable information, responding to incidents, analysing data about emerging cyber threats, and collaborating with foreign governments and international entities to enhance the nation's cybersecurity posture. One well-known CERT is the CERT Coordination Center at the Software Engineering Institute. They help solve cybersecurity problems by working with software vendors to resolve software vulnerabilities, and they even work with organisations such as the FBI and the Department of Homeland Security.
3. Network Profiling
The more information that is available about your networks and hosts, the easier things will be when it comes time for incident response. Profiling data provides a complete view of all of the information the system has gathered about a network or host. In this lecture, we are going to take a look at different profiling methods that can be used to provide contextual data to detect and even prevent incidents. Throughput is the measure of how much data is successfully transferred between network hosts. So if you send a file between two computers and the rate of the transfer is 50 megabits per second, then that would be the throughput. Throughput utilisation can be an indicator that there has been a security incident if it is monitored. Network management servers can collect this data historically so that a baseline can be created. If the NMS reports that a remote site had an average throughput of 30 megabits per second for a year, and then all of a sudden the site spikes to 100 megabits per second, then you would want to investigate that network. Of course, just because there is a large increase in traffic doesn't necessarily mean that there has been a security incident, but it is possible. Session duration is another important thing to watch out for on a network. Really long sessions to hosts could be a sign of a misbehaving client on the network. Most sessions shouldn't last past a typical eight-hour work day, but if an attacker has a back door into the network, then they may stay connected for long periods of time. One easy way you can check for session duration is by looking at your firewall's connection statistics. So here I am, logged into my Cisco Firepower device, which is my internet gateway in the lab, and just by running the command show conn detail, just like you could on the ASA platform, you can see connection or session details. So I can see how long these connections have been idle as well as their uptime, which is what I'm looking for. This one, for example, is four days long. That might be one I want to look into; that seems a little unorthodox to me. It could be legitimate traffic, but any connections or sessions that have been active for more than a day straight should be investigated to ensure that there are no CnC connections or back doors connected to your network. Knowing what wired devices are connected to your network is a big part of network security. Since most wired networks do not require authentication or authorization for access, they are an easy way in for an attacker. Small, low-profile devices like Raspberry Pis can be easily tucked away and remain connected to the network for years without anyone noticing. There are even malicious devices disguised as everyday equipment, like power strips and chargers. To control what is plugged into a network, a NAC solution like Cisco ISE can be implemented. ISE has the ability to profile what types of devices are connecting to the network and permit or deny endpoints based on posture. A common ISE rule set would be to only let company-owned devices gain full network access, while all other devices would only get limited access. So just to give you a basic idea of what I mean when I talk about wired authentication and authorization, I'm going to show you a switchport that I have configured for authentication, which actually talks to Cisco ISE to verify whether users or devices should be allowed on the network.
It's quite a long configuration, but basically all of these authentication commands are tweaking the switchport, saying, "Hey, when something plugs into this switch port, I want to authenticate it with 802.1X or MAC address authentication." Right now, I have a phone plugged into this port and a computer plugged into the phone. So if I run the command show authentication sessions interface, specifying this port, then I can see the authentication and authorization status of my devices. So let's take a look at this. Here, this shows the username of the computer that's plugged into the back of the phone, its MAC address, and its IP address. It looks like it's authenticated properly, and it received this downloadable access list from ISE. I don't want to get too crazy into the details here, because knowing how to configure this stuff really isn't required for the CyberOps exam. I just want to share with you as much information as possible so you have a better understanding of how this all ties together. So basically, my computer is authenticated and all happy in the data domain, and then I have my phone in the voice domain, which has also received a permit-any downloadable access list. So you saw the switchport configuration. Now, to actually talk to ISE, we have some AAA commands that are set globally. For example, I have this aaa authentication dot1x command, and I'm saying, "Hey, for 802.1X requests, I want you to talk to this group." And that RADIUS group basically has the IP address of my ISE server, so the switch knows to send RADIUS requests for wired authentication to the ISE server. So now I'm in ISE. I just want to give you a quick look at what the policies look like. When someone plugs into a switch port and the switch sends their username or MAC address to ISE, what is it going to do with that device? Is it going to deny access to the switch port? Or is it going to push an access list to the switch to apply to the switch port? So let's take a look at my wired access policy here. Just a few examples: I have a MAC address list here, basically, so that if a device such as a phone lacks the ability to send things like a username and login information, I can still authenticate it by checking its MAC address against a list that I've compiled. If it doesn't match that, then ISE looks for Active Directory user credentials, a certificate, or something else. And at that point, if it passes authentication with a valid username and password, then how is it authorised to access the network? So once you authenticate successfully, we go down here, and you can get creative with authorization policies; it's actually one of the bigger parts of the configuration. I could send a user to a guest portal when they plug into a wired port and give them limited access, or if they're a company-owned asset and a valid domain user, then give them full access. So now you can see that with a NAC solution like ISE, you can truly secure what is connected to the network so that a hacker couldn't come in off the street and plug into an unused switch port to gain access to the network. Segmenting IP networks and VLANs based on security levels is obviously a best practice. For example, guest users should not be on the same network as internal servers that run critical roles in the environment. IP address planning can take a lot of thought to make sure that address space is allocated for scalability and security.
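For reference, here is a rough sketch of the kind of switchport and global AAA configuration being described above, in Cisco IOS syntax. The interface number, VLANs, server group name, IP address, and shared secret are all placeholders for illustration; an actual ISE deployment will differ.

```
! Hypothetical example -- interface, VLANs, names, and addresses are placeholders.
interface FastEthernet1/0/12
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
 authentication host-mode multi-domain
 authentication order dot1x mab
 authentication priority dot1x mab
 authentication port-control auto
 mab
 dot1x pae authenticator
!
! Global AAA pointing 802.1X / MAB requests at the ISE RADIUS server group.
aaa new-model
aaa authentication dot1x default group ISE-SERVERS
aaa authorization network default group ISE-SERVERS
!
radius server ISE-1
 address ipv4 192.0.2.10 auth-port 1812 acct-port 1813
 key ExampleSharedSecret
!
aaa group server radius ISE-SERVERS
 server name ISE-1
```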
4. Server Profiling
In the last video, we talked about network profiling. Another method of profiling is server profiling. Server profiling information consists of things like what ports the server is listening on, logged-in users, running processes and tasks, and running applications. By tracking this type of information, a contextual view of network hosts is available for incident response investigations and anomaly detection. Let's hop into the lab, and we'll go through how to identify this profiling information on Microsoft and Linux servers. All right, so here I am on a Microsoft Windows server. First, I'll show you the listening ports on the server. Go to Start and then either search for Command Prompt or just pick it from the shortcuts. The command we'll use is netstat -a. Before I hit Enter here, I know there'll be a long list of listening ports, so I'm going to first increase the buffer size for my window so that I can scroll all the way to the top and you can see everything. I'll right-click on the window, go to Properties, then Layout, and then hit Enter, and I should be able to go back to the top. So here you can see all the listening ports we have: port 80 for HTTP, port 88 for Kerberos, port 389 for LDAP, and so on. As you can see, it's a really easy way to understand what ports are being listened on on your servers. In any server environment, it's a good idea to keep some documentation on what ports each of your servers is listening on, and make sure to block any ports that you don't want the server to be listening on in the Windows Firewall. Next, we can find the rest of the profiling information we were looking for by simply going to Task Manager. I'll just right-click on the taskbar and then click Task Manager. As you can see here, we have the applications that are running on this server, as well as the logged-in users. This shows that there's an administrator user logged in, which is me via an RDP session. So that does it for Windows. Now let's jump over to a Linux server to find the same information. All right, so here I am on my Linux server. I'm going to go to the terminal, and we'll start off by looking for listening ports. You can actually use the netstat command just like we did in Windows, except there are some different options. I'm going to add the flags for listening ports, UDP, and TCP. And it looks like the only port listening on this server is UDP 68 for DHCP. To identify users logged into a Linux machine, you can run the w command, and as I expected, only one user is logged in to this Linux box, which is myself as the root user. To see the running processes on a Linux server, you can run the command top, and as you can see, it shows the process ID, the user tied to each process, and CPU and memory usage. Then, to check all the running applications on the server, I can use the ps command, and there I have a list of the applications running on the system.
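The same profile data can also be collected in a scripted, cross-platform way, which makes it easier to baseline and diff over time. Here is a minimal sketch using the third-party psutil library (pip install psutil); listing connections may require elevated privileges on some operating systems.

```python
# Requires: pip install psutil
import psutil

def profile_server():
    print("== Listening ports ==")
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            print(f"  {conn.laddr.ip}:{conn.laddr.port} (pid {conn.pid})")

    print("== Logged-in users ==")
    for user in psutil.users():
        print(f"  {user.name} from {user.host or 'local console'}")

    print("== Running processes ==")
    for proc in psutil.process_iter(["pid", "name", "username"]):
        info = proc.info
        print(f"  {info['pid']:>6}  {info['name']}  ({info['username']})")

if __name__ == "__main__":
    profile_server()
```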
5. PCI
The PCI Data Security Standard (PCI DSS) is for organisations that handle credit cards from the major card schemes. Basically, PCI DSS is a set of rules to follow if your company processes payments. The types of data that are supposed to be protected by PCI compliance are things like cardholder names, PINs, and account numbers. The first PCI standard, PCI DSS 1.0, was released in 2004, and the latest version is PCI DSS 3.2. The latest requirements can be found on the PCI Security Standards website, and currently there are twelve requirements. To view the detailed PCI security standards document from the home page, you can click on Document Library and then click on View Document for PCI DSS version 3.2. For the CyberOps exam, you do not need to memorise the entire PCI DSS document, but I would go over the high-level overview of the twelve PCI DSS requirements and commit those to memory, as well as the different types of cardholder data listed on page seven. Those are the two big things that I would focus on. Of course, go through the whole guide; there's some really good material about network segmentation as well as best practises for implementing PCI. Really, the best way to be PCI-compliant is to create a dedicated PCI environment for any PCI systems. Then you only have to maintain PCI compliance for a small portion of your network.
Cisco CBROPS 200-201 Exam Dumps, Cisco CBROPS 200-201 Practice Test Questions and Answers
Do you have questions about our 200-201 Understanding Cisco Cybersecurity Operations Fundamentals (CBROPS) practice test questions and answers or any of our products? If you are not clear about our Cisco CBROPS 200-201 exam practice test questions, you can read the FAQ below.
Purchase Cisco CBROPS 200-201 Exam Training Products Individually