CompTIA CySA+ CS0-002 Topic: Configuring Your SIEM Part 1
December 19, 2022

1. Configuring Your SIEM (Introduction)

In this section of the course, we're going to discuss SIEM systems, which are security information and event management systems. Our focus in this section is going to continue to be in domain three, with a singular focus on Objective 3.1. Objective 3.1, in case you forgot, states that given a scenario, you must analyze data as part of security monitoring activities.

In particular, we're going to focus on the security monitoring activities associated with SIEMs, which makes sense since this is a SIEM section. Now, as we move through this section, we're going to start by describing how a security information and event management system, or SIEM, is used within your network to increase your monitoring and detection capabilities.

After that, we're going to explore the various use cases for security data collection techniques that are available for us to use as analysts. Then we're going to focus on the importance of security data normalization inside your network before we move into event logs and syslogs as we analyze security incidents. Finally, I'm going to perform another hands-on demonstration, this time showing you how to configure a SIEM agent to collect data from across your network. This is an important part of detecting malicious activity on our networks, so you won't want to miss this section of the course. It's really important.

2. SIEM (OBJ 3.1)

Now, log review is a critical part of security assurance. We're going to gather logs from all sorts of different systems, but gathering logs does you no good if you don't actually look at those logs. Logs shouldn't be viewed only after an incident; you shouldn't use them just as part of an incident response, for example. Instead, you should examine them on a regular and routine basis as part of your threat hunting and proactive system management. To do this effectively, though, you really do need to use a SIEM. A SIEM is a solution that provides real-time or near-real-time analysis of security alerts generated by network hardware and applications. Now, as we look at a SIEM, there are a lot of uses for one, but one of the best things they do is help us correlate events.

Let's take a simple example. You're looking through the logs, and you see that someone has logged in over a VPN from Asia. It's John Smith, and he's logging in from Asia because he's on a business trip. Well, there's nothing wrong with that. But if you look just moments later, you'll see that John Smith's ID has been used to log in to the server room in your building. That's an issue, because he can't be in your server room and on a business trip to Asia at the same time, so one of those two things is wrong. Either event by itself is fine, but putting those together and correlating them flags something that we need to look into: where is this person, and how is he in two very different places at the same time? A SIEM helps you do this very quickly and very easily.
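
To make that concrete, here is a minimal sketch of the kind of correlation rule a SIEM might evaluate: two normalized events for the same account, pulled from different log sources, flagged when the locations can't both be true. Every field name, value, and the 30-minute window are hypothetical, purely for illustration.

```python
from datetime import datetime, timedelta
from itertools import combinations

# Two normalized events for the same user, pulled from different log sources.
# All of the field names and values here are made up.
events = [
    {"user": "jsmith", "source": "vpn",   "geo": "Asia",           "time": datetime(2021, 1, 1, 8, 2)},
    {"user": "jsmith", "source": "badge", "geo": "HQ server room", "time": datetime(2021, 1, 1, 8, 9)},
]

WINDOW = timedelta(minutes=30)  # how close together two sightings must be to be "impossible"

def impossible_travel(evts):
    """Flag pairs of events where one user appears in two different places within the window."""
    alerts = []
    for a, b in combinations(evts, 2):
        if a["user"] == b["user"] and a["geo"] != b["geo"] and abs(a["time"] - b["time"]) <= WINDOW:
            alerts.append((a, b))
    return alerts

for a, b in impossible_travel(events):
    print(f"ALERT: {a['user']} seen in {a['geo']} and {b['geo']} within 30 minutes")
```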

Now, a security information and event management system, or SIEM, can be implemented in many different ways. You can do this with software, hardware appliances, or even as an outsourced managed service. Now, to effectively deploy a SIEM, you have to consider a lot of different things. First, you must be able to log all relevant events while filtering out anything that isn't considered relevant, or any irrelevant data. Second, you need to make sure you can establish and document the scope of the events: exactly what are you going to log, and what is inside and outside of your scope? Third, you need to develop use cases to define a threat. This will help you define exactly what you do and do not consider a threat, and then what you might take action on later. Speaking of that, that brings us to number four: you need to plan incident responses for different events. If you know that when you see this type of thing happen, you're going to take those types of actions, that's what we're talking about here. It's pre-planned responses to any threat you may face. Fifth, we want to establish a ticketing process so we can track all these different events that we flag.

This way, as we go through them and see something that's unusual, like my example earlier where somebody is logging in from Asia and at the local office at the same time, you can flag that and have it tracked throughout the process to make sure it doesn't get dropped. Sixth, we want to schedule regular threat hunting. By doing this, we want to make sure we're not missing any important events that may have escaped alerts. By going through and doing threat hunting, we're going to be able to catch bad guys doing bad things that may have escaped our alerts. Finally, our seventh item is to provide an evidence trail to auditors and analysts. A SIEM is a great place for this because it is a centralized repository of lots of different data, so it's a great place for auditors and analysts to look through as they're doing their analysis. Now, when I talk about a SIEM solution, there are lots of different solutions out there.

There are many commercial and open-source solutions available, and it's up to you to decide which one you want to use. As we go through the rest of this lesson, I'm going to bring up a couple of them and show you what they look like. We'll cover Splunk, ELK or Elastic Stack, ArcSight, QRadar, AlienVault OSSIM, and Graylog. Let's start with Splunk. Splunk is a market-leading big data information gathering and analysis tool, and it can import machine-generated data via a connector or a visibility add-on. Now, Splunk is really good at connecting lots of different data systems. In fact, it has different connectors built for most network operating systems and different application formats.

Essentially, all the data from all the different systems can be indexed as it's taken off those systems and then written to a centralized data store. This allows Splunk to go through historical or real-time data and search through it using its proprietary search language, called the Search Processing Language (SPL). Now, once you get those results, you can start visualizing them using different tools. So when you use Splunk, it looks something like this: notice here that I have what looks like a dashboard. I can see the important information, I can see a lot of data, and I can see the trends going up or going down. I can see events over time, and I can actually drill down by clicking into each one and looking at the data behind it as well.
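
If you want to drive those SPL searches from outside the web UI, Splunk also publishes a Python SDK. Here is a rough sketch of a one-shot search using it; the host, credentials, index, and query are all invented, and the SDK calls reflect my reading of its published examples rather than anything covered in this course, so treat it as a starting point only.

```python
# Sketch of a one-shot SPL search from Python using the splunk-sdk package (splunklib).
import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port (8089 by default). Host and credentials are made up.
service = client.connect(host="splunk.example.local", port=8089,
                         username="admin", password="changeme")

# Run the search synchronously and print each result row.
job = service.jobs.oneshot('search index=main sourcetype=syslog "failed password" '
                           'earliest=-24h | stats count by host')

for row in results.ResultsReader(job):
    print(row)
```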

Splunk is a really great tool, and it can be installed locally or as a cloud-based solution. When you buy Splunk, it comes with a lot of templates and preconfigured dashboards for security, intelligence searches, and incident response workflows. Splunk is a big player in the marketplace, and it is a great SIEM to consider. The next one we want to talk about is ELK, or Elastic Stack. Now, ELK and Elastic Stack are a collection of free and open-source SIEM tools that provide storage, search, and analysis functions. The ELK or Elastic Stack is actually made up of four different components. These are Elasticsearch, which covers the query and analytics; Logstash, which is your log collection and normalization; Kibana, which does your visualization; and Beats, which is your endpoint collection agent that is installed on the machines.

The way these all work together is that you're going to have the different Beats installed on different servers or hosts, and they can then send data either directly back to Elasticsearch or into Logstash first. Now, when it goes into Logstash first, Logstash is going to do the parsing and normalization for you and then send it into Elasticsearch. If you go directly to Elasticsearch, the data has to be in a format that it already understands. Now, Elasticsearch is that centralized data store, but you don't really go into Elasticsearch to look at the data. Instead, you use Kibana, and Kibana goes into Elasticsearch and then visualizes that data in a way that you can see and understand. Just like Splunk, the ELK Stack may be installed locally or as a cloud-based solution.
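
To give a sense of what "a format Elasticsearch already understands" means in practice, here is a minimal sketch of posting one already-normalized JSON event straight to a node's REST API, skipping Logstash. The index name, field names, and the assumption of a local, unsecured node on the default port are all illustrative.

```python
# Minimal sketch: POST one normalized JSON event into an Elasticsearch index over its REST API.
import json
import urllib.request

event = {
    "@timestamp": "2021-01-01T00:00:01Z",
    "host": {"name": "web01"},
    "event": {"action": "logon-failed", "outcome": "failure"},
    "user": {"name": "jsmith"},
}

req = urllib.request.Request(
    url="http://localhost:9200/security-events/_doc",   # hypothetical local node and index name
    data=json.dumps(event).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())   # Elasticsearch echoes back the new document's ID
```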

Our third SIEM tool that we're going to discuss is ArcSight. ArcSight is a SIEM log management and analytics tool that can be used for compliance reporting for legislation and regulations like HIPAA, SOX, and PCI DSS. When you look at ArcSight, it looks like another dashboard, and again, you can drill down into that information and display it in lots of different ways. The fourth one we're going to talk about is QRadar. QRadar is a log management, analytics, and compliance reporting platform created by IBM. It does a lot of the same stuff we've just talked about, and again, it comes with a nice dashboard. As you look at the dashboard, you see different things that you can be looking at and considering for your network.

Our fifth entry is AlienVault OSSIM, the Open Source Security Information Management system. Now, this is a SIEM solution that was originally developed by AlienVault, which is why it's called AlienVault, but it's now owned by AT&T, and they've been rebranding it recently as AT&T Cybersecurity. Just like the other ones, it does come with a dashboard where you can search and dig into the different information that's presented there. Now, one of the nice things about AlienVault and OSSIM is that OSSIM can integrate other open-source tools, such as the Snort IDS and the OpenVAS vulnerability scanner, and it can provide an integrated web administration tool for you to manage the entire security environment. So it does give you a nice all-in-one solution. Also, because you're using a lot of open-source tools here, it helps keep your costs low.

The final one we want to talk about is Graylog. Graylog is an open-source SIEM with an enterprise version that's focused on compliance and supporting IT operations and DevOps. And again, it has a nice dashboard where you can drill down and search for things. The big difference with Graylog is that it's really focused on DevOps and supporting IT operations, as opposed to the deeper log analysis and incident response that some of the other tools, like Splunk, are much better suited for. Now, let's talk about the exam for just a moment.

You do not need to know specific tools for the CySA+ exam like the ones I mentioned in this lesson. We cover them here simply to make sure you are introduced to the brand names and the different tools. If you hear any of these names, you should know they have the ability to act as SIEMs, but beyond that, you don't need to know how to use them or operate them for the exam. Now, that said, a lot of these open-source tools make a great addition to your own practice labs and your own home networks, because if you're building out your home network, this will give you hands-on experience using these tools. That will make you a much better analyst in the real world. Now, in the real world, which of these tools should you use?

Well, that depends. Which company are you trying to get a job at, or which company do you already work for? As I've worked at different companies and organizations over the years, we have used several of these different tools, including Splunk, the ELK Stack, and AlienVault, in some of the different organizations I've worked with, so I have experience with all of those. Is one better than the other? It really does depend on your use case, but when it comes down to it, it's what your boss likes and what your company is already using.

3. Security Data Collection (OBJ 3.1)

Security data collection. In this lesson, we're going to talk about all that data that you're collecting inside your SIEM. Now, a lot of this can become intelligence, but intelligence loses its value over time. So when you're dealing with this, you need to make sure that you're capturing and analyzing the information in real time, or as close to real time as possible. The sooner you can find out about a bad guy intruding into your network, the quicker you can get them out, right? And so intelligence needs to be current and relevant.

Now, as we talked about the intelligence process earlier, we talked about the fact that we have five different stages. We start out with requirements, then move into collection and processing, analysis, dissemination, and feedback. Here on the screen, you can see that I've highlighted collection and processing, analysis, and dissemination. The reason for that is that this is what we're focusing on right now when we're talking about a SIEM. This is where a SIEM operates: it helps you collect the information, process that information, and normalize it.

It helps you analyze that information, and then you can even run reports or send information out to others as part of dissemination. And so a SIEM really does fit into these three phases of the intelligence lifecycle. One of the things your SIEM can do for you is be configured to automate much of the security intelligence lifecycle, particularly when it comes to data gathering and collection. I don't want to have to go out to all these different systems across my network, grab their logs, and start analyzing them by hand. Using a SIEM, I can make all those systems feed me that data, and it can go into that central repository that we can later analyze. Now, one of the big things you have to consider when you're configuring your SIEM is: what do you want to collect? Some people have a tendency to just try to collect everything. They dump all the data into the SIEM. But the problem with that is that it can end up overloading your system. All this data has to be stored, processed, and normalized, and if I'm sending it in from millions of endpoints, that can really quickly overwhelm my systems.

So instead, you should spend some time upfront doing your planning based on the requirements and determining exactly what you need to collect. Remember, while your SIEM could collect all the logs across all of your systems, this isn't a good idea. Instead, you need to configure your SIEM to focus on the events related to the things that you need to know. Not everything is important, so you need to identify what is and collect on that. Now, one of the biggest features of a SIEM is the ability to process data, look for different trends, and alert on them.

Now, just like all alerting systems, a SIEM does suffer from the problem of false positives and false negatives. When we talk about the problem with false negatives, this is when security administrators are exposed to threats without being aware of them, because the SIEM falsely categorizes an event as negative instead of alerting that there was something bad there. On the other hand, we also have issues when we have false positives. If our SIEM starts generating a lot of positives that aren't real, we're going to overwhelm our analysis and response resources, because someone has to look at each alert, analyze it, and determine whether there really was an event that happened.

So we want to make sure we're tuning our systems to avoid a lot of false negatives or false positives. To help us do that, we develop what's called a use case. By developing use cases, we can mitigate the risk of these false indicators. Now, when I talk about a use case, this is a specific condition that should be reported, such as a suspicious log-on or a process executing from a temporary directory. Essentially, we want to consider what the bad thing is that we want to collect on, and then build this use case around it. Based on that use case, we can then configure our SIEM to collect the relevant data. Now, what we want to do here is essentially develop a template for each of these use cases.

And as we do that, each template is going to contain a couple of different things. It's going to contain the data sources and the indicators that we want to collect on. It's going to contain the query strings that we're going to use to correlate those different indicators across different systems. And we want to make sure we have the actions that are going to occur when the event is triggered: essentially, what are we going to do to respond when we see this bad thing happen? By having those three things as part of our use case template, we know that this set of indicators is what defines this bad thing, and eventually we would write a rule or a query to be able to identify all of those things across our systems. Now, in addition to providing those three things in the use case, we also need to make sure that each use case captures the five W's.

When dealing with an event, this would include questions such as when did this event begin and, if it has already ended, when did it end? We also want to figure out who was involved in this event: which user or which system? Then we want to figure out what happened and what the specific details of this event are. Essentially, did somebody try to run a program that had malware in it? Did somebody try to attack our network from the outside? What happened? We need those specific details. Then we figure out where. Where did the event happen? Was it on a host? A server or a file system? The network? Where is this issue? And then we want to also figure out where the event originated from. Did it come from the inside because it was an insider threat? Did it come from the outside because there was an external hacker? This is an important piece of information for us to know too. So by knowing those five W's, we're going to be able to better understand what to expect and then how to respond to it.
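
Purely as an illustration, here is a minimal sketch of how those pieces might be written down as data: the sources and indicators, the correlation query, the pre-planned response actions, and a triggered-event record that answers the five W's. Every name, path, and query string here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    data_sources: list   # where the indicators come from
    indicators: list     # what to look for
    query: str           # correlation query string pushed to the SIEM
    actions: list        # pre-planned responses when the rule fires

temp_dir_execution = UseCase(
    name="Process executing from a temporary directory",
    data_sources=["EDR process telemetry", "Windows Security event log"],
    indicators=["process image path under a user's Temp directory"],
    query=r'process_path="*\AppData\Local\Temp\*" | stats count by host, user',
    actions=["open a ticket", "isolate the host pending analyst review"],
)

# When the rule fires, the resulting event record should answer the five W's.
triggered_event = {
    "when":   {"began": "2021-01-01T00:00:01Z", "ended": None},
    "who":    {"user": "jsmith", "host": "wkstn-042"},
    "what":   "unsigned binary launched from a Temp directory",
    "where":  "host file system",
    "origin": "internal",   # inside (insider threat) vs. outside (external attacker)
}

print(temp_dir_execution.name, triggered_event["who"])
```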

4. Data Normalization (OBJ 3.1)

Data normalization. Normalization is really important, especially when we're collecting everything into a central repository. When we collect everything into a central repository, like a SIEM, we have to realize that our security data comes from numerous sources across our organization, and all of these use different formats and different ways of storing data. So what we have to do is normalize that data. When we talk about normalization, this is a process where the data is going to be reformatted or restructured to facilitate the scanning and analysis process later on during the intelligence cycle. Now, when you start thinking about your SIEM data, where does this data come from? Well, it can come from three different places across our network. For instance, here on the screen, you can see all sorts of different places that the data is coming from. It's coming from a HIDS or an EDR.

We have it coming from agents, we have it coming over the Syslog protocol, we have it coming from SPAN ports and tap ports, and we aggregate all that information into a SIEM. Now we're going to break that down as we go through this lesson. First, we have agent-based collection. Agent-based collection comes from systems like host-based intrusion detection systems (HIDS) and EDRs, as well as from agents that are installed on different hosts and servers, and all that data can then be sent back to the SIEM. When dealing with agent-based collection, we have an agent service that's installed on each host or server to log, filter, aggregate, and normalize the data on that host or server before we send it up to the SIEM server for analysis and storage.
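
Here is a bare-bones sketch of that "filter and normalize on the host, then ship it" idea. The log path, keyword filter, SIEM hostname, and port are all invented for illustration; real agents such as Beats or EDR sensors are far more capable.

```python
import json
import socket
from datetime import datetime, timezone

SIEM_HOST, SIEM_PORT = "siem.example.local", 6514   # hypothetical collector endpoint
LOG_PATH = "/var/log/auth.log"
KEYWORDS = ("Failed password", "sudo:", "session opened")   # only events we care about

def ship(record: dict) -> None:
    """Send one JSON record to the SIEM over TCP."""
    with socket.create_connection((SIEM_HOST, SIEM_PORT), timeout=5) as s:
        s.sendall((json.dumps(record) + "\n").encode("utf-8"))

with open(LOG_PATH, "r", errors="replace") as f:
    for line in f:
        if not any(k in line for k in KEYWORDS):
            continue                      # filter: drop irrelevant events on the host itself
        ship({
            "collected_at": datetime.now(timezone.utc).isoformat(),
            "host": socket.gethostname(),
            "source": LOG_PATH,
            "message": line.rstrip(),
        })
```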

Now, the second way we can collect things is with a listener or collector. With a listener/collector, hosts are going to send information over the Syslog protocol, and as you can see here, we have Syslog being used inside our extranet and our VPN to send that data back to the SIEM server. Now, as we look at this, a listener/collector setup is one where hosts are configured to push updates to the SIEM server using a protocol like Syslog or SNMP. It can use either of these protocols or another protocol; it is still considered a listener/collector in either case. In the diagram I just showed you, we were using Syslog. The next one we have is sensors, and you can see that I have those taps and SPAN ports across a switch that will allow me to collect network data. Now, when I'm dealing with a sensor, this is going to allow my SIEM to collect packet capture and traffic flow data from the different sniffers and sensors that are positioned across your network.
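
To show the "hosts push, the collector listens" idea, here is a minimal UDP listener on the standard syslog port. It only prints what it receives; a real collector would parse the syslog priority field and forward the record to the SIEM, so treat this as a simplified sketch.

```python
import socket

SYSLOG_PORT = 514   # privileged port; run with elevated rights or pick a high port for testing

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", SYSLOG_PORT))   # listen for messages pushed by other hosts

while True:
    data, (src_ip, _) = sock.recvfrom(4096)
    print(f"{src_ip}: {data.decode(errors='replace').rstrip()}")
```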

And by doing this, we get a good mix across all of our different devices, because now we have all of these things working together, pulling data from hosts, from network equipment, from packet captures, and from servers, and bringing all that data to the centralized server where it's aggregated. That gives me access to all of the aggregated data, which I can then examine. This data is aggregated across the network from multiple sources in multiple formats. However, because it arrives in so many different formats, this does pose a problem. We now have proprietary binary formats, tab-separated formats, comma-separated values, database log storage formats, syslog format, SNMP format, XML format, JSON format, and plain text formats. All sorts of data from all different sensors and systems are coming into this one SIEM to be aggregated, so we need to parse it and normalize it. Parsing and normalization are used to interpret the data across all these different formats, which is what we call parsing, and then standardize it into a single format for later analysis and processing. This is one of the major functions of your SIEM, in addition to just collecting all that data: making sure it's all in a format that we can look at. Now, the way it does this is by using connectors or plugins.
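
Here is a toy example of that parse-and-normalize step, assuming three made-up input formats (JSON, CSV, and a syslog-style line) and a three-field common schema; real connectors handle far more fields and edge cases.

```python
import csv
import io
import json
import re

def normalize_json(raw: str) -> dict:
    d = json.loads(raw)
    return {"time": d["ts"], "host": d["host"], "message": d["msg"]}

def normalize_csv(raw: str) -> dict:
    time, host, message = next(csv.reader(io.StringIO(raw)))
    return {"time": time, "host": host, "message": message}

def normalize_syslog(raw: str) -> dict:
    # e.g. "Jan  1 00:00:03 fw01 kernel: blocked inbound tcp/3389"
    m = re.match(r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) (\S+) (.*)", raw)
    time, host, message = m.groups()
    return {"time": time, "host": host, "message": message}

samples = [
    ('{"ts": "2021-01-01T00:00:01Z", "host": "web01", "msg": "logon failure"}', normalize_json),
    ("2021-01-01T00:00:02Z,db01,disk usage at 91%", normalize_csv),
    ("Jan  1 00:00:03 fw01 kernel: blocked inbound tcp/3389", normalize_syslog),
]

for raw, parser in samples:
    print(parser(raw))   # every record now shares the same three fields
```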

Now, a connector or plugin is a piece of software that's designed to provide parsing and normalization functions for a particular SIEM. As I said previously when talking about Splunk, there are tons of different connectors that are built into the Splunk framework, so you can pretty much get data from everything. And depending on the SIEM you're dealing with, that can be one of your deciding factors for which one you're going to use: which one has the widest variety of plugins and connectors that will work with all the other systems you have on your network. Now, the last problem we have to deal with is what we call synchronization. So now that we've figured out the problem of parsing and normalization, there is one more problem that we have to solve, and that's time. There is time data coming from all these different systems, and depending on your organization, you may not be located in just one time zone. For instance, in my company, we operate in four different time zones.

We operate on the east coast of the United States. We operate in Puerto Rico, which is on Atlantic Standard Time. We operate in the Philippines, and then we also operate in Italy. So right now we have four different time zones that we are synchronizing across our entire staff to make sure we're all working together. Now, that's not such a big deal when we're working together, because we can send files asynchronously: I send an email, and somebody will work on it later today. But it is a big issue if we're trying to synchronize our logs, because all of these different systems have data coming in from different time zones, and that can become a big challenge for us. So especially if we're trying to correlate an event because we had an intruder come in or some kind of data breach and we need to reconstruct a timeline, this can become really difficult without synchronizing all of our dates and times.

So one of the things I like to do with my SIEMs is to use a consistent time format across all of our systems, and every system is going to work together inside that time format. That way, we all have the same time, regardless of where we are in the world. Now, that doesn't mean we all agree to use Eastern Standard Time. No. Instead, we like to use Coordinated Universal Time, which is UTC. Now, let's take a look at a couple of timestamps real quick and see if we can understand the problem a little better. First, what time is 2021-01-01T00:00:01Z? Well, this is a timestamp in UTC, and it very clearly tells me what time it is: in UTC, it is one second after midnight on January 1, 2021. Now, what time is 2020-12-31T19:00:01-05:00? Well, this is based on the Eastern time zone of the United States, which includes places like Miami, Florida; Washington, DC; and New York. Essentially, this is the exact same instant as the one I just showed you. The difference is minus five because it's five time zones to the left of UTC.
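
To make that offset arithmetic concrete, here is a quick standard-library sketch comparing those two timestamps; Python treats them as the same instant.

```python
from datetime import datetime, timezone

# The "+00:00" spelling is used instead of "Z" because older Python versions
# of fromisoformat() do not accept the "Z" suffix.
utc_time     = datetime.fromisoformat("2021-01-01T00:00:01+00:00")
eastern_time = datetime.fromisoformat("2020-12-31T19:00:01-05:00")

print(utc_time == eastern_time)               # True: same instant, different offsets
print(eastern_time.astimezone(timezone.utc))  # 2021-01-01 00:00:01+00:00
```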

So in this case, we're actually talking about one second past 7:00 p.m. on New Year's Eve. This is the exact same instant as the UTC one, which was one second after midnight, because there's a five-hour difference between the two. So when we do this, we like to coordinate everything by using UTC, Coordinated Universal Time. This uses a time standard as opposed to a time zone, so everything can be referenced back to UTC very easily. As a result, the majority of the time, you'll see your servers' times listed in UTC. So now that we've solved the problem of big data coming to us in all sorts of formats by parsing and normalizing it, and it's all stored with the right time using UTC, we have one more problem we have to solve, and that's secure logging. We have all this log data, and logs contain a lot of good information, but it is information that needs to be secured as well.

Now, in a large organization, you can generate a lot of data; in fact, gigabytes or terabytes of log data every single hour. All that data has to be stored somewhere, and that storage needs to be secure. So when you're storing your log data, you have to make sure you're securing it using the principles of confidentiality, integrity, and availability. For confidentiality, you want to make sure you're encrypting that data when you're storing it on the SIEM server. For integrity, you should use hashes to ensure that the data isn't being modified or changed. For availability, you want to make sure you're doing backups and you have redundancy. All the things you think about when we deal with the CIA triad, you need to consider for your log data as well.
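
For the integrity piece specifically, a common pattern is to hash each archived log file, keep the digest somewhere separate, and re-check it later during an audit. Here is a small standard-library sketch; the file paths are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Hash a file in chunks so even very large log archives fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

log_file = "/var/siem/archive/2021-01-01.log"          # hypothetical archived log
Path(log_file + ".sha256").write_text(sha256_of(log_file))

# Later, during an audit, recompute and compare:
if sha256_of(log_file) != Path(log_file + ".sha256").read_text():
    print("WARNING: log archive has been modified since it was hashed")
```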

5. Event Log (OBJ 3.1)

Event logs. In this lesson, we are going to talk about event logs. Event logs are logs that are created by your operating system on every client or server to record how the users and the software interact with that system. Now, the format of these event logs is going to vary based on the operating system you're dealing with; each one uses its own format. So whether you're using Windows, Mac, or Linux, it's going to be different. In this lesson, we are going to focus on Windows logs, and in the next lesson, we'll talk more about Mac and Linux. Now, Windows has five different categories of events that you can find inside your Windows event logs. These are application, security, system, setup, and forwarded events. The first is application.

Application focuses on events that are generated by applications and services. For instance, if you tried to start up a printer service and it wouldn't run, that would get logged as an application event inside the Windows event log. The second one we have is security. Security is going to cover things like audit events, such as failing to log on or access being denied to a file; anything that has to do with security is going to go into your security event logs. Then we have system. System is our third one, and this is any event that is generated by the operating system and its services. So if Windows has an error or one of the Windows services has an error, it's going to get logged under system. This would be things like storage volume health check failures or something like that. Then we have our fourth one, which is setup. This is going to be an event that is generated during the installation of Windows. So if you have any kind of error or event that needs to be logged during setup, that's going to happen under the setup category. Our fifth and final category is forwarded events.

These are events that are sent to the local host that you're looking at from other computers, and we'll talk a little bit more about that towards the end of this lesson. Now, in addition to these five categories, we also have levels of severity for each of these events, and this is all going to happen again inside your Windows event logs. These are informational; warning, for events that aren't necessarily problems but may become so in the future, for instance, that you're getting low on disk space; error, for events that have significant problems and could result in reduced functionality; and audit success and failure, which are events that indicate a user or service either fulfilled or did not fulfill the system's audit policies. Those last ones are kind of unique to your security log, and you're not going to see them in some of the other logs. Now, in addition to all that, each time you look at an entry inside your event log, you're going to get some details. You're going to get the name of the event, the details of any errors that occurred, the event ID, the source of the event, and a description of what the warning or error means. This should all be review for you, because we did cover this back in A+ and Security+ as part of those exam objectives.
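
If you want to pull those details programmatically on a Windows host, one simple approach is to shell out to the built-in wevtutil tool. This is a hedged sketch: the flags used here (qe for query events, /c for count, /rd for reverse direction, /f for output format) are from my reading of the tool, so verify them against wevtutil /? on your own system.

```python
import subprocess

# Query the five most recent entries from the Security log, newest first, as text.
result = subprocess.run(
    ["wevtutil", "qe", "Security", "/c:5", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```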

Now, the final thing I want to talk about is forwarding, and we mentioned forwarded events as the fifth category. Forwarded events are a relatively new concept that was introduced in modern Windows systems, Windows 8 and later. This provides event subscriptions that forward events to a single host and allows for a more holistic view of network events, using an XML-formatted message stored with the .evtx extension. Now, the idea here is that we want to be able to take all the events from all of our systems and send them to a centralized server, just like we talked about with SIEMs. With a SIEM, you might have installed a separate agent on each system, but as of Windows 8 and Windows 10, that capability has been built into the operating system by Microsoft. So you don't need a separate agent; there's one built into the operating system, and that's what this event forwarding is all about.
