CompTIA CASP+ CAS-004 Topic: Securing Networks (Domain 1)
December 14, 2022

14. DNSSEC (OBJ 1.1)

In this lesson, we’re going to discuss Domain Name System Security Extensions, also known as DNSSEC. Now, before we do that, let’s do a quick review of how DNS works. Every time a user tries to go to a website or clicks a link, they’re telling their computer they want to connect to some URL like diontraining.com. Now, if you go to our home page, you might see a link to something like our exam vouchers page. So you go to diontraining.com/vouchers.

Now, your computer has no idea where diontraining.com is, because computer networks work by routing your requests from one IP address to another using either IPv4 or IPv6. As you know, computers like numbers better than names, so diontraining.com isn’t as easy for a computer to work with as something like 66.23.40.71, for example. Humans, on the other hand, remember names better than numbers. So when I tell you to visit my website at www.diontraining.com, that is much easier for you to remember than a long series of numbers like an IPv4 or IPv6 address. So our computers use DNS, the Domain Name System, to convert domain names to IP addresses every time we click on a link, all in the background, without us having to do anything ourselves.
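
To see that translation happen yourself, here’s a quick sketch in Python using nothing but the standard library; it asks the operating system to resolve a name the same way a browser would:

```python
import socket

# Ask the operating system's stub resolver to translate a domain
# name into the IP addresses a browser would actually connect to.
hostname = "diontraining.com"

# getaddrinfo returns one tuple per resolved address; the sockaddr
# field holds the IP address itself.
for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, 443):
    print(family.name, sockaddr[0])
```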

Now, unfortunately, DNS is not secure. It was designed back in the 1980s, when the Internet was a much smaller place and was treated almost like a large local area network. So when I try to resolve the domain name diontraining.com, my web browser first asks my operating system if it knows what IP address it should use when talking to diontraining.com, and it does this using something known as a “stub resolver.” If the operating system doesn’t know the answer because the record isn’t already in its DNS cache, it passes the request to a recursive resolver, which asks its configured DNS server for the IP address of the given domain name. If that server doesn’t know the answer either, it asks the next DNS server up the chain, and the query continues recursively all the way up until a server returns the right IP address as a response.
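
If you want to watch that upstream walk happen, here’s a rough sketch using the third-party dnspython library. It starts at a real root server (198.41.0.4 is a.root-servers.net) and follows referrals by hand; a production resolver would also handle CNAMEs, missing glue records, and TCP fallback:

```python
import dns.message
import dns.query
import dns.rdatatype

def resolve(domain: str, server: str = "198.41.0.4") -> None:
    """Walk the DNS hierarchy from a root server, following
    referrals until some server answers with an A record."""
    while True:
        query = dns.message.make_query(domain, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=5)

        if response.answer:                  # got the final answer
            for rrset in response.answer:
                print(rrset)
            return

        # No answer yet, only a referral: use the first glue A record
        # in the additional section as the next server to ask.
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.A:
                server = rrset[0].address
                break
        else:
            return  # no glue; a real resolver would resolve the NS name

resolve("diontraining.com")
```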

Now, the problem is that when these responses come back from the resolvers, there’s no way to verify the authenticity of a response as having come from the authoritative name server for that particular domain, which is ultimately where the other resolvers got their information. Basically, if an attacker can send a forged resolution to one of the resolvers and get it accepted as truthful and placed in its cache, that forged result is what gets returned to me whenever I ask for a connection to that domain, not the answer from the authoritative source that I really want. This is known as “cache poisoning.”
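
The flaw is easy to model: a cache that accepts any answer without checking where it came from will happily serve a forged one. Here’s a toy sketch (the attacker’s address is just an example from the documentation range):

```python
# Toy model of an unauthenticated DNS cache: whichever response
# arrives first gets cached, with no proof it is authoritative.
dns_cache = {}

def accept_response(domain: str, ip: str) -> None:
    dns_cache[domain] = ip               # no signature check; trust anyone

def lookup(domain: str) -> str:
    return dns_cache.get(domain, "not cached")

# The attacker races the legitimate server and answers first.
accept_response("diontraining.com", "203.0.113.99")  # attacker-controlled IP
print(lookup("diontraining.com"))  # every later visitor is sent to 203.0.113.99
```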

So in the 1990s, to solve this issue, DNSSEC, or DNS Security Extensions, was created. DNSSEC strengthens authentication in DNS by using digital signatures based on public key cryptography to ensure that DNS data deemed authoritative is digitally signed by the owner of that data, which prevents spoofing of those DNS records. For every DNS zone, there is a public and private key pair, and the zone owner uses the zone’s private key to digitally sign the DNS data in that zone. Because only the zone owner has access to that private key, any DNS data signed with it is guaranteed to be authoritative and approved by the zone’s owner.

The other key, the public key, can be used by anyone to verify that digital signature and validate the zone data as correct. So any time a resolver gets some DNS data using DNSSEC, it’s going to validate the digital signature before it accepts the data and enters it into its own cache, and this prevents cache poisoning.
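
Conceptually, the sign-and-verify round trip looks like this sketch, which uses the third-party cryptography package on raw bytes; real DNSSEC packages the same idea into DNSKEY and RRSIG record formats rather than raw signatures:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Zone owner: generate the zone's key pair and sign a record.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = b"diontraining.com. 300 IN A 203.0.113.10"   # example zone data
signature = private_key.sign(record)                   # only the owner can do this

# Resolver: verify the signature with the zone's public key
# before caching the record. A forged record fails validation.
try:
    public_key.verify(signature, record)
    print("Record is authentic; safe to cache.")
except InvalidSignature:
    print("Validation failed; discard the record.")
```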

This scheme provides data origin authentication, because only the zone owner could have digitally signed the data, and it provides data integrity protection, because the data cannot be changed after it has been digitally signed. Now, for DNSSEC to work, though, both the zone owner and the resolvers need to configure their DNS servers to support it. So if you happen to run your own DNS server, you need to enable DNSSEC to prevent DNS cache poisoning and to protect the validity and authoritativeness of the zones for which you are the authoritative server.
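
If you’d like to see this validation step for real, here’s a sketch using dnspython, assuming the zone you query is actually DNSSEC-signed (8.8.8.8 is one DNSSEC-aware public resolver). It fetches the zone’s DNSKEY RRset together with its RRSIG and checks the signature:

```python
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

# Replace with a zone you know is DNSSEC-signed.
zone = dns.name.from_text("diontraining.com.")

# want_dnssec=True asks the server to include the RRSIG records.
request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.udp(request, "8.8.8.8", timeout=5)

# For a signed zone, the answer holds the DNSKEY RRset and its RRSIG.
if len(response.answer) != 2:
    raise SystemExit("zone does not appear to be signed")
dnskey_rrset, rrsig_rrset = response.answer

try:
    # Verify the signature using the zone's own public keys.
    dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {zone: dnskey_rrset})
    print("Validated: this data was signed by the zone owner.")
except dns.dnssec.ValidationFailure:
    print("Validation failed: do not cache this data.")
```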

15. Load Balancer (OBJ 1.1)

In this lesson, we’re going to talk about the importance of load balancers. Now, load balancers, which are also known as content switches, are used to distribute the incoming requests across a number of servers inside a server farm or a cloud infrastructure. If you run a large website or service, you can’t do it all on a single server. For example, I currently have several hundred thousand students taking my courses and watching my videos.

There is no way a single server can handle that much load, so we have to distribute it across a lot of different servers all over the world. But when a student wants to visit my website, they don’t have to choose a specific server. Instead, they just go to diontraining.com, and our load balancer, or content switch, redirects that request to the next available server to process it. In fact, we have tons of servers sitting all over the world to handle all of those requests. The exact same thing happens with Netflix, Hulu, Facebook, or Amazon, but on a much larger scale. All of these large websites have to use load balancers; otherwise, a single server would simply crash under the load, and they would suffer a self-imposed denial of service because of their website’s popularity.

Now, a load balancer essentially acts as a traffic cop, sitting in front of your servers and routing each client’s request to whichever server is most available to fulfill it at any given time. This maximizes the speed at which you can respond to users and makes more efficient use of your existing server capacity. The reason a load balancer is so important is that it is one of the key defenses against a denial-of-service attack or a distributed denial-of-service attack. Now, what is a denial-of-service attack? Well, a denial-of-service attack involves continually flooding a victim system with requests for services, causing the system to run out of memory and eventually crash.
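
Before we go deeper on those attacks, here’s a toy sketch of that traffic-cop logic, routing each request to the backend with the fewest active connections (the server names are made up):

```python
class LeastConnectionsBalancer:
    """Toy load balancer: send each incoming request to whichever
    backend server currently has the fewest active connections."""

    def __init__(self, servers):
        self.active = {server: 0 for server in servers}

    def route(self):
        server = min(self.active, key=self.active.get)  # least-loaded backend
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1      # call when the request completes

balancer = LeastConnectionsBalancer(["web-01", "web-02", "web-03"])
for _ in range(5):
    print("request ->", balancer.route())
```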

Now, most modern systems can’t be taken down by a single machine, so instead attackers use a DDoS, or distributed denial-of-service attack. In a distributed denial-of-service attack, instead of a single attacker targeting a server, there are hundreds or even thousands of machines simultaneously launching the attack to force the server offline. In March of 2018, for example, GitHub was hit by the largest DDoS attack recorded up to that time, where tens of thousands of unique endpoints conducted a coordinated attack that hit its servers with a spike in traffic measuring 1.35 terabits per second. This forced the site offline for all of five minutes.

But the real question is: how can we survive one of these attacks and prevent it from taking down our organization’s servers? Well, we can look at Amazon for some of the best practices. In February of 2020, Amazon was hit by the largest DDoS attack to date, even larger than GitHub’s: a coordinated attack that hit its servers with a spike in traffic measuring 2.3 terabits per second. But due to their good security architecture and their ability to scale up resources to absorb the attack, they suffered no downtime, even though the attack was roughly 70% larger than the one that took down GitHub. Now, the first technique they use is known as “black holing” or “sinkholing.” This technique identifies any attacking IP addresses and routes all of their traffic to a nonexistent server through a null interface, effectively stopping the attack. Unfortunately, the attackers can always move to a new IP address and restart the attack.
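
In its simplest form, black holing is just a drop rule keyed on the attacker’s source address, as in this toy sketch (all addresses come from the documentation ranges):

```python
# Toy model of black holing: traffic from identified attacker IPs
# is silently dropped instead of ever reaching the real server.
blackholed = {"203.0.113.66", "203.0.113.67"}   # attacker IPs (examples)

def handle_packet(source_ip: str, payload: str) -> None:
    if source_ip in blackholed:
        return                        # "null interface": drop silently
    print(f"forwarding {payload!r} from {source_ip} to the server")

handle_packet("203.0.113.66", "GET /")   # dropped
handle_packet("198.51.100.7", "GET /")   # forwarded
```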

Black holing won’t prevent a DDoS forever, but it will buy you some time as you work on other mitigations. IPSs, or intrusion prevention systems, can also be used to identify and respond to denial-of-service attacks. This may work well for small-scale attacks against your network, but an IPS won’t have enough processing power to handle a truly large-scale attack like the one that targeted Amazon. One of the most effective methods today is to use elastic cloud infrastructure, which allows you to scale up on demand to withstand the attack. The issue with this strategy, though, is that most service providers charge you based on the capacity and resources you’re using.

So as you scale up, you’ll receive a much larger bill from your cloud service provider, and you won’t see any return on that investment, because all of that traffic wasn’t generating any revenue for your organization; it was all just part of the attack. That said, at least you’re remaining online to serve your paying customers, so it becomes a matter of whether you can afford to outlast the DDoS attack. In addition to these strategies, there are also specialized cloud providers like Cloudflare and Akamai that provide web application filtering and content distribution on behalf of organizations. These providers focus on maintaining a highly robust, highly available network at all times, which ensures you can survive even high-bandwidth DDoS attacks by adding layered defenses throughout the OSI model’s layers. Now, as I mentioned already, we can use black holing as a strategy against an attacker’s IP, and it does work to some extent, but it only buys us time to put other mitigations in place. If we had to individually block each attacker’s IP, though, it simply would not be practical, so instead we use a remote-triggered black hole to do this for us.

Now, a remote-triggered black hole is effective at filtering DDoS and worm attacks, quarantining traffic destined for specific targets, and conducting basic blacklist filtering. For it to be truly effective, we should apply this filtering at the edge of our organization’s network, or even with an upstream provider like Cloudflare or Akamai, which, as previously mentioned, can act as that upstream filter for us. Remotely triggered black holes are also more effective and efficient than using an ACL: they place less processing load on our network devices, and they can remove large portions of offending traffic quickly by routing it to the null interface, known as null0. To perform this black hole function, we’re going to use iBGP as our protocol.

iBGP, the internal Border Gateway Protocol, receives route advertisements from the external BGP router and distributes them throughout our internal network. When we’re black holing, iBGP doesn’t modify the next-hop address within the autonomous system; instead, it advertises a prefix whose next hop points to a static null0 route. When a device identifies the source of the DDoS attack, it simply redirects that traffic to the null0 route, effectively dropping it into the ether. The biggest advantage of this method is that the internal network can remain functional even while the external network connection is bogged down by the DDoS attack. This mitigates the damage and allows network administrators to reroute internal traffic out through a different wide-area connection, if one is available, restoring services for your servers and your end users.
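
To make those mechanics concrete, here’s a toy model of the remote-triggered black hole idea: a trigger advertisement installs a route whose next hop is the null interface, and any traffic matching that prefix is dropped instead of forwarded (the addresses are illustrative, and longest-prefix matching is omitted for brevity):

```python
import ipaddress

NULL0 = "null0"                      # stand-in for the router's null interface
routing_table = {}                   # prefix -> next hop

def rtbh_trigger(prefix: str) -> None:
    """Install the prefix with a next hop of null0, as the
    trigger router would advertise via iBGP."""
    routing_table[ipaddress.ip_network(prefix)] = NULL0

def forward(destination: str) -> str:
    dest = ipaddress.ip_address(destination)
    for prefix, next_hop in routing_table.items():
        if dest in prefix:
            return "dropped (null0)" if next_hop == NULL0 else f"via {next_hop}"
    return "via default route"

rtbh_trigger("203.0.113.0/24")           # black-hole the offending prefix
print(forward("203.0.113.45"))           # dropped (null0)
print(forward("198.51.100.20"))          # via default route
```

In a real network, the trigger advertisement would be a BGP route carrying a community that your edge routers map to null0, but the dropping behavior is the same idea.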
