51. Understanding Geolocation Routing
Hey everyone, and welcome back. In today’s video, we will be discussing geolocation routing. Now, as the name suggests, geolocation routing allows us to choose resources based on the geographic location of the users. This can be understood with a simple example. Let’s say that you want all the DNS queries coming from the Asia location to be routed to the elastic load balancer in the Singapore region. Similarly, if you have a query that is from the US, then you might want to direct those queries to the North Virginia region. So all of those configurations can be achieved with the help of geolocation routing.
So let me quickly show you a demo of how exactly this might work before we discuss some of the important points. This is my Route 53 console. Now, within my zone, there are two records. Both of these records have the same name, “mydemo.kplabs.internal”. However, they have different IP addresses. One has the IP address 10.77.5.10, while the other has the address 10.77.5.20. Now, if you look into the routing policy over here, the routing policy is of type “geolocation,” and within here, we have specified the location as the United States and given it a set ID of 2. Similarly, the other record set has the location of India and a set ID of 1. Now, from the CLI, I am logged into an instance in the North Virginia region. If we quickly do an nslookup on mydemo.kplabs.internal, you will see that the response is 10.77.5.20. The reason is that the record set with 10.77.5.20 has its location set as the United States. You do have one more record set with 10.77.5.10, but it will only be served if the location is India.
So this type of geolocation-based routing is quite useful. I’m sure you also might have observed that whenever you visit certain websites, you might get a message that the contents of the website are not visible in your country. That can be easily achieved with the help of geolocation. You can also achieve it at the web server level, like at the NGINX level, but through Route 53 it is much simpler. Now, creating geolocation-based routing is very simple. You make a record set and give it a name; let’s say I call it “geolocation.” You can assign a value to it; I’ll say 10.77.5.30. Now, in the routing policy, you can select geolocation, and within the location, you can specify a continent. So you’ve got Africa, Antarctica, Asia, Europe, North America, Oceania, and South America. You can specify by continent; you can even specify by country. So this is the granularity that the geolocation-based routing policy supports.
Now, there are certain important considerations that you should remember before you go ahead and implement geolocation routing within your production environment. The first is that geolocation routing works by mapping IP addresses to locations in a database. Now, such mappings are not always accurate. The reason is that there might be certain Internet service providers that do not have any geolocation data associated with them, or it can happen that some ISPs move their IP blocks to a different country without notification. So in such cases, you might run into an issue. For such cases, Route 53 allows us to have a default record associated with the geolocation-based routing policy. So let me quickly show you that. If you go up within the location dropdown, there is an entry of “default.” So yes, it is possible to receive a DNS query from an ISP that does not match any of the configured locations, and in such cases, you want to handle that in an appropriate manner. You can have the location set to “default” and then specify one of the IP addresses depending upon the use case that your organisation might have. This does happen. I would not say it is very common, but you need to make sure that if you are implementing geolocation, you also have a record for the default location.
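To make the demo concrete, here is a sketch of how those three record sets (the US record, the India record, and the default) could be expressed as a Route 53 change batch in Python. The hosted-zone ID is a placeholder, and the names and IPs are simply the ones from the demo; with boto3 you would pass this batch to `change_resource_record_sets`.

```python
# Sketch: the demo's geolocation records as a Route 53 change batch.
# The hosted-zone ID is a placeholder; names/IPs follow the demo above.

def geo_record(name, ip, set_id, location):
    """Build one geolocation A record. `location` is e.g. {"CountryCode": "US"}."""
    return {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "A",
            "SetIdentifier": set_id,
            "GeoLocation": location,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

change_batch = {
    "Changes": [
        geo_record("mydemo.kplabs.internal", "10.77.5.20", "2", {"CountryCode": "US"}),
        geo_record("mydemo.kplabs.internal", "10.77.5.10", "1", {"CountryCode": "IN"}),
        # The catch-all record for queries whose location cannot be resolved:
        geo_record("mydemo.kplabs.internal", "10.77.5.30", "default", {"CountryCode": "*"}),
    ]
}

# With boto3, this batch would be passed to:
#   route53.change_resource_record_sets(HostedZoneId="Z...", ChangeBatch=change_batch)
```

Note how the default record uses `CountryCode: "*"`, which is how the Route 53 API represents the “default” location selected in the console.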
52. Understanding Enhanced Networking
Hey everyone, and welcome back to the KPLabs video series. In today’s lecture, we’re going to speak about enhanced networking. Now, this is a very important feature, specifically for a lot of organisations whose main bottleneck is the network. In my current organization, we had a very fast server with a good CPU, good RAM, and a good disk (an SSD), but the overall performance of the application was a little slow.
And then we realised our bottleneck was the network, because the application used to transfer a lot of packets in and out for communication. Since the network was slow, overall application processing was affected. Now, you will find this kind of scenario in many organizations, and this is exactly why AWS introduced the feature of enhanced networking. So let’s go ahead and understand enhanced networking from the absolute basics of the NIC. Now, NIC stands for “Network Interface Card,” and it is a hardware component that allows the computer to connect to a network. This is a very important point because in the old days, as I’m sure many of you remember, whenever we used to buy a computer, we had to additionally buy this network interface card, which we used to manually plug into the motherboard. Now, on the back of this card, as you can see, is an Ethernet port.
So this is where you connect the RJ-45 connector of the LAN cable; one end of that cable connects to the network interface card. Now, because of necessity and low cost, for the past many years the NIC generally comes pre-built into most motherboards. There is one concern about the network interface card: if this card stops working, then your entire networking functionality gets affected. So specifically, when it comes to servers, they don’t just rely on one network interface card; they rely on two network interface cards for high availability. Even if one NIC fails, the second NIC will still support networking functionality, and configuring these multiple NICs in high availability is quite interesting. Hopefully, we’ll be discussing this in our Linux course, which we’ll be bringing up soon. So this is about servers and multiple NICs. Now, when it comes to network interfaces on EC2 instances, I hope you remember that the network interface shows up in the console. If you look at the instance, you’ll notice a network interface; if I just click over here and enter the interface ID, this is the interface that is associated.
Now, I can create an interface whenever I want and attach it to any instance. So, basically, you can attach multiple interfaces to an EC2 instance, for high availability or other purposes. Anyway, let’s come back to the topic. Generally, what happens in an EC2 instance is this: let’s consider this as the EC2 instance, and this is the virtualization layer. Amazon uses Xen as the virtualization layer. And let’s assume that there are two interfaces attached. One is eth0, and the other is eth1. Any traffic that enters or exits the EC2 instance through these interfaces must pass through the Xen virtualization layer. So this is where a lot of processing related to networking happens.
And from here, the packet can go outward towards another EC2 instance or another network. Now, every network interface card has a specific bandwidth. It is not that I have one network interface card and it will provide me with unlimited bandwidth; just like every car has a top speed, the same goes for the network interface card. Now, specifically when you are using the network interface card along with the virtualization layer, there is some amount of bandwidth restriction, and a lot of bandwidth gets lost. You see, all the network packets have to go through the virtualization layer, and from there they go out, and this sometimes creates performance degradation. Enhanced networking essentially means better networking performance, and this is the reason why, with enhanced networking, the architecture is a bit different. With enhanced networking, you will notice that instead of the interface interacting with the virtualization layer, there is now a new interface known as the Intel 82599. This is a great network interface card. If you look up the Intel 82599, you’ll see that it’s a 10-gigabit Ethernet controller. So this is a network interface card that supports up to ten gigabits of connection.
So this is what AWS uses for enhanced networking. You have your Intel network interface card, and now the instance’s interface is directly connected to the NIC, which is present over here, and not to the virtualization layer. This provides a lot of benefits. The second point is that the Intel 82599 virtual interface supports speeds of up to 10 Gbps, and instances that support enhanced networking with the Intel 82599 will also support this maximum speed. Now, one more important thing to remember over here is that enhanced networking uses single-root I/O virtualization (SR-IOV) to provide high-performance networking capability on supported instance types. So enhanced networking is not supported by all instance types, only selected ones. When it comes to this Intel NIC, you can see that it still has a limit of 10 Gbps, which is insufficient for many corporate applications.
So performance with the Intel 82599 is limited to ten gigabits per second, and this is the reason why AWS came up with new hardware called the Elastic Network Adapter (ENA). In the same way, with an Elastic Network Adapter, the EC2 instance’s network interface is directly connected to it, bypassing the virtualization layer. One of the advantages of the ENA is that it is a new PCI network device designed specifically for EC2 instances. Now, the ENA supports network speeds of up to 25 Gbps on the supported instance types. This does not mean that the ENA has a hard maximum of 25 Gbps; the device interface actually supports up to 400 Gbps of networking capability. So in the future, this limit of 25 Gbps will be increased.
But one important thing that we should know is that the ENA is extremely fast. So these are the two technologies that AWS uses for enhanced networking. Now, depending on the instance type, enhanced networking can be enabled using one of the following mechanisms. As we already discussed, there are two ways in which enhanced networking can be used. One approach is the Intel 82599 Virtual Function (VF) interface. The second route is via the Elastic Network Adapter. Each one of these supports a specific set of EC2 instance types, which you can just glance through. It is not mandatory to know which instance type supports which network interface card, but you should have an overview and understand that there are two types of enhanced networking capabilities that AWS offers.
So let’s look into the EC2 instance types. If you go a bit down within the instance type documentation, there is an enhanced networking column. The T2 series does not really support enhanced networking. It all starts with the M4 series, so M4 supports enhanced networking, and enhanced networking is also supported if you go a little lower for other instances. Now, there are a few important things to remember over here. If you look over here, M4 uses the Intel 82599 for enhanced networking. Furthermore, bigger instances such as P2, P3, and R4 support the Elastic Network Adapter. Except for the m4.16xlarge, all M4s support enhanced networking via the Intel virtual interface. Now, one last thing that we’ll be covering in this lecture is driver support.
For example, if you buy a gaming laptop with a good graphics card but do not have the graphics card driver, it essentially makes no sense. In the same way, if you are using enhanced networking, you have to make sure that you have the proper drivers or modules installed within your EC2 instance that will make use of enhanced networking, and this is something that we are going to look into. So I have one EC2 instance running. Let me just go back. I have named it “enhanced networking.” This EC2 instance is based on the m4.large instance type, so it supports enhanced networking capability. I’ll just log into this EC2 instance and do a sudo su.
Here are the interfaces. This is our primary interface, which is eth0. Now, since m4.large supports enhanced networking, let’s quickly verify that the appropriate modules are installed within this operating system to take advantage of it. If you run ethtool -i eth0, there is one very important thing you must verify, and that is the driver. The driver here is ixgbevf. This is the Intel network interface driver for enhanced networking, so you have to make sure that this specific driver is present if you are using Intel-based enhanced networking. For ENA-based enhanced networking, the driver name will be ena; for Intel-based, the driver name will be ixgbevf. So this is basically the module name. Now, one more important thing that you might want to remember is that you can also check through the aws ec2 describe-instances command. If you just run this command, let me specify the region, and when it comes to the sriovNetSupport attribute, the value should be “simple.” That means enhanced networking is supported. Sometimes the value is null, and that is something you really have to check; otherwise, you will not get proper performance. So there are two things you should look into: whether you have the appropriate driver, and, using aws ec2 describe-instances, whether you get the “simple” value for sriovNetSupport. So this is it about this lecture.
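To summarize the two checks, here is a small, purely local Python helper (not an AWS API) that captures the decision logic: the `sriov_net_support` argument stands in for the `sriovNetSupport` attribute value, and `ena_support` for the `EnaSupport` flag from `describe-instances`.

```python
# Sketch: deciding which enhanced-networking driver to expect on an
# instance. The inputs mirror what `aws ec2 describe-instances` reports,
# but this helper itself is a local illustration, not an AWS call.

def enhanced_networking_driver(sriov_net_support, ena_support):
    """Return the kernel driver the instance should be using.

    sriov_net_support: value of the sriovNetSupport attribute ("simple" or None)
    ena_support:       boolean EnaSupport flag from describe-instances
    """
    if ena_support:
        return "ena"        # ENA-based enhanced networking
    if sriov_net_support == "simple":
        return "ixgbevf"    # Intel 82599 VF-based enhanced networking
    return None             # enhanced networking not enabled

# An m4.large with sriovNetSupport=simple should be on the Intel driver:
print(enhanced_networking_driver("simple", False))   # ixgbevf
```

Whatever this returns is what `ethtool -i eth0` should show as the driver; if the two disagree, the instance is not actually using enhanced networking.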
53. VPC Endpoints – Architectural Perspective
Hey, everyone, and welcome back to the KP Labs course. Now, in the earlier lecture, we discussed the high-level overview of VPC endpoints and how they actually work. So in today’s lecture, continuing the same topic, we look into the VPC endpoints as far as the architectural view is concerned. Let’s begin by understanding VPC endpoints from an architectural standpoint using a simple use case of EC2-to-DynamoDB communication.
So this is the first use case of EC2-to-DynamoDB communication. You see, this is the “before” scenario, where “before” means before the VPC endpoints were introduced. You have the EC2 instance over here, and you have DynamoDB, and both of these belong to the same region. Before VPC endpoints were introduced, if an EC2 instance wanted to communicate with DynamoDB, the traffic would flow to the router. From the router, the traffic would flow to the Internet gateway, and from the Internet gateway, it would traverse the Internet. And then it would reach DynamoDB. Now, even though both entities are within the same AWS region, the traffic was still traversing the Internet. This led to a lot of issues related to latency and even security-related challenges. And this is the reason why a lot of customers were giving feedback to allow communication between two services that belong to the same AWS region through an AWS private link.
So in this second use case, you have the “after” scenario, following the introduction of the VPC endpoints. Now you have the EC2 instance and DynamoDB within the same region. EC2 wants to send some traffic to DynamoDB. It reaches the router, which verifies whether the destination belongs to DynamoDB within the same region. If yes, then the router will send it to the VPC endpoint, and from the VPC endpoint, it will reach DynamoDB. So in this kind of approach, the traffic is not actually going through the Internet gateway; it stays within the AWS private network itself. So this is a good feature. It reduces the overall latency and increases the security posture. Now, before we conclude this lecture, there is one interesting thing that I wanted to show you because, at some point, this part might confuse you. So let me show you one thing. This is an EC2 instance that is linked to S3 through a gateway VPC endpoint. It does not really have any Internet connectivity. If I do a ping on google.com, you see, I am not able to reach anywhere. So let’s do an aws s3 ls and look into the output. Here I am getting a lot of S3 buckets, which belong to my account. Now, a question might arise here; one of my colleagues recently asked this after he had implemented the VPC endpoints and run the same command.
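The routing decision described above can be sketched as a longest-prefix-match lookup. In reality, the gateway endpoint installs a route whose destination is an AWS-managed prefix list for DynamoDB; the CIDR below is purely illustrative, standing in for that list.

```python
# Sketch: how the router in the "after" diagram picks a target.
# The DynamoDB CIDR is illustrative, standing in for the AWS-managed
# prefix list that a gateway endpoint actually installs in the route table.
import ipaddress

ROUTES = [
    ("52.94.0.0/22", "vpce-dynamodb"),   # illustrative DynamoDB prefix
    ("0.0.0.0/0", "igw"),                # everything else: internet gateway
]

def next_hop(dst_ip):
    """Longest-prefix match over the toy route table."""
    best = None
    for cidr, target in ROUTES:
        net = ipaddress.ip_network(cidr)
        if ipaddress.ip_address(dst_ip) in net:
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, target)
    return best[1]

print(next_hop("52.94.1.10"))   # vpce-dynamodb: stays on the private link
print(next_hop("8.8.8.8"))      # igw: traverses the internet
```

The key point: traffic matching the service prefix never reaches the internet gateway, which is exactly why it stays on the AWS private network.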
The buckets that were listed were also from other regions. So let me show you. If I open up the S3 console, I have a lot of S3 buckets. Some belong to Singapore, some belong to Ohio, and some belong to Oregon as well. Currently, my EC2 instance is within the North Virginia region. Now, one question that might arise is that if I do aws s3 ls, since this EC2 instance is connected to the VPC endpoint, it should only show the bucket listing of the us-east-1 region, which is North Virginia. However, if you look into the bucket named “test-kplabs,” which belongs to the Oregon region, and if you look into the listing, I am actually able to see that bucket as well.
So this might actually confuse a lot of people. One important thing to remember is that even though you have a VPC endpoint enabled, when you do an S3 listing, it will show you all the S3 buckets of all the regions. However, whenever you try to establish a connection to an S3 bucket belonging to a different region, it will not work. So let me do an aws s3 ls s3://output-kplabs; let’s quickly verify that output-kplabs belongs to the North Virginia region, the same region as my VPC endpoint and my EC2 instance. If I hit Enter here, I get the listing of output-kplabs. Now let’s try a different bucket, test-kplabs, which belongs to the Oregon region. So, aws s3 ls s3://test-kplabs. Now, if you see, it will not give me any output. So, even though it had listed the bucket of the Oregon region, I am unable to establish connectivity to it. If I try to establish connectivity with a bucket belonging to the same region, it will work, but it will not work for buckets in other regions; only the listing works across regions. So this is one important thing that you should remember.
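The behaviour observed above can be summed up in a tiny simulation (the bucket names and regions mirror the demo; this is an illustration of the observed behaviour, not AWS code): listing is a global call and succeeds, while data access only works for buckets in the endpoint’s own region.

```python
# Sketch of the behaviour seen through an S3 gateway endpoint:
# ListBuckets is global and always succeeds, but data-level calls only
# work for buckets in the endpoint's own region. Names mirror the demo.

ENDPOINT_REGION = "us-east-1"
BUCKETS = {"output-kplabs": "us-east-1", "test-kplabs": "us-west-2"}

def s3_call(operation, bucket=None):
    if operation == "ListBuckets":
        return sorted(BUCKETS)          # listing is global: all regions appear
    if BUCKETS.get(bucket) == ENDPOINT_REGION:
        return "ok"                     # same region: served via the endpoint
    return "no-route"                   # cross region: connection fails

print(s3_call("ListBuckets"))                   # both buckets appear
print(s3_call("ListObjects", "output-kplabs"))  # ok
print(s3_call("ListObjects", "test-kplabs"))    # no-route
```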
54. Understanding Interface VPC Endpoints
Hey everyone, and welcome back to the KP Labs course. So, in today’s lecture, we’ll go over the interface VPC endpoints. Let me just show you what I mean by this before we proceed. Whenever you go into “create endpoint,” there are two types of endpoints over here: one is the gateway, and the other is the interface. In today’s lecture, I will speak about the interface endpoints. Now, in the earlier sessions, we already discussed in great detail what the gateway endpoints are all about, so let’s go ahead and discuss more about the interface VPC endpoints. First, let’s just revise the gateway VPC endpoints. In the gateway VPC endpoint approach, the endpoints were actually created outside the VPC, where you did not really have much control. It was created on the Amazon side, and what you had to do was modify the route entry so that the traffic would flow to the VPC endpoint via the route table.
The same is depicted in this diagram, where you have the EC2 instance and, as you can see, the VPC endpoint is not within the VPC, and this is something over which you have little granular control. Because we work with the route table in gateway endpoints, we cannot use VPNs or Direct Connect connections to access the gateway endpoints. Let’s assume that you have a site-to-site tunnel between your AWS environment and your data center. Now you want to establish a private link between the data centre and the gateway-based VPC endpoint. This is not possible, because only EC2 instances inside the VPC make use of those route entries. So you cannot extend your network when it comes to gateway VPC endpoints. We’ll understand this more when we discuss the interface endpoints. Also, access control was restricted to an IAM-like JSON policy document. We used to create an access policy on the gateway endpoints, and this is how the access control used to work, so it was not very granular either. Now, in order to solve these disadvantages, Amazon decided to launch the next generation of endpoints, which are called the interface endpoints. Interface endpoints can also be thought of as version two, which has numerous advantages. One of the great advantages is that interface-based VPC endpoints are created within the VPC.
So if you see over here, this is the interface endpoint, and it is not created outside, like the gateway, but actually inside the VPC, within the subnet that you define. They have an elastic network interface, which means they have a private IP associated, and access control is done through a security group instead of access policies. And since they have this ENI with a private IP, even the servers within the data centre, if there is a tunnel in place, can directly make a call to this endpoint, and the traffic can be served via the private link. So let’s look into how exactly that might work. I’m connected to my EC2 instance within the private subnet. This does not really have any Internet connectivity. Let’s quickly verify again. You see, Internet connectivity is not there. Now, when I run aws ec2 describe-instances, let me press Enter; you see, I am able to actually run this command successfully. Now, why is that? Let me go back within the endpoints. I have two endpoints available over here. One is the gateway endpoint type for S3.
And the second is the interface endpoint type for EC2. Now, keep in mind that the gateway type is currently available only for S3 and DynamoDB; for the newer services that AWS launches, as far as endpoint connections are concerned, most of them are based on the interface endpoint type. So for EC2, I have created an interface endpoint. And this interface endpoint, if you see, is associated with a subnet and also has a private IP address. It also has a security group instead of the access control policy that we generally used to work with in the gateway endpoint type. So let’s quickly verify. I’ll show you in the EC2 console exactly where the ENI is created for the interface endpoint. If you go to the network interfaces, you will see that this is the VPC endpoint that was created, and the IP address associated with this endpoint is 172.31.26.66, and you will find the same over here.
So now, whenever someone runs aws ec2 describe-instances or similar API calls from any of the EC2 instances, those API calls are automatically routed to this IP address. So let’s put that to the test and see if it holds true. Let’s use tcpdump with a destination port of 443. We’ve seen that whenever you make an API call, whether it’s to the S3 service or the EC2 service, it actually connects to AWS endpoints over HTTPS. So now let’s start the tcpdump. When you run aws ec2 describe-instances, you will see that this is the IP address of my EC2 server, and this call was routed to the IP address 172.31.26.66, which corresponds to the endpoint we set up. As a result, all aws ec2 API calls will be routed to the interface endpoint. So this is what the interface endpoint is all about. I hope this gave you a high-level overview of the difference between a gateway endpoint and an interface endpoint.
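A quick way to confirm that a call is staying on the private link is to check whether the service hostname resolves to an RFC 1918 address, i.e. the endpoint’s ENI. The sketch below hard-codes the addresses from the demo; in practice you would feed it the result of `socket.gethostbyname` on the service name.

```python
# Sketch: verifying that a resolved endpoint address is private, which is
# what tells you the API call stays inside the VPC. The first address is
# the ENI IP from the demo; the second is an arbitrary public address.
import ipaddress

def on_private_link(resolved_ip):
    """True if the resolved address is RFC 1918 private (i.e. the ENI)."""
    return ipaddress.ip_address(resolved_ip).is_private

print(on_private_link("172.31.26.66"))   # True: interface endpoint ENI
print(on_private_link("52.94.1.10"))     # False: public AWS endpoint
```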
55. Implementing Interface Endpoints
Hey everyone, and welcome back to the KP Labs course. So, in today’s lecture, we’ll look at interface endpoints from a practical standpoint. I have two EC2 instances over here. One is KP Lab One, and this has a public IP associated with it so that I can log in. The second is the Interface Endpoint EC2 instance, which does not really have any public IP, so I will not be able to directly log in to it. This is the reason why I’ll first log in to KP Lab One, and from there I’ll log in to the Interface Endpoint EC2 instance, because it does not really have any internet connectivity. However, the connectivity between these two instances is present because they are in the same VPC. Perfect. So before we begin, I’ll just show you the route table associated with the interface endpoint instance’s subnet.
So the subnet ID ends with 83c. Let’s quickly filter by VPC. If we go into the subnets, there is the subnet ending with 83c, and if you look into its route table, it does not really have an internet gateway or a NAT gateway. So there is no way this instance will be able to communicate with the internet. I’m already logged into the instance, and if you try to ping google.com, it will not really work. Now, many organizations do not require internet connectivity; however, if they remove the internet gateway or the NAT gateway, even the AWS API calls stop working, which is actually a big pain. Many times, you need the API calls to work, so businesses are compelled to keep an internet or NAT gateway. In order to solve this problem, again, we have the endpoint services that are launched by AWS. So let’s go ahead and deploy our interface endpoint. Go to the endpoints, and this time click on “Create Endpoint.” Again, there are gateway and interface endpoints. We’ll be selecting the interface endpoint associated with EC2.
Now, the subnet where we want the interface endpoint to get launched should be the 83c one associated with this EC2 instance. Basically, adding more subnets will result in the creation of an elastic network interface in each of those subnets. For now, I’ll just choose the one subnet associated with us-east-1c, and then the security group. Because this creates an ENI, you can also attach a security group; I’ll just leave it at the default for now, and I’ll click on Create Endpoint. So the endpoint status is now “pending.” It takes some time for the endpoint to come up, so we’ll just wait a minute or two for the status to change. Perfect. So our interface endpoint is up; you’ll see that the status is available now. As we’ve already discussed, the interface endpoint will create an elastic network interface. Just to quickly verify that the ENI is created, let’s go to the EC2 console, go to the network interfaces, and in the first column you’ll see there is an elastic network interface that is created for the VPC endpoint.
You see, the description is “VPC endpoint.” Now, since this is an ENI-based approach, you can directly associate a security group. If you go to the security group currently, it is using the default. So let’s select the ENI, and within the ENI, you should see the security group that is associated. I’ll click on it and verify whether the inbound connectivity is there. As far as the connectivity aspect is concerned, things seem to be working perfectly. Great. So now we have everything set. And, surprise, we don’t really have to modify the route table. Now the question is: why? As you can see, the service name for the EC2 API calls is com.amazonaws.us-east-1.ec2. What Amazon does is change the DNS associated with this service so that it resolves to the IP address of the elastic network interface. So let me show you. The IP address associated with the elastic network interface is 172.31.27.63. Amazon will modify the DNS name associated with this specific service so that it points to the elastic network interface IP. So, let’s do an nslookup, and you’ll see that for this endpoint, this is the IP address that is assigned automatically. So now, whenever you run aws ec2 describe-instances, all the API calls will go to the endpoint that was created.
And this is a good approach because you don’t really have to worry about managing the route table. So let’s quickly verify. I’ll do a tcpdump with a destination port of 443. Great. And when you run aws ec2 describe-instances, it should be working perfectly. The calls, if you look, are going to this specific IP (172.31.27.63), which happens to be the interface IP. The interface endpoints not only simplify things because we don’t need to change the route table; they are also much simpler because we are no longer dealing with a JSON-based access control policy, we are dealing with security groups. Furthermore, because this interface has a private IP address, if you have a Direct Connect connection or a virtual private network, your on-premises servers will be able to make calls to this specific IP address and receive results. So this is it for the deployment of an interface endpoint. I hope this has been informative for you, and I look forward to seeing you in the next lecture.
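The console steps from this lecture can also be expressed through the API. Below is a sketch of the parameters `create_vpc_endpoint` would take with boto3; the VPC, subnet, and security-group IDs are placeholders, while the service name is the us-east-1 EC2 service shown in the console.

```python
# Sketch: the console steps above as boto3 parameters for
# create_vpc_endpoint. All resource IDs below are placeholders.

endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",               # placeholder
    "ServiceName": "com.amazonaws.us-east-1.ec2",   # service from the console
    "SubnetIds": ["subnet-0123456789abc0083"],      # placeholder (us-east-1c)
    "SecurityGroupIds": ["sg-0123456789abcdef0"],   # placeholder
    # Lets the normal EC2 service DNS name resolve to the ENI's private IP,
    # which is why no route-table change is needed:
    "PrivateDnsEnabled": True,
}

# With boto3 this would be invoked as:
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.create_vpc_endpoint(**endpoint_params)
```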