61. Understanding Memcached Engine
Hey everyone, and welcome back to the Knowledge Pool video series. We are continuing our journey through the caching subsystem, and today we'll be discussing Memcached, which is a critical technology in caching. Now, in the earlier lecture, we were looking at the basics of what caching is all about. One important thing to remember here is that caching is not just limited to the HTTP protocol; it is actually used in a lot of use cases, including databases. It also includes the operating system and hardware. So, when you buy a CPU, it usually comes with caches such as the L1 cache, L2 cache, and L3 cache. So even in hardware, the importance of caching is understood. And this is the reason why we decided to talk about caching in much greater detail.
So with this, let's go ahead and talk about Memcached. Memcached is a general-purpose distributed memory caching system. The word in question here, memory, is critical to remember, because the data and objects in Memcached are stored in memory. Now, typically for a caching system, there are two places in which it can store data: the first is memory, and the second is the hard drive. The retrieval speed varies greatly depending on the underlying storage technology that you use; we'll look into that on the next slide. So Memcached is often used to speed up dynamic, database-driven websites by caching data and objects in RAM. In order to understand Memcached, we have a nice little animation that has been developed, so let's look into it. On the left-hand side, you have your client, or you can consider it your application. And on the right side, you have a database containing a specific object or data. Now, whenever the application wants to retrieve this data, it has to send a specific SQL query, something like a SELECT query against a table.
So it has to send some kind of query to the database. In the first step, the client or application sends a query to the database. The database processes that query and fetches the data from the underlying storage. Now, the underlying storage can be a hard disk drive or a solid-state drive. The database fetches the data from the underlying storage and gives it back to the client. So far, this appears to be a happy situation. Now, what happens if, after five minutes, one more client sends the same query to the database? The database has to perform the same operation and send the same object back to the other client. When you talk about websites like LinkedIn, Twitter, and Facebook, there are certain articles or certain tweets that millions of users will read. And if millions of users read the same tweet, that basically means that a million times the application will have to send the query to the database, retrieve the data, and send it back to the client. So, there are two problems over here. One is that the database is quite slow: if this object is stored on the underlying hardware, it will take some time to retrieve it. And the second is that the load on the database will increase tremendously. So, in order to speed things up, caching technology is used.
So, what happens is that there is middleware introduced, and once the application retrieves the data, it stores this data in the caching subsystem. Consider this to be Memcached. Now, the next time the application wants to retrieve the same data, instead of sending the query to the database, it queries the cache system and retrieves the data from the cache. Because this object is now stored in memory, retrieval time is extremely fast when compared to the database. So let's compare how fast a hard disk drive and a solid-state drive are compared to a RAM disk. If you look at the sequential reads in the figure, the hard disk drive gives you 112 and the solid-state drive gives you 477, so there is a significant difference between a hard disk drive and a solid-state drive. When it comes to the RAM disk, the number is 5766. So you see, the difference in the numbers between a hard disk drive, a solid-state drive, and a RAM disk is tremendous. And it is for this reason that Memcached stores data and objects in memory: to speed things up. Because it does, the retrieval time is extremely short, and you will find that your website loads very, very fast when you use some kind of memory-based caching system. So, now that we understand the basics of Memcached, let's look into how we can integrate it with our application. There are three steps to integrating Memcached with our application. The first thing the application needs to do is try to fetch the data from Memcached.
So, whenever an application needs to retrieve data, it first attempts to retrieve the data from Memcached. If the data is not present or is not found, it then fetches the data from the database through a query. So the application will generate the query and send it to the database. Once it retrieves the object from the database, it stores that data in Memcached. So the next time the application tries to retrieve this data, it will get it from Memcached instead of the database. So this is about the theoretical part. Let's try the practical aspect as well. What I'll do is get my Ubuntu operating system up and running. So let's go ahead and install Memcached. Perfect. Memcached is installed; I'll check to see if it's running. You see, Memcached is not running, so I'll start the Memcached service. Perfect. If you run a status on memcached, you'll see that it's up and running. Now let's do a quick ps aux on memcached, and you'll find memcached is running on port 11211. So this is the port where it is running. So I'll do a quick telnet to 127.0.0.1 on port 11211. Now I'm connected to the Memcached service. If I do a quick stats, it will basically show you the statistics related to the Memcached service. Now, there are two important counters that we have to look into: get_hits and get_misses.
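The three steps just described, check the cache first, fall back to the database on a miss, then populate the cache, are often called the cache-aside pattern. Here is a minimal sketch in Python, where a plain dictionary stands in for Memcached and a stub function stands in for the real SQL query (both names are assumptions for illustration, not real APIs):

```python
# Minimal cache-aside sketch: a dict stands in for Memcached,
# and fetch_from_database() stands in for a real SQL query.
cache = {}
db_queries = 0  # counts how often we actually hit the "database"

def fetch_from_database(key):
    global db_queries
    db_queries += 1
    return f"row-for-{key}"  # pretend this came from a slow disk read

def get_with_cache(key):
    # Step 1: try the cache first.
    if key in cache:
        return cache[key]
    # Step 2: on a miss, query the database.
    value = fetch_from_database(key)
    # Step 3: store the result so the next read is served from memory.
    cache[key] = value
    return value

first = get_with_cache("tweet:42")   # miss -> goes to the database
second = get_with_cache("tweet:42")  # hit  -> served from the cache
```

Note that only the first read touches the database; every later read of the same key is served from memory, which is exactly why this pattern takes so much load off the database.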
So what exactly are these? Let's understand. Whenever the application successfully retrieves data from Memcached, the get_hits counter gets updated. However, if the application tries to retrieve data from Memcached and Memcached does not have that data stored, then the get_misses counter gets updated. So, in an ideal situation, get_hits should be the higher number. As a high-level overview, the higher that number, the faster your website will be. Anyway, this is about the statistics. Now there are a few important things that I wanted to show you. There is a simple document I have written, and I'll be posting it in our forum, so you can use it as a reference to see how exactly this works. In the first practical, we'll use set to store some data in our Memcached server. So I type set kplabs, followed by the flag, the expiration timer, and the number of bytes of data, and then the associated value, which is memcached. Perfect. So basically, what we are doing is storing a key-value pair.
So kplabs is the key, and memcached is the value that is associated with it. If I do a get kplabs, what you will find is that you get back the value, which is memcached. Because we got this value from Memcached, the get_hits counter should be updated. So, if I do a quick stats and grep, you can see that get_hits has increased by one. Let's run the same query again: I do get kplabs and run stats once more. The get_hits counter has increased by one again, bringing the total to two. So let's demonstrate misses as well. Let's ask for something that the cache does not have. So I do a get on kplabs1. Now, you see, I did not get any response. That means Memcached does not have any value associated with this key. So ideally, this is a miss. If I run stats now, you'll notice that get_misses has been updated by one. So this is how it really works. Now, apart from storing a value and retrieving it, memcached also supports a lot of other operations, including the increment and decrement operations. So typically, in an application where voting is required, memcached's increment and decrement operations are useful.
So let's look at how it goes. I'll do a set for a key called votes, with the flag, the expiration timer, and the number of bytes, and I'll store the value ten. So now this value is stored. When I do get votes, I get back the value of 10 that is associated with it. Now, if I want to increment it, I'll use incr votes 5. And now it has incremented our value of ten by five. You can even verify it with get votes and find that the value is 15. So you can increment or decrement the data that is associated with a key in memcached. There are a lot of other commands that are present in memcached as well. So this is the high-level overview of the memcached service.
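To make the counters and commands concrete, here is a small pure-Python sketch of the behaviour we just saw over telnet: a tiny in-memory store that tracks get_hits and get_misses and supports incr the way memcached does. The counter names mirror memcached's stats output, but the class itself is only an illustration, not the real protocol:

```python
class TinyCache:
    """Toy in-memory key-value store mimicking memcached's
    get_hits/get_misses stats and the incr command."""

    def __init__(self):
        self.data = {}
        self.get_hits = 0
        self.get_misses = 0

    def set(self, key, value):
        self.data[key] = value

    def get(self, key):
        if key in self.data:
            self.get_hits += 1
            return self.data[key]
        self.get_misses += 1
        return None  # memcached replies END with no value on a miss

    def incr(self, key, delta):
        # memcached increments numeric values in place
        self.data[key] = int(self.data[key]) + delta
        return self.data[key]

c = TinyCache()
c.set("kplabs", "memcached")
c.get("kplabs")      # hit: the key exists
c.get("kplabs1")     # miss: this key was never stored
c.set("votes", 10)
c.incr("votes", 5)   # 10 + 5 = 15
```

After this sequence the stats show one hit and one miss, and the votes key holds 15, matching what we observed in the telnet session.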
Now, let's look into some of the important points that we must remember. First, Memcached is a simple, volatile cache server. Since the data is stored in memory, remember that whenever the server restarts, the entire dataset will be lost. That is very important. Typically, in setups where Memcached is used, there are certain applications that are used to dump the data from memory to the hard drive, so that if the server restarts, the data can be read back into memory. So, a very important thing to remember is that Memcached stores data in memory; if the server restarts, all your data is lost. Perfect. The second important point is that it enables us to store a simple key-value pair with a value of up to 1 MB. So we stored a value for kplabs: kplabs was the key and the value was memcached, and the second time, the key was votes and the value was ten.
So it is a simple key-value based storage. The third point, which we already discussed, is that it is an in-memory caching solution. As a result, if the server is restarted, all data in Memcached will be lost. The next critical point is that memcached is multithreaded. So if you have multiple threads, which is what modern CPUs have, it can use those threads in parallel, and things will become much, much faster. And last, since Memcached is a distributed system, it is quite easy to scale it horizontally. So these are some of the important points that you must remember. Now, we have already discussed that there are various other technologies that are present. Memcached and Redis are two of the most famous technologies used in memory-based caching solutions, which are typically integrated with databases. So this is it about Memcached; in the upcoming lecture, we'll be discussing more about Redis and the benefits of Redis.
62. Understanding Redis Data Types & Pub/Sub Capabilities
Hey everyone, and welcome back to part two of our Redis lecture. Now, in the earlier lecture, we discussed the basics of Redis. We looked into the installation steps, and we also performed a simple key-value-based operation on Redis. So in today's lecture, we'll be talking about some of the advanced data types that Redis supports, along with some additional very interesting features like the publisher and subscriber based functionality. So let's talk about data types. One of the very important data types supported by Redis is the hash. In very simple terms, in a hash you have a key and a value, then again a key and a value, and again a key and a value. So there are some very interesting use cases that you will discover. Let's do one thing; let's try it out. So when I run this command, you get a success message of OK. Now, in order to fetch these values, you have to run a command called HGETALL. You cannot use GET; you have to do HGETALL. So let's try this out. I do an HGETALL on the kplabs-lectures key.
So now you see, you actually got the entire set of parameters that you sent. First, a name: this is the name, this is the description, and this is the value associated with the description. Then you have the key lecture, and you have a value associated with that key, which is 45, and so on. So this is what hashes are all about, and this has a lot of use cases in the real world. So this is about hashes. Next on the agenda is something very interesting: Redis supports list functionality as well. Let me just show you what I mean by this. For those who come from a programming background or those who know the basics of Python, I hope you understand each one of these. To really explain these data types in detail, we would actually need to create a scripting course, but we'll avoid that for now. So let's try this out. I'll just copy and paste so that we don't spend much time. In order to create a list, we do an LPUSH followed by the list name. I'll hit the Enter key, and it says 1. Let me try once more, this time with the professional course; press Enter, and you have one more in your list. Then let me try one more, the NGINX course, which is also a great course. I'll hit Enter. So now in the list, you have three courses that are present. One is the solutions architect associate.
You have the solutions architect professional, and you also have the NGINX course. Now, if you want to fetch the data from the list, you have to run a command called LRANGE. So earlier, in order to push the data, you used LPUSH, and in order to fetch the data, you use LRANGE. If I use LRANGE followed by the kplabs-courses key and a start and end index, it will give me the list of all the courses that are associated with the kplabs-courses list. So this is about the list functionality. The sorted sets are the final and most important functionality to remember. Now, this is very, very important, and even in exams you have to remember this specific functionality. Again, there is a name that is used in this functionality. As you have seen, there was a name associated with each data type: for hashes, the name was kplabs-lectures; for lists, the name was kplabs-courses. And we have a name for sorted sets: CSP. CSP is basically short for cloud service provider. So we'll put the cloud service provider information here. Now, there is a number associated with each one of them. So this number is basically the score.
In general, you will notice that there is a score that indicates which cloud service provider is the best, and every cloud service provider has an associated score. So if you want to have a scoring type of functionality, Redis does allow that. Let's try this out. Let me add a few entries. So I'll do a ZADD on the CSP key, giving Linode a score of six. In no way is this the actual score; this is something that I have randomly written. Personally, I use Linode for my servers, and I really love it. Anyway, let's try this out. I'll copy this and paste it. I think it did not get copied; let's try again. Perfect. So now you have Linode with an associated score of 6. Let's try with others as well. I'll use ZADD CSP again and give the next provider a score of five. Then I'll run the same command and put AWS under a score of one. So if you see, I'm storing the scores randomly: I started with six, then five, then one, then three, then two, which is Azure, and last but not least four, which is DigitalOcean. DigitalOcean is also a great cloud platform.
I really encourage you to try it out; quite reasonable and quite good. Perfect. So now we have stored the value and the associated score for each one of them. Now, if I want to fetch the data along with the score in a sorted manner, I would run a ZRANGE command followed by the key. So you see, the key associated with the ZADD, which is a sorted set, was CSP. Then you give the range zero through ten, along with the WITHSCORES option. What it will do is automatically sort the data according to the scores. So you have one, then two, then three, then four, then five, and finally six. We actually had six stored first, if you recall; that was the first entry that we did. But when we did the ZRANGE, it automatically sorted the data according to the scores and gave us the output accordingly. So this is a very interesting functionality, specifically when it comes to leaderboards. When you play games, you find out who the winner is and who is eliminated at the end: the leaderboard shows who has the most points, who is second, and so on. That kind of list maps directly to sorted sets, and Redis is actually very useful in those types of functionality where specific comparisons or leaderboards are required. So this is about sorted sets. The next important functionality that we'll be discussing is the publisher-subscriber capability. This is a very interesting one. Redis Pub/Sub works like a messaging system where a sender publishes a message to a channel while the receivers subscribed to that channel can receive the message.
So there is a channel here; consider this a chat room. A publisher sends a message to the channel, and any subscriber or any consumer who is subscribed to this channel can receive that specific message. So let's try this out. I'll open one more command prompt. Let me do a docker ps; I'm actually running Redis in Docker. Perfect. So I'll run a redis-cli here as well. Perfect. So now we have two Docker terminals. I'll resize them so that everything becomes much more visible. Perfect. So, publisher and subscriber, let me give you an example. The first command that you should type is PUBSUB CHANNELS, which will basically tell you if there are any channels currently present. So far, it says empty list. Perfect. So let's do one thing. Let's subscribe to a channel. We'll call the channel the secret channel. So now I'm subscribed to this channel. Now, look at what we have done: we have a channel, and this channel is called the secret channel.
And we have a consumer who has subscribed to this channel. If a publisher sends a message to this specific channel, that message will be sent to all consumers who are listening to that channel. So let's try this out. Because we already have one consumer listening to the secret channel, we'll create a publisher and send a message to the channel from the second terminal. So this is what the PUBLISH command is all about. Let's try this out. I'll open the second terminal as well, so that it becomes much more visible. From the second terminal, you type PUBLISH, and then it expects the channel name. Our channel is called the secret channel. And then it expects the message. For the message, I say: this is the secret channel. Okay? So I'll press Enter. And now you see here that the subscriber got the message: this is the secret channel. Now I run the command again with the message: this is KPLabs' video course. And if I go back to the first terminal, you see, I get this specific message as well. So this is similar to a chat feature. In a nutshell, you send a message from a publisher, and it is received by all the subscribers who are listening to that specific channel. So this is called the Pub/Sub functionality, or capability, of Redis.
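The channel mechanics we just walked through can be sketched in pure Python. A real deployment would use two redis-cli sessions (or a client library) against a running Redis server, so the broker class below is only an illustration of the fan-out behaviour, with made-up names:

```python
from collections import defaultdict

class TinyPubSub:
    """Toy in-process broker mimicking Redis SUBSCRIBE/PUBLISH fan-out."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        # Like SUBSCRIBE: register a consumer on a channel.
        self.subscribers[channel].append(callback)

    def publish(self, channel, message):
        # Like PUBLISH: deliver to every current subscriber and,
        # as Redis does, return how many subscribers received it.
        for callback in self.subscribers[channel]:
            callback(message)
        return len(self.subscribers[channel])

broker = TinyPubSub()
received = []
broker.subscribe("secret-channel", received.append)   # terminal 1
delivered = broker.publish("secret-channel",          # terminal 2
                           "This is the secret channel")
```

As in Redis, a message published to a channel with no subscribers is simply dropped, which is why PUBLISH returning the receiver count is a handy sanity check.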
63. Understanding ElastiCache in AWS
Hey everyone, and welcome back to the Knowledge Pool video series. As we continue our journey through the caching subsystems, today we will discuss ElastiCache. Now, don't worry, this is not yet another in-memory caching system, so you don't have to learn more things for our exams. So let's talk about ElastiCache; it actually makes life much simpler. So let's get started. Now, in previous lectures, we discussed Memcached as well as Redis. We installed both of these tools and ran specific commands to see exactly how each one worked. We also had a high-level overview of the difference between Memcached and Redis. Aside from what we've already done, there are a lot of things that should be done before deploying in production.
This includes authentication, server hardening, and application hardening. It includes regular patch updates. So if you are running Redis, and there is a vulnerability in that Redis version, then you have to install the newer version of that specific application. That is a bit difficult because both Memcached and Redis are in-memory subsystems, so updating and patching them is quite challenging. And you also have to worry about fault tolerance and high availability. What happens if your Memcached server goes down? What happens if your Redis goes down? You must build your own clusters; you must build your own replicas for high availability.
So all of these things require a good amount of work, and this is the reason why Amazon introduced ElastiCache. ElastiCache is basically a fully managed AWS service, which makes it easier to deploy, operate, and scale an in-memory data store or cache in the cloud. So this is something like a managed service. We have RDS for databases, which takes care of installation, multi-AZ, and easy failover. Similar to that, ElastiCache is the managed service for in-memory data stores, for both Memcached and Redis. Because it is a managed service, we can perform many tasks, such as clustering and high availability, with just two or three clicks. And the best thing is that we don't really have to worry about patch updates or application hardening. All those things are being done on the Amazon side.
Now, ElastiCache also has the capability to detect and replace failed nodes, thus reducing the overhead for the system administrator. So if you are a lazy guy like me, you'll prefer the fully managed service. Let's do one thing. Let's explore more about ElastiCache and see how exactly it works. So I have my AWS console over here. Let's search for ElastiCache, and you'll see that the description is in-memory cache. So ElastiCache is an AWS service, and within ElastiCache, if I click on "Get Started," you will see that there are two cluster engines present. One is Memcached, and the second is Redis. We have already discussed both of them, and depending upon the requirements that we have, we can either select Redis or select Memcached. Now, as we discussed, since ElastiCache is a managed service, all we have to do is click, and the entire cluster is ready for us. We don't really have to worry about anything. So this is the beauty of managed services, and this is the beauty of ElastiCache. What we'll do is deploy both Memcached and Redis in the upcoming lectures, and we'll look into various options that we can use as part of the ElastiCache cluster. So this is it about this lecture.
64. ElastiCache – Deploying Memcached Cluster Engine
Hey everyone, and welcome back to the Knowledge Pool video series. We were discussing the fundamentals of the AWS ElastiCache service in the previous lecture. So today we'll be deploying our first cluster engine in ElastiCache, based on Memcached. I am in the ElastiCache console, and I'll click on "Get Started." Now, as can be seen, there are two cluster engines. For this lecture, we'll be selecting Memcached. So let's select Memcached. Now, there is one important thing that we should ideally do first, which is the subnet group. So go to the subnet groups first and create a subnet group. I'll name it the KPLabs subnet. I'd say this is a private subnet, and I have to select a VPC. So this is the VPC in which the ElastiCache cluster will be launched, similar to what RDS is all about.
Along with that, we have to select the subnet ID. Currently I have one subnet, so I'll click on Add, and then I'll click on Create. Perfect. So once we have our subnet group created, I'll go to Memcached and click on Create. I'll select Memcached, and I'll name it kplabs-memcached this time. Now, for engine version compatibility, there are various engine versions available. I'll select the default port, which is something that we already know: 11211. This is very similar to what we had deployed in our EC2 instance, or I would say in Docker, rather. The node type is micro, so I'll select micro and then click Save. We'll be speaking about parameter groups in a while. We've already discussed that Memcached is a distributed system and that it supports clustering, so we can actually enable clusters; there can be multiple nodes. Let's assume that there is only one node present. If there is one node, that means all the data will be part of that single node. However, if you want to scale horizontally, maybe you can have two nodes, and the data will be partitioned across both nodes.
And this is how things can become much faster. So, depending on your needs, you can choose between one and two nodes, with 20 as the maximum number of nodes by default. For the time being, I'll select the number of nodes as one. You can go to the advanced Memcached settings, and this is where you can configure the subnet group, the security groups, as well as the SNS notification. So this is it. Just a few configuration settings, and we can click on "Create," and this will go ahead and create our Memcached cluster. As you can see, we only need a few clicks to get our entire cluster up and running. Depending on the number of nodes, the creation time is around two to three minutes, sometimes even five minutes. So let's wait for a while, and we'll resume once the status is available. So it took quite a while, but the cluster is now created. Now you see what we have: once the cluster is available, you get the configuration endpoint. This is the cluster endpoint, which you can configure in your application. Now, along with that, there are a few important things that you need to do, namely, make sure that the cluster is associated with a security group. If you go to the EC2 console, this is the security group with which the cluster is associated. So by default, you will not be able to connect to this cluster. Along with that, let me actually show you one important thing; this is quite important to remember. If I do an nslookup on the configuration endpoint that you see over here, you'll see that whatever IP address is associated with this cluster is a private IP address. So the cluster is only associated with a private IP, and you cannot directly access this cluster from the outside. You need to access it via an EC2 instance, or otherwise through a VPN; those are the ways you will be able to access it.
So what I have done is create an EC2 instance here that is up and running, and we will try to connect to this cluster from this EC2 instance. Both the EC2 instance and the ElastiCache cluster have the same security group ID, so this intercommunication is possible. Now, one thing that I need to do along with this is connect to the EC2 instance. So what I'll do is allow port 22 from my IP address.
So I'll allow the connection from home, and I'll click on Save. Okay, perfect. Now that I've allowed my home IP address in the security group, let's see if I can telnet to this specific IP address on port 22. Perfect. The connection seems to be established. Let's quickly connect to this EC2 instance. Perfect. So now I'm connected to the EC2 instance, and since we already know that the IP address associated with this cluster is a private IP address within the VPC, it can only be accessed by the EC2 instance or via VPN. So let's try and connect to this cluster from the EC2 instance via telnet. We will do a telnet, but this time we will paste the cluster's host name followed by 11211. This is very similar to what we had been doing when we had installed our Memcached in the Ubuntu operating system. There we gave the localhost IP, 127.0.0.1; the only thing that is changing is the host name. As you can see, it is now connected to the Memcached service. So if I run stats now, you will see that I am able to do everything that I was able to do when I had installed Memcached on the local machine. So this is how you can configure your Memcached in the AWS ElastiCache service. Now, there are a few important things that I'd like to show before we conclude this lecture. When we go to Memcached, we always have the ability to add to the number of nodes. Now, when we add nodes to the Memcached cluster, there are a lot of internal operations, like remapping key spaces. So there are quite a lot of challenging things that need to be done.
Now, all of these things are done by ElastiCache by itself. So, as system administrators or solutions architects, we don't have to worry about all of the internals. All we have to do is specify the number of nodes and click, and ElastiCache will take care of the rest. Last but not least, you have a nice little CloudWatch integration associated with the cluster. Within CloudWatch, you have some nice metrics related to the set commands, and you can also see metrics related to get hits and get misses. So this is something we talked about earlier. Instead of doing a stats run and looking into those metrics manually, you can see them through CloudWatch, and you can even configure it with SNS to send notifications. So that's all there is to Memcached. I hope you understand the basics of ElastiCache and how you can configure Memcached based on the AWS ElastiCache service. So this is it for this lecture. In the upcoming lecture, we'll go ahead and understand more about the Redis engine. Thanks for watching. Bye.
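For completeness, the console clicks above can also be scripted. Below is a hedged sketch of the parameters you would pass to ElastiCache's CreateCacheCluster API through boto3. The parameter names come from the ElastiCache API; the cluster ID, subnet group name, and security group ID are made-up placeholders, not values from this lecture's account:

```python
def memcached_cluster_params():
    """Build the kwargs for boto3's elasticache.create_cache_cluster().
    All identifier values below are illustrative placeholders."""
    return {
        "CacheClusterId": "kplabs-memcached",
        "Engine": "memcached",
        "CacheNodeType": "cache.t2.micro",
        "NumCacheNodes": 1,                     # can be scaled up to 20 later
        "Port": 11211,                          # default Memcached port
        "CacheSubnetGroupName": "kplabs-subnet",
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    }

params = memcached_cluster_params()
# In a real script, with credentials configured, you would then call:
#   boto3.client("elasticache").create_cache_cluster(**params)
```

Keeping the parameters in a small builder function like this makes it easy to review the node count and port before the cluster is actually created.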
65. ElastiCache – Deploying Redis Cluster Engine
Hey everyone, and welcome back to the Knowledge Pool video series. Now, in the earlier lecture, we were discussing how we can deploy a Memcached-based instance with the help of ElastiCache. So as you now know, it is quite simple to do things via ElastiCache because it is a managed service. To continue our journey, we'll talk about Redis and how we can create a Redis instance, as well as the various configuration parameters that are available. So I'm in the ElastiCache dashboard under the Redis tab. Now let's go and create. I'll select the cluster engine as Redis. Within this, you will find an interesting configuration, which is the number of replicas. So basically, this is one of the advantages: we can have multiple replicas of our Redis cluster. So if our primary node fails, we already have a replica, which we can make primary.
So this is one very important configuration parameter. The second very important configuration parameter is Multi-AZ with auto failover. This is quite important. As you can see, if I set the number of replicas to zero, the Multi-AZ option disappears. For Multi-AZ with auto failover to work, you must have at least one replica. Now, I hope you already know exactly what multi-AZ is. There are two availability zones, and you have one primary server and one replica server whose data is replicated from the primary. In the event that the primary server fails, ElastiCache will take that replica server and make it the primary. That is why it is referred to as Multi-AZ with auto failover. Both of these servers reside in different availability zones, so if one availability zone goes down, you still have the replica server, which can be made primary. Now, this is automated by ElastiCache. So if the primary node goes down, ElastiCache will automatically make the replica node the new primary.
So, since this is a managed service, we don't really have to take on the headache of doing all those things. Now, these two are very, very important things to remember as far as Redis is concerned. When you talk about Memcached, it doesn't really have all these features. The next thing I wanted to do was enable automated backups. This is quite important. ElastiCache will automatically take backups of your Redis cluster, so you don't really have to worry about things going down. Now, one very interesting configuration parameter here, you see, is "import data to cluster," which allows us to import an RDB file. An RDB file is essentially a snapshot of your Redis cluster. So when you are running a Redis cluster, you can take a snapshot and upload that snapshot to S3. Say I'm currently in the Singapore region, and I want to create a new Redis cluster in, let's assume, the Oregon region with the data of my present Redis, which is in Singapore. What I can do is take a snapshot of my Redis and upload that snapshot to S3, and while creating the Redis cluster in the Oregon region, I can paste my S3 path. It will take that RDB file and create the new Redis cluster from the snapshot that is present. So this is one of the important things that you need to remember. So let's do one thing.
Let's go ahead and create our Redis cluster. I'll name it kplabs-redis, and I'll say this is a demo for the KPLabs course. Parameter groups are something that we'll be discussing while the cluster is getting created, because that takes quite a good amount of time. Now, the node type. Since I am not one of the richest men in India, I will not select the big cluster type. Instead, I'll select the instance family as t2, and I'll put it as t2.micro. I'll click on Save. For the number of replicas, I'll select none for now. For the subnet groups and security groups, I'll just leave them at the default. This is something we had already done during the Memcached lecture. And let's click on "Create." So in a very similar way to how the Memcached cluster was created, the Redis cluster is getting created. Now, as we discussed, the parameter group is something that we skipped last time. What a parameter group allows us to do is modify certain configuration-related parameters. Since we do not get SSH access to the Redis, the Memcached, or even the RDS instance, Amazon allows us to modify the configuration-related details for the application with the help of parameter groups. So if I go to the parameter groups, there are certain parameter groups that are already created. Let me go to the default Redis one. And now, as you can see, there are various configuration parameters present over here, and these are something that we can modify according to our environment. The same is true for Memcached.
There are certain configuration parameters that are present, and we can modify them according to our environment. Many times, depending on your environment, certain configuration changes are required, and this is something that Amazon allows us to do with the help of parameter groups. So this is one important thing to remember. When the Redis cluster is created, you will see a URL similar to what we saw in the Memcached-based instance. So let's hold on for a while until this Redis cluster gets created. Okay? It took around two minutes for the cluster status to become available. So, similar to Memcached, you get a primary endpoint, and you can connect to the Redis cluster using this primary endpoint. So this is a high-level overview of how we can create a Redis cluster based on ElastiCache. Go ahead and try this out, and I look forward to seeing you in the next lecture.
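As with Memcached, the Redis setup can be scripted as well. Here is a hedged sketch of the CreateReplicationGroup parameters that map to the options we just discussed: replicas, Multi-AZ with automatic failover, and automated backups. The parameter names follow the boto3 ElastiCache API; all identifier values are placeholders for illustration:

```python
def redis_replication_group_params():
    """Build the kwargs for boto3's elasticache.create_replication_group().
    Identifier values below are illustrative placeholders."""
    return {
        "ReplicationGroupId": "kplabs-redis",
        "ReplicationGroupDescription": "Demo for the KPLabs course",
        "Engine": "redis",
        "CacheNodeType": "cache.t2.micro",
        "NumCacheClusters": 2,             # 1 primary + 1 replica
        "AutomaticFailoverEnabled": True,  # requires at least one replica
        "MultiAZEnabled": True,            # primary and replica in different AZs
        "SnapshotRetentionLimit": 7,       # keep automated backups for 7 days
    }

params = redis_replication_group_params()
# A real script, with credentials configured, would then call:
#   boto3.client("elasticache").create_replication_group(**params)
```

Note how the dependency from the console shows up here too: AutomaticFailoverEnabled only makes sense when NumCacheClusters gives you at least one replica, just as the Multi-AZ option disappeared when we set the replica count to zero.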