Amazon AWS Solutions Architect Associate SAA-C03 Topic: AWS Fundamentals: RDS + Aurora + ElastiCache
December 14, 2022

8. ElastiCache Hands On

So let’s try creating an ElastiCache cluster. We’ll go to the ElastiCache service, and then we’ll click on “Get Started Now.” We have two options for the cluster engine: we can either choose Redis or Memcached. If we choose Redis, this is what we know: it offers multi-AZ with auto-failover and enhanced robustness, and we can even use it in cluster mode if we wanted to add even more robustness and scalability.

Because Redis has persistence, we can use it as a database, a cache, and a message broker. Whereas if you choose Memcached, it’s a high-performance distributed memory object caching system, and it is really intended to serve as a pure cache, while Redis can also be used as a database. For the sake of this exercise, we’ll go ahead and create a Redis cluster, but I invite you to explore the options for Memcached. So we’ll name it “my-first-redis” and describe it as my first Redis instance. I’ll just use the latest engine version for compatibility reasons. The port is the standard port for Redis, 6379. The parameter group is the default one, as is the node type. Because I don’t want to overpay, I’m not going to choose a big cache.r4 node; I’m going to go into the t2 family and choose a cache.t2.micro, which is within the free tier.
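To make that distinction concrete, here is a hedged redis-py sketch (the endpoint is a made-up placeholder) showing Redis used both as a plain key/value cache and as a lightweight message broker via publish/subscribe; a pure cache like Memcached would cover only the first half of this:

```python
import redis

# Hypothetical primary endpoint; the real one appears in the ElastiCache console.
r = redis.Redis(host="my-first-redis.abc123.0001.euw1.cache.amazonaws.com",
                port=6379, decode_responses=True)

# Cache usage: simple key/value set and get.
r.set("greeting", "hello from ElastiCache")
print(r.get("greeting"))                      # -> "hello from ElastiCache"

# Message-broker usage: publish/subscribe on a channel.
sub = r.pubsub(ignore_subscribe_messages=True)
sub.subscribe("orders")
r.publish("orders", "order-123 created")
for _ in range(3):                            # poll briefly for the published message
    msg = sub.get_message(timeout=1)
    if msg:
        print(msg["data"])                    # -> "order-123 created"
        break
```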

For the number of replicas, right now I don’t want any, so I’ll choose zero; otherwise, I’m going to pay more money. As you can see, if I had two replicas, there would be more options: there would be a multi-AZ with auto-failover option. Even if I set it to one, I still have that setting, but as soon as I set it to zero, you can see that I lose the multi-AZ option. So we’ll keep it at zero because we want things to be free, but remember: if there is at least one replica, you can have multi-AZ. Then we need to create a subnet group. I’ll create one and call it “my-first-subnet-group,” with the description “my first subnet group.” I’ll choose my VPC ID and select a couple of these subnets in different availability zones. I don’t have a preferred availability zone. Then I’ll scroll down.
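For reference, the same subnet group can also be created through the API instead of the console; a minimal boto3 sketch, with hypothetical subnet IDs, might look like this:

```python
import boto3

elasticache = boto3.client("elasticache")

# Hypothetical subnet IDs from the VPC selected in the console walkthrough.
elasticache.create_cache_subnet_group(
    CacheSubnetGroupName="my-first-subnet-group",
    CacheSubnetGroupDescription="Subnet group for my first Redis cluster",
    SubnetIds=["subnet-0aaa1111bbbb22222", "subnet-0ccc3333dddd44444"],
)
```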

For the security group, I can keep the default one. Do we want encryption at rest using KMS? And do we want encryption in transit? If we do select encryption in transit, then we can turn on Redis AUTH, and with Redis AUTH turned on, I can set a token to whatever I want. This token will be necessary for my applications to connect to Redis and work with it. But if I disable encryption in transit, I have no option to enable Redis AUTH. Finally, do we want to import data into the cluster? No. Do we want backups? Absolutely. So we’ll say yes, we want backups, with one day of retention. This is a Redis-only feature: we don’t get backups with Memcached. We won’t use the maintenance window for anything, so we won’t specify it. And I’ll click on “Create,” and there we go.
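The console settings we just walked through (Redis engine, a cache.t2.micro node, zero replicas, optional encryption with an AUTH token, and one day of backup retention) map onto the CreateReplicationGroup API. A hedged boto3 sketch, reusing the subnet group from above and a placeholder token, could look like this:

```python
import boto3

elasticache = boto3.client("elasticache")
elasticache.create_replication_group(
    ReplicationGroupId="my-first-redis",
    ReplicationGroupDescription="My first Redis cluster",
    Engine="redis",
    CacheNodeType="cache.t2.micro",     # free-tier-eligible node type
    NumCacheClusters=1,                 # primary only, no replicas
    Port=6379,                          # standard Redis port
    CacheSubnetGroupName="my-first-subnet-group",
    AtRestEncryptionEnabled=True,       # KMS encryption at rest
    TransitEncryptionEnabled=True,      # required if you want an AUTH token
    AuthToken="replace-with-a-long-random-token",   # placeholder value
    SnapshotRetentionLimit=1,           # one day of backups (Redis only)
)
```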

Our ElastiCache Redis cluster is being created. It’s only one node, so it’s not really a cluster, but it’s being created regardless. As for using it, I can’t really demonstrate that to you here; this is more of an application-specific concern. You need to download a Redis driver and start interacting with your Redis cache, along the lines of the redis-py sketch shown earlier. But from an ElastiCache standpoint, we’ve seen how to create a Redis cache and all of the configuration options. The cache is currently being created, but I don’t need it, so once this is done I will remember to delete it: I can click on Actions and then delete my Redis cluster. It asks whether I want to create a final backup; I’ll just say no, and I’m done. Alright, that’s it for this lecture. I will see you in the next lecture.

9. ElastiCache for Solution Architect

Okay, so there are some extra bits on ElastiCache that go into the Solutions Architect exam. The first one is around cache security. All caches in ElastiCache support SSL in-flight encryption but do not support IAM authentication. So if an exam question tries to trick you into thinking that ElastiCache supports IAM authentication, that is not true.

IAM policies on ElastiCache are only used for AWS API-level security. That includes creating a cluster, deleting a cluster, updating the configuration, and so on. Redis does have its own authentication, though, and it’s called Redis AUTH. You can set a password, or token, when you create a Redis cluster, and on top of security groups, this adds an extra layer of security to your cache. Any client that does not have this password or token will not be able to connect to your Redis cluster and will therefore be rejected. This is Redis AUTH. By default, Redis does not have any authentication, and anything or anyone can connect to your Redis cluster. That’s why it’s super important to use security groups as an extra level of security for your cache, to make sure that only the networks you’ve authorized can access your Redis cluster.
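As a client-side illustration of Redis AUTH, here is a hedged redis-py sketch: the token set at cluster creation is passed as the connection password (with in-transit encryption enabled, since the token option only appears when encryption in transit is selected), and a client with a missing or wrong token is rejected with an authentication error. The endpoint and token are placeholders.

```python
import redis
from redis.exceptions import AuthenticationError

# Hypothetical endpoint; the token is the one set when the cluster was created.
cache = redis.Redis(
    host="my-first-redis.abc123.0001.euw1.cache.amazonaws.com",
    port=6379,
    ssl=True,                                  # in-transit encryption (TLS)
    password="replace-with-the-auth-token",    # Redis AUTH token
)

try:
    cache.ping()
    print("connected with a valid token")
except AuthenticationError:
    print("rejected: missing or wrong Redis AUTH token")
```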

Memcached is a little bit less important, but still good to know: it supports something called SASL-based authentication, which is quite advanced. I haven’t seen it in the exam yet, but Redis AUTH is mentioned quite frequently. At a high level, what this means is that our EC2 clients, for example our application running on EC2, are running inside an EC2 security group, and they want to connect to our Redis cache, and therefore there’s going to be a Redis security group around it.

So we want to make sure that the Redis security group allows the EC2 security group in. In addition, our clients may use SSL encryption to encrypt data in transit. And as an extra level of security for Redis, we can enable Redis AUTH to make sure that the EC2 clients have the right password before connecting to our Redis cache. A minimal sketch of that security-group rule is shown below.
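The sketch below authorizes inbound Redis traffic (assuming the default port 6379) from the EC2 clients’ security group into the Redis security group, using boto3 and hypothetical security group IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical IDs: the first group protects the ElastiCache cluster,
# the second is attached to the application's EC2 instances.
ec2.authorize_security_group_ingress(
    GroupId="sg-0redis00000000000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 6379,
        "ToPort": 6379,
        # Allow the EC2 clients' security group rather than a CIDR range.
        "UserIdGroupPairs": [{"GroupId": "sg-0clients0000000000"}],
    }],
)
```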

Next, let’s talk about the different caching patterns for ElastiCache. There are three of them, and you don’t need to know them in depth, but it’s good to have seen them at a high level. The first is lazy loading, where all the data that is read gets cached, which means data in the cache can become stale. The second is write-through, where you add or update data in the cache whenever it is written to the database; in this case you have no stale data. The third is the session store, where you cache temporary session data; we’ll see this in the solution architecture sections, and we can use the TTL feature on top of it to expire data in your session store. Overall, caching is really hard, and that’s why I don’t make such a big deal out of all the patterns. According to a famous quote, there are only two hard things in computer science: cache invalidation and naming things. So caches are really, really hard to use well, and that’s why I don’t want to overload you with the patterns; I’m just going to give you lazy loading as an example.

So we have our application, we have ElastiCache, and we have our RDS database. Our application is going to request some data and hit the cache first. If the data already exists in the cache, it’s called a cache hit, in which case the cache returns it directly. If it doesn’t, it’s called a cache miss: the application then fetches the data from the database, reads it, and stores it in the cache for next time. This is called lazy loading (a short code sketch of this flow follows at the end of this lecture). It is just one strategy; I haven’t illustrated what write-through looks like, which is different, although similar at a high level. Just know what lazy loading, write-through, and session stores are, and that we can use a TTL, and you’re perfectly fine; that’s all there is to this lecture. I will see you in the next lecture.
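As promised, here is a hedged Python sketch of the lazy-loading (cache-aside) flow described above, using redis-py; `load_user_from_rds` is a hypothetical stand-in for whatever query your application would run against RDS, and the TTL on the cached value is the same expiry mechanism a session store would rely on. The endpoint is a placeholder.

```python
import json
import redis

# Hypothetical primary endpoint; copy the real one from the ElastiCache console.
cache = redis.Redis(host="my-first-redis.abc123.0001.euw1.cache.amazonaws.com",
                    port=6379, decode_responses=True)

def load_user_from_rds(user_id):
    # Hypothetical stand-in for a real query against the RDS database.
    return {"id": user_id, "name": "alice"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                        # cache hit: serve from ElastiCache
        return json.loads(cached)
    user = load_user_from_rds(user_id)            # cache miss: read from the database...
    cache.set(key, json.dumps(user), ex=300)      # ...then store it with a 5-minute TTL
    return user
```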
