Amazon AWS Certified Security – Specialty SCS-C02 Exam Dumps and Practice Test Questions: Set 3, Q41–60


Question 41

A security team discovers that an IAM user’s access keys have been exposed in a public GitHub repository. What immediate actions should be taken?

A) Change the IAM user’s password immediately

B) Deactivate the exposed access keys and review CloudTrail logs for unauthorized usage

C) Delete the IAM user account

D) Rotate all access keys in the AWS account

Answer: B

Explanation:

When AWS credentials are exposed, immediate action is required to prevent or limit unauthorized access to AWS resources. The response must balance security urgency with the need to investigate the extent of potential compromise and avoid unnecessarily disrupting legitimate operations. Exposed access keys represent active credentials that can be used for programmatic access to AWS services.

The first critical step is deactivating the exposed access keys immediately to prevent further unauthorized use. IAM allows deactivating access keys without deleting them, which preserves the key ID for audit purposes while preventing their use for authentication. Deactivation is faster than deletion and can be reversed if needed during investigation, though compromised keys should ultimately be deleted.

After deactivating keys, reviewing CloudTrail logs determines whether unauthorized access occurred. Security teams should examine API calls made using the exposed credentials, looking for unusual activities such as resource creation, data access, privilege escalation, or attempts to establish persistence. The investigation scope includes the time from when keys were first exposed until they were deactivated.
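
As an illustration, here is a minimal boto3 sketch of both steps; the user name and access key ID are placeholders, not values from the scenario:

```python
import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")

# Deactivate (not delete) the exposed key so the key ID survives for auditing.
iam.update_access_key(
    UserName="exposed-user",          # placeholder
    AccessKeyId="AKIAEXAMPLEKEYID",   # placeholder
    Status="Inactive",
)

# Review recent API activity made with the exposed key.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "AccessKeyId",
                       "AttributeValue": "AKIAEXAMPLEKEYID"}],
    MaxResults=50,
)
for e in events["Events"]:
    print(e["EventTime"], e["EventName"], e.get("Username"))
```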

A) Changing the IAM user’s password does not affect access keys. IAM users have separate credentials for console access (username and password) and programmatic access (access keys). Exposed access keys remain functional regardless of password changes. This action fails to address the actual security threat.

B) This is the correct answer because deactivating exposed keys immediately prevents further unauthorized use, CloudTrail log review determines if unauthorized access occurred and what actions were taken, this approach enables investigation while blocking active threats, and preserves audit information needed for incident response.

C) Immediately deleting the IAM user account is overly disruptive and may be premature. The user account may be legitimately needed, and hasty deletion could disrupt operations. Deletion also complicates investigation by removing the credential identity from CloudTrail logs before analysis is complete. Deletion should occur after investigation if appropriate.

D) Rotating all access keys account-wide is unnecessarily disruptive and does not address the immediate threat of the specific exposed keys. While rotating other keys might be warranted if investigation reveals broader compromise, it should not be the immediate response to a single key exposure event.

Question 42

An application requires temporary access to objects in an S3 bucket owned by a different AWS account. The bucket owner wants to grant access without creating IAM users. Which approach enables this access?

A) Make the S3 bucket public temporarily

B) The bucket owner creates a presigned URL for the specific objects

C) Configure cross-account IAM roles

D) Share the bucket owner’s access keys temporarily

Answer: B

Explanation:

Presigned URLs provide a mechanism for granting temporary, time-limited access to specific S3 objects without requiring recipients to have AWS credentials or IAM identities. The bucket owner generates presigned URLs using their own AWS credentials, embedding authentication information directly in the URL. Anyone with the presigned URL can perform the specified operation until the URL expires.

When generating presigned URLs, the bucket owner specifies the object key, the HTTP method (GET for downloads, PUT for uploads), and the expiration time. The presigned URL includes a signature computed using the bucket owner’s AWS credentials and the specified parameters. S3 validates the signature when the URL is used, confirming that the URL was created by a principal with permission to access the object.

Presigned URLs inherit the permissions of the credentials used to generate them. If the bucket owner has permission to access the object, presigned URLs created with those credentials grant the same access until expiration. This eliminates the need for IAM user creation, cross-account roles, or credential sharing while providing secure, temporary access with precise control over accessible objects and allowed operations.
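
A short boto3 sketch of URL generation by the bucket owner; the bucket and key names are placeholders:

```python
import boto3

# The bucket owner generates a time-limited URL for one object; anyone holding
# the URL can GET the object until it expires.
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-owner-bucket", "Key": "exports/report.csv"},
    ExpiresIn=3600,  # seconds; the URL stops working after one hour
)
print(url)  # hand this URL to the external application
```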

A) Making buckets public temporarily exposes all bucket contents to anyone on the internet, not just the intended application. This creates significant security risks including data breaches and unauthorized access. Public buckets are frequently discovered and exploited by attackers. This approach is unacceptable for secure temporary access.

B) This is the correct answer because presigned URLs provide temporary access to specific objects without requiring IAM users, include authentication information embedded in the URL, automatically expire after the specified time, and can be created for individual objects with specific operations allowed.

C) Cross-account IAM roles are appropriate for ongoing programmatic access but require the application account to have IAM identities that can assume the role. This is more complex than necessary for simple temporary access and does not avoid IAM user creation as the question requests.

D) Sharing AWS access keys is a severe security anti-pattern that should never be done under any circumstances. Access keys are long-term credentials that provide broad access beyond just the specific objects needed. Shared keys cannot be easily revoked without disrupting the owner’s access, and accountability is lost when multiple parties use the same credentials.

Question 43

A company needs to detect and prevent unauthorized access attempts to AWS Management Console. The solution must identify suspicious login patterns and automatically block repeated failed login attempts. Which AWS service combination provides this capability?

A) AWS CloudTrail and AWS Lambda

B) Amazon GuardDuty and AWS WAF

C) Amazon Cognito and AWS Shield

D) AWS IAM and Amazon CloudWatch

Answer: B

Explanation:

Protecting AWS Management Console access requires detecting suspicious authentication patterns and implementing automated response mechanisms. Failed login attempts, access from unusual locations, or compromised credentials can indicate attacks against AWS accounts. Detecting these patterns and blocking malicious actors reduces the risk of unauthorized account access.

Amazon GuardDuty continuously monitors AWS CloudTrail management events including console sign-in activities. GuardDuty uses machine learning and threat intelligence to identify suspicious authentication patterns such as failed login attempts from unusual locations, logins following credential compromise, or access patterns inconsistent with normal behavior. When GuardDuty detects suspicious activity, it generates detailed findings with severity ratings.

AWS WAF can be integrated with Amazon CloudFront distributions or Application Load Balancers to filter HTTP/HTTPS requests. While WAF cannot directly protect the AWS Management Console, organizations can use GuardDuty findings to update WAF rules that block IP addresses associated with suspicious activity. Additionally, GuardDuty findings can trigger automated responses through EventBridge and Lambda that modify security groups, update IP blacklists, or disable compromised credentials.
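
As a sketch of the detection-to-response wiring, the following creates an EventBridge rule that routes GuardDuty findings to a response Lambda; the severity threshold and function ARN are illustrative assumptions:

```python
import boto3
import json

events = boto3.client("events")

# Route GuardDuty findings at or above a chosen severity to a response Lambda.
events.put_rule(
    Name="guardduty-console-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 4]}]},
    }),
    State="ENABLED",
)
events.put_targets(
    Rule="guardduty-console-findings",
    Targets=[{"Id": "respond",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:block-ip"}],
)
# The function also needs a lambda:InvokeFunction permission granted to
# events.amazonaws.com (lambda add_permission) before the rule can invoke it.
```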

A) CloudTrail logs console sign-in events but does not automatically analyze patterns or detect suspicious activity. Lambda could be used to build custom detection logic, but this requires significant development effort compared to GuardDuty’s built-in threat detection. This combination lacks native threat intelligence and pattern recognition.

B) This is the correct answer because GuardDuty automatically detects suspicious console login patterns including failed attempts and compromised credentials, integrates with threat intelligence feeds, generates actionable findings with severity ratings, and can trigger automated blocking responses when integrated with other AWS services.

C) Amazon Cognito provides user authentication and authorization for custom applications but does not protect AWS Management Console access. AWS Shield protects against DDoS attacks but does not detect or prevent unauthorized authentication attempts. This combination addresses different security concerns than console access protection.

D) IAM manages identities and permissions but does not detect suspicious login patterns or implement automated blocking. CloudWatch monitors and logs activity but requires custom metric filters and alarms to detect patterns. This combination lacks the threat intelligence and behavioral analysis that GuardDuty provides natively.

Question 44

An organization must ensure that Amazon EBS volumes containing sensitive data cannot be shared with external AWS accounts via snapshots. Which control implements this requirement?

A) Enable EBS encryption by default

B) Use IAM policies to deny ModifySnapshotAttribute actions with external account conditions

C) Implement S3 bucket policies to restrict snapshot access

D) Enable AWS Organizations SCPs to prevent snapshot sharing

Answer: B

Explanation:

EBS snapshots can be shared with other AWS accounts by modifying snapshot permissions using the ModifySnapshotAttribute API call. When snapshots contain sensitive data, preventing unauthorized sharing protects against data leakage to external entities. IAM policies provide granular control over API actions including the ability to restrict snapshot sharing based on target account conditions.

IAM policies can include condition keys that evaluate whether snapshot sharing targets are external accounts. For ModifySnapshotAttribute, the ec2:Add/userId condition key carries the account IDs being granted createVolumePermission, and ec2:Add/group carries the special "all" group used for public sharing. Policies that deny ModifySnapshotAttribute when the added account IDs fall outside an approved internal list prevent users from creating public snapshots or sharing with unauthorized accounts.

The policy would explicitly deny the ModifySnapshotAttribute action when the request includes sharing permissions for accounts outside the organization. This preventive control blocks snapshot sharing at the API level before the operation can complete. Even users with broad EC2 permissions cannot circumvent this restriction because the IAM policy denial takes precedence over permission grants.
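
A minimal sketch of such a deny statement, assuming the organization's approved internal accounts are the placeholder IDs shown; the ForAnyValue set operator makes the deny fire if any account being added is outside the approved list:

```python
import json

# Deny snapshot sharing unless every account being added is an approved
# internal account. ec2:Add/userId carries the account IDs that the
# ModifySnapshotAttribute call is adding. Account IDs are placeholders.
deny_external_sharing = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySnapshotSharingOutsideOrg",
        "Effect": "Deny",
        "Action": "ec2:ModifySnapshotAttribute",
        "Resource": "*",
        "Condition": {
            "ForAnyValue:StringNotEquals": {
                "ec2:Add/userId": ["111122223333", "444455556666"]
            }
        },
    }],
}
print(json.dumps(deny_external_sharing, indent=2))
```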

A) EBS encryption protects data at rest but does not prevent snapshot sharing. Encrypted snapshots can still be shared with external accounts, and if the recipient has access to the encryption key or if the snapshot is re-encrypted with a key they control, they can access the data. Encryption alone does not prevent sharing.

B) This is the correct answer because IAM policies can explicitly deny ModifySnapshotAttribute actions, conditions can check whether sharing targets are external accounts, this control prevents snapshot sharing at the API level, and the policy applies regardless of other permissions the user might have.

C) EBS snapshots are stored in an AWS-managed storage system, not in customer S3 buckets. S3 bucket policies cannot control EBS snapshot sharing permissions. This option reflects a misunderstanding of how EBS snapshots are stored and managed.

D) While SCPs could deny ModifySnapshotAttribute actions organization-wide, this would prevent all snapshot sharing including legitimate internal sharing. SCPs provide coarse-grained controls suitable for broad restrictions but lack the granularity to distinguish between internal and external sharing without complex condition logic.

Question 45

A security engineer needs to implement defense-in-depth for an application running on EC2 instances behind an Application Load Balancer. Which combination provides the MOST comprehensive protection?

A) AWS WAF on ALB, security groups on instances, and NACLs on subnets

B) AWS Shield only

C) Security groups on instances and ALB

D) AWS WAF and AWS Shield Advanced

Answer: A

Explanation:

Defense-in-depth implements multiple layers of security controls so that if one layer fails or is bypassed, other layers continue to provide protection. For web applications, this means deploying security controls at different levels including the application layer, network layer, and instance layer. Each layer addresses different types of threats and attack vectors.

AWS WAF provides application layer protection by inspecting HTTP/HTTPS requests and blocking attacks such as SQL injection, cross-site scripting, and malicious bots. WAF rules analyze request components including headers, cookies, query strings, and request bodies. Deploying WAF on the Application Load Balancer protects the application before traffic reaches backend instances.

Security groups act as virtual firewalls at the instance level, controlling inbound and outbound traffic based on IP addresses, protocols, and ports. NACLs provide stateless filtering at the subnet level. This three-layer approach ensures that attacks must bypass application-layer filtering (WAF), subnet-level filtering (NACLs), and instance-level filtering (security groups) to reach application workloads. Each layer provides protection against different attack types and compensates for potential misconfigurations in other layers.
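
A brief boto3 sketch of wiring two of the three layers; all ARNs and security group IDs are placeholders:

```python
import boto3

wafv2 = boto3.client("wafv2")
ec2 = boto3.client("ec2")

# Layer 1: attach an existing web ACL to the ALB.
wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/EXAMPLE",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/EXAMPLE",
)

# Layer 2: instance security group admits HTTP only from the ALB's security group.
ec2.authorize_security_group_ingress(
    GroupId="sg-instances",
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
        "UserIdGroupPairs": [{"GroupId": "sg-alb"}],
    }],
)
# Layer 3 (NACL rules) would be managed separately with
# ec2.create_network_acl_entry on the subnets.
```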

A) This is the correct answer because WAF protects against application-layer attacks, security groups provide instance-level network filtering, NACLs add subnet-level stateless filtering, and the combination implements true defense-in-depth with multiple complementary security layers.

B) AWS Shield provides DDoS protection but does not address application-layer vulnerabilities, network segmentation, or instance-level access control. Shield alone provides protection against only one category of attacks and does not implement defense-in-depth for comprehensive application security.

C) Security groups on instances and ALB provide network-level access control but do not protect against application-layer attacks like SQL injection or XSS. This combination lacks application-layer security controls and does not implement adequate defense-in-depth.

D) WAF and Shield Advanced provide excellent protection against web exploits and DDoS attacks, but this combination operates primarily at the application and network edge layers. It does not include instance-level or subnet-level network segmentation controls that are essential components of comprehensive defense-in-depth architecture.

Question 46

An organization requires that all AWS API calls be made through VPC endpoints and never traverse the internet. Which IAM policy condition enforces this requirement?

A) aws:SourceIp with VPC CIDR ranges

B) aws:SourceVpc or aws:SourceVpce

C) aws:SecureTransport set to true

D) aws:RequestedRegion with specific regions

Answer: B

Explanation:

Organizations with strict security requirements may mandate that all API calls to AWS services occur through private network connections rather than over the internet. VPC endpoints enable private connectivity between VPCs and AWS services without requiring internet gateways, NAT devices, or VPN connections. IAM policies can enforce that API calls must originate from VPC endpoints.

The aws:SourceVpc condition key restricts access to requests that originate from a specific VPC, while aws:SourceVpce restricts access to requests made through specific VPC endpoints. When these conditions are applied to IAM policies, they ensure that API calls can only succeed when made through designated VPC endpoints. Requests made through internet gateways or from outside the VPC are denied.

These conditions are particularly useful for enforcing that administrators can only perform sensitive operations when connected through corporate VPNs that terminate in VPCs with appropriate VPC endpoints. They prevent API calls from arbitrary internet locations or personal devices that do not route through approved network paths. Combined with other controls, this creates strong network-based access restrictions.
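
A minimal sketch of a bucket-policy-style deny statement enforcing this, assuming a single approved endpoint; the endpoint ID and bucket name are placeholders:

```python
import json

# Deny any request that did not arrive through the approved VPC endpoint.
# aws:SourceVpce is only present on requests that traverse a VPC endpoint,
# so requests over the internet never match and are denied.
vpce_only = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessThroughApprovedEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": ["arn:aws:s3:::example-bucket",
                     "arn:aws:s3:::example-bucket/*"],
        "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}},
    }],
}
print(json.dumps(vpce_only, indent=2))
```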

A) The aws:SourceIp condition checks the source IP address of requests but cannot definitively determine whether requests traversed the internet or VPC endpoints. Requests through VPC endpoints and requests through internet gateways from the same IP ranges would both match IP-based conditions. This condition is insufficient for enforcing VPC endpoint usage.

B) This is the correct answer because aws:SourceVpc and aws:SourceVpce conditions explicitly check whether requests originate from specified VPCs or VPC endpoints, enforce that API calls must use private network paths, and deny requests that would traverse the internet regardless of source IP.

C) The aws:SecureTransport condition checks whether requests use HTTPS/SSL but does not determine network path. Requests over the internet using HTTPS and requests through VPC endpoints both use secure transport. This condition ensures encryption but not private network routing.

D) The aws:RequestedRegion condition restricts which AWS regions can be accessed but does not control whether requests use VPC endpoints or internet connectivity. Region restrictions address data residency and compliance but not network routing requirements.

Question 47

A company needs to grant a third-party vendor temporary access to specific S3 buckets for a data migration project lasting three months. What is the MOST secure approach?

A) Create an IAM user for the vendor with access keys

B) Share the AWS account root credentials temporarily

C) Create a cross-account IAM role with temporary credentials and specific S3 permissions

D) Make the S3 buckets public for three months

Answer: C

Explanation:

Granting third-party access to AWS resources requires balancing security with operational needs. The solution must provide necessary access for the vendor to complete their work while minimizing security risks, ensuring accountability, and enabling easy revocation when access is no longer needed. Cross-account IAM roles provide the most secure and manageable approach for third-party access scenarios.

Cross-account roles allow external AWS accounts to assume roles in your account without sharing permanent credentials. The vendor uses their own AWS account and credentials to assume the role, receiving temporary security credentials valid for a limited session duration. These temporary credentials automatically expire and must be renewed, reducing the risk window if credentials are compromised.

The role’s permissions can be precisely scoped to only the S3 buckets and operations needed for the migration project. Trust policies on the role specify which external accounts can assume it, and IAM conditions can add additional restrictions such as requiring MFA or limiting access to specific IP ranges. After the project completes, the role can be deleted or the trust relationship revoked, immediately ending vendor access.
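
A sketch of creating such a role in the bucket owner's account; the vendor account ID, external ID, and bucket names are placeholders, and the external ID is a common extra safeguard for third-party access:

```python
import boto3
import json

iam = boto3.client("iam")

# Trust policy: only the vendor's account may assume the role, and only when
# it presents the agreed external ID.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999988887777:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "migration-2024"}},
    }],
}
iam.create_role(RoleName="VendorMigrationRole",
                AssumeRolePolicyDocument=json.dumps(trust),
                MaxSessionDuration=3600)  # one-hour temporary credentials

# Permissions scoped to the migration buckets only.
perms = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::migration-bucket",
                     "arn:aws:s3:::migration-bucket/*"],
    }],
}
iam.put_role_policy(RoleName="VendorMigrationRole",
                    PolicyName="MigrationS3Access",
                    PolicyDocument=json.dumps(perms))
```

Deleting the role, or removing the vendor account from the trust policy, ends access immediately when the project concludes.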

A) Creating IAM users with access keys for external parties requires managing long-term credentials, distributing access keys securely, and tracking when access is no longer needed. Access keys represent permanent credentials that require manual revocation and rotation. This approach creates credential management overhead and security risks.

B) Sharing root account credentials is an extreme security violation that should never occur under any circumstances. Root credentials provide unlimited access to all account resources and billing, cannot be restricted through policies, and compromise of root credentials requires extensive remediation. This option is completely unacceptable.

C) This is the correct answer because cross-account roles provide temporary credentials that automatically expire, enable precise permission scoping to only necessary S3 resources, maintain clear audit trails of vendor actions, and allow immediate access revocation by deleting the role or modifying trust policies.

D) Making S3 buckets public exposes data to anyone on the internet, not just the intended vendor. Public buckets are frequently targeted by automated scanning and data exfiltration attempts. This approach creates unacceptable security risks and fails to provide the controlled, auditable access required for third-party vendor scenarios.

Question 48

A security team wants to implement automated incident response that isolates potentially compromised EC2 instances by removing them from security groups and creating forensic snapshots. Which AWS service combination enables this?

A) Amazon GuardDuty with EventBridge and Lambda

B) AWS Config with Systems Manager

C) AWS CloudTrail with SNS

D) Amazon Inspector with CloudWatch

Answer: A

Explanation:

Automated incident response reduces the time between threat detection and containment, limiting the potential damage from security incidents. For compromised EC2 instances, immediate isolation prevents attackers from using the instance to move laterally, exfiltrate data, or conduct further attacks. Automated responses must detect threats, initiate containment actions, and preserve evidence for investigation.

Amazon GuardDuty continuously monitors for malicious activity and generates findings when threats are detected, including indicators that EC2 instances may be compromised such as cryptocurrency mining, communication with known malicious IPs, or unusual API activity. GuardDuty findings are delivered to EventBridge in near real-time, enabling event-driven automated responses.

EventBridge rules can match specific GuardDuty finding types and trigger Lambda functions that perform incident response actions. The Lambda function can modify instance security groups to block network access, create EBS snapshots for forensic analysis, add tags marking the instance as compromised, send notifications to security teams, and document actions taken. This automation executes consistently and rapidly, often within seconds of detection.
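
A condensed sketch of such a response function, assuming the EventBridge event carries a GuardDuty EC2 finding and that a pre-created quarantine security group exists (its ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
QUARANTINE_SG = "sg-quarantine"  # placeholder: SG with no inbound/outbound rules

def handler(event, context):
    """Invoked by EventBridge with a GuardDuty finding; isolates the instance."""
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]

    # 1. Swap all security groups for the quarantine group to cut network access.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])

    # 2. Snapshot every attached volume for forensic analysis.
    instance = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]
    for mapping in instance.get("BlockDeviceMappings", []):
        ec2.create_snapshot(VolumeId=mapping["Ebs"]["VolumeId"],
                            Description=f"forensic-{instance_id}")

    # 3. Tag the instance so responders can find it.
    ec2.create_tags(Resources=[instance_id],
                    Tags=[{"Key": "SecurityStatus", "Value": "Quarantined"}])
```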

A) This is the correct answer because GuardDuty detects compromised instances and suspicious activity, EventBridge provides real-time event routing from GuardDuty findings to response functions, and Lambda executes automated response actions including instance isolation and forensic snapshot creation.

B) AWS Config monitors resource configuration compliance and can trigger remediation for non-compliant configurations, but Config is not designed for threat detection or incident response. Config detects configuration drift but not active security threats like compromised instances. Systems Manager can execute actions but lacks the threat detection component.

C) CloudTrail logs API calls and can send notifications via SNS, but CloudTrail alone does not detect threats or compromised instances. CloudTrail provides audit logs that must be analyzed to identify incidents, but it lacks the behavioral analysis and threat detection capabilities that GuardDuty provides. SNS only sends notifications without executing response actions.

D) Amazon Inspector assesses instances for vulnerabilities and exposures but does not detect runtime threats or compromised instances. Inspector performs scheduled or on-demand assessments rather than continuous threat monitoring. CloudWatch provides monitoring and alerting but does not include threat detection or automated response execution.

Question 49

An application stores sensitive customer data in DynamoDB. The security team requires that all data be encrypted at rest using keys that can be immediately disabled in case of a security incident. Which encryption option meets this requirement?

A) DynamoDB default encryption with AWS owned keys

B) DynamoDB encryption with AWS managed keys

C) DynamoDB encryption with customer managed KMS keys

D) Application-level encryption before storing in DynamoDB

Answer: C

Explanation:

The ability to immediately disable encryption keys in response to security incidents provides an emergency control that renders encrypted data inaccessible. This capability is valuable when investigating potential data breaches or responding to compromised applications. Customer managed KMS keys provide the control needed to disable keys while AWS-managed encryption options do not offer this level of immediate control.

Customer managed KMS keys in AWS KMS can be disabled through the KMS console or API. Disabling a key immediately prevents its use for any encryption or decryption operations. When a DynamoDB table is encrypted with a customer managed key that is subsequently disabled, attempts to read encrypted data fail because DynamoDB cannot decrypt the data. This effectively makes the data inaccessible until the key is re-enabled.

Key disabling provides a rapid response mechanism during security incidents. If an application vulnerability is discovered that might expose database data, disabling the encryption key prevents data access while the vulnerability is investigated and remediated. After confirming that the security issue is resolved, the key can be re-enabled to restore normal operations. This capability provides an additional layer of control beyond traditional access controls.
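
The containment action itself is a single API call; the key ID below is a placeholder:

```python
import boto3

kms = boto3.client("kms")
KEY_ID = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder customer managed key

# Emergency containment: all encrypt/decrypt calls with this key now fail,
# so DynamoDB can no longer serve the encrypted table data.
kms.disable_key(KeyId=KEY_ID)

# After the incident is resolved, restore normal operations.
kms.enable_key(KeyId=KEY_ID)
```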

A) AWS owned keys are managed entirely by AWS and shared across multiple customers. Customers have no control over these keys and cannot disable them. This option provides encryption but lacks the control and incident response capabilities required by the security team.

B) AWS managed keys are created and managed by AWS for use with specific services. While these keys provide encryption, customers cannot disable AWS managed keys. AWS retains control over key lifecycle operations, preventing immediate key disabling as an incident response action.

C) This is the correct answer because customer managed KMS keys can be immediately disabled through console or API, disabling keys prevents all decryption operations making encrypted data inaccessible, keys can be re-enabled after incident resolution, and customers have full control over key policies and lifecycle.

D) Application-level encryption provides defense-in-depth but requires the application to manage encryption keys. While applications could implement key disabling logic, this adds significant complexity compared to using DynamoDB encryption with KMS keys. Application-level encryption also requires code changes and does not provide the centralized key management that KMS offers.

Question 50

A company must maintain detailed audit logs showing who accessed specific S3 objects, when access occurred, and from which IP addresses. Which S3 feature provides this information?

A) S3 Inventory reports

B) S3 Server Access Logging

C) S3 Object Lock

D) S3 Storage Class Analysis

Answer: B

Explanation:

Comprehensive audit logging of S3 object access is essential for security monitoring, compliance requirements, and incident investigation. Organizations need detailed records showing who accessed objects, when access occurred, what operations were performed, and whether operations succeeded or failed. S3 Server Access Logging provides detailed logs of all requests made to S3 buckets.

S3 Server Access Logging captures detailed information about every request made to objects in a bucket, including the requester’s IP address, requester’s IAM identity or anonymous status, timestamp, request type (GET, PUT, DELETE, etc.), requested object key, HTTP status code, error codes, and bytes transferred. These logs are delivered to a separate S3 bucket where they can be analyzed using tools like Athena, stored long-term, or processed by security information and event management systems.

Access logs are delivered on a best-effort basis with typical delivery times of a few hours. For real-time access monitoring, organizations can use CloudTrail data events for S3, which provide near real-time delivery to CloudWatch Logs or EventBridge. However, for comprehensive audit logging with detailed request information, S3 Server Access Logging provides the most complete data set including information not captured by CloudTrail.
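
Enabling the feature is a single configuration call; bucket names are placeholders, and the log bucket must already permit the S3 logging service to write to it:

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for the audited bucket into a dedicated log bucket.
s3.put_bucket_logging(
    Bucket="sensitive-data-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "central-access-logs",
            "TargetPrefix": "s3/sensitive-data-bucket/",
        }
    },
)
```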

A) S3 Inventory generates lists of objects and their metadata for storage management purposes. Inventory reports contain information about objects themselves such as size, storage class, and encryption status, but do not log access events or requests. Inventory is designed for storage analytics, not access auditing.

B) This is the correct answer because S3 Server Access Logging records detailed information about every request including requester identity and IP address, timestamps of access, operations performed, and request outcomes, providing comprehensive audit logs for compliance and security analysis.

C) S3 Object Lock prevents object deletion or modification for compliance and retention purposes. Object Lock controls object mutability but does not log access events or maintain audit trails of who accessed objects. Object Lock addresses data retention, not access auditing.

D) S3 Storage Class Analysis analyzes access patterns to recommend optimal storage classes for cost optimization. Analysis focuses on identifying infrequently accessed objects for lifecycle transitions but does not provide detailed access logs or audit information about specific requests.

Question 51

An organization requires that all EC2 instances automatically receive security patches within 24 hours of release. Which AWS service combination implements this requirement?

A) AWS Systems Manager Patch Manager with maintenance windows

B) Amazon Inspector with automated remediation

C) AWS Config with custom Lambda functions

D) EC2 Image Builder with automated pipelines

Answer: A

Explanation:

Timely application of security patches is critical for protecting systems against known vulnerabilities. Unpatched systems are frequent targets of attacks that exploit publicly disclosed vulnerabilities. Automated patch management ensures consistent, timely patching across all instances without relying on manual processes that are error-prone and difficult to scale.

AWS Systems Manager Patch Manager automates the process of patching managed instances with security updates and other types of updates. Patch Manager uses patch baselines that define which patches should be installed, including rules based on severity, classification, and release date. Organizations can configure patch baselines to automatically approve security patches for installation within specified timeframes such as 24 hours after release.

Maintenance windows define when patching operations occur, allowing organizations to schedule patching during approved change windows to minimize service disruption. Patch Manager can be configured to scan instances for missing patches and automatically install approved patches during maintenance windows. The service provides comprehensive reporting showing patch compliance status across all managed instances, enabling security teams to verify that patching requirements are being met.
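
As a sketch, a patch baseline that auto-approves security patches immediately on release (ApproveAfterDays=0) would satisfy the 24-hour requirement; the name and filter values are illustrative choices for Amazon Linux 2:

```python
import boto3

ssm = boto3.client("ssm")

# Auto-approve critical/important security patches as soon as they are
# released, so instances patched in the next maintenance window stay
# within the 24-hour requirement.
ssm.create_patch_baseline(
    Name="security-within-24h",
    OperatingSystem="AMAZON_LINUX_2",
    ApprovalRules={
        "PatchRules": [{
            "PatchFilterGroup": {"PatchFilters": [
                {"Key": "CLASSIFICATION", "Values": ["Security"]},
                {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
            ]},
            "ApproveAfterDays": 0,
            "ComplianceLevel": "CRITICAL",
        }]
    },
)
```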

A) This is the correct answer because Patch Manager automates security patch deployment, patch baselines can be configured to auto-approve patches within 24 hours, maintenance windows control when patching occurs, and compliance reporting verifies that instances receive required patches.

B) Amazon Inspector assesses instances for vulnerabilities and missing patches but does not install patches. Inspector provides findings identifying missing patches but requires separate tools like Patch Manager to perform actual remediation. Inspector is a detection tool rather than an automated patching solution.

C) AWS Config detects non-compliant configurations and can trigger remediation through Lambda functions, but this approach requires custom development to implement patching logic. Config with Lambda is unnecessarily complex compared to using Patch Manager, which is purpose-built for automated patch management.

D) EC2 Image Builder creates and maintains AMIs including patched images, but Image Builder addresses AMI creation rather than patching running instances. Organizations would need to launch new instances from updated AMIs and terminate old instances, which is disruptive compared to in-place patching with Patch Manager.

Question 52

A security team needs to ensure that Amazon RDS databases are not publicly accessible from the internet. Which combination of controls implements this requirement?

A) Place RDS instances in private subnets and configure security groups to allow only VPC traffic

B) Use NACLs to block all inbound traffic

C) Enable RDS encryption at rest

D) Configure RDS parameter groups to disable public access

Answer: A

Explanation:

Preventing public internet access to databases is a fundamental security practice that reduces attack surface and protects against unauthorized access attempts. RDS instances should be deployed in network architectures that enforce network-level isolation, ensuring that databases can only be accessed from authorized sources within trusted network boundaries.

Private subnets in VPCs do not have routes to internet gateways, preventing direct inbound connections from the internet. Placing RDS instances in private subnets ensures they cannot be directly reached from outside the VPC. The RDS instance receives a private IP address from the subnet’s CIDR range and can only be accessed through the VPC network or connected networks like VPNs or Direct Connect.

Security groups act as virtual firewalls controlling inbound and outbound traffic to RDS instances. Configuring security groups to allow inbound traffic only from specific security groups associated with application servers ensures that only authorized application instances can connect to databases. The combination of private subnet placement and restrictive security groups creates multiple layers preventing public internet access.
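
A sketch of launching an instance with both controls, assuming the private DB subnet group and the application-tier security group already exist (all identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Private subnet group (no IGW route), no public IP, and a security group
# that admits traffic solely from the application tier.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,          # credentials kept in Secrets Manager
    DBSubnetGroupName="private-subnets",    # subnets with no internet gateway route
    VpcSecurityGroupIds=["sg-app-tier-only"],
    PubliclyAccessible=False,
)
```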

A) This is the correct answer because private subnets lack internet gateway routes preventing direct public access, security groups restrict inbound connections to only authorized sources within the VPC, this combination implements defense-in-depth with network and firewall layers, and the configuration prevents public internet exposure.

B) Using NACLs to block all inbound traffic would prevent any access including legitimate application connections. NACLs alone are insufficient because they operate at the subnet level and provide stateless filtering. This approach would block necessary database access and is operationally infeasible.

C) RDS encryption at rest protects data stored on disk but does not control network accessibility. Encrypted databases can still be publicly accessible if deployed in public subnets with permissive security groups. Encryption addresses data confidentiality but not network exposure.

D) RDS parameter groups configure database engine settings but do not control network-level accessibility. There is no parameter group setting to disable public access. Network accessibility is controlled through VPC networking, subnet configuration, and security groups, not database parameters.

Question 53

An application uses AWS Secrets Manager to store database credentials. The security team requires that application code never handles credentials in plaintext. How should the application retrieve and use credentials?

A) Retrieve credentials from Secrets Manager and immediately hash them before use

B) Use Secrets Manager rotation Lambda functions to inject credentials directly into application environment variables

C) Retrieve credentials from Secrets Manager and parse the JSON response to extract username and password for database connections

D) Store credentials in code after retrieving them once from Secrets Manager

Answer: C

Explanation:

While the question asks how applications should handle credentials retrieved from Secrets Manager, the reality is that database connections require credentials in plaintext form to authenticate. The security goal is minimizing credential exposure by retrieving them securely, using them only in memory, never logging them, and not persisting them to disk or configuration files.

When applications retrieve secrets from Secrets Manager using the GetSecretValue API, Secrets Manager returns the secret value in JSON format. The application parses this JSON response to extract credential components like username and password, which are then used to establish database connections. These credentials exist briefly in application memory during connection establishment but are never written to disk or logged.

Best practices include retrieving credentials only when needed, storing them in memory for the minimum time necessary, clearing credential variables after use, and refreshing credentials periodically to use rotated values. Secrets Manager encrypts secrets at rest and in transit, ensuring that credentials are protected throughout retrieval. The application code should also implement proper error handling to prevent credential leakage through exception messages or logs.
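
A minimal sketch of this retrieval-and-parse pattern; the secret name and JSON field names are placeholders, and the database driver call is illustrative only:

```python
import json
import boto3

def get_db_credentials(secret_id="prod/orders/db"):  # placeholder secret name
    """Fetch and parse the secret; credentials exist only in memory."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])  # e.g. {"username": ..., "password": ...}
    return secret["username"], secret["password"]

username, password = get_db_credentials()
# Pass the values straight into the DB driver and let them go out of scope;
# never log or persist them. The connect call below is illustrative:
# conn = psycopg2.connect(host="db.internal", user=username, password=password)
```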

A) Hashing credentials before use makes them unusable for database authentication. Database authentication requires credentials in plaintext form to establish connections. Hashing is appropriate for storing credentials for verification purposes but not for using credentials to authenticate to external systems.

B) Secrets Manager rotation Lambda functions update database credentials and secret values but do not inject credentials into application environments. Applications must retrieve credentials through the Secrets Manager API. Environment variables, while convenient, represent a form of plaintext storage that persists beyond immediate use.

C) This is the correct answer because applications must parse the JSON response from Secrets Manager to extract credentials, credentials are used in memory to establish database connections, this approach retrieves credentials securely through encrypted API calls, and credentials are not persisted to configuration files or disk.

D) Storing credentials in code after initial retrieval defeats the purpose of using Secrets Manager. This approach creates hardcoded credentials in application memory or configuration that do not rotate and persist longer than necessary. Credentials should be retrieved fresh from Secrets Manager when needed rather than cached long-term.

Question 54

A company must implement a solution that prevents users from disabling CloudWatch Logs on EC2 instances. Which approach enforces this requirement?

A) Use IAM policies to deny CloudWatch Logs configuration changes

B) Create AWS Config rules to detect and remediate disabled CloudWatch Logs

C) Use Systems Manager State Manager to ensure CloudWatch Logs agent is running

D) Implement Service Control Policies to prevent EC2 instance termination

Answer: C

Explanation:

Ensuring that logging agents remain operational on EC2 instances is critical for maintaining security visibility and meeting compliance requirements. Users with instance access can potentially stop or disable logging agents, creating blind spots in security monitoring. Systems Manager State Manager provides automated configuration enforcement that continuously ensures instances maintain desired configurations.

State Manager uses documents that define desired configurations for managed instances. A State Manager association can specify that the CloudWatch Logs agent must be installed, configured, and running. State Manager continuously evaluates instances against this desired state and automatically remediates drift by reinstalling or restarting the agent if it is stopped or removed.

The State Manager association runs on a defined schedule, regularly checking that the CloudWatch Logs agent remains operational. If users attempt to disable the agent, State Manager detects this change during the next evaluation cycle and automatically restores the agent to running status. This creates a self-healing configuration that maintains logging capabilities without manual intervention.
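
As a sketch, an association using the AWS-owned AmazonCloudWatch-ManageAgent document could re-assert the agent state on a schedule; the tag target, schedule, and parameter values here are illustrative assumptions about that document's schema:

```python
import boto3

ssm = boto3.client("ssm")

# Every 30 minutes, ensure the CloudWatch agent is running on all
# production-tagged managed instances; State Manager restarts it if stopped.
ssm.create_association(
    Name="AmazonCloudWatch-ManageAgent",
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],
    ScheduleExpression="rate(30 minutes)",
    Parameters={"action": ["start"], "mode": ["ec2"]},
    AssociationName="enforce-cloudwatch-agent",
)
```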

A) IAM policies control AWS API access but cannot prevent users with OS-level access to EC2 instances from stopping system processes like the CloudWatch Logs agent. Users with SSH or RDP access can stop services at the operating system level regardless of IAM permissions. IAM policies alone are insufficient for enforcing agent status.

B) AWS Config rules can detect when CloudWatch Logs are not being delivered, but Config rules evaluate AWS resource configurations rather than processes running on EC2 instances. Config cannot directly detect whether the CloudWatch Logs agent process is running on an instance’s operating system.

C) This is the correct answer because Systems Manager State Manager continuously enforces desired configurations on managed instances, automatically remediates configuration drift by restarting or reinstalling agents, operates regardless of user actions at the OS level, and ensures CloudWatch Logs agents remain operational.

D) SCPs that prevent EC2 instance termination do not address the requirement to keep CloudWatch Logs enabled. Users can still stop the CloudWatch Logs agent on running instances. Preventing termination addresses instance lifecycle but not logging agent status.

Question 55

An organization requires that all data in transit between application tiers be encrypted. The application consists of EC2 instances in an Auto Scaling group behind an Application Load Balancer. Which configuration ensures encryption in transit?

A) Enable SSL/TLS on the Application Load Balancer listener with ACM certificates

B) Configure HTTPS listeners on ALB and configure EC2 instances to use HTTPS with self-signed certificates

C) Enable VPC encryption for all traffic

D) Use VPN connections between application tiers

Answer: B

Explanation:

Comprehensive encryption in transit requires that data remains encrypted throughout its entire journey from client to application servers. Using HTTPS on the load balancer protects traffic from clients to the load balancer, but if traffic from the load balancer to backend instances uses HTTP, that segment remains unencrypted. True end-to-end encryption requires HTTPS for both segments of the connection.

Configuring HTTPS listeners on the Application Load Balancer with ACM certificates encrypts traffic from clients to the load balancer. Additionally, configuring backend EC2 instances to listen on HTTPS with certificates (which can be self-signed since the traffic remains within the VPC) encrypts traffic from the load balancer to instances. This configuration ensures encryption for the entire data path.

The load balancer health checks should also use HTTPS to maintain encryption even for health check traffic. Backend instance security groups should allow inbound HTTPS traffic from the load balancer security group. The combination of HTTPS listeners on ALB and HTTPS-configured backend instances provides comprehensive encryption in transit across all application tiers.
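
A sketch of the two encrypted segments in boto3; the VPC ID, load balancer ARN, and ACM certificate ARN are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Backend target group speaks HTTPS to the instances (self-signed certificates
# are acceptable inside the VPC); health checks stay encrypted as well.
tg = elbv2.create_target_group(
    Name="app-https", Protocol="HTTPS", Port=443,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckProtocol="HTTPS",
)

# Client-facing listener terminates TLS with an ACM certificate.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/EXAMPLE",
    Protocol="HTTPS", Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```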

A) Configuring only the ALB listener with SSL/TLS encrypts traffic from clients to the load balancer but does not address the connection between the load balancer and backend EC2 instances. If backend connections use HTTP, that traffic segment remains unencrypted, failing to meet the requirement for comprehensive encryption in transit.

B) This is the correct answer because HTTPS listeners on ALB encrypt client-to-load-balancer traffic, configuring instances with HTTPS encrypts load-balancer-to-instance traffic, self-signed certificates are acceptable for internal VPC traffic, and this configuration provides complete end-to-end encryption across all application tiers.

C) AWS VPC does not provide automatic encryption for all traffic within the VPC. VPC provides network isolation but not automatic encryption. Traffic between resources in the same VPC flows unencrypted unless applications explicitly implement encryption protocols like HTTPS or TLS.

D) VPN connections are designed for connecting remote networks to VPCs, not for encrypting traffic between application tiers within the same VPC. Setting up VPN connections between application tiers would be architecturally complex, create performance overhead, and is not the intended use case for VPN functionality.

Question 56

A security engineer discovers that an S3 bucket has a bucket policy allowing public read access. The bucket contains sensitive company data. What is the fastest way to prevent public access while investigating how the policy was changed?

A) Delete the bucket policy immediately

B) Enable S3 Block Public Access for the bucket

C) Remove the public read ACL from all objects

D) Implement an SCP to prevent public bucket policies

Answer: B

Explanation:

When discovering publicly accessible S3 buckets containing sensitive data, immediate action is required to prevent unauthorized access and potential data breaches. The solution must be fast to implement, effective at blocking public access, and reversible to allow proper investigation without destroying evidence. S3 Block Public Access provides exactly these capabilities.

S3 Block Public Access settings can be enabled at the bucket level and take effect immediately, overriding any existing bucket policies or ACLs that grant public access. When enabled, Block Public Access prevents public access regardless of how bucket policies or ACLs are configured. This provides rapid incident response without requiring analysis of complex policy documents or modification of multiple configuration elements.

Enabling Block Public Access preserves the existing bucket policy, allowing security teams to investigate how the policy was modified and by whom. CloudTrail logs can be reviewed to identify who changed the bucket policy and when the change occurred. After investigation, the bucket policy can be permanently corrected or removed while Block Public Access remains as a safety control.
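
The containment step is one API call; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Takes effect immediately and overrides the public bucket policy without
# deleting it, so the policy remains available as evidence.
s3.put_public_access_block(
    Bucket="sensitive-company-data",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```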

A) Deleting the bucket policy immediately stops public access but destroys evidence that may be needed for investigation. The policy document contains important information about what access was granted and may provide clues about how the security incident occurred. Preserving evidence is important for proper incident response and potential disciplinary or legal actions.

B) This is the correct answer because S3 Block Public Access takes effect immediately to stop public access, overrides bucket policies and ACLs granting public permissions, preserves existing policies for investigation, and can be configured quickly through console, CLI, or API.

C) Removing public read ACLs from all objects could be time-consuming for buckets containing many objects and may not address bucket-level policies that grant public access. This approach is slower to implement and does not provide comprehensive protection against bucket policies allowing public access.

D) Implementing an SCP requires AWS Organizations and takes time to create, test, and deploy. SCPs prevent future policy changes but do not immediately block existing public access. This approach is preventive for future incidents but does not provide the immediate response needed for an active security incident.

Question 57

A company needs to ensure that all AWS Lambda functions can only access AWS services through VPC endpoints and cannot reach the public internet. How should this be configured?

A) Deploy Lambda functions with VPC configuration and ensure subnets have no route to internet or NAT gateways

B) Use security groups to block all outbound traffic from Lambda functions

C) Configure Lambda execution role to deny all AWS service access

D) Enable AWS PrivateLink for all AWS services

Answer: A

Explanation:

Lambda functions by default execute in an AWS-managed environment with internet access. For sensitive workloads or compliance requirements, organizations may need Lambda functions to operate without internet connectivity, accessing AWS services only through private VPC endpoints. This requires configuring Lambda functions to run in customer VPCs with appropriate network controls.

When Lambda functions are configured with VPC settings, they run with elastic network interfaces in the specified subnets. These functions inherit the networking characteristics of the subnets, including route table configurations. If subnets have no routes to internet gateways or NAT gateways, Lambda functions cannot reach the public internet. AWS service access is provided through VPC endpoints configured in the VPC.

VPC endpoints for AWS services like S3, DynamoDB, and others allow Lambda functions to access these services through AWS private network infrastructure without internet traversal. The VPC must have endpoints configured for every AWS service that the Lambda functions need to reach. Gateway endpoints for S3 and DynamoDB appear as entries in the subnet route tables, while interface endpoints for other services are reached through private DNS names inside the VPC, enabling private service access.
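
Attaching an existing function to private subnets is a single configuration update; the function name, subnet IDs, and security group ID are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Attach the function to private subnets whose route tables have no IGW/NAT
# routes; AWS services are then reachable only via the VPC's endpoints.
lam.update_function_configuration(
    FunctionName="process-orders",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
        "SecurityGroupIds": ["sg-lambda-endpoints-only"],
    },
)
```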

A) This is the correct answer because deploying Lambda in VPC subnets without internet or NAT gateway routes prevents internet access, VPC endpoints provide private connectivity to AWS services, Lambda functions inherit subnet networking characteristics, and this configuration ensures all AWS service access occurs through private endpoints.

B) Security groups control inbound and outbound network traffic, but Lambda functions need outbound access to AWS services through VPC endpoints. Blocking all outbound traffic would prevent Lambda functions from accessing any services, including those reached through VPC endpoints. This approach is too restrictive and prevents necessary service access.

C) Lambda execution roles define what AWS API actions Lambda functions can perform. Denying all service access prevents Lambda functions from performing their intended tasks. IAM permissions control authorization for API actions but do not control whether functions access services through VPC endpoints versus internet routes.

D) PrivateLink provides private connectivity to AWS services and custom services, but simply enabling PrivateLink does not prevent Lambda functions from accessing the internet. Lambda functions must be deployed in VPCs with appropriate network configurations. PrivateLink is a component of the solution but not a complete answer on its own.

Question 58

An organization requires that all modifications to production IAM roles be reviewed and approved by the security team before taking effect. Which approach implements this requirement?

A) Use AWS Config rules to detect IAM role changes and remediate automatically

B) Enable MFA for all IAM users who can modify roles

C) Implement a custom approval workflow using Step Functions that captures role change requests and executes them after approval

D) Use CloudTrail to log all IAM changes and manually review them

Answer: C

Explanation:

Implementing pre-approval workflows for sensitive operations prevents unauthorized or accidental changes to critical security configurations. IAM roles control access to AWS resources, making role modifications high-risk operations that warrant additional oversight. Approval workflows ensure that changes are reviewed by security experts before taking effect, reducing the risk of misconfigurations.

AWS Step Functions orchestrates multi-step workflows including human approval steps. For IAM role change approvals, an API Gateway endpoint or Lambda function intercepts role modification requests and initiates a Step Functions workflow. The workflow sends approval requests to security team members via SNS or email and waits for their decision. Only after approval does the workflow proceed to execute the actual IAM role modification using Lambda functions with appropriate permissions.

The approval workflow can include additional logic such as requiring multiple approvers for certain changes, automatic approval for specific change types, escalation for unresponded requests, and detailed audit logging of all approval decisions. IAM policies prevent direct role modifications, requiring all changes to flow through the approval workflow. This creates a governed change management process for critical security configurations.
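
A minimal state-machine sketch of the approval pause, using the Step Functions task-token callback pattern; the Lambda function names and input fields are illustrative:

```python
import json

# Pause on a task token until the security team approves, then apply the change.
definition = {
    "StartAt": "RequestApproval",
    "States": {
        "RequestApproval": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken",
            "Parameters": {
                "FunctionName": "notify-security-team",  # placeholder
                "Payload": {"change.$": "$.roleChange",
                            "token.$": "$$.Task.Token"},
            },
            "Next": "ApplyRoleChange",
        },
        "ApplyRoleChange": {
            "Type": "Task",
            "Resource": "arn:aws:states:::lambda:invoke",
            "Parameters": {"FunctionName": "apply-iam-change",  # placeholder
                           "Payload.$": "$"},
            "End": True,
        },
    },
}
print(json.dumps(definition, indent=2))
# An approver's Lambda later resumes the workflow with:
#   boto3.client("stepfunctions").send_task_success(taskToken=token, output="{}")
```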

A) AWS Config detects changes after they occur and can remediate by reverting changes, but this is reactive rather than preventive. Changes would take effect before being reverted, creating a window where incorrect or malicious role modifications are active. Config does not provide pre-approval capabilities for changes.

B) MFA adds security to user authentication but does not implement approval workflows. Users with MFA-protected credentials can still make IAM role changes directly without review or approval from others. MFA improves authentication security but does not create multi-party approval processes.

C) This is the correct answer because Step Functions can orchestrate human approval workflows, the workflow captures change requests before execution, approval is required before changes take effect, and the solution provides audit trails of all approval decisions and role modifications.

D) CloudTrail logging provides detective controls by recording IAM changes after they occur. Manual review of CloudTrail logs does not prevent unauthorized changes or implement pre-approval requirements. This approach is reactive and allows changes to take effect before review occurs.

Question 59

A company must ensure that Amazon RDS database backups are encrypted and stored in a different AWS region for disaster recovery. Which configuration meets these requirements?

A) Enable automated backups with default encryption

B) Enable automated backups with encryption and configure cross-region snapshot copy with encryption

C) Use AWS Backup to create backup plans with cross-region copy

D) Manually create snapshots and copy to another region

Answer: B

Explanation:

Comprehensive backup strategies for databases include encryption for data protection and geographic distribution for disaster recovery. Storing backups in different regions protects against regional disasters and ensures business continuity even if the primary region becomes unavailable. RDS provides native capabilities for encrypted backups and cross-region replication.

RDS automated backups create daily snapshots and capture transaction logs for point-in-time recovery. When encryption is enabled for an RDS instance, automated backups are automatically encrypted using the same KMS key. These encrypted backups protect data at rest in the primary region. For disaster recovery, backups must be copied to additional regions.

RDS supports automated cross-region snapshot copying that can be configured to periodically copy snapshots to destination regions. During cross-region copy, snapshots can be re-encrypted using KMS keys in the destination region. This configuration ensures that backups exist in multiple regions, are encrypted in both locations, and can be used to restore databases in case of regional failures.
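
A sketch of the cross-region copy call, issued from the destination region; the snapshot ARN and destination KMS key ARN are placeholders:

```python
import boto3

# Run the copy from the destination region; boto3 signs the cross-region
# request when SourceRegion is supplied.
rds_west = boto3.client("rds", region_name="us-west-2")
rds_west.copy_db_snapshot(
    SourceDBSnapshotIdentifier="arn:aws:rds:us-east-1:111122223333:snapshot:rds:orders-db-2024-01-01",
    TargetDBSnapshotIdentifier="orders-db-dr-copy",
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/EXAMPLE",  # re-encrypt in us-west-2
    SourceRegion="us-east-1",
)
```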

A) Automated backups with encryption protect data at rest but only store backups in the same region as the database. If a regional disaster occurs, backups in that region may be unavailable. This configuration does not meet the requirement for cross-region disaster recovery backups.

B) This is the correct answer because automated backups with encryption protect data at rest, cross-region snapshot copy stores backups in additional regions for disaster recovery, encryption can be maintained or changed in destination regions, and the configuration provides comprehensive backup protection.

C) AWS Backup can create backup plans with cross-region copy capabilities and is a valid alternative solution. However, the question asks for the specific configuration, and RDS native cross-region snapshot copy is the direct feature designed for this purpose. AWS Backup adds an abstraction layer that may be unnecessary for simple cross-region RDS backup requirements.

D) Manual snapshot creation and copying is error-prone, requires operational overhead, and does not provide the point-in-time recovery capabilities that automated backups offer. Manual processes are difficult to maintain consistently and increase the risk of backup failures or forgotten copies.

Question 60

A security team needs to detect when IAM policies are modified to grant overly permissive access such as full administrative privileges. Which AWS service provides this capability?

A) AWS CloudTrail

B) AWS IAM Access Analyzer

C) Amazon GuardDuty

D) AWS Security Hub

Answer: B

Explanation:

Detecting overly permissive IAM policies is critical for maintaining least privilege access controls and preventing privilege escalation. IAM policies can be complex, and unintended permissions may be granted through policy modifications. Automated analysis tools that evaluate IAM policies and identify security risks help organizations maintain proper access controls.

AWS IAM Access Analyzer continuously monitors IAM policies and generates findings when policies grant overly broad permissions. Access Analyzer uses automated reasoning to analyze resource policies for access that extends beyond the intended zone of trust, and its policy validation checks flag statements that grant administrative privileges, allow sensitive actions, or provide broader access than necessary.

Access Analyzer generates findings with severity levels indicating the risk associated with detected issues. For overly permissive policies such as those granting full administrative access, Access Analyzer creates high-severity findings. The service integrates with Security Hub for centralized security monitoring and can trigger automated alerts through EventBridge when new findings are generated.
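
For example, the policy validation API can be called directly against a policy document; this sketch checks a deliberately over-broad policy:

```python
import json
import boto3

analyzer = boto3.client("accessanalyzer")

# An admin-equivalent policy: full wildcards on Action and Resource.
overly_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# Validation surfaces over-permissive statements as findings
# (e.g. SECURITY_WARNING finding types).
result = analyzer.validate_policy(
    policyDocument=json.dumps(overly_broad),
    policyType="IDENTITY_POLICY",
)
for finding in result["findings"]:
    print(finding["findingType"], finding["issueCode"], finding["findingDetails"])
```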

A) CloudTrail logs IAM policy modifications showing who made changes and when they occurred, but CloudTrail does not analyze policy content to determine if policies are overly permissive. CloudTrail provides audit logs but not policy analysis or risk assessment capabilities.

B) This is the correct answer because IAM Access Analyzer continuously analyzes IAM policies for excessive permissions, identifies policies granting administrative or overly broad access, generates findings with severity ratings, and enables automated detection of policy risks without manual analysis.

C) Amazon GuardDuty detects threats through analysis of VPC Flow Logs, CloudTrail logs, and DNS logs. While GuardDuty can detect unusual IAM behavior such as compromised credentials, it does not perform static analysis of IAM policy documents to identify overly permissive permissions. GuardDuty focuses on behavioral threat detection.

D) AWS Security Hub aggregates security findings from multiple AWS services including Access Analyzer. Security Hub displays findings but does not perform the underlying policy analysis. The actual detection of overly permissive policies is performed by Access Analyzer, with Security Hub serving as a centralized dashboard.
