Amazon AWS Certified Security – Specialty SCS-C02 Exam Dumps and Practice Test Questions Set7 Q121-140


Question 121 

A security engineer needs to detect when AWS resources are exposed to the internet through misconfigurations such as public IP addresses, internet gateways, or permissive security groups. Which AWS service provides this visibility?

A) Amazon VPC Reachability Analyzer 

B) AWS Network Firewall 

C) Amazon Inspector 

D) VPC Flow Logs

Answer: A

Explanation: 

Understanding which resources are reachable from the internet is critical for identifying exposure risks. Network path analysis must evaluate routing, security groups, NACLs, and gateway configurations to determine actual reachability. VPC Reachability Analyzer provides automated network path analysis without requiring traffic generation.

Reachability Analyzer performs static configuration analysis to determine whether network paths exist between sources and destinations. Security teams can analyze paths from internet gateways to private resources, checking if misconfigurations allow unintended internet accessibility. The analysis evaluates route tables, security groups, NACLs, and other network components.

Reachability Analyzer generates detailed explanations showing why paths are reachable or blocked, identifying specific configuration components enabling or preventing connectivity. For internet exposure analysis, teams analyze paths from internet gateways to EC2 instances, RDS databases, or other resources, identifying misconfigurations that create public accessibility.
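As a rough illustration, the following boto3 sketch creates a Network Insights path from an internet gateway to an instance and starts an analysis; the resource IDs and port are placeholders for whatever resources are under review.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs -- substitute the internet gateway and the resource being assessed.
path = ec2.create_network_insights_path(
    Source="igw-0123456789abcdef0",
    Destination="i-0123456789abcdef0",
    Protocol="tcp",
    DestinationPort=443,
)

# Static configuration analysis; no traffic is generated.
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"]
)
print(analysis["NetworkInsightsAnalysis"]["Status"])
```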

A) This is the correct answer because VPC Reachability Analyzer performs network path analysis to determine internet reachability, analyzes routing, security groups, NACLs, and gateway configurations, identifies misconfigurations enabling internet exposure, and provides detailed path explanations for remediation.

B) AWS Network Firewall provides stateful network traffic filtering and inspection but doesn’t analyze resource configurations to determine internet exposure. Network Firewall filters active traffic but doesn’t perform static configuration analysis identifying exposure risks.

C) Amazon Inspector assesses EC2 instances and container images for vulnerabilities and network exposure at the instance level but doesn’t provide VPC-wide network path analysis. Inspector focuses on vulnerability scanning rather than configuration-based reachability analysis.

D) VPC Flow Logs capture actual network traffic but don’t analyze configurations to determine potential reachability. Flow logs are reactive, showing traffic that occurred, but don’t identify exposure risks before exploitation. Configuration analysis is needed for proactive exposure detection.

Question 122 

An organization requires that all Amazon S3 bucket policies be reviewed and approved by the security team before being applied. Which approach implements this requirement?

A) Use AWS CloudTrail to log bucket policy changes and manually review them 

B) Implement AWS Lambda functions triggered by S3 bucket policy changes that pause changes pending approval 

C) Enable S3 Block Public Access on all buckets 

D) Use AWS Config to detect policy changes after they occur

Answer: B

Explanation: 

Implementing approval workflows for S3 bucket policy changes prevents unauthorized or risky permissions from being applied. S3 bucket policies control access to data, making policy changes high-risk operations requiring oversight. Event-driven automation can intercept policy changes and implement approval workflows.

EventBridge receives S3 API events including PutBucketPolicy calls. Lambda functions triggered by these events can store proposed policy changes, revert buckets to previous policies, and initiate approval workflows using Step Functions. The workflow sends approval requests via SNS to security team members and waits for their decision before applying the policy.

The Lambda function retrieves the previous bucket policy before the change, stores both old and new policies for comparison, and reverts the bucket to the old policy pending approval. After approval, another Lambda function applies the approved policy. If denied, the bucket remains with the original policy and the requester is notified.
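A minimal sketch of the intercepting Lambda handler is shown below. It assumes an EventBridge rule matching PutBucketPolicy events delivered via CloudTrail, that the event detail carries the bucket name and proposed policy under requestParameters, and a hypothetical SNS topic for approval requests; storing and later re-applying the approved policy would be handled by the rest of the workflow.

```python
import json
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

APPROVAL_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:policy-approvals"  # hypothetical

def handler(event, context):
    detail = event["detail"]
    bucket = detail["requestParameters"]["bucketName"]
    proposed_policy = detail["requestParameters"].get("bucketPolicy")

    # Hold the change pending approval (a full workflow would restore the stored
    # previous policy rather than simply removing the current one).
    s3.delete_bucket_policy(Bucket=bucket)

    # Notify the security team and hand off to the approval workflow.
    sns.publish(
        TopicArn=APPROVAL_TOPIC_ARN,
        Subject=f"Bucket policy change pending approval: {bucket}",
        Message=json.dumps(proposed_policy, default=str),
    )
```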

A) CloudTrail logging provides audit trails of policy changes after they occur but doesn’t prevent or pause changes pending approval. Logging is detective rather than preventive, allowing risky policies to take effect before review. This doesn’t meet the requirement for pre-approval.

B) This is the correct answer because Lambda functions triggered by policy change events can intercept changes, revert policies pending approval, initiate approval workflows using Step Functions, and apply policies only after security team approval.

C) S3 Block Public Access prevents public bucket access but doesn’t implement approval workflows for all bucket policy changes. Block Public Access addresses specific public access risks but doesn’t provide general policy change governance or approval requirements.

D) AWS Config detects policy changes after they occur and can trigger remediation but doesn’t provide pre-approval workflows. Config is reactive, identifying changes after they take effect. The requirement specifies approval before application, which Config doesn’t provide.

Question 123 

A company must ensure that AWS Lambda functions cannot access AWS Systems Manager Parameter Store parameters belonging to other applications. Which IAM policy configuration enforces this?

A) Grant Lambda functions full Parameter Store access 

B) Use IAM policies with conditions restricting access to parameters by path prefix matching the application name 

C) Manually review all Parameter Store access monthly 

D) Enable Parameter Store encryption

Answer: B

Explanation: 

Implementing least privilege for Parameter Store access requires restricting Lambda functions to only parameters needed for their applications. Parameter Store supports hierarchical naming using paths, enabling path-based access control. IAM policies with conditions can enforce access restrictions based on parameter paths.

IAM policies can use the ssm:ResourceTag condition key or resource ARN patterns to restrict Parameter Store access. By organizing parameters in hierarchical paths like /app1/database/password and /app2/api/key, policies can grant access to specific path prefixes. Lambda execution roles for app1 functions receive policies allowing access only to /app1/* parameters.

This path-based access control implements application-level segmentation within Parameter Store. Each application’s Lambda functions can only access their own parameters, preventing cross-application access. The approach scales well as new applications and parameters are added, requiring only consistent path naming conventions.
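A sketch of such a statement for a hypothetical app1 execution role, attached with boto3, might look like this; the role name, account ID, and path are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Read-only access scoped to the /app1/ parameter hierarchy (placeholder names).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ssm:GetParameter", "ssm:GetParameters", "ssm:GetParametersByPath"],
        "Resource": "arn:aws:ssm:us-east-1:111122223333:parameter/app1/*",
    }],
}

iam.put_role_policy(
    RoleName="app1-lambda-execution-role",
    PolicyName="app1-parameter-store-read",
    PolicyDocument=json.dumps(policy),
)
```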

A) Granting full Parameter Store access to Lambda functions violates least privilege by allowing functions to access parameters for all applications. This excessive permission creates risks if functions are compromised, enabling access to sensitive configuration data for unrelated applications.

B) This is the correct answer because IAM policies with path-based conditions restrict Parameter Store access to specific parameter prefixes, path organization enables application-level segmentation, Lambda functions can only access their own application’s parameters, and this implements least privilege for configuration data access.

C) Manual monthly review doesn’t prevent cross-application parameter access and introduces 30-day windows where unauthorized access could occur. Manual processes are reactive rather than preventive and don’t enforce access restrictions at the authorization layer.

D) Parameter Store encryption protects parameter values at rest but doesn’t control which Lambda functions can access which parameters. Encryption addresses data confidentiality but not access control or application segmentation requirements.

Question 124 

A security team needs to implement monitoring that detects when EC2 instance IAM roles are modified to add permissions. Which solution provides this capability?

A) Use AWS CloudTrail to log IAM API calls and create CloudWatch alarms for role policy changes 

B) Manually review IAM roles weekly 

C) Use Amazon GuardDuty to detect role changes 

D) Enable AWS Config continuous recording

Answer: A

Explanation: 

Detecting IAM role modifications requires monitoring IAM API calls that attach policies or modify role trust relationships. CloudTrail logs all IAM operations, providing comprehensive audit trails. CloudWatch Logs and metric filters enable automated alerting on specific event patterns.

CloudTrail logs IAM API calls including AttachRolePolicy, PutRolePolicy, and UpdateAssumeRolePolicy that modify role permissions or trust relationships. These logs are sent to CloudWatch Logs where metric filters match specific event patterns. When role modification events are detected, metric filters increment CloudWatch metrics.

CloudWatch alarms monitor these metrics and trigger notifications when modifications occur. SNS topics can notify security teams of role changes in real-time, enabling rapid investigation. Alarms can also trigger Lambda functions that evaluate changes and automatically revert unauthorized modifications or escalate to security personnel.
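A condensed sketch of the metric filter and alarm setup is shown below, assuming CloudTrail is already delivering to a CloudWatch Logs group and that a hypothetical SNS topic receives the alerts.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "cloudtrail-logs"  # hypothetical CloudTrail log group

# Match API calls that change role permissions or trust policies.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="iam-role-modification",
    filterPattern='{ ($.eventName = "AttachRolePolicy") || ($.eventName = "PutRolePolicy") || ($.eventName = "UpdateAssumeRolePolicy") }',
    metricTransformations=[{
        "metricName": "IamRoleModificationCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alert on any occurrence within a five-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="iam-role-modification",
    MetricName="IamRoleModificationCount",
    Namespace="Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # hypothetical
)
```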

A) This is the correct answer because CloudTrail logs IAM role modification API calls, CloudWatch metric filters detect role policy changes in near real-time, CloudWatch alarms trigger notifications to security teams, and this enables rapid detection and response to role modifications.

B) Manual weekly review introduces multi-day delays between role modifications and detection. Unauthorized or risky permission changes could remain active for up to a week before review. Manual processes don’t scale across organizations with many roles and frequent legitimate changes.

C) Amazon GuardDuty detects threats through behavioral analysis but doesn’t specifically monitor IAM role configuration changes. GuardDuty might detect anomalous behavior resulting from role compromise but doesn’t provide alerting on role policy modifications themselves.

D) AWS Config continuously records resource configurations including IAM roles but doesn’t provide real-time alerting on specific API calls. Config tracks configuration history but lacks the event-driven alerting that CloudTrail with CloudWatch provides for immediate change detection.

Question 125 

An organization requires that all AWS KMS customer-managed keys have key policies explicitly denying key deletion to prevent accidental or malicious key destruction. Which approach enforces this?

A) Manually add deny statements to all key policies 

B) Use Service Control Policies to deny ScheduleKeyDeletion actions across all accounts 

C) Enable key deletion protection in KMS settings 

D) Use AWS Config to detect keys without deletion protection

Answer: B

Explanation: 

Preventing accidental or malicious KMS key deletion is critical since key deletion makes all data encrypted with those keys permanently inaccessible. Key deletion represents catastrophic data loss risks requiring strong preventive controls. Service Control Policies provide organization-wide enforcement that individual accounts cannot override.

SCPs can explicitly deny the kms:ScheduleKeyDeletion API action across all accounts in the organization. This preventive control ensures that even account administrators or compromised credentials cannot schedule keys for deletion. The organization-wide policy applies uniformly without requiring key-by-key policy modifications.

For exceptional cases requiring key deletion (such as key compromise or application decommissioning), a separate restricted account or role with SCP exceptions can be designated for key management operations. This controlled exception process ensures key deletion undergoes deliberate review while preventing casual or accidental deletion.
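A sketch of such an SCP, created and attached with boto3, could look like the following; the root ID is a placeholder, and the exception account or role would be handled through the policy's attachment scope.

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyKmsKeyDeletion",
        "Effect": "Deny",
        "Action": "kms:ScheduleKeyDeletion",
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Content=json.dumps(scp),
    Description="Prevent scheduling KMS customer managed keys for deletion",
    Name="deny-kms-key-deletion",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach at the organization root (placeholder ID) so every member account inherits it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-exampleroot",
)
```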

A) Manually adding deny statements to individual key policies doesn’t scale across many keys and accounts. Manual processes are error-prone and require ongoing verification that all keys maintain deletion protection. New keys might be created without proper policies, creating gaps in protection.

B) This is the correct answer because SCPs deny key deletion actions organization-wide, preventive controls block deletion regardless of key policies or IAM permissions, organization-wide enforcement ensures consistent protection, and this prevents accidental or malicious key destruction.

C) AWS KMS does not have a “key deletion protection” setting similar to RDS deletion protection. Key lifecycle is controlled through key policies and IAM permissions. Protection must be implemented through explicit deny policies or SCPs rather than service-level settings.

D) AWS Config can detect keys that lack deletion protection in their key policies but this is reactive. Keys could be deleted before Config detects missing protections. Preventive controls blocking deletion API calls are more effective than detective controls monitoring policy configurations.

Question 126 

A company must implement controls ensuring that VPC peering connections cannot be established with external AWS accounts without approval. Which solution enforces this?

A) Manually review all VPC peering connections monthly 

B) Use IAM policies with conditions denying VPC peering creation to external accounts 

C) Enable VPC Flow Logs on all VPCs 

D) Use AWS Config to detect peering connections after creation

Answer: B

Explanation: 

VPC peering with external accounts can create unintended network connectivity and data exposure risks. Preventing unauthorized peering requires controls that evaluate peering targets during connection creation. IAM policies with conditions can distinguish between intra-organization and external account peering.

IAM policies can use the ec2:AccepterVpc and ec2:RequesterVpc condition keys along with account comparisons to determine whether peering targets are external accounts. Policies denying CreateVpcPeeringConnection and AcceptVpcPeeringConnection when the peer account is not in an approved list prevent external peering.

For organization-wide enforcement, SCPs can deny VPC peering to external accounts across all member accounts. This ensures consistent enforcement without per-account configuration. Exceptions for approved external partners can be implemented through condition logic comparing peer account IDs against allowlists.
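A sketch of a deny statement along these lines is shown below; the approved account ID is a placeholder, and the exact condition pattern should be validated against the VPC peering condition keys before use.

```python
import json

# Deny creating or accepting peering when the accepter VPC is not owned by an
# approved account (the VPC ARN embeds the owning account ID). IDs are placeholders.
deny_external_peering = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPeeringToExternalAccounts",
        "Effect": "Deny",
        "Action": ["ec2:CreateVpcPeeringConnection", "ec2:AcceptVpcPeeringConnection"],
        "Resource": "arn:aws:ec2:*:*:vpc-peering-connection/*",
        "Condition": {
            "ArnNotLike": {"ec2:AccepterVpc": "arn:aws:ec2:*:111122223333:vpc/*"}
        },
    }],
}

print(json.dumps(deny_external_peering, indent=2))
```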

A) Manual monthly review is reactive and introduces 30-day windows where unauthorized peering connections could exist. External peering could enable data exfiltration or unauthorized access for significant periods before detection. Manual processes don’t prevent non-compliant peering creation.

B) This is the correct answer because IAM policies can evaluate VPC peering peer accounts using condition keys, policies can deny peering to external accounts while allowing internal peering, preventive controls block unauthorized peering at creation time, and SCPs can enforce this organization-wide.

C) VPC Flow Logs capture network traffic but don’t control or detect VPC peering connection establishment. Flow logs show traffic after peering exists but don’t prevent peering or identify peering configuration issues. Flow logs are insufficient for peering connection governance.

D) AWS Config can detect VPC peering connections after creation but doesn’t prevent unauthorized peering. Detective controls are reactive, allowing external peering to exist before remediation. Preventive controls blocking unauthorized peering at creation are more effective for this requirement.

Question 127 

A security engineer needs to implement automated remediation that removes public read access from S3 bucket ACLs when detected. Which solution implements this?

A) Manually review S3 bucket ACLs weekly 

B) Use AWS Config with automatic remediation to remove public read ACLs 

C) Enable S3 Block Public Access which overrides ACLs 

D) Use Lambda functions triggered by daily schedules to scan ACLs

Answer: B

Explanation: 

S3 bucket ACLs can grant public read access creating data exposure risks. Automated detection and remediation of public ACLs ensures rapid response to misconfigurations. AWS Config provides continuous compliance monitoring with automated remediation capabilities for S3 configurations.

AWS Config includes the s3-bucket-public-read-prohibited managed rule that detects buckets with public read access through ACLs or bucket policies. When Config detects non-compliant buckets, automatic remediation can trigger Systems Manager Automation documents or Lambda functions that remove public read ACLs while preserving other ACL permissions.

The remediation action calls the S3 PutBucketAcl API to update the ACL, removing the AllUsers or AuthenticatedUsers groups from read permissions. This automated enforcement ensures public access is removed within minutes of detection without manual intervention, minimizing exposure windows.
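The core of such a remediation function might look like the sketch below, which strips the public grantee groups from a bucket ACL while preserving all other grants; in practice the Config remediation action would pass in the non-compliant bucket name.

```python
import boto3

s3 = boto3.client("s3")

PUBLIC_GROUP_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def remove_public_read(bucket_name):
    """Drop public grants from the bucket ACL, keeping the owner's permissions."""
    acl = s3.get_bucket_acl(Bucket=bucket_name)
    kept_grants = [
        grant for grant in acl["Grants"]
        if grant["Grantee"].get("URI") not in PUBLIC_GROUP_URIS
    ]
    s3.put_bucket_acl(
        Bucket=bucket_name,
        AccessControlPolicy={"Grants": kept_grants, "Owner": acl["Owner"]},
    )
```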

A) Manual weekly review introduces multi-day windows where public ACLs remain active. Data could be exposed or exfiltrated during these periods. Manual processes don’t scale across organizations with many buckets and don’t provide timely response to security misconfigurations.

B) This is the correct answer because AWS Config continuously monitors S3 bucket ACLs for public access, managed rules detect non-compliant configurations, automatic remediation removes public read permissions without manual intervention, and this minimizes exposure windows through rapid automated response.

C) S3 Block Public Access overrides ACLs and bucket policies to prevent public access, which is a preventive control. While Block Public Access is excellent for preventing public access, the question specifically asks for remediation of existing public ACLs, suggesting a detective and remediation approach rather than pure prevention.

D) Lambda functions on daily schedules introduce up to 24-hour detection delays. Daily scans are insufficient for rapid response to security misconfigurations. Config’s continuous monitoring provides much faster detection and remediation compared to scheduled batch processes.

Question 128 

An organization must ensure that Amazon RDS databases use SSL/TLS for all client connections. Which configuration enforces this requirement?

A) Manually configure each database to require SSL 

B) Use RDS parameter groups with require_secure_transport set to 1 (MySQL) or rds.force_ssl set to 1 (PostgreSQL) 

C) Enable RDS encryption at rest 

D) Use VPC endpoints for RDS connectivity

Answer: B

Explanation: 

Enforcing SSL/TLS for database connections requires database engine configuration that rejects unencrypted connection attempts. RDS parameter groups control database engine settings including SSL/TLS requirements. Different database engines use different parameters for SSL enforcement.

MySQL uses the require_secure_transport parameter which when set to 1 requires all connections to use SSL/TLS. PostgreSQL uses rds.force_ssl parameter with similar functionality. These parameters are set in custom RDS parameter groups which are then associated with RDS instances.

When these parameters are enabled, database engines reject connection attempts that don’t use SSL/TLS, returning errors to clients. This enforcement occurs at the database layer regardless of application configurations, ensuring comprehensive SSL/TLS usage. Applications must be configured with proper SSL certificates and connection string options to connect successfully.
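A minimal sketch of setting the PostgreSQL parameter with boto3 is shown below; the parameter group name is a placeholder, and depending on engine version the parameter may be static and require ApplyMethod "pending-reboot" plus an instance reboot.

```python
import boto3

rds = boto3.client("rds")

# Require TLS for all connections (use require_secure_transport for MySQL engines).
rds.modify_db_parameter_group(
    DBParameterGroupName="prod-postgres-params",  # hypothetical custom parameter group
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
```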

A) Manual configuration of individual databases doesn’t scale and is error-prone. New databases might be created without SSL enforcement, creating security gaps. Parameter groups provide centralized configuration that can be applied consistently across multiple database instances.

B) This is the correct answer because RDS parameter groups control database engine SSL/TLS requirements, parameters like require_secure_transport and rds.force_ssl enforce encrypted connections, database engines reject unencrypted connection attempts, and parameter groups enable consistent SSL/TLS enforcement across multiple instances.

C) RDS encryption at rest protects data stored on disk but doesn’t enforce SSL/TLS for network connections. Encryption at rest and encryption in transit are separate security controls. Rest encryption doesn’t prevent unencrypted database connections.

D) VPC endpoints provide private connectivity to AWS services without traversing the internet but don’t enforce SSL/TLS for connections. VPC endpoints address network routing but not transport-layer encryption. Endpoints and SSL/TLS enforcement are complementary but independent controls.

Question 129 

A company needs to implement controls preventing IAM policies from being created that allow S3 actions without requiring encryption. Which solution enforces this?

A) Manually review all IAM policies monthly 

B) Use IAM Access Analyzer to detect policies lacking encryption requirements and alert security teams 

C) Create Service Control Policies requiring S3 encryption 

D) Enable S3 default encryption on all buckets

Answer: B

Explanation: 

Detecting IAM policies that allow S3 operations without enforcing encryption requires policy analysis that identifies missing security conditions. Policies allowing s3:PutObject without conditions like s3:x-amz-server-side-encryption create risks of unencrypted data storage. IAM Access Analyzer analyzes policies for security issues including missing required conditions.

IAM Access Analyzer evaluates IAM policies, S3 bucket policies, and other resource policies to identify security risks. For encryption requirements, Access Analyzer can identify policies granting S3 write permissions without conditions requiring encryption. These findings alert security teams to policies that should be refined to include encryption requirements.

Access Analyzer findings can trigger automated workflows through EventBridge. Lambda functions can evaluate findings and create tickets, send notifications to policy creators, or even automatically add encryption conditions to policies. This automated detection and response ensures encryption requirements are consistently enforced across IAM policies.
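As a simplified approximation of what such a workflow evaluates, the sketch below scans customer-managed IAM policies and flags statements that allow s3:PutObject without a server-side-encryption condition; it deliberately ignores NotAction, wildcard conditions, and other edge cases a production check would handle.

```python
import boto3

iam = boto3.client("iam")

def flag_unencrypted_put_policies():
    flagged = []
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for policy in page["Policies"]:
            doc = iam.get_policy_version(
                PolicyArn=policy["Arn"], VersionId=policy["DefaultVersionId"]
            )["PolicyVersion"]["Document"]
            statements = doc["Statement"]
            statements = [statements] if isinstance(statements, dict) else statements
            for stmt in statements:
                actions = stmt.get("Action", [])
                actions = [actions] if isinstance(actions, str) else actions
                allows_put = stmt.get("Effect") == "Allow" and any(
                    a.lower() in ("s3:putobject", "s3:*", "*") for a in actions
                )
                has_sse_condition = "s3:x-amz-server-side-encryption" in str(
                    stmt.get("Condition", "")
                )
                if allows_put and not has_sse_condition:
                    flagged.append(policy["PolicyName"])
    return flagged

print(flag_unencrypted_put_policies())
```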

A) Manual monthly review doesn’t scale across organizations with numerous IAM policies and introduces 30-day detection delays. Policies lacking encryption requirements could allow unencrypted data storage for significant periods before detection. Manual processes are reactive and resource-intensive.

B) This is the correct answer because IAM Access Analyzer analyzes policy content for missing security conditions, identifies policies allowing S3 operations without encryption requirements, generates findings alerting security teams, and enables automated workflows for policy remediation.

C) Service Control Policies can require S3 encryption at the data plane level by denying PutObject without encryption headers, but SCPs don’t analyze or prevent creation of IAM policies lacking encryption conditions. SCPs enforce encryption usage rather than policy content requirements.

D) S3 default encryption ensures objects are encrypted when uploaded but doesn’t enforce that IAM policies include encryption conditions. Default encryption is a data plane control while the requirement is for control plane policy analysis. These are complementary but different controls.

Question 130 

A security team needs to detect when AWS resources in production accounts are tagged with “Environment: Development” indicating incorrect environment classification. Which solution provides automated detection?

A) Manually review resource tags monthly 

B) Use AWS Config rules to detect resources with incorrect environment tags in production accounts 

C) Use AWS Organizations to prevent tag creation 

D) Enable CloudTrail to log tagging operations

Answer: B

Explanation: 

Detecting incorrect resource tagging requires evaluating resource tags against account context. Resources in production accounts should have production environment tags, while development tags indicate misclassification or misconfiguration. AWS Config provides continuous configuration monitoring including tag evaluation with customizable rules.

AWS Config custom rules using Lambda functions can evaluate resource tags against expected values based on account classification. The Lambda function retrieves the account ID, determines expected environment tags (e.g., production accounts should only have “Environment: Production” tags), and compares actual resource tags against expectations.

When Config detects resources with incorrect environment tags, it marks them as non-compliant and generates findings. EventBridge rules can trigger notifications to resource owners, create automatic remediation workflows that update tags, or escalate to security teams for investigation. This continuous monitoring ensures tag hygiene and correct resource classification.
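A simplified sketch of the evaluation Lambda for such a custom rule is shown below; it assumes the rule is deployed to accounts classified as production, and it omits handling of deleted or oversized configuration items.

```python
import json
import boto3

config = boto3.client("config")

EXPECTED_ENV = "Production"  # assumption: this rule runs in production accounts

def handler(event, context):
    item = json.loads(event["invokingEvent"])["configurationItem"]
    tags = item.get("tags", {})
    compliance = "COMPLIANT" if tags.get("Environment") == EXPECTED_ENV else "NON_COMPLIANT"

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```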

A) Manual monthly review doesn’t scale across organizations with numerous resources and introduces 30-day detection delays. Incorrectly classified resources could exist for significant periods, causing issues with cost allocation, access controls, or compliance reporting. Manual processes are reactive and resource-intensive.

B) This is the correct answer because AWS Config continuously evaluates resource configurations including tags, custom rules can compare tags against expected values for each account type, Config generates compliance findings for incorrectly tagged resources, and automated workflows can remediate tag issues.

C) AWS Organizations tag policies can standardize tag keys and allowed values but do not evaluate tags against account context, such as flagging development tags inside production accounts. Organizations provides account management, SCPs, and tag policies rather than account-aware tag compliance evaluation. This kind of tag enforcement requires service-specific controls like Config rules.

D) CloudTrail logs tagging API calls showing when tags are created or modified but doesn’t evaluate tag values against expected standards. CloudTrail provides audit trails but not compliance monitoring or validation logic. Additional analysis tools are needed to detect incorrect tags.

Question 131 

An organization requires that all Amazon EC2 instances automatically receive antivirus software upon launch and maintain updated virus definitions. Which solution implements this requirement?

A) Manually install antivirus on each instance after launch

B) Create custom AMIs with antivirus pre-installed and use Systems Manager to maintain updates 

C) Use Amazon Inspector to scan for viruses 

D) Enable EC2 Instance Connect for security

Answer: B

Explanation: 

Ensuring consistent security software deployment across EC2 instances requires standardized base images and automated update mechanisms. Custom AMIs provide golden images with pre-installed security software, while Systems Manager automates ongoing maintenance. This combination ensures consistent deployment and continuous protection.

Custom AMIs are created from properly configured instances with antivirus software installed and configured according to organizational standards. EC2 instances launched from these AMIs automatically include antivirus software without requiring post-launch installation. AMI standardization ensures consistent security posture across all instances.

AWS Systems Manager Patch Manager or Run Command automates antivirus definition updates. Systems Manager can schedule regular update operations across all managed instances, ensuring virus definitions remain current. State Manager maintains desired configurations, automatically reinstalling or reconfiguring antivirus if it’s removed or misconfigured.
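For the update side, a State Manager association can re-run a definition update on a schedule. The sketch below targets instances by a hypothetical tag and uses a placeholder update command for whichever antivirus product is baked into the AMI.

```python
import boto3

ssm = boto3.client("ssm")

ssm.create_association(
    Name="AWS-RunShellScript",
    AssociationName="antivirus-definition-updates",
    Targets=[{"Key": "tag:AntivirusManaged", "Values": ["true"]}],
    Parameters={"commands": ["sudo /opt/antivirus/bin/update-definitions"]},  # placeholder
    ScheduleExpression="rate(1 day)",
)
```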

A) Manual installation after launch is operationally intensive, introduces gaps between launch and protection, is error-prone, and doesn’t scale. Instances remain vulnerable during the period between launch and manual installation. Manual processes require ongoing human intervention for each instance launch.

B) This is the correct answer because custom AMIs ensure antivirus is pre-installed on all instances at launch, Systems Manager automates virus definition updates across all instances, State Manager maintains antivirus configuration consistency, and this provides automated deployment and maintenance.

C) Amazon Inspector assesses instances for vulnerabilities and security exposures but does not provide antivirus capabilities or virus definition updates. Inspector performs vulnerability scanning rather than providing runtime malware protection.

D) EC2 Instance Connect provides secure SSH access management but does not deploy antivirus software or provide malware protection. Instance Connect addresses authentication rather than malware defense.

Question 132 

A company must ensure that AWS Lambda functions cannot make outbound connections to IP addresses outside the organization’s approved list. Which approach enforces this?

A) Deploy Lambda in VPC with security groups allowing only approved destinations 

B) Use Lambda reserved concurrency to limit connections 

C) Enable Lambda tracing with X-Ray 

D) Configure Lambda environment variables with approved IPs

Answer: A

Explanation: 

Controlling Lambda function outbound network connectivity requires placing functions in VPCs where security groups and NACLs provide network-level access control. Lambda functions in VPCs inherit subnet networking characteristics and are subject to security group rules controlling outbound traffic.

When Lambda functions are configured with VPC settings, they operate with elastic network interfaces in specified subnets. Security groups attached to these network interfaces define outbound connection rules. Security groups can be configured with explicit allow rules for approved destination IP addresses or CIDR ranges, implicitly denying all other destinations.

This network-based control ensures Lambda functions can only connect to approved external services, databases, or APIs. Attempts to connect to non-approved IP addresses are blocked at the network layer by security group rules. This approach provides defense-in-depth by enforcing network restrictions independent of application code.
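A sketch of building such a security group is shown below; the VPC ID and the approved CIDR range are placeholders, and the resulting group would then be referenced in the Lambda function's VPC configuration.

```python
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="lambda-restricted-egress",
    Description="Allow outbound traffic only to approved destinations",
    VpcId="vpc-0123456789abcdef0",  # placeholder
)
sg_id = sg["GroupId"]

# Remove the default allow-all egress rule.
ec2.revoke_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
)

# Allow HTTPS only to the organization's approved address range (placeholder CIDR).
ec2.authorize_security_group_egress(
    GroupId=sg_id,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "198.51.100.0/24"}],
    }],
)
```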

A) This is the correct answer because Lambda in VPC uses security groups for network access control, security groups can explicitly allow only approved destination IPs, outbound connections to non-approved destinations are blocked, and this provides network-level enforcement of connectivity restrictions.

B) Lambda reserved concurrency controls the maximum number of concurrent function executions but does not control network connectivity or destination IP restrictions. Reserved concurrency addresses capacity management rather than network security.

C) Lambda tracing with X-Ray provides observability into function execution and downstream service calls but does not enforce network connectivity restrictions. X-Ray monitors and analyzes traffic but doesn’t block connections to specific destinations.

D) Environment variables store configuration data for Lambda functions but do not enforce network-level connectivity restrictions. While applications could read approved IPs from environment variables, this relies on application-level enforcement rather than network controls that cannot be bypassed.

Question 133 

A security team needs to detect when AWS resources are shared publicly through resource-based policies across all AWS accounts in the organization. Which solution provides comprehensive detection?

A) Manually review resource policies in each account monthly 

B) Use AWS IAM Access Analyzer with organization-wide analyzer to detect external access 

C) Enable CloudTrail in all accounts 

D) Use AWS Config in each account separately

Answer: B

Explanation: 

Detecting resource sharing across multiple accounts requires centralized analysis of resource-based policies. IAM Access Analyzer with organization-wide analyzers provides comprehensive cross-account visibility into resource sharing. This centralized approach identifies all resources shared outside the organization without requiring per-account configuration.

IAM Access Analyzer organization-wide analyzers are created in the organization’s management account or designated delegated administrator account. These analyzers automatically analyze resource-based policies across all member accounts, identifying resources shared with external principals outside the organization.

Access Analyzer generates findings for S3 buckets, IAM roles, KMS keys, Lambda functions, SQS queues, SNS topics, and other resources with policies granting external access. Findings show which resources are shared, with whom (external account IDs or public access), and what permissions are granted. This centralized visibility enables governance over cross-account sharing.
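A minimal sketch of creating the organization analyzer and listing active findings is shown below; the analyzer ARN is a placeholder, and the calls assume they run in the management or delegated administrator account.

```python
import boto3

analyzer = boto3.client("accessanalyzer")

analyzer.create_analyzer(
    analyzerName="org-external-access",
    type="ORGANIZATION",
)

# Active findings represent resources currently shared with principals outside the org.
findings = analyzer.list_findings(
    analyzerArn="arn:aws:access-analyzer:us-east-1:111122223333:analyzer/org-external-access",
    filter={"status": {"eq": ["ACTIVE"]}},
)
for finding in findings["findings"]:
    print(finding["resource"], finding.get("principal"))
```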

A) Manual monthly review across multiple accounts doesn’t scale and introduces 30-day detection delays. Resources could be shared externally for significant periods before detection. Manual processes are resource-intensive and error-prone across organizations with many accounts and resources.

B) This is the correct answer because IAM Access Analyzer organization-wide analyzers automatically analyze all accounts in the organization, detect resources shared with external principals, generate findings showing external access grants, and provide centralized visibility without per-account configuration.

C) CloudTrail logs API calls including policy modifications but does not analyze policy content to identify external sharing. CloudTrail provides audit logs but requires additional analysis to detect external access. Access Analyzer provides policy analysis that CloudTrail logs alone do not offer.

D) AWS Config in separate accounts provides per-account configuration monitoring but requires manual aggregation for organization-wide visibility. Config lacks the policy reasoning capabilities and centralized external access detection that Access Analyzer provides specifically for resource sharing analysis.

Question 134 

An organization requires that all Amazon EBS volumes attached to EC2 instances be encrypted. Existing unencrypted volumes must be identified for remediation. Which solution implements this?

A) Manually check each EBS volume monthly 

B) Use AWS Config with the encrypted-volumes managed rule to detect unencrypted volumes and trigger remediation 

C) Enable EBS encryption by default and ignore existing volumes 

D) Use CloudWatch to monitor EBS volumes

Answer: B

Explanation: 

Ensuring EBS volume encryption requires both preventive controls for new volumes and detective controls for existing volumes. AWS Config provides continuous compliance monitoring that identifies unencrypted volumes across all accounts. The encrypted-volumes managed rule specifically detects this compliance violation.

AWS Config continuously evaluates EBS volumes against the encrypted-volumes rule, marking unencrypted volumes as non-compliant. Config generates compliance findings that can trigger EventBridge rules for notifications or automated remediation. The continuous evaluation ensures newly created unencrypted volumes are detected immediately.

Config remediation can automatically address unencrypted volumes by creating encrypted snapshots, launching new encrypted volumes from those snapshots, and updating instance attachments. While this process requires brief instance downtime for volume replacement, it can be automated through Systems Manager Automation documents orchestrating the encryption migration.
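Deploying the managed rule itself is a small boto3 call, sketched below; wiring up the automated remediation would be a separate put_remediation_configurations step.

```python
import boto3

config = boto3.client("config")

# AWS-managed rule that flags EBS volumes without encryption enabled.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "encrypted-volumes",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)
```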

A) Manual monthly checking doesn’t scale across organizations with numerous volumes and introduces 30-day detection delays. Unencrypted volumes could store sensitive data for significant periods before detection. Manual processes are operationally intensive and error-prone.

B) This is the correct answer because AWS Config continuously monitors EBS volume encryption status, the encrypted-volumes managed rule detects non-compliant unencrypted volumes, automatic remediation can migrate data to encrypted volumes, and this provides both detection and remediation capabilities.

C) Enabling EBS encryption by default prevents new unencrypted volumes but doesn’t address existing unencrypted volumes. The requirement specifically mentions identifying existing volumes for remediation, which requires detective controls beyond just enabling default encryption.

D) CloudWatch monitors operational metrics for AWS resources but does not evaluate EBS volume encryption configuration. CloudWatch lacks the compliance monitoring and configuration assessment capabilities that Config provides for detecting unencrypted volumes.

Question 135 

A company must implement controls preventing IAM users from creating access keys after the user has reached a maximum of one access key. Which solution enforces this?

A) Manually monitor IAM user access keys monthly 

B) Use IAM policies with conditions evaluating the number of existing access keys before allowing CreateAccessKey 

C) Enable MFA for all IAM users 

D) Use AWS Config to detect users with multiple access keys

Answer: B

Explanation: 

Limiting IAM user access keys reduces credential proliferation risks and simplifies credential management. IAM policies support conditions that evaluate the state of IAM resources during API requests. Custom condition logic can count existing access keys before permitting new key creation.

IAM policies can use policy variables and context keys in conditions, but IAM does not provide a direct “number of access keys” condition key. Enforcement therefore combines an IAM policy scoping who may call iam:CreateAccessKey with automated monitoring that removes or blocks additional keys once the limit is reached.

A more practical approach uses AWS Config to continuously monitor IAM users for access key counts. When users have one access key, Config marks them as compliant. When attempting to create a second key, preventive Lambda functions triggered by CloudTrail events can delete the newly created key and notify users that limits are exceeded.
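A sketch of the enforcement Lambda is shown below, assuming it is triggered by an EventBridge rule for CloudTrail CreateAccessKey events and that the event detail includes the target user name and the new key ID in responseElements.

```python
import boto3

iam = boto3.client("iam")

MAX_KEYS = 1

def handler(event, context):
    detail = event["detail"]
    user = detail["requestParameters"]["userName"]
    new_key_id = detail["responseElements"]["accessKey"]["accessKeyId"]

    keys = iam.list_access_keys(UserName=user)["AccessKeyMetadata"]
    if len(keys) > MAX_KEYS:
        # Remove the key that was created beyond the allowed limit and rely on a
        # separate notification step to inform the user.
        iam.delete_access_key(UserName=user, AccessKeyId=new_key_id)
```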

A) Manual monthly monitoring is reactive and introduces 30-day windows where users could create multiple access keys. Manual processes don’t prevent creation of excess keys and don’t scale across organizations with many IAM users.

B) This is the correct answer because IAM policies with custom conditions can evaluate access key state before allowing creation, preventive controls block excess key creation at authorization time, Lambda-backed authorization enables complex condition logic, and this enforces key quantity limits.

C) MFA strengthens authentication security but does not limit the number of access keys users can create. MFA and access key quantity limits address different security concerns. MFA alone doesn’t enforce key count restrictions.

D) AWS Config can detect users with multiple access keys after creation but this is reactive. Keys would be created and functional before detection and remediation. Preventive controls blocking excess key creation are more effective than detective controls.

Question 136 

A security engineer needs to implement automated incident response that takes forensic snapshots of EC2 instances when GuardDuty detects cryptocurrency mining activity. Which solution implements this?

A) Manually create snapshots when alerted to GuardDuty findings 

B) Use GuardDuty findings with EventBridge to trigger Lambda functions that create EBS snapshots of affected instances 

C) Use AWS Backup to create daily snapshots 

D) Enable CloudWatch detailed monitoring

Answer: B

Explanation: 

Automated forensic evidence collection during security incidents ensures that instance state is preserved before attackers can destroy evidence or terminate resources. GuardDuty findings provide real-time threat detection, while EventBridge enables event-driven automation connecting findings to response actions.

When GuardDuty detects cryptocurrency mining through findings like CryptoCurrency:EC2/BitcoinTool.B, EventBridge rules match specific finding types and trigger Lambda functions. The Lambda function extracts instance details from the finding, retrieves all attached EBS volume IDs, and creates snapshots of each volume while tagging them with incident information.

Forensic snapshots preserve instance storage state for later analysis without disrupting the running instance initially. Security teams can subsequently isolate or terminate malicious instances after evidence is secured. Snapshots can be copied to security accounts for investigation, ensuring evidence remains available even if compromised accounts are modified.
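A condensed sketch of the snapshot Lambda is shown below, assuming an EventBridge rule scoped to the relevant GuardDuty finding types and the standard EC2 finding structure in the event detail.

```python
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    """Triggered by an EventBridge rule matching GuardDuty crypto-mining findings."""
    finding = event["detail"]
    instance_id = finding["resource"]["instanceDetails"]["instanceId"]

    volumes = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )["Volumes"]

    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description=f"Forensic snapshot for GuardDuty finding {finding['id']}",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [
                    {"Key": "Incident", "Value": finding["id"]},
                    {"Key": "SourceInstance", "Value": instance_id},
                ],
            }],
        )
```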

A) Manual snapshot creation introduces delays allowing attackers time to destroy evidence or continue malicious activities. Manual processes require human availability and response time. During off-hours or high-volume incidents, manual response may be insufficient for timely evidence preservation.

B) This is the correct answer because GuardDuty findings trigger automated response through EventBridge, Lambda functions create EBS snapshots preserving instance state, snapshots capture forensic evidence before attacker evidence destruction, and automation provides rapid response without manual intervention.

C) AWS Backup creates scheduled snapshots for disaster recovery but doesn’t provide incident-triggered forensic snapshots. Backup schedules may not align with incident timing, potentially missing critical evidence. Backup addresses disaster recovery rather than security incident response.

D) CloudWatch detailed monitoring provides additional instance metrics but does not create EBS snapshots or automate incident response. CloudWatch monitors instance performance rather than orchestrating security response actions.

Question 137 

An organization requires that AWS Lambda functions can only be invoked by specific IAM roles and not directly by IAM users. Which configuration enforces this?

A) Use Lambda resource-based policies denying access from IAM users and allowing only specific roles 

B) Delete all IAM users in the account 

C) Enable Lambda reserved concurrency 

D) Use Lambda environment variables to store allowed roles

Answer: A

Explanation: 

Controlling Lambda function invocation requires authorization policies that distinguish between different principal types. Lambda resource-based policies provide function-level access control that can specify exactly which principals (users, roles, services) can invoke functions. These policies enable fine-grained invocation restrictions.

Lambda resource-based policies use the Principal element to specify allowed callers. Policies can explicitly allow specific IAM role ARNs while omitting IAM users from allowed principals. Alternatively, explicit deny statements can block invocations from IAM user principals using condition keys like aws:PrincipalType.

The aws:PrincipalType condition key distinguishes between different principal types (User, Role, AssumedRole). Lambda resource-based policies with deny statements for PrincipalType=User block direct user invocations while allowing role-based invocations. This ensures functions are called through application roles rather than user credentials.
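A minimal sketch of granting invoke access to a single application role is shown below; the function and role names are placeholders. Because no other principals are granted invoke in the resource policy, and identity-based policies for users are kept free of lambda:InvokeFunction, direct user invocation is not authorized.

```python
import boto3

lam = boto3.client("lambda")

# Allow invocation only from a specific application role (placeholder names/ARNs).
lam.add_permission(
    FunctionName="order-processor",
    StatementId="allow-app-role-invoke",
    Action="lambda:InvokeFunction",
    Principal="arn:aws:iam::111122223333:role/order-service-role",
)
```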

A) This is the correct answer because Lambda resource-based policies control invocation authorization, policies can explicitly allow specific roles while denying user principals, aws:PrincipalType conditions distinguish between users and roles, and this enforces role-based invocation requirements.

B) Deleting all IAM users is impractical and eliminates legitimate uses of IAM users for console access and specific administrative tasks. The requirement is to prevent direct Lambda invocation by users, not eliminate users entirely. This extreme approach creates operational challenges.

C) Lambda reserved concurrency controls maximum concurrent executions but does not control authorization or which principals can invoke functions. Reserved concurrency addresses capacity management rather than access control.

D) Environment variables store configuration data but do not enforce authorization or invocation restrictions. While applications could check environment variables, this relies on application-level enforcement that can be bypassed. Resource-based policies provide AWS-enforced authorization.

Question 138 

A company must ensure that Amazon S3 bucket encryption cannot be disabled once enabled. Which approach enforces this?

A) Manually enable encryption on all buckets 

B) Use S3 bucket policies with conditions denying PutBucketEncryption API calls that disable encryption 

C) Enable S3 default encryption and ignore bucket-level settings 

D) Use AWS Config to detect encryption changes

Answer: B

Explanation: 

Preventing bucket encryption from being disabled requires policies that block API calls attempting to remove encryption configuration. S3 bucket policies can include conditions evaluating encryption settings in API requests. Deny statements ensure encryption configuration cannot be removed.

S3 bucket policies can deny the s3:PutEncryptionConfiguration action, which is the permission behind both the PutBucketEncryption and DeleteBucketEncryption API calls, so the encryption configuration cannot be weakened or removed once set. Explicit deny statements prevent encryption removal regardless of the caller’s IAM permissions.

Combining bucket policies with Service Control Policies provides defense-in-depth. SCPs can deny encryption removal organization-wide, ensuring no account can disable bucket encryption. This dual-layer approach prevents both intentional and accidental encryption removal through strong preventive controls.
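A sketch of such a bucket policy is shown below. It denies s3:PutEncryptionConfiguration for everyone except a placeholder security-admin role; the bucket name, account ID, and role ARN are assumptions.

```python
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "example-secure-bucket"  # hypothetical bucket name

# Deny changes to the bucket's encryption configuration for all principals except a
# designated administration role (placeholder ARN).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyEncryptionConfigChanges",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutEncryptionConfiguration",
        "Resource": f"arn:aws:s3:::{BUCKET}",
        "Condition": {
            "ArnNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/security-admin"
            }
        },
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```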

A) Manual encryption enablement doesn’t prevent subsequent disabling. Without preventive policies, users with appropriate permissions can modify bucket encryption settings. Manual processes establish initial configuration but don’t maintain protection against changes.

B) This is the correct answer because S3 bucket policies can deny API calls that disable encryption, condition keys evaluate encryption parameters in requests, explicit denies prevent encryption removal regardless of IAM permissions, and this ensures encryption cannot be disabled once enabled.

C) S3 default encryption ensures new objects are encrypted but doesn’t prevent the bucket’s encryption configuration from being changed or weakened. Default encryption sets a baseline for object encryption but doesn’t provide the configuration immutability the requirement calls for.

D) AWS Config detects encryption configuration changes after they occur but doesn’t prevent changes. Config is reactive, allowing encryption to be disabled before remediation. Preventive controls blocking encryption removal are more effective than detective controls.

Question 139 

A security team needs to identify all IAM roles that have been inactive (not used) for more than 90 days for potential removal. Which approach provides this information?

A) Manually review CloudTrail logs for role usage 

B) Use IAM credential reports with last used information and automated analysis identifying inactive roles 

C) Delete all IAM roles and recreate as needed 

D) Enable GuardDuty to detect unused roles

Answer: B

Explanation: 

Identifying unused IAM roles supports least privilege principles by removing unnecessary access paths. IAM credential reports provide comprehensive information about IAM principals including last used timestamps. Automated analysis of these reports identifies roles meeting inactivity criteria.

IAM credential reports are CSV files covering IAM users, including password last used and access key last used dates; role last-used timestamps are exposed through the IAM API (the RoleLastUsed field returned by GetRole and GetAccountAuthorizationDetails) and IAM Access Advisor. Security teams combine these sources and use scripts or automated tools to identify roles with last used dates older than 90 days, or roles that have never been used.

Automated analysis using Lambda functions or AWS Config custom rules can periodically evaluate credential reports, identify inactive roles, and generate notifications or tickets for review. Security teams review identified roles to determine whether they’re truly unnecessary or simply used infrequently but still required for specific scenarios.
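A sketch of the analysis using the IAM API’s RoleLastUsed data is shown below; roles it flags are candidates for review rather than automatic deletion.

```python
import datetime
import boto3

iam = boto3.client("iam")

THRESHOLD = datetime.timedelta(days=90)
now = datetime.datetime.now(datetime.timezone.utc)

inactive = []
for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        # list_roles does not return last-used data, so fetch each role individually.
        last_used = iam.get_role(RoleName=role["RoleName"])["Role"].get(
            "RoleLastUsed", {}
        ).get("LastUsedDate")
        if last_used is None or now - last_used > THRESHOLD:
            inactive.append(role["RoleName"])

print(inactive)
```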

A) Manual CloudTrail log review for role usage across 90 days doesn’t scale and is extremely time-consuming. CloudTrail logs are voluminous and lack the summarized “last used” information that credential reports provide. Manual analysis is impractical for this use case.

B) This is the correct answer because IAM credential reports contain last used information for all roles, automated analysis identifies roles inactive for specified periods, reports provide comprehensive role usage data in accessible format, and this enables systematic identification of unused roles.

C) Deleting all roles and recreating them is operationally disruptive and would cause widespread application failures. This extreme approach creates unnecessary outages and operational burden. The requirement is to identify unused roles for evaluation, not to delete all roles indiscriminately.

D) GuardDuty detects security threats through behavioral analysis but does not track IAM role usage for identifying inactive roles. GuardDuty focuses on threat detection rather than access management or role lifecycle analysis.

Question 140 

An organization requires that all AWS API calls from production accounts be logged to an immutable audit trail in a separate security account. Which solution implements this?

A) Enable CloudTrail in each production account separately 

B) Create an organization CloudTrail in the management account delivering logs to S3 in the security account with S3 Object Lock 

C) Use CloudWatch Logs in each account 

D) Enable VPC Flow Logs in all accounts

Answer: B

Explanation: 

Centralized immutable audit logging requires collecting logs from all accounts to a single secure location where they cannot be modified or deleted. AWS Organizations CloudTrail provides automatic multi-account logging, while S3 Object Lock ensures log immutability. This combination creates tamper-proof audit trails.

Organization CloudTrails created in the management account automatically log API activity from all member accounts without requiring individual trail configuration in each account. Logs from all accounts are delivered to a centralized S3 bucket in a designated security account, simplifying log management and analysis.

S3 Object Lock in compliance mode prevents log deletion or modification by any user including the AWS account root user. When Object Lock is enabled on the log bucket with appropriate retention periods, logs become immutable for the specified duration. This ensures audit trail integrity and meets compliance requirements for tamper-proof logging.
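A condensed sketch of the two building blocks is shown below; bucket and trail names are placeholders, the S3 calls would run in the security account while the trail is created from the management account, and the bucket policy granting CloudTrail write access (plus any CreateBucketConfiguration needed outside us-east-1) is omitted for brevity.

```python
import boto3

s3 = boto3.client("s3")                   # security account
cloudtrail = boto3.client("cloudtrail")   # management account

LOG_BUCKET = "org-cloudtrail-logs-example"  # hypothetical bucket name

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket=LOG_BUCKET, ObjectLockEnabledForBucket=True)
s3.put_object_lock_configuration(
    Bucket=LOG_BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Organization trail covering all member accounts, delivering to the bucket above.
cloudtrail.create_trail(
    Name="org-audit-trail",
    S3BucketName=LOG_BUCKET,
    IsOrganizationTrail=True,
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)
cloudtrail.start_logging(Name="org-audit-trail")
```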

A) Enabling CloudTrail separately in each account requires per-account configuration and doesn’t provide centralized logging. Individual account administrators could potentially disable trails or delete logs. Separate trails don’t provide the centralized immutability required by the organization.

B) This is the correct answer because organization CloudTrails automatically log all member accounts, logs are delivered to centralized S3 bucket in security account, S3 Object Lock provides immutability preventing log modification or deletion, and this creates comprehensive tamper-proof audit trails.

C) CloudWatch Logs provides log aggregation but lacks native multi-account automatic collection and doesn’t provide immutability guarantees like S3 Object Lock. CloudWatch Logs retention is configurable but doesn’t offer the compliance-mode immutability required for audit trails.

D) VPC Flow Logs capture network traffic metadata but do not log AWS API calls. Flow logs provide network-level visibility but cannot replace CloudTrail for API activity auditing. Flow logs are complementary to CloudTrail but insufficient for audit trail requirements.
