Question 161
An organization requires that AWS CloudFormation stacks cannot be deleted without approval from the security team. Which solution implements this requirement?
A) Enable termination protection on all CloudFormation stacks
B) Use IAM policies denying DeleteStack and implement approval workflows
C) Manually monitor stack deletions
D) Use AWS Config to detect deleted stacks
Answer: B
Explanation:
Preventing unauthorized CloudFormation stack deletion requires both blocking direct deletion through IAM policies and implementing approval workflows for legitimate deletions. IAM policies prevent casual or accidental deletion, while approval workflows provide governance for necessary stack removal.
IAM policies deny the DeleteStack action for all users except specific infrastructure management roles. For legitimate stack deletion needs, approval workflows using Step Functions orchestrate multi-party authorization. Users submit deletion requests triggering workflows that notify security teams, wait for approval, and execute deletion only after authorization.
The approval workflow can integrate with ticketing systems, require multiple approvers for production stacks, and maintain comprehensive audit trails showing who approved deletions and why. Lambda functions with elevated permissions execute approved deletions, while direct API access is blocked by IAM policies denying DeleteStack.
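The deny policy described above can be sketched as follows. This is a minimal illustration, not an official template; the role name `InfraDeletionRole` is a hypothetical placeholder for the workflow's elevated deletion role.

```python
import json

# Sketch of an IAM policy denying CloudFormation stack deletion for every
# principal except a hypothetical approval-workflow role.
deny_delete_stack = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyStackDeletion",
            "Effect": "Deny",
            "Action": "cloudformation:DeleteStack",
            "Resource": "*",
            "Condition": {
                "StringNotLike": {
                    # Placeholder: the role the Step Functions workflow assumes
                    "aws:PrincipalArn": "arn:aws:iam::*:role/InfraDeletionRole"
                }
            },
        }
    ],
}

print(json.dumps(deny_delete_stack, indent=2))
```

Because explicit denies override allows, even broad administrator policies cannot delete stacks outside the approval path.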
A) CloudFormation termination protection prevents stack deletion but can be disabled by users with appropriate permissions. Termination protection is a helpful safeguard but doesn’t provide the approval workflow or prevent authorized users from disabling protection and deleting stacks.
B) This is the correct answer because IAM policies deny unauthorized stack deletion, approval workflows provide governance for legitimate deletions, multi-party authorization ensures security team review, and this combination prevents casual deletion while enabling controlled decommissioning.
C) Manual monitoring doesn’t prevent stack deletion and is reactive. Stacks would be deleted before monitoring detects the change. Manual processes don’t provide the preventive controls or approval workflows required for governance.
D) AWS Config detects deleted stacks after deletion occurs but doesn’t prevent deletion or implement approval workflows. Config is reactive, identifying stack removal after resources are destroyed. Preventive controls are needed for this requirement.
Question 162
A company must ensure that EC2 instances cannot communicate with Amazon S3 except through VPC endpoints. Which combination enforces this?
A) Place instances in private subnets only
B) Use S3 bucket policies requiring requests to originate from specific VPC endpoint IDs and remove NAT gateway/internet gateway routes from instance subnets
C) Enable S3 encryption
D) Use security groups to block S3 traffic
Answer: B
Explanation:
Enforcing VPC endpoint usage for S3 access requires both network-level controls preventing alternative routes and S3 bucket policies validating request sources. Removing internet/NAT gateway routes ensures instances cannot reach S3 via public internet, while bucket policies provide defense-in-depth by allowing only VPC endpoint traffic.
S3 bucket policies use the aws:SourceVpce condition key to require that requests originate from specific VPC endpoint IDs. Policies deny S3 operations unless requests come through approved VPC endpoints. This ensures that even if network misconfigurations occur, bucket policies enforce VPC endpoint usage.
Removing routes to internet gateways and NAT gateways from instance subnet route tables prevents instances from reaching S3 public endpoints. With only VPC endpoint routes available, instances must use private VPC endpoints for S3 access. This network-level enforcement complements bucket policy controls.
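A bucket policy of the kind described might look like the following sketch; the bucket name and VPC endpoint ID are placeholders.

```python
import json

# Sketch of an S3 bucket policy denying all access unless requests arrive
# through a specific VPC endpoint (bucket and endpoint ID are placeholders).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNonVpceAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {
                "StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Note that a deny with `StringNotEquals` also blocks console and cross-account access that does not traverse the endpoint, so administrative access paths should be planned before applying it.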
A) Private subnets alone don’t prevent instances from accessing S3 through NAT gateways if NAT gateways exist. Private subnet placement is one component but insufficient without removing NAT gateway routes and implementing bucket policies requiring VPC endpoint usage.
B) This is the correct answer because S3 bucket policies require VPC endpoint origins, removing internet/NAT routes prevents alternative S3 access paths, dual-layer controls ensure VPC endpoint enforcement, and this provides comprehensive restriction of S3 access to VPC endpoints only.
C) S3 encryption protects data at rest but doesn’t control network paths or enforce VPC endpoint usage. Encryption and access path controls address different security concerns. Encryption alone doesn’t restrict communication methods.
D) Security groups cannot block S3 traffic specifically since S3 is accessed via HTTPS (port 443) which instances need for other services. Security groups lack the granularity to block S3 while allowing other HTTPS traffic. S3 bucket policies provide the necessary application-level control.
Question 163
A security engineer needs to detect when IAM policies are created that grant cross-account access to external AWS accounts not in the organization. Which solution provides this detection?
A) Use IAM Access Analyzer to detect cross-account access grants to external accounts
B) Manually review IAM policies monthly
C) Enable CloudTrail logging
D) Use GuardDuty
Answer: A
Explanation:
Detecting cross-account access requires analyzing IAM policies and resource-based policies to identify external access grants. IAM Access Analyzer specifically analyzes policies for cross-account sharing, identifying access granted to principals outside the organization. This automated analysis provides comprehensive visibility into external access.
IAM Access Analyzer continuously evaluates IAM role trust policies, resource-based policies, and other policies that can grant cross-account access. Analyzer generates findings when policies grant access to AWS accounts outside the organization’s trusted zone. Findings show which resources are accessible to external accounts and what permissions are granted.
Access Analyzer findings can trigger automated alerts through EventBridge. Security teams receive notifications when new cross-account access is granted, enabling rapid review of whether external sharing is authorized. Analyzer provides ongoing monitoring as policies change, catching new external access grants immediately.
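The EventBridge rule can match Access Analyzer finding events with a pattern along these lines (a sketch; the SNS wiring is indicated in the comment rather than executed):

```python
import json

# Sketch of an EventBridge event pattern matching active IAM Access Analyzer
# findings so they can be routed to an SNS topic for security-team review.
event_pattern = {
    "source": ["aws.access-analyzer"],
    "detail-type": ["Access Analyzer Finding"],
    "detail": {"status": ["ACTIVE"]},
}

# In practice this pattern would be attached to a rule, e.g. with boto3:
#   events.put_rule(Name="external-access-findings",
#                   EventPattern=json.dumps(event_pattern))
# and the rule's target would be an SNS topic subscribed by the security team.
print(json.dumps(event_pattern, indent=2))
```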
A) This is the correct answer because IAM Access Analyzer specifically detects cross-account access in policies, analyzer identifies external account access grants, findings show which resources are shared with external principals, and this provides comprehensive cross-account access visibility.
B) Manual monthly policy review doesn’t scale across organizations with numerous policies and introduces 30-day detection delays. Cross-account access could exist for significant periods before detection. Manual analysis cannot match the comprehensive policy evaluation Access Analyzer provides.
C) CloudTrail logs policy creation and modification events but doesn’t analyze policy content to identify cross-account access grants. CloudTrail provides audit trails but requires additional analysis to detect external sharing. Access Analyzer provides the policy analysis CloudTrail lacks.
D) GuardDuty detects security threats through behavioral analysis but doesn’t analyze IAM policies for cross-account access configuration. GuardDuty focuses on detecting malicious activity rather than identifying policy-based external access grants.
Question 164
An organization requires that all AWS Lambda environment variables containing sensitive data be encrypted using customer-managed KMS keys. Which configuration implements this?
A) Lambda encrypts environment variables automatically
B) Configure Lambda functions with environment variable encryption using customer-managed KMS keys
C) Store sensitive data in Lambda code
D) Use Lambda layers for sensitive data
Answer: B
Explanation:
Lambda environment variables can be encrypted at rest using KMS keys. While Lambda provides default encryption using AWS-managed keys, organizations requiring customer-managed keys must explicitly configure functions with specific KMS key ARNs. This ensures sensitive configuration data is encrypted using keys under organizational control.
When configuring Lambda functions, environment variable encryption is enabled by specifying a customer-managed KMS key ARN. The KMS key must grant Lambda service permissions to decrypt environment variables during function initialization. After configuration, all environment variables are encrypted at rest using the specified customer-managed key.
Lambda automatically decrypts environment variables using the configured KMS key when functions are invoked. Applications read environment variables as plaintext through standard mechanisms (for example, process.env in Node.js or os.environ in Python), with decryption transparent to application code. The KMS key policy must allow the Lambda execution role to use the key for decryption operations.
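The configuration can be sketched as the parameter set passed to a function update; the function name, key ARN, and variable values below are placeholders, and in practice they would be supplied to boto3's `update_function_configuration` call.

```python
# Sketch: build the parameters that attach a customer-managed KMS key to a
# Lambda function's environment-variable encryption (all values placeholders).
def build_lambda_env_config(function_name, kms_key_arn, variables):
    return {
        "FunctionName": function_name,
        # Customer-managed key; omitting KMSKeyArn falls back to the
        # AWS-managed default key.
        "KMSKeyArn": kms_key_arn,
        "Environment": {"Variables": variables},
    }

params = build_lambda_env_config(
    "payments-processor",
    "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    {"DB_PASSWORD_PARAM": "/prod/db/password"},
)

# Real call (not executed here):
#   boto3.client("lambda").update_function_configuration(**params)
print(params["KMSKeyArn"])
```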
A) Lambda does encrypt environment variables at rest by default but uses AWS-managed keys rather than customer-managed keys. Default encryption doesn’t meet requirements for customer-managed keys with organizational control over key policies, rotation, and audit capabilities.
B) This is the correct answer because Lambda functions can be configured with customer-managed KMS keys for environment variable encryption, configuration specifies the KMS key ARN during function creation or update, this provides encryption using keys under organizational control, and decryption is automatic during function execution.
C) Storing sensitive data directly in Lambda code creates security risks by embedding secrets in deployment packages. Code is often stored in version control systems where secrets become permanently embedded in repository history. Environment variables with encryption provide proper secrets management.
D) Lambda layers package shared libraries and dependencies but are not designed for sensitive data storage. Layers are deployment artifacts distributed across functions and don’t provide the encryption and access control that encrypted environment variables offer.
Question 165
A company must implement controls ensuring that Amazon DynamoDB tables cannot have their deletion protection disabled. Which solution enforces this?
A) Enable deletion protection on all tables manually
B) Use IAM policies or Service Control Policies denying UpdateTable actions that disable deletion protection
C) Use DynamoDB backups
D) Enable DynamoDB encryption
Answer: B
Explanation:
DynamoDB deletion protection prevents accidental table deletion by requiring deletion protection to be disabled before tables can be deleted. Preventing deletion protection from being disabled requires IAM policies that block UpdateTable operations attempting to set DeletionProtectionEnabled to false. This ensures protection remains active.
IAM policies can use the dynamodb:DeletionProtectionEnabled condition key to evaluate UpdateTable requests. Policies explicitly deny UpdateTable actions when the request attempts to set DeletionProtectionEnabled to false. This preventive control ensures tables maintain deletion protection regardless of user permissions.
Service Control Policies provide organization-wide enforcement preventing any account from disabling deletion protection on DynamoDB tables. Even account administrators cannot override SCPs, ensuring consistent deletion protection across all accounts. This defense-in-depth approach provides maximum protection against accidental table deletion.
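A deny statement using the condition key named in the explanation might be sketched as below; treat the key name as the source describes it and verify it against current AWS documentation before deploying.

```python
import json

# Sketch of a deny statement (usable in an IAM policy or SCP) blocking
# UpdateTable calls that attempt to turn deletion protection off.
deny_disable_protection = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeepDeletionProtectionOn",
            "Effect": "Deny",
            "Action": "dynamodb:UpdateTable",
            "Resource": "*",
            "Condition": {
                # Condition key as described in the explanation above
                "Bool": {"dynamodb:DeletionProtectionEnabled": "false"}
            },
        }
    ],
}

print(json.dumps(deny_disable_protection, indent=2))
```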
A) Manually enabling deletion protection establishes initial configuration but doesn’t prevent subsequent disabling by users with UpdateTable permissions. Without preventive policies, deletion protection can be removed, eliminating its protective value.
B) This is the correct answer because IAM policies can deny UpdateTable operations that disable deletion protection, condition keys evaluate DeletionProtectionEnabled parameter values, preventive controls block deletion protection removal, and SCPs enable organization-wide enforcement.
C) DynamoDB backups enable recovery from deletion but don’t prevent tables from being deleted or deletion protection from being disabled. Backups address disaster recovery rather than preventing destructive operations on active tables.
D) DynamoDB encryption protects data at rest but doesn’t prevent table deletion or deletion protection removal. Encryption and deletion controls address different security concerns. Encryption alone doesn’t enforce deletion protection requirements.
Question 166
A security team needs to detect when AWS Elastic Load Balancers are created without access logging enabled. Which solution provides automated detection?
A) Manually review load balancers monthly
B) Use AWS Config with custom rules detecting load balancers without access logging enabled
C) Enable VPC Flow Logs
D) Use GuardDuty
Answer: B
Explanation:
Ensuring load balancer access logging provides critical visibility into traffic patterns, request sources, and potential security issues. Detecting load balancers without logging requires continuous configuration monitoring. AWS Config with custom rules evaluates load balancer configurations against organizational logging requirements.
AWS Config custom rules use Lambda functions to evaluate Elastic Load Balancer configurations. The Lambda function retrieves load balancer attributes, checks whether access logging is enabled, and marks load balancers without logging as non-compliant. Config continuously monitors load balancers, detecting new ones without logging immediately.
When Config detects non-compliant load balancers, automated remediation can enable access logging by updating load balancer attributes through the ModifyLoadBalancerAttributes API. Remediation specifies S3 bucket destinations for logs and enables logging without requiring manual intervention. This ensures comprehensive logging coverage.
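The compliance check inside the custom rule's Lambda can be reduced to evaluating the attribute list that `describe_load_balancer_attributes` returns for an ALB/NLB; a minimal sketch of that evaluation:

```python
# Sketch of the evaluation logic for a Config custom rule. ALB/NLB attributes
# come back as a list of {"Key", "Value"} pairs; access logging is enabled
# when access_logs.s3.enabled is the string "true".
def evaluate_access_logging(attributes):
    enabled = any(
        a["Key"] == "access_logs.s3.enabled" and a["Value"] == "true"
        for a in attributes
    )
    return "COMPLIANT" if enabled else "NON_COMPLIANT"

print(evaluate_access_logging([{"Key": "access_logs.s3.enabled", "Value": "true"}]))
# COMPLIANT
print(evaluate_access_logging([{"Key": "access_logs.s3.enabled", "Value": "false"}]))
# NON_COMPLIANT
```

The real rule would fetch the attributes with boto3 and report the result back to Config via `put_evaluations`.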
A) Manual monthly review introduces 30-day windows where load balancers operate without logging. During these periods, security teams lack visibility into load balancer traffic. Manual processes don’t scale across organizations with many load balancers.
B) This is the correct answer because Config custom rules continuously monitor load balancer configurations, rules detect missing access logging, automated remediation can enable logging on non-compliant load balancers, and this ensures comprehensive logging coverage.
C) VPC Flow Logs capture network traffic at the network interface level but don’t provide application-layer visibility into HTTP requests, user agents, or request patterns that load balancer access logs capture. Flow logs complement but don’t replace load balancer logging.
D) GuardDuty detects security threats through behavioral analysis but doesn’t monitor load balancer configuration for logging enablement. GuardDuty focuses on threat detection rather than configuration compliance monitoring.
Question 167
An organization requires that AWS Systems Manager Session Manager sessions be logged to both CloudWatch Logs and S3 for compliance. Which configuration implements this?
A) Session Manager logs to CloudWatch automatically
B) Configure Session Manager preferences to send session logs to CloudWatch Logs and S3 simultaneously
C) Enable CloudTrail logging
D) Use VPC Flow Logs
Answer: B
Explanation:
Session Manager provides centralized logging configuration through Session Manager Preferences in Systems Manager. These preferences define where session logs and output are delivered. Organizations can configure multiple log destinations including CloudWatch Logs and S3 for comprehensive logging and compliance.
Session Manager Preferences allow specifying CloudWatch Logs log group ARNs and S3 bucket names for session output logging. When configured, all Session Manager sessions automatically send logs to both destinations. CloudWatch Logs enables real-time monitoring and alerting, while S3 provides long-term cost-effective storage for compliance retention.
Session logs contain comprehensive information about shell commands executed, output generated, and session metadata including who initiated sessions, when they occurred, and which instances were accessed. Dual logging ensures redundancy and meets compliance requirements for multiple log copies in different storage systems.
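The dual-destination preferences are stored as a session document; a sketch of the document content (bucket and log group names are placeholders) follows.

```python
import json

# Sketch of Session Manager preferences enabling logging to both S3 and
# CloudWatch Logs. In practice this content is saved as the
# SSM-SessionManagerRunShell document in each region.
preferences = {
    "schemaVersion": "1.0",
    "description": "Session Manager dual-logging preferences",
    "sessionType": "Standard_Stream",
    "inputs": {
        "s3BucketName": "example-session-logs",         # long-term compliance copy
        "s3EncryptionEnabled": True,
        "cloudWatchLogGroupName": "/ssm/session-logs",  # real-time monitoring copy
        "cloudWatchEncryptionEnabled": True,
    },
}

print(json.dumps(preferences, indent=2))
```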
A) Session Manager does not automatically log to CloudWatch without configuration. Logging destinations must be explicitly configured through Session Manager Preferences. Without configuration, session activity may not be logged or only locally logged on instances.
B) This is the correct answer because Session Manager Preferences configure log destinations, preferences support simultaneous CloudWatch Logs and S3 logging, configuration ensures all sessions are logged to both destinations, and this meets compliance requirements for dual logging.
C) CloudTrail logs Session Manager API calls like StartSession but doesn’t capture session output or commands executed during sessions. CloudTrail provides API-level audit trails but not the detailed session content that Session Manager logging captures.
D) VPC Flow Logs capture network traffic metadata but don’t log Session Manager session content, commands executed, or output generated. Flow logs provide network visibility but cannot replace application-level session logging.
Question 168
A company must ensure that Amazon RDS automated backups are retained for the maximum allowed period to meet regulatory requirements. What is the maximum retention period?
A) 7 days
B) 35 days
C) 90 days
D) 1 year
Answer: B
Explanation:
RDS automated backup retention is configurable from 0 (disabled) to 35 days. This automated backup retention applies to daily snapshots and transaction logs enabling point-in-time recovery. For organizations requiring longer retention, manual snapshots or AWS Backup provide extended retention capabilities.
Automated backups with 35-day retention provide over a month of point-in-time recovery capability. Organizations can restore databases to any point within the retention window, recovering from data corruption, accidental deletions, or other issues. The 35-day limit balances storage costs with recovery flexibility.
For regulatory requirements exceeding 35 days, organizations must implement supplementary backup strategies. Manual RDS snapshots have indefinite retention until explicitly deleted. AWS Backup provides centralized backup management with retention policies supporting years of retention, copying backups across regions and accounts.
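The limits above can be captured in a simple guard that a deployment pipeline might run before calling `modify_db_instance` (the function is an illustrative helper, not an AWS API):

```python
# Sketch of a validation helper enforcing the RDS automated-backup limits:
# 0 disables automated backups, 35 days is the maximum retention.
def validate_retention_days(days):
    if not 0 <= days <= 35:
        raise ValueError(
            f"RDS automated backup retention must be 0-35 days, got {days}"
        )
    return days

print(validate_retention_days(35))
# 35
# Anything longer (e.g. 90 days or 1 year) must be handled with manual
# snapshots or AWS Backup plans rather than BackupRetentionPeriod.
```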
A) 7 days is the default automated backup retention for RDS instances but not the maximum. Organizations can configure longer retention up to 35 days for automated backups meeting various compliance requirements.
B) This is the correct answer because 35 days is the maximum automated backup retention period for Amazon RDS, automated backups within this period enable point-in-time recovery, and longer retention requires manual snapshots or AWS Backup.
C) 90 days exceeds RDS automated backup maximum retention. While AWS Backup can maintain RDS backups for 90 days or longer, automated RDS backup retention specifically has a 35-day maximum limit.
D) One year greatly exceeds automated backup retention limits. Long-term retention requires manual snapshot management or AWS Backup policies. Automated backups are designed for short to medium-term recovery rather than long-term archival.
Question 169
A security engineer needs to implement controls preventing AWS Systems Manager Parameter Store parameters from being deleted. Which solution enforces this?
A) Use IAM policies denying DeleteParameter and DeleteParameters actions
B) Enable parameter encryption
C) Use Parameter Store versioning
D) Store parameters in Secrets Manager instead
Answer: A
Explanation:
Preventing Parameter Store parameter deletion requires IAM policies that block delete operations. IAM policies can explicitly deny DeleteParameter and DeleteParameters actions, ensuring parameters cannot be removed even by users with broad Systems Manager permissions. This preventive control protects critical configuration data.
IAM policies use explicit deny statements for parameter deletion actions. Deny statements take precedence over allow statements, ensuring even administrators cannot delete parameters. Policies can target specific parameter paths, protecting production parameters while allowing deletion of test or development parameters.
Service Control Policies provide organization-wide enforcement of parameter deletion restrictions. SCPs deny deletion across all accounts, ensuring consistent protection. For exceptional cases requiring deletion, controlled processes through designated roles with SCP exceptions enable authorized parameter lifecycle management.
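A path-scoped deny of this kind can be sketched as follows; the `/prod/` parameter path is a placeholder for whatever hierarchy holds protected parameters.

```python
import json

# Sketch of a deny policy protecting production parameters from deletion
# while leaving test/development parameter paths unaffected.
deny_parameter_deletion = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectProdParameters",
            "Effect": "Deny",
            "Action": ["ssm:DeleteParameter", "ssm:DeleteParameters"],
            "Resource": "arn:aws:ssm:*:*:parameter/prod/*",
        }
    ],
}

print(json.dumps(deny_parameter_deletion, indent=2))
```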
A) This is the correct answer because IAM policies can explicitly deny parameter deletion actions, deny statements prevent deletion regardless of other permissions, policies can protect specific parameter paths, and SCPs enable organization-wide enforcement.
B) Parameter encryption protects parameter values at rest but doesn’t prevent parameter deletion. Encryption and deletion controls address different security concerns. Encrypted parameters can still be deleted by users with appropriate permissions.
C) Parameter Store versioning maintains parameter value history but doesn’t prevent parameter deletion. Versioning enables recovery of previous values but doesn’t block deletion of the parameter itself. Deletion removes all versions.
D) Migrating to Secrets Manager doesn’t solve the deletion prevention requirement. Secrets Manager secrets also require IAM policies to prevent deletion. The protection mechanism (IAM deny policies) is the same regardless of which service stores the data.
Question 170
An organization requires that all Amazon S3 bucket policies explicitly deny HTTP (non-TLS) requests. Which bucket policy condition implements this?
A) Use aws:SourceIp condition
B) Use aws:SecureTransport condition key denying requests where SecureTransport is false
C) Enable S3 encryption at rest
D) Use S3 Block Public Access
Answer: B
Explanation:
Enforcing TLS for S3 requests ensures data in transit is encrypted. The aws:SecureTransport condition key evaluates whether requests use HTTPS (TLS) or HTTP. Bucket policies denying requests when SecureTransport equals false enforce TLS usage, blocking unencrypted HTTP requests.
S3 bucket policies include explicit deny statements for all actions (s3:*) when the aws:SecureTransport condition evaluates to false. This means any request using HTTP instead of HTTPS is denied regardless of authentication or authorization. The policy ensures all S3 access occurs over encrypted connections.
This bucket-level enforcement provides defense-in-depth beyond application configuration. Even if applications are misconfigured to use HTTP, bucket policies block unencrypted requests at the S3 service level. This prevents data transmission over unencrypted channels, protecting data confidentiality in transit.
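The deny-HTTP statement described above can be sketched as follows (the bucket name is a placeholder):

```python
import json

# Sketch of a bucket policy statement denying any request made over plain
# HTTP; Bool/aws:SecureTransport is "false" for non-TLS requests.
enforce_tls_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(enforce_tls_policy, indent=2))
```

Including both the bucket ARN and the `/*` object ARN matters: bucket-level operations (like ListBucket) and object-level operations are matched by different resources.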
A) The aws:SourceIp condition evaluates request source IP addresses but doesn’t determine whether requests use TLS. SourceIp addresses network access but not transport encryption. TLS enforcement requires the SecureTransport condition.
B) This is the correct answer because aws:SecureTransport condition evaluates whether requests use HTTPS, bucket policies can deny requests when SecureTransport is false, this enforces TLS usage for all S3 access, and unencrypted HTTP requests are blocked.
C) S3 encryption at rest protects stored objects but doesn’t enforce TLS for requests. Encryption at rest and encryption in transit are separate controls. Rest encryption doesn’t prevent unencrypted HTTP requests.
D) S3 Block Public Access prevents public access to buckets but doesn’t enforce TLS for requests. Block Public Access addresses authorization concerns while TLS enforcement addresses transport encryption. These are complementary but independent controls.
Question 171
A company must ensure that AWS Lambda functions cannot access the internet for any purpose. Which configuration implements this?
A) Deploy Lambda in VPC with no NAT gateway or internet gateway routes and no VPC endpoints
B) Use security groups to block all outbound traffic
C) Disable Lambda internet access in function settings
D) Use Lambda reserved concurrency
Answer: A
Explanation:
Completely isolating Lambda functions from internet access requires deploying them in VPC subnets without any outbound connectivity routes. This means no NAT gateways, internet gateways, or VPC endpoints. Without routing paths to external networks, Lambda functions cannot make any outbound connections.
Lambda functions in VPC subnets inherit the subnet’s routing configuration. If route tables contain no routes to NAT gateways, internet gateways, or VPC endpoints, functions have no network path to any external resources. This complete isolation ensures functions cannot communicate with internet services or AWS service public endpoints.
This configuration is extremely restrictive and typically used for highly sensitive workloads that process data without external dependencies. Functions can only interact with resources in the same VPC and cannot access AWS services unless privately accessible through VPC endpoints. Most workloads require some AWS service access.
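An audit helper checking that a subnet's route table offers no path off the VPC might look like this sketch; a fully isolated Lambda subnet should contain only the implicit "local" route.

```python
# Sketch of an isolation check over route entries shaped like those returned
# by EC2 describe_route_tables. Any NAT gateway, internet gateway, or VPC
# endpoint route means the subnet is not fully isolated.
def subnet_is_isolated(routes):
    for route in routes:
        gateway = route.get("GatewayId", "")
        if route.get("NatGatewayId") or (gateway and gateway != "local"):
            return False
    return True

print(subnet_is_isolated([{"GatewayId": "local"}]))
# True
print(subnet_is_isolated([{"GatewayId": "local"},
                          {"NatGatewayId": "nat-0abc123def456789a"}]))
# False
```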
A) This is the correct answer because Lambda in VPC subnets without any gateway routes has no network path to the internet, removing NAT/internet gateways and VPC endpoints eliminates all external connectivity, functions are completely network-isolated, and this prevents all internet access.
B) Security groups control inbound and outbound traffic but blocking all outbound traffic prevents necessary communications within the VPC. Security groups filter traffic but don’t prevent internet access if routing paths exist. Complete isolation requires removing routing paths.
C) Lambda doesn’t have a function-level “disable internet access” setting. Internet connectivity is controlled through VPC configuration and routing. Function settings control execution parameters but not network connectivity capabilities.
D) Lambda reserved concurrency controls maximum concurrent executions but has no relationship to network connectivity or internet access. Reserved concurrency addresses capacity management rather than network isolation.
Question 172
A security team needs to detect when AWS KMS customer-managed keys enter “Pending Deletion” state. Which solution provides real-time alerting?
A) Manually check key status daily
B) Use EventBridge rules matching KMS key state change events to trigger SNS notifications
C) Enable CloudTrail logging
D) Use AWS Config
Answer: B
Explanation:
KMS key deletion is scheduled through a waiting period (7-30 days) during which keys are in “Pending Deletion” state. Detecting this state change requires monitoring KMS events in real-time. EventBridge receives KMS state change events enabling immediate alerting when keys enter pending deletion.
KMS publishes state change events to EventBridge when keys are scheduled for deletion through the ScheduleKeyDeletion API. EventBridge rules match events where key state transitions to “PendingDeletion”, triggering immediate actions. SNS notifications alert security teams that keys are scheduled for deletion.
Real-time alerting enables security teams to investigate whether key deletion is authorized and cancel deletion if necessary. The 7-30 day waiting period provides time to prevent accidental or malicious key deletion. Immediate detection ensures maximum time for investigation and corrective action.
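One way to express the rule is an EventBridge pattern matching the ScheduleKeyDeletion API call as recorded by CloudTrail, which fires at the moment a key enters the pending-deletion state; a sketch:

```python
import json

# Sketch of an EventBridge event pattern matching KMS ScheduleKeyDeletion
# calls delivered via CloudTrail; the rule's target would be an SNS topic.
kms_deletion_pattern = {
    "source": ["aws.kms"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["kms.amazonaws.com"],
        "eventName": ["ScheduleKeyDeletion"],
    },
}

print(json.dumps(kms_deletion_pattern, indent=2))
```

If the alert proves unauthorized, the deletion can be reversed during the waiting period with the CancelKeyDeletion API.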
A) Manual daily checking introduces up to 24-hour detection delays. Keys could enter pending deletion state and substantial waiting period time could elapse before detection. Manual processes don’t provide the immediate alerting required for rapid response.
B) This is the correct answer because EventBridge receives KMS key state change events in real-time, rules match PendingDeletion state transitions, SNS notifications provide immediate alerting to security teams, and this enables rapid investigation and potential cancellation of unauthorized deletions.
C) CloudTrail logs ScheduleKeyDeletion API calls showing when deletion was scheduled but doesn’t provide event-driven alerting. CloudTrail provides audit logs requiring analysis while EventBridge provides event-driven automation for immediate alerting.
D) AWS Config monitors key configuration including deletion state but Config evaluation cycles introduce delays compared to EventBridge’s event-driven architecture. Config is designed for configuration compliance rather than real-time operational event alerting.
Question 173
An organization requires that EC2 instance console screenshots and logs be automatically captured when security incidents are detected for forensic analysis. Which solution implements this?
A) Manually capture screenshots during incidents
B) Use GuardDuty findings with EventBridge to trigger Lambda functions that capture console output and screenshots via EC2 APIs
C) Enable CloudWatch detailed monitoring
D) Use Systems Manager Session Manager
Answer: B
Explanation:
Forensic evidence collection during security incidents requires capturing instance state including console output and screenshots showing boot sequences and system messages. Automated capture ensures evidence is preserved immediately upon incident detection before attackers can destroy it or instances are terminated.
When GuardDuty detects compromised instances, EventBridge rules trigger Lambda functions that use EC2 GetConsoleOutput and GetConsoleScreenshot APIs. These APIs retrieve instance console text output and screenshots, which Lambda stores in S3 buckets with incident metadata tags for forensic analysis.
Console output contains boot messages, system logs, and error messages valuable for understanding how instances were compromised. Screenshots show visual console state helpful for identifying boot issues, kernel panics, or other system-level problems. Automated capture ensures this ephemeral data is preserved for investigation.
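The start of such a forensic Lambda handler can be sketched as below: it extracts the instance ID from the GuardDuty finding event, with the EC2 console-capture calls indicated in comments rather than executed (the evidence bucket is a placeholder).

```python
# Sketch: pull the instance ID out of a GuardDuty EC2 finding event so the
# EC2 console APIs can be called against the compromised instance.
def extract_instance_id(event):
    return event["detail"]["resource"]["instanceDetails"]["instanceId"]

sample_event = {
    "detail": {
        "resource": {"instanceDetails": {"instanceId": "i-0abc123def456789a"}}
    }
}

instance_id = extract_instance_id(sample_event)
# In the real handler (not executed here):
#   ec2 = boto3.client("ec2")
#   output = ec2.get_console_output(InstanceId=instance_id)
#   shot = ec2.get_console_screenshot(InstanceId=instance_id, WakeUp=True)
# Both artifacts would then be written to an evidence S3 bucket with
# incident metadata tags for the forensic team.
print(instance_id)
```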
A) Manual screenshot and log capture introduces delays allowing attackers time to destroy evidence or for instance state to change. Manual processes require human availability and may miss critical evidence if incidents occur during off-hours. Automation ensures immediate evidence preservation.
B) This is the correct answer because GuardDuty detects compromised instances, EventBridge triggers automated forensic collection, Lambda functions use EC2 APIs to capture console output and screenshots, and automation ensures immediate evidence preservation for forensic analysis.
C) CloudWatch detailed monitoring provides additional metrics but doesn’t capture console output or screenshots. Detailed monitoring addresses performance metrics rather than forensic evidence collection for security incidents.
D) Systems Manager Session Manager provides interactive shell access but doesn’t automatically capture console output or screenshots during incidents. Session Manager enables investigation but doesn’t provide automated forensic evidence collection.
Question 174
A company must ensure that AWS Lambda functions can only invoke other Lambda functions within the same AWS account and not cross-account. Which solution enforces this?
A) Use Lambda resource-based policies denying cross-account invocations
B) Deploy Lambda in VPC
C) Enable Lambda reserved concurrency
D) Use X-Ray tracing
Answer: A
Explanation:
Controlling Lambda function invocation sources requires resource-based policies defining which principals can invoke functions. Lambda resource-based policies can restrict invocations to same-account principals while denying cross-account access. This ensures functions aren’t invoked by external accounts.
Lambda resource-based policies use the Principal and Condition elements to control invocation authorization. Policies can allow invocations only when the aws:PrincipalAccount matches the function’s account ID, denying requests from other accounts. This prevents cross-account Lambda invocations even if trust relationships exist.
Organization-wide enforcement requires applying these policies to all Lambda functions through automated deployment processes or AWS Config remediation. Lambda deployment pipelines can automatically attach restrictive resource-based policies ensuring consistent same-account invocation restrictions across all functions.
A) This is the correct answer because Lambda resource-based policies control invocation authorization, policies can restrict invocations to same-account principals, conditions evaluate request account context, and this prevents cross-account Lambda invocations.
B) Lambda VPC deployment controls network connectivity but doesn’t restrict which accounts can invoke functions. VPC configuration addresses network access while resource-based policies control invocation authorization. These are separate concerns.
C) Lambda reserved concurrency controls maximum concurrent executions but doesn’t restrict which accounts can invoke functions. Reserved concurrency addresses capacity management rather than invocation authorization or account restrictions.
D) X-Ray tracing provides distributed request tracing for observability but doesn’t control invocation authorization or prevent cross-account invocations. X-Ray monitors invocations but doesn’t restrict them.
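The same-account restriction described in the explanation can be expressed as a resource-based policy statement like the sketch below. The statement shape (Deny with an aws:PrincipalAccount condition) follows the explanation; how you attach it depends on your deployment tooling.

```python
def same_account_invoke_policy(function_arn, account_id):
    """Resource-based policy sketch: deny lambda:InvokeFunction whenever
    the calling principal's account differs from the function's own
    account. The ARN and account ID are supplied by the caller."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyCrossAccountInvoke",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "lambda:InvokeFunction",
            "Resource": function_arn,
            "Condition": {
                "StringNotEquals": {"aws:PrincipalAccount": account_id}
            },
        }],
    }
```

A deployment pipeline can generate this document per function and attach it automatically, giving the organization-wide consistency the explanation calls for.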
Question 175
A security engineer needs to implement automated remediation that removes overly permissive inline IAM policies (policies with Action: "*" and Resource: "*") from IAM users. Which solution implements this?
A) Manually review IAM user policies monthly
B) Use AWS Config with custom rules detecting overly permissive inline policies and Lambda remediation deleting them
C) Enable GuardDuty
D) Use CloudTrail to monitor policy changes
Answer: B
Explanation:
Detecting and remediating overly permissive IAM policies requires continuous policy evaluation and automated remediation. AWS Config with custom rules analyzes inline IAM policies attached to users, identifying policies granting wildcard permissions. Lambda remediation automatically removes non-compliant policies.
Config custom rules use Lambda functions to evaluate IAM user inline policies. The evaluation function retrieves all inline policies for users, parses policy documents, and identifies policies containing Action: "*" with Resource: "*". Users with such policies are marked non-compliant.
Automated remediation triggers Lambda functions that delete overly permissive inline policies using DeleteUserPolicy API. The remediation preserves managed policies and other inline policies, removing only those violating least privilege principles. This automated enforcement prevents excessive privilege escalation through inline policies.
A) Manual monthly review doesn’t prevent overly permissive policies from being created and introduces 30-day windows where excessive permissions exist. Manual processes are operationally intensive and don’t scale across organizations with many IAM users.
B) This is the correct answer because Config custom rules continuously evaluate IAM user inline policies, rules detect policies with wildcard actions and resources, automated remediation deletes overly permissive policies, and this enforces least privilege for IAM user permissions.
C) GuardDuty detects security threats through behavioral analysis but doesn’t evaluate IAM policy content for excessive permissions. GuardDuty focuses on detecting malicious activity rather than identifying policy configuration issues violating least privilege.
D) CloudTrail logs policy creation and modification events but doesn’t analyze policy content for excessive permissions. CloudTrail provides audit trails requiring additional analysis while Config provides policy evaluation and automated remediation.
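The core of the Config rule evaluation described above is a policy-document check like this sketch. Statement fields may be strings or lists, so both shapes are normalized; the remediation side would call the IAM DeleteUserPolicy API for each flagged policy.

```python
def _as_list(value):
    """Normalize an IAM policy field that may be a string or a list."""
    return value if isinstance(value, list) else [value]


def is_overly_permissive(policy_document):
    """Return True if any Allow statement grants Action "*" on Resource "*"."""
    for stmt in _as_list(policy_document.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        if "*" in _as_list(stmt.get("Action", [])) and \
           "*" in _as_list(stmt.get("Resource", [])):
            return True
    return False
```

In the remediation Lambda, a flagged user/policy pair would be removed with `iam.delete_user_policy(UserName=user, PolicyName=name)`, leaving managed policies and compliant inline policies untouched.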
Question 176
An organization requires that Amazon ECS task definitions cannot use the “awsvpc” network mode without encryption in transit enabled. Which solution enforces this?
A) Manually review task definitions
B) Use IAM policies with conditions evaluating task definition network mode and requiring TLS configuration
C) Enable ECS encryption at rest
D) Use VPC Flow Logs
Answer: B
Explanation:
Enforcing security requirements for ECS task definitions requires IAM policies that evaluate task definition parameters during registration. Policies can assess network mode configuration and enforce additional security requirements like encryption in transit. This preventive control blocks non-compliant task definitions at creation.
IAM policies for RegisterTaskDefinition can use condition keys to evaluate task definition JSON. Custom authorization can parse network configuration and require that when networkMode is “awsvpc”, additional configuration enforcing TLS is present. Policies deny registration of task definitions not meeting encryption requirements.
While ECS doesn’t have native “encryption in transit” settings for task definitions, organizations can enforce that container definitions include environment variables or configuration requiring TLS for inter-service communication. IAM policy conditions validate these configurations are present before allowing task definition registration.
A) Manual task definition review is operationally intensive and introduces deployment delays. Manual processes don’t scale across organizations with many ECS services and frequent deployments. Automated policy enforcement provides consistent compliance without operational overhead.
B) This is the correct answer because IAM policies can evaluate task definition parameters during registration, conditions can check network mode and require encryption configuration, preventive controls block non-compliant task definitions, and this enforces encryption in transit requirements.
C) ECS encryption at rest addresses data stored in volumes but doesn’t enforce encryption in transit for network communications. Rest and transit encryption are separate concerns. ECS rest encryption doesn’t control network mode or transit encryption requirements.
D) VPC Flow Logs capture network traffic metadata but don’t evaluate ECS task definition configurations or enforce encryption requirements. Flow logs are reactive, showing traffic after it occurs, but don’t prevent non-compliant task definition registration.
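Since ECS has no native transit-encryption flag, the organizational convention described above has to be validated somewhere. A minimal sketch of a pipeline-side check follows; the REQUIRE_TLS environment variable is a hypothetical organizational marker, not an ECS feature.

```python
def violates_tls_requirement(task_def):
    """Flag task definitions using "awsvpc" network mode where any
    container lacks the (hypothetical) REQUIRE_TLS=true marker."""
    if task_def.get("networkMode") != "awsvpc":
        return False  # requirement only applies to awsvpc mode
    for container in task_def.get("containerDefinitions", []):
        env = {e["name"]: e["value"] for e in container.get("environment", [])}
        if env.get("REQUIRE_TLS") != "true":
            return True
    return False
```

Running this check before calling RegisterTaskDefinition blocks non-compliant definitions at the same point the explanation's preventive control would.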
Question 177
A company must ensure that AWS Organizations Service Control Policies cannot be modified without approval from the security team. Which solution implements this?
A) Use IAM policies denying SCP modification actions and implement approval workflows
B) Manually monitor SCP changes
C) Enable CloudTrail logging
D) Use AWS Config
Answer: A
Explanation:
Service Control Policies govern access across AWS Organizations accounts, making SCP modifications high-impact operations requiring governance. Preventing unauthorized SCP changes requires IAM policies denying modification actions except through controlled approval workflows. This ensures SCP changes receive proper oversight.
IAM policies deny AttachPolicy, DetachPolicy, CreatePolicy, UpdatePolicy, and DeletePolicy actions for SCPs. Only designated security team roles with proper authorization can execute these actions through approval workflows. Workflows use Step Functions orchestrating multi-party authorization before executing approved SCP changes.
Approval workflows notify security leadership when SCP changes are requested, collect approvals, and execute modifications only after authorization. Lambda functions with elevated permissions perform approved changes while direct API access is blocked by IAM denies. This provides governance over organization-wide security policies.
A) This is the correct answer because IAM policies deny unauthorized SCP modifications, approval workflows provide governance for legitimate changes, multi-party authorization ensures security team review, and this prevents unauthorized modification of organization-wide security controls.
B) Manual monitoring doesn’t prevent SCP modifications and is reactive. SCPs would be changed before monitoring detects modifications. Manual processes don’t provide the preventive controls or approval workflows required for governance of organization-wide policies.
C) CloudTrail logs SCP modification events providing audit trails but doesn’t prevent unauthorized changes or implement approval workflows. CloudTrail is reactive, recording changes after they occur, but doesn’t provide preventive governance controls.
D) AWS Config monitors configuration changes but doesn’t prevent SCP modifications or implement approval workflows. Config detects changes after they occur but doesn’t provide pre-modification governance or multi-party authorization for high-impact policy changes.
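The deny described above can be sketched as a policy builder. The approval-workflow role ARN is a hypothetical input; the Organizations action names match those listed in the explanation.

```python
def scp_guard_policy(approval_role_arn):
    """Policy statement sketch: deny Organizations policy mutations to
    every principal except the approval-workflow role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenySCPChangesOutsideApprovalRole",
            "Effect": "Deny",
            "Action": [
                "organizations:CreatePolicy",
                "organizations:UpdatePolicy",
                "organizations:DeletePolicy",
                "organizations:AttachPolicy",
                "organizations:DetachPolicy",
            ],
            "Resource": "*",
            "Condition": {
                "ArnNotEquals": {"aws:PrincipalArn": approval_role_arn}
            },
        }],
    }
```

Only the Step Functions workflow's execution role would be allowed to assume the exempted ARN, so every SCP change necessarily passes through the approval path.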
Question 178
A security team needs to detect when EC2 instances are launched with public IP addresses in accounts that prohibit public-facing instances. Which solution provides automated detection?
A) Manually review instances weekly
B) Use AWS Config with a custom rule detecting instances with public IPs and automated remediation terminating them
C) Enable VPC Flow Logs
D) Use GuardDuty
Answer: B
Explanation:
Detecting EC2 instances with public IP addresses requires continuous monitoring of instance network configurations. AWS Config provides configuration compliance monitoring with custom rules capable of evaluating instance public IP assignments. Automated remediation ensures non-compliant instances are addressed immediately.
Config custom rules use Lambda functions to evaluate EC2 instance public IP addresses. The evaluation retrieves instance network interface configurations, checks for public IP assignments, and marks instances with public IPs as non-compliant. Config continuously monitors instances, detecting new public IP assignments immediately.
Automated remediation can terminate non-compliant instances or remove Elastic IP associations depending on organizational policies. Termination provides strong enforcement preventing public-facing instances, while EIP disassociation preserves instances but removes public accessibility. The remediation choice depends on security posture requirements.
A) Manual weekly review introduces multi-day windows where public-facing instances exist undetected. During these periods, instances are exposed to internet-based attacks. Manual processes don’t scale across dynamic environments with frequent instance launches.
B) This is the correct answer because Config custom rules continuously monitor EC2 instance public IP configurations, rules detect instances with public IPs immediately, automated remediation terminates or remediates non-compliant instances, and this enforces policies prohibiting public-facing instances.
C) VPC Flow Logs capture network traffic but don’t evaluate instance configurations for public IP assignments. Flow logs are reactive, showing traffic after it occurs, but don’t detect configuration violations before exploitation.
D) GuardDuty detects security threats through behavioral analysis but doesn’t monitor EC2 instance network configurations for public IP policy compliance. GuardDuty focuses on threat detection rather than configuration compliance monitoring.
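The evaluation logic of the Config custom rule described above reduces to inspecting the configuration item for public IP associations, roughly as follows. The configuration-item field names mirror Config's EC2 instance schema; treat them as assumptions to verify against your recorded items.

```python
def evaluate_instance(configuration_item):
    """Config custom-rule evaluation sketch: NON_COMPLIANT when the
    instance or any of its network interfaces carries a public IP."""
    cfg = configuration_item.get("configuration", {})
    if cfg.get("publicIpAddress"):
        return "NON_COMPLIANT"
    for eni in cfg.get("networkInterfaces", []):
        if eni.get("association", {}).get("publicIp"):
            return "NON_COMPLIANT"
    return "COMPLIANT"
```

The surrounding Lambda would report this verdict back with `config.put_evaluations`, and the remediation action (terminate or disassociate the EIP) follows the organizational policy noted above.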
Question 179
An organization requires that all Amazon S3 objects be encrypted with customer-managed KMS keys and that these keys rotate quarterly. Which combination implements this?
A) Enable S3 default encryption with AWS-managed keys
B) Use S3 bucket policies requiring encryption with specific customer-managed KMS keys and implement quarterly custom key rotation
C) Enable S3 versioning
D) Use S3 Object Lock
Answer: B
Explanation:
Enforcing specific encryption keys for S3 objects requires bucket policies that deny uploads without proper encryption headers. S3 bucket policies can require that PutObject requests include server-side encryption with specific customer-managed KMS keys. Combined with custom quarterly key rotation, this meets organizational requirements.
S3 bucket policies deny s3:PutObject actions unless requests include x-amz-server-side-encryption header set to “aws:kms” with specific KMS key ARNs. This ensures all objects are encrypted with designated customer-managed keys. Policies prevent unencrypted uploads or uploads using AWS-managed keys.
KMS automatic rotation occurs annually, so quarterly rotation requires custom processes. Lambda functions triggered every 90 days create new customer-managed keys, update S3 bucket policies to reference new keys, and maintain old keys for decrypting existing objects. This custom rotation provides more frequent key material changes than automatic rotation.
A) S3 default encryption with AWS-managed keys doesn’t meet requirements for customer-managed keys with controlled rotation. AWS-managed keys rotate automatically on a schedule controlled by AWS and don’t provide the organizational control or quarterly rotation required.
B) This is the correct answer because S3 bucket policies enforce encryption with specific customer-managed KMS keys, policies deny uploads without proper encryption, custom key rotation enables quarterly rotation schedules, and this provides encryption with controlled quarterly-rotating keys.
C) S3 versioning preserves object versions but doesn’t enforce encryption or control encryption keys. Versioning and encryption are separate features addressing different requirements. Versioning alone doesn’t ensure objects are encrypted with specific keys.
D) S3 Object Lock prevents object deletion and modification but doesn’t enforce encryption or control encryption keys. Object Lock addresses data immutability while encryption requirements are separate concerns.
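The two-part bucket policy described above can be sketched as follows. The bucket name and key ARN are placeholders; the condition keys (`s3:x-amz-server-side-encryption`, `s3:x-amz-server-side-encryption-aws-kms-key-id`) are the real S3 condition keys for SSE enforcement.

```python
def kms_enforcement_policy(bucket, key_arn):
    """Bucket policy sketch: deny PutObject unless the request asks for
    SSE-KMS with the designated customer-managed key."""
    objects = f"arn:aws:s3:::{bucket}/*"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Reject uploads that are not SSE-KMS at all.
                "Sid": "DenyNonKMS",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": objects,
                "Condition": {"StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms"}},
            },
            {   # Reject SSE-KMS uploads that name any other key.
                "Sid": "DenyWrongKey",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": objects,
                "Condition": {"StringNotEquals": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": key_arn}},
            },
        ],
    }
```

The quarterly rotation Lambda would regenerate this policy with the new key ARN each cycle, which is how the bucket policy stays pointed at the current key.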
Question 180
A company must ensure that AWS Lambda functions automatically retry failed invocations with exponential backoff. Which Lambda feature provides this?
A) Lambda reserved concurrency
B) Lambda asynchronous invocation configuration with retry attempts and maximum event age
C) Lambda environment variables
D) Lambda VPC configuration
Answer: B
Explanation:
Lambda provides built-in retry behavior for asynchronous invocations including exponential backoff. Asynchronous invocation configuration allows customizing retry attempts (0-2 retries) and maximum event age (60-21600 seconds). This native functionality handles temporary failures with automatic retry logic.
When Lambda functions are invoked asynchronously (e.g., from S3 events, SNS, EventBridge), Lambda automatically retries failed invocations twice with exponential backoff between attempts. Asynchronous invocation configuration customizes this behavior, specifying how many retries occur and how long events remain eligible for processing.
Dead Letter Queues (DLQ) or destination configurations capture events that exceed maximum retry attempts or age limits. This ensures failed events aren’t lost and can be analyzed or reprocessed. The combination of retries, exponential backoff, and DLQ provides comprehensive error handling for asynchronous invocations.
A) Lambda reserved concurrency controls maximum concurrent executions but doesn’t configure retry behavior or error handling. Reserved concurrency addresses capacity management rather than invocation retry logic.
B) This is the correct answer because asynchronous invocation configuration controls retry behavior, Lambda provides automatic exponential backoff between retries, configuration customizes retry attempts and maximum event age, and this provides built-in resilient invocation handling.
C) Lambda environment variables store configuration data but don’t control invocation retry behavior. Environment variables provide application configuration while retry logic is a platform-level feature configured through invocation settings.
D) Lambda VPC configuration controls network connectivity but doesn’t affect invocation retry behavior. VPC deployment addresses network access while retry configuration handles error conditions and temporary failures.
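The retry settings described above map directly to the parameters of Lambda's PutFunctionEventInvokeConfig API. A minimal sketch that validates the documented ranges before building the request:

```python
def async_invoke_config(retries=2, max_age_seconds=3600):
    """Build kwargs for lambda.put_function_event_invoke_config (sketch).

    Enforces the documented ranges: 0-2 retry attempts and a maximum
    event age of 60-21600 seconds. Pass the result, together with
    FunctionName, to the boto3 Lambda client.
    """
    if not 0 <= retries <= 2:
        raise ValueError("MaximumRetryAttempts must be 0-2")
    if not 60 <= max_age_seconds <= 21600:
        raise ValueError("MaximumEventAgeInSeconds must be 60-21600")
    return {"MaximumRetryAttempts": retries,
            "MaximumEventAgeInSeconds": max_age_seconds}
```

For example, `lambda_client.put_function_event_invoke_config(FunctionName="my-fn", **async_invoke_config(retries=1, max_age_seconds=900))` limits a function to one retry and discards events older than 15 minutes; a DLQ or failure destination then catches whatever still fails.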