Question 101
A security team needs to ensure that AWS Lambda functions cannot access the internet but can access DynamoDB. Which network configuration meets this requirement?
A) Deploy Lambda in VPC with no NAT gateway, use VPC endpoint for DynamoDB
B) Use security groups to block internet access
C) Deploy Lambda in public subnets
D) Enable Lambda reserved concurrency
Answer: A
Explanation:
Controlling Lambda function network access requires deploying functions in VPCs where routing and connectivity can be managed. Lambda functions need private connectivity to AWS services without internet access. VPC endpoints enable private service access without requiring NAT gateways or internet gateways.
When Lambda functions are configured with VPC settings, they operate within specified subnets and inherit subnet routing configurations. Subnets without routes to internet gateways or NAT gateways cannot reach the internet. VPC endpoints for DynamoDB add routes to Lambda subnets that direct DynamoDB API traffic through AWS private network infrastructure.
This configuration ensures Lambda functions can invoke DynamoDB APIs through VPC endpoints while being unable to reach internet destinations. No NAT gateway exists to translate private IP addresses for internet access, and no internet gateway route exists for direct connectivity. The function maintains necessary AWS service access while being isolated from internet threats.
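For illustration, a minimal boto3 sketch of the endpoint side of this setup; the VPC ID, region, and route table ID are placeholders, and the route table should be the one associated with the Lambda subnets:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for DynamoDB: adds routes to the given route tables so
# DynamoDB API traffic stays on the AWS private network, with no NAT or
# internet gateway involved.
resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],  # route table of the Lambda subnets
)
print(resp["VpcEndpoint"]["VpcEndpointId"])
```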
A) This is the correct answer because Lambda deployed in VPC subnets without NAT gateways cannot access the internet, VPC endpoints provide private connectivity to DynamoDB, Lambda inherits subnet routing preventing internet access, and this meets requirements for DynamoDB access without internet connectivity.
B) Security groups control traffic to and from network interfaces but cannot completely block internet access if NAT gateways or internet gateways exist in subnet routes. Security groups filter traffic but do not override routing decisions that enable internet connectivity.
C) Public subnets have routes to internet gateways, providing internet connectivity. Deploying Lambda in public subnets enables rather than prevents internet access, directly contradicting the requirement to block internet connectivity.
D) Lambda reserved concurrency controls function scaling and concurrent execution limits but has no relationship to network access or connectivity. Reserved concurrency is a capacity management feature, not a network security control.
Question 102
An organization requires that all API calls to AWS services from on-premises environments be authenticated using temporary credentials that expire every 15 minutes. Which solution implements this requirement?
A) Create IAM users for on-premises applications with access key rotation
B) Use AWS STS AssumeRole with SAML 2.0 federation and 15-minute session duration
C) Store long-term credentials in AWS Secrets Manager
D) Use AWS Organizations to manage on-premises access
Answer: B
Explanation:
On-premises applications accessing AWS services require authentication mechanisms that don’t rely on long-term credentials. Federated identity enables on-premises identity providers to grant temporary AWS credentials through SAML 2.0 integration. AWS Security Token Service supports federation with configurable session durations for temporary credential validity.
SAML 2.0 federation allows on-premises Active Directory or other identity providers to authenticate users and applications. After successful authentication, the identity provider issues SAML assertions that are exchanged with AWS STS for temporary security credentials. The AssumeRoleWithSAML API accepts session duration parameters, enabling 15-minute credential expiration.
Temporary credentials include access keys and session tokens that automatically expire after the specified duration. Applications must re-authenticate with the identity provider and obtain new credentials when sessions expire. This automatic credential rotation every 15 minutes minimizes the risk window if credentials are compromised and eliminates long-term credential storage in on-premises environments.
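A sketch of the credential exchange, assuming the base64-encoded SAML assertion has already been obtained from the identity provider (the ARNs are placeholders):

```python
import boto3

def fetch_temp_credentials(saml_assertion_b64: str) -> dict:
    """Exchange the IdP's SAML assertion for credentials that expire in
    15 minutes (900 seconds, which is also the STS minimum)."""
    sts = boto3.client("sts")
    resp = sts.assume_role_with_saml(
        RoleArn="arn:aws:iam::111122223333:role/OnPremApp",
        PrincipalArn="arn:aws:iam::111122223333:saml-provider/CorpADFS",
        SAMLAssertion=saml_assertion_b64,
        DurationSeconds=900,  # 15-minute session
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration
    return resp["Credentials"]
```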
A) IAM users provide long-term credentials through access keys, not temporary credentials. While access keys can be rotated, they remain valid until explicitly rotated or deleted. IAM users do not provide automatic 15-minute credential expiration.
B) This is the correct answer because STS with SAML 2.0 federation generates temporary credentials for federated users, session duration can be configured to 15 minutes, credentials automatically expire requiring re-authentication, and this eliminates long-term credential storage on-premises.
C) Secrets Manager stores and manages secrets but does not generate temporary AWS credentials or integrate with on-premises identity providers for federation. Secrets Manager would still involve managing credentials rather than using federation.
D) AWS Organizations manages multiple AWS accounts but does not provide identity federation or temporary credential generation for on-premises applications. Organizations focuses on account management rather than authentication mechanisms.
Question 103
A security engineer needs to implement controls that prevent S3 buckets from being shared publicly while still allowing specific buckets to be accessible through CloudFront distributions. Which approach enables this?
A) Enable S3 Block Public Access for all buckets without exceptions
B) Use S3 bucket policies allowing access only from CloudFront origin access identity (OAI) or origin access control (OAC)
C) Make buckets public but use bucket policies to restrict access
D) Use S3 ACLs to grant CloudFront access
Answer: B
Explanation:
Balancing public content delivery through CloudFront with preventing direct public bucket access requires distinguishing between CloudFront-mediated access and direct public access. CloudFront Origin Access Identity and Origin Access Control enable CloudFront to access private S3 buckets without making them publicly accessible.
An OAI is a special CloudFront identity, and OAC uses the CloudFront service principal; either can be granted access to private S3 buckets through bucket policies. The bucket policy grants s3:GetObject to the OAI or to CloudFront via OAC, while every other principal is denied by default. CloudFront distributions configured with OAI/OAC can retrieve objects from the bucket, but direct public access to the bucket is denied.
This configuration allows content delivery through CloudFront while preventing users from bypassing CloudFront by accessing S3 URLs directly. The bucket remains private from a network perspective, with access controlled through CloudFront. S3 Block Public Access can remain enabled because OAI/OAC access is not considered public access.
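A sketch of the bucket policy this describes, using the OAC pattern of granting the CloudFront service principal read access scoped to a single distribution; the bucket name, account ID, and distribution ID are placeholders:

```python
import json

# Only requests signed by CloudFront on behalf of this one distribution can
# read objects; direct public requests to the bucket are denied by default.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOACRead",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"StringEquals": {
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }},
    }],
}
print(json.dumps(policy, indent=2))
```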
A) S3 Block Public Access prevents public access but does not, on its own, grant CloudFront anything. Although OAI/OAC access isn't blocked (it isn't classified as public access), this option specifies no accompanying bucket policy, so the buckets that must be served through CloudFront would remain unreachable.
B) This is the correct answer because S3 bucket policies can grant access to CloudFront OAI/OAC specifically, buckets remain private without public access, CloudFront can retrieve content while direct public access is denied, and this enables content delivery without public bucket exposure.
C) Making buckets public and using policies to restrict access creates inherent security risks. Public buckets are frequently misconfigured and targeted by attackers. Using OAI/OAC with private buckets is more secure than attempting to restrict access to public buckets.
D) S3 ACLs are a legacy access control mechanism that AWS recommends against using. ACLs provide coarse-grained permissions insufficient for properly securing CloudFront access. Bucket policies with OAI/OAC provide more secure and flexible access control.
Question 104
A company must ensure that all Amazon ECR container images are scanned for vulnerabilities before being deployed to ECS clusters. Which solution implements this requirement?
A) Enable ECR image scanning on push and use deployment pipelines that check scan results before deployment
B) Manually scan images weekly
C) Use GuardDuty to scan container images
D) Enable CloudTrail logging for ECR
Answer: A
Explanation:
Preventing vulnerable container images from reaching production requires automated scanning integrated into deployment pipelines. Image vulnerabilities must be detected before deployment, with non-compliant images blocked from proceeding. ECR provides integrated image scanning that can be automated as part of CI/CD workflows.
ECR image scanning can be configured to automatically scan images when they are pushed to repositories. Basic scanning powered by Clair or enhanced scanning powered by Inspector analyzes image layers for known vulnerabilities, generating findings with CVE IDs and severity ratings. Scan results are available through ECR APIs and console.
Deployment pipelines integrate ECR scanning by checking scan results before deploying images to ECS. Pipeline logic queries ECR for scan findings, evaluates severity levels against policy thresholds, and blocks deployments if critical or high-severity vulnerabilities are found. Approved images proceed to deployment while vulnerable images are rejected, requiring remediation before resubmission.
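One way such a pipeline gate might look, sketched with boto3; the repository name, image tag, and the severity threshold are assumptions for illustration:

```python
import sys

import boto3

ecr = boto3.client("ecr")

# Query the scan results for the image the pipeline is about to deploy and
# fail the step if any CRITICAL or HIGH findings exist.
findings = ecr.describe_image_scan_findings(
    repositoryName="my-app",
    imageId={"imageTag": "release-1.2.3"},
)
counts = findings["imageScanFindings"].get("findingSeverityCounts", {})
if counts.get("CRITICAL", 0) or counts.get("HIGH", 0):
    sys.exit(f"Blocking deployment, vulnerabilities found: {counts}")
print("Scan clean, proceeding with deployment")
```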
A) This is the correct answer because ECR scan-on-push automatically scans images when uploaded, deployment pipelines can programmatically check scan results, pipeline logic can block deployments of images with vulnerabilities, and this prevents vulnerable images from reaching production.
B) Manual weekly scanning introduces delays between image creation and vulnerability detection, allowing vulnerable images to be deployed during the gap. Manual processes don’t scale and cannot provide real-time protection required for CI/CD workflows.
C) GuardDuty detects runtime threats in containerized environments but does not scan container images for vulnerabilities before deployment. GuardDuty focuses on behavioral threat detection rather than static image vulnerability analysis.
D) CloudTrail logging records ECR API calls but does not scan container images for vulnerabilities. CloudTrail provides audit logs but not vulnerability assessment or security scanning capabilities.
Question 105
A security team needs to detect when EC2 security groups are modified to allow access from 0.0.0.0/0 on sensitive ports like 22 or 3389. Which solution provides real-time detection and automated remediation?
A) Use AWS Config with managed rules and automatic remediation
B) Review security groups manually each week
C) Use VPC Flow Logs to detect unauthorized access
D) Enable CloudWatch detailed monitoring
Answer: A
Explanation:
Real-time detection of security group misconfigurations requires continuous monitoring that evaluates security group rules against security policies. Modifications allowing unrestricted internet access to sensitive ports create immediate vulnerabilities that must be detected and remediated rapidly. AWS Config provides continuous configuration monitoring with automated remediation.
AWS Config includes managed rules specifically designed to detect security group misconfigurations, such as restricted-ssh (checks for unrestricted SSH access) and restricted-common-ports (checks multiple sensitive ports). These rules continuously evaluate security groups and immediately generate findings when rules allow 0.0.0.0/0 access on restricted ports.
Config can trigger automatic remediation using Systems Manager Automation documents that remove non-compliant security group rules. When Config detects a security group allowing unrestricted SSH access, it automatically invokes an automation document that revokes the offending inbound rule, closing the security gap within seconds without manual intervention.
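A minimal sketch of enabling the restricted-ssh managed rule with boto3; attaching the Systems Manager remediation document is a separate put_remediation_configurations call, omitted here:

```python
import boto3

config = boto3.client("config")

# Managed rule "restricted-ssh" (source identifier INCOMING_SSH_DISABLED)
# flags any security group that allows 0.0.0.0/0 on port 22.
config.put_config_rule(ConfigRule={
    "ConfigRuleName": "restricted-ssh",
    "Source": {"Owner": "AWS", "SourceIdentifier": "INCOMING_SSH_DISABLED"},
})
```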
A) This is the correct answer because AWS Config managed rules continuously monitor security groups for unrestricted access, rules detect non-compliant configurations in real-time, automatic remediation removes offending security group rules, and this prevents security group misconfigurations from creating sustained vulnerabilities.
B) Manual weekly reviews create multi-day windows where vulnerable security group configurations remain active. Manual processes cannot provide real-time protection and don’t scale across dynamic environments with frequent security group changes.
C) VPC Flow Logs capture network traffic after it occurs but do not detect security group configuration issues. Flow logs are reactive, showing that unauthorized access occurred rather than preventing it through configuration monitoring and correction.
D) CloudWatch detailed monitoring provides additional EC2 instance metrics but does not monitor security group configurations or detect policy violations. CloudWatch metrics focus on instance performance rather than security configuration compliance.
Question 106
An organization requires that all AWS Lambda functions be tagged with owner, cost-center, and environment information. Which approach enforces this requirement?
A) Manually add tags to all Lambda functions
B) Use Service Control Policies requiring specific tags on CreateFunction API calls
C) Use AWS Config to detect untagged functions
D) Create Lambda functions only through CloudFormation
Answer: B
Explanation:
Enforcing resource tagging at creation time ensures consistent metadata for cost allocation, access control, and resource management. Preventive controls that require tags during resource creation are more effective than detective controls that identify untagged resources after creation. Service Control Policies provide organization-wide enforcement of tagging requirements.
SCPs can enforce tagging requirements by using condition keys that evaluate tag presence and values in resource creation API calls. For Lambda functions, SCPs can deny CreateFunction requests unless required tags are present. The aws:RequestTag condition key checks whether specific tags are included in creation requests with required values.
Organization-wide SCP enforcement ensures all accounts must comply with tagging requirements without per-account configuration. Lambda functions cannot be created without proper tags regardless of who attempts creation or their IAM permissions. This preventive control establishes consistent tagging from creation, simplifying resource management and governance.
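A sketch of such an SCP, built as one deny statement per required tag so that omitting any single tag blocks creation (the tag keys are the ones named in the question):

```python
import json

# Null:"true" matches when the tag key is absent from the request, so each
# statement denies CreateFunction if its tag is missing.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": f"Require{key.title().replace('-', '')}Tag",
            "Effect": "Deny",
            "Action": "lambda:CreateFunction",
            "Resource": "*",
            "Condition": {"Null": {f"aws:RequestTag/{key}": "true"}},
        }
        for key in ("owner", "cost-center", "environment")
    ],
}
print(json.dumps(scp, indent=2))
```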
A) Manual tagging does not enforce requirements and relies on user compliance. Users can forget tags or use inconsistent values, creating incomplete or inaccurate resource metadata. Manual processes don’t scale and cannot guarantee compliance.
B) This is the correct answer because SCPs can require specific tags on Lambda function creation, aws:RequestTag conditions evaluate tag presence and values, preventive controls block creation of non-compliant functions, and organization-wide enforcement ensures consistent tagging.
C) AWS Config can detect untagged Lambda functions after creation, but this is reactive. Functions would exist without proper tags until Config detects and potentially remediates them. Preventive controls that require tags at creation are more effective.
D) CloudFormation can apply consistent tags to resources it creates, but it doesn’t prevent users from creating Lambda functions through console, CLI, or APIs without CloudFormation. CloudFormation alone doesn’t enforce organization-wide tagging requirements.
Question 107
A company must ensure that Amazon S3 objects cannot be deleted for 90 days to meet regulatory compliance requirements. Which S3 feature enforces this retention period?
A) S3 Versioning
B) S3 Object Lock in compliance mode with 90-day retention
C) S3 Lifecycle policies
D) S3 Intelligent-Tiering
Answer: B
Explanation:
Regulatory compliance often requires immutable data retention where objects cannot be deleted or modified for specified periods. This ensures audit trails and records remain intact for compliance, legal, or investigative purposes. S3 Object Lock provides WORM (Write Once Read Many) capabilities that enforce retention periods.
S3 Object Lock compliance mode prevents objects from being deleted or modified by any user, including the AWS account root user, until the retention period expires. When objects are written to buckets with Object Lock enabled, retention periods are applied that prevent deletion. No user or process can bypass or remove retention during the lock period.
Compliance mode differs from governance mode, which allows users with specific IAM permissions to override retention. Compliance mode provides the absolute immutability required for strict regulatory requirements. After the 90-day retention period expires, objects can be deleted normally, but during the retention window the service refuses every deletion request.
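For illustration, setting a 90-day COMPLIANCE-mode default retention with boto3; the bucket name is a placeholder, and Object Lock must have been enabled when the bucket was created:

```python
import boto3

s3 = boto3.client("s3")

# Every new object version inherits a 90-day COMPLIANCE retention that no
# user, including root, can shorten or remove.
s3.put_object_lock_configuration(
    Bucket="regulated-records",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)
```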
A) S3 Versioning preserves previous versions of objects including when deleted, but versioning does not prevent deletion. Versioned objects can be deleted by creating delete markers, and versions themselves can be permanently deleted. Versioning provides recovery but not immutability.
B) This is the correct answer because S3 Object Lock compliance mode enforces immutable retention periods, prevents deletion or modification by any user including root, 90-day retention period can be configured per object or bucket default, and this meets regulatory requirements for immutable data retention.
C) S3 Lifecycle policies automate object transitions and expiration but do not prevent deletion. Lifecycle policies manage object lifecycle but don’t enforce retention preventing deletion. They define when objects should be deleted, not when they cannot be deleted.
D) S3 Intelligent-Tiering automatically moves objects between storage classes based on access patterns for cost optimization. Intelligent-Tiering addresses storage costs but does not enforce retention periods or prevent object deletion.
Question 108
A security engineer needs to implement controls ensuring that Amazon RDS databases in production accounts cannot be deleted without approval. Which solution implements this requirement?
A) Enable RDS deletion protection on all production databases
B) Use IAM policies denying DeleteDBInstance for all users
C) Create manual snapshots before any deletion
D) Use AWS Backup to protect databases
Answer: A
Explanation:
Preventing accidental database deletion requires safeguards that block deletion attempts even by authorized administrators. RDS deletion protection is a database-level setting that prevents deletion regardless of IAM permissions, requiring explicit disabling before deletion can proceed. This creates an intentional barrier preventing accidental deletion.
When deletion protection is enabled on RDS instances, attempts to delete the database fail with an error indicating protection is enabled. To delete a protected database, administrators must first modify the instance to disable deletion protection, then perform the deletion. This two-step process ensures deletion is intentional and provides opportunity for review.
Deletion protection can be enforced using AWS Config rules that detect databases without protection enabled, automatically remediating by enabling protection. Service Control Policies can deny ModifyDBInstance calls that disable deletion protection, preventing users from removing this safeguard. Combined, these controls ensure production databases maintain deletion protection.
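A minimal boto3 sketch of enabling the protection flag on an existing instance (the identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# With DeletionProtection enabled, DeleteDBInstance fails until the flag is
# explicitly removed via another ModifyDBInstance call.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-orders-db",
    DeletionProtection=True,
    ApplyImmediately=True,
)
```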
A) This is the correct answer because RDS deletion protection prevents database deletion regardless of IAM permissions, requires explicit disabling before deletion can proceed, creates intentional barriers preventing accidental deletion, and can be enforced using Config rules and SCPs.
B) IAM policies denying DeleteDBInstance for all users would prevent all database deletion including legitimate decommissioning with approval. This approach is too restrictive and doesn’t provide an approval mechanism, instead completely blocking deletion.
C) Creating manual snapshots before deletion is a best practice for backup but does not prevent deletion. Snapshots enable recovery but don’t provide the preventive control needed to block accidental or unauthorized deletion attempts.
D) AWS Backup creates and manages database backups but does not prevent database deletion. Backup ensures recoverability but doesn’t provide deletion protection or approval workflows for decommissioning databases.
Question 109
An organization needs to implement monitoring that detects when IAM policies are created that grant administrator access (Action: "*", Resource: "*"). Which solution provides this detection?
A) Use AWS IAM Access Analyzer to detect overly permissive policies
B) Manually review all IAM policies weekly
C) Use CloudWatch to monitor IAM API calls
D) Enable GuardDuty for IAM monitoring
Answer: A
Explanation:
Detecting overly permissive IAM policies requires analyzing policy documents to identify broad permissions that grant excessive access. Policies allowing all actions on all resources effectively grant administrator access, violating least privilege principles. IAM Access Analyzer specifically analyzes policies for excessive permissions.
IAM Access Analyzer evaluates IAM policies attached to users, roles, and groups, identifying policies that grant broad permissions. Access Analyzer recognizes patterns indicating overly permissive access, such as Action: "*" with Resource: "*", and generates findings with severity ratings indicating the risk level.
Access Analyzer findings provide detailed information about which policies grant excessive permissions, which principals have these policies, and recommendations for refinement. Security teams can configure EventBridge rules to receive notifications when Access Analyzer generates findings about overly permissive policies, enabling rapid response to policy violations.
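A sketch of the notification wiring with boto3; the aws.access-analyzer event source and "Access Analyzer Finding" detail type follow AWS's documented event shape, and the SNS topic ARN is a placeholder:

```python
import json

import boto3

events = boto3.client("events")

# Match new Access Analyzer findings and route them to a notification topic
# so the security team can triage overly permissive policies quickly.
events.put_rule(
    Name="access-analyzer-findings",
    EventPattern=json.dumps({
        "source": ["aws.access-analyzer"],
        "detail-type": ["Access Analyzer Finding"],
    }),
)
events.put_targets(
    Rule="access-analyzer-findings",
    Targets=[{"Id": "notify",
              "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts"}],
)
```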
A) This is the correct answer because IAM Access Analyzer specifically analyzes IAM policies for excessive permissions, detects policies granting administrator access through wildcard actions and resources, generates findings with severity ratings, and enables automated notifications for policy violations.
B) Manual weekly review of IAM policies does not scale, introduces detection delays, and is error-prone. With numerous policies across multiple accounts, manual review cannot reliably identify all overly permissive policies, especially as new policies are created daily.
C) CloudWatch monitors AWS resources and can receive CloudTrail logs, but CloudWatch does not analyze IAM policy content to identify excessive permissions. CloudWatch would require custom metric filters and analysis logic to detect policy patterns.
D) GuardDuty detects threats through behavioral analysis but does not analyze IAM policy documents for excessive permissions. GuardDuty focuses on detecting malicious activity rather than evaluating policy configurations for least privilege compliance.
Question 110
A company must ensure that all data stored in Amazon EFS file systems is encrypted at rest using customer-managed KMS keys. Which approach enforces this requirement?
A) Manually enable encryption on all EFS file systems
B) Use Service Control Policies denying CreateFileSystem without customer-managed key encryption
C) Enable encryption by default for EFS
D) Use AWS Config to detect unencrypted file systems
Answer: B
Explanation:
Ensuring consistent encryption with specific key types across all EFS file systems requires preventive controls that enforce encryption configuration at creation time. Service Control Policies provide organization-wide enforcement that prevents creation of non-compliant resources regardless of individual account permissions.
SCPs can deny the CreateFileSystem API call unless encryption is enabled with customer-managed KMS keys. The elasticfilesystem:Encrypted condition key checks whether encryption is requested; confirming that a customer-managed key, rather than the AWS-managed key, is specified requires an additional condition on the KMS key supplied in the request. Combined conditions ensure compliant file system creation.
Organization-wide SCP enforcement prevents any account from creating unencrypted EFS file systems or file systems using AWS-managed keys. This preventive control ensures all file systems meet encryption requirements from creation, eliminating the need for detective controls and remediation of non-compliant resources.
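A sketch of the encryption-enforcement half of such an SCP; restricting which KMS key may be used would add a further condition on the key, as noted above:

```python
import json

# Deny CreateFileSystem whenever encryption is not requested. The
# elasticfilesystem:Encrypted key is a Bool evaluated on the request.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedEFS",
        "Effect": "Deny",
        "Action": "elasticfilesystem:CreateFileSystem",
        "Resource": "*",
        "Condition": {"Bool": {"elasticfilesystem:Encrypted": "false"}},
    }],
}
print(json.dumps(scp, indent=2))
```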
A) Manual configuration does not enforce requirements and allows creation of non-compliant file systems. Manual processes rely on user compliance and don’t scale across organizations with multiple accounts and frequent file system creation.
B) This is the correct answer because SCPs can deny EFS file system creation without customer-managed key encryption, condition keys enforce both encryption enablement and key type, preventive controls block non-compliant creation, and organization-wide enforcement ensures consistent compliance.
C) EFS does not have an account-level “encryption by default” setting like EBS. Encryption must be explicitly enabled during file system creation. While encryption is enabled by default for new file systems in the console, API calls must specify encryption explicitly.
D) AWS Config can detect unencrypted file systems or those using AWS-managed keys, but this is reactive. File systems would exist in non-compliant states before detection and remediation. Preventive controls are more effective than detective controls for this requirement.
Question 111
A security team needs to analyze AWS CloudTrail logs to identify all API calls that resulted in AccessDenied errors to investigate potential unauthorized access attempts. Which tool enables this analysis?
A) Amazon Athena querying CloudTrail logs in S3
B) VPC Flow Logs
C) AWS X-Ray
D) Amazon CloudWatch Metrics
Answer: A
Explanation:
Investigating API calls with specific error codes requires querying large volumes of CloudTrail logs to filter relevant events. CloudTrail logs stored in S3 contain comprehensive API activity including error codes and response elements. Amazon Athena enables SQL-based querying of S3 data without requiring data loading or transformation.
Athena queries can filter CloudTrail logs for events where errorCode equals “AccessDenied” or “UnauthorizedOperation”. Queries can aggregate results by user identity, source IP address, or target resource to identify patterns of unauthorized access attempts. This analysis helps security teams identify compromised credentials, misconfigured permissions, or malicious activity.
CloudTrail log analysis with Athena supports complex queries including time range filtering, joins across multiple log files, and statistical aggregations. Security teams can identify which IAM principals are generating the most access denied errors, which resources are being targeted, and whether patterns suggest automated scanning or manual probing.
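For illustration, running such a query through boto3, assuming the CloudTrail logs are already exposed as a cloudtrail_logs table in Athena; the table name and results bucket are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Count denied calls per principal and source IP to surface probing patterns.
query = """
SELECT useridentity.arn, sourceipaddress, count(*) AS denials
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'UnauthorizedOperation')
GROUP BY useridentity.arn, sourceipaddress
ORDER BY denials DESC
"""
athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```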
A) This is the correct answer because Athena queries CloudTrail logs stored in S3 using SQL, can filter for specific error codes like AccessDenied, enables analysis of unauthorized access patterns, and supports complex queries for security investigations.
B) VPC Flow Logs capture network traffic metadata but do not log AWS API calls or error codes. Flow logs show network connections but cannot identify API-level access denied errors or authorization failures.
C) AWS X-Ray traces application requests and performance but does not capture CloudTrail API logs or analyze authorization failures. X-Ray focuses on application performance analysis rather than security audit log investigation.
D) CloudWatch Metrics capture numerical data about AWS resources but do not contain detailed CloudTrail event information. Metrics show aggregate statistics but lack the event-level detail needed for investigating specific API calls and errors.
Question 112
An organization requires that Amazon S3 buckets used for logging be immutable, preventing any modification or deletion of log files for 7 years. Which S3 feature implements this requirement?
A) S3 Versioning with lifecycle policies
B) S3 Object Lock in compliance mode with 7-year retention
C) S3 Glacier with vault lock
D) S3 Intelligent-Tiering
Answer: B
Explanation:
Long-term log retention with immutability requires absolute protection against modification or deletion, even by privileged users. Regulatory compliance and security investigations depend on tamper-proof audit logs. S3 Object Lock compliance mode provides service-enforced immutability for specified retention periods.
S3 Object Lock in compliance mode prevents any user, including the AWS account root user, from deleting or overwriting objects until retention periods expire. For 7-year log retention, objects are locked for 7 years from creation. During this period, no modifications or deletions are possible regardless of IAM permissions or user requests.
Compliance mode differs from governance mode, which allows authorized users to override retention. For audit logs and compliance scenarios requiring absolute immutability, compliance mode ensures logs remain intact throughout retention periods. After 7 years, retention expires and objects can be managed normally.
A) S3 Versioning preserves object versions but does not prevent deletion. Versioning creates delete markers when objects are deleted, and versions themselves can be permanently deleted by authorized users. Versioning provides recovery but not immutability.
B) This is the correct answer because S3 Object Lock compliance mode provides absolute immutability preventing any modification or deletion, 7-year retention periods can be configured, no user including root can bypass retention during the lock period, and this meets regulatory requirements for immutable audit logs.
C) S3 Glacier with Vault Lock provides immutability for Glacier vaults but requires transitioning objects to Glacier storage class. While Vault Lock provides similar immutability, S3 Object Lock is more direct for S3 buckets without requiring storage class transitions.
D) S3 Intelligent-Tiering automatically optimizes storage costs by moving objects between access tiers but does not provide immutability or retention enforcement. Intelligent-Tiering addresses cost optimization, not data retention or tamper-proofing.
Question 113
A security engineer needs to implement controls preventing IAM users from creating access keys for their own accounts. Which approach enforces this requirement?
A) Manually monitor IAM user access keys monthly
B) Create IAM policies denying CreateAccessKey action for users’ own accounts
C) Use AWS Config to detect new access keys
D) Enable MFA for all IAM users
Answer: B
Explanation:
Preventing IAM users from creating their own access keys reduces risks of credential sprawl and unauthorized programmatic access. Organizations may require centralized access key management where only administrators can create keys. IAM policies can deny self-service access key creation while allowing administrators to create keys for users.
IAM policies support policy variables that tie permissions to the identity making the request. A deny statement on iam:CreateAccessKey scoped to the resource arn:aws:iam::*:user/${aws:username} matches only requests where the target user is the caller, so users cannot create access keys for themselves while administrators retain the ability to create keys for others.
This approach allows a centralized security team with appropriate permissions to create access keys for users when needed, while preventing users from independently creating keys. Combined with periodic access key audits and rotation policies, this centralized approach reduces credential management risks.
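A sketch of that deny statement; the ${aws:username} policy variable resolves to the requesting user, so the denial applies only to a user's own identity:

```python
import json

# Deny users the ability to create access keys for their own IAM user.
# Administrators creating keys for *other* users do not match this resource.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySelfServiceAccessKeys",
        "Effect": "Deny",
        "Action": "iam:CreateAccessKey",
        "Resource": "arn:aws:iam::*:user/${aws:username}",
    }],
}
print(json.dumps(policy, indent=2))
```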
A) Manual monthly monitoring does not prevent access key creation, only detects it after the fact. Monthly review introduces 30-day windows where unauthorized access keys could exist and be used for programmatic access.
B) This is the correct answer because IAM policies can deny CreateAccessKey action using conditions, policies can distinguish between self-creation and administrator-created keys, deny statements prevent users from creating their own access keys, and this enables centralized access key management.
C) AWS Config can detect new access keys after creation but does not prevent creation. Config is a detective control that identifies keys after they exist, not a preventive control that blocks creation at the authorization layer.
D) MFA adds authentication strength but does not prevent access key creation. Users with MFA-enabled accounts can still create access keys for programmatic access. MFA and access key controls address different security concerns.
Question 114
A company must ensure that Amazon RDS databases automatically failover to secondary availability zones during outages. Which RDS feature provides this capability?
A) RDS Read Replicas
B) RDS Multi-AZ deployments
C) RDS automated backups
D) RDS encryption at rest
Answer: B
Explanation:
High availability for databases requires automatic failover to standby instances during primary instance failures, minimizing downtime and maintaining service continuity. RDS Multi-AZ deployments provide synchronous replication to standby instances in different availability zones with automatic failover.
RDS Multi-AZ maintains a standby replica in a different availability zone from the primary instance. All database writes are synchronously replicated to the standby before transactions are acknowledged as complete. During primary instance failures, infrastructure problems, or planned maintenance, RDS automatically fails over to the standby.
Failover typically completes within 60-120 seconds, during which the database is unavailable. RDS updates DNS records to point the database endpoint to the new primary instance (formerly standby). Applications using the RDS endpoint automatically connect to the new primary after DNS propagation, requiring no configuration changes.
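A minimal sketch of creating a Multi-AZ instance with boto3; the identifier, engine, and sizing values are illustrative:

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronous standby in another AZ with
# automatic failover handled by RDS.
rds.create_db_instance(
    DBInstanceIdentifier="prod-orders-db",
    Engine="postgres",
    DBInstanceClass="db.m6g.large",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,  # RDS stores the password in Secrets Manager
    MultiAZ=True,
)
```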
A) RDS Read Replicas provide read scalability through asynchronous replication but do not automatically failover during outages. Read replicas can be manually promoted to standalone instances but don’t provide automatic high availability like Multi-AZ deployments.
B) This is the correct answer because RDS Multi-AZ deployments maintain standby replicas in different availability zones, provide synchronous replication ensuring data durability, automatically failover during outages or maintenance, and enable high availability with minimal configuration.
C) RDS automated backups enable point-in-time recovery but do not provide automatic failover. Backups restore databases from snapshots and transaction logs but require manual restoration, introducing significant downtime compared to automatic failover.
D) RDS encryption at rest protects data stored on disk but does not provide high availability or failover capabilities. Encryption and availability are separate concerns addressing different aspects of data protection.
Question 115
A security team discovers that an EC2 instance is communicating with an IP address known for cryptocurrency mining. They need to immediately block this communication while investigating. Which action accomplishes this fastest?
A) Modify security groups to deny traffic to the malicious IP
B) Modify NACLs to deny traffic to the malicious IP
C) Terminate the EC2 instance immediately
D) Use AWS WAF to block the IP
Answer: B
Explanation:
Rapidly blocking specific network communications during security incidents requires immediate network-level controls. Network ACLs provide stateless packet filtering at the subnet level with explicit deny rules that take effect immediately. NACLs can block traffic based on IP addresses without requiring instance-level changes.
NACLs support explicit deny rules with priority ordering. Adding a NACL rule denying outbound traffic to the malicious IP address blocks the EC2 instance from communicating with that destination. NACL rules apply to all instances in associated subnets and take effect immediately without requiring instance restarts or security group changes.
While tightening security group rules could in principle cut off the traffic, security groups are stateful and allow-list only. NACLs provide more direct control for blocking specific IPs with explicit denies. During incidents requiring rapid response, NACLs offer clear, immediate blocking without reworking existing allow rules or worrying about established stateful connections.
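A sketch of the containment step with boto3; the NACL ID and the malicious IP are placeholders, and rule number 50 is chosen so the deny is evaluated before typical higher-numbered allow rules:

```python
import boto3

ec2 = boto3.client("ec2")

# Explicit egress deny for the mining-pool IP; NACL rules are evaluated in
# ascending rule-number order, so this wins over later allow rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=50,
    Protocol="-1",                # all protocols
    RuleAction="deny",
    Egress=True,
    CidrBlock="198.51.100.7/32",  # the malicious destination IP
)
```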
A) Security groups do not support explicit deny rules; they only support allow rules, with everything else implicitly denied. Blocking a single IP would therefore require removing the broad outbound allow rules and enumerating every legitimate destination, a slow and error-prone change during an incident compared with adding one NACL deny entry.
B) This is the correct answer because NACLs support explicit deny rules for specific IP addresses, rules take effect immediately at the subnet level, NACL changes don’t require instance restarts, and this provides rapid incident response blocking malicious communication.
C) Terminating the instance stops the malicious communication but destroys forensic evidence and causes service disruption. Investigation requires preserving the instance for analysis. Termination should be a last resort after forensic data collection.
D) AWS WAF filters HTTP/HTTPS traffic to web applications but cannot block general network traffic from EC2 instances. WAF protects web applications, not instance-level network communications. The instance’s outbound traffic doesn’t flow through WAF.
Question 116
An organization requires that all AWS Lambda functions include tracing for security monitoring and performance analysis. Which AWS service provides this capability?
A) AWS CloudTrail
B) AWS X-Ray
C) Amazon CloudWatch Logs
D) VPC Flow Logs
Answer: B
Explanation:
Application tracing captures request flows through distributed systems, showing how requests traverse components and identifying performance bottlenecks or errors. For Lambda functions, tracing provides visibility into function execution, downstream service calls, and end-to-end request paths. AWS X-Ray provides distributed tracing for AWS services including Lambda.
Lambda functions can be configured to enable X-Ray tracing, which instruments function code to capture trace data. X-Ray records information about function initialization, execution time, downstream service calls to DynamoDB, S3, or other AWS services, and errors or exceptions. Traces show complete request paths across services.
X-Ray service maps visualize application architecture and dependencies, showing how Lambda functions interact with other AWS services. Performance analysis identifies slow functions or service calls. Security teams use X-Ray traces to understand request flows during investigations, identifying which functions handled sensitive data or made suspicious API calls.
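Enabling active tracing on an existing function is a one-call configuration change; a boto3 sketch with a placeholder function name:

```python
import boto3

lam = boto3.client("lambda")

# Active tracing samples incoming invocations and sends trace segments,
# including downstream AWS service calls, to X-Ray.
lam.update_function_configuration(
    FunctionName="payments-processor",
    TracingConfig={"Mode": "Active"},
)
```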
A) CloudTrail logs AWS API calls including Lambda function invocations but does not provide distributed tracing or request flow visualization. CloudTrail shows that functions were invoked but not how requests flowed through systems.
B) This is the correct answer because AWS X-Ray provides distributed tracing for Lambda functions, captures execution details and downstream service calls, visualizes request flows through service maps, and enables performance and security analysis of function behavior.
C) CloudWatch Logs captures function output and logging statements but does not provide distributed tracing or request flow visualization. Logs show function-level information but not cross-service request paths or performance traces.
D) VPC Flow Logs capture network traffic metadata for VPC resources but do not trace Lambda function execution or application requests. Flow logs provide network-level visibility, not application-level tracing.
Question 117
A company must implement key rotation for AWS KMS customer managed keys used to encrypt Amazon S3 data. Keys must rotate every 90 days instead of the default annual rotation. How should this be implemented?
A) Enable automatic key rotation in KMS (rotates annually)
B) Implement custom key rotation using Lambda functions triggered every 90 days to create new key material
C) Manually create new keys every 90 days
D) Use AWS-managed keys which rotate automatically
Answer: B
Explanation:
KMS automatic key rotation rotates backing key material annually but does not support configurable rotation intervals. Organizations with requirements for more frequent rotation must implement custom rotation processes. Custom rotation creates new key versions while maintaining access to data encrypted with previous versions.
Custom key rotation uses Lambda functions triggered by EventBridge scheduled rules every 90 days. The Lambda function creates new customer managed keys, updates applications and services to use new keys for encryption operations, and maintains old keys for decrypting existing data. Key aliases can be updated to point to new keys, enabling applications to reference aliases rather than specific key IDs.
This approach provides rotation flexibility while maintaining data access. Old keys remain enabled for decryption but are removed from active encryption use. After ensuring all data is re-encrypted with new keys or retention periods expire, old keys can be disabled or scheduled for deletion.
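A minimal sketch of the scheduled rotation function, assuming applications reference the key through an alias (the alias name is a placeholder):

```python
import boto3

kms = boto3.client("kms")

def handler(event, context):
    """Invoked every 90 days by an EventBridge scheduled rule. Creates a
    fresh customer managed key and repoints the alias; the old key stays
    enabled so existing ciphertext can still be decrypted."""
    new_key = kms.create_key(Description="s3-data-key, rotated quarterly")
    kms.update_alias(
        AliasName="alias/s3-data-key",  # apps use the alias, not raw key IDs
        TargetKeyId=new_key["KeyMetadata"]["KeyId"],
    )
```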
A) KMS automatic key rotation only rotates annually and cannot be configured for different intervals. While automatic rotation is simpler, it doesn’t meet the 90-day requirement. Organizations needing more frequent rotation must implement custom processes.
B) This is the correct answer because custom key rotation using Lambda enables configurable rotation intervals like 90 days, new keys can be created on schedule, applications can be updated to use new keys, and old keys are maintained for decrypting existing data.
C) Manual key creation every 90 days is operationally intensive, error-prone, and doesn’t scale. Manual processes require documentation, scheduled reminders, and verification steps. Automated rotation reduces operational burden and ensures consistent compliance.
D) AWS-managed keys rotate automatically on a schedule set by AWS (annually; formerly every three years), not every 90 days. Organizations cannot control AWS-managed key rotation intervals or implement custom rotation schedules. AWS-managed keys don't meet the 90-day requirement.
Question 118
A security team needs to implement automated remediation that isolates compromised EC2 instances by moving them to a quarantine security group while preserving them for forensic analysis. Which solution implements this?
A) Use AWS Config to detect compromised instances and terminate them
B) Use Amazon GuardDuty findings with EventBridge to trigger Lambda functions that modify instance security groups
C) Manually change security groups when incidents are detected
D) Use AWS Systems Manager to stop compromised instances
Answer: B
Explanation:
Automated incident response for compromised instances requires detecting threats and executing containment actions while preserving forensic evidence. Amazon GuardDuty identifies compromised instances through behavioral analysis and threat intelligence. EventBridge provides event-driven automation connecting GuardDuty findings to response actions.
When GuardDuty generates findings indicating instance compromise (such as cryptocurrency mining, backdoor communication, or unusual API activity), EventBridge rules match specific finding types and trigger Lambda functions. The Lambda function retrieves instance details from the finding, creates or identifies a quarantine security group with no inbound or outbound rules, and modifies the instance to use the quarantine security group.
The quarantine security group isolates the instance from network communications, preventing attackers from using it for lateral movement or data exfiltration. The instance remains running, preserving memory contents and system state for forensic analysis. Security teams can later use Session Manager or snapshot the instance for detailed investigation without risking further compromise.
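A sketch of the isolation Lambda, assuming the standard GuardDuty EC2 finding shape for locating the instance ID and a pre-created quarantine group (both IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
QUARANTINE_SG = "sg-0123456789abcdef0"  # group with no inbound/outbound rules

def handler(event, context):
    """EventBridge target for GuardDuty findings. Replaces the instance's
    security groups with the quarantine group, network-isolating it while
    leaving it running for forensic analysis."""
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[QUARANTINE_SG])
```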
A) AWS Config detects configuration compliance issues but does not identify compromised instances through threat detection. Terminating instances destroys forensic evidence needed for investigation. Config is unsuitable for threat-based incident response requiring evidence preservation.
B) This is the correct answer because GuardDuty detects compromised instances through threat intelligence, EventBridge triggers automated response to findings, Lambda functions can modify security groups to isolate instances, and instances are preserved for forensic analysis while being network-isolated.
C) Manual security group changes during incidents introduce delays allowing attackers more time for malicious activity. Manual processes don’t scale during widespread incidents and require human availability. Automated response provides rapid containment reducing attacker dwell time.
D) Systems Manager can stop instances but this approach loses memory contents valuable for forensics and doesn’t provide the network isolation that quarantine security groups offer. Stopped instances can’t be actively investigated, and evidence may be lost.
Question 119
An organization must ensure that all data in Amazon DynamoDB tables is encrypted using customer-managed KMS keys that are rotated quarterly. Which configuration meets this requirement?
A) Use DynamoDB default encryption with AWS owned keys
B) Enable DynamoDB encryption with customer-managed KMS keys and implement custom quarterly rotation
C) Use application-level encryption before storing data
D) Enable DynamoDB encryption with AWS-managed keys
Answer: B
Explanation:
DynamoDB encryption at rest protects table data, indexes, streams, and backups using KMS keys. Customer-managed keys provide control over key policies, rotation schedules, and usage auditing. Quarterly rotation requires implementing custom rotation processes since KMS automatic rotation only occurs annually.
Customer-managed KMS keys for DynamoDB can be specified when enabling encryption on new tables or when restoring encrypted backups. All table data encryption uses the specified customer-managed key. CloudTrail logs all key usage, providing comprehensive audit trails of encryption operations for compliance and security monitoring.
Implementing quarterly rotation requires Lambda functions triggered by EventBridge scheduled rules every 90 days. The function creates new customer-managed keys, updates DynamoDB table encryption settings to use new keys for new data, and maintains old keys for decrypting existing data. Table re-encryption may be performed by creating backups with new keys or using DynamoDB point-in-time recovery.
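A sketch of pointing an existing table at a customer-managed key with boto3; the table name and key ARN are placeholders, and a rotation job would later repeat the call with the replacement key:

```python
import boto3

ddb = boto3.client("dynamodb")

# Switch the table's encryption to a customer-managed KMS key; key usage is
# then auditable in CloudTrail.
ddb.update_table(
    TableName="orders",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    },
)
```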
A) AWS owned keys are managed entirely by AWS and shared across customers. Rotation schedules cannot be controlled, key usage cannot be audited through CloudTrail, and keys don’t provide the control needed for meeting quarterly rotation requirements. AWS owned keys don’t satisfy the requirement for customer-managed keys.
B) This is the correct answer because customer-managed KMS keys enable DynamoDB encryption with full control, custom rotation processes can implement quarterly rotation schedules, CloudTrail provides comprehensive key usage auditing, and this meets both encryption and rotation requirements.
C) Application-level encryption requires applications to encrypt data before writing to DynamoDB and decrypt after reading. This adds significant complexity, doesn’t leverage DynamoDB native encryption, requires custom key management and rotation, and is unnecessarily complex compared to using DynamoDB encryption with KMS.
D) AWS-managed keys for DynamoDB rotate automatically on a schedule set by AWS (annually; formerly every three years), not quarterly. Organizations cannot control AWS-managed key rotation schedules or implement custom rotation. AWS-managed keys don't meet the quarterly rotation requirement.
Question 120
A company needs to implement controls ensuring that EC2 instances can only be launched in specific AWS regions to meet data residency requirements. Which solution enforces this?
A) Manually monitor instance launches across all regions
B) Use Service Control Policies with region-based conditions denying EC2 actions in non-approved regions
C) Configure IAM policies in each account separately
D) Use AWS Config to detect instances in unauthorized regions
Answer: B
Explanation:
Data residency requirements mandate that resources and data remain within specific geographic regions for legal, compliance, or privacy reasons. Preventing resource creation outside approved regions requires organization-wide preventive controls that cannot be bypassed by individual accounts. Service Control Policies provide this capability.
SCPs use the aws:RequestedRegion condition key to evaluate which region API requests target. Policies can deny EC2 actions like RunInstances, CreateVolume, and other resource creation APIs when the requested region is not in the approved list. This preventive control blocks non-compliant resource creation at the authorization layer before instances are launched.
Organization-wide SCP enforcement ensures all accounts comply with regional restrictions without requiring per-account configuration. Even account administrators cannot override SCPs, ensuring consistent enforcement. Combined with similar restrictions for other services like S3 and RDS, organizations comprehensively enforce data residency requirements.
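A sketch of such an SCP; the approved-region list is an example, and a production policy would also need carve-outs for global services:

```python
import json

# Deny all EC2 actions whose requested region is outside the approved list.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyEc2OutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "ec2:*",
        "Resource": "*",
        "Condition": {"StringNotEquals": {
            "aws:RequestedRegion": ["eu-central-1", "eu-west-1"]
        }},
    }],
}
print(json.dumps(scp, indent=2))
```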
A) Manual monitoring across multiple regions is operationally intensive and reactive. Instances would be launched in unauthorized regions before detection, potentially violating data residency requirements. Manual processes don’t prevent non-compliant launches and don’t scale.
B) This is the correct answer because SCPs with aws:RequestedRegion conditions restrict resource creation to approved regions, preventive controls block launches in unauthorized regions before they occur, organization-wide enforcement ensures consistent compliance, and this meets data residency requirements.
C) IAM policies in individual accounts could be modified by account administrators, allowing circumvention of regional restrictions. Per-account configuration doesn’t scale across organizations and lacks the immutability that SCPs provide. IAM policies alone are insufficient for mandatory regional restrictions.
D) AWS Config detects instances in unauthorized regions after launch but doesn’t prevent creation. Detective controls are reactive, allowing compliance violations before remediation. Data residency violations, even brief ones, may breach regulatory requirements making preventive controls essential.