Question 81
An organization requires that all AWS Lambda functions use the latest runtime versions to ensure security patches are applied. Which approach enforces this requirement?
A) Use AWS Config rules to detect Lambda functions using deprecated runtimes and trigger automatic updates
B) Create Service Control Policies denying Lambda function creation with deprecated runtimes
C) Enable Lambda runtime automatic updates in account settings
D) Use Lambda layers to update runtime dependencies
Answer: B
Explanation:
Lambda runtime versions include the execution environment, language libraries, and security patches. AWS periodically deprecates older runtime versions when they reach end of support, at which point they no longer receive security updates. Organizations must ensure functions migrate to supported runtimes to maintain security posture. Preventive controls that block creation of functions with deprecated runtimes enforce this requirement.
Service Control Policies in AWS Organizations can include conditions that check the Lambda runtime parameter in CreateFunction and UpdateFunctionConfiguration API calls. SCPs can deny these operations when they specify deprecated or soon-to-be-deprecated runtimes. This preventive control ensures that new functions and function updates use only current, supported runtime versions.
The SCP approach prevents non-compliant functions from being created rather than detecting and remediating them after creation. Combined with AWS Config rules that identify existing functions using deprecated runtimes, organizations can prevent new violations while addressing legacy functions through managed migration efforts. This two-pronged approach of prevention and detection ensures comprehensive runtime version management.
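As a hedged sketch, an SCP along these lines could implement the deny. The lambda:Runtime condition key and the runtime values shown are assumptions made for illustration, following the parameter check this explanation describes; verify them against the current Lambda condition-key documentation before deploying.

import json

# Sketch: deny creating or reconfiguring functions with deprecated runtimes.
# "lambda:Runtime" and the runtime list are illustrative assumptions.
deprecated_runtime_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeprecatedLambdaRuntimes",
        "Effect": "Deny",
        "Action": ["lambda:CreateFunction", "lambda:UpdateFunctionConfiguration"],
        "Resource": "*",
        "Condition": {"StringEquals": {"lambda:Runtime": ["python2.7", "nodejs10.x"]}},
    }],
}
print(json.dumps(deprecated_runtime_scp, indent=2))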
A) AWS Config can detect Lambda functions using deprecated runtimes and trigger notifications or create tickets for remediation, but Config cannot automatically update function runtimes because runtime updates may require code changes or testing. Automatic runtime updates could break functions if code is not compatible with newer runtime versions.
B) This is the correct answer because SCPs can deny Lambda function creation with deprecated runtimes, preventive controls block non-compliant functions before they exist, runtime version checks can be enforced organization-wide, and this ensures new functions use current, supported runtime versions.
C) AWS Lambda does not provide automatic runtime update functionality. Runtime versions are explicitly specified during function creation and must be manually updated through UpdateFunctionConfiguration API calls. Organizations must manually manage runtime migrations as AWS deprecates older versions.
D) Lambda layers package libraries and dependencies for functions but do not control or update the Lambda runtime itself. Layers provide code sharing and dependency management but do not address runtime version requirements or security patch application to the execution environment.
Question 82
A security engineer needs to implement a solution that automatically rotates SSH keys for EC2 instances and distributes updated public keys to administrators. Which approach accomplishes this?
A) Use AWS Systems Manager Session Manager which does not require SSH keys
B) Manually rotate SSH keys monthly using custom scripts
C) Use EC2 Instance Connect to manage SSH key rotation automatically
D) Store SSH keys in AWS Secrets Manager with automatic rotation
Answer: A
Explanation:
Traditional SSH key management for EC2 instance access creates operational and security challenges. SSH private keys must be distributed securely to administrators, stored safely, and rotated periodically. Lost or compromised keys require emergency rotation and redistribution. AWS Systems Manager Session Manager provides an alternative approach that eliminates SSH key management entirely.
Session Manager provides browser-based or CLI shell access to EC2 instances without requiring SSH keys, bastion hosts, or open inbound SSH ports. Authentication and authorization occur through IAM policies rather than SSH key pairs. Administrators use temporary session tokens to connect to instances, with sessions fully logged in CloudTrail and optionally recorded for audit purposes.
Eliminating SSH key dependencies removes the entire lifecycle of key generation, distribution, storage, rotation, and revocation. Instance security groups do not need inbound SSH rules, reducing attack surface. Access control through IAM policies provides fine-grained control over who can access which instances. This approach addresses the root cause rather than attempting to automate traditional SSH key management.
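As a minimal sketch, an IAM policy like the following grants Session Manager access scoped by instance tag; the tag key and value are illustrative assumptions.

# Sketch: allow starting Session Manager sessions only on instances tagged
# Team=web-admins (hypothetical tag). Expressed as a Python dict for clarity.
session_access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:StartSession",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ssm:resourceTag/Team": "web-admins"}},
    }],
}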
A) This is the correct answer because Session Manager eliminates SSH key requirements entirely, uses IAM for authentication and authorization, provides secure instance access without managing SSH keys, and reduces operational overhead while improving security posture.
B) Manual SSH key rotation with custom scripts is operationally intensive, error-prone, and scales poorly across many instances and administrators. Manual processes rely on human execution consistency and do not address the fundamental challenges of secure key distribution and storage. This approach perpetuates SSH key management problems.
C) EC2 Instance Connect provides one-time SSH access by temporarily pushing SSH public keys to instance metadata, but it still requires managing SSH key pairs and uses SSH protocol. Instance Connect simplifies some aspects of SSH key management but does not eliminate keys entirely or provide the comprehensive session logging that Session Manager offers.
D) Secrets Manager stores and rotates secrets like database credentials but does not specifically support SSH key rotation and distribution workflows. While Secrets Manager could theoretically store SSH keys, it does not automate distributing public keys to instances or provide mechanisms for secure SSH key lifecycle management.
Question 83
A company must ensure that all Amazon EBS volumes created in production accounts are encrypted. Which solution enforces this requirement at volume creation time?
A) Use AWS Config to detect unencrypted volumes and delete them
B) Enable EBS encryption by default in account settings
C) Create IAM policies requiring encrypted volume parameter in CreateVolume calls
D) Use CloudTrail to monitor volume creation and send alerts
Answer: B
Explanation:
Ensuring consistent encryption of EBS volumes across an organization requires preventive controls that make encryption mandatory rather than optional. Account-level settings that enforce encryption by default eliminate the possibility of creating unencrypted volumes even through accidental omission or misconfiguration. AWS provides EBS encryption by default as a per-Region, account-level setting.
When EBS encryption by default is enabled, all new EBS volumes and snapshots are automatically encrypted regardless of whether encryption is explicitly specified in API calls or console actions. Users cannot create unencrypted volumes even if they try to disable encryption. This setting applies to volumes created directly through EC2 APIs and volumes created automatically during instance launches.
Encryption by default uses the default KMS key for EBS unless a different key is specified. Organizations can enable this setting across all accounts and regions using automation or manually through the console. Once enabled, the setting provides consistent enforcement without requiring policy management or ongoing monitoring, eliminating the risk of unencrypted volume creation.
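A minimal boto3 sketch of enabling the setting in every Region of one account (the setting is per Region, so each must be enabled individually; assumes credentials and a default Region are configured):

import boto3

regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    ec2.enable_ebs_encryption_by_default()  # all new volumes/snapshots encrypted
    status = ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"]
    print(region, status)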
A) AWS Config can detect unencrypted volumes after creation and trigger remediation including deletion, but this approach is reactive rather than preventive. There would be a time window where unencrypted volumes exist before detection and remediation. Detective controls are less effective than preventive controls for this requirement.
B) This is the correct answer because EBS encryption by default prevents unencrypted volume creation at the account level, applies automatically to all new volumes regardless of how they are created, requires no policy management or monitoring, and provides consistent enforcement across the account.
C) IAM policies can require specific parameters in API calls but would need to deny CreateVolume calls without encryption parameters. This approach is complex because it requires crafting policies that correctly evaluate encryption parameters. EBS encryption by default provides simpler and more reliable enforcement.
D) CloudTrail monitoring with alerts is a detective control that identifies unencrypted volumes after creation but does not prevent their creation. Alerts enable rapid response but allow a window where unencrypted volumes exist. This approach is reactive rather than preventive and does not enforce the requirement at creation time.
Question 84
A security team needs to investigate a potential security incident by analyzing historical API calls made by a specific IAM role over the past month. Which AWS service provides this information?
A) AWS CloudTrail with Athena queries on log data
B) Amazon CloudWatch Logs Insights
C) AWS Config with Advanced Queries
D) Amazon GuardDuty findings
Answer: A
Explanation:
Security incident investigations require analyzing historical activity to understand the scope and timeline of potential compromises. CloudTrail provides comprehensive logging of all AWS API calls with retention of log data for analysis. However, searching through large volumes of CloudTrail logs requires efficient query mechanisms. Amazon Athena enables SQL-based analysis of CloudTrail logs stored in S3.
CloudTrail logs delivered to S3 can be queried directly using Athena without requiring data loading or transformation. Security analysts write SQL queries filtering for specific IAM roles, time ranges, API actions, or other parameters to identify relevant activity during investigations. Athena scales automatically to handle large log volumes and provides results quickly even when analyzing months of data.
For role-specific investigations, Athena queries filter CloudTrail events where the user identity matches the role ARN or role session name. Queries can identify all actions performed by the role, resources accessed, timestamps, source IP addresses, and error codes. This comprehensive analysis helps investigators understand what actions a potentially compromised role performed and assess the incident’s impact.
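As a hedged example, the query below assumes a CloudTrail table named cloudtrail_logs created with the standard Athena-over-CloudTrail schema; the role name, database, output bucket, and dates are hypothetical.

import boto3

query = """
SELECT eventtime, eventname, eventsource, sourceipaddress, errorcode
FROM cloudtrail_logs
WHERE useridentity.arn LIKE '%assumed-role/SuspectRole%'
  AND eventtime BETWEEN '2024-05-01T00:00:00Z' AND '2024-06-01T00:00:00Z'
ORDER BY eventtime
"""
athena = boto3.client("athena")
resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "security_logs"},
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution for completion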
A) This is the correct answer because CloudTrail logs all API calls including those made by IAM roles, logs are stored in S3 for long-term retention and analysis, Athena enables SQL queries to filter and analyze CloudTrail data efficiently, and queries can identify all activity by specific roles over historical time periods.
B) CloudWatch Logs Insights queries logs sent to CloudWatch Logs, but CloudTrail events must be explicitly delivered to CloudWatch Logs rather than being automatically available there. Additionally, long-term log storage in CloudWatch Logs costs more than S3, and retention must be configured per log group. Logs Insights is better suited to recent operational data than to month-long historical investigations.
C) AWS Config tracks resource configuration changes over time but does not provide comprehensive API call logging like CloudTrail. Config Advanced Queries can analyze configuration history but lack the detailed API activity logs needed for security incident investigations involving user and role actions.
D) Amazon GuardDuty generates security findings when detecting threats but does not provide queryable access to historical API call logs. GuardDuty analyzes CloudTrail data to detect threats but does not expose the underlying log data for custom queries. GuardDuty is for threat detection rather than historical log analysis.
Question 85
An application requires fine-grained access control where different users can perform different actions on the same DynamoDB table items based on attribute values. Which DynamoDB feature supports this requirement?
A) DynamoDB Streams with filtering
B) IAM policies with condition expressions checking item attributes
C) DynamoDB attribute-level encryption
D) DynamoDB global tables with regional access control
Answer: B
Explanation:
Fine-grained access control in DynamoDB requires evaluating item attributes during authorization to determine whether specific users can access or modify particular items. This enables applications to implement data-level permissions where users can only interact with items they own or have permissions for. IAM policies support condition expressions that compare request parameters and item attributes.
DynamoDB-specific IAM condition keys enable attribute-based access control. The dynamodb:LeadingKeys condition restricts access based on partition key values, while dynamodb:Attributes conditions can control which attributes can be read or written. These conditions are evaluated during API request authorization, ensuring that users can only access data permitted by policy conditions.
For example, a policy might allow users to query items where the partition key equals their user ID, or allow modifications only to specific attributes while protecting others. These IAM policy conditions implement attribute-based access control at the authorization layer without requiring application-level permission checks. The approach provides strong security guarantees enforced by AWS rather than application code.
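A minimal sketch of such a policy, expressed as a Python dict (the table ARN is illustrative; the Cognito identity variable follows the documented LeadingKeys pattern):

leading_keys_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserData",
        "Condition": {
            # Item partition key must equal the caller's Cognito identity ID.
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${cognito-identity.amazonaws.com:sub}"]
            }
        },
    }],
}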
A) DynamoDB Streams capture item-level changes for event-driven processing but do not provide access control mechanisms. Streams are for building reactive architectures that respond to data changes, not for implementing fine-grained authorization or restricting user access based on attributes.
B) This is the correct answer because IAM policies support conditions checking DynamoDB item attributes, dynamodb:LeadingKeys restricts access based on partition keys, dynamodb:Attributes controls attribute-level access, and these conditions enable fine-grained attribute-based access control enforced at the authorization layer.
C) DynamoDB attribute-level encryption using client-side encryption libraries protects data confidentiality but does not provide access control. Encryption determines whether data can be read in plaintext, not which users are authorized to perform which actions on table items, so it does not satisfy the fine-grained authorization requirement.
D) DynamoDB global tables replicate data across multiple AWS Regions for availability and low-latency access but do not provide fine-grained, attribute-based access control. Regional placement does not restrict which users can act on particular items or attribute values.
Question 86
A company needs to implement a solution that automatically blocks IP addresses that attempt more than 10 failed login attempts to the AWS Management Console within 5 minutes. Which solution implements this requirement?
A) Use Amazon GuardDuty to detect failed logins and AWS WAF to block IPs
B) Create CloudWatch Logs metric filters for failed console logins, trigger Lambda via CloudWatch Alarms to add IPs to AWS WAF IP sets
C) Use AWS Shield Advanced to block brute force attacks
D) Configure VPC NACLs to automatically block suspicious IPs
Answer: B
Explanation:
Implementing automated blocking of IP addresses based on failed login attempts requires detecting authentication failures, counting attempts within time windows, and dynamically updating IP blocking rules. CloudWatch Logs receives CloudTrail events including console sign-in attempts, enabling pattern matching and automated responses.
CloudWatch Logs metric filters can match failed console sign-in events and increment metrics. When the metric exceeds thresholds within specified time periods, CloudWatch Alarms trigger. These alarms can invoke Lambda functions that add offending IP addresses to AWS WAF IP sets configured to block requests from those addresses.
The Lambda function extracts the source IP from the failed login event, checks if it should be blocked based on attempt count, and updates WAF IP set rules. WAF, when associated with CloudFront distributions or ALBs protecting access to management resources, blocks subsequent requests from the blacklisted IPs. This creates an automated defense against brute force authentication attacks.
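A hedged sketch of that Lambda function using the WAFv2 API (the IP-set name, ID, scope, and the hard-coded address stand in for values parsed from the alarm payload):

import boto3

waf = boto3.client("wafv2")

def handler(event, context):
    offending_ip = "203.0.113.10/32"  # would be extracted from the event
    ip_set = waf.get_ip_set(Name="console-blocklist", Scope="CLOUDFRONT",
                            Id="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111")
    addresses = set(ip_set["IPSet"]["Addresses"]) | {offending_ip}
    waf.update_ip_set(
        Name="console-blocklist", Scope="CLOUDFRONT",
        Id="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
        Addresses=sorted(addresses),
        LockToken=ip_set["LockToken"],  # WAFv2 requires the current lock token
    )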
A) GuardDuty detects threats including compromised credentials but does not count failed login attempts or provide automatic IP blocking. GuardDuty generates findings but requires additional automation for blocking actions.
B) This is the correct answer because CloudWatch Logs metric filters detect and count failed console logins, CloudWatch Alarms trigger when thresholds are exceeded, Lambda functions can dynamically update WAF IP sets, and WAF blocks requests from blacklisted IPs.
C) AWS Shield Advanced protects against DDoS attacks but does not analyze application-layer authentication failures or implement brute force protection based on login attempts. Shield focuses on volumetric attacks rather than authentication security.
D) VPC NACLs control network traffic to VPC resources but cannot block AWS Management Console access, which does not flow through customer VPCs. Console access occurs through AWS-managed infrastructure that NACLs cannot control.
Question 87
An organization must ensure that all S3 buckets have server access logging enabled to track access requests. Which approach provides automated enforcement?
A) Manually enable logging on each bucket
B) Use AWS Config with remediation to enable logging on non-compliant buckets
C) Create Lambda functions triggered by bucket creation events
D) Use S3 Inventory to identify buckets without logging
Answer: B
Explanation:
Ensuring consistent security configurations across all S3 buckets requires automated detection and remediation of non-compliant configurations. Server access logging provides detailed records of requests made to buckets, essential for security monitoring and compliance. AWS Config provides continuous compliance monitoring with automated remediation capabilities.
AWS Config includes a managed rule (s3-bucket-logging-enabled) that checks whether S3 buckets have server access logging configured. When Config detects buckets without logging enabled, it marks them as non-compliant and can automatically trigger remediation actions using Systems Manager Automation documents or custom Lambda functions.
The remediation action calls the S3 PutBucketLogging API to enable server access logging on non-compliant buckets, specifying a target bucket for log delivery. This automated enforcement ensures that new buckets and existing buckets without logging are automatically configured to meet compliance requirements without manual intervention.
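A minimal sketch of that remediation call (bucket names are hypothetical; the target bucket must already permit S3 log delivery):

import boto3

s3 = boto3.client("s3")
s3.put_bucket_logging(
    Bucket="noncompliant-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "central-access-logs",
            "TargetPrefix": "noncompliant-bucket/",
        }
    },
)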
A) Manual configuration does not scale, is error-prone, and does not automatically handle new bucket creation. Manual processes require ongoing attention and do not provide consistent enforcement across the organization.
B) This is the correct answer because AWS Config continuously monitors S3 bucket logging configuration, managed rules detect non-compliant buckets, automated remediation enables logging without manual intervention, and the solution ensures consistent compliance across all buckets.
C) Lambda functions triggered by bucket creation events could enable logging for new buckets but would not address existing non-compliant buckets. This approach requires custom development and maintenance compared to using Config’s managed rules and remediation.
D) S3 Inventory generates lists of buckets and their properties but does not automatically enforce logging configuration or remediate non-compliant buckets. Inventory is a reporting tool rather than an enforcement mechanism.
Question 88
A security team needs to ensure that EC2 instances cannot be launched without approved instance metadata service version 2 (IMDSv2) configuration. Which solution enforces this requirement?
A) Create an IAM policy condition requiring ec2:MetadataHttpTokens to equal “required”
B) Use AWS Config to detect instances using IMDSv1
C) Manually configure all instances to use IMDSv2
D) Use Systems Manager to update instance configurations
Answer: A
Explanation:
Instance Metadata Service version 2 provides enhanced security for accessing instance metadata by requiring session tokens, protecting against SSRF attacks and unauthorized metadata access. Organizations can enforce IMDSv2 usage through IAM policies that require specific metadata service configurations during instance launch.
IAM policies support the ec2:MetadataHttpTokens condition key that evaluates the HttpTokens parameter in RunInstances API calls. Policies that deny instance launches unless HttpTokens equals “required” enforce IMDSv2-only configuration. This preventive control blocks non-compliant instance launches at the authorization layer before instances are created.
Service Control Policies can enforce this requirement organization-wide, ensuring that all accounts must launch instances with IMDSv2 required. Combined with policies enforcing ec2:MetadataHttpPutResponseHopLimit settings, organizations can comprehensively secure metadata service access across their infrastructure.
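A minimal sketch of such an SCP, using the documented ec2:MetadataHttpTokens condition key:

imdsv2_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireImdsV2",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        # Deny any launch that does not set HttpTokens to "required".
        "Condition": {"StringNotEquals": {"ec2:MetadataHttpTokens": "required"}},
    }],
}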
A) This is the correct answer because IAM policy conditions can require IMDSv2 configuration during instance launch, ec2:MetadataHttpTokens condition key checks the metadata service version, preventive controls block non-compliant launches before instances exist, and this enforces IMDSv2 usage across the environment.
B) AWS Config can detect instances using IMDSv1 after they are launched, but this is a detective control rather than preventive. Non-compliant instances would exist before detection and remediation, creating a security risk window.
C) Manual configuration of instances does not prevent future non-compliant launches and does not scale across dynamic environments. Manual processes rely on user compliance rather than technical enforcement.
D) Systems Manager can update instance configurations but operates on existing instances rather than preventing non-compliant launches. Systems Manager is useful for remediation but does not provide preventive enforcement at launch time.
Question 89
A company must implement encryption for all data flowing between Amazon CloudFront and origin servers. Which CloudFront configuration ensures this encryption?
A) Enable HTTPS for viewer to CloudFront connections only
B) Configure CloudFront to require HTTPS for both viewer protocol and origin protocol policies
C) Use S3 bucket policies to require SSL
D) Enable encryption at rest for CloudFront cache
Answer: B
Explanation:
End-to-end encryption for CloudFront distributions requires securing both segments of content delivery: from viewers to CloudFront edge locations and from CloudFront to origin servers. Configuring only viewer-facing HTTPS leaves the CloudFront-to-origin connection potentially unencrypted, failing to provide complete data protection.
CloudFront viewer protocol policies control how viewers connect to edge locations, with options including HTTP only, HTTPS only, or redirect HTTP to HTTPS. Origin protocol policies control how CloudFront connects to origin servers, with options for HTTP only, HTTPS only, or match viewer. Complete encryption requires setting viewer protocol policy to HTTPS only or redirect, and origin protocol policy to HTTPS only.
When both policies enforce HTTPS, all data transmission is encrypted: viewers connect to CloudFront using HTTPS/TLS, and CloudFront connects to origins using HTTPS/TLS. This configuration ensures that data never flows unencrypted across networks, protecting against interception or tampering throughout the content delivery path.
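As an illustrative fragment of a DistributionConfig (most required fields omitted for brevity; key names follow the CloudFront API):

distribution_fragment = {
    "DefaultCacheBehavior": {
        "ViewerProtocolPolicy": "redirect-to-https",  # or "https-only"
    },
    "Origins": {"Quantity": 1, "Items": [{
        "CustomOriginConfig": {
            "OriginProtocolPolicy": "https-only",
            "OriginSslProtocols": {"Quantity": 1, "Items": ["TLSv1.2"]},
        },
    }]},
}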
A) Enabling HTTPS only for viewer connections protects the viewer-to-CloudFront segment but allows CloudFront-to-origin connections to use HTTP. This leaves data unencrypted between CloudFront and origin servers, failing to meet the requirement for complete encryption.
B) This is the correct answer because requiring HTTPS for both viewer and origin protocol policies ensures encryption for the entire data path, protects data from viewers to CloudFront and from CloudFront to origins, and provides comprehensive end-to-end encryption in transit.
C) S3 bucket policies can require SSL for bucket access but do not control CloudFront protocol policies or ensure encryption between CloudFront and origins. Bucket policies complement CloudFront configuration but do not replace proper origin protocol settings.
D) CloudFront does not provide encryption at rest for cached content. Edge location caches are managed by AWS. The requirement specifies encryption in transit between CloudFront and origins, not encryption of cached data.
Question 90
An organization needs to detect when IAM policies are created that allow S3 actions without encryption requirements. Which AWS service identifies these policy risks?
A) AWS CloudTrail
B) AWS IAM Access Analyzer
C) Amazon GuardDuty
D) AWS Config
Answer: B
Explanation:
Analyzing IAM policies for security risks requires automated reasoning that evaluates policy logic and identifies potential vulnerabilities. Policies that allow S3 actions without enforcing encryption create risks of unencrypted data storage. IAM Access Analyzer uses automated reasoning to analyze policies and identify security issues.
IAM Access Analyzer evaluates resource policies, identity policies, and service control policies to identify permissions that grant access to resources. For S3-related policies, Access Analyzer can identify policies that allow s3:PutObject without requiring encryption conditions, enabling unencrypted object uploads. These findings help security teams identify and remediate risky policy configurations.
Access Analyzer generates findings when policies grant broader permissions than expected or lack required security conditions. For encryption enforcement, findings would highlight policies missing conditions like s3:x-amz-server-side-encryption. Security teams can review findings and update policies to include proper encryption requirements.
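Not Access Analyzer itself, but as an illustration of the risky pattern such findings surface, a simple scan for Allow statements granting s3:PutObject without the s3:x-amz-server-side-encryption condition might look like this (crude string matching, for demonstration only):

def allows_unencrypted_put(policy: dict) -> bool:
    # Returns True if any Allow statement grants PutObject without an
    # encryption condition attached.
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        grants_put = any(a in ("s3:PutObject", "s3:*", "*") for a in actions)
        if stmt.get("Effect") == "Allow" and grants_put:
            if "s3:x-amz-server-side-encryption" not in str(stmt.get("Condition", {})):
                return True
    return False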
A) CloudTrail logs policy creation events showing who created policies and when, but does not analyze policy content for security risks or missing encryption conditions. CloudTrail provides audit logs but not policy security analysis.
B) This is the correct answer because IAM Access Analyzer analyzes policy content for security risks, identifies policies lacking required security conditions like encryption requirements, uses automated reasoning to evaluate policy logic, and generates findings for policies that allow unencrypted data operations.
C) Amazon GuardDuty detects threats through behavioral analysis but does not perform static analysis of IAM policy documents. GuardDuty focuses on detecting active threats rather than analyzing policy configurations for potential vulnerabilities.
D) AWS Config monitors resource configurations and compliance but does not analyze IAM policy logic for missing security conditions. Config can detect policy existence and basic attributes but lacks the policy reasoning capabilities that Access Analyzer provides.
Question 91
A company requires that all AWS Lambda functions execute with the minimum necessary permissions following least privilege principles. Which approach helps identify overly permissive Lambda execution roles?
A) Manually review all Lambda function IAM policies
B) Use IAM Access Analyzer to identify unused permissions in Lambda execution roles
C) Enable AWS CloudTrail logging for all Lambda functions
D) Use AWS X-Ray to trace Lambda execution
Answer: B
Explanation:
Implementing least privilege requires identifying permissions that Lambda functions have but do not use, enabling removal of unnecessary permissions. Manual review of policies does not scale and cannot definitively determine which permissions are actually used. IAM Access Analyzer provides automated analysis of permission usage.
IAM Access Analyzer can analyze CloudTrail logs to determine which IAM actions Lambda execution roles actually invoke during function execution. By comparing granted permissions in execution role policies against actions actually used, Access Analyzer identifies unused permissions that could be removed to reduce privilege scope.
Access Analyzer generates findings showing specific permissions that are granted but never used based on CloudTrail activity over analyzed time periods. Security teams can use these findings to refine Lambda execution role policies, removing unused permissions while retaining necessary actions. This automated approach enables continuous least privilege improvement.
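A hedged boto3 sketch of setting up unused-access analysis and reading its findings (the analyzer name and the 90-day window are illustrative):

import boto3

aa = boto3.client("accessanalyzer")
analyzer = aa.create_analyzer(
    analyzerName="unused-access-analyzer",
    type="ACCOUNT_UNUSED_ACCESS",
    configuration={"unusedAccess": {"unusedAccessAge": 90}},
)
for f in aa.list_findings_v2(analyzerArn=analyzer["arn"])["findings"]:
    print(f["findingType"], f.get("resource"))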
A) Manual policy review is time-consuming, does not scale across many Lambda functions, and cannot definitively determine which permissions are actually used versus simply granted. Manual processes lack the activity analysis needed for evidence-based least privilege implementation.
B) This is the correct answer because IAM Access Analyzer analyzes CloudTrail logs to identify unused permissions, compares granted permissions against actual usage, generates findings showing permissions that can be removed, and enables automated least privilege refinement for Lambda execution roles.
C) CloudTrail logging captures Lambda function activity and API calls but does not automatically analyze permissions to identify unused grants. CloudTrail provides the data for analysis but requires additional tools like Access Analyzer to generate actionable least privilege insights.
D) AWS X-Ray traces request flows and performance but does not analyze IAM permissions or identify unused policy grants. X-Ray focuses on application performance and debugging rather than permission analysis for least privilege implementation.
Question 92
A security engineer needs to implement controls preventing users from disabling VPC Flow Logs. Which approach enforces this requirement?
A) Use AWS Config to detect and remediate disabled flow logs
B) Create IAM policies denying DeleteFlowLogs and StopFlowLogs actions
C) Manually monitor flow logs weekly
D) Use CloudWatch to alert on flow log changes
Answer: B
Explanation:
Preventing users from disabling VPC Flow Logs requires preventive controls that block deletion or stopping of flow logs at the API authorization level. Detective controls that identify disabled logs after the fact leave gaps in logging coverage. IAM policies with explicit denials provide strong preventive controls.
IAM policies support explicit deny statements for specific actions including DeleteFlowLogs and StopFlowLogs. When policies with explicit denies are applied through Service Control Policies or IAM policies, they prevent users from disabling flow logs regardless of other permissions they might have. Deny statements always take precedence in IAM policy evaluation.
For organization-wide enforcement, SCPs can deny flow log deletion across all accounts. This ensures that even account administrators cannot disable flow logs, maintaining continuous network traffic logging for security monitoring and compliance. The preventive control eliminates the possibility of logging gaps.
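A minimal sketch of such an SCP. Only ec2:DeleteFlowLogs is shown; confirm the full set of action names (the explanation also names StopFlowLogs) against the current EC2 API reference before deploying:

flow_log_protection_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ProtectFlowLogs",
        "Effect": "Deny",
        "Action": ["ec2:DeleteFlowLogs"],
        "Resource": "*",
    }],
}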
A) AWS Config can detect disabled flow logs and trigger remediation to re-enable them, but this is reactive. There would be a time window between flow logs being disabled and Config detecting and remediating the issue, during which network traffic logging would be interrupted.
B) This is the correct answer because IAM policies can explicitly deny flow log deletion and stopping actions, deny statements prevent users from disabling logs regardless of other permissions, this provides preventive controls blocking log disruption, and SCPs can enforce this organization-wide.
C) Manual monitoring does not prevent flow log disabling and introduces significant detection delays. Weekly monitoring creates multi-day windows where disabled logs could go unnoticed, resulting in substantial logging gaps for security analysis.
D) CloudWatch can alert when flow logs are modified or deleted, but this is a detective control that notifies after logs are already disabled. Alerts enable rapid response but do not prevent the logging interruption from occurring.
Question 93
An application requires temporary credentials to access multiple AWS services. The credentials should automatically rotate every hour. Which AWS service provides this capability?
A) AWS Secrets Manager
B) AWS STS AssumeRole with one-hour session duration
C) IAM users with hourly key rotation
D) AWS Systems Manager Parameter Store
Answer: B
Explanation:
Temporary credentials with automatic expiration and rotation provide enhanced security by limiting the validity window if credentials are compromised. AWS Security Token Service generates temporary security credentials with configurable expiration times, automatically invalidating credentials when sessions expire.
IAM roles can be assumed using the AssumeRole API call with session durations ranging from 15 minutes to 12 hours. For hourly rotation, applications assume roles with a one-hour session duration. When credentials approach expiration, applications re-assume the role to receive new temporary credentials, creating automatic rotation.
AWS SDKs handle credential refresh automatically when using IAM roles, transparently re-assuming roles before credentials expire. This ensures applications always have valid credentials without manual intervention. The temporary credentials include access key, secret key, and session token that automatically become invalid after the session duration.
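A minimal boto3 sketch (the role ARN and session name are hypothetical):

import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/app-role",
    RoleSessionName="hourly-session",
    DurationSeconds=3600,  # one hour; credentials then expire automatically
)["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])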
A) AWS Secrets Manager stores and rotates secrets like database credentials but is designed for application secrets rather than AWS service access credentials. Secrets Manager rotates stored secrets, not AWS API access credentials. IAM roles provide the proper mechanism for temporary AWS service credentials.
B) This is the correct answer because STS AssumeRole generates temporary credentials with configurable session durations, one-hour session duration provides hourly automatic rotation, credentials automatically expire and must be refreshed, and AWS SDKs handle automatic credential refresh transparently.
C) IAM users provide long-term credentials, not temporary credentials. Hourly rotation of IAM user access keys would be operationally intensive and is not a supported automation pattern. IAM users are not designed for frequent credential rotation.
D) Systems Manager Parameter Store stores configuration data and secrets but does not generate temporary AWS service credentials. Parameter Store can store IAM user credentials, but this does not provide automatic hourly rotation or temporary credential generation.
Question 94
A company needs to implement network segmentation that prevents developer environments from accessing production databases while allowing production applications to access them. Which approach implements this requirement?
A) Use the same security groups for all environments
B) Create separate VPCs for development and production with no VPC peering
C) Create separate security groups where production database security groups allow inbound traffic only from production application security groups
D) Use NACLs to block all traffic between environments
Answer: C
Explanation:
Network segmentation for environment isolation requires granular control over communication paths between resources. Security groups provide stateful firewall capabilities with source filtering based on security group membership, enabling precise control over which resources can communicate with protected resources like databases.
Creating separate security groups for each environment and role (development applications, production applications, production databases) enables fine-grained access control. Production database security groups define inbound rules allowing connections only from production application security groups on database ports, while denying connections from development security groups.
This security group configuration ensures that even if development and production resources reside in the same VPC or subnets, network-level access control prevents developers from accessing production databases. Source security group filtering in rules provides environment-based access control without requiring network separation, implementing segmentation through firewall policies.
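A minimal sketch of the key ingress rule (security group IDs and the MySQL port are illustrative):

import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0proddb0000000000",  # production database SG
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0prodapp000000000"}],  # prod app SG
    }],
)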
A) Using the same security groups for all environments eliminates environment-based access control. Resources in both environments would have identical network access, failing to implement segmentation between development and production.
B) Separate VPCs with no peering provides strong isolation but is overly restrictive if some cross-environment access is needed. Complete network separation might complicate legitimate use cases while being unnecessary for basic environment segmentation.
C) This is the correct answer because security groups can restrict inbound database access to specific source security groups, production database security groups allow only production application security groups, this implements environment-based network segmentation, and the approach provides fine-grained access control without requiring VPC separation.
D) NACLs blocking all traffic between environments would prevent necessary communications within environments and is too coarse-grained. NACLs operate at the subnet level and would require complex configuration to allow some traffic while blocking other traffic based on environment membership.
Question 95
A security team discovers that an S3 bucket policy was modified to allow public read access. They need to identify who made the change and when. Which AWS service provides this information?
A) Amazon S3 Server Access Logs
B) AWS CloudTrail
C) Amazon CloudWatch Logs
D) AWS Config
Answer: B
Explanation:
Investigating unauthorized configuration changes requires comprehensive audit logs showing who performed actions, when they occurred, and what changes were made. S3 bucket policy modifications are AWS API calls that must be logged for security analysis and accountability. CloudTrail provides API-level audit logging for AWS services.
CloudTrail logs all S3 API calls including PutBucketPolicy operations that modify bucket policies. Log entries contain detailed information including the IAM principal (user or role) that made the call, timestamp, source IP address, request parameters showing the new policy document, and response elements indicating success or failure.
Security teams query CloudTrail logs filtering for PutBucketPolicy events on the specific bucket within relevant timeframes. The logs reveal exactly who modified the bucket policy, what permissions were granted, and when the change occurred. This information is essential for incident investigation, determining whether changes were authorized, and implementing corrective actions.
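A minimal boto3 sketch of that lookup (LookupEvents covers roughly the last 90 days; for older data, query the S3-archived logs with Athena):

import boto3
from datetime import datetime, timedelta

ct = boto3.client("cloudtrail")
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName",
                       "AttributeValue": "PutBucketPolicy"}],
    StartTime=datetime.utcnow() - timedelta(days=30),
    EndTime=datetime.utcnow(),
)
for e in resp["Events"]:
    print(e["EventTime"], e.get("Username"), e["EventName"])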
A) S3 Server Access Logs capture requests made to S3 bucket objects (GET, PUT, DELETE operations on objects) but do not log bucket configuration changes like policy modifications. Server access logs show data access but not management operations.
B) This is the correct answer because CloudTrail logs all S3 API calls including bucket policy modifications, log entries contain detailed information about who made changes and when, CloudTrail captures request parameters showing the new policy content, and this provides comprehensive audit information for investigating policy changes.
C) CloudWatch Logs is a log aggregation service that can receive logs from various sources but does not automatically capture S3 bucket policy changes. CloudTrail would need to send logs to CloudWatch for analysis, but the source of bucket policy audit information is CloudTrail.
D) AWS Config records resource configuration changes over time and could show that bucket policy changed, but Config focuses on configuration state rather than detailed audit information about who made changes. CloudTrail provides the detailed audit trail needed for investigating unauthorized changes.
Question 96
An organization must ensure that Amazon RDS database parameter groups do not allow SSL/TLS connections to be disabled. Which approach enforces this requirement?
A) Manually review all parameter groups monthly
B) Use AWS Config with custom rules to detect parameter groups with SSL disabled
C) Enable RDS encryption at rest
D) Use CloudWatch to monitor parameter group changes
Answer: B
Explanation:
RDS parameter groups control database engine configuration including SSL/TLS connection requirements. Parameters like require_secure_transport (MySQL) or rds.force_ssl (PostgreSQL) determine whether databases accept unencrypted connections. Ensuring these parameters remain properly configured requires continuous monitoring and automated detection of non-compliant settings.
AWS Config continuously evaluates resource configurations against defined rules. Custom Config rules using Lambda functions can evaluate RDS parameter groups, checking specific parameters that control SSL/TLS requirements. When Config detects parameter groups that allow unencrypted connections, it generates compliance findings.
Config rules can trigger automatic remediation that modifies parameter groups to enforce SSL/TLS, or generate notifications for manual review and correction. Continuous evaluation ensures that parameter group changes are immediately detected, preventing databases from accepting unencrypted connections due to configuration drift or errors.
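A hedged sketch of the check such a custom rule's Lambda might perform, shown for PostgreSQL's rds.force_ssl parameter:

import boto3

rds = boto3.client("rds")

def parameter_group_forces_ssl(group_name: str) -> bool:
    paginator = rds.get_paginator("describe_db_parameters")
    for page in paginator.paginate(DBParameterGroupName=group_name):
        for p in page["Parameters"]:
            if p["ParameterName"] == "rds.force_ssl":
                return p.get("ParameterValue") == "1"
    return False  # parameter unset: treat as non-compliant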
A) Manual monthly review introduces 30-day windows where non-compliant parameter groups could allow unencrypted connections. Manual processes do not scale, are error-prone, and provide delayed detection of compliance issues.
B) This is the correct answer because AWS Config continuously monitors parameter group configurations, custom rules can evaluate SSL/TLS-related parameters, Config detects non-compliant configurations immediately after changes, and automated remediation can enforce SSL/TLS requirements.
C) RDS encryption at rest protects data stored on disk but does not control SSL/TLS for network connections. Encryption at rest and encryption in transit are separate security controls addressing different threats. Encryption at rest does not enforce encrypted network connections.
D) CloudWatch can monitor and alert on parameter group modifications but does not evaluate parameter content for compliance with SSL/TLS requirements. CloudWatch provides change notifications but lacks the parameter-level evaluation logic that Config rules provide.
Question 97
A company needs to implement a security control that prevents EC2 instances from being launched with public IP addresses in production accounts. Which solution enforces this requirement?
A) Manually review all EC2 instances weekly
B) Create an SCP denying RunInstances when network interfaces have public IP addresses
C) Use only private subnets for all instances
D) Use AWS Config to detect and terminate instances with public IPs
Answer: B
Explanation:
Preventing EC2 instances from receiving public IP addresses requires blocking instance launches that would assign public IPs. While private subnet placement prevents automatic public IP assignment, users could still explicitly request public IPs or associate Elastic IPs. Service Control Policies provide preventive controls at the API authorization level.
SCPs can deny the RunInstances action when request parameters indicate public IP assignment. The ec2:AssociatePublicIpAddress condition key evaluates whether the request includes parameters assigning public IPs to network interfaces. Policies denying RunInstances with AssociatePublicIpAddress set to true prevent instances with public IPs from being launched.
This preventive control blocks non-compliant instance launches before they occur, ensuring that production instances cannot have public IP addresses regardless of subnet configuration or user intentions. SCP enforcement applies organization-wide across all accounts, providing consistent security controls without per-account configuration.
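A minimal sketch of such an SCP, using the documented ec2:AssociatePublicIpAddress condition key scoped to the network-interface resource:

no_public_ip_scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPublicIpOnLaunch",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:network-interface/*",
        "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
    }],
}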
A) Manual weekly review is reactive and introduces week-long windows where non-compliant instances could exist. Manual processes do not prevent non-compliant launches and cannot scale across dynamic environments with frequent instance creation.
B) This is the correct answer because SCPs can deny EC2 instance launches with public IP addresses, ec2:AssociatePublicIpAddress condition key evaluates public IP assignment requests, preventive controls block non-compliant launches before instances are created, and SCP enforcement applies organization-wide.
C) Private subnets prevent automatic public IP assignment but do not prevent users from explicitly requesting public IPs during launch or associating Elastic IPs after launch. Subnet configuration alone provides incomplete protection without API-level controls.
D) AWS Config can detect instances with public IPs after launch and trigger termination, but this is reactive. Instances would exist briefly before detection and termination, creating security risk windows. Preventive controls are more effective than detective controls for this requirement.
Question 98
A security engineer needs to detect when AWS resources are shared with external AWS accounts through resource-based policies. Which AWS service provides this capability?
A) AWS CloudTrail
B) AWS IAM Access Analyzer
C) Amazon GuardDuty
D) AWS Config
Answer: B
Explanation:
Detecting resource sharing with external entities requires analyzing resource-based policies to identify permissions granted to principals outside the organization. Many AWS resources support resource-based policies that can grant access to external accounts, potentially creating unintended data exposure. IAM Access Analyzer specifically addresses external access detection.
IAM Access Analyzer continuously analyzes resource-based policies for S3 buckets, IAM roles, KMS keys, Lambda functions, SQS queues, SNS topics, and other resources. When policies grant access to external AWS accounts, IAM principals, or public access, Access Analyzer generates findings showing what resources are shared and with whom.
Access Analyzer findings include details about the external principals with access, the permissions granted, and the policy statements enabling the access. Security teams can review findings to determine whether external sharing is intentional and appropriate, or represents security risks requiring remediation. This visibility enables governance over cross-account resource sharing.
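A minimal boto3 sketch of creating an external-access analyzer and listing its active findings (the analyzer name is hypothetical):

import boto3

aa = boto3.client("accessanalyzer")
analyzer = aa.create_analyzer(analyzerName="external-access", type="ACCOUNT")
resp = aa.list_findings(
    analyzerArn=analyzer["arn"],
    filter={"status": {"eq": ["ACTIVE"]}},
)
for f in resp["findings"]:
    print(f["resourceType"], f.get("resource"), f.get("principal"))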
A) CloudTrail logs API calls that create or modify resource policies but does not analyze policy content to identify external access. CloudTrail provides audit logs but not policy analysis or external access detection. Additional analysis would be required to identify external sharing from CloudTrail logs.
B) This is the correct answer because IAM Access Analyzer specifically detects resources shared with external accounts, analyzes resource-based policies across multiple resource types, generates findings showing external access grants, and provides visibility into cross-account resource sharing.
C) Amazon GuardDuty detects threats through behavioral analysis but does not analyze resource policies for external access permissions. GuardDuty focuses on detecting malicious activity rather than identifying policy-based resource sharing configurations.
D) AWS Config monitors resource configurations but does not specifically analyze policies to identify external access grants. Config can detect policy existence and changes but lacks the policy reasoning and external access detection capabilities that Access Analyzer provides.
Question 99
An application requires temporary elevated permissions for one specific API call during initialization. Which approach implements this securely?
A) Grant the application IAM role permanent elevated permissions
B) Use IAM role session policies to request temporary elevated permissions only for the required action
C) Use the root account for the API call
D) Create a separate IAM user with elevated permissions
Answer: B
Explanation:
Temporary privilege elevation for specific operations implements least privilege by granting elevated permissions only when needed and only for required actions. IAM role session policies provide a mechanism for requesting reduced or focused permissions when assuming roles, enabling temporary permission refinement.
When assuming an IAM role using AssumeRole, applications can provide an optional session policy that further restricts the permissions available during that session. While session policies can only reduce permissions from the role’s policies, applications can assume roles with broad permissions but use session policies to limit themselves to specific actions during particular sessions.
For initialization requiring elevated permissions, the application’s permanent role has minimal permissions for normal operations. During initialization, the application assumes an elevated privilege role with a session policy that limits permissions to only the specific API call needed. After initialization completes, the temporary credentials expire, returning the application to minimal permissions.
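A hedged sketch of that pattern (the role ARN and the elevated action are hypothetical; the session policy intersects with, and can only narrow, the role's own permissions):

import boto3, json

session_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "kms:CreateKey", "Resource": "*"}],
}
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::111122223333:role/elevated-init-role",
    RoleSessionName="init",
    Policy=json.dumps(session_policy),
    DurationSeconds=900,  # minimum session length; expires automatically
)["Credentials"]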
A) Granting permanent elevated permissions violates least privilege by maintaining elevated access beyond the brief period when it’s needed. Permanent elevation increases risk if the application is compromised, as attackers would have elevated permissions for the application’s entire lifecycle.
B) This is the correct answer because IAM role session policies enable temporary permission refinement when assuming roles, applications can assume elevated roles with session policies limiting permissions to specific actions, credentials automatically expire after the session, and this implements time-bound privilege elevation with minimal permission scope.
C) Using the root account for any operational task is a severe security violation. Root account usage should be reserved for a minimal set of account management tasks and should never be used by applications regardless of the reason.
D) Creating separate IAM users with elevated permissions requires managing long-term credentials and does not provide automatic permission revocation after the initialization task completes. IAM users provide permanent credentials rather than temporary elevation.
Question 100
A company must ensure that deleted Amazon RDS snapshots cannot be recovered or accessed after deletion. Which approach provides cryptographic assurance of data unrecoverability?
A) Manually delete snapshots through the console
B) Use encrypted RDS instances where snapshot deletion also deletes associated encryption keys
C) Enable RDS deletion protection
D) Use lifecycle policies to automatically delete old snapshots
Answer: B
Explanation:
Ensuring deleted data cannot be recovered requires cryptographic guarantees that encrypted data cannot be decrypted. Simply deleting snapshots may leave encrypted data on physical storage media. If encryption keys are also destroyed, encrypted data becomes cryptographically unrecoverable even if physical storage is accessed.
When RDS instances use encryption with customer managed KMS keys, snapshots are encrypted with data keys protected by the KMS key. If the KMS key is scheduled for deletion after snapshots are deleted, the encrypted snapshot data becomes unrecoverable because the encryption keys needed for decryption are destroyed. This provides cryptographic data sanitization.
Organizations can implement policies where RDS instance termination triggers KMS key deletion scheduling after ensuring all snapshots are deleted. The combination of snapshot deletion and key deletion provides strong assurance that data cannot be recovered from physical storage media, meeting data destruction requirements for sensitive information.
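A minimal boto3 sketch of the two-step destruction (identifiers are hypothetical; KMS enforces a 7-30 day waiting period before key material is destroyed):

import boto3

boto3.client("rds").delete_db_snapshot(
    DBSnapshotIdentifier="prod-db-final-snapshot")
boto3.client("kms").schedule_key_deletion(
    KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",
    PendingWindowInDays=7)  # after this window, the data is unrecoverable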
A) Manual snapshot deletion removes snapshot entries from AWS but does not provide cryptographic guarantees about data recoverability from underlying storage. Deleted encrypted snapshots could theoretically be recovered if encryption keys remain available and physical storage is accessed.
B) This is the correct answer because encrypted RDS instances protect snapshot data with encryption keys, deleting both snapshots and encryption keys provides cryptographic assurance of data unrecoverability, encrypted data cannot be decrypted without keys even if storage media is accessed, and this meets data destruction requirements.
C) RDS deletion protection prevents accidental database deletion but does not address data recoverability after intentional deletion. Deletion protection is about preventing premature deletion, not ensuring data unrecoverability after authorized deletion.
D) Lifecycle policies automate snapshot deletion based on age but do not provide cryptographic guarantees about data recoverability. Automated deletion addresses retention management but does not ensure cryptographic data sanitization through key destruction.