Amazon AWS Certified Security – Specialty SCS-C02 Exam Dumps and Practice Test Questions, Set 1 (Questions 1–20)


Question 1

A company needs to encrypt data at rest in Amazon S3. The security team requires that encryption keys be rotated automatically every year and that key usage be logged for audit purposes. Which solution meets these requirements?

A) Use Amazon S3 default encryption with Amazon S3-managed keys (SSE-S3)

B) Use server-side encryption with AWS KMS keys (SSE-KMS) with automatic key rotation enabled

C) Use client-side encryption with customer-provided keys

D) Use server-side encryption with customer-provided keys (SSE-C)

Answer: B

Explanation:

When organizations need to implement encryption for S3 data with automatic key rotation and audit logging capabilities, SSE-KMS provides the most comprehensive solution. This encryption method integrates directly with AWS Key Management Service, which offers built-in features for both automatic key rotation and detailed logging through CloudTrail.

SSE-KMS allows customers to create and manage KMS keys (formerly called customer master keys, or CMKs) that automatically rotate on an annual basis when the rotation feature is enabled. Rotation happens seamlessly, without requiring any changes to applications or re-encryption of existing data: KMS retains previous versions of the key material so that data encrypted under earlier versions remains decryptable.

One of the most significant advantages of SSE-KMS is its integration with AWS CloudTrail. Every time a KMS key is used to encrypt or decrypt data, CloudTrail logs the event, including details about who accessed the key, when it was accessed, and which resources were involved. This comprehensive audit trail is essential for compliance requirements and security investigations.

A) SSE-S3 uses Amazon-managed keys that rotate automatically, but it does not provide the same level of audit logging or control over keys that SSE-KMS offers. Organizations cannot track individual key usage events with SSE-S3.

B) This is the correct answer because SSE-KMS meets all requirements: automatic annual key rotation, detailed audit logging through CloudTrail, and granular control over encryption keys. Organizations can also implement key policies to control who can use keys for encryption and decryption operations.

C) Client-side encryption requires the application to manage encryption before uploading data to S3. While this provides control, it does not offer automatic key rotation and requires significant application-level implementation.

D) SSE-C requires customers to provide encryption keys with each request. There is no automatic key rotation, and AWS does not store the keys, making audit logging of key usage impossible.
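A minimal sketch of the correct option in boto3: the encryption rule below is what `put_bucket_encryption` expects, and `enable_key_rotation` turns on the annual rotation the question requires. The bucket name, key alias, and `BucketKeyEnabled` choice are illustrative assumptions, not part of the question.

```python
def sse_kms_encryption_config(kms_key_id: str) -> dict:
    """Default-encryption rule that applies SSE-KMS to every new object."""
    return {
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": kms_key_id,
            },
            # S3 Bucket Keys reduce the number of KMS requests (and their cost)
            "BucketKeyEnabled": True,
        }]
    }

def apply_encryption(bucket: str, kms_key_id: str) -> None:
    import boto3  # requires AWS credentials at runtime
    boto3.client("s3").put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration=sse_kms_encryption_config(kms_key_id),
    )
    # Annual automatic rotation for the customer managed key
    boto3.client("kms").enable_key_rotation(KeyId=kms_key_id)
```

Every subsequent `Encrypt`, `Decrypt`, and `GenerateDataKey` call against the key is then recorded in CloudTrail, which is what satisfies the audit requirement.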

Question 2

An organization’s security policy requires all API calls to AWS services to be signed using AWS Signature Version 4. A legacy application currently uses Signature Version 2. What is the MOST secure way to ensure compliance?

A) Update the application code to use AWS SDK that supports Signature Version 4

B) Create an AWS Lambda function to translate Signature Version 2 to Version 4

C) Use an Application Load Balancer to modify the signature version

D) Enable Signature Version 4 in IAM policy conditions

Answer: A

Explanation:

AWS Signature Version 4 is the current standard for signing API requests to AWS services and provides enhanced security features compared to the older Signature Version 2. Organizations with compliance requirements must ensure all applications use the latest signature version to maintain proper security posture and meet regulatory standards.

The most effective and secure approach is to update the application to use modern AWS SDKs that natively support Signature Version 4. AWS provides SDKs for multiple programming languages including Java, Python, JavaScript, .NET, Ruby, PHP, and Go. These SDKs handle all the complexity of request signing automatically, ensuring that every API call is properly authenticated using the current security standards.

Modern AWS SDKs offer several advantages beyond just signature compatibility. They provide automatic retry logic, error handling, credential management, and support for the latest AWS services and features. By updating to current SDK versions, organizations not only achieve compliance with signature requirements but also benefit from improved reliability, performance, and access to new AWS capabilities.

A) This is the correct answer because updating the application to use modern AWS SDKs ensures native Signature Version 4 support, provides long-term maintainability, and offers access to the latest security features and AWS services. This approach eliminates technical debt and ensures the application follows current best practices.

B) Creating a Lambda function as an intermediary adds unnecessary complexity, introduces potential points of failure, and increases latency. This approach does not address the underlying problem of outdated application code and creates additional maintenance burden.

C) Application Load Balancers operate at the HTTP/HTTPS level and cannot modify AWS API signature versions. This solution is technically infeasible for signing AWS service API calls.

D) While IAM policies can enforce requirements for Signature Version 4 by denying requests that do not meet this condition, this approach would cause the legacy application to fail rather than fixing the compatibility issue.
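To make the difference between the signature versions concrete, here is the SigV4 signing-key derivation that the modern SDKs perform internally: a chain of HMAC-SHA256 operations over the date, region, and service, which scopes each signing key to a single day and service (SigV2 had no such scoping). This is a sketch of the documented derivation, using stdlib only; the example inputs are placeholders.

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: chained HMAC-SHA256 over date, region, service."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)  # e.g. "20240115"
    k_region = _hmac(k_date, region)                             # e.g. "us-east-1"
    k_service = _hmac(k_region, service)                         # e.g. "s3"
    return _hmac(k_service, "aws4_request")                      # fixed terminator string
```

In practice the application never does this by hand; updating to a current AWS SDK (option A) gets this signing logic, plus credential management and retries, for free.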

Question 3

A company wants to implement defense-in-depth for its Amazon RDS databases. Which combination of security measures provides the MOST comprehensive protection?

A) Enable RDS encryption at rest and implement database-level user authentication

B) Use security groups, network ACLs, RDS encryption, IAM database authentication, and enable automated backups with encryption

C) Implement VPC endpoints and enable Multi-AZ deployment

D) Use AWS WAF and enable RDS Enhanced Monitoring

Answer: B

Explanation:

Defense-in-depth is a comprehensive security strategy that implements multiple layers of security controls to protect resources. For Amazon RDS databases, this means deploying security measures at the network level, authentication level, encryption level, and operational level. A truly robust security posture requires combining multiple complementary security mechanisms rather than relying on any single control.

The most comprehensive approach integrates network security through security groups and network ACLs, which control traffic at different layers of the networking stack. Security groups act as virtual firewalls at the instance level, while network ACLs provide subnet-level traffic filtering. This dual-layer approach ensures that even if one control is misconfigured, the other provides protection.

Encryption forms another critical layer, protecting data both at rest and in transit. RDS encryption at rest uses AWS KMS to encrypt the database storage, automated backups, read replicas, and snapshots. This ensures that even if physical storage media is compromised, the data remains protected. IAM database authentication provides an additional authentication layer beyond traditional database passwords, using temporary credentials that rotate automatically.

A) While encryption and database authentication are important, this option lacks network-level controls and operational security measures like encrypted backups. This approach provides only partial protection and does not implement true defense-in-depth.

B) This is the correct answer because it implements security controls at multiple layers including network security through security groups and NACLs, encryption for data protection, modern authentication through IAM database authentication, and operational security through encrypted automated backups. This comprehensive approach ensures multiple security barriers must be bypassed for a successful attack.

C) VPC endpoints and Multi-AZ deployments are important for network security and availability respectively, but they do not address authentication, encryption, or comprehensive defense-in-depth requirements. These features alone provide insufficient security protection.

D) AWS WAF protects web applications and cannot directly protect RDS databases. Enhanced Monitoring provides operational visibility but does not implement security controls to prevent unauthorized access or protect data confidentiality.
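The layers in option B can be sketched as a single `create_db_instance` parameter set. This is illustrative only: the instance identifier, class, and `ManageMasterUserPassword` choice are assumptions, and the security group and subnet group are presumed to already exist.

```python
def rds_instance_params(subnet_group: str, sg_id: str, kms_key_id: str) -> dict:
    """Parameters for rds.create_db_instance combining several defense layers."""
    return {
        "DBInstanceIdentifier": "app-db",            # placeholder name
        "Engine": "mysql",
        "DBInstanceClass": "db.t3.medium",
        "AllocatedStorage": 100,
        "DBSubnetGroupName": subnet_group,           # private subnets only
        "VpcSecurityGroupIds": [sg_id],              # network layer (plus subnet NACLs)
        "StorageEncrypted": True,                    # encryption at rest via KMS
        "KmsKeyId": kms_key_id,
        "EnableIAMDatabaseAuthentication": True,     # short-lived IAM auth tokens
        "BackupRetentionPeriod": 7,                  # automated backups, encrypted
        "MasterUsername": "admin",
        "ManageMasterUserPassword": True,            # no plaintext master password
    }
```

Because the storage is encrypted, the automated backups and snapshots produced by the retention setting are encrypted as well, covering the operational layer.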

Question 4

A security team needs to detect and respond to unauthorized AWS API calls in real-time. The solution must provide detailed information about the identity making the calls and the resources accessed. Which AWS service combination meets these requirements?

A) Amazon CloudWatch Logs and AWS Config

B) AWS CloudTrail, Amazon EventBridge, and AWS Lambda

C) AWS Security Hub and Amazon GuardDuty

D) VPC Flow Logs and Amazon CloudWatch Alarms

Answer: B

Explanation:

Real-time detection and response to unauthorized AWS API calls requires a combination of logging, event processing, and automated response capabilities. AWS CloudTrail serves as the foundation by capturing all API calls made within an AWS account, including the identity of the caller, the time of the call, the source IP address, the request parameters, and the response elements returned by the AWS service.

CloudTrail logs every management event and can also log data events for specific services like S3 and Lambda. These logs contain comprehensive information about who performed what action, when it occurred, and which resources were affected. However, CloudTrail alone provides historical logging without real-time alerting or automated response capabilities, which is why additional services are needed.

Amazon EventBridge provides the event-driven architecture needed for real-time detection. CloudTrail can be configured to deliver events to EventBridge in near real-time, allowing security teams to create rules that match specific patterns of API activity. When suspicious API calls are detected, EventBridge triggers automated responses through targets like Lambda functions, SNS topics, or Step Functions workflows.

A) CloudWatch Logs can receive CloudTrail logs, and AWS Config tracks resource configuration changes, but this combination lacks real-time event processing and automated response capabilities. Config focuses on configuration compliance rather than API call monitoring.

B) This is the correct answer because CloudTrail captures detailed API call information including caller identity and resources accessed, EventBridge provides real-time event detection and routing based on custom rules, and Lambda enables automated response actions. This combination delivers comprehensive real-time detection and response capabilities.

C) Security Hub aggregates security findings from multiple sources and GuardDuty detects threats using machine learning, but neither provides the detailed API call information and customizable real-time response that CloudTrail with EventBridge offers for specific API monitoring requirements.

D) VPC Flow Logs capture network traffic information at the network interface level but do not log AWS API calls or provide information about the identity making API requests. This combination is unsuitable for API call monitoring.
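A minimal sketch of the detection half of option B: an EventBridge pattern that matches CloudTrail-delivered API calls which were denied, wired to a responder Lambda. The rule name, error-code list, and Lambda ARN are placeholder assumptions.

```python
import json

def unauthorized_call_pattern() -> str:
    """EventBridge pattern matching CloudTrail-delivered API calls that were denied."""
    return json.dumps({
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            # Error codes indicating an unauthorized attempt (illustrative list)
            "errorCode": ["AccessDenied", "UnauthorizedOperation"],
        },
    })

def install_rule() -> None:
    import boto3  # requires AWS credentials; the target ARN below is a placeholder
    events = boto3.client("events")
    events.put_rule(Name="unauthorized-api-calls",
                    EventPattern=unauthorized_call_pattern())
    events.put_targets(
        Rule="unauthorized-api-calls",
        Targets=[{"Id": "responder",
                  "Arn": "arn:aws:lambda:us-east-1:111122223333:function:respond"}],
    )
```

The matched event carries the full CloudTrail record, so the Lambda target receives the caller identity, source IP, and affected resources needed for an automated response.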

Question 5

An application running on Amazon EC2 instances needs to access objects in an Amazon S3 bucket. The security team prohibits storing long-term credentials on EC2 instances. What is the MOST secure solution?

A) Store AWS access keys in the application configuration file

B) Attach an IAM role to the EC2 instances with appropriate S3 permissions

C) Use AWS Systems Manager Parameter Store to store credentials

D) Create a dedicated IAM user and rotate credentials weekly

Answer: B

Explanation:

One of the fundamental security principles in AWS is avoiding the use of long-term credentials whenever possible. Long-term credentials, such as IAM user access keys, pose significant security risks because they can be compromised through various means including code repositories, configuration files, or memory dumps. Once compromised, these credentials remain valid until explicitly rotated or revoked.

IAM roles provide a superior alternative by offering temporary security credentials that are automatically rotated by AWS. When an IAM role is attached to an EC2 instance, the AWS infrastructure automatically provisions temporary credentials to the instance through the instance metadata service. These credentials are automatically rotated before they expire, ensuring that the application always has valid credentials without any manual intervention.

Applications running on EC2 instances can retrieve these temporary credentials from the instance metadata service at the endpoint 169.254.169.254; with IMDSv2, the recommended configuration, each request must first obtain a short-lived session token. Modern AWS SDKs automatically retrieve and refresh these credentials, making the process transparent to the application code. The temporary credentials typically expire after several hours and are automatically refreshed by the SDK before expiration.

A) Storing access keys in configuration files is a security anti-pattern that exposes credentials to anyone with file system access. These credentials become long-term secrets that require manual rotation and can be accidentally committed to source control repositories.

B) This is the correct answer because IAM roles provide temporary credentials that are automatically rotated, eliminating the need to store long-term credentials on instances. Roles also integrate seamlessly with AWS SDKs and provide fine-grained permissions through IAM policies.

C) While Parameter Store can securely store credentials, this approach still requires managing and rotating long-term credentials. It does not eliminate the fundamental security risk of long-term credential storage and requires additional code to retrieve and use stored credentials.

D) Creating a dedicated IAM user still involves long-term credentials that must be stored somewhere on the instance. Weekly rotation improves security compared to never rotating, but it does not eliminate the risks associated with long-term credential storage.
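To make the metadata-service mechanism concrete, here is a stdlib-only sketch of the two IMDSv2 requests described above. This is what the SDK does on your behalf; the role name and TTL are illustrative, and the calls only succeed when run on an EC2 instance.

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token_request() -> urllib.request.Request:
    """IMDSv2 step 1: PUT a session-token request with a TTL header."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},  # 6-hour token
    )

def credentials_request(token: str, role_name: str) -> urllib.request.Request:
    """IMDSv2 step 2: GET the attached role's temporary credentials."""
    return urllib.request.Request(
        f"{IMDS}/latest/meta-data/iam/security-credentials/{role_name}",
        headers={"X-aws-ec2-metadata-token": token},
    )

# On an instance: urllib.request.urlopen(imds_token_request()).read() yields the
# token; the credentials response is JSON with AccessKeyId, SecretAccessKey,
# Token, and Expiration. In practice, let the AWS SDK handle all of this.
```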

Question 6

A company must ensure that all data stored in Amazon S3 buckets is encrypted using company-managed keys, and the keys must never leave AWS. Which encryption method meets these requirements?

A) Server-side encryption with Amazon S3-managed keys (SSE-S3)

B) Server-side encryption with AWS KMS keys (SSE-KMS)

C) Client-side encryption with AWS KMS

D) Server-side encryption with customer-provided keys (SSE-C)

Answer: B

Explanation:

Organizations often have compliance requirements that mandate the use of customer-managed encryption keys while ensuring those keys remain within secure boundaries. AWS Key Management Service provides a managed solution that allows customers to create, control, and audit encryption keys without those keys ever leaving the AWS infrastructure’s secure boundaries.

When using SSE-KMS, customers can create Customer Managed Keys in AWS KMS and use these keys to encrypt S3 objects. The keys remain within KMS at all times and never leave the service in plaintext form. All encryption and decryption operations are performed within KMS’s hardware security modules (HSMs), which are FIPS 140-2 validated cryptographic modules. This ensures that key material is protected according to strict security standards.

KMS provides granular control over keys through key policies and IAM policies. Organizations can define who can use keys for encryption and decryption, who can manage keys, and who can delete keys. KMS also integrates with CloudTrail to log all key usage, providing a comprehensive audit trail of when keys were used, by whom, and for which operations.

A) SSE-S3 uses encryption keys that are managed entirely by AWS, not by the customer. While the keys remain within AWS infrastructure, customers do not have control over key policies, rotation, or audit capabilities, failing to meet the requirement for company-managed keys.

B) This is the correct answer because SSE-KMS allows companies to create and manage their own KMS keys while ensuring the keys never leave AWS infrastructure. KMS provides full control over key policies, automatic or manual rotation, and comprehensive audit logging through CloudTrail.

C) Client-side encryption with KMS requires the application to request data keys from KMS and perform encryption before uploading to S3. While keys remain in AWS, this approach requires significant application changes and complexity compared to server-side encryption.

D) SSE-C requires customers to supply the encryption key with every request, meaning the keys are generated and stored outside AWS and must be transmitted to AWS for each operation. This fails the requirement that the keys never leave AWS and adds significant operational complexity.
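The "company-managed" part of option B comes down to the key policy attached to the customer managed KMS key. A minimal sketch, using assumed account and role values: the root statement retains administrative control, and a second statement grants one role the cryptographic operations S3 needs on the caller's behalf.

```python
import json

def key_policy(account_id: str, app_role_arn: str) -> str:
    """Minimal KMS key policy: root keeps management, one role may use the key."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EnableRootAccountManagement",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {
                "Sid": "AllowUseForEncryptDecrypt",
                "Effect": "Allow",
                "Principal": {"AWS": app_role_arn},
                # The operations SSE-KMS performs for this principal
                "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
                "Resource": "*",
            },
        ],
    })

# kms.create_key(Policy=key_policy(...)) creates the key; its ARN then goes into
# the bucket's default-encryption configuration as the KMSMasterKeyID.
```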

Question 7

A security engineer needs to analyze VPC traffic patterns to identify potential security threats. The solution must capture IP traffic information and retain logs for 90 days. Which AWS service meets these requirements?

A) AWS CloudTrail

B) Amazon VPC Flow Logs

C) AWS Config

D) Amazon CloudWatch Logs

Answer: B

Explanation:

Network traffic analysis is a critical component of security monitoring in cloud environments. Organizations need visibility into network communications to detect anomalous patterns, unauthorized access attempts, and potential data exfiltration. VPC Flow Logs provide this capability by capturing information about the IP traffic flowing through network interfaces in a VPC.

VPC Flow Logs capture details including source and destination IP addresses, source and destination ports, protocol numbers, packet and byte counts, action taken (accept or reject), and timestamps. This information enables security teams to analyze traffic patterns, identify suspicious communications, troubleshoot connectivity issues, and meet compliance requirements for network monitoring.

Flow logs can be created at three different levels: VPC level to capture all traffic in the entire VPC, subnet level to capture traffic for specific subnets, or network interface level for granular monitoring of individual instances. Logs can be published to CloudWatch Logs for analysis and alerting or to S3 for long-term storage and analysis using tools like Athena. The retention period is configurable, easily meeting the 90-day requirement.

A) CloudTrail logs API calls made to AWS services, not network traffic. While CloudTrail is essential for audit logging and security analysis, it does not capture IP traffic information or provide visibility into network communications between resources.

B) This is the correct answer because VPC Flow Logs specifically capture IP traffic information flowing through VPC network interfaces, can be retained for 90 days or longer through CloudWatch Logs or S3, and provide the data needed for traffic pattern analysis and threat detection.

C) AWS Config records configuration changes to AWS resources and evaluates resource configurations against desired states. It does not capture network traffic information or provide visibility into IP communications between resources in the VPC.

D) CloudWatch Logs is a log aggregation service that can receive logs from various sources including VPC Flow Logs. However, CloudWatch Logs alone does not generate network traffic information without VPC Flow Logs configured to send data to it.
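A sketch of the flow-log setup described above, with placeholder resource names: the parameter set captures all traffic for the whole VPC into CloudWatch Logs, and a 90-day retention policy on the log group satisfies the retention requirement.

```python
def flow_log_params(vpc_id: str, log_group: str, role_arn: str) -> dict:
    """Parameters for ec2.create_flow_logs capturing a whole VPC's traffic."""
    return {
        "ResourceIds": [vpc_id],
        "ResourceType": "VPC",
        "TrafficType": "ALL",                    # both accepted and rejected traffic
        "LogDestinationType": "cloud-watch-logs",
        "LogGroupName": log_group,
        "DeliverLogsPermissionRole": role_arn,   # role allowed to write the logs
    }

RETENTION_DAYS = 90
# Applied with: logs.put_retention_policy(logGroupName=log_group,
#                                         retentionInDays=RETENTION_DAYS)
```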

Question 8

An organization requires that all Amazon RDS database snapshots be encrypted. Existing unencrypted snapshots must be converted to encrypted snapshots. What is the correct approach?

A) Enable encryption on the existing snapshots using the RDS console

B) Copy the unencrypted snapshots and specify encryption with a KMS key during the copy operation

C) Use AWS Database Migration Service to migrate data to an encrypted database

D) Enable encryption on the source database and take new snapshots

Answer: B

Explanation:

Amazon RDS does not support directly encrypting existing unencrypted snapshots or databases. The encryption attribute is set when a database instance or snapshot is created and cannot be changed afterward. This design decision ensures data integrity and prevents potential security issues that could arise from in-place encryption of existing data. Organizations needing to encrypt existing unencrypted snapshots must use the copy operation.

The snapshot copy process in RDS provides the ability to create an encrypted copy of an unencrypted snapshot. During the copy operation, users specify a KMS key to use for encrypting the new snapshot copy. RDS then creates a new snapshot with encryption enabled, copying all data from the source snapshot while encrypting it with the specified KMS key. The original unencrypted snapshot remains unchanged.

After creating an encrypted copy of a snapshot, organizations can restore the encrypted snapshot to create a new encrypted database instance. This new instance will have encryption enabled and all subsequent automated backups and snapshots will also be encrypted. The original unencrypted database and snapshots can then be deleted after verifying the encrypted replacements meet all requirements.

A) RDS does not support enabling encryption on existing snapshots. The encryption attribute is immutable once a snapshot is created. This option is technically not possible through the RDS console or API.

B) This is the correct answer because copying an unencrypted snapshot with encryption enabled creates an encrypted version of the snapshot. This is the supported method for converting unencrypted snapshots to encrypted snapshots in RDS.

C) While DMS could technically migrate data to a new encrypted database, this approach is unnecessarily complex for snapshot encryption. DMS is designed for database migrations and ongoing replication, not for simple snapshot encryption tasks. This method would also require running source and target databases simultaneously.

D) You cannot enable encryption on an existing unencrypted RDS database. New snapshots from an unencrypted database will also be unencrypted. This option does not solve the problem of encrypting existing unencrypted snapshots.
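The copy-with-encryption path can be sketched in a few lines of boto3; supplying `KmsKeyId` on the copy is what turns encryption on. The snapshot naming convention is an assumption.

```python
def encrypted_copy_params(source_snapshot: str, kms_key_id: str) -> dict:
    """Parameters for rds.copy_db_snapshot; a KmsKeyId makes the copy encrypted."""
    return {
        "SourceDBSnapshotIdentifier": source_snapshot,
        "TargetDBSnapshotIdentifier": f"{source_snapshot}-encrypted",
        "KmsKeyId": kms_key_id,  # presence of a key enables encryption on the copy
    }

def copy_then_restore(source_snapshot: str, kms_key_id: str) -> None:
    import boto3  # requires AWS credentials
    rds = boto3.client("rds")
    rds.copy_db_snapshot(**encrypted_copy_params(source_snapshot, kms_key_id))
    # Once the copy is available, restore_db_instance_from_db_snapshot on the
    # encrypted copy produces an encrypted instance; all later automated backups
    # and snapshots of that instance inherit the encryption.
```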

Question 9

A company’s security policy requires that all network traffic between the application tier and database tier be encrypted in transit. The application uses Amazon EC2 instances and Amazon RDS MySQL. What should the security engineer implement?

A) Enable VPC encryption for all traffic within the VPC

B) Configure the RDS instance to require SSL/TLS connections and update the application to use SSL

C) Use AWS PrivateLink to encrypt traffic between tiers

D) Enable IPSec tunnels between the EC2 instances and RDS instance

Answer: B

Explanation:

Protecting data in transit is as important as protecting data at rest. When applications communicate with databases over networks, the data transmitted can potentially be intercepted or observed by unauthorized parties. Encryption in transit ensures that even if network traffic is captured, the data cannot be read without the proper decryption keys. For RDS MySQL databases, SSL/TLS provides industry-standard encryption for client-server communications.

Amazon RDS for MySQL supports SSL/TLS encryption for connections between applications and database instances. RDS provides SSL certificates that applications can use to establish encrypted connections. The RDS parameter group can be configured to require SSL connections by setting the require_secure_transport parameter, ensuring that all connections to the database must use encryption.

Applications must be configured to use SSL when connecting to the RDS instance. This typically involves specifying SSL options in the database connection string and providing the RDS certificate authority certificate. Modern database drivers and AWS SDKs support SSL connections with minimal configuration. Once implemented, all data transmitted between the application and database is encrypted, protecting sensitive information from network-based attacks.

A) AWS VPC does not provide built-in encryption for all traffic within the VPC. While the VPC provides network isolation, traffic between resources in the same VPC is not automatically encrypted unless specific measures are implemented at the application or transport level.

B) This is the correct answer because RDS MySQL supports SSL/TLS connections that encrypt data in transit between the application and database. Configuring RDS to require SSL and updating applications to use SSL connections ensures all database traffic is encrypted.

C) AWS PrivateLink provides private connectivity to AWS services and endpoints, but it does not automatically encrypt traffic. While PrivateLink keeps traffic within the AWS network, it does not provide transport-layer encryption like SSL/TLS does for database connections.

D) Implementing IPSec tunnels between individual EC2 instances and RDS instances would be extremely complex and is not a supported configuration pattern. RDS is a managed service that does not support custom network configurations like IPSec tunnels at the instance level.
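The server-side half of option B is a single parameter-group change. A sketch, with the parameter group name assumed; the client-side flags in the comment assume the standard RDS certificate bundle filename.

```python
def require_tls_params(parameter_group: str) -> dict:
    """Parameters for rds.modify_db_parameter_group forcing TLS on MySQL 5.7+."""
    return {
        "DBParameterGroupName": parameter_group,
        "Parameters": [{
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",        # reject any connection not using TLS
            "ApplyMethod": "immediate",   # dynamic parameter, no reboot required
        }],
    }

# Client side (assumed bundle path), verifying the server certificate:
#   mysql -h <endpoint> --ssl-ca=global-bundle.pem --ssl-mode=VERIFY_IDENTITY ...
```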

Question 10

An organization needs to implement centralized logging for all AWS accounts in their organization. Logs must be stored in a dedicated security account and be immutable. Which solution meets these requirements?

A) Enable CloudTrail in each account with logs sent to a central S3 bucket in the security account, then enable S3 Object Lock

B) Use AWS Organizations to automatically enable CloudTrail and store logs locally in each account

C) Configure CloudWatch Logs subscription filters to send logs to the security account

D) Use AWS Config to record configuration changes and send to the security account

Answer: A

Explanation:

Centralized logging is a critical security requirement that ensures all audit logs from multiple AWS accounts are collected in a single location for analysis, correlation, and long-term retention. This approach prevents individual account administrators from tampering with or deleting logs related to their actions, establishing a strong audit trail that meets compliance and security requirements. Combining centralized logging with immutability features creates a robust security architecture.

AWS CloudTrail supports organization trails that automatically log all AWS API activity across all accounts in an AWS Organization. When configured, CloudTrail delivers logs from all member accounts to a central S3 bucket located in a designated security or logging account. This centralized approach simplifies log management and ensures consistent logging across the entire organization without requiring individual trail configuration in each account.

S3 Object Lock provides immutability by preventing objects from being deleted or modified for a specified retention period. When Object Lock is enabled on the central logging bucket with compliance mode, even the AWS account root user cannot delete or alter the logs before the retention period expires. This feature is essential for regulatory compliance and ensures the integrity of audit logs.

A) This is the correct answer because organization trails in CloudTrail automatically collect logs from all accounts and deliver them to a central S3 bucket, while S3 Object Lock ensures logs cannot be modified or deleted, creating an immutable audit trail that meets security and compliance requirements.

B) While Organizations can help manage CloudTrail across multiple accounts, storing logs locally in each account does not meet the centralized logging requirement. This approach also allows account administrators to potentially modify or delete logs in their respective accounts.

C) CloudWatch Logs subscription filters can forward logs between accounts, but this approach requires manual configuration in each account and does not provide the same level of centralized control and management that CloudTrail organization trails offer for API logging.

D) AWS Config tracks resource configuration changes but does not provide comprehensive API call logging like CloudTrail. Config is complementary to CloudTrail but cannot replace it for centralized API audit logging across an organization.
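The two pieces of option A can be sketched as parameter builders: an organization trail pointed at the central bucket, and a COMPLIANCE-mode Object Lock default retention on that bucket. Trail name and retention period are illustrative assumptions.

```python
def org_trail_params(bucket: str) -> dict:
    """Parameters for cloudtrail.create_trail covering every account in the org."""
    return {
        "Name": "org-audit-trail",        # placeholder name
        "S3BucketName": bucket,           # central bucket in the security account
        "IsOrganizationTrail": True,
        "IsMultiRegionTrail": True,
        "EnableLogFileValidation": True,  # digest files detect tampering
    }

def object_lock_config(days: int) -> dict:
    """Default retention for s3.put_object_lock_configuration; COMPLIANCE mode
    means not even the root user can shorten or remove the retention."""
    return {
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": days}},
    }
```

Note that Object Lock must be enabled when the central bucket is created; it cannot be retrofitted onto an existing bucket without AWS Support involvement.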

Question 11

A security team needs to detect and prevent data exfiltration from Amazon S3 buckets. The solution must identify and block attempts to copy sensitive data to external accounts. Which AWS service provides this capability?

A) Amazon GuardDuty

B) Amazon Macie

C) AWS CloudTrail

D) AWS Config

Answer: B

Explanation:

Data exfiltration prevention requires the ability to identify sensitive data, monitor access patterns, and detect anomalous activities that might indicate unauthorized data copying or sharing. Amazon Macie is a fully managed data security service that uses machine learning and pattern matching to discover, classify, and protect sensitive data stored in Amazon S3.

Macie automatically discovers and classifies sensitive data such as personally identifiable information, financial information, and intellectual property. It continuously monitors S3 buckets for security and access control changes, generating findings when it detects potential security issues like unencrypted buckets, publicly accessible buckets, or buckets shared with external accounts. Macie also analyzes S3 access patterns to identify anomalous activities.

One of Macie’s key capabilities is detecting suspicious data access patterns that might indicate data exfiltration attempts. It identifies activities such as unusual API calls to replicate or copy objects, large data transfers to external accounts, and access from unusual geographic locations. When combined with automated response mechanisms through EventBridge and Lambda, organizations can automatically block or alert on detected exfiltration attempts.

A) GuardDuty focuses on threat detection across AWS accounts and workloads by analyzing VPC Flow Logs, CloudTrail logs, and DNS logs. While it can detect some anomalous S3 access patterns, it does not provide the data classification and sensitive data discovery capabilities that Macie offers specifically for S3 data protection.

B) This is the correct answer because Macie specifically focuses on discovering and protecting sensitive data in S3 buckets, monitors for data exfiltration attempts, and can detect when sensitive data is being accessed or copied to external accounts through its continuous monitoring and anomaly detection capabilities.

C) CloudTrail logs all API calls including S3 operations, but it does not automatically analyze access patterns to detect data exfiltration attempts. CloudTrail provides the raw log data, but additional analysis and detection logic would need to be implemented separately.

D) AWS Config monitors resource configuration compliance but does not analyze data access patterns or detect exfiltration attempts. Config focuses on configuration state and changes rather than data classification and access monitoring.
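A sketch of kicking off the discovery side of Macie: a one-time classification job over named buckets via the `macie2` API. The job name is a placeholder, and the quarantine step in the comment is an assumed response pattern, not a built-in Macie action.

```python
def classification_job_params(account_id: str, buckets: list) -> dict:
    """Parameters for macie2.create_classification_job over the given buckets."""
    return {
        "jobType": "ONE_TIME",                    # or "SCHEDULED" for recurring scans
        "name": "sensitive-data-discovery",       # placeholder name
        "s3JobDefinition": {
            "bucketDefinitions": [
                {"accountId": account_id, "buckets": buckets},
            ],
        },
    }

# Findings are published to EventBridge, where a rule can invoke a Lambda that
# quarantines the bucket (for example, by attaching a deny-all bucket policy)
# when a suspected exfiltration finding arrives.
```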

Question 12

An application needs to securely store and retrieve database credentials. The solution must support automatic credential rotation and provide audit logging of credential access. Which AWS service meets these requirements?

A) AWS Systems Manager Parameter Store with SecureString parameters

B) AWS Secrets Manager

C) Amazon S3 with encryption

D) AWS KMS with encrypted data keys

Answer: B

Explanation:

Managing database credentials securely is a critical security concern for applications. Traditional approaches of hardcoding credentials or storing them in configuration files create significant security risks. Credentials need to be rotated regularly to minimize the impact of potential compromises, and organizations need to audit who accesses credentials and when they are used.

AWS Secrets Manager is specifically designed for storing and managing secrets such as database credentials, API keys, and other sensitive information. One of its most powerful features is automatic credential rotation. Secrets Manager can automatically rotate credentials for supported databases including Amazon RDS, Amazon DocumentDB, and Amazon Redshift without requiring application code changes. The service uses Lambda functions to execute the rotation process.

When automatic rotation is enabled, Secrets Manager follows a multi-step process to rotate credentials. It creates new credentials, updates the database with the new credentials, updates the secret value in Secrets Manager, and verifies that the new credentials work correctly. Throughout this process, the application continues to function because Secrets Manager maintains both old and new credentials during the transition period. This rotation can occur on a schedule, such as every 30 or 90 days.

A) Parameter Store can store encrypted parameters and provides some auditing through CloudTrail, but it does not offer automatic credential rotation for databases. Organizations would need to build custom rotation logic using Lambda functions and additional services, adding complexity and maintenance burden.

B) This is the correct answer because Secrets Manager provides native support for automatic credential rotation for major database services, comprehensive audit logging through CloudTrail, encryption at rest and in transit, fine-grained access control through IAM policies, and versioning of secrets.

C) While S3 can store encrypted objects, it is not designed for secrets management. S3 lacks automatic rotation capabilities, does not provide secret versioning suitable for credentials, and would require significant custom development to implement credential rotation and management.

D) AWS KMS is an encryption service that manages cryptographic keys, not application secrets. While KMS can encrypt data keys, it does not store or manage application credentials, provide automatic rotation for database passwords, or offer built-in integration with database services.

Question 13

A company requires that all Amazon EC2 instances be scanned for vulnerabilities within 24 hours of launch. The security team must be notified immediately if critical vulnerabilities are detected. Which solution implements this requirement?

A) Use Amazon Inspector with EventBridge rules to trigger SNS notifications for critical findings

B) Install third-party antivirus software on all instances

C) Use AWS Config rules to check for vulnerabilities

D) Enable VPC Flow Logs and analyze traffic for vulnerability signatures

Answer: A

Explanation:

Automated vulnerability scanning is essential for maintaining security posture across dynamic cloud environments where instances are frequently launched and terminated. Organizations need continuous assessment of their instances to identify security vulnerabilities, software bugs, and deviations from best practices before they can be exploited by attackers. Amazon Inspector provides automated security assessment services specifically designed for this purpose.

Amazon Inspector automatically assesses EC2 instances for vulnerabilities and network exposure. It evaluates instances against security best practices and common vulnerabilities and exposures (CVEs) in operating systems and applications. Inspector can be configured to automatically scan instances upon launch, ensuring that new instances are evaluated within the required 24-hour window. The service uses AWS Systems Manager agents installed on instances to perform assessments.

Inspector generates findings with severity ratings ranging from informational to critical. These findings can be integrated with Amazon EventBridge, which allows security teams to create rules that match specific patterns such as critical severity findings. When critical vulnerabilities are detected, EventBridge can trigger automated responses including SNS notifications to security teams, Lambda functions to apply patches, or Systems Manager automation documents to remediate issues.
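An EventBridge rule for the notification path might use an event pattern like the following sketch. The source and detail-type shown (`aws.inspector2`, `Inspector2 Finding`) match the typical shape of Amazon Inspector finding events, but confirm them against events in your own account:

```json
{
  "source": ["aws.inspector2"],
  "detail-type": ["Inspector2 Finding"],
  "detail": {
    "severity": ["CRITICAL"]
  }
}
```

Pointing this rule at an SNS topic subscribed to by the security team satisfies the immediate-notification requirement without custom polling code.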

A) This is the correct answer because Amazon Inspector automatically scans EC2 instances for vulnerabilities, covering newly launched instances within the required 24-hour window, assigns severity ratings to findings, and integrates with EventBridge to enable automated notifications through SNS when critical vulnerabilities are discovered.

B) Antivirus software protects against malware and viruses but does not perform comprehensive vulnerability scanning for operating system and application vulnerabilities. This approach does not meet the requirement for detecting and reporting critical vulnerabilities like unpatched software or misconfigurations.

C) AWS Config evaluates resource configurations against defined rules and can detect configuration compliance issues, but it is not designed for vulnerability scanning of software packages and operating systems. Config focuses on configuration state rather than vulnerability assessment.

D) VPC Flow Logs capture network traffic metadata but cannot identify vulnerabilities in software or operating systems. Flow log analysis might detect exploitation attempts but cannot proactively identify vulnerable systems before they are exploited.

Question 14

An organization must ensure that Amazon EBS volumes are encrypted and cannot be shared with external AWS accounts. Which combination of controls implements this requirement?

A) Enable EBS encryption by default and create an SCP to prevent sharing unencrypted volumes

B) Enable EBS encryption by default and use IAM policies to deny ModifySnapshotAttribute actions that share volumes externally

C) Use AWS Config rules to detect unencrypted volumes and VPC endpoints to prevent external sharing

D) Implement AWS KMS key policies to restrict encryption key usage

Answer: B

Explanation:

Ensuring encryption of EBS volumes and preventing unauthorized sharing requires multiple layers of control. EBS encryption protects data at rest on volumes and snapshots, while access controls prevent sharing encrypted volumes or their snapshots with unauthorized external accounts. These controls work together to maintain data confidentiality and prevent data leakage.

EBS encryption by default is an account-level setting that automatically encrypts all new EBS volumes and snapshots created in the account. When enabled, users cannot create unencrypted volumes even if they explicitly try to do so. This preventive control ensures consistent encryption across all volumes without requiring users to remember to enable encryption for each volume. Existing unencrypted volumes remain unencrypted until explicitly migrated to encrypted volumes.

Preventing volume sharing to external accounts requires controlling snapshot sharing permissions. When a snapshot is created from an encrypted volume, it is also encrypted. The ModifySnapshotAttribute API call allows users to modify snapshot permissions, including making snapshots public (possible only for unencrypted snapshots) or sharing them with specific AWS accounts. IAM policies can explicitly deny this action when the target includes external accounts, preventing users from sharing snapshots outside the organization.
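A minimal sketch of such a deny statement is shown below. It uses the EC2 service-specific condition key `ec2:Add/group`, which matches attempts to add the `all` group (public sharing) to a snapshot's permissions; treat the statement as a starting point rather than a complete control:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicSnapshotSharing",
      "Effect": "Deny",
      "Action": "ec2:ModifySnapshotAttribute",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "ec2:Add/group": "all" }
      }
    }
  ]
}
```

Restricting shares to an explicit allowlist of account IDs can be layered on with the `ec2:Add/userId` condition key, though the policy logic needs care so that legitimate permission removals are not blocked as well.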

A) Service Control Policies can enforce encryption requirements, but the specific concern about sharing unencrypted volumes is better addressed through IAM policies on snapshot sharing. SCPs are better suited for broad organizational controls rather than specific API action restrictions related to snapshot attributes.

B) This is the correct answer because enabling EBS encryption by default ensures all volumes are encrypted, and IAM policies denying ModifySnapshotAttribute actions prevent users from sharing snapshots with external accounts, comprehensively addressing both requirements.

C) While AWS Config can detect unencrypted volumes for monitoring purposes, it does not prevent their creation. VPC endpoints control network access to services but do not prevent snapshot sharing through API calls, which can occur from any location with valid credentials.

D) KMS key policies can control who can use encryption keys, which indirectly affects encrypted volume access. However, this does not prevent the creation of unencrypted volumes or specifically address the snapshot sharing concern for volumes encrypted with that key.

Question 15

A security engineer must implement a solution that detects and remediates security group rules that allow unrestricted inbound access from the internet (0.0.0.0/0). Which approach provides automated detection and remediation?

A) Use AWS Config with managed rules and Systems Manager Automation for remediation

B) Write a Lambda function triggered by CloudWatch Events to scan security groups daily

C) Use VPC Flow Logs to identify traffic from 0.0.0.0/0 and manually update security groups

D) Implement AWS Firewall Manager to manage security group rules

Answer: A

Explanation:

Security group misconfigurations are among the most common security issues in AWS environments. Overly permissive rules that allow unrestricted access from the internet create significant security vulnerabilities by exposing resources to potential attacks. Automated detection and remediation of these misconfigurations is essential for maintaining a strong security posture without requiring constant manual monitoring.

AWS Config provides continuous compliance monitoring by evaluating resource configurations against predefined or custom rules. AWS Config includes managed rules specifically designed to detect security group misconfigurations, such as rules that check for unrestricted SSH access, unrestricted RDP access, or unrestricted access on any port. These rules continuously monitor security groups and generate findings when non-compliant configurations are detected.

When AWS Config detects a non-compliant security group, it can automatically trigger remediation actions using AWS Systems Manager Automation documents. These automation documents define a series of steps to correct the non-compliant configuration, such as removing the offending security group rule or modifying it to restrict access to specific IP ranges. The remediation can be configured to occur automatically or require manual approval.
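As a concrete sketch, the detection half can be set up with a managed Config rule such as the one below (this is the input shape for the PutConfigRule API; `INCOMING_SSH_DISABLED` is the managed-rule identifier commonly associated with the `restricted-ssh` rule, so verify the identifier for your region and use case):

```json
{
  "ConfigRule": {
    "ConfigRuleName": "restricted-ssh",
    "Source": {
      "Owner": "AWS",
      "SourceIdentifier": "INCOMING_SSH_DISABLED"
    },
    "Scope": {
      "ComplianceResourceTypes": ["AWS::EC2::SecurityGroup"]
    }
  }
}
```

The remediation half then attaches an automation document to the rule; AWS publishes documents for this purpose (for example, one that disables public security group access), or you can author a custom document that removes only the offending rule.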

A) This is the correct answer because AWS Config managed rules continuously monitor security groups for unrestricted inbound access, and Systems Manager Automation provides automated remediation capabilities to remove or modify non-compliant rules without manual intervention.

B) While a Lambda function could scan security groups and identify issues, daily scanning does not provide continuous monitoring and introduces delays in detection. This approach also requires custom code development and maintenance compared to using managed AWS Config rules.

C) VPC Flow Logs show network traffic but cannot identify security group misconfigurations before they are exploited. This reactive approach only detects issues after traffic occurs and requires manual remediation, failing to prevent unauthorized access attempts.

D) AWS Firewall Manager can centrally apply security group policies across accounts, but it is primarily designed for deploying protections such as AWS WAF rules and security group baselines at scale. It does not provide the continuous compliance monitoring and automated remediation that AWS Config with Systems Manager Automation offers for this scenario.

Question 16

A company needs to implement cross-account access for an application in Account A to read objects from an S3 bucket in Account B. What is the MOST secure method to grant this access?

A) Share the AWS access keys from Account B with Account A

B) Create an IAM role in Account B with S3 read permissions and establish a trust relationship with Account A

C) Make the S3 bucket public and use bucket policies to restrict access

D) Use S3 bucket ACLs to grant access to Account A

Answer: B

Explanation:

Cross-account access is a common requirement in AWS environments where different applications, teams, or organizations need to share resources securely. The challenge is to enable this access without compromising security by sharing credentials or making resources publicly accessible. IAM roles with cross-account trust relationships provide the most secure and manageable solution for this scenario.

When implementing cross-account access using IAM roles, you create a role in the account that owns the resource (Account B in this case) and define a trust policy that specifies which external accounts or principals can assume the role. The trust policy establishes the trust relationship, while the role’s permission policies define what actions can be performed on resources when the role is assumed.
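A minimal sketch of the trust policy attached to the role in Account B follows; `ACCOUNT_A_ID` is a placeholder for Account A's twelve-digit account ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_A_ID:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The role's separate permission policy would then grant only the needed access, for example `s3:GetObject` on the specific bucket, keeping the cross-account grant as narrow as possible. Adding an `sts:ExternalId` condition is a common hardening step when the assuming party is a third party.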

In Account A, the application can assume the cross-account role in Account B using the AssumeRole API call. AWS Security Token Service returns temporary security credentials that the application uses to access the S3 bucket. These temporary credentials automatically expire after a defined period, reducing the risk associated with credential compromise. The entire process avoids sharing long-term credentials between accounts.

A) Sharing AWS access keys between accounts is a severe security anti-pattern. Access keys are long-term credentials that, if compromised, provide ongoing access until manually revoked. This approach also makes credential rotation difficult and creates audit trail complications since actions appear to come from the same user in Account B.

B) This is the correct answer because IAM roles with cross-account trust relationships provide temporary credentials, eliminate the need for sharing long-term credentials, enable fine-grained permission control, and create clear audit trails showing when Account A assumes the role to access resources in Account B.

C) Making an S3 bucket public exposes it to anyone on the internet, not just Account A. Even with bucket policies attempting to restrict access, public buckets are frequently targeted by attackers. This approach introduces unnecessary security risk and does not meet the requirement for secure cross-account access.

D) S3 bucket ACLs are a legacy access control mechanism that provides coarse-grained permissions. While ACLs can technically grant cross-account access, they lack the flexibility, auditability, and security features of IAM roles. AWS recommends using IAM roles and bucket policies instead of ACLs for modern applications.

Question 17

A security team needs to ensure that all data in an Amazon DynamoDB table is encrypted at rest. The encryption keys must be rotated annually and key usage must be auditable. Which solution meets these requirements?

A) Use DynamoDB default encryption with AWS owned keys

B) Use DynamoDB encryption at rest with AWS managed KMS keys

C) Use DynamoDB encryption at rest with customer managed KMS keys

D) Implement application-level encryption before storing data in DynamoDB

Answer: C

Explanation:

DynamoDB provides multiple encryption options for data at rest, each offering different levels of control over encryption keys and key management capabilities. Organizations with compliance requirements often need control over key rotation schedules and comprehensive audit trails of key usage, which requires customer-managed encryption keys rather than AWS-managed alternatives.

Customer managed KMS keys provide the highest level of control and auditability. Organizations create and manage these keys in AWS KMS, define key policies that control who can use and manage keys, enable automatic annual rotation, and monitor all key usage through CloudTrail logs. When a customer managed key is used for DynamoDB encryption, all table data including primary key values, local and global secondary indexes, streams, global tables, backups, and point-in-time recovery data is encrypted.
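Selecting the customer managed key happens in the table's SSE specification. The fragment below sketches the relevant portion of a CreateTable request; the key ARN is a placeholder:

```json
{
  "SSESpecification": {
    "Enabled": true,
    "SSEType": "KMS",
    "KMSMasterKeyId": "arn:aws:kms:us-east-1:ACCOUNT_ID:key/KEY_ID"
  }
}
```

Annual rotation is then enabled on the key itself in KMS (for example, via the EnableKeyRotation API), and every use of the key by DynamoDB appears in CloudTrail.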

KMS automatic key rotation rotates the cryptographic material annually while maintaining the same key ID and key ARN. This means applications and configurations continue to reference the same key without changes, but the underlying cryptographic material used for encryption operations is rotated. KMS maintains all previous versions of the cryptographic material to decrypt data encrypted with older versions, ensuring seamless operation during and after rotation.

A) AWS owned keys are managed entirely by AWS and shared across multiple customer accounts. While they provide encryption at rest with no additional cost, customers have no control over key rotation schedules, cannot audit key usage, and cannot apply key policies. This option fails to meet the auditability and annual rotation control requirements.

B) AWS managed KMS keys are created and managed by AWS for use with DynamoDB. They provide encryption at rest and are rotated automatically, but on a schedule that AWS controls: customers cannot set or change the rotation interval and cannot modify the key policies. Because rotation timing and key policies remain outside customer control, this option fails the requirement for customer-controlled annual rotation.

C) This is the correct answer because customer managed KMS keys allow organizations to enable automatic annual key rotation, provide comprehensive CloudTrail logging of all key usage for auditability, and offer full control over key policies and permissions.

D) Application-level encryption adds complexity by requiring applications to encrypt data before writing to DynamoDB and decrypt after reading. While this provides defense in depth, it does not meet the requirement for DynamoDB encryption at rest with auditable key rotation and introduces significant development overhead.

Question 18

An organization must meet compliance requirements that mandate all privileged user actions be logged and retained for seven years. Which AWS service combination provides this capability?

A) AWS CloudTrail with S3 lifecycle policies for seven-year retention

B) Amazon CloudWatch Logs with seven-year retention period

C) AWS Config with seven-year retention

D) VPC Flow Logs stored in S3

Answer: A

Explanation:

Compliance frameworks commonly require long-term retention of audit logs documenting privileged user actions. These logs serve as evidence during audits, support forensic investigations, and help organizations meet regulatory obligations. AWS CloudTrail provides comprehensive API logging that captures all management console activities, API calls, and SDK actions, including those performed by privileged users.

CloudTrail logs capture detailed information about each API call including the identity of the caller, the time of the call, the source IP address, the request parameters, and the response elements. For privileged user actions, CloudTrail logs identify whether the action was performed by the root account, an IAM user, an assumed role, or a federated user. This comprehensive logging meets compliance requirements for documenting privileged access.

For long-term retention, CloudTrail logs should be delivered to an S3 bucket configured with appropriate lifecycle policies. S3 provides durable, cost-effective storage for logs, and lifecycle policies can automatically transition older logs to lower-cost storage classes like S3 Glacier or S3 Glacier Deep Archive while maintaining the seven-year retention requirement. S3 Object Lock can be added to ensure logs cannot be deleted before the retention period expires.
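A lifecycle configuration implementing this tiering might look like the sketch below. The prefix, transition days, and rule name are illustrative choices; 2,557 days approximates seven years including leap days:

```json
{
  "Rules": [
    {
      "ID": "audit-log-seven-year-retention",
      "Status": "Enabled",
      "Filter": { "Prefix": "AWSLogs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 2557 }
    }
  ]
}
```

Pairing this with S3 Object Lock in compliance mode ensures that even administrators cannot delete logs before the retention period ends.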

A) This is the correct answer because CloudTrail captures all privileged user actions as API calls with detailed attribution, and S3 with lifecycle policies provides cost-effective, durable storage for seven years. This combination meets both logging and retention requirements while minimizing storage costs.

B) CloudWatch Logs can receive CloudTrail events and supports long retention settings, but it is significantly more expensive than S3 for very long-term storage. While technically capable of seven-year retention, S3 with lifecycle policies is the more cost-effective choice for this use case.

C) AWS Config records resource configuration changes and compliance status but does not provide comprehensive logging of all privileged user actions. Config focuses on configuration state rather than action-level audit logging, making it unsuitable as the primary solution for this requirement.

D) VPC Flow Logs capture network traffic metadata but do not log privileged user actions or API calls. Flow logs provide network-level visibility but cannot document administrative actions or privileged operations performed through the management console or APIs.

Question 19

A development team needs temporary access to production AWS resources for troubleshooting. The security team requires that this access be time-limited and automatically expire after four hours. Which solution meets this requirement?

A) Create IAM users for developers with four-hour password expiration

B) Use IAM roles with a maximum session duration of four hours

C) Use temporary security credentials from AWS STS with a four-hour expiration

D) Create IAM policies with time-based conditions for four-hour access

Answer: B

Explanation:

Temporary access to production resources is a common requirement that must be carefully controlled to maintain security. Time-limited access reduces the risk associated with credential compromise and ensures that developers do not retain unnecessary access after completing their troubleshooting tasks. IAM roles provide the most effective mechanism for implementing time-limited access to AWS resources.

IAM roles support configurable maximum session durations ranging from one hour to twelve hours. When a role is assumed using the AssumeRole API call, AWS Security Token Service issues temporary security credentials that remain valid for the specified duration, up to the role’s maximum session duration. These temporary credentials automatically expire when the session duration elapses, requiring the user to re-authenticate and re-assume the role to continue access.

The maximum session duration is configured at the role level, enforcing a consistent ceiling regardless of what duration users request when assuming the role. If a user requests a session longer than the role’s maximum, the AssumeRole call fails with a validation error rather than issuing longer-lived credentials. This ensures that temporary access cannot exceed the desired four-hour limit.
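The enforcement behavior can be modeled with a small sketch. This is purely illustrative (the real check happens inside AWS STS), assuming the documented 15-minute AssumeRole minimum and a role configured with a four-hour cap:

```python
# Illustrative model of how STS enforces a role's maximum session duration:
# out-of-range requests are rejected with an error, not silently capped.

MIN_SESSION_SECONDS = 900            # 15 minutes, the AssumeRole minimum
ROLE_MAX_SESSION_SECONDS = 4 * 3600  # the role's configured 4-hour cap (14400s)

def validate_session_duration(requested, role_max=ROLE_MAX_SESSION_SECONDS):
    """Return the requested duration if acceptable, otherwise raise,
    mimicking the validation error AssumeRole returns."""
    if requested < MIN_SESSION_SECONDS or requested > role_max:
        raise ValueError(
            f"DurationSeconds must be between {MIN_SESSION_SECONDS} "
            f"and {role_max}, got {requested}"
        )
    return requested
```

In practice, the developer would simply call AssumeRole with `DurationSeconds=14400` (or less) against a role whose maximum session duration an administrator has set to four hours.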

A) IAM user password expiration does not limit the validity of access keys or current sessions. Even after a password expires, existing access keys and active sessions remain valid. Password expiration also requires manual password resets, creating administrative overhead without providing automatic access revocation.

B) This is the correct answer because IAM roles with a four-hour maximum session duration automatically issue temporary credentials that expire after four hours, require no manual intervention to revoke access, and provide a clean audit trail of when developers assumed roles and when their access expired.

C) While STS can issue temporary credentials with custom expiration times, this approach requires writing custom code to call STS APIs and manage credential distribution. Using IAM roles provides the same temporary credentials through STS but with a simpler, more maintainable implementation through the AssumeRole workflow.

D) Time-based conditions in IAM policies can restrict when actions can be performed based on the current date and time, but they do not create time-limited credentials that automatically expire. Users would retain valid credentials beyond the four-hour window, requiring manual revocation to end access.

Question 20

A company must ensure that all API calls to AWS services originate from within their corporate network or AWS VPC. Which approach enforces this requirement?

A) Implement VPC endpoints for all AWS services

B) Use IAM policy conditions to restrict access based on source IP addresses or VPC endpoints

C) Enable AWS PrivateLink for all services

D) Configure security groups to allow only internal IP addresses

Answer: B

Explanation:

Restricting AWS API access based on network location is an important security control that prevents unauthorized access from unknown locations. Organizations often require that administrative actions and sensitive operations can only be performed from trusted network locations such as corporate offices or within AWS VPCs. IAM policies with condition keys provide the mechanism to enforce these network-based access controls.

IAM policies support condition keys that evaluate the source of API requests. The aws:SourceIp condition key checks the IP address of the request, allowing policies to permit or deny actions based on whether the request originates from specified IP address ranges. For corporate networks, organizations can specify their public IP address ranges. The aws:SourceVpce condition key restricts access to requests that come through specific VPC endpoints.

These conditions can be combined in IAM policies attached to users, groups, or roles. For example, a policy might allow S3 actions only if the request comes from the corporate IP range or through a specific VPC endpoint. This dual condition ensures that access is permitted from either the corporate network or from resources within the VPC, while blocking access from other locations like unauthorized devices or public networks.
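The combined condition described above can be sketched as a deny statement like the following. The IP range is from the documentation block 203.0.113.0/24 and `vpce-EXAMPLE` is a placeholder; because the three condition operators are ANDed, the deny fires only when a request matches none of the trusted origins. The `aws:ViaAWSService` exemption is a commonly recommended safeguard so that calls AWS services make on your behalf are not blocked:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideCorpNetworkAndVpce",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": ["203.0.113.0/24"] },
        "StringNotEquals": { "aws:SourceVpce": "vpce-EXAMPLE" },
        "Bool": { "aws:ViaAWSService": "false" }
      }
    }
  ]
}
```

Policies like this should be tested carefully in a non-production account first, since an overly strict source condition can lock out legitimate administrators.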

A) VPC endpoints provide private connectivity between VPCs and AWS services without traversing the internet, but they do not enforce network-based access restrictions. Resources can still access services through internet gateways or NAT gateways unless additional controls are implemented. VPC endpoints alone do not restrict access based on network origin.

B) This is the correct answer because IAM policy conditions with aws:SourceIp and aws:SourceVpce keys enforce network-based restrictions, allowing API calls only from specified IP ranges (corporate network) or through VPC endpoints (AWS VPC), and automatically denying requests from unauthorized network locations.

C) AWS PrivateLink enables private connectivity to AWS services and endpoints, but like VPC endpoints, it does not automatically enforce network-based access restrictions. PrivateLink provides the connectivity mechanism but must be combined with IAM policy conditions to enforce access controls.

D) Security groups control network traffic to and from AWS resources at the instance level but do not control API calls to AWS services. API calls are authenticated using IAM credentials and are subject to IAM policies, not security group rules. Security groups cannot restrict AWS API access based on network location.
