Amazon AWS Certified Security – Specialty SCS-C02 Exam Dumps and Practice Test Questions, Set 2 (Questions 21–40)

Visit here for our full Amazon AWS Certified Security – Specialty SCS-C02 exam dumps and practice test questions.

Question 21

A security engineer discovers that an Amazon S3 bucket containing sensitive data was accidentally made public. Which immediate action should be taken to prevent data access?

A) Delete the S3 bucket immediately

B) Enable S3 Block Public Access settings for the bucket

C) Remove all objects from the bucket

D) Change the bucket name to make it harder to discover

Answer: B

Explanation:

When a sensitive S3 bucket is discovered to be publicly accessible, immediate action is required to prevent unauthorized data access while preserving the data and minimizing service disruption. S3 Block Public Access provides a rapid response mechanism that overrides existing bucket policies and ACLs that grant public access, immediately securing the bucket without requiring data deletion or complex policy modifications.

S3 Block Public Access settings can be applied at both the account and bucket levels. When enabled for a specific bucket, these settings prevent the bucket from becoming publicly accessible regardless of bucket policies or ACLs. Block Public Access has four settings: BlockPublicAcls prevents new public ACLs, IgnorePublicAcls ignores existing public ACLs, BlockPublicPolicy prevents new public bucket policies, and RestrictPublicBuckets restricts access to buckets with public policies.
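
As a rough illustration (the bucket name below is a placeholder), enabling all four settings on an individual bucket with boto3 could look like this:

```python
import boto3

s3 = boto3.client("s3")

# Turn on all four Block Public Access settings for one bucket.
# This overrides any public ACLs or bucket policies already in place.
s3.put_public_access_block(
    Bucket="example-sensitive-bucket",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # reject new public ACLs
        "IgnorePublicAcls": True,       # ignore existing public ACLs
        "BlockPublicPolicy": True,      # reject new public bucket policies
        "RestrictPublicBuckets": True,  # restrict access granted by existing public policies
    },
)
```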

Enabling Block Public Access is non-destructive and reversible. It immediately stops public access without deleting data or disrupting legitimate internal access through IAM roles and policies. After enabling Block Public Access, the security team can investigate how the bucket became public, review access logs to determine if unauthorized access occurred, and implement additional controls to prevent future incidents.

A) Deleting the bucket would prevent access but results in permanent data loss. This extreme action should only be considered if the data has been completely compromised and no recovery is possible. In most cases, data should be preserved for investigation and potential restoration of proper access controls.

B) This is the correct answer because S3 Block Public Access immediately prevents public access to the bucket and its objects, preserves all data for investigation and legitimate use, can be implemented in seconds through the console or API, and does not disrupt internal access through proper IAM credentials.

C) Removing all objects prevents public access but results in data loss and service disruption. Legitimate applications relying on the data would fail, and the data would need to be restored from backups. This approach is unnecessarily disruptive compared to simply blocking public access.

D) Changing the bucket name requires creating a new bucket, copying all data, updating all application configurations, and deleting the old bucket. This process is time-consuming and does not immediately address the security issue. Additionally, the old bucket name remains publicly accessible until deleted.

Question 22

An organization requires all Amazon EC2 instances to use specific approved AMIs that have been hardened according to security standards. How can this requirement be enforced?

A) Manually review all EC2 instances weekly

B) Use AWS Config with a custom rule to detect instances launched from non-approved AMIs

C) Implement EC2 Image Builder to create AMIs automatically

D) Use AWS Organizations SCPs to prevent EC2 instance launches

Answer: B

Explanation:

Ensuring that all EC2 instances use approved, hardened AMIs is critical for maintaining consistent security baselines across an organization’s infrastructure. Approved AMIs typically include security patches, hardened configurations, security monitoring agents, and organizational compliance requirements. Automated detection and enforcement of AMI usage requirements prevents the deployment of instances with unknown security postures.

AWS Config continuously monitors and records AWS resource configurations, including EC2 instances and their associated AMIs. Organizations can create custom Config rules that evaluate whether instances are launched from a list of approved AMI IDs. When Config detects an instance launched from an unapproved AMI, it marks the instance as non-compliant and can trigger automated responses.

Custom Config rules use AWS Lambda functions to implement the evaluation logic. The Lambda function receives instance configuration data, checks the AMI ID against a list of approved AMIs stored in a parameter or configuration file, and returns a compliance status. Config can be configured to trigger remediation actions such as stopping the instance, sending notifications to security teams, or creating service tickets for investigation.
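
A minimal sketch of the evaluation Lambda such a custom rule might use is shown below; the approved AMI IDs are hypothetical and error handling is omitted:

```python
import json
import boto3

config = boto3.client("config")

APPROVED_AMIS = {"ami-0abc1234de56f7890", "ami-0fedcba9876543210"}  # hypothetical approved AMIs

def lambda_handler(event, context):
    # AWS Config passes the changed resource inside invokingEvent.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]

    ami_id = item["configuration"].get("imageId")
    compliance = "COMPLIANT" if ami_id in APPROVED_AMIS else "NON_COMPLIANT"

    # Report the evaluation result back to AWS Config.
    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": item["resourceId"],
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```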

A) Manual weekly reviews introduce significant delays between instance launch and detection of non-compliant AMIs. During this window, vulnerable or misconfigured instances remain operational, potentially exposing the organization to security risks. Manual reviews also do not scale well as the number of instances grows.

B) This is the correct answer because AWS Config with custom rules provides continuous automated monitoring of instance AMI compliance, detects non-compliant instances immediately after launch, enables automated remediation actions, and maintains compliance history for audit purposes.

C) EC2 Image Builder helps create and maintain approved AMIs through automated pipelines, but it does not enforce that only approved AMIs are used for instance launches. Image Builder is complementary to compliance enforcement but does not prevent users from launching instances with unapproved AMIs.

D) Service Control Policies can restrict EC2 actions across an organization, but using SCPs to completely prevent EC2 launches would block all instance creation including those from approved AMIs. SCPs do not provide the granular control needed to allow only specific AMIs while blocking others.

Question 23

A company uses AWS Organizations with multiple accounts. The security team needs to ensure that no one can disable CloudTrail logging in any account. Which solution implements this requirement?

A) Enable CloudTrail in each account with termination protection

B) Create a Service Control Policy that denies cloudtrail:StopLogging and cloudtrail:DeleteTrail actions

C) Use IAM policies in each account to restrict CloudTrail modifications

D) Enable AWS Config rules to detect CloudTrail changes

Answer: B

Explanation:

Protecting audit logging infrastructure is paramount for security and compliance. If attackers or malicious insiders can disable CloudTrail logging, they can perform actions without creating audit trails, severely hampering incident investigation and detection capabilities. Organizations need preventive controls that make it impossible to disable logging, rather than detective controls that only alert after logging has been disabled.

Service Control Policies are a feature of AWS Organizations that provide centralized access control across all accounts in the organization. SCPs define the maximum available permissions for member accounts and affect all users and roles in those accounts, including the account root user. An SCP that denies CloudTrail stop and delete actions prevents anyone in member accounts from disabling CloudTrail, regardless of their IAM permissions.

The SCP should explicitly deny the cloudtrail:StopLogging and cloudtrail:DeleteTrail actions. When combined with an organization trail that logs across all accounts, this configuration creates an immutable audit logging system. Even if an account has local IAM policies that would normally permit CloudTrail modifications, the SCP denial takes precedence and prevents the actions from succeeding.
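
For illustration, the SCP could be defined and registered with boto3 as follows, assuming permissions to manage Organizations policies (the policy name is a placeholder):

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny-based SCP: no principal in member accounts can stop or delete trails.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ProtectCloudTrail",
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}

organizations.create_policy(
    Name="DenyCloudTrailDisable",
    Description="Prevent anyone from stopping or deleting CloudTrail trails",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
# The policy must still be attached to the root, OUs, or accounts with attach_policy().
```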

A) CloudTrail does not have a termination protection feature like RDS or EC2 instances. While organization trails provide centralized logging, they do not by themselves prevent a trail from being stopped or deleted; additional controls such as SCPs are still required.

B) This is the correct answer because SCPs with explicit denials for CloudTrail stop and delete actions create an organizational-level control that prevents CloudTrail from being disabled in any member account, overriding any permissive IAM policies and protecting even against compromised accounts.

C) IAM policies in individual accounts can be modified by users with appropriate permissions in those accounts. An attacker who compromises an account with IAM administrative permissions could modify the IAM policies to permit CloudTrail modifications and then disable logging. IAM policies alone do not provide sufficient protection.

D) AWS Config rules detect configuration changes after they occur, but they do not prevent CloudTrail from being disabled. While Config can alert on CloudTrail changes, there would be a window where logging is disabled before the alert is received and action is taken. This approach is detective rather than preventive.

Question 24

An application requires access to objects in an S3 bucket for a limited time. The security team wants to ensure that access URLs expire after one hour and cannot be reused. Which S3 feature provides this capability?

A) S3 bucket policies with time-based conditions

B) S3 presigned URLs with one-hour expiration

C) S3 Access Points with temporary credentials

D) S3 Object Lock with one-hour retention

Answer: B

Explanation:

Providing temporary, time-limited access to S3 objects is a common requirement for applications that need to share files with users or external systems without granting permanent access. Traditional approaches like making objects public create security risks, while managing complex permission systems adds operational overhead. S3 presigned URLs offer an elegant solution that provides temporary, secure access to specific objects.

Presigned URLs are generated using AWS credentials that have permission to access the object. The URL includes an authentication signature and an expiration timestamp. Anyone with the presigned URL can perform the specified operation (GET, PUT, DELETE, etc.) on the object until the URL expires, without needing their own AWS credentials. After expiration, the URL becomes invalid and requests using it are denied.

When generating a presigned URL, the creator specifies the expiration time, which can range from seconds to days. For one-hour access, the expiration would be set to 3600 seconds. The presigned URL inherits the permissions of the AWS credentials used to generate it, and those permissions are re-evaluated each time the URL is used. If the generating credentials are revoked or expire, presigned URLs created with those credentials become invalid.
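
For illustration, generating a one-hour GET URL with boto3 might look like this (bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that allows a GET on one object for one hour (3600 seconds).
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-reports-bucket", "Key": "statements/2024-06.pdf"},  # placeholders
    ExpiresIn=3600,
)
print(url)  # anyone holding this URL can download the object until it expires
```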

A) S3 bucket policies with time-based conditions can restrict access during specific time windows, but they apply to all requests matching the policy conditions, not individual URLs. They cannot provide per-request temporary access or create URLs that automatically expire after being generated.

B) This is the correct answer because S3 presigned URLs provide temporary access to specific objects with configurable expiration times, require no additional AWS credentials from the user, automatically expire after the specified duration, and cannot be reused once expired or if the generating credentials are revoked.

C) S3 Access Points simplify managing access to shared S3 buckets by creating unique hostnames with specific permissions, but they do not create time-limited URLs. Access Points are endpoints with permanent access policies, not temporary access mechanisms.

D) S3 Object Lock prevents object deletion or modification for compliance and data retention purposes. It controls object mutability over time but does not provide temporary access URLs or control read access to objects. Object Lock addresses a completely different use case than temporary access.

Question 25

A security engineer needs to implement network segmentation in a VPC to isolate database servers from web servers while allowing controlled communication between them. Which approach implements this requirement?

A) Place all servers in the same subnet and use security groups to control traffic

B) Create separate subnets for web and database tiers with security groups controlling inter-tier traffic

C) Use different VPCs for web and database tiers connected by VPC peering

D) Implement NACLs on all subnets to block traffic

Answer: B

Explanation:

Network segmentation is a fundamental security principle that isolates different application tiers to limit the scope of potential security breaches. By separating web servers and database servers into different network segments, organizations can implement defense-in-depth architectures where compromising a web server does not automatically grant access to databases. AWS VPCs provide multiple mechanisms for implementing effective network segmentation.

Separating application tiers into different subnets creates logical and routing-level boundaries. Web servers typically reside in public subnets with internet gateway routes for external access, while database servers reside in private subnets with no direct internet access. This subnet separation ensures that databases cannot be directly reached from the internet, even if security group rules are misconfigured.

Security groups provide stateful firewall capabilities to control traffic between tiers. The database security group would allow inbound traffic only from the web server security group on specific database ports like 3306 for MySQL or 5432 for PostgreSQL. The web server security group allows inbound HTTPS traffic from the internet. This configuration implements the principle of least privilege by permitting only necessary communication paths.
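
A minimal sketch of the database-tier ingress rule, assuming hypothetical security group IDs for the two tiers:

```python
import boto3

ec2 = boto3.client("ec2")

WEB_SG_ID = "sg-0123456789abcdef0"  # hypothetical web-tier security group
DB_SG_ID = "sg-0fedcba9876543210"   # hypothetical database-tier security group

# Allow MySQL traffic into the database tier only from the web tier's security group.
ec2.authorize_security_group_ingress(
    GroupId=DB_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": WEB_SG_ID, "Description": "MySQL from web tier"}],
    }],
)
```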

A) Placing all servers in the same subnet with only security groups for separation provides logical isolation but lacks the defense-in-depth benefits of subnet separation. If a web server is compromised and security groups are misconfigured or bypassed, the attacker sits on the same network segment as the databases with no routing-level boundary between them.

B) This is the correct answer because separate subnets provide routing-level isolation and support different internet connectivity models (public vs private), while security groups enforce stateful firewall rules controlling exactly which traffic can flow between tiers. This combination implements effective network segmentation with defense-in-depth.

C) Using completely separate VPCs for different tiers is overly complex for typical application segmentation within a single application stack. While VPC separation provides strong isolation, it introduces complexity in networking, increases costs, and complicates management compared to subnet-based segmentation within a single VPC.

D) Network ACLs alone, especially configured to block all traffic, would prevent necessary communication between tiers. While NACLs can complement security groups, they should be used carefully as they are stateless and can complicate troubleshooting. Blocking all traffic is operationally infeasible for functioning applications.

Question 26

An organization must ensure that deleted data in Amazon EBS volumes cannot be recovered. Which approach meets this requirement when terminating EC2 instances?

A) Enable EBS encryption on all volumes

B) Manually overwrite all data before deleting volumes

C) Enable DeleteOnTermination for EBS volumes and use encrypted volumes

D) Take final snapshots before terminating instances

Answer: C

Explanation:

Data remanence is the residual representation of data that remains after attempts have been made to remove or erase the data. For organizations handling sensitive information, ensuring that deleted data cannot be recovered is critical for compliance and security. When EC2 instances are terminated, associated EBS volumes can persist if not properly configured, and even after volume deletion, data sanitization concerns must be addressed.

The DeleteOnTermination attribute for EBS volumes controls whether volumes are automatically deleted when their attached EC2 instance is terminated. When set to true, the volume is deleted immediately upon instance termination, preventing orphaned volumes from containing residual data. Without this setting, volumes persist after instance termination, creating potential security risks if not manually deleted.

EBS encryption ensures that data on volumes is encrypted at rest using AES-256 encryption. When an encrypted EBS volume is deleted, AWS zeroizes the encryption keys, making the data cryptographically irretrievable even if the physical storage media is later accessed. This approach provides stronger data sanitization than physical overwriting methods because the encrypted data cannot be decrypted without the keys.
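
As an illustration, a launch request that sets both attributes on the root volume might look like the following (the AMI ID, device name, and sizes are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch an instance whose root volume is encrypted and deleted on termination.
ec2.run_instances(
    ImageId="ami-0abc1234de56f7890",  # placeholder hardened AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {
            "VolumeSize": 20,
            "VolumeType": "gp3",
            "Encrypted": True,            # data at rest is encrypted with KMS
            "DeleteOnTermination": True,  # volume is removed when the instance terminates
        },
    }],
)
```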

A) Encryption alone does not automatically delete volumes when instances terminate. Without DeleteOnTermination enabled, encrypted volumes would persist after instance termination, and while the data is encrypted, the volumes themselves remain accessible and could be attached to other instances if proper access controls are not maintained.

B) Manual data overwriting before deletion is time-consuming, error-prone, and may not fully sanitize data on modern SSDs due to wear-leveling algorithms. This approach requires custom scripts, delays instance termination, and provides no cryptographic guarantee that data cannot be recovered from the physical media.

C) This is the correct answer because combining DeleteOnTermination with EBS encryption ensures volumes are automatically deleted when instances terminate, and encryption key zeroization makes any residual data on physical media cryptographically unrecoverable, providing strong assurance against data recovery.

D) Taking final snapshots before terminating instances preserves the data rather than ensuring it cannot be recovered. While snapshots are useful for backup purposes, they directly contradict the requirement to prevent data recovery after instance termination.

Question 27

A company’s security policy requires that AWS root account credentials never be used for daily operations. Which measures enforce and monitor this requirement?

A) Delete the root account access keys and enable MFA, use CloudTrail to monitor root account usage

B) Disable the root account completely

C) Change the root account password monthly

D) Create an IAM policy preventing root account usage

Answer: A

Explanation:

The AWS root account has unrestricted access to all resources and billing information in an account. Using root account credentials for daily operations creates significant security risks because there are no permission boundaries or restrictions on what the root account can do. If root credentials are compromised, attackers have complete control over the account, can delete all resources, and can cause devastating financial and operational damage.

AWS best practices strongly recommend against using the root account for any tasks except those that explicitly require root privileges, such as changing account settings, closing the account, or managing consolidated billing. To prevent root account usage, organizations should delete any root account access keys that may have been created. The root account uses an email address and password for console access, not access keys for programmatic access.

Multi-factor authentication adds an additional security layer requiring a second factor (typically a hardware or virtual MFA device) beyond the password for root account authentication. Even if the root password is compromised, attackers cannot log in without the MFA code. CloudTrail logs all AWS API calls including those made using root credentials, enabling security teams to detect and investigate any root account usage.
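
One common way to surface root usage, assuming the trail already delivers events to a CloudWatch Logs group (the group name below is a placeholder), is a metric filter like this:

```python
import boto3

logs = boto3.client("logs")

# Assumes the organization's CloudTrail trail already delivers to this log group.
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",  # placeholder log group name
    filterName="RootAccountUsage",
    # Widely used pattern for console or API activity performed with root credentials.
    filterPattern='{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS '
                  '&& $.eventType != "AwsServiceEvent" }',
    metricTransformations=[{
        "metricName": "RootAccountUsageCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)
# A CloudWatch alarm on RootAccountUsageCount can then notify the security team.
```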

A) This is the correct answer because deleting root access keys prevents programmatic root account access, MFA protects console access even if passwords are compromised, and CloudTrail monitoring enables detection of any root account usage for investigation and response.

B) The AWS root account cannot be completely disabled. It is fundamental to account ownership and is required for certain account management operations. Organizations must secure the root account rather than attempting to disable it.

C) While regular password changes improve security, monthly password rotation alone does not prevent root account usage or provide monitoring capabilities. Password changes without access key deletion, MFA, and monitoring provide insufficient protection and do not address the core requirement of preventing daily operational use.

D) IAM policies cannot restrict the root account. The root account has permissions that supersede all IAM policies and cannot be limited through policy mechanisms. This fundamental characteristic of the root account means prevention must focus on credential protection and access monitoring rather than permission restrictions.

Question 28

A security team discovers unusual API calls being made from an EC2 instance at 2 AM daily. They need to investigate the source and nature of these calls. Which AWS service provides the necessary information?

A) Amazon CloudWatch Metrics

B) AWS CloudTrail

C) VPC Flow Logs

D) AWS Config

Answer: B

Explanation:

Investigating suspicious API activity requires detailed audit logs that capture what actions were performed, when they occurred, who initiated them, and from which source. AWS CloudTrail is specifically designed to provide comprehensive API activity logging, making it the essential tool for security investigations involving AWS API calls. CloudTrail creates a detailed record of events that can be analyzed to understand user behavior and detect anomalous activity.

CloudTrail logs capture extensive information about each API call including the IAM identity that made the call (user, role, or service), the timestamp, the source IP address, the AWS service and action called, the request parameters, and the response elements. For an EC2 instance making API calls, CloudTrail would show whether the calls used an instance profile role, which specific actions were performed, and which resources were accessed.

For investigating the 2 AM suspicious activity, security teams would query CloudTrail logs for the specific time window, filter for API calls originating from the instance’s IAM role or instance ID, and analyze the actions performed. CloudTrail Insights can also automatically detect unusual API activity patterns, such as sudden increases in specific API calls or actions performed at unusual times, helping identify anomalies without manual log analysis.
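
A sketch of querying CloudTrail for that window with boto3; the time range and session name (for instance-profile sessions this is typically the instance ID) are placeholders:

```python
from datetime import datetime, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

# Pull management events from the suspicious 2 AM window (times are placeholders, in UTC).
response = cloudtrail.lookup_events(
    StartTime=datetime(2024, 6, 1, 1, 45, tzinfo=timezone.utc),
    EndTime=datetime(2024, 6, 1, 2, 30, tzinfo=timezone.utc),
    LookupAttributes=[
        {"AttributeKey": "Username", "AttributeValue": "i-0123456789abcdef0"},  # placeholder session name
    ],
    MaxResults=50,
)

for event in response["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```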

A) CloudWatch Metrics capture performance and operational data such as CPU utilization, network traffic, and custom application metrics. Metrics show that activity occurred but do not provide details about specific API calls, actions performed, or user identities. Metrics are insufficient for investigating API-level security incidents.

B) This is the correct answer because CloudTrail provides comprehensive logs of all API calls including detailed information about who made the calls, when they occurred, from which IP address or instance, what actions were performed, and which resources were accessed. This information is essential for investigating suspicious API activity.

C) VPC Flow Logs capture network traffic metadata showing IP addresses, ports, and protocols for connections to and from network interfaces. Flow logs show network-level activity but do not capture API calls or provide information about which specific AWS actions were performed.

D) AWS Config records resource configuration changes over time and evaluates configurations against compliance rules. While Config can show configuration state changes, it does not log the detailed API call information needed to investigate who made specific calls and when they occurred.

Question 29

An organization uses AWS Lambda functions to process sensitive data. The security team requires that all Lambda functions run in a VPC and cannot access the internet. How can this requirement be enforced?

A) Configure all Lambda functions with VPC settings manually

B) Use Service Control Policies to deny Lambda function creation without VPC configuration and internet access

C) Implement AWS Config rules to detect non-compliant Lambda functions

D) Use IAM policies to restrict Lambda permissions

Answer: B

Explanation:

AWS Lambda functions by default execute in an AWS-managed VPC with internet access. For sensitive workloads, organizations may require Lambda functions to run in customer-managed VPCs where network access can be controlled through security groups and network ACLs. Additionally, preventing internet access ensures functions can only interact with resources within the VPC or through VPC endpoints, reducing the attack surface and preventing data exfiltration.

Service Control Policies provide preventive controls at the organizational level that can enforce requirements across all accounts. An SCP can be created to deny the creation or update of Lambda functions unless they include VPC configuration parameters. Additionally, the SCP can require that Lambda functions only access VPC endpoints rather than having NAT gateway or internet gateway access for outbound connectivity.

The SCP would use condition keys to check for the presence of VPC configuration in Lambda function creation and update API calls. Functions created without proper VPC settings or with internet access capabilities would be blocked at the API level before they are created. This preventive control is superior to detective controls because non-compliant functions never exist, eliminating the security risk window.
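
A sketch of such an SCP, assuming the lambda:VpcIds condition key published by the Lambda service; it denies function creation or reconfiguration whenever no VPC IDs are supplied:

```python
import json

# Sketch: deny creating or reconfiguring Lambda functions unless a VPC configuration
# is supplied (lambda:VpcIds is the Lambda VPC-settings condition key).
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireLambdaVpcConfig",
        "Effect": "Deny",
        "Action": ["lambda:CreateFunction", "lambda:UpdateFunctionConfiguration"],
        "Resource": "*",
        "Condition": {"Null": {"lambda:VpcIds": "true"}},
    }],
}
print(json.dumps(scp_document, indent=2))
# Register and attach it with organizations.create_policy()/attach_policy(),
# as sketched for Question 23.
```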

A) Manual configuration of Lambda functions is error-prone and does not prevent developers from creating functions without VPC configuration or with internet access. This approach relies on human processes rather than technical enforcement and does not scale across large organizations with many Lambda functions.

B) This is the correct answer because SCPs provide organization-wide preventive controls that deny Lambda function creation without VPC configuration, can enforce that functions do not have internet access through policy conditions, and prevent non-compliant functions from being created regardless of individual IAM permissions.

C) AWS Config rules provide detective controls that identify non-compliant Lambda functions after they are created. While Config can trigger remediation, there is a window between function creation and detection where non-compliant functions exist and could potentially access sensitive data or the internet.

D) IAM policies control what actions principals can perform but cannot enforce specific configuration requirements like VPC settings on Lambda functions. IAM policies could prevent Lambda function creation entirely but lack the granularity to require specific configurations while allowing compliant functions.

Question 30

A company needs to scan Docker images for vulnerabilities before deploying them to Amazon ECS. Which AWS service provides this capability?

A) Amazon GuardDuty

B) Amazon Inspector

C) Amazon ECR image scanning

D) AWS Security Hub

Answer: C

Explanation:

Container security is critical in modern application architectures. Docker images can contain vulnerabilities in base operating systems, system libraries, or application dependencies. Deploying vulnerable images to production environments exposes applications to known exploits. Organizations need automated vulnerability scanning integrated into their container deployment pipelines to identify and remediate vulnerabilities before deployment.

Amazon Elastic Container Registry provides integrated image scanning capabilities using two scanning options: basic scanning powered by Clair and enhanced scanning powered by Amazon Inspector. ECR image scanning automatically scans images pushed to repositories, identifying software vulnerabilities by comparing packages in the image against databases of common vulnerabilities and exposures.

Enhanced scanning provides continuous monitoring and comprehensive vulnerability detection. When enabled, ECR automatically scans images when pushed and rescans them periodically to detect newly discovered vulnerabilities. Scan results include CVE IDs, severity ratings, descriptions, and remediation recommendations. Organizations can configure scan-on-push to ensure every image is scanned before it can be deployed.
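
For illustration, enabling scan-on-push and reading findings for a hypothetical repository and tag might look like this:

```python
import boto3

ecr = boto3.client("ecr")

# Make sure every image pushed to this repository is scanned automatically.
ecr.put_image_scanning_configuration(
    repositoryName="payments-api",  # placeholder repository
    imageScanningConfiguration={"scanOnPush": True},
)

# Later, retrieve the findings for a specific image tag.
findings = ecr.describe_image_scan_findings(
    repositoryName="payments-api",
    imageId={"imageTag": "1.4.2"},  # placeholder tag
)
for finding in findings["imageScanFindings"]["findings"]:
    print(finding["severity"], finding["name"])
```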

A) GuardDuty is a threat detection service that monitors AWS accounts for malicious activity and unauthorized behavior. While GuardDuty can detect runtime threats related to containers, it does not scan container images for vulnerabilities before deployment. GuardDuty focuses on behavioral analysis rather than static image scanning.

B) Amazon Inspector assesses EC2 instances and Lambda functions for vulnerabilities and network exposure. Inspector recently added support for container image scanning through ECR integration, but the scanning capability is delivered through ECR’s image scanning feature, making ECR the direct answer for this use case.

C) This is the correct answer because ECR image scanning is specifically designed to scan Docker images for vulnerabilities before deployment, integrates directly with container deployment pipelines, provides continuous scanning for newly discovered vulnerabilities, and generates detailed findings with CVE information and remediation guidance.

D) Security Hub aggregates security findings from multiple AWS services including ECR image scanning results. While Security Hub can display vulnerability findings, it does not perform the actual image scanning. Security Hub is a centralized view of security alerts rather than a scanning service.

Question 31

An application requires fine-grained access control where different users can access different items in a DynamoDB table based on their user ID. Which approach implements this requirement?

A) Create separate DynamoDB tables for each user

B) Use IAM policies with condition keys to filter table access based on leading keys

C) Implement application-level filtering after retrieving all items

D) Use DynamoDB streams to filter data

Answer: B

Explanation:

Fine-grained access control in DynamoDB enables applications to restrict user access to specific items or attributes based on user identity. This is essential for multi-tenant applications where each user should only access their own data. Rather than implementing access control in application code, IAM policies can enforce these restrictions at the AWS API level, providing stronger security guarantees.

IAM policies support the dynamodb:LeadingKeys condition key, which restricts access to items whose partition key matches a specific value. For user-based access control, the DynamoDB table would use user IDs as partition keys or as part of composite keys. IAM policies attached to user roles can include conditions that compare the partition key with the user's identity, allowing users to access only items where the key matches their user ID.

The policy would use variables such as aws:username or claims from federated identity providers to dynamically match against the DynamoDB partition key. When users make API calls to DynamoDB, IAM evaluates the policy conditions so that GetItem and Query operations succeed only for items whose partition key matches the user's ID, enforcing data isolation at the authorization layer.
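
A sketch of such a policy, expressed as a boto3 call; the table ARN, Region, account ID, and policy name are placeholders:

```python
import json
import boto3

iam = boto3.client("iam")

# Policy that lets a user read and write only items whose partition key
# equals their own IAM user name.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:PutItem"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/UserData",  # placeholder
        "Condition": {
            "ForAllValues:StringEquals": {"dynamodb:LeadingKeys": ["${aws:username}"]}
        },
    }],
}

iam.create_policy(
    PolicyName="DynamoDBPerUserAccess",
    PolicyDocument=json.dumps(policy_document),
)
```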

A) Creating separate tables for each user is impractical for applications with many users. This approach creates massive management overhead, does not scale, increases costs, and complicates application architecture. DynamoDB is designed to handle multi-tenant data efficiently within single tables using proper key design and access controls.

B) This is the correct answer because IAM policies with LeadingKeys conditions enforce fine-grained access control at the API level, restrict users to accessing only items matching their identity, require no application-level filtering logic, and provide strong security guarantees through AWS-enforced authorization.

C) Application-level filtering after retrieving all items is inefficient and insecure. This approach requires the application to have overly broad permissions to read all items, wastes network bandwidth and DynamoDB capacity by retrieving unnecessary data, and relies on application code rather than AWS authorization mechanisms to enforce security.

D) DynamoDB Streams capture item-level changes for processing by other applications. Streams are designed for event-driven architectures and data replication, not for implementing fine-grained access control. Streams do not provide mechanisms to filter user access to table items.

Question 32

A security engineer needs to rotate IAM user access keys for compliance requirements. Which approach minimizes the risk of service disruption during rotation?

A) Delete the old access key immediately after creating a new one

B) Create a second access key, update applications to use the new key, verify functionality, then delete the old key

C) Rotate access keys during scheduled maintenance windows only

D) Use AWS Secrets Manager to rotate access keys automatically

Answer: B

Explanation:

Access key rotation is a critical security practice that limits the exposure window if keys are compromised. However, improperly executed rotation can cause service disruptions when applications lose access to AWS services. IAM supports having two access keys active simultaneously per user, which enables a safe rotation process with zero downtime and the ability to rollback if issues are discovered.

The safest rotation process involves creating a second access key while the first remains active. Applications are then updated to use the new key and thoroughly tested to ensure all functionality works correctly. During this testing period, the old key remains active, providing a fallback if issues are discovered with the new key or if some components were missed during the update.

After verifying that all application components successfully use the new key and that no errors occur over a monitoring period, the old key can be safely deleted. This approach eliminates the risk of service disruption because the old key remains functional until confirmed that the new key works everywhere. If issues arise, applications can quickly revert to the old key without waiting for key generation or distribution.
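
A rough outline of this two-key rotation with boto3; the user name and old access key ID are placeholders:

```python
import boto3

iam = boto3.client("iam")
USER = "ci-deploy-user"  # placeholder IAM user

# 1. Create the second key while the old one keeps working.
new_key = iam.create_access_key(UserName=USER)["AccessKey"]
print("Distribute to applications:", new_key["AccessKeyId"])

# 2. After updating and verifying every application, deactivate the old key
#    first, so it can still be re-enabled if something was missed.
iam.update_access_key(UserName=USER, AccessKeyId="AKIAOLDKEYID1234", Status="Inactive")

# 3. Once a monitoring period passes with no failures, delete the old key.
iam.delete_access_key(UserName=USER, AccessKeyId="AKIAOLDKEYID1234")
```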

A) Immediately deleting the old key after creating a new one creates high risk of service disruption. If any application component still uses the old key or if the new key distribution failed, those components immediately lose access. This approach provides no safety margin or rollback capability.

B) This is the correct answer because IAM supports two active access keys simultaneously, allowing creation of new keys before deactivating old ones, providing time to update and test all application components, enabling rollback if issues occur, and eliminating service disruption risks during rotation.

C) While scheduling rotations during maintenance windows reduces user impact of potential disruptions, it does not eliminate the risk. The two-key rotation approach allows continuous operation even during key rotation, making maintenance windows unnecessary for this specific task.

D) Secrets Manager can store and rotate secrets but does not natively rotate IAM user access keys. Secrets Manager is designed for rotating database credentials and other application secrets. IAM access key rotation requires manual processes or custom automation rather than Secrets Manager’s built-in rotation.

Question 33

An organization must ensure that all data uploaded to Amazon S3 is encrypted before leaving the client’s premises. Which encryption method meets this requirement?

A) Server-side encryption with S3-managed keys (SSE-S3)

B) Server-side encryption with KMS keys (SSE-KMS)

C) Client-side encryption with AWS Encryption SDK

D) Server-side encryption with customer-provided keys (SSE-C)

Answer: C

Explanation:

For some organizations, security or compliance requirements mandate that data must be encrypted before it leaves their control and travels across networks to AWS. This ensures that even if network traffic is intercepted, the data remains protected. Client-side encryption provides this capability by encrypting data on the client’s premises before uploading it to S3.

The AWS Encryption SDK is a client-side encryption library that simplifies the process of encrypting and decrypting data. Applications integrate the SDK and encrypt data before calling S3 PUT operations. The encrypted data travels across the network and is stored encrypted in S3. AWS never receives or processes the unencrypted data. Decryption occurs on the client side after retrieving objects from S3.

Client-side encryption with the AWS Encryption SDK supports multiple key providers including KMS, providing flexibility in key management while maintaining client-side encryption. The SDK handles cryptographic best practices including envelope encryption, authenticated encryption, and key derivation. Applications control the encryption keys and process, ensuring that data confidentiality is maintained end-to-end.
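
A minimal sketch using the aws-encryption-sdk Python library (master key provider style API) together with boto3; the KMS key ARN, file name, and bucket are placeholders:

```python
import boto3
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

# Encrypt locally, before anything leaves the premises.
sdk_client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:us-east-1:111122223333:key/example-key-id"]  # placeholder key ARN
)

plaintext = open("payroll.csv", "rb").read()
ciphertext, _header = sdk_client.encrypt(source=plaintext, key_provider=key_provider)

# Only ciphertext is ever sent to S3.
boto3.client("s3").put_object(
    Bucket="example-secure-uploads", Key="payroll.csv.enc", Body=ciphertext
)
```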

A) SSE-S3 encrypts data server-side after AWS receives it. Data travels unencrypted from the client to AWS servers over HTTPS, where it is then encrypted. This fails the requirement that data must be encrypted before leaving the client’s premises. While HTTPS provides transport encryption, the data itself is not encrypted until reaching AWS.

B) SSE-KMS also encrypts data server-side after AWS receives it. Like SSE-S3, data travels to AWS in plaintext (within HTTPS) and is encrypted upon receipt. This does not meet the requirement for encrypting data before it leaves client premises.

C) This is the correct answer because the AWS Encryption SDK performs encryption on the client side before data leaves the premises, encrypted data travels across networks and is stored encrypted in S3, AWS never receives unencrypted data, and clients maintain control over encryption keys and processes.

D) SSE-C requires customers to provide encryption keys with each request, but encryption still occurs server-side after data reaches AWS. The data travels in plaintext (within HTTPS) to AWS where AWS performs encryption using the provided key. This does not satisfy client-side encryption requirements.

Question 34

A company’s security policy prohibits storing any secrets or credentials in application source code or configuration files. An application needs database credentials to connect to RDS. What is the MOST secure solution?

A) Store credentials in environment variables

B) Retrieve credentials from AWS Secrets Manager at application startup

C) Hard-code credentials with encryption in the application

D) Store credentials in S3 with encryption

Answer: B

Explanation:

Storing credentials in source code or configuration files is a critical security vulnerability. Source code is often stored in version control systems where credentials become permanently embedded in repository history. Configuration files are typically deployed alongside applications where file system access or memory dumps can expose credentials. Credentials in code also require application redeployment for rotation, creating operational challenges.

AWS Secrets Manager is specifically designed to eliminate hardcoded credentials by providing centralized secret storage with programmatic retrieval. Applications retrieve credentials at runtime using the Secrets Manager API. The credentials never exist in source code, configuration files, or deployment artifacts. Secrets Manager encrypts secrets at rest using KMS and transmits them securely using HTTPS.

Applications typically retrieve credentials at startup and cache them in memory. Secrets Manager supports automatic rotation for database credentials, which updates both the database and the secret value without requiring application changes or redeployment. Applications can optionally refresh credentials periodically by calling Secrets Manager, ensuring they receive rotated credentials without restart.
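
A minimal sketch of startup retrieval with boto3; the secret name and the JSON field names are assumptions:

```python
import json
import boto3

def get_db_credentials(secret_id="prod/orders/rds-credentials"):  # placeholder secret name
    """Fetch database credentials at startup instead of reading them from config files."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]

username, password = get_db_credentials()
# Pass the values straight to the database driver; never write them to disk or logs.
```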

A) Environment variables are accessible to any process running under the same execution context and can be exposed through process listings, error messages, or logging. Environment variables offer minimal security improvement over configuration files and still require credential management during deployment, failing to address the core security concern.

B) This is the correct answer because Secrets Manager eliminates credentials from source code and configuration files, provides encrypted storage and secure transmission of credentials, supports automatic credential rotation without application changes, and enables fine-grained access control through IAM policies.

C) Encrypting credentials in source code does not solve the fundamental problem. The encryption key must also be managed, creating a circular dependency. Encrypted credentials in code still require code changes for rotation and can be decrypted by anyone with access to the code and key.

D) While S3 can store encrypted credentials, it is not purpose-built for secrets management. This approach lacks automatic rotation, requires custom code for credential retrieval, does not integrate with databases for automatic updates, and adds unnecessary complexity compared to Secrets Manager.

Question 35

A security team needs to identify all publicly accessible S3 buckets across multiple AWS accounts. Which AWS service provides this capability?

A) Amazon Macie

B) AWS Config

C) Amazon S3 Inventory

D) AWS Trusted Advisor

Answer: A

Explanation:

Identifying publicly accessible S3 buckets is critical for preventing data breaches and unauthorized data exposure. Organizations with multiple AWS accounts and numerous S3 buckets need automated tools to continuously monitor bucket permissions and identify potential security issues. Public buckets represent significant security risks, especially when they contain sensitive data.

Amazon Macie automatically discovers and continuously monitors S3 buckets across AWS accounts within an organization. Macie evaluates bucket policies, ACLs, and block public access settings to determine if buckets are publicly accessible. It generates findings for buckets that allow public access, including detailed information about the access permissions and the potential risk level.

Macie provides a dashboard showing inventory of all S3 buckets with their encryption status, public accessibility, and sharing status. Security teams can quickly identify all publicly accessible buckets and take remediation action. Macie also identifies buckets shared with external AWS accounts, which may represent intentional but risky configurations requiring review. The findings integrate with Security Hub for centralized security monitoring.

A) This is the correct answer because Macie automatically discovers all S3 buckets across multiple accounts, continuously monitors bucket permissions and access controls, identifies publicly accessible buckets with detailed findings, and provides comprehensive visibility into bucket security posture organization-wide.

B) AWS Config can monitor S3 bucket configurations using managed rules that check for public access. However, Config requires manual rule configuration for each account, does not provide the same level of automated discovery and classification that Macie offers, and focuses on compliance checking rather than comprehensive bucket security analysis.

C) S3 Inventory generates lists of objects within buckets for storage management and analytics purposes. Inventory reports do not analyze bucket-level permissions or identify publicly accessible buckets. Inventory is a storage management tool rather than a security analysis service.

D) Trusted Advisor provides recommendations for cost optimization, performance, security, and fault tolerance. While Trusted Advisor includes checks for S3 bucket permissions in some support tiers, it provides less comprehensive and continuous monitoring than Macie and is not specifically designed for detailed S3 security analysis.

Question 36

An application running on EC2 instances in a private subnet needs to access an S3 bucket without traversing the internet. Which solution meets this requirement?

A) Configure a NAT gateway for the private subnet

B) Create a VPC endpoint for S3 and update route tables

C) Use VPC peering to connect to S3

D) Enable S3 transfer acceleration

Answer: B

Explanation:

Applications in private subnets typically cannot access AWS services directly because private subnets lack internet gateway routes. Traditional solutions involve NAT gateways or NAT instances to provide internet access, but this approach routes traffic outside the VPC to the internet and back to AWS services. VPC endpoints provide a superior solution by enabling private connectivity to AWS services without internet traversal.

S3 VPC endpoints are gateway endpoints that add routes to VPC route tables directing S3 traffic to the endpoint rather than through internet gateways or NAT devices. Traffic between EC2 instances and S3 flows through the AWS private network without leaving the Amazon network infrastructure. This provides security benefits by avoiding internet exposure and performance benefits by reducing latency and data transfer costs.

When creating an S3 VPC endpoint, you specify which route tables should receive routes to the endpoint. Subnets associated with those route tables automatically gain private connectivity to S3. The endpoint can be configured with policies that control which S3 buckets can be accessed through the endpoint, providing additional security controls. Applications require no code changes and continue using standard S3 APIs.
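
For illustration, creating the gateway endpoint with boto3 (the VPC ID, Region, and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3; the listed route tables gain an S3 route automatically.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",  # adjust to your Region
    RouteTableIds=["rtb-0123456789abcdef0"],   # private subnet route table(s)
)
```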

A) NAT gateways provide internet access for private subnet resources, allowing them to reach S3 via the internet. However, this approach routes traffic outside the VPC to the internet, incurs NAT gateway costs and data transfer charges, and does not meet the requirement to avoid internet traversal.

B) This is the correct answer because S3 VPC endpoints enable private connectivity from private subnets to S3 without internet access, route traffic through AWS private network infrastructure, require no application changes, and can enforce additional security through endpoint policies.

C) VPC peering connects two VPCs for private communication between them. S3 is not hosted in a VPC and cannot be reached through VPC peering. Peering is designed for VPC-to-VPC connectivity, not for accessing AWS services like S3.

D) S3 Transfer Acceleration speeds up uploads to S3 by routing data through CloudFront edge locations. Transfer Acceleration still requires internet connectivity and does not provide private connectivity from VPCs. It addresses upload performance rather than private network access requirements.

Question 37

A company must comply with regulations requiring that all cryptographic keys be generated and stored in hardware security modules (HSMs). Which AWS service meets this requirement?

A) AWS Key Management Service with customer managed keys

B) AWS CloudHSM

C) AWS Secrets Manager

D) AWS Certificate Manager

Answer: B

Explanation:

Certain compliance frameworks and regulations such as PCI DSS and FIPS 140-2 Level 3 require that cryptographic keys be generated, stored, and processed within certified hardware security modules. HSMs are tamper-resistant hardware devices designed specifically for secure cryptographic operations. AWS CloudHSM provides dedicated HSM instances that meet these stringent compliance requirements.

AWS CloudHSM provides single-tenant HSM instances running in your VPC. The HSMs are FIPS 140-2 Level 3 certified devices where customers have exclusive control over key generation, storage, and usage. CloudHSM clusters provide high availability with automatic synchronization across multiple HSMs in different availability zones. Customers control all aspects of key management including key creation, backup, and destruction.

Applications integrate with CloudHSM using industry-standard APIs including PKCS#11, the Java Cryptography Extension (JCE), and Microsoft CNG (Cryptography API: Next Generation). The HSM generates cryptographic keys within the hardware boundary and never exports them in plaintext. All cryptographic operations occur within the HSM, ensuring keys remain protected by hardware security boundaries throughout their lifecycle.

A) AWS KMS customer managed keys are backed by HSMs that AWS operates and manages. Although KMS uses FIPS 140-2 validated HSMs, customers do not have exclusive control over them, and the service is multi-tenant. Some compliance requirements mandate single-tenant HSMs under direct customer control, which KMS does not provide.

B) This is the correct answer because CloudHSM provides FIPS 140-2 Level 3 certified single-tenant HSMs under customer control, generates and stores keys within hardware security boundaries, meets stringent compliance requirements for HSM-based key management, and provides industry-standard APIs for application integration.

C) Secrets Manager stores application secrets and credentials but does not provide HSM-backed storage or key generation. Secrets Manager uses KMS for encryption, which provides HSM-backed encryption but not the single-tenant HSM control that strict compliance frameworks require.

D) AWS Certificate Manager provisions and manages SSL/TLS certificates for use with AWS services. While ACM stores private keys securely, it does not provide customer-controlled HSMs or meet requirements for generating and storing cryptographic keys in dedicated HSMs under customer control.

Question 38

A security engineer needs to analyze network traffic to and from EC2 instances to detect data exfiltration attempts. The solution must capture all traffic details including payload content. Which approach provides this capability?

A) Enable VPC Flow Logs with maximum detail level

B) Deploy traffic mirroring to capture packets and send to analysis tools

C) Use Amazon GuardDuty to monitor network traffic

D) Enable CloudTrail data events for network traffic

Answer: B

Explanation:

Deep network traffic analysis for security purposes often requires examining packet payloads to detect patterns indicative of data exfiltration, malware communication, or unauthorized access attempts. VPC Flow Logs capture metadata about network traffic but do not include packet payloads. Organizations needing payload-level analysis require packet capture capabilities similar to traditional network monitoring tools.

VPC Traffic Mirroring captures network packets from elastic network interfaces and sends them to monitoring and analysis tools. Traffic Mirroring creates exact copies of network traffic including full packet payloads, which can be sent to security appliances, intrusion detection systems, or packet analysis tools running on EC2 instances or network load balancers. This provides the same capabilities as traditional network taps or SPAN ports in physical networks.

Traffic Mirroring can be configured with filters to capture specific traffic based on source and destination addresses, protocols, or ports. Organizations typically mirror traffic to specialized security tools that perform deep packet inspection, protocol analysis, and behavioral analysis to detect threats. These tools can examine packet contents for sensitive data patterns, detect encrypted tunnels used for data exfiltration, or identify command and control communications.
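
A rough sketch of wiring up Traffic Mirroring with boto3; all ENI IDs are placeholders, and the filter simply mirrors all outbound TCP traffic:

```python
import boto3

ec2 = boto3.client("ec2")

# Target: an analysis appliance's ENI (an NLB can also be used as a target).
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0aaaabbbbccccdddd"  # placeholder appliance ENI
)["TrafficMirrorTarget"]

# Filter: mirror all outbound TCP traffic from the monitored instance.
mirror_filter = ec2.create_traffic_mirror_filter()["TrafficMirrorFilter"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    TrafficDirection="egress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Session: attach the filter and target to the monitored instance's ENI.
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0111122223333aaaa",  # ENI of the instance under investigation
    TrafficMirrorTargetId=target["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilterId"],
    SessionNumber=1,
)
```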

A) VPC Flow Logs capture metadata including source and destination IP addresses, ports, protocols, packet counts, and byte counts. However, Flow Logs do not capture packet payloads or application-layer data, which is required for detecting data exfiltration patterns in traffic content. Flow Logs are insufficient for payload analysis.

B) This is the correct answer because Traffic Mirroring captures complete network packets including payloads, sends traffic to security analysis tools for deep packet inspection, enables detection of data exfiltration in packet contents, and supports filtering to focus on specific traffic of interest.

C) GuardDuty analyzes VPC Flow Logs, CloudTrail logs, and DNS logs using machine learning to detect threats. While GuardDuty can identify some network-based threats, it does not capture packet payloads and cannot perform deep payload inspection. GuardDuty analyzes metadata rather than traffic content.

D) CloudTrail logs AWS API calls and management events but does not capture network traffic. CloudTrail is designed for audit logging of AWS service usage, not for network traffic analysis or packet capture. This option is completely unsuitable for the stated requirement.

Question 39

An organization requires that all API calls to modify IAM policies be approved by a security team member before execution. Which approach implements this requirement?

A) Enable AWS CloudTrail and manually review all IAM changes

B) Use AWS Step Functions to create an approval workflow with SNS notifications

C) Implement Service Control Policies to prevent IAM changes

D) Use AWS Config rules to detect unauthorized IAM changes

Answer: B

Explanation:

Implementing approval workflows for sensitive operations provides an additional control layer that prevents unauthorized or accidental changes to critical security configurations. IAM policy modifications can significantly impact security posture, making them ideal candidates for approval workflows. While AWS does not provide built-in approval mechanisms for IAM changes, organizations can build approval workflows using AWS services.

AWS Step Functions orchestrates multi-step workflows that can include human approval steps. For IAM policy change approvals, an API Gateway endpoint or Lambda function intercepts IAM modification requests and initiates a Step Functions workflow. The workflow sends approval requests via SNS or SES to security team members and waits for their response. Only after approval does the workflow proceed to execute the actual IAM modification using Lambda functions with appropriate permissions.

The approval workflow can include timeouts, automatic denial for certain types of changes, integration with ticketing systems, and audit logging of approval decisions. Step Functions maintains state throughout the approval process and can handle multiple approvers, escalations, and conditional logic based on the type of change requested. This approach provides flexible, auditable approval processes without requiring significant custom development.
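
A sketch of such a workflow using the Step Functions task-token callback pattern; all ARNs are placeholders, and the Lambda function that applies the approved change is hypothetical:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Publish a task token to the security team's SNS topic and pause until someone
# calls SendTaskSuccess (approve) or SendTaskFailure (reject) with that token.
definition = {
    "StartAt": "RequestApproval",
    "States": {
        "RequestApproval": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish.waitForTaskToken",
            "Parameters": {
                "TopicArn": "arn:aws:sns:us-east-1:111122223333:iam-change-approvals",  # placeholder
                "Message": {
                    "requestedChange.$": "$.requestedChange",
                    "taskToken.$": "$$.Task.Token",
                },
            },
            "Next": "ApplyIamChange",
        },
        "ApplyIamChange": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:apply-iam-change",  # placeholder
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="IamChangeApproval",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsApprovalRole",  # placeholder
)
```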

A) CloudTrail logging provides detective controls by capturing IAM changes after they occur. Manual review of CloudTrail logs does not prevent unauthorized changes or implement an approval requirement. This approach is reactive rather than preventive and does not meet the requirement for pre-approval.

B) This is the correct answer because Step Functions can implement approval workflows that block IAM changes until approved, SNS enables notification and response collection from approvers, the workflow provides auditability of approval decisions, and Lambda functions execute approved changes programmatically.

C) Service Control Policies can restrict IAM changes but provide binary allow/deny controls rather than approval workflows. SCPs could completely block IAM changes or allow them, but cannot implement conditional approval based on security team review. This option prevents changes entirely rather than requiring approval.

D) AWS Config rules detect IAM changes after they occur and can trigger remediation, but Config does not provide pre-change approval capabilities. Config is a detective control that identifies non-compliant states rather than a preventive control that blocks changes pending approval.

Question 40

A company must ensure that data stored in Amazon RDS databases is encrypted at rest and that encryption keys are rotated every 90 days. Which solution meets these requirements?

A) Enable RDS encryption with AWS managed KMS keys

B) Enable RDS encryption with customer managed KMS keys and enable automatic rotation

C) Use client-side encryption for all database data

D) Enable RDS encryption with customer managed KMS keys and rotate manually every 90 days

Answer: B

Explanation:

RDS encryption at rest protects database storage, automated backups, read replicas, and snapshots using KMS keys. While RDS supports multiple key options, customer managed KMS keys provide the control and flexibility needed to meet specific rotation requirements. Automatic key rotation in KMS simplifies compliance with rotation policies while maintaining operational continuity.

Customer managed KMS keys support automatic rotation, which occurs annually by default. To meet a 90-day requirement, the rotation period can be set when enabling rotation (KMS supports a configurable rotation period on customer managed keys), or organizations can run their own rotation schedule with Lambda functions triggered by EventBridge rules. In either case, rotation creates new key material without requiring database downtime or re-encryption of existing data, because previous key versions are retained for decryption.

When automatic rotation is enabled, KMS rotates the backing cryptographic material on the configured schedule while keeping the same key ID and ARN. All new encryption operations use the new key material, while KMS retains previous key versions to decrypt data encrypted with older material. Applications and RDS configurations continue using the same key ID, making rotation transparent to operations.
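
A sketch combining these pieces with boto3; the 90-day schedule relies on KMS's configurable rotation-period support, and the RDS parameters are placeholders:

```python
import boto3

kms = boto3.client("kms")
rds = boto3.client("rds")

# Customer managed key with automatic rotation every 90 days.
key_id = kms.create_key(Description="RDS encryption key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id, RotationPeriodInDays=90)

# New RDS instances reference the key; encryption must be set at creation time.
rds.create_db_instance(
    DBInstanceIdentifier="orders-db",   # placeholder
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="dbadmin",
    ManageMasterUserPassword=True,      # let Secrets Manager manage the password
    StorageEncrypted=True,
    KmsKeyId=key_id,
)
```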

A) AWS managed KMS keys rotate automatically on a schedule that AWS controls (approximately once per year), not every 90 days. Organizations cannot modify the rotation schedule for AWS managed keys or trigger rotation manually. This option fails to meet the 90-day rotation requirement and provides no control over rotation timing.

B) This is the correct answer because customer managed KMS keys enable organizations to control rotation schedules, automatic rotation can be configured for required intervals, RDS encryption uses KMS keys transparently without requiring application changes, and rotation maintains access to data encrypted with previous key versions.

C) Client-side encryption requires applications to encrypt data before sending to the database and decrypt after retrieval. This adds significant application complexity, does not leverage RDS native encryption, and requires custom key management and rotation logic, making it unnecessarily complex compared to RDS encryption with KMS.

D) Manual rotation every 90 days is operationally intensive, error-prone, and risks human errors that could cause data access issues. Manual rotation requires documentation, scheduled procedures, and verification steps. Automatic rotation eliminates these operational burdens and ensures consistent rotation compliance.
