Question 41
A developer is building a serverless application using AWS Lambda that processes files uploaded to an S3 bucket. The Lambda function occasionally times out when processing large files. What is the maximum timeout value that can be configured for a Lambda function?
A) 5 minutes
B) 10 minutes
C) 15 minutes
D) 30 minutes
Answer: C
Explanation:
The maximum timeout value that can be configured for an AWS Lambda function is 15 minutes (900 seconds). This is the absolute maximum execution time allowed for any Lambda function regardless of trigger source or configuration. When processing large files or performing time-intensive operations, developers must ensure operations complete within this 15-minute window or redesign architectures to handle longer processing times. For operations exceeding 15 minutes, alternative approaches include breaking processing into smaller chunks, using Step Functions for workflow orchestration, or moving long-running tasks to ECS, Fargate, or EC2 instances.
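For illustration, the timeout can be raised to this ceiling with the AWS CLI or an SDK. A minimal boto3 sketch (the function name is a placeholder):

    import boto3

    lambda_client = boto3.client("lambda")

    # Set the timeout to the service maximum of 900 seconds (15 minutes);
    # values above 900 are rejected by the API.
    lambda_client.update_function_configuration(
        FunctionName="process-large-files",  # hypothetical function name
        Timeout=900,
    )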
Option A is incorrect because 5 minutes is not the maximum timeout limit for Lambda functions. While 5 minutes might be a reasonable timeout for many use cases and was historically a limit in early Lambda versions, the current maximum is 15 minutes. Setting a 5-minute timeout when processing requires more time would cause unnecessary timeouts. Developers should configure timeouts based on actual processing requirements up to the 15-minute maximum.
Option B is incorrect because 10 minutes is also not the maximum Lambda timeout limit, though it represents a substantial execution window suitable for many processing tasks. AWS extended the Lambda timeout maximum to 15 minutes to accommodate longer-running workloads while maintaining the serverless execution model. Organizations with processes requiring 10-15 minutes can leverage Lambda within the actual 15-minute limit rather than being constrained to 10 minutes.
Option D is incorrect because Lambda functions cannot execute for 30 minutes. The hard limit of 15 minutes is an architectural constraint of the Lambda service designed to keep functions ephemeral and stateless. For operations requiring 30 minutes or more, AWS recommends using services designed for longer-running workloads such as ECS containers, Fargate tasks, EC2 instances, or Step Functions orchestrating multiple shorter Lambda invocations.
Question 42
A developer needs to implement cross-origin resource sharing (CORS) for an API hosted on API Gateway. Which HTTP response header must be included to allow requests from a specific origin?
A) Access-Control-Allow-Methods
B) Access-Control-Allow-Headers
C) Access-Control-Allow-Origin
D) Access-Control-Max-Age
Answer: C
Explanation:
The Access-Control-Allow-Origin header must be included in HTTP responses to allow cross-origin requests from specific origins. This header specifies which origins are permitted to access resources, either by listing a specific domain (like https://example.com) or using the wildcard (*) for public APIs. Browsers enforce the same-origin policy by default, blocking cross-origin requests unless the server explicitly grants permission through this header. API Gateway can be configured to return appropriate CORS headers either through API Gateway’s CORS configuration or by including headers in backend responses. Proper CORS configuration is essential for web applications making API calls from different domains.
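For a Lambda proxy integration, the backend response itself must carry the header. A minimal handler sketch granting access to one trusted origin (the origin value is an example):

    def handler(event, context):
        # Without a matching Access-Control-Allow-Origin header, the browser
        # blocks the cross-origin response before the caller ever sees it.
        return {
            "statusCode": 200,
            "headers": {"Access-Control-Allow-Origin": "https://example.com"},
            "body": '{"message": "ok"}',
        }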
Option A is incorrect because Access-Control-Allow-Methods specifies which HTTP methods (GET, POST, PUT, DELETE, etc.) are allowed for cross-origin requests, not which origins can make requests. While this header is part of complete CORS configuration, it doesn’t grant cross-origin access by itself. Without Access-Control-Allow-Origin, browsers block requests regardless of method permissions. Allow-Methods controls what operations permitted origins can perform but doesn’t establish which origins have access.
Option B is incorrect because Access-Control-Allow-Headers specifies which HTTP headers can be included in cross-origin requests, such as custom headers or Content-Type. This header is used in preflight responses to inform browsers which headers are acceptable in actual requests. Like Allow-Methods, this is part of comprehensive CORS configuration but doesn’t authorize origins to make requests. Without Allow-Origin, browsers reject requests before checking allowed headers.
Option D is incorrect because Access-Control-Max-Age indicates how long browsers can cache preflight request results, reducing the number of preflight requests for repeated cross-origin calls. This header optimizes performance by caching CORS permissions but doesn’t grant initial access. Max-Age is an optional optimization header that works in conjunction with other CORS headers but cannot substitute for Access-Control-Allow-Origin in establishing cross-origin permissions.
Question 43
A developer is implementing a DynamoDB table for a gaming application that needs to track high scores. Which DynamoDB feature should be used to automatically remove old score records after 30 days?
A) DynamoDB Streams
B) Time To Live (TTL)
C) Global Secondary Index (GSI)
D) Conditional writes
Answer: B
Explanation:
Time To Live (TTL) is the DynamoDB feature that automatically removes old records after specified time periods without consuming write capacity or requiring manual deletion logic. TTL works by defining an attribute containing Unix epoch timestamps indicating when items should expire. DynamoDB automatically scans for expired items and deletes them in the background, typically within 48 hours of expiration time. For the gaming application, each score record would include a TTL attribute set to current timestamp plus 30 days (2,592,000 seconds). TTL provides cost-effective automatic data lifecycle management without application logic or scheduled jobs.
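A sketch of writing a score that expires in 30 days, assuming a hypothetical HighScores table with TTL enabled on an attribute named expires_at:

    import time
    import boto3

    table = boto3.resource("dynamodb").Table("HighScores")  # placeholder table name

    # TTL attributes must contain a Unix epoch timestamp in seconds.
    expires_at = int(time.time()) + 30 * 24 * 60 * 60  # now + 2,592,000 seconds

    table.put_item(
        Item={
            "player_id": "player-123",
            "score": 9800,
            "expires_at": expires_at,  # DynamoDB deletes the item after this time
        }
    )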
Option A is incorrect because DynamoDB Streams captures item-level changes (inserts, updates, deletes) and makes them available for processing by Lambda or other consumers, but doesn’t automatically delete items. Streams enable event-driven architectures and data replication but require custom logic to implement deletion behavior. Using Streams for time-based deletion would require Lambda functions monitoring item ages and issuing delete operations, consuming write capacity and adding complexity that TTL handles natively.
Option C is incorrect because Global Secondary Indexes provide alternative query access patterns with different partition and sort keys but don’t delete data. GSIs enable querying data by attributes other than the primary key but are querying mechanisms, not data lifecycle management tools. While a GSI could help identify old records for deletion, it doesn’t perform automatic deletion. GSIs and TTL serve completely different purposes, with GSIs optimizing query patterns and TTL managing data retention.
Option D is incorrect because conditional writes enable atomic operations with conditions that must be met for writes to succeed, providing optimistic locking and preventing race conditions. Conditional writes ensure write operations only occur when specified conditions are true but don’t implement time-based automatic deletion. Using conditional writes would still require application logic or scheduled jobs to check timestamps and delete old records, consuming write capacity that TTL-based deletion doesn’t require.
Question 44
A developer needs to deploy an application that requires specific environment variables for different deployment stages (dev, test, prod). Which AWS service provides centralized management of configuration and secrets?
A) AWS Systems Manager Parameter Store
B) Amazon S3
C) Amazon DynamoDB
D) AWS CloudFormation only
Answer: A
Explanation:
AWS Systems Manager Parameter Store provides centralized management of configuration data and secrets with support for encryption, versioning, and hierarchical organization. Parameter Store enables storing configuration values, database connection strings, API keys, and other settings as parameters that applications retrieve at runtime. Parameters can be organized hierarchically (like /dev/database/connection-string) enabling environment-specific configurations. Parameter Store integrates with IAM for access control, KMS for encryption of sensitive values, and CloudFormation for infrastructure as code deployments. Both standard (free, up to 10,000 parameters) and advanced (higher throughput, larger values, parameter policies) tiers are available.
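A minimal boto3 sketch of runtime retrieval, assuming parameters follow the hierarchical naming shown above:

    import boto3

    ssm = boto3.client("ssm")

    # Fetch a single environment-specific value, decrypting a SecureString.
    param = ssm.get_parameter(
        Name="/dev/database/connection-string",
        WithDecryption=True,
    )
    connection_string = param["Parameter"]["Value"]

    # Or pull an entire environment's hierarchy in one call.
    dev_params = ssm.get_parameters_by_path(
        Path="/dev/", Recursive=True, WithDecryption=True
    )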
Option B is incorrect because while Amazon S3 can store configuration files, it’s not designed specifically for configuration management and lacks features like native encryption of individual values, built-in versioning suitable for configurations, and seamless integration with application runtimes. S3 requires applications to download and parse configuration files rather than retrieving individual parameters via API. S3 doesn’t provide the hierarchical parameter organization, audit logging, or change notifications that Parameter Store offers for configuration management use cases.
Option C is incorrect because Amazon DynamoDB is a NoSQL database service that could technically store configuration data but isn’t purpose-built for configuration management. Using DynamoDB requires creating custom schemas, managing access patterns, and implementing encryption separately. DynamoDB lacks Parameter Store’s configuration-specific features like hierarchical paths, native integration with CloudFormation and Lambda, and simplified secrets management. While DynamoDB works for many data storage needs, specialized configuration management services provide better solutions.
Option D is incorrect because while AWS CloudFormation can manage infrastructure and pass parameters during deployment, it’s an infrastructure as code service rather than a runtime configuration management solution. CloudFormation defines infrastructure resources but doesn’t provide centralized storage for application configuration values that change independently of infrastructure. Applications need runtime access to configuration values, which CloudFormation alone doesn’t provide. Parameter Store integrates with CloudFormation but serves the distinct purpose of runtime configuration management.
Question 45
A developer is building a REST API using API Gateway and Lambda. The API should return different responses based on whether the request comes from a mobile app or web browser. Which API Gateway feature enables this functionality?
A) Request validation
B) Mapping templates
C) Authorizers
D) API keys
Answer: B
Explanation:
Mapping templates in API Gateway enable transformation of request and response data, including conditional logic based on request headers like User-Agent to determine client type. Mapping templates use Velocity Template Language (VTL) to access request context including headers, query parameters, and body. Templates can implement conditional logic examining User-Agent headers to identify mobile apps versus browsers and transform responses accordingly. This allows returning different data formats, structures, or content based on client characteristics without modifying backend Lambda code. Mapping templates provide integration layer flexibility for client-specific customization.
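As a hedged sketch, a response mapping template that branches on the caller's User-Agent might be attached with boto3 as follows; the IDs are placeholders and the VTL is illustrative only (exact template behavior depends on the integration type):

    import boto3

    apigw = boto3.client("apigateway")

    # VTL: $context.identity.userAgent exposes the caller's User-Agent header;
    # return a trimmed payload for mobile clients and the full payload otherwise.
    vtl = """
    #if($context.identity.userAgent.contains("Mobile"))
      {"view": "mobile", "items": $input.json('$.items')}
    #else
      {"view": "web", "payload": $input.json('$')}
    #end
    """

    apigw.put_integration_response(
        restApiId="abc123",   # placeholder API ID
        resourceId="res456",  # placeholder resource ID
        httpMethod="GET",
        statusCode="200",
        responseTemplates={"application/json": vtl},
    )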
Option A is incorrect because request validation ensures incoming requests match defined schemas for required parameters, headers, and body structure but doesn’t enable conditional response transformation based on client type. Validation rejects malformed requests before they reach backend integrations but doesn’t modify responses based on request characteristics. While validation improves API robustness, it serves quality control rather than client-specific response customization purposes.
Option C is incorrect because authorizers (Lambda authorizers or Cognito authorizers) authenticate and authorize API requests, controlling access based on identity and permissions. Authorizers determine whether requests should be allowed or denied but don’t transform responses based on client characteristics. While authorizers can pass context to backend integrations for personalization, they don’t directly implement client-type-based response variations. Authorization and response transformation serve different purposes in API architecture.
Option D is incorrect because API keys identify clients for usage tracking, throttling, and quota management but don’t enable conditional response transformation. API keys provide client identification and access control but don’t distinguish between mobile apps and browsers or modify responses accordingly. Keys might associate with usage plans having different rate limits but don’t affect response content or structure. Client-type-based response customization requires mapping templates or backend logic, not API keys.
Question 46
A developer needs to implement a solution where multiple Lambda functions process messages from an SQS queue in parallel. What is the maximum number of concurrent Lambda invocations that can process messages from a standard SQS queue?
A) 10 concurrent invocations
B) 100 concurrent invocations
C) 1,000 concurrent invocations
D) Unlimited scaling based on queue depth
Answer: C
Explanation:
When Lambda polls a standard SQS queue, it can scale up to 1,000 concurrent function invocations processing messages in parallel. Lambda automatically scales polling behavior, starting with 5 concurrent invocations and scaling up by 60 additional instances per minute until reaching the 1,000 concurrent execution limit or clearing the queue. This scaling behavior enables high-throughput message processing while respecting Lambda concurrency limits. For workloads requiring more than 1,000 concurrent invocations, developers can request concurrency limit increases from AWS or implement additional queues with separate Lambda functions.
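A sketch of creating the event source mapping with boto3; the optional ScalingConfig (supported in recent SDK versions) caps concurrency below the 1,000 maximum when needed (ARNs and names are placeholders):

    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:us-east-1:123456789012:orders-queue",
        FunctionName="process-orders",
        BatchSize=10,
        # Optional cap on concurrent invocations for this queue; without it,
        # Lambda scales toward the 1,000-invocation ceiling as queue depth grows.
        ScalingConfig={"MaximumConcurrency": 100},
    )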
Option A is incorrect because 10 concurrent invocations would severely limit throughput for queues with high message volumes. While Lambda might start with small concurrency and scale gradually, the maximum of 10 would create processing bottlenecks unable to handle production workloads. AWS designs Lambda-SQS integration to support significant parallel processing far exceeding 10 concurrent invocations. Actual scaling reaches much higher limits to handle real-world queue processing requirements.
Option B is incorrect because 100 concurrent invocations, while substantial, is not the maximum limit for Lambda processing SQS messages. Lambda’s scaling mechanism for SQS event sources supports significantly higher concurrency to handle large message volumes. Organizations with high-throughput requirements would find 100 concurrent invocations insufficient for rapid queue processing. The actual 1,000 concurrent limit provides order-of-magnitude more processing capacity.
Option D is incorrect because Lambda concurrency is not unlimited and has explicit limits. The standard account concurrency limit is 1,000 concurrent executions across all functions, with SQS-triggered Lambda scaling up to this limit. Reserved concurrency can allocate guaranteed capacity to specific functions but doesn’t provide unlimited scaling. While AWS can increase limits upon request, default behavior has defined maximums. Unlimited scaling without limits would risk runaway costs and resource exhaustion.
Question 47
A developer is implementing a microservices architecture where services need to discover and communicate with each other. Which AWS service provides service discovery for container-based applications?
A) AWS Cloud Map
B) Amazon S3
C) Amazon DynamoDB
D) AWS Lambda
Answer: A
Explanation:
AWS Cloud Map provides service discovery for cloud resources including container-based applications, enabling services to discover each other using DNS queries or API calls. Cloud Map maintains a registry of service instances with health status, metadata, and connection information. As services scale up or down, Cloud Map automatically updates the registry, ensuring clients always connect to healthy instances. Cloud Map integrates with ECS, EKS, and other compute services, supporting both DNS-based discovery (for traditional applications) and API-based discovery (for modern cloud-native applications). Health checks ensure only healthy instances receive traffic.
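A sketch of API-based discovery with boto3, assuming a hypothetical namespace and service already registered in Cloud Map:

    import boto3

    sd = boto3.client("servicediscovery")

    # Ask Cloud Map for healthy instances of the "orders" service.
    response = sd.discover_instances(
        NamespaceName="internal",  # placeholder namespace
        ServiceName="orders",      # placeholder service
        HealthStatus="HEALTHY",
    )

    for instance in response["Instances"]:
        # AWS_INSTANCE_IPV4 is a standard registration attribute.
        print(instance["Attributes"].get("AWS_INSTANCE_IPV4"))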
Option B is incorrect because Amazon S3 is object storage for files and data, not a service discovery mechanism. While S3 could theoretically store service endpoint configurations, it lacks dynamic updates, health checking, and automatic registration that service discovery requires. S3 doesn’t provide DNS integration or APIs specifically designed for service discovery patterns. Using S3 for service discovery would require significant custom implementation without the benefits of purpose-built service discovery solutions.
Option C is incorrect because Amazon DynamoDB is a NoSQL database that could store service endpoint information but isn’t designed for service discovery. DynamoDB lacks health checking, automatic DNS integration, and service discovery-specific APIs. Implementing service discovery with DynamoDB requires custom code for registration, deregistration, health checks, and endpoint retrieval. Cloud Map provides these capabilities natively, making it purpose-built for service discovery rather than adapting a general database.
Option D is incorrect because AWS Lambda is a serverless compute service for running code without managing servers, not a service discovery solution. Lambda functions themselves might need service discovery to locate dependencies but don’t provide service discovery for other services. Lambda integrates with Cloud Map or other discovery mechanisms as a consumer, not a provider of discovery functionality. Service discovery requires dedicated infrastructure that Lambda doesn’t provide.
Question 48
A developer needs to implement caching for a read-heavy DynamoDB table to reduce read capacity consumption and improve response times. Which AWS service provides in-memory caching for DynamoDB?
A) Amazon ElastiCache
B) AWS CloudFront
C) Amazon DynamoDB Accelerator (DAX)
D) Amazon S3
Answer: C
Explanation:
Amazon DynamoDB Accelerator (DAX) provides in-memory caching specifically designed for DynamoDB, delivering microsecond response times for cached reads without application code changes. DAX is a fully managed, highly available caching service that sits between applications and DynamoDB tables. DAX caches GetItem and Query results, serving repeated reads from memory rather than DynamoDB. Applications connect to DAX clusters using DynamoDB-compatible APIs, making integration transparent. DAX handles cache invalidation, reduces DynamoDB read capacity consumption, and provides significant performance improvements for read-heavy workloads while requiring minimal code modifications.
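A hedged sketch of the drop-in nature of DAX, assuming the amazondax Python client library and a provisioned cluster endpoint (both placeholders here):

    from amazondax import AmazonDaxClient  # pip install amazondax

    # Construct a resource that speaks the DynamoDB API but routes through DAX.
    dax = AmazonDaxClient.resource(
        endpoint_url="daxs://my-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"
    )
    table = dax.Table("GameScores")  # placeholder table name

    # Repeated GetItem calls are served from the DAX item cache when warm.
    item = table.get_item(Key={"player_id": "player-123"})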
Option A is incorrect because while Amazon ElastiCache (Redis or Memcached) provides general-purpose in-memory caching, it requires significant application code changes to implement caching logic, manage cache keys, handle invalidation, and serialize/deserialize DynamoDB data. ElastiCache doesn’t understand DynamoDB query patterns or automatically cache results. While ElastiCache works for DynamoDB caching, it requires custom integration code that DAX handles transparently. DAX is purpose-built for DynamoDB, providing simpler integration and DynamoDB-specific optimizations.
Option B is incorrect because AWS CloudFront is a content delivery network (CDN) for caching static content and API responses at edge locations globally. CloudFront doesn’t directly cache DynamoDB queries or integrate with DynamoDB tables. CloudFront sits in front of web applications or APIs, caching HTTP responses but not database-level operations. While CloudFront could cache API Gateway responses that query DynamoDB, it’s not a DynamoDB-specific caching solution and operates at a different architectural layer than DAX.
Option D is incorrect because Amazon S3 is object storage for files and data, not an in-memory cache. S3 provides durability and accessibility for stored objects but doesn’t offer microsecond latency or cache DynamoDB queries. Using S3 for caching would require applications to write query results to S3 and check S3 before querying DynamoDB, adding complexity and latency rather than improving performance. S3 serves completely different purposes than in-memory caching solutions.
Question 49
A developer is building a serverless application that needs to execute code in response to changes in a DynamoDB table. Which AWS service captures and processes these changes?
A) AWS CloudTrail
B) Amazon CloudWatch Logs
C) DynamoDB Streams with AWS Lambda
D) Amazon Kinesis Data Firehose
Answer: C
Explanation:
DynamoDB Streams with AWS Lambda captures and processes table changes in near real-time, enabling event-driven serverless architectures. DynamoDB Streams records item-level modifications (inserts, updates, deletes) in time-ordered sequence, maintaining change records for 24 hours. Lambda functions can be triggered by Stream events, receiving batches of change records for processing. This pattern enables use cases like data replication, audit logging, notifications, and derived data updates. Lambda automatically polls Streams, manages checkpoints, and scales to handle throughput, providing seamless integration for change data capture scenarios.
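A minimal handler sketch for a stream-triggered function, assuming the stream view type includes new images:

    def handler(event, context):
        # Lambda delivers batches of time-ordered change records.
        for record in event["Records"]:
            event_name = record["eventName"]   # INSERT | MODIFY | REMOVE
            keys = record["dynamodb"]["Keys"]  # values use DynamoDB's typed JSON
            if event_name == "INSERT":
                new_image = record["dynamodb"].get("NewImage", {})
                # React to the new item here (replicate, notify, derive data, ...)
                print(f"New item {keys}: {new_image}")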
Option A is incorrect because AWS CloudTrail logs API calls to AWS services for auditing and compliance but doesn’t capture item-level data changes within DynamoDB tables. CloudTrail records control plane operations like CreateTable or DeleteTable but not data plane operations like PutItem or UpdateItem. CloudTrail provides audit trails for governance but doesn’t enable real-time processing of table data changes. DynamoDB Streams specifically captures item-level modifications that CloudTrail doesn’t record.
Option B is incorrect because Amazon CloudWatch Logs stores and monitors log data from applications and services but doesn’t automatically capture DynamoDB table changes. CloudWatch Logs receives application-generated log entries but doesn’t integrate directly with DynamoDB to stream table modifications. While applications could log DynamoDB operations to CloudWatch Logs, this requires custom instrumentation and doesn’t provide the structured change data capture that Streams offers. Logs serve monitoring purposes rather than change data processing.
Option D is incorrect because Amazon Kinesis Data Firehose loads streaming data into data stores like S3, Redshift, or Elasticsearch but doesn’t directly capture DynamoDB changes. Firehose processes streaming data but doesn’t integrate natively with DynamoDB tables to capture modifications. While DynamoDB Streams could feed into Kinesis for further processing, Firehose alone doesn’t capture table changes. DynamoDB Streams with Lambda provides the direct integration needed for change data capture and processing.
Question 50
A developer needs to implement authentication for a mobile application that should allow users to sign in using social identity providers like Facebook or Google. Which AWS service provides this functionality?
A) AWS IAM users
B) Amazon Cognito
C) AWS STS
D) AWS Directory Service
Answer: B
Explanation:
Amazon Cognito provides authentication, authorization, and user management for mobile and web applications, including social identity provider integration. Cognito supports sign-in through Facebook, Google, Amazon, and Apple, allowing users to authenticate with existing social accounts. Cognito handles OAuth flows, token exchange, and user profile synchronization from social providers. After authentication, Cognito provides JWT tokens for API access and can federate with IAM to grant temporary AWS credentials for accessing AWS resources. Cognito User Pools manage user directories while Identity Pools provide AWS credential federation.
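As a sketch of the Identity Pool half of this flow, a token already issued by a social provider can be exchanged for temporary AWS credentials; the pool ID is a placeholder, and facebook_access_token stands in for a token obtained via the provider's OAuth flow:

    import boto3

    cognito = boto3.client("cognito-identity")

    facebook_access_token = "..."  # placeholder: token from Facebook login

    # Map the social login to a Cognito identity...
    identity = cognito.get_id(
        IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
        Logins={"graph.facebook.com": facebook_access_token},
    )

    # ...then obtain temporary AWS credentials for that identity.
    creds = cognito.get_credentials_for_identity(
        IdentityId=identity["IdentityId"],
        Logins={"graph.facebook.com": facebook_access_token},
    )["Credentials"]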
Option A is incorrect because AWS IAM users are for managing access to AWS services and resources, not for authenticating application end users. IAM users are designed for administrators, developers, and systems accessing AWS APIs, not for customer-facing application authentication. Creating individual IAM users for application users would violate AWS best practices and create management nightmares. IAM doesn’t support social identity provider integration for application authentication. Cognito specifically addresses application user authentication that IAM isn’t designed for.
Option C is incorrect because AWS Security Token Service (STS) provides temporary security credentials for AWS API access but doesn’t handle user authentication or social identity provider integration. STS is a backend service for credential generation, not a complete authentication solution with user interfaces and provider integration. While Cognito uses STS internally to generate temporary credentials, applications need Cognito’s authentication features, not direct STS integration. STS serves as infrastructure for temporary credentials, not end-user authentication.
Option D is incorrect because AWS Directory Service provides managed Active Directory for enterprise identity management and Windows-based authentication but doesn’t integrate with social identity providers or provide modern web/mobile authentication. Directory Service serves enterprise scenarios requiring Active Directory compatibility, not consumer application authentication. Social identity provider integration requires OAuth-compatible services like Cognito. Directory Service addresses different use cases than consumer mobile app authentication.
Question 51
A developer is implementing a Lambda function that needs to access secrets like database passwords. Which AWS service should be used to securely store and retrieve these secrets?
A) Environment variables in Lambda configuration
B) AWS Secrets Manager
C) Hard-coded in Lambda code
D) Amazon S3 public bucket
Answer: B
Explanation:
AWS Secrets Manager provides secure storage, rotation, and retrieval of secrets like database passwords, API keys, and credentials. Secrets Manager encrypts secrets at rest using KMS, provides audit logging of secret access, and supports automatic rotation for supported databases and services. Lambda functions retrieve secrets at runtime using Secrets Manager APIs, ensuring credentials never appear in code or environment variables. Secrets Manager integrates with RDS and other services for automatic credential rotation without application changes. IAM policies control which Lambda functions can access which secrets, providing granular access control.
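A minimal retrieval sketch (the secret name is a placeholder, and the secret is assumed to be stored as JSON):

    import json
    import boto3

    secrets = boto3.client("secretsmanager")

    # Fetch and parse the secret at runtime; nothing sensitive lives in code.
    response = secrets.get_secret_value(SecretId="prod/app/database")
    secret = json.loads(response["SecretString"])
    db_password = secret["password"]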
Option A is incorrect because while Lambda environment variables can store configuration values, they’re not designed for highly sensitive secrets requiring rotation and enhanced security. Environment variables are visible in the Lambda console to anyone with read access to the function, creating security risks for credentials. Environment variables don’t support automatic rotation or comprehensive audit logging. For sensitive secrets, Secrets Manager provides encryption, rotation, versioning, and access control that environment variables lack.
Option C is incorrect because hard-coding secrets in Lambda code creates severe security vulnerabilities. Hard-coded credentials appear in source code repositories, deployment packages, and potentially logs. Anyone with code access sees credentials, and updating credentials requires code changes and redeployment. Hard-coding violates fundamental security principles and makes credential rotation nearly impossible. Secrets Manager enables dynamic secret retrieval without embedding credentials in code, maintaining security while simplifying rotation.
Option D is incorrect because storing secrets in Amazon S3 public buckets is catastrophically insecure, exposing sensitive credentials to the entire internet. Public S3 buckets allow anonymous access without authentication, making any stored secrets immediately available to attackers. Even private S3 buckets aren’t appropriate for secrets management as they lack encryption designed for secrets, automatic rotation, and audit logging specific to credential access. Secrets Manager is purpose-built for secrets with security features S3 doesn’t provide.
Question 52
A developer needs to implement a solution where API Gateway caches responses to reduce backend load and improve response times. What is the maximum cache Time To Live (TTL) that can be configured in API Gateway?
A) 300 seconds (5 minutes)
B) 1,800 seconds (30 minutes)
C) 3,600 seconds (1 hour)
D) 7,200 seconds (2 hours)
Answer: C
Explanation:
The maximum cache Time To Live (TTL) that can be configured for API Gateway caching is 3,600 seconds (1 hour). Cache TTL determines how long API Gateway stores and returns cached responses before querying the backend again. TTL can be set from 0 to 3,600 seconds, with the default being 300 seconds. Appropriate TTL values balance response freshness with cache effectiveness: longer TTLs reduce backend load but may return stale data, while shorter TTLs ensure fresher data but reduce cache benefits. API Gateway caching reduces latency and backend costs for read-heavy APIs with relatively static responses.
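A sketch of enabling the stage cache at the one-hour maximum via boto3 patch operations (IDs and the cache size are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    apigw.update_stage(
        restApiId="abc123",  # placeholder API ID
        stageName="prod",
        patchOperations=[
            {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
            {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
            # Apply the 3,600-second maximum TTL to all methods on the stage.
            {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
        ],
    )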
Option A is incorrect because 300 seconds (5 minutes) is the default cache TTL for API Gateway, not the maximum. While 300 seconds is reasonable for many use cases balancing freshness and cache effectiveness, APIs with more static content can benefit from longer TTLs up to the maximum 3,600 seconds. The default provides conservative caching but isn’t the upper limit. APIs can configure longer TTLs when appropriate for their data staleness tolerance.
Option B is incorrect because 1,800 seconds (30 minutes) is within the allowed range but not the maximum cache TTL. API Gateway supports cache TTL values up to 3,600 seconds, double this value. While 30 minutes provides substantial caching benefits for relatively static content, the actual maximum of one hour enables even more aggressive caching when appropriate. Understanding the true maximum helps architects design optimal caching strategies.
Option D is incorrect because 7,200 seconds (2 hours) exceeds API Gateway’s maximum cache TTL. Cache durations cannot be configured beyond 3,600 seconds (1 hour). For content requiring longer cache durations, alternative approaches include CloudFront CDN in front of API Gateway (supporting longer TTLs) or application-level caching. The one-hour maximum represents API Gateway’s current limit for balancing caching benefits with content freshness.
Question 53
A developer is building an application that needs to process images uploaded to S3. The processing should start automatically when images are uploaded. Which approach is most appropriate?
A) Poll S3 every minute using Lambda scheduled with CloudWatch Events
B) Configure S3 event notification to trigger Lambda function
C) Manually invoke Lambda after each upload
D) Use EC2 instances checking S3 continuously
Answer: B
Explanation:
Configuring an S3 event notification to trigger a Lambda function provides immediate, automatic, and efficient image processing when uploads occur. S3 event notifications push events to Lambda, SNS, or SQS when specific object operations occur (create, delete, etc.). Lambda receives event metadata including the bucket and object key, enabling immediate processing without polling delays or overhead. This event-driven architecture ensures prompt processing, eliminates polling costs, and scales automatically with upload volume. S3-Lambda integration is purpose-built for processing uploaded objects reactively.
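A sketch of wiring the notification with boto3; the bucket, ARN, and prefix are placeholders, and the function's resource policy must separately allow S3 to invoke it:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_notification_configuration(
        Bucket="my-upload-bucket",
        NotificationConfiguration={
            "LambdaFunctionConfigurations": [
                {
                    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-image",
                    "Events": ["s3:ObjectCreated:*"],
                    # Only objects under images/ trigger the function.
                    "Filter": {
                        "Key": {"FilterRules": [{"Name": "prefix", "Value": "images/"}]}
                    },
                }
            ]
        },
    )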
Option A is incorrect because polling S3 every minute using scheduled Lambda introduces unnecessary delay, costs, and complexity. Polling checks for new objects at fixed intervals regardless of upload activity, consuming Lambda invocations even when no uploads occurred. One-minute intervals mean processing delays up to 60 seconds, while shorter intervals increase polling costs without corresponding value. Event-driven architectures eliminate polling overhead, providing immediate processing when events occur. Polling represents outdated patterns for scenarios where event notifications are available.
Option C is incorrect because manually invoking Lambda after each upload defeats the purpose of automation and doesn’t scale. Manual invocation requires human intervention for every upload, creating operational burden and introducing delays. For applications with thousands of daily uploads, manual invocation is completely impractical. Automated event-driven architectures handle uploads without human involvement, providing scalability and reliability that manual processes cannot achieve. Manual invocation is appropriate only for ad-hoc testing, not production workloads.
Option D is incorrect because using EC2 instances continuously checking S3 introduces significant costs, operational overhead, and complexity without benefits over serverless event-driven approaches. Continuous polling requires running instances 24/7, paying for compute time regardless of upload activity. EC2 requires instance management, scaling configuration, and monitoring. Event-driven Lambda with S3 notifications provides better economics, simpler operations, and automatic scaling. EC2 polling represents over-engineering for scenarios where serverless event notifications suffice.
Question 54
A developer needs to implement an application that requires executing long-running workflows with human approval steps. Which AWS service provides visual workflow orchestration with approval integration?
A) AWS Lambda alone
B) AWS Step Functions
C) Amazon SQS
D) AWS CloudFormation
Answer: B
Explanation:
AWS Step Functions provides visual workflow orchestration with support for long-running processes, human approval steps, and complex state management. Step Functions uses state machines defined in Amazon States Language (ASL) to coordinate multiple AWS services including Lambda, ECS, SNS, and others. For human approvals, Step Functions supports callback patterns where workflows pause awaiting external signals or timeout configurations. Step Functions maintains workflow state, handles retries and error handling, and provides visual monitoring of execution progress. This enables building complex workflows including approval processes without managing orchestration infrastructure.
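A hedged sketch of the approval callback: a state defined with a .waitForTaskToken resource pauses the workflow until an approver's system replies with the task token it was handed:

    import boto3

    sfn = boto3.client("stepfunctions")

    # In the state machine, the approval step would use a callback task, e.g.
    #   "Resource": "arn:aws:states:::lambda:invoke.waitForTaskToken"
    # and pass "TaskToken.$": "$$.Task.Token" along to the approver.

    task_token = "..."  # placeholder: token delivered with the approval request

    # Approve: the paused execution resumes with this output.
    sfn.send_task_success(taskToken=task_token, output='{"approved": true}')

    # Reject instead with:
    # sfn.send_task_failure(taskToken=task_token, error="Rejected", cause="Approver declined")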
Option A is incorrect because AWS Lambda alone is designed for short-duration (up to 15 minutes) function executions and doesn’t provide workflow orchestration or human approval integration. Lambda functions execute code in response to events but don’t maintain long-running workflow state or coordinate multi-step processes with approval gates. While Lambda can be part of Step Functions workflows, Lambda by itself lacks orchestration capabilities. Long-running workflows requiring human input need orchestration services beyond individual Lambda functions.
Option C is incorrect because Amazon SQS is a message queue service for decoupling application components and buffering messages but doesn’t provide workflow orchestration or human approval integration. SQS delivers messages between producers and consumers but doesn’t maintain workflow state, implement approval gates, or visualize process flow. While SQS can be part of workflow architectures for asynchronous communication, it doesn’t orchestrate workflows or manage approval steps. SQS is messaging infrastructure, not workflow orchestration.
Option D is incorrect because AWS CloudFormation is an infrastructure-as-code service for deploying and managing AWS resources, not an application workflow orchestration tool. CloudFormation provisions infrastructure but doesn’t execute application business logic or coordinate runtime workflows with human approvals. CloudFormation handles resource lifecycle management during infrastructure provisioning, which is fundamentally different from the application workflow orchestration that Step Functions provides. CloudFormation operates at the infrastructure layer, not the application workflow layer.
Question 55
A developer needs to implement API throttling for different customer tiers (free, basic, premium) with different rate limits. Which API Gateway feature enables this functionality?
A) Request validation
B) Usage Plans with API Keys
C) Authorizers
D) Mapping templates
Answer: B
Explanation:
Usage Plans with API Keys in API Gateway enable tiered throttling by associating API keys with usage plans that define rate limits and quotas. Usage plans specify throttle rates (requests per second) and quotas (maximum requests per day/week/month) for different customer tiers. Customers receive API keys associated with their subscription tier’s usage plan. API Gateway enforces limits automatically, throttling requests exceeding plan limits with 429 Too Many Requests responses. This provides granular control over API consumption without custom throttling logic, supporting freemium and tiered pricing models.
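A sketch of provisioning one tier with boto3 (IDs, names, and limits are placeholders):

    import boto3

    apigw = boto3.client("apigateway")

    # Define the premium tier: 100 req/s steady rate, burst of 200, 1M requests/month.
    plan = apigw.create_usage_plan(
        name="premium",
        throttle={"rateLimit": 100.0, "burstLimit": 200},
        quota={"limit": 1000000, "period": "MONTH"},
        apiStages=[{"apiId": "abc123", "stage": "prod"}],
    )

    # Issue a key to a customer and bind it to the tier's plan.
    key = apigw.create_api_key(name="customer-42", enabled=True)
    apigw.create_usage_plan_key(
        usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY"
    )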
Option A is incorrect because request validation ensures incoming requests match defined schemas and parameter requirements but doesn’t implement throttling or rate limiting. Validation rejects malformed requests before reaching backend integrations but treats all valid requests equally without considering request source or customer tier. Validation improves API quality and security but doesn’t provide the tiered throttling capabilities that usage plans deliver. Validation and throttling serve different API management purposes.
Option C is incorrect because authorizers (Lambda or Cognito) authenticate and authorize API requests, controlling access based on identity and permissions. While authorizers determine whether requests should be allowed, they don’t inherently implement tiered throttling based on customer subscription levels. Authorizers could pass customer tier information to backend logic for custom throttling implementation, but usage plans provide native throttling without custom code. Authorization and throttling address different aspects of API security and management.
Option D is incorrect because mapping templates transform request and response data using Velocity Template Language (VTL) but don’t implement throttling or rate limiting. Templates modify data structure and format but don’t control request rates or enforce quotas. While templates could theoretically reject requests based on custom logic, that would be an inefficient and incorrect approach compared to usage plans, which are designed specifically for throttling. Mapping templates handle data transformation, not rate limiting.
Question 56
A developer is implementing a microservice that needs to publish events when orders are created. Multiple other microservices need to receive and process these events independently. Which AWS service provides this publish-subscribe messaging pattern?
A) Amazon SQS
B) Amazon SNS
C) AWS Lambda direct invocation
D) Amazon DynamoDB
Answer: B
Explanation:
Amazon SNS (Simple Notification Service) provides publish-subscribe messaging enabling one-to-many message delivery to multiple subscribers independently. SNS topics receive published messages and fan out to all subscribers including SQS queues, Lambda functions, HTTP endpoints, email, and SMS. When orders are created, publishing to an SNS topic enables multiple microservices to receive notifications without the publisher knowing subscriber details. Each subscriber processes messages independently at their own pace without affecting others. SNS decouples publishers from subscribers, providing flexible fan-out patterns ideal for event-driven microservices architectures.
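A sketch of the publishing side (the topic ARN and payload are placeholders); subscribers are attached to the topic separately and remain invisible to this code:

    import json
    import boto3

    sns = boto3.client("sns")

    # Publish once; SNS fans the message out to every subscriber.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:order-created",
        Message=json.dumps({"orderId": "o-123", "total": 49.95}),
        # Attributes let subscribers filter without parsing the body.
        MessageAttributes={
            "eventType": {"DataType": "String", "StringValue": "OrderCreated"}
        },
    )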
Option A is incorrect because Amazon SQS provides point-to-point messaging where each message is consumed by exactly one consumer, not publish-subscribe fan-out. While multiple consumers can poll an SQS queue, each message is delivered to only one consumer, requiring message duplication for multiple processors. SQS suits work queue patterns where tasks should be processed once, not event broadcasting to multiple independent subscribers. For multiple microservices receiving the same event, SNS pub-sub is more appropriate than SQS point-to-point.
Option C is incorrect because AWS Lambda direct invocation enables synchronous or asynchronous function execution but doesn’t provide publish-subscribe messaging infrastructure. Direct invocation requires the publisher to know and explicitly invoke each Lambda function, creating tight coupling between publishers and subscribers. Adding new subscribers requires modifying publisher code. Lambda can subscribe to SNS topics, but direct invocation doesn’t provide the decoupling and fan-out capabilities that pub-sub messaging delivers. SNS manages subscriptions allowing dynamic subscriber additions without publisher changes.
Option D is incorrect because Amazon DynamoDB is a NoSQL database for storing and retrieving data, not a publish-subscribe messaging service. While DynamoDB Streams can notify subscribers of table changes, this is different from general-purpose event publishing. Using DynamoDB for event distribution would require subscribers to poll tables or streams, creating complexity without the benefits of purpose-built messaging services. DynamoDB serves data persistence, not event distribution, making SNS the appropriate choice for publish-subscribe patterns.
Question 57
A developer needs to implement distributed tracing for a microservices application to identify performance bottlenecks across service calls. Which AWS service provides distributed tracing capabilities?
A) Amazon CloudWatch Logs only
B) AWS X-Ray
C) AWS CloudTrail
D) Amazon S3
Answer: B
Explanation:
AWS X-Ray provides distributed tracing for microservices applications, enabling visualization of service-to-service calls, latency analysis, and bottleneck identification. X-Ray collects trace data from application requests as they flow through services, creating service maps showing dependencies and performance characteristics. X-Ray SDKs instrument applications to capture trace segments, and the X-Ray daemon aggregates and sends data to the X-Ray service. Developers can analyze request traces end-to-end, identifying slow services, errors, and performance issues across distributed architectures. X-Ray integrates with Lambda, ECS, API Gateway, and other AWS services for comprehensive distributed tracing.
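A minimal instrumentation sketch using the X-Ray SDK for Python (for Lambda, active tracing is enabled on the function itself; the function name below is illustrative):

    from aws_xray_sdk.core import xray_recorder, patch_all

    # Patch supported libraries (boto3, requests, etc.) so downstream calls
    # appear as subsegments in the trace.
    patch_all()

    @xray_recorder.capture("process_order")  # records a custom subsegment
    def process_order(order):
        ...  # calls made here are attributed to this subsegment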
Option A is incorrect because Amazon CloudWatch Logs stores and searches log data but doesn’t provide distributed tracing with service maps and request correlation across microservices. While logs contain valuable information, analyzing distributed request flows through log correlation is difficult and time-consuming. CloudWatch Logs show individual service logs but don’t automatically trace requests across service boundaries or visualize service dependencies. X-Ray specifically addresses distributed tracing needs that log aggregation alone cannot fulfill efficiently.
Option C is incorrect because AWS CloudTrail logs API calls to AWS services for auditing and compliance but doesn’t trace application-level requests across microservices. CloudTrail records AWS management operations (creating resources, changing configurations) but not application traffic flowing between services. CloudTrail provides audit trails for governance but doesn’t offer the performance analysis, latency tracking, and service mapping that X-Ray delivers for application tracing. CloudTrail and X-Ray serve different monitoring purposes.
Option D is incorrect because Amazon S3 is object storage for files and data, not a distributed tracing service. S3 stores application logs or trace data but doesn’t analyze request flows, generate service maps, or identify performance bottlenecks. While X-Ray can export trace data to S3 for long-term storage and analysis, S3 itself doesn’t provide tracing capabilities. S3 is storage infrastructure, not an application monitoring and tracing solution.
Question 58
A developer is building a serverless API that needs to interact with an RDS MySQL database. The Lambda function should retrieve database credentials securely without hard-coding them. What is the best approach?
A) Store credentials in Lambda environment variables unencrypted
B) Hard-code credentials in the Lambda function code
C) Store credentials in AWS Secrets Manager and retrieve at runtime
D) Store credentials in a public GitHub repository
Answer: C
Explanation:
Storing credentials in AWS Secrets Manager and retrieving them at runtime provides secure credential management for Lambda functions accessing RDS databases. Secrets Manager encrypts credentials at rest using KMS, provides IAM-based access control, supports automatic credential rotation, and logs access for auditing. Lambda functions retrieve credentials using Secrets Manager APIs during initialization or execution, ensuring credentials never appear in code or configuration. Secrets Manager integration with RDS enables automatic rotation without Lambda code changes. This approach follows AWS security best practices for secrets management.
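Building on the retrieval pattern above, a common sketch caches the secret and connection per execution environment rather than per invocation; the secret name is a placeholder, pymysql is assumed to be packaged with the function, and the secret is assumed to use the standard RDS key names:

    import json
    import boto3
    import pymysql  # assumed to be bundled in the deployment package or a layer

    # Runs once per cold start, so warm invocations reuse the connection.
    _secret = json.loads(
        boto3.client("secretsmanager")
        .get_secret_value(SecretId="prod/orders/mysql")["SecretString"]
    )

    _conn = pymysql.connect(
        host=_secret["host"],
        user=_secret["username"],
        password=_secret["password"],
        database=_secret["dbname"],
    )

    def handler(event, context):
        with _conn.cursor() as cur:
            cur.execute("SELECT 1")
            return cur.fetchone()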
Option A is incorrect because storing credentials in Lambda environment variables unencrypted exposes them to anyone with read access to the Lambda function configuration. Environment variables appear in plaintext in the Lambda console, CloudFormation templates, and potentially logs. While environment variables can be encrypted with KMS, this still provides less security than Secrets Manager with its rotation capabilities and audit logging. For database credentials requiring regular rotation, Secrets Manager provides better security and operational benefits than environment variables.
Option B is incorrect because hard-coding credentials in Lambda function code creates severe security vulnerabilities with credentials visible in source code repositories, deployment packages, and version control history. Anyone accessing code repositories sees credentials, and updating credentials requires code changes and redeployment. Hard-coding violates fundamental security principles and prevents credential rotation. Secrets Manager enables dynamic credential retrieval maintaining security while simplifying management and rotation processes.
Option D is incorrect because storing credentials in public GitHub repositories is catastrophically insecure, exposing database access credentials to anyone on the internet. Public repositories allow anonymous access, making stored credentials immediately available to attackers who scan GitHub for exposed secrets. This represents one of the most serious security mistakes, often leading to compromised systems and data breaches. Credentials must never be stored in version control, especially public repositories. Secrets Manager provides proper credential storage and management.
Question 59
A developer needs to implement a solution where an application running on EC2 needs temporary access to an S3 bucket. Which approach follows AWS security best practices?
A) Create an IAM user with access keys and hard-code in application
B) Attach an IAM role to the EC2 instance with appropriate S3 permissions
C) Make the S3 bucket publicly accessible
D) Store AWS credentials in application configuration files
Answer: B
Explanation:
Attaching an IAM role to the EC2 instance with appropriate S3 permissions follows AWS security best practices for providing temporary credentials without managing long-term access keys. IAM roles for EC2 provide automatic credential rotation through the instance metadata service, eliminating the need for hard-coded credentials. Applications use AWS SDKs to automatically retrieve temporary credentials from instance metadata, transparently handling credential refresh. This approach implements least privilege by granting only necessary S3 permissions and provides audit trails through CloudTrail. IAM roles prevent credential exposure and simplify credential management.
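A sketch of what the application code looks like with a role attached; note there are no keys anywhere (bucket and key names are placeholders):

    import boto3

    # With an instance role attached, the SDK automatically obtains and
    # refreshes temporary credentials from the instance metadata service.
    s3 = boto3.client("s3")
    s3.download_file("my-app-bucket", "config/settings.json", "/tmp/settings.json")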
Option A is incorrect because creating IAM users with long-term access keys and hard-coding them in applications violates security best practices. Access keys require manual rotation, can be accidentally exposed in code or logs, and create persistent credentials that remain valid if compromised. Hard-coded credentials appear in deployment packages and potentially version control. IAM roles provide superior security by offering temporary credentials that rotate automatically without application changes. Long-term access keys should be avoided for applications running on AWS infrastructure.
Option C is incorrect because making S3 buckets publicly accessible exposes data to the entire internet without authentication or authorization. Public buckets allow anonymous access to all objects, creating massive security risks and potential data breaches. S3 public access should be limited to specific use cases requiring anonymous public content delivery, never for internal application data access. IAM roles provide granular access control ensuring only authorized EC2 instances access S3 resources while maintaining security.
Option D is incorrect because storing AWS credentials in application configuration files creates security vulnerabilities similar to hard-coding. Configuration files might be committed to version control, included in backups, or accessible through application vulnerabilities. Credentials in configuration files require manual updates and don’t rotate automatically. IAM roles eliminate credential management burden by providing temporary credentials through instance metadata without storing permanent credentials anywhere in the application.
Question 60
A developer is implementing error handling for a Lambda function that processes messages from an SQS queue. What happens to messages that cause Lambda function failures?
A) Messages are immediately deleted from the queue
B) Messages are automatically retried based on the queue’s redrive policy and maxReceiveCount
C) Messages remain invisible indefinitely
D) Messages are moved to S3 automatically
Answer: B
Explanation:
Messages that cause Lambda function failures are automatically retried based on the queue’s redrive policy and maxReceiveCount configuration. When Lambda fails to process a message, it remains in the queue and becomes visible again after the visibility timeout expires, allowing retry attempts. After the configured maxReceiveCount failures, messages move to a dead-letter queue (DLQ) if configured, preventing infinite retry loops. The Lambda-SQS integration respects queue settings for retry behavior, with Lambda deleting messages only after successful processing. Proper redrive policy configuration ensures failed messages receive appropriate retry attempts before moving to DLQ for investigation.
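A sketch of configuring the redrive policy on the source queue with boto3 (the URL and ARN are placeholders):

    import json
    import boto3

    sqs = boto3.client("sqs")

    # After five failed receive attempts, SQS moves the message to the DLQ.
    sqs.set_queue_attributes(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",
        Attributes={
            "RedrivePolicy": json.dumps({
                "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:orders-dlq",
                "maxReceiveCount": "5",
            })
        },
    )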
Option A is incorrect because messages are not immediately deleted when Lambda functions fail—deletion only occurs after successful processing. SQS requires explicit message deletion after processing succeeds. When Lambda successfully processes messages, it automatically deletes them from the queue. However, failures prevent deletion, leaving messages in the queue for retry. Immediate deletion on failure would lose messages without retry opportunities, making reliable processing impossible. Message retention with retry is fundamental to SQS reliability guarantees.
Option C is incorrect because messages don’t remain invisible indefinitely after Lambda failures. Visibility timeout determines how long messages remain invisible to other consumers after being received. When Lambda fails to process messages within the timeout period, messages become visible again for retry. Indefinite invisibility would prevent retry attempts and create message loss. SQS uses time-bound visibility timeouts ensuring messages become available for processing again after failures, enabling the retry behavior necessary for reliable message processing.
Option D is incorrect because SQS doesn’t automatically move messages to S3 after Lambda failures. Messages move to configured dead-letter queues after exceeding maxReceiveCount, not to S3. While applications can implement custom logic moving DLQ messages to S3 for long-term storage and analysis, this isn’t automatic SQS behavior. S3 is object storage without message queuing semantics. Failed messages follow SQS’s retry and DLQ mechanisms, with S3 integration requiring explicit configuration through separate processing of DLQ messages.