Cisco 350-901 Developing Applications using Core Platforms and APIs (DEVCOR) Exam Dumps and Practice Test Questions Set 10 Q 181-200


Question 181

A developer is building a microservice that calls multiple Cisco DNA Center APIs sequentially. Each API call takes approximately 500ms, and there are 10 calls needed. The current implementation blocks waiting for each response before making the next call. What is the total expected latency and how can it be optimized?

A) Total latency is 5 seconds; use threading or async-await for concurrent execution

B) Total latency is 500ms; no optimization needed

C) Total latency is 10 seconds; implement caching for all responses

D) Total latency cannot be calculated without knowing network speed

Answer: A

Explanation:

The correct answer is A) Total latency is 5 seconds; use threading or async-await for concurrent execution. Sequential execution of 10 calls at 500ms each results in 5 seconds total latency. However, if these calls are independent, concurrent execution reduces latency to approximately 500ms plus overhead. Modern programming languages provide mechanisms for concurrency: Python uses async-await or threading; JavaScript uses Promises or async-await; Java uses CompletableFuture or ExecutorService. Implementing concurrency transforms the application from sequential to parallel execution, dramatically improving responsiveness. Load testing validates latency improvements and identifies optimal thread pool sizes.
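
As a minimal sketch of the concurrent approach in Python, the snippet below fans the ten independent calls out to a thread pool; the base URL, token, and paths are placeholders, not real DNA Center values.

```python
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party: pip install requests

BASE = "https://dnac.example.com/dna/intent/api/v1"  # hypothetical host
HEADERS = {"X-Auth-Token": "TOKEN"}                  # placeholder token

def fetch(path: str) -> dict:
    """One blocking GET; ~500 ms each against a real server."""
    resp = requests.get(f"{BASE}{path}", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

paths = [f"/network-device?offset={i}" for i in range(10)]

# Sequential: ~10 x 500 ms = ~5 s. Concurrent: ~500 ms plus overhead.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fetch, paths))
```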

Option B) is incorrect because sequential calls clearly require 5 seconds minimum. Option C) is incorrect because the sequential total is 5 seconds, not 10, and caching does not reduce latency for first-time calls to independent resources. Option D) is incorrect because latency calculation doesn’t depend on network speed once individual call times are known. Understanding sequential versus concurrent execution patterns is fundamental to optimizing API integration performance.

Question 182

A developer is implementing a webhook receiver for Cisco Webex Events. The receiver endpoint occasionally receives duplicate webhooks from the platform. How should duplicates be handled to ensure idempotency?

A) Process all webhooks without checking for duplicates

B) Use a deduplication mechanism like storing event IDs in a database to detect and skip duplicates

C) Manually request the user to resend unique webhooks

D) Reject all webhook requests

Answer: B

Explanation:

The correct answer is B) Use a deduplication mechanism like storing event IDs in a database to detect and skip duplicates. Webhooks may be delivered multiple times due to network failures or platform retries. Implementing idempotency ensures duplicate deliveries don’t cause repeated processing. Each webhook includes a unique event ID; storing processed IDs allows detection of duplicates. Before processing, check if the event ID exists in storage; skip processing if found. This approach guarantees that regardless of how many times a webhook is delivered, business logic executes exactly once. Redis or a database table efficiently implements deduplication storage.
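
A minimal Flask sketch of this pattern, assuming the Webex payload carries a unique id field; the in-memory set stands in for the Redis or database store a production receiver would use:

```python
from flask import Flask, request

app = Flask(__name__)
seen_event_ids = set()  # replace with Redis/DB for durability across restarts

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json(force=True)
    event_id = event.get("id")
    if event_id in seen_event_ids:
        return "", 200          # duplicate delivery: acknowledge, skip work
    seen_event_ids.add(event_id)
    process(event)              # business logic runs once per unique event
    return "", 200

def process(event):
    print("processing", event.get("id"))  # placeholder for real handling
```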

Option A) is incorrect because processing duplicates causes incorrect behavior and data corruption. Option C) is incorrect because users cannot control platform webhook retry behavior. Option D) is incorrect because legitimate webhooks would be rejected. Idempotent webhook processing is essential for reliable event-driven architectures.

Question 183

A developer is using Cisco ACI APIs to monitor network policies. The API supports both polling and subscription-based monitoring. Which approach is more efficient for real-time monitoring?

A) Polling every second for fresh data

B) Subscription-based notifications that push updates when changes occur

C) Polling every 5 seconds to reduce load

D) Manual monitoring without automation

Answer: B

Explanation:

The correct answer is B) Subscription-based notifications that push updates when changes occur. Subscription-based monitoring is significantly more efficient than polling because updates arrive immediately upon changes rather than waiting for the next poll interval. Subscriptions consume fewer resources, reduce latency, and scale better as the number of monitored objects increases. Cisco ACI provides subscription mechanisms that deliver notifications through WebSockets or HTTP callbacks. Polling wastes bandwidth when nothing changes and introduces delays in detecting updates. For real-time requirements, subscriptions are the clear winner in efficiency and responsiveness.
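
The general push pattern can be sketched as follows, assuming a WebSocket endpoint that streams JSON change events; ACI's actual subscription flow also involves a REST call to register the subscription and a periodic refresh, which are omitted here.

```python
import asyncio
import json

import websockets  # third-party: pip install websockets

async def listen(url: str) -> None:
    async with websockets.connect(url) as ws:
        async for message in ws:          # wakes only when an event arrives
            event = json.loads(message)
            print("change notification:", event)

# Placeholder URL; a real APIC socket URL embeds the session token.
asyncio.run(listen("wss://apic.example.com/socket/TOKEN"))
```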

Option A) is incorrect because one-second polling creates excessive load for minimal benefit. Option C) is incorrect because any polling interval introduces unnecessary latency. Option D) is incorrect because automated monitoring is essential for operations visibility. Subscription-based monitoring is the standard approach for real-time event-driven systems.

Question 184

A developer is implementing error responses for a REST API. A request fails with a 500 error. What information should be included in the error response to help developers debug the issue?

A) Return only the HTTP status code without details

B) Include error code, human-readable message, request ID for tracing, and optional detailed error information

C) Return stack traces directly to clients

D) Return generic “Error” message without context

Answer: B

Explanation:

The correct answer is B) Include error code, human-readable message, request ID for tracing, and optional detailed error information. Well-designed error responses include structured information aiding debugging. Error codes categorize failure types, human-readable messages explain issues in context, and request IDs enable correlation with server logs for detailed investigation. Optional fields like error details or recovery suggestions provide additional guidance. This approach balances security (not exposing internal details) with debuggability (providing actionable information). JSON error responses following conventions like RFC 7807 Problem Details format standardize error representation across APIs.
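
A minimal Flask sketch of a structured 500 response, loosely following the RFC 7807 layout; field names beyond the RFC's own are illustrative.

```python
import uuid

from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(500)
def internal_error(exc):
    request_id = str(uuid.uuid4())   # correlates the response with server logs
    app.logger.exception("request %s failed", request_id)  # trace stays server-side
    body = {
        "type": "about:blank",
        "title": "Internal Server Error",
        "status": 500,
        "detail": "An unexpected error occurred; quote the requestId when reporting.",
        "requestId": request_id,
    }
    return jsonify(body), 500
```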

Option A) is incorrect because status codes alone provide insufficient debugging information. Option C) is incorrect because stack traces expose internal implementation and security vulnerabilities. Option D) is incorrect because generic messages don’t help identify root causes. Structured error responses with contextual information enable efficient issue resolution.

Question 185

A developer is building a CLI tool that integrates with multiple Cisco platforms. The tool needs to support different authentication methods (API keys, OAuth 2.0, certificates). How should authentication be abstracted?

A) Implement separate code paths for each authentication method

B) Create an abstract authentication provider interface with concrete implementations for each method

C) Use only the most common authentication method

D) Store credentials directly in the CLI configuration file

Answer: B

Explanation:

The correct answer is B) Create an abstract authentication provider interface with concrete implementations for each method. An abstract authentication provider interface defines the contract all authentication methods must fulfill. Concrete implementations handle specific methods (APIKeyProvider, OAuth2Provider, CertificateProvider). The main CLI code depends on the interface, not concrete implementations. This design enables adding new authentication methods without modifying existing code, following the Open-Closed Principle. The CLI can automatically select the appropriate provider based on user configuration, making the tool flexible and maintainable.
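
A minimal sketch of the interface in Python; the class and method names are illustrative rather than taken from any Cisco SDK.

```python
from abc import ABC, abstractmethod

class AuthProvider(ABC):
    @abstractmethod
    def apply(self, headers: dict) -> dict:
        """Return headers augmented with this method's credentials."""

class APIKeyProvider(AuthProvider):
    def __init__(self, key: str):
        self.key = key

    def apply(self, headers: dict) -> dict:
        return {**headers, "X-API-Key": self.key}

class OAuth2Provider(AuthProvider):
    def __init__(self, token: str):
        self.token = token

    def apply(self, headers: dict) -> dict:
        return {**headers, "Authorization": f"Bearer {self.token}"}

def call_api(provider: AuthProvider) -> None:
    headers = provider.apply({"Accept": "application/json"})
    ...  # issue the request; the CLI depends only on the AuthProvider interface
```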

Option A) is incorrect because separate code paths create maintenance burden. Option C) is incorrect because supporting only one method limits flexibility. Option D) is incorrect because storing credentials in plain text creates security vulnerabilities. Abstract authentication providers enable building flexible, maintainable tools supporting multiple authentication mechanisms.

Question 186

A developer is implementing exponential backoff for retrying failed API calls. What is the purpose of adding jitter to the exponential backoff algorithm?

A) Jitter has no purpose and should not be used

B) Jitter adds randomness to prevent thundering herd problem where multiple clients retry simultaneously

C) Jitter ensures all clients retry at exactly the same time

D) Jitter increases latency intentionally

Answer: B

Explanation:

The correct answer is B) Jitter adds randomness to prevent thundering herd problem where multiple clients retry simultaneously. Without jitter, multiple clients failing simultaneously would retry at identical times, creating a synchronized thundering herd that reproduces the congestion that triggered the original failure. Jitter adds random variation to retry delays, distributing retry attempts across time. For example, instead of all clients retrying after 2 seconds, they retry between 1.6 and 2.4 seconds. This spreads load evenly and prevents synchronized spikes. Full jitter implementations randomize between 0 and the exponential backoff value for maximum distribution.
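
A minimal sketch of full jitter in Python: each delay is drawn uniformly between zero and the capped exponential value.

```python
import random
import time

def retry_with_jitter(call, retries=5, base=1.0, cap=30.0):
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise                       # retries exhausted
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            time.sleep(delay)               # randomized sleep de-synchronizes clients
```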

Option A) is incorrect because jitter provides critical resilience benefits. Option C) is incorrect because jitter specifically prevents synchronized retries. Option D) is incorrect because jitter optimizes timing, not intentionally increases latency. Jitter in retry mechanisms is a proven resilience pattern used across distributed systems.

Question 187

A developer is working with Cisco Meraki APIs that return paginated results. The API uses cursor-based pagination with a “next” token in responses. How should pagination be implemented to handle large datasets?

A) Fetch all data in a single request regardless of size

B) Implement a loop that fetches pages using the next token until all data is retrieved

C) Manually calculate page numbers and offsets

D) Stop after fetching the first page

Answer: B

Explanation:

The correct answer is B) Implement a loop that fetches pages using the next token until all data is retrieved. Cursor-based pagination using tokens is efficient for large datasets. The implementation maintains a loop that checks for the next token in each response; when present, it fetches the next page. When the next token is absent, pagination completes. This approach handles large datasets efficiently by fetching only needed pages and avoiding the timeout and drift issues that affect offset-based methods. Because each page's token comes from the previous response, fetching is inherently sequential, though processing of already-retrieved pages can overlap with fetching.
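
A minimal sketch of the loop, assuming the next token is returned in the JSON body; Meraki's production API actually signals the next page through an HTTP Link header, but the loop structure is the same.

```python
import requests

def fetch_all(url: str, headers: dict) -> list:
    items, token = [], None
    while True:
        params = {"next": token} if token else {}
        resp = requests.get(url, headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        items.extend(page["items"])     # illustrative field names
        token = page.get("next")
        if not token:                   # absent token means the last page
            return items
```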

Option A) is incorrect because attempting to fetch all data simultaneously causes memory overflow and timeouts. Option C) is incorrect because offset calculations are inefficient and error-prone. Option D) is incorrect because retrieving only the first page leaves data unprocessed. Cursor-based pagination loops are the standard approach for efficiently traversing large datasets through APIs.

Question 188

A developer is implementing a feature to sync data between a local database and Cisco DNA Center. The sync should handle network failures gracefully. Which approach ensures data consistency?

A) Abort immediately if any network error occurs

B) Implement idempotent operations with checkpoints so sync can resume from the last successful point on failure

C) Sync without any failure handling

D) Manually reconcile data after each failure

Answer: B

Explanation:

The correct answer is B) Implement idempotent operations with checkpoints so sync can resume from the last successful point on failure. Idempotent operations produce identical results regardless of how many times they execute, enabling safe retries. Checkpoints record progress, allowing resumption from the last successful point rather than restarting from the beginning. Combined with retry logic, this approach ensures data consistency even during network failures. For example, if syncing 1000 records fails after 700, the next attempt resumes from record 701 rather than reprocessing 700 records. Transaction-like semantics ensure complete consistency.
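
A minimal sketch of checkpointed, idempotent sync; the file-based checkpoint and the upsert helper are illustrative stand-ins for a real store and a real DNA Center write.

```python
import json
import os

CHECKPOINT = "sync_checkpoint.json"

def load_checkpoint() -> int:
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["last_index"]
    return -1

def save_checkpoint(index: int) -> None:
    with open(CHECKPOINT, "w") as f:
        json.dump({"last_index": index}, f)

def upsert(record: dict) -> None:
    ...  # idempotent write keyed by record ID, so replays are harmless

def sync(records: list) -> None:
    for i in range(load_checkpoint() + 1, len(records)):
        upsert(records[i])
        save_checkpoint(i)   # a crash after this line resumes at i + 1
```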

Option A) is incorrect because aborting on any error doesn’t handle transient failures. Option C) is incorrect because unhandled failures corrupt data. Option D) is incorrect because manual reconciliation is impractical. Idempotent operations with checkpoints enable resilient, consistent data synchronization.

Question 189

A developer is building a dashboard that displays data from multiple Cisco APIs with different update frequencies. Device configuration changes infrequently (cache for 1 hour), while interface statistics update frequently (cache for 5 minutes). How should caching be configured?

A) Use identical cache TTL for all data types

B) Implement differentiated caching with per-datatype TTLs based on update frequency

C) Never cache any data

D) Cache everything indefinitely

Answer: B

Explanation:

The correct answer is B) Implement differentiated caching with per-datatype TTLs based on update frequency. Different data types have different freshness requirements. Configuration data changes rarely, justifying longer cache duration; operational statistics change frequently, requiring shorter caching. Configuring TTLs based on actual update frequency optimizes performance while maintaining appropriate data freshness. This approach balances API quota consumption against latency and consistency needs. Cache invalidation mechanisms handle cases where data changes unexpectedly. Monitoring cache hit rates ensures TTL configuration remains optimal.
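
A minimal sketch using the cachetools package, with TTLs matching the scenario; the fetch helpers are placeholders for the underlying API calls.

```python
from cachetools import TTLCache  # third-party: pip install cachetools

config_cache = TTLCache(maxsize=1024, ttl=3600)  # configuration: 1 hour
stats_cache = TTLCache(maxsize=4096, ttl=300)    # statistics: 5 minutes

def fetch_config(device_id: str) -> dict: ...    # placeholder API call
def fetch_stats(device_id: str) -> dict: ...     # placeholder API call

def get_device_config(device_id: str) -> dict:
    if device_id not in config_cache:            # expired entries read as absent
        config_cache[device_id] = fetch_config(device_id)
    return config_cache[device_id]

def get_interface_stats(device_id: str) -> dict:
    if device_id not in stats_cache:
        stats_cache[device_id] = fetch_stats(device_id)
    return stats_cache[device_id]
```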

Option A) is incorrect because identical TTLs waste resources or serve stale data. Option C) is incorrect because appropriate caching significantly improves performance. Option D) is incorrect because indefinite caching guarantees that stale data is eventually served. Differentiated caching with appropriate TTLs is the standard approach for multi-source dashboards.

Question 190

A developer is implementing a service that processes Cisco Webex Events through a message queue. Events occasionally fail processing. How should failed events be handled to maintain reliability?

A) Discard failed events immediately

B) Implement a dead-letter queue and retry mechanism to handle transient failures while capturing permanently failed events

C) Retry failed events indefinitely

D) Log failures but ignore them

Answer: B

Explanation:

The correct answer is B) Implement a dead-letter queue and retry mechanism to handle transient failures while capturing permanently failed events. Message queue systems typically provide dead-letter queues for handling failed messages. Failed events are retried with exponential backoff to handle transient failures. After exceeding a maximum retry count, messages move to the dead-letter queue for investigation. This approach balances reliability (recovering from transient failures) with operational visibility (identifying permanently failed events). Operators can inspect dead-letter queues, fix underlying issues, and replay events when ready.
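
A minimal in-process sketch of the pattern; queue.Queue stands in for a real broker such as RabbitMQ or SQS, which provide dead-letter queues natively.

```python
import queue

MAX_RETRIES = 3
work_q = queue.Queue()          # items are (event, attempts) tuples
dead_letter_q = queue.Queue()   # parked for operator investigation and replay

def handle(event: dict) -> None:
    ...  # placeholder business logic; may raise on failure

def worker() -> None:
    while True:
        event, attempts = work_q.get()
        try:
            handle(event)
        except Exception:
            if attempts + 1 >= MAX_RETRIES:
                dead_letter_q.put(event)            # permanent failure
            else:
                work_q.put((event, attempts + 1))   # transient: retry later
        finally:
            work_q.task_done()
```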

Option A) is incorrect because discarding events loses data. Option C) is incorrect because infinite retries consume resources and never resolve permanent failures. Option D) is incorrect because ignoring failures prevents issue detection. Dead-letter queues with retry logic are industry-standard patterns for reliable event processing.

Question 191

A developer is implementing API request logging for security auditing. What information should be logged without violating privacy or security?

A) Log all data including credentials, tokens, and personal information

B) Log request URLs, methods, response status codes, and timestamps while excluding sensitive data like credentials and personal information

C) Don’t log anything to avoid performance impact

D) Log only failed requests

Answer: B

Explanation:

The correct answer is B) Log request URLs, methods, response status codes, and timestamps while excluding sensitive data like credentials and personal information. Comprehensive logging enables security investigations and debugging without exposing sensitive information. Logging URLs reveals what resources were accessed, methods indicate action types, status codes show success or failure, and timestamps establish timelines. Excluding credentials, tokens, and personal information prevents security breaches if logs are compromised. Structured logging with correlation IDs enables tracing requests across distributed systems. Log retention policies balance auditing needs against storage costs and privacy requirements.
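
A minimal sketch of an audit log line that captures the request envelope and nothing sensitive; the field layout is illustrative.

```python
import logging
import time
import uuid

audit_log = logging.getLogger("audit")

def log_request(method: str, url: str, status: int) -> None:
    audit_log.info(
        "request_id=%s ts=%.3f method=%s url=%s status=%d",
        uuid.uuid4(), time.time(), method, url, status,
    )
    # Deliberately omitted: Authorization headers, tokens, bodies, PII.
```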

Option A) is incorrect because logging sensitive data violates security best practices and regulations. Option C) is incorrect because properly implemented logging has minimal performance impact and is essential for security. Option D) is incorrect because successful requests also provide valuable audit trails. Strategic logging that includes debugging information while protecting sensitive data is essential for production systems.

Question 192

A developer is building a Python application using the Cisco SDK. The SDK methods raise exceptions on API failures. How should exceptions be handled?

A) Let all exceptions propagate uncaught

B) Catch specific exceptions from the SDK and implement appropriate error handling for each type

C) Catch generic Exception class to suppress all errors

D) Never handle exceptions

Answer: B

Explanation:

The correct answer is B) Catch specific exceptions from the SDK and implement appropriate error handling for each type. SDKs typically raise specific exception classes for different failure scenarios (e.g., AuthenticationError, NotFoundError, RateLimitError). Catching specific exceptions enables tailored responses. AuthenticationError triggers re-authentication, NotFoundError skips processing that resource, RateLimitError implements backoff. This approach provides fine-grained error control without hiding unexpected errors. Generic Exception handlers suppress important debugging information. The Cisco SDK documentation specifies raised exceptions and expected handling patterns.
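
A minimal sketch of type-specific handling; the exception classes and recovery helpers are illustrative, since exact names vary by SDK.

```python
class AuthenticationError(Exception): ...
class NotFoundError(Exception): ...
class RateLimitError(Exception): ...

def refresh_token() -> None: ...             # hypothetical helper
def backoff_and_retry(call, *args): ...      # hypothetical helper

def safe_call(sdk_call, *args):
    try:
        return sdk_call(*args)
    except AuthenticationError:
        refresh_token()                  # re-authenticate; caller may retry
    except NotFoundError:
        return None                      # resource absent: skip, not fatal
    except RateLimitError:
        return backoff_and_retry(sdk_call, *args)
    # Anything unexpected propagates instead of being silently swallowed.
```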

Option A) is incorrect because uncaught exceptions crash applications. Option C) is incorrect because catching generic exceptions hides important errors. Option D) is incorrect because exception handling enables graceful error recovery. Specific exception handling based on exception types enables robust SDK integration.

Question 193

A developer is implementing request timeouts for Cisco API calls. A typical API call completes in 200ms, but occasionally takes 5 seconds. What timeout value should be configured?

A) 100ms timeout to enforce performance

B) 10-30 seconds timeout allowing legitimate slow requests while preventing indefinite hangs

C) No timeout; wait indefinitely

D) 1 second timeout

Answer: B

Explanation:

The correct answer is B) 10-30 seconds timeout allowing legitimate slow requests while preventing indefinite hangs. Timeout configuration balances preventing indefinite hangs against avoiding premature timeout of legitimate slow requests. A timeout much larger than normal latency (200ms) but smaller than acceptable maximum latency accommodates occasional slowdowns. 10-30 seconds is reasonable for most Cisco APIs. Timeouts that are too short (1 second with 5-second occasional calls) cause false positives, while no timeout allows resource exhaustion. Different operations may warrant different timeouts; query operations might use shorter timeouts while configuration operations use longer ones.
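
A minimal sketch with the requests library, using a connect/read timeout pair generous enough for the 5-second tail; the endpoint is a placeholder.

```python
import requests

try:
    resp = requests.get(
        "https://api.example.com/v1/devices",  # placeholder endpoint
        timeout=(5, 30),   # 5 s to establish the connection, 30 s to read
    )
    resp.raise_for_status()
except requests.Timeout:
    ...  # log and retry with backoff rather than hanging indefinitely
```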

Option A) is incorrect because a 100ms timeout aborts many legitimate calls. Option C) is incorrect because infinite waits cause resource exhaustion. Option D) is incorrect because a 1-second timeout aborts the occasional legitimate calls that take 5 seconds. Appropriate timeout configuration based on observed latency patterns is essential for robust applications.

Question 194

A developer is implementing a backup system that exports configuration from Cisco DNA Center periodically. The export generates large JSON files. How should large file transfers be handled?

A) Transfer entire file in a single request

B) Implement streaming or chunked transfer to handle memory efficiently and support resume on failure

C) Split manually and transfer sequentially

D) Store files only in memory without persisting

Answer: B

Explanation:

The correct answer is B) Implement streaming or chunked transfer to handle memory efficiently and support resume on failure. Streaming avoids loading entire files into memory, enabling efficient processing of large files. Chunked transfer divides files into manageable pieces, each transferred separately. If a chunk fails, only that chunk requires re-transfer rather than restarting the entire file. This approach scales to arbitrarily large files without memory constraints. APIs supporting range requests enable efficient resume capabilities. HTTP libraries like requests in Python or fetch in JavaScript provide streaming interfaces for large transfers.
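
A minimal sketch of a streamed download that never holds the whole export in memory; the URL is a placeholder, and resume support would additionally use HTTP Range requests.

```python
import requests

url = "https://dnac.example.com/api/v1/config-archive"  # placeholder

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open("export.json", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB pieces
            f.write(chunk)
```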

Option A) is incorrect because large files exhaust memory and fail on connection interruptions. Option C) is incorrect because manual chunking duplicates work that libraries handle. Option D) is incorrect because memory-only storage doesn’t persist data across failures. Streaming and chunked transfer are essential techniques for handling large files reliably.

Question 195

A developer is implementing a feature that requires reading sensitive configuration from Cisco DNA Center. The API response includes sensitive data. How should this data be handled in logs and error messages?

A) Log all sensitive data for debugging purposes

B) Mask or exclude sensitive data from logs and error messages while keeping it in memory for processing

C) Store sensitive data in plain text files

D) Display sensitive data in user interface

Answer: B

Explanation:

The correct answer is B) Mask or exclude sensitive data from logs and error messages while keeping it in memory for processing. Sensitive data like passwords, API keys, or personal information must be protected throughout the application lifecycle. Keeping data in memory for legitimate processing is necessary, but logging it creates permanent exposure. Masking techniques replace sensitive portions with asterisks (e.g., “password: ****”) or remove entirely from logs. Error messages should indicate data presence without revealing values. This approach balances functionality with security. Careful data handling prevents breaches when logs are stored, shared, or analyzed.
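
A minimal sketch of masking before logging; the sensitive-key list is illustrative and would be driven by the actual response schema.

```python
SENSITIVE_KEYS = {"password", "secret", "apiKey", "snmpCommunity"}

def masked(payload: dict) -> dict:
    """Return a log-safe copy; the original stays intact in memory."""
    return {k: "****" if k in SENSITIVE_KEYS else v for k, v in payload.items()}

config = {"hostname": "edge-1", "password": "s3cret"}
print(masked(config))   # {'hostname': 'edge-1', 'password': '****'}
```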

Option A) is incorrect because logging sensitive data violates security best practices. Option C) is incorrect because plain text storage enables easy access to sensitive data. Option D) is incorrect because displaying sensitive data to users violates security principles. Selective data protection in logs while processing data in memory is the standard security practice.

Question 196

A developer is building an integration that processes configuration changes from Cisco ACI. The API notifications arrive asynchronously through subscriptions. How should asynchronous events be processed to maintain application state consistency?

A) Process events synchronously, blocking other operations

B) Queue events for asynchronous processing with state management to ensure consistency

C) Drop events if busy processing others

D) Merge all events into single request

Answer: B

Explanation:

The correct answer is B) Queue events for asynchronous processing with state management to ensure consistency. Asynchronous event processing requires careful state management. Events are queued upon receipt, enabling fast webhook responses. A worker process consumes queued events sequentially or with controlled concurrency, updating application state consistently. Queueing decouples event receipt from processing, preventing event loss and enabling graceful degradation under load. State management ensures multiple concurrent event handlers don’t corrupt shared state through proper synchronization or message ordering. This architecture enables scalable, reliable event processing.
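
A minimal sketch of the queue-and-worker split: the receiver only enqueues, and a single worker thread applies events in order so shared state is never updated concurrently. The event field names are illustrative.

```python
import queue
import threading

events = queue.Queue()
state = {}   # application view of policy, updated by one thread only

def on_notification(event: dict) -> None:
    events.put(event)          # fast acknowledgment path

def worker() -> None:
    while True:
        event = events.get()
        state[event["dn"]] = event["status"]   # illustrative field names
        events.task_done()

threading.Thread(target=worker, daemon=True).start()
```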

Option A) is incorrect because synchronous blocking prevents receiving new events during processing. Option C) is incorrect because dropping events loses critical information. Option D) is incorrect because merging events loses ordering and event-specific details. Queued asynchronous processing with consistent state management is essential for reliable event-driven systems.

Question 197

A developer is implementing a multi-region deployment of a Cisco API integration. Different regions have different API endpoints. How should region-specific endpoints be managed?

A) Hardcode endpoints for each region

B) Use configuration management with region-specific endpoint mappings deployed separately per region

C) Always use a single global endpoint

D) Manually change endpoints for each region

Answer: B

Explanation:

The correct answer is B) Use configuration management with region-specific endpoint mappings deployed separately per region. Configuration management separates code from deployment-specific values. Region-specific endpoint mappings can be stored in configuration files, environment variables, or external configuration services deployed per region. The application code uses configuration-driven endpoints rather than hardcoding, enabling deployment across regions without code changes. This approach also supports dynamic endpoint updates if Cisco changes API URLs. Infrastructure-as-Code tools like Terraform or CloudFormation facilitate managing region-specific configurations consistently.
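
A minimal sketch of configuration-driven endpoints; the mapping and the REGION environment variable are illustrative.

```python
import os

ENDPOINTS = {
    "us": "https://api.us.example.com",
    "eu": "https://api.eu.example.com",
    "ap": "https://api.ap.example.com",
}

region = os.environ.get("REGION", "us")
base_url = ENDPOINTS[region]   # same code deploys unchanged to every region
```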

Option A) is incorrect because hardcoding endpoints requires code changes for each region. Option C) is incorrect because a single global endpoint is rarely available; most platform APIs are regional. Option D) is incorrect because manual changes are error-prone and don’t scale. Configuration-managed region-specific endpoints enable scalable multi-region deployments.

Question 198

A developer is building a tool that imports network configurations into Cisco DNA Center. The import must handle partial failures—if some configurations fail, others should still complete. How should this be implemented?

A) Use a single transaction that rolls back entirely on any failure

B) Process configurations independently with individual error tracking so failures don’t block successful imports

C) Abort immediately on first error

D) Ignore all errors and report success

Answer: B

Explanation:

The correct answer is B) Process configurations independently with individual error tracking so failures don’t block successful imports. Processing configurations independently enables partial success when some fail. Individual error tracking records which configurations succeeded and which failed, enabling reporting and retry targeting. This approach maximizes utility when importing large configuration sets—even if 10% fail, 90% succeeds and provides value. Failed configurations can be analyzed, corrected, and retried. Progress indicators show users which configurations are processing, succeeding, and failing. This pattern is more practical than all-or-nothing transactions for bulk operations.
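
A minimal sketch of independent processing with per-item outcome tracking; the import call and record shape are illustrative.

```python
configurations = [{"name": "vlan-10"}, {"name": "vlan-20"}]  # sample input

def push_to_dnac(config: dict) -> None:
    ...  # placeholder for the real import call; may raise on failure

results = {"succeeded": [], "failed": []}
for config in configurations:
    try:
        push_to_dnac(config)
        results["succeeded"].append(config["name"])
    except Exception as exc:
        results["failed"].append({"name": config["name"], "error": str(exc)})

print(f"{len(results['succeeded'])} imported, "
      f"{len(results['failed'])} failed (retry candidates)")
```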

Option A) is incorrect because rolling back all changes on any failure is overly conservative. Option C) is incorrect because aborting on first error wastes remaining configurations. Option D) is incorrect because ignoring errors hides failures. Partial success with detailed error tracking is the practical approach for bulk operations.

Question 199

A developer is implementing API version compatibility checking. The application supports API versions 1.0, 1.5, and 2.0. How should version compatibility be managed?

A) Use the latest API version always

B) Implement version negotiation that discovers server version and adapts client behavior accordingly

C) Fail if versions don’t match exactly

D) Ignore version differences

Answer: B

Explanation:

The correct answer is B) Implement version negotiation that discovers server version and adapts client behavior accordingly. Version negotiation discovers the server’s API version upon connection, enabling the client to adapt behavior appropriately. For example, if the server is version 1.5, the client uses only version 1.5-compatible calls, avoiding version 2.0 features. This approach enables deployment against servers of different versions without code changes. Version negotiation typically involves calling a versioning endpoint or checking headers. Maintaining compatibility layers for multiple versions enables gradual migration to newer APIs.
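
A minimal sketch of negotiation at startup; the /version endpoint is illustrative, and the string comparison is only safe for single-digit versions like those in the question.

```python
import requests

def server_version(base_url: str) -> str:
    resp = requests.get(f"{base_url}/version", timeout=10)  # illustrative endpoint
    resp.raise_for_status()
    return resp.json()["apiVersion"]   # e.g. "1.5"

version = server_version("https://api.example.com")
if version >= "2.0":     # naive compare; parse properly for multi-digit parts
    ...                  # use 2.0-only features
else:
    ...                  # stay on the 1.x-compatible call path
```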

Option A) is incorrect because assuming latest version breaks compatibility with older servers. Option C) is incorrect because exact matching prevents flexibility. Option D) is incorrect because version differences cause incompatibility errors. Version negotiation enables building flexible clients supporting multiple server versions.

Question 200

A developer is implementing a resilience pattern for calling Cisco Meraki APIs. The system should continue functioning if the API becomes temporarily unavailable. Which pattern best describes this requirement?

A) Fail-fast pattern that immediately errors on any unavailability

B) Bulkhead pattern with separate service instances so API failures don’t cascade to other services

C) Ignore all errors without recovery

D) Queue requests indefinitely

Answer: B

Explanation:

The correct answer is B) Bulkhead pattern with separate service instances so API failures don’t cascade to other services. The Bulkhead pattern isolates service components so failures in one don’t cascade to others. By dedicating separate thread pools or processes for Meraki API calls, failures don’t exhaust resources needed by other services. If Meraki API becomes unavailable, its bulkhead fails gracefully while other services continue. Bulkheads can implement fallback logic, returning cached data or degraded functionality when APIs fail. This pattern enables graceful degradation rather than complete system failure. Combined with circuit breakers and retry logic, bulkheads create highly resilient systems.
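
A minimal sketch of a bulkhead in Python: Meraki calls get their own small thread pool and a bounded wait, so a hung dependency cannot exhaust the rest of the application; the fetch call is illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

meraki_pool = ThreadPoolExecutor(max_workers=4)   # isolated capacity

def fetch_meraki_devices() -> list:
    ...  # placeholder for the real Meraki API call

def get_devices_with_fallback(cached: list) -> list:
    future = meraki_pool.submit(fetch_meraki_devices)
    try:
        return future.result(timeout=10)   # bounded wait inside the bulkhead
    except Exception:
        return cached                      # degrade gracefully, don't cascade
```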

Option A) is incorrect because fail-fast patterns don’t continue functioning during unavailability. Option C) is incorrect because ignoring errors prevents detection. Option D) is incorrect because indefinite queueing doesn’t resolve unavailability. The Bulkhead pattern is a proven resilience technique that enables systems to function despite component failures.
