Google Associate Cloud Engineer Exam Dumps and Practice Test Questions Set 3 Q 41-60


Question 41

Your organization needs to migrate a legacy application that requires specific kernel modules not available in standard container images. Which GKE node configuration should you use?

A) Standard GKE nodes with custom startup scripts

B) GKE nodes with Container-Optimized OS

C) GKE nodes with Ubuntu or custom node images

D) GKE Autopilot mode

Answer: C

Explanation:

Google Kubernetes Engine provides flexibility in node operating system selection to accommodate different application requirements. While Container-Optimized OS is the default and recommended option for most workloads, some applications require custom kernel modules, specific system libraries, or OS-level configurations not available in the locked-down Container-Optimized OS. Understanding when to use alternative node images is important for supporting legacy or specialized workloads.

GKE nodes with Ubuntu or custom node images provide the flexibility needed for applications requiring specific kernel modules. GKE supports multiple node operating systems including Container-Optimized OS, Ubuntu, and Windows Server. Ubuntu nodes provide a traditional Linux distribution where you can install kernel modules and system packages and perform OS-level customizations that Container-Optimized OS doesn’t allow. For applications needing specific kernel modules, you create node pools using the Ubuntu node image, then use DaemonSets or startup scripts to install required kernel modules and dependencies. Custom node images allow building specialized OS images with pre-installed software and configurations, then using those images for GKE node pools. This approach enables running legacy applications with specific OS dependencies, installing proprietary drivers or kernel modules, and configuring system-level settings. You can maintain separate node pools with different OS images, scheduling workloads to the appropriate nodes using node selectors or taints and tolerations.
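As a sketch with hypothetical cluster, pool, and label names, creating an Ubuntu node pool and steering the legacy workload onto it might look like:

```sh
# Create a node pool using the Ubuntu containerd node image
gcloud container node-pools create legacy-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --image-type=UBUNTU_CONTAINERD \
  --node-labels=image=ubuntu

# In the workload's pod spec, target those nodes:
#   nodeSelector:
#     image: ubuntu
```

Kernel modules could then be installed by a privileged DaemonSet that schedules only onto the labeled Ubuntu nodes.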

A is incorrect because while startup scripts can configure nodes, Container-Optimized OS has a read-only root filesystem and limited package installation capabilities, preventing kernel module installation even with startup scripts. B is incorrect because Container-Optimized OS is designed as a locked-down, minimal OS optimized for containers and doesn’t support installing custom kernel modules or extensive system modifications. D is incorrect because GKE Autopilot is a fully managed mode where Google manages the node infrastructure including OS selection, and doesn’t provide the OS-level customization needed for custom kernel modules.

Question 42

You need to implement blue-green deployment for an application running behind a Cloud Load Balancer. Which approach should you use?

A) Update backend service to point to new instance group

B) Use Cloud CDN to cache different versions

C) Create separate load balancers for each version

D) Use traffic splitting with URL maps

Answer: A

Explanation:

Blue-green deployment is a release strategy that maintains two complete production environments, allowing instant switching between versions with easy rollback capabilities. Implementing blue-green deployments with Google Cloud Load Balancer requires understanding how backend services and instance groups interact to route traffic.

Updating the backend service to point to a new instance group implements blue-green deployment effectively with Cloud Load Balancer. The approach involves maintaining two instance groups: blue representing the current production version and green representing the new version. Both instance groups run simultaneously, with the green environment thoroughly tested before switching traffic. The load balancer’s backend service initially points to the blue instance group serving production traffic. When ready to deploy, you update the backend service configuration to point to the green instance group, instantly switching all traffic to the new version. If issues arise, you can quickly roll back by switching the backend service back to the blue instance group. This provides instant cutover with no gradual migration period, complete rollback capability by switching backend references, testing of the new version in a production-equivalent environment before cutover, and zero downtime during the switch. After a successful deployment and monitoring period, the old blue environment can be decommissioned or repurposed as the new blue for the next deployment cycle.
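A minimal sketch of the cutover, assuming a global load balancer and hypothetical resource names:

```sh
# Attach the green instance group, then detach blue
gcloud compute backend-services add-backend web-backend \
  --instance-group=green-ig \
  --instance-group-zone=us-central1-a \
  --global
gcloud compute backend-services remove-backend web-backend \
  --instance-group=blue-ig \
  --instance-group-zone=us-central1-a \
  --global
# Rollback is the same two commands with the groups reversed
```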

B is incorrect because Cloud CDN caches content for performance but doesn’t control application version routing; it serves cached content from origin which wouldn’t switch between application versions appropriately. C is incorrect because creating separate load balancers adds complexity and requires DNS changes for cutover rather than the instant backend switching that blue-green deployment provides. D is incorrect because traffic splitting with URL maps gradually distributes traffic across versions rather than the instant full cutover that characterizes blue-green deployment; traffic splitting is more appropriate for canary deployments.

Question 43

Your application needs to query data from Cloud SQL using complex joins and aggregations. Performance is degrading as data volume grows. What optimization should you implement?

A) Increase Cloud SQL instance machine type

B) Add read replicas and route read queries to them

C) Migrate to BigQuery for analytical queries

D) Enable Cloud SQL query insights

Answer: B

Explanation:

Database performance optimization requires understanding workload characteristics and selecting appropriate scaling strategies. Cloud SQL provides several mechanisms for improving performance including vertical scaling, read replicas, and connection pooling. For read-heavy workloads with complex queries, distributing read load across multiple database instances provides significant performance improvements.

Adding read replicas and routing read queries to them optimizes performance for read-heavy workloads with complex joins and aggregations. Cloud SQL read replicas are read-only copies of the primary instance that asynchronously replicate data from the primary. Applications can distribute read queries across replicas, reducing load on the primary instance which can then focus on write operations. For applications with many complex SELECT queries including joins and aggregations, read replicas provide horizontal read scaling where each replica can serve a portion of read traffic. You can create multiple read replicas in the same region or different regions, connecting applications to replicas for read operations while directing writes to the primary instance. This approach maintains primary instance capacity for writes, scales read throughput proportionally with replica count, provides geographic distribution for global applications, and improves query response times by reducing primary instance load. Read replicas work well when replication lag of seconds is acceptable for read operations.
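Creating a replica is a single command; the sketch below uses hypothetical instance names:

```sh
# Create a read replica of the primary Cloud SQL instance
gcloud sql instances create my-replica \
  --master-instance-name=my-primary \
  --region=us-central1
```

The application then opens read connections against the replica’s address while continuing to direct writes to the primary.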

A is incorrect because increasing instance machine type provides vertical scaling that improves capacity but can be expensive and has limits; it doesn’t scale as effectively as horizontal scaling through read replicas for read-heavy workloads. C is incorrect because while BigQuery excels at analytics on large datasets, migrating requires significant architectural changes and BigQuery is optimized for different use cases than transactional databases; the question asks for optimization, not architecture redesign. D is incorrect because Query Insights provides visibility into query performance and helps identify optimization opportunities but doesn’t itself improve performance; it’s a diagnostic tool rather than a performance solution.

Question 44

You need to grant developers access to view Compute Engine resources without allowing them to make changes. Which predefined IAM role should you assign?

A) roles/compute.admin

B) roles/compute.viewer

C) roles/compute.instanceAdmin

D) roles/viewer

Answer: B

Explanation:

Google Cloud IAM provides predefined roles with curated permission sets for common use cases across GCP services. Selecting appropriate roles requires understanding the scope and permissions each role provides. For read-only access to specific resources, service-specific viewer roles provide more granular control than project-wide roles.

The roles/compute.viewer role is the correct predefined role for developers needing read-only access to Compute Engine resources. This role provides get and list permissions for Compute Engine resources including instances, disks, networks, firewalls, and other compute resources without granting modification permissions. Developers can view instance details, read configurations, examine logs, and monitor resource status but cannot create, modify, delete, start, or stop resources. The compute.viewer role is service-specific, providing visibility only into Compute Engine rather than all project resources. This follows the principle of least privilege by granting exactly the permissions needed without excess access. For scenarios requiring read access to compute resources for troubleshooting, monitoring, or understanding infrastructure, compute.viewer provides appropriate access without operational permissions. Users with this role can access the Compute Engine console and use gcloud commands for listing and describing resources but receive permission denied errors for modification attempts.
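Granting the role is a standard IAM binding; the project ID and user below are placeholders:

```sh
gcloud projects add-iam-policy-binding my-project \
  --member="user:developer@example.com" \
  --role="roles/compute.viewer"
```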

A is incorrect because roles/compute.admin provides full administrative access including creating, modifying, and deleting compute resources, which exceeds the read-only requirement. C is incorrect because roles/compute.instanceAdmin grants instance administration permissions including start, stop, and modify operations, providing more access than viewing requires. D is incorrect because roles/viewer grants read-only access to all project resources across all services, which is broader than necessary and violates least privilege when only Compute Engine visibility is needed.

Question 45

Your company requires all GCS bucket data to be encrypted with customer-managed keys that can be rotated on a specific schedule. What should you implement?

A) Default Google encryption with automatic rotation

B) Customer-managed encryption keys in Cloud KMS with automatic rotation

C) Customer-supplied encryption keys rotated manually

D) Client-side encryption with manual key rotation

Answer: B

Explanation:

Meeting encryption requirements that mandate customer control over keys and specific rotation schedules requires choosing encryption methods that provide both key management capabilities and operational automation. Cloud KMS provides comprehensive key management including automated rotation, centralized administration, and integration with GCP storage services.

Customer-managed encryption keys in Cloud KMS with automatic rotation provide the correct solution for organizational key control with scheduled rotation. Cloud KMS allows creating and managing encryption keys used for encrypting data in Cloud Storage and other GCP services. For Cloud Storage, you configure buckets or objects to use specific Cloud KMS keys through the customer-managed encryption key (CMEK) configuration. Cloud KMS supports automatic key rotation on configurable schedules such as every 90 days or annually, where KMS automatically creates new key versions while maintaining access to previous versions for decrypting existing data. The rotation process is transparent to applications: new data is encrypted using the latest key version while older data remains accessible through previous versions. This approach provides centralized key lifecycle management, automatic rotation eliminating manual processes, audit logging of all key usage, the ability to disable or destroy keys, and integration with Cloud IAM for key access control. Organizations maintain full control over encryption keys while benefiting from automated rotation and operational simplicity.
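A sketch of the setup with hypothetical names; note that the Cloud Storage service agent also needs roles/cloudkms.cryptoKeyEncrypterDecrypter on the key:

```sh
# Create a key ring and a key that rotates automatically every 90 days
gcloud kms keyrings create storage-keys --location=us-central1
gcloud kms keys create bucket-key \
  --keyring=storage-keys \
  --location=us-central1 \
  --purpose=encryption \
  --rotation-period=90d \
  --next-rotation-time=2030-01-01T00:00:00Z   # placeholder first rotation

# Make the key the bucket's default encryption key
gcloud storage buckets update gs://my-bucket \
  --default-encryption-key=projects/my-project/locations/us-central1/keyRings/storage-keys/cryptoKeys/bucket-key
```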

A is incorrect because default Google encryption doesn’t provide customer control over keys; Google manages these keys entirely without customer access to key material or rotation schedules. C is incorrect because customer-supplied encryption keys require providing key material with every request and managing rotation manually, adding operational complexity without the centralized management benefits of Cloud KMS. D is incorrect because client-side encryption requires applications to handle encryption before upload, manage keys externally, and implement rotation logic, creating significant operational overhead and preventing GCP services from processing encrypted data.

Question 46

Your application running on GKE needs to automatically scale based on custom metrics like queue depth. What should you configure?

A) Horizontal Pod Autoscaler with CPU metrics

B) Vertical Pod Autoscaler

C) Horizontal Pod Autoscaler with custom metrics

D) Cluster Autoscaler

Answer: C

Explanation:

Kubernetes autoscaling mechanisms operate at different levels with different triggering metrics. While CPU and memory-based autoscaling handles many scenarios, some applications require scaling based on application-specific metrics like message queue length, request latency percentiles, or business metrics. Understanding which autoscaler supports custom metrics is essential for implementing appropriate scaling behavior.

Horizontal Pod Autoscaler with custom metrics provides autoscaling based on application-specific metrics like queue depth. HPA supports three metric types: resource metrics like CPU and memory, custom metrics from applications exposed through the metrics API, and external metrics from services outside the cluster. For queue-based applications, you expose queue depth as a custom metric, then configure HPA to scale pod replicas based on that metric. For example, HPA can monitor queue depth and adjust the replica count to maintain fewer than 10 messages per pod. Custom metrics are typically exposed through monitoring systems like Cloud Monitoring, Prometheus, or application-specific exporters. HPA queries these metrics and applies scaling algorithms to determine the desired replica count. This enables scaling based on business logic, application performance characteristics, or external system state rather than just resource utilization. Custom metric autoscaling is essential for event-driven architectures, queue processing systems, and applications where resource metrics don’t accurately reflect scaling needs.
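On GKE, queue depth for a Pub/Sub-backed worker typically surfaces as an external metric through Cloud Monitoring. The sketch below assumes the Custom Metrics Stackdriver Adapter is installed in the cluster and uses hypothetical names:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: work-queue
      target:
        type: AverageValue
        averageValue: "10"   # target ~10 undelivered messages per pod
```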

A is incorrect because HPA with CPU metrics only scales based on CPU utilization, which may not correlate with queue depth or application-specific scaling requirements. B is incorrect because Vertical Pod Autoscaler adjusts pod resource requests and limits rather than replica count, and operates on resource metrics rather than custom application metrics. D is incorrect because Cluster Autoscaler scales the number of nodes in the cluster based on pod resource requests, not application metrics; it responds to pod scheduling failures due to insufficient cluster capacity rather than application metrics.

Question 47

You need to deploy a containerized application that handles sensitive payment data and requires PCI DSS compliance. Which GCP compute option provides the strongest workload isolation?

A) Google Kubernetes Engine Standard

B) Cloud Run

C) GKE Sandbox using gVisor

D) Compute Engine with Shielded VMs

Answer: C

Explanation:

Security-sensitive workloads processing financial data or personal information often require enhanced isolation beyond standard container or VM security. Different GCP compute services provide varying levels of isolation, with some offering additional security layers specifically designed for untrusted or highly sensitive workloads.

GKE Sandbox using gVisor provides the strongest workload isolation for containerized applications by adding an additional security layer between containers and the host kernel. GKE Sandbox uses gVisor, a user-space kernel that implements most Linux system calls, creating a security boundary between containerized applications and the host operating system. This approach significantly reduces the attack surface by limiting direct kernel access from containers, providing defense-in-depth security where compromised containers have minimal ability to affect the host or other containers. GKE Sandbox is particularly suitable for processing sensitive data, running untrusted code, meeting strict compliance requirements, and multi-tenant environments requiring strong isolation. While gVisor introduces some performance overhead compared to standard containers, the security benefits are substantial for sensitive workloads. PCI DSS compliance often requires demonstrating strong isolation controls, and GKE Sandbox provides additional security assurance beyond standard container isolation. You enable sandbox mode per pod using the runtimeClassName specification.
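A sketch with hypothetical names: the node pool is created with sandboxing enabled, and each pod opts in through its runtime class.

```sh
# Node pool with GKE Sandbox (gVisor) enabled
gcloud container node-pools create sandbox-pool \
  --cluster=my-cluster \
  --sandbox type=gvisor
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payment-processor
spec:
  runtimeClassName: gvisor   # run this pod inside the gVisor sandbox
  containers:
  - name: app
    image: us-docker.pkg.dev/my-project/repo/payment-app:latest
```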

A is incorrect because GKE Standard uses traditional container isolation which, while secure for most workloads, doesn’t provide the enhanced isolation layer that gVisor offers for extremely sensitive data. B is incorrect because Cloud Run uses standard container isolation appropriate for many workloads but doesn’t offer the additional security boundary provided by GKE Sandbox. D is incorrect because while Shielded VMs provide boot integrity and firmware protection for virtual machines, they don’t offer the container-specific enhanced isolation that gVisor provides; they address different security concerns at the VM level rather than container isolation.

Question 48

Your team needs to debug intermittent latency issues in a distributed application. Which GCP service provides distributed tracing capabilities?

A) Cloud Monitoring

B) Cloud Logging

C) Cloud Trace

D) Cloud Profiler

Answer: C

Explanation:

Debugging performance issues in distributed applications where requests traverse multiple services presents unique challenges. Traditional monitoring showing aggregate metrics doesn’t reveal which specific component in a request chain causes latency. Distributed tracing tracks individual requests across service boundaries, providing visibility into request flows and performance bottlenecks.

Cloud Trace provides distributed tracing capabilities for tracking request latency across distributed applications. Cloud Trace collects latency data from applications, showing detailed timing information for requests as they travel through different services and components. The service automatically integrates with App Engine, Cloud Functions, and Cloud Run, while libraries and instrumentation support custom applications on Compute Engine or GKE. Cloud Trace displays trace data showing the path of requests through services, time spent in each service or function, dependencies between services, and bottlenecks causing latency. For microservices architectures, Cloud Trace reveals which service in a call chain introduces delays, helping identify performance problems that aggregate metrics hide. The trace view shows a timeline of operations with nested spans representing different operations within the overall request. This enables diagnosing intermittent latency by examining specific slow requests, comparing fast and slow traces, and understanding request behavior patterns. Cloud Trace is essential for optimizing distributed application performance and troubleshooting latency issues.

A is incorrect because Cloud Monitoring focuses on metrics, alerting, and dashboards showing aggregate performance data but doesn’t provide request-level tracing through distributed systems. B is incorrect because Cloud Logging captures log entries for debugging and auditing but doesn’t trace request flows across services or provide the timing analysis that distributed tracing offers. D is incorrect because Cloud Profiler analyzes CPU and memory usage within applications for optimization but doesn’t trace requests across distributed services or measure latency between components.

Question 49

You need to allow specific IP addresses from your on-premises network to access Cloud SQL instances. What should you configure?

A) VPC firewall rules

B) Authorized networks in Cloud SQL

C) Cloud Armor security policies

D) Private IP with VPN connection

Answer: B

Explanation:

Cloud SQL instances can be accessed through public IP addresses or private IP addresses, each with different security and connectivity characteristics. For public IP access, controlling which IP addresses can connect requires Cloud SQL-specific authorization mechanisms rather than general network security controls.

Authorized networks in the Cloud SQL configuration allow specific IP addresses or CIDR ranges from on-premises networks to access Cloud SQL instances over public IP. You configure authorized networks by adding IP address ranges to the Cloud SQL instance configuration, explicitly permitting connections from those addresses while blocking all other public internet connections. Connections from authorized networks still require valid database credentials, providing defense-in-depth security with network-level and application-level access control. Authorized networks support both individual IP addresses and CIDR notation for address ranges, making it possible to permit entire on-premises networks or individual jump hosts. This approach works well for direct database access from on-premises tools, hybrid applications accessing cloud databases, and development workflows where developers connect from known office IP addresses. Authorized networks provide straightforward access control without requiring VPN or Interconnect setup, though connections traverse the public internet with SSL encryption for security.
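Configuration is a single patch; note that the flag replaces the entire existing list, so every range that should remain authorized must be included (addresses below are placeholders):

```sh
gcloud sql instances patch my-instance \
  --authorized-networks=203.0.113.0/24,198.51.100.7/32
```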

A is incorrect because VPC firewall rules control traffic between resources within VPC networks or ingress to VPC resources but don’t control access to Cloud SQL public IP addresses which exist outside the VPC network space. C is incorrect because Cloud Armor provides DDoS protection and application-level security for load-balanced services but doesn’t control access to Cloud SQL instances. D is incorrect because while private IP with VPN provides secure connectivity, the question asks about allowing specific IP addresses suggesting public IP access; private IP would be a better architecture but requires VPN or Interconnect infrastructure beyond just IP authorization.

Question 50

Your application needs to process files uploaded to Cloud Storage by running containerized batch jobs. Which service provides the most serverless solution?

A) Cloud Functions triggered by Cloud Storage events

B) Compute Engine with instance templates

C) Cloud Run jobs triggered by Eventarc

D) GKE with batch workload

Answer: C

Explanation:

Processing files uploaded to Cloud Storage requires event-driven architecture that responds to storage events by executing processing logic. Different GCP compute services offer varying levels of serverless operation and suitability for container-based batch processing. Understanding which service provides serverless container execution with event triggering helps select optimal solutions.

Cloud Run jobs triggered by Eventarc provide the most serverless solution for running containerized batch processing in response to Cloud Storage uploads. Cloud Run jobs extend Cloud Run’s serverless container platform to batch workloads that run to completion rather than serving requests continuously. Eventarc provides event routing from various GCP services including Cloud Storage, triggering Cloud Run jobs when files are uploaded. This architecture combines serverless benefits: no infrastructure management, automatic scaling including down to zero when idle, paying only for job execution time, and container-based deployment for consistency with existing images. When files upload to Cloud Storage, Eventarc detects the event and triggers a Cloud Run job, passing the event details. The job processes the file and exits, with Cloud Run automatically cleaning up resources. This pattern supports complex processing in containers with longer execution times than Cloud Functions allows, use of any language or framework available in containers, and batch processing patterns where jobs run to completion.
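A sketch wiring this together with hypothetical names; the trigger’s service account needs permission to invoke the job, and the relevant service agents need event-publishing permissions, which this sketch omits:

```sh
# Deploy the batch container as a Cloud Run job
gcloud run jobs create process-file \
  --image=us-docker.pkg.dev/my-project/repo/processor:latest \
  --region=us-central1

# Trigger the job when an object is finalized in the bucket
gcloud eventarc triggers create on-upload \
  --location=us-central1 \
  --destination-run-job=process-file \
  --event-filters="type=google.cloud.storage.object.v1.finalized" \
  --event-filters="bucket=my-upload-bucket" \
  --service-account=trigger-sa@my-project.iam.gserviceaccount.com
```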

A is incorrect because while Cloud Functions can trigger on Cloud Storage events, they have execution time limits of 9 minutes and limited runtime environments, making them less suitable for complex containerized batch processing that may run longer. B is incorrect because Compute Engine requires managing virtual machine infrastructure including provisioning, scaling, and lifecycle management, lacking the serverless characteristics requested. D is incorrect because GKE provides container orchestration but requires managing the Kubernetes cluster and doesn’t offer the same serverless operation as Cloud Run jobs where all infrastructure is fully managed.

Question 51

Your organization needs to implement least privilege access where users temporarily request elevated permissions for specific tasks. Which feature should you enable?

A) IAM Conditions with time-based access

B) Access Approval

C) Access Context Manager

D) IAM Recommender

Answer: B

Explanation:

Just-in-time access and elevated permission workflows improve security by requiring explicit approval before granting sensitive permissions, reducing the window of elevated access and providing additional oversight. Different GCP features address various aspects of access management, from time-limited grants to approval workflows.

Access Approval provides just-in-time access workflows where users request temporary elevated permissions that require explicit approval before granting. Access Approval is particularly relevant for Google support access to your resources and for custom approval workflows using Access Approval API. For organizational workflows, administrators configure Access Approval to require approval for specific operations or resource access. When users attempt privileged operations, Access Approval generates approval requests that designated approvers must review and approve before the operation proceeds. This provides audit trails of approval decisions, time-limited approvals that expire automatically, explicit justification requirements for access requests, and separation of duties between requesters and approvers. Access Approval integrates with Access Context Manager for conditional access and with organization policies for governance. The approval workflow reduces risk from standing privileges, implements least privilege access patterns, and provides oversight for sensitive operations. Users maintain minimal baseline permissions and request elevated access only when needed for specific tasks.
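As a sketch, enrolling a project in Access Approval and routing approval requests might look like the following (values are placeholders; the official docs show these flags with underscores):

```sh
gcloud access-approval settings update \
  --project=my-project \
  --enrolled_services=all \
  --notification_emails="approvers@example.com"
```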

A is incorrect because while IAM Conditions enable time-based access grants, they provide scheduled access rather than approval-based just-in-time access workflows where requests trigger human approval processes. C is incorrect because Access Context Manager defines security perimeters and context-aware access based on attributes like IP address and device security, not approval workflows for elevated permissions. D is incorrect because IAM Recommender analyzes IAM usage and suggests permission removals for unused access but doesn’t provide request-and-approval workflows for temporary elevated permissions.

Question 52

You need to deploy a web application that automatically provisions SSL certificates and manages HTTPS traffic. Which GCP service provides this with minimal configuration?

A) Compute Engine with Let’s Encrypt

B) Cloud Load Balancing with Google-managed certificates

C) Cloud CDN with custom certificates

D) GKE Ingress with manual certificate management

Answer: B

Explanation:

Securing web applications with HTTPS requires obtaining, installing, and renewing SSL/TLS certificates. Manual certificate management introduces operational overhead and risks service disruption from expired certificates. Google Cloud Load Balancing provides automated certificate provisioning and renewal for customer domains.

Cloud Load Balancing with Google-managed certificates provides automatic SSL certificate provisioning and HTTPS traffic management with minimal configuration. Google-managed certificates automatically obtain SSL certificates from a certificate authority, provision them on the load balancer frontend, renew certificates before expiration, and handle all certificate lifecycle operations. To use Google-managed certificates, you configure an HTTPS load balancer, specify the domain names requiring certificates, and configure DNS to point to the load balancer IP address. Google automatically provisions certificates within minutes after DNS validation, typically completing before you finish application deployment. The certificates renew automatically without administrator intervention. This approach eliminates certificate management operational overhead, prevents outages from expired certificates, provides free SSL certificates for Google Cloud hosted applications, and simplifies HTTPS enablement. Google-managed certificates work with Cloud Load Balancing HTTPS load balancers and require domain ownership verification through DNS records.
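A sketch with placeholder names: create the managed certificate and attach it to the HTTPS target proxy; provisioning completes once DNS for the domain resolves to the load balancer IP.

```sh
# Google-managed certificate for the domain
gcloud compute ssl-certificates create web-cert \
  --domains=www.example.com \
  --global

# Attach the certificate to the load balancer's HTTPS target proxy
gcloud compute target-https-proxies update web-proxy \
  --ssl-certificates=web-cert
```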

A is incorrect because while Let’s Encrypt provides free certificates, using it with Compute Engine requires installing certbot or similar tools, configuring renewal automation, and managing certificate installation on instances, adding operational complexity compared to load balancer automation. C is incorrect because Cloud CDN enhances load balancer performance through caching but doesn’t independently manage certificates; it works with load balancer SSL termination rather than being a separate certificate management solution. D is incorrect because GKE Ingress with manual certificate management requires creating Kubernetes secrets with certificate data and managing renewal, lacking the automation that Google-managed certificates provide.

Question 53

Your application deployed on Compute Engine needs to access BigQuery datasets. What is the recommended authentication approach?

A) Service account key file stored on the instance

B) Instance metadata server providing automatic credentials

C) User credentials from gcloud auth login

D) API keys in environment variables

Answer: B

Explanation:

Authentication between GCP services should leverage built-in mechanisms that avoid storing or managing credentials explicitly. Compute Engine instances have access to the metadata server that provides automatic authentication tokens, enabling secure service-to-service communication without credential management.

The instance metadata server providing automatic credentials is the recommended authentication approach for Compute Engine accessing BigQuery. Every Compute Engine instance runs with an associated service account and can obtain short-lived OAuth2 access tokens from the instance metadata server. Applications use Google Cloud client libraries which automatically retrieve tokens from the metadata server without any explicit credential configuration. The metadata server endpoint is only accessible from within the instance, preventing token theft. To enable BigQuery access, you grant the instance’s service account appropriate IAM roles like roles/bigquery.dataViewer or roles/bigquery.user. Applications then use BigQuery client libraries which automatically authenticate using metadata server tokens. This approach provides no credentials to manage or secure on instances, automatic token rotation by the metadata server, audit trails through service account identity, and fine-grained access control through IAM roles on the service account. This is the Google-recommended pattern for Compute Engine applications accessing GCP services.
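Two pieces make this work: granting the VM’s service account the BigQuery role, and letting client libraries fetch tokens from the metadata server. Names below are placeholders:

```sh
# Grant the instance's service account read access to BigQuery
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:vm-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"

# From inside the VM, this is the endpoint client libraries query
# automatically to obtain a short-lived OAuth2 access token:
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```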

A is incorrect because storing service account key files on instances creates security risks from potential key compromise, requires key rotation management, and represents unnecessary credential storage when metadata server authentication is available. C is incorrect because user credentials from gcloud auth are intended for individual user access through command-line tools, not application authentication, and persist in user credential files rather than providing automatic token management. D is incorrect because API keys provide limited authentication suitable for public APIs but don’t work for BigQuery which requires OAuth2 authentication, and API keys in environment variables expose credentials unnecessarily.

Question 54

You need to ensure that all Compute Engine instances in your organization use only approved machine images. What should you implement?

A) IAM policies restricting image access

B) Organization Policy constraint for trusted images

C) VPC Service Controls perimeter

D) Security Command Center findings

Answer: B

Explanation:

Enforcing standardization across cloud resources prevents security misconfigurations and ensures compliance with organizational standards. For controlling which machine images can be used to create Compute Engine instances, organization-wide policy enforcement provides preventive controls rather than detective controls that identify violations after creation.

Organization Policy constraint for trusted images enforces that Compute Engine instances only use approved images, preventing instance creation with unauthorized images. The constraint compute.trustedImageProjects allows specifying which projects contain approved images. When configured, users can only create instances from images in the allowed projects, blocking attempts to use images from other projects including public images unless explicitly permitted. This provides preventive control stopping non-compliant instance creation, centralized governance through organization policy inheritance, and exceptions through policy overrides at folder or project levels where needed. Organizations typically maintain curated image projects containing hardened, patched images that meet security standards, compliance requirements, and operational guidelines. The organization policy ensures all instances throughout the organization use these approved images, preventing use of potentially vulnerable public images or unauthorized custom images. This constraint is essential for maintaining security baselines, achieving compliance certifications, and standardizing infrastructure configurations.
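As a sketch of what enforcing this looks like, the payload below shows the compute.trustedImageProjects constraint in the Organization Policy v2 API's JSON shape. The organization ID 123456789012 and the project trusted-images-prj are placeholders; in practice you would set this via the console or gcloud rather than raw JSON.

```python
# Minimal sketch of an Organization Policy payload restricting image
# sources. "123456789012" and "trusted-images-prj" are placeholders.
import json

policy = {
    "name": "organizations/123456789012/policies/compute.trustedImageProjects",
    "spec": {
        "rules": [
            {
                # Only images stored in this project may be used to
                # create Compute Engine instances anywhere in the org.
                "values": {"allowedValues": ["projects/trusted-images-prj"]}
            }
        ]
    },
}

print(json.dumps(policy, indent=2))
```

With this policy in place, any instance-creation request referencing an image outside the allowed project is rejected before the instance exists, which is what makes it a preventive rather than detective control.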

A is incorrect because IAM policies control who can access resources but don’t prevent using specific images; users with instance creation permissions could still use any accessible image without policy constraints. C is incorrect because VPC Service Controls create security perimeters controlling data access between services but don’t specifically enforce approved machine image usage for Compute Engine. D is incorrect because Security Command Center provides security visibility and identifies violations but doesn’t prevent non-compliant resource creation; it’s a detective control rather than preventive.

Question 55

Your application needs to fan out messages from a single Cloud Pub/Sub topic to multiple subscribers with different processing requirements. What is the correct configuration?

A) Multiple pull subscriptions to the same topic

B) Single subscription with multiple subscribers

C) Separate topics for each subscriber

D) Push subscriptions to different endpoints

Answer: A

Explanation:

Cloud Pub/Sub supports various messaging patterns including fan-out where multiple subscribers independently consume messages from a single topic. Understanding subscription models and message delivery guarantees helps design appropriate messaging architectures for different application requirements.

Multiple pull subscriptions to the same topic correctly implements fan-out messaging where multiple subscribers independently process messages. In Cloud Pub/Sub, each subscription maintains its own message queue and delivery tracking. When you create multiple subscriptions to a single topic, each subscription receives a copy of every message published to the topic. Subscribers pull messages from their respective subscriptions, processing messages according to their specific logic and timing. Each subscription acknowledges messages independently, allowing one subscriber to process messages quickly while another processes slowly without interference. This pattern enables several important scenarios including parallel processing by different systems, different processing logic for the same events, independent scaling of different processing pipelines, and separation of concerns between different application components. For example, an order topic might have subscriptions for inventory updates, shipping notifications, and analytics processing, each operating independently.
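The fan-out behavior described above can be simulated locally in a few lines: each subscription attached to a topic receives its own copy of every published message, so subscribers pull and acknowledge independently. This is purely illustrative; real code would use the google-cloud-pubsub client library, and the class names here are invented for the sketch.

```python
# Local simulation of Pub/Sub fan-out: every subscription on a topic
# gets an independent copy of each message. Illustrative only.
from collections import deque
from typing import Optional

class Topic:
    def __init__(self) -> None:
        self.subscriptions: list = []

    def publish(self, message: str) -> None:
        # Each subscription receives its own copy of the message.
        for sub in self.subscriptions:
            sub.queue.append(message)

class Subscription:
    def __init__(self, topic: Topic) -> None:
        self.queue: deque = deque()
        topic.subscriptions.append(self)

    def pull(self) -> Optional[str]:
        return self.queue.popleft() if self.queue else None

orders = Topic()
inventory = Subscription(orders)   # updates stock levels
shipping = Subscription(orders)    # sends shipping notifications

orders.publish("order-1001")
print(inventory.pull())  # order-1001
print(shipping.pull())   # order-1001 -- same event, independent copy
```

Because each subscription holds its own copy, a slow shipping consumer never delays inventory updates, which is exactly the independence the fan-out pattern is chosen for.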

B is incorrect because single subscription with multiple subscribers creates competing consumers where each message is delivered to only one subscriber rather than all subscribers, implementing work distribution rather than fan-out. C is incorrect because separate topics for each subscriber would require publishers to send messages to multiple topics, adding complexity and coupling publishers to subscriber knowledge. D is incorrect because while push subscriptions to different endpoints can implement fan-out, the question asks for the correct configuration pattern, and pull subscriptions provide more flexibility and control for subscribers including manual message handling, backpressure management, and acknowledgment control.

Question 56

You need to implement a disaster recovery solution for Cloud SQL with an RTO of less than 5 minutes. Which configuration should you use?

A) Automated backups with point-in-time recovery

B) High availability with automatic failover

C) Cross-region replica with manual failover

D) Export data to Cloud Storage periodically

Answer: B

Explanation:

Disaster recovery planning requires balancing Recovery Time Objective representing maximum acceptable downtime and Recovery Point Objective representing maximum acceptable data loss. Cloud SQL provides several features for high availability and disaster recovery with different RTO and RPO characteristics.

High availability with automatic failover meets the RTO requirement of less than 5 minutes by providing automatic failover to a standby instance. Cloud SQL high availability configuration creates a primary instance and a standby instance in different zones within the same region, with synchronous replication between them. If the primary instance fails due to zone outage or instance failure, Cloud SQL automatically promotes the standby to primary within seconds to minutes, typically under 120 seconds. Applications experience minimal downtime as connections to the instance IP automatically route to the new primary. High availability provides synchronous replication with zero data loss under normal circumstances, automatic health monitoring and failover without manual intervention, same IP address and connection string after failover, and protection against zone failures. This configuration is ideal for production databases requiring high availability and minimal downtime. The trade-off is slightly higher latency from synchronous replication and additional cost for the standby instance.

A is incorrect because automated backups with point-in-time recovery require restoring from backup which typically takes minutes to hours depending on database size, exceeding the 5-minute RTO requirement. C is incorrect because cross-region replicas protect against region failures but require manual promotion which introduces human response time and typically takes longer than automatic failover. D is incorrect because exporting to Cloud Storage provides backup capability but requires manual restoration process taking significant time, not meeting the RTO objective.

Question 57

You need to grant a user the ability to view resources in a Google Cloud project but not make any changes. Which predefined IAM role should you assign?

A) roles/viewer

B) roles/browser

C) roles/editor

D) roles/owner

Answer: A

Explanation:

The correct answer is option A. The roles/viewer predefined IAM role grants read-only access to resources within a Google Cloud project, allowing users to view existing resources and their configurations without the ability to create, modify, or delete them. This role follows the principle of least privilege for users who need visibility without administrative capabilities.

The Viewer role includes permissions to list and get details of resources across most Google Cloud services including Compute Engine instances, Cloud Storage buckets, BigQuery datasets, and Cloud SQL instances. Users with this role can view monitoring metrics, read logs (but not modify retention), examine IAM policies, and review billing information. The role is commonly assigned to auditors, management personnel, or team members who need read-only access for monitoring and reporting purposes. It’s important to note that while Viewer provides broad read access, it doesn’t grant access to sensitive data like secrets in Secret Manager or private keys, which require additional specific permissions. For organizational visibility without project resource access, consider the Browser role instead.
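For reference, granting this role is a single binding in the project's IAM policy. The sketch below shows that binding in the JSON shape returned by `gcloud projects get-iam-policy`; the principal audit-user@example.com is a placeholder.

```python
# The IAM policy binding that grants read-only project access, in the
# JSON shape used by the IAM API. The member address is a placeholder.
import json

policy = {
    "version": 1,
    "bindings": [
        {
            "role": "roles/viewer",
            "members": ["user:audit-user@example.com"],
        }
    ],
}

print(json.dumps(policy, indent=2))
```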

Option B is incorrect because roles/browser provides the ability to browse the resource hierarchy (view organization, folders, and projects) but doesn’t grant read access to resources within projects. Browser is useful for navigating project structure without accessing actual resources.

Option C is incorrect because roles/editor grants extensive permissions to create, modify, and delete most resources within a project. Editor can change configurations, deploy applications, and manage services—far exceeding read-only requirements and violating least privilege principles.

Option D is incorrect because roles/owner grants full administrative control including all Editor permissions plus the ability to manage IAM policies, set up billing, and delete projects. Owner provides maximum privileges and should be restricted to trusted administrators.

Question 58

Your application running on Compute Engine needs to access Cloud Storage buckets. What is the recommended way to authenticate the application without embedding credentials in code?

A) Use the service account attached to the Compute Engine instance

B) Create a service account key and store it in the application directory

C) Use OAuth 2.0 user credentials

D) Hardcode an API key in the application code

Answer: A

Explanation:

The correct answer is option A. Using the service account attached to the Compute Engine instance is the recommended Google Cloud best practice for authentication. This approach leverages Google’s built-in Application Default Credentials (ADC) mechanism, eliminating the need for explicit credential management or key files in your application.

When you create a Compute Engine instance, you can specify a service account (or use the default Compute Engine service account). The instance automatically obtains short-lived OAuth 2.0 access tokens from the metadata server, which your application can access through standard Google Cloud client libraries. This authentication happens transparently—your code simply calls Cloud Storage APIs, and the client library automatically retrieves credentials from the metadata server. This approach provides several benefits: no credential files to secure or rotate, automatic token refresh, reduced security risk from exposed credentials, and simplified deployment. You grant the service account appropriate IAM roles (like roles/storage.objectViewer) to control access. This pattern extends to other Google Cloud services and represents the most secure authentication method for workloads running on Google Cloud.
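The ADC lookup order can be made concrete with a simplified sketch. The real logic lives inside the google-auth library; this function merely mirrors the documented search order (explicit key file via environment variable, then local gcloud ADC file, then the metadata server) and is not part of any Google API.

```python
# Simplified sketch of the Application Default Credentials search order.
# Mirrors the documented precedence; the real logic is in google-auth.
from typing import Optional

def resolve_adc_source(
    env: dict, adc_file_exists: bool, on_gce: bool
) -> Optional[str]:
    # 1. An explicit key file pointed to by the environment variable.
    if env.get("GOOGLE_APPLICATION_CREDENTIALS"):
        return "key-file"
    # 2. User credentials from `gcloud auth application-default login`.
    if adc_file_exists:
        return "gcloud-adc-file"
    # 3. The attached service account, via the metadata server.
    if on_gce:
        return "metadata-server"
    return None

# On a plain Compute Engine instance nothing is configured locally, so
# the attached service account wins:
print(resolve_adc_source(env={}, adc_file_exists=False, on_gce=True))
# -> metadata-server
```

This is why simply attaching a service account works: with no key file or local credentials present, the client libraries fall through to the metadata server automatically.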

Option B is incorrect because creating and storing service account key files introduces security risks. Key files are long-lived credentials that can be stolen, accidentally committed to version control, or misused. Google strongly recommends avoiding service account keys whenever possible, using workload identity or attached service accounts instead.

Option C is incorrect because OAuth 2.0 user credentials are designed for applications acting on behalf of end users, not for server-to-server authentication. User credentials require interactive login flows inappropriate for automated application access to Cloud Storage.

Option D is incorrect because hardcoding API keys in application code is a critical security antipattern. Keys can be exposed through source code repositories, decompiled binaries, or log files. API keys also provide limited access control compared to service account authentication with granular IAM permissions.

Question 59

You need to deploy a containerized application that automatically scales based on HTTP traffic. Which Google Cloud service should you use?

A) Cloud Run

B) Compute Engine with instance templates

C) App Engine Standard Environment

D) Cloud Functions

Answer: A

Explanation:

The correct answer is option A. Cloud Run is a fully managed serverless platform specifically designed for deploying containerized applications with automatic scaling based on incoming HTTP requests. Cloud Run scales automatically from zero to handle traffic, charging only for actual usage during request processing.

Cloud Run accepts container images from Container Registry or Artifact Registry and deploys them as services that automatically scale up when receiving HTTP requests and scale down to zero during idle periods. The platform handles infrastructure management, load balancing, SSL certificate provisioning, and traffic routing. Cloud Run supports any language or framework that responds to HTTP requests, making it extremely flexible. Scaling is rapid: new container instances typically start within a few seconds to absorb traffic spikes. You configure maximum instances, concurrency (requests per container), CPU and memory allocation, and minimum instances if cold starts must be avoided. Cloud Run integrates with Cloud Build for CI/CD, Identity-Aware Proxy for authentication, and VPC for private networking. This serverless model eliminates infrastructure management while providing containerization flexibility and cost efficiency through precise pay-per-use pricing.
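The relationship between concurrency and instance count can be sketched with a back-of-the-envelope calculation: roughly the concurrent request load divided by the per-instance concurrency limit, clamped to the configured minimum and maximum. The real scheduler also factors in CPU utilization and startup latency, so treat this as illustrative only; 80 is Cloud Run's default concurrency limit.

```python
# Rough sketch of Cloud Run request-based autoscaling: instances needed
# is approximately ceil(load / concurrency), clamped to [min, max].
# Illustrative only; the real scheduler considers more signals.
import math

def estimated_instances(
    concurrent_requests: int,
    concurrency: int = 80,     # default requests-per-instance limit
    min_instances: int = 0,
    max_instances: int = 100,
) -> int:
    needed = math.ceil(concurrent_requests / concurrency)
    return max(min_instances, min(needed, max_instances))

print(estimated_instances(0))    # 0 -- scales to zero when idle
print(estimated_instances(400))  # 5 -- 400 requests / 80 per instance
```

Raising concurrency lets fewer instances absorb the same load (cheaper, but each container must be thread-safe under parallel requests), while setting min_instances above zero trades idle cost for the elimination of cold starts.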

Option B is incorrect because Compute Engine with instance templates requires manual configuration of managed instance groups with autoscaling policies. While this provides infrastructure control, it requires significantly more setup and management compared to Cloud Run’s serverless approach. Compute Engine is better for non-containerized workloads or applications requiring specific OS-level customizations.

Option C is incorrect because App Engine Standard Environment, while offering automatic scaling, has runtime and language restrictions. App Engine Standard supports specific language versions (Python, Java, Node.js, PHP, Ruby, Go) and doesn’t accept arbitrary container images. Cloud Run provides greater flexibility with any containerized application.

Option D is incorrect because Cloud Functions is designed for event-driven, single-purpose functions, not full containerized applications. Functions have time limits (up to 9 minutes), are best for specific triggers (Pub/Sub, Storage events), and don’t support arbitrary container images. Cloud Run is more appropriate for HTTP-based containerized services.

Question 60

You want to implement a messaging system that decouples services and ensures messages are not lost even if the consuming application is temporarily unavailable. Which Google Cloud service should you use?

A) Cloud Pub/Sub

B) Cloud Storage

C) Firestore

D) Cloud SQL

Answer: A

Explanation:

The correct answer is option A. Cloud Pub/Sub is Google’s managed messaging service designed specifically for asynchronous communication between decoupled services. It provides reliable, scalable message delivery with guaranteed message persistence and at-least-once delivery semantics, ensuring messages aren’t lost even when consumers are temporarily unavailable.

Cloud Pub/Sub uses a publish-subscribe model where publishers send messages to topics, and subscribers receive messages from subscriptions attached to those topics. Messages are stored reliably until acknowledged by subscribers or until the configured message retention period expires (7 days by default, configurable up to 31 days). If a subscriber is down, messages accumulate and are delivered when the subscriber becomes available again. This decoupling allows services to operate independently—publishers don’t need knowledge of subscribers, and systems can scale independently. Pub/Sub supports both push subscriptions (messages pushed to webhook endpoints) and pull subscriptions (applications poll for messages). The service handles message ordering, filtering, dead-letter topics for problematic messages, and integrates with Cloud Functions, Cloud Run, and Dataflow for event-driven architectures. Pub/Sub is ideal for event notifications, streaming analytics pipelines, workload queuing, and microservice communication.
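The durability guarantee above can be illustrated with a local sketch: a message stays in the subscription until it is explicitly acknowledged, so a subscriber that was offline still receives it on return. This is a toy model, not Pub/Sub code; the class name and ack-ID scheme are invented, and real applications would use the google-cloud-pubsub library.

```python
# Local sketch of Pub/Sub durability: a message is retained until it is
# acknowledged, surviving subscriber downtime. Illustrative only.
import itertools

class DurableSubscription:
    def __init__(self) -> None:
        self._ids = itertools.count(1)
        self._unacked = {}

    def deliver(self, message: str) -> int:
        ack_id = next(self._ids)
        self._unacked[ack_id] = message   # retained until acked
        return ack_id

    def pending(self) -> list:
        # What gets redelivered when the subscriber comes back online.
        return list(self._unacked.values())

    def ack(self, ack_id: int) -> None:
        self._unacked.pop(ack_id, None)   # only now is the message dropped

sub = DurableSubscription()
ack_id = sub.deliver("invoice-42")       # subscriber is offline
print(sub.pending())                     # ['invoice-42'] -- not lost
sub.ack(ack_id)                          # subscriber processed it
print(sub.pending())                     # []
```

The key point mirrored here is that loss of a consumer never loses a message: only a successful acknowledgment (or retention expiry) removes it from the subscription.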

Option B is incorrect because Cloud Storage is an object storage service designed for storing files and blobs, not for message queuing or real-time service communication. While you could technically use Storage bucket notifications for events, this doesn’t provide the messaging semantics, reliability guarantees, or subscriber management that Pub/Sub offers.

Option C is incorrect because Firestore is a NoSQL document database designed for storing and querying structured data, not for message queuing. While Firestore supports real-time listeners for data changes, it’s not designed for reliable message delivery patterns or decoupled service communication.

Option D is incorrect because Cloud SQL is a managed relational database service (MySQL, PostgreSQL, SQL Server) designed for transactional data storage and structured queries. Using a database for message queuing creates unnecessary complexity, performance bottlenecks, and doesn’t provide the built-in reliability and scaling features of purpose-built messaging services like Pub/Sub.

 
