Google Associate Cloud Engineer Exam Dumps and Practice Test Questions Set 4 Q 61-80


Question 61

You need to create a Cloud Storage bucket that will store sensitive financial data and must be encrypted with customer-managed encryption keys (CMEK). Which Google Cloud service should you use to manage the encryption keys?

A) Cloud Key Management Service (Cloud KMS)

B) Secret Manager

C) Cloud HSM

D) Certificate Authority Service

Answer: A

The correct answer is option A. Cloud Key Management Service (Cloud KMS) is Google Cloud’s centralized key management solution that allows you to create, manage, and use customer-managed encryption keys (CMEK) for encrypting data across Google Cloud services including Cloud Storage. When you configure a Cloud Storage bucket with CMEK, Cloud KMS controls the encryption keys used to protect your data, providing an additional layer of security and control beyond Google’s default encryption.

Cloud KMS enables you to create key rings and cryptographic keys within specific regions or multi-regions, matching your data residency requirements. You can configure automatic key rotation, set IAM policies controlling who can use keys for encryption and decryption operations, and audit all key usage through Cloud Logging. When using CMEK with Cloud Storage, the service encrypts your objects with data encryption keys (DEKs), which are themselves encrypted by your Cloud KMS keys (key encryption keys or KEKs). This envelope encryption approach provides both performance and security. You maintain control over access to your data—if you disable or destroy a Cloud KMS key, the encrypted data becomes inaccessible even to Google. Cloud KMS integrates seamlessly with Cloud Storage, BigQuery, Compute Engine persistent disks, and other services requiring customer-managed encryption. You can also use external keys through Cloud External Key Manager (Cloud EKM) for keys stored in third-party key management systems.
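The key ring, key, and CMEK-enabled bucket described above can be created with a few gcloud commands. This is a sketch with hypothetical resource names; it also assumes the Cloud Storage service agent has already been granted roles/cloudkms.cryptoKeyEncrypterDecrypter on the key, which Cloud Storage needs before it can use the key.

```shell
# Hypothetical names throughout. Create a key ring and a symmetric key
# in the same location as the bucket, then set the key as the bucket's
# default encryption key (CMEK).
gcloud kms keyrings create finance-keyring --location=us-central1
gcloud kms keys create finance-key \
    --keyring=finance-keyring --location=us-central1 \
    --purpose=encryption
gcloud storage buckets create gs://finance-data-bucket \
    --location=us-central1 \
    --default-encryption-key=projects/my-project/locations/us-central1/keyRings/finance-keyring/cryptoKeys/finance-key
```

Note that the key's location must be compatible with the bucket's location; a key in us-central1 cannot encrypt a bucket in europe-west1.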

Option B is incorrect because Secret Manager is designed for storing application secrets like API keys, passwords, and certificates, not cryptographic keys for data encryption. While Secret Manager secures sensitive strings, it doesn’t provide the key management capabilities required for CMEK implementations.

Option C is incorrect because Cloud HSM is a hardware security module service for applications requiring FIPS 140-2 Level 3 certified hardware, not the standard service for CMEK. While Cloud KMS can use HSM-backed keys, Cloud HSM is a specialized offering for specific compliance requirements.

Option D is incorrect because Certificate Authority Service manages SSL/TLS certificates and certificate authorities, not encryption keys for data at rest. This service is for PKI infrastructure, not storage encryption key management.

Question 62

You are deploying an application to App Engine that requires environment-specific configuration values. What is the recommended way to provide these configuration values?

A) Use app.yaml environment variables

B) Hardcode values in the application source code

C) Store values in a separate configuration file in the repository

D) Pass values as command-line arguments

Answer: A

The correct answer is option A. Using environment variables defined in the app.yaml file is the recommended Google Cloud best practice for providing configuration values to App Engine applications. This approach separates configuration from code, allowing you to deploy the same application code across different environments (development, staging, production) with environment-specific settings.

In the app.yaml configuration file, you define environment variables under the env_variables section. These variables are injected into your application’s runtime environment and can be accessed through standard environment variable methods in your programming language. This pattern supports different configurations per service or version, enables configuration management without exposing sensitive values in source code, and simplifies deployment automation. For sensitive values like database passwords or API keys, you should store them in Secret Manager rather than placing them directly in app.yaml; unlike Cloud Run and Cloud Functions, App Engine has no native mechanism for referencing secrets from app.yaml, so your application fetches them at startup using the Secret Manager client library. This combination provides both convenience and security. Environment variables in app.yaml support various use cases including database connection strings, feature flags, third-party service endpoints, and application behavior settings. The approach follows twelve-factor app methodology and is standard across Google Cloud deployment platforms.
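A minimal app.yaml using env_variables might look like the following sketch (the runtime and variable names are illustrative, not prescribed by the question):

```yaml
# Hypothetical app.yaml for a Python service. Values under
# env_variables appear as ordinary environment variables at runtime,
# e.g. os.environ["DB_HOST"] in Python.
runtime: python312
env_variables:
  DB_HOST: "10.0.0.5"
  FEATURE_FLAG_NEW_UI: "true"
  API_ENDPOINT: "https://api.example.com"
```

Each environment (dev, staging, production) keeps its own app.yaml variant, so the same application code deploys everywhere unchanged.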

Option B is incorrect because hardcoding configuration values in source code creates significant maintenance problems, security risks, and deployment inflexibility. Changes require code modifications and redeployment, and sensitive values exposed in source code can be accidentally committed to version control.

Option C is incorrect because while configuration files in repositories can work, they create security risks for sensitive values and require application code to read and parse them. This approach is less flexible than environment variables and doesn’t integrate with Google Cloud’s configuration management services.

Option D is incorrect because App Engine doesn’t support passing command-line arguments to applications in the traditional sense. App Engine’s deployment model uses declarative configuration in app.yaml rather than imperative command-line parameters for configuration.

Question 63

You need to implement a solution that automatically executes code in response to Cloud Storage events when files are uploaded to a bucket. Which service should you use?

A) Cloud Functions

B) Cloud Run

C) Compute Engine with cron jobs

D) App Engine

Answer: A

The correct answer is option A. Cloud Functions is Google’s event-driven serverless compute platform specifically designed for executing code in response to cloud events like Cloud Storage uploads. Cloud Functions automatically scales, manages infrastructure, and charges only for actual execution time, making it ideal for event-driven processing workflows.

When you deploy a Cloud Function with a Cloud Storage trigger, you specify the bucket and event type (like finalize/create for new objects). When files are uploaded, Cloud Functions automatically invokes your function with event metadata including bucket name, file name, content type, and timestamps. Functions can process uploaded files for various purposes: image resizing, data transformation, file validation, triggering downstream workflows, or sending notifications. The serverless model means you write only the processing logic without managing servers, scaling configuration, or load balancers. Functions support multiple runtimes including Node.js, Python, Go, Java, and .NET, with execution timeouts of up to 9 minutes for event-driven functions (second-generation HTTP functions can run for up to 60 minutes). Cloud Functions integrates with other services—you can write processed results to Cloud Storage, publish messages to Pub/Sub, update databases, or call external APIs. For long-running or containerized workloads, you might use Cloud Run with Eventarc, but Cloud Functions provides simpler deployment for straightforward event processing.
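A handler for the finalize event described above can be sketched in Python. The event payload fields shown (bucket, name, contentType) match what Cloud Storage sends to first-generation background functions; the function and bucket names are hypothetical, and the local invocation at the bottom simply simulates an event for testing.

```python
def process_upload(event, context):
    """Log metadata for a newly finalized Cloud Storage object.

    `event` is the Cloud Storage event payload; `context` carries
    event ID and timestamp metadata (unused here).
    """
    bucket = event["bucket"]
    name = event["name"]
    content_type = event.get("contentType", "unknown")
    message = f"File {name} ({content_type}) uploaded to {bucket}"
    print(message)
    return message

if __name__ == "__main__":
    # Simulate an upload event locally, e.g. in a unit test.
    fake_event = {"bucket": "my-bucket", "name": "report.csv",
                  "contentType": "text/csv"}
    process_upload(fake_event, None)
```

Deployed with a trigger such as `--trigger-bucket=my-bucket`, this function runs once per uploaded object with no polling or server management.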

Option B is incorrect because while Cloud Run can respond to events through Eventarc integration, it’s more complex to set up than Cloud Functions for simple event-driven processing. Cloud Run is better suited for containerized applications requiring custom runtimes or longer execution times.

Option C is incorrect because Compute Engine with cron jobs requires manual infrastructure management, doesn’t automatically respond to events, and must poll for changes rather than reacting instantly to uploads. This approach is inefficient and costly compared to event-driven functions.

Option D is incorrect because App Engine is designed for serving web applications and APIs, not event-driven file processing. While App Engine can process files through HTTP endpoints, it doesn’t natively integrate with Cloud Storage events and requires additional infrastructure for event handling.

Question 64

You want to analyze logs from multiple Google Cloud services in one centralized location. Which service provides unified log management and analysis?

A) Cloud Logging

B) Cloud Monitoring

C) Cloud Trace

D) Cloud Profiler

Answer: A

The correct answer is option A. Cloud Logging is Google Cloud’s centralized logging service that collects, stores, analyzes, and monitors log data from Google Cloud services, on-premises systems, and other cloud providers. Cloud Logging automatically ingests logs from most Google Cloud services without additional configuration, providing unified visibility across your entire infrastructure.

Cloud Logging receives logs from Compute Engine instances, GKE clusters, App Engine applications, Cloud Functions, load balancers, and virtually all Google Cloud services. The service provides a unified Logs Explorer interface for searching, filtering, and analyzing logs using powerful query syntax. You can create log-based metrics to extract quantitative data from logs for monitoring and alerting, set up log sinks to export logs to BigQuery for long-term analysis, Cloud Storage for archival, or Pub/Sub for real-time processing. Cloud Logging supports structured logging with JSON payloads, enabling rich querying capabilities based on specific fields. The service includes log retention policies, error reporting integration, and IAM-based access controls for security. For applications running on Compute Engine or GKE, you can install the Ops Agent (the successor to the legacy Logging agent) to collect application and system logs. Cloud Logging integrates with Cloud Monitoring for creating alerts based on log patterns and with Error Reporting for automated error detection and aggregation.
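The structured logging capability mentioned above can be exercised with plain JSON output. On App Engine, Cloud Run, Cloud Functions, and GKE, Cloud Logging parses JSON lines written to stdout into structured entries whose fields (severity, message, and any custom keys) become queryable in Logs Explorer; the helper below is a minimal sketch of that pattern.

```python
import json

def log_structured(severity, message, **fields):
    """Emit one structured log entry as a JSON line on stdout.

    Cloud Logging recognizes the "severity" and "message" keys;
    any extra keyword arguments become queryable jsonPayload fields.
    """
    entry = {"severity": severity, "message": message, **fields}
    line = json.dumps(entry)
    print(line)
    return line

log_structured("INFO", "order processed", order_id=1234, amount_usd=19.99)
```

A Logs Explorer query such as `jsonPayload.order_id=1234` then matches this entry directly, without text searching.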

Option B is incorrect because Cloud Monitoring focuses on metrics, dashboards, and alerting for performance and availability monitoring, not log analysis. While Monitoring works with Logging through log-based metrics, it doesn’t provide log collection and analysis capabilities.

Option C is incorrect because Cloud Trace is a distributed tracing system for analyzing application latency and performance bottlenecks in microservices architectures. Trace shows request paths through multiple services but doesn’t provide general log management.

Option D is incorrect because Cloud Profiler analyzes application performance by collecting CPU and memory usage profiles, helping identify code-level performance issues. Profiler doesn’t collect or analyze logs—it focuses on application performance profiling.

Question 65

You need to deploy a highly available MySQL database that automatically handles failover and backups. Which Google Cloud service should you use?

A) Cloud SQL

B) Cloud Spanner

C) Firestore

D) Bigtable

Answer: A

The correct answer is option A. Cloud SQL is Google’s fully managed relational database service that supports MySQL, PostgreSQL, and SQL Server with built-in high availability, automatic failover, and automated backup capabilities. Cloud SQL handles infrastructure management, database patching, replication configuration, and backup scheduling, allowing you to focus on application development.

For high availability, Cloud SQL supports regional configurations with primary and standby instances in different zones. When you enable high availability, Cloud SQL automatically replicates data synchronously to the standby instance and performs automatic failover if the primary becomes unavailable, typically completing failover in seconds to minutes. Cloud SQL manages automated daily backups (retained for up to 365 days) and point-in-time recovery, allowing you to restore databases to any moment within the transaction log retention window. You can also create on-demand backups before major changes. The service provides automatic storage increases when approaching capacity limits, secure connections through the Cloud SQL Auth Proxy, and integration with VPC for private IP addressing. Cloud SQL supports read replicas for scaling read operations, maintenance windows for updates, and flags for MySQL configuration customization. For applications requiring MySQL compatibility with minimal operational overhead, Cloud SQL provides an ideal balance of managed services convenience and database functionality.
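The regional HA and backup configuration described above maps to a single instance-creation command. This is a sketch with hypothetical names and a machine tier chosen for illustration:

```shell
# Hypothetical names. --availability-type=REGIONAL provisions the
# synchronous standby in a second zone; --enable-bin-log turns on the
# binary logging MySQL needs for point-in-time recovery.
gcloud sql instances create finance-mysql \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-2 \
    --region=us-central1 \
    --availability-type=REGIONAL \
    --backup-start-time=02:00 \
    --enable-bin-log
```

Failover to the standby after this point requires no application changes, since clients connect to the instance's IP or connection name rather than a specific zone.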

Option B is incorrect because Cloud Spanner is a globally distributed, horizontally scalable relational database designed for planet-scale applications requiring strong consistency. While Spanner is highly available, it uses a different architecture and pricing model than traditional MySQL, making it unsuitable for simple MySQL database requirements.

Option C is incorrect because Firestore is a NoSQL document database designed for mobile and web applications, not a relational database. Firestore doesn’t support SQL queries or MySQL compatibility, making it inappropriate for applications requiring MySQL.

Option D is incorrect because Bigtable is a NoSQL wide-column database designed for large-scale analytical and operational workloads. Bigtable doesn’t support SQL, lacks MySQL compatibility, and is designed for different use cases than traditional relational databases.

Question 66

You are designing a solution that needs to store and retrieve large media files with low latency from users around the world. Which Google Cloud service provides global content delivery with caching?

A) Cloud CDN

B) Cloud Storage

C) Persistent Disk

D) Filestore

Answer: A

The correct answer is option A. Cloud CDN (Content Delivery Network) is Google’s globally distributed edge caching service that accelerates content delivery by serving cached content from edge locations closest to users. Cloud CDN integrates with Cloud Load Balancing and Cloud Storage to deliver static and dynamic content with low latency worldwide.

When you enable Cloud CDN for a load balancer backend connected to Cloud Storage buckets or Compute Engine backends, Google’s global edge network caches content at over 100 edge locations across six continents. When users request content, Cloud CDN serves it from the nearest edge location if cached, dramatically reducing latency compared to retrieving from origin servers. For cache misses, Cloud CDN fetches content from the origin and caches it for subsequent requests. You can configure cache keys, time-to-live values, signed URLs for secure content delivery, and cache invalidation rules. Cloud CDN supports both HTTP and HTTPS, automatic compression, and request coalescing to reduce origin load. The service is particularly effective for static assets like images, videos, JavaScript, CSS, and downloadable files. Cloud CDN pricing is based on cache egress and HTTP requests, often significantly reducing costs compared to serving all content from origin servers. For media files requiring global distribution with low latency, combining Cloud Storage for storage with Cloud CDN for delivery provides optimal performance and cost efficiency.
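The Cloud Storage plus Cloud CDN combination described above is wired together through a backend bucket on an external HTTP(S) load balancer. A minimal sketch, with hypothetical bucket and resource names (the full load balancer also needs a target proxy and forwarding rule, omitted here for brevity):

```shell
# Hypothetical names. Expose an existing Cloud Storage bucket as a
# CDN-enabled load balancer backend.
gcloud compute backend-buckets create media-backend \
    --gcs-bucket-name=my-media-bucket \
    --enable-cdn
gcloud compute url-maps create media-lb \
    --default-backend-bucket=media-backend
```

Once traffic flows, cache hit rates are visible in Cloud Monitoring, and `gcloud compute url-maps invalidate-cdn-cache` can purge stale objects after content updates.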

Option B is incorrect because while Cloud Storage stores objects reliably, it doesn’t provide edge caching or content delivery network capabilities. Users far from the storage bucket location experience higher latency without CDN caching.

Option C is incorrect because Persistent Disk provides block storage for Compute Engine instances, not object storage for media files or global content delivery. Persistent disks are attached to specific instances in specific zones, lacking global distribution.

Option D is incorrect because Filestore is a managed NFS file storage service for applications requiring shared file systems. Filestore doesn’t provide object storage interfaces or content delivery network capabilities needed for serving media files globally.

Question 67

You need to implement a deployment strategy that gradually shifts traffic from an old version to a new version of your application. Which Cloud Run feature should you use?

A) Traffic splitting

B) Autoscaling

C) VPC connector

D) Service mesh

Answer: A

The correct answer is option A. Traffic splitting in Cloud Run allows you to distribute incoming requests across multiple revisions of a service, enabling gradual rollouts, A/B testing, and blue-green deployments. You can specify percentage-based traffic allocation to safely introduce new versions while maintaining the ability to quickly rollback if issues arise.

When you deploy a new revision to Cloud Run, you can configure traffic splitting to route a percentage of requests to the new version while the majority continues to the stable version. For example, you might route 95% of traffic to the current stable revision and 5% to the new revision for initial testing. If the new version performs well, you gradually increase its traffic percentage until reaching 100%. Cloud Run supports splitting traffic across up to 50 revisions simultaneously, though practical deployments typically use 2-3 revisions. Traffic splitting decisions are made per-request, not per-user, so the same user might hit different revisions on subsequent requests unless you implement session affinity in your application. You can configure traffic splits through the Cloud Console, gcloud CLI, or deployment automation tools. This capability enables sophisticated deployment strategies including canary deployments for testing with real production traffic, blue-green deployments for instant rollback capability, and feature flag implementations where different revisions represent different feature sets.
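The 95/5 canary split described above is a single gcloud command; promoting the new revision afterward is another. Service and revision names here are hypothetical:

```shell
# Hypothetical names. Route 5% of requests to the canary revision
# while the stable revision keeps the rest.
gcloud run services update-traffic my-service \
    --region=us-central1 \
    --to-revisions=my-service-00042=5,my-service-00041=95

# Once the canary proves healthy, send all traffic to the latest revision.
gcloud run services update-traffic my-service \
    --region=us-central1 --to-latest
```

Because the old revision remains deployed, rollback is the same command with the percentages reversed, taking effect in seconds.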

Option B is incorrect because autoscaling controls how Cloud Run scales container instances based on request load, not how traffic is distributed between different application versions. Autoscaling adjusts capacity, while traffic splitting controls version rollout strategies.

Option C is incorrect because VPC connector enables Cloud Run services to connect to resources in VPC networks, not to distribute traffic between application versions. VPC connectors are about network connectivity, not deployment strategies.

Option D is incorrect because while service meshes provide advanced traffic management, Cloud Run’s built-in traffic splitting feature provides sufficient capabilities for most version rollout scenarios without the complexity of deploying and managing a service mesh.

Question 68

You want to monitor the CPU utilization of your Compute Engine instances and receive alerts when utilization exceeds 80%. Which service should you configure?

A) Cloud Monitoring with alerting policies

B) Cloud Logging with log sinks

C) Cloud Trace

D) Cloud Profiler

Answer: A

The correct answer is option A. Cloud Monitoring is Google Cloud’s comprehensive monitoring solution that collects metrics, creates dashboards, and configures alerting policies for infrastructure and application monitoring. For Compute Engine instances, Cloud Monitoring automatically collects system metrics including CPU utilization without requiring agent installation, though installing the Monitoring agent provides additional metrics.

To implement CPU utilization monitoring with alerts, you create an alerting policy in Cloud Monitoring that specifies the metric (compute.googleapis.com/instance/cpu/utilization), threshold condition (greater than 80%), duration window, and notification channels. When instances exceed the threshold for the specified duration, Cloud Monitoring triggers alerts through configured channels including email, SMS, Slack, PagerDuty, or webhooks. You can configure alerting policies with multiple conditions, group alerts by resource labels or metadata, set auto-close duration, and create notification templates. Cloud Monitoring provides pre-built dashboards for common services and allows custom dashboard creation with various visualization types. The service supports uptime checks for endpoint availability monitoring, Service Level Objectives (SLOs) for reliability tracking, and integration with Cloud Logging for log-based metrics. For comprehensive monitoring, you can install the Ops Agent on Compute Engine instances to collect additional system metrics, application metrics, and custom metrics published through the Monitoring API.
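As a sketch, the alerting policy described above can be expressed as a YAML file and created with `gcloud alpha monitoring policies create --policy-from-file=policy.yaml` (display names are hypothetical; the metric reports utilization as a fraction, so 80% is written as 0.8):

```yaml
# Hypothetical policy.yaml for the 80% CPU alert.
displayName: "High CPU on Compute Engine"
combiner: OR
conditions:
  - displayName: "CPU above 80% for 5 minutes"
    conditionThreshold:
      filter: >
        resource.type = "gce_instance" AND
        metric.type = "compute.googleapis.com/instance/cpu/utilization"
      comparison: COMPARISON_GT
      thresholdValue: 0.8
      duration: 300s
```

The duration field prevents alert noise from brief spikes: the condition must hold for the full five minutes before the policy fires.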

Option B is incorrect because Cloud Logging manages log collection and analysis, not metric monitoring and alerting. While you can create log-based metrics and alerts on log patterns, this is inefficient for monitoring numeric metrics like CPU utilization that are natively available in Cloud Monitoring.

Option C is incorrect because Cloud Trace provides distributed tracing for analyzing request latency across microservices, not infrastructure metric monitoring. Trace helps identify performance bottlenecks in application code, not monitor resource utilization.

Option D is incorrect because Cloud Profiler analyzes application performance by collecting CPU and memory profiles to identify code-level inefficiencies. Profiler doesn’t monitor infrastructure metrics or provide alerting capabilities for resource utilization thresholds.

Question 69

You need to grant a service account permission to read objects from a specific Cloud Storage bucket. Which IAM role should you assign at the bucket level?

A) roles/storage.objectViewer

B) roles/storage.admin

C) roles/storage.objectCreator

D) roles/viewer

Answer: A

The correct answer is option A. The roles/storage.objectViewer predefined IAM role grants read-only access to objects within Cloud Storage buckets, allowing the service account to list and download objects without the ability to modify or delete them. This role follows the principle of least privilege by providing only the necessary permissions for reading objects.

The storage.objectViewer role includes permissions for storage.objects.get and storage.objects.list, enabling the service account to retrieve object contents and list objects in the bucket. You assign this role at the bucket level using IAM policies, either through the Cloud Console, gcloud CLI, or Infrastructure as Code tools like Terraform. When assigned at the bucket level, the permissions apply only to objects within that specific bucket, providing granular access control. For applications requiring read-only access to stored data—such as serving static website content, processing uploaded files, or reading configuration data—objectViewer provides appropriate permissions without granting unnecessary modification capabilities. If the service account needs to create or modify objects, you would use objectCreator or objectAdmin roles. For comprehensive access control strategies, you can combine bucket-level IAM with bucket-level access control lists (ACLs) or object-level ACLs for more granular permissions, though IAM is the recommended approach for new implementations.
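Granting the role at the bucket level as described above is a one-line IAM binding. Bucket, project, and service account names below are hypothetical:

```shell
# Hypothetical names. Grant read-only object access on one bucket only.
gcloud storage buckets add-iam-policy-binding gs://my-data-bucket \
    --member="serviceAccount:reader-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```

Because the binding is on the bucket rather than the project, the service account gains no access to any other bucket, keeping the grant aligned with least privilege.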

Option B is incorrect because roles/storage.admin grants full administrative control over Cloud Storage including bucket creation, deletion, IAM policy modification, and all object operations. This exceeds the read-only requirement and violates the principle of least privilege.

Option C is incorrect because roles/storage.objectCreator grants permission to create and overwrite objects but doesn’t include read permissions. This role is for write-only scenarios like log collection systems that upload files without needing to read existing objects.

Option D is incorrect because roles/viewer is a project-level role granting read access to all resources in the project, which is unnecessarily broad. Using storage.objectViewer at the bucket level provides more precise access control limited to the specific bucket.

Question 70

You are deploying a web application that needs to scale automatically based on incoming HTTP requests and handle traffic from users globally. Which combination of services should you use?

A) Cloud Load Balancing and Compute Engine managed instance groups

B) Cloud Run only

C) Cloud Functions and Cloud Storage

D) App Engine and Cloud SQL

Answer: A

The correct answer is option A. Combining Cloud Load Balancing with Compute Engine managed instance groups provides a robust, globally distributed solution for web applications requiring automatic scaling and global traffic handling. Cloud Load Balancing distributes traffic across multiple backend instances, while managed instance groups automatically add or remove instances based on load.

Cloud Load Balancing offers global HTTP(S) load balancing that routes users to the nearest healthy backend based on geographic proximity and instance health, reducing latency and improving availability. The load balancer provides SSL/TLS termination, Cloud CDN integration for caching static content, Cloud Armor for DDoS protection and WAF capabilities, and URL-based routing for microservices architectures. Managed instance groups use instance templates to define VM configuration and autoscaling policies that specify target CPU utilization, custom metrics, or request count. When load increases, the instance group automatically creates additional instances; when load decreases, it terminates unnecessary instances, optimizing costs. This combination supports various deployment patterns including blue-green deployments through backend service updates, canary releases through traffic splitting, and multi-region deployments for disaster recovery. For maximum reliability, you deploy instance groups across multiple zones or regions with appropriate health checks ensuring traffic reaches only healthy instances.
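The managed instance group and autoscaling policy described above can be sketched as follows, assuming an instance template named web-template already exists (all names are hypothetical; the load balancer frontend is configured separately):

```shell
# Hypothetical names. Create a regional MIG from an existing template,
# then autoscale it between 2 and 10 instances on CPU load.
gcloud compute instance-groups managed create web-mig \
    --region=us-central1 \
    --template=web-template \
    --size=2
gcloud compute instance-groups managed set-autoscaling web-mig \
    --region=us-central1 \
    --min-num-replicas=2 --max-num-replicas=10 \
    --target-cpu-utilization=0.6
```

The regional MIG spreads instances across zones automatically, so a zone outage reduces capacity rather than taking the backend offline.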

Option B is incorrect because while Cloud Run provides automatic scaling and global distribution through Cloud Load Balancing integration, the question implies a VM-based deployment scenario. Cloud Run is excellent for containerized applications but doesn’t use Compute Engine instances.

Option C is incorrect because this combination is suitable for static websites with serverless processing but doesn’t provide the full web application hosting and scaling capabilities implied by the question. Cloud Functions handles events, not serving web applications directly.

Option D is incorrect because while App Engine and Cloud SQL provide application hosting and database services, App Engine is a regional service with its own built-in autoscaling rather than a globally load-balanced VM solution. This answer doesn’t specifically address the global load balancing aspect highlighted in the question.

Question 71

You need to execute a data processing job that runs for several hours and can tolerate interruptions. Which Compute Engine feature can reduce costs by up to 91%?

A) Preemptible VM instances

B) Sustained use discounts

C) Committed use contracts

D) Sole-tenant nodes

Answer: A

The correct answer is option A. Preemptible VM instances are short-lived, low-cost compute instances that provide up to 91% discount compared to regular instances, making them ideal for fault-tolerant, batch processing workloads that can handle interruptions. Google Cloud can terminate preemptible instances at any time with 30 seconds notice, but they provide significant cost savings for appropriate workloads.

Preemptible instances are perfect for data processing jobs, batch analytics, video transcoding, scientific simulations, and other tasks that can checkpoint progress and resume after interruption. These instances have a maximum lifetime of 24 hours, after which Google automatically terminates them. Your applications should implement shutdown scripts to handle the 30-second termination notice gracefully, saving state and checkpointing progress for resumption on new instances. Preemptible instances support the same machine types, GPUs, and local SSDs as regular instances, providing identical performance characteristics. You can use preemptible instances with managed instance groups, where the group automatically replaces terminated instances maintaining target capacity. For maximum availability in critical workloads requiring cost optimization, you can mix preemptible and regular instances in instance groups. Preemptible CPUs draw from a separate quota in most regions, and availability depends on Google having spare capacity. While preemptibility introduces complexity requiring fault-tolerant design, the cost savings make preemptible instances compelling for appropriate workloads. Spot VMs are the successor to preemptible instances, offering the same discount range without the fixed 24-hour runtime limit.
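A preemptible worker with a checkpointing shutdown script, as described above, can be sketched like this (instance, bucket, and path names are hypothetical):

```shell
# Hypothetical names. The shutdown script runs during the 30-second
# termination notice and copies the latest checkpoint to Cloud Storage
# so a replacement instance can resume the job.
gcloud compute instances create batch-worker \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --preemptible \
    --metadata=shutdown-script='#!/bin/bash
gsutil cp /tmp/checkpoint.dat gs://my-checkpoint-bucket/'
```

Thirty seconds is a hard limit, so the shutdown script should do only fast, essential work such as flushing a checkpoint, not a full cleanup.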

Option B is incorrect because sustained use discounts provide automatic discounts (up to 30%) for instances running a significant portion of the month, but these apply to regular instances and don’t approach the 91% savings of preemptible instances.

Option C is incorrect because committed use contracts provide discounts (up to 57% for three-year commitments) in exchange for committing to specific resource amounts for one or three years, but these discounts are lower than preemptible pricing.

Option D is incorrect because sole-tenant nodes provide dedicated physical servers for your instances, typically increasing costs rather than reducing them. Sole tenancy addresses compliance or licensing requirements, not cost optimization.

Question 72

You want to enforce organizational policies across multiple projects to prevent users from creating resources in specific regions. Which service should you use?

A) Organization Policy Service

B) IAM policies

C) VPC Service Controls

D) Cloud Asset Inventory

Answer: A

The correct answer is option A. Organization Policy Service allows you to set centralized constraints across your Google Cloud resource hierarchy (organization, folders, projects) to enforce governance requirements and ensure compliance. You can create policies restricting resource locations, limiting resource types, and controlling service usage across your entire organization.

To restrict resource creation to specific regions, you implement the “Restrict Resource Locations” constraint (constraints/gcp.resourceLocations), specifying allowed or denied locations. This constraint prevents users from creating resources outside approved regions regardless of their IAM permissions, ensuring compliance with data residency requirements. Organization policies are inherited down the hierarchy—policies set at the organization level apply to all folders and projects unless overridden. You can set policies at organization, folder, or project levels depending on governance requirements. The service includes Boolean constraints (allow/deny specific behaviors), list constraints (specify allowed/denied values), and custom constraints (define using Common Expression Language). Organization Policy Service works in enforcement mode (blocking non-compliant actions) or dry-run mode (logging violations without blocking) for testing policies before enforcement. This centralized governance approach ensures consistent policy application across your entire cloud environment without relying on individual project administrators to maintain compliance.
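The location constraint described above can be written as a policy file and applied with `gcloud org-policies set-policy policy.yaml`. This sketch uses a hypothetical organization ID and restricts resources to the predefined EU value group:

```yaml
# Hypothetical policy.yaml. "in:eu-locations" is a Google-curated value
# group covering EU regions and multi-regions.
name: organizations/123456789/policies/gcp.resourceLocations
spec:
  rules:
    - values:
        allowedValues:
          - in:eu-locations
```

Set at the organization node, this policy is inherited by every folder and project beneath it, so no individual project owner can opt out of the location restriction.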

Option B is incorrect because IAM policies control who can perform actions on resources, not where resources can be created or what types can be used. IAM provides identity-based access control while Organization Policy Service provides resource-based configuration constraints.

Option C is incorrect because VPC Service Controls create security perimeters around Google Cloud resources to prevent data exfiltration, not enforce resource location restrictions. Service Controls focus on data security boundaries, while Organization Policy Service addresses broader governance requirements.

Option D is incorrect because Cloud Asset Inventory provides visibility into your resource configurations and history but doesn’t enforce policies. Asset Inventory helps you understand what resources exist and track changes, but policy enforcement requires Organization Policy Service.

Question 73

You need to configure a VPC network with specific IP address ranges and subnets in multiple regions. Which VPC subnet creation mode should you use?

A) Custom mode

B) Auto mode

C) Legacy mode

D) Default mode

Answer: A

The correct answer is option A. Custom mode VPC networks allow you to explicitly define subnet IP ranges and their regions, providing complete control over network design and IP address allocation. Custom mode is recommended for production environments requiring specific network architectures, predictable IP addressing, and integration with on-premises networks.

When creating a custom mode VPC, you manually create subnets in desired regions with specific CIDR ranges that don’t overlap. This control enables designing networks that align with your IP addressing scheme, prevent conflicts with on-premises networks for hybrid connectivity, allocate appropriate address space for anticipated growth, and create security boundaries through subnet isolation. Custom mode VPCs support all Google Cloud features including Private Google Access, VPC peering, Cloud VPN, and Cloud Interconnect. You can expand subnet ranges (but not shrink them) as needed, add secondary IP ranges for GKE pods and services, and create multiple subnets per region for different purposes. Custom mode provides flexibility for complex network topologies, multi-tier applications with separate subnets for web, application, and database tiers, and compliance with organizational networking standards. Most enterprise deployments use custom mode VPCs for their production environments, while auto mode might be suitable for development or testing environments.
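Because custom-mode subnets must not overlap with each other or with on-premises ranges, it helps to validate CIDR plans before creating anything. The following sketch uses Python’s standard `ipaddress` module for that check; the helper name and example ranges are illustrative.

```python
import ipaddress

# Sketch: validate that planned custom-mode subnet ranges don't overlap
# with each other (the same check applies against on-premises ranges).
def find_overlaps(cidrs: list[str]) -> list[tuple[str, str]]:
    nets = [ipaddress.ip_network(c) for c in cidrs]
    overlaps = []
    for i in range(len(nets)):
        for j in range(i + 1, len(nets)):
            if nets[i].overlaps(nets[j]):
                overlaps.append((cidrs[i], cidrs[j]))
    return overlaps

# The third range sits inside the first — a planning mistake to catch early.
planned = ["10.10.0.0/24", "10.20.0.0/24", "10.10.0.128/25"]
print(find_overlaps(planned))  # [('10.10.0.0/24', '10.10.0.128/25')]
```

Running a check like this against both your planned Google Cloud subnets and your on-premises CIDR blocks avoids painful renumbering after hybrid connectivity is established, since subnet ranges can be expanded but not shrunk.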

Option B is incorrect because auto mode VPC networks automatically create one subnet in each region with predefined, fixed IP ranges from 10.128.0.0/9. While convenient for quick starts, auto mode doesn’t provide the control needed for production networks requiring specific IP addressing schemes.

Option C is incorrect because legacy networks are deprecated and not recommended for new projects. Legacy networks use a single global IP range rather than regional subnets and lack modern VPC features like VPC peering and subnet expansion.

Option D is incorrect because “default mode” isn’t an official VPC creation mode. Google Cloud creates a default VPC in auto mode for new projects, but the proper terminology for user-created networks is “auto mode” or “custom mode.”

Question 74

You want to analyze large datasets using SQL queries without managing infrastructure. Which Google Cloud service should you use?

A) BigQuery

B) Cloud SQL

C) Cloud Spanner

D) Firestore

Answer: A

The correct answer is option A. BigQuery is Google Cloud’s fully managed, serverless data warehouse designed for analyzing massive datasets using standard SQL queries. BigQuery separates storage and compute, scales automatically to petabytes of data, and charges based on data processed by queries, eliminating infrastructure management overhead.

BigQuery supports standard SQL with extensions for arrays, structs, and nested data, making it accessible to analysts familiar with SQL. The service delivers interactive query performance on terabytes of data through massively parallel processing across Google’s infrastructure. You can load data from Cloud Storage, stream data in real-time, or use federated queries accessing data in Cloud Storage and Cloud SQL without loading it into BigQuery. The service includes built-in machine learning capabilities through BigQuery ML, allowing you to create and train models using SQL syntax. BigQuery supports partitioned and clustered tables for query performance optimization, column-level security for data governance, and scheduled queries for automating analytics workflows. The service integrates with BI tools like Looker, Tableau, and Data Studio for visualization. BigQuery’s pricing model charges for storage (relatively inexpensive) and query computation (based on data processed), with options for flat-rate pricing for predictable costs. For organizations requiring SQL-based analytics on large datasets without infrastructure management, BigQuery provides an ideal serverless analytics platform.
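The “pay for data processed” model is easy to reason about with back-of-envelope arithmetic. The sketch below illustrates it; the per-TiB price is an assumption for illustration, so check current BigQuery pricing before relying on any figure.

```python
# Back-of-envelope sketch of BigQuery's on-demand pricing model, where cost
# tracks bytes scanned by a query. PRICE_PER_TIB is an assumed figure for
# illustration only -- verify against current published pricing.
PRICE_PER_TIB = 6.25  # assumed USD per TiB scanned

def query_cost(bytes_processed: int, price_per_tib: float = PRICE_PER_TIB) -> float:
    tib = bytes_processed / 2**40
    return round(tib * price_per_tib, 4)

# Scanning a single 500 GiB partition instead of a 5 TiB table cuts the
# scanned bytes roughly 10x -- the economic case for partitioned tables.
print(query_cost(5 * 2**40))    # full-table scan
print(query_cost(500 * 2**30))  # partition-pruned scan
```

This is why partitioning and clustering show up so often in BigQuery guidance: they reduce bytes scanned, which reduces both latency and on-demand cost.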

Option B is incorrect because Cloud SQL is a managed relational database service designed for transactional workloads and operational databases, not large-scale analytics. Cloud SQL has storage and compute limitations compared to BigQuery’s data warehouse capabilities.

Option C is incorrect because Cloud Spanner is a globally distributed relational database optimized for transactional consistency and horizontal scalability, not analytics. While Spanner supports SQL, it’s designed for operational workloads requiring strong consistency, not analytical queries on large datasets.

Option D is incorrect because Firestore is a NoSQL document database designed for mobile and web applications, not SQL-based analytics. Firestore doesn’t support SQL queries and is optimized for document retrieval and real-time synchronization, not analytical workloads.

Question 75

You need to connect your on-premises network to Google Cloud with a dedicated, private connection for production workloads requiring predictable performance. Which service should you use?

A) Cloud Interconnect

B) Cloud VPN

C) VPC Peering

D) Cloud Router

Answer: A

The correct answer is option A. Cloud Interconnect provides dedicated physical connections between your on-premises network and Google Cloud, offering higher bandwidth, lower latency, and more predictable performance than internet-based connections. Cloud Interconnect is ideal for production workloads requiring consistent network performance and large data transfers.

Cloud Interconnect offers two options: Dedicated Interconnect for direct physical connections to Google’s network (10 Gbps or 100 Gbps links) at supported colocation facilities, and Partner Interconnect for connections through supported service providers when direct connections aren’t feasible. Dedicated Interconnect provides the highest performance and doesn’t traverse the public internet, ensuring predictable latency and secure connectivity. You can create multiple VLAN attachments over a single physical connection, connecting to different VPC networks or regions. Interconnect supports up to 200 Gbps of total bandwidth through multiple connections with Layer 3 redundancy using Cloud Router and BGP for dynamic route exchange. The service integrates with VPC networks for private IP addressing, supports accessing Google APIs and services through Private Google Access, and provides SLA-backed availability when properly configured with redundant links. For enterprise applications requiring dedicated connectivity, hybrid cloud architectures, or large-scale data migrations, Cloud Interconnect delivers production-grade network connectivity.
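When sizing Interconnect capacity, a common sanity check is whether the topology still meets its bandwidth requirement after losing one link. The sketch below is a naive illustration of that check; the helper names are invented, and Google’s actual SLA-qualifying topologies have specific requirements that this simplification does not capture.

```python
# Naive capacity/redundancy sketch for dedicated links. This simplifies the
# real SLA rules considerably -- consult Google's redundancy documentation
# for actual qualifying topologies.
def total_gbps(links: list[int]) -> int:
    return sum(links)

def survives_single_failure(links: list[int], required_gbps: int) -> bool:
    # Capacity must still meet the requirement with the largest link down.
    return len(links) > 1 and total_gbps(links) - max(links) >= required_gbps

links = [100, 100]  # two 100 Gbps circuits
print(total_gbps(links))                    # 200
print(survives_single_failure(links, 100))  # True
print(survives_single_failure([100], 100))  # False: no redundancy
```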

Option B is incorrect because Cloud VPN provides encrypted connectivity over the public internet, which may experience variable latency and throughput. While VPN is suitable for many scenarios and more cost-effective, it doesn’t provide the dedicated, predictable performance of Interconnect.

Option C is incorrect because VPC Peering connects VPC networks within Google Cloud, not on-premises networks. Peering enables private connectivity between VPCs in the same or different projects but doesn’t extend to on-premises infrastructure.

Option D is incorrect because Cloud Router is a component used with Cloud VPN and Cloud Interconnect to exchange routes dynamically via BGP, not a connectivity service itself. Cloud Router enables dynamic routing but requires VPN or Interconnect for the actual connection.

Question 76

You want to deploy a globally distributed database that provides strong consistency and horizontal scalability for a mission-critical application. Which database service should you use?

A) Cloud Spanner

B) Cloud SQL

C) Firestore

D) Bigtable

Answer: A

The correct answer is option A. Cloud Spanner is Google Cloud’s globally distributed, horizontally scalable relational database that provides strong consistency, high availability, and SQL support for mission-critical applications. Spanner uniquely combines traditional database ACID properties with horizontal scalability and global distribution capabilities.

Cloud Spanner supports standard SQL queries, secondary indexes, and ACID transactions while scaling horizontally to petabytes of data and millions of queries per second. The database uses Google’s globally synchronized TrueTime technology to provide external consistency (strongest consistency model) even across globally distributed deployments. You can configure Spanner instances as regional (within a single region for lower latency and cost) or multi-regional (across multiple geographic regions for highest availability and global reach). Spanner automatically replicates data across zones and regions, handles failover transparently, and provides 99.999% availability SLA for multi-regional configurations. The service supports online schema changes without downtime, automatic sharding as data grows, and integration with tools like Liquibase and Flyway for schema management. Spanner is ideal for financial systems requiring strong consistency, global gaming leaderboards, globally distributed SaaS applications, and supply chain management systems. While Spanner costs more than Cloud SQL, its unique combination of relational model, consistency guarantees, and global scalability justifies the investment for appropriate workloads.
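TrueTime’s role is easier to grasp with a toy model of “commit wait”: pick a commit timestamp at the upper bound of clock uncertainty, then wait until that instant has certainly passed before acknowledging the commit. The sketch below is a greatly simplified illustration; the epsilon value and function name are invented, and real Spanner behavior is far more involved.

```python
import time

# Greatly simplified sketch of TrueTime-style "commit wait". EPSILON stands
# in for the clock-uncertainty bound; the real value comes from TrueTime's
# GPS/atomic-clock infrastructure, not a constant.
EPSILON = 0.002  # assumed clock uncertainty in seconds

def commit_with_wait() -> float:
    now = time.time()
    commit_ts = now + EPSILON        # analogous to TT.now().latest
    while time.time() < commit_ts:   # wait out the uncertainty window
        time.sleep(0.0005)
    return commit_ts                 # now safe to make the commit visible

ts = commit_with_wait()
print(time.time() >= ts)  # True: real time has passed the commit timestamp
```

The waiting step is what lets Spanner guarantee external consistency: once a commit is acknowledged, no later transaction anywhere can receive an earlier timestamp.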

Option B is incorrect because Cloud SQL is a regional managed database service designed for traditional applications requiring MySQL, PostgreSQL, or SQL Server. While Cloud SQL supports high availability and read replicas, it doesn’t provide Spanner’s horizontal scalability or global distribution capabilities.

Option C is incorrect because Firestore is a NoSQL document database rather than a relational one. Although Firestore offers strongly consistent reads and transactions, it doesn’t provide the SQL interface, relational schema, or horizontally scalable relational transactions that Spanner provides. Firestore excels at different use cases like mobile applications requiring real-time synchronization.

Option D is incorrect because Bigtable is a NoSQL wide-column database designed for large-scale analytical and operational workloads requiring low latency, but it doesn’t provide a relational model, standard SQL semantics, or multi-row ACID transactions. Bigtable is optimized for time-series data, IoT telemetry, and analytics, not relational workloads.

Question 77

You need to implement network security controls to restrict traffic between VMs in different subnets within the same VPC. Which feature should you use?

A) VPC firewall rules

B) Cloud Armor

C) Identity-Aware Proxy

D) VPC Service Controls

Answer: A

The correct answer is option A. VPC firewall rules control traffic between instances within a VPC network, allowing you to define ingress and egress rules based on IP ranges, protocols, ports, and network tags. Firewall rules are stateful, meaning return traffic for allowed connections is automatically permitted without requiring explicit rules.

VPC firewall rules operate at the instance level but are configured at the VPC network level, applying to all instances matching the rule’s target specification. You can target rules using network tags (applied to specific instances), service accounts (associated with instances), or IP ranges. Each rule specifies direction (ingress or egress), priority (0-65535, lower numbers take precedence), action (allow or deny), and match conditions including source/destination IP ranges, protocols, and ports. Implied rules in every VPC allow egress to any destination and deny ingress from any source by default. You typically create explicit allow rules for required traffic patterns and deny rules for explicitly blocking specific traffic. Firewall rules enable micro-segmentation within VPCs, implementing defense-in-depth by restricting lateral movement between application tiers. For example, you might allow web tier instances to communicate only with application tier instances, which in turn communicate only with database tier instances. Firewall rule logs can be enabled for auditing and troubleshooting, integrating with Cloud Logging for analysis.
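The priority-ordered evaluation described above can be sketched in a few lines. The data model below is invented for illustration (real rules match on IP ranges, protocols, and service accounts too); what it shows is that the lowest-numbered matching rule decides, and the implied rule denies ingress when nothing matches.

```python
from dataclasses import dataclass

# Illustrative model of VPC firewall rule evaluation: rules are considered
# in priority order (lower number wins); the first match decides; unmatched
# ingress traffic hits the implied deny. Field set is simplified.
@dataclass
class Rule:
    priority: int    # 0-65535, lower numbers take precedence
    action: str      # "allow" or "deny"
    port: int
    source_tag: str  # network tag on the sending instance

def evaluate_ingress(rules: list[Rule], source_tag: str, port: int) -> str:
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.source_tag == source_tag and rule.port == port:
            return rule.action
    return "deny"  # implied ingress deny

rules = [
    Rule(1000, "allow", 8080, "web-tier"),  # web tier may reach the app tier
    Rule(900,  "deny",  8080, "db-tier"),   # db tier must not reach it
]
print(evaluate_ingress(rules, "web-tier", 8080))  # allow
print(evaluate_ingress(rules, "db-tier", 8080))   # deny (explicit rule)
print(evaluate_ingress(rules, "web-tier", 22))    # deny (implied rule)
```

This mirrors the tiered micro-segmentation pattern in the paragraph above: explicit allows for required paths, explicit denies for forbidden ones, and the implied deny as the backstop.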

Option B is incorrect because Cloud Armor provides DDoS protection and web application firewall capabilities for external HTTP(S) load balancers, not internal VPC traffic control. Cloud Armor protects internet-facing services from attacks but doesn’t control traffic between VMs within a VPC.

Option C is incorrect because Identity-Aware Proxy provides identity-based access control for applications, verifying user identity before granting access. IAP operates at the application layer for authenticated access, not network-layer traffic control between VMs.

Option D is incorrect because VPC Service Controls create security perimeters around Google Cloud services to prevent data exfiltration across perimeter boundaries. Service Controls protect API-based services, not VM-to-VM network traffic within VPCs.

Question 78

You want to provide developers temporary elevated permissions to debug production issues without permanently granting those permissions. Which IAM feature should you use?

A) IAM Conditions with time-based access

B) Service accounts

C) Custom roles

D) Resource Manager hierarchy

Answer: A

The correct answer is option A. IAM Conditions allow you to define time-based, resource-based, or attribute-based constraints on IAM policy bindings, enabling temporary access grants that automatically expire. This capability implements the principle of least privilege by providing elevated permissions only when needed for specific durations.

IAM Conditions use Common Expression Language (CEL) to define complex access rules. For temporary elevated access, you create a policy binding with a condition that checks the current time against an expiration timestamp. For example, you can grant a developer the roles/compute.admin role with a condition that expires after 4 hours, automatically revoking the permission without manual intervention. Conditions support various attributes including request time, resource names, resource types, and custom attributes. This approach eliminates the security risk of forgetting to revoke temporary permissions and provides audit trails showing exactly when and why elevated access was granted. You can combine conditions—for instance, granting elevated permissions only during specific times and only for resources with specific labels. IAM Conditions integrate with Cloud Logging for comprehensive access logging and with automated systems for break-glass scenarios where emergency access must be granted and tracked. This feature is essential for security-conscious organizations implementing just-in-time access patterns.

Option B is incorrect because service accounts are identities for applications and services, not a mechanism for temporary human user access. While service accounts have uses in automation, they don’t provide time-limited elevated permissions for developers.

Option C is incorrect because custom roles define sets of permissions but don’t inherently provide temporary access. Custom roles allow you to create precisely scoped permission sets, but without IAM Conditions, role bindings remain permanent until manually removed.

Option D is incorrect because Resource Manager hierarchy (organization, folders, projects) provides organizational structure and permission inheritance but doesn’t enable temporary access grants. Hierarchy helps organize resources but doesn’t implement time-based access controls.

Question 79

You need to automate the deployment of infrastructure resources using declarative configuration files. Which tool should you use?

A) Terraform

B) gcloud CLI

C) Cloud Console

D) Cloud Shell

Answer: A

The correct answer is option A. Terraform is an infrastructure-as-code tool that uses declarative configuration files to define and provision cloud infrastructure. Terraform’s Google Cloud provider enables managing all Google Cloud resources through version-controlled configuration files, supporting automation, consistency, and reproducibility across environments.

Terraform configuration files (written in HCL – HashiCorp Configuration Language) describe desired infrastructure state, and Terraform determines the necessary actions to achieve that state. You define resources like VPC networks, Compute Engine instances, Cloud Storage buckets, and IAM bindings in .tf files, commit them to version control, and apply them to create or modify infrastructure. Terraform maintains state files tracking deployed resources, enabling updates and changes without recreating everything. The tool supports modules for reusable infrastructure components, variables for parameterizing configurations, and outputs for sharing values between modules. Terraform’s plan command previews changes before applying them, reducing the risk of unintended modifications. The tool integrates with CI/CD pipelines for automated infrastructure deployment, supports multiple cloud providers for multi-cloud environments, and provides a large ecosystem of community modules. For organizations adopting DevOps practices, Terraform enables treating infrastructure as code with all associated benefits including versioning, peer review, testing, and automated deployment.
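The declarative model is the core idea: you state desired state, and the tool computes the actions needed to reach it. The toy sketch below mimics that idea (roughly what `terraform plan` does conceptually); it is not how Terraform is implemented, and the resource names are invented.

```python
# Toy illustration of declarative reconciliation: compare desired state
# (from configuration) against recorded state and derive a plan of actions.
def plan(desired: dict, current: dict) -> dict:
    to_create = {k: v for k, v in desired.items() if k not in current}
    to_update = {k: v for k, v in desired.items()
                 if k in current and current[k] != v}
    to_delete = {k: v for k, v in current.items() if k not in desired}
    return {"create": to_create, "update": to_update, "delete": to_delete}

current = {"bucket.logs": {"location": "US"}}           # recorded state
desired = {"bucket.logs": {"location": "EU"},           # changed in config
           "network.prod": {"mode": "custom"}}          # new resource
print(plan(desired, current))
```

This is also why Terraform’s state file matters: without a record of what was previously deployed, the tool couldn’t distinguish “create” from “update” or detect resources to delete.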

Option B is incorrect because gcloud CLI is an imperative command-line tool for managing Google Cloud resources through individual commands. While gcloud can be scripted, it doesn’t provide the declarative, state-managed approach that Terraform offers for infrastructure automation.

Option C is incorrect because Cloud Console is a web-based graphical interface for manually managing resources. While convenient for exploration and one-off tasks, the console doesn’t support automated, repeatable infrastructure deployment.

Option D is incorrect because Cloud Shell is a browser-based shell environment with pre-installed tools including gcloud, not an infrastructure automation tool. Cloud Shell provides a convenient command-line interface but doesn’t replace infrastructure-as-code tools.

Question 80

You want to analyze application performance and identify code-level bottlenecks causing high CPU usage. Which Google Cloud tool should you use?

A) Cloud Profiler

B) Cloud Monitoring

C) Cloud Logging

D) Cloud Trace

Answer: A

The correct answer is option A. Cloud Profiler is a statistical, low-overhead profiling service that analyzes application performance by collecting CPU and memory usage data from production applications. Profiler identifies which functions or methods consume the most resources, helping developers optimize code for better performance and cost efficiency.

Cloud Profiler works by periodically sampling application call stacks to determine where time is spent during execution. It supports multiple languages including Java, Go, Python, Node.js, and .NET, with minimal performance impact (typically less than 5% overhead). You enable Profiler by adding a small agent library to your application code and configuring it with project information. Profiler visualizes data through flame graphs showing the call hierarchy and time spent in each function, making it easy to identify expensive operations. The service aggregates profiling data across all application instances, providing representative performance insights for distributed systems. Cloud Profiler helps identify inefficient algorithms, unnecessary computations, excessive memory allocations, and other code-level issues causing poor performance. By optimizing based on Profiler insights, you can reduce resource consumption, lower costs, improve user experience, and increase application throughput. Unlike monitoring tools that track metrics externally, Profiler provides deep code-level visibility into application behavior.
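As a local analogue of what Profiler surfaces, Python’s built-in `cProfile` module shows which functions dominate CPU time in a run. Note the difference: `cProfile` is a deterministic profiler with real overhead, whereas Cloud Profiler samples call stacks in production with minimal impact. The function names below are invented for the demo.

```python
import cProfile
import io
import pstats

# Local illustration of code-level profiling: find the hot function.
def slow_sum(n: int) -> int:
    return sum(i * i for i in range(n))

def handler() -> int:
    # Simulates a request handler whose cost is dominated by one callee.
    return slow_sum(200_000) + slow_sum(10)

profiler = cProfile.Profile()
profiler.enable()
handler()
profiler.disable()

# Render the top entries by cumulative time, like reading a flame graph
# from the widest frame down.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print("slow_sum" in report)  # True: the hot function appears in the report
```

The takeaway matches the paragraph above: metrics tell you CPU is high, a profile tells you `slow_sum` is why.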

Option B is incorrect because Cloud Monitoring collects and tracks infrastructure and application metrics but doesn’t provide code-level profiling. Monitoring shows high CPU usage but doesn’t identify which specific functions cause it.

Option C is incorrect because Cloud Logging collects and analyzes log data but doesn’t profile application performance. While logs might contain performance information, they don’t provide systematic CPU and memory profiling.

Option D is incorrect because Cloud Trace analyzes request latency across distributed systems, showing time spent in different services but not detailed CPU profiling. Trace identifies slow requests and service bottlenecks, while Profiler identifies inefficient code within services.

 
