Question 161
You need to deploy a containerized application that requires exactly 2 vCPUs and 4 GB of memory. Which Google Cloud service allows you to specify precise resource allocations without managing underlying infrastructure?
A) Cloud Run
B) Compute Engine
C) App Engine Flexible
D) GKE Autopilot
Answer: A
The correct answer is option A. Cloud Run is a fully managed serverless platform that allows you to specify exact CPU and memory allocations for containerized applications without managing infrastructure. You define resource requirements in the container configuration, and Cloud Run provisions instances with those precise specifications.
When deploying to Cloud Run, you specify CPU in vCPUs (fractional values are expressed in millicores, where 1000m equals one vCPU) and memory in units such as MiB or GiB. For this requirement, you would configure 2 vCPUs and 4 GiB of memory. Cloud Run automatically handles infrastructure provisioning, scaling, load balancing, and health monitoring based on incoming requests. The platform scales from zero instances when idle to multiple instances under load, charging only for actual request processing time and allocated resources. Cloud Run supports two CPU allocation models: CPU allocated only during request processing for cost optimization, or CPU always allocated for background processing and startup optimization. You can configure maximum instances to control costs, minimum instances to reduce cold starts, concurrency settings defining requests per instance, and timeout values for long-running requests. Cloud Run eliminates infrastructure management while providing precise resource control, automatic HTTPS endpoints, custom domain mapping, and integration with Cloud Build for continuous deployment. The service is ideal for applications with variable traffic patterns requiring specific resource guarantees without operational overhead.
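As a minimal sketch (the service name, image path, and region are placeholders), a matching deployment with the gcloud CLI might look like this:

# Deploy a container with exactly 2 vCPUs and 4 GiB of memory; Cloud Run
# manages the underlying infrastructure and scaling.
gcloud run deploy my-service \
  --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest \
  --region=us-central1 \
  --cpu=2 \
  --memory=4Gi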
Option B is incorrect because Compute Engine requires managing virtual machine infrastructure including machine type selection, scaling configuration, and operating system maintenance. While you can select machine types providing specific resources, you manage the underlying infrastructure.
Option C is incorrect because App Engine Flexible Environment runs workloads on Compute Engine VM instances behind the scenes. Although you can request CPU and memory through the resources settings in app.yaml, instances cannot scale to zero and you still manage some infrastructure aspects, making it a less fully managed fit for this requirement than Cloud Run.
Option D is incorrect because GKE Autopilot is a managed Kubernetes service that abstracts infrastructure management but still requires understanding Kubernetes concepts like pods, deployments, and services. While Autopilot simplifies cluster management, Cloud Run provides simpler deployment for containerized applications not requiring Kubernetes features.
Question 162
You want to ensure that a specific Cloud Storage bucket is only accessible from your VPC network and not from the public internet. Which feature should you configure?
A) VPC Service Controls
B) Private Google Access
C) Cloud NAT
D) Firewall rules
Answer: A
The correct answer is option A. VPC Service Controls create security perimeters around Google Cloud services including Cloud Storage, restricting access to resources within defined perimeters and preventing data exfiltration. Service Controls ensure Cloud Storage buckets are accessible only from authorized VPC networks and projects.
When you create a VPC Service Control perimeter, you specify which projects and VPC networks are inside the perimeter and which Google Cloud services the perimeter protects. For Cloud Storage, you can restrict bucket access to only come from resources within the perimeter, blocking access from outside the perimeter even if requesters have valid IAM permissions. This protection operates at the API level and is enforced in addition to IAM, so perimeter checks apply even when a request would otherwise be authorized. Service Controls support ingress and egress rules for controlled cross-perimeter communication when necessary, such as allowing specific external services to access perimeter resources under defined conditions. You can configure perimeters in dry run mode to test policies before enforcement, monitor access violations through Cloud Logging, and create bridge perimeters for controlled communication between separate security boundaries. VPC Service Controls protect against data exfiltration scenarios where compromised credentials or malicious insiders attempt to copy data to unauthorized locations. The feature is essential for regulated industries requiring strong data residency and access controls, complementing IAM permissions with network-based security.
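As a hedged sketch (the access policy ID, project number, and perimeter name are placeholders, and an access policy must already exist for the organization), a perimeter protecting Cloud Storage could be created with:

# Restrict the Cloud Storage API to resources inside the listed project.
gcloud access-context-manager perimeters create storage_perimeter \
  --title="storage-perimeter" \
  --resources=projects/123456789012 \
  --restricted-services=storage.googleapis.com \
  --policy=987654321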
Option B is incorrect because Private Google Access enables VPC instances without external IPs to reach Google services through internal IPs, not restricting service access to specific networks. Private Google Access provides connectivity rather than access restrictions.
Option C is incorrect because Cloud NAT provides network address translation for instances without external IPs to access internet resources, not for restricting Cloud Storage access. NAT addresses outbound connectivity rather than inbound access control.
Option D is incorrect because firewall rules control network traffic between VMs and external networks at the IP layer, not API-level access to Google Cloud services like Cloud Storage. Firewall rules don’t restrict Cloud Storage bucket access which operates over HTTPS regardless of source IP.
Question 163
You need to grant a developer temporary permission to delete Compute Engine instances for testing but automatically revoke the permission after 8 hours. Which approach should you use?
A) IAM policy binding with time-based condition
B) Create a temporary service account
C) Grant permission and manually revoke later
D) Use organization policy constraints
Answer: A
The correct answer is option A. IAM policy bindings with time-based conditions using Common Expression Language (CEL) enable automatic temporary permission grants that expire after specified durations. This approach implements just-in-time access without requiring manual revocation or external automation.
To implement temporary permissions, you create an IAM policy binding granting the necessary role (like roles/compute.instanceAdmin) to the developer with a CEL condition checking current time against an expiration timestamp. The condition might be: request.time < timestamp("2024-01-15T16:00:00Z"). Google Cloud automatically denies access attempts after the expiration time without any manual intervention. This pattern eliminates risks of forgotten permission revocations, provides audit trails showing exactly when and why elevated access was granted, and supports break-glass scenarios for emergency access. You can create conditions based on time ranges, resource attributes, IP addresses, or combinations of factors. For recurring temporary access patterns, you might automate binding creation through scripts or ticketing system integrations that generate time-limited bindings based on approved requests. IAM conditions integrate with Cloud Logging for comprehensive access auditing. This approach is significantly more secure than granting permanent permissions or relying on manual revocation processes that can be forgotten or delayed.
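As an illustrative sketch (the project, member, and timestamp are placeholders; the v1 instance admin role is used here), the binding could be created with:

# Grant instance admin access that stops being honored after the
# expiration timestamp in the CEL condition.
gcloud projects add-iam-policy-binding my-project \
  --member="user:developer@example.com" \
  --role="roles/compute.instanceAdmin.v1" \
  --condition='expression=request.time < timestamp("2024-01-15T16:00:00Z"),title=temp-instance-admin'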
Option B is incorrect because creating temporary service accounts doesn’t automatically expire permissions. Service accounts require manual deletion, and developers could create keys for service accounts extending access beyond intended durations. Service accounts don’t provide automatic expiration.
Option C is incorrect because manual revocation introduces human error risks. Administrators might forget to revoke permissions, be unavailable at the scheduled revocation time, or make mistakes during revocation. Manual processes don’t scale and create security gaps.
Option D is incorrect because organization policy constraints define what actions are allowed or denied across resources but don’t provide temporary time-based permissions for specific users. Organization policies enforce organizational standards rather than managing individual user access patterns.
Question 164
You want to deploy a web application that automatically scales based on request count and supports zero downtime deployments with gradual traffic shifting. Which deployment platform should you use?
A) Cloud Run with traffic splitting
B) Compute Engine managed instance groups
C) App Engine Standard with traffic migration
D) GKE with rolling updates
Answer: A
The correct answer is option A. Cloud Run provides automatic request-based scaling combined with built-in traffic splitting capabilities, enabling zero-downtime deployments with gradual traffic migration between revisions. This serverless platform handles infrastructure scaling and traffic management automatically.
Cloud Run scales container instances automatically from zero to many based on incoming request load, with each instance handling configured concurrency (default 80 requests). When you deploy new revisions, Cloud Run maintains previous revisions allowing traffic splitting across multiple versions. You configure traffic percentages allocating requests between revisions, enabling canary deployments where small traffic percentages test new versions before full rollout. For example, you might initially route 5% traffic to a new revision while monitoring metrics, gradually increase to 50%, then complete the rollout to 100% if no issues arise. If problems occur, you can immediately redirect traffic back to the stable revision, providing instant rollback capability. Traffic splitting operates at the request level through the Cloud Run load balancer. You manage traffic splits through Cloud Console, gcloud CLI, or CI/CD automation tools. This approach provides zero downtime since both old and new revisions serve traffic simultaneously during transitions. Cloud Run’s serverless nature eliminates infrastructure management while providing sophisticated deployment capabilities essential for production applications requiring high availability.
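For example (service and revision names are placeholders), a gradual rollout might be managed with:

# Send 5% of traffic to the new revision as a canary...
gcloud run services update-traffic my-service \
  --region=us-central1 \
  --to-revisions=my-service-00002-abc=5,my-service-00001-xyz=95

# ...then promote the latest revision to 100% once metrics look healthy.
gcloud run services update-traffic my-service \
  --region=us-central1 \
  --to-latest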
Option B is incorrect because while Compute Engine managed instance groups support autoscaling and rolling updates, they require more configuration for zero-downtime deployments and don’t provide built-in traffic splitting percentages. You would need external load balancing configuration for gradual traffic shifting.
Option C is incorrect because, although App Engine Standard does support traffic splitting and migration, it imposes runtime restrictions and requires applications to conform to specific runtime environments, while Cloud Run accepts any containerized application, providing greater flexibility.
Option D is incorrect because while GKE supports rolling updates maintaining availability, implementing percentage-based traffic splitting requires additional configuration like service mesh (Istio) or external traffic management tools. GKE provides powerful capabilities but requires more operational complexity than Cloud Run for this use case.
Question 165
You need to query data stored in Cloud Storage without loading it into a database. Which BigQuery feature should you use?
A) Federated queries
B) Streaming inserts
C) Export to Cloud Storage
D) External tables
Answer: D
The correct answer is option D. External tables in BigQuery allow you to query data residing in external sources such as Cloud Storage, Bigtable, and Google Drive without importing it into BigQuery storage. Querying external tables is sometimes described as federated querying, with the table definition pointing to the external data source.
When you create an external table in BigQuery, you specify the Cloud Storage URI containing your data files, format (CSV, JSON, Avro, Parquet, ORC), and schema definition. BigQuery queries external tables using the same SQL syntax as native tables, but data remains in Cloud Storage. Query execution reads data directly from Cloud Storage at query time, processing it through BigQuery’s distributed query engine. External tables are ideal for scenarios where data is frequently updated in Cloud Storage and you need to query the latest version, data volumes don’t justify storage costs in BigQuery, or you maintain data in Cloud Storage for other applications while occasionally querying it. Performance of external tables is generally lower than native BigQuery tables because data isn’t optimized for BigQuery’s columnar format and must be read from Cloud Storage during each query. For better performance with frequently queried data, load it into native BigQuery tables. External tables support partitioning and schema evolution, and you can combine external and native tables in queries for hybrid analytics.
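A minimal sketch (dataset, columns, and bucket path are hypothetical) of defining an external table over CSV files in Cloud Storage:

# Define an external table whose data stays in Cloud Storage.
bq query --use_legacy_sql=false '
CREATE OR REPLACE EXTERNAL TABLE mydataset.sales_external (
  order_id STRING,
  amount NUMERIC,
  order_date DATE
)
OPTIONS (
  format = "CSV",
  skip_leading_rows = 1,
  uris = ["gs://example-bucket/sales/*.csv"]
)'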
Option A is incorrect because, although federated queries is a term sometimes used loosely for querying external data, in BigQuery it refers specifically to querying external databases such as Cloud SQL with the EXTERNAL_QUERY function. The feature for querying files in Cloud Storage is external tables.
Option B is incorrect because streaming inserts load data into BigQuery in real-time for immediate querying, not querying data in external storage. Streaming inserts move data into BigQuery rather than querying it in place.
Option C is incorrect because export to Cloud Storage moves data from BigQuery to Cloud Storage, the opposite of what’s needed. Export is for archiving or sharing BigQuery query results, not for querying Cloud Storage data.
Question 166
You want to implement a serverless solution that processes files uploaded to Cloud Storage by resizing images and storing results in another bucket. Which combination of services should you use?
A) Cloud Functions triggered by Cloud Storage events
B) Cloud Scheduler with Cloud Run
C) Compute Engine with cron jobs
D) App Engine with Cloud Tasks
Answer: A
The correct answer is option A. Cloud Functions with Cloud Storage triggers provide a serverless, event-driven solution for processing uploaded files automatically. When files are uploaded to Cloud Storage, the storage event triggers your function which processes the file and stores results.
You deploy a Cloud Function configured with a Cloud Storage trigger specifying the source bucket and event type (typically finalize/create for new uploads). When users or applications upload images to the bucket, Cloud Storage publishes an event that invokes your function with metadata including bucket name, file name, and file properties. Your function code downloads the image from Cloud Storage, uses image processing libraries to resize it according to specifications, and uploads the resized image to the destination bucket. Cloud Functions automatically scales to handle multiple concurrent uploads without configuration, and you pay only for execution time. The function can include error handling, logging to Cloud Logging, and notifications for processing failures. For more complex workflows, you might chain functions or integrate with Pub/Sub for multi-step processing. This serverless approach eliminates infrastructure management, provides automatic scaling, and ensures reliable processing of uploaded files. Cloud Functions supports various runtimes including Node.js, Python, Go, and Java, allowing you to choose familiar languages for image processing tasks.
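As a sketch (the function name, bucket, runtime, and entry point are placeholders), deploying such a function might look like:

# Run the function whenever an object is finalized in the source bucket;
# the function code resizes the image and writes it to another bucket.
gcloud functions deploy resize-images \
  --runtime=python311 \
  --trigger-bucket=uploads-bucket \
  --entry-point=resize_image \
  --region=us-central1 \
  --source=.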
Option B is incorrect because Cloud Scheduler triggers functions on time-based schedules, not in response to Cloud Storage events. While you could schedule periodic processing of files, this doesn’t provide immediate event-driven processing when files are uploaded.
Option C is incorrect because Compute Engine with cron jobs requires managing infrastructure, doesn’t automatically scale with upload volume, and uses polling rather than event-driven triggers. This approach is less efficient and more complex than serverless functions.
Option D is incorrect because while App Engine with Cloud Tasks could implement file processing, this combination is more complex than necessary. Cloud Tasks handles asynchronous task queuing but still requires application code to monitor for uploads rather than automatic event triggering.
Question 167
You need to provide network connectivity between multiple VPC networks in different projects within your organization. Which feature should you use?
A) VPC Network Peering
B) Cloud VPN
C) Shared VPC
D) Cloud Interconnect
Answer: A
The correct answer is option A. VPC Network Peering connects VPC networks in different projects or organizations, enabling private communication between resources using internal IP addresses without traversing the public internet. Peering provides low-latency, high-bandwidth connectivity ideal for multi-project architectures.
When you establish VPC Network Peering, you create peering connections between VPC networks allowing resources in peered networks to communicate using private IPs as if they were in the same network. Peering is non-transitive, meaning if Network A peers with Network B, and Network B peers with Network C, resources in A cannot automatically communicate with C unless you explicitly peer A with C. Each peering relationship requires mutual configuration and acceptance from both network administrators. Peering supports scenarios like separating production and development projects while allowing controlled communication, connecting application tiers across projects, enabling shared services accessible from multiple projects, or linking networks owned by different teams or business units. Peering has limitations including maximum number of peering connections per network, subnet IP range overlap restrictions, and non-transitive routing. VPC firewall rules control traffic between peered networks, allowing you to restrict communication to specific resources or ports. Peering doesn’t incur the performance overhead or costs of encrypted tunnels since traffic uses Google’s internal network.
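A sketch of establishing both sides of a peering (network and project names are placeholders; the peering becomes active only after both sides are created):

# In project-a: peer network-a with network-b in project-b.
gcloud compute networks peerings create a-to-b \
  --network=network-a \
  --peer-project=project-b \
  --peer-network=network-b

# In project-b: create the reciprocal peering back to network-a.
gcloud compute networks peerings create b-to-a \
  --network=network-b \
  --peer-project=project-a \
  --peer-network=network-a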
Option B is incorrect because Cloud VPN creates encrypted tunnels between networks, typically used for hybrid connectivity to on-premises environments. While VPN could connect VPC networks, it introduces encryption overhead and is less efficient than peering for VPC-to-VPC connectivity within Google Cloud.
Option C is incorrect because Shared VPC allows multiple projects to share network resources from a host project, creating a different organizational model than peering. Shared VPC is for centralized network management where one project hosts networks that others use, while peering connects independent networks.
Option D is incorrect because Cloud Interconnect provides dedicated connectivity between on-premises infrastructure and Google Cloud, not for connecting VPC networks within Google Cloud. Interconnect addresses hybrid cloud scenarios rather than multi-VPC connectivity.
Question 168
You want to implement horizontal pod autoscaling in GKE based on custom application metrics like queue length. Which component should you configure?
A) Horizontal Pod Autoscaler with custom metrics
B) Cluster Autoscaler
C) Vertical Pod Autoscaler
D) Node pool autoscaling
Answer: A
The correct answer is option A. Horizontal Pod Autoscaler (HPA) in GKE automatically scales the number of pod replicas based on observed metrics including CPU, memory, or custom metrics. Configuring HPA with custom metrics enables scaling decisions based on application-specific indicators like queue depth.
To implement custom metric scaling, you first instrument your application to publish metrics to Cloud Monitoring using the Monitoring API or client libraries. You might publish metrics like message queue length, database connection pool utilization, or business metrics like transactions per second. Next, you create a HorizontalPodAutoscaler resource in Kubernetes specifying the deployment to scale, target metric name and value, and minimum/maximum replica counts. GKE’s metrics pipeline queries Cloud Monitoring for metric values and adjusts pod count to maintain the target metric value. For example, if you set target queue length of 10 messages per pod and the queue grows to 50 messages, HPA scales from 1 to 5 pods. HPA supports multiple metrics simultaneously with the autoscaler acting on whichever metric indicates greatest scale-out need. You configure stabilization windows preventing rapid scale fluctuations and scale-down policies controlling how quickly pods are removed. Custom metric autoscaling enables intelligent capacity management based on actual application demand rather than resource utilization alone, improving both performance and cost efficiency.
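As an illustrative sketch (assuming the Custom Metrics Stackdriver Adapter is installed in the cluster and the queue is a Pub/Sub subscription; deployment and subscription names are hypothetical), an HPA keyed to queue depth might look like:

kubectl apply -f - <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription
      target:
        type: AverageValue
        averageValue: "10"
EOF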
Option B is incorrect because Cluster Autoscaler adjusts the number of nodes in node pools based on pending pod resource requests, not application metrics. Cluster Autoscaler handles infrastructure scaling while HPA handles application scaling.
Option C is incorrect because Vertical Pod Autoscaler adjusts CPU and memory requests for pods based on historical usage, not scaling pod replica count. VPA optimizes resource allocation for individual pods rather than scaling horizontally based on load.
Option D is incorrect because node pool autoscaling automatically adjusts node count in a pool based on pod resource requirements, similar to Cluster Autoscaler. Node pool autoscaling handles infrastructure capacity, not application-level scaling based on custom metrics.
Question 169
You need to ensure that a Cloud Run service can only be accessed by authenticated users within your organization. Which security feature should you configure?
A) Cloud Run IAM with allUsers removed
B) VPC Service Controls
C) Cloud Armor
D) API Gateway
Answer: A
The correct answer is option A. Cloud Run IAM controls access to services by requiring appropriate permissions to invoke them. Removing the allUsers principal and granting invoker permissions only to authenticated users or service accounts restricts access to authorized entities within your organization.
By default, Cloud Run services can be configured for either public access (allowing unauthenticated invocations) or private access (requiring authentication). For organizational access control, you configure the service to require authentication and grant the roles/run.invoker role to specific users, groups, service accounts, or domains. Removing allUsers from the IAM policy prevents anonymous access. When users attempt to access the service, Cloud Run verifies their identity through Bearer tokens in request headers and checks IAM permissions before forwarding requests to containers. For human users, you can integrate Identity-Aware Proxy for authentication with Google accounts or external identity providers. For service-to-service communication, calling services use service account credentials to obtain tokens for authenticated requests. Cloud Run integrates with Cloud Endpoints and API Gateway for additional authentication options including API keys and OAuth flows. This security model provides flexible access control ranging from completely public services to those restricted to specific service accounts in production environments. IAM policies can be managed through Cloud Console, gcloud CLI, or Infrastructure as Code tools.
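As a sketch (service, region, and group are placeholders), locking a service down to authenticated organization members might look like:

# Remove public access if it was previously granted.
gcloud run services remove-iam-policy-binding my-service \
  --region=us-central1 \
  --member="allUsers" \
  --role="roles/run.invoker"

# Allow only a specific group to invoke the service.
gcloud run services add-iam-policy-binding my-service \
  --region=us-central1 \
  --member="group:engineering@example.com" \
  --role="roles/run.invoker"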
Option B is incorrect because VPC Service Controls create perimeters around Google Cloud services to prevent data exfiltration but don’t provide application-level authentication for Cloud Run services. Service Controls complement IAM but don’t replace service authentication.
Option C is incorrect because Cloud Armor provides DDoS protection and web application firewall capabilities for load balancers, not Cloud Run authentication. Cloud Armor can protect services behind load balancers but doesn’t integrate directly with Cloud Run for authentication.
Option D is incorrect because while API Gateway can provide authentication for APIs and route to Cloud Run backends, it’s an additional component adding complexity. For simple organizational access control, Cloud Run IAM provides sufficient authentication without gateway infrastructure.
Question 170
You want to analyze BigQuery query performance and identify opportunities for optimization. Which feature should you use?
A) Query execution plan and query history
B) Data Studio dashboards
C) Cloud Monitoring metrics
D) Audit logs
Answer: A
The correct answer is option A. BigQuery’s query execution plan shows detailed information about how queries execute including stages, steps, timing, and data processed. Combined with query history showing past query performance, execution plans help identify optimization opportunities like inefficient joins or excessive data scanning.
When you run a query in BigQuery, you can view its execution plan in the Cloud Console showing the query broken into stages and steps with timing information, bytes processed, rows produced, and parallelization details. The execution plan visualizes operations like table scans, filters, joins, aggregations, and sorting, helping you understand query behavior. Query history stores metadata for all executed queries including execution time, bytes processed, bytes billed, cache hit status, and error messages. By analyzing execution plans and history, you identify slow queries consuming resources, opportunities for query restructuring to reduce data scanning, candidates for partitioning or clustering, inefficient join strategies that could benefit from reordering, and queries that would benefit from materialized views. BigQuery provides query validation showing estimated bytes processed before execution, allowing cost and performance evaluation before running expensive queries. Optimization strategies include using clustered and partitioned tables, selecting specific columns instead of SELECT *, filtering early in queries, using approximate aggregation functions where appropriate, and leveraging cached results for repeated queries.
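For example (the project, job ID, dataset, and table are placeholders), you can inspect a completed job and dry-run a query from the CLI:

# Show statistics for a completed query job, including bytes processed
# and the per-stage query plan.
bq show --format=prettyjson -j bquxjob_1234_abcd

# Estimate bytes processed before running the query (no charge).
bq query --use_legacy_sql=false --dry_run \
  'SELECT order_id, amount FROM mydataset.sales WHERE order_date >= "2024-01-01"'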
Option B is incorrect because Data Studio creates visualizations and dashboards from data but doesn’t analyze query execution internals. Data Studio is for presenting results, not query performance analysis.
Option C is incorrect because while Cloud Monitoring collects BigQuery metrics like slot utilization and query count, it doesn’t provide query-level execution plans showing internal operations. Monitoring tracks aggregate metrics rather than individual query optimization opportunities.
Option D is incorrect because audit logs record administrative actions and data access for compliance and security monitoring, not query performance details. Audit logs show who accessed what and when, but don’t provide optimization insights like execution plans.
Question 171
You need to implement a CI/CD pipeline that automatically deploys to GKE when code is merged to the main branch in GitHub. Which Google Cloud service should you use?
A) Cloud Build with GitHub triggers
B) Cloud Scheduler
C) Cloud Functions
D) Deployment Manager
Answer: A
The correct answer is option A. Cloud Build integrates directly with GitHub repositories to automatically trigger builds when code changes occur. You can configure build triggers that respond to branch commits, pull requests, or tags, executing automated builds and deployments to GKE.
To implement the CI/CD pipeline, you connect your GitHub repository to Cloud Build through the GitHub App integration providing secure access to repository events. You create a build trigger specifying the repository, branch pattern (like main or master), and build configuration file (cloudbuild.yaml). The build configuration defines build steps including running tests, building container images, pushing images to Artifact Registry, and deploying to GKE using kubectl commands or helm. When developers merge code to the main branch, GitHub notifies Cloud Build which executes the configured build steps. Cloud Build provides parallel execution for faster builds, encrypted secrets for sensitive configuration, artifact storage, and integration with Cloud Logging for build logs. For GKE deployments, your build steps typically include kubectl apply commands using service account credentials with appropriate GKE permissions. You can implement advanced patterns like blue-green deployments, canary releases with gradual traffic shifting, or environment-specific deployments based on branch names. Cloud Build’s serverless nature means you don’t manage build infrastructure, and you pay only for build execution time.
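A hedged sketch (repository owner and name, Artifact Registry path, cluster, and region are placeholders; the GitHub App connection must already be set up):

# cloudbuild.yaml: build and push the image, then roll it out to GKE.
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/my-app', 'my-app=us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$SHORT_SHA']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
EOF

# Trigger the pipeline on every merge to the main branch.
gcloud builds triggers create github \
  --name=deploy-on-main \
  --repo-owner=my-org \
  --repo-name=my-app \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml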
Option B is incorrect because Cloud Scheduler triggers jobs on time-based schedules, not in response to code repository events. Scheduler is for periodic tasks rather than event-driven CI/CD pipelines responding to code commits.
Option C is incorrect because while Cloud Functions can respond to events, they’re designed for lightweight event processing not comprehensive CI/CD workflows. Functions have execution time limits and limited environment for build tools, making Cloud Build more appropriate for building and deploying applications.
Option D is incorrect because Deployment Manager is an infrastructure-as-code tool for deploying and managing Google Cloud resources, not a CI/CD pipeline for application deployments. Deployment Manager manages infrastructure configuration rather than application build and deployment workflows.
Question 172
You want to implement network address translation for multiple Compute Engine instances without external IPs to access internet resources. Which service should you configure?
A) Cloud NAT
B) Cloud VPN
C) Cloud Interconnect
D) Private Google Access
Answer: A
The correct answer is option A. Cloud NAT (Network Address Translation) is a managed service providing network address translation for Compute Engine instances and GKE nodes without external IP addresses, enabling them to access internet resources while remaining unreachable from the internet for security.
Cloud NAT is configured per region and attached to Cloud Router, which manages NAT IP addresses. When instances without external IPs send outbound traffic to internet destinations, Cloud NAT translates their private source IPs to NAT IP addresses, forwards traffic to the internet, and translates response traffic back to private IPs for delivery to instances. You configure NAT gateways specifying which subnets or instance tags use NAT, whether to use automatic or manual NAT IP allocation, and logging preferences for connection tracking. Cloud NAT supports high availability through automatic failover if NAT IP addresses become unavailable and scales automatically to handle traffic from thousands of instances. The service is ideal for security-conscious environments where instances shouldn’t have external IPs but need internet access for software updates, API calls, or external service communication. Cloud NAT eliminates the need for bastion hosts or proxy servers for outbound internet access. For fully private environments accessing only Google Cloud services, Private Google Access provides an alternative without internet egress. Cloud NAT charges based on gateway hours and data processed, providing cost-effective internet access for private instances.
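For example (names and region are placeholders), a basic NAT gateway covering all subnets in one region:

# Cloud NAT requires a Cloud Router in the same region and network.
gcloud compute routers create nat-router \
  --network=my-vpc \
  --region=us-central1

# Create the NAT gateway with automatically allocated external IPs
# covering every subnet range in the region.
gcloud compute routers nats create nat-gateway \
  --router=nat-router \
  --region=us-central1 \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges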
Option B is incorrect because Cloud VPN creates encrypted tunnels for hybrid connectivity between on-premises and Google Cloud, not NAT for internet access. VPN addresses different use cases around secure site-to-site connectivity.
Option C is incorrect because Cloud Interconnect provides dedicated connectivity to on-premises environments, not NAT functionality. Interconnect is for hybrid cloud scenarios requiring high-bandwidth connections to on-premises infrastructure.
Option D is incorrect because Private Google Access enables instances without external IPs to reach Google Cloud services through private IPs, not general internet resources. Private Google Access is for Google services specifically, while Cloud NAT provides broader internet access.
Question 173
You need to ensure that your Cloud SQL instance is only accessible from your organization’s VPC network and not from the public internet. Which configuration should you implement?
A) Private IP only
B) Authorized networks with 0.0.0.0/0
C) SSL certificates
D) Cloud SQL Proxy
Answer: A
The correct answer is option A. Configuring Cloud SQL with a private IP address and disabling public IP ensures the instance is accessible only from your VPC network through private networking, eliminating internet exposure. Private IP configuration enhances security by preventing unauthorized connection attempts from public networks.
When you enable private IP for Cloud SQL, the instance receives an IP address from a VPC subnet and becomes accessible through private networking using VPC Network Peering between your VPC and Google’s services VPC. Clients connect to the instance using its private IP address without traffic traversing the public internet. You must configure Private Services Connection establishing the peering relationship and IP range allocation for Google services. Once configured, instances in your VPC can connect directly to Cloud SQL using the private IP while external access is blocked. This configuration is essential for compliance requirements mandating private network access, production environments requiring strong security boundaries, or architectures minimizing attack surfaces. You can combine private IP with Cloud SQL Proxy for additional security through encrypted connections and IAM-based authentication. For applications requiring Cloud SQL access from on-premises, use Cloud VPN or Cloud Interconnect to extend your private network to Google Cloud. Private IP configuration is a permanent security enhancement that can’t be easily bypassed, unlike authorized networks which filter public connections but don’t eliminate internet exposure.
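A hedged sketch of the private IP setup (VPC, range, and instance names are placeholders, and the Service Networking API must be enabled):

# Reserve an IP range for Google-managed services and create the private
# services connection (VPC peering to Google's service producer network).
gcloud compute addresses create google-managed-services-range \
  --global \
  --purpose=VPC_PEERING \
  --prefix-length=16 \
  --network=my-vpc

gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=google-managed-services-range \
  --network=my-vpc

# Create the instance with a private IP only (no public IP assigned).
gcloud sql instances create private-db \
  --database-version=POSTGRES_15 \
  --tier=db-custom-2-7680 \
  --region=us-central1 \
  --network=my-vpc \
  --no-assign-ip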
Option B is incorrect because authorized networks with 0.0.0.0/0 would allow connections from any public IP address, the opposite of restricting access. Authorized networks implement IP filtering but maintain public IP exposure.
Option C is incorrect because SSL certificates encrypt connections but don’t restrict where connections originate. SSL provides encryption security but doesn’t eliminate public IP exposure or prevent connection attempts from the internet.
Option D is incorrect because while Cloud SQL Proxy provides secure connections and authentication, it’s a connection method rather than a network restriction. Proxy can be used with either public or private IPs and doesn’t inherently restrict access to VPC networks.
Question 174
You want to deploy a stateless application across multiple regions for high availability and low latency. Which combination of services provides the best solution?
A) Global HTTP(S) Load Balancer with regional managed instance groups
B) Regional Network Load Balancer with single region
C) Cloud Run in a single region
D) GKE cluster in one zone
Answer: A
The correct answer is option A. Global HTTP(S) Load Balancer combined with regional managed instance groups provides true multi-region deployment with automatic traffic routing to the nearest healthy backend, delivering both high availability and low latency for global users.
This architecture uses Google’s global load balancing infrastructure with a single anycast IP address that users worldwide connect to. The load balancer automatically routes traffic based on user proximity, backend health, and capacity to the optimal regional backend. You create managed instance groups in multiple regions (like us-central1, europe-west1, asia-east1) using instance templates defining your application configuration. The load balancer distributes traffic across all healthy backends, automatically failing over to other regions if backends become unhealthy. This provides several benefits: users connect to geographically close backends reducing latency, regional failures don’t impact global availability as traffic automatically shifts to healthy regions, you can deploy new versions using rolling updates or blue-green deployments, and the load balancer handles SSL termination, session affinity, and Cloud CDN integration for static content. For true stateless applications, this architecture provides the best balance of performance, availability, and operational simplicity. You can implement sophisticated traffic management including traffic splitting for canary deployments, custom headers for routing, and connection draining for graceful instance removal.
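An abbreviated sketch of the main components (names, regions, and image are placeholders; firewall rules for health-check probes and serving traffic are omitted):

# Instance template and regional managed instance groups in two regions.
gcloud compute instance-templates create web-template \
  --machine-type=e2-medium \
  --image-family=debian-12 \
  --image-project=debian-cloud

gcloud compute instance-groups managed create web-us \
  --template=web-template --size=2 --region=us-central1
gcloud compute instance-groups managed create web-eu \
  --template=web-template --size=2 --region=europe-west1

gcloud compute instance-groups managed set-named-ports web-us \
  --named-ports=http:80 --region=us-central1
gcloud compute instance-groups managed set-named-ports web-eu \
  --named-ports=http:80 --region=europe-west1

# Global backend service fed by both regional groups, plus the frontend.
gcloud compute health-checks create http web-hc --port=80
gcloud compute backend-services create web-backend \
  --protocol=HTTP --port-name=http --health-checks=web-hc --global
gcloud compute backend-services add-backend web-backend \
  --instance-group=web-us --instance-group-region=us-central1 --global
gcloud compute backend-services add-backend web-backend \
  --instance-group=web-eu --instance-group-region=europe-west1 --global
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-rule \
  --global --target-http-proxy=web-proxy --ports=80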
Option B is incorrect because Regional Network Load Balancer operates within a single region and doesn’t provide multi-region deployment or global traffic routing. Regional load balancers don’t address the multi-region high availability requirement.
Option C is incorrect because Cloud Run in a single region doesn’t provide multi-region deployment. While Cloud Run offers excellent scalability and simplicity, single-region deployment creates a single point of failure and doesn’t optimize latency for global users.
Option D is incorrect because a GKE cluster in one zone provides neither regional nor multi-regional high availability. Single-zone deployment leaves the application vulnerable to zonal failures and doesn’t address global latency optimization.
Question 175
You need to grant a service account permission to read secrets from Secret Manager without granting broader access to other resources. Which IAM role should you assign?
A) roles/secretmanager.secretAccessor
B) roles/secretmanager.admin
C) roles/viewer
D) roles/iam.serviceAccountUser
Answer: A
The correct answer is option A. The roles/secretmanager.secretAccessor predefined IAM role grants permission to access secret versions in Secret Manager without broader administrative capabilities. This role follows the principle of least privilege by providing only the necessary permissions for reading secrets.
The secretAccessor role includes the secretmanager.versions.access permission allowing service accounts or users to retrieve secret values, which is specifically what applications need to read secrets. The role can be granted at the organization, project, or individual secret level, enabling granular access control. For maximum security, grant secretAccessor on specific secrets rather than project-wide, ensuring each service account accesses only the secrets it requires. This pattern is common for application authentication scenarios where apps retrieve database passwords, API keys, or certificates from Secret Manager at runtime. Service accounts with this role can read secret contents but cannot create secrets, modify secret values, or change secret configurations. For scenarios requiring secret management capabilities, you would use secretmanager.admin or other administrative roles, but for applications simply consuming secrets, secretAccessor provides appropriate permissions. Combine Secret Manager IAM with secret versioning, automatic rotation policies, and audit logging for comprehensive secrets management security.
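For example (the secret, project, and service account names are placeholders):

# Allow one service account to read versions of a single secret.
gcloud secrets add-iam-policy-binding db-password \
  --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# At runtime the application (or the CLI, for testing) reads the value.
gcloud secrets versions access latest --secret=db-password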
Option B is incorrect because roles/secretmanager.admin grants full administrative control over Secret Manager including creating, modifying, and deleting secrets—far exceeding the requirement for reading secrets. Admin roles should be restricted to administrators managing secrets infrastructure.
Option C is incorrect because roles/viewer is a project-level role granting read access to most resources but doesn’t include Secret Manager secret access permissions. Secret Manager requires explicit permissions for accessing secret values due to sensitivity.
Option D is incorrect because roles/iam.serviceAccountUser allows impersonating service accounts but doesn’t grant Secret Manager access permissions. This role is about service account impersonation rather than secret access.
Question 176
You want to implement a disaster recovery strategy with automated failover for your Cloud SQL database. Which feature should you configure?
A) High availability configuration with automatic failover
B) Read replicas in multiple regions
C) Automated backups with point-in-time recovery
D) Export to Cloud Storage
Answer: A
The correct answer is option A. Cloud SQL high availability (HA) configuration creates a primary instance and standby instance in different zones within the same region with synchronous replication and automatic failover. When the primary becomes unavailable, Cloud SQL automatically promotes the standby to primary within seconds to minutes.
High availability configuration provides automatic failover without manual intervention or application code changes. The standby instance synchronously replicates all writes from the primary, ensuring zero data loss during failover. Cloud SQL monitors both instances through health checks and initiates failover when detecting primary failure, maintenance needs, or zone issues. Applications connect using the instance connection name or IP address which remains constant during failover, allowing automatic reconnection without configuration changes. The failover process typically completes within 60-120 seconds for most workloads. After failover, Cloud SQL creates a new standby instance automatically, reestablishing HA protection. This configuration is essential for production databases requiring maximum availability and minimal recovery time objectives. High availability increases costs since you pay for both primary and standby instances, but provides the most robust disaster recovery within a region. For cross-region disaster recovery, combine HA with read replicas in other regions that can be manually promoted to standalone instances if needed.
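As a sketch (instance name, engine version, and tier are placeholders):

# Create a new instance with the high availability (regional) configuration.
gcloud sql instances create prod-db \
  --database-version=MYSQL_8_0 \
  --tier=db-custom-4-15360 \
  --region=us-central1 \
  --availability-type=REGIONAL

# Or upgrade an existing instance to HA, and test failover manually.
gcloud sql instances patch prod-db --availability-type=REGIONAL
gcloud sql instances failover prod-db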
Option B is incorrect because read replicas provide read scalability and can serve as DR targets but don’t provide automatic failover. Promoting a read replica to primary requires manual intervention and application configuration changes.
Option C is incorrect because while automated backups with point-in-time recovery enable data restoration, they don’t provide automatic failover or minimal downtime. Backup restoration requires manual intervention and results in extended downtime during the recovery process.
Option D is incorrect because exporting to Cloud Storage creates backup copies but doesn’t provide failover capabilities. Exports are useful for long-term archival or data migration but don’t address high availability requirements for production databases.
Question 177
You need to scan container images for vulnerabilities before deploying them to production. Which Google Cloud feature should you use?
A) Container Analysis and Binary Authorization
B) Cloud Security Scanner
C) Cloud Armor
D) VPC Service Controls
Answer: A
The correct answer is option A. Container Analysis automatically scans container images stored in Artifact Registry or Container Registry for known security vulnerabilities, providing visibility into potential risks before deployment. Binary Authorization enforces deployment policies requiring images to meet security criteria before running on GKE or Cloud Run, creating a complete security workflow from scanning to enforcement.
Container Analysis continuously scans images as they are pushed to registries, detecting vulnerabilities from public databases like CVE (Common Vulnerabilities and Exposures). The service generates detailed vulnerability reports showing severity levels, affected packages, and available fixes. You view scan results in the Cloud Console or query them through APIs for integration with CI/CD pipelines. Binary Authorization works with Container Analysis to enforce attestation-based deployment policies. You create policies requiring images to have valid attestations confirming they passed security scans, were built by trusted build systems, or meet organizational security standards. When deployments are attempted, Binary Authorization evaluates policies and blocks non-compliant images, preventing vulnerable containers from running in production. This combination provides defense-in-depth ensuring only verified, scanned images reach production environments. You can configure exception processes for urgent deployments while maintaining audit trails. The workflow integrates with Cloud Build for automated scanning during builds, creating fully automated security validation from code commit to production deployment.
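A hedged sketch of the pieces involved (image path and project are placeholders, and exact flags can vary by gcloud version):

# Enable automatic vulnerability scanning and Binary Authorization.
gcloud services enable containerscanning.googleapis.com \
  binaryauthorization.googleapis.com

# Review vulnerability findings for an image in Artifact Registry.
gcloud artifacts docker images describe \
  us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest \
  --show-package-vulnerability

# Export, edit, and re-import the Binary Authorization policy that GKE or
# Cloud Run evaluates before admitting images.
gcloud container binauthz policy export > policy.yaml
gcloud container binauthz policy import policy.yaml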
Option B is incorrect because Cloud Security Scanner identifies security vulnerabilities in App Engine, GKE, and Compute Engine web applications by crawling and testing deployed applications, not scanning container images before deployment. Security Scanner addresses runtime application security rather than pre-deployment image scanning.
Option C is incorrect because Cloud Armor provides DDoS protection and web application firewall capabilities for applications behind load balancers, not container image vulnerability scanning. Cloud Armor protects running applications from network attacks rather than preventing vulnerable images from deploying.
Option D is incorrect because VPC Service Controls create security perimeters preventing data exfiltration from Google Cloud services, not scanning container images for vulnerabilities. Service Controls address data security boundaries rather than container security scanning.
Question 178
You want to implement fine-grained access control for BigQuery datasets where different teams can access only their specific tables. Which approach should you use?
A) Dataset-level IAM with authorized views
B) Project-level IAM roles
C) VPC Service Controls
D) Cloud Identity groups only
Answer: A
The correct answer is option A. Dataset-level IAM combined with authorized views provides fine-grained access control in BigQuery, allowing you to grant permissions at the dataset level while using authorized views to expose specific tables or filtered data to different teams without granting direct dataset access.
BigQuery IAM operates at multiple levels including project, dataset, table, and column. For team-based access control, you create separate datasets for different data domains and grant appropriate IAM roles to team groups at the dataset level. For scenarios requiring even more granular control, authorized views allow users to query specific views without accessing underlying tables. You create a view in a shared dataset showing filtered or aggregated data, then authorize that view to access source datasets containing sensitive data. Users with access to the shared dataset can query the view but cannot directly access source tables, enabling data sharing while maintaining security. For example, you might create views showing aggregated sales data accessible by the marketing team while raw transaction tables remain accessible only to the finance team. Authorized views support complex scenarios including cross-project data sharing, row-level security through view filtering, and column-level restrictions by selecting specific columns in views. You can also implement column-level security directly using BigQuery’s column-level access control for sensitive fields like personally identifiable information.
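A hedged sketch using hypothetical dataset and table names (the team is granted access only to the shared_views dataset, and the view is then authorized against the source dataset):

# Create a view that exposes only aggregated data.
bq query --use_legacy_sql=false '
CREATE OR REPLACE VIEW shared_views.regional_sales AS
SELECT region, SUM(amount) AS total_sales
FROM finance.transactions
GROUP BY region'

# Authorize the view on the source dataset: export the dataset metadata,
# add the view to its "access" list, then apply the change.
bq show --format=prettyjson my-project:finance > finance_ds.json
# ...add {"view": {"projectId": "my-project", "datasetId": "shared_views",
#                  "tableId": "regional_sales"}} to "access" in the file...
bq update --source finance_ds.json my-project:finance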
Option B is incorrect because project-level IAM roles grant broad permissions across all datasets and tables in the project, lacking the granularity needed for team-specific table access. Project-level roles are too coarse-grained for fine-grained access requirements.
Option C is incorrect because VPC Service Controls create security perimeters around Google Cloud services to prevent data exfiltration, not provide fine-grained access control for individual tables. Service Controls address data perimeter security rather than internal access management.
Option D is incorrect because while Cloud Identity groups are essential for organizing users, groups alone don’t implement access control. Groups must be combined with IAM roles and policies to actually grant permissions. Groups are a user management mechanism, not an access control implementation.
Question 179
You need to ensure that all API calls to Google Cloud services from your project are logged for security auditing. Which service automatically provides this capability?
A) Cloud Audit Logs
B) Cloud Logging
C) Cloud Monitoring
D) VPC Flow Logs
Answer: A
The correct answer is option A. Cloud Audit Logs automatically record administrative activities and data access within Google Cloud services, providing comprehensive audit trails for security analysis, compliance, and forensic investigations. Audit logs are automatically enabled for most Google Cloud services without requiring configuration.
Cloud Audit Logs consist of multiple log types: Admin Activity logs recording administrative changes like creating VMs or modifying IAM policies, Data Access logs recording read and write operations on user data, System Event logs for Google-initiated administrative actions, and Policy Denied logs showing security policy violations. Admin Activity logs are always enabled and retained for 400 days at no charge. Data Access logs are disabled by default except for BigQuery and must be explicitly enabled due to volume and cost considerations. Audit logs include detailed information about who performed actions (principal identity), what actions were performed (method names), when actions occurred (timestamps), where actions originated (IP addresses and locations), and what resources were affected. These logs integrate with Cloud Logging for querying and analysis, support exporting to BigQuery for long-term analysis, Cloud Storage for archival, or Pub/Sub for real-time processing. Organizations use audit logs for security monitoring detecting unauthorized access attempts, compliance reporting proving adherence to regulations, forensic analysis investigating security incidents, and operational troubleshooting understanding configuration changes.
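For example (the project ID is a placeholder), you can query Admin Activity entries and enable Data Access logs through the IAM policy's auditConfigs:

# Read recent Admin Activity audit log entries.
gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"' \
  --limit=5 --format=json

# Data Access logs are enabled by adding auditConfigs to the IAM policy.
gcloud projects get-iam-policy my-project --format=json > policy.json
# ...add an auditConfigs entry such as:
# {"service": "storage.googleapis.com",
#  "auditLogConfigs": [{"logType": "DATA_READ"}, {"logType": "DATA_WRITE"}]}
gcloud projects set-iam-policy my-project policy.json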
Option B is incorrect because while Cloud Logging is the platform that stores and provides access to audit logs, Cloud Audit Logs is the specific service that generates audit entries. Cloud Logging is the infrastructure while Audit Logs is the content generator.
Option C is incorrect because Cloud Monitoring collects and analyzes performance metrics for resources and applications, not audit trails of API calls. Monitoring tracks system health while Audit Logs track administrative and data access activities.
Option D is incorrect because VPC Flow Logs capture network traffic information for VPC networks at the IP packet level, not API calls to Google Cloud services. Flow Logs address network traffic analysis while Audit Logs address service API auditing.
Question 180
You want to reduce costs by automatically deleting old log entries that are no longer needed for compliance. Which Cloud Logging feature should you configure?
A) Log retention policies
B) Log sinks
C) Log exclusions
D) Log-based metrics
Answer: A
The correct answer is option A. Log retention policies in Cloud Logging allow you to configure how long logs are stored before automatic deletion, helping control storage costs while maintaining necessary logs for operational and compliance requirements. Retention policies can be customized per log bucket to match different retention needs.
Cloud Logging organizes logs into log buckets with configurable retention periods ranging from 1 day to 3650 days (10 years). The default _Default bucket has a 30-day retention period, but you can create custom buckets with different retention settings for various log types. For example, you might retain audit logs for 7 years for compliance while retaining application logs for only 30 days for cost optimization. Logs are automatically deleted when they exceed the configured retention period, eliminating manual cleanup processes. You route logs to specific buckets using log sinks with filters directing different log types to appropriate retention buckets. This approach balances compliance requirements mandating long retention for audit trails with cost optimization for less critical logs. For logs requiring permanent retention, configure log sinks exporting to Cloud Storage where you control lifecycle policies, or BigQuery for long-term analysis. Retention policies reduce costs by automatically purging old logs while bucket-based organization provides flexibility for different log types.
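For instance (bucket and sink names and retention periods are placeholders):

# Change retention on the default log bucket (30 days by default).
gcloud logging buckets update _Default \
  --location=global --retention-days=90

# Create a long-retention bucket for audit logs and route them to it.
gcloud logging buckets create audit-logs \
  --location=global --retention-days=2555

gcloud logging sinks create audit-sink \
  logging.googleapis.com/projects/my-project/locations/global/buckets/audit-logs \
  --log-filter='logName:"cloudaudit.googleapis.com"'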
Option B is incorrect because log sinks export logs to destinations like Cloud Storage, BigQuery, or Pub/Sub for long-term storage or processing, not deleting them. Sinks are for log routing and export, while retention policies control deletion.
Option C is incorrect because log exclusions filter logs preventing them from being ingested into Cloud Logging at all, not deleting old logs. Exclusions reduce ingestion costs but don’t address retention of already-logged entries.
Option D is incorrect because log-based metrics extract numerical data from logs for monitoring and alerting purposes, not managing log retention or deletion. Metrics create monitoring signals from logs but don’t control log lifecycle.