Question 81
You need to ensure that all Compute Engine instances in your project are created with a specific set of metadata and startup scripts. Which feature should you use?
A) Instance templates
B) Instance groups
C) Snapshots
D) Machine images
Answer: A
The correct answer is option A. Instance templates are resource definitions that specify machine configuration, including machine type, boot disk image, network settings, metadata, and startup scripts. Templates enable consistent instance creation across your infrastructure and are essential for managed instance groups and autoscaling configurations.
When you create an instance template, you define all properties that instances should have including machine type, CPU platform, boot disk image, additional disks, network interfaces, service accounts, metadata key-value pairs, and startup scripts. Templates are immutable after creation, ensuring consistency in deployments. You can create multiple instances from a single template manually or automatically through managed instance groups. Startup scripts in templates execute when instances boot, allowing you to install software, configure services, or register instances with monitoring systems. Metadata in templates can include configuration values, API keys retrieved from Secret Manager, or environment-specific settings. Templates support integration with deployment automation tools like Terraform and Deployment Manager. For organizations requiring standardized instance configurations, templates enforce consistency, simplify management, reduce configuration errors, and enable rapid scaling. You can version templates by including version numbers in template names, maintaining different configurations for development, staging, and production environments.
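A minimal sketch of this workflow with gcloud, assuming hypothetical names such as web-template and a local startup.sh:

```
# Create an immutable template carrying metadata and a startup script
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-12 --image-project=debian-cloud \
    --metadata=environment=production \
    --metadata-from-file=startup-script=startup.sh

# Create an instance (or a managed instance group) from the template
gcloud compute instances create web-1 \
    --zone=us-central1-a \
    --source-instance-template=web-template
```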
Option B is incorrect because instance groups are collections of VM instances managed as a single entity, but they rely on instance templates for defining instance configuration. Instance groups use templates rather than defining configurations themselves.
Option C is incorrect because snapshots are point-in-time backups of persistent disk data used for data recovery or disk cloning. Snapshots don’t define instance configuration like machine type, network settings, or metadata.
Option D is incorrect because machine images capture entire VM state including disks, configuration, and metadata, primarily used for backup and cloning specific instances. While useful for replication, machine images don’t provide the standardization and automation capabilities of templates for creating new instances with specific configurations.
Question 82
You want to restrict API access to Google Cloud services from outside your VPC network and allow access only through private IP addresses. Which feature should you configure?
A) Private Google Access
B) Cloud NAT
C) VPC Service Controls
D) Cloud Interconnect
Answer: A
The correct answer is option A. Private Google Access allows VM instances with only internal IP addresses to reach Google APIs and services using private IP addresses instead of public IPs. This feature enhances security by keeping traffic within Google’s network and eliminating the need for external IP addresses on instances accessing Google services.
When you enable Private Google Access on a subnet, instances in that subnet without external IP addresses can access Google Cloud services like Cloud Storage, BigQuery, Container Registry, and Cloud APIs through Google’s private network. Traffic remains within Google’s infrastructure without traversing the public internet, improving security and potentially reducing egress costs. Private Google Access can be paired with the special VIP ranges 199.36.153.8/30 (private.googleapis.com) or 199.36.153.4/30 (restricted.googleapis.com), which your instances use as destinations for API calls via custom DNS records. You configure Private Google Access per subnet in VPC network settings. For services that Private Google Access doesn’t cover, Private Service Connect provides another private connectivity option. This configuration is essential for security-conscious environments where VMs shouldn’t have external IP addresses, high-security workloads processing sensitive data, or cost optimization scenarios avoiding internet egress charges. Private Google Access enables building completely private infrastructure that still leverages Google Cloud managed services.
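As a quick sketch, enabling the feature on an existing subnet takes a single gcloud update (subnet and region names here are hypothetical):

```
# Turn on Private Google Access for one subnet
gcloud compute networks subnets update private-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# Confirm the setting
gcloud compute networks subnets describe private-subnet \
    --region=us-central1 \
    --format="get(privateIpGoogleAccess)"
```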
Option B is incorrect because Cloud NAT provides network address translation for instances without external IPs to access internet resources, not Google APIs through private addresses. NAT enables outbound internet connectivity but doesn’t provide the private access to Google services that Private Google Access delivers.
Option C is incorrect because VPC Service Controls create security perimeters to prevent data exfiltration from Google Cloud services, not enable private API access. Service Controls complement Private Google Access by adding perimeter-based protection but serve different purposes.
Option D is incorrect because Cloud Interconnect provides dedicated connectivity between on-premises networks and Google Cloud, not private API access for instances within VPC networks. Interconnect addresses hybrid connectivity rather than internal VPC to Google services communication.
Question 83
You need to deploy a stateful application that requires persistent storage surviving instance deletion and supporting concurrent access from multiple instances. Which storage option should you use?
A) Filestore
B) Persistent Disk
C) Local SSD
D) Cloud Storage
Answer: A
The correct answer is option A. Filestore is a fully managed NFS file storage service providing shared file systems accessible concurrently from multiple Compute Engine and GKE instances. Filestore supports applications requiring traditional file system semantics with shared access, making it ideal for content management systems, media processing, and shared home directories.
Filestore provides NFSv3 protocol support allowing standard Linux file operations and POSIX compliance. You can mount Filestore instances on multiple VMs simultaneously, enabling shared access to data across distributed applications. The service offers multiple tiers including Basic HDD for cost-effective general-purpose workloads, Basic SSD for performance-sensitive applications, and Enterprise for mission-critical workloads requiring high availability. Filestore automatically replicates data within zones, provides consistent performance, and supports snapshots for backup and recovery. Capacity ranges from 1TB to 100TB depending on tier, with throughput scaling based on provisioned capacity. Filestore integrates with VPC networks for private access, supports Cloud Identity and Access Management for access control, and provides monitoring through Cloud Monitoring. Common use cases include web serving content shared across web server instances, data science environments requiring shared datasets, application migration from on-premises NFS storage, and enterprise applications designed for shared file systems. Filestore maintains data independently of instances, surviving instance deletions and supporting flexible infrastructure.
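A hedged sketch of provisioning and mounting a share (the instance name, zone, tier, and the 10.0.0.2 IP are all placeholders):

```
# Create a Basic HDD Filestore instance with a 1TB share
gcloud filestore instances create shared-fs \
    --zone=us-central1-a \
    --tier=BASIC_HDD \
    --file-share=name=vol1,capacity=1TB \
    --network=name=default

# On each VM that needs access, mount the share over NFS
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/shared
sudo mount -t nfs 10.0.0.2:/vol1 /mnt/shared
```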
Option B is incorrect because while Persistent Disk provides durable block storage, standard persistent disks can attach to only one instance in read-write mode. Multi-writer disks exist but have limitations and specific use cases, making Filestore more appropriate for typical shared access requirements.
Option C is incorrect because Local SSDs are physically attached to the host machine and their data is lost when instances stop or are deleted. Local SSDs provide high performance for temporary data but don’t meet persistence requirements.
Option D is incorrect because Cloud Storage provides object storage accessed through APIs rather than file system interfaces. While Cloud Storage supports concurrent access, it doesn’t offer POSIX file system semantics that many applications require.
Question 84
You want to automatically scale your application based on a custom metric like queue depth or business-specific measurements. Which Compute Engine feature should you configure?
A) Custom metric autoscaling
B) Target CPU utilization
C) Load balancing based scaling
D) Schedule-based scaling
Answer: A
The correct answer is option A. Custom metric autoscaling allows managed instance groups to scale based on metrics you define and publish to Cloud Monitoring, enabling scaling decisions based on application-specific indicators beyond standard CPU, memory, or request metrics. This capability provides flexibility for complex scaling requirements unique to your application.
To implement custom metric autoscaling, you first instrument your application to publish metrics to Cloud Monitoring using the Monitoring API or client libraries. Metrics might include queue depth, database connection counts, transaction processing rates, or business metrics like orders per second. You then configure the managed instance group autoscaler to use these custom metrics with target values, scale-in/scale-out thresholds, and cool-down periods. The autoscaler monitors metric values and adjusts instance count to maintain target levels. Custom metrics enable more intelligent scaling than CPU alone—for example, scaling based on message queue depth ensures adequate processing capacity regardless of CPU utilization. You can combine multiple metrics using autoscaler policies that consider CPU, memory, custom metrics, and load balancing metrics simultaneously, with the autoscaler acting on whichever metric indicates need for scaling. Custom metric autoscaling is essential for applications where resource requirements don’t correlate directly with CPU usage or for implementing business-driven scaling policies.
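For example, pointing a managed instance group’s autoscaler at a hypothetical custom.googleapis.com/queue_depth gauge metric might look like this:

```
# Scale worker-mig to hold queue depth near 100 per instance
gcloud compute instance-groups managed set-autoscaling worker-mig \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=20 \
    --custom-metric-utilization=metric=custom.googleapis.com/queue_depth,utilization-target=100,utilization-target-type=GAUGE \
    --cool-down-period=90
```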
Option B is incorrect because target CPU utilization is a standard autoscaling metric available by default but doesn’t address custom metric requirements. CPU-based scaling works for CPU-bound workloads but may not accurately reflect capacity needs for all applications.
Option C is incorrect because load balancing based scaling uses metrics from load balancers like requests per second or utilization, which are predefined metrics. While useful, this doesn’t provide the flexibility of truly custom application-specific metrics.
Option D is incorrect because schedule-based scaling adjusts capacity based on time patterns rather than real-time metrics. Scheduled scaling is useful for predictable load patterns but doesn’t respond to actual workload demands like custom metrics.
Question 85
You need to set up a development environment that mirrors your production Kubernetes cluster configuration. Which GKE feature allows you to create identical cluster configurations?
A) Cluster configuration export and import
B) Cluster autoscaling
C) Node pools
D) Workload Identity
Answer: A
The correct answer is option A. GKE supports exporting cluster configurations and using them to create new clusters with identical settings, enabling consistent environments across development, staging, and production. This capability ensures configuration parity and reduces errors from manual cluster creation.
You can export a cluster’s configuration using gcloud commands that generate YAML or JSON specifications containing cluster settings including node pool configurations, networking settings, security policies, add-on configurations, and cluster features. These configuration files can be version controlled, reviewed through standard code review processes, and applied to create new clusters with identical settings. This approach supports infrastructure-as-code practices where cluster configurations are maintained as declarative files. You can parameterize exported configurations for environment-specific values like cluster name, node count, or machine types while maintaining consistent security and networking settings. Configuration export and import simplifies multi-environment strategies, disaster recovery cluster recreation, and cluster migration between regions or projects. Combining this with Kubernetes manifest management tools like kubectl, Helm, or Kustomize provides complete environment reproducibility including both cluster infrastructure and workload configurations.
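A minimal sketch of the export side, assuming a hypothetical prod-cluster; note that gcloud has no direct create-from-file flag for clusters, so recreation typically flows through an IaC tool fed by the exported values:

```
# Capture the full cluster configuration as YAML for version control
gcloud container clusters describe prod-cluster \
    --region=us-central1 \
    --format=yaml > prod-cluster-config.yaml

# Feed the reviewed/parameterized values into Terraform, Config Connector,
# or scripted gcloud create commands to stamp out matching clusters.
```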
Option B is incorrect because cluster autoscaling automatically adjusts node count based on resource demands, not for creating identical cluster configurations. Autoscaling is a runtime capacity management feature rather than a configuration replication mechanism.
Option C is incorrect because node pools are groups of nodes within a cluster sharing configuration, useful for heterogeneous workloads but not for creating identical clusters. Node pools address intra-cluster node management rather than cross-cluster configuration consistency.
Option D is incorrect because Workload Identity provides Kubernetes service accounts secure access to Google Cloud services, not cluster configuration management. Workload Identity addresses authentication and authorization rather than infrastructure configuration replication.
Question 86
You want to ensure that all outbound traffic from your VPC network passes through a specific VM acting as a network appliance for inspection. Which routing configuration should you implement?
A) Custom static routes with higher priority
B) Cloud Router with BGP
C) VPC Network Peering
D) Policy-based routing
Answer: A
The correct answer is option A. Custom static routes with higher priority than default routes allow you to direct traffic through specific instances like network virtual appliances for inspection, filtering, or security processing. Static routes override default routing behavior, enabling traffic control patterns required for security appliances.
To implement traffic inspection, you create custom static routes with destination ranges (like 0.0.0.0/0 for all traffic) and next-hop pointing to your network appliance instance. The route priority must be higher than default routes (lower numerical value, default is 1000). The network appliance instance requires IP forwarding enabled using the canIpForward setting, allowing it to receive and forward packets destined for other addresses. You typically configure the appliance with two network interfaces—one receiving traffic from your VMs and one sending inspected traffic toward destinations. This architecture enables deploying third-party security appliances, implementing custom traffic filtering, logging all network traffic, or enforcing organizational security policies. For high availability, you can deploy multiple appliance instances with route priorities adjusted for failover. Consider that routing all traffic through a single instance creates a potential bottleneck and single point of failure, so production implementations often use multiple appliances with load balancing or active-passive configurations.
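A sketch of the two key pieces, using hypothetical names (inspect-appliance, prod-vpc):

```
# The appliance VM must be allowed to forward packets it did not originate
gcloud compute instances create inspect-appliance \
    --zone=us-central1-a \
    --can-ip-forward \
    --image-family=debian-12 --image-project=debian-cloud

# Steer all egress through the appliance; priority 500 beats the default 1000
gcloud compute routes create inspect-default \
    --network=prod-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=inspect-appliance \
    --next-hop-instance-zone=us-central1-a \
    --priority=500
```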
Option B is incorrect because Cloud Router with BGP provides dynamic routing for hybrid connectivity scenarios like VPN and Interconnect, not for directing internal VPC traffic through specific instances for inspection. BGP is for route exchange with external networks.
Option C is incorrect because VPC Network Peering connects VPC networks for private communication between them, not for routing traffic through inspection appliances within a single VPC. Peering is about inter-VPC connectivity.
Option D is incorrect because while policy-based routing concepts exist in networking generally, GCP VPC routing uses route tables with priorities rather than policy-based routing configurations. Custom static routes achieve the traffic steering requirements.
Question 87
You need to provide developers access to view application logs but prevent them from modifying log settings or deleting logs. Which predefined IAM role should you assign?
A) roles/logging.viewer
B) roles/logging.admin
C) roles/logging.configWriter
D) roles/viewer
Answer: A
The correct answer is option A. The roles/logging.viewer predefined IAM role grants read-only access to logs in Cloud Logging, allowing users to view log entries without the ability to modify log configurations, delete logs, or change retention settings. This role follows least privilege principles for users requiring log visibility without administrative capabilities.
The Logging Viewer role includes permissions to view logs using Logs Explorer, create and save log queries, view log-based metrics, and access log analytics features, but explicitly excludes permissions to modify log sinks, change retention policies, delete logs, or configure logging settings. This separation ensures developers can troubleshoot applications using logs while preventing accidental or malicious log deletion that could hamper security investigations or compliance requirements. The role is commonly assigned to developers, QA engineers, on-call engineers, and support staff who need log access for operational purposes. For more granular control, you can create custom roles with specific logging permissions or use IAM Conditions to limit access to logs from specific projects, resources, or log types. Combining logging.viewer with other roles like monitoring.viewer provides comprehensive observability access without administrative privileges.
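Granting the role is a one-line binding (the project and group names are placeholders):

```
# Give the developer group read-only access to logs
gcloud projects add-iam-policy-binding my-project \
    --member="group:developers@example.com" \
    --role="roles/logging.viewer"
```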
Option B is incorrect because roles/logging.admin grants full administrative control over logging including deleting logs, modifying retention policies, configuring sinks, and managing logging configuration—far exceeding read-only requirements and violating least privilege.
Option C is incorrect because roles/logging.configWriter allows creating and modifying logging configurations including sinks and exclusion filters but doesn’t necessarily grant log viewing permissions. ConfigWriter is for logging infrastructure management, not log consumption.
Option D is incorrect because roles/viewer is a basic role granting read access to most resources in a project, which may be overly broad for security-sensitive environments. Using logging.viewer provides more precise access control limited to logging functionality.
Question 88
You want to deploy a containerized application to GKE with automatic HTTPS certificate provisioning and management. Which feature should you use?
A) Google-managed SSL certificates with Ingress
B) Self-signed certificates
C) Cloud Armor
D) Certificate Authority Service
Answer: A
The correct answer is option A. Google-managed SSL certificates integrated with GKE Ingress provide automatic provisioning, renewal, and management of SSL/TLS certificates for HTTPS traffic to your applications. This managed approach eliminates manual certificate management overhead and ensures certificates never expire.
When you create a GKE Ingress resource with HTTPS configuration, you can specify that Google should provision and manage SSL certificates automatically. You define the domains in ManagedCertificate resources and reference them from the Ingress using the networking.gke.io/managed-certificates annotation; Google Cloud then provisions certificates from a trusted certificate authority, configures them on the load balancer, and automatically renews them before expiration. The process is transparent—you don’t handle certificate files, private keys, or renewal procedures. Google-managed certificates support multiple domains, either within a single ManagedCertificate resource or by attaching several resources to one Ingress. The provisioning process requires DNS configuration pointing domains to the Ingress load balancer IP address for domain ownership verification. Once configured, certificates automatically renew approximately 30 days before expiration without any intervention. This approach is ideal for production applications requiring HTTPS security without the operational overhead of certificate lifecycle management. For organizations preferring to use their own certificates, GKE also supports pre-created certificates referenced in Ingress specifications.
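A condensed sketch of the two resources, with hypothetical names (app-cert, app.example.com, app-service):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: app-cert
spec:
  domains:
    - app.example.com          # must resolve to the Ingress IP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    networking.gke.io/managed-certificates: app-cert
    kubernetes.io/ingress.class: gce   # external HTTP(S) load balancer
spec:
  defaultBackend:
    service:
      name: app-service
      port:
        number: 80
EOF
```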
Option B is incorrect because self-signed certificates aren’t trusted by browsers and clients, triggering security warnings for users. Self-signed certificates are suitable only for testing and development, not production applications requiring trusted HTTPS.
Option C is incorrect because Cloud Armor provides DDoS protection and web application firewall capabilities, not certificate management. Cloud Armor can work with Ingress for security but doesn’t provision SSL certificates.
Option D is incorrect because Certificate Authority Service provides private certificate authority infrastructure for issuing internal certificates, not public SSL certificates for HTTPS services. CAS is for internal PKI, while Google-managed certificates handle public-facing HTTPS.
Question 89
You need to analyze network traffic patterns and identify potential security issues in your VPC network. Which feature should you enable?
A) VPC Flow Logs
B) Cloud Logging
C) Cloud Monitoring
D) Packet Mirroring
Answer: A
The correct answer is option A. VPC Flow Logs capture information about IP traffic going to and from network interfaces in VPC networks, providing visibility into network traffic patterns, security analysis, network forensics, and troubleshooting. Flow logs enable detailed traffic analysis without impacting network performance.
VPC Flow Logs record samples of network flows including source and destination IP addresses, ports, protocols, packet and byte counts, timestamps, and connection disposition (accepted or dropped). You enable flow logs per subnet with configurable sampling rates and metadata inclusion levels balancing detail with storage costs. Flow logs export to Cloud Logging where you can query them using Logs Explorer, export to BigQuery for analysis with SQL, or send to Security Command Center for threat detection. Common use cases include identifying network traffic patterns and top talkers, investigating security incidents through traffic forensics, troubleshooting connectivity issues, validating firewall rules effectiveness, detecting anomalous traffic indicating compromise, and optimizing network utilization. Flow logs support compliance requirements for network monitoring and audit trails. You can analyze logs using BigQuery to create dashboards showing traffic patterns, identify unexpected connections, detect port scanning, or correlate network activity with application behavior. Flow logs integrate with Cloud IDS for intrusion detection based on traffic patterns.
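Enabling flow logs on a subnet with sampling and full metadata is a single update, sketched here with hypothetical names:

```
gcloud compute networks subnets update prod-subnet \
    --region=us-central1 \
    --enable-flow-logs \
    --logging-flow-sampling=0.5 \
    --logging-aggregation-interval=interval-5-sec \
    --logging-metadata=include-all
```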
Option B is incorrect because while Cloud Logging stores flow logs, it’s the destination service rather than the feature that generates network traffic data. Logging provides storage and analysis infrastructure but doesn’t capture flow information.
Option C is incorrect because Cloud Monitoring tracks metrics and creates dashboards but doesn’t capture detailed network flow information. Monitoring shows aggregate metrics while flow logs provide packet-level details.
Option D is incorrect because Packet Mirroring captures full packet contents for deep inspection by security appliances, which is more intensive and expensive than flow logs. Packet Mirroring is for detailed analysis of specific traffic while flow logs provide broad visibility across networks.
Question 90
You want to implement a CI/CD pipeline that automatically builds container images when code is committed to your Git repository. Which Google Cloud service should you use?
A) Cloud Build
B) Cloud Functions
C) Cloud Run
D) App Engine
Answer: A
The correct answer is option A. Cloud Build is Google Cloud’s fully managed continuous integration and continuous deployment platform that executes builds defined in configuration files. Cloud Build integrates with source repositories to automatically trigger builds on code commits, building container images, running tests, and deploying applications.
Cloud Build executes builds based on cloudbuild.yaml or Dockerfile configurations defining build steps. Each step runs in a container, enabling consistent build environments and supporting any build tool available in container images. You can trigger builds automatically from Cloud Source Repositories, GitHub, or Bitbucket when commits are pushed or pull requests are created. Cloud Build supports building Docker images with built-in Docker builder or using Buildpacks for source-to-image transformation without Dockerfiles. Built images are automatically pushed to Container Registry or Artifact Registry. Cloud Build integrates with GKE, Cloud Run, App Engine, and Compute Engine for deployment. The service provides build history, logs viewable in Cloud Logging, and artifact storage. Advanced features include parallel build steps, conditional execution, encrypted substitution variables for secrets, and custom builder images. Cloud Build’s serverless nature means you don’t manage build infrastructure—builds scale automatically and you pay only for build time. For comprehensive CI/CD pipelines, Cloud Build integrates with testing frameworks, security scanning tools, and deployment automation.
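A minimal sketch: a cloudbuild.yaml that builds and pushes an image, plus a GitHub trigger (the repo owner and name are placeholders):

```
cat <<'EOF' > cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA', '.']
images:
- 'gcr.io/$PROJECT_ID/app:$COMMIT_SHA'
EOF

# Run the build automatically on every push to main
gcloud builds triggers create github \
    --repo-owner=example-org \
    --repo-name=example-app \
    --branch-pattern='^main$' \
    --build-config=cloudbuild.yaml
```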
Option B is incorrect because Cloud Functions is for event-driven serverless compute, not build automation. While functions could theoretically trigger builds, Cloud Build is the purpose-built service for CI/CD.
Option C is incorrect because Cloud Run is a deployment platform for containerized applications, not a build service. Cloud Run runs container images but doesn’t build them—that’s Cloud Build’s role.
Option D is incorrect because App Engine is an application hosting platform, not a build service. App Engine deploys applications but relies on Cloud Build or other tools for building artifacts before deployment.
Question 91
You need to grant temporary access to a Cloud Storage bucket for an external user without creating a Google account. Which feature should you use?
A) Signed URLs
B) IAM service accounts
C) Bucket ACLs
D) Object versioning
Answer: A
The correct answer is option A. Signed URLs are time-limited URLs providing temporary access to specific Cloud Storage objects without requiring requesters to have Google accounts or credentials. Signed URLs include cryptographic signatures authorizing access for specified durations and operations.
You generate signed URLs using service account credentials with appropriate Cloud Storage permissions, specifying the bucket, object, HTTP method (GET for download, PUT for upload), and expiration time. The resulting URL contains query parameters including the signature, making it self-contained—anyone with the URL can access the object during the validity period without authentication. Signed URLs are ideal for sharing files with external users, providing temporary upload locations, enabling time-limited content downloads, or integrating with third-party systems requiring object access. You can generate signed URLs programmatically using client libraries or command-line tools, making them suitable for automated workflows. The expiration period can range from minutes to days depending on requirements, with shorter periods providing better security. After expiration, URLs become invalid and attempts to use them fail. For upload scenarios, you can further restrict signed URLs to specific content types or file sizes. Signed URLs work with both objects and bucket-level operations, providing flexibility for various access patterns.
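For instance, signing a one-hour download link with a service account key (the key file, bucket, and object names are hypothetical):

```
# Requires a service account key with storage access to the object
gsutil signurl -d 1h -m GET sa-key.json gs://example-bucket/report.pdf
```

The command prints a URL that anyone can use for GET requests until it expires.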
Option B is incorrect because IAM service accounts are identities for applications, not mechanisms for providing temporary access to external users. Service accounts require credential management inappropriate for external user scenarios.
Option C is incorrect because bucket ACLs provide persistent access control rather than temporary access. ACLs grant permissions to specific users or groups but don’t support time-limited access or usage without credentials.
Option D is incorrect because object versioning maintains multiple versions of objects for recovery and audit purposes, not for providing temporary access. Versioning is about data protection rather than access control.
Question 92
You want to implement disaster recovery for your GKE cluster with automated backup and restore of cluster resources and persistent volumes. Which solution should you use?
A) Backup for GKE
B) Persistent disk snapshots
C) GKE cluster export
D) Cloud Storage bucket sync
Answer: A
The correct answer is option A. Backup for GKE is a managed service providing automated backup and restore capabilities for GKE cluster resources including Kubernetes manifests, configurations, and persistent volume data. This service simplifies disaster recovery, cluster migration, and protection against data loss or misconfigurations.
Backup for GKE creates comprehensive backups including Kubernetes resources like Deployments, Services, ConfigMaps, Secrets, and application data stored in persistent volumes. You configure backup plans specifying schedules, retention policies, and scope (entire cluster or specific namespaces). Backups capture point-in-time snapshots that can restore clusters to previous states after failures, accidental deletions, or security incidents. The service supports restoring to the same cluster, different clusters, or clusters in different projects or regions, enabling flexible disaster recovery strategies. Backup for GKE handles dependencies between resources, ensuring consistent restores where Deployments, Services, and PersistentVolumeClaims maintain relationships. You can configure application-consistent backups using pre and post-backup hooks for databases requiring quiescence. The service integrates with IAM for access control, provides backup encryption at rest, and generates restore reports showing what was recovered. Backup for GKE is essential for production GKE environments requiring business continuity, compliance with backup requirements, or protection against regional failures.
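A hedged sketch of a daily backup plan; all names are illustrative and the flags follow the gcloud container backup-restore command surface:

```
gcloud container backup-restore backup-plans create daily-plan \
    --project=my-project \
    --location=us-central1 \
    --cluster=projects/my-project/locations/us-central1/clusters/prod-cluster \
    --all-namespaces \
    --include-secrets \
    --include-volume-data \
    --cron-schedule="0 3 * * *" \
    --backup-retain-days=30
```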
Option B is incorrect because persistent disk snapshots backup only disk data without Kubernetes resource definitions like Deployments and Services. Snapshots don’t capture the complete cluster state needed for full disaster recovery.
Option C is incorrect because while you can export cluster configurations, this doesn’t backup application data or automate the backup process. Cluster export is useful for configuration management but doesn’t provide comprehensive disaster recovery.
Option D is incorrect because Cloud Storage bucket sync could theoretically backup some data but doesn’t capture Kubernetes resources or provide integrated backup and restore workflows. This approach requires significant custom scripting and doesn’t leverage managed backup capabilities.
Question 93
You need to enforce that all VMs in your project use specific machine types and prevent users from creating instances with unauthorized configurations. Which Organization Policy constraint should you use?
A) compute.vmExternalIpAccess or a custom constraint for machine types
B) compute.disableSerialPortAccess
C) compute.requireShieldedVm
D) compute.skipDefaultNetworkCreation
Answer: A
The correct answer is option A. Organization Policy Service provides constraints like compute.vmExternalIpAccess for restricting VM configurations and supports custom constraints using Common Expression Language for enforcing specific machine type requirements. These policies enable governance across projects ensuring compliance with organizational standards.
While there isn’t a predefined constraint specifically for machine types, you can create custom Organization Policy constraints that evaluate VM creation requests and deny those using unauthorized machine types. Custom constraints use CEL expressions checking resource properties like machine type names against allowed patterns. For example, you could create a constraint allowing only machine types matching patterns like “e2-” or “n2-standard-” while denying others. These constraints apply at organization, folder, or project levels, enforcing policies across your resource hierarchy. Organization policies evaluate during resource creation, preventing non-compliant resources before they’re created rather than detecting violations afterward. For machine type enforcement, combining organization policies with IAM restrictions on who can create instances provides comprehensive control. You might also use deployment automation tools like Terraform with validation rules ensuring only approved configurations are deployed, complementing organization policy enforcement with development-time checks.
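A sketch of what such a custom constraint and its enforcing policy could look like (the org ID, constraint name, and allowed machine-type families are all hypothetical):

```
# Define the custom constraint
cat <<'EOF' > custom-constraint.yaml
name: organizations/123456789/customConstraints/custom.allowedMachineTypes
resourceTypes:
- compute.googleapis.com/Instance
methodTypes:
- CREATE
condition: "resource.machineType.contains('/machineTypes/e2-') || resource.machineType.contains('/machineTypes/n2-standard-')"
actionType: ALLOW
displayName: Restrict VM machine types
EOF
gcloud org-policies set-custom-constraint custom-constraint.yaml

# Enforce it with a policy at the organization level
cat <<'EOF' > policy.yaml
name: organizations/123456789/policies/custom.allowedMachineTypes
spec:
  rules:
  - enforce: true
EOF
gcloud org-policies set-policy policy.yaml
```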
Option B is incorrect because compute.disableSerialPortAccess controls whether VMs have serial port access enabled, not machine type restrictions. This constraint addresses security concerns about serial console access.
Option C is incorrect because compute.requireShieldedVm enforces that VMs use Shielded VM features like secure boot and vTPM, not machine type restrictions. Shielded VM is about security features rather than compute capacity governance.
Option D is incorrect because compute.skipDefaultNetworkCreation prevents automatic default network creation in new projects, not machine type restrictions. This constraint addresses network configuration governance.
Question 94
You want to route traffic to the closest healthy backend based on user location for a global application. Which load balancing option should you use?
A) External HTTP(S) Load Balancer
B) Regional Network Load Balancer
C) Internal TCP/UDP Load Balancer
D) SSL Proxy Load Balancer
Answer: A
The correct answer is option A. External HTTP(S) Load Balancer is Google Cloud’s global load balancing solution providing geo-proximity routing, automatic failover, and health-based traffic distribution across backend instances globally. This load balancer type delivers optimal user experience by routing requests to the nearest healthy backend.
The External HTTP(S) Load Balancer uses Google’s global network with a single anycast IP address that users connect to worldwide. Google’s infrastructure automatically routes traffic to the nearest Point of Presence (PoP), then directs it to the closest healthy backend based on latency and backend health. The load balancer supports backends in multiple regions as backend services with health checks ensuring traffic reaches only healthy instances. When backends in one region become unhealthy or overwhelmed, traffic automatically shifts to healthy backends in other regions. The load balancer provides URL-based routing for microservices, SSL/TLS termination, integration with Cloud CDN for static content caching, Cloud Armor for security, and session affinity for stateful applications. For global applications serving users worldwide, this architecture provides the best latency by serving traffic from geographically distributed backends while maintaining high availability through automatic failover. The load balancer scales automatically to handle traffic spikes without capacity planning.
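As a rough sketch, the global data plane is assembled from a health check, a global backend service with regional instance-group backends, a URL map, a proxy, and a global forwarding rule (all names hypothetical):

```
gcloud compute health-checks create http web-hc --port=80
gcloud compute backend-services create web-bes \
    --protocol=HTTP --health-checks=web-hc --global
gcloud compute backend-services add-backend web-bes --global \
    --instance-group=web-mig-us --instance-group-region=us-central1
gcloud compute backend-services add-backend web-bes --global \
    --instance-group=web-mig-eu --instance-group-region=europe-west1
gcloud compute url-maps create web-map --default-service=web-bes
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-fr --global \
    --target-http-proxy=web-proxy --ports=80
```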
Option B is incorrect because Regional Network Load Balancer operates within a single region and doesn’t provide global routing based on user location. Regional load balancers are suitable for regional applications but don’t optimize global traffic distribution.
Option C is incorrect because Internal TCP/UDP Load Balancer provides load balancing for traffic within VPC networks, not for external users. Internal load balancers address east-west traffic patterns rather than global internet user traffic.
Option D is incorrect because SSL Proxy Load Balancer handles non-HTTP(S) SSL traffic globally but lacks HTTP-specific features like URL-based routing and Cloud CDN integration. For HTTP/HTTPS applications, HTTP(S) load balancer provides better functionality.
Question 95
You need to ensure that data stored in BigQuery tables is encrypted using your own encryption keys rather than Google-managed keys. What should you configure?
A) Customer-managed encryption keys (CMEK) with Cloud KMS
B) Client-side encryption before loading data
C) Application-level encryption
D) BigQuery default encryption
Answer: A
The correct answer is option A. Customer-managed encryption keys (CMEK) through Cloud KMS allow you to control encryption keys used for BigQuery data at rest, providing additional security control and enabling compliance with regulations requiring customer control over encryption keys. CMEK gives you authority over data access through key management.
To implement CMEK with BigQuery, you create encryption keys in Cloud KMS in the same location as your BigQuery dataset, grant the BigQuery service account permission to use those keys (roles/cloudkms.cryptoKeyEncrypterDecrypter), then configure BigQuery datasets or tables to use specific Cloud KMS keys for encryption. BigQuery encrypts data using envelope encryption where data encryption keys (DEKs) encrypt actual data and your Cloud KMS key encrypts the DEKs. When you disable or destroy a Cloud KMS key, data encrypted with that key becomes inaccessible even to Google, providing strong control over data availability. CMEK supports key rotation, audit logging of key usage through Cloud Audit Logs, and centralized key management across multiple Google Cloud services. The approach satisfies compliance requirements for industries like healthcare, finance, or government requiring customer control over encryption keys. While CMEK adds complexity and costs compared to default Google-managed encryption, it provides essential control for sensitive data scenarios.
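A sketch of the key steps, with hypothetical project, key ring, and dataset names; the service account follows BigQuery’s bq-PROJECT_NUMBER@bigquery-encryption.iam.gserviceaccount.com format:

```
# Let BigQuery's service account use the key
gcloud kms keys add-iam-policy-binding bq-key \
    --location=us --keyring=bq-keyring \
    --member="serviceAccount:bq-123456789@bigquery-encryption.iam.gserviceaccount.com" \
    --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"

# New tables in this dataset default to the CMEK key
bq mk --dataset \
    --default_kms_key=projects/my-project/locations/us/keyRings/bq-keyring/cryptoKeys/bq-key \
    my-project:analytics
```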
Option B is incorrect because client-side encryption before loading data means BigQuery never has access to plaintext data, preventing BigQuery from performing queries or analytics. Client-side encryption is suitable for opaque data storage but defeats BigQuery’s analytical capabilities.
Option C is incorrect because application-level encryption encrypts data within application code before storage, similar to client-side encryption, making data unqueryable by BigQuery. Application encryption works for specific use cases but isn’t the recommended approach for general BigQuery encryption control.
Option D is incorrect because BigQuery default encryption uses Google-managed keys, which doesn’t provide customer control over keys. Default encryption is transparent and sufficient for many use cases but doesn’t meet requirements for customer-managed keys.
Question 96
You want to limit the number of concurrent connections to your Cloud SQL instance to prevent resource exhaustion. Which parameter should you configure?
A) max_connections
B) connection_timeout
C) max_allowed_packet
D) query_cache_size
Answer: A
The correct answer is option A. The max_connections parameter in Cloud SQL defines the maximum number of concurrent client connections allowed to the database instance. Configuring this parameter appropriately prevents resource exhaustion from excessive connections while ensuring sufficient capacity for legitimate application needs.
Cloud SQL MySQL and PostgreSQL instances have default max_connections values calculated based on instance memory, but you can adjust this parameter through database flags in Cloud SQL configuration. Setting max_connections too high can exhaust instance memory and CPU as each connection consumes resources, while setting it too low can cause application connection failures during peak loads. You should consider your application’s connection pooling configuration, expected concurrent user load, and instance resources when setting this value. Best practices include implementing connection pooling in applications to reuse connections efficiently rather than creating new connections for each request, monitoring actual connection usage through Cloud SQL metrics, and setting max_connections with headroom for traffic spikes. For applications experiencing frequent “too many connections” errors, solutions include increasing max_connections if resources allow, implementing or optimizing connection pooling, scaling to larger instance types with more memory, or using Cloud SQL Proxy which provides connection pooling. The parameter is particularly important for shared application environments where multiple services connect to the same database.
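Setting the flag is a single patch (the instance name and value are placeholders); note that --database-flags replaces the entire flag set, so repeat any flags already configured:

```
# May restart the instance, depending on the flag
gcloud sql instances patch prod-db \
    --database-flags=max_connections=500
```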
Option B is incorrect because connection timeout settings such as MySQL’s connect_timeout control how long the server waits for a connection handshake to complete (while wait_timeout reaps idle connections), not the total number of concurrent connections. Timeouts address stuck or idle connections rather than maximum capacity.
Option C is incorrect because max_allowed_packet sets the maximum size of packets or generated intermediate strings, not connection limits. This parameter addresses query and data size constraints rather than connection capacity.
Option D is incorrect because query_cache_size defines memory allocated for caching query results in MySQL (deprecated in MySQL 8.0), not connection limits. Query cache affects query performance but doesn’t restrict connection counts.
Question 97
You need to create a development project that inherits organization-wide security policies but allows developers greater flexibility for experimentation. How should you structure this in the resource hierarchy?
A) Create a development folder with specific policies overriding organization policies
B) Create an independent organization
C) Use a separate billing account
D) Create a shared VPC
Answer: A
The correct answer is option A. Creating a development folder within the organization hierarchy allows you to apply development-specific policies that override more restrictive organization-level policies while maintaining governance structure and baseline security. Folders provide hierarchical policy management enabling different rules for different environments.
In Google Cloud’s resource hierarchy (Organization > Folders > Projects), policies inherit from parent resources but can be overridden at lower levels if policy enforcement settings allow. For development environments, you create a folder under the organization, place development projects within that folder, and apply organization policies that grant additional flexibility like allowing external IP addresses, permitting specific machine types, or relaxing certain security constraints. Organization-wide baseline policies for critical requirements like audit logging or data residency remain enforced through inheritance even while development-specific policies provide necessary flexibility. This structure maintains visibility and control—all development projects remain under organizational governance with appropriate billing, IAM inheritance, and audit logging while enabling the flexibility developers need for experimentation and testing. You can implement similar patterns for staging and production folders with progressively stricter policies. This hierarchy approach balances security, compliance, and development agility without fragmenting your cloud organization into separate entities.
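A sketch of the structure with hypothetical IDs:

```
# Create the Development folder under the organization
gcloud resource-manager folders create \
    --display-name="Development" \
    --organization=123456789

# New projects placed in the folder inherit its (relaxed) policies
gcloud projects create dev-sandbox-001 --folder=456789012
```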
Option B is incorrect because creating independent organizations fragments your cloud environment, losing centralized management, shared billing, consolidated audit logging, and unified identity management. Separate organizations create administrative overhead and reduce visibility.
Option C is incorrect because while separate billing accounts might provide cost tracking, they don’t address policy management or provide the hierarchical policy control that folders offer. Billing accounts are about financial management, not resource governance.
Option D is incorrect because Shared VPC provides network resource sharing across projects but doesn’t address organizational policy management or resource hierarchy governance. Shared VPC solves network connectivity problems rather than policy and flexibility requirements.
Question 98
You need to migrate a large on-premises dataset (multiple terabytes) to Cloud Storage with minimal network bandwidth consumption. Which service should you use?
A) Transfer Appliance
B) Storage Transfer Service
C) gsutil
D) Cloud VPN
Answer: A
The correct answer is option A. Transfer Appliance is a physical storage device that Google ships to your data center for loading large datasets locally, then ships back to Google for data upload to Cloud Storage. This offline data transfer method is ideal for multi-terabyte datasets where network transfer would be prohibitively slow or expensive.
Transfer Appliance is a high-capacity NAS device available in 100TB or 480TB configurations that you connect to your local network. You mount the appliance as a network share, copy data using standard file transfer tools, then ship the appliance back to Google where data is uploaded to your specified Cloud Storage buckets. This approach bypasses network bandwidth limitations, potentially completing transfers in days that might take weeks or months over internet connections. Transfer Appliance is cost-effective for datasets where network egress charges and transfer time make online transfer impractical. The service includes encryption at rest and in transit for security, tracking for shipment visibility, and supports incremental transfers for ongoing data migration. Google provides the appliance, shipping logistics, and data loading services as part of the offering. Common use cases include initial cloud migrations, disaster recovery dataset transfers, data center decommissioning, and periodic bulk dataset updates. For datasets under 10TB, network transfer using gsutil or Storage Transfer Service might be more practical depending on bandwidth availability.
Option B is incorrect because Storage Transfer Service moves data over the network between cloud storage providers or from HTTP/HTTPS sources, making it unsuitable for extremely large datasets with limited bandwidth. Storage Transfer Service works well for online transfers but doesn’t address bandwidth constraints.
Option C is incorrect because gsutil is a command-line tool for Cloud Storage operations including data upload, but transfers occur over the network. While gsutil supports parallel uploads and resumable transfers, it still requires sufficient network bandwidth for multi-terabyte datasets.
Option D is incorrect because Cloud VPN provides secure connectivity between on-premises and Google Cloud but doesn’t increase available bandwidth or accelerate large data transfers. VPN encrypts network traffic but data still transfers over existing network connections.
Question 99
You want to analyze application traces to identify performance bottlenecks across microservices. Which Google Cloud service provides distributed tracing capabilities?
A) Cloud Trace
B) Cloud Profiler
C) Cloud Monitoring
D) Cloud Logging
Answer: A
The correct answer is option A. Cloud Trace is Google Cloud’s distributed tracing system that collects latency data from applications, showing how requests propagate through multiple services and identifying performance bottlenecks. Trace provides visibility into request execution paths across distributed architectures like microservices.
Cloud Trace captures timing information for requests as they flow through your application, recording spans representing operations within services and their durations. Traces show the complete request path including which services were called, in what order, how long each operation took, and where time was spent. This visibility is essential for microservices where requests traverse multiple services and identifying bottlenecks requires understanding the entire call chain. Cloud Trace automatically traces App Engine, Cloud Run, and GKE applications with minimal configuration. For custom applications, you instrument code using OpenTelemetry or Trace client libraries to create spans and add annotations. Trace provides a web interface visualizing request latency distributions, scatter plots showing latency patterns, and detailed timeline views for individual traces. You can filter traces by latency threshold to focus on slow requests, analyze traces by service to identify problematic components, and correlate traces with logs for comprehensive debugging. Cloud Trace integrates with Cloud Monitoring for latency-based alerting and with Cloud Profiler for code-level performance analysis.
Option B is incorrect because Cloud Profiler analyzes CPU and memory usage at the code level within individual services, not request flow across services. Profiler identifies inefficient code but doesn’t trace distributed requests through microservices architectures.
Option C is incorrect because Cloud Monitoring collects and visualizes metrics for infrastructure and applications but doesn’t provide detailed request tracing through distributed systems. Monitoring shows aggregate latency metrics while Trace shows individual request paths.
Option D is incorrect because Cloud Logging collects and stores log messages but doesn’t automatically create distributed traces showing request flow timing. While logs can contain trace IDs for correlation, Logging doesn’t provide the tracing visualization and latency analysis that Trace offers.
Question 100
You need to implement access controls based on user device security posture, requiring managed devices for accessing sensitive applications. Which service should you configure?
A) BeyondCorp Enterprise (formerly Context-Aware Access)
B) Cloud Identity
C) Identity-Aware Proxy
D) VPC Service Controls
Answer: A
The correct answer is option A. BeyondCorp Enterprise provides context-aware access controls that evaluate device security posture, user identity, location, and other contextual signals before granting access to applications and resources. This zero-trust security model ensures only compliant devices and authorized users access sensitive applications.
BeyondCorp Enterprise (formerly Context-Aware Access) integrates with endpoint management solutions to verify device compliance including encryption status, operating system version, screen lock configuration, and installed security software. You configure access levels defining requirements like “device must be corporate-managed and encrypted” or “user must authenticate with two-factor authentication from trusted location.” These access levels are then applied to applications through IAM conditional policies or Identity-Aware Proxy, enforcing requirements before granting access. The service evaluates access requests in real-time, denying access from non-compliant devices even when users have valid credentials. This approach protects against compromised devices, enforces security baselines across the organization, supports bring-your-own-device (BYOD) policies with appropriate restrictions, and enables granular access controls based on risk factors. BeyondCorp Enterprise supports integration with Chrome Enterprise for Chrome OS device management, Microsoft Endpoint Manager for Windows devices, and mobile device management solutions for iOS and Android. The zero-trust model eliminates reliance on network perimeter security, recognizing that threats exist both inside and outside traditional network boundaries.
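A sketch of an access level requiring corporate-owned, screen-locked devices; the policy ID and names are hypothetical, and the spec follows the basic-level YAML list-of-conditions format:

```
cat <<'EOF' > level-spec.yaml
- devicePolicy:
    requireScreenlock: true
    requireCorpOwned: true
EOF

gcloud access-context-manager levels create managed_devices \
    --title="Managed devices only" \
    --basic-level-spec=level-spec.yaml \
    --policy=987654321
```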
Option B is incorrect because Cloud Identity provides user identity management including authentication and directory services but doesn’t evaluate device security posture. Cloud Identity is the identity provider foundation that BeyondCorp Enterprise builds upon.
Option C is incorrect because while Identity-Aware Proxy can enforce access controls and integrates with BeyondCorp Enterprise for context-aware decisions, IAP alone doesn’t evaluate device posture. IAP provides authentication and authorization proxying but requires BeyondCorp Enterprise for context-aware device checks.
Option D is incorrect because VPC Service Controls create security perimeters preventing data exfiltration from Google Cloud services, not device-based access controls for users. Service Controls protect cloud resources from unauthorized projects but don’t evaluate user device compliance.