Title of test: GD
Description: developerGC
Creation Date: 23/12/2023
Category: Computers
Number of questions: 262
Content:
You need to migrate a standalone Java application running in an on-premises Linux virtual machine (VM) to Google Cloud in a cost-effective manner. You decide not to take the lift-and-shift approach, and instead you plan to modernize the application by converting it to a container. How should you accomplish this task? A. Use Migrate for Anthos to migrate the VM to your Google Kubernetes Engine (GKE) cluster as a container. B. Export the VM as a raw disk and import it as an image. Create a Compute Engine instance from the imported image. C. Use Migrate for Compute Engine to migrate the VM to a Compute Engine instance, and use Cloud Build to convert it to a container. D. Use Jib to build a Docker image from your source code, and upload it to Artifact Registry. Deploy the application in a GKE cluster, and test the application.
Your application is deployed on hundreds of Compute Engine instances in a managed instance group (MIG) in multiple zones. You need to deploy a new instance template to fix a critical vulnerability immediately but must avoid impact to your service. Which setting should you change on the MIG after updating the instance template? A. Set the Max Surge to 100%. B. Set the Update mode to Opportunistic. C. Set the Maximum Unavailable to 100%. D. Set the Minimum Wait time to 0 seconds.
Your application is running as a container in a Google Kubernetes Engine cluster. You need to add a secret to your application using a secure approach. What should you do? A. Create a Kubernetes Secret, and pass the Secret as an environment variable to the container. B. Enable Application-layer Secret Encryption on the cluster using a Cloud Key Management Service (KMS) key. C. Store the credential in Cloud KMS. Create a Google service account (GSA) to read the credential from Cloud KMS. Export the GSA as a .json file, and pass the .json file to the container as a volume which can read the credential from Cloud KMS. D. Store the credential in Secret Manager. Create a Google service account (GSA) to read the credential from Secret Manager. Create a Kubernetes service account (KSA) to run the container. Use Workload Identity to configure your KSA to act as a GSA.
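For context on the Workload Identity option (D): once the Kubernetes service account is mapped to a Google service account, application code reads the secret through the Secret Manager client library with Application Default Credentials, and no key file is ever mounted into the pod. A minimal Python sketch, assuming hypothetical project and secret names:

    from google.cloud import secretmanager

    # Under Workload Identity, the client picks up the pod's mapped Google
    # service account automatically via Application Default Credentials.
    client = secretmanager.SecretManagerServiceClient()

    # "my-project" and "db-password" are placeholders for illustration.
    name = "projects/my-project/secrets/db-password/versions/latest"
    response = client.access_secret_version(request={"name": name})
    db_password = response.payload.data.decode("utf-8")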
You are a lead developer working on a new retail system that runs on Cloud Run and Firestore in Datastore mode. A web UI requirement is for the system to display a list of available products when users access the system and for the user to be able to browse through all products. You have implemented this requirement in the minimum viable product (MVP) phase by returning a list of all available products stored in Firestore. A few months after go-live, you notice that Cloud Run instances are terminated with HTTP 500: Container instances are exceeding memory limits errors during busy times. This error coincides with spikes in the number of Datastore entity reads. You need to prevent Cloud Run from crashing and decrease the number of Datastore entity reads. You want to use a solution that optimizes system performance. What should you do? A. Modify the query that returns the product list using integer offsets. B. Modify the query that returns the product list using limits. C. Modify the Cloud Run configuration to increase the memory limits. D. Modify the query that returns the product list using cursors.
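Cursor-based pagination keeps each request's entity reads bounded no matter how deep a user browses, whereas integer offsets still bill for every skipped entity. A sketch with the Python Datastore client, where the kind name and page size are illustrative:

    from google.cloud import datastore

    client = datastore.Client()

    def fetch_products(page_size=20, cursor=None):
        query = client.query(kind="Product")  # illustrative kind name
        query_iter = query.fetch(limit=page_size, start_cursor=cursor)
        page = next(query_iter.pages)  # materialize one page only
        products = list(page)
        # Hand this token back to the client to request the next page.
        return products, query_iter.next_page_token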
You are developing a new application. You want the application to be triggered only when a given file is updated in your Cloud Storage bucket. Your trigger might change, so your process must support different types of triggers. You want the configuration to be simple so that multiple team members can update the triggers in the future. What should you do? A. Configure Cloud Storage events to be sent to Pub/Sub, and use Pub/Sub events to trigger a Cloud Build job that executes your application B. Create an Eventarc trigger that monitors your Cloud Storage bucket for a specific filename, and set the target as Cloud Run. C. Configure a Cloud Function that executes your application and is triggered when an object is updated in Cloud Storage. D. Configure a Firebase function that executes your application and is triggered when an object is updated in Cloud Storage.
You are defining your system tests for an application running in Cloud Run in a Google Cloud project. You need to create a testing environment that is isolated from the production environment. You want to fully automate the creation of the testing environment with the least amount of effort and execute automated tests. What should you do? A. Using Cloud Build, execute Terraform scripts to create a new Google Cloud project and a Cloud Run instance of your application in the Google Cloud project. B. Using Cloud Build, execute a Terraform script to deploy a new Cloud Run revision in the existing Google Cloud project. Use traffic splitting to send traffic to your test environment. C. Using Cloud Build, execute gcloud commands to create a new Google Cloud project and a Cloud Run instance of your application in the Google Cloud project. D. Using Cloud Build, execute gcloud commands to deploy a new Cloud Run revision in the existing Google Cloud project. Use traffic splitting to send traffic to your test environment.
You are a cluster administrator for Google Kubernetes Engine (GKE). Your organization’s clusters are enrolled in a release channel. You need to be informed of relevant events that affect your GKE clusters, such as available upgrades and security bulletins. What should you do? A. Configure cluster notifications to be sent to a Pub/Sub topic. B. Execute a scheduled query against the google_cloud_release_notes BigQuery dataset. C. Query the GKE API for available versions. D. Create an RSS subscription to receive a daily summary of the GKE release notes.
You are developing a single-player mobile game backend that has unpredictable traffic patterns as users interact with the game throughout the day and night. You want to optimize costs by ensuring that you have enough resources to handle requests, but minimize over-provisioning. You also want the system to handle traffic spikes efficiently. Which compute platform should you use? A. Cloud Run B. Compute Engine with managed instance groups C. Compute Engine with unmanaged instance groups D. Google Kubernetes Engine using cluster autoscaling.
You want to use the Stackdriver Logging Agent to send an application's log file to Stackdriver from a Compute Engine virtual machine instance. After installing the Stackdriver Logging Agent, what should you do first? A. Enable the Error Reporting API on the project. B. Grant the instance full access to all Cloud APIs. C. Configure the application log file as a custom source. D. Create a Stackdriver Logs Export Sink with a filter that matches the application's log entries.
You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and need to quickly test Pub/Sub integration for each new build. How should you configure local testing? A. In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API. When developing locally, configure your application to call pubsub.googleapis.com. B. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator by exporting the PUBSUB_EMULATOR_HOST variable. C. Run the gcloud config set api_endpoint_overrides/pubsub https://pubsubemulator.googleapis.com/ command to change the Pub/Sub endpoint prior to starting the application. D. Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally, configure your application to call pubsub.googleapis.com.
You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive messages and update the current stock price in an in-memory database. A downstream service needs the most up-to-date prices in the in-memory database to perform stock trading transactions. Each message contains three pieces of information: • Stock symbol • Stock price • Timestamp for the update How should you set up your Pub/Sub subscription? A. Create a push subscription with exactly-once delivery enabled. B. Create a pull subscription with both ordering and exactly-once delivery turned off. C. Create a pull subscription with ordering enabled, using the stock symbol as the ordering key. D. Create a push subscription with both ordering and exactly-once delivery turned off.
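On the ordering-key option (C): publishing each update with the stock symbol as the ordering key guarantees that updates for a given symbol are delivered in publish order, so the in-memory database never regresses to a stale price. A hedged Python sketch, with placeholder project, topic, and payload:

    from google.cloud import pubsub_v1

    # Message ordering must be enabled on the publisher as well as on the
    # subscription for ordering keys to take effect.
    publisher = pubsub_v1.PublisherClient(
        publisher_options=pubsub_v1.types.PublisherOptions(
            enable_message_ordering=True
        )
    )
    topic_path = publisher.topic_path("my-project", "stock-updates")

    # All updates for GOOG share one ordering key and arrive in order.
    publisher.publish(
        topic_path,
        b'{"symbol": "GOOG", "price": 132.45, "ts": "2023-12-23T10:00:00Z"}',
        ordering_key="GOOG",
    )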
You are tasked with using C++ to build and deploy a microservice for an application hosted on Google Cloud. The code needs to be containerized and use several custom software libraries that your team has built. You do not want to maintain the underlying infrastructure of the application. How should you deploy the microservice? A. Use Cloud Functions to deploy the microservice. B. Use Cloud Build to create the container, and deploy it on Cloud Run. C. Use Cloud Shell to containerize your microservice, and deploy it on a Container-Optimized OS Compute Engine instance. D. Use Cloud Shell to containerize your microservice, and deploy it on standard Google Kubernetes Engine.
You are deploying a microservices application to Google Kubernetes Engine (GKE) that will broadcast livestreams. You expect unpredictable traffic patterns and large variations in the number of concurrent users. Your application must meet the following requirements: • Scales automatically during popular events and maintains high availability • Is resilient in the event of hardware failures How should you configure the deployment parameters? (Choose two.) A. Distribute your workload evenly using a multi-zonal node pool. B. Distribute your workload evenly using multiple zonal node pools. C. Use cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload. D. Create a managed instance group for Compute Engine with the cluster nodes. Configure autoscaling rules for the managed instance group. E. Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization. Ask an on-duty engineer to scale the workload by executing a script when CPU and memory usage exceed predefined thresholds.
You recently developed a new application. You want to deploy the application on Cloud Run without a Dockerfile. Your organization requires that all container images are pushed to a centrally managed container repository. How should you build your container using Google Cloud services? (Choose two.) A. Push your source code to Artifact Registry. B. Submit a Cloud Build job to push the image. C. Use the pack build command with pack CLI. D. Include the --source flag with the gcloud run deploy CLI command. E. Include the --platform=kubernetes flag with the gcloud run deploy CLI command.
You have an application running on Google Kubernetes Engine (GKE). The application is currently using a logging library and is outputting to standard output. You need to export the logs to Cloud Logging, and you need the logs to include metadata about each request. You want to use the simplest method to accomplish this. What should you do? A. Change your application’s logging library to the Cloud Logging library, and configure your application to export logs to Cloud Logging. B. Update your application to output logs in JSON format, and add the necessary metadata to the JSON. C. Update your application to output logs in CSV format, and add the necessary metadata to the CSV. D. Install the Fluent Bit agent on each of your GKE nodes, and have the agent export all logs from /var/log.
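For the JSON option (B): on GKE, anything written to stdout as single-line JSON is picked up by the built-in logging agent and ingested as a structured log entry, with special fields such as severity promoted to log metadata. A minimal sketch; the extra metadata fields are illustrative:

    import json
    import sys

    def log(severity, message, **metadata):
        # Single-line JSON on stdout becomes a structured Cloud Logging entry.
        entry = {"severity": severity, "message": message, **metadata}
        print(json.dumps(entry), file=sys.stdout, flush=True)

    log("INFO", "request handled", request_id="abc-123", latency_ms=42)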
You have a web application that publishes messages to Pub/Sub. You plan to build new versions of the application locally and want to quickly test Pub/Sub integration for each new build. How should you configure local testing? A. Install Cloud Code on the integrated development environment (IDE). Navigate to Cloud APIs, and enable Pub/Sub against a valid Google Project ID. When developing locally, configure your application to call pubsub.googleapis.com. B. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator with $(gcloud beta emulators pubsub env-init). C. In the Google Cloud console, navigate to the API Library, and enable the Pub/Sub API. When developing locally, configure your application to call pubsub.googleapis.com. D. Install the Pub/Sub emulator using gcloud, and start the emulator with a valid Google Project ID. When developing locally, configure your application to use the local emulator by exporting the PUBSUB_EMULATOR_HOST variable.
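For reference on the emulator options above: the Pub/Sub client libraries switch to the emulator whenever PUBSUB_EMULATOR_HOST is set, which is exactly what gcloud beta emulators pubsub env-init exports. A sketch of a local test against the emulator, with placeholder project, topic, and port:

    import os

    # Normally exported into the shell by: $(gcloud beta emulators pubsub env-init)
    os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"

    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()  # now connects to the emulator
    topic_path = publisher.topic_path("test-project", "build-events")
    publisher.create_topic(request={"name": topic_path})
    message_id = publisher.publish(topic_path, b"integration test message").result()
    print(message_id)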
You are developing an event-driven application. You have created a topic to receive messages sent to Pub/Sub. You want those messages to be processed in real time. You need the application to be independent from any other system and only incur costs when new messages arrive. How should you configure the architecture? A. Deploy the application on Compute Engine. Use a Pub/Sub push subscription to process new messages in the topic. B. Deploy your code on Cloud Functions. Use a Pub/Sub trigger to invoke the Cloud Function. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it. C. Deploy the application on Google Kubernetes Engine. Use the Pub/Sub API to create a pull subscription to the Pub/Sub topic and read messages from it. D. Deploy your code on Cloud Functions. Use a Pub/Sub trigger to handle new messages in the topic.
You need to containerize a web application that will be hosted on Google Cloud behind a global load balancer with SSL certificates. You don’t have the time to develop authentication at the application level, and you want to offload SSL encryption and management from your application. You want to configure the architecture using managed services where possible. What should you do? A. Host the application on Google Kubernetes Engine, and deploy an NGINX Ingress Controller to handle authentication. B. Host the application on Google Kubernetes Engine, and deploy cert-manager to manage SSL certificates. C. Host the application on Compute Engine, and configure Cloud Endpoints for your application. D. Host the application on Google Kubernetes Engine, and use Identity-Aware Proxy (IAP) with Cloud Load Balancing and Google-managed certificates.
You manage a system that runs on stateless Compute Engine VMs and Cloud Run instances. Cloud Run is connected to a VPC, and the ingress setting is set to Internal. You want to schedule tasks on Cloud Run. You create a service account and grant it the roles/run.invoker Identity and Access Management (IAM) role. When you create a schedule and test it, a 403 Permission Denied error is returned in Cloud Logging. What should you do? A. Grant the service account the roles/run.developer IAM role. B. Configure a cron job on the Compute Engine VMs to trigger Cloud Run on schedule. C. Change the Cloud Run ingress setting to 'Internal and Cloud Load Balancing.' D. Use Cloud Scheduler with Pub/Sub to invoke Cloud Run.
You work on an application that relies on Cloud Spanner as its main datastore. New application features have occasionally caused performance regressions. You want to prevent performance issues by running an automated performance test with Cloud Build for each commit made. If multiple commits are made at the same time, the tests might run concurrently. What should you do? A. Create a new project with a random name for every build. Load the required data. Delete the project after the test is run. B. Create a new Cloud Spanner instance for every build. Load the required data. Delete the Cloud Spanner instance after the test is run. C. Create a project with a Cloud Spanner instance and the required data. Adjust the Cloud Build build file to automatically restore the data to its previous state after the test is run. D. Start the Cloud Spanner emulator locally. Load the required data. Shut down the emulator after the test is run.
Your company's security team uses Identity and Access Management (IAM) to track which users have access to which resources. You need to create a version control system that can integrate with your security team's processes. You want your solution to support fast release cycles and frequent merges to your main branch to minimize merge conflicts. What should you do? A. Create a Cloud Source Repositories repository, and use trunk-based development. B. Create a Cloud Source Repositories repository, and use feature-based development. C. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use trunk-based development. D. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use feature-based development.
Your ecommerce application receives external requests and forwards them to third-party API services for credit card processing, shipping, and inventory management as shown in the diagram. Your customers are reporting that your application is running slowly at unpredictable times. The application doesn’t report any metrics. You need to determine the cause of the inconsistent performance. What should you do? A. Install the OpenTelemetry library for your respective language, and instrument your application. B. Install the Ops Agent inside your container and configure it to gather application metrics. C. Modify your application to read and forward the X-Cloud-Trace-Context header when it calls the downstream services. D. Enable Managed Service for Prometheus on the Google Kubernetes Engine cluster to gather application metrics.
You are using Cloud Run to host a global ecommerce web application. Your company’s design team is creating a new color scheme for the web app. You have been tasked with determining whether the new color scheme will increase sales. You want to conduct testing on live production traffic. How should you design the study? A. Use an external HTTP(S) load balancer to route a predetermined percentage of traffic to two different color schemes of your application. Analyze the results to determine whether there is a statistically significant difference in sales. B. Use an external HTTP(S) load balancer to route traffic to the original color scheme while the new deployment is created and tested. After testing is complete, reroute all traffic to the new color scheme. Analyze the results to determine whether there is a statistically significant difference in sales. C. Use an external HTTP(S) load balancer to mirror traffic to the new version of your application. Analyze the results to determine whether there is a statistically significant difference in sales. D. Enable a feature flag that displays the new color scheme to half of all users. Monitor sales to see whether they increase for this group of users.
Your website is deployed on Compute Engine. Your marketing team wants to test conversion rates between 3 different website designs. Which approach should you use? A. Deploy the website on App Engine and use traffic splitting. B. Deploy the website on App Engine as three separate services. C. Deploy the website on Cloud Functions and use traffic splitting. D. Deploy the website on Cloud Functions as three separate functions.
You have an application running in a production Google Kubernetes Engine (GKE) cluster. You use Cloud Deploy to automatically deploy your application to your production GKE cluster. As part of your development process, you are planning to make frequent changes to the application’s source code and need to select the tools to test the changes before pushing them to your remote source code repository. Your toolset must meet the following requirements: • Test frequent local changes automatically. • Local deployment emulates production deployment. Which tools should you use to test building and running a container on your laptop using minimal resources? A. Docker Compose and dockerd B. Terraform and kubeadm C. Minikube and Skaffold D. kaniko and Tekton.
You want to upload files from an on-premises virtual machine to Google Cloud Storage as part of a data migration. These files will be consumed by a Cloud Dataproc Hadoop cluster in a GCP environment. Which command should you use? A. gsutil cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/ B. gcloud cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/ C. hadoop fs cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/ D. gcloud dataproc cp [LOCAL_OBJECT] gs://[DESTINATION_BUCKET_NAME]/.
You are using Cloud Build to promote a Docker image to Development, Test, and Production environments. You need to ensure that the same Docker image is deployed to each of these environments. How should you identify the Docker image in your build? A. Use the latest Docker image tag. B. Use a unique Docker image name. C. Use the digest of the Docker image. D. Use a semantic version Docker image tag.
Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that is resilient and requires the fewest number of steps. You also want to be able to run probe-based health checks on the legacy service on a separate port. How should you set up the connection? (Choose two.) A. Use Traffic Director with a sidecar proxy to connect the application to the service. B. Set up a proxyless Traffic Director configuration for the application. C. Configure the legacy service's firewall to allow health checks originating from the sidecar proxy. D. Configure the legacy service's firewall to allow health checks originating from the application. E. Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.
You manage a microservice-based ecommerce platform on Google Cloud that sends confirmation emails to a third-party email service provider using a Cloud Function. Your company just launched a marketing campaign, and some customers are reporting that they have not received order confirmation emails. You discover that the services triggering the Cloud Function are receiving HTTP 500 errors. You need to change the way emails are handled to minimize email loss. What should you do? A. Increase the Cloud Function's timeout to nine minutes. B. Configure the sender application to publish the outgoing emails in a message to a Pub/Sub topic. Update the Cloud Function configuration to consume the Pub/Sub queue. C. Configure the sender application to write emails to Memorystore and then trigger the Cloud Function. When the function is triggered, it reads the email details from Memorystore and sends them to the email service. D. Configure the sender application to retry the execution of the Cloud Function every one second if a request fails.
You migrated your applications to Google Cloud Platform and kept your existing monitoring platform. You now find that your notification system is too slow for time-critical problems. What should you do? A. Replace your entire monitoring platform with Stackdriver. B. Install the Stackdriver agents on your Compute Engine instances. C. Use Stackdriver to capture and alert on logs, then ship them to your existing platform. D. Migrate some traffic back to your old platform and perform A/B testing on the two platforms concurrently.
You are planning to migrate a MySQL database to the managed Cloud SQL database for Google Cloud. You have Compute Engine virtual machine instances that will connect with this Cloud SQL instance. You do not want to whitelist IPs for the Compute Engine instances to be able to access Cloud SQL. What should you do? A. Enable private IP for the Cloud SQL instance. B. Whitelist a project to access Cloud SQL, and add Compute Engine instances in the whitelisted project. C. Create a role in Cloud SQL that allows access to the database from external instances, and assign the Compute Engine instances to that role. D. Create a Cloud SQL instance in one project. Create Compute Engine instances in a different project. Create a VPN between these two projects to allow internal access to Cloud SQL.
Health checks to port 80 on the Compute Engine virtual machine instance are failing and no traffic is sent to your instances. You want to resolve the problem. Which commands should you run? A. gcloud compute instances add-access-config ${NAME}-backend-instance-1 B. gcloud compute instances add-tags ${NAME}-backend-instance-1 --tags http-server C. gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges 130.211.0.0/22,35.191.0.0/16 --direction INGRESS D. gcloud compute firewall-rules create allow-lb --network load-balancer --allow tcp --source-ranges 130.211.0.0/22,35.191.0.0/16 --direction EGRESS.
Your team develops services that run on Google Kubernetes Engine. You need to standardize their log data using Google-recommended practices and make the data more useful in the fewest number of steps. What should you do? (Choose two.) A. Create aggregated exports on application logs to BigQuery to facilitate log analytics. B. Create aggregated exports on application logs to Cloud Storage to facilitate log analytics. C. Mandate the use of the Logging API in the application code to write structured logs to Cloud Logging. D. Write log output to standard output (stdout) as single-line JSON to be ingested into Cloud Logging as structured logs. E. Mandate the use of the Pub/Sub API to write structured data to Pub/Sub and create a Dataflow streaming pipeline to normalize logs and write them to BigQuery for analytics.
HipLocal's .NET-based auth service fails under intermittent load. What should they do? A. Use App Engine for autoscaling. B. Use Cloud Functions for autoscaling. C. Use a Compute Engine cluster for the service. D. Use a dedicated Compute Engine virtual machine instance for the service.
You are designing a resource-sharing policy for applications used by different teams in a Google Kubernetes Engine cluster. You need to ensure that all applications can access the resources needed to run. What should you do? (Choose two.) A. Specify the resource limits and requests in the object specifications. B. Create a namespace for each team, and attach resource quotas to each namespace. C. Create a LimitRange to specify the default compute resource requirements for each namespace. D. Create a Kubernetes service account (KSA) for each application, and assign each KSA to the namespace. E. Use the Anthos Policy Controller to enforce label annotations on all namespaces. F. Use taints and tolerations to allow resource sharing for namespaces.
Your team develops stateless services that run on Google Kubernetes Engine (GKE). You need to deploy a new service that will only be accessed by other services running in the GKE cluster. The service will need to scale as quickly as possible to respond to changing load. What should you do? A. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service. B. Use a Vertical Pod Autoscaler to scale the containers, and expose them via a NodePort Service. C. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a ClusterIP Service. D. Use a Horizontal Pod Autoscaler to scale the containers, and expose them via a NodePort Service.
You are creating a web application that runs in a Compute Engine instance and writes a file to any user's Google Drive. You need to configure the application to authenticate to the Google Drive API. What should you do? A. Use an OAuth Client ID that uses the https://www.googleapis.com/auth/drive.file scope to obtain an access token for each user. B. Use an OAuth Client ID with delegated domain-wide authority. C. Use the App Engine service account and https://www.googleapis.com/auth/drive.file scope to generate a signed JSON Web Token (JWT). D. Use the App Engine service account with delegated domain-wide authority.
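On option A: writing to a user's own Drive requires that user's consent, so the application runs a standard OAuth 2.0 flow scoped to drive.file and acts with the resulting per-user token. A sketch using google-auth-oauthlib's installed-app flow for brevity (a web app would use the equivalent web server flow); the client secrets path is a placeholder:

    from google_auth_oauthlib.flow import InstalledAppFlow

    SCOPES = ["https://www.googleapis.com/auth/drive.file"]

    # client_secret.json is the downloaded OAuth Client ID; placeholder path.
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    creds = flow.run_local_server(port=0)  # the user grants access in a browser

    # creds now authorizes Drive API calls for files the app creates for this user.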
come? A. Collection -> document B. Collection -> collection -> document C. Collection -> document -> collection -> document D. Collection -> collection -> document -> document.
The new version of your containerized application has been tested and is ready to deploy to production on Google Kubernetes Engine. You were not able to fully load-test the new version in pre-production environments, and you need to make sure that it does not have performance problems once deployed. Your deployment must be automated. What should you do? A. Use Cloud Load Balancing to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues. B. Deploy the application via a continuous delivery pipeline using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as the metrics support it. C. Deploy the application via a continuous delivery pipeline using blue/green deployments. Use Cloud Monitoring to look for performance issues, and launch fully when the metrics support it. D. Deploy the application using kubectl and set the spec.updateStrategy.type to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollback command if there are any issues.
You are developing a new application that has the following design requirements: • Creation and changes to the application infrastructure are versioned and auditable. • The application and deployment infrastructure uses Google-managed services as much as possible. • The application runs on a serverless compute platform. How should you design the application's architecture? A. 1. Store the application and infrastructure source code in a Git repository. 2. Use Cloud Build to deploy the application infrastructure with Terraform. 3. Deploy the application to a Cloud Function as a pipeline step. B. 1. Deploy Jenkins from the Google Cloud Marketplace, and define a continuous integration pipeline in Jenkins. 2. Configure a pipeline step to pull the application source code from a Git repository. 3. Deploy the application source code to App Engine as a pipeline step. C. 1. Create a continuous integration pipeline on Cloud Build, and configure the pipeline to deploy the application infrastructure using Deployment Manager templates. 2. Configure a pipeline step to create a container with the latest application source code. 3. Deploy the container to a Compute Engine instance as a pipeline step. D. 1. Deploy the application infrastructure using gcloud commands. 2. Use Cloud Build to define a continuous integration pipeline for changes to the application source code. 3. Configure a pipeline step to pull the application source code from a Git repository, and create a containerized application. 4. Deploy the new container on Cloud Run as a pipeline step.
You have an application controlled by a managed instance group. When you deploy a new version of the application, costs should be minimized and the number of instances should not increase. You want to ensure that, when each new instance is created, the deployment only continues if the new instance is healthy. What should you do? A. Perform a rolling-action with maxSurge set to 1, maxUnavailable set to 0. B. Perform a rolling-action with maxSurge set to 0, maxUnavailable set to 1. C. Perform a rolling-action with maxHealthy set to 1, maxUnhealthy set to 0. D. Perform a rolling-action with maxHealthy set to 0, maxUnhealthy set to 1.
You are designing an application that will subscribe to and receive messages from a single Pub/Sub topic and insert corresponding rows into a database. Your application runs on Linux and leverages preemptible virtual machines to reduce costs. You need to create a shutdown script that will initiate a graceful shutdown. What should you do? A. Write a shutdown script that uses inter-process signals to notify the application process to disconnect from the database. B. Write a shutdown script that broadcasts a message to all signed-in users that the Compute Engine instance is going down and instructs them to save current work and sign out. C. Write a shutdown script that writes a file in a location that is being polled by the application once every five minutes. After the file is read, the application disconnects from the database. D. Write a shutdown script that publishes a message to the Pub/Sub topic announcing that a shutdown is in progress. After the application reads the message, it disconnects from the database.
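On option A: a Compute Engine shutdown script runs when preemption starts, and the cleanest handoff is an inter-process signal that the application traps so it can finish in-flight messages and disconnect. A sketch of the application side in Python; the cleanup body is a hypothetical stand-in for closing your subscriber and database connection:

    import signal
    import sys

    def shutdown_gracefully():
        # Hypothetical cleanup: stop pulling messages, flush pending acks,
        # and close the database connection pool.
        pass

    def handle_sigterm(signum, frame):
        shutdown_gracefully()
        sys.exit(0)

    # The shutdown script delivers the signal, e.g. `kill -TERM <app-pid>`.
    signal.signal(signal.SIGTERM, handle_sigterm)
    signal.pause()  # keep the process alive, waiting for work or signals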
You are developing an application that will be launched on Compute Engine instances into multiple distinct projects, each corresponding to the environments in your software development process (development, QA, staging, and production). The instances in each project have the same application code but a different configuration. During deployment, each instance should receive the application's configuration based on the environment it serves. You want to minimize the number of steps to configure this flow. What should you do? A. When creating your instances, configure a startup script using the gcloud command to determine the project name that indicates the correct environment. B. In each project, configure a metadata key "environment" whose value is the environment it serves. Use your deployment tool to query the instance metadata and configure the application based on the "environment" value. C. Deploy your chosen deployment tool on an instance in each project. Use a deployment job to retrieve the appropriate configuration file from your version control system, and apply the configuration when deploying the application on each instance. D. During each instance launch, configure an instance custom-metadata key named "environment" whose value is the environment the instance serves. Use your deployment tool to query the instance metadata, and configure the application based on the "environment" value.
You are developing an application that reads credit card data from a Pub/Sub subscription. You have written code and completed unit testing. You need to test the Pub/Sub integration before deploying to Google Cloud. What should you do? A. Create a service to publish messages, and deploy the Pub/Sub emulator. Generate random content in the publishing service, and publish to the emulator. B. Create a service to publish messages to your application. Collect the messages from Pub/Sub in production, and replay them through the publishing service. C. Create a service to publish messages, and deploy the Pub/Sub emulator. Collect the messages from Pub/Sub in production, and publish them to the emulator. D. Create a service to publish messages, and deploy the Pub/Sub emulator. Publish a standard set of testing messages from the publishing service to the emulator.
You are developing an application that will allow users to read and post comments on news articles. You want to configure your application to store and display user-submitted comments using Firestore. How should you design the schema to support an unknown number of comments and articles? A. Store each comment in a subcollection of the article. B. Add each comment to an array property on the article. C. Store each comment in a document, and add the comment's key to an array property on the article. D. Store each comment in a document, and add the comment's key to an array property on the user profile.
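For the subcollection option (A): modeling comments as a subcollection under each article document scales to any number of comments, because subcollections are fetched independently of the parent document. A sketch with the Python Firestore client; collection, document, and field names are illustrative:

    from google.cloud import firestore

    db = firestore.Client()

    article_ref = db.collection("articles").document("article-123")
    article_ref.collection("comments").add({
        "author": "reader42",
        "text": "Great article!",
        "created": firestore.SERVER_TIMESTAMP,
    })

    # Paginated reads of comments never touch the parent article document.
    recent = article_ref.collection("comments").order_by("created").limit(20).stream()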
You have written a Cloud Function that accesses other Google Cloud resources. You want to secure the environment using the principle of least privilege. What should you do? A. Create a new service account that has Editor authority to access the resources. The deployer is given permission to get the access token. B. Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to get the access token. C. Create a new service account that has Editor authority to access the resources. The deployer is given permission to act as the new service account. D. Create a new service account that has a custom IAM role to access the resources. The deployer is given permission to act as the new service account.
You are developing a web application that will be accessible over both HTTP and HTTPS and will run on Compute Engine instances. On occasion, you will need to SSH from your remote laptop into one of the Compute Engine instances to conduct maintenance on the app. How should you configure the instances while following Google-recommended best practices? A. Set up a backend with Compute Engine web server instances with a private IP address behind a TCP proxy load balancer. B. Configure the firewall rules to allow all ingress traffic to connect to the Compute Engine web servers, with each server having a unique external IP address. C. Configure Cloud Identity-Aware Proxy API for SSH access. Then configure the Compute Engine servers with private IP addresses behind an HTTP(S) load balancer for the application web traffic. D. Set up a backend with Compute Engine web server instances with a private IP address behind an HTTP(S) load balancer. Set up a bastion host with a public IP address and open firewall ports. Connect to the web instances using the bastion host.
Your team develops services that run on Google Cloud. You need to build a data processing service and will use Cloud Functions. The data to be processed by the function is sensitive. You need to ensure that invocations can only happen from authorized services and follow Google-recommended best practices for securing functions. What should you do? A. Enable Identity-Aware Proxy in your project. Secure function access using its permissions. B. Create a service account with the Cloud Functions Viewer role. Use that service account to invoke the function. C. Create a service account with the Cloud Functions Invoker role. Use that service account to invoke the function. D. Create an OAuth 2.0 client ID for your calling service in the same project as the function you want to secure. Use those credentials to invoke the function.
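Following the Invoker-role option (C): a calling service authenticates by minting an ID token whose audience is the function's URL and attaching it as a bearer token. A sketch using google-auth with Application Default Credentials; the function URL is a placeholder:

    import requests
    import google.auth.transport.requests
    import google.oauth2.id_token

    # Placeholder URL of a function secured with roles/cloudfunctions.invoker.
    url = "https://us-central1-my-project.cloudfunctions.net/process-data"

    # Mint an ID token for the function using the caller's own service
    # account credentials (Application Default Credentials).
    auth_req = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_req, url)

    resp = requests.post(url, json={"job": "nightly"},
                         headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()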
You work for a web development team at a small startup. Your team is developing a Node.js application using Google Cloud services, including Cloud Storage and Cloud Build. The team uses a Git repository for version control. Your manager calls you over the weekend and instructs you to make an emergency update to one of the company's websites, and you're the only developer available. You need to access Google Cloud to make the update, but you don't have your work laptop. You are not allowed to store source code locally on a non-corporate computer. How should you set up your developer environment? A. Use a text editor and the Git command line to send your source code updates as pull requests from a public computer. B. Use a text editor and the Git command line to send your source code updates as pull requests from a virtual machine running on a public computer. C. Use Cloud Shell and the built-in code editor for development. Send your source code updates as pull requests. D. Use a Cloud Storage bucket to store the source code that you need to edit. Mount the bucket to a public computer as a drive, and use a code editor to update the code. Turn on versioning for the bucket, and point it to the team's Git repository.
You are designing a deployment technique for your new applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for both new and existing applications. You need to test against the full production load prior to launch. What should you do? A. Use canary deployment. B. Use blue/green deployment. C. Use rolling updates deployment. D. Use A/B testing with traffic mirroring during deployment.
You support an application that uses the Cloud Storage API. You review the logs and discover multiple HTTP 503 Service Unavailable error responses from the API. Your application logs the error and does not take any further action. You want to implement Google-recommended retry logic to improve success rates. Which approach should you take? A. Retry the failures in batch after a set number of failures is logged. B. Retry each failure at a set time interval up to a maximum number of times. C. Retry each failure at increasing time intervals up to a maximum number of tries. D. Retry each failure at decreasing time intervals up to a maximum number of tries.
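Retrying at increasing intervals up to a maximum number of tries is truncated exponential backoff with jitter, which is what Google recommends for transient errors such as HTTP 503. A generic sketch; the exception class is a stand-in for whatever retryable error your client raises:

    import random
    import time

    class TransientError(Exception):
        """Stand-in for a retryable failure such as HTTP 503."""

    def call_with_backoff(fn, max_tries=5, base=1.0, cap=32.0):
        for attempt in range(max_tries):
            try:
                return fn()
            except TransientError:
                if attempt == max_tries - 1:
                    raise  # out of retries; surface the failure
                # Exponential delays (1s, 2s, 4s, ...) capped, plus random jitter.
                time.sleep(min(cap, base * 2 ** attempt) + random.random())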
You need to redesign the ingestion of audit events from your authentication service to allow it to handle a large increase in traffic. Currently, the audit service and the authentication system run in the same Compute Engine virtual machine. You plan to use the following Google Cloud tools in the new architecture: • Multiple Compute Engine machines, each running an instance of the authentication service • Multiple Compute Engine machines, each running an instance of the audit service • Pub/Sub to send the events from the authentication services. How should you set up the topics and subscriptions to ensure that the system can handle a large volume of messages and can scale efficiently? A. Create one Pub/Sub topic. Create one pull subscription to allow the audit services to share the messages. B. Create one Pub/Sub topic. Create one pull subscription per audit service instance to allow the services to share the messages. C. Create one Pub/Sub topic. Create one push subscription with the endpoint pointing to a load balancer in front of the audit services. D. Create one Pub/Sub topic per authentication service. Create one pull subscription per topic to be used by one audit service. E. Create one Pub/Sub topic per authentication service. Create one push subscription per topic, with the endpoint pointing to one audit service.
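With one topic and one shared pull subscription (option A), each audit-service instance attaches a streaming pull to the same subscription and Pub/Sub load-balances messages across them. A sketch of one consumer; the project, subscription, and processing function are placeholders:

    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    # Every audit instance pulls from this same shared subscription.
    subscription_path = subscriber.subscription_path("my-project", "audit-events")

    def callback(message):
        record_audit_event(message.data)  # hypothetical processing function
        message.ack()

    streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
    streaming_pull.result()  # block; messages are dispatched to the callback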
You are developing a marquee stateless web application that will run on Google Cloud. The rate of the incoming user traffic is expected to be unpredictable, with no traffic on some days and large spikes on other days. You need the application to automatically scale up and down, and you need to minimize the cost associated with running the application. What should you do? A. Build the application in Python with Firestore as the database. Deploy the application to Cloud Run. B. Build the application in C# with Firestore as the database. Deploy the application to App Engine flexible environment. C. Build the application in Python with Cloud SQL as the database. Deploy the application to App Engine standard environment. D. Build the application in Python with Firestore as the database. Deploy the application to a Compute Engine managed instance group with autoscaling.
Your team is developing a Cloud Function triggered by Cloud Storage events. You want to accelerate testing and development of your Cloud Function while following Google-recommended best practices. What should you do? A. Create a new Cloud Function that is triggered when Cloud Audit Logs detects the cloudfunctions.functions.sourceCodeSet operation in the original Cloud Function. Send mock requests to the new function to evaluate the functionality. B. Make a copy of the Cloud Function, and rewrite the code to be HTTP-triggered. Edit and test the new version by triggering the HTTP endpoint. Send mock requests to the new function to evaluate the functionality. C. Install the Functions Framework library, and configure the Cloud Function on localhost. Make a copy of the function, and make edits to the new version. Test the new version using curl. D. Make a copy of the Cloud Function in the Google Cloud console. Use the Cloud console's in-line editor to make source code changes to the new function. Modify your web application to call the new function, and test the new version in production.
You are developing a microservice-based application that will be deployed on a Google Kubernetes Engine cluster. The application needs to read and write to a Spanner database. You want to follow security best practices while minimizing code changes. How should you configure your application to retrieve Spanner credentials? A. Configure the appropriate service accounts, and use Workload Identity to run the pods. B. Store the application credentials as Kubernetes Secrets, and expose them as environment variables. C. Configure the appropriate routing rules, and use a VPC-native cluster to directly connect to the database. D. Store the application credentials using Cloud Key Management Service, and retrieve them whenever a database connection is made.
You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to invoke the backend services? A. Traffic Director B. Service Directory C. Anthos Service Mesh D. Internal HTTP(S) Load Balancing.
HipLocal: In order to meet their business requirements, how should HipLocal store their application state? A. Use local SSDs to store state. B. Put a memcache layer in front of MySQL. C. Move the state storage to Cloud Spanner. D. Replace the MySQL instance with Cloud SQL.
You are writing data from a Go application to a Cloud Spanner database. You want to optimize your application’s performance using Google-recommended best practices. What should you do? A. Write to Cloud Spanner using a third-party HTTP client library. B. Write to Cloud Spanner using a custom gRPC client library. C. Write to Cloud Spanner using Google API Client Libraries. D. Write to Cloud Spanner using Cloud Client Libraries.
You are running a containerized application on Google Kubernetes Engine. Your container images are stored in Container Registry. Your team uses CI/CD practices. You need to prevent the deployment of containers with known critical vulnerabilities. What should you do? A. • Use Web Security Scanner to automatically crawl your application • Review your application logs for scan results, and provide an attestation that the container is free of known critical vulnerabilities • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed B. • Use Web Security Scanner to automatically crawl your application • Review the scan results in the scan details page in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed C. • Enable the Container Scanning API to perform vulnerability scanning • Review vulnerability reporting in Container Registry in the Cloud Console, and provide an attestation that the container is free of known critical vulnerabilities • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed D. • Enable the Container Scanning API to perform vulnerability scanning • Programmatically review vulnerability reporting through the Container Scanning API, and provide an attestation that the container is free of known critical vulnerabilities • Use Binary Authorization to implement a policy that forces the attestation to be provided before the container is deployed.
You need to build a public API that authenticates, enforces quotas, and reports metrics for API callers. Which tool should you use to complete this architecture? A. App Engine B. Cloud Endpoints C. Identity-Aware Proxy D. GKE Ingress for HTTP(S) Load Balancing.
You are building an API that will be used by Android and iOS apps. The API must: • Support HTTPS • Minimize bandwidth cost • Integrate easily with mobile apps Which API architecture should you use? A. RESTful APIs B. MQTT for APIs C. gRPC-based APIs D. SOAP-based APIs.
You are parsing a log file that contains three columns: a timestamp, an account number (a string), and a transaction amount (a number). You want to calculate the sum of all transaction amounts for each unique account number efficiently. Which data structure should you use? A. A linked list B. A hash table C. A two-dimensional array D. A comma-delimited string.
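A hash table gives amortized O(1) lookup per log line, so the whole aggregation is a single O(n) pass. A sketch in Python, where dict is the hash table and the column layout follows the question:

    from collections import defaultdict

    def sum_by_account(lines):
        totals = defaultdict(float)  # account number -> running sum
        for line in lines:
            timestamp, account, amount = line.split(",")
            totals[account] += float(amount)
        return totals

    # Example: two rows for account "42" sum to 12.5.
    print(sum_by_account(["2023-12-23T10:00,42,10.0", "2023-12-23T10:01,42,2.5"]))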
Your application is controlled by a managed instance group. You want to share a large read-only data set between all the instances in the managed instance group. You want to ensure that each instance can start quickly and can access the data set via its filesystem with very low latency. You also want to minimize the total cost of the solution. What should you do? A. Move the data to a Cloud Storage bucket, and mount the bucket on the filesystem using Cloud Storage FUSE. B. Move the data to a Cloud Storage bucket, and copy the data to the boot disk of the instance via a startup script. C. Move the data to a Compute Engine persistent disk, and attach the disk in read-only mode to multiple Compute Engine virtual machine instances. D. Move the data to a Compute Engine persistent disk, take a snapshot, create multiple disks from the snapshot, and attach each disk to its own instance.
Your App Engine standard configuration is as follows: service: production instance_class: B1 You want to limit the application to 5 instances. Which code snippet should you include in your configuration? A. manual_scaling: instances: 5 min_pending_latency: 30ms B. manual_scaling: max_instances: 5 idle_timeout: 10m C. basic_scaling: instances: 5 min_pending_latency: 30ms D. basic_scaling: max_instances: 5 idle_timeout: 10m.
You are developing an application using different microservices that must remain internal to the cluster. You want the ability to configure each microservice with a specific number of replicas. You also want the ability to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You plan to implement this solution on Google Kubernetes Engine. What should you do? A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster. B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster. C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster. D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address to address the Pod from other microservices within the cluster.
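For the Deployment-plus-Service option (A): inside the cluster, kube-dns resolves <service>.<namespace>.svc.cluster.local to the Service's stable cluster IP, so callers never track individual replicas. A sketch of one microservice calling another; the service name, namespace, and path are placeholders:

    import requests

    # Stable in-cluster DNS name for a hypothetical "inventory" Service
    # in the "default" namespace.
    SERVICE_URL = "http://inventory.default.svc.cluster.local/api/items"

    def list_items():
        # The Service load-balances across however many replicas exist.
        resp = requests.get(SERVICE_URL, timeout=2)
        resp.raise_for_status()
        return resp.json()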
Your team is responsible for maintaining an application that aggregates news articles from many different sources. Your monitoring dashboard contains publicly accessible real-time reports and runs on a Compute Engine instance as a web application. External stakeholders and analysts need to access these reports via a secure channel without authentication. How should you configure this secure channel? A. Add a public IP address to the instance. Use the service account key of the instance to encrypt the traffic. B. Use Cloud Scheduler to trigger Cloud Build every hour to create an export from the reports. Store the reports in a public Cloud Storage bucket. C. Add an HTTP(S) load balancer in front of the monitoring dashboard. Configure Identity-Aware Proxy to secure the communication channel. D. Add an HTTP(S) load balancer in front of the monitoring dashboard. Set up a Google-managed SSL certificate on the load balancer for traffic encryption.
You are writing a single-page web application with a user-interface that communicates with a third-party API for content using XMLHttpRequest. The data displayed on the UI by the API results is less critical than other data displayed on the same web page, so it is acceptable for some requests to not have the API data displayed in the UI. However, calls made to the API should not delay rendering of other parts of the user interface. You want your application to perform well when the API response is an error or a timeout. What should you do? A. Set the asynchronous option for your requests to the API to false and omit the widget displaying the API results when a timeout or error is encountered. B. Set the asynchronous option for your request to the API to true and omit the widget displaying the API results when a timeout or error is encountered. C. Catch timeout or error exceptions from the API call and keep trying with exponential backoff until the API response is successful. D. Catch timeout or error exceptions from the API call and display the error response in the UI widget.
You have an on-premises application that authenticates to the Cloud Storage API using a user-managed service account with a user-managed key. The application connects to Cloud Storage using Private Google Access over a Dedicated Interconnect link. You discover that requests from the application to access objects in the Cloud Storage bucket are failing with a 403 Permission Denied error code. What is the likely cause of this issue? The folder structure inside the bucket and object paths have changed. The permissions of the service account’s predefined role have changed. The service account key has been rotated but not updated on the application server. The Interconnect link from the on-premises data center to Google Cloud is experiencing a temporary outage.
HipLocal wants to improve the resilience of their MySQL deployment, while also meeting their business and technical requirements. Which configuration should they choose? Use the current single instance MySQL on Compute Engine and several read-only MySQL servers on Compute Engine. Use the current single instance MySQL on Compute Engine, and replicate the data to Cloud SQL in an external master configuration. Replace the current single instance MySQL instance with Cloud SQL, and configure high availability. Replace the current single instance MySQL instance with Cloud SQL, and Google provides redundancy without further configuration.
You have an application deployed in production. When a new version is deployed, some issues don't arise until the application receives traffic from users in production. You want to reduce both the impact and the number of users affected. Which deployment strategy should you use? Blue/green deployment Canary deployment Rolling deployment Recreate deployment.
Your application takes an input from a user and publishes it to the user's contacts. This input is stored in a table in Cloud Spanner. Your application is more sensitive to latency and less sensitive to consistency. How should you perform reads from Cloud Spanner for this application? Perform Read-Only transactions. Perform stale reads using single-read methods. Perform strong reads using single-read methods. Perform stale reads using read-write transactions.
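Note: a stale read served from a bounded-staleness snapshot avoids the wait for the most recent writes that a strong read requires, which lowers latency when slightly old data is acceptable. A minimal Python sketch (instance and database names hypothetical):

import datetime

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("contacts-db")

# Single-use, read-only snapshot at most 15 seconds stale.
with database.snapshot(exact_staleness=datetime.timedelta(seconds=15)) as snapshot:
    rows = snapshot.execute_sql("SELECT ContactName FROM Contacts")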
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. The application exposes an HTTP-based health check at /healthz. You want to use this health check endpoint to determine whether traffic should be routed to the pod by the load balancer. Which code snippet should you include in your Pod configuration? A B C D.
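Note: the answer options (A-D) are code-snippet images that are not reproduced here. For reference, an HTTP readiness probe, the mechanism that gates whether a Pod receives load balancer traffic, generally looks like this sketch in the container spec:

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10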
You have a container deployed on Google Kubernetes Engine. The container can sometimes be slow to launch, so you have implemented a liveness probe. You notice that the liveness probe occasionally fails on launch. What should you do? Add a startup probe. Increase the initial delay for the liveness probe. Increase the CPU limit for the container. Add a readiness probe.
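Note: a startup probe holds off liveness checking until the container reports it has started, so a slow launch is not mistaken for a dead container. A sketch:

startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 10
  failureThreshold: 30   # up to 30 x 10s = 300s to start before liveness checks begin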
You are developing a JPEG image-resizing API hosted on Google Kubernetes Engine (GKE). Callers of the service will exist within the same GKE cluster. You want clients to be able to get the IP address of the service. What should you do? Define a GKE Service. Clients should use the name of the A record in Cloud DNS to find the service's cluster IP address. Define a GKE Service. Clients should use the service name in the URL to connect to the service. Define a GKE Endpoint. Clients should get the endpoint name from the appropriate environment variable in the client container. Define a GKE Endpoint. Clients should get the endpoint name from Cloud DNS.
Your web application is deployed to the corporate intranet. You need to migrate the web application to Google Cloud. The web application must be available only to company employees and accessible to employees as they travel. You need to ensure the security and accessibility of the web application while minimizing application changes. What should you do? Configure the application to check authentication credentials for each HTTP(S) request to the application. Configure Identity-Aware Proxy to allow employees to access the application through its public IP address. Configure a Compute Engine instance that requests users to log in to their corporate account. Change the web application DNS to point to the proxy Compute Engine instance. After authenticating, the Compute Engine instance forwards requests to and from the web application. Configure a Compute Engine instance that requests users to log in to their corporate account. Change the web application DNS to point to the proxy Compute Engine instance. After authenticating, the Compute Engine issues an HTTP redirect to a public IP address hosting the web application.
You are deploying your application to a Compute Engine virtual machine instance. Your application is configured to write its log files to disk. You want to view the logs in Stackdriver Logging without changing the application code. What should you do? Install the Stackdriver Logging Agent and configure it to send the application logs. Use a Stackdriver Logging Library to log directly from the application to Stackdriver Logging. Provide the log file folder path in the metadata of the instance to configure it to send the application logs. Change the application to log to /var/log so that its logs are automatically sent to Stackdriver Logging.
Your service adds text to images that it reads from Cloud Storage. During busy times of the year, requests to Cloud Storage fail with an HTTP 429 "Too Many Requests" status code. How should you handle this error? Add a cache-control header to the objects. Request a quota increase from the GCP Console. Retry the request with a truncated exponential backoff strategy. Change the storage class of the Cloud Storage bucket to Multi-regional.
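Note: HTTP 429 is a retryable error, and truncated exponential backoff with jitter is the retry strategy Google documents for Cloud Storage. A minimal Python sketch (bucket and object names hypothetical):

import random
import time

from google.api_core import exceptions
from google.cloud import storage

def read_with_backoff(bucket_name: str, blob_name: str, max_attempts: int = 6) -> bytes:
    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    for attempt in range(max_attempts):
        try:
            return blob.download_as_bytes()
        except exceptions.TooManyRequests:
            # Wait 2^attempt seconds plus jitter, truncated at 32 seconds.
            time.sleep(min(2 ** attempt + random.random(), 32.0))
    raise RuntimeError("retries exhausted")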
You are working on a new application that is deployed on Cloud Run and uses Cloud Functions. Each time new features are added, new Cloud Functions and Cloud Run services are deployed. You use ENV variables to keep track of the services and enable interservice communication, but the maintenance of the ENV variables has become difficult. You want to implement dynamic discovery in a scalable way. What should you do? Configure your microservices to use the Cloud Run Admin and Cloud Functions APIs to query for deployed Cloud Run services and Cloud Functions in the Google Cloud project. Create a Service Directory namespace. Use API calls to register the services during deployment, and query during runtime. Rename the Cloud Functions and Cloud Run service endpoints using a well-documented naming convention. Deploy Hashicorp Consul on a single Compute Engine instance. Register the services with Consul during deployment, and query during runtime.
You work for a financial services company that has a container-first approach. Your team develops microservices applications. A Cloud Build pipeline creates the container image, runs regression tests, and publishes the image to Artifact Registry. You need to ensure that only containers that have passed the regression tests are deployed to Google Kubernetes Engine (GKE) clusters. You have already enabled Binary Authorization on the GKE clusters. What should you do next? Create an attestor and a policy. After a container image has successfully passed the regression tests, use Cloud Build to run Kritis Signer to create an attestation for the container image. Deploy Voucher Server and Voucher Client components. After a container image has successfully passed the regression tests, run Voucher Client as a step in the Cloud Build pipeline. Set the Pod Security Standard level to Restricted for the relevant namespaces. Use Cloud Build to digitally sign the container images that have passed the regression tests. Create an attestor and a policy. Create an attestation for the container images that have passed the regression tests as a step in the Cloud Build pipeline.
You are reviewing and updating your Cloud Build steps to adhere to best practices. Currently, your build steps include: 1. Pull the source code from a source repository. 2. Build a container image 3. Upload the built image to Artifact Registry. You need to add a step to perform a vulnerability scan of the built container image, and you want the results of the scan to be available to your deployment pipeline running in Google Cloud. You want to minimize changes that could disrupt other teams’ processes. What should you do? Enable Binary Authorization, and configure it to attest that no vulnerabilities exist in a container image. Upload the built container images to your Docker Hub instance, and scan them for vulnerabilities. Enable the Container Scanning API in Artifact Registry, and scan the built container images for vulnerabilities. Add Artifact Registry to your Aqua Security instance, and scan the built container images for vulnerabilities.
You are developing an online gaming platform as a microservices application on Google Kubernetes Engine (GKE). Users on social media are complaining about long loading times for certain URL requests to the application. You need to investigate performance bottlenecks in the application and identify which HTTP requests have a significantly high latency span in user requests. What should you do? Configure GKE workload metrics using kubectl. Select all Pods to send their metrics to Cloud Monitoring. Create a custom dashboard of application metrics in Cloud Monitoring to determine performance bottlenecks of your GKE cluster. Update your microservices to log HTTP request methods and URL paths to STDOUT. Use the logs router to send container logs to Cloud Logging. Create filters in Cloud Logging to evaluate the latency of user requests across different methods and URL paths. Instrument your microservices by installing the OpenTelemetry tracing package. Update your application code to send traces to Trace for inspection and analysis. Create an analysis report on Trace to analyze user requests. Install tcpdump on your GKE nodes. Run tcpdump to capture network traffic over an extended period of time to collect data. Analyze the data files using Wireshark to determine the cause of high latency.
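Note: tracing is what attributes latency to individual request spans. A minimal sketch of wiring OpenTelemetry to Cloud Trace in Python (assumes the opentelemetry-sdk and opentelemetry-exporter-gcp-trace packages; the span name is hypothetical):

from opentelemetry import trace
from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(CloudTraceSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("render-profile-page"):
    ...  # handler logic; this span's latency becomes visible in Trace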
Your team is creating a serverless web application on Cloud Run. The application needs to access images stored in a private Cloud Storage bucket. You want to give the application Identity and Access Management (IAM) permission to access the images in the bucket, while also securing the services using Google-recommended best practices. What should you do? Enforce signed URLs for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account. Enforce public access prevention for the desired bucket. Grant the Storage Object Viewer IAM role on the bucket to the Compute Engine default service account. Enforce signed URLs for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account. Enforce public access prevention for the desired bucket. Create and update the Cloud Run service to use a user-managed service account. Grant the Storage Object Viewer IAM role on the bucket to the service account.
You are a developer at a large corporation. You manage three Google Kubernetes Engine clusters on Google Cloud. Your team’s developers need to switch from one cluster to another regularly without losing access to their preferred development tools. You want to configure access to these multiple clusters while following Google-recommended best practices. What should you do? Ask the developers to use Cloud Shell and run gcloud container clusters get-credentials to switch to another cluster. In a configuration file, define the clusters, users, and contexts. Share the file with the developers and ask them to use kubectl config to add cluster, user, and context details. Ask the developers to install the gcloud CLI on their workstation and run gcloud container clusters get-credentials to switch to another cluster. Ask the developers to open three terminals on their workstation and use kubectl config to configure access to each cluster.
You have two Google Cloud projects, named Project A and Project B. You need to create a Cloud Function in Project A that saves the output in a Cloud Storage bucket in Project B. You want to follow the principle of least privilege. What should you do?
A. 1. Create a Google service account in Project B. 2. Deploy the Cloud Function with the service account in Project A. 3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
B. 1. Create a Google service account in Project A. 2. Deploy the Cloud Function with the service account in Project A. 3. Assign this service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
C. 1. Determine the default App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) in Project A. 2. Deploy the Cloud Function with the default App Engine service account in Project A. 3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
D. 1. Determine the default App Engine service account (PROJECT_ID@appspot.gserviceaccount.com) in Project B. 2. Deploy the Cloud Function with the default App Engine service account in Project A. 3. Assign the default App Engine service account the roles/storage.objectCreator role on the storage bucket residing in Project B.
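Note: a user-managed service account lives in one project but can be granted roles on resources in another. A sketch of the cross-project setup with gcloud (all names hypothetical):

gcloud iam service-accounts create fn-writer --project=project-a

gcloud storage buckets add-iam-policy-binding gs://project-b-bucket \
    --member="serviceAccount:fn-writer@project-a.iam.gserviceaccount.com" \
    --role="roles/storage.objectCreator"

gcloud functions deploy my-function --project=project-a --region=us-central1 \
    --runtime=python311 --trigger-http \
    --service-account=fn-writer@project-a.iam.gserviceaccount.com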
You are developing an application hosted on Google Cloud that uses a MySQL relational database schema. The application will have a large volume of reads and writes to the database and will require backups and ongoing capacity planning. Your team does not have time to fully manage the database but can take on small administrative tasks. How should you host the database? Configure Cloud SQL to host the database, and import the schema into Cloud SQL. Deploy MySQL from the Google Cloud Marketplace, connect to the database using a client, and import the schema. Configure Bigtable to host the database, and import the data into Bigtable. Configure Cloud Spanner to host the database, and import the schema into Cloud Spanner. Configure Firestore to host the database, and import the data into Firestore.
Your teammate has asked you to review the code below. Its purpose is to efficiently add a large number of small rows to a BigQuery table. Include multiple rows with each request. Write each row to a Cloud Storage object, then load into BigQuery. Write each row to a Cloud Storage object in parallel, then load into BigQuery.
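Note: the reviewed code is an image that is not reproduced here. For reference, batching many small rows into a single streaming-insert request looks like this Python sketch (table name hypothetical):

from google.cloud import bigquery

client = bigquery.Client()
rows = [{"id": i, "value": f"row-{i}"} for i in range(500)]

# One insertAll call carrying many rows instead of one call per row.
errors = client.insert_rows_json("my-project.my_dataset.events", rows)
if errors:
    raise RuntimeError(errors)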
You are developing a new web application using Cloud Run and committing code to Cloud Source Repositories. You want to deploy new code in the most efficient way possible. You have already created a Cloud Build YAML file that builds a container and runs the following command: gcloud run deploy. What should you do next? Create a Pub/Sub topic to be notified when code is pushed to the repository. Create a Pub/Sub trigger that runs the build file when an event is published to the topic. Create a build trigger that runs the build file in response to code being pushed to the repository's development branch. Create a webhook build trigger that runs the build file in response to HTTP POST calls to the webhook URL. Create a Cron job that runs the following command every 24 hours: gcloud builds submit.
You are a developer at a large organization. You are deploying a web application to Google Kubernetes Engine (GKE). The DevOps team has built a CI/CD pipeline that uses Cloud Deploy to deploy the application to Dev, Test, and Prod clusters in GKE. After Cloud Deploy successfully deploys the application to the Dev cluster, you want to automatically promote it to the Test cluster. How should you configure this process following Google-recommended best practices?
A. 1. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic. 2. Configure Cloud Build to include a step that promotes the application to the Test cluster.
B. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster. 2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the cloud-builds topic.
C. 1. Create a Cloud Function that calls the Google Cloud Deploy API to promote the application to the Test cluster. 2. Configure this function to be triggered by SUCCEEDED Pub/Sub messages from the clouddeploy-operations topic.
D. 1. Create a Cloud Build pipeline that uses the gke-deploy builder. 2. Create a Cloud Build trigger that listens for SUCCEEDED Pub/Sub messages from the cloud-builds topic. 3. Configure this pipeline to run a deployment step to the Test cluster.
Your team is setting up a build pipeline for an application that will run in Google Kubernetes Engine (GKE). For security reasons, you only want images produced by the pipeline to be deployed to your GKE cluster. Which combination of Google Cloud services should you use? Cloud Build, Cloud Storage, and Binary Authorization Google Cloud Deploy, Cloud Storage, and Google Cloud Armor Google Cloud Deploy, Artifact Registry, and Google Cloud Armor Cloud Build, Artifact Registry, and Binary Authorization.
You are supporting a business-critical application in production deployed on Cloud Run. The application is reporting HTTP 500 errors that are affecting the usability of the application. You want to be alerted when the number of errors exceeds 15% of the requests within a specific time window. What should you do? Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Scheduler to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold. Navigate to the Cloud Run page in the Google Cloud console, and select the service from the services list. Use the Metrics tab to visualize the number of errors for that revision, and refresh the page daily. Create an alerting policy in Cloud Monitoring that alerts you if the number of errors is above the defined threshold. Create a Cloud Function that consumes the Cloud Monitoring API. Use Cloud Composer to trigger the Cloud Function daily and alert you if the number of errors is above the defined threshold.
You noticed that your application was forcefully shut down during a Deployment update in Google Kubernetes Engine. Your application didn’t close the database connection before it was terminated. You want to update your application to make sure that it completes a graceful shutdown. What should you do? Update your code to process a received SIGTERM signal to gracefully disconnect from the database. Configure a PodDisruptionBudget to prevent the Pod from being forcefully shut down. Increase the terminationGracePeriodSeconds for your application. Configure a PreStop hook to shut down your application.
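Note: on Pod termination, Kubernetes sends SIGTERM and waits terminationGracePeriodSeconds before SIGKILL, so trapping the signal gives the application time to clean up. A minimal Python sketch (the connection object is hypothetical):

import signal
import sys

def handle_sigterm(signum, frame):
    db_connection.close()  # hypothetical open database connection
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)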
You need to deploy an internet-facing microservices application to Google Kubernetes Engine (GKE). You want to validate new features using the A/B testing method. You have the following requirements for deploying new container image releases:
• There is no downtime when new container images are deployed.
• New production releases are tested and verified using a subset of production users.
What should you do?
A. 1. Configure your CI/CD pipeline to update the Deployment manifest file by replacing the container version with the latest version. 2. Recreate the Pods in your cluster by applying the Deployment manifest file. 3. Validate the application's performance by comparing its functionality with the previous release version, and roll back if an issue arises.
B. 1. Create a second namespace on GKE for the new release version. 2. Create a Deployment configuration for the second namespace with the desired number of Pods. 3. Deploy new container versions in the second namespace. 4. Update the Ingress configuration to route traffic to the namespace with the new container versions.
C. 1. Install the Anthos Service Mesh on your GKE cluster. 2. Create two Deployments on the GKE cluster, and label them with different version names. 3. Implement an Istio routing rule to send a small percentage of traffic to the Deployment that references the new version of the application.
D. 1. Implement a rolling update pattern by replacing the Pods gradually with the new release version. 2. Validate the application's performance for the new subset of users during the rollout, and roll back if an issue arises.
Your team manages a large Google Kubernetes Engine (GKE) cluster. Several application teams currently use the same namespace to develop microservices for the cluster. Your organization plans to onboard additional teams to create microservices. You need to configure multiple environments while ensuring the security and optimal performance of each team’s work. You want to minimize cost and follow Google-recommended best practices. What should you do? Create new role-based access controls (RBAC) for each team in the existing cluster, and define resource quotas. Create a new namespace for each environment in the existing cluster, and define resource quotas. Create a new GKE cluster for each team. Create a new namespace for each team in the existing cluster, and define resource quotas.
You have deployed a Java application to Cloud Run. Your application requires access to a database hosted on Cloud SQL. Due to regulatory requirements, your connection to the Cloud SQL instance must use its internal IP address. How should you configure the connectivity while following Google-recommended best practices? Configure your Cloud Run service with a Cloud SQL connection. Configure your Cloud Run service to use a Serverless VPC Access connector. Configure your application to use the Cloud SQL Java connector. Configure your application to connect to an instance of the Cloud SQL Auth proxy.
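Note: reaching a private Cloud SQL IP from Cloud Run requires a Serverless VPC Access connector attached to the service. A sketch of the two gcloud steps (names, region, and range hypothetical):

gcloud compute networks vpc-access connectors create sql-connector \
    --region=us-central1 --network=default --range=10.8.0.0/28

gcloud run deploy payments --image=IMAGE_URL --region=us-central1 \
    --vpc-connector=sql-connector --vpc-egress=private-ranges-only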
Your application stores customers’ content in a Cloud Storage bucket, with each object being encrypted with the customer's encryption key. The key for each object in Cloud Storage is entered into your application by the customer. You discover that your application is receiving an HTTP 4xx error when reading the object from Cloud Storage. What is a possible cause of this error? You attempted the read operation on the object with the customer's base64-encoded key. You attempted the read operation without the base64-encoded SHA256 hash of the encryption key. You entered the same encryption algorithm specified by the customer when attempting the read operation. You attempted the read operation on the object with the base64-encoded SHA256 hash of the customer's key.
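Note: with customer-supplied encryption keys, every read must present the same AES-256 key; the Python client derives the base64 key and SHA256 hash headers from the raw key bytes. A sketch (bucket, object, and key variable hypothetical):

import base64

from google.cloud import storage

client = storage.Client()
key = base64.b64decode(customer_supplied_b64_key)  # raw 32-byte AES-256 key

blob = client.bucket("customer-content").blob("photo.jpg", encryption_key=key)
data = blob.download_as_bytes()  # fails with a 4xx if the key does not match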
A governmental regulation was recently passed that affects your application. For compliance purposes, you are now required to send a duplicate of specific application logs from your application’s project to a project that is restricted to the security team. What should you do? Create user-defined log buckets in the security team’s project. Configure a Cloud Logging sink to route your application’s logs to log buckets in the security team’s project. Create a job that copies the logs from the _Required log bucket into the security team’s log bucket in their project. Modify the _Default log bucket sink rules to reroute the logs into the security team’s log bucket. Create a job that copies the System Event logs from the _Required log bucket into the security team’s log bucket in their project.
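Note: a Cloud Logging sink can route matching entries to a log bucket in another project; the sink's writer identity must then be granted write access on that bucket. A sketch (names and filter hypothetical):

gcloud logging sinks create security-duplicate \
    logging.googleapis.com/projects/security-project/locations/global/buckets/security-logs \
    --log-filter='resource.type="cloud_run_revision" AND severity>=WARNING'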
You plan to deploy a new Go application to Cloud Run. The source code is stored in Cloud Source Repositories. You need to configure a fully managed, automated, continuous deployment pipeline that runs when a source code commit is made. You want to use the simplest deployment solution. What should you do? Configure a cron job on your workstations to periodically run gcloud run deploy --source in the working directory. Configure a Jenkins trigger to run the container build and deploy process for each source code commit to Cloud Source Repositories. Configure continuous deployment of new revisions from a source repository for Cloud Run using buildpacks. Use Cloud Build with a trigger configured to run the container build and deploy process for each source code commit to Cloud Source Repositories.
You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to Cloud Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do? Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API. Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data Studio. Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send notifications using Cloud Tasks. Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based metrics from the logs, and correlate them with the Prometheus data.
You are deploying a Python application to Cloud Run using Cloud Source Repositories and Cloud Build. The Cloud Build pipeline is shown below: You want to optimize deployment times and avoid unnecessary steps. What should you do? Remove the step that pushes the container to Artifact Registry. Deploy a new Docker registry in a VPC, and use Cloud Build worker pools inside the VPC to run the build pipeline. Store image artifacts in a Cloud Storage bucket in the same region as the Cloud Run instance. Add the --cache-from argument to the Docker build step in your build config file.
You are developing a web application that contains private images and videos stored in a Cloud Storage bucket. Your users are anonymous and do not have Google Accounts. You want to use your application-specific logic to control access to the images and videos. How should you configure access? Cache each web application user's IP address to create a named IP table using Google Cloud Armor. Create a Google Cloud Armor security policy that allows users to access the backend bucket. Grant the Storage Object Viewer IAM role to allUsers. Allow users to access the bucket after authenticating through your web application Configure Identity-Aware Proxy (IAP) to authenticate users into the web application. Allow users to access the bucket after authenticating through IAP. Generate a signed URL that grants read access to the bucket. Allow users to access the URL after authenticating through your web application.
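Note: a signed URL grants time-limited access to a private object without any Google identity, so the application can mint one after its own authorization check. A Python sketch (bucket and object names hypothetical):

import datetime

from google.cloud import storage

client = storage.Client()
blob = client.bucket("media-assets").blob("videos/clip.mp4")

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)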
You have an application written in Python running in production on Cloud Run. Your application needs to read/write data stored in a Cloud Storage bucket in the same project. You want to grant access to your application following the principle of least privilege. What should you do? Create a user-managed service account with a custom Identity and Access Management (IAM) role. Create a user-managed service account with the Storage Admin Identity and Access Management (IAM) role. Create a user-managed service account with the Project Editor Identity and Access Management (IAM) role. Use the default service account linked to the Cloud Run revision in production.
Your team is developing unit tests for Cloud Function code. The code is stored in a Cloud Source Repositories repository. You are responsible for implementing the tests. Only a specific service account has the necessary permissions to deploy the code to Cloud Functions. You want to ensure that the code cannot be deployed without first passing the tests. How should you configure the unit testing process? Configure Cloud Build to deploy the Cloud Function. If the code passes the tests, a deployment approval is sent to you. Configure Cloud Build to deploy the Cloud Function, using the specific service account as the build agent. Run the unit tests after successful deployment. Configure Cloud Build to run the unit tests. If the code passes the tests, the developer deploys the Cloud Function. Configure Cloud Build to run the unit tests, using the specific service account as the build agent. If the code passes the tests, Cloud Build deploys the Cloud Function.
HipLocal's .NET-based auth service fails under intermittent load. What should they do? Use App Engine for autoscaling. Use Cloud Functions for autoscaling. Use a Compute Engine cluster for the service. Use a dedicated Compute Engine virtual machine instance for the service.
Your team detected a spike of errors in an application running on Cloud Run in your production project. The application is configured to read messages from Pub/Sub topic A, process the messages, and write the messages to topic B. You want to conduct tests to identify the cause of the errors. You can use a set of mock messages for testing. What should you do? Deploy the Pub/Sub and Cloud Run emulators on your local machine. Deploy the application locally, and change the logging level in the application to DEBUG or INFO. Write mock messages to topic A, and then analyze the logs. Use the gcloud CLI to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs. Deploy the Pub/Sub emulator on your local machine. Point the production application to your local Pub/Sub topics. Write mock messages to topic A, and then analyze the logs. Use the Google Cloud console to write mock messages to topic A. Change the logging level in the application to DEBUG or INFO, and then analyze the logs.
You are developing a Java Web Server that needs to interact with Google Cloud services via the Google Cloud API on the user's behalf. Users should be able to authenticate to the Google Cloud API using their Google Cloud identities. Which workflow should you implement in your web application?
A. 1. When a user arrives at your application, prompt them for their Google username and password. 2. Store an SHA password hash in your application's database along with the user's username. 3. The application authenticates to the Google Cloud API using HTTPS requests with the user's username and password hash in the Authorization request header.
B. 1. When a user arrives at your application, prompt them for their Google username and password. 2. Forward the user's username and password in an HTTPS request to the Google Cloud authorization server, and request an access token. 3. The Google server validates the user's credentials and returns an access token to the application. 4. The application uses the access token to call the Google Cloud API.
C. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in with SSO to their Google Account. 2. After the user signs in and provides consent, your application receives an authorization code from a Google server. 3. The Google server returns the authorization code to the user, which is stored in the browser's cookies. 4. The user authenticates to the Google Cloud API using the authorization code in the cookie.
D. 1. When a user arrives at your application, route them to a Google Cloud consent screen with a list of requested permissions that prompts the user to sign in with SSO to their Google Account. 2. After the user signs in and provides consent, your application receives an authorization code from a Google server. 3. The application requests a Google Server to exchange the authorization code with an access token. 4. The Google server responds with the access token that is used by the application to call the Google Cloud API.
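Note: the consent-screen flow described in the last two options is the OAuth 2.0 authorization code flow. A minimal server-side sketch with the google-auth-oauthlib package (file name, scopes, and redirect URI hypothetical):

from google_auth_oauthlib.flow import Flow

flow = Flow.from_client_secrets_file(
    "client_secret.json",
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
    redirect_uri="https://example.com/oauth2callback",
)

auth_url, _ = flow.authorization_url(access_type="offline", prompt="consent")
# 1. Redirect the user to auth_url (the Google consent screen).
# 2. In the callback handler, exchange the returned code for tokens:
flow.fetch_token(code=received_authorization_code)  # hypothetical variable
credentials = flow.credentials  # used to call the Google Cloud API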
You work for an organization that manages an online ecommerce website. Your company plans to expand across the world; however, the e-store currently serves one specific region. You need to select a SQL database and configure a schema that will scale as your organization grows. You want to create a table that stores all customer transactions and ensure that the customer (CustomerId) and the transaction (TransactionId) are unique. What should you do? Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId. Create a Cloud SQL table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use a random string (UUID) for the TransactionId. Create a Cloud Spanner table that has TransactionId and CustomerId configured as primary keys. Use an incremental number for the TransactionId.
You are monitoring a web application that is written in Go and deployed in Google Kubernetes Engine. You notice an increase in CPU and memory utilization. You need to determine which source code is consuming the most CPU and memory resources. What should you do? Download, install, and start the Snapshot Debugger agent in your VM. Take debug snapshots of the functions that take the longest time. Review the call stack frame, and identify the local variables at that level in the stack. Import the Cloud Profiler package into your application, and initialize the Profiler agent. Review the generated flame graph in the Google Cloud console to identify time-intensive functions. Import OpenTelemetry and Trace export packages into your application, and create the trace provider. Review the latency data for your application on the Trace overview page, and identify where bottlenecks are occurring. Create a Cloud Logging query that gathers the web application's logs. Write a Python script that calculates the difference between the timestamps from the beginning and the end of the application's longest functions to identify time-intensive functions.
You work for an organization that manages an ecommerce site. Your application is deployed behind a global HTTP(S) load balancer. You need to test a new product recommendation algorithm. You plan to use A/B testing to determine the new algorithm’s effect on sales in a randomized way. How should you test this feature? Split traffic between versions using weights. Enable the new recommendation feature flag on a single instance. Mirror traffic to the new version of your application. Use HTTP header-based routing.
You plan to deploy a new application revision with a Deployment resource to Google Kubernetes Engine (GKE) in production. The container might not work correctly. You want to minimize risk in case there are issues after deploying the revision. You want to follow Google-recommended best practices. What should you do? Perform a rolling update with a PodDisruptionBudget of 80%. Perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0. Convert the Deployment to a StatefulSet, and perform a rolling update with a PodDisruptionBudget of 80%. Convert the Deployment to a StatefulSet, and perform a rolling update with a HorizontalPodAutoscaler scale-down policy value of 0.
Before promoting your new application code to production, you want to conduct testing across a variety of different users. Although this plan is risky, you want to test the new version of the application with production users and you want to control which users are forwarded to the new version of the application based on their operating system. If bugs are discovered in the new version, you want to roll back the newly deployed version of the application as quickly as possible. What should you do? Deploy your application on Cloud Run. Use traffic splitting to direct a subset of user traffic to the new version based on the revision tag. Deploy your application on Google Kubernetes Engine with Anthos Service Mesh. Use traffic splitting to direct a subset of user traffic to the new version based on the user-agent header. Deploy your application on App Engine. Use traffic splitting to direct a subset of user traffic to the new version based on the IP address. Deploy your application on Compute Engine. Use Traffic Director to direct a subset of user traffic to the new version based on predefined weights.
Your team is writing a backend application to implement the business logic for an interactive voice response (IVR) system that will support a payroll application. The IVR system has the following technical characteristics:
• Each customer phone call is associated with a unique IVR session.
• The IVR system creates a separate persistent gRPC connection to the backend for each session.
• If the connection is interrupted, the IVR system establishes a new connection, causing a slight latency for that call.
You need to determine which compute environment should be used to deploy the backend application. Using current call data, you determine that:
• Call duration ranges from 1 to 30 minutes.
• Calls are typically made during business hours.
• There are significant spikes of calls around certain known dates (e.g., pay days), or when large payroll changes occur.
You want to minimize cost, effort, and operational overhead. Where should you deploy the backend application?
A. Compute Engine
B. Google Kubernetes Engine cluster in Standard mode
C. Cloud Functions
D. Cloud Run.
You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do?
A. 1. Create a managed instance group. Replicate the static content across the virtual machines (VMs). 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the managed instance group.
B. 1. Create an unmanaged instance group. Replicate the static content across the VMs. 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the unmanaged instance group.
C. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.
D. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.
HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort? Migrate the database to Bigtable and use it to serve all global user traffic. Migrate the database to Cloud Spanner and use it to serve all global user traffic. Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic. Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.
You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next? Assign the Google Cloud service account to your GKE Pod using Workload Identity. Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret. Export the Google Cloud service account, and embed it in the source code of the application. Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.
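Note: Workload Identity binds a Kubernetes service account (KSA) to a Google service account (GSA) so Pods get auto-rotated, short-lived credentials with no exported keys. A sketch of the binding (all names hypothetical):

gcloud iam service-accounts add-iam-policy-binding \
    app-gsa@my-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:my-project.svc.id.goog[my-namespace/app-ksa]"

kubectl annotate serviceaccount app-ksa --namespace my-namespace \
    iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com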
You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service? Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh. Install cert-manager on GKE to automatically renew the SSL certificates. Install Anthos Service Mesh, and enable mTLS in your Service Mesh.
You are developing an application that will store and access sensitive unstructured data objects in a Cloud Storage bucket. To comply with regulatory requirements, you need to ensure that all data objects are available for at least 7 years after their initial creation. Objects created more than 3 years ago are accessed very infrequently (less than once a year). You need to configure object storage while ensuring that storage cost is optimized. What should you do? (Choose two.) Set a retention policy on the bucket with a period of 7 years. Use IAM Conditions to provide access to objects 7 years after the object creation date. Enable Object Versioning to prevent objects from being accidentally deleted for 7 years after object creation. Create an object lifecycle policy on the bucket that moves objects from Standard Storage to Archive Storage after 3 years. Implement a Cloud Function that checks the age of each object in the bucket and moves the objects older than 3 years to a second bucket with the Archive Storage class. Use Cloud Scheduler to trigger the Cloud Function on a daily schedule.
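Note: both controls are bucket-level configuration. A sketch of a lifecycle rule that moves objects to Archive after 3 years (about 1,095 days), applied together with a 7-year retention policy (bucket name hypothetical):

lifecycle.json:
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "ARCHIVE"},
      "condition": {"age": 1095}
    }
  ]
}

gsutil lifecycle set lifecycle.json gs://compliance-bucket
gsutil retention set 7y gs://compliance-bucket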
You are building an application that uses a distributed microservices architecture. You want to measure the performance and system resource utilization in one of the microservices written in Java. What should you do? Instrument the service with Cloud Profiler to measure CPU utilization and method-level execution times in the service. Instrument the service with Debugger to investigate service errors. Instrument the service with Cloud Trace to measure request latency. Instrument the service with OpenCensus to measure service latency, and write custom metrics to Cloud Monitoring.
You are planning to add unit tests to your application. You need to be able to assert that published Pub/Sub messages are processed by your subscriber in order. You want the unit tests to be cost-effective and reliable. What should you do? Implement a mocking framework. Create a topic and subscription for each tester. Add a filter by tester to the subscription. Use the Pub/Sub emulator.
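Note: the Pub/Sub emulator runs locally, costs nothing, and the client libraries use it automatically once PUBSUB_EMULATOR_HOST is set. A sketch:

gcloud beta emulators pubsub start --project=test-project &
$(gcloud beta emulators pubsub env-init)   # exports PUBSUB_EMULATOR_HOST

Any google.cloud.pubsub_v1 publisher or subscriber created afterwards in that environment connects to the emulator instead of the real service.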
You are using Cloud Run to host a web application. You need to securely obtain the application project ID and region where the application is running and display this information to users. You want to use the most performant approach. What should you do? Use HTTP requests to query the available metadata server at the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Navigate to the Cloud Run “Variables & Secrets” tab, and add the desired environment variables in Key:Value format. In the Google Cloud console, navigate to the Project Dashboard and gather configuration details. Write the application configuration information to Cloud Run's in-memory container filesystem. Make an API call to the Cloud Asset Inventory API from the application and format the request to include instance metadata.
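Note: the metadata server is reachable from the container at a fixed endpoint and answers with very low latency and no API quota. A Python sketch:

import requests

BASE = "http://metadata.google.internal/computeMetadata/v1"
HEADERS = {"Metadata-Flavor": "Google"}

project_id = requests.get(f"{BASE}/project/project-id", headers=HEADERS).text
# The region path returns projects/PROJECT_NUMBER/regions/REGION.
region = requests.get(f"{BASE}/instance/region", headers=HEADERS).text.split("/")[-1]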
You need to deploy resources from your laptop to Google Cloud using Terraform. Resources in your Google Cloud environment must be created using a service account. Your Cloud Identity has the roles/iam.serviceAccountTokenCreator Identity and Access Management (IAM) role and the necessary permissions to deploy the resources using Terraform. You want to set up your development environment to deploy the desired resources following Google-recommended best practices. What should you do?
A. 1. Download the service account’s key file in JSON format, and store it locally on your laptop. 2. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of your downloaded key file.
B. 1. Run the following command from a command line: gcloud config set auth/impersonate_service_account service-account-name@project.iam.gserviceaccount.com. 2. Set the GOOGLE_OAUTH_ACCESS_TOKEN environment variable to the value that is returned by the gcloud auth print-access-token command.
C. 1. Run the following command from a command line: gcloud auth application-default login. 2. In the browser window that opens, authenticate using your personal credentials.
D. 1. Store the service account's key file in JSON format in Hashicorp Vault. 2. Integrate Terraform with Vault to retrieve the key file dynamically, and authenticate to Vault using a short-lived access token.
Your company uses Cloud Logging to manage large volumes of log data. You need to build a real-time log analysis architecture that pushes logs to a third-party application for processing. What should you do? Create a Cloud Logging log export to Pub/Sub. Create a Cloud Logging log export to BigQuery. Create a Cloud Logging log export to Cloud Storage. Create a Cloud Function to read Cloud Logging log entries and send them to the third-party application.
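Note: a sink with a Pub/Sub destination streams matching entries in near real time, and the third-party application consumes them from a subscription. A sketch (names and filter hypothetical):

gcloud logging sinks create third-party-export \
    pubsub.googleapis.com/projects/my-project/topics/log-export \
    --log-filter='severity>=INFO'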
You work at a rapidly growing financial technology startup. You manage the payment processing application written in Go and hosted on Cloud Run in the Singapore region (asia-southeast1). The payment processing application processes data stored in a Cloud Storage bucket that is also located in the Singapore region. The startup plans to expand further into the Asia Pacific region. You plan to deploy the Payment Gateway in Jakarta, Hong Kong, and Taiwan over the next six months. Each location has data residency requirements that require customer data to reside in the country where the transaction was made. You want to minimize the cost of these deployments. What should you do? Create a Cloud Storage bucket in each region, and create a Cloud Run service of the payment processing application in each region. Create a Cloud Storage bucket in each region, and create three Cloud Run services of the payment processing application in the Singapore region. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run services of the payment processing application in the Singapore region. Create three Cloud Storage buckets in the Asia multi-region, and create three Cloud Run revisions of the payment processing application in the Singapore region.
You recently joined a new team that has a Cloud Spanner database instance running in production. Your manager has asked you to optimize the Spanner instance to reduce cost while maintaining high reliability and availability of the database. What should you do? Use Cloud Logging to check for error logs, and reduce Spanner processing units by small increments until you find the minimum capacity required. Use Cloud Trace to monitor the requests per sec of incoming requests to Spanner, and reduce Spanner processing units by small increments until you find the minimum capacity required. Use Cloud Monitoring to monitor the CPU utilization, and reduce Spanner processing units by small increments until you find the minimum capacity required. Use Snapshot Debugger to check for application errors, and reduce Spanner processing units by small increments until you find the minimum capacity required.
You recently deployed a Go application on Google Kubernetes Engine (GKE). The operations team has noticed that the application's CPU usage is high even when there is low production traffic. The operations team has asked you to optimize your application's CPU resource consumption. You want to determine which Go functions consume the largest amount of CPU. What should you do? Deploy a Fluent Bit daemonset on the GKE cluster to log data in Cloud Logging. Analyze the logs to get insights into your application code’s performance. Create a custom dashboard in Cloud Monitoring to evaluate the CPU performance metrics of your application. Connect to your GKE nodes using SSH. Run the top command on the shell to extract the CPU utilization of your application. Modify your Go application to capture profiling data. Analyze the CPU metrics of your application in flame graphs in Profiler.
You have recently instrumented a new application with OpenTelemetry, and you want to check the latency of your application requests in Trace. You want to ensure that a specific request is always traced. What should you do? Wait 10 minutes, then verify that Trace captures those types of requests automatically. Write a custom script that sends this type of request repeatedly from your dev project. Use the Trace API to apply custom attributes to the trace. Add the X-Cloud-Trace-Context header to the request with the appropriate parameters.
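Note: the trace header carries a trace ID, a span ID, and a sampling flag; setting o=1 forces the request to be traced. A Python sketch (URL and IDs are arbitrary examples):

import requests

trace_id = "105445aa7843bc8bf206b12000100000"  # 32 hex characters
headers = {"X-Cloud-Trace-Context": f"{trace_id}/1;o=1"}
requests.get("https://my-service.example.com/checkout", headers=headers)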
You are trying to connect to your Google Kubernetes Engine (GKE) cluster using kubectl from Cloud Shell. You have deployed your GKE cluster with a public endpoint. From Cloud Shell, you run the following command: You notice that the kubectl commands time out without returning an error message. What is the most likely cause of this issue? Your user account does not have privileges to interact with the cluster using kubectl. Your Cloud Shell external IP address is not part of the authorized networks of the cluster. The Cloud Shell is not part of the same VPC as the GKE cluster. A VPC firewall is blocking access to the cluster’s endpoint.
You are developing an ecommerce web application that uses App Engine standard environment and Memorystore for Redis. When a user logs into the app, the application caches the user's information (e.g., session, name, address, preferences), which is stored for quick retrieval during checkout. While testing your application in a browser, you get a 502 Bad Gateway error. You have determined that the application is not connecting to Memorystore. What is the reason for this error? Your Memorystore for Redis instance was deployed without a public IP address. You configured your Serverless VPC Access connector in a different region than your App Engine instance. The firewall rule allowing a connection between App Engine and Memorystore was removed during an infrastructure update by the DevOps team You configured your application to use a Serverless VPC Access connector on a different subnet in a different availability zone than your App Engine instance.
Your development team has built several Cloud Functions using Java along with corresponding integration and service tests. You are building and deploying the functions and launching the tests using Cloud Build. Your Cloud Build job is reporting deployment failures immediately after successfully validating the code. What should you do? Check the maximum number of Cloud Function instances. Verify that your Cloud Build trigger has the correct build parameters. Retry the tests using the truncated exponential backoff polling strategy. Verify that the Cloud Build service account is assigned the Cloud Functions Developer role.
You manage a microservices application on Google Kubernetes Engine (GKE) using Istio. You secure the communication channels between your microservices by implementing an Istio AuthorizationPolicy, a Kubernetes NetworkPolicy, and mTLS on your GKE cluster. You discover that HTTP requests between two Pods to specific URLs fail, while other requests to other URLs succeed. What is the cause of the connection issue? A Kubernetes NetworkPolicy resource is blocking HTTP traffic between the Pods. The Pod initiating the HTTP requests is attempting to connect to the target Pod via an incorrect TCP port. The Authorization Policy of your cluster is blocking HTTP requests for specific paths within your application. The cluster has mTLS configured in permissive mode, but the Pod's sidecar proxy is sending unencrypted traffic in plain text.
You recently migrated an on-premises monolithic application to a microservices application on Google Kubernetes Engine (GKE). The application has dependencies on backend services on-premises, including a CRM system and a MySQL database that contains personally identifiable information (PII). The backend services must remain on-premises to meet regulatory requirements. You established a Cloud VPN connection between your on-premises data center and Google Cloud. You notice that some requests from your microservices application on GKE to the backend services are failing due to latency issues caused by fluctuating bandwidth, which is causing the application to crash. How should you address the latency issues? Use Memorystore to cache frequently accessed PII data from the on-premises MySQL database. Use Istio to create a service mesh that includes the microservices on GKE and the on-premises services. Increase the number of Cloud VPN tunnels for the connection between Google Cloud and the on-premises services. Decrease the network layer packet size by decreasing the Maximum Transmission Unit (MTU) value from its default value on Cloud VPN.
Your company has deployed a new API to a Compute Engine instance. During testing, the API is not behaving as expected. You want to monitor the application over 12 hours to diagnose the problem within the application code without redeploying the application. Which tool should you use? A. Cloud Trace B. Cloud Monitoring C. Cloud Debugger logpoints D. Cloud Debugger snapshots.
You are designing an application that consists of several microservices. Each microservice has its own RESTful API and will be deployed as a separate Kubernetes Service. You want to ensure that the consumers of these APIs aren't impacted when there is a change to your API, and also ensure that third-party systems aren't interrupted when new versions of the API are released. How should you configure the connection to the application following Google-recommended best practices? Use an Ingress that uses the API's URL to route requests to the appropriate backend. Leverage a Service Discovery system, and connect to the backend specified by the request. Use multiple clusters, and use DNS entries to route requests to separate versioned backends. Combine multiple versions in the same service, and then specify the API version in the POST request.
Your team is building an application for a financial institution. The application's frontend runs on Compute Engine, and the data resides in Cloud SQL and one Cloud Storage bucket. The application will collect data containing PII, which will be stored in the Cloud SQL database and the Cloud Storage bucket. You need to secure the PII data. What should you do?
A. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database. 2. Using IAM, allow only the frontend service account to access the Cloud Storage bucket.
B. 1. Create the relevant firewall rules to allow only the frontend to communicate with the Cloud SQL database. 2. Enable private access to allow the frontend to access the Cloud Storage bucket privately.
C. 1. Configure a private IP address for Cloud SQL. 2. Use VPC-SC to create a service perimeter. 3. Add the Cloud SQL database and the Cloud Storage bucket to the same service perimeter.
D. 1. Configure a private IP address for Cloud SQL. 2. Use VPC-SC to create a service perimeter. 3. Add the Cloud SQL database and the Cloud Storage bucket to different service perimeters.
You are building a highly available and globally accessible application that will serve static content to users. You need to configure the storage and serving components. You want to minimize management overhead and latency while maximizing reliability for users. What should you do? 1. Create a managed instance group. Replicate the static content across the virtual machines (VMs). 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the managed instance group. 1. Create an unmanaged instance group. Replicate the static content across the VMs. 2. Create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to the unmanaged instance group. 1. Create a Standard storage class, regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket. 1. Create a Standard storage class, multi-regional Cloud Storage bucket. Put the static content in the bucket. 2. Reserve an external IP address, and create an external HTTP(S) load balancer. 3. Enable Cloud CDN, and send traffic to your backend bucket.
HipLocal wants to reduce the latency of their services for users in global locations. They have created read replicas of their database in locations where their users reside and configured their service to read traffic using those replicas. How should they further reduce latency for all database interactions with the least amount of effort? Migrate the database to Bigtable and use it to serve all global user traffic. Migrate the database to Cloud Spanner and use it to serve all global user traffic. Migrate the database to Firestore in Datastore mode and use it to serve all global user traffic. Migrate the services to Google Kubernetes Engine and use a load balancer service to better scale the application.
A recent security audit discovers that HipLocal’s database credentials for their Compute Engine-hosted MySQL databases are stored in plain text on persistent disks. HipLocal needs to reduce the risk of these credentials being stolen. What should they do? Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain the database credentials. Create a service account and download its key. Use the key to authenticate to Cloud Key Management Service (KMS) to obtain a key used to decrypt the database credentials. Create a service account and grant it the roles/iam.serviceAccountUser role. Impersonate this account and authenticate using the Cloud SQL Proxy. Grant the roles/secretmanager.secretAccessor role to the Compute Engine service account. Store and access the database credentials with the Secret Manager API.
Technical Requirements: * The application and backend must provide usage metrics and monitoring. * APIs require strong authentication and authorization. * Logging must be increased, and data should be stored in a cloud analytics platform. * Move to serverless architecture to facilitate elastic scaling. * Provide authorized access to internal apps in a secure manner. Which database should HipLocal use for storing user activity? BigQuery Cloud SQL Cloud Spanner Cloud Datastore.
You have an application deployed in Google Kubernetes Engine (GKE). You need to update the application to make authorized requests to Google Cloud managed services. You want this to be a one-time setup, and you need to follow security best practices of auto-rotating your security keys and storing them in an encrypted store. You already created a service account with appropriate access to the Google Cloud service. What should you do next? Assign the Google Cloud service account to your GKE Pod using Workload Identity. Export the Google Cloud service account, and share it with the Pod as a Kubernetes Secret. Export the Google Cloud service account, and embed it in the source code of the application. Export the Google Cloud service account, and upload it to HashiCorp Vault to generate a dynamic service account for your application.
You are planning to deploy hundreds of microservices in your Google Kubernetes Engine (GKE) cluster. How should you secure communication between the microservices on GKE using a managed service? Use global HTTP(S) Load Balancing with managed SSL certificates to protect your services. Deploy open source Istio in your GKE cluster, and enable mTLS in your Service Mesh. Install cert-manager on GKE to automatically renew the SSL certificates. Install Anthos Service Mesh, and enable mTLS in your Service Mesh.
You are a developer working on an internal application for payroll processing. You are building a component of the application that allows an employee to submit a timesheet, which then initiates several steps: • An email is sent to the employee and manager, notifying them that the timesheet was submitted. • A timesheet is sent to payroll processing for the vendor's API. • A timesheet is sent to the data warehouse for headcount planning. These steps are not dependent on each other and can be completed in any order. New steps are being considered and will be implemented by different development teams. Each development team will implement the error handling specific to their step. What should you do? Deploy a Cloud Function for each step that calls the corresponding downstream system to complete the required action. Create a Pub/Sub topic for each step. Create a subscription for each downstream development team to subscribe to their step's topic. Create a Pub/Sub topic for timesheet submissions. Create a subscription for each downstream development team to subscribe to the topic. Create a timesheet microservice deployed to Google Kubernetes Engine. The microservice calls each downstream step and waits for a successful response before calling the next step.
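For context, the single-topic fan-out pattern behind this scenario can be sketched with a few gcloud commands; the topic and subscription names below are hypothetical:

gcloud pubsub topics create timesheet-submissions
gcloud pubsub subscriptions create email-team-sub --topic=timesheet-submissions
gcloud pubsub subscriptions create payroll-team-sub --topic=timesheet-submissions
gcloud pubsub subscriptions create warehouse-team-sub --topic=timesheet-submissions

Each team consumes its own subscription independently and owns its error handling, and adding a new step only requires creating one more subscription.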
You are designing an application that uses a microservices architecture. You are planning to deploy the application in the cloud and on-premises. You want to make sure the application can scale up on demand and also use managed services as much as possible. What should you do? Deploy open source Istio in a multi-cluster deployment on multiple Google Kubernetes Engine (GKE) clusters managed by Anthos. Create a GKE cluster in each environment with Anthos, and use Cloud Run for Anthos to deploy your application to each cluster. Install a GKE cluster in each environment with Anthos, and use Cloud Build to create a Deployment for your application in each cluster. Create a GKE cluster in the cloud and install open-source Kubernetes on-premises. Use an external load balancer service to distribute traffic across the two environments.
You want to migrate an on-premises container running in Knative to Google Cloud. You need to make sure that the migration doesn't affect your application's deployment strategy, and you want to use a fully managed service. Which Google Cloud service should you use to deploy your container? Cloud Run Compute Engine Google Kubernetes Engine App Engine flexible environment.
This architectural diagram depicts a system that streams data from thousands of devices. You want to ingest data into a pipeline, store the data, and analyze the data using SQL statements. Which Google Cloud services should you use for steps 1, 2, 3, and 4? 1. App Engine 2. Pub/Sub 3. BigQuery 4. Firestore 1. Dataflow 2. Pub/Sub 3. Firestore 4. BigQuery 1. Pub/Sub 2. Dataflow 3. BigQuery 4. Firestore 1. Pub/Sub 2. Dataflow 3. Firestore 4. BigQuery.
Your company just experienced a Google Kubernetes Engine (GKE) API outage due to a zone failure. You want to deploy a highly available GKE architecture that minimizes service interruption to users in the event of a future zone failure. What should you do? Deploy Zonal clusters Deploy Regional clusters Deploy Multi-Zone clusters Deploy GKE on-premises clusters.
Your team develops services that run on Google Cloud. You want to process messages sent to a Pub/Sub topic, and then store them. Each message must be processed exactly once to avoid duplication of data and any data conflicts. You need to use the cheapest and most simple solution. What should you do? Process the messages with a Dataproc job, and write the output to storage. Process the messages with a Dataflow streaming pipeline using Apache Beam's PubSubIO package, and write the output to storage. Process the messages with a Cloud Function, and write the results to a BigQuery location where you can run a job to deduplicate the data. Retrieve the messages with a Dataflow streaming pipeline, store them in Cloud Bigtable, and use another Dataflow streaming pipeline to deduplicate messages.
You are using the Cloud Client Library to upload an image in your application to Cloud Storage. Users of the application report that occasionally the upload does not complete and the client library reports an HTTP 504 Gateway Timeout error. You want to make the application more resilient to errors. What changes to the application should you make? Write an exponential backoff process around the client library call. Write a one-second wait time backoff process around the client library call. Design a retry button in the application and ask users to click if the error occurs. Create a queue for the object and inform the users that the application will try again in 10 minutes.
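A minimal sketch of exponential backoff around an upload call, assuming the Python client library; bucket and file names are placeholders, and recent client library versions can also retry automatically via a retry argument:

import random
import time

from google.cloud import storage


def upload_with_backoff(bucket_name, blob_name, local_path, max_retries=5):
    """Retry a Cloud Storage upload with exponential backoff plus jitter."""
    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    for attempt in range(max_retries):
        try:
            blob.upload_from_filename(local_path)
            return
        except Exception:
            # In production, catch only retryable errors (e.g., 5xx timeouts).
            if attempt == max_retries - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus random jitter before retrying.
            time.sleep(2 ** attempt + random.random())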
You are building a mobile application that will store hierarchical data structures in a database. The application will enable users working offline to sync changes when they are back online. A backend service will enrich the data in the database using a service account. The application is expected to be very popular and needs to scale seamlessly and securely. Which database and IAM role should you use? Use Cloud SQL, and assign the roles/cloudsql.editor role to the service account. Use Bigtable, and assign the roles/bigtable.viewer role to the service account. Use Firestore in Native mode and assign the roles/datastore.user role to the service account. Use Firestore in Datastore mode and assign the roles/datastore.viewer role to the service account.
You are developing an application that needs to store files belonging to users in Cloud Storage. You want each user to have their own subdirectory in Cloud Storage. When a new user is created, the corresponding empty subdirectory should also be created. What should you do? Create an object with the name of the subdirectory ending with a trailing slash ('/') that is zero bytes in length Create an object with the name of the subdirectory, and then immediately delete the object within that subdirectory. Create an object with the name of the subdirectory that is zero bytes in length and has WRITER access control list permission. Create an object with the name of the subdirectory that is zero bytes in length. Set the Content-Type metadata to CLOUDSTORAGE_FOLDER.
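A minimal sketch of the trailing-slash approach, assuming the Python client library; the bucket and user names are placeholders:

from google.cloud import storage


def create_user_folder(bucket_name, username):
    """Create the user's 'subdirectory' as a zero-byte placeholder object."""
    bucket = storage.Client().bucket(bucket_name)
    # The trailing slash makes browsers and tools render this object as a folder.
    bucket.blob(f"{username}/").upload_from_string(b"")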
One of your deployed applications in Google Kubernetes Engine (GKE) is having intermittent performance issues. Your team uses a third-party logging solution. You want to install this solution on each node in your GKE cluster so you can view the logs. What should you do? Deploy the third-party solution as a DaemonSet Modify your container image to include the monitoring software Use SSH to connect to the GKE node, and install the software manually Deploy the third-party solution using Terraform and deploy the logging Pod as a Kubernetes Deployment.
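For reference, a DaemonSet schedules exactly one Pod per node, which is why it fits per-node logging agents; this minimal sketch uses a placeholder image and mounts the node's log directory:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                        # hypothetical agent name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: example.com/log-agent:1.0   # placeholder third-party image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                 # node-level logs visible to the agent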
How should HipLocal increase their API development speed while continuing to provide the QA team with a stable testing environment that meets feature requirements? Include unit tests in their code, and prevent deployments to QA until all tests have a passing status. Include performance tests in their code, and prevent deployments to QA until all tests have a passing status. Create health checks for the QA environment, and redeploy the APIs at a later time if the environment is unhealthy. Redeploy the APIs to App Engine using Traffic Splitting. Do not move QA traffic to the new versions if errors are found.
HipLocal's application uses Cloud Client Libraries to interact with Google Cloud. HipLocal needs to configure authentication and authorization in the Cloud Client Libraries to implement least privileged access for the application. What should they do? Create an API key. Use the API key to interact with Google Cloud. Use the default compute service account to interact with Google Cloud. Create a service account for the application. Export and deploy the private key for the application. Use the service account to interact with Google Cloud. Create a service account for the application and for each Google Cloud API used by the application. Export and deploy the private keys used by the application. Use the service account with one Google Cloud API to interact with Google Cloud.
How should HipLocal redesign their architecture to ensure that the application scales to support a large increase in users? Use Google Kubernetes Engine (GKE) to run the application as a microservice. Run the MySQL database on a dedicated GKE node. Use multiple Compute Engine instances to run MySQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling. Use Memorystore to store session information and Cloud SQL to store state information. Use a Google Cloud-managed load balancer to distribute the load between instances. Use managed instance groups for scaling. Use a Cloud Storage bucket to serve the application as a static website, and use another Cloud Storage bucket to store user state information.
You are developing an application that consists of several microservices running in a Google Kubernetes Engine cluster. One microservice needs to connect to a third-party database running on-premises. You need to store credentials to the database and ensure that these credentials can be rotated while following security best practices. What should you do? Store the credentials in a sidecar container proxy, and use it to connect to the third-party database. Configure a service mesh to allow or restrict traffic from the Pods in your microservice to the database. Store the credentials in an encrypted volume mount, and associate a Persistent Volume Claim with the client Pod. Store the credentials as a Kubernetes Secret, and use the Cloud Key Management Service plugin to handle encryption and decryption.
You manage your company's ecommerce platform's payment system, which runs on Google Cloud. Your company must retain user logs for 1 year for internal auditing purposes and for 3 years to meet compliance requirements. You need to store new user logs on Google Cloud to minimize on-premises storage usage and ensure that they are easily searchable. You want to minimize effort while ensuring that the logs are stored correctly. What should you do? Store the logs in a Cloud Storage bucket with bucket lock turned on. Store the logs in a Cloud Storage bucket with a 3-year retention period. Store the logs in Cloud Logging as custom logs with a custom retention period. Store the logs in a Cloud Storage bucket with a 1-year retention period. After 1 year, move the logs to another bucket with a 2-year retention period.
Your company has a new security initiative that requires all data stored in Google Cloud to be encrypted by customer-managed encryption keys. You plan to use Cloud Key Management Service (KMS) to configure access to the keys. You need to follow the "separation of duties" principle and Google-recommended best practices. What should you do? (Choose two.) Provision Cloud KMS in its own project. Do not assign an owner to the Cloud KMS project. Provision Cloud KMS in the project where the keys are being used. Grant the roles/cloudkms.admin role to the owner of the project where the keys from Cloud KMS are being used. Grant an owner role for the Cloud KMS project to a different user than the owner of the project where the keys from Cloud KMS are being used.
Your organization has recently begun an initiative to replatform their legacy applications onto Google Kubernetes Engine. You need to decompose a monolithic application into microservices. Multiple instances have read and write access to a configuration file, which is stored on a shared file system. You want to minimize the effort required to manage this transition, and you want to avoid rewriting the application code. What should you do? Create a new Cloud Storage bucket, and mount it via FUSE in the container. Create a new persistent disk, and mount the volume as a shared PersistentVolume. Create a new Filestore instance, and mount the volume as an NFS PersistentVolume. Create a new ConfigMap and volumeMount to store the contents of the configuration file.
You developed a JavaScript web application that needs to access Google Drive's API and obtain permission from users to store files in their Google Drives. You need to select an authorization approach for your application. What should you do? Create an API key. Create a SAML token. Create a service account. Create an OAuth Client ID.
You manage an ecommerce application that processes purchases from customers who can subsequently cancel or change those purchases. You discover that order volumes are highly variable and the backend order-processing system can only process one request at a time. You want to ensure seamless performance for customers regardless of usage volume. It is crucial that customers' order update requests are performed in the sequence in which they were generated. What should you do? Send the purchase and change requests over WebSockets to the backend. Send the purchase and change requests as REST requests to the backend. Use a Pub/Sub subscriber in pull mode and use a data store to manage ordering. Use a Pub/Sub subscriber in push mode and use a data store to manage ordering.
Your company needs a database solution that stores customer purchase history and meets the following requirements: ✑ Customers can query their purchase immediately after submission. ✑ Purchases can be sorted on a variety of fields. ✑ Distinct record formats can be stored at the same time. Which storage option satisfies these requirements? Firestore in Native mode Cloud Storage using an object read Cloud SQL using a SQL SELECT statement Firestore in Datastore mode using a global query.
You recently developed a new service on Cloud Run. The new service authenticates using a custom service and then writes transactional information to a Cloud Spanner database. You need to verify that your application can support up to 5,000 read and 1,000 write transactions per second while identifying any bottlenecks that occur. Your test infrastructure must be able to autoscale. What should you do? Build a test harness to generate requests and deploy it to Cloud Run. Analyze the VPC Flow Logs using Cloud Logging. Create a Google Kubernetes Engine cluster running the Locust or JMeter images to dynamically generate load tests. Analyze the results using Cloud Trace. Create a Cloud Task to generate a test load. Use Cloud Scheduler to run 60,000 Cloud Task transactions per minute for 10 minutes. Analyze the results using Cloud Monitoring. Create a Compute Engine instance that uses a LAMP stack image from the Marketplace, and use Apache Bench to generate load tests against the service. Analyze the results using Cloud Trace.
Your company’s development teams want to use various open source operating systems in their Docker builds. When images are created in published containers in your company’s environment, you need to scan them for Common Vulnerabilities and Exposures (CVEs). The scanning process must not impact software development agility. You want to use managed services where possible. What should you do? Enable the Vulnerability scanning setting in the Container Registry. Create a Cloud Function that is triggered on a code check-in and scan the code for CVEs. Disallow the use of non-commercially supported base images in your development environment. Use Cloud Monitoring to review the output of Cloud Build to determine whether a vulnerable version has been used.
You are configuring a continuous integration pipeline using Cloud Build to automate the deployment of new container images to Google Kubernetes Engine (GKE). The pipeline builds the application from its source code, runs unit and integration tests in separate steps, and pushes the container to Container Registry. The application runs on a Python web server. The Dockerfile is as follows:
FROM python:3.7-alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["gunicorn", "-w 4", "main:app"]
You notice that Cloud Build runs are taking longer than expected to complete. You want to decrease the build time. What should you do? (Choose two.) Select a virtual machine (VM) size with higher CPU for Cloud Build runs. Deploy a Container Registry on a Compute Engine VM in a VPC, and use it to store the final images. Cache the Docker image for subsequent builds using the --cache-from argument in your build config file. Change the base image in the Dockerfile to ubuntu:latest, and install Python 3.7 using a package manager utility. Store application source code on Cloud Storage, and configure the pipeline to use gsutil to download the source code.
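For context, the documented --cache-from pattern in a Cloud Build config looks roughly like the following sketch; the image name is a placeholder, and the initial pull is allowed to fail on the very first build:

steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/app-image:latest || exit 0']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app-image:latest',
         '--cache-from', 'gcr.io/$PROJECT_ID/app-image:latest', '.']
images: ['gcr.io/$PROJECT_ID/app-image:latest']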
You are building a CI/CD pipeline that consists of a version control system, Cloud Build, and Container Registry. Each time a new tag is pushed to the repository, a Cloud Build job is triggered, which runs unit tests on the new code, builds a new Docker container image, and pushes it into Container Registry. The last step of your pipeline should deploy the new container to your production Google Kubernetes Engine (GKE) cluster. You need to select a tool and deployment strategy that meets the following requirements: • Zero downtime is incurred • Testing is fully automated • Allows for testing before being rolled out to users • Can quickly rollback if needed What should you do? Trigger a Spinnaker pipeline configured as an A/B test of your new code and, if it is successful, deploy the container to production. Trigger a Spinnaker pipeline configured as a canary test of your new code and, if it is successful, deploy the container to production. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a canary test. Trigger another Cloud Build job that uses the Kubernetes CLI tools to deploy your new container to your GKE cluster, where you can perform a shadow test.
Your operations team has asked you to create a script that lists the Cloud Bigtable, Memorystore, and Cloud SQL databases running within a project. The script should allow users to submit a filter expression to limit the results presented. How should you retrieve the data? Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Combine the results, and then apply the filter to display the results Use the HBase API, Redis API, and MySQL connection to retrieve database lists. Filter the results individually, and then combine them to display the results Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use a filter within the application, and then display the results Run gcloud bigtable instances list, gcloud redis instances list, and gcloud sql databases list. Use --filter flag with each command, and then display the results.
You need to deploy a new European version of a website hosted on Google Kubernetes Engine. The current and new websites must be accessed via the same HTTP(S) load balancer's external IP address, but have different domain names. What should you do? Define a new Ingress resource with a host rule matching the new domain Modify the existing Ingress resource with a host rule matching the new domain Create a new Service of type LoadBalancer specifying the existing IP address as the loadBalancerIP Generate a new Ingress resource and specify the existing IP address as the kubernetes.io/ingress.global-static-ip-name annotation value.
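A minimal sketch of a single Ingress carrying both host rules (the domains and Service names are hypothetical); keeping one Ingress resource is what preserves the existing load balancer IP:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: www.example.com             # existing site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
  - host: eu.example.com              # new European site
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-eu
            port:
              number: 80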
The development teams in your company want to manage resources from their local environments. You have been asked to enable developer access to each team’s Google Cloud projects. You want to maximize efficiency while following Google-recommended best practices. What should you do? Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project ID. Add the users to their projects, assign the relevant roles to the users, and then provide the users with each relevant Project Number. Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project ID. Create groups, add the users to their groups, assign the relevant roles to the groups, and then provide the users with each relevant Project Number.
Your company’s product team has a new requirement based on customer demand to autoscale your stateless and distributed service running in a Google Kubernetes Engine (GKE) cluster. You want to find a solution that minimizes changes because this feature will go live in two weeks. What should you do? Deploy a Vertical Pod Autoscaler, and scale based on the CPU load. Deploy a Vertical Pod Autoscaler, and scale based on a custom metric. Deploy a Horizontal Pod Autoscaler, and scale based on the CPU load. Deploy a Horizontal Pod Autoscaler, and scale based on a custom metric.
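As a sketch, a CPU-based Horizontal Pod Autoscaler needs no application changes, which is why it fits a two-week deadline; the Deployment name and thresholds below are placeholders:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical stateless Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # scale out above 60% average CPU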
Your application is composed of a set of loosely coupled services orchestrated by code executed on Compute Engine. You want your application to easily bring up new Compute Engine instances that find and use a specific version of a service. How should this be configured? Define your service endpoint information as metadata that is retrieved at runtime and used to connect to the desired service. Define your service endpoint information as label data that is retrieved at runtime and used to connect to the desired service. Define your service endpoint information to be retrieved from an environment variable at runtime and used to connect to the desired service. Define your service to use a fixed hostname and port to connect to the desired service. Replace the service at the endpoint with your new version.
You are developing a microservice-based application that will run on Google Kubernetes Engine (GKE). Some of the services need to access different Google Cloud APIs. How should you set up authentication of these services in the cluster following Google-recommended best practices? (Choose two.) Use the service account attached to the GKE node. Enable Workload Identity in the cluster via the gcloud command-line tool. Access the Google service account keys from a secret management service. Store the Google service account keys in a central secret management service. Use gcloud to bind the Kubernetes service account and the Google service account using roles/iam.workloadIdentityUser.
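For reference, the documented Workload Identity setup is roughly the following sketch; PROJECT_ID, NAMESPACE, KSA_NAME, GSA_NAME, and the cluster name are placeholders:

# Enable Workload Identity on the cluster.
gcloud container clusters update my-cluster \
    --workload-pool=PROJECT_ID.svc.id.goog

# Let the Kubernetes service account impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
    GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# Annotate the Kubernetes service account with the Google service account.
kubectl annotate serviceaccount KSA_NAME --namespace NAMESPACE \
    iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com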
Your development team has been tasked with maintaining a .NET legacy application. The application incurs occasional changes and was recently updated. Your goal is to ensure that the application provides consistent results while moving through the CI/CD pipeline from environment to environment. You want to minimize the cost of deployment while making sure that external factors and dependencies between hosting environments are not problematic. Containers are not yet approved in your organization. What should you do? Rewrite the application using .NET Core, and deploy to Cloud Run. Use revisions to separate the environments. Use Cloud Build to deploy the application as a new Compute Engine image for each build. Use this image in each environment. Deploy the application using MS Web Deploy, and make sure to always use the latest, patched MS Windows Server base image in Compute Engine. Use Cloud Build to package the application, and deploy to a Google Kubernetes Engine cluster. Use namespaces to separate the environments.
Which service should HipLocal use for their public APIs? Cloud Armor Cloud Functions Cloud Endpoints Shielded Virtual Machines.
Your team develops services that run on Google Kubernetes Engine. Your team's code is stored in Cloud Source Repositories. You need to quickly identify bugs in the code before it is deployed to production. You want to invest in automation to improve developer feedback and make the process as efficient as possible. What should you do? Use Spinnaker to automate building container images from code based on Git tags. Use Cloud Build to automate building container images from code based on Git tags. Use Spinnaker to automate deploying container images to the production environment. Use Cloud Build to automate building container images from code based on forked versions.
Your team is developing an application in Google Cloud that executes with user identities maintained by Cloud Identity. Each of your application's users will have an associated Pub/Sub topic to which messages are published, and a Pub/Sub subscription where the same user will retrieve published messages. You need to ensure that only authorized users can publish and subscribe to their own specific Pub/Sub topic and subscription. What should you do? Bind the user identity to the pubsub.publisher and pubsub.subscriber roles at the resource level. Grant the user identity the pubsub.publisher and pubsub.subscriber roles at the project level. Grant the user identity a custom role that contains the pubsub.topics.create and pubsub.subscriptions.create permissions. Configure the application to run as a service account that has the pubsub.publisher and pubsub.subscriber roles.
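A sketch of a resource-level (rather than project-level) binding, with hypothetical topic, subscription, and user names:

# Grant publish rights on one specific topic only.
gcloud pubsub topics add-iam-policy-binding user-alice-topic \
    --member="user:alice@example.com" --role="roles/pubsub.publisher"

# Grant subscribe rights on that user's subscription only.
gcloud pubsub subscriptions add-iam-policy-binding user-alice-sub \
    --member="user:alice@example.com" --role="roles/pubsub.subscriber"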
You are evaluating developer tools to help drive Google Kubernetes Engine adoption and integration with your development environment, which includes VS Code and IntelliJ. What should you do? Use Cloud Code to develop applications Use the Cloud Shell integrated Code Editor to edit code and configuration files. Use a Cloud Notebook instance to ingest and process data and deploy models. Use Cloud Shell to manage your infrastructure and applications from the command line.
You are deploying your applications on Compute Engine. One of your Compute Engine instances failed to launch. What should you do? (Choose two.) Determine whether your file system is corrupted. Access Compute Engine as a different SSH user. Troubleshoot firewall rules or routes on an instance. Check whether your instance boot disk is completely full. Check whether network traffic to or from your instance is being dropped.
You have an application that uses an HTTP Cloud Function to process user activity from both desktop browser and mobile application clients. This function will serve as the endpoint for all metric submissions using HTTP POST. Due to legacy restrictions, the function must be mapped to a domain that is separate from the domain requested by users on web or mobile sessions. The domain for the Cloud Function is https://fn.example.com. Desktop and mobile clients use the domain https://www.example.com. You need to add a header to the function's HTTP response so that only those browser and mobile sessions can submit metrics to the Cloud Function. Which response header should you add? Access-Control-Allow-Origin: * Access-Control-Allow-Origin: https://*.example.com Access-Control-Allow-Origin: https://fn.example.com Access-Control-Allow-Origin: https://www.example.com.
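A minimal sketch of setting the CORS response header in a Python HTTP Cloud Function; the function name is hypothetical, and the allowed origin matches the clients' domain from the question:

def submit_metrics(request):
    """HTTP Cloud Function that accepts metrics only from www.example.com."""
    headers = {"Access-Control-Allow-Origin": "https://www.example.com"}
    if request.method == "OPTIONS":
        # CORS preflight: advertise the allowed method and headers.
        headers.update({
            "Access-Control-Allow-Methods": "POST",
            "Access-Control-Allow-Headers": "Content-Type",
            "Access-Control-Max-Age": "3600",
        })
        return ("", 204, headers)
    # ... process the POSTed metrics here ...
    return ("ok", 200, headers)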
You have an HTTP Cloud Function that is called via POST. Each submission's request body has a flat, unnested JSON structure containing numeric and text data. After the Cloud Function completes, the collected data should be immediately available for ongoing and complex analytics by many users in parallel. How should you persist the submissions? Directly persist each POST request's JSON data into Datastore. Transform the POST request's JSON data, and stream it into BigQuery. Transform the POST request's JSON data, and store it in a regional Cloud SQL cluster. Persist each POST request's JSON data as an individual file within Cloud Storage, with the file name containing the request identifier.
Your security team is auditing all deployed applications running in Google Kubernetes Engine. After completing the audit, your team discovers that some of the applications send traffic within the cluster in clear text. You need to ensure that all application traffic is encrypted as quickly as possible while minimizing changes to your applications and maintaining support from Google. What should you do? Use Network Policies to block traffic between applications. Install Istio, enable proxy injection on your application namespace, and then enable mTLS. Define Trusted Network ranges within the application, and configure the applications to allow traffic only from those networks. Use an automated process to request SSL Certificates for your applications from Let's Encrypt and add them to your applications.
You migrated some of your applications to Google Cloud. You are using a legacy monitoring platform deployed on-premises for both on-premises and cloud- deployed applications. You discover that your notification system is responding slowly to time-critical problems in the cloud applications. What should you do? Replace your monitoring platform with Cloud Monitoring. Install the Cloud Monitoring agent on your Compute Engine instances. Migrate some traffic back to your old platform. Perform A/B testing on the two platforms concurrently. Use Cloud Logging and Cloud Monitoring to capture logs, monitor, and send alerts. Send them to your existing platform.
You recently deployed your application in Google Kubernetes Engine, and now need to release a new version of your application. You need the ability to instantly roll back to the previous version in case there are issues with the new version. Which deployment model should you use? Perform a rolling deployment, and test your new application after the deployment is complete. Perform A/B testing, and test your application periodically after the new tests are implemented. Perform a blue/green deployment, and test your new application after the deployment is complete. Perform a canary deployment, and test your new application periodically after the new version is deployed.
You are a SaaS provider deploying dedicated blogging software to customers in your Google Kubernetes Engine (GKE) cluster. You want to configure a secure multi-tenant platform to ensure that each customer has access to only their own blog and can't affect the workloads of other customers. What should you do? Enable Application-layer Secrets on the GKE cluster to protect the cluster. Deploy a namespace per tenant and use Network Policies in each blog deployment. Use GKE Audit Logging to identify malicious containers and delete them on discovery. Build a custom image of the blogging software and use Binary Authorization to prevent untrusted image deployments.
You have decided to migrate your Compute Engine application to Google Kubernetes Engine. You need to build a container image and push it to Artifact Registry using Cloud Build. What should you do? (Choose two.) Run gcloud builds submit in the directory that contains the application source code. Run gcloud run deploy app-name --image gcr.io/$PROJECT_ID/app-name in the directory that contains the application source code. Run gcloud container images add-tag gcr.io/$PROJECT_ID/app-name gcr.io/$PROJECT_ID/app-name:latest in the directory that contains the application source code. In the application source directory, create a file named cloudbuild.yaml that contains the following contents:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/app-name', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/app-name']
In the application source directory, create a file named cloudbuild.yaml that contains the following contents:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
timeout: '1600s'
You are developing an internal application that will allow employees to organize community events within your company. You deployed your application on a single Compute Engine instance. Your company uses Google Workspace (formerly G Suite), and you need to ensure that the company employees can authenticate to the application from anywhere. What should you do? Add a public IP address to your instance, and restrict access to the instance using firewall rules. Allow your company's proxy as the only source IP address. Add an HTTP(S) load balancer in front of the instance, and set up Identity-Aware Proxy (IAP). Configure the IAP settings to allow your company domain to access the website. Set up a VPN tunnel between your company network and your instance's VPC location on Google Cloud. Configure the required firewall rules and routing information to both the on-premises and Google Cloud networks. Add a public IP address to your instance, and allow traffic from the internet. Generate a random hash, and create a subdomain that includes this hash and points to your instance. Distribute this DNS address to your company's employees.
You recently developed an application. You need to call the Cloud Storage API from a Compute Engine instance that doesn't have a public IP address. What should you do? Use Carrier Peering Use VPC Network Peering Use Shared VPC networks Use Private Google Access.
Your application is running on Compute Engine and is showing sustained failures for a small number of requests. You have narrowed the cause down to a single Compute Engine instance, but the instance is unresponsive to SSH. What should you do next? Reboot the machine. Enable and check the serial port output. Delete the machine and create a new one. Take a snapshot of the disk and attach it to a new machine.
Your team is developing an ecommerce platform for your company. Users will log in to the website and add items to their shopping cart. Users will be automatically logged out after 30 minutes of inactivity. When users log back in, their shopping cart should be saved. How should you store users' session and shopping cart information while following Google-recommended best practices? Store the session information in Pub/Sub, and store the shopping cart information in Cloud SQL. Store the shopping cart information in a file on Cloud Storage where the filename is the SESSION ID. Store the session and shopping cart information in a MySQL database running on multiple Compute Engine instances. Store the session information in Memorystore for Redis or Memorystore for Memcached, and store the shopping cart information in Firestore.
You are creating and running containers across different projects in Google Cloud. The application you are developing needs to access Google Cloud services from within Google Kubernetes Engine (GKE). What should you do? Assign a Google service account to the GKE nodes. Use a Google service account to run the Pod with Workload Identity Store the Google service account credentials as a Kubernetes Secret. Use a Google service account with GKE role-based access control (RBAC).
You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and do not want the application serving traffic until after the configuration has been retrieved. What should you do? Use the gsutil utility to copy files from within the Docker container at startup, and start the service using an ENTRYPOINT script. Create a PersistentVolumeClaim on the GKE cluster. Access the configuration files from the volume, and start the service using an ENTRYPOINT script. Use the COPY statement in the Dockerfile to load the configuration into the container image. Verify that the configuration is available, and start the service using an ENTRYPOINT script. Add a startup script to the GKE instance group to mount the NFS share at node startup. Copy the configuration files into the container, and start the service using an ENTRYPOINT script.
Your team is developing a new application using a PostgreSQL database and Cloud Run. You are responsible for ensuring that all traffic is kept private on Google Cloud. You want to use managed services and follow Google-recommended best practices. What should you do? 1. Enable Cloud SQL and Cloud Run in the same project. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to Cloud SQL. 1. Install PostgreSQL on a Compute Engine virtual machine (VM), and enable Cloud Run in the same project. 2. Configure a private IP address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Configure Cloud Run to use the connector to connect to the VM hosting PostgreSQL. 1. Use Cloud SQL and Cloud Run in different projects. 2. Configure a private IP address for Cloud SQL. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to connect to Cloud SQL. 1. Install PostgreSQL on a Compute Engine VM, and enable Cloud Run in different projects. 2. Configure a private IP address for the VM. Enable private services access. 3. Create a Serverless VPC Access connector. 4. Set up a VPN connection between the two projects. Configure Cloud Run to use the connector to access the VM hosting PostgreSQL.
You are developing an application that will allow clients to download a file from your website for a specific period of time. How should you design the application to complete this task while following Google-recommended best practices? Configure the application to send the file to the client as an email attachment. Generate and assign a Cloud Storage-signed URL for the file. Make the URL available for the client to download. Create a temporary Cloud Storage bucket with time expiration specified, and give download permissions to the bucket. Copy the file, and send it to the client. Generate the HTTP cookies with time expiration specified. If the time is valid, copy the file from the Cloud Storage bucket, and make the file available for the client to download.
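A minimal sketch of generating a time-limited V4 signed URL with the Python client library; the bucket, object, and duration are placeholders, and the code assumes it runs with a service account capable of signing:

import datetime

from google.cloud import storage


def make_download_url(bucket_name, blob_name, minutes=15):
    """Return a signed URL that lets a client download the object
    for a limited period, without granting the client any IAM access."""
    blob = storage.Client().bucket(bucket_name).blob(blob_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=minutes),
        method="GET",
    )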
Your development team has been asked to refactor an existing monolithic application into a set of composable microservices. Which design aspects should you implement for the new application? (Choose two.) Develop the microservice code in the same programming language used by the microservice caller. Create an API contract agreement between the microservice implementation and microservice caller. Require asynchronous communications between all microservice implementations and microservice callers. Ensure that sufficient instances of the microservice are running to accommodate the performance requirements. Implement a versioning scheme to permit future changes that could be incompatible with the current interface.
You deployed a new application to Google Kubernetes Engine and are experiencing some performance degradation. Your logs are being written to Cloud Logging, and you are using a Prometheus sidecar model for capturing metrics. You need to correlate the metrics and data from the logs to troubleshoot the performance issue and send real-time alerts while minimizing costs. What should you do? Create custom metrics from the Cloud Logging logs, and use Prometheus to import the results using the Cloud Monitoring REST API. Export the Cloud Logging logs and the Prometheus metrics to Cloud Bigtable. Run a query to join the results, and analyze in Google Data Studio. Export the Cloud Logging logs and stream the Prometheus metrics to BigQuery. Run a recurring query to join the results, and send notifications using Cloud Tasks. Export the Prometheus metrics and use Cloud Monitoring to view them as external metrics. Configure Cloud Monitoring to create log-based metrics from the logs, and correlate them with the Prometheus data.
You have been tasked with planning the migration of your company's application from on-premises to Google Cloud. Your company's monolithic application is an ecommerce website. The application will be migrated to microservices deployed on Google Cloud in stages. The majority of your company's revenue is generated through online sales, so it is important to minimize risk during the migration. You need to prioritize features and select the first functionality to migrate. What should you do? Migrate the Product catalog, which has integrations to the frontend and product database. Migrate Payment processing, which has integrations to the frontend, order database, and third-party payment vendor. Migrate Order fulfillment, which has integrations to the order database, inventory system, and third-party shipping vendor. Migrate the Shopping cart, which has integrations to the frontend, cart database, inventory system, and payment processing system.
You are a developer at a large corporation. You manage three Google Kubernetes Engine clusters on Google Cloud. Your team’s developers need to switch from one cluster to another regularly without losing access to their preferred development tools. You want to configure access to these multiple clusters while following Google-recommended best practices. What should you do? Ask the developers to use Cloud Shell and run gcloud container clusters get-credentials to switch to another cluster. In a configuration file, define the clusters, users, and contexts. Share the file with the developers and ask them to use kubectl config to add cluster, user, and context details. Ask the developers to install the gcloud CLI on their workstation and run gcloud container clusters get-credentials to switch to another cluster. Ask the developers to open three terminals on their workstation and use kubectl config to configure access to each cluster.
You are a developer at a large organization. Your team uses Git for source code management (SCM). You want to ensure that your team follows Google-recommended best practices to manage code to drive higher rates of software delivery. Which SCM process should your team use? Each developer commits their code to the main branch before each product release, conducts testing, and rolls back if integration issues are detected. Each group of developers copies the repository, commits their changes to their repository, and merges their code into the main repository before each product release. Each developer creates a branch for their own work, commits their changes to their branch, and merges their code into the main branch daily. Each group of developers creates a feature branch from the main branch for their work, commits their changes to their branch, and merges their code into the main branch after the change advisory board approves it.
You are developing a new application. You want the application to be triggered only when a given file is updated in your Cloud Storage bucket. Your trigger might change, so your process must support different types of triggers. You want the configuration to be simple so that multiple team members can update the triggers in the future. What should you do? Configure Cloud Storage events to be sent to Pub/Sub, and use Pub/Sub events to trigger a Cloud Build job that executes your application. Create an Eventarc trigger that monitors your Cloud Storage bucket for a specific filename, and set the target as Cloud Run. Configure a Cloud Function that executes your application and is triggered when an object is updated in Cloud Storage. Configure a Firebase function that executes your application and is triggered when an object is updated in Cloud Storage.
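For context, an Eventarc trigger is declared with one gcloud command, which keeps the configuration simple for teammates to change later; the service, bucket, region, and service account below are hypothetical placeholders. Note that Eventarc filters on the bucket rather than a single filename, so per-file logic stays in the receiving service:

gcloud eventarc triggers create storage-trigger \
    --location=us-central1 \
    --destination-run-service=my-service \
    --destination-run-region=us-central1 \
    --event-filters="type=google.cloud.storage.object.v1.finalized" \
    --event-filters="bucket=my-bucket" \
    --service-account=trigger-sa@PROJECT_ID.iam.gserviceaccount.com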
You are working on a social media application. You plan to add a feature that allows users to upload images. These images will be 2 MB to 1 GB in size. You want to minimize your infrastructure operations overhead for this feature. What should you do? Change the application to accept images directly and store them in the database that stores other user information. Change the application to create signed URLs for Cloud Storage. Transfer these signed URLs to the client application to upload images to Cloud Storage. Set up a web server on GCP to accept user images and create a file store to keep uploaded files. Change the application to retrieve images from the file store. Create a separate bucket for each user in Cloud Storage. Assign a separate service account to allow write access on each bucket. Transfer service account credentials to the client application based on user information. The application uses this service account to upload images to Cloud Storage.
Your application is built as a custom machine image. You have multiple unique deployments of the machine image. Each deployment is a separate managed instance group with its own template. Each deployment requires a unique set of configuration values. You want to provide these unique values to each deployment but use the same custom machine image in all deployments. You want to use out-of-the-box features of Compute Engine. What should you do? Place the unique configuration values in the persistent disk. Place the unique configuration values in a Cloud Bigtable table. Place the unique configuration values in the instance template startup script. Place the unique configuration values in the instance template instance metadata.
Your application performs well when tested locally, but it runs significantly slower after you deploy it to a Compute Engine instance. You need to diagnose the problem. What should you do? File a ticket with Cloud Support indicating that the application performs faster locally. Use Cloud Debugger snapshots to look at a point-in-time execution of the application. Use Cloud Profiler to determine which functions within the application take the longest amount of time. Add logging commands to the application and use Cloud Logging to check where the latency problem occurs.
You have an application running in App Engine. Your application is instrumented with Stackdriver Trace. The /product-details request reports details about four known unique products at /sku-details as shown below. You want to reduce the time it takes for the request to complete. What should you do? Increase the size of the instance class. Change the Persistent Disk type to SSD. Change /product-details to perform the requests in parallel. Store the /sku-details information in a database, and replace the webservice call with a database query.
Your company has a data warehouse that keeps your application information in BigQuery. The BigQuery data warehouse keeps 2 PBs of user data. Recently, your company expanded your user base to include EU users and needs to comply with these requirements: ✑ Your company must be able to delete all user account information upon user request. ✑ All EU user data must be stored in a single region specifically for EU users. Which two actions should you take? (Choose two.) Use BigQuery federated queries to query data from Cloud Storage. Create a dataset in the EU region that will keep information about EU users only. Create a Cloud Storage bucket in the EU region to store information for EU users only. Re-upload your data using a Cloud Dataflow pipeline, filtering your user records out. Use DML statements in BigQuery to update/delete user records based on their requests.
Your analytics system executes queries against a BigQuery dataset. The SQL query is executed in batch and passes the contents of a SQL file to the BigQuery CLI. Then it redirects the BigQuery CLI output to another process. However, you are getting a permission error from the BigQuery CLI when the queries are executed. You want to resolve the issue. What should you do? Grant the service account BigQuery Data Viewer and BigQuery Job User roles. Grant the service account BigQuery Data Editor and BigQuery Data Viewer roles. Create a view in BigQuery from the SQL query, and SELECT * from the view in the CLI. Create a new dataset in BigQuery, and copy the source table to the new dataset. Query the new dataset and table from the CLI.
You configured your Compute Engine instance group to scale automatically according to overall CPU usage. However, your application's response latency increases sharply before the cluster has finished adding instances. You want to provide a more consistent latency experience for your end users by changing the configuration of the instance group autoscaler. Which two configuration changes should you make? (Choose two.) Add the label "AUTOSCALE" to the instance group template. Decrease the cool-down period for instances added to the group. Increase the target CPU usage for the instance group autoscaler. Decrease the target CPU usage for the instance group autoscaler. Remove the health-check for individual VMs in the instance group.
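For reference, both changes can be applied with one command; the group name, zone, and values below are placeholders:

# A lower CPU target scales out earlier; a shorter cool-down reacts faster.
gcloud compute instance-groups managed set-autoscaling my-mig \
    --zone=us-central1-a \
    --max-num-replicas=20 \
    --target-cpu-utilization=0.5 \
    --cool-down-period=60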
Your application requires service accounts to be authenticated to GCP products via credentials stored on its host Compute Engine virtual machine instances. You want to distribute these credentials to the host instances as securely as possible. What should you do? Use HTTP signed URLs to securely provide access to the required resources. Use the instance's service account Application Default Credentials to authenticate to the required resources. Generate a P12 file from the GCP Console after the instance is deployed, and copy the credentials to the host instance before starting the application. Commit the credential JSON file into your application's source repository, and have your CI/CD process package it with the software that is deployed to the instance.
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. You want to expose this application publicly behind a Cloud Load Balancing HTTP(S) load balancer. What should you do? Configure a GKE Ingress resource Configure a GKE Service resource. Configure a GKE Ingress resource with type: LoadBalancer. Configure a GKE Service resource with type: LoadBalancer.
Your company is planning to migrate their on-premises Hadoop environment to the cloud. Increasing storage cost and maintenance of data stored in HDFS is a major concern for your company. You also want to make minimal changes to existing data analytics jobs and existing architecture. How should you proceed with the migration? Migrate your data stored in Hadoop to BigQuery. Change your jobs to source their information from BigQuery instead of the on-premises Hadoop environment. Create Compute Engine instances with HDD instead of SSD to save costs. Then perform a full migration of your existing environment into the new one in Compute Engine instances. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop environment to the new Cloud Dataproc cluster. Move your HDFS data into larger HDD disks to save on storage costs. Create a Cloud Dataproc cluster on Google Cloud Platform, and then migrate your Hadoop code objects to the new cluster. Move your data to Cloud Storage and leverage the Cloud Dataproc connector to run jobs on that data.
Your data is stored in Cloud Storage buckets. Fellow developers have reported that data downloaded from Cloud Storage is resulting in slow API performance. You want to research the issue to provide details to the GCP support team. Which command should you run? gsutil test -o output.json gs://my-bucket gsutil perfdiag -o output.json gs://my-bucket gcloud compute scp example-instance:~/test-data -o output.json gs://my-bucket gcloud services test -o output.json gs://my-bucket.
Your company has created an application that uploads a report to a Cloud Storage bucket. When the report is uploaded to the bucket, you want to publish a message to a Cloud Pub/Sub topic. You want to implement a solution that will take a small amount of effort to implement. What should you do? Configure the Cloud Storage bucket to trigger Cloud Pub/Sub notifications when objects are modified. Create an App Engine application to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic. Create a Cloud Function that is triggered by the Cloud Storage bucket. In the Cloud Function, publish a message to the Cloud Pub/Sub topic. Create an application deployed in a Google Kubernetes Engine cluster to receive the file; when it is received, publish a message to the Cloud Pub/Sub topic.
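As a sketch, the bucket-to-Pub/Sub notification is a single command; the bucket and topic names are placeholders:

# Publish a Pub/Sub message whenever an object is created or overwritten.
gsutil notification create -t report-topic -f json \
    -e OBJECT_FINALIZE gs://report-bucket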
Your teammate has asked you to review the code below, which is adding a credit to an account balance in Cloud Datastore. Which improvement should you suggest your teammate make? Get the entity with an ancestor query. Get and put the entity in a transaction. Use a strongly consistent transactional database. Don't return the account entity from the function.
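Since the code under review is not reproduced here, the suggested fix can only be sketched: wrap the get and put in a transaction so that concurrent credits do not overwrite each other. The entity key and field names below are hypothetical:

from google.cloud import datastore


def add_credit(client, account_key, amount):
    """Atomically read, modify, and write the account balance."""
    with client.transaction():
        account = client.get(account_key)
        account["balance"] += amount     # read-modify-write inside the txn
        client.put(account)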
You are writing a Compute Engine hosted application in project A that needs to securely authenticate to a Cloud Pub/Sub topic in project B. What should you do? Configure the instances with a service account owned by project B. Add the service account as a Cloud Pub/Sub publisher to project A. Configure the instances with a service account owned by project A. Add the service account as a publisher on the topic. Configure Application Default Credentials to use the private key of a service account owned by project B. Add the service account as a Cloud Pub/Sub publisher to project A. Configure Application Default Credentials to use the private key of a service account owned by project A. Add the service account as a publisher on the topic.
You are developing a corporate tool on Compute Engine for the finance department, which needs to authenticate users and verify that they are in the finance department. All company employees use G Suite. What should you do? Enable Cloud Identity-Aware Proxy on the HTTP(S) load balancer and restrict access to a Google Group containing users in the finance department. Verify the provided JSON Web Token within the application. Enable Cloud Identity-Aware Proxy on the HTTP(S) load balancer and restrict access to a Google Group containing users in the finance department. Issue client-side certificates to everybody in the finance team and verify the certificates in the application. Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Verify the provided JSON Web Token within the application. Configure Cloud Armor Security Policies to restrict access to only corporate IP address ranges. Issue client-side certificates to everybody in the finance team and verify the certificates in the application.
HipLocal is configuring their access controls. Which firewall configuration should they implement? Block all traffic on port 443. Allow all traffic into the network. Allow traffic on port 443 for a specific tag. Allow all traffic on port 443 into the network.
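As a sketch of the tag-scoped rule (rule name, network, and tag are hypothetical), a firewall rule that allows port 443 only to instances carrying a specific tag could look like:

    gcloud compute firewall-rules create allow-https-frontend \
        --network=default --direction=INGRESS --action=ALLOW \
        --rules=tcp:443 --target-tags=web-frontend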
In order for HipLocal to store application state and meet their stated business requirements, which database service should they migrate to? Cloud Spanner Cloud Datastore Cloud Memorystore as a cache Separate Cloud SQL clusters for each region.
You have an application deployed in production. When a new version is deployed, you want to ensure that all production traffic is routed to the new version of your application. You also want to keep the previous version deployed so that you can revert to it if there is an issue with the new version. Which deployment strategy should you use? Blue/green deployment Canary deployment Rolling deployment Recreate deployment.
You are porting an existing Apache/MySQL/PHP application stack from a single machine to Google Kubernetes Engine. You need to determine how to containerize the application. Your approach should follow Google-recommended best practices for availability. What should you do? Package each component in a separate container. Implement readiness and liveness probes. Package the application in a single container. Use a process management tool to manage each component. Package each component in a separate container. Use a script to orchestrate the launch of the components. Package the application in a single container. Use a bash script as an entrypoint to the container, and then spawn each component as a background job.
You recently developed an application that monitors a large number of stock prices. You need to configure Pub/Sub to receive a high volume of messages and update the current stock price in a single large in-memory database. A downstream service needs the most up-to-date prices in the in-memory database to perform stock trading transactions. Each message contains three pieces of information: • Stock symbol • Stock price • Timestamp for the update How should you set up your Pub/Sub subscription? Create a pull subscription with exactly-once delivery enabled. Create a push subscription with both ordering and exactly-once delivery turned off. Create a push subscription with exactly-once delivery enabled. Create a pull subscription with both ordering and exactly-once delivery turned off.
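For illustration, creating a pull subscription with exactly-once delivery enabled (topic and subscription names are placeholders); omitting --push-endpoint makes it a pull subscription, and message ordering stays off unless explicitly enabled:

    gcloud pubsub subscriptions create stock-price-sub \
        --topic=stock-prices \
        --enable-exactly-once-delivery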
You work on an application that relies on Cloud Spanner as its main datastore. New application features have occasionally caused performance regressions. You want to prevent performance issues by running an automated performance test with Cloud Build for each commit made. If multiple commits are made at the same time, the tests might run concurrently. What should you do? Create a new project with a random name for every build. Load the required data. Delete the project after the test is run. Create a new Cloud Spanner instance for every build. Load the required data. Delete the Cloud Spanner instance after the test is run. Create a project with a Cloud Spanner instance and the required data. Adjust the Cloud Build build file to automatically restore the data to its previous state after the test is run. Start the Cloud Spanner emulator locally. Load the required data. Shut down the emulator after the test is run.
Your company's security team uses Identity and Access Management (IAM) to track which users have access to which resources. You need to create a version control system that can integrate with your security team's processes. You want your solution to support fast release cycles and frequent merges to your main branch to minimize merge conflicts. What should you do? Create a Cloud Source Repositories repository, and use trunk-based development. Create a Cloud Source Repositories repository, and use feature-based development. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use trunk-based development. Create a GitHub repository, mirror it to a Cloud Source Repositories repository, and use feature-based development.
You are developing an HTTP API hosted on a Compute Engine virtual machine instance that needs to be invoked by multiple clients within the same Virtual Private Cloud (VPC). You want clients to be able to get the IP address of the service. What should you do? Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Clients should use this IP address to connect to the service. Reserve a static external IP address and assign it to an HTTP(S) load balancing service's forwarding rule. Then, define an A record in Cloud DNS. Clients should use the name of the A record to connect to the service. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[INSTANCE_NAME].[ZONE].c.[PROJECT_ID].internal/. Ensure that clients use Compute Engine internal DNS by connecting to the instance name with the URL https://[API_NAME]/[API_VERSION]/.
Your application is logging to Stackdriver. You want to get the count of all requests on all /api/alpha/* endpoints. What should you do? Add a Stackdriver counter metric for path:/api/alpha/. Add a Stackdriver counter metric for endpoint:/api/alpha/*. Export the logs to Cloud Storage and count lines matching /api/alpha. Export the logs to Cloud Pub/Sub and count lines matching /api/alpha.
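A hedged sketch of creating a log-based counter metric with gcloud; the metric name and the log filter are assumptions and depend on how the application writes its request logs:

    gcloud logging metrics create api_alpha_requests \
        --description="Count of requests to /api/alpha/* endpoints" \
        --log-filter='httpRequest.requestUrl:"/api/alpha/"'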
You want to re-architect a monolithic application so that it follows a microservices model. You want to accomplish this efficiently while minimizing the impact of this change to the business. Which approach should you take? Deploy the application to Compute Engine and turn on autoscaling. Replace the application's features with appropriate microservices in phases. Refactor the monolithic application with appropriate microservices in a single effort and deploy it. Build a new application with the appropriate microservices separate from the monolith and replace it when it is complete.
Your existing application keeps user state information in a single MySQL database. This state information is very user-specific and depends heavily on how long a user has been using an application. The MySQL database is causing challenges to maintain and enhance the schema for various users. Which storage option should you choose? Cloud SQL Cloud Storage Cloud Spanner Cloud Datastore/Firestore.
You are building a new API. You want to minimize the cost of storing images and reduce the latency of serving them. Which architecture should you use? App Engine backed by Cloud Storage Compute Engine backed by Persistent Disk Transfer Appliance backed by Cloud Filestore Cloud Content Delivery Network (CDN) backed by Cloud Storage.
Your company's development teams want to use Cloud Build in their projects to build and push Docker images to Container Registry. The operations team requires all Docker images to be published to a centralized, securely managed Docker registry that the operations team manages. What should you do? Use Container Registry to create a registry in each development team's project. Configure the Cloud Build build to push the Docker image to the project's registry. Grant the operations team access to each development team's registry. Create a separate project for the operations team that has Container Registry configured. Assign appropriate permissions to the Cloud Build service account in each developer team's project to allow access to the operation team's registry. Create a separate project for the operations team that has Container Registry configured. Create a Service Account for each development team and assign the appropriate permissions to allow it access to the operations team's registry. Store the service account key file in the source code repository and use it to authenticate against the operations team's registry. Create a separate project for the operations team that has the open source Docker Registry deployed on a Compute Engine virtual machine instance. Create a username and password for each development team. Store the username and password in the source code repository and use it to authenticate against the operations team's Docker registry.
You are planning to deploy your application in a Google Kubernetes Engine (GKE) cluster. Your application can scale horizontally, and each instance of your application needs to have a stable network identity and its own persistent disk. Which GKE object should you use? Deployment StatefulSet ReplicaSet ReplicationController.
You are using Cloud Build to build a Docker image. You need to modify the build to execute unit and run integration tests. When there is a failure, you want the build history to clearly display the stage at which the build failed. What should you do? Add RUN commands in the Dockerfile to execute unit and integration tests. Create a Cloud Build build config file with a single build step to compile unit and integration tests. Create a Cloud Build build config file that will spawn a separate cloud build pipeline for unit and integration tests. Create a Cloud Build build config file with separate cloud builder steps to compile and execute unit and integration tests.
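A minimal cloudbuild.yaml sketch with separate, named steps so a failure is attributed to a specific stage in the build history; the image name and test commands are hypothetical:

    steps:
    - id: 'build-image'
      name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
    - id: 'unit-tests'
      name: 'gcr.io/cloud-builders/docker'
      args: ['run', 'gcr.io/$PROJECT_ID/my-app', 'npm', 'test']
    - id: 'integration-tests'
      name: 'gcr.io/cloud-builders/docker'
      args: ['run', 'gcr.io/$PROJECT_ID/my-app', 'npm', 'run', 'integration-test']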
Your code is running on Cloud Functions in project A. It is supposed to write an object in a Cloud Storage bucket owned by project B. However, the write call is failing with the error "403 Forbidden". What should you do to correct the problem? Grant your user account the roles/storage.objectCreator role for the Cloud Storage bucket. Grant your user account the roles/iam.serviceAccountUser role for the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account. Grant the service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com service account the roles/storage.objectCreator role for the Cloud Storage bucket. Enable the Cloud Storage API in project B.
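Granting the objectCreator role on project B's bucket could look like the following sketch (the bucket name is a placeholder; the service account is the one named in the question):

    gcloud storage buckets add-iam-policy-binding gs://project-b-bucket \
        --member="serviceAccount:service-PROJECTA@gcf-admin-robot.iam.gserviceaccount.com" \
        --role="roles/storage.objectCreator"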
HipLocal's APIs are having occasional application failures. They want to collect application information specifically to troubleshoot the issue. What should they do? Take frequent snapshots of the virtual machines. Install the Cloud Logging agent on the virtual machines. Install the Cloud Monitoring agent on the virtual machines. Use Cloud Trace to look for performance bottlenecks.
HipLocal has connected their Hadoop infrastructure to GCP using Cloud Interconnect in order to query data stored on persistent disks. Which IP strategy should they use? Create manual subnets. Create an auto mode subnet. Create multiple peered VPCs. Provision a single instance for NAT.
Which service should HipLocal use to enable access to internal apps? Cloud VPN Cloud Armor Virtual Private Cloud Cloud Identity-Aware Proxy.
HipLocal wants to reduce the number of on-call engineers and eliminate manual scaling. Which two services should they choose? (Choose two.) Use Google App Engine services. Use serverless Google Cloud Functions. Use Knative to build and deploy serverless applications. Use Google Kubernetes Engine for automated deployments. Use a large Google Compute Engine cluster for deployments.
You need to copy directory local-scripts and all of its contents from your local workstation to a Compute Engine virtual machine instance. Which command should you use? gsutil cp --project "my-gcp-project" -r ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b" gsutil cp --project "my-gcp-project" -R ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b" gcloud compute scp --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b" gcloud compute mv --project "my-gcp-project" --recurse ~/local-scripts/ gcp-instance-name:~/server-scripts/ --zone "us-east1-b".
You are deploying your application to a Compute Engine virtual machine instance with the Stackdriver Monitoring Agent installed. Your application is a unix process on the instance. You want to be alerted if the unix process has not run for at least 5 minutes. You are not able to change the application to generate metrics or logs. Which alert condition should you configure? Uptime check Process health Metric absence Metric threshold.
You have two tables in an ANSI-SQL compliant database with identical columns that you need to quickly combine into a single table, removing duplicate rows from the result set. What should you do? Use the JOIN operator in SQL to combine the tables. Use nested WITH statements to combine the tables. Use the UNION operator in SQL to combine the tables. Use the UNION ALL operator in SQL to combine the tables.
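To make the distinction concrete (table names hypothetical): UNION removes duplicate rows from the combined result set, while UNION ALL keeps them:

    SELECT * FROM sales_2022
    UNION
    SELECT * FROM sales_2023;
    -- UNION ALL would return the same rows but retain duplicates.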
Your company wants to expand their users outside the United States for their popular application. The company wants to ensure 99.999% availability of the database for their application and also wants to minimize the read latency for their users across the globe. Which two actions should they take? (Choose two.) Create a multi-regional Cloud Spanner instance with "nam-eur-asia1" configuration. Create a multi-regional Cloud Spanner instance with "nam3" configuration. Create a cluster with at least 3 Spanner nodes. Create a cluster with at least 1 Spanner node. Create a minimum of two Cloud Spanner instances in separate regions with at least one node. Create a Cloud Dataflow pipeline to replicate data across different databases.
You need to migrate an internal file upload API with an enforced 500-MB file size limit to App Engine. What should you do? Use FTP to upload files. Use CPanel to upload files. Use signed URLs to upload files. Change the API to be a multipart file upload API.
Your application is deployed in a Google Kubernetes Engine (GKE) cluster. When a new version of your application is released, your CI/CD tool updates the spec.template.spec.containers[0].image value to reference the Docker image of your new application version. When the Deployment object applies the change, you want to deploy at least 1 replica of the new version and maintain the previous replicas until the new replica is healthy. Which change should you make to the GKE Deployment object shown below? Set the Deployment strategy to RollingUpdate with maxSurge set to 0, maxUnavailable set to 1. Set the Deployment strategy to RollingUpdate with maxSurge set to 1, maxUnavailable set to 0. Set the Deployment strategy to Recreate with maxSurge set to 0, maxUnavailable set to 1. Set the Deployment strategy to Recreate with maxSurge set to 1, maxUnavailable set to 0.
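The RollingUpdate variant with maxSurge set to 1 and maxUnavailable set to 0, sketched as the relevant fragment of a Deployment spec (surrounding fields omitted):

    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1        # bring up 1 extra new-version replica first
          maxUnavailable: 0  # never remove an old replica before a new one is Ready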
You plan to make a simple HTML application available on the internet. This site keeps information about FAQs for your application. The application is static and contains images, HTML, CSS, and JavaScript. You want to make this application available on the internet with as few steps as possible. What should you do? Upload your application to Cloud Storage. Upload your application to an App Engine environment. Create a Compute Engine instance with Apache web server installed. Configure Apache web server to host the application. Containerize your application first. Deploy this container to Google Kubernetes Engine (GKE) and assign an external IP address to the GKE pod hosting the application.
Your company has a BigQuery data mart that provides analytics information to hundreds of employees. One user wants to run jobs without interrupting important workloads. This user isn't concerned about the time it takes to run these jobs. You want to fulfill this request while minimizing cost to the company and the effort required on your part. What should you do? Ask the user to run the jobs as batch jobs. Create a separate project for the user to run jobs. Add the user as a job.user role in the existing project. Allow the user to run jobs when important workloads are not running.
You want to resolve the issue. What should you do? Request additional GKE quota in the GCP Console. Request additional Compute Engine quota in the GCP Console. Open a support case to request additional GKE quota. Decouple services in the cluster, and rewrite new clusters to function with fewer cores.
You have an application in production. It is deployed on Compute Engine virtual machine instances controlled by a managed instance group. Traffic is routed to the instances via an HTTP(S) load balancer. Your users are unable to access your application. You want to implement a monitoring technique to alert you when the application is unavailable. Which technique should you choose? Smoke tests Stackdriver uptime checks Cloud Load Balancing - health checks Managed instance group - health checks.
You are load testing your server application. During the first 30 seconds, you observe that a previously inactive Cloud Storage bucket is now servicing 2000 write requests per second and 7500 read requests per second. Your application is now receiving intermittent 5xx and 429 HTTP responses from the Cloud Storage JSON API as the demand escalates. You want to decrease the failed responses from the Cloud Storage API. What should you do? Distribute the uploads across a large number of individual storage buckets. Use the XML API instead of the JSON API for interfacing with Cloud Storage. Pass the HTTP response codes back to clients that are invoking the uploads from your application. Limit the upload rate from your application clients so that the dormant bucket's peak request rate is reached more gradually.
You are deploying a single website on App Engine that needs to be accessible via the URL http://www.altostrat.com/. What should you do? Verify domain ownership with Webmaster Central. Create a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com. Verify domain ownership with Webmaster Central. Define an A record pointing to the single global App Engine IP address. Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Create a DNS CNAME record to point to the App Engine canonical name ghs.googlehosted.com. Define a mapping in dispatch.yaml to point the domain www.altostrat.com to your App Engine service. Define an A record pointing to the single global App Engine IP address.
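After domain ownership is verified, the CNAME mapping could be created in Cloud DNS like the following sketch (zone name and TTL are placeholders):

    gcloud dns record-sets create www.altostrat.com. \
        --zone=altostrat-zone --type=CNAME --ttl=300 \
        --rrdatas=ghs.googlehosted.com.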
Your App Engine standard configuration is as follows: service: production instance_class: B1 You want to limit the application to 5 instances. Which code snippet should you include in your configuration? A. manual_scaling: instances: 5 min_pending_latency: 30ms B. manual_scaling: max_instances: 5 idle_timeout: 10m C. basic_scaling: instances: 5 min_pending_latency: 30ms D. basic_scaling: max_instances: 5 idle_timeout: 10m.
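Put together, an app.yaml that caps the service at 5 instances with basic scaling would look like the following sketch (the idle_timeout value shown is an assumption):

    service: production
    instance_class: B1
    basic_scaling:
      max_instances: 5
      idle_timeout: 10m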
HipLocal's data science team wants to analyze user reviews. How should they prepare the data? Use the Cloud Data Loss Prevention API for redaction of the review dataset. Use the Cloud Data Loss Prevention API for de-identification of the review dataset. Use the Cloud Natural Language Processing API for redaction of the review dataset. Use the Cloud Natural Language Processing API for de-identification of the review dataset.
You are a SaaS provider deploying dedicated blogging software to customers in your Google Kubernetes Engine (GKE) cluster. You want to configure a secure multi-tenant platform to ensure that each customer has access to only their own blog and can't affect the workloads of other customers. What should you do? Enable Application-layer Secrets on the GKE cluster to protect the cluster. Deploy a namespace per tenant and use Network Policies in each blog deployment. Use GKE Audit Logging to identify malicious containers and delete them on discovery. Build a custom image of the blogging software and use Binary Authorization to prevent untrusted image deployments.
You want to create `fully baked` or `golden` Compute Engine images for your application. You need to bootstrap your application to connect to the appropriate database according to the environment the application is running on (test, staging, production). What should you do? Embed the appropriate database connection string in the image. Create a different image for each environment. When creating the Compute Engine instance, add a tag with the name of the database to be connected. In your application, query the Compute Engine API to pull the tags for the current instance, and use the tag to construct the appropriate database connection string. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, read the "DATABASE" environment variable, and use the value to connect to the appropriate database. When creating the Compute Engine instance, create a metadata item with a key of "DATABASE" and a value for the appropriate database connection string. In your application, query the metadata server for the "DATABASE" value, and use the value to connect to the appropriate database.
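Reading a custom metadata value from inside the instance requires only the metadata server endpoint and the Metadata-Flavor header, for example:

    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/DATABASE"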
You recently migrated a monolithic application to Google Cloud by breaking it down into microservices. One of the microservices is deployed using Cloud Functions. As you modernize the application, you make a change to the API of the service that is backward-incompatible. You need to support both existing callers who use the original API and new callers who use the new API. What should you do? Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use a load balancer to distribute calls between the versions. Leave the original Cloud Function as-is and deploy a second Cloud Function that includes only the changed API. Calls are automatically routed to the correct function. Leave the original Cloud Function as-is and deploy a second Cloud Function with the new API. Use Cloud Endpoints to provide an API gateway that exposes a versioned API. Re-deploy the Cloud Function after making code changes to support the new API. Requests for both versions of the API are fulfilled based on a version identifier included in the call.
You are a developer working with the CI/CD team to troubleshoot a new feature that your team introduced. The CI/CD team used HashiCorp Packer to create a new Compute Engine image from your development branch. The image was successfully built, but is not booting up. You need to investigate the issue with the CI/CD team. What should you do? Create a new feature branch, and ask the build team to rebuild the image. Shut down the deployed virtual machine, export the disk, and then mount the disk locally to access the boot logs. Install Packer locally, build the Compute Engine image locally, and then run it in your personal Google Cloud project. Check Compute Engine OS logs using the serial port, and check the Cloud Logging logs to confirm access to the serial port.
You manage an application that runs in a Compute Engine instance. You also have multiple backend services executing in stand-alone Docker containers running in Compute Engine instances. The Compute Engine instances supporting the backend services are scaled by managed instance groups in multiple regions. You want your calling application to be loosely coupled. You need to be able to invoke distinct service implementations that are chosen based on the value of an HTTP header found in the request. Which Google Cloud feature should you use to invoke the backend services? Traffic Director Service Directory Anthos Service Mesh Internal HTTP(S) Load Balancing.
You are evaluating developer tools to help drive Google Kubernetes Engine adoption and integration with your development environment, which includes VS Code and IntelliJ. What should you do? Use Cloud Code to develop applications. Use the Cloud Shell integrated Code Editor to edit code and configuration files. Use a Cloud Notebook instance to ingest and process data and deploy models. Use Cloud Shell to manage your infrastructure and applications from the command line.
Your team has created an application that is hosted on a Google Kubernetes Engine (GKE) cluster. You need to connect the application to a legacy REST service that is deployed in two GKE clusters in two different regions. You want to connect your application to the legacy service in a way that is resilient and requires the fewest steps. You also want to be able to run probe-based health checks on the legacy service on a separate port. How should you set up the connection? (Choose two.) Use Traffic Director with a sidecar proxy to connect the application to the service. Set up a proxyless Traffic Director configuration for the application. Configure the legacy service's firewall to allow health checks originating from the proxy. Configure the legacy service's firewall to allow health checks originating from the application. Configure the legacy service's firewall to allow health checks originating from the Traffic Director control plane.
You are developing a flower ordering application. Currently you have three microservices: • Order Service (receives the orders) • Order Fulfillment Service (processes the orders) • Notification Service (notifies the customer when the order is filled) You need to determine how the services will communicate with each other. You want incoming orders to be processed quickly and you need to collect order information for fulfillment. You also want to make sure orders are not lost between your services and are able to communicate asynchronously. How should the requests be processed? Order request - Order Service - Order Fulfillment Service - Notification Service Order request - Order Service - Cloud Storage bucket - Order Fulfillment Service - Cloud Storage bucket - Notification Service Order request - Order Service - Firestore database - Order Fulfillment Service - Firestore database - Notification Service Order request - Order Service - Pub/Sub queue - Order Fulfillment Service - Firestore database - Pub/Sub queue - Notification Service.
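A minimal sketch of the Order Service publishing to a Pub/Sub queue in Python (project and topic names are hypothetical); Pub/Sub retains each message until a subscriber acknowledges it, which is what keeps orders from being lost between services:

    import json
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "orders")  # hypothetical names

    def submit_order(order):
        # Asynchronous hand-off: the fulfillment service consumes from the
        # queue at its own pace; unacknowledged messages are redelivered.
        data = json.dumps(order).encode("utf-8")
        future = publisher.publish(topic_path, data)
        future.result()  # wait for the publish to be confirmed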
HipLocal's APIs are showing occasional failures, but they cannot find a pattern. They want to collect some metrics to help them troubleshoot. What should they do? Take frequent snapshots of all of the VMs Install the Stackdriver Logging agent on the VMs Install the Stackdriver Monitoring agent on the VMs Use Stackdriver Trace to look for performance bottlenecks.