Title of test:
learningearning-dv2

Description:
learningearning zim pwr

Author:
examsure2pass@gmail.com

Creation Date: 12/09/2024

Category: Driving Test

Number of questions: 87
Content:
You are developing the deployment and testing strategies for your CI/CD pipeline in Google Cloud. You must be able to:
• Reduce the complexity of release deployments and minimize the duration of deployment rollbacks.
• Test real production traffic with a gradual increase in the number of affected users.
You want to select a deployment and testing strategy that meets your requirements. What should you do?
- Recreate deployment and canary testing
- Blue/green deployment and canary testing
- Rolling update deployment and A/B testing
- Rolling update deployment and shadow testing.
You are creating a CI/CD pipeline to perform Terraform deployments of Google Cloud resources. Your CI/CD tooling is running in Google Kubernetes Engine (GKE) and uses an ephemeral Pod for each pipeline run. You must ensure that the pipelines that run in the Pods have the appropriate Identity and Access Management (IAM) permissions to perform the Terraform deployments. You want to follow Google-recommended practices for identity management. What should you do? (Choose two.)
- Create a new Kubernetes service account, and assign the service account to the Pods. Use Workload Identity to authenticate as the Google service account.
- Create a new JSON service account key for the Google service account, store the key as a Kubernetes secret, inject the key into the Pods, and set the GOOGLE_APPLICATION_CREDENTIALS environment variable.
- Create a new Google service account, and assign the appropriate IAM permissions.
- Create a new JSON service account key for the Google service account, store the key in the secret management store for the CI/CD tool, and configure Terraform to use this key for authentication.
- Assign the appropriate IAM permissions to the Google service account associated with the Compute Engine VM instances that run the Pods.
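The Workload Identity approach from the options above can be sketched as follows. This is a minimal illustration, not a definitive setup: the project, namespace, and account names (`my-project`, `ci-namespace`, `tf-deployer`, `tf-ksa`) are hypothetical, and the role granted should be narrowed to what the Terraform deployments actually need.

```shell
# Create the Google service account and grant it the deployment permissions
# (roles/editor shown only as a placeholder; prefer least-privilege roles).
gcloud iam service-accounts create tf-deployer --project=my-project
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:tf-deployer@my-project.iam.gserviceaccount.com" \
  --role="roles/editor"

# Allow the Kubernetes service account to impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
  tf-deployer@my-project.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:my-project.svc.id.goog[ci-namespace/tf-ksa]"

# Annotate the Kubernetes service account used by the pipeline Pods.
kubectl annotate serviceaccount tf-ksa --namespace ci-namespace \
  iam.gke.io/gcp-service-account=tf-deployer@my-project.iam.gserviceaccount.com
```

Pods that run with `tf-ksa` then obtain Google credentials automatically, with no exported JSON keys.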
You are the on-call Site Reliability Engineer for a microservice that is deployed to a Google Kubernetes Engine (GKE) Autopilot cluster. Your company runs an online store that publishes order messages to Pub/Sub, and a microservice receives these messages and updates stock information in the warehousing system. A sales event caused an increase in orders, and the stock information is not being updated quickly enough. This is causing a large number of orders to be accepted for products that are out of stock. You check the metrics for the microservice and compare them to typical levels: You need to ensure that the warehouse system accurately reflects product inventory at the time orders are placed and minimize the impact on customers. What should you do?
- Decrease the acknowledgment deadline on the subscription.
- Add a virtual queue to the online store that allows typical traffic levels.
- Increase the number of Pod replicas.
- Increase the Pod CPU and memory limits.
Your team deploys applications to three Google Kubernetes Engine (GKE) environments: development, staging, and production. You use GitHub repositories as your source of truth. You need to ensure that the three environments are consistent. You want to follow Google-recommended practices to enforce and install network policies and a logging DaemonSet on all the GKE clusters in those environments. What should you do?
- Use Google Cloud Deploy to deploy the network policies and the DaemonSet. Use Cloud Monitoring to trigger an alert if the network policies and DaemonSet drift from your source in the repository.
- Use Google Cloud Deploy to deploy the DaemonSet and use Policy Controller to configure the network policies. Use Cloud Monitoring to detect drifts from the source in the repository and Cloud Functions to correct the drifts.
- Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up Config Sync to sync the configurations for the three environments.
- Use Cloud Build to render and deploy the network policies and the DaemonSet. Set up a Policy Controller to enforce the configurations for the three environments.
You are using Terraform to manage infrastructure as code within a CI/CD pipeline. You notice that multiple copies of the entire infrastructure stack exist in your Google Cloud project, and a new copy is created each time a change to the existing infrastructure is made. You need to optimize your cloud spend by ensuring that only a single instance of your infrastructure stack exists at a time. You want to follow Google-recommended practices. What should you do?
- Create a new pipeline to delete old infrastructure stacks when they are no longer needed.
- Confirm that the pipeline is storing and retrieving the terraform.tfstate file from Cloud Storage with the Terraform gcs backend.
- Verify that the pipeline is storing and retrieving the terraform.tfstate file from a source control.
- Update the pipeline to remove any existing infrastructure before you apply the latest configuration.
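The gcs backend mentioned above can be sketched as a short configuration block. The bucket name and prefix are hypothetical; the bucket should exist ahead of time, ideally with object versioning enabled:

```hcl
# Shared remote state: every pipeline run reads and writes the same
# terraform.tfstate, so Terraform updates the existing stack instead of
# creating a fresh copy from an empty local state.
terraform {
  backend "gcs" {
    bucket = "my-org-tf-state" # versioned Cloud Storage bucket (hypothetical)
    prefix = "env/prod"        # one state path per stack
  }
}
```

Without a shared backend, each ephemeral runner starts from empty state and `terraform apply` recreates the whole stack, which matches the duplicate-infrastructure symptom described in the question.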
You are creating Cloud Logging sinks to export log entries from Cloud Logging to BigQuery for future analysis. Your organization has a Google Cloud folder named Dev that contains development projects and a folder named Prod that contains production projects. Log entries for development projects must be exported to dev_dataset, and log entries for production projects must be exported to prod_dataset. You need to minimize the number of log sinks created, and you want to ensure that the log sinks apply to future projects. What should you do?
- Create a single aggregated log sink at the organization level.
- Create a log sink in each project.
- Create two aggregated log sinks at the organization level, and filter by project ID.
- Create an aggregated log sink in the Dev and Prod folders.
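A folder-level aggregated sink, as in the last option, might look like the following sketch. The sink name, folder ID, project, and dataset are hypothetical, and a matching sink would be created for the Prod folder with prod_dataset:

```shell
# Aggregated sink on the Dev folder: --include-children makes it apply to
# every current and future project under the folder.
gcloud logging sinks create dev-logs-to-bq \
  bigquery.googleapis.com/projects/central-logs/datasets/dev_dataset \
  --folder=FOLDER_ID_DEV \
  --include-children
```

After creating the sink, the service account it writes as still needs the BigQuery Data Editor role on the destination dataset.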
Your company runs services by using multiple globally distributed Google Kubernetes Engine (GKE) clusters. Your operations team has set up workload monitoring that uses Prometheus-based tooling for metrics, alerts, and generating dashboards. This setup does not provide a method to view metrics globally across all clusters. You need to implement a scalable solution to support global Prometheus querying and minimize management overhead. What should you do?
- Configure Prometheus cross-service federation for centralized data access.
- Configure workload metrics within Cloud Operations for GKE.
- Configure Prometheus hierarchical federation for centralized data access.
- Configure Google Cloud Managed Service for Prometheus.
You need to build a CI/CD pipeline for a containerized application in Google Cloud. Your development team uses a central Git repository for trunk-based development. You want to run all your tests in the pipeline for any new versions of the application to improve the quality. What should you do?
- 1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. 2. Trigger Cloud Build to build the application container. Deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, deploy the application container to your production environment, and run acceptance tests.
- 1. Install a Git hook to require developers to run unit tests before pushing the code to a central repository. If all tests are successful, build a container. 2. Trigger Cloud Build to deploy the application container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, tag the code as production ready. Trigger Cloud Build to build and deploy the application container to the production environment.
- 1. Trigger Cloud Build to build the application container, and run unit tests with the container. 2. If unit tests are successful, deploy the application container to a testing environment, and run integration tests. 3. If the integration tests are successful, the pipeline deploys the application container to the production environment. After that, run acceptance tests.
- 1. Trigger Cloud Build to run unit tests when the code is pushed. If all unit tests are successful, build and push the application container to a central registry. 2. Trigger Cloud Build to deploy the container to a testing environment, and run integration tests and acceptance tests. 3. If all tests are successful, the pipeline deploys the application to the production environment and runs smoke tests.
The new version of your containerized application has been tested and is ready to be deployed to production on Google Kubernetes Engine (GKE). You could not fully load-test the new version in your pre-production environment, and you need to ensure that the application does not have performance problems after deployment. Your deployment must be automated. What should you do?
- Deploy the application through a continuous delivery pipeline by using canary deployments. Use Cloud Monitoring to look for performance issues, and ramp up traffic as supported by the metrics.
- Deploy the application through a continuous delivery pipeline by using blue/green deployments. Migrate traffic to the new version of the application and use Cloud Monitoring to look for performance issues.
- Deploy the application by using kubectl and use Config Connector to slowly ramp up traffic between versions. Use Cloud Monitoring to look for performance issues.
- Deploy the application by using kubectl and set the spec.updateStrategy.type field to RollingUpdate. Use Cloud Monitoring to look for performance issues, and run the kubectl rollback command if there are any issues.
You are managing an application that runs in Compute Engine. The application uses a custom HTTP server to expose an API that is accessed by other applications through an internal TCP/UDP load balancer. A firewall rule allows access to the API port from 0.0.0.0/0. You need to configure Cloud Logging to log each IP address that accesses the API by using the fewest number of steps. What should you do first?
- Enable Packet Mirroring on the VPC.
- Install the Ops Agent on the Compute Engine instances.
- Enable logging on the firewall rule.
- Enable VPC Flow Logs on the subnet.
Your company runs an ecommerce website built with JVM-based applications and microservice architecture in Google Kubernetes Engine (GKE). The application load increases during the day and decreases during the night. Your operations team has configured the application to run enough Pods to handle the evening peak load. You want to automate scaling by only running enough Pods and nodes for the load. What should you do?
- Configure the Vertical Pod Autoscaler, but keep the node pool size static.
- Configure the Vertical Pod Autoscaler, and enable the cluster autoscaler.
- Configure the Horizontal Pod Autoscaler, but keep the node pool size static.
- Configure the Horizontal Pod Autoscaler, and enable the cluster autoscaler.
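A Horizontal Pod Autoscaler for one of the microservices could be sketched as below. The Deployment name, replica bounds, and CPU target are hypothetical; with the cluster autoscaler also enabled, the node pool grows and shrinks to fit the scheduled Pods:

```yaml
# HPA sketch: scale Pod count with load instead of provisioning for peak.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout          # hypothetical microservice Deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```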
Your organization wants to increase the availability target of an application from 99.9% to 99.99% for an investment of $2,000. The application's current revenue is $1,000,000. You need to determine whether the increase in availability is worth the investment for a single year of usage. What should you do?
- Calculate the value of improved availability to be $900, and determine that the increase in availability is not worth the investment.
- Calculate the value of improved availability to be $1,000, and determine that the increase in availability is not worth the investment.
- Calculate the value of improved availability to be $1,000, and determine that the increase in availability is worth the investment.
- Calculate the value of improved availability to be $9,000, and determine that the increase in availability is worth the investment.
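The arithmetic behind this question follows the standard SRE cost-of-availability estimate: the value of the improvement is the revenue at risk during the downtime you eliminate, i.e. revenue × (old unavailability − new unavailability). A quick check:

```python
def availability_value(revenue: float, old_avail: float, new_avail: float) -> float:
    """Revenue protected by reducing downtime from old_avail to new_avail."""
    return revenue * ((1 - old_avail) - (1 - new_avail))

# 99.9% -> 99.99% on $1,000,000 of revenue:
value = availability_value(1_000_000, 0.999, 0.9999)
print(round(value, 2))  # 900.0
```

Since $900 is less than the $2,000 investment, the improvement is not worth it for a single year at this revenue level.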
A third-party application needs to have a service account key to work properly. When you try to export the key from your cloud project, you receive an error: “The organization policy constraint iam.disableServiceAccountKeyCreation is enforced.” You need to make the third-party application work while following Google-recommended security practices. What should you do?
- Enable the default service account key, and download the key.
- Remove the iam.disableServiceAccountKeyCreation policy at the organization level, and create a key.
- Disable the service account key creation policy at the project's folder, and download the default key.
- Add a rule to set the iam.disableServiceAccountKeyCreation policy to off in your project, and create a key.
Your team is writing a postmortem after an incident on your external facing application. Your team wants to improve the postmortem policy to include triggers that indicate whether an incident requires a postmortem. Based on Site Reliability Engineering (SRE) practices, what triggers should be defined in the postmortem policy? (Choose two.)
- An external stakeholder asks for a postmortem.
- Data is lost due to an incident.
- An internal stakeholder requests a postmortem.
- The monitoring system detects that one of the instances for your application has failed.
- The CD pipeline detects an issue and rolls back a problematic release.
You are implementing a CI/CD pipeline for your application in your company’s multi-cloud environment. Your application is deployed by using custom Compute Engine images and the equivalent in other cloud providers. You need to implement a solution that will enable you to build and deploy the images to your current environment and is adaptable to future changes. Which solution stack should you use?
- Cloud Build with Packer
- Cloud Build with Google Cloud Deploy
- Google Kubernetes Engine with Google Cloud Deploy
- Cloud Build with kpt.
Your application's performance in Google Cloud has degraded since the last release. You suspect that downstream dependencies might be causing some requests to take longer to complete. You need to investigate the issue with your application to determine the cause. What should you do?
- Configure Error Reporting in your application.
- Configure Google Cloud Managed Service for Prometheus in your application.
- Configure Cloud Profiler in your application.
- Configure Cloud Trace in your application.
You are creating a CI/CD pipeline in Cloud Build to build an application container image. The application code is stored in GitHub. Your company requires that production image builds are only run against the main branch and that the change control team approves all pushes to the main branch. You want the image build to be as automated as possible. What should you do? (Choose two.)
- Create a trigger on the Cloud Build job. Set the repository event setting to ‘Pull request’. Add the OWNERS file to the Included files filter on the trigger.
- Create a trigger on the Cloud Build job. Set the repository event setting to ‘Push to a branch’.
- Configure a branch protection rule for the main branch on the repository.
- Enable the Approval option on the trigger.
You built a serverless application by using Cloud Run and deployed the application to your production environment. You want to identify the resource utilization of the application for cost optimization. What should you do?
- Use Cloud Trace with distributed tracing to monitor the resource utilization of the application.
- Use Cloud Profiler with Ops Agent to monitor the CPU and memory utilization of the application.
- Use Cloud Monitoring to monitor the container CPU and memory utilization of the application.
- Use Cloud Ops to create logs-based metrics to monitor the resource utilization of the application.
Your company is using HTTPS requests to trigger a public Cloud Run-hosted service accessible at the https://booking-engine-abcdef.a.run.app URL. You need to give developers the ability to test the latest revisions of the service before the service is exposed to customers. What should you do?
- Run the gcloud run deploy booking-engine --no-traffic --tag dev command. Use the https://dev--bookingengine-abcdef.a.run.app URL for testing.
- Run the gcloud run services update-traffic booking-engine --to-revisions LATEST=1 command. Use the https://booking-engine-abcdef.a.run.app URL for testing.
- Pass the curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" auth token. Use the https://booking-engine-abcdef.a.run.app URL to test privately.
- Grant the roles/run.invoker role to the developers testing the booking-engine service. Use the https://booking-engine-abcdef.private.run.app URL for testing.
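The tagged no-traffic deployment in the first option can be sketched as below. The image URL is a placeholder; the exact tag-prefixed hostname Cloud Run assigns is shown in the command output:

```shell
# Deploy a revision that receives 0% of production traffic but gets its own
# tag-prefixed URL that developers can hit directly.
gcloud run deploy booking-engine \
  --image=IMAGE_URL \
  --no-traffic \
  --tag=dev
# Customers continue to use the stable service URL; once the tagged revision
# is validated, traffic can be migrated with:
gcloud run services update-traffic booking-engine --to-latest
```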
You are configuring connectivity across Google Kubernetes Engine (GKE) clusters in different VPCs. You notice that the nodes in Cluster A are unable to access the nodes in Cluster B. You suspect that the workload access issue is due to the network configuration. You need to troubleshoot the issue but do not have execute access to workloads and nodes. You want to identify the layer at which the network connectivity is broken. What should you do?
- Install a toolbox container on the node in Cluster A. Confirm that the routes to Cluster B are configured appropriately.
- Use Network Connectivity Center to perform a Connectivity Test from Cluster A to Cluster B.
- Use a debug container to run the traceroute command from Cluster A to Cluster B and from Cluster B to Cluster A. Identify the common failure point.
- Enable VPC Flow Logs in both VPCs, and monitor packet drops.
You manage an application that runs in Google Kubernetes Engine (GKE) and uses the blue/green deployment methodology. Extracts of the Kubernetes manifests are shown below: The Deployment app-green was updated to use the new version of the application. During post-deployment monitoring, you notice that the majority of user requests are failing. You did not observe this behavior in the testing environment. You need to mitigate the incident impact on users and enable the developers to troubleshoot the issue. What should you do?
- Update the Deployment app-blue to use the new version of the application.
- Update the Deployment app-green to use the previous version of the application.
- Change the selector on the Service app-svc to app: my-app.
- Change the selector on the Service app-svc to app: my-app, version: blue.
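In blue/green on GKE, rollback is a Service selector flip. A minimal sketch, assuming label keys and values implied by the question's naming (the ports are hypothetical):

```yaml
# Point the Service back at the blue Deployment's Pods. The failing green
# Deployment keeps running, so developers can still troubleshoot it.
apiVersion: v1
kind: Service
metadata:
  name: app-svc
spec:
  selector:
    app: my-app
    version: blue   # switched back from "green" to mitigate the incident
  ports:
  - port: 80
    targetPort: 8080
```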
You are running a web application deployed to a Compute Engine managed instance group. Ops Agent is installed on all instances. You recently noticed suspicious activity from a specific IP address. You need to configure Cloud Monitoring to view the number of requests from that specific IP address with minimal operational overhead. What should you do?
- Configure the Ops Agent with a logging receiver. Create a logs-based metric.
- Create a script to scrape the web server log. Export the IP address request metrics to the Cloud Monitoring API.
- Update the application to export the IP address request metrics to the Cloud Monitoring API.
- Configure the Ops Agent with a metrics receiver.
Your organization is using Helm to package containerized applications. Your applications reference both public and private charts. Your security team flagged that using a public Helm repository as a dependency is a risk. You want to manage all charts uniformly, with native access control and VPC Service Controls. What should you do?
- Store public and private charts in OCI format by using Artifact Registry.
- Store public and private charts by using GitHub Enterprise with Google Workspace as the identity provider.
- Store public and private charts by using a Git repository. Configure Cloud Build to synchronize contents of the repository into a Cloud Storage bucket. Connect Helm to the bucket by using https://[bucket].storage.googleapis.com/[helmchart] as the Helm repository.
- Configure a Helm chart repository server to run in Google Kubernetes Engine (GKE) with a Cloud Storage bucket as the storage backend.
You use Terraform to manage an application deployed to a Google Cloud environment. The application runs on instances deployed by a managed instance group. The Terraform code is deployed by using a CI/CD pipeline. When you change the machine type on the instance template used by the managed instance group, the pipeline fails at the terraform apply stage with the following error message: You need to update the instance template and minimize disruption to the application and the number of pipeline runs. What should you do?
- Delete the managed instance group, and recreate it after updating the instance template.
- Add a new instance template, update the managed instance group to use the new instance template, and delete the old instance template.
- Remove the managed instance group from the Terraform state file, update the instance template, and reimport the managed instance group.
- Set the create_before_destroy meta-argument to true in the lifecycle block on the instance template.
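The lifecycle option in the last choice can be sketched as follows. The resource name and machine type are hypothetical; `name_prefix` (instead of a fixed `name`) lets Terraform generate a fresh template name so the replacement can exist alongside the old one:

```hcl
# Instance templates are immutable, so changing machine_type forces
# replacement. create_before_destroy makes Terraform create the new template
# first, so the managed instance group never references a deleted template.
resource "google_compute_instance_template" "app" {
  name_prefix  = "app-template-"
  machine_type = "e2-standard-4"
  # ... disks, network interfaces, etc. ...

  lifecycle {
    create_before_destroy = true
  }
}
```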
Your company operates in a highly regulated domain that requires you to store all organization logs for seven years. You want to minimize logging infrastructure complexity by using managed services. You need to avoid any future loss of log capture or stored logs due to misconfiguration or human error. What should you do?
- Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into a BigQuery dataset.
- Use Cloud Logging to configure an aggregated sink at the organization level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
- Use Cloud Logging to configure an export sink at each project level to export all logs into a BigQuery dataset.
- Use Cloud Logging to configure an export sink at each project level to export all logs into Cloud Storage with a seven-year retention policy and Bucket Lock.
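The organization-level sink with a locked retention policy might be set up roughly as below (organization ID, sink, and bucket names are hypothetical). Note that locking a retention policy is permanent, which is exactly what protects the logs from later misconfiguration:

```shell
# Aggregated sink: capture logs from every project under the organization.
gcloud logging sinks create org-archive \
  storage.googleapis.com/org-log-archive \
  --organization=ORG_ID \
  --include-children

# Seven-year retention, then lock it so no one can shorten or remove it.
gsutil retention set 7y gs://org-log-archive
gsutil retention lock gs://org-log-archive
```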
You are building the CI/CD pipeline for an application deployed to Google Kubernetes Engine (GKE). The application is deployed by using a Kubernetes Deployment, Service, and Ingress. The application team asked you to deploy the application by using the blue/green deployment methodology. You need to implement the rollback actions. What should you do?
- Run the kubectl rollout undo command.
- Delete the new container image, and delete the running Pods.
- Update the Kubernetes Service to point to the previous Kubernetes Deployment.
- Scale the new Kubernetes Deployment to zero.
You are building and running client applications in Cloud Run and Cloud Functions. Your client requires that all logs must be available for one year so that the client can import the logs into their logging service. You must minimize required code changes. What should you do?
- Update all images in Cloud Run and all functions in Cloud Functions to send logs to both Cloud Logging and the client's logging service. Ensure that all the ports required to send logs are open in the VPC firewall.
- Create a Pub/Sub topic, subscription, and logging sink. Configure the logging sink to send all logs into the topic. Give your client access to the topic to retrieve the logs.
- Create a storage bucket and appropriate VPC firewall rules. Update all images in Cloud Run and all functions in Cloud Functions to send logs to a file within the storage bucket.
- Create a logs bucket and logging sink. Set the retention on the logs bucket to 365 days. Configure the logging sink to send logs to the bucket. Give your client access to the bucket to retrieve the logs.
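The logs-bucket approach in the last option could be sketched like this (project ID, bucket, and sink names are hypothetical; the log filter is one plausible way to capture Cloud Run and Cloud Functions logs):

```shell
# Dedicated Cloud Logging bucket with 365-day retention.
gcloud logging buckets create client-logs \
  --location=global \
  --retention-days=365

# Route serverless logs into the bucket; no application code changes needed.
gcloud logging sinks create to-client-logs \
  logging.googleapis.com/projects/PROJECT_ID/locations/global/buckets/client-logs \
  --log-filter='resource.type=("cloud_run_revision" OR "cloud_function")'
```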
You have an application that runs in Google Kubernetes Engine (GKE). The application consists of several microservices that are deployed to GKE by using Deployments and Services. One of the microservices is experiencing an issue where a Pod returns 403 errors after the Pod has been running for more than five hours. Your development team is working on a solution, but the issue will not be resolved for a month. You need to ensure continued operations until the microservice is fixed. You want to follow Google-recommended practices and use the fewest number of steps. What should you do?
- Create a cron job to terminate any Pods that have been running for more than five hours.
- Add an HTTP liveness probe to the microservice's deployment.
- Monitor the Pods, and terminate any Pods that have been running for more than five hours.
- Configure an alert to notify you whenever a Pod returns 403 errors.
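An HTTP liveness probe for this scenario could be sketched as below. The path, port, and timings are hypothetical and assume the health endpoint starts returning non-2xx responses once the 403 failure mode kicks in; the kubelet then restarts the container automatically:

```yaml
# Added to the container spec in the microservice's Deployment.
livenessProbe:
  httpGet:
    path: /healthz      # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
  failureThreshold: 3   # restart after ~90s of consecutive failures
```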
You want to share a Cloud Monitoring custom dashboard with a partner team. What should you do?
- Provide the partner team with the dashboard URL to enable the partner team to create a copy of the dashboard.
- Export the metrics to BigQuery. Use Looker Studio to create a dashboard, and share the dashboard with the partner team.
- Copy the Monitoring Query Language (MQL) query from the dashboard, and send the MQL query to the partner team.
- Download the JSON definition of the dashboard, and send the JSON file to the partner team.
You are building an application that runs on Cloud Run. The application needs to access a third-party API by using an API key. You need to determine a secure way to store and use the API key in your application by following Google-recommended practices. What should you do?
- Save the API key in Secret Manager as a secret. Reference the secret as an environment variable in the Cloud Run application.
- Save the API key in Secret Manager as a secret key. Mount the secret key under the /sys/api_key directory, and decrypt the key in the Cloud Run application.
- Save the API key in Cloud Key Management Service (Cloud KMS) as a key. Reference the key as an environment variable in the Cloud Run application.
- Encrypt the API key by using Cloud Key Management Service (Cloud KMS), and pass the key to Cloud Run as an environment variable. Decrypt and use the key in Cloud Run.
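The Secret Manager pattern from the first option can be sketched in two commands (secret name, service name, and image are hypothetical):

```shell
# Store the key once in Secret Manager.
echo -n "$THIRD_PARTY_API_KEY" | \
  gcloud secrets create api-key --data-file=-

# Expose it to the Cloud Run service as the API_KEY environment variable;
# the service's runtime service account needs roles/secretmanager.secretAccessor.
gcloud run deploy my-app \
  --image=IMAGE_URL \
  --set-secrets=API_KEY=api-key:latest
```

The application then reads the key from its environment; rotating the secret requires no image rebuild.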
You are currently planning how to display Cloud Monitoring metrics for your organization’s Google Cloud projects. Your organization has three folders and six projects: You want to configure Cloud Monitoring dashboards to only display metrics from the projects within one folder. You need to ensure that the dashboards do not display metrics from projects in the other folders. You want to follow Google-recommended practices. What should you do?
- Create a single new scoping project.
- Create new scoping projects for each folder.
- Use the current app-one-prod project as the scoping project.
- Use the current app-one-dev, app-one-staging, and app-one-prod projects as the scoping project for each folder.
Your company’s security team needs to have read-only access to Data Access audit logs in the _Required bucket. You want to provide your security team with the necessary permissions following the principle of least privilege and Google-recommended practices. What should you do?
- Assign the roles/logging.viewer role to each member of the security team.
- Assign the roles/logging.viewer role to a group with all the security team members.
- Assign the roles/logging.privateLogViewer role to each member of the security team.
- Assign the roles/logging.privateLogViewer role to a group with all the security team members.
Your team is building a service that performs compute-heavy processing on batches of data. The data is processed faster based on the speed and number of CPUs on the machine. These batches of data vary in size and may arrive at any time from multiple third-party sources. You need to ensure that third parties are able to upload their data securely. You want to minimize costs, while ensuring that the data is processed as quickly as possible. What should you do?
- Provide a secure file transfer protocol (SFTP) server on a Compute Engine instance so that third parties can upload batches of data, and provide appropriate credentials to the server. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
- Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use a standard Google Kubernetes Engine (GKE) cluster and maintain two services: one that processes the batches of data, and one that monitors Cloud Storage for new batches of data. Stop the processing service when there are no batches of data to process.
- Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Create a Cloud Function with a google.storage.object.finalize Cloud Storage trigger. Write code so that the function can scale up a Compute Engine autoscaling managed instance group. Use an image pre-loaded with the data processing software that terminates the instances when processing completes.
- Provide a Cloud Storage bucket so that third parties can upload batches of data, and provide appropriate Identity and Access Management (IAM) access to the bucket. Use Cloud Monitoring to detect new batches of data in the bucket and trigger a Cloud Function that processes the data. Set a Cloud Function to use the largest CPU possible to minimize the runtime of the processing.
You are reviewing your deployment pipeline in Google Cloud Deploy. You must reduce toil in the pipeline, and you want to minimize the amount of time it takes to complete an end-to-end deployment. What should you do? (Choose two.)
- Divide the automation steps into smaller tasks.
- Use a script to automate the creation of the deployment pipeline in Google Cloud Deploy.
- Add more engineers to finish the manual steps.
- Automate promotion approvals from the development environment to the test environment.
- Create a trigger to notify the required team to complete the next step when manual intervention is required.
You work for a global organization and are running a monolithic application on Compute Engine. You need to select the machine type for the application to use that optimizes CPU utilization by using the fewest number of steps. You want to use historical system metrics to identify the machine type for the application to use. You want to follow Google-recommended practices. What should you do?
- Use the Recommender API and apply the suggested recommendations.
- Create an Agent Policy to automatically install Ops Agent in all VMs.
- Install the Ops Agent in a fleet of VMs by using the gcloud CLI.
- Review the Cloud Monitoring dashboard for the VM and choose the machine type with the lowest CPU utilization.
You deployed an application into a large Standard Google Kubernetes Engine (GKE) cluster. The application is stateless and multiple pods run at the same time. Your application receives inconsistent traffic. You need to ensure that the user experience remains consistent regardless of changes in traffic and that the resource usage of the cluster is optimized. What should you do?
- Configure a cron job to scale the deployment on a schedule.
- Configure a Horizontal Pod Autoscaler.
- Configure a Vertical Pod Autoscaler.
- Configure cluster autoscaling on the node pool.
You need to deploy a new service to production. The service needs to automatically scale using a managed instance group and should be deployed across multiple regions. The service needs a large number of resources for each instance and you need to plan for capacity. What should you do?
- Monitor results of Cloud Trace to determine the optimal sizing.
- Use the n2-highcpu-96 machine type in the configuration of the managed instance group.
- Deploy the service in multiple regions and use an internal load balancer to route traffic.
- Validate that the resource requirements are within the available project quota limits of each region.
You are analyzing Java applications in production. All applications have Cloud Profiler and Cloud Trace installed and configured by default. You want to determine which applications need performance tuning. What should you do? (Choose two.)
- Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the CPU resource allocation.
- Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the memory resource allocation.
- Examine the wall-clock time and the CPU time of the application. If the difference is substantial, increase the local disk storage allocation.
- Examine the latency time, the wall-clock time, and the CPU time of the application. If the latency time is slowly burning down the error budget, and the difference between wall-clock time and CPU time is minimal, mark the application for optimization.
- Examine the heap usage of the application. If the usage is low, mark the application for optimization.
Your organization stores all application logs from multiple Google Cloud projects in a central Cloud Logging project. Your security team wants to enforce a rule that each project team can only view their respective logs and only the operations team can view all the logs. You need to design a solution that meets the security team's requirements while minimizing costs. What should you do? Grant each project team access to the project _Default view in the central logging project. Grant logging viewer access to the operations team in the central logging project. Create Identity and Access Management (IAM) roles for each project team and restrict access to the _Default log view in their individual Google Cloud project. Grant viewer access to the operations team in the central logging project. Create log views for each project team and only show each project team their application logs. Grant the operations team access to the _AllLogs view in the central logging project. Export logs to BigQuery tables for each project team. Grant project teams access to their tables. Grant logs writer access to the operations team in the central logging project.
Your company uses Jenkins running on Google Cloud VM instances for CI/CD. You need to extend the functionality to use infrastructure as code automation by using Terraform. You must ensure that the Terraform Jenkins instance is authorized to create Google Cloud resources. You want to follow Google-recommended practices. What should you do? Confirm that the Jenkins VM instance has an attached service account with the appropriate Identity and Access Management (IAM) permissions. Use the Terraform module so that Secret Manager can retrieve credentials. Create a dedicated service account for the Terraform instance. Download and copy the secret key value to the GOOGLE_CREDENTIALS environment variable on the Jenkins server. Add the gcloud auth application-default login command as a step in Jenkins before running the Terraform commands.
You encounter a large number of outages in the production systems you support. You receive alerts for all of the outages; the alerts are due to unhealthy systems that are automatically restarted within a minute. You want to set up a process that would prevent staff burnout while following Site Reliability Engineering (SRE) practices. What should you do? Eliminate alerts that are not actionable. Redefine the related SLO so that the error budget is not exhausted. Distribute the alerts to engineers in different time zones. Create an incident report for each of the alerts.
As part of your company's initiative to shift left on security, the InfoSec team is asking all teams to implement guard rails on all the Google Kubernetes Engine (GKE) clusters to only allow the deployment of trusted and approved images. You need to determine how to satisfy the InfoSec team's goal of shifting left on security. What should you do? Enable Container Analysis in Artifact Registry, and check for common vulnerabilities and exposures (CVEs) in your container images. Use Binary Authorization to attest images during your CI/CD pipeline. Configure Identity and Access Management (IAM) policies to create a least privilege model on your GKE clusters. Deploy Falco or Twistlock on GKE to monitor for vulnerabilities on your running Pods.
Your company operates in a highly regulated domain. Your security team requires that only trusted container images can be deployed to Google Kubernetes Engine (GKE). You need to implement a solution that meets the requirements of the security team while minimizing management overhead. What should you do? Configure Binary Authorization in your GKE clusters to enforce deploy-time security policies. Grant the roles/artifactregistry.writer role to the Cloud Build service account. Confirm that no employee has Artifact Registry write permission. Use Cloud Run to write and deploy a custom validator. Enable an Eventarc trigger to perform validations when new images are uploaded. Configure Kritis to run in your GKE clusters to enforce deploy-time security policies.
Your CTO has asked you to implement a postmortem policy on every incident for internal use. You want to define what a good postmortem is to ensure that the policy is successful at your company. What should you do? (Choose two.) Ensure that all postmortems include what caused the incident, identify the person or team responsible for causing the incident, and how to prevent a future occurrence of the incident. Ensure that all postmortems include what caused the incident, how the incident could have been worse, and how to prevent a future occurrence of the incident. Ensure that all postmortems include the severity of the incident, how to prevent a future occurrence of the incident, and what caused the incident without naming internal system components. Ensure that all postmortems include how the incident was resolved and what caused the incident without naming customer information. Ensure that all postmortems include all incident participants in postmortem authoring and share postmortems as widely as possible.
You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests. What should you do? Use a Jenkins server for CI/CD pipelines. Periodically run all tests in the feature branch. Ask the pull request reviewers to run the integration tests before approving the code. Use Cloud Build to run the tests. Trigger all tests to run after a pull request is merged. Use Cloud Build to run tests in a specific folder. Trigger Cloud Build for every GitHub pull request.
Your company processes IoT data at scale by using Pub/Sub, App Engine standard environment, and an application written in Go. You noticed that the performance inconsistently degrades at peak load. You could not reproduce this issue on your workstation. You need to continuously monitor the application in production to identify slow paths in the code. You want to minimize performance impact and management overhead. What should you do? Use Cloud Monitoring to assess the App Engine CPU utilization metric. Install a continuous profiling tool into Compute Engine. Configure the application to send profiling data to the tool. Periodically run the go tool pprof command against the application instance. Analyze the results by using flame graphs. Configure Cloud Profiler, and initialize the cloud.google.com/go/profiler library in the application.
Your company runs services by using Google Kubernetes Engine (GKE). The GKE clusters in the development environment run applications with verbose logging enabled. Developers view logs by using the kubectl logs command and do not use Cloud Logging. Applications do not have a uniform logging structure defined. You need to minimize the costs associated with application logging while still collecting GKE operational logs. What should you do? Run the gcloud container clusters update --logging=SYSTEM command for the development cluster. Run the gcloud container clusters update --logging=WORKLOAD command for the development cluster. Run the gcloud logging sinks update _Default --disabled command in the project associated with the development environment. Add the severity >= DEBUG resource.type = "k8s_container" exclusion filter to the _Default logging sink in the project associated with the development environment.
You have deployed a fleet of Compute Engine instances in Google Cloud. You need to ensure that monitoring metrics and logs for the instances are visible in Cloud Logging and Cloud Monitoring by your company's operations and cyber security teams. You need to grant the required roles for the Compute Engine service account by using Identity and Access Management (IAM) while following the principle of least privilege. What should you do? Grant the logging.logWriter and monitoring.metricWriter roles to the Compute Engine service accounts. Grant the logging.admin and monitoring.editor roles to the Compute Engine service accounts. Grant the logging.editor and monitoring.metricWriter roles to the Compute Engine service accounts. Grant the logging.logWriter and monitoring.editor roles to the Compute Engine service accounts.
You are the Site Reliability Engineer responsible for managing your company's data services and products. You regularly navigate operational challenges, such as unpredictable data volume and high cost, with your company's data ingestion processes. You recently learned that a new data ingestion product will be developed in Google Cloud. You need to collaborate with the product development team to provide operational input on the new product. What should you do? Deploy the prototype product in a test environment, run a load test, and share the results with the product development team. When the initial product version passes the quality assurance phase and compliance assessments, deploy the product to a staging environment. Share error logs and performance metrics with the product development team. When the new product is used by at least one internal customer in production, share error logs and monitoring metrics with the product development team. Review the design of the product with the product development team to provide feedback early in the design phase.
You are investigating issues in your production application that runs on Google Kubernetes Engine (GKE). You determined that the source of the issue is a recently updated container image, although the exact change in code was not identified. The deployment is currently pointing to the latest tag. You need to update your cluster to run a version of the container that functions as intended. What should you do? Create a new tag called stable that points to the previously working container, and change the deployment to point to the new tag. Alter the deployment to point to the sha256 digest of the previously working container. Build a new container from a previous Git tag, and do a rolling update on the deployment to the new container. Apply the latest tag to the previous container image, and do a rolling update on the deployment.
You need to create a Cloud Monitoring SLO for a service that will be published soon. You want to verify that requests to the service will be addressed in fewer than 300 ms at least 90% of the time per calendar month. You need to identify the metric and evaluation method to use. What should you do? Select a latency metric for a request-based method of evaluation. Select a latency metric for a window-based method of evaluation. Select an availability metric for a request-based method of evaluation. Select an availability metric for a window-based method of evaluation.
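The difference between request-based and window-based SLO evaluation can be sketched numerically. The traffic samples below are hypothetical: each tuple is (total requests, requests served under 300 ms) for one measurement window.

```python
# Hypothetical per-minute samples: (total_requests, requests_under_300ms)
samples = [(100, 95), (100, 50), (100, 100), (100, 99)]

# Request-based evaluation: pool every request in the compliance period.
good = sum(g for _, g in samples)
total = sum(t for t, _ in samples)
request_sli = good / total            # 344 / 400 = 0.86

# Window-based evaluation: a window is "good" only if it meets a
# per-window threshold (here: >= 90% of its requests under 300 ms).
good_windows = sum(1 for t, g in samples if g / t >= 0.9)
window_sli = good_windows / len(samples)   # 3 / 4 = 0.75

print(request_sli, window_sli)
```

The same traffic yields different SLI values: the request-based method weighs every request equally, while the window-based method counts how many windows were individually acceptable.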
You have an application that runs on Cloud Run. You want to use live production traffic to test a new version of the application, while you let the quality assurance team perform manual testing. You want to limit the potential impact of any issues while testing the new version, and you must be able to roll back to a previous version of the application if needed. How should you deploy the new version? (Choose two.) Deploy the application as a new Cloud Run service. Deploy a new Cloud Run revision with a tag and use the --no-traffic option. Deploy a new Cloud Run revision without a tag and use the --no-traffic option. Deploy the new application version and use the --no-traffic option. Route production traffic to the revision’s URL. Deploy the new application version, and split traffic to the new version.
You recently noticed that one of your services has exceeded the error budget for the current rolling window period. Your company's product team is about to launch a new feature. You want to follow Site Reliability Engineering (SRE) practices. What should you do? Notify the team about the lack of error budget and ensure that all their tests are successful so the launch will not further risk the error budget. Notify the team that their error budget is used up. Negotiate with the team for a launch freeze or tolerate a slightly worse user experience. Escalate the situation and request additional error budget. Look through other metrics related to the product and find SLOs with remaining error budget. Reallocate the error budgets and allow the feature launch.
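The error-budget arithmetic behind this question can be sketched with hypothetical request counts: under a 99.9% SLO, the budget is the 0.1% of requests allowed to fail in the window, and exceeding it is what triggers the SRE launch-freeze conversation.

```python
slo = 0.999               # 99.9% availability target
budget = 1 - slo          # 0.1% of requests may fail per window

total_requests = 2_000_000    # hypothetical traffic in the window
failed_requests = 2_600       # hypothetical observed failures

allowed_failures = budget * total_requests         # ≈ 2000 failures allowed
budget_consumed = failed_requests / allowed_failures   # ≈ 1.3, i.e. 130% spent
print(allowed_failures, budget_consumed)
```

A consumed fraction above 1.0 means the budget is exhausted, so under SRE practice new feature launches are negotiated against reliability work rather than granted extra budget.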
You need to introduce postmortems into your organization. You want to ensure that the postmortem process is well received. What should you do? (Choose two.) Encourage new employees to conduct postmortems to learn through practice. Create a designated team that is responsible for conducting all postmortems. Encourage your senior leadership to acknowledge and participate in postmortems. Ensure that writing effective postmortems is a rewarded and celebrated practice. Provide your organization with a forum to critique previous postmortems.
You need to enforce several constraint templates across your Google Kubernetes Engine (GKE) clusters. The constraints include policy parameters, such as restricting the Kubernetes API. You must ensure that the policy parameters are stored in a GitHub repository and automatically applied when changes occur. What should you do? Set up a GitHub Action to trigger Cloud Build when there is a parameter change. In Cloud Build, run a gcloud CLI command to apply the change. When there is a change in GitHub, use a webhook to send a request to Anthos Service Mesh, and apply the change. Configure Anthos Config Management with the GitHub repository. When there is a change in the repository, use Anthos Config Management to apply the change. Configure Config Connector with the GitHub repository. When there is a change in the repository, use Config Connector to apply the change.
You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do? 1. Communicate your intent to the incident team. 2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately. 3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service. 1. Communicate your intent to the incident team. 2. Add a new node to the pool, and wait for the new node to report as healthy. 3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service. 1. Drain traffic from the unhealthy node and remove the node from service. 2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately. 3. Scale the pool as necessary to handle the new load. 4. Communicate your actions to the incident team. 1. Drain traffic from the unhealthy node and remove the old node from service. 2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node. 3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately. 4. Communicate your actions to the incident team.
You are configuring your CI/CD pipeline natively on Google Cloud. You want builds in a pre-production Google Kubernetes Engine (GKE) environment to be automatically load-tested before being promoted to the production GKE environment. You need to ensure that only builds that have passed this test are deployed to production. You want to follow Google-recommended practices. How should you configure this pipeline with Binary Authorization? Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using their personal private key. Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) with a service account JSON key stored as a Kubernetes Secret. Create an attestation for the builds that pass the load test by using a private key stored in Cloud Key Management Service (Cloud KMS) authenticated through Workload Identity. Create an attestation for the builds that pass the load test by requiring the lead quality assurance engineer to sign the attestation by using a key stored in Cloud Key Management Service (Cloud KMS).
You are deploying an application to Cloud Run. The application requires a password to start. Your organization requires that all passwords are rotated every 24 hours, and your application must have the latest password. You need to deploy the application with no downtime. What should you do? Store the password in Secret Manager and send the secret to the application by using environment variables. Store the password in Secret Manager and mount the secret as a volume within the application. Use Cloud Build to add your password into the application container at build time. Ensure that Artifact Registry is secured from public access. Store the password directly in the code. Use Cloud Build to rebuild and deploy the application each time the password changes.
Your company runs applications in Google Kubernetes Engine (GKE) that are deployed following a GitOps methodology. Application developers frequently create cloud resources to support their applications. You want to give developers the ability to manage infrastructure as code, while ensuring that you follow Google-recommended practices. You need to ensure that infrastructure as code reconciles periodically to avoid configuration drift. What should you do? Install and configure Config Connector in Google Kubernetes Engine (GKE). Configure Cloud Build with a Terraform builder to execute terraform plan and terraform apply commands. Create a Pod resource with a Terraform docker image to execute terraform plan and terraform apply commands. Create a Job resource with a Terraform docker image to execute terraform plan and terraform apply commands.
You are designing a system with three different environments: development, quality assurance (QA), and production. Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code? • Cloud Infrastructure (Terraform) repository is shared: different directories are different environments • GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments • Application (app source code) repositories are separated: different branches are different features • Cloud Infrastructure (Terraform) repository is shared: different directories are different environments • GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments • Application (app source code) repositories are separated: different branches are different features • Cloud Infrastructure (Terraform) repository is shared: different branches are different environments • GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments • Application (app source code) repository is shared: different directories are different features • Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments • GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments • Application (app source code) repositories are separated: different branches are different features.
You are configuring Cloud Logging for a new application that runs on a Compute Engine instance with a public IP address. A user-managed service account is attached to the instance. You confirmed that the necessary agents are running on the instance but you cannot see any log entries from the instance in Cloud Logging. You want to resolve the issue by following Google-recommended practices. What should you do? Export the service account key and configure the agents to use the key. Update the instance to use the default Compute Engine service account. Add the Logs Writer role to the service account. Enable Private Google Access on the subnet that the instance is in.
As a Site Reliability Engineer, you support an application written in Go that runs on Google Kubernetes Engine (GKE) in production. After releasing a new version of the application, you notice the application runs for about 15 minutes and then restarts. You decide to add Cloud Profiler to your application and now notice that the heap usage grows constantly until the application restarts. What should you do? Increase the CPU limit in the application deployment. Add high memory compute nodes to the cluster. Increase the memory limit in the application deployment. Add Cloud Trace to the application, and redeploy.
You are deploying a Cloud Build job that deploys Terraform code when a Git branch is updated. While testing, you noticed that the job fails. You see the following error in the build logs: Initializing the backend... Error: Failed to get existing workspaces: querying Cloud Storage failed: googleapi: Error 403 You need to resolve the issue by following Google-recommended practices. What should you do? Change the Terraform code to use local state. Create a storage bucket with the name specified in the Terraform configuration. Grant the roles/owner Identity and Access Management (IAM) role to the Cloud Build service account on the project. Grant the roles/storage.objectAdmin Identity and Access Management (IAM) role to the Cloud Build service account on the state file bucket.
Your company runs applications in Google Kubernetes Engine (GKE). Several applications rely on ephemeral volumes. You noticed some applications were unstable due to the DiskPressure node condition on the worker nodes. You need to identify which Pods are causing the issue, but you do not have execute access to workloads and nodes. What should you do? Check the node/ephemeral_storage/used_bytes metric by using Metrics Explorer. Check the container/ephemeral_storage/used_bytes metric by using Metrics Explorer. Locate all the Pods with emptyDir volumes. Use the df -h command to measure volume disk usage. Locate all the Pods with emptyDir volumes. Use the du -sh * command to measure volume disk usage.
You are designing a new Google Cloud organization for a client. Your client is concerned with the risks associated with long-lived credentials created in Google Cloud. You need to design a solution to completely eliminate the risks associated with the use of JSON service account keys while minimizing operational overhead. What should you do? Apply the constraints/iam.disableServiceAccountKeyCreation constraint to the organization. Use custom versions of predefined roles to exclude all iam.serviceAccountKeys.* service account role permissions. Apply the constraints/iam.disableServiceAccountKeyUpload constraint to the organization. Grant the roles/iam.serviceAccountKeyAdmin IAM role to organization administrators only.
You are designing a deployment technique for your applications on Google Cloud. As part of your deployment planning, you want to use live traffic to gather performance metrics for new versions of your applications. You need to test against the full production load before your applications are launched. What should you do? Use A/B testing with blue/green deployment. Use canary testing with continuous deployment. Use canary testing with rolling updates deployment. Use shadow testing with continuous deployment.
Your Cloud Run application writes unstructured logs as text strings to Cloud Logging. You want to convert the unstructured logs to JSON-based structured logs. What should you do? Modify the application to use Cloud Logging software development kit (SDK), and send log entries with a jsonPayload field. Install a Fluent Bit sidecar container, and use a JSON parser. Install the log agent in the Cloud Run container image, and use the log agent to forward logs to Cloud Logging. Configure the log agent to convert log text payload to JSON payload.
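On Cloud Run, structured logging can be as simple as writing one JSON object per line to stdout; Cloud Logging parses such lines into jsonPayload and recognizes special fields such as severity. A minimal sketch (the helper name and extra fields are hypothetical):

```python
import json
import sys

def log_structured(severity, message, **fields):
    """Emit one JSON object per line. On Cloud Run, Cloud Logging parses
    such lines into jsonPayload and honors the 'severity' field."""
    entry = {"severity": severity, "message": message, **fields}
    print(json.dumps(entry), file=sys.stdout, flush=True)

log_structured("INFO", "order processed", order_id="A-1001", latency_ms=42)
```

This is why the SDK/jsonPayload option requires only an application change, with no sidecar or agent to operate alongside the service.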
Your company is planning a large marketing event for an online retailer during the holiday shopping season. You are expecting your web application to receive a large volume of traffic in a short period. You need to prepare your application for potential failures during the event. What should you do? (Choose two.) Configure Anthos Service Mesh on the application to identify issues on the topology map. Ensure that relevant system metrics are being captured with Cloud Monitoring, and create alerts at levels of interest. Review your increased capacity requirements and plan for the required quota management. Monitor latency of your services for average percentile latency. Create alerts in Cloud Monitoring for all common failures that your application experiences.
Your company recently migrated to Google Cloud. You need to design a fast, reliable, and repeatable solution for your company to provision new projects and basic resources in Google Cloud. What should you do? Use the Google Cloud console to create projects. Write a script by using the gcloud CLI that passes the appropriate parameters from the request. Save the script in a Git repository. Write a Terraform module and save it in your source control repository. Copy and run the terraform apply command to create the new project. Use the Terraform repositories from the Cloud Foundation Toolkit. Apply the code with appropriate parameters to create the Google Cloud project and related resources.
You are configuring a CI pipeline. The build step for your CI pipeline integration testing requires access to APIs inside your private VPC network. Your security team requires that you do not expose API traffic publicly. You need to implement a solution that minimizes management overhead. What should you do? Use Cloud Build private pools to connect to the private VPC. Use Spinnaker for Google Cloud to connect to the private VPC. Use Cloud Build as a pipeline runner. Configure Internal HTTP(S) Load Balancing for API access. Use Cloud Build as a pipeline runner. Configure External HTTP(S) Load Balancing with a Google Cloud Armor policy for API access.
You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model? A B C D.
You recently migrated an ecommerce application to Google Cloud. You now need to prepare the application for the upcoming peak traffic season. You want to follow Google-recommended practices. What should you do first to prepare for the busy season? Migrate the application to Cloud Run, and use autoscaling. Create a Terraform configuration for the application's underlying infrastructure to quickly deploy to additional regions. Load test the application to profile its performance for scaling. Pre-provision the additional compute power that was used last season, and expect growth.
You are monitoring a service that uses n2-standard-2 Compute Engine instances that serve large files. Users have reported that downloads are slow. Your Cloud Monitoring dashboard shows that your VMs are running at peak network throughput. You want to improve the network throughput performance. What should you do? Add additional network interface controllers (NICs) to your VMs. Deploy a Cloud NAT gateway and attach the gateway to the subnet of the VMs. Change the machine type for your VMs to n2-standard-8. Deploy the Ops Agent to export additional monitoring metrics.
Your organization is starting to containerize with Google Cloud. You need a fully managed storage solution for container images and Helm charts. You need to identify a storage solution that has native integration into existing Google Cloud services, including Google Kubernetes Engine (GKE), Cloud Run, VPC Service Controls, and Identity and Access Management (IAM). What should you do? Use Docker to configure a Cloud Storage driver pointed at the bucket owned by your organization. Configure an open source container registry server to run in GKE with a restrictive role-based access control (RBAC) configuration. Configure Artifact Registry as an OCI-based container registry for both Helm charts and container images. Configure Container Registry as an OCI-based container registry for container images.
You work with a video rendering application that publishes small tasks as messages to a Cloud Pub/Sub topic. You need to deploy the application that will execute these tasks on multiple virtual machines (VMs). Each task takes less than 1 hour to complete. The rendering is expected to be completed within a month. You need to minimize rendering costs. What should you do? Deploy the application as a managed instance group. Deploy the application as a managed instance group. Configure a Committed Use Discount for the amount of CPU and memory required. Deploy the application as a managed instance group with Preemptible VMs. Deploy the application as a managed instance group with Preemptible VMs. Configure a Committed Use Discount for the amount of CPU and memory required.
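A back-of-the-envelope comparison for this scenario, using purely hypothetical rates (not real Compute Engine pricing): for fault-tolerant tasks that finish in under an hour, preemptible capacity typically undercuts a committed use discount, which also commits spend well beyond the one-month job.

```python
# Hypothetical hourly rates, for illustration only (not real GCP pricing):
on_demand = 0.10        # $/vCPU-hour, regular VM
preemptible = 0.02      # $/vCPU-hour, preemptible VM
cud_discount = 0.30     # assumed committed use discount off on-demand

hours = 24 * 30         # one month of rendering
vcpus = 16              # hypothetical fleet size

cost_on_demand = on_demand * vcpus * hours
cost_cud = cost_on_demand * (1 - cud_discount)
cost_preemptible = preemptible * vcpus * hours

print(cost_on_demand, cost_cud, cost_preemptible)
```

Under these assumed rates the preemptible fleet is the cheapest option, and it carries no multi-year commitment for a workload that ends within the month.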
You have a data processing pipeline that uses Cloud Dataproc to load data into BigQuery. A team of analysts works with the data using a Business Intelligence (BI) tool running on Windows Virtual Machines (VMs) in Compute Engine. The BI tool is in use 24 hours a day, 7 days a week, and will be used increasingly over the coming years. The BI tool communicates to BigQuery only. Cloud Dataproc nodes are the main part of the GCP cost of this application. You want to reduce the cost without affecting the performance. What should you do? Apply Committed Use Discounts to the BI tool VMs. Create the Cloud Dataproc cluster when loading data, and delete the cluster when no data is being loaded. Apply Committed Use Discounts to the BI tool VMs and the Cloud Dataproc nodes. Create the Cloud Dataproc cluster when loading data, and delete the cluster when no data is being loaded. Apply Committed Use Discounts to the BI tool VMs and the Cloud Dataproc nodes. Apply Committed Use Discounts to the BI tool VMs.
You support a Python application running in production on Compute Engine. You want to debug some of the application code by inspecting the value of a specific variable. What should you do? Create a Stackdriver Debugger Logpoint with the variable at a specific line location in your application’s source code, and view the value in the Logs Viewer. Use your local development environment and code editor to set up a breakpoint in the source code, run the application locally, and then inspect the value of the variable. Modify the source code of the application to log the value of the variable, deploy to the development environment, and then run the application to capture the value in Stackdriver Logging. Create a Stackdriver Debugger snapshot at a specific line location in your application’s source code, and view the value of the variable in the Google Cloud Platform Console.
You are running a production application on Compute Engine. You want to monitor the key metrics of CPU, memory, and disk I/O time. You want to ensure that the metrics are visible to the team and will be explorable if an issue occurs. What should you do? (Choose two.) Set up logs-based metrics based on your application logs to identify errors. Export key metrics to a Google Cloud Function and then analyze them for outliers. Set up alerts in Stackdriver Monitoring for key metrics breaching defined thresholds. Create a dashboard with key metrics and indicators that can be viewed by the team. Export key metrics to BigQuery and then run hourly queries on the metrics to identify outliers.
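To illustrate the alerting option, an alert policy for one key metric can be defined declaratively. The following is a sketch using Cloud Monitoring v3 API field names; the display names and the 80% threshold are illustrative assumptions, not values from the question:

```yaml
# Sketch of a Cloud Monitoring alert policy for CPU utilization.
# Display names and the 0.8 threshold are assumptions.
displayName: "High CPU utilization"
combiner: OR
conditions:
  - displayName: "CPU above 80% for 5 minutes"
    conditionThreshold:
      filter: >
        metric.type="compute.googleapis.com/instance/cpu/utilization"
        AND resource.type="gce_instance"
      comparison: COMPARISON_GT
      thresholdValue: 0.8
      duration: 300s
```

A file like this can typically be applied with `gcloud alpha monitoring policies create --policy-from-file=policy.yaml`.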
You have a Compute Engine instance that uses the default Debian image. The application hosted on this instance recently suffered a series of crashes that you weren’t able to debug in real time; the application process died suddenly every time. The application usually consumes 50% of the instance’s memory, and normally never more than 70%, but you suspect that a memory leak was responsible for the crashes. You want to validate this hypothesis. What should you do? Go to Stackdriver’s Metric Explorer and look for the “compute.googleapis.com/guest/system/problem_count” metric for that instance. Examine its value for when the application crashed in the past. In Stackdriver, create an uptime check for your application. Create an alert policy for that uptime check to be notified when your application crashes. When you receive an alert, use your usual debugging tools to investigate the behavior of the application in real time. Install the Stackdriver Monitoring agent on the instance. Go to Stackdriver’s Metric Explorer and look for the “agent.googleapis.com/memory/percent_used” metric for that instance. Examine its value for when the application crashed in the past. Install the Stackdriver Monitoring agent on the instance. Create an alert policy on the “agent.googleapis.com/memory/percent_used” metric for that instance to be alerted when the memory used is higher than 75%. When you receive an alert, use your debugging tools to investigate the behavior of the application in real time.
You have an application deployed on Google Kubernetes Engine (GKE). The application logs are captured by Stackdriver Logging. You need to remove sensitive data before it reaches the Stackdriver Logging API. What should you do? Write the log information to the container file system. Execute a second process inside the container that will filter the sensitive information before writing to Standard Output. Customize the GKE clusters’ Fluentd configuration with a filter rule. Update the Fluentd ConfigMap and DaemonSet in the GKE cluster. Configure a filter in the Stackdriver Logging UI to exclude the logs with sensitive data. Configure BigQuery as a sink for the logs from Stackdriver Logging, and then create a Data Loss Prevention job.
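As a sketch of what such a Fluentd filter rule might look like, the `record_transformer` plugin can rewrite a log record before the output plugin ships it to the Logging API. The `kubernetes.**` tag pattern and the SSN-style regex below are illustrative assumptions:

```
# Sketch of a Fluentd filter that masks SSN-like patterns in log messages
# before export; the tag pattern and regex are assumptions.
<filter kubernetes.**>
  @type record_transformer
  enable_ruby true
  <record>
    message ${record["message"].to_s.gsub(/\b\d{3}-\d{2}-\d{4}\b/, "[REDACTED]")}
  </record>
</filter>
```

A rule like this would be added to the Fluentd ConfigMap, after which the DaemonSet Pods must be restarted to pick up the change.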
Several teams in your company want to use Cloud Build to deploy to their own Google Kubernetes Engine (GKE) clusters. The clusters are in projects that are dedicated to each team. Only the teams have access to their own projects. One team should not have access to the cluster of another team. You are in charge of designing the Cloud Build setup, and want to follow Google-recommended practices. What should you do? Limit each team member’s access so that they only have access to their team’s clusters. Ask each team member to install the gcloud CLI and to authenticate themselves by running “gcloud init”. Ask each team member to execute Cloud Build builds by using “gcloud builds submit”. Create a single project for Cloud Build that all the teams will use. List the service accounts in this project and identify the one used by Cloud Build. Grant the Kubernetes Engine Developer IAM role to that service account in each team’s project. In each team’s project, list the service accounts and identify the one used by Cloud Build for each project. In each project, grant the Kubernetes Engine Developer IAM role to the service account used by Cloud Build. Ask each team to execute Cloud Build builds in their own project. In each team’s project, create a service account, download a JSON key for that service account, and grant the Kubernetes Engine Developer IAM role to that service account in that project. Create a single project for Cloud Build that all the teams will use. In that project, encrypt all the service account keys by using Cloud KMS. Grant the Cloud KMS CryptoKey Decrypter IAM role to Cloud Build’s service account. Ask each team to include in their “cloudbuild.yaml” files a step that decrypts the key of their service account, and use that key to connect to their cluster.
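As a sketch of the per-project setup, each team's `cloudbuild.yaml` can deploy to its own cluster with the `kubectl` builder, relying on that project's Cloud Build service account having the Kubernetes Engine Developer role. The cluster name, zone, and manifest path below are illustrative assumptions:

```yaml
# Sketch of a team's cloudbuild.yaml; the cluster name, zone, and manifest
# directory are assumptions. Cloud Build's service account in this project
# needs the Kubernetes Engine Developer IAM role for this to work.
steps:
  - name: gcr.io/cloud-builders/kubectl
    args: ['apply', '-f', 'k8s/']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=team-a-cluster'
```

Because each build runs in the team's own project with that project's service account, no cross-project credentials or downloaded keys are needed.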
You are deploying an application to a Kubernetes cluster that requires a username and password to connect to another service. When you deploy the application, you want to ensure that the credentials are used securely in multiple environments with minimal code changes. What should you do? Bundle the credentials with the code inside the container and secure the container registry. Store the credentials as a Kubernetes Secret and let the application access it via environment variables at runtime. Leverage a CI/CD pipeline to update the variables at build time and inject them into a templated Kubernetes application manifest. Store the credentials as a Kubernetes ConfigMap and let the application access it via environment variables at runtime.
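The Secret-based option can be sketched in manifests: a Kubernetes Secret holds the credentials, and the Deployment injects them as environment variables with `secretKeyRef`. The secret name, key names, and image below are illustrative assumptions:

```yaml
# Sketch: secret name, keys, values, and image are illustrative assumptions.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: app-user
  password: s3cr3t
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: gcr.io/my-project/my-app:latest
          env:
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: password
```

Per environment, only the Secret object changes; the application code and the Deployment manifest stay the same.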
Your Site Reliability Engineering team performs toil to archive unused data in tables within your application’s relational database. This toil is required to ensure that your application has a low latency Service Level Indicator (SLI) to meet your Service Level Objective (SLO). Toil is preventing your team from focusing on a high-priority engineering project that will improve the availability SLI of your application. You want to reduce repetitive tasks to avoid burnout, improve organizational efficiency, and follow Site Reliability Engineering recommended practices. What should you do? Identify repetitive tasks that contribute to toil and onboard additional team members for support. Identify repetitive tasks that contribute to toil and automate them. Change the SLO of your latency SLI to accommodate toil being done less often. Use this capacity to work on the availability SLI engineering project. Assign the availability SLI engineering project to the Software Engineering team.
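The automation option can be sketched as a scheduled job that performs the archival itself. A minimal example, assuming a hypothetical `events` table with an ISO-8601 `created_at` column, and using SQLite as a stand-in for the real relational database:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # assumed retention policy


def archive_stale_rows(conn: sqlite3.Connection) -> int:
    """Move rows older than the retention window into an archive table,
    then delete them from the hot table. Returns the number archived."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=RETENTION_DAYS)).isoformat()
    cur = conn.cursor()
    # Create an empty archive table with the same columns, if missing.
    cur.execute(
        "CREATE TABLE IF NOT EXISTS events_archive AS SELECT * FROM events WHERE 0"
    )
    (stale,) = cur.execute(
        "SELECT COUNT(*) FROM events WHERE created_at < ?", (cutoff,)
    ).fetchone()
    cur.execute(
        "INSERT INTO events_archive SELECT * FROM events WHERE created_at < ?",
        (cutoff,),
    )
    cur.execute("DELETE FROM events WHERE created_at < ?", (cutoff,))
    conn.commit()
    return stale
```

Run on a schedule (for example, a cron job or Cloud Scheduler trigger), this removes the manual archival toil while keeping the hot table small for the latency SLI.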
You are helping with the design of an e-commerce application. The web application receives web requests and stores sales transactions in a database. A batch job runs every hour to trigger analysis of sales numbers, available inventory, and forecasted sales numbers. You want to identify minimal Service Level Indicators (SLIs) for the application to ensure that forecasted numbers are based on the latest sales numbers. Which SLIs should you set for the application? Web application: quality; database: availability; batch job: coverage. Web application: latency; database: latency; batch job: throughput. Web application: availability; database: availability; batch job: freshness. Web application: availability, quality; database: durability; batch job: coverage.
Your application runs in Google Kubernetes Engine (GKE). You want to use Spinnaker with the Kubernetes Provider v2 to perform blue/green deployments and control which version of the application receives the traffic. What should you do? Use a Kubernetes Replica Set and use Spinnaker to create a new service for each new version of the application to be deployed. Use a Kubernetes Replica Set and use Spinnaker to update the Replica Set for each new version of the application to be deployed. Use a Kubernetes Deployment and use Spinnaker to update the deployment for each new version of the application to be deployed. Use a Kubernetes Deployment and use Spinnaker to create a new deployment object for each new version of the application to be deployed.
You support a website with a global audience. The website has a frontend web service and a backend database service that run on different clusters. All clusters are scaled to handle at least 1/3 of the total user traffic. You use 4 different regions in Google Cloud Platform and Cloud Load Balancing to direct traffic to a region closer to the user. You are applying a critical security patch to the backend database. You successfully patch the database in the first two regions, but you make a configuration error while patching the third region. The unsuccessful patching causes 50% of user requests to the third region to time out. You want to mitigate the impact of unsuccessful patching on users. What should you do? Add more capacity to the frontend of the third region. Revert the backend database in the third region and run it without the patch. Drain the requests to the third region and redirect new requests to other regions. Back up the database in the backend of the third region and restart the database.
You want to collect feedback on proposed changes from the beta users before rolling out updates systemwide. What type of deployment pattern should you implement? You should implement A/B testing. You should implement an in-place release. You should implement canary testing. You should implement a blue/green deployment.