
PCA case study


Description: Practice test

Creation Date: 2023/03/01

Category: Others

Number of questions: 23

Content:

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team. Your team recently migrated the customer portal application to Google Cloud. The load has increased on the application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to improve publishing latency. What should you do?
A. Increase the Pub/Sub Total Timeout retry value.
B. Move from a Pub/Sub subscriber pull model to a push model.
C. Turn off Pub/Sub message batching.
D. Create a backup Pub/Sub message queue.
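
The batching trade-off behind option C is visible in the Pub/Sub client library. The following is a minimal sketch using the Python client; the project and topic names are hypothetical. By default the publisher holds messages briefly (up to 100 messages or 10 ms) to build larger batches, which favors throughput over latency; forcing each message out immediately removes that wait.

    from google.cloud import pubsub_v1

    # Setting every batching limit to its minimum effectively disables
    # batching: each message is sent as soon as publish() is called.
    batch_settings = pubsub_v1.types.BatchSettings(
        max_messages=1,  # flush after a single message
        max_bytes=1,     # do not wait to accumulate bytes
        max_latency=0,   # do not wait for more messages
    )

    publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
    topic_path = publisher.topic_path("my-project", "ehr-portal-events")  # hypothetical names

    future = publisher.publish(topic_path, b"portal event payload")
    print(future.result())  # blocks until the message ID is returned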

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)
A. Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.
C. Use Firebase Authentication for EHR's user-facing applications.
D. Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.
E. Use GKE private clusters for all Kubernetes workloads.

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud network architecture for Google Kubernetes Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical requirements, what should you do to reduce the attack surface?
A. Use a private cluster with a private endpoint and master authorized networks configured.
B. Use a public cluster with firewall rules and Virtual Private Cloud (VPC) routes.
C. Use a private cluster with a public endpoint and master authorized networks configured.
D. Use a public cluster with master authorized networks enabled and firewall rules.

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed using Google Cloud services. What should you do? (Choose two.)
A. Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.
B. Configure Jenkins to utilize Kritis to cryptographically sign a container as part of a CI/CD pipeline.
C. Configure Container Registry to only allow trusted service accounts to create and deploy containers from the registry.
D. Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before deploying the workload.

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid connectivity between EHR's on-premises systems and Google Cloud. You want to follow Google's recommended practices for production-level applications. Considering the EHR Healthcare business and technical requirements, what should you do?
A. Configure two Partner Interconnect connections in one metro (city), and make sure the Interconnect connections are placed in different metro zones.
B. Configure two VPN connections from on-premises to Google Cloud, and make sure the VPN devices on-premises are in separate racks.
C. Configure Direct Peering between EHR Healthcare and Google Cloud, and make sure you are peering at least two Google locations.
D. Configure two Dedicated Interconnect connections in one metro (city) and two connections in another metro, and make sure the Interconnect connections are placed in different metro zones.

JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals. Which metrics should you track?
A. Error rates for requests from Asia.
B. Latency difference between US and Asia.
C. Total visits, error rates, and latency from Asia.
D. Total visits and average latency for users from Asia.
E. The number of character sets present in the database.

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? (Choose three.)
A. Delete the virtual machine (VM) and disks and create a new one.
B. Delete the instance, attach the disk to a new VM, and investigate.
C. Take a snapshot of the disk and connect it to a new machine to investigate.
D. Check inbound firewall rules for the network the machine is connected to.
E. Connect the machine to another network with very simple firewall rules and investigate.
F. Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?
A. Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.
B. Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.
C. Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.
D. Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available. Which two steps should they take? (Choose two.)
A. Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.
B. Begin packaging their game backend artifacts in container images and running them on Google Kubernetes Engine to improve the ability to scale up or down based on game activity.
C. Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.
D. Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.
E. Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?
A. Create network load balancers. Use preemptible Compute Engine instances.
B. Create network load balancers. Use non-preemptible Compute Engine instances.
C. Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.
D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.

Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform. Which two steps should be part of their migration plan? (Choose two.)
A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
B. Write a schema migration plan to denormalize data for better performance in BigQuery.
C. Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.
D. Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.
E. Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?
A. Kubernetes Engine, Cloud Pub/Sub, and Cloud SQL.
B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery.
C. Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow.
D. Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow.
E. Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc.

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke, what should you do?
A. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.
B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.
C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.
D. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.
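
As an illustration of the partitioning idea in option C, here is a minimal sketch using the BigQuery Python client; the project, dataset, and schema are hypothetical. Integer-range partitioning on a season column keeps each season's telemetry in its own partition, so training on only the previous season scans a fraction of the stored data.

    from google.cloud import bigquery

    client = bigquery.Client()

    table = bigquery.Table(
        "hrl-project.racing.telemetry",  # hypothetical table ID
        schema=[
            bigquery.SchemaField("season", "INTEGER"),
            bigquery.SchemaField("race_id", "STRING"),
            bigquery.SchemaField("recorded_at", "TIMESTAMP"),
            bigquery.SchemaField("payload", "JSON"),
        ],
    )
    # One partition per season; new columns can later be added to the
    # schema without rewriting existing data.
    table.range_partitioning = bigquery.RangePartitioning(
        field="season",
        range_=bigquery.PartitionRange(start=2000, end=2100, interval=1),
    )
    client.create_table(table)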

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can understand and interpret the predictions. What should you do?
A. Use Explainable AI.
B. Use Vision AI.
C. Use Google Cloud's operations suite.
D. Use Jupyter Notebooks.

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?
A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.
B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.
C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.
D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.
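
If the deployment job publishes a release notification to a Pub/Sub topic (option C), a Cloud Function subscribed to that topic runs automatically on every release, whatever time it lands. A minimal sketch of such a function in Python follows; the function and topic names are hypothetical, and the actual call into Airwolf is elided.

    import base64

    def run_airwolf(event, context):
        """Background Cloud Function triggered by a Pub/Sub release message."""
        payload = ""
        if "data" in event:
            payload = base64.b64decode(event["data"]).decode("utf-8")
        print(f"Release notification received: {payload}")
        # ... kick off the Airwolf penetration test against the new release ...

Deployed with a --trigger-topic flag (gcloud functions deploy), the function needs no schedule of its own: publishing to the topic is what starts the test run.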

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?
A. gcloud compute security-policies rules update 1000 --security-policy from-fastly --src-ip-ranges * --action allow
B. gcloud compute firewall rules update sourceiplist-fastly --priority 100 --allow tcp:443
C. gcloud compute firewall rules update hir-policy --priority 100 --target-tags=sourceiplist-fastly --allow tcp:443
D. gcloud compute security-policies rules update 1000 --security-policy hir-policy --expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" --action allow

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:
* It must provide low latency at minimal cost.
* It must be able to identify duplicate credit cards and must not store plaintext card numbers.
* It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?
A. Store the card data in Secret Manager after running a query to identify duplicates.
B. Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.
C. Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.
D. Use column-level encryption to store the data in Cloud SQL.
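
For the duplicate-detection requirement, one common deterministic approach (a keyed HMAC rather than reversible encryption; this is an illustrative assumption, not the exam's reference design) produces the same token for the same card number, so duplicates are detectable without ever persisting the plaintext PAN. A minimal Python sketch, with key handling reduced to a placeholder:

    import hmac
    import hashlib

    def tokenize(card_number: str, key: bytes) -> str:
        """Return a stable, non-reversible token for a card number."""
        return hmac.new(key, card_number.encode("utf-8"), hashlib.sha256).hexdigest()

    # Placeholder: in practice the key comes from a key-management service,
    # and annual rotation means re-tokenizing under a new key version.
    key_v1 = b"fetched-from-key-management"

    token = tokenize("4111111111111111", key_v1)
    # Same input and key always yield the same token, so a lookup on the
    # token column is enough to detect a duplicate card.
    assert token == tokenize("4111111111111111", key_v1)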

The migration of JencoMart's application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose three.)
A. A single VPN tunnel, which limits throughput.
B. A tier of Google Cloud Storage that is not suited for this task.
C. A copy command that is not suited to operate over long distances.
D. Fewer virtual machines (VMs) in GCP than on-premises machines.
E. A separate storage layer outside the VMs, which is not suited for this task.
F. Complicated internet connectivity between the on-premises infrastructure and GCP.

Your company has a Google Cloud project that uses BigQuery for data warehousing on a pay-per-use basis. You want to monitor queries in real time to discover the most costly queries and which users spend the most. What should you do?
A. 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query. 2. Open the Billing page of the project. 3. Select Reports. 4. Select BigQuery as the product and filter by the user you want to check.
B. 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query on the generated table to extract the information you need.
C. 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage. 2. Develop a Dataflow pipeline to compute the cost of queries split by users.
D. 1. Activate billing export into BigQuery. 2. Perform a BigQuery query on the billing table to extract the information you need.
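
Once a Logging sink exports BigQuery data-access audit logs back into a BigQuery dataset (option B), a query over the exported table can rank users by bytes billed. The sketch below uses the Python BigQuery client; the dataset name follows the usual audit-log export layout, but both the table and field paths here are assumptions to verify against the actual sink output.

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
    SELECT
      protopayload_auditlog.authenticationInfo.principalEmail AS user_email,
      SUM(protopayload_auditlog.servicedata_v1_bigquery
            .jobCompletedEvent.job.jobStatistics.totalBilledBytes) AS bytes_billed
    FROM `my-project.bq_audit.cloudaudit_googleapis_com_data_access`
    GROUP BY user_email
    ORDER BY bytes_billed DESC
    """
    # Print the heaviest spenders first.
    for row in client.query(sql).result():
        print(row.user_email, row.bytes_billed)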

Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. When releasing new versions of the application via a rolling deployment, the team has been causing outages. The root cause of the outages is misconfiguration of parameters that are only used in production. You want to put preventive measures in place on the platform to prevent outages. What should you do?
A. Configure liveness and readiness probes in the Pod specification.
B. Configure health checks on the managed instance group.
C. Create a Scheduled Task to check whether the application is available.
D. Configure an uptime alert in Cloud Monitoring.

You have a Compute Engine managed instance group that adds and removes Compute Engine instances from the group in response to the load on your application. The instances have a shutdown script that removes REDIS database entries associated with the instance. You see that many database entries have not been removed, and you suspect that the shutdown script is the problem. You need to ensure that the commands in the shutdown script are run reliably every time an instance is shut down. You create a Cloud Function to remove the database entries. What should you do next?
A. Modify the shutdown script to wait for 30 seconds before triggering the Cloud Function.
B. Do not use the Cloud Function. Modify the shutdown script to restart if it has not completed in 30 seconds.
C. Set up a Cloud Monitoring sink that triggers the Cloud Function after an instance removal log message arrives in Cloud Logging.
D. Modify the shutdown script to wait for 30 seconds and then publish a message to a Pub/Sub queue.
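
Shutdown scripts are best-effort: they get a limited execution window, and a crashed or preempted instance may never run one, which is why moving the cleanup out of the instance is more reliable. If a log-based sink routes instance-deletion entries to Pub/Sub, a function like the following minimal Python sketch can perform the cleanup; all names are hypothetical and the actual Redis call is elided.

    import base64
    import json

    def cleanup_instance_entries(event, context):
        """Triggered by a Pub/Sub message carrying an exported log entry."""
        entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        labels = entry.get("resource", {}).get("labels", {})
        instance_id = labels.get("instance_id", "unknown")
        print(f"Removing REDIS entries for deleted instance {instance_id}")
        # ... connect to the REDIS host and delete the keys for this instance ...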

Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?
A. Change the autoscaling metric to agent.googleapis.com/memory/percent_used.
B. Restart the affected instances on a staggered schedule.
C. SSH to each instance and restart the application process.
D. Increase the maximum number of instances in the autoscaling group.

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost. What should you do?
A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
B. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.
C. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.
D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
