Title of test:
ML-2

Description:
Machine Learning

Author:
examsure2pass

Creation Date: 20/08/2024

Category: Machine Learning

Number of questions: 95
Content:
You have successfully deployed to production a large and complex TensorFlow model trained on tabular data. You want to predict the lifetime value (LTV) field for each subscription stored in the BigQuery table named subscription.subscriptionPurchase in the project named my-fortune500-company-project. You have organized all your training code, from preprocessing data from the BigQuery table up to deploying the validated model to the Vertex AI endpoint, into a TensorFlow Extended (TFX) pipeline. You want to prevent prediction drift, i.e., a situation in which a feature's data distribution in production changes significantly over time. What should you do? Implement continuous retraining of the model daily using Vertex AI Pipelines. Add a model monitoring job where 10% of incoming predictions are sampled every 24 hours. Add a model monitoring job where 90% of incoming predictions are sampled every 24 hours. Add a model monitoring job where 10% of incoming predictions are sampled every hour.
You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do? Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset Create a custom training loop. Use a TPU with tf.distribute.TPUStrategy. Increase the batch size.
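A minimal sketch of the batch-size fix, assuming a toy Keras model and synthetic NumPy data (both hypothetical stand-ins for the real workload): MirroredStrategy splits each global batch across the replicas, so unless the global batch size is scaled by strategy.num_replicas_in_sync, each GPU processes only a quarter of the original batch and wall-clock time per epoch barely changes.

    import numpy as np
    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs

    # Scale the global batch size with the replica count so every GPU
    # receives a full per-replica batch on each step.
    per_replica_batch = 64
    global_batch_size = per_replica_batch * strategy.num_replicas_in_sync

    # Synthetic data standing in for the real training set.
    x = np.random.rand(10_000, 32).astype("float32")
    y = np.random.rand(10_000, 1).astype("float32")
    dataset = (
        tf.data.Dataset.from_tensor_slices((x, y))
        .shuffle(10_000)
        .batch(global_batch_size)
        .prefetch(tf.data.AUTOTUNE)
    )

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    model.fit(dataset, epochs=2)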
You work for a gaming company that has millions of customers around the world. All games offer a chat feature that allows players to communicate with each other in real time. Messages can be typed in more than 20 languages and are translated in real time using the Cloud Translation API. You have been asked to build an ML system to moderate the chat in real time while assuring that the performance is uniform across the various languages and without changing the serving infrastructure. You trained your first model using an in-house word2vec model for embedding the chat messages translated by the Cloud Translation API. However, the model has significant differences in performance across the different languages. How should you improve it? Add a regularization term such as the Min-Diff algorithm to the loss function. Train a classifier using the chat messages in their original language. Replace the in-house word2vec with GPT-3 or T5. Remove moderation for languages for which the false positive rate is too high.
You work for a gaming company that develops massively multiplayer online (MMO) games. You built a TensorFlow model that predicts whether players will make in-app purchases of more than $10 in the next two weeks. The model's predictions will be used to adapt each user's game experience. User data is stored in BigQuery. How should you serve your model while optimizing cost, user experience, and ease of management? Import the model into BigQuery ML. Make predictions using batch reading data from BigQuery, and push the data to Cloud SQL. Deploy the model to Vertex AI Prediction. Make predictions using batch reading data from Cloud Bigtable, and push the data to Cloud SQL. Embed the model in the mobile application. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL. Embed the model in the streaming Dataflow pipeline. Make predictions after every in-app purchase event is published in Pub/Sub, and push the data to Cloud SQL.
You are building a linear regression model on BigQuery ML to predict a customer's likelihood of purchasing your company's products. Your model uses a city name variable as a key predictive component. In order to train and serve the model, your data must be organized in columns. You want to prepare your data using the least amount of coding while maintaining the predictive variables. What should you do? Use TensorFlow to create a categorical variable with a vocabulary list. Create the vocabulary file, and upload it as part of your model to BigQuery ML. Create a new view with BigQuery that does not include a column with city information. Use Cloud Data Fusion to assign each city to a region labeled as 1, 2, 3, 4, or 5, and then use that number to represent the city in the model. Use Dataprep to transform the state column using a one-hot encoding method, and make each city a column with binary values.
You are an ML engineer at a bank that has a mobile application. Management has asked you to build an ML-based biometric authentication for the app that verifies a customer's identity based on their fingerprint. Fingerprints are considered highly sensitive personal information and cannot be downloaded and stored in the bank's databases. Which learning strategy should you recommend to train and deploy this ML model? Data Loss Prevention API Federated learning MD5 to encrypt data Differential privacy.
You are experimenting with a built-in distributed XGBoost model in Vertex AI Workbench user-managed notebooks. You use BigQuery to split your data into training and validation sets using the following queries: CREATE OR REPLACE TABLE `myproject.mydataset.training` AS (SELECT * FROM `myproject.mydataset.mytable` WHERE RAND() <= 0.8); CREATE OR REPLACE TABLE `myproject.mydataset.validation` AS (SELECT * FROM `myproject.mydataset.mytable` WHERE RAND() <= 0.2); After training the model, you achieve an area under the receiver operating characteristic curve (AUC ROC) value of 0.8, but after deploying the model to production, you notice that your model performance has dropped to an AUC ROC value of 0.65. What problem is most likely occurring? There is training-serving skew in your production environment. There is not a sufficient amount of training data. The tables that you created to hold your training and validation records share some records, and you may not be using all the data in your initial table. The RAND() function generated a number that is less than 0.2 in both instances, so every record in the validation table will also be in the training table.
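For contrast, a hash-based split is deterministic, keeps the two tables disjoint, and uses every row exactly once. A rough sketch using the BigQuery Python client; the project, dataset, and id_column names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client(project="myproject")  # assumes default credentials

    split_sql = """
    CREATE OR REPLACE TABLE `myproject.mydataset.training` AS
    SELECT * FROM `myproject.mydataset.mytable`
    WHERE ABS(MOD(FARM_FINGERPRINT(CAST(id_column AS STRING)), 10)) < 8;

    CREATE OR REPLACE TABLE `myproject.mydataset.validation` AS
    SELECT * FROM `myproject.mydataset.mytable`
    WHERE ABS(MOD(FARM_FINGERPRINT(CAST(id_column AS STRING)), 10)) >= 8;
    """
    # Both statements run as a single BigQuery script.
    client.query(split_sql).result()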
During batch training of a neural network, you notice that there is an oscillation in the loss. How should you adjust your model to ensure that it converges? Decrease the size of the training batch. Decrease the learning rate hyperparameter. Increase the learning rate hyperparameter. Increase the size of the training batch.
You work for a toy manufacturer that has been experiencing a large increase in demand. You need to build an ML model to reduce the amount of time spent by quality control inspectors checking for product defects. Faster defect detection is a priority. The factory does not have reliable Wi-Fi. Your company wants to implement the new ML model as soon as possible. Which model should you use? AutoML Vision Edge mobile-high-accuracy-1 model AutoML Vision Edge mobile-low-latency-1 model AutoML Vision model AutoML Vision Edge mobile-versatile-1 model.
You are an ML engineer in the contact center of a large enterprise. You need to build a sentiment analysis tool that predicts customer sentiment from recorded phone conversations. You need to identify the best approach to building a model while ensuring that the gender, age, and cultural differences of the customers who called the contact center do not impact any stage of the model development pipeline and results. What should you do? Convert the speech to text and extract sentiments based on the sentences. Convert the speech to text and build a model based on the words. Extract sentiment directly from the voice recordings. Convert the speech to text and extract sentiment using syntactical analysis.
You need to analyze user activity data from your company's mobile applications. Your team will use BigQuery for data analysis, transformation, and experimentation with ML algorithms. You need to ensure real-time ingestion of the user activity data into BigQuery. What should you do? Configure Pub/Sub to stream the data into BigQuery. Run an Apache Spark streaming job on Dataproc to ingest the data into BigQuery. Run a Dataflow streaming job to ingest the data into BigQuery. Configure Pub/Sub and a Dataflow streaming job to ingest the data into BigQuery.
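A rough sketch of the Pub/Sub plus Dataflow option using the Apache Beam Python SDK; the subscription, table name, and the assumption that messages are JSON-encoded are all hypothetical:

    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    SUBSCRIPTION = "projects/my-project/subscriptions/user-activity-sub"
    TABLE = "my-project:analytics.user_activity"

    options = PipelineOptions(streaming=True)  # Dataflow runner flags omitted

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(subscription=SUBSCRIPTION)
            | "Decode" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                TABLE,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )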
You work for a gaming company that manages a popular online multiplayer game where teams with 6 players play against each other in 5-minute battles. There are many new players every day. You need to build a model that automatically assigns available players to teams in real time. User research indicates that the game is more enjoyable when battles have players with similar skill levels. Which business metrics should you track to measure your model’s performance? Average time players wait before being assigned to a team Precision and recall of assigning players to teams based on their predicted versus actual ability User engagement as measured by the number of battles played daily per user Rate of return as measured by additional revenue generated minus the cost of developing a new model.
You are building an ML model to predict trends in the stock market based on a wide range of factors. While exploring the data, you notice that some features have a large range. You want to ensure that the features with the largest magnitude don’t overfit the model. What should you do? Standardize the data by transforming it with a logarithmic function. Normalize the data by scaling it to have values between 0 and 1. Use a binning strategy to replace the magnitude of each feature with the appropriate bin number. Apply a principal component analysis (PCA) to minimize the effect of any particular feature.
You work for a biotech startup that is experimenting with deep learning ML models based on properties of biological organisms. Your team frequently works on early-stage experiments with new architectures of ML models, and writes custom TensorFlow ops in C++. You train your models on large datasets and large batch sizes. Your typical batch size has 1024 examples, and each example is about 1 MB in size. The average size of a network with all weights and embeddings is 20 GB. What hardware should you choose for your models? A cluster with 2 n1-highcpu-64 machines, each with 8 NVIDIA Tesla V100 GPUs (128 GB GPU memory in total), and a n1-highcpu-64 machine with 64 vCPUs and 58 GB RAM A cluster with 2 a2-megagpu-16g machines, each with 16 NVIDIA Tesla A100 GPUs (640 GB GPU memory in total), 96 vCPUs, and 1.4 TB RAM A cluster with an n1-highcpu-64 machine with a v2-8 TPU and 64 GB RAM A cluster with 4 n1-highcpu-96 machines, each with 96 vCPUs and 86 GB RAM.
You are an ML engineer at an ecommerce company and have been tasked with building a model that predicts how much inventory the logistics team should order each month. Which approach should you take? Use a clustering algorithm to group popular items together. Give the list to the logistics team so they can increase inventory of the popular items. Use a regression model to predict how much additional inventory should be purchased each month. Give the results to the logistics team at the beginning of the month so they can increase inventory by the amount predicted by the model. Use a time series forecasting model to predict each item's monthly sales. Give the results to the logistics team so they can base inventory on the amount predicted by the model. Use a classification model to classify inventory levels as UNDER_STOCKED, OVER_STOCKED, and CORRECTLY_STOCKED. Give the report to the logistics team each month so they can fine-tune inventory levels.
You are building a TensorFlow model for a financial institution that predicts the impact of consumer spending on inflation globally. Due to the size and nature of the data, your model is long-running across all types of hardware, and you have built frequent checkpointing into the training process. Your organization has asked you to minimize cost. What hardware should you choose? A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with 4 NVIDIA P100 GPUs A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with an NVIDIA P100 GPU A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a non-preemptible v3-8 TPU A Vertex AI Workbench user-managed notebooks instance running on an n1-standard-16 with a preemptible v3-8 TPU.
You work for a company that provides an anti-spam service that flags and hides spam posts on social media platforms. Your company currently uses a list of 200,000 keywords to identify suspected spam posts. If a post contains more than a few of these keywords, the post is identified as spam. You want to start using machine learning to flag spam posts for human review. What is the main advantage of implementing machine learning for this business case? Posts can be compared to the keyword list much more quickly. New problematic phrases can be identified in spam posts. A much longer keyword list can be used to flag spam posts. Spam posts can be flagged using far fewer keywords.
One of your models is trained using data provided by a third-party data broker. The data broker does not reliably notify you of formatting changes in the data. You want to make your model training pipeline more robust to issues like this. What should you do? Use TensorFlow Data Validation to detect and flag schema anomalies. Use TensorFlow Transform to create a preprocessing component that will normalize data to the expected distribution, and replace values that don't match the schema with 0. Use tf.math to analyze the data, compute summary statistics, and flag statistical anomalies. Use custom TensorFlow functions at the start of your model training to detect and flag known formatting errors.
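A minimal sketch of the TensorFlow Data Validation option, assuming the broker's deliveries land as CSV files at hypothetical Cloud Storage paths: a schema is inferred once from a trusted baseline, and every new delivery is validated against it before training proceeds.

    import tensorflow_data_validation as tfdv

    # Infer the expected schema once from a trusted baseline delivery.
    baseline_stats = tfdv.generate_statistics_from_csv("gs://my-bucket/baseline/*.csv")
    schema = tfdv.infer_schema(baseline_stats)

    # Validate every new delivery before it reaches the training step.
    new_stats = tfdv.generate_statistics_from_csv("gs://my-bucket/latest/*.csv")
    anomalies = tfdv.validate_statistics(statistics=new_stats, schema=schema)

    if anomalies.anomaly_info:
        # Missing columns, type changes, or unexpected values stop the pipeline.
        raise ValueError(f"Schema anomalies detected: {anomalies.anomaly_info}")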
You work for a company that is developing a new video streaming platform. You have been asked to create a recommendation system that will suggest the next video for a user to watch. After a review by an AI Ethics team, you are approved to start development. Each video asset in your company’s catalog has useful metadata (e.g., content type, release date, country), but you do not have any historical user event data. How should you build the recommendation system for the first version of the product? Launch the product without machine learning. Present videos to users alphabetically, and start collecting user event data so you can develop a recommender model in the future. Launch the product without machine learning. Use simple heuristics based on content metadata to recommend similar videos to users, and start collecting user event data so you can develop a recommender model in the future. Launch the product with machine learning. Use a publicly available dataset such as MovieLens to train a model using the Recommendations AI, and then apply this trained model to your data. Launch the product with machine learning. Generate embeddings for each video by training an autoencoder on the content metadata using TensorFlow. Cluster content based on the similarity of these embeddings, and then recommend videos from the same cluster.
You recently built the first version of an image segmentation model for a self-driving car. After deploying the model, you observe a decrease in the area under the curve (AUC) metric. When analyzing the video recordings, you also discover that the model fails in highly congested traffic but works as expected when there is less traffic. What is the most likely reason for this result? The model is overfitting in areas with less traffic and underfitting in areas with more traffic. AUC is not the correct metric to evaluate this classification model. Too much data representing congested areas was used for model training. Gradients become small and vanish while backpropagating from the output to input nodes.
You are developing an ML model to predict house prices. While preparing the data, you discover that an important predictor variable, distance from the closest school, is often missing and does not have high variance. Every instance (row) in your data is important. How should you handle the missing data? Delete the rows that have missing values. Apply feature crossing with another column that does not have missing values. Predict the missing values using linear regression. Replace the missing values with zeros.
You are an ML engineer responsible for designing and implementing training pipelines for ML models. You need to create an end-to-end training pipeline for a TensorFlow model. The TensorFlow model will be trained on several terabytes of structured data. You need the pipeline to include data quality checks before training and model quality checks after training but prior to deployment. You want to minimize development time and the need for infrastructure maintenance. How should you build and orchestrate your training pipeline? Create the pipeline using Kubeflow Pipelines domain-specific language (DSL) and predefined Google Cloud components. Orchestrate the pipeline using Vertex AI Pipelines. Create the pipeline using TensorFlow Extended (TFX) and standard TFX components. Orchestrate the pipeline using Vertex AI Pipelines. Create the pipeline using Kubeflow Pipelines domain-specific language (DSL) and predefined Google Cloud components. Orchestrate the pipeline using Kubeflow Pipelines deployed on Google Kubernetes Engine. Create the pipeline using TensorFlow Extended (TFX) and standard TFX components. Orchestrate the pipeline using Kubeflow Pipelines deployed on Google Kubernetes Engine.
You manage a team of data scientists who use a cloud-based backend system to submit training jobs. This system has become very difficult to administer, and you want to use a managed service instead. The data scientists you work with use many different frameworks, including Keras, PyTorch, Theano, scikit-learn, and custom libraries. What should you do? Use Vertex AI Training to submit training jobs using any framework. Configure Kubeflow to run on Google Kubernetes Engine and submit training jobs through TFJob. Create a library of VM images on Compute Engine, and publish these images on a centralized repository. Set up Slurm workload manager to receive jobs that can be scheduled to run on your cloud infrastructure.
You are training an object detection model using a Cloud TPU v2. Training time is taking longer than expected. Based on this simplified trace obtained with a Cloud TPU profile, what action should you take to decrease training time in a cost-efficient way? Move from Cloud TPU v2 to Cloud TPU v3 and increase batch size. Move from Cloud TPU v2 to 8 NVIDIA V100 GPUs and increase batch size. Rewrite your input function to resize and reshape the input images. Rewrite your input function using parallel reads, parallel processing, and prefetch.
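A sketch of what the rewritten input function might look like, assuming the images are stored as TFRecord shards with a hypothetical feature layout: interleaved parallel reads, parallel decoding, and prefetching keep the TPU fed instead of waiting on the host.

    import tensorflow as tf

    def parse_and_resize(serialized_example):
        # Hypothetical feature spec; the real one depends on how records were written.
        features = tf.io.parse_single_example(
            serialized_example,
            {"image": tf.io.FixedLenFeature([], tf.string),
             "label": tf.io.FixedLenFeature([], tf.int64)})
        image = tf.io.decode_jpeg(features["image"], channels=3)
        image = tf.image.resize(image, [224, 224]) / 255.0
        return image, features["label"]

    files = tf.data.Dataset.list_files("gs://my-bucket/train/*.tfrecord")
    dataset = (
        files
        # Read many shards concurrently instead of one file at a time.
        .interleave(tf.data.TFRecordDataset,
                    cycle_length=16,
                    num_parallel_calls=tf.data.AUTOTUNE)
        # Decode and resize on multiple host CPU threads.
        .map(parse_and_resize, num_parallel_calls=tf.data.AUTOTUNE)
        .batch(64)
        # Overlap host-side preprocessing with accelerator compute.
        .prefetch(tf.data.AUTOTUNE)
    )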
While performing exploratory data analysis on a dataset, you find that an important categorical feature has 5% null values. You want to minimize the bias that could result from the missing values. How should you handle the missing values? Remove the rows with missing values, and upsample your dataset by 5%. Replace the missing values with the feature’s mean. Replace the missing values with a placeholder category indicating a missing value. Move the rows with missing values to your validation dataset.
You are an ML engineer on an agricultural research team working on a crop disease detection tool to detect leaf rust spots in images of crops to determine the presence of a disease. These spots, which can vary in shape and size, are correlated to the severity of the disease. You want to develop a solution that predicts the presence and severity of the disease with high accuracy. What should you do? Create an object detection model that can localize the rust spots. Develop an image segmentation ML model to locate the boundaries of the rust spots. Develop a template matching algorithm using traditional computer vision libraries. Develop an image classification ML model to predict the presence of the disease.
You have been asked to productionize a proof-of-concept ML model built using Keras. The model was trained in a Jupyter notebook on a data scientist’s local machine. The notebook contains a cell that performs data validation and a cell that performs model analysis. You need to orchestrate the steps contained in the notebook and automate the execution of these steps for weekly retraining. You expect much more training data in the future. You want your solution to take advantage of managed services while minimizing cost. What should you do? Move the Jupyter notebook to a Notebooks instance on the largest N2 machine type, and schedule the execution of the steps in the Notebooks instance using Cloud Scheduler. Write the code as a TensorFlow Extended (TFX) pipeline orchestrated with Vertex AI Pipelines. Use standard TFX components for data validation and model analysis, and use Vertex AI Pipelines for model retraining. Rewrite the steps in the Jupyter notebook as an Apache Spark job, and schedule the execution of the job on ephemeral Dataproc clusters using Cloud Scheduler. Extract the steps contained in the Jupyter notebook as Python scripts, wrap each script in an Apache Airflow BashOperator, and run the resulting directed acyclic graph (DAG) in Cloud Composer.
You are working on a system log anomaly detection model for a cybersecurity organization. You have developed the model using TensorFlow, and you plan to use it for real-time prediction. You need to create a Dataflow pipeline to ingest data via Pub/Sub and write the results to BigQuery. You want to minimize the serving latency as much as possible. What should you do? Containerize the model prediction logic in Cloud Run, which is invoked by Dataflow. Load the model directly into the Dataflow job as a dependency, and use it for prediction. Deploy the model to a Vertex AI endpoint, and invoke this endpoint in the Dataflow job. Deploy the model in a TFServing container on Google Kubernetes Engine, and invoke it in the Dataflow job.
You are an ML engineer at a mobile gaming company. A data scientist on your team recently trained a TensorFlow model, and you are responsible for deploying this model into a mobile application. You discover that the inference latency of the current model doesn’t meet production requirements. You need to reduce the inference time by 50%, and you are willing to accept a small decrease in model accuracy in order to reach the latency requirement. Without training a new model, which model optimization technique for reducing latency should you try first? Weight pruning Dynamic range quantization Model distillation Dimensionality reduction.
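A minimal sketch of dynamic range quantization with the TensorFlow Lite converter; the SavedModel path is hypothetical. Weights are stored as 8-bit integers, no retraining or calibration dataset is required, and the smaller model typically cuts on-device inference latency.

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")
    # Dynamic range quantization: post-training, weights-only, no representative dataset.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    with open("model_quantized.tflite", "wb") as f:
        f.write(tflite_model)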
You work on a data science team at a bank and are creating an ML model to predict loan default risk. You have collected and cleaned hundreds of millions of records worth of training data in a BigQuery table, and you now want to develop and compare multiple models on this data using TensorFlow and Vertex AI. You want to minimize any bottlenecks during the data ingestion stage while considering scalability. What should you do? Use the BigQuery client library to load data into a dataframe, and use tf.data.Dataset.from_tensor_slices() to read it. Export data to CSV files in Cloud Storage, and use tf.data.TextLineDataset() to read them. Convert the data into TFRecords, and use tf.data.TFRecordDataset() to read them. Use TensorFlow I/O's BigQuery Reader to directly read the data.
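A rough sketch of the TensorFlow I/O BigQuery reader, with hypothetical project, dataset, table, and column names; exact argument names may differ slightly between tensorflow-io releases. The reader streams rows in parallel from the BigQuery Storage API directly into a tf.data pipeline, avoiding intermediate CSV exports.

    import tensorflow as tf
    from tensorflow_io.bigquery import BigQueryClient

    PROJECT_ID, DATASET_ID, TABLE_ID = "my-project", "risk", "loan_training"

    client = BigQueryClient()
    read_session = client.read_session(
        "projects/" + PROJECT_ID,
        PROJECT_ID,
        TABLE_ID,
        DATASET_ID,
        selected_fields=["income", "loan_amount", "defaulted"],
        output_types=[tf.float64, tf.float64, tf.int64],
        requested_streams=4,  # parallel read streams from the Storage API
    )

    dataset = (
        read_session.parallel_read_rows()
        # Rows arrive as an ordered dict keyed by column name.
        .map(lambda row: ((row["income"], row["loan_amount"]), row["defaulted"]))
        .batch(1024)
        .prefetch(tf.data.AUTOTUNE)
    )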
You have recently created a proof-of-concept (POC) deep learning model. You are satisfied with the overall architecture, but you need to determine the value for a couple of hyperparameters. You want to perform hyperparameter tuning on Vertex AI to determine both the appropriate embedding dimension for a categorical feature used by your model and the optimal learning rate. You configure the following settings: •For the embedding dimension, you set the type to INTEGER with a minValue of 16 and maxValue of 64. •For the learning rate, you set the type to DOUBLE with a minValue of 10e-05 and maxValue of 10e-02. You are using the default Bayesian optimization tuning algorithm, and you want to maximize model accuracy. Training time is not a concern. How should you set the hyperparameter scaling for each hyperparameter and the maxParallelTrials? Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate, and a large number of parallel trials. Use UNIT_LINEAR_SCALE for the embedding dimension, UNIT_LOG_SCALE for the learning rate, and a small number of parallel trials. Use UNIT_LOG_SCALE for the embedding dimension, UNIT_LINEAR_SCALE for the learning rate, and a large number of parallel trials. Use UNIT_LOG_SCALE for the embedding dimension, UNIT_LINEAR_SCALE for the learning rate, and a small number of parallel trials.
You are the Director of Data Science at a large company, and your Data Science team has recently begun using the Kubeflow Pipelines SDK to orchestrate their training pipelines. Your team is struggling to integrate their custom Python code into the Kubeflow Pipelines SDK. How should you instruct them to proceed in order to quickly integrate their code with the Kubeflow Pipelines SDK? Use the func_to_container_op function to create custom components from the Python code. Use the predefined components available in the Kubeflow Pipelines SDK to access Dataproc, and run the custom code there. Package the custom Python code into Docker containers, and use the load_component_from_file function to import the containers into the pipeline. Deploy the custom Python code to Cloud Functions, and use Kubeflow Pipelines to trigger the Cloud Function.
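A minimal sketch of func_to_container_op from the v1 Kubeflow Pipelines SDK; the custom function, pipeline name, and default argument are hypothetical. The decorator turns a plain Python function into a pipeline component without hand-writing a Dockerfile or component YAML.

    import kfp
    from kfp.components import func_to_container_op

    @func_to_container_op
    def word_count_op(text: str) -> int:
        """Hypothetical stand-in for the team's existing custom Python code."""
        return len(text.split())

    @kfp.dsl.pipeline(name="custom-code-pipeline",
                      description="Wraps existing Python code as components.")
    def pipeline(text: str = "hello kubeflow pipelines"):
        word_count_op(text)

    if __name__ == "__main__":
        kfp.compiler.Compiler().compile(pipeline, "custom_code_pipeline.yaml")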
You work for the AI team of an automobile company, and you are developing a visual defect detection model using TensorFlow and Keras. To improve your model performance, you want to incorporate some image augmentation functions such as translation, cropping, and contrast tweaking. You randomly apply these functions to each training batch. You want to optimize your data processing pipeline for run time and compute resources utilization. What should you do? Embed the augmentation functions dynamically in the tf.data pipeline. Embed the augmentation functions dynamically as part of Keras generators. Use Dataflow to create all possible augmentations, and store them as TFRecords. Use Dataflow to create the augmentations dynamically per training run, and stage them as TFRecords.
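A minimal sketch of embedding the random augmentations directly in the tf.data pipeline, with synthetic tensors standing in for the real defect-image dataset; padding plus random crop approximates translation, and the map runs on parallel host threads each epoch rather than materializing augmented copies up front.

    import tensorflow as tf

    def augment(image, label):
        # Random translation (via pad + crop), contrast tweak, and flip per example.
        image = tf.image.resize_with_crop_or_pad(image, 240, 240)
        image = tf.image.random_crop(image, size=[224, 224, 3])
        image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
        image = tf.image.random_flip_left_right(image)
        return image, label

    # Synthetic stand-in for the real training images and labels.
    images = tf.random.uniform([128, 224, 224, 3])
    labels = tf.random.uniform([128], maxval=2, dtype=tf.int32)

    train_ds = (
        tf.data.Dataset.from_tensor_slices((images, labels))
        .shuffle(128)
        .map(augment, num_parallel_calls=tf.data.AUTOTUNE)  # new random ops every epoch
        .batch(32)
        .prefetch(tf.data.AUTOTUNE)
    )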
You work for an online publisher that delivers news articles to over 50 million readers. You have built an AI model that recommends content for the company’s weekly newsletter. A recommendation is considered successful if the article is opened within two days of the newsletter’s published date and the user remains on the page for at least one minute. All the information needed to compute the success metric is available in BigQuery and is updated hourly. The model is trained on eight weeks of data, on average its performance degrades below the acceptable baseline after five weeks, and training time is 12 hours. You want to ensure that the model’s performance is above the acceptable baseline while minimizing cost. How should you monitor the model to determine when retraining is necessary? Use Vertex AI Model Monitoring to detect skew of the input features with a sample rate of 100% and a monitoring frequency of two days. Schedule a cron job in Cloud Tasks to retrain the model every week before the newsletter is created. Schedule a weekly query in BigQuery to compute the success metric. Schedule a daily Dataflow job in Cloud Composer to compute the success metric.
You deployed an ML model into production a year ago. Every month, you collect all raw requests that were sent to your model prediction service during the previous month. You send a subset of these requests to a human labeling service to evaluate your model’s performance. After a year, you notice that your model's performance sometimes degrades significantly after a month, while other times it takes several months to notice any decrease in performance. The labeling service is costly, but you also need to avoid large performance degradations. You want to determine how often you should retrain your model to maintain a high level of performance while minimizing cost. What should you do? Train an anomaly detection model on the training dataset, and run all incoming requests through this model. If an anomaly is detected, send the most recent serving data to the labeling service. Identify temporal patterns in your model’s performance over the previous year. Based on these patterns, create a schedule for sending serving data to the labeling service for the next year. Compare the cost of the labeling service with the lost revenue due to model performance degradation over the past year. If the lost revenue is greater than the cost of the labeling service, increase the frequency of model retraining; otherwise, decrease the model retraining frequency. Run training-serving skew detection batch jobs every few days to compare the aggregate statistics of the features in the training dataset with recent serving data. If skew is detected, send the most recent serving data to the labeling service.
You work for a company that manages a ticketing platform for a large chain of cinemas. Customers use a mobile app to search for movies they’re interested in and purchase tickets in the app. Ticket purchase requests are sent to Pub/Sub and are processed with a Dataflow streaming pipeline configured to conduct the following steps: 1. Check for availability of the movie tickets at the selected cinema. 2. Assign the ticket price and accept payment. 3. Reserve the tickets at the selected cinema. 4. Send successful purchases to your database. Each step in this process has low latency requirements (less than 50 milliseconds). You have developed a logistic regression model with BigQuery ML that predicts whether offering a promo code for free popcorn increases the chance of a ticket purchase, and this prediction should be added to the ticket purchase process. You want to identify the simplest way to deploy this model to production while adding minimal latency. What should you do? Run batch inference with BigQuery ML every five minutes on each new set of tickets issued. Export your model in TensorFlow format, and add a tfx_bsl.public.beam.RunInference step to the Dataflow pipeline. Export your model in TensorFlow format, deploy it on Vertex AI, and query the prediction endpoint from your streaming pipeline. Convert your model with TensorFlow Lite (TFLite), and add it to the mobile app so that the promo code and the incoming request arrive together in Pub/Sub.
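A rough sketch of the RunInference option, assuming the BigQuery ML model has been exported as a TensorFlow SavedModel to a hypothetical Cloud Storage path; the Pub/Sub subscription and the helper that builds tf.train.Example inputs are also hypothetical placeholders for existing pipeline steps.

    import apache_beam as beam
    import tensorflow as tf
    from apache_beam.options.pipeline_options import PipelineOptions
    from tfx_bsl.public.beam import RunInference
    from tfx_bsl.public.proto import model_spec_pb2

    def build_tf_example(msg_bytes):
        # Hypothetical: real code would parse the purchase message fields.
        return tf.train.Example(features=tf.train.Features(feature={
            "num_tickets": tf.train.Feature(int64_list=tf.train.Int64List(value=[2])),
        }))

    inference_spec = model_spec_pb2.InferenceSpecType(
        saved_model_spec=model_spec_pb2.SavedModelSpec(
            model_path="gs://my-bucket/promo_model/"  # exported BigQuery ML model
        )
    )

    options = PipelineOptions(streaming=True)
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadPurchases" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/ticket-purchases")
            | "ToExamples" >> beam.Map(build_tf_example)
            # In-pipeline inference: no extra network hop to an external endpoint.
            | "Predict" >> RunInference(inference_spec)
            # Downstream steps would attach the promo code based on the inference results.
        )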
You work on a team in a data center that is responsible for server maintenance. Your management team wants you to build a predictive maintenance solution that uses monitoring data to detect potential server failures. Incident data has not been labeled yet. What should you do first? Train a time-series model to predict the machines’ performance values. Configure an alert if a machine’s actual performance values significantly differ from the predicted performance values. Develop a simple heuristic (e.g., based on z-score) to label the machines’ historical performance data. Use this heuristic to monitor server performance in real time. Develop a simple heuristic (e.g., based on z-score) to label the machines’ historical performance data. Train a model to predict anomalies based on this labeled dataset. Hire a team of qualified analysts to review and label the machines’ historical performance data. Train a model based on this manually labeled dataset.
You work for a retailer that sells clothes to customers around the world. You have been tasked with ensuring that ML models are built in a secure manner. Specifically, you need to protect sensitive customer data that might be used in the models. You have identified four fields containing sensitive data that are being used by your data science team: AGE, IS_EXISTING_CUSTOMER, LATITUDE_LONGITUDE, and SHIRT_SIZE. What should you do with the data before it is made available to the data science team for training purposes? Tokenize all of the fields using hashed dummy values to replace the real values. Use principal component analysis (PCA) to reduce the four sensitive fields to one PCA vector. Coarsen the data by putting AGE into quantiles and rounding LATITUDE_LONGITUDE into single precision. The other two fields are already as coarse as possible. Remove all sensitive data fields, and ask the data science team to build their models using non-sensitive data.
You work for a magazine publisher and have been tasked with predicting whether customers will cancel their annual subscription. In your exploratory data analysis, you find that 90% of individuals renew their subscription every year, and only 10% of individuals cancel their subscription. After training a NN Classifier, your model predicts those who cancel their subscription with 99% accuracy and predicts those who renew their subscription with 82% accuracy. How should you interpret these results? This is not a good result because the model should have a higher accuracy for those who renew their subscription than for those who cancel their subscription. This is not a good result because the model is performing worse than predicting that people will always renew their subscription. This is a good result because predicting those who cancel their subscription is more difficult, since there is less data for this group. This is a good result because the accuracy across both groups is greater than 80%.
You have built a model that is trained on data stored in Parquet files. You access the data through a Hive table hosted on Google Cloud. You preprocessed this data with PySpark and exported it as a CSV file into Cloud Storage. After preprocessing, you execute additional steps to train and evaluate your model. You want to parametrize this model training in Kubeflow Pipelines. What should you do? Remove the data transformation step from your pipeline. Containerize the PySpark transformation step, and add it to your pipeline. Add a ContainerOp to your pipeline that spins up a Dataproc cluster, runs a transformation, and then saves the transformed data in Cloud Storage. Deploy Apache Spark at a separate node pool in a Google Kubernetes Engine cluster. Add a ContainerOp to your pipeline that invokes a corresponding transformation job for this Spark instance.
You have developed an ML model to detect the sentiment of users’ posts on your company's social media page to identify outages or bugs. You are using Dataflow to provide real-time predictions on data ingested from Pub/Sub. You plan to have multiple training iterations for your model and keep the latest two versions live after every run. You want to split the traffic between the versions in an 80:20 ratio, with the newest model getting the majority of the traffic. You want to keep the pipeline as simple as possible, with minimal management required. What should you do? Deploy the models to a Vertex AI endpoint using the traffic-split=0=80, PREVIOUS_MODEL_ID=20 configuration. Wrap the models inside an App Engine application using the --splits PREVIOUS_VERSION=0.2, NEW_VERSION=0.8 configuration Wrap the models inside a Cloud Run container using the REVISION1=20, REVISION2=80 revision configuration. Implement random splitting in Dataflow using beam.Partition() with a partition function calling a Vertex AI endpoint.
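A minimal sketch of the Vertex AI endpoint traffic-split option using the Python SDK; the project, endpoint ID, model resource name, and the previously deployed model's ID are hypothetical. The key "0" refers to the model being deployed in this call.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")

    endpoint = aiplatform.Endpoint("1234567890")  # existing endpoint ID
    new_model = aiplatform.Model("projects/my-project/locations/us-central1/models/987")

    endpoint.deploy(
        model=new_model,
        machine_type="n1-standard-4",
        # 80% to the newly deployed model ("0"), 20% to the previous deployment.
        traffic_split={"0": 80, "PREVIOUS_DEPLOYED_MODEL_ID": 20},
    )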
You are developing an image recognition model using PyTorch based on ResNet50 architecture. Your code is working fine on your local laptop on a small subsample. Your full dataset has 200k labeled images. You want to quickly scale your training workload while minimizing cost. You plan to use 4 V100 GPUs. What should you do? Create a Google Kubernetes Engine cluster with a node pool that has 4 V100 GPUs. Prepare and submit a TFJob operator to this node pool. Create a Vertex AI Workbench user-managed notebooks instance with 4 V100 GPUs, and use it to train your model. Package your code with Setuptools, and use a pre-built container. Train your model with Vertex AI using a custom tier that contains the required GPUs. Configure a Compute Engine VM with all the dependencies that launches the training. Train your model with Vertex AI using a custom tier that contains the required GPUs.
You have trained a DNN regressor with TensorFlow to predict housing prices using a set of predictive features. Your default precision is tf.float64, and you use a standard TensorFlow estimator: Your model performs well, but just before deploying it to production, you discover that your current serving latency is 10ms @ 90 percentile and you currently serve on CPUs. Your production requirements expect a model latency of 8ms @ 90 percentile. You're willing to accept a small decrease in performance in order to reach the latency requirement. Therefore your plan is to improve latency while evaluating how much the model's prediction decreases. What should you first try to quickly lower the serving latency? Switch from CPU to GPU serving. Apply quantization to your SavedModel by reducing the floating point precision to tf.float16. Increase the dropout rate to 0.8 and retrain your model. Increase the dropout rate to 0.8 in _PREDICT mode by adjusting the TensorFlow Serving parameters.
You work on the data science team at a manufacturing company. You are reviewing the company's historical sales data, which has hundreds of millions of records. For your exploratory data analysis, you need to calculate descriptive statistics such as mean, median, and mode; conduct complex statistical tests for hypothesis testing; and plot variations of the features over time. You want to use as much of the sales data as possible in your analyses while minimizing computational resources. What should you do? Visualize the time plots in Google Data Studio. Import the dataset into Vertex AI Workbench user-managed notebooks. Use this data to calculate the descriptive statistics and run the statistical analyses. Spin up a Vertex AI Workbench user-managed notebooks instance and import the dataset. Use this data to create statistical and visual analyses. Use BigQuery to calculate the descriptive statistics. Use Vertex AI Workbench user-managed notebooks to visualize the time plots and run the statistical analyses. Use BigQuery to calculate the descriptive statistics, and use Google Data Studio to visualize the time plots. Use Vertex AI Workbench user-managed notebooks to run the statistical analyses.
Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort? Use Vertex AI Pipelines to execute the experiments. Query the results stored in MetadataStore using the Vertex AI API. Use Vertex AI Training to execute the experiments. Write the accuracy metrics to BigQuery, and query the results using the BigQuery API. Use Vertex AI Training to execute the experiments. Write the accuracy metrics to Cloud Monitoring, and query the results using the Monitoring API. Use Vertex AI Workbench user-managed notebooks to execute the experiments. Collect the results in a shared Google Sheets file, and query the results using the Google Sheets API.
You are training an ML model using data stored in BigQuery that contains several values that are considered Personally Identifiable Information (PII). You need to reduce the sensitivity of the dataset before training your model. Every column is critical to your model. How should you proceed? Using Dataflow, ingest the columns with sensitive data from BigQuery, and then randomize the values in each sensitive column. Use the Cloud Data Loss Prevention (DLP) API to scan for sensitive data, and use Dataflow with the DLP API to encrypt sensitive values with Format Preserving Encryption. Use the Cloud Data Loss Prevention (DLP) API to scan for sensitive data, and use Dataflow to replace all sensitive data by using the encryption algorithm AES-256 with a salt. Before training, use BigQuery to select only the columns that do not contain sensitive data. Create an authorized view of the data so that sensitive values cannot be accessed by unauthorized individuals.
You recently deployed an ML model. Three months after deployment, you notice that your model is underperforming on certain subgroups, thus potentially leading to biased results. You suspect that the inequitable performance is due to class imbalances in the training data, but you cannot collect more data. What should you do? (Choose two.) Remove training examples of high-performing subgroups, and retrain the model. Add an additional objective to penalize the model more for errors made on the minority class, and retrain the model. Remove the features that have the highest correlations with the majority class. Upsample or reweight your existing training data, and retrain the model. Redeploy the model, and provide a label explaining the model's behavior to users.
You are working on a binary classification ML algorithm that detects whether an image of a classified scanned document contains a company's logo. In the dataset, 96% of examples don't have the logo, so the dataset is very skewed. Which metric would give you the most confidence in your model? Precision Recall RMSE F1 score.
While running a model training pipeline on Vertex AI, you discover that the evaluation step is failing because of an out-of-memory error. You are currently using TensorFlow Model Analysis (TFMA) with a standard Evaluator TensorFlow Extended (TFX) pipeline component for the evaluation step. You want to stabilize the pipeline without downgrading the evaluation quality while minimizing infrastructure overhead. What should you do? Include the flag --runner=DataflowRunner in beam_pipeline_args to run the evaluation step on Dataflow. Move the evaluation step out of your pipeline and run it on custom Compute Engine VMs with sufficient memory. Migrate your pipeline to Kubeflow hosted on Google Kubernetes Engine, and specify the appropriate node parameters for the evaluation step. Add tfma.MetricsSpec() to limit the number of metrics in the evaluation step.
You are developing an ML model using a dataset with categorical input variables. You have randomly split half of the data into training and test sets. After applying one-hot encoding on the categorical variables in the training set, you discover that one categorical variable is missing from the test set. What should you do? Use sparse representation in the test set. Randomly redistribute the data, with 70% for the training set and 30% for the test set. Apply one-hot encoding on the categorical variables in the test data. Collect more data representing all categories.
You are developing a classification model to support predictions for your company's various products. The dataset you were given for model development has class imbalance. You need to minimize false positives and false negatives. What evaluation metric should you use to properly train the model? F1 score Recall Accuracy Precision.
You need to build classification workflows over several structured datasets currently stored in BigQuery. Because you will be performing the classification several times, you want to complete the following steps without writing code: exploratory data analysis, feature selection, model building, training, and hyperparameter tuning and serving. What should you do? Train a classification Vertex AutoML model Train a TensorFlow model on Vertex AI. Run a logistic regression job on BigQuery ML. Use scikit-learn in Vertex AI Workbench user-managed notebooks with pandas library.
You recently developed a deep learning model. To test your new model, you trained it for a few epochs on a large dataset. You observe that the training and validation losses barely changed during the training run. You want to quickly debug your model. What should you do first? Verify that your model can obtain a low loss on a small subset of the dataset. Add handcrafted features to inject your domain knowledge into the model. Use the Vertex AI hyperparameter tuning service to identify a better learning rate. Use hardware accelerators and train your model for more epochs.
Your organization manages an online message board. A few months ago, you discovered an increase in toxic language and bullying on the message board. You deployed an automated text classifier that flags certain comments as toxic or harmful. Now some users are reporting that benign comments referencing their religion are being misclassified as abusive. Upon further inspection, you find that your classifier's false positive rate is higher for comments that reference certain underrepresented religious groups. Your team has a limited budget and is already overextended. What should you do? Add synthetic training data where those phrases are used in non-toxic ways. Remove the model and replace it with human moderation. Replace your model with a different text classifier. Raise the threshold for comments to be considered toxic or harmful.
You work for a magazine distributor and need to build a model that predicts which customers will renew their subscriptions for the upcoming year. Using your company's historical data as your training set, you created a TensorFlow model and deployed it to Vertex AI. You need to determine which customer attribute has the most predictive power for each prediction served by the model. What should you do? Stream prediction results to BigQuery. Use BigQuery's CORR(X1, X2) function to calculate the Pearson correlation coefficient between each feature and the target variable. Use Vertex Explainable AI. Submit each prediction request with the 'explain' keyword to retrieve feature attributions using the sampled Shapley method. Use Vertex AI Workbench user-managed notebooks to perform a Lasso regression analysis on your model, which will eliminate features that do not provide a strong signal. Use the What-If tool in Google Cloud to determine how your model will perform when individual features are excluded. Rank the feature importance in order of those that caused the most significant performance drop when removed from the model.
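A minimal sketch of querying feature attributions through Vertex Explainable AI with the Python SDK, assuming the model was deployed with an explanation spec (sampled Shapley) and using hypothetical endpoint and feature names:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")
    endpoint = aiplatform.Endpoint("1234567890")  # endpoint serving the renewal model

    # One subscriber record; the field names are illustrative only.
    instance = {"tenure_months": 14, "issues_read_last_year": 9, "autorenew": 1}

    response = endpoint.explain(instances=[instance])
    attribution = response.explanations[0].attributions[0]

    print("Predicted renewal score:", response.predictions[0])
    # Per-feature contribution to this prediction (sampled Shapley values).
    print("Feature attributions:", attribution.feature_attributions)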
You are an ML engineer at a manufacturing company. You are creating a classification model for a predictive maintenance use case. You need to predict whether a crucial machine will fail in the next three days so that the repair crew has enough time to fix the machine before it breaks. Regular maintenance of the machine is relatively inexpensive, but a failure would be very costly. You have trained several binary classifiers to predict whether the machine will fail, where a prediction of 1 means that the ML model predicts a failure. You are now evaluating each model on an evaluation dataset. You want to choose a model that prioritizes detection while ensuring that more than 50% of the maintenance jobs triggered by your model address an imminent machine failure. Which model should you choose? The model with the highest area under the receiver operating characteristic curve (AUC ROC) and precision greater than 0.5 The model with the lowest root mean squared error (RMSE) and recall greater than 0.5. The model with the highest recall where precision is greater than 0.5. The model with the highest precision where recall is greater than 0.5.
You built a custom ML model using scikit-learn. Training time is taking longer than expected. You decide to migrate your model to Vertex AI Training, and you want to improve the model’s training time. What should you try out first? Train your model in a distributed mode using multiple Compute Engine VMs. Train your model using Vertex AI Training with CPUs. Migrate your model to TensorFlow, and train it using Vertex AI Training. Train your model using Vertex AI Training with GPUs.
You are an ML engineer at a retail company. You have built a model that predicts a coupon to offer an ecommerce customer at checkout based on the items in their cart. When a customer goes to checkout, your serving pipeline, which is hosted on Google Cloud, joins the customer's existing cart with a row in a BigQuery table that contains the customers' historic purchase behavior and uses that as the model's input. The web team is reporting that your model is returning predictions too slowly to load the coupon offer with the rest of the web page. How should you speed up your model's predictions? Attach an NVIDIA P100 GPU to your deployed model’s instance. Use a low latency database for the customers’ historic purchase behavior. Deploy your model to more instances behind a load balancer to distribute traffic. Create a materialized view in BigQuery with the necessary data for predictions.
You work for a small company that has deployed an ML model with autoscaling on Vertex AI to serve online predictions in a production environment. The current model receives about 20 prediction requests per hour with an average response time of one second. You have retrained the same model on a new batch of data, and now you are canary testing it, sending ~10% of production traffic to the new model. During this canary test, you notice that prediction requests for your new model are taking between 30 and 180 seconds to complete. What should you do? Submit a request to raise your project quota to ensure that multiple prediction services can run concurrently. Turn off auto-scaling for the online prediction service of your new model. Use manual scaling with one node always available. Remove your new model from the production environment. Compare the new model and existing model codes to identify the cause of the performance bottleneck. Remove your new model from the production environment. For a short trial period, send all incoming prediction requests to BigQuery. Request batch predictions from your new model, and then use the Data Labeling Service to validate your model’s performance before promoting it to production.
You want to train an AutoML model to predict house prices by using a small public dataset stored in BigQuery. You need to prepare the data and want to use the simplest, most efficient approach. What should you do? Write a query that preprocesses the data by using BigQuery and creates a new table. Create a Vertex AI managed dataset with the new table as the data source. Use Dataflow to preprocess the data. Write the output in TFRecord format to a Cloud Storage bucket. Write a query that preprocesses the data by using BigQuery. Export the query results as CSV files, and use those files to create a Vertex AI managed dataset. Use a Vertex AI Workbench notebook instance to preprocess the data by using the pandas library. Export the data as CSV files, and use those files to create a Vertex AI managed dataset.
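A minimal sketch of the BigQuery-then-managed-dataset option; the project, dataset, table, and column names are hypothetical. The preprocessing stays in SQL, and the new table is registered directly as the source of a Vertex AI managed dataset for AutoML.

    from google.cloud import aiplatform, bigquery

    bq = bigquery.Client(project="my-project")

    # Preprocess with SQL and materialize a new table.
    bq.query("""
    CREATE OR REPLACE TABLE `my-project.housing.train_prepared` AS
    SELECT
      * EXCEPT(listing_date),
      EXTRACT(YEAR FROM listing_date) AS listing_year
    FROM `my-project.housing.raw_listings`
    WHERE price IS NOT NULL
    """).result()

    # Point a Vertex AI managed dataset at the prepared table.
    aiplatform.init(project="my-project", location="us-central1")
    dataset = aiplatform.TabularDataset.create(
        display_name="house-prices",
        bq_source="bq://my-project.housing.train_prepared",
    )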
You developed a Vertex AI ML pipeline that consists of preprocessing and training steps, and each set of steps runs on a separate custom Docker image. Your organization uses GitHub and GitHub Actions as CI/CD to run unit and integration tests. You need to automate the model retraining workflow so that it can be initiated both manually and when a new version of the code is merged in the main branch. You want to minimize the steps required to build the workflow while also allowing for maximum flexibility. How should you configure the CI/CD workflow? Trigger a Cloud Build workflow to run tests, build custom Docker images, push the images to Artifact Registry, and launch the pipeline in Vertex AI Pipelines. Trigger GitHub Actions to run the tests, launch a job on Cloud Run to build custom Docker images, push the images to Artifact Registry, and launch the pipeline in Vertex AI Pipelines. Trigger GitHub Actions to run the tests, build custom Docker images, push the images to Artifact Registry, and launch the pipeline in Vertex AI Pipelines. Trigger GitHub Actions to run the tests, launch a Cloud Build workflow to build custom Docker images, push the images to Artifact Registry, and launch the pipeline in Vertex AI Pipelines.
You are working with a dataset that contains customer transactions. You need to build an ML model to predict customer purchase behavior. You plan to develop the model in BigQuery ML, and export it to Cloud Storage for online prediction. You notice that the input data contains a few categorical features, including product category and payment method. You want to deploy the model as quickly as possible. What should you do? Use the TRANSFORM clause with the ML.ONE_HOT_ENCODER function on the categorical features at model creation and select the categorical and non-categorical features. Use the ML.ONE_HOT_ENCODER function on the categorical features and select the encoded categorical features and non-categorical features as inputs to create your model. Use the CREATE MODEL statement and select the categorical and non-categorical features. Use the ML.MULTI_HOT_ENCODER function on the categorical features, and select the encoded categorical features and non-categorical features as inputs to create your model.
You need to develop an image classification model by using a large dataset that contains labeled images in a Cloud Storage bucket. What should you do? Use Vertex AI Pipelines with the Kubeflow Pipelines SDK to create a pipeline that reads the images from Cloud Storage and trains the model. Use Vertex AI Pipelines with TensorFlow Extended (TFX) to create a pipeline that reads the images from Cloud Storage and trains the model. Import the labeled images as a managed dataset in Vertex AI and use AutoML to train the model. Convert the image dataset to a tabular format using Dataflow. Load the data into BigQuery and use BigQuery ML to train the model.
You are developing a model to detect fraudulent credit card transactions. You need to prioritize detection, because missing even one fraudulent transaction could severely impact the credit card holder. You used AutoML to train a model on users' profile information and credit card transaction data. After training the initial model, you notice that the model is failing to detect many fraudulent transactions. How should you adjust the training parameters in AutoML to improve model performance? (Choose two.) Increase the score threshold. Decrease the score threshold. Add more positive examples to the training set. Add more negative examples to the training set. Reduce the maximum number of node hours for training.
You need to deploy a scikit-learn classification model to production. The model must be able to serve requests 24/7, and you expect millions of requests per second to the production application from 8 am to 7 pm. You need to minimize the cost of deployment. What should you do? Deploy an online Vertex AI prediction endpoint. Set the max replica count to 1. Deploy an online Vertex AI prediction endpoint. Set the max replica count to 100. Deploy an online Vertex AI prediction endpoint with one GPU per replica. Set the max replica count to 1. Deploy an online Vertex AI prediction endpoint with one GPU per replica. Set the max replica count to 100.
You work with a team of researchers to develop state-of-the-art algorithms for financial analysis. Your team develops and debugs complex models in TensorFlow. You want to maintain the ease of debugging while also reducing the model training time. How should you set up your training environment? Configure a v3-8 TPU VM. SSH into the VM to train and debug the model. Configure a v3-8 TPU node. Use Cloud Shell to SSH into the Host VM to train and debug the model. Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs. SSH into the VM and use ParameterServerStrategy to train the model. Configure an n1-standard-4 VM with 4 NVIDIA P100 GPUs. SSH into the VM and use MultiWorkerMirroredStrategy to train the model.
You created an ML pipeline with multiple input parameters. You want to investigate the tradeoffs between different parameter combinations. The parameter options are •Input dataset •Max tree depth of the boosted tree regressor •Optimizer learning rate You need to compare the pipeline performance of the different parameter combinations measured in F1 score, time to train, and model complexity. You want your approach to be reproducible, and track all pipeline runs on the same platform. What should you do? 1. Use BigQuery ML to create a boosted tree regressor, and use the hyperparameter tuning capability. 2. Configure the hyperparameter syntax to select different input datasets, max tree depths, and optimizer learning rates. Choose the grid search option. 1. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating. 2. In the custom training step, use the Bayesian optimization method with F1 score as the target to maximize. 1. Create a Vertex AI Workbench notebook for each of the different input datasets. 2. In each notebook, run different local training jobs with different combinations of the max tree depth and optimizer learning rate parameters. 3. After each notebook finishes, append the results to a BigQuery table. 1. Create an experiment in Vertex AI Experiments. 2. Create a Vertex AI pipeline with a custom model training job as part of the pipeline. Configure the pipeline's parameters to include those you are investigating. 3. Submit multiple runs to the same experiment, using different values for the parameters.
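A minimal sketch of the Vertex AI Experiments option with the Python SDK; the experiment name, run name, parameter values, and metric values are hypothetical. Each parameter combination becomes one run in the same experiment, so results stay comparable and queryable on one platform.

    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project",
        location="us-central1",
        experiment="boosted-tree-tradeoffs",
    )

    params = {"dataset": "sales_2023", "max_tree_depth": 8, "learning_rate": 0.1}

    aiplatform.start_run(run="depth8-lr01")
    aiplatform.log_params(params)

    # ... launch the Vertex AI pipeline / training job with these parameters here ...

    aiplatform.log_metrics({"f1_score": 0.87, "train_time_minutes": 42.0})
    aiplatform.end_run()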
You received a training-serving skew alert from a Vertex AI Model Monitoring job running in production. You retrained the model with more recent training data, and deployed it back to the Vertex AI endpoint, but you are still receiving the same alert. What should you do? Update the model monitoring job to use a lower sampling rate. Update the model monitoring job to use the more recent training data that was used to retrain the model. Temporarily disable the alert. Enable the alert again after a sufficient amount of new production traffic has passed through the Vertex AI endpoint. Temporarily disable the alert until the model can be retrained again on newer training data. Retrain the model again after a sufficient amount of new production traffic has passed through the Vertex AI endpoint.
You developed a custom model by using Vertex AI to forecast the sales of your company’s products based on historical transactional data. You anticipate changes in the feature distributions and the correlations between the features in the near future. You also expect to receive a large volume of prediction requests. You plan to use Vertex AI Model Monitoring for drift detection and you want to minimize the cost. What should you do? Use the features for monitoring. Set a monitoring-frequency value that is higher than the default. Use the features for monitoring. Set a prediction-sampling-rate value that is closer to 1 than 0. Use the features and the feature attributions for monitoring. Set a monitoring-frequency value that is lower than the default. Use the features and the feature attributions for monitoring. Set a prediction-sampling-rate value that is closer to 0 than 1.
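A minimal sketch of such a drift-detection monitoring job, assuming the google-cloud-aiplatform SDK's model_monitoring helpers; the endpoint resource name, feature names, thresholds, and email address are hypothetical. The two cost levers are the prediction sampling rate and the monitoring interval.

    from google.cloud import aiplatform
    from google.cloud.aiplatform import model_monitoring

    endpoint = aiplatform.Endpoint(
        "projects/my-project/locations/us-central1/endpoints/1234567890"  # assumed
    )

    job = aiplatform.ModelDeploymentMonitoringJob.create(
        display_name="sales-forecast-drift-monitor",
        endpoint=endpoint,
        # Cost levers: sample only a fraction of requests, and monitor less often.
        logging_sampling_strategy=model_monitoring.RandomSampleConfig(sample_rate=0.1),
        schedule_config=model_monitoring.ScheduleConfig(monitor_interval=12),  # hours
        objective_configs=model_monitoring.ObjectiveConfig(
            drift_detection_config=model_monitoring.DriftDetectionConfig(
                drift_thresholds={"unit_price": 0.03, "units_sold": 0.03}  # hypothetical
            )
        ),
        alert_config=model_monitoring.EmailAlertConfig(user_emails=["ml-team@example.com"]),
    )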
You have recently trained a scikit-learn model that you plan to deploy on Vertex AI. This model will support both online and batch prediction. You need to preprocess input data for model inference. You want to package the model for deployment while minimizing additional code. What should you do? 1. Upload your model to the Vertex AI Model Registry by using a prebuilt scikit-learn prediction container. 2. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data. 1. Wrap your model in a custom prediction routine (CPR), and build a container image from the CPR local model. 2. Upload your scikit-learn model container to Vertex AI Model Registry. 3. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job. 1. Create a custom container for your scikit-learn model. 2. Define a custom serving function for your model. 3. Upload your model and custom container to Vertex AI Model Registry. 4. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job. 1. Create a custom container for your scikit-learn model. 2. Upload your model and custom container to Vertex AI Model Registry. 3. Deploy your model to Vertex AI Endpoints, and create a Vertex AI batch prediction job that uses the instanceConfig.instanceType setting to transform your input data.
You work for a food product company. Your company's historical sales data is stored in BigQuery. You need to use Vertex AI's custom training service to train multiple TensorFlow models that read the data from BigQuery and predict future sales. You plan to implement a data preprocessing algorithm that performs min-max scaling and bucketing on a large number of features before you start experimenting with the models. You want to minimize preprocessing time, cost, and development effort. How should you configure this workflow? Write the transformations in Spark by using the spark-bigquery-connector, and use Dataproc to preprocess the data. Write SQL queries to transform the data in-place in BigQuery. Add the transformations as a preprocessing layer in the TensorFlow models. Create a Dataflow pipeline that uses the BigQueryIO connector to ingest the data, process it, and write it back to BigQuery.
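A sketch of the in-place BigQuery approach, using BigQuery ML's manual preprocessing functions from the Python client; the project, dataset, column names, and bucket boundaries are hypothetical.

    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
    CREATE OR REPLACE TABLE `my-project.sales.features_preprocessed` AS
    SELECT
      ML.MIN_MAX_SCALER(units_sold) OVER() AS units_sold_scaled,   -- min-max scaling
      ML.BUCKETIZE(unit_price, [5.0, 10.0, 20.0]) AS price_bucket, -- bucketing
      sale_date,
      label
    FROM `my-project.sales.history`
    """
    client.query(query).result()  # the transformation runs entirely inside BigQuery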
You have created a Vertex AI pipeline that includes two steps. The first step preprocesses 10 TB of data, completes in about 1 hour, and saves the result in a Cloud Storage bucket. The second step uses the processed data to train a model. You need to update the model's code to allow you to test different algorithms. You want to reduce pipeline execution time and cost while also minimizing pipeline changes. What should you do? Add a pipeline parameter and an additional pipeline step. Depending on the parameter value, the pipeline step conducts or skips data preprocessing, and starts model training. Create another pipeline without the preprocessing step, and hardcode the preprocessed Cloud Storage file location for model training. Configure a machine with more CPU and RAM from the compute-optimized machine family for the data preprocessing step. Enable caching for the pipeline job, and disable caching for the model training step.
You work for a bank. You have created a custom model to predict whether a loan application should be flagged for human review. The input features are stored in a BigQuery table. The model is performing well, and you plan to deploy it to production. Due to compliance requirements, the model must provide explanations for each prediction. You want to add this functionality to your model code with minimal effort and provide explanations that are as accurate as possible. What should you do? Create an AutoML tabular model by using the BigQuery data with integrated Vertex Explainable AI. Create a BigQuery ML deep neural network model, and use the ML.EXPLAIN_PREDICT method with the num_integral_steps parameter. Upload the custom model to Vertex AI Model Registry, and configure feature-based attribution by using sampled Shapley with input baselines. Update the custom serving container to include sampled Shapley-based explanations in the prediction outputs.
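For the BigQuery ML option, per-prediction explanations come from ML.EXPLAIN_PREDICT; a short sketch using the BigQuery Python client, with hypothetical model and table names.

    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
    SELECT *
    FROM ML.EXPLAIN_PREDICT(
      MODEL `my-project.lending.loan_review_dnn`,          -- hypothetical model
      (SELECT * FROM `my-project.lending.applications`),   -- hypothetical table
      STRUCT(5 AS top_k_features)                          -- return the top 5 attributions
    )
    """
    for row in client.query(query).result():
        print(dict(row))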
You recently used XGBoost to train a model in Python that will be used for online serving. Your model prediction service will be called by a backend service implemented in Golang running on a Google Kubernetes Engine (GKE) cluster. Your model requires pre- and postprocessing steps. You need to implement the processing steps so that they run at serving time. You want to minimize code changes and infrastructure maintenance, and deploy your model into production as quickly as possible. What should you do? Use FastAPI to implement an HTTP server. Create a Docker image that runs your HTTP server, and deploy it on your organization's GKE cluster. Use FastAPI to implement an HTTP server. Create a Docker image that runs your HTTP server. Upload the image to Vertex AI Model Registry and deploy it to a Vertex AI endpoint. Use the Predictor interface to implement a custom prediction routine. Build the custom container, upload the container to Vertex AI Model Registry, and deploy it to a Vertex AI endpoint. Use the XGBoost prebuilt serving container when importing the trained model into Vertex AI. Deploy the model to a Vertex AI endpoint. Work with the backend engineers to implement the pre- and postprocessing steps in the Golang backend service.
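A rough sketch of the custom prediction routine (CPR) approach, assuming the Vertex AI SDK's Predictor base class; the artifact filename and the pre- and postprocessing logic are placeholders.

    import joblib
    import numpy as np
    from google.cloud.aiplatform.prediction.predictor import Predictor
    from google.cloud.aiplatform.utils import prediction_utils

    class XgbPredictor(Predictor):
        """Wraps the XGBoost model with pre- and postprocessing at serving time."""

        def load(self, artifacts_uri: str) -> None:
            # Download model artifacts from Cloud Storage; the filename is assumed.
            prediction_utils.download_model_artifacts(artifacts_uri)
            self._model = joblib.load("model.joblib")

        def preprocess(self, prediction_input: dict) -> np.ndarray:
            # Placeholder preprocessing: cast instances to a float matrix.
            return np.asarray(prediction_input["instances"], dtype=np.float32)

        def predict(self, instances: np.ndarray) -> np.ndarray:
            return self._model.predict(instances)

        def postprocess(self, prediction_results: np.ndarray) -> dict:
            # Placeholder postprocessing: convert to a JSON-serializable payload.
            return {"predictions": prediction_results.tolist()}

The container is then built from this class (for example with LocalModel.build_cpr_model), uploaded to Vertex AI Model Registry, and deployed to an endpoint, so no serving server code has to be written by hand.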
You recently deployed a pipeline in Vertex AI Pipelines that trains and pushes a model to a Vertex AI endpoint to serve real-time traffic. You need to continue experimenting and iterating on your pipeline to improve model performance. You plan to use Cloud Build for CI/CD. You want to quickly and easily deploy new pipelines into production, and you want to minimize the chance that the new pipeline implementations will break in production. What should you do? Set up a CI/CD pipeline that builds and tests your source code. If the tests are successful, use the Google Cloud console to upload the built container to Artifact Registry and upload the compiled pipeline to Vertex AI Pipelines. Set up a CI/CD pipeline that builds your source code and then deploys built artifacts into a pre-production environment. Run unit tests in the pre-production environment. If the tests are successful, deploy the pipeline to production. Set up a CI/CD pipeline that builds and tests your source code and then deploys built artifacts into a pre-production environment. After a successful pipeline run in the pre-production environment, deploy the pipeline to production. Set up a CI/CD pipeline that builds and tests your source code and then deploys built artifacts into a pre-production environment. After a successful pipeline run in the pre-production environment, rebuild the source code and deploy the artifacts to production.
You work for a bank with strict data governance requirements. You recently implemented a custom model to detect fraudulent transactions. You want your training code to download internal data by using an API endpoint hosted in your project’s network. You need the data to be accessed in the most secure way, while mitigating the risk of data exfiltration. What should you do? Enable VPC Service Controls for peerings, and add Vertex AI to a service perimeter. Create a Cloud Run endpoint as a proxy to the data. Use Identity and Access Management (IAM) authentication to secure access to the endpoint from the training job. Configure VPC Peering with Vertex AI, and specify the network of the training job. Download the data to a Cloud Storage bucket before calling the training job.
You are deploying a new version of a model to a production Vertex AI endpoint that is serving traffic. You plan to direct all user traffic to the new model. You need to deploy the model with minimal disruption to your application. What should you do? 1. Create a new endpoint. 2. Create a new model. Set it as the default version. Upload the model to Vertex AI Model Registry. 3. Deploy the new model to the new endpoint. 4. Update Cloud DNS to point to the new endpoint. 1. Create a new endpoint. 2. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model and set it as the default version. Upload the model to Vertex AI Model Registry. 3. Deploy the new model to the new endpoint, and set the new model to 100% of the traffic. 1. Create a new model. Set the parentModel parameter to the model ID of the currently deployed model. Upload the model to Vertex AI Model Registry. 2. Deploy the new model to the existing endpoint, and set the new model to 100% of the traffic. 1. Create a new model. Set it as the default version. Upload the model to Vertex AI Model Registry. 2. Deploy the new model to the existing endpoint.
You are training an ML model on a large dataset. You are using a TPU to accelerate the training process. You notice that the training process is taking longer than expected. You discover that the TPU is not reaching its full capacity. What should you do? Increase the learning rate. Increase the number of epochs. Decrease the learning rate. Increase the batch size.
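A sketch of what increasing the batch size looks like under tf.distribute.TPUStrategy, assuming TensorFlow 2 running on a Cloud TPU VM; the dummy data and model are placeholders.

    import tensorflow as tf

    # tpu="local" assumes the code runs directly on a Cloud TPU VM.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="local")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)

    # Scale the global batch size with the number of TPU cores so every core
    # receives a full per-replica batch and stays busy.
    PER_REPLICA_BATCH = 128
    global_batch_size = PER_REPLICA_BATCH * strategy.num_replicas_in_sync

    features = tf.random.uniform((100_000, 32))
    labels = tf.random.uniform((100_000,), maxval=2, dtype=tf.int32)
    dataset = (
        tf.data.Dataset.from_tensor_slices((features, labels))
        .shuffle(10_000)
        .batch(global_batch_size, drop_remainder=True)  # TPUs prefer static batch shapes
        .prefetch(tf.data.AUTOTUNE)
    )

    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(dataset, epochs=2)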
You work for a retail company. You have a managed tabular dataset in Vertex AI that contains sales data from three different stores. The dataset includes several features, such as store name and sale timestamp. You want to use the data to train a model that makes sales predictions for a new store that will open soon. You need to split the data between the training, validation, and test sets. What approach should you use to split the data? Use Vertex AI manual split, using the store name feature to assign one store for each set. Use Vertex AI default data split. Use Vertex AI chronological split, and specify the sale timestamp feature as the time variable. Use Vertex AI random split, assigning 70% of the rows to the training set, 10% to the validation set, and 20% to the test set.
You have developed a BigQuery ML model that predicts customer churn, and deployed the model to Vertex AI Endpoints. You want to automate the retraining of your model by using minimal additional code when model feature values change. You also want to minimize the number of times that your model is retrained to reduce training costs. What should you do? 1. Enable request-response logging on Vertex AI Endpoints. 2. Schedule a TensorFlow Data Validation job to monitor prediction drift. 3. Execute model retraining if there is significant distance between the distributions. 1. Enable request-response logging on Vertex AI Endpoints. 2. Schedule a TensorFlow Data Validation job to monitor training/serving skew. 3. Execute model retraining if there is significant distance between the distributions. 1. Create a Vertex AI Model Monitoring job configured to monitor prediction drift. 2. Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected. 3. Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery. 1. Create a Vertex AI Model Monitoring job configured to monitor training/serving skew. 2. Configure alert monitoring to publish a message to a Pub/Sub queue when a monitoring alert is detected. 3. Use a Cloud Function to monitor the Pub/Sub queue, and trigger retraining in BigQuery.
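A minimal sketch of the Cloud Function piece of the monitoring-alert path (1st-gen Pub/Sub trigger), assuming hypothetical BigQuery model and table names; the monitoring job and its Pub/Sub alert channel are configured separately.

    import base64
    import json

    from google.cloud import bigquery

    def retrain_on_drift(event, context):
        """Triggered by a Pub/Sub message published for a model monitoring alert."""
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        print("Model monitoring alert received:", payload)

        # Retrain the BigQuery ML churn model in place; names are hypothetical.
        client = bigquery.Client()
        client.query(
            """
            CREATE OR REPLACE MODEL `my-project.crm.churn_model`
            OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
            SELECT * FROM `my-project.crm.training_data`
            """
        ).result()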
You have been tasked with deploying prototype code to production. The feature engineering code is in PySpark and runs on Dataproc Serverless. The model training is executed by using a Vertex AI custom training job. The two steps are not connected, and the model training must currently be run manually after the feature engineering step finishes. You need to create a scalable and maintainable production process that runs end-to-end and tracks the connections between steps. What should you do? Create a Vertex AI Workbench notebook. Use the notebook to submit the Dataproc Serverless feature engineering job. Use the same notebook to submit the custom model training job. Run the notebook cells sequentially to tie the steps together end-to-end. Create a Vertex AI Workbench notebook. Initiate an Apache Spark context in the notebook and run the PySpark feature engineering code. Use the same notebook to run the custom model training job in TensorFlow. Run the notebook cells sequentially to tie the steps together end-to-end. Use the Kubeflow Pipelines SDK to write code that specifies two components: - The first is a Dataproc Serverless component that launches the feature engineering job. - The second is a custom component wrapped in the create_custom_training_job_from_component utility that launches the custom model training job. Create a Vertex AI Pipelines job to link and run both components. Use the Kubeflow Pipelines SDK to write code that specifies two components: - The first component initiates an Apache Spark context that runs the PySpark feature engineering code. - The second component runs the TensorFlow custom model training code. Create a Vertex AI Pipelines job to link and run both components.
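A sketch of the two-component Vertex AI pipeline, assuming the KFP v2 SDK and the google-cloud-pipeline-components library; the GCS paths, machine type, and component parameters are assumptions, and exact parameter names can vary between library versions.

    from kfp import compiler, dsl
    from google_cloud_pipeline_components.v1.custom_job import (
        create_custom_training_job_from_component,
    )
    from google_cloud_pipeline_components.v1.dataproc import DataprocPySparkBatchOp

    @dsl.component(base_image="python:3.10")
    def train_model(features_path: str):
        # Placeholder for the existing custom training code.
        print(f"Training on features at {features_path}")

    # Wrap the component so it runs as a Vertex AI custom training job.
    train_model_op = create_custom_training_job_from_component(
        train_model, machine_type="n1-standard-8"
    )

    @dsl.pipeline(name="feature-engineering-and-training")
    def pipeline(project: str, region: str, features_path: str):
        feature_step = DataprocPySparkBatchOp(
            project=project,
            location=region,
            main_python_file_uri="gs://my-bucket/feature_engineering.py",  # assumed
        )
        train_step = train_model_op(
            features_path=features_path, project=project, location=region
        )
        train_step.after(feature_step)  # records the dependency between the two steps

    compiler.Compiler().compile(pipeline, "pipeline.json")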
You recently deployed a scikit-learn model to a Vertex AI endpoint. You are now testing the model on live production traffic. While monitoring the endpoint, you discover twice as many requests per hour as expected throughout the day. You want the endpoint to efficiently scale when the demand increases in the future to prevent users from experiencing high latency. What should you do? Deploy two models to the same endpoint, and distribute requests among them evenly. Configure an appropriate minReplicaCount value based on expected baseline traffic. Set the target utilization percentage in the autoscalingMetricSpecs configuration to a higher value. Change the model's machine type to one that utilizes GPUs.
You work at a bank. You have a custom tabular ML model that was provided by the bank's vendor. The training data is not available due to its sensitivity. The model is packaged as a Vertex AI Model serving container, which accepts a string as input for each prediction instance. In each string, the feature values are separated by commas. You want to deploy this model to production for online predictions and monitor the feature distribution over time with minimal effort. What should you do? 1. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint. 2. Create a Vertex AI Model Monitoring job with feature drift detection as the monitoring objective, and provide an instance schema. 1. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint. 2. Create a Vertex AI Model Monitoring job with feature skew detection as the monitoring objective, and provide an instance schema. 1. Refactor the serving container to accept key-value pairs as the input format. 2. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint. 3. Create a Vertex AI Model Monitoring job with feature drift detection as the monitoring objective. 1. Refactor the serving container to accept key-value pairs as the input format. 2. Upload the model to Vertex AI Model Registry, and deploy the model to a Vertex AI endpoint. 3. Create a Vertex AI Model Monitoring job with feature skew detection as the monitoring objective.
You are implementing a batch inference ML pipeline in Google Cloud. The model was developed using TensorFlow and is stored in SavedModel format in Cloud Storage. You need to apply the model to a historical dataset containing 10 TB of data that is stored in a BigQuery table. How should you perform the inference? Export the historical data to Cloud Storage in Avro format. Configure a Vertex AI batch prediction job to generate predictions for the exported data. Import the TensorFlow model by using the CREATE MODEL statement in BigQuery ML. Apply the historical data to the TensorFlow model. Export the historical data to Cloud Storage in CSV format. Configure a Vertex AI batch prediction job to generate predictions for the exported data. Configure a Vertex AI batch prediction job to apply the model to the historical data in BigQuery.
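For the last option, a batch prediction job can read from and write to BigQuery directly; a short sketch with the google-cloud-aiplatform SDK, where the project, model resource name, table, and machine settings are hypothetical.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # hypothetical

    # The SavedModel is assumed to already be registered in Vertex AI Model Registry.
    model = aiplatform.Model("projects/my-project/locations/us-central1/models/1234567890")

    batch_job = model.batch_predict(
        job_display_name="historical-batch-inference",
        bigquery_source="bq://my-project.analytics.historical_data",
        bigquery_destination_prefix="bq://my-project.analytics",
        instances_format="bigquery",
        predictions_format="bigquery",
        machine_type="n1-standard-16",
        starting_replica_count=10,
        max_replica_count=20,
    )
    batch_job.wait()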
You recently deployed a model to a Vertex AI endpoint. Your data drifts frequently, so you have enabled request-response logging and created a Vertex AI Model Monitoring job. You have observed that your model is receiving higher traffic than expected. You need to reduce the model monitoring cost while continuing to quickly detect drift. What should you do? Replace the monitoring job with a Dataflow pipeline that uses TensorFlow Data Validation (TFDV). Replace the monitoring job with a custom SQL script to calculate statistics on the features and predictions in BigQuery. Decrease the sample_rate parameter in the RandomSampleConfig of the monitoring job. Increase the monitor_interval parameter in the ScheduleConfig of the monitoring job.
You work for a retail company. You have created a Vertex AI forecast model that produces monthly item sales predictions. You want to quickly create a report that will help to explain how the model calculates the predictions. You have one month of recent actual sales data that was not included in the training dataset. How should you generate data for your report? Create a batch prediction job by using the actual sales data. Compare the predictions to the actuals in the report. Create a batch prediction job by using the actual sales data, and configure the job settings to generate feature attributions. Compare the results in the report. Generate counterfactual examples by using the actual sales data. Create a batch prediction job by using the actual sales data and the counterfactual examples. Compare the results in the report. Train another model by using the same training dataset as the original, and exclude some columns. Using the actual sales data, create one batch prediction job by using the new model and another one by using the original model. Compare the two sets of predictions in the report.
Your team has a model deployed to a Vertex AI endpoint. You have created a Vertex AI pipeline that automates the model training process and is triggered by a Cloud Function. You need to prioritize keeping the model up-to-date, but also minimize retraining costs. How should you configure retraining? Configure Pub/Sub to call the Cloud Function when a sufficient amount of new data becomes available. Configure a Cloud Scheduler job that calls the Cloud Function at a predetermined frequency that fits your team's budget. Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when anomalies are detected. Enable model monitoring on the Vertex AI endpoint. Configure Pub/Sub to call the Cloud Function when feature drift is detected.
Your company stores a large number of audio files of phone calls made to your customer call center in an on-premises database. Each audio file is in WAV format and is approximately 5 minutes long. You need to analyze these audio files for customer sentiment. You plan to use the Speech-to-Text API. You want to use the most efficient approach. What should you do? 1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 3. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions. 1. Upload the audio files to Cloud Storage. 2. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 3. Create a Cloud Function that calls the Natural Language API by using the analyzeSentiment method. 1. Iterate over your local files in Python. 2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data. 3. Call the speech:recognize API endpoint to generate transcriptions. 4. Call the predict method of an AutoML sentiment analysis model to analyze the transcriptions. 1. Iterate over your local files in Python. 2. Use the Speech-to-Text Python library to create a speech.RecognitionAudio object, and set the content to the audio file data. 3. Call the speech:longrunningrecognize API endpoint to generate transcriptions. 4. Call the Natural Language API by using the analyzeSentiment method.
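A sketch of the Cloud Storage, long-running recognition, and analyzeSentiment path, assuming the google-cloud-speech and google-cloud-language client libraries; the bucket and object names are hypothetical.

    from google.cloud import language_v1, speech

    speech_client = speech.SpeechClient()
    language_client = language_v1.LanguageServiceClient()

    # GCS URI of an uploaded call recording (hypothetical bucket and object).
    audio = speech.RecognitionAudio(uri="gs://call-center-audio/call-0001.wav")
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        language_code="en-US",
    )

    # Five-minute files exceed the synchronous limit, so use the asynchronous
    # (long-running) recognition method.
    operation = speech_client.long_running_recognize(config=config, audio=audio)
    response = operation.result(timeout=600)
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)

    document = language_v1.Document(
        content=transcript, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = language_client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    print(f"score={sentiment.score:.2f}, magnitude={sentiment.magnitude:.2f}")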
You work for a social media company. You want to create a no-code image classification model for an iOS mobile application to identify fashion accessories. You have a labeled dataset in Cloud Storage. You need to configure a training workflow that minimizes cost and serves predictions with the lowest possible latency. What should you do? Train the model by using AutoML, and register the model in Vertex AI Model Registry. Configure your mobile application to send batch requests during prediction. Train the model by using AutoML Edge, and export it as a Core ML model. Configure your mobile application to use the .mlmodel file directly. Train the model by using AutoML Edge, and export the model as a TFLite model. Configure your mobile application to use the .tflite file directly. Train the model by using AutoML, and expose the model as a Vertex AI endpoint. Configure your mobile application to invoke the endpoint during prediction.
You work for a retail company. You have been asked to develop a model to predict whether a customer will purchase a product on a given day. Your team has processed the company's sales data and created a table with the following columns: •Customer_id •Product_id •Date •Days_since_last_purchase (measured in days) •Average_purchase_frequency (measured in 1/days) •Purchase (binary class, whether the customer purchased the product on the Date) You need to interpret your model's results for each individual prediction. What should you do? Create a BigQuery table. Use BigQuery ML to build a boosted tree classifier. Inspect the partition rules of the trees to understand how each prediction flows through the trees. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint and enable feature attributions. Use the “explain” method to get feature attribution values for each individual prediction. Create a BigQuery table. Use BigQuery ML to build a logistic regression classification model. Use the values of the coefficients of the model to interpret the feature importance, with higher values corresponding to more importance. Create a Vertex AI tabular dataset. Train an AutoML model to predict customer purchases. Deploy the model to a Vertex AI endpoint. At each prediction, enable L1 regularization to detect non-informative features.
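For the feature attribution option, the deployed endpoint's explain method returns per-feature contributions for each instance; a sketch using the google-cloud-aiplatform SDK, with a hypothetical endpoint resource name and made-up instance values.

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # hypothetical
    endpoint = aiplatform.Endpoint(
        "projects/my-project/locations/us-central1/endpoints/1234567890"
    )

    # A single prediction instance built from the table's feature columns.
    instance = {
        "Customer_id": "C-1001",
        "Product_id": "P-2002",
        "Date": "2024-06-01",
        "Days_since_last_purchase": "14",
        "Average_purchase_frequency": "0.07",
    }

    response = endpoint.explain(instances=[instance])
    for explanation in response.explanations:
        for attribution in explanation.attributions:
            # Per-feature contribution to this individual prediction.
            print(attribution.feature_attributions)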
You work for a company that captures live video footage of checkout areas in their retail stores. You need to use the live video footage to build a model to detect the number of customers waiting for service in near real time. You want to implement a solution quickly and with minimal effort. How should you build the model? Use the Vertex AI Vision Occupancy Analytics model. Use the Vertex AI Vision Person/vehicle detector model. Train an AutoML object detection model on an annotated dataset by using Vertex AutoML. Train a Seq2Seq+ object detection model on an annotated dataset by using Vertex AutoML.
You work as an analyst at a large banking firm. You are developing a robust, scalable ML pipeline to train several regression and classification models. Your primary focus for the pipeline is model interpretability. You want to productionize the pipeline as quickly as possible. What should you do? Use Tabular Workflow for Wide & Deep through Vertex AI Pipelines to jointly train wide linear models and deep neural networks. Use Google Kubernetes Engine to build a custom training pipeline for XGBoost-based models. Use Tabular Workflow for TabNet through Vertex AI Pipelines to train attention-based models. Use Cloud Composer to build the training pipelines for custom deep learning-based models.
You developed a Transformer model in TensorFlow to translate text. Your training data includes millions of documents in a Cloud Storage bucket. You plan to use distributed training to reduce training time. You need to configure the training job while minimizing the effort required to modify code and to manage the cluster's configuration. What should you do? Create a Vertex AI custom training job with GPU accelerators for the second worker pool. Use tf.distribute.MultiWorkerMirroredStrategy for distribution. Create a Vertex AI custom distributed training job with Reduction Server. Use N1 high-memory machine type instances for the first and second worker pools, and use N1 high-CPU machine type instances for the third worker pool. Create a training job that uses Cloud TPU VMs. Use tf.distribute.TPUStrategy for distribution. Create a Vertex AI custom training job with a single worker pool of A2 GPU machine type instances. Use tf.distribute.MirroredStrategy for distribution.
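For comparison, a distributed custom training job is mostly a matter of declaring worker pools; a sketch with the google-cloud-aiplatform SDK where the container image, machine shapes, and replica counts are assumptions. Vertex AI sets TF_CONFIG on each replica, so tf.distribute.MultiWorkerMirroredStrategy in the training code can pick up the cluster without manual configuration.

    from google.cloud import aiplatform

    aiplatform.init(
        project="my-project", location="us-central1", staging_bucket="gs://my-bucket"
    )

    # Chief pool plus a worker pool; the image is a hypothetical training container.
    machine_spec = {
        "machine_type": "n1-standard-16",
        "accelerator_type": "NVIDIA_TESLA_V100",
        "accelerator_count": 2,
    }
    container_spec = {
        "image_uri": "us-docker.pkg.dev/my-project/training/translator:latest"
    }
    worker_pool_specs = [
        {"machine_spec": machine_spec, "replica_count": 1, "container_spec": container_spec},
        {"machine_spec": machine_spec, "replica_count": 3, "container_spec": container_spec},
    ]

    job = aiplatform.CustomJob(
        display_name="transformer-distributed-training",
        worker_pool_specs=worker_pool_specs,
    )
    job.run()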
You are developing a process for training and running your custom model in production. You need to be able to show lineage for your model and predictions. What should you do? 1. Create a Vertex AI managed dataset. 2. Use a Vertex AI training pipeline to train your model. 3. Generate batch predictions in Vertex AI. 1. Use a Vertex AI Pipelines custom training job component to train your model. 2. Generate predictions by using a Vertex AI Pipelines model batch predict component. 1. Upload your dataset to BigQuery. 2. Use a Vertex AI custom training job to train your model. 3. Generate predictions by using Vertex AI SDK custom prediction routines. 1. Use Vertex AI Experiments to train your model. 2. Register your model in Vertex AI Model Registry. 3. Generate batch predictions in Vertex AI.
You work for a hotel and have a dataset that contains customers' written comments scanned from paper-based customer feedback forms, which are stored as PDF files. Every form has the same layout. You need to quickly predict an overall satisfaction score from the customer comments on each form. How should you accomplish this task? Use the Vision API to parse the text from each PDF file. Use the Natural Language API analyzeSentiment feature to infer overall satisfaction scores. Use the Vision API to parse the text from each PDF file. Use the Natural Language API analyzeEntitySentiment feature to infer overall satisfaction scores. Uptrain a Document AI custom extractor to parse the text in the comments section of each PDF file. Use the Natural Language API analyzeSentiment feature to infer overall satisfaction scores. Uptrain a Document AI custom extractor to parse the text in the comments section of each PDF file. Use the Natural Language API analyzeEntitySentiment feature to infer overall satisfaction scores.