DS2023
Title of test: DS2023
Description: data science




Q: A bike sharing platform has collected user commute data for the past 3 years. To increase profitability and draw useful inferences, a machine learning model needs to be built from the accumulated data. Which of the following options has the correct order of the required machine learning tasks for building a model?
- Data Access, Data Exploration, Feature Exploration, Feature Engineering, Modeling.
- Data Access, Feature Exploration, Data Exploration, Feature Engineering, Modeling.
- Data Access, Data Exploration, Feature Engineering, Feature Exploration, Modeling.
- Data Access, Feature Exploration, Feature Engineering, Data Exploration, Modeling.

Q: You have been given a collection of digital files required for a business audit. They consist of several different formats that you would like to annotate using Oracle Cloud Infrastructure (OCI) Data Labeling. Which THREE types of files could this tool annotate?
- Video footage of a conversation in a conference room.
- Images of computer server racks.
- A type-written document that details an annual budget.
- A collection of purchase orders for office supplies.
- An audio recording of a phone conversation.

Q: Which TWO statements about the Oracle Cloud Infrastructure (OCI) Open Data service are true?
- Open Data includes text and image data repositories for AI and ML. Audio and video formats are not available.
- Each dataset in Open Data consists of code and tooling usage examples for consumption and reproducibility.
- Open Data is a dataset repository made for the people that create, use, and manipulate datasets.
- A primary goal of Open Data is for users to contribute to the data repositories in order to expand the content offered.
- Subscribers can pay and log into Open Data to view curated datasets that are otherwise not available to the public.

Q: You are running a pipeline in the OCI Data Science service and want to override some of the pipeline's default settings. Which of the following statements about overriding pipeline defaults is true?
- Pipeline defaults can be overridden only during pipeline creation.
- Pipeline defaults can be overridden only by the Administrator.
- Pipeline defaults can be overridden before starting the pipeline execution.
- Pipeline defaults cannot be overridden once the pipeline has been created.

Q: Which of the following options has the correct order of the required machine learning tasks for building a model?
- min_features = 'Age' && min_features = 'Education'.
- 0 < min_features <= 2.
- min_features = ['Age', 'Education'].
- 0 < min_features <= 0.9.

Q: How are datasets exported in the OCI Data Labeling service?
- As a binary file.
- As an XML file.
- As a line-delimited JSON file.
- As a CSV file.

Q: As a data scientist for a hardware company, you have been asked to predict the revenue demand for the upcoming quarter. You develop a time series forecasting model to analyze the data. Select the correct sequence of steps to predict the revenue demand values for the upcoming quarter.
- Verify, prepare model, deploy, save.
- Predict, deploy, save, verify, prepare model.
- Prepare model, deploy, verify, save, predict.
- Prepare model, verify, save, deploy, predict.

Q: You have trained a binary classifier for a loan application and saved this model into the model catalog. A colleague wants to examine the model, and you need to share the model with your colleague. From the model catalog, which model artifacts can be shared?
- Metadata, hyperparameters, metrics only.
- Model metadata and hyperparameters only.
- Models and metrics only.
- Models, model metadata, hyperparameters, metrics.

Q: You want to create a user group for a team of external data science consultants. The consultants should only have the ability to see Data Science resource details but not have the ability to create, delete, or update Data Science resources. What verb should you write in the policy?
- Use.
- Inspect.
- Manage.
- Read.
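One of the questions above notes that OCI Data Labeling exports datasets as a line-delimited JSON (JSONL) file: one JSON object per line. As a minimal sketch of consuming such an export (the record fields here are illustrative, not the service's actual schema):

```python
import json

def read_jsonl(text):
    """Parse a line-delimited JSON string: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Illustrative export content; real Data Labeling records carry more fields.
export = '{"id": 1, "label": "server-rack"}\n{"id": 2, "label": "laptop"}\n'
records = read_jsonl(export)
print(records[0]["label"])  # server-rack
```

The key property of JSONL is that each line is independently parseable, so large exports can be streamed record by record instead of loaded whole.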
Q: You are a data scientist using Oracle AutoML to produce a model and you are evaluating the score metric for the model. Which TWO of the following prevailing metrics would you use for evaluating a multiclass classification model?
- Mean squared error.
- Explained variance score.
- Recall.
- F1-score.
- R-squared.

Q: You're going to create an Oracle Cloud Infrastructure Anomaly Detection model for multivariate data. Where do you need to store the training data?
- Your local machine.
- MySQL database.
- Autonomous Data Warehouse.
- Object Storage Bucket.

Q: You have just started as a data scientist at a healthcare company. You have been asked to analyze and improve a deep neural network model, which was built based on the electrocardiogram records of patients. There are no details about the model framework that was built. What would be the best way to find more details about the machine learning models inside the model catalog?
- Refer to the code inside the model.
- Check for model taxonomy details.
- Check for metadata tags.
- Check for provenance details.

Q: Which statement best describes Oracle Cloud Infrastructure Data Science Jobs?
- Jobs let you define and run repeatable tasks on fully managed infrastructure.
- Jobs let you define and run repeatable tasks on customer-managed infrastructure.
- Jobs let you define and run repeatable tasks on fully managed third-party cloud infrastructure.
- Jobs let you define and run all Oracle Cloud DevOps workloads.

Q: Arrange the following in the correct Git repository workflow order:
1. Install, configure, and authenticate Git.
2. Configure SSH keys for the Git repository.
3. Create a local and remote Git repository.
4. Commit files to the local Git repository.
5. Push the commit to the remote Git repository.
- 2, 3, 1, 4, 5.
- 4, 2, 3, 1, 5.
- 3, 5, 1, 2, 4.
- 1, 2, 3, 4, 5.

Q: While working with Git on Oracle Cloud Infrastructure (OCI) Data Science, you notice that two of the operations are taking more time than the others due to your slow internet speed. Which TWO operations would experience the delay?
- Moving the changes into the staging area for the next commit.
- Updating the local repo to match the content from a remote repository.
- Pushing changes to a remote repository.
- Making a commit that is taking a snapshot of the local repository for the next push.
- Converting an existing local project folder to a Git repository.

Q: What is feature engineering in machine learning used for?
- To perform parameter tuning.
- To interpret ML models.
- To transform existing features into new ones.
- To help understand the dataset features.

Q: Which of these options allows the sharing and loading back of ML models into a notebook session?
- Model provenance.
- Model taxonomy.
- Model deployment.
- Model catalog.

Q: Which statement about resource principals is true?
- When you authenticate using a resource principal, you need to create and manage credentials to access OCI resources.
- A resource principal is not a secure way to authenticate to resources, compared to the OCI configuration and API key approach.
- The Data Science service does not provide authentication via a notebook session's or job run's resource principal to access other OCI resources.
- A resource principal is a feature of IAM that enables resources to be authorized principal actors.

Q: What does the Data Science service template in Oracle Resource Manager (ORM) NOT automatically create?
- Required user groups.
- Dynamic groups.
- Individual Data Science users.
- Policies for a basic use case.

Q: Which feature of Oracle Cloud Infrastructure Data Science provides an interactive coding environment for building and training machine learning models?
- Model Catalog.
- Jobs.
- Notebook Sessions.
- Projects.

Q: Which OCI Data Science interaction method can function without the need for scripting?
- OCI Console.
- CLI.
- Language SDKs.
- REST APIs.

Q: Which statement about dynamic groups is true?
- They define what Data Science principals, such as users and resources, have access to in OCI.
- They are individual users that are grouped in OCI by administrators and granted access to Data Science resources within compartments.
- They are local groupings of resources that can be accessed only by certain groups that have received administrator permission.
- They have matching rules, where compartment-ocid is replaced by the identifier of the compartment created for Data Science.

Q: Which of these is a unique feature of the published conda environment?
- Provides a comprehensive environment to solve business use cases.
- Provides availability on network session reactivation.
- Allows you to save the conda environment to an Object Storage Bucket.
- Allows you to save the conda environment in a block volume.

Q: What is a conda environment?
- A system that manages package dependencies.
- A collection of kernels.
- An open-source package and environment management system.
- An environment deployment system on Oracle AI.

Q: Which CLI command allows the customized conda environment to be shared with co-workers?
- odsc conda clone.
- odsc conda publish.
- odsc conda modify.
- odsc conda install.

Q: Which model has an open-source, open model format that allows you to run machine learning models on different platforms?
- PySpark.
- PyTorch.
- TensorFlow.
- ONNX.

Q: Where are OCI secrets stored?
- OCI Object Storage.
- OCI Vault.
- Autonomous Data Warehouse.
- Oracle databases.

Q: What happens when a notebook session is deactivated?
- Compute cost increases due to frequent deactivation.
- The data on the boot volume is preserved.
- The underlying compute instance stops.
- The block volume attached to the notebook is permanently deleted.

Q: Which activity of managing a conda environment requires the conda environment to be activated in your terminal?
- Modifying a conda environment.
- Installing a conda environment.
- Publishing a conda environment.
- Cloning a conda environment.

Q: What is the correct definition of Git?
- Git is a centralized version control system that allows you to revert to previous versions of files as needed.
- Git is a distributed version control system that allows you to track changes made to a set of files.
- Git is a distributed version control system that protects teams from simultaneous repo contributions and merge requests.
- Git is a centralized version control system that allows data scientists and developers to track copious amounts of data.

Q: Which function's objective is to represent the difference between the predicted value and the target value?
- Optimizer function.
- Fit function.
- Update function.
- Cost function.

Q: What do you use the score.py file for?
- Configuring the deployment infrastructure.
- Executing the inference logic code.
- Defining the required conda environment.
- Defining the scaling strategy.

Q: Which activity is NOT a part of the machine learning life cycle?
- Database Management.
- Model Deployment.
- Modeling.
- Data Access.

Q: Which stage in the machine learning life cycle helps in identifying the imbalance present in the data?
- Data Modeling.
- Data Monitoring.
- Data Exploration.
- Data Access.

Q: Which step is a part of the AutoML pipeline?
- Feature Extraction.
- Model saved to Model Catalog.
- Model Deployment.
- Feature Selection.

Q: Triggering a PagerDuty notification as part of Monitoring is an example of what in the OCI Console?
- Action.
- Rule.
- Function.
- Event.

Q: As a data scientist, you require a pipeline to train ML models. When can a pipeline run be initiated?
- A pipeline run can be initiated once the pipeline is created.
- A pipeline run can be initiated during the pipeline run state.
- A pipeline run can be initiated after the active state.
- A pipeline run can be initiated before the active state.

Q: Which statement accurately describes an aspect of machine learning models?
- Model performance degrades over time due to changes in data.
- Static predictions become increasingly accurate over time.
- Data models are generally more static and require fewer updates than software code.
- A high-quality model will not need to be retrained as new information is received.
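Several questions above concern score.py, the file that executes the inference logic in a model deployment. By convention it defines a load_model() function and a predict() entry point. A minimal sketch of that structure, using a trivial stand-in model assumed for illustration (a real artifact would deserialize a trained model, e.g. with pickle):

```python
# score.py-style sketch: a model deployment calls load_model() once,
# then predict() for each inference request.

def load_model():
    """Stand-in for deserializing a trained model from the model artifact."""
    class MeanThresholdModel:
        def predict(self, rows):
            # Illustrative logic: classify each row by whether its mean exceeds 0.5.
            return [1 if sum(r) / len(r) > 0.5 else 0 for r in rows]
    return MeanThresholdModel()

def predict(data, model=load_model()):
    """Inference entry point: receives request data, returns predictions."""
    return {"prediction": model.predict(data["input"])}

result = predict({"input": [[0.9, 0.8], [0.1, 0.2]]})
print(result)  # {'prediction': [1, 0]}
```

The request/response shapes shown here are illustrative; the point is that score.py holds only inference logic, while the conda environment and dependencies are declared elsewhere (runtime.yaml, requirements.txt).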
Q: Why would you use a mini batch when processing a job in Data Science Jobs?
- You want to process data frequently.
- There is a small amount of total data to process.
- You do not need to process data quickly.
- You want several distributed models to run simultaneously.

Q: Which statement about logs for Oracle Cloud Infrastructure Jobs is true?
- Each job run sends outputs to a single log for that job.
- Integrating Data Science Jobs resources with Logging is mandatory.
- All stdout and stderr output is automatically stored when automatic log creation is enabled.
- Logs are automatically deleted when the job and job run are deleted.

Q: Which statement about Oracle Cloud Infrastructure Data Science Jobs is true?
- Jobs provisions the infrastructure to run a process on demand.
- Jobs comes with a set of standard tasks that cannot be customized.
- You must create and manage your own Jobs infrastructure.
- You must use a single Shell/Bash or Python artifact to run a job.

Q: Which step is unique to MLOps, as opposed to DevOps?
- Continuous deployment.
- Continuous integration.
- Continuous delivery.
- Continuous training.

Q: Which statement about Oracle Cloud Infrastructure Anomaly Detection is true?
- Accepted file types are SQL and Python.
- Data used for analysis can be text or numerical in nature.
- It is an important tool for detecting fraud, network intrusions, and discrepancies in sensor time series analysis.
- It is trained on a combination of customer and general industry data sets.

Q: What is the name of the machine learning library used in Apache Spark?
- MLlib.
- GraphX.
- Structured Streaming.
- HadoopML.

Q: You are a researcher who requires access to large datasets. Which OCI service would you use?
- Oracle databases.
- ADW.
- OCI Data Science.
- Oracle Open Data.

Q: Which OCI service provides a scalable environment for developers and data scientists to run Apache Spark applications at scale?
- Data Science.
- Anomaly Detection.
- Data Labeling.
- Data Flow.
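The mini-batch question above is easier to reason about with a concrete chunking sketch: splitting a large dataset into fixed-size batches so each pass touches only a manageable slice of the data. This is a pure-Python illustration of the idea, not a Jobs API:

```python
def mini_batches(data, batch_size):
    """Yield successive fixed-size slices of a dataset."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

records = list(range(10))
batches = list(mini_batches(records, batch_size=4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Processing per batch keeps peak memory bounded by the batch size rather than the total data volume, which is the usual motivation for mini batches in large batch jobs.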
Q: Using Oracle AutoML, you are tuning hyperparameters on a supported model class and have specified a time budget. AutoML terminates computation once the time budget is exhausted. What would you expect AutoML to return if the time budget is exhausted before hyperparameter tuning is completed?
- The current best-known hyperparameter configuration is returned.
- The last generated hyperparameter configuration is returned.
- A hyperparameter configuration with a minimum learning rate is returned.
- A random hyperparameter configuration is returned.

Q: Where do calls to stdout and stderr from score.py go in the model deployment?
- The file that was defined for them on the virtual machine (VM).
- The OCI Console.
- The OCI Cloud Shell, which can be accessed from the Console.
- The predict log in the Oracle Cloud Infrastructure (OCI) Logging service as defined in the deployment.

Q: During a job run, you receive an error message that no space is left on your disk device. To solve the problem, you must increase the size of the job storage. What would be the most efficient way to do this with Data Science Jobs?
- Edit the job, change the size of the storage of your job, and start a new job run.
- On the job run, set the environment variable that helps increase the size of the storage.
- Create a new job with increased storage size and then run the job.
- Your code is using too much disk space; refactor the code to identify the problem.

Q: After you have created and opened a notebook session, you want to use the Accelerated Data Science (ADS) SDK to access your data and get started with exploratory data analysis. From which TWO places can you access the ADS SDK?
- Oracle Big Data Service.
- Oracle Machine Learning.
- Conda environments in OCI Data Science.
- Python Package Index (PyPI).
- Oracle Autonomous Data Warehouse.

Q: You are attempting to save a model from a notebook session to the model catalog by using the ADS SDK, with resource principal as the authentication signer, and you get a 404 authentication error. Which TWO should you look for to ensure permissions are set up correctly?
- The dynamic group's matching rules exist for notebook sessions in the compartment.
- The model artifact is saved to the block volume of the notebook session.
- The policy for the dynamic group grants manage permissions for the model catalog in this compartment.
- The networking configuration allows access to Oracle Cloud Infrastructure services through a service gateway.
- The policy for your user group grants manage permissions for the model catalog in this compartment.

Q: You are a data scientist working inside a notebook session and you attempt to pip install a package from a public repository that is not included in your conda environment. After running this command, you get a network timeout error. What might be missing from your network configuration?
- The NAT Gateway with public internet access.
- Service Gateway with private subnet access.
- FastConnect to an on-premises network.
- Primary virtual network interface card (VNIC).

Q: You have received machine learning model training code, without clear information about the optimal shape to run the training on. How would you proceed to identify the optimal compute shape for your model training that provides a balanced cost and processing time?
- Start with a smaller shape and monitor the job run metrics and the time required to complete the model training. If the compute shape is not fully utilized, tune the model parameters and rerun the job. Repeat the process until the shape resources are fully utilized.
- Start with the strongest compute shape Jobs supports and monitor the job run metrics and the time required to complete the model training. Tune the model so that it utilizes as much of the compute resources as possible, even at an increased cost.
- Start with a small shape and monitor the utilization metrics and the time required to complete the model training. If the compute shape is fully utilized, change to a compute shape that has more resources and rerun the job. Repeat the process until the processing time does not improve.
- Start with a random compute shape and monitor the utilization metrics and the time required to finish the model training. Perform model training optimizations and performance tests in advance to identify the right compute shape before running the model training as a job.

Q: You are given a task of writing a program that sorts document images by language. Which Oracle AI service would you use?
- Oracle Digital Assistant.
- OCI Vision.
- OCI Speech.
- OCI Language.

Q: You are asked to prepare data for a custom-built model that requires transcribing Spanish video recordings into a readable text format with profane words identified. Which Oracle Cloud service would you use?
- OCI Anomaly Detection.
- OCI Speech.
- OCI Translation.
- OCI Language.

Q: Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from reference libraries and index websites, such as scikit-learn?
- DatasetBrowser.
- DatasetFactory.
- ADSTuner.
- SecretKeeper.

Q: While working in your notebook session, you find that your notebook session does not have enough compute CPU and memory for your workload. How would you scale up your notebook session without losing your work?
- Deactivate your notebook session, provision a new notebook session on a larger compute shape, and recreate all your file changes.
- Download your files and data to your local machine, delete your notebook session, provision a new notebook session on a larger compute shape, and upload your files from your local machine to the new notebook session.
- Ensure your files and environments are written to the block volume storage under the /home/datascience directory, deactivate the notebook session, and activate the notebook session with a larger compute shape selected.
- Create a temporary bucket in Object Storage, write all your files and data to Object Storage, delete the notebook session, provision a new notebook session on a larger compute shape, and copy your files and data from your temporary bucket to your new notebook session.

Q: The Oracle AutoML pipeline automates hyperparameter tuning by training the model with different parameters in parallel. You have created an instance of Oracle AutoML as oracle_automl and now you want an output with all the different trials performed by Oracle AutoML. Which of the following commands gives you the results of all trials?
- oracle_automl.print_trials().
- oracle_automl.visualize_tuning_trials().
- oracle_automl.visualize_adaptive_sampling_trials().
- oracle_automl.visualize_algorithm_selection_trials().

Q: You have a data set with fewer than 1000 observations, and you are using Oracle AutoML to build a classifier. While visualizing the results of each stage of the Oracle AutoML pipeline, you notice that no visualization has been generated for one of the stages. Which stage is not visualized?
- Feature selection.
- Algorithm selection.
- Adaptive sampling.
- Hyperparameter tuning.

Q: For your next data science project, you need access to public geospatial images. Which Oracle Cloud service provides free access to those images?
- Oracle Big Data Service.
- Oracle Analytics Cloud.
- Oracle Cloud Infrastructure (OCI) Data Science.
- Oracle Open Data.

Q: What is a common maxim about data scientists?
- They spend 80% of their time finding and preparing data and 20% analyzing it.
- They spend 80% of their time analyzing data and 20% finding and preparing it.
- They spend 80% of their time on failed analytics projects and 20% doing useful work.

Q: Why is data sampling useful for data scientists?
- It lets them analyze data sets in small batches to reduce their use of system resources.
- It reduces the amount of data storage space that's required for data science applications.
- It enables them to use a representative subset of data to build accurate analytical models more quickly.

Q: True or false? Bias is a common problem in data science applications.
- True.
- False.

Q: In machine learning, what is the primary difference between supervised and unsupervised learning?
- Supervised learning involves data that has been labeled and classified, while unsupervised learning data is unlabeled and unclassified.
- Supervised learning is monitored closely by data scientists, while they don't play a role in unsupervised learning.
- Supervised learning is only used for image recognition, while unsupervised learning can be used for various analytics applications.
- Supervised learning is created and managed by the Data Engineer.

Q: Which of the following analytical and statistical techniques do data scientists commonly use?
- Classification.
- Regression.
- Clustering.
- All of the above.

Q: Which of the following programming languages are most widely used by data scientists?
- C and C++.
- Python, R, and SQL.
- Java and JavaScript.

Q: True or false? Data scientists typically need a combination of technical skills, nontechnical ones, and suitable personality traits to be successful.
- True.
- False.

Q: What is the primary difference between a data scientist and a data engineer?
- A data engineer collects and prepares data, and a data scientist then analyzes it.
- A data engineer analyzes data after a data scientist collects and prepares it.
- A data engineer builds data pipelines and helps prepare data, while a data scientist is responsible for data collection, preparation, and analysis.
- A data engineer creates data flows to be used as templates by the data analyst.

Q: What is the first step in the data science process?
- Collecting data and preparing it for analysis.
- Experimenting with and tuning different analytical models.
- Defining an analytical hypothesis that could provide business value.
- Working with data owners.

Q: Which of the following best describes the principal goal of data science?
- To collect and archive exhaustive data sets from various source systems for corporate record-keeping uses.
- To mine and analyze large amounts of data in order to uncover information that can be used for operational improvements and business gains.
- To collect and prepare data for use as part of analytics applications.
- Data science is focused on the output of the analysis.

Q: You are given the task of writing a program that sorts document images by language. Which Oracle service would you use?
- Oracle Digital Assistant.
- OCI Language.
- OCI Speech.
- OCI Vision.

Q: Six months ago you created and deployed a model that predicts customer churn for a call center. Initially, it was yielding quality predictions. However, over the last two months, users have been questioning the credibility of the predictions. Which TWO methods would you employ to verify accuracy and lower customer churn?
- Drift monitoring.
- Redeploy the model.
- Operational monitoring.
- Retrain the model.
- Validate the model using recent data.

Q: You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) to create a model and need some additional Python libraries for processing genome sequencing data. Which of the following THREE statements are correct with respect to installing additional Python libraries to process the data?
- OCI Data Science allows root privileges in notebook sessions.
- You can install any open source package available in a publicly accessible Python Package Index (PyPI) repository.
- You can only install libraries using yum and pip as a normal user.
- You cannot install a library that's not preinstalled in the provided image.
- You can install private or custom libraries from your own internal repositories.

Q: You have an embarrassingly parallel or distributed batch job with a large amount of data running using Data Science Jobs. What would be the best approach to run the workload?
- Create a job in Data Science Jobs and then start the number of simultaneous job runs required for your workload.
- Create a new job for every job run that you have to run in parallel, because the Data Science Jobs service can have only one job run per job.
- Create the job in Data Science Jobs and start a job run. When it is done, start a new job run until you achieve the number of runs required.
- Reconfigure the job run because Data Science Jobs does not support embarrassingly parallel workloads.

Q: You have created a model and want to use the Accelerated Data Science (ADS) SDK to deploy the model. Where are the artifacts to deploy this model with ADS?
- OCI Vault.
- Model Depository.
- Model Catalog.
- Data Science Artifactory.

Q: You are working as a data scientist for a healthcare company. They decided to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in OCI Data Science?
- Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.
- Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.
- Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application.
- Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK.

Q: You are creating an Oracle Cloud Infrastructure (OCI) Data Science job that will run on a recurring basis in a production environment. This job will pick up sensitive data from an Object Storage bucket, train a model, and save it to the model catalog.
How would you design the authentication mechanism for the job?
- Create a pre-authenticated request (PAR) for the Object Storage bucket and use that in the job code.
- Use the resource principal of the job run as the signer in the job code, ensuring there is a dynamic group for this job run with appropriate access to Object Storage and the model catalog.
- Package your personal OCI config file and keys in the job artifact.
- Store your personal OCI config file and keys in the Vault, and access the Vault through the job run resource principal.

Q: You are a data scientist with a set of text and image files that need annotation, and you want to use Oracle Cloud Infrastructure (OCI) Data Labeling. Which of the following THREE annotation classes are supported by the tool?
- Object detection.
- Named entity extraction.
- Classification (single/multi-label).
- Key-point and landmark.
- Polygonal segmentation.
- Semantic segmentation.

Q: Which Oracle Cloud Infrastructure (OCI) Data Science policy is invalid?
- Allow group DataScienceGroup to use virtual-network-family in compartment DataScience.
- Allow group DataScienceGroup to use data-science-model-sessions in compartment DataScience.
- Allow dynamic-group DataScienceDynamicGroup to manage data-science-projects in compartment DataScience.
- Allow dynamic-group DataScienceDynamicGroup to manage data-science-family in compartment DataScience.

Q: Which is NOT a valid OCI Data Science notebook session approach?
- Ensure you don't execute long-running Python processes in a notebook cell. Run the process directly in the terminal and use Python logging to get updates on the progress of your job.
- Avoid having multiple users in the same notebook session due to the possibility of resource contention and write conflicts.
- While connecting to data in OCI Object Storage from your notebook session, the best practice is to make a local copy on the device and then upload it to your notebook session block volume.
- Authenticate using your notebook session's resource principal to access other OCI resources. Resource principals provide a more secure way to authenticate to resources compared to the OCI configuration and API key approach.

Q: You are working as a data scientist for a healthcare company. You have a series of neurophysiological data on OCI Data Science and have developed a convolutional neural network (CNN) classification model. It predicts the source of seizures in drug-resistant epileptic patients. You created a model artifact with all the necessary files. When you deployed the model, it failed to run because you did not point to the correct conda environment in the model artifact. Where would you provide instructions to use the correct conda environment?
- score.py.
- runtime.yaml.
- requirements.txt.
- model_artifact_validate.py.

Q: You have an image classification model in the model catalog which is deployed as an HTTP endpoint using model deployments. Your tenancy administrator is seeing increased demands and has asked you to increase the load balancing bandwidth from the default of 10 Mbps. You are provided with the following information:
- Payload size = 1024 KB
- Estimated requests per second = 120 requests/second (Monday through Friday, in every month, in every year)
- Buffer percentage = 20%
What is the optimal load balancing bandwidth to redeploy your model?
- 452 Mbps.
- 52 Mbps.
- 7052 Mbps.
- 1152 Mbps.

Q: You want to create an anomaly detection model using the OCI Anomaly Detection service that avoids as many false alarms as possible. False Alarm Probability (FAP) indicates model performance. How would you set the value of the False Alarm Probability?
- High.
- Low.
- Zero.
- Use a function.

Q: You have just completed analyzing a set of images by using Oracle Cloud Infrastructure (OCI) Data Labeling, and you want to export the annotated data. Which TWO formats are supported?
- CONLL V2003.
- COCO.
- Data Labeling Service proprietary JSON.
- SpaCy.
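The load-balancer sizing question above can be worked through directly: bandwidth is payload bits per request times requests per second, grossed up by the buffer. Treating 1 Mbit as 1024 Kbit so the arithmetic matches the answer choices:

```python
payload_kb = 1024          # payload size per request, in KB
requests_per_sec = 120
buffer_pct = 0.20

payload_mbit = payload_kb * 8 / 1024         # 1024 KB = 8192 Kbit = 8 Mbit
base_mbps = payload_mbit * requests_per_sec  # 8 Mbit * 120 req/s = 960 Mbps
optimal_mbps = base_mbps * (1 + buffer_pct)  # 960 * 1.2 = 1152 Mbps
print(optimal_mbps)
```

This reproduces the 1152 Mbps answer choice.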
You are a data scientist, you use the Oracle Cloud Infrastructure (OCI) Language service to train custom models. Which types of custom models can be trained?. Image classification, Named Entity Recognition (NER). Text classification, Named Entity Recognition (NER). Sentiment Analysis, Named Entity Recognition (NER). Object detection, Text classification. What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?. Create and upload score.py and runtime.yaml. Create and upload the API signing key and config file. Import the REST API. Call the ADS command to enable AI integration. You are using Oracle Cloud Infrastructure (OCI) Anomaly Detection to train a model to detect anomalies in pump sensor data - What are you trying to determine?. How does the required False Alarm Probability setting affect an anomaly detection model?. It is used to disable the reporting of false alarms. It changes the sensitivity of the model to detecting anomalies. It determines how many false alarms occur before an error message is generated. It adds a score to each signal indicating the probability that its a false alarm. You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for various types of text analyses. Which TWO capabilities can you utilize with this tool?. Topic classification. Table extraction. Sentiment analysis. Sentence diagramming. Punctuation correction. You want to write a program that performs document analysis tasks such as extracting text and tables from a document. Which Oracle AI service would you use?. Oracle Digital Assistant. OCI Speech. OCI Vision. OCI Vision. OCI Language. Which Oracle Accelerated Data Science (ADS) classes can be used for easy access to data sets from reference libraries and index websites such as scikit-learn?. DataLabeling. DatasetBrowser. SecretKeeper. ADSTuner. You are a data scientist trying to load data into your notebook session. 
You understand that the Accelerated Data Science (ADS) SDK supports loading various data formats. Which of the following THREE are ADS-supported data formats?. DOCX. Pandas DataFrame. JSON. Raw Images. XML. You are a data scientist leveraging Oracle Cloud Infrastructure (OCI) Data Science to create a model and need some additional Python libraries for processing genome sequencing data. Which of the following THREE statements are correct with respect to installing additional Python libraries to process the data?. You can only install libraries using yum and pip as a normal user. You can install private or custom libraries from your own internal repositories. OCI Data Science allows root privileges in notebook sessions. You can install any open source package available on a publicly accessible Python Package Index (PyPI) repository. You cannot install a library that's not preinstalled in the provided image. You are a data scientist working for a manufacturing company. You have developed a forecasting model to predict the sales demand in the upcoming months. You created a model artifact that contained custom logic requiring third-party libraries. When you deployed the model, it failed to run because you did not include all the third-party dependencies in the model artifact. What file should be modified to include the missing libraries?. model_artifact_validate.py. score.py. runtime.yaml. requirements.txt. You are a data scientist working for a utilities company. You have developed an algorithm that detects anomalies from a utility reader in the grid. The size of the model artifact is about 2 GB, and you are trying to store it in the model catalog. Which THREE interfaces could you use to save the model artifact into the model catalog?. Oracle Cloud Infrastructure (OCI) Command Line Interface (CLI). Accelerated Data Science (ADS) Software Development Kit (SDK). ODSC CLI. Console. OCI Python SDK. Git CLI.
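Several of the model-artifact questions above hinge on the roles of score.py (inference logic), runtime.yaml (conda environment reference), and requirements.txt (third-party dependencies). A minimal sketch of the score.py contract, using a hard-coded stand-in for a real deserialized model so the snippet runs on its own:

```python
# score.py sketch: a model deployment calls load_model() once, then predict()
# per request. The lambda below is a placeholder for a real model that would
# normally be deserialized from the artifact (e.g. a pickled estimator).
def load_model():
    return lambda xs: [2 * x for x in xs]   # stand-in "model"

def predict(data, model=None):
    model = model or load_model()
    return {"prediction": model(data)}

print(predict([1, 2, 3]))                   # {'prediction': [2, 4, 6]}
```

The conda environment the deployment should use is declared in runtime.yaml, not here; score.py only carries the inference code.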
As a data scientist, you are tasked with creating a model training job that is expected to take different hyperparameter values on every run. What is the most efficient way to set those parameters with Oracle Data Science Jobs?. Create a new job every time you need to run your code and pass the parameters as environment variables. Create a new job by setting the required parameters in your code and create a new job for every code change. Create your code to expect different parameters either as environment variables or as command line arguments, which are set on every job run with different values. Create your code to expect different parameters as command line arguments and create a new job every time you run the code. You have an embarrassingly parallel or distributed batch job on a large amount of data that you consider running using Data Science Jobs. What would be the best approach to run the workload?. Create the job in Data Science Jobs and start a job run. When it is done, start a new job run until you achieve the number of runs required. Create the job in Data Science Jobs and then start the number of simultaneous job runs required for your workload. Reconfigure the job run because Data Science Jobs does not support embarrassingly parallel workloads. Create a new job for every job run that you have to run in parallel, because the Data Science Jobs service can have only one job run per job. You have received machine learning model training code, without clear information about the optimal shape to run the training. How would you proceed to identify the optimal compute shape for your model training that provides a balanced cost and processing time?. Start with a random compute shape and monitor the utilization metrics and time required to finish the model training. Perform model training optimizations and performance tests in advance to identify the right compute shape before running the model training as a job.
Start with a smaller shape and monitor the Job Run metrics and time required to complete the model training. If the compute shape is not fully utilized, tune the model parameters, and re-run the job. Repeat the process until the shape resources are fully utilized. Start with the strongest compute shape Jobs supports and monitor the Job Run metrics and time required to complete the model training. Tune the model so that it utilizes as much compute resources as possible, even at an increased cost. Start with a smaller shape and monitor the utilization metrics and time required to complete the model training. If the compute shape is fully utilized, change to a compute shape that has more resources and re-run the job. Repeat the process until the processing time does not improve. You have a complex Python code project that could benefit from using Data Science Jobs as it is a repeatable machine learning model training task. The project contains many sub-folders and classes. What is the best way to run this project as a Job?. ZIP the entire code project folder and upload it as a Job artifact. Jobs automatically identifies the main top level where the code is run. Rewrite your code so that it is a single executable Python or Bash/Shell script file. ZIP the entire code project folder and upload it as a Job artifact on job creation. Jobs identifies the main executable file automatically. ZIP the entire code project folder, upload it as a Job artifact on job creation, and set JOB_RUN_ENTRYPOINT to point to the main executable file. You are a data scientist working inside a notebook session and you attempt to pip install a package from a public repository that is not included in your conda environment. After running this command, you get a network timeout error. What might be missing from your networking configuration?. FastConnect to an on-premises network. Primary Virtual Network Interface Card (VNIC). NAT Gateway with public internet access.
Service Gateway with private subnet access. You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?. Data access. Feature engineering. Model type selection. Model validation. Six months ago, you created and deployed a model that predicts customer churn for a call centre. Initially, it was yielding quality predictions. However, over the last two months, users are questioning the credibility of the predictions. Which TWO methods would you employ to verify the accuracy of the model?. Retrain the model. Validate the model using recent data. Drift monitoring. Redeploy the model. Operational monitoring. Which TWO statements are true about published conda environments?. They are curated by Oracle Cloud Infrastructure (OCI) Data Science. The odsc conda init command is used to configure the location of published conda environments. Your notebook session acts as the source to share published conda environments with team members. You can only create a published conda environment by modifying a Data Science conda environment. In addition to service job run environment variables, conda environment variables can be used in Data Science Jobs. You have created a conda environment in your notebook session. This is the first time you are working with published conda environments. You have also created an Object Storage bucket with permission to manage the bucket. Which TWO commands are required to publish the conda environment?. odsc conda publish --slug <SLUG>. odsc conda list --override. odsc conda init --bucket_namespace <NAMESPACE> --bucket_name <BUCKET>. odsc conda create --file manifest.yaml. conda activate /home/datascience/conda/<SLUG>. When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file.
What is the purpose of the score.py file?. Configure the deployment infrastructure. Execute the inference logic code. Define the compute scaling strategy. Define the inference server dependencies. You realize that your model deployment is about to reach its utilization limit. What would you do to avoid the issue before requests start to fail? Pick THREE. Update the deployment to add more instances. Delete the deployment. Update the deployment to use fewer instances. Update the deployment to use a larger virtual machine (more CPUs/memory). Reduce the load balancer bandwidth limit so that fewer requests come in. You are working as a data scientist for a healthcare company. They decide to analyze the data to find patterns in a large volume of electronic medical records. You are asked to build a PySpark solution to analyze these records in a JupyterLab notebook. What is the order of recommended steps to develop a PySpark application in Oracle Cloud Infrastructure (OCI) Data Science?. Install a Spark conda environment. Configure core-site.xml. Launch a notebook session. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Configure core-site.xml. Install a PySpark conda environment. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Launch a notebook session. Launch a notebook session. Configure core-site.xml. Install a PySpark conda environment. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Develop your PySpark application. Create a Data Flow application with the Accelerated Data Science (ADS) SDK. Launch a notebook session. Install a PySpark conda environment. Configure core-site.xml. You are a data scientist building a pipeline in the Oracle Cloud Infrastructure (OCI) Data Science service for your machine learning project. 
You want to optimize the pipeline completion time by running some steps in parallel. Which statement is true about running pipeline steps in parallel?. Steps in a pipeline can be run only sequentially. Pipeline steps can be run in sequence or in parallel, as long as they create a directed acyclic graph (DAG). All pipeline steps are always run in parallel. Parallel steps cannot be run if they are completely independent of each other. You want to build a multistep machine learning workflow by using the Oracle Cloud Infrastructure (OCI) Data Science Pipeline feature. How would you configure the conda environment to run a pipeline step?. Configure a compute shape. Configure a block volume. Use command-line variables. Use environmental variables. You want to write a Python script to create a collection of different projects for your data science team. Which Oracle Cloud Infrastructure (OCI) Data Science interface would you use?. The OCI Software Development Kit (SDK). OCI Console. Command line interface (CLI). Mobile App. You are a data scientist designing an air traffic control model, and you choose to leverage Oracle AutoML. You understand that the Oracle AutoML pipeline consists of multiple stages and automatically operates in a certain sequence. What is the correct sequence for the Oracle AutoML pipeline?. Algorithm selection, Feature selection, Adaptive sampling, Hyperparameter tuning. Adaptive sampling, Algorithm selection, Feature selection, Hyperparameter tuning. Adaptive sampling, Feature selection, Algorithm selection, Hyperparameter tuning. Algorithm selection, Adaptive sampling, Feature selection, Hyperparameter tuning. You have trained three different models on your data set using Oracle AutoML. You want to visualize the behavior of each of the models, including the baseline model, on the test set.
Which class should be used from the Accelerated Data Science (ADS) SDK to visually compare the models?. EvaluationMetrics. ADSEvaluator. ADSExplainer. ADSTuner. Using Oracle AutoML, you are tuning hyperparameters on a supported model class and have specified a time budget. AutoML terminates computation once the time budget is exhausted. What would you expect AutoML to return in case the time budget is exhausted before hyperparameter tuning is completed?. The current best known hyperparameter configuration. Random hyperparameter configuration. Hyperparameter with minimal machine learning. The last generated hyperparameter configuration. As a data scientist, you are trying to automate a machine learning (ML) workflow and have decided to use Oracle Cloud Infrastructure (OCI) AutoML Pipeline. Which THREE are part of the AutoML Pipeline?. Feature Selection. Adaptive Sampling. Model Deployment. Feature Extraction. Algorithm Selection. You want to use ADSTuner to tune the hyperparameters of a supported model you recently trained. You have just started your search and want to reduce the computational cost as well as assess the quality of the model class that you are using. What is the most appropriate search space strategy to choose?. Detailed. ADSTuner doesn't need a search space to tune the hyperparameters. Perfunctory. Pass a dictionary that defines a search space. You have just received a new data set from a colleague. You want to quickly find out summary information about the data set, such as the types of features, the total number of observations, and distributions of the data. Which Accelerated Data Science (ADS) SDK method from the ADSDataset class would you use?. show_corr(). to_xgb(). compute(). show_in_notebook(). You want to make your model more frugal to reduce the cost of collecting and processing data. You plan to do this by removing features that are highly correlated.
You would like to create a heat map that displays the correlation so that you can identify candidate features to remove. Which Accelerated Data Science (ADS) SDK method is appropriate to display the correlation between Continuous and Categorical features?. pearson_plot(). cramersv_plot(). correlation_ratio_plot(). corr(). You have built a machine learning model to predict whether a bank customer is going to default on a loan. You want to use Local Interpretable Model-Agnostic Explanations (LIME) to understand a specific prediction. What is the key idea behind LIME?. Global behaviour of a machine learning model may be complex, while the local behaviour may be approximated with a simpler surrogate model. Model-agnostic techniques are more interpretable than techniques that are dependent on the types of models. Global and local behaviours of machine learning models are similar. Local explanation techniques are model-agnostic, while global explanation techniques are not. You want to evaluate the relationship between feature values and target variables. You have a large number of observations having a near uniform distribution and the features are highly correlated. Which model explanation technique should you choose?. Feature Permutation Importance Explanations. Local Interpretable Model-Agnostic Explanations. Feature Dependence Explanations. Accumulated Local Effects. As you are working in your notebook session, you find that your notebook session does not have enough compute CPU and memory for your workload. How would you scale up your notebook session without losing your work?. Create a temporary bucket on Object Storage, write all your files and data to Object Storage, delete your notebook session, provision a new notebook session on a larger compute shape, and copy your files and data from your temporary bucket onto your new notebook session.
Ensure your files and environments are written to the block volume storage under the /home/datascience directory, deactivate the notebook session, and activate the notebook session with a larger compute shape selected. Download all your files and data to your local machine, delete your notebook session, provision a new notebook session on a larger compute shape, and upload your files from your local machine to the new notebook session. Deactivate your notebook session, provision a new notebook session on a larger compute shape and re-create all of your file changes. Which OCI service enables you to build, train, and deploy machine learning models in the cloud?. Oracle Cloud Infrastructure Data Catalog. Oracle Cloud Infrastructure Data Integration. Oracle Cloud Infrastructure Data Science. Oracle Cloud Infrastructure Data Flow. As a data scientist, you are tasked with creating a model training job that is expected to take different hyperparameter values on every run. What is the most efficient way to set those parameters with Oracle Data Science Jobs?. Create a new job every time you need to run your code and pass the parameters as environment variables. Create your code to expect different parameters as command line arguments, and create a new job every time you run the code. Create a new job by setting the required parameters in your code, and create a new job for every code change. Create your code to expect different parameters either as environment variables or as command line arguments, which are set on every job run with different values. You are a data scientist using Oracle AutoML to produce a model and you are evaluating the score metric for the model. Which of the following TWO prevailing metrics would you use for evaluating a multiclass classification model?. Recall. Mean squared error. F1 Score. R-Squared. Explained variance score. How is the storage associated with OCI Data Science Workspace managed?.
Data is stored on local disk within the workspace instance. Data is automatically stored in an attached Object Storage bucket. Data is stored in a separate File Storage service. Data is stored in an external database using block volumes. Which technique can be used for feature engineering in the machine learning lifecycle?. Principal Component Analysis (PCA). K-means clustering. Support Vector Machines (SVM). Gradient boosting. How can you collaborate with team members in OCI Data Science Workspace?. By granting access to specific notebooks and files. By using version control systems integrated with the workspace. By sharing the workspace instance with other users. By enabling chat and video conferencing within the workspace. Where do calls to stdout and stderr from score.py go in a model deployment?. The file that was defined for them on the Virtual Machine (VM). The predict log in the Oracle Cloud Infrastructure (OCI) Logging service as defined in the deployment. The OCI Cloud Shell, which can be accessed from the console. The OCI console. Which OCI service provides a managed Kubernetes service for deploying, scaling, and managing containerized applications?. Oracle Cloud Infrastructure Container Registry. Oracle Cloud Infrastructure Load Balancing. Oracle Cloud Infrastructure Container Engine for Kubernetes. Oracle Cloud Infrastructure Streaming. What preparation steps are required to access an Oracle AI service SDK from a Data Science notebook session?. Call the Accelerated Data Science (ADS) command to enable AI integration. Create and upload the API signing key and config file. Import the REST API. Create and upload execute.py and runtime.yaml. Which statement about Oracle Cloud Infrastructure Multi-Factor Authentication (MFA) is NOT valid?. Users cannot disable MFA for themselves. A user can register only one device to use for MFA. Users must install a supported authenticator app on the mobile device they intend to register for MFA.
An administrator can disable MFA for another user. Which Security Zone policy is NOT valid?. A boot volume can be moved from a security zone to a standard compartment. A compute instance cannot be moved from a security zone to a standard compartment. Resources in a security zone should not be accessible from the public internet. Resources in a security zone must be automatically backed up regularly. You have configured the Management Agent on an Oracle Cloud Infrastructure (OCI) Linux instance for log ingestion purposes. Which is a required configuration for the OCI Logging Analytics service to collect data from multiple logs of this instance?. Log - Log Group Association. Entity - Log Association. Source - Entity Association. Log Group - Source Association. Which Oracle Data Safe feature minimizes the amount of personal data and allows internal test, development, and analytics teams to operate with reduced risk?. Data encryption. Security assessment. Data masking. Data discovery. Data auditing. You are using a custom application with third-party APIs to manage applications and data hosted in an Oracle Cloud Infrastructure (OCI) tenancy. Although your third-party APIs don't support OCI's signature-based authentication, you want them to communicate with OCI resources. Which authentication option must you use to ensure this?. OCI username and password. API Signing Key. SSH Key Pair with 2048-bit algorithm. Auth Token. In which TWO ways can you improve data durability in Oracle Cloud Infrastructure Object Storage?. Set up volumes in a RAID 1 configuration. Enable server-side encryption. Enable Versioning. Limit delete permissions. Enable client-side encryption. You want to make API calls against other OCI services from your instance without configuring user credentials. How would you achieve this?. Create a dynamic group and add a policy. Create a dynamic group and add your instance. Create a group and add a policy. No configuration is required for making API calls.
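The dynamic-group question above is answered by pairing a matching rule with a policy statement. A sketch of that pairing, with placeholder names and a hypothetical compartment OCID:

```
# Dynamic group matching rule: pick up all instances in one compartment.
# The OCID below is a placeholder, not a real identifier.
Any {instance.compartment.id = 'ocid1.compartment.oc1..exampleuniqueID'}

# Policy statement: let those instances call Object Storage without user credentials.
Allow dynamic-group instance-dg to manage object-family in compartment my-compartment
```

With this in place, code on the instance authenticates via its instance principal and needs no stored API keys.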
Which statement is true about Oracle Cloud Infrastructure (OCI) Object Storage server-side encryption?. All the traffic to and from object storage is encrypted by using Transport Layer Security. Encryption is not enabled by default. Customer-provided encryption keys are never stored in OCI Vault service. Each object in a bucket is always encrypted with the same data encryption key. Which statement is true about origin management in web application firewall (WAF)? Statement A: Multiple origins can be defined. Statement B: Only a single origin can be active for a WAF. Only statement B is true. Both the statements are false. Both the statements are true. Only statement A is true. Which of these protects customer data at rest and in transit in a way that allows customers to meet their security and compliance requirements for cryptographic algorithms and key management?. Security controls. Customer isolation. Data encryption. Identity Federation. What is the minimum active storage duration for logs used by Logging Analytics to be archived?. 60 days. 10 days. 30 days. 15 days. Which components are a part of the OCI Identity and Access Management service?. Policies. Regional subnets. Compute instances. VCN. Which web application firewall (WAF) service component must be configured to allow, block, or log network requests when they meet specified criteria?. Protection rules. Bot Management. Origin. Web Application Firewall policy. Which statement is true about standards?. They may be audited. They are result of a regulation or contractual requirement or an industry requirement. They are methods and instructions on how to maintain or accomplish the directives of the policy. They are the foundation of corporate governance. Which cache rules criterion matches if the concatenation of the requested URL path and query are identical to the contents of the value field?. URL_PART_CONTAINS. URL_IS. URL_PART_ENDS_WITH. URL_STARTS_WITH. Which is NOT a compliance document?. Certificate. 
Penetration test report. Attestation. Bridge letter. On which option do you set an Oracle Cloud Infrastructure Budget?. Compartments. Instances. Free-form tags. Tenancy. Which OCI cloud service lets you centrally manage the encryption keys that protect your data and the secret credentials that you use to securely access resources?. Data Safe. Cloud Guard. Data Guard. Vault. Which type of file system does File Storage use?. NFSv3. iSCSI. Paravirtualized. NVMe. SSD. Which Oracle Cloud service provides restricted access to target resources?. Bastion. Internet Gateway. Load balancer. SSL certificate. How can you convert a fixed load balancer to a flexible load balancer?. There is no way to convert the load balancer. Use Update Shape workflows. Delete the fixed load balancer and create a new one. Use the Edit Listener option. Which architecture is based on the principle of "never trust, always verify"?. Federated identity. Zero trust. Fluid perimeter. Defense in depth. Which type of firewall is designed to protect against web application attacks, such as SQL injection and cross-site scripting?. Stateful inspection firewall. Web Application Firewall. Incident firewall. Packet filtering firewall. What does an audit log event include?. Audit type. Header. Footer. Type of input. Which is NOT a part of Observability and Management Services?. Event Services. OCI Management Service. Logging Analytics. Logging. Which encryption is used for Oracle Data Science?. 256-bit Advanced Encryption Standard (AES-256). Data Encryption Standard (DES). Triple DES (TDES). Twofish. Rivest Shamir Adleman (RSA). Select TWO reasons why it is important to rotate encryption keys when using Oracle Cloud Infrastructure (OCI) Vault to store credentials or other secrets. Key rotation allows you to encrypt no more than five keys at a time. Key rotation improves encryption efficiency. Periodically rotating keys makes it easier to reuse keys. Key rotation reduces risk if a key is ever compromised.
Periodically rotating keys limits the amount of data encrypted by one key version. You are a computer vision engineer building an image recognition model. You decide to use Oracle Data Labeling to annotate your image data. Which of the following THREE are possible ways to annotate an image in Data Labeling?. Adding labels to an image using semantic segmentation, by drawing multiple bounding boxes on an image. Adding a single label to an image. Adding labels to an image by drawing a bounding box on an image is not supported by Data Labeling. Adding labels to an image using object detection, by drawing bounding boxes on an image. Adding multiple labels to an image. As a data scientist, you are working on a global health data set that has data from more than 50 countries. You want to encode three features, such as 'countries', 'race', and 'body organ', as categories. Which option would you use to encode the categorical features?. DataFrameLabelEncoder(). auto_transform(). OneHotEncoder(). show_in_notebook(). As a data scientist, you use the Oracle Cloud Infrastructure (OCI) Language service to train custom models. Which types of custom models can be trained?. Image classification, Named Entity Recognition (NER). Text classification, Named Entity Recognition (NER). Sentiment Analysis, Named Entity Recognition (NER). Object detection, Text classification. You are building a model and need input that represents data as morning, afternoon, or evening. However, the data contains a time stamp. What part of the Data Science life cycle would you be in when creating the new variable?. Model type selection. Model validation. Data access. Feature engineering. As a data scientist, you create models for cancer prediction based on mammographic images. Correct identification is crucial in this case. After evaluating two models, you arrive at the following confusion matrix. Which model would you prefer and why? Model 1 has a test accuracy of 80% and a recall of 70%.
Model 2 has a test accuracy of 75% and a recall of 85%. Model 2, because recall is high. Model 1, because the test accuracy is high. Model 2, because recall has more impact on predictions in this use case. Model 1, because recall has lesser impact on predictions in this use case. You want to make your model more parsimonious to reduce the cost of collecting and processing data. You plan to do this by removing features that are highly correlated. You would like to create a heat map that displays the correlation so that you can identify candidate features to remove. Which Accelerated Data Science (ADS) SDK method would be appropriate to display the correlation between Continuous and Categorical features?. corr(). correlation_ratio_plot(). pearson_plot(). cramersv_plot(). You are attempting to save a model from a notebook session to the model catalog by using the Accelerated Data Science (ADS) SDK, with resource principal as the authentication signer, and you get a 404 authentication error. Which TWO should you look for to ensure permissions are set up correctly?. The model artifact is saved to the block volume of the notebook session. A dynamic group has rules matching the notebook sessions in its compartment. The policy for your user group grants manage permissions for the model catalog in this compartment. The policy for a dynamic group grants manage permissions for the model catalog in its compartment. The networking configuration allows access to Oracle Cloud Infrastructure services through a Service Gateway. You are preparing a configuration object necessary to create a Data Flow application. Which THREE parameter values should you provide?. The path to the archive.zip file. The local path to your pySpark script. The compartment of the Data Flow application. The bucket used to read/write the pySpark script in Object Storage. The display name of the application.
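The recall-versus-accuracy trade-off in the mammography question above can be made concrete with a small worked example on made-up confusion-matrix counts (the numbers below are illustrative, not from the question):

```python
# Accuracy vs recall from a confusion matrix. In cancer screening, a false
# negative (missed cancer) is far costlier than a false positive, so the
# model with higher recall is usually preferred even at lower accuracy.
def accuracy_and_recall(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    recall = tp / (tp + fn)          # share of actual positives detected
    return accuracy, recall

model_a = accuracy_and_recall(tp=14, fp=2, fn=6, tn=58)    # hypothetical counts
model_b = accuracy_and_recall(tp=18, fp=10, fn=2, tn=50)   # hypothetical counts
print(model_a, model_b)   # (0.9, 0.7) (0.85, 0.9)
```

Model A wins on accuracy but misses 6 of 20 actual positives; Model B catches 18 of 20, mirroring why Model 2 is preferred in the question.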
The feature type TechJob has the following registered validators: TechJob.validator.register(name='is_tech_job', handler=is_tech_job_default_handler); TechJob.validator.register(name='is_tech_job', handler=is_tech_job_open_handler, condition=('job_family',)); TechJob.validator.register(name='is_tech_job', handler=is_tech_job_closed_handler, condition={'job_family': 'IT'}). When you run is_tech_job(job_family='Engineering'), what does the feature type validator system do?. Execute the is_tech_job_default_handler handler. Throw an error because the system cannot determine which handler to run. Execute the is_tech_job_closed_handler handler. Execute the is_tech_job_open_handler handler. You are using Oracle Cloud Infrastructure (OCI) Anomaly Detection to train a model to detect anomalies in pump sensor data. How does the required False Alarm Probability setting affect an anomaly detection model?. It is used to disable the reporting of false alarms. It changes the sensitivity of the model to detecting anomalies. It determines how many false alarms occur before an error message is generated. It adds a score to each signal indicating the probability that it's a false alarm. You want to write a program that performs document analysis tasks such as extracting text and tables from a document. Which Oracle AI service would you use?. OCI Language. Oracle Digital Assistant. OCI Speech. OCI Vision. You realize that your model deployment is about to reach its utilization limit. What would you do to avoid the issue before requests start to fail? Which THREE steps would you perform?. Update the deployment to add more instances. Delete the deployment. Update the deployment to use fewer instances. Update the deployment to use a larger virtual machine (more CPUs/memory). Reduce the load balancer bandwidth limit so that fewer requests come in. Which THREE types of data are used for Data Labeling?. Audio. Text. Document. Images. Graphs.
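The feature-type validator question above turns on how conditioned handlers are chosen: a condition giving an exact key/value pair ({'job_family': 'IT'}) wins only when the value matches, a keyword-only condition (('job_family',)) wins when just the name is present, and the unconditioned handler is the fallback. A simplified, self-contained sketch of that dispatch order (a toy model, not the real ADS API):

```python
# Toy re-implementation of condition-based handler dispatch, loosely modeled
# on ADS feature-type validators (names and logic are illustrative).
handlers = []  # list of (condition, handler) registrations

def register(handler, condition=None):
    handlers.append((condition, handler))

def dispatch(**kwargs):
    chosen = None
    for condition, handler in handlers:
        if condition is None:
            chosen = chosen or handler                      # default fallback
        elif isinstance(condition, dict):                   # key AND value must match
            if all(kwargs.get(k) == v for k, v in condition.items()):
                return handler                              # most specific: wins outright
        elif all(k in kwargs for k in condition):           # keyword name present
            chosen = handler
    return chosen

register(lambda **kw: "default")
register(lambda **kw: "open", condition=("job_family",))
register(lambda **kw: "closed", condition={"job_family": "IT"})

print(dispatch(job_family="Engineering")())  # open
```

With job_family='Engineering' only the keyword name matches, so the open handler runs, which matches the expected behaviour in the question; job_family='IT' would select the closed handler instead.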
You are a data scientist leveraging the Oracle Cloud Infrastructure (OCI) Language AI service for various types of text analyses. Which TWO capabilities can you utilize with this tool?. Table extraction. Punctuation correction. Sentence diagramming. Topic classification. Sentiment analysis. You loaded data into Oracle Cloud Infrastructure (OCI) Data Science. To transform the data, you want to use the Accelerated Data Science (ADS) SDK. When you applied the get_recommendations() tool to the ADSDataset object, it showed you user-detected issues with all the recommended changes to apply to the dataset. Which option should you use to apply all the recommended transformations at once?. get_transformed_dataset(). fit_transform(). auto_transform(). visualize_transforms(). Which of the following TWO non-open source JupyterLab extensions has Oracle Cloud Infrastructure (OCI) Data Science developed and added to the notebook session experience?. Environment Explorer. Table of Contents. Command Palette. Notebook Examples. Terminal.
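The recurring "morning, afternoon, or evening" question is a classic feature-engineering step: deriving a categorical variable from a raw time stamp. A minimal sketch, with the bin boundaries (12:00 and 18:00) chosen arbitrarily for illustration:

```python
# Derive a categorical "part of day" feature from a timestamp.
# The hour cut-offs are an assumption, not prescribed by the question.
from datetime import datetime

def part_of_day(ts: datetime) -> str:
    if ts.hour < 12:
        return "morning"
    if ts.hour < 18:
        return "afternoon"
    return "evening"

print(part_of_day(datetime(2023, 5, 1, 9, 30)))   # morning
print(part_of_day(datetime(2023, 5, 1, 20, 15)))  # evening
```

Creating such derived variables from raw fields is exactly what the exam labels the feature-engineering phase of the data science life cycle.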