Professional-Machine-Learning-Engineer Premium Bundle

Google Professional Machine Learning Engineer Certification Exam

4.5 (55,125 ratings)
Practice Tests: 60 questions
PDF (print version): 60 questions
Last update: November 23, 2024

Google Professional-Machine-Learning-Engineer Free Practice Questions

Want to know about Exambible's Professional-Machine-Learning-Engineer exam practice test features? Want to learn more about the Google Professional Machine Learning Engineer certification experience? Study pinpoint Google Professional-Machine-Learning-Engineer answers to the latest Professional-Machine-Learning-Engineer questions at Exambible. Get success with an absolute guarantee to pass the Google Professional-Machine-Learning-Engineer (Google Professional Machine Learning Engineer) test on your first attempt.

Google Professional-Machine-Learning-Engineer Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
You work for a social media company. You need to detect whether posted images contain cars. Each training example is a member of exactly one class. You have trained an object detection neural network and deployed the model version to AI Platform Prediction for evaluation. Before deployment, you created an evaluation job and attached it to the AI Platform Prediction model version. You notice that the precision is lower than your business requirements allow. How should you adjust the model's final layer softmax threshold to increase precision?

  • A. Increase the recall
  • B. Decrease the recall.
  • C. Increase the number of false positives
  • D. Decrease the number of false negatives

Answer: B
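
For context on answer B: raising the model's softmax/decision threshold makes it predict the positive class less often, which trades recall away for higher precision. A quick sketch with synthetic labels and confidence scores (not taken from the question), assuming scikit-learn:

```python
# Synthetic demo: a higher decision threshold raises precision and lowers recall.
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                    # 1 = image contains a car
scores = np.clip(0.35 * y_true + rng.normal(0.35, 0.2, 1000), 0.0, 1.0)   # fake model confidences

for threshold in (0.5, 0.7, 0.9):
    y_pred = (scores >= threshold).astype(int)
    print(f"threshold={threshold:.1f}  "
          f"precision={precision_score(y_true, y_pred):.2f}  "
          f"recall={recall_score(y_true, y_pred):.2f}")
```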

NEW QUESTION 2
You are developing models to classify customer support emails. You created models with TensorFlow Estimators using small datasets on your on-premises system, but you now need to train the models using large datasets to ensure high performance. You will port your models to Google Cloud and want to minimize code refactoring and infrastructure overhead for easier migration from on-prem to cloud. What should you do?

  • A. Use AI Platform for distributed training
  • B. Create a cluster on Dataproc for training
  • C. Create a Managed Instance Group with autoscaling
  • D. Use Kubeflow Pipelines to train on a Google Kubernetes Engine cluster.

Answer: A
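
For context on answer A: Estimator-based code built around tf.estimator.train_and_evaluate() can run on AI Platform Training with little refactoring, because the service provisions the machines and injects the cluster layout (TF_CONFIG) into each replica. A rough sketch, assuming the TF 1.x-style Estimator API and placeholder bucket and feature names:

```python
import tensorflow as tf

def input_fn():
    # Hypothetical stand-in for the real support-email input pipeline.
    features = {"embedding": tf.random.uniform([32, 128])}
    labels = tf.random.uniform([32], maxval=3, dtype=tf.int32)
    return tf.data.Dataset.from_tensors((features, labels)).repeat()

estimator = tf.estimator.DNNClassifier(
    hidden_units=[64, 32],
    feature_columns=[tf.feature_column.numeric_column("embedding", shape=[128])],
    n_classes=3,
    model_dir="gs://your-bucket/email-classifier",   # placeholder bucket
)

# The same call works locally and, unchanged, as a distributed AI Platform
# Training job, which assigns chief/worker/ps roles via TF_CONFIG.
tf.estimator.train_and_evaluate(
    estimator,
    tf.estimator.TrainSpec(input_fn=input_fn, max_steps=1000),
    tf.estimator.EvalSpec(input_fn=input_fn, steps=100),
)
```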

NEW QUESTION 3
You are building a real-time prediction engine that streams files which may contain Personally Identifiable Information (PII) to Google Cloud. You want to use the Cloud Data Loss Prevention (DLP) API to scan the files. How should you ensure that the PII is not accessible by unauthorized individuals?

  • A. Stream all files to Google Cloud, and then write the data to BigQuery. Periodically conduct a bulk scan of the table using the DLP API.
  • B. Stream all files to Google Cloud, and write batches of the data to BigQuery. While the data is being written to BigQuery, conduct a bulk scan of the data using the DLP API.
  • C. Create two buckets of data: Sensitive and Non-sensitive. Write all data to the Non-sensitive bucket. Periodically conduct a bulk scan of that bucket using the DLP API, and move the sensitive data to the Sensitive bucket.
  • D. Create three buckets of data: Quarantine, Sensitive, and Non-sensitive. Write all data to the Quarantine bucket. Periodically conduct a bulk scan of that bucket using the DLP API, and move the data to either the Sensitive or Non-sensitive bucket.

Answer: D
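
For context on answer D: the quarantine pattern keeps every new file out of reach until the DLP API has classified it. A simplified sketch, assuming the google-cloud-dlp and google-cloud-storage client libraries, text files, and placeholder project, bucket, and infoType choices:

```python
from google.cloud import dlp_v2, storage

PROJECT_ID = "your-project-id"                       # placeholder
dlp = dlp_v2.DlpServiceClient()
gcs = storage.Client(project=PROJECT_ID)

def triage(blob_name: str) -> None:
    """Scan one quarantined object and move it to the appropriate bucket."""
    quarantine = gcs.bucket("quarantine-bucket")     # placeholder bucket names
    blob = quarantine.blob(blob_name)

    response = dlp.inspect_content(
        request={
            "parent": f"projects/{PROJECT_ID}",
            "inspect_config": {
                "info_types": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}],
            },
            "item": {"value": blob.download_as_text()},
        }
    )

    # Any findings => route to the restricted Sensitive bucket.
    target = "sensitive-bucket" if response.result.findings else "nonsensitive-bucket"
    quarantine.copy_blob(blob, gcs.bucket(target), blob_name)
    blob.delete()
```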

NEW QUESTION 4
You work for a global footwear retailer and need to predict when an item will be out of stock based on historical inventory data. Customer behavior is highly dynamic since footwear demand is influenced by many different factors. You want to serve models that are trained on all available data, but track your performance on specific subsets of data before pushing to production. What is the most streamlined and reliable way to perform this validation?

  • A. Use the TFX ModelValidator tools to specify performance metrics for production readiness
  • B. Use k-fold cross-validation as a validation strategy to ensure that your model is ready for production.
  • C. Use the last relevant week of data as a validation set to ensure that your model is performing accurately on current data
  • D. Use the entire dataset and treat the area under the receiver operating characteristics curve (AUC ROC) as the main metric.

Answer: A
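
For context on answer A: in current TFX releases the ModelValidator behavior lives in the Evaluator component, which takes a TensorFlow Model Analysis config that scores the model on chosen data slices and gates production readiness with metric thresholds. A sketch, assuming tensorflow_model_analysis and hypothetical label and feature names:

```python
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="out_of_stock")],        # hypothetical label
    slicing_specs=[
        tfma.SlicingSpec(),                                        # overall metrics
        tfma.SlicingSpec(feature_keys=["region"]),                 # per-region performance
        tfma.SlicingSpec(feature_keys=["product_line"]),           # hypothetical subset of interest
    ],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(
                class_name="AUC",
                # Gate production readiness on a minimum AUC.
                threshold=tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(lower_bound={"value": 0.8})
                ),
            )
        ])
    ],
)
```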

NEW QUESTION 5
You are an ML engineer at a large grocery retailer with stores in multiple regions. You have been asked to create an inventory prediction model. Your model's features include region, location, historical demand, and seasonal popularity. You want the algorithm to learn from new inventory data on a daily basis. Which algorithms should you use to build the model?

  • A. Classification
  • B. Reinforcement Learning
  • C. Recurrent Neural Networks (RNN)
  • D. Convolutional Neural Networks (CNN)

Answer: B

NEW QUESTION 6
You recently designed and built a custom neural network that uses critical dependencies specific to your organization's framework. You need to train the model using a managed training service on Google Cloud. However, the ML framework and related dependencies are not supported by AI Platform Training. Also, both your model and your data are too large to fit in memory on a single machine. Your ML framework of choice uses the scheduler, workers, and servers distribution structure. What should you do?

  • A. Use a built-in model available on AI Platform Training
  • B. Build your custom container to run jobs on AI Platform Training
  • C. Build your custom containers to run distributed training jobs on AI Platform Training
  • D. Reconfigure your code to an ML framework with dependencies that are supported by AI Platform Training

Answer: C
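
For context on answer C: each replica of an AI Platform Training custom-container job learns its role from the cluster description the service injects as an environment variable (TF_CONFIG, or CLUSTER_SPEC for custom containers). A framework-agnostic sketch of a container entry point; the start_* functions are hypothetical stand-ins for the organization's own framework:

```python
import json
import os

def start_scheduler(cluster):            # hypothetical hook into the custom framework
    print("starting scheduler for cluster:", cluster)

def start_server(cluster, index):        # hypothetical parameter-server hook
    print(f"starting server {index}")

def start_worker(cluster, index):        # hypothetical worker hook
    print(f"starting worker {index}")

def main():
    # The managed service describes the provisioned cluster to every container.
    config = json.loads(os.environ.get("TF_CONFIG", "{}"))
    cluster = config.get("cluster", {})
    task = config.get("task", {"type": "chief", "index": 0})

    # Map the job's roles onto the framework's scheduler/worker/server processes.
    if task["type"] in ("chief", "master"):
        start_scheduler(cluster)
    elif task["type"] == "ps":
        start_server(cluster, task["index"])
    else:
        start_worker(cluster, task["index"])

if __name__ == "__main__":
    main()
```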

NEW QUESTION 7
Your data science team needs to rapidly experiment with various features, model architectures, and hyperparameters. They need to track the accuracy metrics for various experiments and use an API to query the metrics over time. What should they use to track and report their experiments while minimizing manual effort?

  • A. Use Kubeflow Pipelines to execute the experiments. Export the metrics file, and query the results using the Kubeflow Pipelines API.
  • B. Use AI Platform Training to execute the experiments. Write the accuracy metrics to BigQuery, and query the results using the BigQuery API.
  • C. Use AI Platform Training to execute the experiments. Write the accuracy metrics to Cloud Monitoring, and query the results using the Monitoring API.
  • D. Use AI Platform Notebooks to execute the experiments. Collect the results in a shared Google Sheets file, and query the results using the Google Sheets API.

Answer: A
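
For context on answer A: a Kubeflow Pipelines component can log metrics as an output artifact, and those values are later retrievable through the Kubeflow Pipelines API. A sketch assuming the KFP v2 SDK, with a placeholder training step:

```python
from kfp import dsl
from kfp.dsl import Metrics, Output

@dsl.component(base_image="python:3.10")
def train_model(learning_rate: float, metrics: Output[Metrics]):
    """Placeholder training step that logs an accuracy metric."""
    accuracy = 0.90 + 0.01 * learning_rate          # stand-in for real training
    metrics.log_metric("accuracy", accuracy)        # surfaced in the KFP UI and API

@dsl.pipeline(name="experiment-tracking-sketch")
def experiment_pipeline(learning_rate: float = 0.1):
    train_model(learning_rate=learning_rate)
```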

NEW QUESTION 8
Your organization wants to make its internal shuttle service route more efficient. The shuttles currently stop at all pick-up points across the city every 30 minutes between 7 am and 10 am. The development team has already built an application on Google Kubernetes Engine that requires users to confirm their presence and shuttle station one day in advance. What approach should you take?

  • A. 1. Build a tree-based regression model that predicts how many passengers will be picked up at each shuttle station. 2. Dispatch an appropriately sized shuttle and provide the map with the required stops based on the prediction.
  • B. 1. Build a tree-based classification model that predicts whether the shuttle should pick up passengers at each shuttle station. 2. Dispatch an available shuttle and provide the map with the required stops based on the prediction.
  • C. 1. Define the optimal route as the shortest route that passes by all shuttle stations with confirmed attendance at the given time under capacity constraints. 2. Dispatch an appropriately sized shuttle and indicate the required stops on the map.
  • D. 1. Build a reinforcement learning model with tree-based classification models that predict the presence of passengers at shuttle stops as agents, and a reward function around a distance-based metric. 2. Dispatch an appropriately sized shuttle and provide the map with the required stops based on the simulated outcome.

Answer: C
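
For context on answer C: once attendance is confirmed a day in advance, choosing the stops is a deterministic routing/optimization problem rather than a prediction problem. A toy sketch with made-up stations and distances (a production system would use a routing solver plus capacity constraints):

```python
# Brute-force shortest round trip over stations with confirmed riders (toy data).
from itertools import permutations

DEPOT = "depot"
distances = {                                   # symmetric pairwise distances (made up)
    ("depot", "A"): 4, ("depot", "B"): 7, ("depot", "C"): 3,
    ("A", "B"): 2, ("A", "C"): 5, ("B", "C"): 6,
}

def dist(a, b):
    return 0 if a == b else distances.get((a, b), distances.get((b, a)))

def best_route(confirmed_stops):
    """Shortest depot -> confirmed stops -> depot route."""
    best = None
    for order in permutations(confirmed_stops):
        route = (DEPOT, *order, DEPOT)
        length = sum(dist(a, b) for a, b in zip(route, route[1:]))
        if best is None or length < best[0]:
            best = (length, route)
    return best

print(best_route(["A", "C"]))   # -> (12, ('depot', 'A', 'C', 'depot'))
```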

NEW QUESTION 9
Your team needs to build a model that predicts whether images contain a driver's license, passport, or credit card. The data engineering team already built the pipeline and generated a dataset composed of 10,000 images with driver's licenses, 1,000 images with passports, and 1,000 images with credit cards. You now have to train a model with the following label map: ['drivers_license', 'passport', 'credit_card']. Which loss function should you use?

  • A. Categorical hinge
  • B. Binary cross-entropy
  • C. Categorical cross-entropy
  • D. Sparse categorical cross-entropy

Answer: D
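
For context on answer D: a label map like this implies each image carries a single integer class index (0, 1, or 2), which is what sparse categorical cross-entropy expects; categorical cross-entropy would need one-hot labels, and binary cross-entropy targets two-class or multi-label setups. A minimal tf.keras sketch with an illustrative architecture:

```python
import tensorflow as tf

num_classes = 3  # drivers_license, passport, credit_card
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(128, 128, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

# Integer labels like [0, 2, 1, ...] -> sparse categorical cross-entropy.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```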

NEW QUESTION 10
You are an ML engineer at a regulated insurance company. You are asked to develop an insurance approval model that accepts or rejects insurance applications from potential customers. What factors should you consider before building the model?

  • A. Redaction, reproducibility, and explainability
  • B. Traceability, reproducibility, and explainability
  • C. Federated learning, reproducibility, and explainability
  • D. Differential privacy, federated learning, and explainability

Answer: B

NEW QUESTION 11
You have trained a text classification model in TensorFlow using AI Platform. You want to use the trained model for batch predictions on text data stored in BigQuery while minimizing computational overhead. What should you do?

  • A. Export the model to BigQuery ML.
  • B. Deploy and version the model on AI Platform.
  • C. Use Dataflow with the SavedModel to read the data from BigQuery.
  • D. Submit a batch prediction job on AI Platform that points to the model location in Cloud Storage.

Answer: A
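
For context on answer A: a TensorFlow SavedModel can be imported into BigQuery ML and scored with ML.PREDICT, so the batch prediction runs where the text already lives. A sketch using the google-cloud-bigquery client; the dataset, table, column, and bucket names are placeholders, and the selected column must match the SavedModel's input name:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Import the exported SavedModel into BigQuery ML (placeholder paths/names).
client.query("""
    CREATE OR REPLACE MODEL `your_dataset.email_classifier`
    OPTIONS (MODEL_TYPE = 'TENSORFLOW',
             MODEL_PATH = 'gs://your-bucket/saved_model/*')
""").result()

# Run batch prediction directly over the BigQuery table of email text.
rows = client.query("""
    SELECT *
    FROM ML.PREDICT(MODEL `your_dataset.email_classifier`,
                    (SELECT body AS input FROM `your_dataset.support_emails`))
""").result()

for row in rows:
    print(dict(row.items()))
```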

NEW QUESTION 12
You are building a model to predict daily temperatures. You split the data randomly and then transformed the training and test datasets. Temperature data for model training is uploaded hourly. During testing, your model performed with 97
