Amazon-Web-Services BDS-C00 Free Practice Questions
It is faster and easier to pass the Amazon-Web-Services BDS-C00 exam by using high-quality Amazon-Web-Services AWS Certified Big Data - Specialty questions and answers. Get immediate access to the up-to-date BDS-C00 exam, find the same core-area BDS-C00 questions with professionally verified answers, and pass your exam with a high score now.
Free demo questions for Amazon-Web-Services BDS-C00 Exam Dumps Below:
NEW QUESTION 1
You have started a new job and are reviewing your company's infrastructure on AWS. You notice one web application where they have an Elastic Load Balancer (ELB) in front of web instances in an Auto Scaling group. When you check the metrics for the ELB in CloudWatch, you see four healthy instances in Availability Zone (AZ) A and zero in AZ B. There are zero unhealthy instances.
What do you need to fix to balance the instances across AZs?
- A. Set the ELB to only be attached to another AZ
- B. Make sure Auto Scaling is configured to launch in both AZs
- C. Make sure your AMI is available in both AZs
- D. Make sure the maximum size of the Auto Scaling Group is greater than 4
Answer: B
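For reference, balancing here comes down to adding AZ B to the Auto Scaling group's configuration. A minimal boto3 sketch, with an assumed group name and illustrative AZ identifiers:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Configure the Auto Scaling group to launch instances in both AZs;
# the group then rebalances capacity across them.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",                  # hypothetical group name
    AvailabilityZones=["us-east-1a", "us-east-1b"],  # AZ A and AZ B
)
```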
NEW QUESTION 2
When will you incur costs with an Elastic IP address (EIP)?
- A. When an EIP is allocated
- B. When it is allocated and associated with a running instance
- C. When it is allocated and associated with a stopped instance
- D. Costs are incurred regardless of whether the EIP is associated with a running instance
Answer: C
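The lifecycle behind this question is two API calls: allocate, then associate. A hedged boto3 sketch (the instance ID is illustrative); an allocated address is free only while it is associated with a running instance:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP in the VPC scope.
allocation = ec2.allocate_address(Domain="vpc")

# Associate it with an instance; hourly EIP charges apply while the
# address is unassociated or the associated instance is stopped.
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
)
```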
NEW QUESTION 3
A user has provisioned 2,000 IOPS for an EBS volume. The application hosted on that EBS volume is experiencing fewer IOPS than provisioned. Which of the below mentioned options does not affect the IOPS of the volume?
- A. The application does not have enough IO for the volume
- B. The instance is EBS optimized
- C. The EC2 instance has 10 Gigabit Network connectivity
- D. The volume size is too large
Answer: D
NEW QUESTION 4
A company needs to monitor the read and write IOPS metrics for its AWS MySQL RDS instances and send real-time alerts to its operations team. Which AWS services can accomplish this? Choose 2 answers
- A. Amazon Simple Email Service
- B. Amazon CloudWatch
- C. Amazon Simple Queue Service
- D. Amazon Route 53
- E. Amazon Simple Notification Service
Answer: BE
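The B+E combination wires a CloudWatch alarm to an SNS topic. A minimal boto3 sketch of one such alarm; the DB identifier, topic ARN, and threshold are assumptions:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm on the RDS WriteIOPS metric and notify an (assumed) SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="rds-write-iops-high",
    Namespace="AWS/RDS",
    MetricName="WriteIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mysql-prod"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=2000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```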
NEW QUESTION 5
A large oil and gas company needs to provide near real-time alerts when peak thresholds are exceeded in its pipeline system. The company has developed a system to capture pipeline metrics such as flow rate, pressure, and temperature using millions of sensors. The sensors deliver their data to AWS IoT.
What is a cost-effective way to provide near real-time alerts on the pipeline metrics?
- A. Create an AWS IoT rule to generate an Amazon SNS notification
- B. Store the data points in an Amazon DynamoDB table and poll peak metrics data from an Amazon EC2 application
- C. Create an Amazon Machine Learning model and invoke with AWS Lambda
- D. Use Amazon Kinesis Streams and a KCL-based application deployed on AWS Elastic Beanstalk
Answer: A
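Option A maps directly to an AWS IoT topic rule with an SNS action. A hedged boto3 sketch; the topic filter, threshold, ARNs, and rule name are all illustrative:

```python
import boto3

iot = boto3.client("iot", region_name="us-east-1")

# Fire an SNS notification whenever a sensor reading exceeds a threshold.
iot.create_topic_rule(
    ruleName="pipeline_pressure_alert",
    topicRulePayload={
        "sql": "SELECT * FROM 'pipeline/metrics' WHERE pressure > 300",
        "ruleDisabled": False,
        "actions": [{
            "sns": {
                "targetArn": "arn:aws:sns:us-east-1:123456789012:pipeline-alerts",
                "roleArn": "arn:aws:iam::123456789012:role/iot-sns-role",
            }
        }],
    },
)
```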
NEW QUESTION 6
Your DevOps team is responsible for a multi-tier, Windows-based web application consisting of web servers, Amazon RDS database instances, and a load balancer behind Amazon Route 53. Your manager has asked you to build a cost-effective rolling deployment solution for this web application.
What method should you use?
- A. Re-deploy your application on an AWS OpsWorks stack.
- B. Use the AWS OpsWorks clone stack feature to allow updates between duplicate stacks
- C. Re-deploy your application on Elastic Beanstalk and take advantage of Elastic Beanstalk rolling updates
- D. Re-deploy your application using an AWS CloudFormation template, launch a new AWS CloudFormation stack during each deployment, and then tear down the old stack
- E. Re-deploy your application using an AWS CloudFormation template.
- F. Use AWS CloudFormation rolling deployment policies, create a new policy for your AWS CloudFormation stack, and initiate an update stack operation to deploy new code
Answer: D
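The approach in answer D can be scripted against the CloudFormation API: create the new stack, wait for it to stabilize, then delete the old one. A minimal sketch (stack names, template URL, and parameter are assumptions):

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Deploy the new version as a fresh stack, then tear down the previous one.
cloudformation.create_stack(
    StackName="webapp-v2",
    TemplateURL="https://s3.amazonaws.com/my-templates/webapp.template",
    Parameters=[{"ParameterKey": "AppVersion", "ParameterValue": "v2"}],
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="webapp-v2")
cloudformation.delete_stack(StackName="webapp-v1")
```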
NEW QUESTION 7
A user is planning to set up infrastructure on AWS for the Christmas sales. The user plans to use Auto Scaling based on a schedule for proactive scaling. What advice would you give the user?
- A. It is good to schedule now, because if the user forgets later on, it will not scale up
- B. The scaling should be set up only one week before Christmas
- C. Wait till the end of November before scheduling the activity
- D. It is not advisable to use schedule-based scaling
Answer: C
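Whenever the schedule is created, the mechanics are a single scheduled action on the Auto Scaling group. A hedged boto3 sketch; the group name, date, and sizes are illustrative:

```python
from datetime import datetime

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Schedule the capacity change close to the sales event rather than months ahead.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",               # hypothetical group name
    ScheduledActionName="christmas-sale-scale-out",
    StartTime=datetime(2024, 12, 20, 0, 0),       # illustrative date
    MinSize=4,
    MaxSize=40,
    DesiredCapacity=10,
)
```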
NEW QUESTION 8
Which data store should the organization choose?
- A. Amazon Relational Database Service (RDS)
- B. Amazon Redshift
- C. Amazon DynamoDB
- D. Amazon Elasticsearch
Answer: C
NEW QUESTION 9
An administrator needs to manage a large catalog of items from various external sellers. The administrator needs to determine whether items should be identified as minimally dangerous, dangerous, or highly dangerous based on their textual descriptions. The administrator already has some items with the danger attribute, but receives hundreds of new item descriptions every day without such a classification.
The administrator has a system that captures dangerous goods reports from the customer support team or from user feedback. What is a cost-effective architecture to solve this issue?
- A. Build a set of regular expression rules that are based on the existing examples.
- B. And run them on the DynamoDB stream as each new item description is added to the system.
- C. Build a Kinesis Streams process that captures and marks the relevant items in the dangerous goods reports using a Lambda function once more than two reports have been filed.
- D. Build a machine learning model to properly classify dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.
- E. Build a machine learning model with binary classification for dangerous goods and run it on the DynamoDB streams as every new item description is added to the system.
Answer: D
NEW QUESTION 10
Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers
- A. Each subnet maps to a single Availability Zone
- B. A CIDR block mask of /25 is the smallest range supported
- C. Instances in a private subnet can communicate with the internet only if they have an Elastic IP.
- D. By default, all subnets can route between each other, whether they are private or public
- E. Each subnet spans at least 2 Availability Zones to provide a high-availability environment
Answer: AD
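Characteristic A is visible in the API itself: a subnet is created in exactly one Availability Zone. A minimal boto3 sketch with illustrative IDs and CIDRs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Each subnet maps to a single AZ, so one subnet per AZ is the usual pattern.
subnet_a = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",   # hypothetical VPC ID
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)
subnet_b = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.2.0/24",
    AvailabilityZone="us-east-1b",
)
```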
NEW QUESTION 11
An organization is designing an application architecture. The application will have over 100 TB of data
and will support transactions that arrive at rates from hundreds per second to tens of thousands per second, depending on the day of the week and time of day. All transaction data must be durably and reliably stored. Certain read operations must be performed with strong consistency.
Which solution meets these requirements?
- A. Use Amazon DynamoDB as the data store and use strongly consistent reads when necessary
- B. Use an Amazon Relational Database Service (RDS) instance sized to meet the maximum transaction rate and with the High Availability option enabled.
- C. Deploy a NoSQL data store on top of an Amazon Elastic MapReduce (EMR) cluster, and select the HDFS High Durability option.
- D. Use Amazon Redshift with synchronous replication to Amazon Simple Storage Service (S3) and row-level locking for strong consistency.
Answer: A
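DynamoDB reads are eventually consistent unless you opt in per request, which is the crux of answer A. A minimal boto3 sketch; the table name and key are assumptions:

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Reads are eventually consistent by default; opt in per request.
response = dynamodb.get_item(
    TableName="transactions",                 # hypothetical table
    Key={"TransactionId": {"S": "txn-0001"}},
    ConsistentRead=True,                      # strongly consistent read
)
item = response.get("Item")
```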
NEW QUESTION 12
An Administrator needs to design the event log storage architecture for events from mobile devices.
The event data will be processed by an Amazon EMR cluster daily for aggregated reporting and analytics before being archived.
How should the administrator recommend storing the log data?
- A. Create an Amazon S3 bucket and write log data into folders by device. Execute the EMR job on the device folders
- B. Create an Amazon DynamoDB table partitioned on the device and sorted on date, and write log data to the table.
- C. Execute the EMR job on the Amazon DynamoDB table
- D. Create an Amazon S3 bucket and write data into folders by date.
- E. Execute the EMR job on the daily folder
- F. Create an Amazon DynamoDB table partitioned on EventID, and write log data to the table.
- G. Execute the EMR job on the table
Answer: C
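Answer C amounts to date-partitioned S3 prefixes that the daily EMR job can target. A hedged sketch of the write side; the bucket, key layout, and payload are illustrative:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Partition log objects by day so the daily EMR job reads a single prefix.
day_prefix = datetime.now(timezone.utc).strftime("logs/%Y/%m/%d")
s3.put_object(
    Bucket="mobile-event-logs",               # hypothetical bucket
    Key=f"{day_prefix}/events-0001.json",
    Body=b'{"device":"abc","event":"open"}',
)
```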
NEW QUESTION 13
An enterprise customer is migrating to Redshift and is considering using dense storage nodes in its
Redshift cluster. The customer wants to migrate 50 TB of data. The customer’s query patterns involve performing many joins with thousands of rows. The customer needs to know how many nodes are needed in its target Redshift cluster. The customer has a limited budget and needs to avoid performing tests unless absolutely needed. Which approach should this customer use?
- A. Start with many small nodes
- B. Start with fewer large nodes
- C. Have two separate clusters with a mix of small and large nodes
- D. Insist on performing multiple tests to determine the optimal configuration
Answer: B
NEW QUESTION 14
A customer needs to determine the optimal distribution strategy for the ORDERS fact table in its
Redshift schema. The ORDERS table has foreign key relationships with multiple dimension tables in this schema.
How should the company determine the most appropriate distribution key for the ORDERS table?
- A. Identify the largest and most frequently joined dimension table and ensure that it and the ORDERS table both have EVEN distribution
- B. Identify the target dimension table and designate the key of this dimension table as the distribution key of the ORDERS table
- C. Identify the smallest dimension table and designate the key of this dimension table as the distribution key of the ORDERS table
- D. Identify the largest and most frequently joined dimension table and designate the key of this dimension table as the distribution key for the ORDERS table
Answer: D
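Answer D translates into DDL: declare the joined dimension's key as the DISTKEY of ORDERS so matching rows co-locate on the same slice. A sketch, assuming CUSTOMER is the largest, most frequently joined dimension, run here through the Redshift Data API (cluster, database, and user names are illustrative):

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Distribute ORDERS on the join key of its largest, most frequently joined
# dimension so joined rows land on the same slice.
ddl = """
CREATE TABLE orders (
    order_id     BIGINT,
    customer_id  BIGINT DISTKEY,
    order_date   DATE SORTKEY,
    amount       DECIMAL(12,2)
);
"""
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="admin",
    Sql=ddl,
)
```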
NEW QUESTION 15
You have been asked to handle a large data migration from multiple Amazon RDS MySQL instances to
a DynamoDB table. You have been given a short amount of time to complete the data migration. What will allow you to complete this complex data processing workflow?
- A. Create an Amazon Kinesis data stream, pipe in all of the Amazon RDS data, and direct the data toward the DynamoDB table
- B. Write a script in your language of choice, install the script on an Amazon EC2 instance, and then use Auto Scaling groups to ensure that the latency of the migration pipeline never exceeds four seconds in any 15-minute period.
- C. Write a bash script to run on your Amazon RDS instance that will export data into DynamoDB
- D. Create a data pipeline to export Amazon RDS data and import the data into DynamoDB
Answer: D
NEW QUESTION 16
A telecommunications company needs to predict customer churn (i.e., customers who decide to switch to a competitor). The company has historic records for each customer, including monthly consumption patterns, calls to customer service, and whether the customer ultimately quit the service. All of this data is stored in Amazon S3. The company needs to know which customers are likely to churn soon so that it can win back their loyalty.
What is the optimal approach to meet these requirements?
- A. Use the Amazon Machine Learning service to build a binary classification model based on the dataset stored in Amazon S3. The model will be used regularly to predict the churn attribute for existing customers
- B. Use Amazon QuickSight to connect to the data stored in Amazon S3 to obtain the necessary business insight.
- C. Plot the churn trend graph to extrapolate churn likelihood for existing customers
- D. Use EMR to run Hive queries to build a profile of a churning customer.
- E. Apply the profile to existing customers to determine the likelihood of churn
- F. Use a Redshift cluster to COPY the data from Amazon S3. Create a User-Defined Function in Redshift that computes the likelihood of churn
Answer: A
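Answer A's real-time scoring step looks roughly like the following with the Amazon Machine Learning API; the model ID, endpoint URL, and feature names are assumptions:

```python
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")

# Score an existing customer against a trained binary classification model.
prediction = ml.predict(
    MLModelId="ml-churn-model",                         # hypothetical model ID
    Record={"monthlyMinutes": "642", "supportCalls": "3"},
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(prediction["Prediction"]["predictedLabel"])       # e.g. "1" = likely to churn
```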
NEW QUESTION 17
A data engineer in a manufacturing company is designing a data processing platform that receives a
large volume of unstructured data. The data engineer must populate a well-structured star schema in Amazon Redshift.
What is the most efficient architecture strategy for this purpose?
- A. Transform the unstructured data using Amazon EMR and generate CSV data.
- B. COPY the data into the analysis schema within Redshift.
- C. Load the unstructured data into Redshift, and use string parsing functions to extract structured data for inserting into the analysis schema.
- D. When the data is saved to Amazon S3, use S3 Event Notifications and AWS Lambda to transform the file content.
- E. Insert the data into the analysis schema on Redshift.
- F. Normalize the data using an AWS Marketplace ETL tool, persist the result to Amazon S3, and use AWS Lambda to INSERT the data into Redshift.
Answer: B
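The COPY step from options A/B can be issued as plain SQL, shown here via the Redshift Data API; the S3 prefix, IAM role, and table name are assumptions:

```python
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

# Bulk-load the EMR-generated CSV output from S3 into the analysis schema.
copy_sql = """
COPY analysis.orders
FROM 's3://etl-output/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
CSV;
"""
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",  # hypothetical cluster
    Database="dev",
    DbUser="admin",
    Sql=copy_sql,
)
```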
NEW QUESTION 18
A company is centralizing a large number of unencrypted small files from multiple Amazon S3 buckets. The company needs to verify that the files contain the same data after centralization.
Which method meets the requirements?
- A. Compare the S3 ETags from the source and destination objects
- B. Call the S3 CompareObjects API for the source and destination objects
- C. Place a HEAD request against the source and destination objects, comparing the SigV4 headers
- D. Compare the size of the source and destination objects
Answer: A
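ETag comparison (option A) needs only HEAD requests. A minimal sketch; note that the ETag equals the MD5 digest only for single-part, unencrypted uploads, which matches the small, unencrypted files in the question:

```python
import boto3

s3 = boto3.client("s3")

def etags_match(src_bucket, src_key, dst_bucket, dst_key):
    """HEAD both objects and compare ETags; for single-part uploads the
    ETag is the MD5 digest of the object's content."""
    src = s3.head_object(Bucket=src_bucket, Key=src_key)
    dst = s3.head_object(Bucket=dst_bucket, Key=dst_key)
    return src["ETag"] == dst["ETag"]

# Example (bucket and key names are illustrative):
# etags_match("source-bucket", "data/f1.csv", "central-bucket", "data/f1.csv")
```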
NEW QUESTION 19
You have an ASP.NET web application running in AWS Elastic Beanstalk. Your next version of the
application requires a third-party Windows installer package to be installed on the instance on first boot and before the application launches.
Which options are possible? Choose 2 answers
- A. In the application’s Global.asax file, run msiexec.exe to install the package using Process.Start() in the Application_Start event handler
- B. In the source bundle’s .ebextensions folder, create a file with a .config extension.
- C. In the file, under the “packages” section and “msi” package manager, include the package’s URL
- D. Launch a new Amazon EC2 instance from the AMI used by the environment.
- E. Log into the instance, install the package, and run sysprep.
- F. Create a new AMI.
- G. Configure the environment to use the new AMI
- H. In the environment’s configuration, edit the instance configuration and add the package’s URL to the “Packages” section
- I. In the source bundle’s .ebextensions folder, create a “Packages” folder.
- J. Place the package in the folder
Answer: BC
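The B+C combination is an .ebextensions config file using the Windows "msi" package manager. A sketch that writes such a file into a source bundle; the file name, package name, and installer URL are illustrative:

```python
import os

# Contents of an .ebextensions .config file: the "packages"/"msi" keys tell
# Elastic Beanstalk (Windows platform) to install the MSI on first boot.
CONFIG = """\
packages:
  msi:
    mysql-connector: https://example.com/installers/mysql-connector.msi
"""

os.makedirs(".ebextensions", exist_ok=True)
with open(os.path.join(".ebextensions", "installer.config"), "w") as f:
    f.write(CONFIG)
```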
NEW QUESTION 20
A customer has a machine learning workflow that consists of multiple quick read-write-read cycles on Amazon S3. The customer needs to run the workflow on EMR but is concerned that the reads in subsequent cycles will miss new data critical to the machine learning from the prior cycles.
How should the customer accomplish this?
- A. Turn on EMRFS consistent view when configuring the EMR cluster
- B. Use AWS Data Pipeline to orchestrate the data processing cycles
- C. Set Hadoop.data.consistency = true in the core-site.xml file
- D. Set Hadoop.s3.consistency = true in the core-site.xml file
Answer: A
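EMRFS consistent view (option A) is enabled at cluster launch via the emrfs-site classification. A hedged boto3 sketch; the cluster name, release label, and instance settings are assumptions:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch the cluster with EMRFS consistent view (fs.s3.consistent) enabled,
# so reads after writes in the S3 cycles see the latest objects.
emr.run_job_flow(
    Name="ml-workflow",
    ReleaseLabel="emr-5.30.0",
    Configurations=[{
        "Classification": "emrfs-site",
        "Properties": {"fs.s3.consistent": "true"},
    }],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```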
NEW QUESTION 21
A company operates an international business served from a single AWS Region. The company wants to expand into a new country. The regulator for that country requires the Data Architect to maintain a log of financial transactions in the country within 24 hours of the production transaction. The production application is latency insensitive. The new country contains another AWS Region.
What is the most cost-effective way to meet this requirement?
- A. Use CloudFormation to replicate the production application to the new region
- B. Use Amazon CloudFront to serve application content locally in the country; Amazon CloudFront logs will satisfy the requirement
- C. Continue to serve customers from the existing region while using Amazon Kinesis to stream transaction data to the regulator
- D. Use Amazon S3 cross-region replication to copy and persist production transaction logs to a bucket in the new country’s region
Answer: D
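Answer D is configured with a replication rule on the source bucket (versioning must already be enabled on both buckets). A minimal boto3 sketch; bucket names, prefix, and role ARN are illustrative:

```python
import boto3

s3 = boto3.client("s3")

# Replicate production transaction logs to a bucket in the new country's Region.
s3.put_bucket_replication(
    Bucket="prod-transaction-logs",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
        "Rules": [{
            "ID": "replicate-transaction-logs",
            "Prefix": "transactions/",
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::regulator-logs-new-region"},
        }],
    },
)
```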
NEW QUESTION 22
A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails, and keeps a durable copy of all log data for at least 30 days.
What is the simplest architecture that will allow the architect to analyze the logs?
- A. Write them directly to a Kinesis Firehose.
- B. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.
- C. Write them to a file on Amazon Simple Storage Service (S3). Write an AWS Lambda function that runs in response to the S3 events to load the events into Amazon Elasticsearch Service for analysis.
- D. Write them to the local disk and configure the Amazon CloudWatch Logs agent to load the data into CloudWatch Logs and subsequently into Amazon Elasticsearch Service.
- E. Write them to CloudWatch Logs and use an AWS Lambda function to load them into HDFS on an Amazon Elastic MapReduce (EMR) cluster for analysis.
Answer: A
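The write path in answer A is a single API call from each application instance. A hedged boto3 sketch; the delivery stream name and record payload are assumptions:

```python
import json

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Each instance writes events straight to the delivery stream; Firehose
# buffers and loads them into the Redshift cluster for analysis.
record = json.dumps({"level": "ERROR", "msg": "timeout"}) + "\n"
firehose.put_record(
    DeliveryStreamName="app-log-events",       # hypothetical stream name
    Record={"Data": record.encode("utf-8")},
)
```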
NEW QUESTION 23
A travel website needs to present a graphical quantitative summary of its daily bookings to website visitors for marketing purposes. The website has millions of visitors per day, but wants to control costs by implementing the least-expensive solution for this visualization.
What is the most cost-effective solution?
- A. Generate a static graph with a transient EMR cluster daily.
- B. And store it in Amazon S3
- C. Generate a graph using MicroStrategy backed by a transient EMR cluster
- D. Implement a Jupyter front-end provided by a continuously running EMR cluster leveraging spot instances for task nodes
- E. Implement a Zeppelin application that runs on a long-running EMR cluster
Answer: A
NEW QUESTION 24
A user has created a launch configuration for Auto Scaling with CloudWatch detailed monitoring disabled. The user now wants to enable detailed monitoring. How can the user achieve this?
- A. Update the Launch config with CLI to set InstanceMonitoringDisabled = false
- B. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
- C. Update the Launch config with CLI to set InstanceMonitoring.Enabled = true
- D. Create a new launch configuration with detailed monitoring enabled and update the Auto Scaling group
Answer: D
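Answer D reflects that launch configurations are immutable: create a replacement with monitoring enabled, then repoint the group. A minimal boto3 sketch with illustrative names:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configurations cannot be modified, so create a new one with detailed
# monitoring enabled, then point the Auto Scaling group at it.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-detailed",
    ImageId="ami-0123456789abcdef0",        # hypothetical AMI
    InstanceType="m5.large",
    InstanceMonitoring={"Enabled": True},   # detailed (1-minute) monitoring
)
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-detailed",
)
```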
P.S. Easily pass the BDS-C00 exam with the 264 Q&As in the DumpSolutions.com dumps and PDF version. Welcome to download the newest DumpSolutions.com BDS-C00 dumps: https://www.dumpsolutions.com/BDS-C00-dumps/ (264 new questions)