AWS-Certified-Big-Data-Specialty Premium Bundle

Amazon AWS Certified Big Data - Specialty Certification Exam

Last update: November 21, 2024

Amazon AWS-Certified-Big-Data-Specialty Free Practice Questions

It is faster and easier to pass the Amazon AWS-Certified-Big-Data-Specialty exam by using Guaranteed Amazon AWS Certified Big Data - Specialty questions and answers. Get immediate access to the improved AWS-Certified-Big-Data-Specialty exam, find the same core-area AWS-Certified-Big-Data-Specialty questions with professionally verified answers, and PASS your exam with a high score.

Free AWS-Certified-Big-Data-Specialty Demo Online For Amazon Certification:

NEW QUESTION 1
A company that provides economic data dashboards needs to develop software to display rich, interactive, data-driven graphics that run in web browsers and leverage the full stack of web standards (HTML, SVG, and CSS).
Which technology is the most appropriate for this requirement?

  • A. D3.js
  • B. Python/Jupyter
  • C. R Studio
  • D. Hue

Answer: A

Explanation:
D3.js (Data-Driven Documents) is a JavaScript library built for rendering interactive, data-driven graphics in web browsers using HTML, SVG, and CSS. R Studio is a desktop IDE for R, not a browser-based graphics library.

NEW QUESTION 2
A user has created a launch configuration for Auto Scaling with CloudWatch detailed monitoring disabled. The user now wants to enable detailed monitoring. How can the user achieve this?

  • A. Update the launch configuration with the CLI to set InstanceMonitoringDisabled = false
  • B. The user should change the Auto Scaling group from the AWS console to enable detailed monitoring
  • C. Update the launch configuration with the CLI to set InstanceMonitoring.Enabled = true
  • D. Create a new launch configuration with detailed monitoring enabled and update the Auto Scaling group

Answer: D
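
Launch configurations are immutable once created, which is why the fix is a new launch configuration rather than an in-place update. A minimal boto3 sketch of that approach (the AMI ID, instance type, and resource names are hypothetical):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configurations cannot be modified, so create a new one
# with detailed (1-minute) CloudWatch monitoring enabled.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-detailed",   # hypothetical name
    ImageId="ami-0abc1234567890def",             # hypothetical AMI
    InstanceType="m5.large",
    InstanceMonitoring={"Enabled": True},        # detailed monitoring
)

# Point the Auto Scaling group at the new launch configuration;
# instances launched by the group from now on will use it.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",              # hypothetical group
    LaunchConfigurationName="web-lc-detailed",
)
```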

NEW QUESTION 3
A company is building a new application in AWS. The architect needs to design a system to collect application log events. The design should be a repeatable pattern that minimizes data loss if an application instance fails and keeps a durable copy of all log data for at least 30 days.
What is the simplest architecture that will allow the architect to analyze the logs?

  • A. Write them directly to a Kinesis Firehose. Configure Kinesis Firehose to load the events into an Amazon Redshift cluster for analysis.
  • B. Write them to a file on Amazon Simple Storage Service (S3). Write an AWS Lambda function that runs in response to the S3 events to load the events into Amazon Elasticsearch Service for analysis.
  • C. Write them to the local disk and configure the Amazon CloudWatch Logs agent to load the data into CloudWatch Logs and subsequently into Amazon Elasticsearch Service.
  • D. Write them to CloudWatch Logs and use an AWS Lambda function to load them into HDFS on an Amazon Elastic MapReduce (EMR) cluster for analysis.

Answer: A
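
For reference, producers write log events to Firehose with a single API call; the delivery stream buffers them, stages them in S3 (which covers the 30-day durability requirement), and loads Redshift via COPY. A minimal sketch with a hypothetical stream name:

```python
import json
import boto3

firehose = boto3.client("firehose")

event = {"level": "ERROR", "msg": "timeout calling payment service"}

# Firehose buffers records, keeps a copy in S3, and loads Redshift via COPY.
firehose.put_record(
    DeliveryStreamName="app-log-stream",  # hypothetical delivery stream
    Record={"Data": (json.dumps(event) + "\n").encode("utf-8")},
)
```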

NEW QUESTION 4
A new algorithm has been written in Python to identify spam e-mails. The algorithm analyzes the free text contained within a sample set of 1 million e-mails stored on Amazon S3. The algorithm must be scaled across a production data set of 5 PB, which also resides in Amazon S3 storage.
Which AWS service strategy is best for this use case?

  • A. Copy the data into Amazon ElastiCache to perform text analysis on the in-memory data and export the results of the model into Amazon Machine Learning
  • B. Use Amazon EMR to parallelize the text analysis tasks across the cluster using a streaming program step
  • C. Use Amazon Elasticsearch Service to store the text and then use the Python Elasticsearch client to run analysis against the text index
  • D. Initiate a Python job from AWS Data Pipeline to run directly against the Amazon S3 text files

Answer: C

Explanation:
Reference: https://aws.amazon.com/blogs/database/indexing-metadata-in-amazon-elasticsearch-service-using-aws-lambda-and-python/

NEW QUESTION 5
When you put objects in Amazon S3, what is the indication that an object was successfully stored?

  • A. An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful
  • B. A success code is inserted into the S3 object metadata
  • C. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
  • D. Each S3 account has a special bucket named _s3_log. Success codes are written to this bucket with a timestamp and checksum.

Answer: A
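
A short boto3 sketch of that check (the bucket name is hypothetical): a successful PUT returns HTTP 200, and for a single-part upload without SSE-KMS the returned ETag is the hex MD5 of the payload, so the two together confirm what was stored.

```python
import hashlib
import boto3

s3 = boto3.client("s3")
body = b"hello, durable world"

resp = s3.put_object(Bucket="my-example-bucket", Key="greeting.txt", Body=body)

# 1) The HTTP status code confirms the PUT succeeded.
assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200

# 2) For single-part, non-KMS uploads the ETag is the hex MD5 of the body,
#    so comparing checksums confirms the payload arrived intact.
assert resp["ETag"].strip('"') == hashlib.md5(body).hexdigest()
```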

NEW QUESTION 6
You have been tasked with deploying a solution for your company that will store images, which the marketing department will use for its campaigns. Employees are able to upload images via a web interface, and once uploaded, each image must be resized and watermarked with the company logo. Image resizing and watermarking is not time-sensitive and can be completed days after upload if required.
How should you design this solution in the most highly available and cost-effective way?

  • A. Configure your web application to upload images to the Amazon Elastic Transcoder service. Use the Amazon Elastic Transcoder watermark feature to add the company logo as a watermark on your images and then upload the final image into an Amazon S3 bucket.
  • B. Configure your web application to upload images to Amazon S3, and send the Amazon S3 bucket URI to an Amazon SQS queue. Create an Auto Scaling group and configure it to use Spot Instances, specifying a price you are willing to pay. Configure the instances in this Auto Scaling group to poll the SQS queue for new images and then resize and watermark the image before uploading the final images into Amazon S3.
  • C. Configure your web application to upload images to Amazon S3, and send the S3 object URI to an Amazon SQS queue. Create an Auto Scaling launch configuration that uses Spot Instances, specifying a price you are willing to pay. Configure the instances in this Auto Scaling group to poll the Amazon SQS queue for new images and then resize and watermark the image before uploading the new images into Amazon S3 and deleting the message from the Amazon SQS queue.
  • D. Configure your web application to upload images to the local storage of the web server. Create a cron job to execute a script daily that scans this directory for new files and then uses the Amazon EC2 Service API to launch 10 new Amazon EC2 instances, which will resize and watermark the images daily.

Answer: C
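
The detail that separates C from B is that each worker deletes the message only after the image has been processed, so a Spot instance that dies mid-job simply lets the message reappear for another worker. A sketch of that polling loop (the queue URL and the resize_and_watermark helper are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/image-jobs"  # hypothetical

def resize_and_watermark(s3_uri: str) -> None:
    """Hypothetical helper: fetch the image, resize, watermark, re-upload."""
    ...

while True:
    # Long polling reduces empty receives (and cost).
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        resize_and_watermark(msg["Body"])  # message body carries the S3 object URI
        # Delete only after success: if this Spot instance is reclaimed first,
        # the message becomes visible again and another worker retries it.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```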

NEW QUESTION 7
You have a large number of web servers in an Auto Scaling group behind a load balancer. On an hourly basis, you want to filter and process the logs to collect data on unique visitors, and then put that data in a durable data store in order to run reports. Web servers in the Auto Scaling group are constantly launching and terminating based on your scaling policies, but you do not want to lose any of the log data from these servers during a stop/termination initiated by a user or by Auto Scaling. What two approaches will meet these requirements? Choose 2 answers

  • A. Install an Amazon CloudWatch Logs agent on every web server during the bootstrap process. Create a CloudWatch log group and define metric filters to create custom metrics that track unique visitors from the streaming web server logs. Create a scheduled task on an Amazon EC2 instance that runs every hour to generate a new report based on the CloudWatch custom metrics.
  • B. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to Amazon Glacier. Ensure that the operating system shutdown procedure triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to process the data in Amazon Glacier and run reports every hour.
  • C. On the web servers, create a scheduled task that executes a script to rotate and transmit the logs to an Amazon S3 bucket. Ensure that the operating system shutdown process triggers a logs transmission when the Amazon EC2 instance is stopped/terminated. Use AWS Data Pipeline to move log data from the Amazon S3 bucket to Amazon Redshift in order to process and run reports every hour.
  • D. Install an AWS Data Pipeline Logs agent on every web server during the bootstrap process. Create a log group object in AWS Data Pipeline, and define metric filters to move processed log data directly from the web servers to Amazon Redshift and run reports every hour.

Answer: AC
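
Approach A turns log lines into metrics with a CloudWatch Logs metric filter. A minimal sketch (the log group name, filter pattern, and namespace are hypothetical and depend on your log format):

```python
import boto3

logs = boto3.client("logs")

# Count matching access-log lines as a custom metric; the pattern below
# assumes a space-delimited web log and is purely illustrative.
logs.put_metric_filter(
    logGroupName="web-access-logs",                  # hypothetical log group
    filterName="VisitorHits",
    filterPattern="[ip, identity, user, timestamp, request, status, size]",
    metricTransformations=[{
        "metricName": "VisitorHits",
        "metricNamespace": "WebApp",                 # hypothetical namespace
        "metricValue": "1",                          # +1 per matching line
    }],
)
```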

NEW QUESTION 8
A company operates an international business served from a single AWS Region. The company wants to expand into a new country. The regulator for that country requires the Data Architect to maintain a log of financial transactions in the country within 24 hours of the production transaction. The production application is latency-insensitive. The new country contains another AWS Region.
What is the most cost-effective way to meet this requirement?

  • A. Use CloudFormation to replicate the production application to the new region
  • B. Use Amazon CloudFront to serve application content locally in the country; Amazon CloudFront logs will satisfy the requirement
  • C. Continue to serve customers from the existing region while using Amazon Kinesis to stream transaction data to the regulator
  • D. Use Amazon S3 cross-region replication to copy and persist production transaction logs to a bucket in the new country’s Region

Answer: D
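
Cross-region replication is configured on the source bucket (both buckets must have versioning enabled); S3 then copies new objects asynchronously, comfortably inside a 24-hour window. A sketch with hypothetical bucket names and role ARN:

```python
import boto3

s3 = boto3.client("s3")

# Both source and destination buckets must have versioning enabled.
s3.put_bucket_replication(
    Bucket="txn-logs-us-east-1",  # hypothetical source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-crr-role",  # hypothetical role
        "Rules": [{
            "Prefix": "transactions/",   # replicate only the transaction logs
            "Status": "Enabled",
            "Destination": {"Bucket": "arn:aws:s3:::txn-logs-new-region"},
        }],
    },
)
```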

NEW QUESTION 9
Which of the following are characteristics of Amazon VPC subnets? Choose 2 answers

  • A. Each subnet maps to a single Availability Zone
  • B. A CIDR block mask of /25 is the smallest range supported
  • C. Instances in a private subnet can communicate with the internet only if they have an Elastic IP.
  • D. By default, all subnets can route between each other, whether they are private or public
  • E. Each subnet spans at least 2 Availability Zones to provide a high-availability environment

Answer: AD
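
Both correct characteristics are visible in the API: a subnet is created in exactly one Availability Zone, and the VPC's main route table carries a local route that lets all subnets reach each other by default. A sketch with hypothetical IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# A subnet lives in exactly one Availability Zone (characteristic A).
subnet = ec2.create_subnet(
    VpcId="vpc-0abc1234",          # hypothetical VPC
    CidrBlock="10.0.1.0/24",
    AvailabilityZone="us-east-1a",
)

# The VPC's main route table already contains a "local" route covering the
# whole VPC CIDR, so subnets route to each other by default (characteristic D).
print(subnet["Subnet"]["SubnetId"])
```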

NEW QUESTION 10
A customer has an Amazon S3 bucket. Objects are uploaded simultaneously by a cluster of servers from multiple streams of data. The customer maintains a catalog of objects uploaded in Amazon S3 using an Amazon DynamoDB table. This catalog has the following fields: StreamName, TimeStamp, and ServerName, from which ObjectName can be obtained.
The customer needs to define the catalog to support querying for a given stream or server within a defined time range.
Which DynamoDB table scheme is most efficient to support these queries?

  • A. Define a Primary Key with ServerName as Partition Key and TimeStamp as Sort Key. Do NOT define a Secondary Index or Global Secondary Index.
  • B. Define a Primary Key with StreamName as Partition Key and TimeStamp followed by ServerName as Sort Key. Define a Global Secondary Index with ServerName as Partition Key and TimeStamp followed by StreamName.
  • C. Define a Primary Key with ServerName as Partition Key. Define a Local Secondary Index with StreamName as Partition Key. Define a Global Secondary Index with TimeStamp as Partition Key.
  • D. Define a Primary Key with ServerName as Partition Key. Define a Local Secondary Index with TimeStamp as Partition Key. Define a Global Secondary Index with StreamName as Partition Key and TimeStamp as Sort Key.

Answer: A
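
The query patterns in these options all rely on DynamoDB's partition-key equality plus sort-key range condition. As an illustration, here is how scheme A serves "a given server within a time range" (the table name and values are hypothetical):

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("ObjectCatalog")  # hypothetical table

# Partition-key equality + sort-key range: the query pattern a
# ServerName/TimeStamp primary key supports directly.
resp = table.query(
    KeyConditionExpression=(
        Key("ServerName").eq("server-42")
        & Key("TimeStamp").between("2024-01-01T00:00:00", "2024-01-31T23:59:59")
    )
)
for item in resp["Items"]:
    print(item["StreamName"], item["TimeStamp"])
```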

NEW QUESTION 11
Which of the following requires a custom CloudWatch metric to monitor?

  • A. Memory utilization of an EC2 instance
  • B. CPU utilization of an EC2 instance
  • C. Disk usage activity of an EC2 instance
  • D. Data transfer of an EC2 instance

Answer: A
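
Memory utilization is not visible to the hypervisor, so an agent or script on the instance must publish it. A minimal sketch of pushing such a custom metric (the namespace and instance ID are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# CloudWatch has no built-in memory metric for EC2; an on-instance
# agent or cron script must publish one like this.
cloudwatch.put_metric_data(
    Namespace="Custom/EC2",  # hypothetical namespace
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": "i-0abc1234"}],
        "Value": 63.5,
        "Unit": "Percent",
    }],
)
```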

NEW QUESTION 12
Customers have recently been complaining that your web application has randomly stopped responding. During a deep dive into your logs, the team discovered a major bug in your Java web application: a memory leak that eventually causes the application to crash.
Your web application runs on Amazon EC2 and was built with AWS CloudFormation.
Which techniques should you use to help detect these problems faster and help eliminate the server’s unresponsiveness? Choose 2 answers

  • A. Update your AWS CloudFormation configuration and enable a CustomResource that uses cfn-signal to detect memory leaks.
  • B. Update your CloudWatch metric granularity configuration for all Amazon EC2 memory metrics to support five-second granularity. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory becomes too large.
  • C. Update your AWS CloudFormation configuration to take advantage of Auto Scaling groups. Configure an Auto Scaling group policy to trigger off your custom CloudWatch metrics.
  • D. Create a custom CloudWatch metric that you push your JVM memory usage to. Create a CloudWatch alarm that triggers an Amazon SNS notification to page your team when the application memory usage becomes too large.
  • E. Update your AWS CloudFormation configuration to take advantage of the CloudWatch Metrics Agent. Configure the CloudWatch Metrics Agent to monitor memory usage and trigger an Amazon SNS alarm.

Answer: CD
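
Option D pairs the custom JVM metric with an alarm that pages the team through SNS. A sketch of the alarm half (metric names, thresholds, and the SNS topic ARN are hypothetical):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Page the team when average JVM heap usage stays above 85% for 3 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="jvm-heap-high",
    Namespace="Custom/JVM",            # hypothetical custom namespace
    MetricName="HeapUsedPercent",      # published by the app, as in option D
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:pager-topic"],  # hypothetical
)
```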

NEW QUESTION 13
You have an ASP.NET web application running in AWS Elastic Beanstalk. The next version of the application requires a third-party Windows installer package to be installed on the instance on first boot, before the application launches.
Which options are possible? Choose 2 answers

  • A. In the application’s Global.asax file, run msiexec.exe to install the package using Process.Start() in the Application_Start event handler.
  • B. In the source bundle’s .ebextensions folder, create a file with a .config extension. In the file, under the “packages” section and “msi” package manager, include the package’s URL.
  • C. Launch a new Amazon EC2 instance from the AMI used by the environment. Log into the instance, install the package, and run sysprep. Create a new AMI. Configure the environment to use the new AMI.
  • D. In the environment’s configuration, edit the instance configuration and add the package’s URL to the “Packages” section.
  • E. In the source bundle’s .ebextensions folder, create a “Packages” folder. Place the package in the folder.

Answer: BC
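
Option B refers to Elastic Beanstalk's documented packages key. A sketch of such a file (the package name and URL are hypothetical placeholders):

```yaml
# .ebextensions/installer.config -- processed before the application is deployed
packages:
  msi:
    mysql: https://example.com/installers/mysql-installer-community-5.7.msi  # hypothetical URL
```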

NEW QUESTION 14
Which data store should the organization choose?

  • A. Amazon Relational Database Service (RDS)
  • B. Amazon Redshift
  • C. Amazon DynamoDB
  • D. Amazon Elasticsearch

Answer: C

NEW QUESTION 15
What is one key difference between an Amazon EBS-backed instance and an instance store-backed instance?

  • A. Amazon EBS-backed instances can be stopped and restarted
  • B. Instance store-backed instances can be stopped and restarted
  • C. Auto Scaling requires using Amazon EBS-backed instances
  • D. Virtual Private Cloud requires EBS-backed instances

Answer: A

NEW QUESTION 16
A company needs a churn prevention model to predict which customers will NOT renew their yearly subscription to the company’s service. The company plans to provide these customers with a promotional offer. A binary classification model that uses Amazon Machine Learning is required. On which basis should this binary classification model be built?

  • A. User profiles (age, gender, income, occupation)
  • B. Last user session
  • C. Each user’s time series of events in the past 3 months
  • D. Quarterly results

Answer: C

NEW QUESTION 17
A data engineer in a manufacturing company is designing a data-processing platform that receives a large volume of unstructured data. The data engineer must populate a well-structured star schema in Amazon Redshift.
What is the most efficient architecture strategy for this purpose?

  • A. Transform the unstructured data using Amazon EMR and generate CSV data. COPY the data into the analysis schema within Redshift.
  • B. Load the unstructured data into Redshift, and use string parsing functions to extract structured data for inserting into the analysis schema.
  • C. When the data is saved to Amazon S3, use S3 Event Notifications and AWS Lambda to transform the file contents. Insert the data into the analysis schema on Redshift.
  • D. Normalize the data using an AWS Marketplace ETL tool, persist the result to Amazon S3, and use AWS Lambda to INSERT the data into Redshift.

Answer: B
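
Whichever transformation path is chosen, bulk data enters Redshift most efficiently through the COPY command reading from Amazon S3. A minimal sketch using psycopg2 (the cluster endpoint, credentials, table, bucket, and IAM role are all hypothetical):

```python
import psycopg2  # assumes network access to the Redshift cluster

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    port=5439, dbname="analytics", user="etl_user", password="example",
)

# COPY is Redshift's parallel bulk-load path from S3, far faster than row INSERTs.
copy_sql = """
    COPY analysis.fact_events
    FROM 's3://etl-output/csv/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
    CSV;
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)
```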

NEW QUESTION 18
A user has provisioned 2,000 IOPS for an EBS volume. The application hosted on that EBS volume is experiencing lower IOPS than provisioned. Which of the following options does NOT affect the IOPS of the volume?

  • A. The application does not have enough IO for the volume
  • B. The instance is EBS optimized
  • C. The EC2 instance has 10 Gigabit Network connectivity
  • D. The volume size is too large

Answer: D
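
Provisioned IOPS are requested explicitly at volume-creation time on an io1 volume; actually receiving them also depends on the attached instance being EBS-optimized and the application generating enough I/O. A sketch with hypothetical values:

```python
import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS SSD: the Iops parameter is what the user "provisions".
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,              # GiB, hypothetical
    VolumeType="io1",      # Provisioned IOPS SSD
    Iops=2000,
)
```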

NEW QUESTION 19
A company with a support organization needs support engineers to be able to search historic cases to provide fast responses to new issues. The company has forwarded all support messages into an Amazon Kinesis stream; this meets a company objective of using only managed services to reduce operational overhead.
The company needs an architecture that allows support engineers to search historic cases to find similar issues and their associated responses.
Which AWS Lambda action is most appropriate?

  • A. Ingest and index the content into an Amazon Elasticsearch domain
  • B. Stem and tokenize the input and store the results into Amazon ElastiCache
  • C. Write data as JSON into Amazon DynamoDB with primary and secondary indexes
  • D. Aggregate feedback in Amazon S3 using a columnar format with partitioning

Answer: A
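
A sketch of the Lambda half of option A: the function receives base64-encoded Kinesis records and indexes each one into Elasticsearch. The domain endpoint is hypothetical, and for brevity this assumes an open-access or IP-based domain policy; production domains require SigV4-signed requests.

```python
import base64
import json
import urllib.request

ES_ENDPOINT = "https://search-support-cases.us-east-1.es.amazonaws.com"  # hypothetical domain

def handler(event, context):
    # A Kinesis event source mapping delivers records base64-encoded.
    for record in event["Records"]:
        doc = json.loads(base64.b64decode(record["kinesis"]["data"]))
        req = urllib.request.Request(
            f"{ES_ENDPOINT}/cases/_doc",   # standard Elasticsearch index API
            data=json.dumps(doc).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)  # production code should sign with SigV4
```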

NEW QUESTION 20
An Amazon EMR cluster using EMRFS has access to megabytes of data on Amazon S3, originating from multiple unique data sources. The customer needs to query common fields across some of the data sets to be able to perform interactive joins and then display results quickly.
Which technology is most appropriate to enable this capability?

  • A. Presto
  • B. MicroStrategy
  • C. Pig
  • D. R Studio

Answer: A
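
Presto ships with EMR and can be queried from Python, for example with the PyHive client. A sketch of an interactive join across two S3-backed tables (the host and table names are hypothetical):

```python
from pyhive import presto  # pip install 'pyhive[presto]'

# Connect to the Presto coordinator on the EMR master node (port 8889 on EMR).
conn = presto.connect(host="emr-master.example.internal", port=8889)
cur = conn.cursor()

# Interactive join across two datasets registered in the metastore.
cur.execute("""
    SELECT o.order_id, c.customer_name
    FROM   orders o
    JOIN   customers c ON o.customer_id = c.customer_id
    LIMIT  10
""")
print(cur.fetchall())
```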

NEW QUESTION 21
......

P.S. Easily pass the AWS-Certified-Big-Data-Specialty exam with 243 Q&As using Exambible Dumps & PDF Version. Welcome to download the newest Exambible AWS-Certified-Big-Data-Specialty Dumps: https://www.exambible.com/AWS-Certified-Big-Data-Specialty-exam/ (243 New Questions)

