DAS-C01 Premium Bundle


AWS Certified Data Analytics - Specialty Certification Exam

4.5 (55140 ratings)
Practice Tests: 130 Questions
PDF Print Version: 130 Questions
Last Update: November 23, 2024

Amazon-Web-Services DAS-C01 Free Practice Questions

We provide approved Amazon-Web-Services DAS-C01 free questions, which are the best preparation for clearing the DAS-C01 test and getting certified by Amazon-Web-Services in AWS Certified Data Analytics - Specialty. The DAS-C01 Questions & Answers cover all the knowledge points of the real DAS-C01 exam. Crack your Amazon-Web-Services DAS-C01 exam with the latest dumps, guaranteed!

Amazon-Web-Services DAS-C01 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
A medical company has a system with sensor devices that read metrics and send them in real time to an Amazon Kinesis data stream. The Kinesis data stream has multiple shards. The company needs to calculate the average value of a numeric metric every second and set an alarm for whenever the value is above one threshold or below another threshold. The alarm must be sent to Amazon Simple Notification Service (Amazon SNS) in less than 30 seconds.
Which architecture meets these requirements?

  • A. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream with an AWS Lambda transformation function that calculates the average per second and sends the alarm to Amazon SNS.
  • B. Use an AWS Lambda function to read from the Kinesis data stream, calculate the average per second, and send the alarm to Amazon SNS.
  • C. Use an Amazon Kinesis Data Firehose delivery stream to read the data from the Kinesis data stream and store it on Amazon S3. Have Amazon S3 trigger an AWS Lambda function that calculates the average per second and sends the alarm to Amazon SNS.
  • D. Use an Amazon Kinesis Data Analytics application to read from the Kinesis data stream and calculate the average per second. Send the results to an AWS Lambda function that sends the alarm to Amazon SNS.

Answer: D
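
For orientation only (not part of the original question), here is a minimal Python sketch of the Lambda half of answer D. It assumes a Kinesis Data Analytics application emits one aggregated record per second to the function as its output; the topic ARN, thresholds, and field name are hypothetical placeholders.

```python
import base64
import json
import os

import boto3

sns = boto3.client("sns")

# Hypothetical values; replace with real ones.
TOPIC_ARN = os.environ.get("ALARM_TOPIC_ARN",
                           "arn:aws:sns:us-east-1:123456789012:sensor-alarms")
HIGH_THRESHOLD = 80.0
LOW_THRESHOLD = 20.0


def handler(event, context):
    """Receive per-second averages emitted by the Kinesis Data Analytics
    application (Lambda configured as the application output) and publish an
    SNS alarm whenever the value falls outside the allowed band."""
    statuses = []
    for record in event.get("records", []):
        payload = json.loads(base64.b64decode(record["data"]))
        avg = float(payload["avg_metric_value"])  # field name is an assumption
        if avg > HIGH_THRESHOLD or avg < LOW_THRESHOLD:
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Sensor metric out of range",
                Message=json.dumps(payload),
            )
        # Report per-record delivery status (recordId + result); check the
        # Lambda-as-output documentation for the exact response envelope.
        statuses.append({"recordId": record["recordId"], "result": "Ok"})
    return statuses
```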

NEW QUESTION 2
A company developed a new elections reporting website that uses Amazon Kinesis Data Firehose to deliver full logs from AWS WAF to an Amazon S3 bucket. The company is now seeking a low-cost option to perform this infrequent data analysis with visualizations of logs in a way that requires minimal development effort.
Which solution meets these requirements?

  • A. Use an AWS Glue crawler to create and update a table in the Glue Data Catalog from the logs. Use Amazon Athena to perform ad-hoc analyses and use Amazon QuickSight to develop data visualizations.
  • B. Create a second Kinesis Data Firehose delivery stream to deliver the log files to Amazon Elasticsearch Service (Amazon ES). Use Amazon ES to perform text-based searches of the logs for ad-hoc analyses and use Kibana for data visualizations.
  • C. Create an AWS Lambda function to convert the logs into .csv format. Then add the function to the Kinesis Data Firehose transformation configuration. Use Amazon Redshift to perform ad-hoc analyses of the logs using SQL queries and use Amazon QuickSight to develop data visualizations.
  • D. Create an Amazon EMR cluster and use Amazon S3 as the data source. Create an Apache Spark job to perform ad-hoc analyses and use Amazon QuickSight to develop data visualizations.

Answer: A

Explanation:
https://aws.amazon.com/blogs/big-data/analyzing-aws-waf-logs-with-amazon-es-amazon-athena-and-amazon-qu
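
To illustrate answer A, a hedged sketch of running an ad-hoc Athena query against the crawler-created table with boto3; the database, table, column names, and S3 output location below are assumptions, not values from the question.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database/table created by the Glue crawler over the WAF logs.
QUERY = """
SELECT httprequest.clientip AS client_ip,
       action,
       COUNT(*) AS request_count
FROM   waf_logs
WHERE  action = 'BLOCK'
GROUP  BY httprequest.clientip, action
ORDER  BY request_count DESC
LIMIT  20;
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "waf_log_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("Query execution ID:", response["QueryExecutionId"])
```

The results can then be pointed at from Amazon QuickSight through its Athena data source.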

NEW QUESTION 3
A real estate company has a mission-critical application using Apache HBase in Amazon EMR. Amazon EMR is configured with a single master node. The company has over 5 TB of data stored in a Hadoop Distributed File System (HDFS). The company wants a cost-effective solution to make its HBase data highly available.
Which architectural pattern meets the company's requirements?

  • A. Use Spot Instances for core and task nodes and a Reserved Instance for the EMR master node. Configure the EMR cluster with multiple master nodes. Schedule automated snapshots using Amazon EventBridge.
  • B. Store the data on an EMR File System (EMRFS) instead of HDFS. Enable EMRFS consistent view. Create an EMR HBase cluster with multiple master nodes. Point the HBase root directory to an Amazon S3 bucket.
  • C. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Run two separate EMR clusters in two different Availability Zones. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.
  • D. Store the data on an EMR File System (EMRFS) instead of HDFS and enable EMRFS consistent view. Create a primary EMR HBase cluster with multiple master nodes. Create a secondary EMR HBase read-replica cluster in a separate Availability Zone. Point both clusters to the same HBase root directory in the same Amazon S3 bucket.

Answer: D
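
As background for answer D, a rough boto3 sketch of launching the primary EMR HBase cluster with multiple master nodes and an S3 HBase root directory; cluster name, release label, instance types, subnet, and bucket are placeholders, and the secondary cluster would additionally enable the HBase read-replica setting.

```python
import boto3

emr = boto3.client("emr")

# Classifications that move HBase storage from HDFS to Amazon S3.
# (The secondary read-replica cluster would also set
#  "hbase.emr.readreplica.enabled": "true" in the "hbase" classification.)
configurations = [
    {"Classification": "hbase", "Properties": {"hbase.emr.storageMode": "s3"}},
    {
        "Classification": "hbase-site",
        "Properties": {"hbase.rootdir": "s3://example-hbase-bucket/hbase-root/"},
    },
]

cluster = emr.run_job_flow(
    Name="hbase-primary-on-s3",              # placeholder name
    ReleaseLabel="emr-6.9.0",                # placeholder release
    Applications=[{"Name": "HBase"}],
    Configurations=configurations,
    Instances={
        "InstanceGroups": [
            # Three master instances provide the requested high availability.
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 3},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 3},
        ],
        "Ec2SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster ID:", cluster["JobFlowId"])
```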

NEW QUESTION 4
An education provider’s learning management system (LMS) is hosted in a 100 TB data lake that is built on Amazon S3. The provider’s LMS supports hundreds of schools. The provider wants to build an advanced analytics reporting platform using Amazon Redshift to handle complex queries with optimal performance. System users will query the most recent 4 months of data 95% of the time while 5% of the queries will leverage data from the previous 12 months.
Which solution meets these requirements in the MOST cost-effective way?

  • A. Store the most recent 4 months of data in the Amazon Redshift cluster. Use Amazon Redshift Spectrum to query data in the data lake. Use S3 lifecycle management rules to store data from the previous 12 months in Amazon S3 Glacier storage.
  • B. Leverage DS2 nodes for the Amazon Redshift cluster. Migrate all data from Amazon S3 to Amazon Redshift. Decommission the data lake.
  • C. Store the most recent 4 months of data in the Amazon Redshift cluster. Use Amazon Redshift Spectrum to query data in the data lake. Ensure the S3 Standard storage class is in use with objects in the data lake.
  • D. Store the most recent 4 months of data in the Amazon Redshift cluster. Use Amazon Redshift federated queries to join cluster data with the data lake to reduce costs. Ensure the S3 Standard storage class is in use with objects in the data lake.

Answer: C
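
For reference, a minimal sketch of the Spectrum piece of answer C using the Redshift Data API: the Glue Data Catalog database that describes the S3 data lake is exposed to the cluster as an external schema. All identifiers and the IAM role ARN are hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Expose the Glue Data Catalog database that describes the S3 data lake as a
# Redshift Spectrum external schema, so the 5% of queries that need older
# data can read it in place.
CREATE_EXTERNAL_SCHEMA = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS lms_history
FROM DATA CATALOG
DATABASE 'lms_datalake'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

redshift_data.execute_statement(
    ClusterIdentifier="reporting-cluster",  # placeholder cluster
    Database="dev",
    DbUser="admin",
    Sql=CREATE_EXTERNAL_SCHEMA,
)
```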

NEW QUESTION 5
An online retail company uses Amazon Redshift to store historical sales transactions. The company is required to encrypt data at rest in the clusters to comply with the Payment Card Industry Data Security Standard (PCI DSS). A corporate governance policy mandates management of encryption keys using an on-premises hardware security module (HSM).
Which solution meets these requirements?

  • A. Create and manage encryption keys using AWS CloudHSM Classic. Launch an Amazon Redshift cluster in a VPC with the option to use CloudHSM Classic for key management.
  • B. Create a VPC and establish a VPN connection between the VPC and the on-premises network. Create an HSM connection and client certificate for the on-premises HSM. Launch a cluster in the VPC with the option to use the on-premises HSM to store keys.
  • C. Create an HSM connection and client certificate for the on-premises HSM. Enable HSM encryption on the existing unencrypted cluster by modifying the cluster. Connect to the VPC where the Amazon Redshift cluster resides from the on-premises network using a VPN.
  • D. Create a replica of the on-premises HSM in AWS CloudHSM. Launch a cluster in a VPC with the option to use CloudHSM to store keys.

Answer: B
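
A hedged boto3 sketch of answer B, assuming the VPN to the on-premises network already exists; the HSM address, partition details, certificates, and cluster settings are placeholders that would come from the real environment.

```python
import boto3

redshift = boto3.client("redshift")

# 1. Create a client certificate that the cluster presents to the HSM.
redshift.create_hsm_client_certificate(
    HsmClientCertificateIdentifier="onprem-hsm-client-cert"
)
# (Out of band: register the certificate's public key with the on-premises HSM.)

# 2. Describe how Redshift reaches the on-premises HSM over the VPN.
redshift.create_hsm_configuration(
    HsmConfigurationIdentifier="onprem-hsm-config",
    Description="On-premises HSM reached over the site-to-site VPN",
    HsmIpAddress="10.0.100.5",                 # placeholder on-premises address
    HsmPartitionName="REDSHIFT_PARTITION",     # placeholder partition
    HsmPartitionPassword="example-password",
    HsmServerPublicCertificate="-----BEGIN CERTIFICATE-----...",
)

# 3. Launch an encrypted cluster in the VPC that uses the HSM for its keys.
redshift.create_cluster(
    ClusterIdentifier="pci-sales-cluster",
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="ExamplePassw0rd!",
    Encrypted=True,
    HsmClientCertificateIdentifier="onprem-hsm-client-cert",
    HsmConfigurationIdentifier="onprem-hsm-config",
    ClusterSubnetGroupName="vpc-subnet-group",  # subnet group in the VPC
)
```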

NEW QUESTION 6
A company analyzes its data in an Amazon Redshift data warehouse, which currently has a cluster of three dense storage nodes. Due to a recent business acquisition, the company needs to load an additional 4 TB of user data into Amazon Redshift. The engineering team will combine all the user data and apply complex calculations that require I/O intensive resources. The company needs to adjust the cluster's capacity to support the change in analytical and storage requirements.
Which solution meets these requirements?

  • A. Resize the cluster using elastic resize with dense compute nodes.
  • B. Resize the cluster using classic resize with dense compute nodes.
  • C. Resize the cluster using elastic resize with dense storage nodes.
  • D. Resize the cluster using classic resize with dense storage nodes.

Answer: C
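
A minimal sketch of answer C with the Redshift API: an elastic resize that keeps the dense storage node family and only changes the node count. The cluster identifier, node type, and target size are placeholders.

```python
import boto3

redshift = boto3.client("redshift")

# Elastic resize (Classic=False) keeps the cluster available while nodes are
# added; the node family stays dense storage, only the count changes.
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",  # placeholder
    NodeType="ds2.xlarge",                  # existing dense storage node type
    NumberOfNodes=6,                        # example: scale out from 3 nodes
    Classic=False,
)
```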

NEW QUESTION 7
A company wants to provide its data analysts with uninterrupted access to the data in its Amazon Redshift cluster. All data is streamed to an Amazon S3 bucket with Amazon Kinesis Data Firehose. An AWS Glue job that is scheduled to run every 5 minutes issues a COPY command to move the data into Amazon Redshift.
The amount of data delivered is uneven throughout the day, and cluster utilization is high during certain periods. The COPY command usually completes within a couple of seconds. However, when a load spike occurs, locks can exist and data can be missed. Currently, the AWS Glue job is configured to run without retries, with a timeout of 5 minutes, and with concurrency of 1.
How should a data analytics specialist configure the AWS Glue job to optimize fault tolerance and improve data availability in the Amazon Redshift cluster?

  • A. Increase the number of retries. Decrease the timeout value. Increase the job concurrency.
  • B. Keep the number of retries at 0. Decrease the timeout value. Increase the job concurrency.
  • C. Keep the number of retries at 0. Decrease the timeout value. Keep the job concurrency at 1.
  • D. Keep the number of retries at 0. Increase the timeout value. Keep the job concurrency at 1.

Answer: B
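
For illustration, a hedged sketch of applying answer B through the AWS Glue API; the job name is a placeholder, and the timeout and concurrency values are example choices rather than prescribed settings.

```python
import boto3

glue = boto3.client("glue")

job_name = "redshift-copy-job"  # placeholder

# Fetch the current definition so required fields can be passed back unchanged.
current = glue.get_job(JobName=job_name)["Job"]

glue.update_job(
    JobName=job_name,
    JobUpdate={
        "Role": current["Role"],
        "Command": current["Command"],
        "MaxRetries": 0,                                 # keep retries at 0
        "Timeout": 2,                                    # minutes, below the 5-minute schedule
        "ExecutionProperty": {"MaxConcurrentRuns": 3},   # allow runs to overlap
    },
)
```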

NEW QUESTION 8
A company wants to run analytics on its Elastic Load Balancing logs stored in Amazon S3. A data analyst needs to be able to query all data from a desired year, month, or day. The data analyst should also be able to query a subset of the columns. The company requires minimal operational overhead and the most cost-effective solution.
Which approach meets these requirements for optimizing and querying the log data?

  • A. Use an AWS Glue job nightly to transform new log files into .csv format and partition by year, month, and day. Use AWS Glue crawlers to detect new partitions. Use Amazon Athena to query the data.
  • B. Launch a long-running Amazon EMR cluster that continuously transforms new log files from Amazon S3 into its Hadoop Distributed File System (HDFS) storage and partitions by year, month, and day. Use Apache Presto to query the optimized format.
  • C. Launch a transient Amazon EMR cluster nightly to transform new log files into Apache ORC format and partition by year, month, and day. Use Amazon Redshift Spectrum to query the data.
  • D. Use an AWS Glue job nightly to transform new log files into Apache Parquet format and partition by year, month, and day. Use AWS Glue crawlers to detect new partitions. Use Amazon Athena to query the data.

Answer: C

NEW QUESTION 9
A company has a data lake on AWS that ingests sources of data from multiple business units and uses Amazon Athena for queries. The storage layer is Amazon S3 using the AWS Glue Data Catalog. The company wants to make the data available to its data scientists and business analysts. However, the company first needs to manage data access for Athena based on user roles and responsibilities.
What should the company do to apply these access controls with the LEAST operational overhead?

  • A. Define security policy-based rules for the users and applications by role in AWS Lake Formation.
  • B. Define security policy-based rules for the users and applications by role in AWS Identity and Access Management (IAM).
  • C. Define security policy-based rules for the tables and columns by role in AWS Glue.
  • D. Define security policy-based rules for the tables and columns by role in AWS Identity and Access Management (IAM).

Answer: D

NEW QUESTION 10
A company operates toll services for highways across the country and collects data that is used to understand usage patterns. Analysts have requested the ability to run traffic reports in near-real time. The company is interested in building an ingestion pipeline that loads all the data into an Amazon Redshift cluster and alerts operations personnel when toll traffic for a particular toll station does not meet a specified threshold. Station data and the corresponding threshold values are stored in Amazon S3.
Which approach is the MOST efficient way to meet these requirements?

  • A. Use Amazon Kinesis Data Firehose to collect data and deliver it to Amazon Redshift and Amazon Kinesis Data Analytics simultaneously. Create a reference data source in Kinesis Data Analytics to temporarily store the threshold values from Amazon S3 and compare the count of vehicles for a particular toll station against its corresponding threshold value. Use AWS Lambda to publish an Amazon Simple Notification Service (Amazon SNS) notification if the threshold is not met.
  • B. Use Amazon Kinesis Data Streams to collect all the data from toll stations. Create a stream in Kinesis Data Streams to temporarily store the threshold values from Amazon S3. Send both streams to Amazon Kinesis Data Analytics to compare the count of vehicles for a particular toll station against its corresponding threshold value. Use AWS Lambda to publish an Amazon Simple Notification Service (Amazon SNS) notification if the threshold is not met. Connect Amazon Kinesis Data Firehose to Kinesis Data Streams to deliver the data to Amazon Redshift.
  • C. Use Amazon Kinesis Data Firehose to collect data and deliver it to Amazon Redshift. Then, automatically trigger an AWS Lambda function that queries the data in Amazon Redshift, compares the count of vehicles for a particular toll station against its corresponding threshold values read from Amazon S3, and publishes an Amazon Simple Notification Service (Amazon SNS) notification if the threshold is not met.
  • D. Use Amazon Kinesis Data Firehose to collect data and deliver it to Amazon Redshift and Amazon Kinesis Data Analytics simultaneously. Use Kinesis Data Analytics to compare the count of vehicles against the threshold value for the station stored in a table as an in-application stream based on information stored in Amazon S3. Configure an AWS Lambda function as an output for the application that will publish an Amazon Simple Queue Service (Amazon SQS) notification to alert operations personnel if the threshold is not met.

Answer: D
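
As a rough sketch of the reference-data piece described in answer D, the S3-hosted station thresholds can be attached to a Kinesis Data Analytics (SQL) application as an in-application reference table; the application name, bucket, role ARN, and column schema are assumptions.

```python
import boto3

kda = boto3.client("kinesisanalytics")

app_name = "toll-traffic-monitor"  # hypothetical application
version = kda.describe_application(ApplicationName=app_name)[
    "ApplicationDetail"
]["ApplicationVersionId"]

# Load the station thresholds from S3 into an in-application reference table.
kda.add_application_reference_data_source(
    ApplicationName=app_name,
    CurrentApplicationVersionId=version,
    ReferenceDataSource={
        "TableName": "STATION_THRESHOLDS",
        "S3ReferenceDataSource": {
            "BucketARN": "arn:aws:s3:::example-toll-config",
            "FileKey": "thresholds.csv",
            "ReferenceRoleARN": "arn:aws:iam::123456789012:role/KdaS3ReadRole",
        },
        "ReferenceSchema": {
            "RecordFormat": {
                "RecordFormatType": "CSV",
                "MappingParameters": {
                    "CSVMappingParameters": {
                        "RecordRowDelimiter": "\n",
                        "RecordColumnDelimiter": ",",
                    }
                },
            },
            "RecordColumns": [
                {"Name": "station_id", "SqlType": "VARCHAR(16)"},
                {"Name": "min_vehicles", "SqlType": "INTEGER"},
            ],
        },
    },
)
```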

NEW QUESTION 11
A company uses Amazon Redshift as its data warehouse. A new table has columns that contain sensitive data. The data in the table will eventually be referenced by several existing queries that run many times a day.
A data analyst needs to load 100 billion rows of data into the new table. Before doing so, the data analyst must ensure that only members of the auditing group can read the columns containing sensitive data.
How can the data analyst meet these requirements with the lowest maintenance overhead?

  • A. Load all the data into the new table and grant the auditing group permission to read from the table. Load all the data except for the columns containing sensitive data into a second table. Grant the appropriate users read-only permissions to the second table.
  • B. Load all the data into the new table and grant the auditing group permission to read from the table. Use the GRANT SQL command to allow read-only access to a subset of columns to the appropriate users.
  • C. Load all the data into the new table and grant all users read-only permissions to non-sensitive columns. Attach an IAM policy to the auditing group with explicit ALLOW access to the sensitive data columns.
  • D. Load all the data into the new table and grant the auditing group permission to read from the table. Create a view of the new table that contains all the columns, except for those considered sensitive, and grant the appropriate users read-only permissions to the table.

Answer: B

Explanation:
https://aws.amazon.com/blogs/big-data/achieve-finer-grained-data-security-with-column-level-access-control-in
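
To make answer B concrete, a minimal sketch of the column-level GRANT statements issued through the Redshift Data API; the schema, table, column, and group names are placeholders.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Column-level GRANTs avoid maintaining a second table or view.
# Schema, table, column, and group names are placeholders.
STATEMENTS = [
    # Auditors may read every column, including the sensitive ones.
    "GRANT SELECT ON sales.customer_payments TO GROUP auditing;",
    # Everyone else only sees the non-sensitive columns.
    "GRANT SELECT (customer_id, order_total, order_date) "
    "ON sales.customer_payments TO GROUP analysts;",
]

for sql in STATEMENTS:
    redshift_data.execute_statement(
        ClusterIdentifier="warehouse-cluster",  # placeholder
        Database="dev",
        DbUser="admin",
        Sql=sql,
    )
```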

NEW QUESTION 12
An operations team notices that a few AWS Glue jobs for a given ETL application are failing. The AWS Glue jobs read a large number of small JSON files from an Amazon S3 bucket and write the data to a different S3 bucket in Apache Parquet format with no major transformations. Upon initial investigation, a data engineer notices the following error message in the History tab on the AWS Glue console: “Command Failed with Exit Code 1.”
Upon further investigation, the data engineer notices that the driver memory profile of the failed jobs crosses the safe threshold of 50% usage quickly and reaches 90–95% soon after. The average memory usage across all executors continues to be less than 4%.
The data engineer also notices the following error while examining the related Amazon CloudWatch Logs. What should the data engineer do to solve the failure in the MOST cost-effective way?

  • A. Change the worker type from Standard to G.2X.
  • B. Modify the AWS Glue ETL code to use the ‘groupFiles’: ‘inPartition’ feature.
  • C. Increase the fetch size setting by using AWS Glue dynamics frame.
  • D. Modify maximum capacity to increase the total maximum data processing units (DPUs) used.

Answer: B

Explanation:
https://docs.aws.amazon.com/glue/latest/dg/monitor-profile-debug-oom-abnormalities.html#monitor-debug-oom
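
For reference, a hedged sketch of answer B inside a Glue ETL script: the documented 'groupFiles' and 'groupSize' connection options coalesce the many small JSON files so the driver no longer tracks every object individually. Bucket paths and the group size are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Group many small JSON files into larger in-memory partitions so the file
# listing no longer overwhelms the driver.
dyf = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://example-input-bucket/raw-json/"],  # placeholder bucket
        "recurse": True,
        "groupFiles": "inPartition",
        "groupSize": "134217728",  # ~128 MB per group
    },
    format="json",
)

glue_context.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-output-bucket/parquet/"},
    format="parquet",
)

job.commit()
```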

NEW QUESTION 13
An ecommerce company stores customer purchase data in Amazon RDS. The company wants a solution to store and analyze historical data. The most recent 6 months of data will be queried frequently for analytics workloads. This data set is several terabytes in size. Once a month, historical data for the last 5 years must be accessible and will be joined with the more recent data. The company wants to optimize performance and cost.
Which storage solution will meet these requirements?

  • A. Create a read replica of the RDS database to store the most recent 6 months of data. Copy the historical data into Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3 and Amazon RDS. Run historical queries using Amazon Athena.
  • B. Use an ETL tool to incrementally load the most recent 6 months of data into an Amazon Redshift cluster. Run more frequent queries against this cluster. Create a read replica of the RDS database to run queries on the historical data.
  • C. Incrementally copy data from Amazon RDS to Amazon S3. Create an AWS Glue Data Catalog of the data in Amazon S3. Use Amazon Athena to query the data.
  • D. Incrementally copy data from Amazon RDS to Amazon S3. Load and store the most recent 6 months of data in Amazon Redshift. Configure an Amazon Redshift Spectrum table to connect to all historical data.

Answer: D
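
As a sketch of how answer D looks at query time, the monthly job can combine the hot table in Amazon Redshift with the historical data exposed through a Redshift Spectrum external schema; every identifier below is hypothetical.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# "recent_purchases" holds the last 6 months inside Redshift;
# "spectrum_history.purchases" is an external table over the S3 archive.
MONTHLY_REPORT_SQL = """
SELECT customer_id,
       SUM(order_total) AS lifetime_spend
FROM (
    SELECT customer_id, order_total FROM recent_purchases
    UNION ALL
    SELECT customer_id, order_total FROM spectrum_history.purchases
) AS all_purchases
GROUP BY customer_id;
"""

redshift_data.execute_statement(
    ClusterIdentifier="retail-cluster",  # placeholder
    Database="dev",
    DbUser="admin",
    Sql=MONTHLY_REPORT_SQL,
)
```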

NEW QUESTION 14
A retail company is building its data warehouse solution using Amazon Redshift. As a part of that effort, the company is loading hundreds of files into the fact table created in its Amazon Redshift cluster. The company wants the solution to achieve the highest throughput and optimally use cluster resources when loading data into the company’s fact table.
How should the company meet these requirements?

  • A. Use multiple COPY commands to load the data into the Amazon Redshift cluster.
  • B. Use S3DistCp to load multiple files into the Hadoop Distributed File System (HDFS) and use an HDFS connector to ingest the data into the Amazon Redshift cluster.
  • C. Use LOAD commands equal to the number of Amazon Redshift cluster nodes and load the data in parallel into each node.
  • D. Use a single COPY command to load the data into the Amazon Redshift cluster.

Answer: D

Explanation:
https://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-single-copy-command.html
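
A minimal sketch of answer D via the Redshift Data API: one COPY command pointed at the common S3 prefix lets Amazon Redshift split the files across slices and load them in parallel. The table, bucket, and IAM role are placeholders.

```python
import boto3

redshift_data = boto3.client("redshift-data")

# A single COPY over the shared prefix loads all files in parallel across the
# cluster's slices; bucket, table, and role are placeholders.
COPY_SQL = """
COPY sales_fact
FROM 's3://example-staging-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
GZIP;
"""

redshift_data.execute_statement(
    ClusterIdentifier="dw-cluster",  # placeholder
    Database="dev",
    DbUser="admin",
    Sql=COPY_SQL,
)
```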

NEW QUESTION 15
Once a month, a company receives a 100 MB .csv file compressed with gzip. The file contains 50,000 property listing records and is stored in Amazon S3 Glacier. The company needs its data analyst to query a subset of the data for a specific vendor.
What is the most cost-effective solution?

  • A. Load the data into Amazon S3 and query it with Amazon S3 Select.
  • B. Query the data from Amazon S3 Glacier directly with Amazon Glacier Select.
  • C. Load the data to Amazon S3 and query it with Amazon Athena.
  • D. Load the data to Amazon S3 and query it with Amazon Redshift Spectrum.

Answer: A
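
To illustrate answer A, a hedged boto3 sketch of S3 Select reading the gzip-compressed .csv and filtering for a single vendor; the bucket, key, and column name are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# S3 Select filters the compressed .csv in place, so only the matching rows
# for the vendor are returned. Bucket, key, and column name are placeholders.
response = s3.select_object_content(
    Bucket="example-listings-bucket",
    Key="property-listings.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT * FROM s3object s WHERE s.vendor_id = 'V-1001'",
    InputSerialization={
        "CSV": {"FileHeaderInfo": "USE"},
        "CompressionType": "GZIP",
    },
    OutputSerialization={"CSV": {}},
)

for event in response["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode("utf-8"), end="")
```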

NEW QUESTION 16
A company has developed several AWS Glue jobs to validate and transform its data from Amazon S3 and load it into Amazon RDS for MySQL in batches once every day. The ETL jobs read the S3 data using a DynamicFrame. Currently, the ETL developers are experiencing challenges in processing only the incremental data on every run, as the AWS Glue job processes all the S3 input data on each run.
Which approach would allow the developers to solve the issue with minimal coding effort?

  • A. Have the ETL jobs read the data from Amazon S3 using a DataFrame.
  • B. Enable job bookmarks on the AWS Glue jobs.
  • C. Create custom logic on the ETL jobs to track the processed S3 objects.
  • D. Have the ETL jobs delete the processed objects or data from Amazon S3 after each run.

Answer: B
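
A hedged sketch of answer B: bookmarks are enabled on the job (for example with the '--job-bookmark-option': 'job-bookmark-enable' default argument), and the script passes a transformation_ctx so AWS Glue can remember which S3 objects were already processed. All names are placeholders.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# transformation_ctx is what the bookmark uses to track processed objects.
incremental = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/daily/"]},
    format="json",
    transformation_ctx="incremental_source",
)

# ... validate and transform, then write to Amazon RDS for MySQL through a
# Glue JDBC connection (connection and table names are placeholders) ...
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=incremental,
    catalog_connection="mysql-connection",
    connection_options={"dbtable": "validated_records", "database": "appdb"},
)

# Committing the job advances the bookmark so the next run skips these files.
job.commit()
```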

NEW QUESTION 17
A streaming application is reading data from Amazon Kinesis Data Streams and immediately writing the data to an Amazon S3 bucket every 10 seconds. The application is reading data from hundreds of shards. The batch interval cannot be changed due to a separate requirement. The data is being accessed by Amazon Athena. Users are seeing degradation in query performance as time progresses.
Which action can help improve query performance?

  • A. Merge the files in Amazon S3 to form larger files.
  • B. Increase the number of shards in Kinesis Data Streams.
  • C. Add more memory and CPU capacity to the streaming application.
  • D. Write the files to multiple S3 buckets.

Answer: A

Explanation:
https://aws.amazon.com/blogs/big-data/top-10-performance-tuning-tips-for-amazon-athena/
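
One hedged way to apply the tuning advice behind answer A is a periodic Athena CTAS statement that compacts the small 10-second files into a few larger Parquet files; the database, tables, bucket locations, and bucketing column are assumptions.

```python
import boto3

athena = boto3.client("athena")

# CTAS rewrites the many small objects into a handful of larger Parquet files,
# reducing the per-file overhead Athena pays when scanning.
COMPACT_SQL = """
CREATE TABLE streaming_db.events_compacted
WITH (
    format = 'PARQUET',
    external_location = 's3://example-compacted-bucket/events/',
    bucketed_by = ARRAY['event_id'],
    bucket_count = 16
) AS
SELECT * FROM streaming_db.events_raw;
"""

athena.start_query_execution(
    QueryString=COMPACT_SQL,
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```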

NEW QUESTION 18
......

P.S. Downloadfreepdf.net is now offering 100% pass-guaranteed DAS-C01 dumps! All DAS-C01 exam questions have been updated with correct answers: https://www.downloadfreepdf.net/DAS-C01-pdf-download.html (130 New Questions)

