CCA-500 Premium Bundle

Cloudera Certified Administrator for Apache Hadoop (CCAH) Certification Exam

4.5 (45780 ratings)

Last update: September 29, 2024

Cloudera CCA-500 Free Practice Questions

We provide practice questions which are the best for clearing the CCA-500 test and getting certified as a Cloudera Certified Administrator for Apache Hadoop (CCAH). They cover all the knowledge points of the real CCA-500 exam. Crack your Cloudera CCA-500 exam with the latest dumps, guaranteed!

Cloudera CCA-500 free dumps questions online. Read and test now.

NEW QUESTION 1
A slave node in your cluster has four 2TB hard drives installed (4 x 2TB). The DataNode is configured to store HDFS blocks on all disks. You set the value of the dfs.datanode.du.reserved parameter to 100 GB. How does this alter HDFS block storage?

  • A. 25GB on each hard drive may not be used to store HDFS blocks
  • B. 100GB on each hard drive may not be used to store HDFS blocks
  • C. All hard drives may be used to store HDFS blocks as long as at least 100 GB in total is available on the node
  • D. A maximum of 100 GB on each hard drive may be used to store HDFS blocks

Answer: B
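
For reference, dfs.datanode.du.reserved is applied per storage volume, which is why the reservation comes off every disk. A minimal hdfs-site.xml sketch (the value is in bytes; the 100 GB figure mirrors the question):
<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- reserved space in bytes, applied to EACH DataNode volume: 100 GB -->
  <value>107374182400</value>
</property>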

NEW QUESTION 2
You use the hadoop fs -put command to add a file "sales.txt" to HDFS. This file is small enough that it fits into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of this file in this situation?

  • A. The file will remain under-replicated until the administrator brings that node back online
  • B. The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file’s replication factor doesn’t fall below)
  • C. The file will be immediately re-replicated and all other HDFS operations on the cluster will halt until the cluster’s replication values are restored
  • D. The file will be re-replicated automatically after the NameNode determines it is under-replicated based on the block reports it receives from the DataNodes

Answer: D
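
If you want to watch this happen on a live cluster, the NameNode’s view of replication health is visible from the command line; a quick check:
hdfs dfsadmin -report   # summary includes an "Under replicated blocks" count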

NEW QUESTION 3
Your Hadoop cluster is configured with HDFS and MapReduce version 2 (MRv2) on YARN. Can you configure a worker node to run a NodeManager daemon but not a DataNode daemon and still have a functional cluster?

  • A. Yes. The daemon will receive data from the NameNode to run Map tasks
  • B. Yes. The daemon will get data from another (non-local) DataNode to run Map tasks
  • C. Yes. The daemon will receive Map tasks only
  • D. Yes. The daemon will receive Reducer tasks only

Answer: B

NEW QUESTION 4
Which is the default scheduler in YARN?

  • A. YARN doesn’t configure a default scheduler; you must first assign an appropriate scheduler class in yarn-site.xml
  • B. Capacity Scheduler
  • C. Fair Scheduler
  • D. FIFO Scheduler

Answer: B

Explanation: Reference:http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
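
The scheduler can also be pinned explicitly in yarn-site.xml via yarn.resourcemanager.scheduler.class; a sketch selecting the Capacity Scheduler (the default noted above):
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>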

NEW QUESTION 5
You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit Job A, so that only Job A is running on the cluster. A while later, you submit Job B. Now Job A and Job B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs? (Choose two)

  • A. When Job B gets submitted, it will get assigned tasks, while Job A continues to run with fewer tasks.
  • B. When Job B gets submitted, Job A has to finish first before Job B can get scheduled.
  • C. When Job A gets submitted, it doesn’t consume all the task slots.
  • D. When Job A gets submitted, it consumes all the task slots.

Answer: AD

NEW QUESTION 6
Which two features does Kerberos security add to a Hadoop cluster? (Choose two)

  • A. User authentication on all remote procedure calls (RPCs)
  • B. Encryption for data during transfer between the Mappers and Reducers
  • C. Encryption for data on disk (“at rest”)
  • D. Authentication for user access to the cluster against a central server
  • E. Root access to the cluster for users hdfs and mapred but non-root access for clients

Answer: AD
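
Enabling these two features is done in core-site.xml; a minimal sketch (keytab and principal setup for the daemons are omitted):
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>   <!-- authenticate users and RPCs against the KDC -->
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>       <!-- enforce service-level authorization checks -->
</property>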

NEW QUESTION 7
A user comes to you, complaining that when she attempts to submit a Hadoop job, it fails. There is a directory in HDFS named /data/input. The JAR is named j.jar, and the driver class is named DriverClass.
She runs the command:
hadoop jar j.jar DriverClass /data/input /data/output
The error message returned includes the line:
PriviledgedActionException as:training (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/data/input
What is the cause of the error?

  • A. The user is not authorized to run the job on the cluster
  • B. The output directory already exists
  • C. The name of the driver has been spelled incorrectly on the command line
  • D. The directory name is misspelled in HDFS
  • E. The Hadoop configuration files on the client do not point to the cluster

Answer: E
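
The file:/ scheme in the error shows the client fell back to the local filesystem default. A correctly configured client core-site.xml points at the cluster; a sketch (the hostname is illustrative):
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>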

NEW QUESTION 8
Assume you have a file named foo.txt in your local directory. You issue the following three commands:
hadoop fs -mkdir input
hadoop fs -put foo.txt input/foo.txt
hadoop fs -put foo.txt input
What happens when you issue the third command?

  • A. The write succeeds, overwriting foo.txt in HDFS with no warning
  • B. The file is uploaded and stored as a plain file named input
  • C. You get a warning that foo.txt is being overwritten
  • D. You get an error message telling you that foo.txt already exists, and asking you if you would like to overwrite it.
  • E. You get an error message telling you that foo.txt already exists, and the file is not written to HDFS
  • F. You get an error message telling you that input is not a directory
  • G. The write silently fails

Answer: E
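
If overwriting is actually what the user wants, put accepts a force flag; for example:
hadoop fs -put -f foo.txt input   # -f overwrites input/foo.txt instead of raising the error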

NEW QUESTION 9
You suspect that your NameNode is incorrectly configured, and is swapping memory to disk. Which Linux commands help you to identify whether swapping is occurring? (Select all that apply)

  • A. free
  • B. df
  • C. memcat
  • D. top
  • E. jps
  • F. vmstat
  • G. swapinfo

Answer: ADF

Explanation: Reference:http://www.cyberciti.biz/faq/linux-check-swap-usage-command/
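
For example, run these on the NameNode host (illustrative invocations):
free -m     # the Swap: line shows total/used swap in MB
vmstat 5    # nonzero si/so columns mean pages are swapping in/out
top         # the summary header reports swap usage; press M to sort by memory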

NEW QUESTION 10
Which YARN daemon or service monitors per-application resource usage (e.g., memory, CPU)?

  • A. ApplicationMaster
  • B. NodeManager
  • C. ApplicationManagerService
  • D. ResourceManager

Answer: A

NEW QUESTION 11
Which three basic configuration parameters must you set to migrate your cluster from MapReduce 1 (MRv1) to MapReduce v2 (MRv2)? (Choose three)

  • A. Configure the NodeManager to enable MapReduce services on YARN by setting the following property in yarn-site.xml:<name>yarn.nodemanager.hostname</name><value>your_nodeManager_shuffle</value>
  • B. Configure the NodeManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:<name>yarn.nodemanager.hostname</name><value>your_nodeManager_hostname</value>
  • C. Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml:<name>mapreduce.jobtracker.taskScheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
  • D. Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml:<name>mapreduce.job.maps</name><value>2</value>
  • E. Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:<name>yarn.resourcemanager.hostname</name><value>your_resourceManager_hostname</value>
  • F. Configure MapReduce as a Framework running on YARN by setting the following property in mapred-site.xml:<name>mapreduce.framework.name</name><value>yarn</value>

Answer: AEF
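
Putting the pieces together, a minimal migration sketch (hostnames are illustrative; note that in practice the MapReduce shuffle service is enabled via yarn.nodemanager.aux-services rather than the hostname property quoted in option A):
<!-- mapred-site.xml -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<!-- yarn-site.xml -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>rm.example.com</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>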

NEW QUESTION 12
Your cluster is running MapReduce version 2 (MRv2) on YARN. Your ResourceManager is configured to use the FairScheduler. Now you want to configure your scheduler such that a new user on the cluster can submit jobs into their own queue at application submission. Which configuration should you set?

  • A. You can specify a new queue name when the user submits a job, and a new queue can be created dynamically if the property yarn.scheduler.fair.allow-undeclared-pools = true
  • B. yarn.scheduler.fair.user-as-default-queue = false and yarn.scheduler.fair.allow-undeclared-pools = true
  • C. You can specify a new queue name when the user submits a job, and a new queue can be created dynamically if yarn.scheduler.fair.user-as-default-queue = false
  • D. You can specify a new queue name per application in the allocations.xml file and have new jobs automatically assigned to the application queue

Answer: A
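
A sketch of the relevant yarn-site.xml properties (both are standard Fair Scheduler settings; the values shown match option A’s behavior):
<property>
  <name>yarn.scheduler.fair.allow-undeclared-pools</name>
  <value>true</value>    <!-- queues named at submission time are created on the fly -->
</property>
<property>
  <name>yarn.scheduler.fair.user-as-default-queue</name>
  <value>true</value>    <!-- jobs with no queue specified land in a per-user queue -->
</property>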

NEW QUESTION 13
Your cluster has the following characteristics:
✑ A rack aware topology is configured and on
✑ Replication is set to 3
✑ Cluster block size is set to 64MB
Which describes the file read process when a client application connects into the cluster and requests a 50MB file?

  • A. The client queries the NameNode for the locations of the block, and reads all three copies. The first copy to complete transfer to the client is the one the client reads as part of Hadoop’s speculative execution framework.
  • B. The client queries the NameNode for the locations of the block, and reads from the first location in the list it receives.
  • C. The client queries the NameNode for the locations of the block, and reads from a random location in the list it receives to eliminate network I/O load by balancing which nodes it retrieves data from at any given time.
  • D. The client queries the NameNode, which retrieves the block from the nearest DataNode to the client and then passes that block back to the client.

Answer: B
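
You can inspect the location list the NameNode would hand back for any file; for example (the path is illustrative):
hdfs fsck /user/training/sales.txt -files -blocks -locations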

NEW QUESTION 14
You have recently converted your Hadoop cluster from a MapReduce 1 (MRv1) architecture to a MapReduce 2 (MRv2) on YARN architecture. Your developers are accustomed to specifying resource allocation (map and reduce tasks) when they run jobs. A developer wants to know how to specify the number of reduce tasks when a specific job runs. Which method should you tell that developer to implement?

  • A. MapReduce version 2 (MRv2) on YARN abstracts resource allocation away from the idea of “tasks” into memory and virtual cores, thus eliminating the need for a developer to specify the number of reduce tasks, and indeed preventing the developer from specifying the number of reduce tasks.
  • B. In YARN, resource allocation is a function of megabytes of memory in multiples of 1024 MB. Thus, they should specify the amount of memory resource they need by executing -D mapreduce.reduce.memory.mb=2048
  • C. In YARN, the ApplicationMaster is responsible for requesting the resources required for a specific launch. Thus, executing -D yarn.applicationmaster.reduce.tasks=2 will specify that the ApplicationMaster launch two task containers on the worker nodes.
  • D. Developers specify reduce tasks in the exact same way for both MapReduce version 1 (MRv1) and MapReduce version 2 (MRv2) on YARN. Thus, executing -D mapreduce.job.reduces=2 will specify two reduce tasks.
  • E. In YARN, resource allocation is a function of virtual cores specified by the ApplicationMaster making requests to the NodeManager, where a reduce task is handled by a single container (and thus a single virtual core). Thus, the developer needs to specify the number of virtual cores to the NodeManager by executing -p yarn.nodemanager.cpu-vcores=2

Answer: D
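
A sketch of the command the developer would run, assuming the driver class uses ToolRunner so generic options are parsed (the JAR name and paths are illustrative):
hadoop jar MyJob.jar DriverClass -D mapreduce.job.reduces=2 /data/input /data/output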

NEW QUESTION 15
You’re upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of 128MB for all new files written to the cluster after the upgrade. What should you do?

  • A. You cannot enforce this, since client code can always override this value
  • B. Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final
  • C. Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.
  • D. Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final
  • E. Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.

Answer: C
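
A sketch of the hdfs-site.xml entry, using the Hadoop 2 property name dfs.blocksize (dfs.block.size is the older, deprecated spelling used in the options):
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>   <!-- 128 MB in bytes -->
  <final>true</final>        <!-- marked final so client configs cannot override it -->
</property>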

NEW QUESTION 16
You have a Hadoop cluster running HDFS, and a gateway machine external to the cluster from which clients submit jobs. What do you need to do in order to run Impala on the cluster and submit jobs from the command line of the gateway machine?

  • A. Install the impalad daemon, the statestored daemon, and the catalogd daemon on each machine in the cluster, and the impala shell on your gateway machine
  • B. Install the impalad daemon, the statestored daemon, the catalogd daemon, and the impala shell on your gateway machine
  • C. Install the impalad daemon and the impala shell on your gateway machine, and the statestored daemon and catalogd daemon on one of the nodes in the cluster
  • D. Install the impalad daemon on each machine in the cluster, the statestored daemon and catalogd daemon on one machine in the cluster, and the impala shell on your gateway machine
  • E. Install the impalad daemon, statestored daemon, and catalogd daemon on each machine in the cluster and on the gateway node

Answer: D
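
Once the daemons are placed as in option D, submitting queries from the gateway is just a matter of pointing the shell at any impalad; for example (the hostname is illustrative):
impala-shell -i impalad-host.example.com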

P.S. Surepassexam is now offering 100% pass-guaranteed CCA-500 dumps! All CCA-500 exam questions have been updated with correct answers: https://www.surepassexam.com/CCA-500-exam-dumps.html (60 New Questions)

