
CCA-500 Online Practice Questions and Answers

Question 4

You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit Job A, so that only Job A is running on the cluster. A while later, you submit Job B. Now Job A and Job B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs? (Choose two)

A. When Job B gets submitted, it will get assigned tasks, while job A continues to run with fewer tasks.

B. When Job B gets submitted, Job A has to finish first before Job B can get scheduled.

C. When Job A gets submitted, it doesn't consume all the task slots.

D. When Job A gets submitted, it consumes all the task slots.
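For context, the sharing behavior described above is configured through the Fair Scheduler's allocations file. A minimal sketch, assuming a single queue (the queue name and resource values are illustrative, not part of the question):

```xml
<?xml version="1.0"?>
<!-- fair-scheduler.xml: illustrative allocations file.
     With a single queue, two concurrent jobs each receive roughly
     an equal share of the queue's resources; a newly submitted job
     picks up slots as the running job's tasks finish. -->
<allocations>
  <queue name="default">
    <minResources>1024 mb, 1 vcores</minResources>
    <weight>1.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
</allocations>
```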

Question 5

You want a node to swap Hadoop daemon data from RAM to disk only when absolutely necessary. What should you do?

A. Delete the /dev/vmswap file on the node

B. Delete the /etc/swap file on the node

C. Set the ram.swap parameter to 0 in core-site.xml

D. Set vm.swapfile file on the node

E. Delete the /swapfile file on the node
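The options above paraphrase the real-world mechanism: the kernel's vm.swappiness parameter, typically set persistently in /etc/sysctl.conf. A sketch (the value 1 rather than 0 is a common recommendation on modern kernels; treat the exact value as an assumption):

```
# /etc/sysctl.conf -- persists across reboots; apply now with: sysctl -p
# vm.swappiness controls how aggressively the kernel swaps anonymous
# memory to disk; a low value means "only when absolutely necessary".
vm.swappiness = 1
```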

Question 6

You want to understand more about how users browse your public website. For example, you want to know which pages they visit prior to placing an order. You have a server farm of 200 web servers hosting your website. Which is the most efficient process to gather these web server logs into your Hadoop cluster for analysis?

A. Sample the web server logs from the web servers and copy them into HDFS using curl

B. Ingest the server web logs into HDFS using Flume

C. Channel these clickstreams into Hadoop using Hadoop Streaming

D. Import all user clicks from your OLTP databases into Hadoop using Sqoop

E. Write a MapReduce job with the web servers as mappers and the Hadoop cluster nodes as reducers
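To make the Flume approach concrete, an agent tailing a web server access log and delivering it to HDFS could be sketched as below. All names (agent1, the log path, the NameNode URL) are illustrative assumptions, not values from the question:

```
# flume.conf -- illustrative single-agent pipeline:
# tail the access log (source), buffer in memory (channel),
# write events to HDFS (sink).
agent1.sources  = weblog
agent1.channels = mem
agent1.sinks    = hdfsout

agent1.sources.weblog.type = exec
agent1.sources.weblog.command = tail -F /var/log/httpd/access_log
agent1.sources.weblog.channels = mem

agent1.channels.mem.type = memory
agent1.channels.mem.capacity = 10000

agent1.sinks.hdfsout.type = hdfs
agent1.sinks.hdfsout.hdfs.path = hdfs://namenode:8020/logs/web/%Y-%m-%d
agent1.sinks.hdfsout.channel = mem
```

In practice one agent runs per web server, all pointing at the same HDFS path, which is what makes Flume efficient for a 200-node farm.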

Question 7

Which three basic configuration parameters must you set to migrate your cluster from MapReduce 1 (MRv1) to MapReduce V2 (MRv2)? (Choose three)

A. Configure the NodeManager to enable MapReduce services on YARN by setting the following property in yarn-site.xml: yarn.nodemanager.hostname = your_nodeManager_shuffle

B. Configure the NodeManager hostname and enable node services on YARN by setting the following property in yarn-site.xml: yarn.nodemanager.hostname = your_nodeManager_hostname

C. Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml: mapreduce.jobtracker.taskScheduler = org.apache.hadoop.mapred.JobQueueTaskScheduler

D. Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml: mapreduce.job.maps = 2

E. Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml: yarn.resourcemanager.hostname = your_resourceManager_hostname

F. Configure MapReduce as a framework running on YARN by setting the following property in mapred-site.xml: mapreduce.framework.name = yarn
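As the options suggest, a minimal MRv2 migration centers on three properties: the ResourceManager hostname, the NodeManager hostname, and the MapReduce framework name. A sketch of how they appear in the two files (hostnames are placeholders carried over from the options):

```xml
<!-- yarn-site.xml: where the YARN daemons live -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>your_resourceManager_hostname</value>
</property>
<property>
  <name>yarn.nodemanager.hostname</name>
  <value>your_nodeManager_hostname</value>
</property>

<!-- mapred-site.xml: run MapReduce jobs as YARN applications -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
```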

Question 8

Which scheduler would you deploy to ensure that your cluster allows short jobs to finish within a reasonable time without starving long-running jobs?

A. Complexity Fair Scheduler (CFS)

B. Capacity Scheduler

C. Fair Scheduler

D. FIFO Scheduler

Question 9

Which YARN daemon or service negotiates map and reduce containers from the Scheduler, tracking their status and monitoring progress?

A. NodeManager

B. ApplicationMaster

C. ApplicationManager

D. ResourceManager

Question 10

You use the hadoop fs -put command to add a file "sales.txt" to HDFS. This file is small enough that it fits into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of the file in this situation?

A. The file will remain under-replicated until the administrator brings that node back online

B. The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file's replication factor doesn't fall below)

C. The file will be immediately re-replicated and all other HDFS operations on the cluster will halt until the cluster's replication values are restored

D. The file will be re-replicated automatically after the NameNode determines it is under-replicated based on the block reports it receives from the DataNodes

Question 11

You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?

A. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.resource.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.

B. You don't need to configure or balance these properties in YARN as YARN dynamically balances resource management capabilities on your cluster

C. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster's capacity set by the yarn-scheduler.minimum-allocation

D. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager
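For illustration, the per-NodeManager capacity properties named in option D are set in yarn-site.xml. The numbers below are assumptions, sized to whatever memory and cores the old MRv1 slots represented on each worker:

```xml
<!-- yarn-site.xml: total resources each NodeManager offers to YARN.
     Values are illustrative; size them to match the former
     TaskTracker slot capacity of the node. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>
```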

Question 12

You're upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of 128MB for all new files written to the cluster after the upgrade. What should you do?

A. You cannot enforce this, since client code can always override this value

B. Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final

C. Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode

D. Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final

E. Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode
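As a sketch of the configuration the options describe, the block size is expressed in bytes and marked final in hdfs-site.xml (in current Hadoop releases the property is usually spelled dfs.blocksize; the options use the older dfs.block.size name):

```xml
<!-- hdfs-site.xml: enforce a 128 MB block size for new files.
     134217728 = 128 * 1024 * 1024 bytes. Marking the value final
     means later configuration resources cannot override it, which
     is why it must also be set on every client machine. -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
  <final>true</final>
</property>
```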

Question 13

You are running a Hadoop cluster with MapReduce version 2 (MRv2) on YARN. You consistently see that MapReduce map tasks on your cluster are running slowly because of excessive JVM garbage collection. How do you increase the JVM heap size to 3GB to optimize performance?

A. yarn.application.child.java.opts=-Xsx3072m

B. yarn.application.child.java.opts=-Xmx3072m

C. mapreduce.map.java.opts=-Xms3072m

D. mapreduce.map.java.opts=-Xmx3072m
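As a sketch, the map-task JVM options from option D are set in mapred-site.xml. Note the distinction the options hinge on: -Xmx raises the maximum heap, while -Xms (option C) only sets the initial heap and does not raise the ceiling:

```xml
<!-- mapred-site.xml: give each map task JVM a 3 GB maximum heap. -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
```

In practice the map container size (mapreduce.map.memory.mb) must also be set somewhat larger than the heap to leave room for JVM overhead; that value is omitted here.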

Exam Code: CCA-500
Exam Name: Cloudera Certified Administrator for Apache Hadoop (CCAH)
Last Update: Apr 21, 2024
Questions: 60