Cloudera CCA-505 Exam Questions and Answers 2021

Proper study for the CCA-505 Cloudera Certified Administrator for Apache Hadoop (CCAH) CDH5 Upgrade Exam begins with preparation products designed to help you pass the CCA-505 test on your first attempt. Try the free demo right now.

Online Cloudera CCA-505 free dumps demo Below:

NEW QUESTION 1
Which two are features of Hadoop's rack topology?

  • A. Configuration of rack awareness is accomplished using a configuration file. You cannot use a rack topology script.
  • B. Even for small clusters on a single rack, configuring rack awareness will improve performance.
  • C. Rack location is considered in the HDFS block placement policy.
  • D. HDFS is rack aware but MapReduce daemons are not.
  • E. Hadoop gives preference to intra-rack data transfer in order to conserve bandwidth.

Answer: CE

NEW QUESTION 2
You are configuring a cluster running HDFS and MapReduce version 2 (MRv2) on YARN, on Linux. How must you format the underlying filesystem of each DataNode?

  • A. They must not be formatted - HDFS will format the filesystem automatically
  • B. They may be formatted in any Linux filesystem
  • C. They must be formatted as HDFS
  • D. They must be formatted as either ext3 or ext4

Answer: D

NEW QUESTION 3
Your Hadoop cluster contains nodes in three racks. You have NOT configured the dfs.hosts property in the NameNode’s configuration file. What results?

  • A. No new nodes can be added to the cluster until you specify them in the dfs.hosts file
  • B. Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster
  • C. Any machine running the DataNode daemon can immediately join the cluster
  • D. The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes

Answer: C

NEW QUESTION 4
Which YARN daemon or service monitors a Container's per-application resource usage (e.g., memory, CPU)?

  • A. NodeManager
  • B. ApplicationMaster
  • C. ApplicationManagerService
  • D. ResourceManager

Answer: A

Explanation: Reference: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.0.2/bk_using-apache-hadoop/content/ch_using-apache-hadoop-4.html (4th para)

NEW QUESTION 5
On a cluster running MapReduce v2 (MRv2) on YARN, a MapReduce job is given a directory of 10 plain text files as its input directory. Each file is made up of 3 HDFS blocks. How many Mappers will run?

  • A. We cannot say; the number of Mappers is determined by the ResourceManager
  • B. We cannot say; the number of Mappers is determined by the ApplicationManager
  • C. We cannot say; the number of Mappers is determined by the developer
  • D. 30
  • E. 3
  • F. 10

Answer: D

NEW QUESTION 6
Which is the default scheduler in YARN?

  • A. Fair Scheduler
  • B. FIFO Scheduler
  • C. Capacity Scheduler
  • D. YARN doesn’t configure a default scheduler
  • E. You must first assign an appropriate scheduler class in yarn-site.xml

Answer: C
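The active scheduler is controlled by the yarn.resourcemanager.scheduler.class property in yarn-site.xml. As an illustrative sketch (the class name below is the standard Apache Hadoop one), a cluster can pin the Capacity Scheduler explicitly rather than relying on whatever default its distribution ships:

```xml
<!-- yarn-site.xml: pin the scheduler class explicitly -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```

Note that distributions may override the stock default (CDH, for example, configures the Fair Scheduler out of the box), so setting the class explicitly removes any ambiguity.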

Explanation: Reference: http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html

NEW QUESTION 7
You are migrating a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) on YARN. You want to maintain your MRv1 TaskTracker slot capacities when you migrate. What should you do?

  • A. Configure yarn.applicationmaster.resource.memory-mb and yarn.applicationmaster.cpu-vcores so that ApplicationMaster container allocations match the capacity you require.
  • B. You don’t need to configure or balance these properties in YARN as YARN dynamically balances resource management capabilities on your cluster
  • C. Configure yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores to match the capacity you require under YARN for each NodeManager
  • D. Configure mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum in yarn-site.xml to match your cluster’s configured capacity set by yarn.scheduler.minimum-allocation

Answer: C
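As a sketch, the two properties named in option C live in yarn-site.xml on each NodeManager. The values below are placeholders only; you would size them against the memory and cores your old MRv1 slots consumed per node:

```xml
<!-- yarn-site.xml on each NodeManager: example values, not recommendations -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>  <!-- total MB this node offers to containers -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>     <!-- total virtual cores this node offers -->
</property>
```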

NEW QUESTION 8
You observe that the number of spilled records from Map tasks far exceeds the number of map output records. Your child heap size is 1GB and your io.sort.mb value is set to 100 MB. How would you tune your io.sort.mb value to achieve maximum memory to disk I/O ratio?

  • A. Decrease the io.sort.mb value to 0
  • B. Increase the io.sort.mb to 1GB
  • C. For 1GB child heap size an io.sort.mb of 128 MB will always maximize memory to disk I/O
  • D. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records

Answer: D
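io.sort.mb (renamed mapreduce.task.io.sort.mb in MRv2) sizes the map-side sort buffer, and it must fit inside the child task heap. A hedged example of raising it in mapred-site.xml; the value shown is illustrative, and you would keep tuning it while watching the spilled-records counter:

```xml
<!-- mapred-site.xml: example only; must remain well below the 1 GB child heap -->
<property>
  <name>io.sort.mb</name>
  <value>256</value>
</property>
```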

NEW QUESTION 9
Your cluster implements HDFS High Availability (HA). Your two NameNodes are named nn01 and nn02. What occurs when you execute the command: hdfs haadmin -failover nn01 nn02

  • A. nn02 becomes the standby NameNode and nn01 becomes the active NameNode
  • B. nn02 is fenced, and nn01 becomes the active NameNode
  • C. nn01 becomes the standby NameNode and nn02 becomes the active NameNode
  • D. nn01 is fenced, and nn02 becomes the active NameNode

Answer: C

Explanation: failover – initiate a failover between two NameNodes
This subcommand causes a failover from the first provided NameNode to the second. If the first NameNode is in the Standby state, this command simply transitions the second to the Active state without error. If the first NameNode is in the Active state, an attempt will be made to gracefully transition it to the Standby state. If this fails, the fencing methods (as configured by dfs.ha.fencing.methods) will be attempted in order until one of the methods succeeds. Only after this process will the second NameNode be transitioned to the Active state. If no fencing method succeeds, the second NameNode will not be transitioned to the Active state, and an error will be returned.
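The fencing methods referenced above are listed in dfs.ha.fencing.methods in hdfs-site.xml, one per line, and are attempted in order. A common sketch (assuming passwordless SSH is configured for the sshfence method):

```xml
<!-- hdfs-site.xml: fencing methods, tried top to bottom until one succeeds -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence
shell(/bin/true)</value>
</property>
```

Using shell(/bin/true) as the last entry makes fencing always "succeed" as a fallback; it is a common pattern but effectively skips real fencing, so it should be used with care.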

NEW QUESTION 10
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which daemons need to be installed on your cluster’s master nodes? (Choose Two)

  • A. ResourceManager
  • B. DataNode
  • C. NameNode
  • D. JobTracker
  • E. TaskTracker
  • F. HMaster

Answer: AC

NEW QUESTION 11
Which YARN process runs as “container 0” of a submitted job and is responsible for resource requests?

  • A. ResourceManager
  • B. NodeManager
  • C. JobHistoryServer
  • D. ApplicationMaster
  • E. JobTracker
  • F. ApplicationManager

Answer: D

NEW QUESTION 12
Your cluster has the following characteristics:
  • A rack-aware topology is configured and on
  • Replication is set to 3
  • Cluster block size is set to 64 MB
Which describes the file read process when a client application connects to the cluster and requests a 50 MB file?

  • A. The client queries the NameNode, which retrieves the block from the nearest DataNode to the client and then passes that block back to the client.
  • B. The client queries the NameNode for the locations of the block, and reads from a random location in the list it retrieves to balance network I/O load across the nodes it retrieves data from at any given time.
  • C. The client queries the NameNode for the locations of the block, and reads all three copies. The first copy to complete transfer to the client is the one the client reads as part of Hadoop’s speculative execution framework.
  • D. The client queries the NameNode for the locations of the block, and reads from the first location in the list it receives.

Answer: D

NEW QUESTION 13
You use the hadoop fs -put command to add a file “sales.txt” to HDFS. This file is small enough to fit into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of this file in this situation?

  • A. The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file’s replication factor doesn’t fall below two)
  • B. This file will be immediately re-replicated and all other HDFS operations on the cluster will halt until the cluster’s replication values are restored
  • C. The file will remain under-replicated until the administrator brings that node back online
  • D. The file will be re-replicated automatically after the NameNode determines it is under replicated based on the block reports it receives from the DataNodes

Answer: D

NEW QUESTION 14
You are working on a project where you need to chain together MapReduce and Pig jobs. You also need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to perform these actions?

  • A. Oozie
  • B. Zookeeper
  • C. HBase
  • D. Sqoop
  • E. HUE

Answer: A

NEW QUESTION 15
You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node.
What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

  • A. Nothing; the worker node will automatically join the cluster when the DataNode daemon is started.
  • B. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin –refreshHadoop on the NameNode
  • C. Create a dfs.hosts file on the NameNode, add the worker node’s name to it, then issue the command hadoop dfsadmin -refreshNodes on the NameNode
  • D. Restart the NameNode

Answer: A
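For contrast with the automatic-join behavior in this question, an explicit include list would be configured as sketched below (the file path is illustrative only), and it takes effect after running hadoop dfsadmin -refreshNodes on the NameNode. With no such property set, any DataNode pointing at the NameNode joins on its own:

```xml
<!-- hdfs-site.xml on the NameNode: only needed for an explicit include list -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts</value>  <!-- hypothetical path -->
</property>
```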

NEW QUESTION 16
A slave node in your cluster has four 2TB hard drives installed (4 x 2TB). The DataNode is configured to store HDFS blocks on the disks. You set the value of the dfs.datanode.du.reserved parameter to 100GB. How does this alter HDFS block storage?

  • A. A maximum of 100 GB on each hard drive may be used to store HDFS blocks
  • B. All hard drives may be used to store HDFS blocks as long as at least 100 GB in total is available on the node
  • C. 100 GB on each hard drive may not be used to store HDFS blocks
  • D. 25 GB on each hard drive may not be used to store HDFS blocks

Answer: C
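dfs.datanode.du.reserved takes a byte count and applies per volume, which is why the reservation repeats on each of the four disks rather than once for the node. A sketch of the hdfs-site.xml entry (100 GB = 107374182400 bytes):

```xml
<!-- hdfs-site.xml on the DataNode: reserve 100 GB on each data volume
     for non-HDFS use -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>107374182400</value>
</property>
```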

NEW QUESTION 17
You are upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block of 128MB for all new files written to the cluster after the upgrade. What should you do?

  • A. Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final.
  • B. Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final.
  • C. Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.
  • D. Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.
  • E. You cannot enforce this, since client code can always override this value.

Answer: C
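The enforcement hinges on marking the property final, which prevents client-side job configurations from overriding it. A sketch of the hdfs-site.xml entry on worker nodes and client machines (134217728 bytes = 128 x 1024 x 1024 = 128 MB):

```xml
<!-- hdfs-site.xml on worker nodes and client machines -->
<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
  <final>true</final>  <!-- blocks per-job overrides of this value -->
</property>
```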

Thanks for reading the newest CCA-505 exam dumps! We recommend you to try the PREMIUM Certleader CCA-505 dumps in VCE and PDF here: https://www.certleader.com/CCA-505-dumps.html (45 Q&As Dumps)