Dumpsbase has collected all the related CCA-500 exam questions, which are the best and latest on the market. Read and study all of the Dumpsbase Cloudera CCAH CCA-500 exam dumps, and you can pass the test on your first attempt.
1. How many Q&As are in the Dumpsbase CCA-500 dumps?
There are 60 Q&As in the Dumpsbase CCAH CCA-500 dumps, covering all exam topics of CCA-500 Cloudera Certified Administrator for Apache Hadoop (CCAH).
2. Can I try a free CCA-500 demo before deciding to purchase?
Yes, Dumpsbase provides a free CCA-500 demo for you to check the quality of the Cloudera Certified Administrator for Apache Hadoop (CCAH) CCA-500 dumps.
3. What formats will I get after purchasing the CCA-500 dumps?
Dumpsbase provides both PDF and software versions of the CCAH CCA-500 dumps.
The PDF version is a file that you can print out to read and study all the CCA-500 dumps questions anywhere, and you can also study it on a mobile phone. It is very convenient.
The software is a simulation version, so you can practice the CCA-500 questions in a real exam environment.
4. How soon will I get the CCAH CCA-500 dumps after completing the payment?
After you purchase the Dumpsbase Cloudera CCA-500 dumps, you will receive the Cloudera Certified Administrator for Apache Hadoop (CCAH) CCA-500 exam dumps within 10 minutes during our working hours, and within 12 hours outside working hours.
5. If I fail the CCA-500 exam with Dumpsbase dumps, will I get a full refund of the payment fee?
Yes. If you fail CCAH CCA-500 using the Dumpsbase dumps questions, you only need to scan and send the score report to us via [email protected] After we check and confirm it, we will refund the full payment fee to you within one working day.
6. Can I get updates after I purchase the CCA-500 dumps?
Yes, Dumpsbase provides free updates for the CCA-500 exam dumps for one year from the date of purchase. If your product is more than one year old, you need to re-purchase the CCA-500 dumps questions. Contact us by online live support or email, and we will send you a 50% coupon code.
Question No : 1
You observe that the number of spilled records from map tasks far exceeds the number of map output records. Your child heap size is 1 GB and your io.sort.mb value is set to 1000 MB. How would you tune your io.sort.mb value to achieve a maximum memory-to-disk I/O ratio?
A. For a 1 GB child heap size, an io.sort.mb of 128 MB will always maximize the memory-to-disk I/O ratio
B. Increase the io.sort.mb to 1GB
C. Decrease the io.sort.mb value to 0
D. Tune the io.sort.mb value until you observe that the number of spilled records equals (or is as close as possible to) the number of map output records.
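The tuning described in option D is done in mapred-site.xml. As a sketch (io.sort.mb is the classic property name, later deprecated in favor of mapreduce.task.io.sort.mb; the value shown is illustrative, not prescriptive):

```xml
<!-- mapred-site.xml: the in-memory sort buffer for map output.
     It must fit inside the child JVM heap (1 GB here) with headroom
     left for the task itself; tune it until the spilled-record count
     is as close as possible to the map-output-record count. -->
<property>
  <name>io.sort.mb</name>
  <value>512</value> <!-- illustrative value; adjust per workload -->
</property>
```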
Question No : 2
Table schemas in Hive are:
A. Stored as metadata on the NameNode
B. Stored along with the data in HDFS
C. Stored in the Metastore
D. Stored in ZooKeeper
Question No : 3
Assuming you're not running HDFS Federation, what is the maximum number of NameNode daemons you should run on your cluster in order to avoid a "split-brain" scenario with your NameNode when running HDFS High Availability (HA) using Quorum-based storage?
A. Two active NameNodes and two Standby NameNodes
B. One active NameNode and one Standby NameNode
C. Two active NameNodes and one Standby NameNode
D. Unlimited. HDFS High Availability (HA) is designed to overcome limitations on the number of NameNodes you can deploy
Question No : 4
Each node in your Hadoop cluster, running YARN, has 64 GB of memory and 24 cores. Your yarn-site.xml has the following configuration:
You want YARN to launch no more than 16 containers per node. What should you do?
A. Modify yarn-site.xml with the following property:
B. Modify yarn-site.xml with the following property:
C. Modify yarn-site.xml with the following property:
D. No action is needed: YARN's dynamic resource allocation automatically optimizes the node memory and cores
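The property values referenced by this question are not reproduced above, but the arithmetic behind the container limit is: containers per node is bounded by the NodeManager's memory divided by the minimum container allocation. An illustrative yarn-site.xml fragment (the values are examples, not the question's hidden ones):

```xml
<!-- yarn-site.xml: with 64 GB per node, a 4 GB minimum allocation
     caps the node at 65536 / 4096 = 16 containers. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>65536</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>4096</value>
</property>
```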
Question No : 5
For each YARN job, the Hadoop framework generates task log files. Where are Hadoop task log files stored?
A. Cached by the NodeManager managing the job containers, then written to a log directory on the NameNode
B. Cached in the YARN container running the task, then copied into HDFS on job completion
C. In HDFS, in the directory of the user who generates the job
D. On the local disk of the slave node running the task
Question No : 6
You are configuring your cluster to run HDFS and MapReduce v2 (MRv2) on YARN. Which two daemons need to be installed on your cluster's master nodes?
Question No : 7
You want a node to swap Hadoop daemon data from RAM to disk only when absolutely necessary. What should you do?
A. Delete the /dev/vmswap file on the node
B. Delete the /etc/swap file on the node
C. Set the ram.swap parameter to 0 in core-site.xml
D. Set vm.swappiness to 0 on the node
E. Delete the /swapfile file on the node
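For context: the standard way to make a Linux node swap only as a last resort is the kernel's vm.swappiness setting, which is an OS-level sysctl rather than a Hadoop configuration property. A sketch:

```
# /etc/sysctl.conf — kernel swap aggressiveness; 0 tells the kernel
# to swap only when absolutely necessary. Apply with `sysctl -p`.
vm.swappiness = 0
```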
Question No : 8
You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit Job A, so that only Job A is running on the cluster. A while later, you submit Job B. Now Job A and Job B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs?
A. When Job B gets submitted, it will get assigned tasks, while Job A continues to run with fewer tasks.
B. When Job B gets submitted, Job A has to finish first before Job B can get scheduled.
C. When Job A gets submitted, it doesn't consume all the task slots.
D. When Job A gets submitted, it consumes all the task slots.
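The sharing behavior the question probes comes from the Fair Scheduler's allocation file, where each queue receives a weighted fair share and a running job shrinks to make room for a newly submitted one. A minimal sketch of such a file (queue name and weight are illustrative):

```xml
<!-- fair-scheduler.xml: jobs in a fair-policy queue each receive a
     share of the queue's resources; an already-running job yields
     tasks to a newly submitted job rather than blocking it. -->
<allocations>
  <queue name="default">
    <weight>1.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
  </queue>
</allocations>
```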
Question No : 9
You are running a Hadoop cluster with a NameNode on host mynamenode, a Secondary NameNode on host mysecondarynamenode, and several DataNodes.
Which best describes how you determine when the last checkpoint happened?
A. Execute hdfs namenode -report on the command line and look at the Last Checkpoint information
B. Execute hdfs dfsadmin -saveNamespace on the command line, which returns the last checkpoint value in the fstime file
C. Connect to the web UI of the Secondary NameNode (http://mysecondary:50090/) and look at the "Last Checkpoint" information
D. Connect to the web UI of the NameNode (http://mynamenode:50070) and look at the "Last Checkpoint" information
Question No : 10
You want to understand more about how users browse your public website. For example, you want to know which pages they visit prior to placing an order. You have a server farm of 200 web servers hosting your website. Which is the most efficient process to gather these web server logs into your Hadoop cluster for analysis?
A. Sample the web server logs from the web servers and copy them into HDFS using curl
B. Ingest the server web logs into HDFS using Flume
C. Channel these clickstreams into Hadoop using Hadoop Streaming
D. Import all user clicks from your OLTP databases into Hadoop using Sqoop
E. Write a MapReduce job with the web servers for mappers and the Hadoop cluster nodes for reducers
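For context on option B, a Flume agent that ingests web server logs into HDFS is described in a properties file. A minimal sketch (the agent name, log path, and HDFS path are hypothetical; the property keys are standard Flume ones):

```
# Flume agent "a1": tail a web server access log and land events in HDFS
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/httpd/access_log
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/weblogs/%Y-%m-%d
a1.sinks.k1.channel = c1
```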
Question No : 11
Your cluster's mapred-site.xml includes the following parameters:
And your cluster's yarn-site.xml includes the following parameters:
What is the maximum amount of virtual memory allocated for each map task before YARN will kill its Container?
A. 4 GB
B. 17.2 GB
D. 8.2 GB
E. 24.6 GB
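The parameter values the question refers to are not reproduced above, but the quantity being asked about is computed as mapreduce.map.memory.mb multiplied by yarn.nodemanager.vmem-pmem-ratio. An illustrative fragment (values are examples only):

```xml
<!-- mapred-site.xml: physical memory requested per map task -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value> <!-- illustrative -->
</property>

<!-- yarn-site.xml: virtual-to-physical memory ratio. YARN kills the
     container if its virtual memory exceeds
     memory.mb * vmem-pmem-ratio, e.g. 4096 MB * 2.1 = 8601.6 MB,
     roughly 8.4 GB. -->
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
```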
Question No : 12
What does CDH packaging do on install to facilitate Kerberos security setup?
A. Automatically configures permissions for log files at $MAPRED_LOG_DIR/userlogs
B. Creates users for hdfs and mapreduce to facilitate role assignment
C. Creates directories for temp, hdfs, and mapreduce with the correct permissions
D. Creates a set of pre-configured Kerberos keytab files and their permissions
E. Creates and configures your KDC with default cluster values