You can use the following commands in ...READ MORE
Name nodes: hdfs getconf -namenodes Secondary name nodes: hdfs getconf ...READ MORE
Hive has a relational database on the ...READ MORE
Hello. "The system never lies :-P". The service ...READ MORE
Sqoop supports SSL/TLS data transfer with the ...READ MORE
Try this, first stop all the daemons, ...READ MORE
There are a few options for backup ...READ MORE
Hey @supriya. Seems like you have not set ...READ MORE
Try adding this Job job = new Job(conf, ...READ MORE
When you have Hadoop Eclipse plugin installed ...READ MORE
Seems like hadoop path is missing in java.library.path. ...READ MORE
In the command, try mentioning the driver ...READ MORE
Follow the below steps to execute the ...READ MORE
If your block size is 64 MB, ...READ MORE
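The block-size arithmetic behind that answer can be sketched in plain Python (assuming the usual HDFS rule that a file is split into fixed-size blocks and the last block may be smaller):

```python
import math

def hdfs_block_count(file_size_mb, block_size_mb=64):
    """Number of HDFS blocks needed to store a file.

    The last block may be smaller than block_size_mb; HDFS does
    not pad it to the full block size.
    """
    return math.ceil(file_size_mb / block_size_mb)

# A 200 MB file with 64 MB blocks needs 4 blocks:
# three full 64 MB blocks plus one 8 MB block.
print(hdfs_block_count(200))
```

Note the edge case: a file exactly one block long (64 MB here) occupies a single block, while 65 MB already needs two.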
Follow these steps: Stop namenode Delete the datanode directory ...READ MORE
Try this: sudo service hadoop-master restart After that try ...READ MORE
First start the mysql server: service mysqld start To ...READ MORE
Suppose we want to write a 1 ...READ MORE
hdfs dfs -put input_file_name output_location READ MORE
You can use the get_json_object function to parse the ...READ MORE
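get_json_object is a Hive UDF, but the same JSON-path style extraction can be sketched outside Hive in plain Python; this minimal analogue (a simplification: it handles only `$.a.b` dotted paths, no array indexing) shows the idea:

```python
import json

def get_json_object(json_string, path):
    """Minimal analogue of Hive's get_json_object for simple
    '$.a.b' dotted paths (no array indexing)."""
    obj = json.loads(json_string)
    for key in path.lstrip("$.").split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None  # Hive's UDF also returns NULL on a miss
        obj = obj[key]
    return obj

row = '{"user": {"name": "alice", "age": 30}}'
print(get_json_object(row, "$.user.name"))
```

In Hive itself the equivalent call would be `SELECT get_json_object(col, '$.user.name') FROM t`.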
Is Python installed and running on the slaves that ...READ MORE
When the application master fails, each file ...READ MORE
Sqoop stores metadata in a repository and ...READ MORE
ACID stands for Atomicity, Consistency, Isolation, and Durability. Until ...READ MORE
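The atomicity part of ACID can be illustrated outside Hive with a plain SQLite transaction in Python (this is a generic illustration, not Hive's implementation): either the whole unit of work commits, or a failure rolls all of it back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('a', 100)")
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'a'")
        raise ValueError("simulated failure before the matching credit")
except ValueError:
    pass

# The uncommitted debit was rolled back, so the balance is unchanged.
print(conn.execute("SELECT balance FROM accounts WHERE name = 'a'").fetchone()[0])
```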
Multiple files are not stored in a ...READ MORE
Yes, both the files, i.e. SUCCESS and ...READ MORE
A MapReduce job usually splits the input data-set into ...READ MORE
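The split-map-reduce flow described there can be sketched in plain Python with no Hadoop involved: each input split is mapped independently (in Hadoop these map tasks run in parallel), and the partial results are then merged by a reduce step.

```python
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    """Map phase: emit word counts for one input split."""
    return Counter(chunk.split())

def reduce_counts(a, b):
    """Reduce phase: merge two partial count dictionaries."""
    return a + b

splits = ["hadoop stores data", "hadoop processes data in parallel"]
partials = [map_chunk(s) for s in splits]   # map tasks (parallel in Hadoop)
totals = reduce(reduce_counts, partials)    # reduce task
print(totals["hadoop"], totals["data"])
```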
/user/cloudera/data1 is not a directory, it is ...READ MORE
Step 1: Create includes file in /home/hadoop ...READ MORE
You can do that by selecting the ...READ MORE
Pig can be used in two modes: 1) ...READ MORE
Sqoop is used to transfer any data ...READ MORE
You have to add the HADOOP_CLASSPATH environment variable: export ...READ MORE
Try to restart the mysqld server and then login: sudo ...READ MORE
The command you are using is wrong. ...READ MORE
Follow these steps: First start hadoop daemons: cd $HADOOP_HOME/sbin ./start-all.sh Now ...READ MORE
You need to sort RDD and take ...READ MORE
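The sort-then-take pattern from that answer (in Spark, something like `rdd.sortBy(...).take(n)`) has a plain-Python equivalent, sketched here under the assumption that the goal is the N largest values:

```python
import heapq

data = [7, 3, 9, 1, 12, 5]

# Equivalent of sorting descending and taking the first 3:
top3_sorted = sorted(data, reverse=True)[:3]

# heapq.nlargest gets the same result without fully sorting,
# which is what Spark's rdd.top(3) does internally.
top3_heap = heapq.nlargest(3, data)

print(top3_sorted)
print(top3_heap)
```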
Follow these steps: Step 1: Import all these hadoop ...READ MORE
Input Processing: Hive's execution engine (referred to as ...READ MORE
The main difference between HDFS High Availability ...READ MORE
This is happening because the file name ...READ MORE
To find this file, your HADOOP_CONF_DIR env ...READ MORE
mapper.py:
#!/usr/bin/python
import sys
# Word Count Example
# input comes from ...READ MORE
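The snippet above is cut off, but it follows the standard Hadoop Streaming word-count mapper pattern; a complete, runnable version along those lines (the continuation after the cut is an assumption, not the original answer) might look like:

```python
#!/usr/bin/python
# Word Count Example: Hadoop Streaming mapper.
# Input comes from standard input, one line of text at a time;
# output is tab-separated "word<TAB>1" pairs on standard output,
# which Hadoop shuffles to the reducer grouped by word.
import sys

def run_mapper(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:
        for word in line.strip().split():
            stdout.write("%s\t1\n" % word)

# When used as the actual streaming mapper, call it at module level:
# run_mapper()
```

A matching reducer would then sum the 1s for each word as they arrive in sorted order.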
Yes. It is not necessary to set ...READ MORE
When you copy a file from the ...READ MORE
Try this: val new_records = sc.newAPIHadoopRDD(hadoopConf,classOf[ ...READ MORE
Yes, one can build “Spark” for a specific ...READ MORE
hadoop jar hadoop-multiple-streaming.jar \ ...READ MORE
First check if all daemons are running: sudo ...READ MORE
The mapreduce task happens in the following ...READ MORE