Hi. You'll need to run start-mapred.sh if you want the ...READ MORE
Check your /etc/hosts file; the format should be like ...READ MORE
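An /etc/hosts entry of the kind referred to above pairs each node's IP address with its hostname. The addresses and hostnames below are placeholders, not values from the original answer:

```
# /etc/hosts -- one line per node: IP address, then hostname (placeholder values)
127.0.0.1      localhost
192.168.1.10   hadoop-master
192.168.1.11   hadoop-slave1
```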
Add this in your hdfs-site.xml <property> ...READ MORE
Try adding <property> <name>dfs.name.dir</name> <value>/path/to/hdfs/dir</value> ...READ MORE
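Laid out as a full property block, the hdfs-site.xml fragment quoted above would look like this (the path is the placeholder from the answer itself):

```xml
<!-- hdfs-site.xml: directory where the NameNode stores its metadata (placeholder path) -->
<property>
  <name>dfs.name.dir</name>
  <value>/path/to/hdfs/dir</value>
</property>
```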
You have to write this directory in ...READ MORE
The error which you are getting, i.e. ...READ MORE
Refer to the below screenshot for your above ...READ MORE
In your error, it says that the ...READ MORE
Follow the steps below to execute the ...READ MORE
There's a typo in your command. Correct command: custdata ...READ MORE
Apache Hive is mainly used for batch processing, i.e. ...READ MORE
You can change/replace the LOG_DIR variable to ...READ MORE
Follow these steps to set the map ...READ MORE
The command you are using is wrong. ...READ MORE
Step 1: Create includes file in /home/hadoop ...READ MORE
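An includes file like the one in Step 1 is wired in through the dfs.hosts property in hdfs-site.xml; a sketch, assuming the file created in Step 1 is /home/hadoop/includes:

```xml
<!-- hdfs-site.xml: point the NameNode at the includes file (assumed path) -->
<property>
  <name>dfs.hosts</name>
  <value>/home/hadoop/includes</value>
</property>
```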
create table hive_dml (emp_id int, first_name string, last_name ...READ MORE
Hey. It's definitely not a stupid question. ...READ MORE
Follow these steps: First start hadoop daemons: cd $HADOOP_HOME/sbin ./start-all.sh Now ...READ MORE
Yes, you can create a manual partition. Here's ...READ MORE
Try to restart the mysqld server and then login: sudo ...READ MORE
It seems that your hive server is ...READ MORE
It depends on the structure of your ...READ MORE
To rectify these errors, you need to ...READ MORE
This is happening because the file name ...READ MORE
You can use the job object to access the ...READ MORE
Try this: First, click on file import appliance. Now ...READ MORE
Running Hive client tools with embedded servers ...READ MORE
Hive CLI connects to a remote HiveServer1 ...READ MORE
You are right. As Hadoop follows WORM ...READ MORE
Sqoop is used to transfer any data ...READ MORE
In the query mentioned in your question, ...READ MORE
Never mind. I forgot to run hadoop namenode ...READ MORE
Yes. It is not necessary to set ...READ MORE
You can use the SPARK_MAJOR_VERSION for this. Suppose ...READ MORE
Seems like you have installed Spark2 but ...READ MORE
Input Processing: Hive's execution engine (referred to as ...READ MORE
Please find below the command to uninstall ...READ MORE
When the application master fails, each file ...READ MORE
Seems like the content in core-site.xml file is ...READ MORE
Open spark-shell. scala> import org.apache.spark.sql.hive._ scala> val hc = ...READ MORE
First check if all daemons are running: sudo ...READ MORE
Check the ip address mentioned in core-site.xml ...READ MORE
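The IP address to check in core-site.xml is the one in the fs.defaultFS (older configs: fs.default.name) value; a sketch with a placeholder IP and port:

```xml
<!-- core-site.xml: NameNode address clients connect to (placeholder IP/port) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.1.10:9000</value>
</property>
```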
We would like to say that the ...READ MORE
The issue which you are facing is ...READ MORE
Unfortunately, the command that you are giving ...READ MORE
You can use these commands. For namenode: ./hadoop-daemon.sh start ...READ MORE
mapper.py #!/usr/bin/python import sys #Word Count Example # input comes from ...READ MORE
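A minimal Hadoop Streaming word-count mapper of the shape sketched above, factored into a function (the function name is my own) so it can be exercised without a cluster:

```python
#!/usr/bin/python
# Word Count Example: Hadoop Streaming mapper.
# Emits one tab-separated "word<TAB>1" pair per word on stdout.
import sys

def map_words(lines):
    """Yield 'word\t1' for every whitespace-separated word in the input lines."""
    for line in lines:
        for word in line.strip().split():
            yield "%s\t1" % word

if __name__ == "__main__":
    # input comes from standard input, as in the original mapper.py
    for pair in map_words(sys.stdin):
        print(pair)
```

In Streaming, the framework sorts these pairs by key before the reducer sees them, so the reducer only has to sum consecutive counts for the same word.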
In Hdfs, data and metadata are decoupled. ...READ MORE
Shared FS basically refers to the high ...READ MORE
Yes, Hadoop 1.0 didn't have standby namenode. ...READ MORE