In the STORED AS parameter, you have specified the ...READ MORE
The reason for this error is that ...READ MORE
To upload a file from your local ...READ MORE
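For anyone who prefers doing this from code rather than the shell, here is a minimal Java sketch using the HDFS FileSystem API; the fs.defaultFS value and both paths are placeholders you would replace for your own cluster:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; change it to match your cluster.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        FileSystem fs = FileSystem.get(conf);
        // Copies the local file into HDFS (both paths are placeholders).
        fs.copyFromLocalFile(new Path("/tmp/sample.txt"), new Path("/user/hadoop/sample.txt"));
        fs.close();
    }
}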
You can use either of the below ...READ MORE
To get the full query running for the ...READ MORE
Below is what happens with the map-reduce ...READ MORE
With Pig, we are not taking data ...READ MORE
PigStorage is not used only ...READ MORE
Suppose we have a data set as ...READ MORE
There are 3 Ways to Load Data ...READ MORE
If your dataset is in the FTP ...READ MORE
So, we will execute the below command, new_A_2 ...READ MORE
Suppose you need to load this in ...READ MORE
Yes, InputFormatClass and OutputFormatClass are independent of ...READ MORE
The main reason job.waitForCompletion exists is ...READ MORE
Job job = new Job(conf,"job_name") is just used ...READ MORE
job.setOutputValueClass will set the types expected as ...READ MORE
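Tying the three answers above together, here is a minimal driver sketch. It uses the identity Mapper and Reducer so it compiles on its own, and Job.getInstance(conf, "job_name") replaces the deprecated new Job(conf, "job_name") constructor; in a real job you would plug in your own Mapper and Reducer subclasses and adjust the output key/value classes accordingly:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "job_name" only labels the job in the ResourceManager UI and job history.
        Job job = Job.getInstance(conf, "job_name");
        job.setJarByClass(JobDriver.class);

        // Identity Mapper/Reducer keep this sketch self-contained;
        // replace them with your own implementations.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);

        // Declares the key/value types the job will emit as output.
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // Submits the job and blocks until it finishes, printing progress;
        // the return value tells you whether it succeeded.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}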
For HBase Pig integration, you can refer ...READ MORE
In order to merge two or more ...READ MORE
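As a programmatic sketch of merging files under one HDFS directory into a single output file (assuming Hadoop 2.x, where FileUtil.copyMerge still exists; it was removed in Hadoop 3, where hdfs dfs -getmerge or a manual copy loop is used instead). Both paths are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergeHdfsFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Concatenates every file under the source directory into one target file.
        // The boolean controls whether the source files are deleted afterwards,
        // and the last argument is a string appended after each file (empty here).
        FileUtil.copyMerge(fs, new Path("/user/hadoop/parts"),
                           fs, new Path("/user/hadoop/merged.txt"),
                           false, conf, "");

        fs.close();
    }
}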
Suppose I have the below parquet file ...READ MORE
You can convert the pdf files with ...READ MORE
There are multiple ways to import unstructured ...READ MORE
Please try the below command: sqoop job --create ...READ MORE
There are different ways to do this. ...READ MORE
We have to use Sqoop-HCatalog Integration here. ...READ MORE
If you want to limit your hadoop ...READ MORE
I guess it should be at /hadoop/hadoop-common-project/hadoop-common/sr ...READ MORE
Yes, it is possible to do so. ...READ MORE
HDFS does not allocate capacity separately based ...READ MORE
You can use hdfs fsck / to ...READ MORE
To resolve this issue, follow the steps ...READ MORE
There are three complex types in Hive. arrays: ...READ MORE
When there is space in data nodes ...READ MORE
You are trying to execute the sqoop ...READ MORE
With dev tools you can install directly ...READ MORE
You can manually create a file hadoop-env.sh ...READ MORE
I shall redirect you to a link ...READ MORE
HDFS is a distributed file system designed for ...READ MORE
Hi, Spark uses GraphX for graph processing to ...READ MORE
If you don't want to turn off ...READ MORE
The elasticsearch Hadoop library is not a ...READ MORE
Hi, You can check this out: 1. Delete all ...READ MORE
Hey, Hive contains significant support for Apache Spark, ...READ MORE
If you are new to this installation ...READ MORE
I found some comments from the Hadoop ...READ MORE
Download packages rhdfs, rhbase, rmr2 and plyrmr ...READ MORE
I had the same issue. It's solved ...READ MORE
Hi, You can use the command below to ...READ MORE
Hi, You can load data from flat files ...READ MORE