questions/big-data-hadoop
Below is an example query which you ...READ MORE
Both codes contain different APIs of Map ...READ MORE
Hi, We can use a normal insert query ...READ MORE
It's because that is the syntax. This ...READ MORE
A value with a wrong datatype causes ...READ MORE
You can use the SUBSTR() in hive ...READ MORE
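For reference, Hive's SUBSTR is 1-based rather than 0-based. A minimal Python sketch of those semantics (the helper name and sample strings are illustrative, not Hive's implementation):

```python
def hive_substr(s, start, length=None):
    """Mimic Hive's SUBSTR(str, start[, length]) semantics (1-based start)."""
    if start > 0:
        begin = start - 1          # Hive counts from 1, Python from 0
    elif start < 0:
        begin = len(s) + start     # negative start counts from the end
    else:
        begin = 0                  # Hive treats start = 0 like start = 1
    end = len(s) if length is None else begin + length
    return s[begin:end]

# SUBSTR('hadoop', 2, 3) in Hive returns 'ado'
print(hive_substr("hadoop", 2, 3))   # ado
print(hive_substr("hadoop", -3))     # oop
```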
How to exclude tables in sqoop if ...READ MORE
For avro you can follow the format ...READ MORE
You can use this: import org.apache.spark.sql.functions.struct val df = ...READ MORE
After creating the tables a1 and b1 ...READ MORE
You have to do some configurations as ...READ MORE
Each file schema = 150 bytes, block schema ...READ MORE
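The arithmetic behind that estimate can be sketched in Python, assuming the common rule of thumb that each file object and each block object costs roughly 150 bytes of NameNode heap (the constant and the helper are illustrative):

```python
OBJECT_SIZE = 150  # rough bytes of NameNode heap per file or block object

def namenode_memory(num_files, blocks_per_file):
    """Estimate NameNode heap used by file and block metadata, in bytes."""
    file_bytes = num_files * OBJECT_SIZE
    block_bytes = num_files * blocks_per_file * OBJECT_SIZE
    return file_bytes + block_bytes

# 1 million files, 1 block each -> roughly 300 MB of metadata
total = namenode_memory(1_000_000, 1)
print(total)                   # 300000000 bytes
print(round(total / 1024**2))  # ~286 MiB
```

This is why many small files hurt HDFS: the metadata cost is per object, not per byte stored.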
Try this instead: select from_unixtime(unix_timestamp()); If you have an ...READ MORE
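What `from_unixtime(unix_timestamp())` does, i.e. turn epoch seconds into a `yyyy-MM-dd HH:mm:ss` string, can be sketched in plain Python (illustrative only; UTC is used here for determinism, whereas Hive formats in the session time zone):

```python
from datetime import datetime, timezone

def from_unixtime(epoch_seconds, fmt="%Y-%m-%d %H:%M:%S"):
    """Format epoch seconds the way Hive's from_unixtime() does by default."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc).strftime(fmt)

print(from_unixtime(0))              # 1970-01-01 00:00:00
print(from_unixtime(1_000_000_000))  # 2001-09-09 01:46:40
```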
FileInputFormat : Base class for all file-based InputFormats Other ...READ MORE
Here's a list of Input Formats: CombineFileInputFormat CombineS ...READ MORE
The command you are typing is incorrect. ...READ MORE
You can do it using the following ...READ MORE
It is straightforward and you can achieve ...READ MORE
It is very straightforward, no need ...READ MORE
With MR2, now we should set conf.setBoolean("mapreduce.map.output.compress", true) conf.setBoolean("mapreduce.output.fileoutputformat.compress", ...READ MORE
The best thing with Millions Songs Dataset ...READ MORE
I need to write some MapReduce pattern ...READ MORE
To get full query running for the ...READ MORE
Enter the below command in the terminal ...READ MORE
Unfortunately, this can't be achieved with open ...READ MORE
Please make sure you connect to spark2-shell ...READ MORE
First you have to have the file ...READ MORE
You need to solve the issue which ...READ MORE
For the above requirement, the memory consumption ...READ MORE
You can use the following code: A = ...READ MORE
You can use the following sample code for ...READ MORE
Hi, you can do one thing: first delete all ...READ MORE
Well, there are two kinds of partitions: 1. ...READ MORE
Yes, InputFormatClass and OutputFormatClass are independent of ...READ MORE
The main reason job.waitForCompletion exists is that ...READ MORE
Job job = new Job(conf, "job_name") is just ...READ MORE
job.setOutputValueClass will set the types expected as ...READ MORE
Hey. This error usually occurs when the ...READ MORE
Each reducer uses an OutputFormat to write ...READ MORE
1 - Spark is following a slave/master architecture. So ...READ MORE
The issue that you might be getting ...READ MORE
While Apache Hive and Spark SQL perform ...READ MORE
Outer Bag: An outer bag is nothing but ...READ MORE
For integrating Hadoop with CSV, we can use ...READ MORE
About integrating RDBMS with Hadoop, you can ...READ MORE
Yes, it is possible to do so ...READ MORE
When you are loading two different files, ...READ MORE
Please refer to the below code: import org.apache.hadoop.conf.Configuration import ...READ MORE
I will drop the answer in the ...READ MORE
HDFS is capable of accepting data in ...READ MORE