Hi. This is the code I used ...READ MORE
Try this: String cords = it.next().toString(); latitude = Double.toString((inst.decode(cords))[0]); longitude ...READ MORE
sudo service mysqld restart mysql -u <username> root ...READ MORE
For SQOOP export please try below command: bin/sqoop ...READ MORE
Here is an example of import command. ...READ MORE
You have to add the partition before ...READ MORE
Seems like a hive version problem. insert operation is ...READ MORE
There are many sites you can get ...READ MORE
In your code, you have set some ...READ MORE
It's simple. You just have to add external ...READ MORE
You can use this command: create table employee(Name ...READ MORE
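The answer above is truncated; a minimal sketch of what such a Hive CREATE TABLE statement typically looks like is shown below. The column names, types, and delimiter are assumptions, not the original poster's exact schema.

```sql
-- Hedged sketch of a Hive CREATE TABLE; columns and delimiter are assumed.
CREATE TABLE employee (
  Name STRING,
  Salary FLOAT,
  Dept STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;
```

Adjust the field delimiter and storage format to match the source data before loading.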
Different relational operators are: for each order by fil ...READ MORE
Yes, one can build “Spark” for a specific ...READ MORE
One of the most attractive features of ...READ MORE
In HA (High Availability) architecture, we have ...READ MORE
External table is created for external use ...READ MORE
Partition helps in increasing the efficiency when ...READ MORE
Partitioning: Hive has been one of the preferred ...READ MORE
Here, we have two tables: Tab1 having columns ...READ MORE
It seems like you are missing a ...READ MORE
select * from tablename where DOJ> '2018-01-01' ...READ MORE
impala-shell -i <domain_name>:<port> ...READ MORE
To see whether the extraction was done ...READ MORE
Join is a clause that combines the records ...READ MORE
The syntax for Map-side join and Reduce-side ...READ MORE
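Since the answer is cut off, here is a minimal sketch of Hive's map-side join syntax using the MAPJOIN hint; the table and key names are placeholders, not from the original answer.

```sql
-- Hedged sketch: MAPJOIN hint asks Hive to load the small table (b) into memory
-- on each mapper, avoiding the reduce phase. Table/column names are assumptions.
SELECT /*+ MAPJOIN(b) */ a.key, a.value, b.value
FROM tab1 a
JOIN tab2 b ON a.key = b.key;
```

A plain JOIN without the hint falls back to the reduce-side (common) join.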
You cannot directly use files from ...READ MORE
hadoop fs -text /hdfs-path-to-zipped-file.gz | hadoop fs ...READ MORE
Try this: hadoop fs -cat hdfs:///user/hadoop/values.txt READ MORE
Try this: val new_records = sc.newAPIHadoopRDD(hadoopConf,classOf[ ...READ MORE
Try this: Configuration configuration = new Configuration(); FileSystem fs ...READ MORE
You can do that by selecting the ...READ MORE
Usually we have Map/Reduce pair written in ...READ MORE
Tasks Write a software application, service, daemon or ...READ MORE
You have to use the right dependency <dependency> ...READ MORE
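The dependency block in this answer is truncated; a typical Maven dependency for Hadoop client code looks like the fragment below. The artifact and version are assumptions — match them to your cluster's Hadoop release.

```xml
<!-- Hedged sketch; pick the version that matches your cluster. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.7.3</version>
</dependency>
```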
You can use the hdfs command: hdfs dfs ...READ MORE
Try this: val text = sc.wholeTextFiles("student/*") text.collect() ...READ MORE
There is no JobTracker in the Hadoop 2.2.0 YARN framework. ...READ MORE
LocalFS means it may be your LinuxFS ...READ MORE
As per Cloudera, if you install hadoop ...READ MORE
You can perform an installation or upgrade ...READ MORE
Yes. A comprehensive set of APIs for ...READ MORE
Distributed Cache is an important feature provided ...READ MORE
The official location for Hadoop is the ...READ MORE
Try this: { Configuration config ...READ MORE
First make sure you have ant installed ...READ MORE
If by "using", you mean distributing it, ...READ MORE
You can use a combination of cat and put command. Something ...READ MORE
Make the following changes to the hadoop-env.sh ...READ MORE
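The answer doesn't show the edit itself; the usual change to hadoop-env.sh is pinning JAVA_HOME explicitly, sketched below. The JDK path is an assumption — point it at your installed JDK.

```shell
# hadoop-env.sh fragment: set JAVA_HOME explicitly instead of relying on
# the environment. The path below is an assumed example, not a fixed value.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```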
In the command, try mentioning the driver ...READ MORE
You don't have to specify the file name ...READ MORE