Difference between cogroup and full outer join in Spark

Please explain the difference between cogroup and full outer join in Spark.
Jul 13, 2019 in Apache Spark by Hiran

1 answer to this question.


Please go through the explanation below:

Full Outer Join

A full outer join on RDDs works the same way as a full outer join in SQL.

  • FULL JOIN returns all records from both tables, whether or not there is a matching record in the other table.
  • FULL JOIN can potentially return very large datasets.
  • FULL JOIN and FULL OUTER JOIN are the same.
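As a minimal Scala sketch (assuming an existing SparkContext named sc and hypothetical sample data), fullOuterJoin on pair RDDs keeps every key from both sides and wraps the missing side in None:

val left  = sc.parallelize(Seq(("a", 1), ("b", 2)))
val right = sc.parallelize(Seq(("b", "x"), ("c", "y")))

// fullOuterJoin keeps every key from both RDDs; a missing side becomes None
val joined = left.fullOuterJoin(right)   // RDD[(String, (Option[Int], Option[String]))]
joined.collect().foreach(println)
// e.g. (a,(Some(1),None)), (b,(Some(2),Some(x))), (c,(None,Some(y)))  -- order may vary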


Group and Co-group

The GROUP and COGROUP operators are identical, except that GROUP is used in statements involving a single relation while COGROUP is used in statements involving two or more relations. The examples below use Pig Latin to illustrate the grouping semantics.

Suppose we have a single relation, A:

A = LOAD 'student' AS (name:chararray,age:int,gpa:float);

DUMP A;

(John,18,4.0F)

(Mary,19,3.8F)

(Bill,20,3.9F)

(Joe,18,3.8F)


B = GROUP A BY age;

DUMP B;


(18,{(John,18,4.0F),(Joe,18,3.8F)})

(19,{(Mary,19,3.8F)})

(20,{(Bill,20,3.9F)})
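For comparison, here is a rough Spark RDD sketch of the same grouping by age (assuming an existing SparkContext sc; the student tuples are just the sample data above):

val students = sc.parallelize(Seq(
  ("John", 18, 4.0f), ("Mary", 19, 3.8f), ("Bill", 20, 3.9f), ("Joe", 18, 3.8f)))

// Key each record by age, then group -- this mirrors B = GROUP A BY age
val byAge = students.keyBy(_._2).groupByKey()
byAge.collect().foreach(println)
// e.g. (18,CompactBuffer((John,18,4.0), (Joe,18,3.8))), (19,CompactBuffer((Mary,19,3.8))), ...  -- order may vary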

Now let's use COGROUP.

Suppose we have two relations, A and B:

A = LOAD 'data1' AS (owner:chararray,pet:chararray);

DUMP A;


(Alice,turtle)

(Alice,goldfish)

(Alice,cat)

(Bob,dog)

(Bob,cat)


B = LOAD 'data2' AS (friend1:chararray,friend2:chararray);

DUMP B;


(Cindy,Alice)

(Mark,Alice)

(Paul,Bob)

(Paul,Jane)


X = COGROUP A BY owner, B BY friend2;

DUMP X;


(Alice,{(Alice,turtle),(Alice,goldfish),(Alice,cat)},{(Cindy,Alice),(Mark,Alice)})

(Bob,{(Bob,dog),(Bob,cat)},{(Paul,Bob)})

(Jane,{},{(Paul,Jane)})

In the above example, the first bag holds the tuples from the first relation (A) that match the group key, and the second bag holds the tuples from the second relation (B) that match it. If a relation has no tuples for a key, its bag is empty.
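Spark's RDD cogroup behaves the same way, returning one Iterable per input RDD instead of bags. A hedged sketch with the same data (assuming an existing SparkContext sc):

val pets    = sc.parallelize(Seq(("Alice", "turtle"), ("Alice", "goldfish"), ("Alice", "cat"),
                                 ("Bob", "dog"), ("Bob", "cat")))
val friends = sc.parallelize(Seq(("Cindy", "Alice"), ("Mark", "Alice"),
                                 ("Paul", "Bob"), ("Paul", "Jane")))

// Swap friends so it is keyed by friend2, then cogroup with pets keyed by owner
val grouped = pets.cogroup(friends.map(_.swap))
grouped.collect().foreach(println)
// e.g. (Alice,(CompactBuffer(turtle, goldfish, cat),CompactBuffer(Cindy, Mark)))
//      (Bob,(CompactBuffer(dog, cat),CompactBuffer(Paul)))
//      (Jane,(CompactBuffer(),CompactBuffer(Paul)))  -- order may vary

Unlike fullOuterJoin, which emits one row per matching pair of values wrapped in Options, cogroup emits one row per key with all values from each side gathered together, and an unmatched key simply gets an empty Iterable on one side.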

answered Jul 13, 2019 by Kiran
