Unless you have changed ...READ MORE
By default, the port on which the ...READ MORE
Spark thinks that it is a good ...READ MORE
When a singleton object is named the ...READ MORE
Hi! I found 2 links on github where ...READ MORE
Check if you are able to access ...READ MORE
val x = sc.parallelize(1 to 10, 2) // ...READ MORE
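The preview above creates an RDD with an explicit partition count. A minimal sketch of how that call and its partitioning can be inspected — assuming a local SparkContext (the app name and master setting here are illustrative):

```scala
// Sketch: create an RDD with 2 partitions and inspect the split.
// Assumes the spark-core dependency; in spark-shell, sc already exists.
import org.apache.spark.{SparkConf, SparkContext}

object PartitionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("partition-demo"))
    val x = sc.parallelize(1 to 10, 2)  // request 2 partitions
    println(x.getNumPartitions)         // 2
    // glom() turns each partition into an array, so we can see the split
    x.glom().collect().foreach(p => println(p.mkString(",")))
    sc.stop()
  }
}
```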
Yes, you can reorder the dataframe elements. You need ...READ MORE
You have forgotten to mention the case ...READ MORE
Preparing for an interview? We have something ...READ MORE
You have to use "===" instead of ...READ MORE
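In the Spark SQL Column API, `==` is plain Scala equality (it compares the Column object itself and yields a Boolean), while `===` builds a Column expression that Spark can evaluate per row. A hedged sketch — the DataFrame and column names are illustrative:

```scala
// Sketch: filtering with the Column "===" operator.
// Assumes the spark-sql dependency; names here are illustrative.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder()
  .master("local[*]").appName("eq-demo").getOrCreate()
import spark.implicits._

val df = Seq(("alice", 30), ("bob", 25)).toDF("name", "age")
df.filter(col("age") === 30).show()  // "===" builds a per-row predicate Column
// df.filter(col("age") == 30)       // "==" yields a Boolean, not a Column,
//                                   // so this is not a valid filter condition
spark.stop()
```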
First, import the data in Spark and ...READ MORE
Apache Spark supports the following four languages: Scala, ...READ MORE
Seems like master and worker are not ...READ MORE
You can access task information using TaskContext: import org.apache.spark.TaskContext sc.parallelize(Seq[Int](), ...READ MORE
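The preview quotes the start of a TaskContext example. A minimal sketch of how that snippet typically continues — assuming a SparkContext named `sc`, as in spark-shell:

```scala
// Sketch: reading task metadata via TaskContext inside each partition.
// Assumes a SparkContext named sc (e.g. in spark-shell).
import org.apache.spark.TaskContext

sc.parallelize(Seq[Int](), 4).foreachPartition { _ =>
  val ctx = TaskContext.get()  // context of the currently running task
  println(s"partition=${ctx.partitionId()} " +
          s"stage=${ctx.stageId()} attempt=${ctx.attemptNumber()}")
}
```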
How many SparkContext objects you should ...READ MORE
println("Slayer") is an anonymous block and gets ...READ MORE
You can add external jars as arguments ...READ MORE
GraphX is the Spark API for graphs and ...READ MORE
You can try and check this below ...READ MORE
If, for option 2, you mean have ...READ MORE
Spark 2 doesn't differ much architecture-wise from ...READ MORE
Yes, you can go ahead and write ...READ MORE
I suggest you check 2 things: that jquery.sparkline.js is actually ...READ MORE
Minimizing data transfers and avoiding shuffling helps ...READ MORE
Use Array.maxBy method: val a = Array(("a",1), ("b",2), ...READ MORE
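The preview shows the beginning of an `Array.maxBy` answer. A completed sketch along the same lines — the array contents here are illustrative:

```scala
// Sketch: pick the tuple with the largest second element using maxBy.
// Data is illustrative, extending the truncated snippet above.
object MaxByDemo {
  def main(args: Array[String]): Unit = {
    val a = Array(("a", 1), ("b", 2), ("c", 3))
    val winner = a.maxBy(_._2)  // compares elements by the count field
    println(winner)             // (c,3)
  }
}
```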
> In order to reduce the processing ...READ MORE
No, it is not necessary to install ...READ MORE
Yes, there is a difference between the ...READ MORE
There are 2 ways to check the ...READ MORE
Spark revolves around the concept of a ...READ MORE
Hadoop 3 is not widely used in ...READ MORE
Spark has various persistence levels to store ...READ MORE
As Parquet is a column-based storage ...READ MORE
Whenever a node goes down, Spark knows ...READ MORE
I can list some but there can ...READ MORE
Just do the following: Edit your conf/log4j.properties file ...READ MORE
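The preview refers to lowering Spark's console log verbosity through `conf/log4j.properties`. A common sketch of that edit — if the file does not exist, it is usually created by copying `conf/log4j.properties.template`:

```
# conf/log4j.properties — reduce Spark's console logging (a sketch;
# WARN can be replaced with ERROR for even quieter output)
log4j.rootCategory=WARN, console
```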
With mapPartion() or foreachPartition(), you can only ...READ MORE
No, it doesn’t provide a storage layer, but ...READ MORE
Spark is agnostic to the underlying cluster ...READ MORE
Spark SQL is capable of: Loading data from ...READ MORE
There are two popular ways in which ...READ MORE
The full form of RDD is a ...READ MORE
Can you share the screenshots for the ...READ MORE
Spark 2.0+ Spark 2.0 provides native window functions ...READ MORE
In my opinion, start with a standalone ...READ MORE
Some of the key differences between an RDD and ...READ MORE
In your log4j.properties file you need to ...READ MORE
Let's first look at mapper side differences Map ...READ MORE
SqlContext has a number of createDataFrame methods ...READ MORE