is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [51, 53, 10, 10]

+1 vote

Hi,

I tried to load a CSV file in SparkR, but it shows me the below error:

is not a Parquet file. expected magic number at tail [80, 65, 82, 49] but found [51, 53, 10, 10]

Can anyone tell me why I am getting this error?

Thank You

Feb 3, 2020 in Apache Spark by akhtar
• 38,230 points
17,202 views

1 answer to this question.

0 votes

Hi@akhtar,

Here you are trying to read a CSV file, but Spark expects a Parquet file. The bytes [80, 65, 82, 49] in the error are the ASCII codes for "PAR1", the magic number that ends every Parquet file; the bytes Spark actually found are ordinary text, which confirms the file is not Parquet.
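This happens when read.df is called without a source, because SparkR then falls back to the default data source (the spark.sql.sources.default setting), which is parquet. Your call was probably something like the following; this is only a guess, since the original command is not shown:

df <- read.df(csvPath)   # no source named, so SparkR assumes Parquet and fails on a CSV file

Naming the "csv" source explicitly avoids this: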

df <- read.df(csvPath, "csv", header = "true", inferSchema = "true", na.strings = "NA")
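For context, here is a minimal end-to-end sketch. The path /tmp/flights.csv is just a hypothetical example, so adjust it to your file:

library(SparkR)
sparkR.session(appName = "CsvExample")

# Hypothetical example path; replace it with your actual CSV file
csvPath <- "/tmp/flights.csv"

# Name the "csv" source explicitly so SparkR does not assume Parquet
df <- read.df(csvPath, "csv", header = "true", inferSchema = "true", na.strings = "NA")

# Quick sanity checks on the loaded data
printSchema(df)
head(df)

The same idea applies in reverse: write.df(df, path, "parquet") writes real Parquet files, which the default reader can then load without naming a source.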

Thank You

answered Feb 3, 2020 by MD
• 95,440 points

Related Questions In Apache Spark

0 votes
1 answer

What is a Parquet file in Spark?

Hey, Parquet is a columnar file format supported ...READ MORE

answered Jul 2, 2019 in Apache Spark by Gitika
• 65,910 points
1,076 views
0 votes
1 answer

Is it better to have one large parquet file or lots of smaller parquet files?

Ideally, you would use snappy compression (default) ...READ MORE

answered May 23, 2018 in Apache Spark by nitinrawat895
• 11,380 points
13,295 views
+1 vote
1 answer

How can I write a text file in HDFS, not from an RDD, in a Spark program?

Yes, you can go ahead and write ...READ MORE

answered May 29, 2018 in Apache Spark by Shubham
• 13,490 points
7,905 views
0 votes
1 answer

error: identifier expected but ']' found.

Hi, you can try this: remove the brackets from ...READ MORE

answered Jul 3, 2019 in Apache Spark by Gitika
• 65,910 points
5,028 views
0 votes
1 answer

Is it possible to run Apache Spark without Hadoop?

Though Spark and Hadoop were the frameworks designed ...READ MORE

answered May 3, 2019 in Big Data Hadoop by ravikiran
• 4,620 points
993 views
0 votes
1 answer

What do we exactly mean by “Hadoop” – the definition of Hadoop?

The official definition of Apache Hadoop given ...READ MORE

answered Mar 16, 2018 in Big Data Hadoop by Shubham
1,609 views
+1 vote
1 answer

Hadoop Mapreduce word count Program

Firstly you need to understand the concept ...READ MORE

answered Mar 16, 2018 in Data Analytics by nitinrawat895
• 11,380 points
10,555 views
0 votes
1 answer

The number of stages in a job is equal to the number of RDDs in the DAG. However, under one of the given conditions, the scheduler can truncate the lineage. Identify it.

Hi@Edureka, Spark's internal scheduler may truncate the lineage of the RDD graph ...READ MORE

answered Nov 26, 2020 in Apache Spark by MD
• 95,440 points
3,357 views