Spark streaming with Kafka dependency error

0 votes

I am writing a program that streams data from Kafka using Spark Streaming. But when I execute the program, I get the following error:

An exception or error caused a run to abort: kafka.server.KafkaServer$.$lessinit$greater$default$2()Lorg/apache/kafka/common/utils/Time;
java.lang.NoSuchMethodError: kafka.server.KafkaServer$.$lessinit$greater$default$2()Lorg/apache/kafka/common/utils/Time;
    at net.manub.embeddedkafka.EmbeddedKafkaSupport$class.startKafka(EmbeddedKafka.scala:467)
    at net.manub.embeddedkafka.EmbeddedKafka$.startKafka(EmbeddedKafka.scala:38)
    at net.manub.embeddedkafka.EmbeddedKafka$.start(EmbeddedKafka.scala:55)
    at iris.orange.ScalaTest$$anonfun$1.apply$mcV$sp(ScalaTest.scala:10)

I am not sure, but I think this error is related to a dependency. Can anyone help me understand it? Thanks in advance.

Jul 4, 2018 in Apache Spark by code799

1 answer to this question.

0 votes

Your error is caused by a version mismatch in the spark-streaming-kafka dependency. Try using the spark-streaming-kafka dependency below. If it does not work, change the version of the artifact to match your Spark and Kafka versions.

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.10</artifactId>
    <version>2.0.0</version>
</dependency>
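
Once the versions line up, the snippet below is a minimal Scala sketch of consuming from Kafka with the spark-streaming-kafka-0-10 API. The broker address, topic name, and group id are placeholders for illustration, not values from your setup:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaStreamSketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Placeholder broker and consumer-group settings -- replace with your own
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "spark-streaming-test",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Array("test-topic")  // placeholder topic name

    // Create a direct stream and print the (key, value) pairs of each batch
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams)
    )
    stream.map(record => (record.key, record.value)).print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Also check that the Scala suffix of the artifact (_2.10 vs _2.11) matches the Scala version of your project; a NoSuchMethodError like the one in your stack trace usually means two incompatible Kafka or Scala versions ended up on the classpath.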
answered Jul 5, 2018 by Shubham
• 12,230 points
