Kafka SASL/SCRAM


For several days in a row now I have been trying, unsuccessfully, to configure SASL/SCRAM for Kafka. I will be grateful to anyone who can help. I have Kafka version 2.5.0 - https://www.apache.org/dyn/closer.cgi?path=/kafka/2.5.0/kafka_2.12-2.5.0.tgz and the latest version of ZooKeeper, 3.6.1 - https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.1/apache-zookeeper-3.6.1-bin.tar.gz

First I set up the zoo.cfg as follows:

dataDir=/home/duck/Public/zookeeper_logs
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

Next, I created a zookeeper jaas file:

zookeeper_jaas.conf

Server {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};
QuorumServer {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   user_admin="admin-secret";
};

QuorumLearner {
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

Next, I set the JVM flags:

export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/home/duck/Public/zookeeper/zk_jaas.conf"

After that I launched ZooKeeper:

$zkHome/bin/zkServer.sh start
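As an aside, instead of exporting the flag in the shell each time, zkServer.sh also sources conf/java.env from the ZooKeeper installation at startup, so the setting can live there; a sketch using the same path as above:

```shell
# conf/java.env -- sourced by zkServer.sh on startup; JAAS path taken from the question
export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/home/duck/Public/zookeeper/zk_jaas.conf"
```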

Next, I created a JAAS file for kafka:

kafka_server_jaas.conf:

KafkaServer {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

Client {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="admin"
   password="admin-secret";
};

After that, I executed:

export KAFKA_OPTS="-Djava.security.auth.login.config=/home/duck/Public/kafka/config/kafka_server_jaas.conf"
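One step worth noting that is not shown above: with SASL/SCRAM, the SCRAM credentials for the admin user must exist in ZooKeeper before the broker starts, since inter-broker traffic authenticates against them. A sketch using the kafka-configs.sh tool shipped with Kafka 2.5 (username and password taken from the JAAS files above; run while ZooKeeper is up):

```shell
# Create SCRAM-SHA-256 credentials for user "admin" in ZooKeeper
# (run from the Kafka installation directory)
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=admin-secret]' \
  --entity-type users --entity-name admin
```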

The settings for Kafka broker are as follows:

server.properties

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

# Use below for SASL/SCRAM only (No SSL)
# For rest of the brokers change port (highlighted below) to 9091 and 9092
# Using SASL_PLAINTEXT as we do not have SSL
listeners=SASL_PLAINTEXT://localhost:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

############################# Socket Server Settings #############################
num.network.threads=3
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
############################# Log Basics #############################
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
############################# Internal Topic Settings  #############################
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Retention Policy #############################
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
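For completeness, any client connecting to this broker needs matching SASL settings; a minimal client.properties sketch using the same credentials, with the JAAS config inline so no separate file is needed:

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" \
    password="admin-secret";
```

It would be passed with, for example, bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config client.properties.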
  • If it matters, I use Java 13.

After all the settings, the Kafka server starts up and even works for a while, after which it produces errors of the following kind:

    [2020-06-03 20:23:30,096] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:os.version=5.3.0-55-generic (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:user.name=duck (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:user.home=/home/duck (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:user.dir=/home/duck/Public/kafka (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:os.memory.free=980MB (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,097] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,098] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,101] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@24105dc5 (org.apache.zookeeper.ZooKeeper)
    [2020-06-03 20:23:30,108] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
    [2020-06-03 20:23:30,116] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:30,120] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
    [2020-06-03 20:23:30,187] INFO Client successfully logged in. (org.apache.zookeeper.Login)
    [2020-06-03 20:23:30,190] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
    [2020-06-03 20:23:30,194] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:30,202] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:31,307] INFO Client successfully logged in. (org.apache.zookeeper.Login)
    [2020-06-03 20:23:31,307] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
    [2020-06-03 20:23:31,309] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:31,310] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:32,413] INFO Client successfully logged in. (org.apache.zookeeper.Login)
    [2020-06-03 20:23:32,414] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
    [2020-06-03 20:23:32,415] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:32,416] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:33,519] INFO Client successfully logged in. (org.apache.zookeeper.Login)
    [2020-06-03 20:23:33,519] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
    [2020-06-03 20:23:33,522] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:33,523] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:34,626] INFO Client successfully logged in. (org.apache.zookeeper.Login)
    [2020-06-03 20:23:34,626] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
    [2020-06-03 20:23:34,628] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:34,629] INFO Socket error occurred: localhost/127.0.0.1:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
    [2020-06-03 20:23:35,445] INFO Terminating process due to signal SIGINT (org.apache.kafka.common.utils.LoggingSignalHandler)
    [2020-06-03 20:23:35,450] INFO shutting down (kafka.server.KafkaServer)
    [2020-06-03 20:23:35,455] ERROR Fatal error during KafkaServer shutdown. (kafka.server.KafkaServer)
    java.lang.IllegalStateException: Kafka server is still starting up, cannot shut down!
        at kafka.server.KafkaServer.shutdown(KafkaServer.scala:602)
        at kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:54)
        at kafka.Kafka$.$anonfun$main$3(Kafka.scala:80)
        at kafka.utils.Exit$.$anonfun$addShutdownHook$1(Exit.scala:38)
        at java.base/java.lang.Thread.run(Thread.java:830)
    [2020-06-03 20:23:35,459] ERROR Halting Kafka. (kafka.server.KafkaServerStartable)

Jun 4, 2020 in Apache Kafka by progDuck

1 answer to this question.


Hi @progDuck,

The repeated "Connection refused" in your log means nothing is listening on localhost:2181 at all — the failure happens before SASL authentication is even attempted. By default the broker connects to localhost:2181, and since no ZooKeeper server is running (or reachable) at that address, the connection fails. Make sure ZooKeeper is actually up, and specify the correct ZooKeeper host and port in your configuration and commands; then it will work.
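To confirm that diagnosis, you can check whether anything is listening on port 2181 before starting the broker; a quick sketch (exact tools available vary by distribution):

```shell
# Is anything listening on 2181?
ss -ltn | grep 2181            # or: netstat -ltn | grep 2181
nc -zv localhost 2181          # exit code 0 means the port is open

# Ask ZooKeeper itself whether it is running
$zkHome/bin/zkServer.sh status
```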

answered Jun 5, 2020 by MD
