Kafka SASL ZooKeeper authentication

Posted 2019-01-22 22:33

Question:

I am facing the following error while enabling SASL authentication between the Kafka broker and ZooKeeper.

[2017-04-18 15:54:10,476] DEBUG Size of client SASL token: 0 
(org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,476] ERROR cnxn.saslServer is null: cnxn object did not initialize its saslServer properly. (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,478] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-04-18 15:54:10,478] DEBUG Received event: WatchedEvent state:AuthFailed type:None path:null (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Leaving process event (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient... (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-04-18 15:54:10,478] DEBUG Closing ZooKeeper connected to localhost:2181 (org.I0Itec.zkclient.ZkConnection)
[2017-04-18 15:54:10,478] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient...done (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,480] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
    at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
    at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
    at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
    at kafka.Kafka$.main(Kafka.scala:67)
    at kafka.Kafka.main(Kafka.scala)
[2017-04-18 15:54:10,482] INFO shutting down (kafka.server.KafkaServer)

The following configuration is in the JAAS file, which is passed via KAFKA_OPTS so that it is picked up as a JVM parameter (an example export is shown after the file):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret";
};

Client {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret";
};
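
For reference, the export looks roughly like this; the path is only a placeholder for wherever the JAAS file actually lives:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties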

The Kafka broker's server.properties has the following extra fields set:

zookeeper.set.acl=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=path
ssl.keystore.password=anything
ssl.key.password=anything
ssl.truststore.location=path
ssl.truststore.password=anything

The ZooKeeper properties are as follows:

authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl

Answer 1:

I found the issue by increasing the log level to DEBUG. Basically, follow the steps below. I don't use SSL, but you should be able to integrate it without any issue.
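
A minimal way to raise the log level, assuming the stock config/log4j.properties that ships with Kafka, is to change the root logger and restart the processes:

# config/log4j.properties (assuming the stock file layout)
log4j.rootLogger=DEBUG, stdout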

Following are my configuration files:

server.properties

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600

advertised.host.name=localhost
num.partitions=1
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=30000000
log.flush.interval.ms=1800000
log.retention.minutes=30
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
super.users=User:admin
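
Because SimpleAclAuthorizer is enabled above (with allow.everyone.if.no.acl.found=true as a fallback and admin as super user), topics can later be locked down with kafka-acls.sh. A sketch, where test-topic and User:alice are only placeholder names:

$ bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
    --add --allow-principal User:alice --operation Read --operation Write --topic test-topic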

zookeeper.properties

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none

consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

Now come the most important files for getting your server to start without any issue:

zookeeper_jaas.conf

Server {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

kafka_server_jaas.conf

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret";
};

Client {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret";
};
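
With PlainLoginModule, the set of accounts the broker accepts is declared with the user_<name>="<password>" convention inside the KafkaServer section, so adding another client account (alice is only an example) would look like this:

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_alice="alice-secret";
};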

After doing all this configuration, open a first terminal window:

Terminal 1

From the Kafka root directory:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh config/zookeeper.properties
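
Optionally, check that ZooKeeper is answering before moving on (assuming nc is installed; the ZooKeeper bundled with this Kafka version answers four-letter-word commands by default):

$ echo ruok | nc localhost 2181
imok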

Terminal 2

From the Kafka root directory:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties
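
Since auto.create.topics.enable=false in the server.properties above, the test-topic used by the console clients below has to exist first. It can be created like this (partition and replication counts are just minimal values):

$ bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test-topic --partitions 1 --replication-factor 1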

[BEGIN UPDATE]

kafka_client_jaas.conf

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};
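
As a side note, from Kafka 0.10.2 onwards clients can skip the separate JAAS file and put the same credentials directly into producer.properties / consumer.properties via sasl.jaas.config; with 0.10.1.0 as used here, the KAFKA_OPTS approach below is needed:

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret";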

Terminal 3

On a client terminal, export the client JAAS conf file and start the consumer:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties  --bootstrap-server=localhost:9092

Terminal 4

If you also want to produce, do this on another terminal window:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties

[END UPDATE]



Answer 2:

You need to create a JAAS config file for ZooKeeper and make ZooKeeper use it.

Create a JAAS config file for ZooKeeper with content like this:

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_admin="admin-secret";
};

The user (admin) and password (admin-secret) must match the username and password in the Client section of the Kafka JAAS config file.

To make ZooKeeper use the JAAS config file, pass the following JVM flag to ZooKeeper, pointing to the file created above:

-Djava.security.auth.login.config=/path/to/server/jaas/file.conf

If you are using the ZooKeeper included with the Kafka package, you can launch ZooKeeper like this, assuming your ZooKeeper JAAS config file is located at ./config/zookeeper_jaas.conf:

EXTRA_ARGS=-Djava.security.auth.login.config=./config/zookeeper_jaas.conf ./bin/zookeeper-server-start.sh ./config/zookeeper.properties