Using Apache Kafka in SSL mode

Published 2019-09-14 14:47

Question:

I'm trying to set up Kafka in SSL [1-way] mode. I've gone through the official documentation and successfully generated the certificates. I'll note down the behavior for two different cases. This setup has only one broker and one ZooKeeper.

Case-1: Inter-broker communication - Plaintext

Relevant entries in my server.properties file are as follows:

listeners=PLAINTEXT://localhost:9092, SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****

I've added a client-ssl.properties in kafka config dir with following entries:

security.protocol=SSL
ssl.truststore.location=/Users/xyz/home/ssl/client.truststore.jks
ssl.truststore.password=****

If I put bootstrap.servers=localhost:9093 or bootstrap.servers=localhost:9092 in my config/producer.properties file, my console producers/consumers work fine. Is that the intended behavior? If so, why? I'm specifically trying to connect to localhost:9093 from the producer/consumer in SSL mode.

Case-2: Inter-broker communication - SSL

Relevant entries in my server.properties file are as follows:

security.inter.broker.protocol=SSL
listeners=SSL://localhost:9093
ssl.keystore.location=/Users/xyz/home/ssl/server.keystore.jks
ssl.keystore.password=****
ssl.key.password=****

My client-ssl.properties file remains the same. I put bootstrap.servers=localhost:9093 in the producer.properties file. Now none of my producers/consumers can connect to Kafka. I get the following message:

WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

What am I doing wrong?

In all these cases I'm using the following commands to start producers/consumers:

./kafka-console-producer.sh --broker-list localhost:9093 --topic test --producer.config ../config/client-ssl.properties
./kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config ../config/client-ssl.properties

Answer 1:

Make sure that the common name (CN) in your certificate matches your hostname. The SSL protocol verifies the CN against the hostname, so in your case the certificate should have CN=localhost. I had a similar issue and that's how I fixed it.



Answer 2:

One important piece of information regarding this: the behavior where the CN has to be equal to the hostname can be deactivated by adding the following line to server.properties:

    ssl.endpoint.identification.algorithm=

The default value for this setting is https, which enables hostname-to-CN verification. This has been the default since Kafka 2.0.
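Note that since Kafka 2.0 the same default applies on the client side as well, so if the broker certificate's CN does not match the hostname, the setting also has to be emptied in client-ssl.properties. A sketch, reusing the truststore entries from the question:

```properties
security.protocol=SSL
ssl.truststore.location=/Users/xyz/home/ssl/client.truststore.jks
ssl.truststore.password=****
# Disable hostname verification on the client side too (empty value).
ssl.endpoint.identification.algorithm=
```

Disabling the check is a workaround; the cleaner fix is to issue certificates whose CN (or subject alternative name) matches the advertised hostname.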

I've successfully tested an SSL setup (on the broker side only, though) with the following properties:

    ############################ SSL Config #################################
    ssl.truststore.location=/path/to/kafka.truststore.jks
    ssl.truststore.password=TrustStorePassword
    ssl.keystore.location=/path/to/kafka.server.keystore.jks
    ssl.keystore.password=KeyStorePassword
    ssl.key.password=PrivateKeyPassword
    security.inter.broker.protocol=SSL
    listeners=SSL://localhost:9093
    advertised.listeners=SSL://127.0.0.1:9093
    ssl.client.auth=required
    ssl.endpoint.identification.algorithm=

You can also find a shell script to generate SSL certificates (with key- and truststores), alongside some documentation, in this GitHub project: https://github.com/confluentinc/confluent-platform-security-tools
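One caveat about the broker properties above: ssl.client.auth=required turns this into two-way (mutual) SSL, so clients must present their own certificate as well. A sketch of a matching client properties file, with placeholder paths and passwords:

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=TrustStorePassword
# Needed only because the broker sets ssl.client.auth=required;
# for 1-way SSL (as in the original question) the keystore lines can be dropped.
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=KeyStorePassword
ssl.key.password=PrivateKeyPassword
ssl.endpoint.identification.algorithm=
```

For the 1-way setup the question describes, ssl.client.auth can be left at its default (none) or set to requested instead.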