Kafka + Zookeeper: Connection to node -1 could not be established

Posted 2020-03-08 07:22

I am running both Zookeeper and Kafka on my localhost (one instance of each).

I successfully create a topic with Kafka:

./bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic Hello-Nicola

Created topic "Hello-Nicola".
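To double-check from the Zookeeper side, describing the topic with the same CLI should show its single partition and leader (a sketch, assuming the same Kafka 1.0-era tooling):

./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic Hello-Nicola
# expected to list Partition: 0 with Leader: 0 for this single-broker setup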

Kafka logs show:

[2017-12-06 16:00:17,753] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2017-12-06 16:03:19,347] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Hello-Nicola-0 (kafka.server.ReplicaFetcherManager)
[2017-12-06 16:03:19,393] INFO Loading producer state from offset 0 for partition Hello-Nicola-0 with message format version 2 (kafka.log.Log)
[2017-12-06 16:03:19,406] INFO Completed load of log Hello-Nicola-0 with 1 log segments, log start offset 0 and log end offset 0 in 35 ms (kafka.log.Log)
[2017-12-06 16:03:19,408] INFO Created log for partition [Hello-Nicola,0] in /tmp/kafka-logs with properties {compression.type -> producer, message.format.version -> 1.0-IV0, file.delete.delay.ms -> 60000, max.message.bytes -> 1000012, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, min.insync.replicas -> 1, segment.jitter.ms -> 0, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.ms -> 604800000, segment.bytes -> 1073741824, retention.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760, flush.messages -> 9223372036854775807}. (kafka.log.LogManager)
[2017-12-06 16:03:19,409] INFO [Partition Hello-Nicola-0 broker=0] No checkpointed highwatermark is found for partition Hello-Nicola-0 (kafka.cluster.Partition)
[2017-12-06 16:03:19,411] INFO Replica loaded for partition Hello-Nicola-0 with initial high watermark 0 (kafka.cluster.Replica)
[2017-12-06 16:03:19,413] INFO [Partition Hello-Nicola-0 broker=0] Hello-Nicola-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)

But Zookeeper logs show:

2017-12-06 16:03:19,299 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000177fb3d0001 type:create cxid:0x43 zxid:0x26 txntype:-1 reqpath:n/a Error Path:/brokers/topics/Hello-Nicola/partitions/0 Error:KeeperErrorCode = NoNode for /brokers/topics/Hello-Nicola/partitions/0
2017-12-06 16:03:19,302 [myid:] - INFO  [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000177fb3d0001 type:create cxid:0x44 zxid:0x27 txntype:-1 reqpath:n/a Error Path:/brokers/topics/Hello-Nicola/partitions Error:KeeperErrorCode = NoNode for /brokers/topics/Hello-Nicola/partitions

If I try to produce messages:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Hello-Nicola
>ciao
[2017-12-06 16:04:21,897] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2017-12-06 16:04:22,000] WARN [Producer clientId=console-producer] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

server.properties (in kafka) is:

broker.id=0
listeners=PLAINTEXT://mylocal-0:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0

It seems that Zookeeper didn't register any broker.
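A quick way to verify that is to list the registered broker ids with the zookeeper-shell.sh that ships with Kafka (a sketch):

./bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
# [] would mean no broker registered; [0] would mean broker.id=0 is registered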

Any suggestion, please?

4 Answers

萌系小妹纸 · 2020-03-08 07:49

Update, if you are running in single-node mode:

I have seen this message in the Spark console log while trying to deploy an application. It was solved by changing this parameter in server.properties:

listeners=PLAINTEXT://myhostname:9092

to

listeners=PLAINTEXT://localhost:9092

Make sure that you have a Java process listening on port 9092, e.g. with netstat -lptu.
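Putting both steps together, a minimal single-node sketch might look like this (the advertised.listeners line is an assumption; Kafka falls back to listeners when it is not set):

# server.properties: bind and advertise the broker on localhost
listeners=PLAINTEXT://localhost:9092
advertised.listeners=PLAINTEXT://localhost:9092

# after restarting the broker, confirm a Java process is listening on 9092
netstat -lptu | grep 9092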

萌系小妹纸 · 2020-03-08 08:06

If this happens suddenly after it was working, you should try to restart Kafka first.

In my case, restarting solved the problem:

$ docker-compose down && docker-compose up -d
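If a plain restart does not help, the container state and logs usually show why the broker is unreachable (a sketch; the service name kafka is an assumption about the compose file):

docker-compose ps
docker-compose logs kafka | tail -n 50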
[account banned] · 2020-03-08 08:08

Change:

#listeners=PLAINTEXT://:9092

in server.properties to:

listeners=PLAINTEXT://localhost:9092

Note: you also need to uncomment the line, i.e. remove the leading # symbol.
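A quick grep shows which listeners line is actually active, commented or not (a sketch, assuming the default config/server.properties location):

grep -nE '^#?listeners=' config/server.properties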

家丑人穷心不美 · 2020-03-08 08:11

I found the error. Looking at the Zookeeper logs when the server started, I noticed:

server.1=mylocal-0.:2888:3888

with a dot (.) after the name of the host.

The script that generates the Zookeeper config is https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/zkGenConfig.sh

Looking inside, I see that DOMAIN is left empty:

HOST=`hostname -s`    # short hostname
DOMAIN=`hostname -d`  # DNS domain name; empty when the host has no domain

function print_servers() {
    # one server.N entry per Zookeeper replica, appending the (possibly empty) domain
    for (( i=1; i<=$ZK_REPLICAS; i++ ))
    do
        echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT"
    done
}

In my case (localhost) I don't need a domain, so I removed that variable.
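The resulting loop then looks roughly like this (a sketch of the same function with the empty $DOMAIN dropped; the variable names come from the script above):

function print_servers() {
    # emits e.g. server.1=mylocal-0:2888:3888, with no trailing dot or domain
    for (( i=1; i<=$ZK_REPLICAS; i++ ))
    do
        echo "server.$i=$NAME-$((i-1)):$ZK_SERVER_PORT:$ZK_ELECTION_PORT"
    done
}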

Now Zookeeper and Kafka communicate with no errors.
