Confluent kafkarest ERROR Server died unexpectedly

Posted 2019-08-14 13:17

I am running Kafka via the Confluent platform. I have followed the steps documented here: https://docs.confluent.io/2.0.0/quickstart.html#quickstart

Start ZooKeeper:

$ sudo ./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties

Start Kafka:

$ sudo ./bin/kafka-server-start ./etc/kafka/server.properties

Start the Schema Registry:

$ sudo ./bin/schema-registry-start ./etc/schema-registry/schema-registry.properties

All are running fine.

Next I want to run the REST proxy, as documented here: https://docs.confluent.io/2.0.0/kafka-rest/docs/intro.html#quickstart

$ sudo bin/kafka-rest-start

But this command fails with the following error: ERROR Server died unexpectedly: (io.confluent.kafkarest.KafkaRestMain:63) java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured.

I don't understand why I am getting this error; could you please help me solve it? The full output is below:

ESDGH-C02K648W:confluent-4.0.0 user$ sudo bin/kafka-rest-start
[2018-01-09 14:44:06,922] INFO KafkaRestConfig values: 
    metric.reporters = []
    client.security.protocol = PLAINTEXT
    bootstrap.servers = 
    response.mediatype.default = application/vnd.kafka.v1+json
    authentication.realm = 
    ssl.keystore.type = JKS
    metrics.jmx.prefix = kafka.rest
    ssl.truststore.password = [hidden]
    id = 
    host.name = 
    consumer.request.max.bytes = 67108864
    client.ssl.truststore.location = 
    ssl.endpoint.identification.algorithm = 
    compression.enable = false
    client.zk.session.timeout.ms = 30000
    client.ssl.keystore.type = JKS
    client.ssl.cipher.suites = 
    client.ssl.keymanager.algorithm = SunX509
    client.ssl.protocol = TLS
    response.mediatype.preferred = [application/vnd.kafka.v1+json, application/vnd.kafka+json, application/json]
    client.sasl.kerberos.ticket.renew.window.factor = 0.8
    ssl.truststore.type = JKS
    consumer.iterator.backoff.ms = 50
    access.control.allow.origin = 
    ssl.truststore.location = 
    ssl.keystore.password = [hidden]
    zookeeper.connect = 
    port = 8082
    client.ssl.keystore.password = [hidden]
    client.ssl.provider = 
    client.init.timeout.ms = 60000
    simpleconsumer.pool.size.max = 25
    simpleconsumer.pool.timeout.ms = 1000
    ssl.client.auth = false
    consumer.iterator.timeout.ms = 1
    client.sasl.kerberos.service.name = 
    ssl.trustmanager.algorithm = 
    authentication.method = NONE
    schema.registry.url = http://localhost:8081
    client.ssl.truststore.type = JKS
    request.logger.name = io.confluent.rest-utils.requests
    ssl.key.password = [hidden]
    client.sasl.kerberos.ticket.renew.jitter = 0.05
    client.ssl.endpoint.identification.algorithm = 
    authentication.roles = [*]
    client.ssl.trustmanager.algorithm = PKIX
    metrics.num.samples = 2
    consumer.threads = 1
    ssl.protocol = TLS
    client.ssl.keystore.location = 
    debug = false
    listeners = []
    ssl.provider = 
    ssl.enabled.protocols = []
    client.sasl.kerberos.min.time.before.relogin = 60000
    producer.threads = 5
    shutdown.graceful.ms = 1000
    ssl.keystore.location = 
    consumer.request.timeout.ms = 1000
    ssl.cipher.suites = []
    client.timeout.ms = 500
    consumer.instance.timeout.ms = 300000
    client.sasl.kerberos.kinit.cmd = /usr/bin/kinit
    client.ssl.key.password = [hidden]
    access.control.allow.methods = 
    ssl.keymanager.algorithm = 
    metrics.sample.window.ms = 30000
    client.ssl.truststore.password = [hidden]
    client.ssl.enabled.protocols = TLSv1.2,TLSv1.1,TLSv1
    kafka.rest.resource.extension.class = 
    client.sasl.mechanism = GSSAPI
 (io.confluent.kafkarest.KafkaRestConfig:175)
[2018-01-09 14:44:06,954] INFO Logging initialized @402ms (org.eclipse.jetty.util.log:186)
[2018-01-09 14:44:07,154] ERROR Server died unexpectedly:  (io.confluent.kafkarest.KafkaRestMain:63)
java.lang.RuntimeException: Atleast one of bootstrap.servers or zookeeper.connect needs to be configured
    at io.confluent.kafkarest.KafkaRestApplication.setupInjectedResources(KafkaRestApplication.java:104)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:83)
    at io.confluent.kafkarest.KafkaRestApplication.setupResources(KafkaRestApplication.java:45)
    at io.confluent.rest.Application.createServer(Application.java:157)
    at io.confluent.rest.Application.start(Application.java:495)
    at io.confluent.kafkarest.KafkaRestMain.main(KafkaRestMain.java:56)
ESDGH-C02K648W:confluent-4.0.0 user$ 

2 Answers
淡お忘
Answer 1 · 2019-08-14 13:26

The kafka-rest-start script takes a properties file as an argument. This is documented further down in the quick start you have linked.
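
That properties file is where bootstrap.servers (or zookeeper.connect) gets set, which is exactly what the error is complaining about. A minimal sketch of the entries you would need, assuming a single local broker on localhost:9092 and ZooKeeper on localhost:2181 (the hostnames and ports are assumptions; adjust them to your setup):

# etc/kafka-rest/kafka-rest.properties -- minimal sketch, hosts/ports assumed
bootstrap.servers=PLAINTEXT://localhost:9092
# or, alternatively, point the proxy at ZooKeeper instead:
# zookeeper.connect=localhost:2181
schema.registry.url=http://localhost:8081
# 8082 is the default port, as shown in your startup log
port=8082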

爱情/是我丢掉的垃圾
Answer 2 · 2019-08-14 13:46

The kafka-rest-start script takes a properties file as an argument; you must pass ./etc/kafka-rest/kafka-rest.properties on the command line:

bin/kafka-rest-start ./etc/kafka-rest/kafka-rest.properties
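
Once it starts cleanly, a quick way to confirm the proxy can reach the broker is to list topics through it; this assumes the proxy is running locally on its default port 8082 (the port shown in the startup log above):

$ curl http://localhost:8082/topics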
