Not able to access messages from Confluent Kafka on EC2

Posted 2019-08-25 01:18

Confluent Kafka 5.0.0 is installed on an AWS EC2 instance with public IP, say, 54.XX.XX.XX. Port 9092 is opened on the EC2 machine to 0.0.0.0.

In /etc/kafka/server.properties I have

advertised.listeners=PLAINTEXT://54.XX.XX.XX:9092  
listeners=PLAINTEXT://0.0.0.0:9092

In /etc/kafka/producer.properties I have

bootstrap.servers=0.0.0.0:9092

On the local machine, in /etc/kafka/consumer.properties I have

bootstrap.servers=54.XX.XX.XX:9092
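
As a sanity check of this layout (a minimal sketch using the same client library, not part of my original scripts), the broker's metadata can be requested from the local machine; the host:port it reports is the advertised listener and must be reachable from there:

from confluent_kafka import Producer

# Sketch: confirm the broker at 54.XX.XX.XX is reachable from the local
# machine and see which listener address it advertises back to clients.
p = Producer({'bootstrap.servers': '54.XX.XX.XX:9092'})
md = p.list_topics(timeout=10)  # raises KafkaException if the broker cannot be reached
for b in md.brokers.values():
    # This host:port is what the broker advertises; it must be routable
    # from this machine, not only from inside EC2.
    print('broker {} -> {}:{}'.format(b.id, b.host, b.port))
print('known topics:', list(md.topics.keys()))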

On the EC2 instance, I started Kafka with 'confluent start' and created the topic 'mytopic'.
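
(For reference, and assuming a confluent-kafka-python version that ships the admin API, the topic could equivalently be created programmatically rather than via the CLI; a rough sketch:)

from confluent_kafka.admin import AdminClient, NewTopic

# Illustrative only: create 'mytopic' with one partition and replication factor 1.
admin = AdminClient({'bootstrap.servers': '54.XX.XX.XX:9092'})
futures = admin.create_topics([NewTopic('mytopic', num_partitions=1, replication_factor=1)])
for topic_name, fut in futures.items():
    try:
        fut.result()  # returns None on success
        print('created', topic_name)
    except Exception as e:
        print('failed to create {}: {}'.format(topic_name, e))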

My producer.py code, run from the local machine, looks like this (relevant portion):

from confluent_kafka import Producer
import json

broker = '54.XX.XX.XX'
topic = 'mytopic'

p = Producer({'bootstrap.servers': broker})

# dictList and delivery_report are defined elsewhere in producer.py
for data in dictList:
    p.poll(0)  # serve delivery callbacks from earlier produce() calls
    sendme = json.dumps(data)
    p.produce(topic, sendme.encode('utf-8'), callback=delivery_report)

p.flush()

This seems to write messages to 'mytopic' in the Kafka stream on the EC2 instance; I can see those messages with 'kafkacat -b 54.XX.XX.XX -t mytopic' on the EC2 instance.

But I am not able to access those messages from the local machine with a simple message-printing consumer, with code as below:

from confluent_kafka import Consumer, KafkaError, KafkaException
import json
import sys

broker = '54.XX.XX.XX'
topic = 'mytopic'
group = 'mygroup'
running = True

def basic_consume_loop(consumer, topics):
    try:
        consumer.subscribe(topics)

        while running:
            msg = consumer.poll(timeout=1.0)
            if msg is None:
                continue

            if msg.error():
                if msg.error().code() == KafkaError._PARTITION_EOF:
                    # End of partition event
                    sys.stderr.write('{} [{}] reached end at offset {}\n'.format(
                        msg.topic(), msg.partition(), msg.offset()))
                    data_process()  # defined elsewhere
                elif msg.error():
                    raise KafkaException(msg.error())
            else:
                msg_process(msg)  # defined elsewhere
    finally:
        # Close down consumer to commit final offsets.
        print("Shutting down the consumer")
        consumer.close()

c = Consumer({
    'bootstrap.servers': broker,
    'group.id': group,
    'session.timeout.ms': 6000,
    'default.topic.config': {
        'auto.offset.reset': 'smallest'
    }
})
basic_consume_loop(c, [topic])

It just hangs. Did I miss any settings?
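
For reference, a rough way to narrow down where it hangs (a sketch using the optional on_assign callback of subscribe(), not part of my original code) would be to log whether the consumer ever gets partitions assigned:

from confluent_kafka import Consumer

def print_assignment(consumer, partitions):
    # Fires only if the group join and partition assignment complete.
    print('Assigned partitions:', partitions)

c = Consumer({
    'bootstrap.servers': '54.XX.XX.XX',
    'group.id': 'mygroup',
    'session.timeout.ms': 6000,
    'default.topic.config': {'auto.offset.reset': 'smallest'}
})
c.subscribe(['mytopic'], on_assign=print_assignment)
print('first poll returned:', c.poll(timeout=10.0))  # stays None if nothing was assigned or fetched
c.close()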

1 Answer

等我变得足够好 · 2019-08-25 02:14

The following steps seem to work.

On both the local and the EC2 machine, in /etc/kafka/server.properties set

listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://54.XX.XX.XX:9092

On the local machine, in /etc/kafka/producer.properties set

bootstrap.servers=0.0.0.0:9092

On the EC2 machine, in /etc/kafka/producer.properties set

bootstrap.servers=localhost:9092

On both the local and the EC2 machine, in /etc/kafka/consumer.properties set

bootstrap.servers=0.0.0.0:9092
group.id=mygroup

Use 'confluent start' to start all necessary daemons on the remote EC2 machine. On the local machine, Confluent is NOT started.

On the local machine (optional, to keep the IP out of the code):

export KAFKA_PRODUCER_IP=54.XX.XX.XX

With this, a producer on the local machine can put messages on the remote EC2 Kafka as follows:

import os
from confluent_kafka import Producer

broker = os.environ['KAFKA_PRODUCER_IP'] + ':9092'
topic = 'mytopic'
p = Producer({'bootstrap.servers': broker})

From the local machine, messages can be fetched from the remote EC2 Kafka as follows:

import os
from confluent_kafka import Consumer

broker = os.environ['KAFKA_PRODUCER_IP'] + ':9092'
topic = 'mytopic'
group = 'mygroup'
c = Consumer({
    'bootstrap.servers': broker,
    'group.id': group,
    'session.timeout.ms': 6000,
    'default.topic.config': {
        'auto.offset.reset': 'smallest'
    }
})
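
and then, as a sketch, subscribing and polling in the usual loop:

# Subscribe and print whatever arrives; interrupt to stop.
c.subscribe([topic])
try:
    while True:
        msg = c.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print('Consumer error:', msg.error())
            continue
        print(msg.value().decode('utf-8'))
finally:
    c.close()  # commit final offsets and leave the group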

These steps seem to work. There could be some redundancies; if so, please point them out.
