One way to do this, as the Kafka documentation shows, is through kafka.tools.MirrorMaker. However, I need to copy a topic (say test, with 1 partition), including its content and metadata, from a production environment to a development environment, and there is no connectivity between the two. I can do a simple file transfer between the environments, though. My question: if I move the *.log and *.index files from the folder test-0 to the destination Kafka cluster, is that good enough? Or is there more I need to do, such as moving metadata and ZooKeeper-related data as well?
Tags: apache-kafka
Just copying the logs and indexes will not suffice: Kafka stores offsets and topic metadata in ZooKeeper. MirrorMaker is actually quite a simple tool; it spawns consumers on the source topic and producers on the target topic, and runs until the consumers have drained the source. You won't find a simpler process for migrating a topic.
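For comparison, when connectivity between the clusters does exist, a MirrorMaker run is essentially a one-liner. A minimal sketch, assuming two hypothetical property files that point the consumer at the source cluster and the producer at the target cluster:

    # consumer.properties points at the source cluster, e.g.:
    #   bootstrap.servers=source-broker:9092
    #   group.id=mirror-group
    # producer.properties points at the target cluster, e.g.:
    #   bootstrap.servers=target-broker:9092

    # Mirror every topic matching the whitelist until stopped.
    bin/kafka-mirror-maker.sh \
      --consumer.config consumer.properties \
      --producer.config producer.properties \
      --whitelist "test"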
What worked for me in your scenario was the following sequence of actions (a rough sketch follows the list):
- Create the topic on the destination cluster, setting its retention.ms config so that Kafka doesn't delete your presumably outdated segments.
- Copy the segment files into the destination partition folder (kafka-logs-<hash>/<your-topic>-0).

This also works if your Kafka is run from docker-compose (but you'll have to set up an appropriate volume, of course).
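A minimal sketch of that sequence, assuming Kafka 2.2+ (older versions take --zookeeper instead of --bootstrap-server on kafka-topics.sh), a topic named test with one partition, and placeholder paths for the transferred files and the destination log directory:

    # 1. Create the topic on the destination cluster. retention.ms=-1
    #    disables time-based deletion so the copied (old) segments survive.
    bin/kafka-topics.sh --create \
      --bootstrap-server localhost:9092 \
      --topic test --partitions 1 --replication-factor 1 \
      --config retention.ms=-1

    # 2. Stop the destination broker before replacing its data files
    #    (assumed precaution; adapt to however you manage the service).

    # 3. Copy the transferred segment files (*.log, *.index, *.timeindex)
    #    into the destination partition folder. Both paths are placeholders.
    cp /tmp/transferred/test-0/* /var/lib/kafka/kafka-logs-<hash>/test-0/

    # 4. Start the broker again and consume the topic from the beginning
    #    to verify the data is readable.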