Set timestamp in output with Kafka Streams fails

Published 2019-07-10 17:11

Question:

Suppose we have a transformer (written in Scala):

import org.apache.kafka.streams.kstream.Transformer
import org.apache.kafka.streams.processor.{ProcessorContext, To}

new Transformer[String, V, (String, V)]() {
  var context: ProcessorContext = _

  override def init(context: ProcessorContext): Unit = {
    this.context = context
  }

  override def transform(key: String, value: V): (String, V) = {
    // extract the event timestamp from the record value and
    // attach it to the forwarded record
    val timestamp = toTimestamp(value)
    context.forward(key, value, To.all().withTimestamp(timestamp))
    key -> value
  }

  override def close(): Unit = ()
}

where toTimestamp is just a function that extracts a timestamp from the record value. Once it gets executed, there's an NPE:

Exception in thread "...-6f3693b9-4e8d-4e65-9af6-928884320351-StreamThread-5" java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:110)
    at CustomTransformer.transform()
    at CustomTransformer.transform()
    at org.apache.kafka.streams.scala.kstream.KStream$$anon$1$$anon$2.transform(KStream.scala:302)
    at org.apache.kafka.streams.scala.kstream.KStream$$anon$1$$anon$2.transform(KStream.scala:300)
    ...

What essentially happens is that ProcessorContextImpl fails in:

public <K, V> void forward(final K key, final V value, final To to) {
    toInternal.update(to);
    if (toInternal.hasTimestamp()) {
        recordContext.setTimestamp(toInternal.timestamp());
    }
    final ProcessorNode previousNode = currentNode();
    // ...

because the recordContext was not initialized (and that can only be done internally by Kafka Streams).

This is a follow-up to the question Set timestamp in output with Kafka Streams.

Answer 1:

If you work with a Transformer, you need to make sure that a new Transformer object is created every time TransformerSupplier#get() is called. (cf. https://docs.confluent.io/current/streams/faq.html#why-do-i-get-an-illegalstateexception-when-accessing-record-metadata)

In the original question, I thought the NPE was caused by your context variable, but now I realize it's about Kafka Streams internals.

The Scala API has a bug in 2.0.0 that may result in the same Transformer instance being reused (https://issues.apache.org/jira/browse/KAFKA-7250). I think you are hitting this bug. Rewriting your code slightly should fix the issue (see the sketch below). Note that Kafka 2.0.1 and Kafka 2.1.0 contain a fix.
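
For illustration, here is a minimal sketch of that rewrite, dropping to the Java TransformerSupplier so that a fresh Transformer is created on every get() call. The class name MyTransformer, the concrete String value type, and the toTimestamp helper body are assumptions for the example:

import org.apache.kafka.streams.KeyValue
import org.apache.kafka.streams.kstream.{Transformer, TransformerSupplier}
import org.apache.kafka.streams.processor.{ProcessorContext, To}

// Hypothetical concrete transformer mirroring the one from the question.
class MyTransformer extends Transformer[String, String, KeyValue[String, String]] {
  var context: ProcessorContext = _

  // hypothetical helper: assume the timestamp is encoded in the value itself
  private def toTimestamp(value: String): Long = value.toLong

  override def init(context: ProcessorContext): Unit =
    this.context = context

  override def transform(key: String, value: String): KeyValue[String, String] = {
    // attach the extracted timestamp to the forwarded record
    context.forward(key, value, To.all().withTimestamp(toTimestamp(value)))
    null // nothing else to emit; the record was already sent via forward()
  }

  override def close(): Unit = ()
}

object TransformerExample {
  // The crucial part: get() must return a NEW Transformer on every call,
  // never hand out a shared instance.
  val supplier = new TransformerSupplier[String, String, KeyValue[String, String]] {
    override def get(): Transformer[String, String, KeyValue[String, String]] =
      new MyTransformer()
  }
}

With a fresh instance per get(), each stream task gets its own transformer with its own properly initialized context, so forward() should no longer hit an uninitialized recordContext.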



Answer 2:

@matthias-j-sax The same behavior occurs if a processor is reused in Java code:

Topology topology = new Topology();
MyProcessor myProcessor = new MyProcessor(); // one shared instance: this is the problem
topology.addSource("source", "topic-1")
        .addProcessor(
                "processor",
                () -> myProcessor, // the supplier returns the SAME instance on every call
                "source"
        )
        .addSink("sink", "topic-2", "processor");
KafkaStreams streams = new KafkaStreams(topology, config);
streams.start();
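
The fix here is presumably the same as in the Scala case above: the ProcessorSupplier passed to addProcessor must create a fresh instance on every call, i.e. () -> new MyProcessor() instead of capturing the shared myProcessor.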