Structured Streaming - Consume each message

Published 2020-07-27 05:32

Question:

What would be the "recommended" way to process each message as it comes through a Structured Streaming pipeline? (I'm on Spark 2.1.1, with Kafka 0.10.2.1 as the source.)

So far, I am looking at dataframe.mapPartitions (since I need to connect to HBase, whose client connection classes are not serializable, hence mapPartitions).
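Roughly what I have in mind, where ds is the typed Dataset read from Kafka and HBaseClient / toHBasePut are placeholders, not working code:

// Sketch of the mapPartitions idea: build the non-serializable client
// inside the partition function so it never leaves the executor.
ds.mapPartitions { records =>
  val client = new HBaseClient(conf)   // placeholder constructor
  val out = records.map { record =>
    client.put(toHBasePut(record))     // placeholder write helper
    record
  }.toList                             // force the writes before closing the client
  client.close()
  out.iterator
}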

Any ideas?

Answer 1:

You should be able to use a foreach output sink: https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#output-sinks and https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#using-foreach

Even though the client is not serializable, you don't have to open it in your ForeachWriter constructor. Just leave it as None/null and initialize it in the open method, which is called on the executor after the writer has been deserialized, and only once per task.

In sort-of-pseudo-code:

import org.apache.spark.sql.ForeachWriter

class HBaseForeachWriter extends ForeachWriter[MyType] {
  // Left empty here; the client is created in open(), after the writer
  // has been deserialized on the executor.
  var client: Option[HBaseClient] = None

  def open(partitionId: Long, version: Long): Boolean = {
    client = Some(??? /* ... open a client ... */)
    true // returning true tells Spark to go ahead and process this partition
  }

  def process(record: MyType): Unit = {
    client match {
      case None     => throw new IllegalStateException("open() was not called")
      case Some(cl) =>
        // ... use cl to write record ...
    }
  }

  def close(errorOrNull: Throwable): Unit = {
    client.foreach(_.close())
  }
}
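To wire this into the query, pass an instance of the writer to the foreach sink. A minimal sketch, assuming `stream` is a Dataset[MyType] you have already parsed out of the Kafka source (the broker address, topic name, and parsing step are illustrative, not part of the answer above):

import org.apache.spark.sql.{Dataset, SparkSession}

val spark = SparkSession.builder().appName("kafka-to-hbase").getOrCreate()
import spark.implicits._

val stream: Dataset[MyType] = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker:9092") // assumed broker
  .option("subscribe", "events")                    // assumed topic
  .load()
  .selectExpr("CAST(value AS STRING)")
  .as[String]
  .map(parseMyType)                                 // hypothetical String => MyType parser

val query = stream.writeStream
  .foreach(new HBaseForeachWriter)
  .start()

query.awaitTermination()

Each micro-batch is then pushed through open/process/close on the executors, so the HBase connection is created and torn down per task rather than being serialized from the driver.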