I've connected a Kafka stream to Spark, and I've trained an Apache Spark MLlib model to make predictions based on streamed text. My problem is that to get a prediction I need to pass a DataFrame.
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.ml.PipelineModel

//kafka stream
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  PreferConsistent,
  Subscribe[String, String](topics, kafkaParams)
)
//load MLlib pipeline model
val model = PipelineModel.load(modelPath)
stream.foreachRDD { rdd =>
  rdd.foreach { record =>
    //to get a prediction I need to pass a DataFrame
    val toPredict = spark.createDataFrame(Seq(
      (1L, record.value())
    )).toDF("id", "review")
    val prediction = model.transform(toPredict)
  }
}
My problem is that Spark Streaming doesn't seem to allow me to create a DataFrame here. Is there any way to do that? Can I use a case class or a struct?
It's possible to create a DataFrame or Dataset from an RDD as you would in core Spark; to do that, we need to apply a schema. Within the foreachRDD we can then transform the resulting RDD into a DataFrame that can be further used with an ML pipeline.
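A minimal sketch of that approach, reusing the stream and model from the question. The key change is that the DataFrame is built from the whole RDD inside foreachRDD, which runs on the driver where the SparkSession is available, instead of per record inside rdd.foreach (that closure runs on the executors, where no SparkSession exists). The Review case class and the zipWithIndex-based id are illustrative assumptions, not part of the original code:

import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.PipelineModel

// Case class used to apply a schema to the RDD; defined at top level so
// Spark can derive an encoder for it
case class Review(id: Long, review: String)

stream.foreachRDD { rdd =>
  if (!rdd.isEmpty()) {
    // foreachRDD executes on the driver, so the SparkSession is reachable here
    val spark = SparkSession.builder
      .config(rdd.sparkContext.getConf)
      .getOrCreate()
    import spark.implicits._

    // Apply the schema: one Review row per Kafka record, with a synthetic id
    val toPredict = rdd
      .map(_.value())
      .zipWithIndex()
      .map { case (text, idx) => Review(idx, text) }
      .toDF()

    // Run the loaded pipeline model over the whole micro-batch at once
    val predictions = model.transform(toPredict)
    predictions.show()
  }
}

This also answers the case-class question: mapping the RDD to a case class and calling toDF() (via import spark.implicits._) is exactly how the schema gets applied. Batching the whole RDD per micro-batch is also cheaper than creating a single-row DataFrame per record.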