The simplest method is to use the DataFrame abstraction shipped with Spark:
```scala
import kafka.serializer.StringDecoder
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.kafka.KafkaUtils

val sqlContext = new SQLContext(sc)
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, Set("myTopicName"))

stream.foreachRDD { rdd =>
  // Each Kafka record is a (key, value) pair; the value carries the JSON payload.
  val dataFrame = sqlContext.read.json(rdd.map(_._2)) // parses the JSON strings into a DataFrame
  // Do your operations on this DataFrame. You won't even require a model class.
}
```
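To give a sense of what those operations can look like: once the JSON is a DataFrame, you can register it as a temporary table and query it with plain SQL. A minimal sketch, assuming a hypothetical `name` column and table alias `events` that are not from the original:

```scala
// Inside foreachRDD, after building the DataFrame.
// "events" and the "name" column are hypothetical; adjust to your schema.
dataFrame.registerTempTable("events") // Spark 1.x API; use createOrReplaceTempView on 2.x
sqlContext.sql("SELECT name, COUNT(*) AS hits FROM events GROUP BY name").show()
```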
I use Play Framework's JSON library, which you can add to your project as a standalone module.
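If you build with sbt, pulling in the standalone module looks roughly like this (the coordinates are those of the standalone play-json artifact; the version shown is only an assumption, pick one that matches your Scala version):

```scala
// build.sbt -- version is an example, not a recommendation
libraryDependencies += "com.typesafe.play" %% "play-json" % "2.4.6"
```

Usage is as follows: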