How to create a Dataset of Maps?

Posted 2019-01-15 10:20

Question:

I'm using Spark 2.2 and am running into trouble when attempting to call spark.createDataset on a Seq of Map.

Code and output from my Spark Shell session follow:

// createDataset on Seq[T] where T = Int works
scala> spark.createDataset(Seq(1, 2, 3)).collect
res0: Array[Int] = Array(1, 2, 3)

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:24: error: Unable to find encoder for type stored in a Dataset.  
Primitive types (Int, String, etc) and Product types (case classes) are 
supported by importing spark.implicits._
Support for serializing other types will be added in future releases.
       spark.createDataset(Seq(Map(1 -> 2))).collect
                          ^

// createDataset on a custom case class containing a Map works
scala> case class MapHolder(m: Map[Int, Int])
defined class MapHolder

scala> spark.createDataset(Seq(MapHolder(Map(1 -> 2)))).collect
res2: Array[MapHolder] = Array(MapHolder(Map(1 -> 2)))

I've tried import spark.implicits._, though I'm fairly certain the Spark shell already imports it automatically.

Is this a case not covered by the current encoders?

Answer 1:

It is not covered in 2.2, but it is easy to address. You can add the required Encoder using ExpressionEncoder, either explicitly:

import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder  
import org.apache.spark.sql.Encoder

spark
  .createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])

or implicitly:

implicit def mapIntIntEncoder: Encoder[Map[Int, Int]] = ExpressionEncoder()
spark.createDataset(Seq(Map(1 -> 2)))

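If you need this for more than one concrete map type, the same ExpressionEncoder approach generalizes. Here is a minimal sketch (the mapEncoder name and the TypeTag bounds are mine, not something Spark ships under that name):

import scala.reflect.runtime.universe.TypeTag
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// One implicit covers every Map whose key and value types carry a TypeTag,
// instead of one definition per concrete map type.
implicit def mapEncoder[K: TypeTag, V: TypeTag]: Encoder[Map[K, V]] =
  ExpressionEncoder()

spark.createDataset(Seq(Map(1 -> 2), Map(3 -> 4))).collect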

Answer 2:

Just FYI, the above expression simply works in Spark 2.3 (as of this commit, if I'm not mistaken).

scala> spark.version
res0: String = 2.3.0

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
res1: Array[scala.collection.immutable.Map[Int,Int]] = Array(Map(1 -> 2))

I think it's because newMapEncoder is now part of spark.implicits.

scala> :implicits
...
  implicit def newMapEncoder[T <: scala.collection.Map[_, _]](implicit evidence$3: reflect.runtime.universe.TypeTag[T]): org.apache.spark.sql.Encoder[T]
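A quick way to see that resolution in action (just a sanity check, assuming a 2.3+ shell where spark.implicits._ is already in scope):

// Compiles only because newMapEncoder is in implicit scope.
val enc = implicitly[org.apache.spark.sql.Encoder[Map[Int, Int]]]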

You could "disable" the implicit by using the following trick and give the above expression a try (that will lead to an error).

// Any implicit named newMapEncoder defined in a nearer scope shadows the one
// imported from spark.implicits, even though this one is not an Encoder at all.
trait ThatWasABadIdea
implicit def newMapEncoder(ack: ThatWasABadIdea) = ack

scala> spark.createDataset(Seq(Map(1 -> 2))).collect
<console>:26: error: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._  Support for serializing other types will be added in future releases.
       spark.createDataset(Seq(Map(1 -> 2))).collect
                          ^
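Note that even with the implicit shadowed like this, the explicit form from the first answer still works, because passing the encoder by hand bypasses implicit search entirely:

import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder

// The second argument list supplies the encoder directly, so the compiler
// never has to find an implicit one.
spark.createDataset(Seq(Map(1 -> 2)))(ExpressionEncoder(): Encoder[Map[Int, Int]])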