My objective is to create a `MyDataFrame` class that knows how to fetch data at a given path, but I want to provide type safety. I'm having some trouble using a `frameless.TypedDataset` with type bounds on remote data. For example:
```scala
import frameless.TypedDataset
import org.apache.spark.sql.SparkSession

sealed trait Schema
final case class TableA(id: String) extends Schema
final case class TableB(id: String) extends Schema

class MyDataFrame[T <: Schema](path: String, implicit val spark: SparkSession) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```
But I keep getting `could not find implicit value for evidence parameter of type frameless.TypedEncoder[org.apache.spark.sql.Row]`. I know that `TypedDataset.create` needs an `Injection` for this to work, but I'm not sure how I would write one for a generic `T`. I thought that, since all subtypes of `Schema` are case classes, the compiler would be able to figure it out.
Anybody ever run into this?
All implicit parameters should be in the last parameter list and this parameter list should be separate from non-implicit ones.
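Applied to your class, that looks like this:

```scala
class MyDataFrame[T <: Schema](path: String)(implicit val spark: SparkSession) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```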
If you try to compile this, you'll see the error you quoted: `could not find implicit value for evidence parameter of type frameless.TypedEncoder[org.apache.spark.sql.Row]`. That's because `spark.read.parquet(path)` returns a `Dataset[Row]`, and `TypedDataset.create` demands a `TypedEncoder` for the dataset's element type, here `Row`.
So let's just add the corresponding implicit parameter:
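A sketch (the parameter name `te` is arbitrary):

```scala
import frameless.{TypedDataset, TypedEncoder}
import org.apache.spark.sql.{Row, SparkSession}

class MyDataFrame[T <: Schema](path: String)(implicit
    val spark: SparkSession,
    te: TypedEncoder[Row]
) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```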
Now we'll have a different error: `TypedDataset`'s `.as[T]` in turn needs an implicit `frameless.ops.As[Row, T]`, which is still missing for a generic `T`.
So let's add one more implicit parameter:
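Continuing the sketch:

```scala
import frameless.ops.As

class MyDataFrame[T <: Schema](path: String)(implicit
    val spark: SparkSession,
    te: TypedEncoder[Row],
    as: As[Row, T]
) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```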
or, with the kind-projector plugin, the same requirement as a context bound:
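Roughly (using kind-projector's `*` placeholder; older versions spell it `?`):

```scala
class MyDataFrame[T <: Schema : As[Row, *]](path: String)(implicit
    val spark: SparkSession,
    te: TypedEncoder[Row]
) {
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```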
Alternatively, you can create a custom type class that bundles both requirements:
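One possible shape; `Helper`, `mkHelper`, `te0`, and `as0` are illustrative names, not frameless API:

```scala
import frameless.{TypedDataset, TypedEncoder}
import frameless.ops.As
import org.apache.spark.sql.{Row, SparkSession}

trait Helper[T] {
  implicit def te: TypedEncoder[Row]
  implicit def as: As[Row, T]
}

object Helper {
  // Derived automatically whenever both underlying instances exist.
  implicit def mkHelper[T](implicit
      te0: TypedEncoder[Row],
      as0: As[Row, T]
  ): Helper[T] = new Helper[T] {
    override implicit def te: TypedEncoder[Row] = te0
    override implicit def as: As[Row, T] = as0
  }
}

class MyDataFrame[T <: Schema : Helper](path: String)(implicit val spark: SparkSession) {
  // Re-expose the bundled implicits so create/as can find them.
  val h = implicitly[Helper[T]]
  import h._
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```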
or take the `Helper` as an ordinary implicit parameter:
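```scala
class MyDataFrame[T <: Schema](path: String)(implicit val spark: SparkSession, h: Helper[T]) {
  import h._
  def read = TypedDataset.create(spark.read.parquet(path)).as[T]
}
```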
or resolve the instance inside the method body:
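```scala
class MyDataFrame[T <: Schema : Helper](path: String)(implicit val spark: SparkSession) {
  def read = {
    val h = implicitly[Helper[T]]
    import h._
    TypedDataset.create(spark.read.parquet(path)).as[T]
  }
}
```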
or defer the requirement to the method that actually needs it:
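```scala
class MyDataFrame[T <: Schema](path: String)(implicit val spark: SparkSession) {
  def read(implicit h: Helper[T]) = {
    import h._
    TypedDataset.create(spark.read.parquet(path)).as[T]
  }
}
```

Either way, all the implicit resolution is deferred to the call site, where `T` is concrete (e.g. `new MyDataFrame[TableA]("...").read`), and that's where you'd supply the `TypedEncoder[Row]`, for example via the `Injection` you mentioned.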