Scala 2.11 is out and the 22 fields limit for case classes seems to be fixed (Scala Issue, Release Notes).
This has been an issue for me for a while because I use case classes to model database entities that have more than 22 fields in Play + Postgres Async. My solution in Scala 2.10 was to break the models into multiple case classes, but I find that solution hard to maintain and extend. After switching to Play 2.3.0-RC1 + Scala 2.11.0, I was hoping I could implement something like the following:
package entities
case class MyDbEntity(
  id: String,
  field1: String,
  field2: Boolean,
  field3: String,
  field4: String,
  field5: String,
  field6: String,
  field7: String,
  field8: String,
  field9: String,
  field10: String,
  field11: String,
  field12: String,
  field13: String,
  field14: String,
  field15: String,
  field16: String,
  field17: String,
  field18: String,
  field19: String,
  field20: String,
  field21: String,
  field22: String,
  field23: String
)
object MyDbEntity {
  import play.api.libs.json.Json
  import play.api.data._
  import play.api.data.Forms._

  implicit val entityReads = Json.reads[MyDbEntity]
  implicit val entityWrites = Json.writes[MyDbEntity]
}
The code above fails to compile with the following message for both the "Reads" and the "Writes":
No unapply function found
Updating the "Reads" and "Writes" to:
import play.api.libs.json._
import play.api.libs.functional.syntax._

implicit val entityReads: Reads[MyDbEntity] = (
  (__ \ "id").read[String] and
  (__ \ "field1").read[String]
  ........
)(MyDbEntity.apply _)

implicit val entityWrites: Writes[MyDbEntity] = (
  (__ \ "id").write[String] and
  (__ \ "field1").write[String]
  ........
)(unlift(MyDbEntity.unapply))
Also doesn't work:
implementation restricts functions to 22 parameters
value unapply is not a member of object models.MyDbEntity
My understanding is that Scala 2.11 still has some limitations around functions, and that something like what I described above is not possible yet. This seems weird to me, as I don't see the benefit of lifting the restriction on case classes if one of its major use cases is still not supported, so I'm wondering if I'm missing something.
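For what it's worth, the widened case class itself is fine on 2.11: construction and copy both work. What's missing at this arity is the FunctionN/TupleN machinery (tupled, curried, and on 2.11/2.12 the synthesized unapply) that the JSON macros and the functional builder depend on. A minimal self-contained check (Wide is a made-up stand-in for MyDbEntity):

```scala
// A case class with 23 fields compiles on Scala 2.11+; only the
// arity-22-capped companion helpers (tupled, curried, unapply on
// 2.11/2.12) are absent, since FunctionN/TupleN stop at 22.
case class Wide(
  f1: Int, f2: Int, f3: Int, f4: Int, f5: Int, f6: Int, f7: Int, f8: Int,
  f9: Int, f10: Int, f11: Int, f12: Int, f13: Int, f14: Int, f15: Int,
  f16: Int, f17: Int, f18: Int, f19: Int, f20: Int, f21: Int, f22: Int,
  f23: Int)

val w = Wide(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
  18, 19, 20, 21, 22, 23)

// Construction and copy work as usual.
val w2 = w.copy(f23 = 42)
```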
Pointers to issues or implementation details are more than welcome! Thanks!
We were also breaking our models into multiple case classes, but this was quickly becoming unmanageable. We use Slick as our object-relational mapper, and Slick 2.0 comes with a code generator that we use to generate classes (which come with apply methods and copy constructors to mimic case classes), along with methods to instantiate models from Json. (We do not automatically generate methods to convert models into Json, because we have too many special cases to deal with.) Using the Slick code generator does not require you to use Slick as your object-relational mapper.
This is part of the input to the code generator - this method takes a JsObject and uses it to either instantiate a new model or update an existing model.
For example, with our ActivityLog model this produces the following code. If "original" is None then this is being called from a "createFromJson" method and we instantiate a new model; if "original" is Some(activityLog) then this is being called from an "updateFromJson" method and we update the existing model. The "condenseUnit" method called on the "val errs = ..." line takes a Seq[Try[Unit]] and produces a Try[Unit]; if the Seq has any errors, the resulting Try[Unit] concatenates the exception messages. The parseJsonField and parseField methods are not generated - they are just referenced from the generated code.
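The "condenseUnit" helper is described above but its code is not shown; here is a minimal sketch consistent with that description (collapsing a Seq[Try[Unit]] into one Try[Unit] and concatenating failure messages). The implementation details, including the separator, are my assumption:

```scala
import scala.util.{Failure, Success, Try}

// Hypothetical sketch of the "condenseUnit" method described above:
// if any element failed, produce a single Failure whose message is the
// concatenation of all the failure messages; otherwise Success(()).
def condenseUnit(results: Seq[Try[Unit]]): Try[Unit] = {
  val errs = results.collect { case Failure(e) => e.getMessage }
  if (errs.isEmpty) Success(())
  else Failure(new Exception(errs.mkString("; ")))
}
```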
You can use Jackson's Scala module. Play's JSON support is itself built on the Jackson Scala module, so I don't know why it imposes a 22-field limit when Jackson supports more. It may make sense that a function call never takes more than 22 parameters, but a DB entity can easily have hundreds of columns, so this restriction is ridiculous and makes Play less productive. Check this out:
This is not possible out of the box, for two reasons:

First, as gourlaysama pointed out, the play-json library uses a Scala macro to avoid boilerplate code, and the current implementation relies on the unapply and apply methods to retrieve fields. This explains the first error message in your question.

Second, the play-json library relies on a functional library that only works with a fixed number of parameters, matching the previous case class field arity limit of 22. This explains the second error message in your question.
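Note that the arity cap only bites the applicative builder, which combines all the fields in one big function application (and Scala 2's FunctionN/TupleN types stop at 22). A monadic chain binds one field at a time, so it has no such cap; here is a sketch with Option standing in for Play's JsResult (the names and the Map-based lookup are purely illustrative):

```scala
// Three fields shown; the same pattern extends past 22 fields, because
// each step binds a single value instead of applying one 23-ary function.
case class Entity(id: String, field1: String, field2: String)

def str(m: Map[String, String], k: String): Option[String] = m.get(k)

def parse(m: Map[String, String]): Option[Entity] =
  for {
    id <- str(m, "id")
    f1 <- str(m, "field1")
    f2 <- str(m, "field2")
    // ...one line per field, with no 22-field limit
  } yield Entity(id, f1, f2)
```

JsResult also has flatMap, so the same for-comprehension shape works when hand-writing a Reads, at the cost of one line per field.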
However, it is possible to work around the second point by either:

using the shapeless Automatic Typeclass Derivation feature. Naveen Gattu has written an excellent gist doing exactly that.

overriding the default functional builder: first by creating the missing FunctionalBuilder, and then by providing your own FunctionalBuilderOps instance.

Finally, regarding the first point, I have sent a pull request to try to simplify the current implementation.
There are cases where case classes might not work; one of these is that case classes cannot take more than 22 fields. Another is that you do not know the schema beforehand. In this approach, the data is loaded as an RDD of Row objects. The schema is created separately using the StructType and StructField objects, which represent a table and a field respectively, and is then applied to the row RDD to create a DataFrame in Spark.
I'm making a library for this; please try it: https://github.com/xuwei-k/play-twenty-three
I tried the shapeless "Automatic Typeclass Derivation" based solution proposed in another answer, and it didn't work for our models - it was throwing StackOverflow exceptions (a case class with ~30 fields and 4 nested collections of case classes with 4-10 fields each).

So we adopted this solution instead, and it worked flawlessly; we confirmed that by writing a ScalaCheck test. Note that it requires Play Json 2.4.