I have downloaded the latest update of the text classification template. I created a new app and imported stopwords.json and emails.json, specifying the app ID:
$ pio import --appid <appID> --input data/stopwords.json
$ pio import --appid <appID> --input data/emails.json
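(For reference, the app was created with pio app new and the numeric app ID was taken from pio app list; the app name below is just a placeholder.)
$ pio app new MyTextApp
$ pio app list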
Then I changed engine.json and set my app name in it (the relevant part is shown below):
{
  "id": "default",
  "description": "Default settings",
  "engineFactory": "org.template.textclassification.TextClassificationEngine",
  "datasource": {
    "params": {
      "appName": "<myapp>",
      "evalK": 3
    }
  }
}
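After pio build, the evaluation was run with pio eval in its usual form, passing the Evaluation and EngineParamsGenerator classes that come with the template (shown as placeholders here, since I am quoting from memory):
$ pio build
$ pio eval <EvaluationClass> <EngineParamsGeneratorClass>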
This evaluation step fails with an empty.maxBy error. Part of the error output is pasted below:
[INFO] [Engine$] Preparator: org.template.textclassification.Preparator@79a13920
[INFO] [Engine$] AlgorithmList: List(org.template.textclassification.LRAlgorithm@420a8042)
[INFO] [Engine$] Serving: org.template.textclassification.Serving@faea4da
Exception in thread "main" java.lang.UnsupportedOperationException: empty.maxBy
at scala.collection.TraversableOnce$class.maxBy(TraversableOnce.scala:223)
at scala.collection.AbstractTraversable.maxBy(Traversable.scala:105)
at org.template.textclassification.PreparedData.<init>(Preparator.scala:160)
at org.template.textclassification.Preparator.prepare(Preparator.scala:39)
at org.template.textclassification.Preparator.prepare(Preparator.scala:35)
at io.prediction.controller.PPreparator.prepareBase(PPreparator.scala:34)
at io.prediction.controller.Engine$$anonfun$25.apply(Engine.scala:758)
at scala.collection.MapLike$MappedValues.get(MapLike.scala:249)
at scala.collection.MapLike$MappedValues.get(MapLike.scala:249)
at scala.collection.MapLike$class.apply(MapLike.scala:140)
at scala.collection.AbstractMap.apply(Map.scala:58)
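From the stack trace it looks like the Preparator is calling maxBy on an empty collection, which suggests no documents reached it. This minimal Scala snippet (termCounts is just a placeholder name of mine) reproduces the same exception:

// maxBy on an empty collection throws exactly the exception seen above
val termCounts: Map[String, Int] = Map.empty
termCounts.maxBy(_._2)  // java.lang.UnsupportedOperationException: empty.maxBy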
Then I tried pio train, but training also fails after printing some observations. The error shown is java.lang.OutOfMemoryError: Java heap space. Part of the error output is pasted below:
[INFO] [Engine$] Data santiy check is on.
[INFO] [Engine$] org.template.textclassification.TrainingData supports data sanity check. Performing check.
Observation 1 label: 1.0
Observation 2 label: 0.0
Observation 3 label: 0.0
Observation 4 label: 1.0
Observation 5 label: 1.0
[INFO] [Engine$] org.template.textclassification.PreparedData does not support data sanity check. Skipping check.
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
[WARN] [BLAS] Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
[INFO] [Engine$] org.template.textclassification.NBModel does not support data sanity check. Skipping check.
[INFO] [Engine$] EngineWorkflow.train completed
[INFO] [Engine] engineInstanceId=AU3g4XyhTrUUakX3xepP
[INFO] [CoreWorkflow$] Inserting persistent model
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:155)
at com.twitter.chill.Tuple2Serializer.write(TupleSerializers.scala:36)
at com.twitter.chill.Tuple2Serializer.write(TupleSerializers.scala:33)
at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:568)
at com.twitter.chill.TraversableSerializer$$anonfun$write$1.apply(Traversable.scala:29)
Is this because of a memory shortage? I have run the previous version of the same template with more than 40 MB of text classification data without issues. Is evaluation required before training? Also, could you please explain how the evaluation is performed?
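If it is a memory issue, would passing more memory to Spark during training help, e.g. something like this (assuming pio train forwards arguments after -- to spark-submit)?
$ pio train -- --driver-memory 4G --executor-memory 4G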