I want to ask about the specifics of how to successfully use `checkpointInterval` in Spark, and what is meant by this comment in the code for ALS: https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/recommendation/ALS.scala
> If the checkpoint directory is not set in [[org.apache.spark.SparkContext]], this setting is ignored.
- How can we set the checkpoint directory? Can we use any HDFS-compatible directory for this?
- Is using `setCheckpointInterval` the correct way to implement checkpointing in ALS and avoid StackOverflowError?
Edit:
You can use `SparkContext.setCheckpointDir`. As far as I remember, in local mode both local and DFS paths work just fine, but on a cluster the directory must be an HDFS path. It should help; see SPARK-1006.
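For reference, here is a minimal sketch of the whole flow. The checkpoint path and input path (`hdfs:///tmp/als-checkpoints`, `hdfs:///data/ratings.csv`) are placeholders I made up; substitute any HDFS-compatible directory your cluster can write to. The key point is that `setCheckpointDir` is called before training:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.recommendation.{ALS, Rating}

val conf = new SparkConf().setAppName("ALSCheckpointExample")
val sc = new SparkContext(conf)

// Must be set BEFORE training; otherwise checkpointInterval is ignored.
sc.setCheckpointDir("hdfs:///tmp/als-checkpoints")  // placeholder path

// Placeholder input: CSV lines of "user,item,rating".
val ratings = sc.textFile("hdfs:///data/ratings.csv").map { line =>
  val Array(user, item, rating) = line.split(",")
  Rating(user.toInt, item.toInt, rating.toDouble)
}

// Checkpoint every 2 iterations to truncate the RDD lineage, which is
// what prevents StackOverflowError on long iterative jobs.
val model = new ALS()
  .setRank(10)
  .setIterations(20)
  .setCheckpointInterval(2)
  .run(ratings)
```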
PS: It seems that in order to actually perform checkpointing in ALS, the `checkpointDir` must be set, or checkpointing won't be effective [Ref. here.]
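If you'd rather fail fast than have checkpointing silently skipped, one possible guard (my own suggestion, not part of ALS itself) is to check `SparkContext.getCheckpointDir` before training:

```scala
// getCheckpointDir returns Option[String]; it is empty if
// setCheckpointDir was never called on this SparkContext.
require(sc.getCheckpointDir.isDefined,
  "Call sc.setCheckpointDir(...) before training, or checkpointing is a no-op")
```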