What's the difference between the following 2?
object Example1 {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.getOrCreate()
    try {
      // spark code here
    } finally {
      spark.close()
    }
  }
}
object Example2 {
  val spark = SparkSession.builder.getOrCreate()
  def main(args: Array[String]): Unit = {
    // spark code here
  }
}
I know that SparkSession implements Closeable and it hints that it needs to be closed. However, I can't think of any issues if the SparkSession is just created as in Example2 and never closed directly.
In case of success or failure of the Spark application (and exit from main method), the JVM will terminate and the SparkSession will be gone with it. Is this correct?
IMO: The fact that the SparkSession is a singleton should not make a big difference either.
You should always close your SparkSession when you are done with it (even if the only outcome were following the good practice of giving back what you've been given).

Closing a SparkSession may trigger freeing cluster resources that could be given to some other application.

A SparkSession is a session and as such maintains some resources that consume JVM memory. You can have as many SparkSessions as you want (see SparkSession.newSession to create a session afresh), but you don't want them to use memory they shouldn't, and hence you close the one you no longer need.

SparkSession is Spark SQL's wrapper around Spark Core's SparkContext, so under the covers (as in any Spark application) you have cluster resources, i.e. vcores and memory, assigned to your SparkSession (through its SparkContext). That means that as long as your SparkContext is in use (via the SparkSession), the cluster resources won't be assigned to other tasks (not necessarily Spark's, but also other non-Spark applications submitted to the cluster). These cluster resources are yours until you say "I'm done", which translates to calling close.
If, however, you simply exit the Spark application without calling close, you don't have to worry: the resources will be released automatically anyway. The JVMs for the driver and the executors terminate, and so does the (heartbeat) connection to the cluster, so eventually the resources are given back to the cluster manager, which can then offer them to some other application.

Both are the same!
A SparkSession's stop/close eventually calls the SparkContext's stop.

SparkContext registers a runtime shutdown hook to stop itself before the JVM exits; the hook is added while the context is being created.
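Spark's actual registration (done via its internal ShutdownHookManager in SparkContext) isn't reproduced here, but the underlying JVM mechanism can be sketched with plain Runtime.addShutdownHook. The Context class below is a hypothetical stand-in for SparkContext, not Spark's source:

```scala
// Sketch of the shutdown-hook pattern: register a hook when the context is
// created, and cancel it on a manual stop() so the context isn't stopped twice.
final class Context {
  @volatile private var stopped = false

  // Registered while the context is being created, mirroring SparkContext.
  private val hook = new Thread(() => doStop("shutdown hook"))
  Runtime.getRuntime.addShutdownHook(hook)

  private def doStop(caller: String): Unit =
    if (!stopped) {
      stopped = true
      println(s"context stopped by $caller")
    }

  def stop(): Unit = {
    // Manual stop: cancel the hook first to avoid a duplicate stop at JVM exit.
    val cancelled = Runtime.getRuntime.removeShutdownHook(hook)
    println(s"hook cancelled: $cancelled")
    doStop("stop()")
  }
}

object ShutdownHookDemo {
  def main(args: Array[String]): Unit = {
    val ctx = new Context
    ctx.stop()
    // If stop() were never called, the hook would fire at JVM exit instead.
  }
}
```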
So stop will be called irrespective of how the JVM exits. If you call stop() manually, the shutdown hook is cancelled to avoid stopping the context twice.