According to the Spark on Mesos docs, one needs to set spark.executor.uri to point to a Spark distribution:
val conf = new SparkConf()
.setMaster("mesos://HOST:5050")
.setAppName("My app")
.set("spark.executor.uri", "<path to spark-1.4.1.tar.gz uploaded above>")
The docs also note that one can build a custom version of the Spark distribution.
My question now is whether it is possible/desirable to pre-package external libraries such as
- spark-streaming-kafka
- elasticsearch-spark
- spark-csv
which will be used in almost all of the job JARs I'll submit via spark-submit, in order to
- reduce the time sbt assembly needs to package the fat JARs
- reduce the size of the fat JARs which need to be submitted
If so, how can this be achieved? Generally speaking, are there any hints on how fat-jar generation in the job submission process can be sped up?
The background is that I want to run some code generation for Spark jobs, submit them right away, and show the results asynchronously in a browser frontend. The frontend part shouldn't be too complicated, but I wonder how the backend part can be achieved.
After discovering the Spark JobServer project, I decided that it is the most suitable one for my use case.
It supports dynamic context creation via a REST API, as well as adding JARs to the newly created context manually or programmatically. It is also capable of running low-latency synchronous jobs, which is exactly what I need.
I created a Dockerfile so you can try it out with the most recent (supported) versions of Spark (1.4.1), Spark JobServer (0.6.0) and built-in Mesos support (0.24.1):
When you say pre-package do you really mean distribute to all the slaves and set up the jobs to use those packages so that you don't need to download those every time? That might be an option, however it sounds a bit cumbersome because distributing everything to the slaves and keeping all the packages up to date is not an easy task.
How about breaking your .tar.gz into smaller pieces, so that instead of a single fat file your jobs fetch several smaller files? In this case it should be possible to leverage the Mesos Fetcher Cache. So you'll see bad performance when the agent cache is cold, but once it warms up (i.e. once one job runs and downloads the common files locally) consecutive jobs will complete faster.
Create a sample Maven project with all your dependencies and then use the maven-shade-plugin. It will create one shaded JAR in your target folder. Here is a sample pom:
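A minimal sketch of the relevant build section: binding the shade plugin to the package phase is the standard usage; the plugin version here is illustrative:

```xml
<!-- Sketch: bind maven-shade-plugin to the package phase so
     `mvn package` produces a single shaded (uber) JAR in target/.
     The version number is illustrative. -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>2.4.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```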
Yeah, you can copy the dependencies out to the workers and put them in the system-wide JVM lib directory in order to get them on the classpath.
Then you can mark those dependencies as provided in your sbt build, and they won't be included in the assembly. This speeds up both assembly and transfer time.
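In build.sbt that amounts to adding the "provided" scope to the shared libraries; a sketch, with version numbers that are illustrative rather than prescriptive:

```scala
// Sketch of a build.sbt fragment: dependencies marked "provided" are
// available at compile time but excluded from the sbt-assembly fat JAR,
// because they are expected to already be on the workers' classpath.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "1.4.1" % "provided",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.4.1" % "provided",
  "com.databricks"   %% "spark-csv"             % "1.2.0" % "provided"
)
```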
I haven't tried this on mesos specifically, but have used it on spark standalone for things that are in every job and rarely change.