How to change memory per node for Apache Spark worker

Posted 2019-01-10 23:09

Question:

I am configuring an Apache Spark cluster.

When I run the cluster with 1 master and 3 slaves, I see this on the master monitor page:

Memory
2.0 GB (512.0 MB Used)
2.0 GB (512.0 MB Used)
6.0 GB (512.0 MB Used)

I want to increase the memory used by the workers, but I could not find the right config for this. I have changed spark-env.sh as follows:

export SPARK_WORKER_MEMORY=6g
export SPARK_MEM=6g
export SPARK_DAEMON_MEMORY=6g
export SPARK_JAVA_OPTS="-Dspark.executor.memory=6g"
export JAVA_OPTS="-Xms6G -Xmx6G"

But the used memory is still the same. What should I do to change used memory?

Answer 1:

For Spark 1.0.0 and later, when using spark-shell or spark-submit, use the --executor-memory option. E.g.:

spark-shell --executor-memory 8G ...

For 0.9.0 and earlier:

Change the memory when you start a job or the shell. We had to modify the spark-shell script so that it would carry command-line arguments through as arguments for the underlying Java application. In particular:

OPTIONS="$@"
...
$FWDIR/bin/spark-class $OPTIONS org.apache.spark.repl.Main "$@"

Then we can run our spark shell as follows:

spark-shell -Dspark.executor.memory=6g

When configuring it for a standalone jar, I set the system property programmatically before creating the Spark context and pass the value in as a command-line argument (which keeps it shorter than the long-winded system property):

System.setProperty("spark.executor.memory", valueFromCommandLine)
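
For illustration, a minimal sketch of that pattern (the object name MyJob and the master URL are hypothetical; this is the pre-1.0 system-property style):

import org.apache.spark.SparkContext

object MyJob {
  def main(args: Array[String]): Unit = {
    // Hypothetical convention: the first command-line argument carries
    // the executor memory, e.g. "6g"
    val executorMemory = if (args.nonEmpty) args(0) else "512m"

    // Pre-1.0 style: the property must be set BEFORE the SparkContext
    // is created, because it is read at construction time
    System.setProperty("spark.executor.memory", executorMemory)

    val sc = new SparkContext("spark://master:7077", "my-job")
    // ... job logic ...
    sc.stop()
  }
}

The job is then launched with the memory value as its first argument, e.g. MyJob 6g.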

As for changing the cluster-wide default, sorry, I'm not entirely sure how to do it properly.

One final point: I'm a little worried that you have two nodes with 2 GB and one with 6 GB. The memory you can use will be limited by the smallest node, so here 2 GB.



Answer 2:

In Spark 1.1.1, you can set the executor memory in conf/spark-env.sh:

export SPARK_EXECUTOR_MEMORY=2G

If you have not created the config file yet, copy the template file:

cp conf/spark-env.sh.template conf/spark-env.sh

Then make the change, and don't forget to source it:

source conf/spark-env.sh


Answer 3:

In my case, I use an IPython notebook server to connect to Spark, and I want to increase the executor memory.

This is what I do:

from pyspark import SparkContext
from pyspark.conf import SparkConf

# Executor memory must be set on the SparkConf before the SparkContext
# is created; changing it afterwards has no effect
conf = SparkConf()
conf.setMaster(CLUSTER_URL).setAppName('ipython-notebook').set("spark.executor.memory", "2g")

sc = SparkContext(conf=conf)


Answer 4:

According to the Spark documentation, you can change the memory per node with the command-line argument --executor-memory when submitting your application. E.g.:

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://master.node:7077 \
  --executor-memory 8G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

I've tested this and it works.



Answer 5:

By default, a worker allocates the host's total memory minus 1 GB. The configuration parameter to adjust that value manually is SPARK_WORKER_MEMORY, as in your question:

export SPARK_WORKER_MEMORY=6g

Note that this only raises the total memory a worker can offer to executors; the "(512.0 MB Used)" figure on the master page reflects what running executors actually claim, which is controlled by spark.executor.memory.