- I have a remote Ubuntu server on linode.com with 4 cores and 8 GB RAM.
- I have a Spark 2 cluster consisting of 1 master and 1 slave on my remote Ubuntu server.
I started the PySpark shell locally on my MacBook and connected to the master node on the remote server with:
$ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark --master spark://[server-ip]:7077
I tried executing the simple Spark example from the website:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()

df = spark.read.json("/path/to/spark-2.0.0-bin-hadoop2.7/examples/src/main/resources/people.json")
and got this error:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I have enough memory on my server and also on my local machine, but I keep getting this strange error again and again. My Spark cluster has 6 GB available, and my script requests only 4 cores with 1 GB of memory per node.
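For reference, those resource caps can be set explicitly when launching the shell against a standalone master (a sketch using standard Spark standalone flags; the install path and IP are placeholders):

```shell
# Cap the application at 4 cores total and 1 GiB per executor,
# so it fits inside the workers' 6 GiB pool.
PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark \
  --master spark://[server-ip]:7077 \
  --total-executor-cores 4 \
  --executor-memory 1g
```

`--total-executor-cores` applies to standalone (and Mesos) clusters; if the flags are accepted, the limits show up in the application row of the master UI on port 8080.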
I have Googled this error and tried different memory configurations, and also disabled the firewall on both machines, but nothing has helped. I have no idea how to fix it.
Has anyone faced the same problem? Any ideas?
Answer 1:
You are submitting the application in client mode. That means the driver process is started on your local machine.
When executing Spark applications, all machines have to be able to communicate with each other. Most likely your driver process is not reachable from the executors (for example, it is using a private IP or is hidden behind a firewall). If that is the case, you can confirm it by checking the executor logs: go to the application, select one of the workers with the status EXITED, and check its stderr. You should see the executor failing due to org.apache.spark.rpc.RpcTimeoutException.
There are two possible solutions:
- Submit the application from a machine that can be reached from your cluster.
- Submit the application in cluster mode. This uses cluster resources to start the driver process, so you have to account for that.
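If you stick with the first option but want to keep the driver on your laptop, one workaround is to advertise an address the executors can actually route to and pin the driver ports so you can open them in the firewall. This is a sketch using standard Spark configuration keys; [your-public-ip] and the port numbers are placeholders you would adapt:

```shell
# Advertise a reachable driver address and fix the ports so that
# [your-public-ip]:51000-51001 can be opened in the firewall.
PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark \
  --master spark://[server-ip]:7077 \
  --conf spark.driver.host=[your-public-ip] \
  --conf spark.driver.port=51000 \
  --conf spark.blockManager.port=51001
```

Note that for the second option, the standalone cluster manager in Spark 2.0 does not support cluster deploy mode for Python applications, so an interactive PySpark shell always runs its driver in client mode; cluster mode would require packaging the job and a cluster manager that supports it.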