- I have a remote Ubuntu server on linode.com with 4 cores and 8 GB of RAM.
- I have a Spark 2 cluster consisting of 1 master and 1 slave running on that remote Ubuntu server.
I started the PySpark shell locally on my MacBook and connected it to the master node on the remote server with:
$ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark --master spark://[server-ip]:7077
I tried executing a simple Spark example from the website:
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()

df = spark.read.json("/path/to/spark-2.0.0-bin-hadoop2.7/examples/src/main/resources/people.json")
I got the following error:
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
I have enough memory on my server and on my local machine, but I keep getting this error again and again. I have 6 GB allocated to my Spark cluster, and my script requests only 4 cores with 1 GB of memory per node.
I have Googled this error and tried different memory configurations, and I also disabled the firewall on both machines, but nothing has helped. I have no idea how to fix it.
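For reference, the resource limits I mention can be passed on the command line when starting the shell; a minimal sketch using the standard standalone-mode flags (the values just mirror what I described above):

$ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark \
    --master spark://[server-ip]:7077 \
    --total-executor-cores 4 \
    --executor-memory 1g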
Has anyone faced the same problem? Any ideas?
You are submitting the application in client mode. This means the driver process is started on your local machine.
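A quick way to see which address the driver is advertising to the executors is to read it from the running PySpark shell; a minimal sketch (spark is the SparkSession the shell creates):

# Print the host/port the driver advertises to executors.
# If this shows a private or otherwise unreachable address, executors cannot call back.
conf = spark.sparkContext.getConf()
print(conf.get("spark.driver.host"))              # address executors will try to reach
print(conf.get("spark.driver.port", "not set"))   # driver RPC port (random unless pinned)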
When executing Spark applications, all machines have to be able to communicate with each other. Most likely your driver process is not reachable from the executors (for example, it is using a private IP or is hidden behind a firewall). If that is the case, you can confirm it by checking the executor logs: go to the application in the cluster UI, select one of the workers with the status EXITED, and check its stderr. You "should" see that the executor is failing due to org.apache.spark.rpc.RpcTimeoutException.

There are two possible solutions:
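For illustration, one common way to make a driver behind a firewall reachable is to pin the address and ports it advertises so they can be opened explicitly; a minimal sketch, assuming [macbook-public-ip] is an address the workers can actually reach (the port numbers are arbitrary examples):

$ PYSPARK_PYTHON=python3 /vagrant/spark-2.0.0-bin-hadoop2.7/bin/pyspark \
    --master spark://[server-ip]:7077 \
    --conf spark.driver.host=[macbook-public-ip] \
    --conf spark.driver.port=7001 \
    --conf spark.blockManager.port=7002
# Then allow inbound TCP on the pinned ports (7001-7002 here) in the local firewall.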