Whenever I try to run a simple job in PySpark, it fails to open the socket.
>>> myRDD = sc.parallelize(range(6), 3)
>>> sc.runJob(myRDD, lambda part: [x * x for x in part])
The above throws the following exception:
port 53554 , proto 6 , sa ('127.0.0.1', 53554)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Volumes/work/bigdata/spark-custom/python/pyspark/context.py", line 917, in runJob
return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
File "/Volumes/work/bigdata/spark-custom/python/pyspark/rdd.py", line 143, in _load_from_socket
raise Exception("could not open socket")
Exception: could not open socket
>>> 15/08/30 19:03:05 ERROR PythonRDD: Error while sending iterator
java.net.SocketTimeoutException: Accept timed out
at java.net.PlainSocketImpl.socketAccept(Native Method)
at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:404)
at java.net.ServerSocket.implAccept(ServerSocket.java:545)
at java.net.ServerSocket.accept(ServerSocket.java:513)
at org.apache.spark.api.python.PythonRDD$$anon$2.run(PythonRDD.scala:613)
I traced through _load_from_socket in rdd.py and realised that it does receive a port, but the server on that port never seems to start, so the runJob call itself might be the issue:
port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
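For context, this is roughly what _load_from_socket does in my build (a hand-copied sketch, so details such as the exact timeout may differ from the real source): the Python side only tries to connect to localhost on the port returned by runJob, with a short timeout, so when the JVM-side accept times out (as in the ERROR above) the connect fails and the generic "could not open socket" is raised.

import socket

def _load_from_socket_sketch(port, serializer):
    # Try every address that "localhost" resolves to (IPv4 and IPv6).
    sock = None
    for res in socket.getaddrinfo("localhost", port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        af, socktype, proto, _, sa = res
        try:
            sock = socket.socket(af, socktype, proto)
            sock.settimeout(3)   # short connect timeout; gives up quickly
            sock.connect(sa)
        except socket.error:
            sock = None
            continue
        break
    if not sock:
        raise Exception("could not open socket")
    # Stream the job results that the JVM writes to this socket.
    rf = sock.makefile("rb", 65536)
    try:
        for item in serializer.load_stream(rf):
            yield item
    finally:
        sock.close()

So it looks like the JVM side never accepts the connection on that port, rather than the Python side asking for the wrong port.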