I created a Spark cluster, SSHed into the master, and launched the shell:
MASTER=yarn-client ./spark/bin/pyspark
When I do the following:
x = sc.textFile("s3://location/files.*")
xt = x.map(handlejson)
table = sqlctx.inferSchema(xt)
I get the following error:
Error from python worker:
/usr/bin/python: No module named pyspark
PYTHONPATH was:
/mnt1/var/lib/hadoop/tmp/nm-local-dir/usercache/hadoop/filecache/11/spark-assembly-1.1.0-hadoop2.4.0.jar
java.io.EOFException
java.io.DataInputStream.readInt(DataInputStream.java:392)
org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:151)
org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:78)
org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:54)
org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:97)
org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:745)
I also checked PYTHONPATH:
>>> os.environ['PYTHONPATH']
'/home/hadoop/spark/python/lib/py4j-0.8.2.1-src.zip:/home/hadoop/spark/python/:/home/hadoop/spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar'
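For reference, the driver-side entries can be sanity-checked with a few lines of stdlib Python (a rough check only: it says nothing about the worker nodes, which is where the import actually fails):

```python
import os

def check_pythonpath(pythonpath):
    """Return (entry, exists_on_disk) pairs for a PYTHONPATH-style string."""
    return [(entry, os.path.exists(entry))
            for entry in pythonpath.split(os.pathsep) if entry]

# Print each entry of the current driver-side PYTHONPATH and whether
# it exists locally; missing entries show up as MISSING.
for entry, ok in check_pythonpath(os.environ.get("PYTHONPATH", "")):
    print(entry, "ok" if ok else "MISSING")
```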
I also looked inside the jar for pyspark, and it's there:
jar -tf /home/hadoop/spark/lib/spark-assembly-1.1.0-hadoop2.4.0.jar | grep pyspark
pyspark/
pyspark/shuffle.py
pyspark/resultiterable.py
pyspark/files.py
pyspark/accumulators.py
pyspark/sql.py
pyspark/java_gateway.py
pyspark/join.py
pyspark/serializers.py
pyspark/shell.py
pyspark/rddsampler.py
pyspark/rdd.py
....
Has anyone run into this before? Thanks!
You'll want to reference these Spark issues:
The solution (assuming you would rather not rebuild your jar):
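One workaround sometimes used for this class of failure is to point the YARN executors at an unpacked copy of pyspark rather than at the assembly jar, via `spark.executorEnv.PYTHONPATH`. This is a sketch, not a verified fix: the paths are assumptions copied from the driver-side PYTHONPATH above, and they must exist on every worker node.

```shell
# Sketch of a workaround (paths are assumptions -- they must be valid on
# every worker node, not just the master). spark.executorEnv.* sets an
# environment variable for the executors, so the Python workers they
# spawn can find pyspark without reading it out of the assembly jar.
MASTER=yarn-client ./spark/bin/pyspark \
  --conf "spark.executorEnv.PYTHONPATH=/home/hadoop/spark/python:/home/hadoop/spark/python/lib/py4j-0.8.2.1-src.zip"
```

This is a config fragment for an EMR/YARN cluster and is not runnable standalone.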
This is fixed in later builds on EMR. See https://github.com/awslabs/emr-bootstrap-actions/tree/master/spark for release notes and instructions.