importing pyspark in python shell

Posted 2019-01-03 12:36

This is a copy of someone else's question on another forum that was never answered, so I thought I'd re-ask it here, as I have the same issue. (See http://geekple.com/blogs/feeds/Xgzu7/posts/351703064084736)

I have Spark installed properly on my machine and am able to run python programs with the pyspark modules without error when using ./bin/pyspark as my python interpreter.

However, when I run the regular Python shell and try to import pyspark modules:

from pyspark import SparkContext

I get this error:

"No module named pyspark".

How can I fix this? Is there an environment variable I need to set to point Python to the pyspark headers/libraries/etc.? If my spark installation is /spark/, which pyspark paths do I need to include? Or can pyspark programs only be run from the pyspark interpreter?

17 answers
我命由我不由天
#2 · 2019-01-03 12:44

It turns out that the pyspark launcher script loads Python and sets up the correct library paths automatically. Check out $SPARK_HOME/bin/pyspark:

# Add the PySpark classes to the Python path:
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH

I added this line to my .bashrc file and the modules are now correctly found!
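
To confirm that a plain python shell now finds the module (and not only ./bin/pyspark), a minimal check like the sketch below should work, assuming SPARK_HOME and the PYTHONPATH export above are in effect in the same shell; the app name is arbitrary, and if the import still fails on py4j, see the answers below about adding the py4j path as well.

# Minimal sanity check from a regular Python shell, assuming SPARK_HOME
# and PYTHONPATH are already exported as above.
from pyspark import SparkContext

sc = SparkContext("local[2]", "pythonpath-check")
print(sc.parallelize(range(10)).sum())  # should print 45
sc.stop()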

贪生不怕死
#3 · 2019-01-03 12:45
export PYSPARK_PYTHON=/home/user/anaconda3/bin/python
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS='notebook'

This is what I did to use my Anaconda distribution with Spark. It is Spark-version independent. You can change the first line to point to your user's Python binary. Also, as of Spark 2.2.0 PySpark is available as a stand-alone package on PyPI, but I have yet to test it out.
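
If you do go the PyPI route (pip install pyspark into the same Python/Anaconda environment), a plain Python interpreter can start a session with no PYTHONPATH changes at all; a rough sketch, where the master and app name are just example values:

# Assumes `pip install pyspark` in the active environment;
# no PYTHONPATH or SPARK_HOME tweaks should be needed.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[2]")
         .appName("pip-install-check")
         .getOrCreate())
print(spark.range(5).count())  # should print 5
spark.stop()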

做个烂人
#4 · 2019-01-03 12:47

If it prints an error like this:

ImportError: No module named py4j.java_gateway

Please add $SPARK_HOME/python/build to PYTHONPATH:

export SPARK_HOME=/Users/pzhang/apps/spark-1.1.0-bin-hadoop2.4
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/build:$PYTHONPATH
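
A quick way to verify that both paths are picked up is to try the two imports that typically fail, from a regular Python shell with the exports above in effect:

# Both imports should now succeed.
from py4j.java_gateway import JavaGateway   # provided via $SPARK_HOME/python/build
from pyspark import SparkContext

print("pyspark and py4j are importable")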
#5 · 2019-01-03 12:52

I am running a Spark cluster on a CentOS VM, installed from the Cloudera yum packages.

I had to set the following variables to run pyspark.

export SPARK_HOME=/usr/lib/spark;
export PYTHONPATH=$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
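
If you prefer not to touch shell profiles, roughly the same effect can be had from inside Python by extending sys.path before the import. A sketch assuming the same Cloudera layout and py4j version as the exports above; adjust the paths to your installation:

import os
import sys

# Same locations as the exports above; change SPARK_HOME and the
# py4j zip name to match your install.
spark_home = "/usr/lib/spark"
os.environ["SPARK_HOME"] = spark_home
sys.path.insert(0, os.path.join(spark_home, "python"))
sys.path.insert(0, os.path.join(spark_home, "python", "lib", "py4j-0.9-src.zip"))

from pyspark import SparkContext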
Fickle 薄情
#6 · 2019-01-03 12:53

On a Mac, I use Homebrew to install Spark (formula "apache-spark"). Then I set the PYTHONPATH this way so the Python import works:

export SPARK_HOME=/usr/local/Cellar/apache-spark/1.2.0
export PYTHONPATH=$SPARK_HOME/libexec/python:$SPARK_HOME/libexec/python/build:$PYTHONPATH

Replace the "1.2.0" with the actual apache-spark version on your mac.

等我变得足够好
#7 · 2019-01-03 12:53

For Spark to run from pyspark, two components are required to work together:

  • the pyspark Python package
  • a Spark instance in a JVM

When you launch things with spark-submit or pyspark, these scripts take care of both: they set up your PYTHONPATH, PATH, etc. so that your script can find pyspark, and they also start the Spark instance, configured according to your parameters, e.g. --master X.

Alternatively, it is possible to bypass these scripts and run your Spark application directly in the Python interpreter, like python myscript.py. This is especially useful when Spark scripts start to become more complex and eventually receive their own arguments. In that case:

  1. Ensure the pyspark package can be found by the Python interpreter. As already discussed, either add the spark/python dir to PYTHONPATH or install pyspark directly with pip install.
  2. Set the parameters of the Spark instance from your script (those that used to be passed to pyspark).
    • Spark configurations that you would normally set with --conf are defined with a config object (or string configs) via SparkSession.builder.config.
    • Main options (like --master or --driver-memory) can, for the moment, be set by writing to the PYSPARK_SUBMIT_ARGS environment variable. To make things cleaner and safer you can set it from within Python itself, and Spark will read it when starting.
  3. Start the instance, which just requires you to call getOrCreate() from the builder object.

Your script can therefore have something like this:

import os

from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Main options that would otherwise go to spark-submit, e.g. "--master local[4]"
    spark_main_opts = "--master local[4]"
    if spark_main_opts:
        os.environ['PYSPARK_SUBMIT_ARGS'] = spark_main_opts + " pyspark-shell"

    # Set spark config
    spark = (SparkSession.builder
             .config("spark.checkpoint.compress", True)
             .config("spark.jars.packages", "graphframes:graphframes:0.5.0-spark2.1-s_2.11")
             .getOrCreate())