Goal: I am trying to get Apache Spark's pyspark to be correctly interpreted within my PyCharm IDE.
Problem: I currently receive the following error:
ImportError: cannot import name accumulators
I was following this blog to help me through the process: http://renien.github.io/blog/accessing-pyspark-pycharm/
Because my code was taking the except path, I removed the try/except just to see what the exact error was.
Prior to this I received the following error:
ImportError: No module named py4j.java_gateway
This was fixed simply by typing '$sudo pip install py4j' in bash.
My code currently looks like the following chunk:
import os
import sys

# Path for spark source folder
os.environ['SPARK_HOME'] = "[MY_HOME_DIR]/spark-1.2.0"

# Append pyspark to Python Path
sys.path.append("[MY_HOME_DIR]/spark-1.2.0/python/")

try:
    from pyspark import SparkContext
    print("Successfully imported Spark Modules")
except ImportError as e:
    print("Can not import Spark Modules", e)
    sys.exit(1)
My Questions:
1. What is the cause of this error?
2. How do I remedy the issue so I can run pyspark in my PyCharm editor?
NOTE: The current interpreter I use in PyCharm is Python 2.7.8 (~/anaconda/bin/python).
Thanks ahead of time!
Don
I ran into this issue as well. To solve it, I commented out line 28 in ~/spark/spark/python/pyspark/context.py, the file which was causing the error. As the accumulator import seems to be covered by the following line (29), there doesn't seem to be an issue. Spark is now running fine (after pip install py4j). This looks to me like a circular-dependency bug.
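For reference, the edit amounts to something like the following. This is only a sketch: the exact line numbers and the neighbouring import shown here are assumptions about context.py in Spark 1.2.0 and may differ in your copy.

# Excerpt of [MY_HOME_DIR]/spark-1.2.0/python/pyspark/context.py (illustrative)
# from pyspark import accumulators              # line ~28, now commented out
from pyspark.accumulators import Accumulator    # the following line (29), which appears to cover the same import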
In [MY_HOME_DIR]/spark-1.2.0/python/pyspark/context.py, remove or comment out the line

from pyspark import accumulators

It's about 6 lines of code from the top.
I filed an issue with the Spark project here:
https://issues.apache.org/jira/browse/SPARK-4974
I came across the same error. I just installed py4j; there was no need to modify .bashrc.
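A quick way to confirm the install, assuming you run it with the same interpreter that PyCharm uses, is to import the module that originally failed:

# Sanity check: should no longer raise ImportError once py4j is installed
from py4j.java_gateway import JavaGateway
print("py4j is importable")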
I was able to find a fix for this on Windows, but I'm not really sure of its root cause.
If you open accumulators.py, you see that first there is a header comment, followed by help text, and then the import statements. Move one or more of the import statements to just after the comment block and before the help text. This worked on my system and I was able to import pyspark without any issues.
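The top of the file then looks roughly like this. It is only a sketch of the rearrangement: the real header comment, help text, and import list in accumulators.py differ from the placeholders used here.

# accumulators.py -- illustrative layout after the edit
# ... original header comment block stays first ...

import sys    # one or more imports moved up here, before the help text

"""
... the module's help text / doctests now come after the imports ...
"""

# ... remaining imports and code follow unchanged ...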
Firstly, set your environment variables (make sure you use your own version name in the paths), and then restart! Restarting is important for your settings to take effect.
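If you would rather do the same thing from inside the script (as in the question), the setup looks roughly like this. It is a sketch only: the SPARK_HOME path and the py4j zip file name are assumptions that you must replace with the version actually shipped in your Spark distribution.

import os
import sys

# Point Spark at your own installation -- replace path and version
os.environ['SPARK_HOME'] = "[MY_HOME_DIR]/spark-1.2.0"

# Make pyspark and the bundled py4j importable
sys.path.append(os.path.join(os.environ['SPARK_HOME'], "python"))
# The zip name depends on the py4j version bundled with your Spark build
sys.path.append(os.path.join(os.environ['SPARK_HOME'], "python", "lib", "py4j-0.8.2.1-src.zip"))

from pyspark import SparkContext
print("pyspark imported successfully")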
I ran into the same issue using CDH 5.3.
In the end this actually turned out to be pretty easy to resolve. I noticed that the script /usr/lib/spark/bin/pyspark has variables defined for IPython.
I installed anaconda to /opt/anaconda, then finally executed the pyspark script, which now functions as expected.
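One way to make PySpark pick up the Anaconda interpreter, assuming Anaconda lives at /opt/anaconda as above, is the PYSPARK_PYTHON environment variable; a minimal sketch:

import os

# Tell PySpark which Python executable to use
# (the path is an assumption based on the install location mentioned above)
os.environ["PYSPARK_PYTHON"] = "/opt/anaconda/bin/python"

This has to be set before the SparkContext is created (or exported in your shell before launching /usr/lib/spark/bin/pyspark).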