pyspark: NameError: name 'spark' is not defined

Posted 2019-03-16 02:38

Question:

I am copying the pyspark.ml example from the official documentation: http://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.Transformer

from pyspark.ml.linalg import Vectors
from pyspark.ml.clustering import KMeans

data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])
kmeans = KMeans(k=2, seed=1)
model = kmeans.fit(df)

However, the example above wouldn't run and gave me the following error:

---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
<ipython-input-28-aaffcd1239c9> in <module>()
      1 from pyspark import *
      2 data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),(Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
----> 3 df = spark.createDataFrame(data, ["features"])
      4 kmeans = KMeans(k=2, seed=1)
      5 model = kmeans.fit(df)

NameError: name 'spark' is not defined

What additional configuration/variable needs to be set to get the example running?

Answer 1:

Since you are calling createDataFrame(), you need to do this:

df = sqlContext.createDataFrame(data, ["features"])

instead of this:

df = spark.createDataFrame(data, ["features"])

Here, spark plays the same role that sqlContext played before Spark 2.0: both expose createDataFrame().

Note that some environments only define sc, which conventionally names the SparkContext; a SparkContext has no createDataFrame() method, so

df = sc.createDataFrame(data, ["features"])

will only work if sc in your environment actually holds a SQLContext rather than a SparkContext.
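
If neither spark nor sqlContext is defined in your session (for example, in a standalone script rather than the pyspark shell or a preconfigured notebook), a minimal sketch for creating a SQLContext yourself, assuming the pyspark.sql.SQLContext API:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext('local')     # skip if your shell already defines sc
sqlContext = SQLContext(sc)    # SQLContext wraps the context and provides createDataFrame()
df = sqlContext.createDataFrame(data, ["features"])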


Answer 2:

You can add

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession

sc = SparkContext('local')    # start a local SparkContext
spark = SparkSession(sc)      # wrap it in a SparkSession named spark

to the beginning of your code to define a SparkSession; then spark.createDataFrame() should work.
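
With spark defined this way, the original example runs end to end. A complete sketch, assuming Spark 2.x, where Vectors lives in pyspark.ml.linalg and KMeans in pyspark.ml.clustering:

from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.clustering import KMeans

sc = SparkContext('local')
spark = SparkSession(sc)

# Two points near the origin and two near (8.5, 8.5), so k=2 separates them cleanly
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])

kmeans = KMeans(k=2, seed=1)     # reads the "features" column by default
model = kmeans.fit(df)
print(model.clusterCenters())    # one center per cluster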