Column features must be of type org.apache.spark.ml.linalg.VectorUDT

Posted 2020-03-03 07:33

I want to run this code in pyspark (spark 2.1.1):

from pyspark.ml.feature import PCA

bankPCA = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
pcaModel = bankPCA.fit(bankDf)
pcaResult = pcaModel.transform(bankDf).select("label", "pcaFeatures")
pcaResult.show(truncate=False)

But I get this error:

requirement failed: Column features must be of type org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7 but was actually org.apache.spark.mllib.linalg.VectorUDT@f71b0bce.

1 Answer
淡お忘 · 2020-03-03 08:32

An example that you can find here:

from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors

data = [(Vectors.sparse(5, [(1, 1.0), (3, 7.0)]),),
    (Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0]),),
    (Vectors.dense([4.0, 0.0, 0.0, 6.0, 7.0]),)]
df = spark.createDataFrame(data, ["features"])

pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
model = pca.fit(df)

... other code ...

As you can see above, df is a DataFrame whose column contains Vectors.sparse() and Vectors.dense() values imported from pyspark.ml.linalg.

Your bankDf probably contains Vectors imported from pyspark.mllib.linalg.

So you have to make sure that the Vectors in your DataFrame are imported

from pyspark.ml.linalg import Vectors 

instead of:

from pyspark.mllib.linalg import Vectors

You may also find this Stack Overflow question interesting.
