Problems with pySpark columnSimilarities

Posted 2020-05-08 23:55

Question:

tl;dr How do I use pySpark to compare the similarity of rows?

I have a numpy array where I would like to compare the similarity of each row to every other row:

print (pdArray)
#[[ 0.  1.  0. ...,  0.  0.  0.]
# [ 0.  0.  3. ...,  0.  0.  0.]
# [ 0.  0.  0. ...,  0.  0.  7.]
# ..., 
# [ 5.  0.  0. ...,  0.  1.  0.]
# [ 0.  6.  0. ...,  0.  0.  3.]
# [ 0.  0.  0. ...,  2.  0.  0.]]

Using scikit-learn I can compute cosine similarities as follows:

from sklearn.metrics.pairwise import cosine_similarity
similarities = cosine_similarity(pdArray)

similarities.shape
# (475, 475)

print(similarities)
array([[  1.00000000e+00,   1.52204908e-03,   8.71545594e-02, ...,
          3.97681174e-04,   7.02593036e-04,   9.90472253e-04],
       [  1.52204908e-03,   1.00000000e+00,   3.96760121e-04, ...,
          4.04724413e-03,   3.65324300e-03,   5.63519735e-04],
       [  8.71545594e-02,   3.96760121e-04,   1.00000000e+00, ...,
          2.62367141e-04,   1.87878869e-03,   8.63876439e-06],
       ..., 
       [  3.97681174e-04,   4.04724413e-03,   2.62367141e-04, ...,
          1.00000000e+00,   8.05217639e-01,   2.69724702e-03],
       [  7.02593036e-04,   3.65324300e-03,   1.87878869e-03, ...,
          8.05217639e-01,   1.00000000e+00,   3.00229809e-03],
       [  9.90472253e-04,   5.63519735e-04,   8.63876439e-06, ...,
          2.69724702e-03,   3.00229809e-03,   1.00000000e+00]])

As I am looking to expand to much larger sets than my original (475-row) matrix, I am looking at using Spark via pySpark (version 2.2.0):

import numpy as np
from pyspark.mllib.linalg.distributed import RowMatrix

# Load the data into Spark as an RDD of rows
tempSpark = sc.parallelize(pdArray)
mat = RowMatrix(tempSpark)

# Calculate exact similarities
exact = mat.columnSimilarities()

exact.entries.first()
# MatrixEntry(128, 211, 0.004969676943490767)

# Now when I get the data out I do the following...
# Convert to a RowMatrix.
rowMat = exact.toRowMatrix()
t_3 = rowMat.rows.collect()
a_3 = np.array([x.toArray() for x in t_3])
a_3.shape
# (488, 749)

As you can see, the shape of the data is (a) no longer square, which it should be, and (b) has dimensions which do not match the original number of rows. It does match (in part) the number of features in each row (len(pdArray[0]) = 749), but I don't know where the 488 is coming from.

The presence of 749 makes me think I need to transpose my data first. Is that correct?

Finally, if this is the case, why are the dimensions not (749, 749)?

Answer 1:

First, the columnSimilarities method only returns the off-diagonal entries of the upper-triangular portion of the similarity matrix. Without the 1's along the diagonal, entire rows of the resulting similarity matrix may contain nothing but 0's.
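
For instance, you can rebuild the full symmetric matrix by mirroring the entries across the diagonal and adding the 1's back. This is a minimal sketch, assuming sc and the exact CoordinateMatrix from the question:

from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry

n = exact.numCols()  # number of vectors that were compared

upper = exact.entries
# Mirror each entry below the diagonal
lower = upper.map(lambda e: MatrixEntry(e.j, e.i, e.value))
# Restore the diagonal of 1's that columnSimilarities omits
diag = sc.parallelize(range(n)).map(lambda i: MatrixEntry(i, i, 1.0))

full = CoordinateMatrix(upper.union(lower).union(diag), n, n)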

Second, a pySpark RowMatrix doesn't have meaningful row indices. So when converting from a CoordinateMatrix to a RowMatrix, the i value of each MatrixEntry is mapped to whatever is convenient (probably some incrementing index). What is likely happening is that rows consisting entirely of 0's are simply ignored, and the matrix is squished vertically when you convert it to a RowMatrix.
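
You can check this yourself by counting how many distinct row indices actually carry entries; rows with no entries vanish during the conversion (a quick sketch using the names from the question):

# Rows of the CoordinateMatrix with no entries never appear when the
# entries are regrouped into a RowMatrix, so the RowMatrix ends up with
# fewer rows -- presumably the 488 you are seeing.
print(exact.entries.map(lambda e: e.i).distinct().count())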

It probably makes sense to inspect the dimensions of the similarity matrix immediately after computing it with the columnSimilarities method. You can do this using the numRows() and numCols() methods:

print(exact.numRows(), exact.numCols())

Other than that, it does sound like you need to transpose your matrix to get the correct vector similarities. Furthermore, if for some reason you need the result in a RowMatrix-like form, you could try using an IndexedRowMatrix, which does have meaningful row indices and will preserve the row index from the original CoordinateMatrix upon conversion.
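
Putting the pieces together, here is a sketch of that approach, reusing pdArray and sc from the question: transpose before building the RowMatrix so that columnSimilarities compares your original rows, then go through an IndexedRowMatrix so the indices survive:

import numpy as np
from pyspark.mllib.linalg.distributed import RowMatrix

# Transpose first, so columnSimilarities compares the original rows
mat = RowMatrix(sc.parallelize(pdArray.T))
sims_cm = mat.columnSimilarities()       # upper triangular, no diagonal

n = sims_cm.numCols()                    # should be 475, the original row count
indexed = sims_cm.toIndexedRowMatrix()   # row indices survive this conversion

# Rebuild a dense (475, 475) array, placing each row at its true index;
# rows with no entries stay all-zero instead of being dropped
sims = np.zeros((n, n))
for row in indexed.rows.collect():
    sims[row.index] = row.vector.toArray()
sims = sims + sims.T + np.eye(n)         # mirror and restore the diagonal 1's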