PySpark pairwise distance between rows


I am working with PySpark and wondering whether there is a way to compute pairwise distances between rows. For instance, consider a dataset like this:

+--------------------+------------+--------+-------+-------+
|             product| Mitsubishi | Toyota | Tesla | Honda |
+--------------------+------------+--------+-------+-------+
|Mitsubishi          |           0|     0.8|    0.2|      0|
|Toyota              |           0|       0|      0|      0|  
|Tesla               |         0.1|     0.4|      0|    0.3|
|Honda               |           0|     0.5|    0.1|      0|
+--------------------+------------+--------+-------+-------+

In pandas I achieved this with scikit-learn:

from sklearn.metrics import pairwise_distances

# drop the label column, keep only the numeric values
array = df1_corr.drop(columns=['new_product_1']).values
correlation = pairwise_distances(array, array, metric='correlation')

Is there an equivalent built-in pairwise_distances in PySpark, or in Spark ML?
