Removing duplicates from Spark RDDPair values


I am new to Python and to Spark. I have a pair RDD of the form (zipCode, streets), where the value is a list that contains duplicates. I want a pair RDD whose value lists contain no duplicates, and I am trying to achieve this in Python. Can anyone please help with this?

(zipcode, streets)

streetsGroupedByZipCode = zipCodeStreetsPairTuple.groupByKey()
streetsGroupedByZipCode.take(2)

[(123456, <pyspark.resultiterable.ResultIterable at 0xb00518ec>),
 (523900, <pyspark.resultiterable.ResultIterable at 0xb005192c>)]

zipToUniqueStreets = streetsGroupedByZipCode.map(lambda (x,y):(x,y.distinct()))

The above does not work: distinct is an RDD method, so it is not available on the ResultIterable values produced by groupByKey.

Best Answer

I'd do something like this:

streetsGroupedByZipCode.map(x => (x._1, x._2.groupBy(_._2).map(_._2.head)))

distinct on the grouped values doesn't work, as you said, so group each list by tuple and keep only the first element of every group.
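Since the question is in PySpark, here is a rough Python equivalent of the same idea (a sketch; dedupe_streets is a hypothetical helper name, not part of any API):

```python
# Plain-Python sketch of the answer's group-then-take-first trick.
# dedupe_streets is a hypothetical helper; on the grouped RDD you
# would apply it with:
#   streetsGroupedByZipCode.mapValues(dedupe_streets)
def dedupe_streets(streets):
    # dict.fromkeys keeps one representative per distinct street entry
    # while preserving the original order (guaranteed in Python 3.7+).
    return list(dict.fromkeys(streets))
```

mapValues leaves the zip-code key untouched and applies the function only to the value, so no extra shuffle is needed.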

val data = Seq((1, Seq((1, 1), (2, 2), (2, 2))), (10, Seq((1, 1), (1, 1), (3, 3))), (10, Seq((1, 2), (2, 4), (1, 2))))
data.map(x => (x._1, x._2.groupBy(_._2).map(_._2.head))).foreach(println)

gives (output order may vary when run on an RDD):

(10,Map(1 -> 1, 3 -> 3))
(1,Map(2 -> 2, 1 -> 1))
(10,Map(1 -> 2, 2 -> 4))
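The same sample data can be checked locally in plain Python (a sketch without Spark; a list comprehension stands in for the RDD map):

```python
# The answer's sample data, as plain Python lists instead of Scala Seqs.
data = [(1, [(1, 1), (2, 2), (2, 2)]),
        (10, [(1, 1), (1, 1), (3, 3)]),
        (10, [(1, 2), (2, 4), (1, 2)])]

# Deduplicate each value list, keeping one copy of every distinct tuple
# in its original order.
deduped = [(zip_code, list(dict.fromkeys(streets)))
           for zip_code, streets in data]
# deduped == [(1, [(1, 1), (2, 2)]),
#             (10, [(1, 1), (3, 3)]),
#             (10, [(1, 2), (2, 4)])]
```

Alternatively, if zipCodeStreetsPairTuple holds individual (zipCode, street) pairs, calling zipCodeStreetsPairTuple.distinct().groupByKey() removes the duplicates before grouping, avoiding the post-processing step entirely.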