Scala reduce() operation on an RDD[Array[Int]]


I have an RDD of one-dimensional arrays (an RDD[Array[Int]]). I am trying to do a very basic reduce operation that sums the values at the same position of the arrays across partitions.

I am using:

var z=x.reduce((a,b)=>a+b)

or

var z=x.reduce(_ + _)

But I am getting an error saying: type mismatch; found: Array[Int], required: String

I looked it up and found the question "Is there a better way for reduce operation on RDD[Array[Double]]".

So I tried adding import spire.implicits._. Now I no longer get a compilation error, but when I run the code I get a java.lang.NoSuchMethodError. I have included the entire stack trace below. Any help would be appreciated.

java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at spire.math.NumberTag$Integral$.<init>(NumberTag.scala:9)
at spire.math.NumberTag$Integral$.<clinit>(NumberTag.scala)
at spire.std.BigIntInstances.$init$(bigInt.scala:80)
at spire.implicits$.<init>(implicits.scala:6)
at spire.implicits$.<clinit>(implicits.scala)
at main.scala.com.ucr.edu.SparkScala.HistogramRDD$$anonfun$9.apply(HistogramRDD.scala:118)
at main.scala.com.ucr.edu.SparkScala.HistogramRDD$$anonfun$9.apply(HistogramRDD.scala:118)
at scala.collection.TraversableOnce$$anonfun$reduceLeft$1.apply(TraversableOnce.scala:190)
at scala.collection.TraversableOnce$$anonfun$reduceLeft$1.apply(TraversableOnce.scala:185)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.reduceLeft(TraversableOnce.scala:185)
at scala.collection.AbstractIterator.reduceLeft(Iterator.scala:1336)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$15.apply(RDD.scala:1012)
at org.apache.spark.rdd.RDD$$anonfun$reduce$1$$anonfun$15.apply(RDD.scala:1010)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2125)
at org.apache.spark.SparkContext$$anonfun$33.apply(SparkContext.scala:2125)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

BEST ANSWER

From my understanding, you're trying to reduce the items by position in the arrays. Note that Array[Int] has no + method, so in your attempt the compiler falls back to string concatenation (Predef.any2stringadd), which is why it asks for a String. Instead, zip the two arrays while reducing the RDD:

import org.apache.spark.rdd.RDD
import ss.implicits._ // ss is your SparkSession; brings the Array[Int] encoder into scope

val rdd: RDD[Array[Int]] = ss.createDataset(Seq(Array(1, 2, 3), Array(4, 5, 6))).rdd

rdd.reduce { case (a, b) =>
  val zipped = a.zip(b)                    // pair up elements by position
  zipped.map { case (i1, i2) => i1 + i2 }  // sum each pair
}.foreach(println)

outputs:

5
7
9
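
If you'd rather skip the Dataset detour, here is a minimal self-contained sketch of the same elementwise sum, assuming only an existing SparkContext named sc (the names and sample data are illustrative):

import org.apache.spark.rdd.RDD

val x: RDD[Array[Int]] = sc.parallelize(Seq(Array(1, 2, 3), Array(4, 5, 6)))

// zip pairs elements by position; map sums each pair
val z: Array[Int] = x.reduce((a, b) => a.zip(b).map { case (i, j) => i + j })

z.foreach(println) // prints 5, 7, 9

As for the java.lang.NoSuchMethodError: scala.Product.$init$ from the spire attempt: that error usually indicates a Scala binary-version mismatch, e.g. a Spire jar built for Scala 2.12 running against the 2.11 standard library (the $class-suffixed frames in your trace are the 2.11 trait encoding). In sbt, declaring the dependency with %% so the artifact tracks your project's scalaVersion typically resolves it; the version below is only illustrative:

// build.sbt sketch: %% appends the Scala binary suffix (e.g. _2.11) automatically
libraryDependencies += "org.typelevel" %% "spire" % "0.14.1"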