Scala: collect_list() over a Window while keeping null values

I have a DataFrame like the one below:

+----+----+----+
|colA|colB|colC|
+----+----+----+
|1   |1   |23  |
|1   |2   |63  |
|1   |3   |null|
|1   |4   |32  |
|2   |2   |56  |
+----+----+----+

I apply the code below to build a running sequence of the colC values:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions._
df.withColumn("colD", 
collect_list("colC").over(Window.partitionBy("colA").orderBy("colB")))

The result is as follows: colD is created and holds the running sequence of colC values, but the null value has been dropped:

+----+----+----+------------+
|colA|colB|colC|colD        |
+----+----+----+------------+
|1   |1   |23  |[23]        |
|1   |2   |63  |[23, 63]    |
|1   |3   |null|[23, 63]    |
|1   |4   |32  |[23, 63, 32]|
|2   |2   |56  |[56]        |
+----+----+----+------------+

However, I would like to keep the null values in the new column and get the result below:

+----+----+----+------------------+
|colA|colB|colC|colD              |
+----+----+----+------------------+
|1   |1   |23  |[23]              |
|1   |2   |63  |[23, 63]          |
|1   |3   |null|[23, 63, null]    |
|1   |4   |32  |[23, 63, null, 32]|
|2   |2   |56  |[56]              |
+----+----+----+------------------+

As you can see, the null value is kept in the sequence. Does anyone know how this can be done?

There are two answers below.

Since collect_list automatically removes all nulls, one approach is to temporarily replace each null with a sentinel value that cannot occur in colC, say Int.MinValue, before collecting, and then use a UDF to map that sentinel back to null afterwards:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions._
import spark.implicits._  // needed for toDF and the $"col" syntax

val df = Seq(
  (Some(1), Some(1), Some(23)),
  (Some(1), Some(2), Some(63)),
  (Some(1), Some(3), None),
  (Some(1), Some(4), Some(32)),
  (Some(2), Some(2), Some(56))
).toDF("colA", "colB", "colC")

// UDF that turns the sentinel value n back into null (None) inside the collected array
def replaceWithNull(n: Int) = udf( (arr: Seq[Int]) =>
  arr.map( i => if (i != n) Some(i) else None )
)

df.withColumn( "colD", replaceWithNull(Int.MinValue)(
    // substitute the sentinel for null so collect_list does not drop the value
    collect_list(when($"colC".isNull, Int.MinValue).otherwise($"colC")).
      over(Window.partitionBy("colA").orderBy("colB"))
  )
).show
// +----+----+----+------------------+
// |colA|colB|colC|              colD|
// +----+----+----+------------------+
// |   1|   1|  23|              [23]|
// |   1|   2|  63|          [23, 63]|
// |   1|   3|null|    [23, 63, null]|
// |   1|   4|  32|[23, 63, null, 32]|
// |   2|   2|  56|              [56]|
// +----+----+----+------------------+
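
On Spark 2.4+, the restore step can also be written with the built-in higher-order transform SQL function instead of a Scala UDF. This is only a sketch of that variant, reusing the df, imports and Window from the snippet above, and it makes the same assumption that Int.MinValue never occurs as a legitimate value in colC:

val sentinel = Int.MinValue  // assumed to never appear as real data in colC

df.withColumn("colD",
    collect_list(when($"colC".isNull, sentinel).otherwise($"colC"))
      .over(Window.partitionBy("colA").orderBy("colB")))
  // map the sentinel back to null inside the collected array, entirely in SQL
  .withColumn("colD", expr(s"transform(colD, x -> IF(x = $sentinel, CAST(NULL AS INT), x))"))
  .show()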

As LeoC mentioned, collect_list drops null values. There seems to be a workaround for this behavior: wrap each scalar in an array before collecting. collect_list(array("colC")) yields [[23], [63], [null], [32]], because collect_list only skips top-level nulls, not nulls nested inside arrays. Applying flatten to that gives [23, 63, null, 32], which show() renders as [23, 63,, 32]; the missing slots in that display are the nulls.

The flatten built-in SQL function was, I believe, introduced in Spark 2.4 (collect_list has been available much longer). I didn't look into the implementation to verify that keeping nested nulls is intended behavior, so I don't know how reliable this solution is.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.expressions._
import spark.implicits._  // needed for toDF

val df = Seq(
  (Some(1), Some(1), Some(23)),
  (Some(1), Some(2), Some(63)),
  (Some(1), Some(3), None),
  (Some(1), Some(4), Some(32)),
  (Some(2), Some(2), Some(56))
).toDF("colA", "colB", "colC")

// wrap colC in a single-element array so collect_list keeps nulls,
// then flatten the collected array of arrays back into a flat sequence
val newDf = df.withColumn("colD", flatten(collect_list(array("colC"))
    .over(Window.partitionBy("colA").orderBy("colB"))))

newDf.show()


+----+----+----+-------------+
|colA|colB|colC|         colD|
+----+----+----+-------------+
|   1|   1|  23|         [23]|
|   1|   2|  63|     [23, 63]|
|   1|   3|null|    [23, 63,]|
|   1|   4|  32|[23, 63,, 32]|
|   2|   2|  56|         [56]|
+----+----+----+-------------+
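
Because show() renders nulls inside arrays as blanks, a quick way to confirm that those gaps really are nulls rather than lost values is to collect a row from the newDf defined above and print the raw values:

// inspect the last row of partition colA = 1; the null is printed explicitly here
newDf.filter($"colA" === 1 && $"colB" === 4)
  .select("colD")
  .collect()
  .foreach(println)
// expected shape (the exact Seq type depends on the Spark/Scala version):
// [WrappedArray(23, 63, null, 32)]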