Structured streaming with spark-kafka 3.3.2 failing


I have a Spark Structured Streaming application that worked happily on Spark 3.0.1. After upgrading to Spark 3.3.2, I get the following exception:

> 23/12/15 00:14:45 ERROR MicroBatchExecution: Query [id = 32bd0f16-dd6b-463c-a742-191782d8e1e0, runId = 9d865ad3-7a4e-41e3-af77-cc7db6896cfb] terminated with error
> org.apache.spark.SparkException: The Spark SQL phase planning failed with an internal error. Please, fill a bug report in, and provide the full stack trace.
>   at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:500)
>   at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:512)
>   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:185)
>   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
>   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:184)
>   at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:145)
>   at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:138)
>   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:158)
>   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
>   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:185)
>   at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510)
>   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:185)
>   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
>   at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:184)
>   at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:158)
>   at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:151)
>   at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$15(MicroBatchExecution.scala:656)
>   at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:375)
>   at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:373)
>   at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
>   at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:646)
>   at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:256)
>   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:375)
>   at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:373)
>   at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:68)
>   at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:219)
>   at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:67)
>   at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:213)
>   at org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:307)
>   at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
>   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:779)
>   at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:285)
>   at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:208)
> Caused by: java.lang.NullPointerException
>   at org.apache.spark.sql.kafka010.KafkaOffsetReaderConsumer.getSortedExecutorList(KafkaOffsetReaderConsumer.scala:484)
>   at org.apache.spark.sql.kafka010.KafkaOffsetReaderConsumer.getOffsetRangesFromResolvedOffsets(KafkaOffsetReaderConsumer.scala:539)
>   at org.apache.spark.sql.kafka010.KafkaMicroBatchStream.planInputPartitions(KafkaMicroBatchStream.scala:197)
>   at org.apache.spark.sql.execution.datasources.v2.MicroBatchScanExec.inputPartitions$lzycompute(MicroBatchScanExec.scala:45)
>   at org.apache.spark.sql.execution.datasources.v2.MicroBatchScanExec.inputPartitions(MicroBatchScanExec.scala:45)
>   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar(DataSourceV2ScanExecBase.scala:142)
>   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar$(DataSourceV2ScanExecBase.scala:141)
>   at org.apache.spark.sql.execution.datasources.v2.MicroBatchScanExec.supportsColumnar(MicroBatchScanExec.scala:29)
>   at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy.apply(DataSourceV2Strategy.scala:153)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
>   at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:491)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
>   at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:69)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
>   at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:196)
>   at scala.collection.TraversableOnce$folder$1.apply(TraversableOnce.scala:194)
>   at scala.collection.Iterator.foreach(Iterator.scala:943)
>   at scala.collection.Iterator.foreach$(Iterator.scala:943)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
>   at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:199)
>   at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:192)
>   at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1431)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
>   at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:486)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:492)
>   at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
>   at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:69)
>   [... the 14-frame QueryPlanner.plan recursion above repeats 12 more times, elided for brevity ...]
>   at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:459)
>   at org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:145)
>   at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
>   at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:185)
>   at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:510)
>   ... 32 more

There is nothing unique about this application: it just reads from an input Kafka topic, does some processing, and writes to an output Kafka topic. The exception occurs after the initial batch is successfully written to the output topic, when MicroBatchExecution starts a new streaming query. I can't attach a debugger because the application runs in AWS. Could you please recommend how to approach this issue?
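
For context, the pipeline has roughly this shape (a minimal sketch; the broker address, topic names, and checkpoint path below are placeholders, not the real configuration):

    import org.apache.spark.sql.SparkSession

    object StreamingApp {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-kafka")
          .getOrCreate()

        // Read the input topic as a streaming DataFrame
        val input = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092") // placeholder
          .option("subscribe", "input-topic")               // placeholder
          .load()

        // "Some processing" stands in for the real transformation logic
        val processed = input.selectExpr(
          "CAST(key AS STRING) AS key",
          "CAST(value AS STRING) AS value")

        // Write the result to the output topic
        val query = processed.writeStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092") // placeholder
          .option("topic", "output-topic")                  // placeholder
          .option("checkpointLocation", "/tmp/checkpoint")  // placeholder
          .start()

        query.awaitTermination()
      }
    }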

I've also tried excluding the whole processing part, just reading from Kafka and writing nothing to the output (filtering out all records), but the problem is still present.
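
The stripped-down variant looked roughly like this (again a sketch with the same placeholder names, assuming the `spark` session from above), which suggests the processing logic itself is not the cause:

    import org.apache.spark.sql.functions.lit

    // Same read as above, but every record is dropped before the sink;
    // the NullPointerException in planInputPartitions still occurs.
    val stripped = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder
      .option("subscribe", "input-topic")               // placeholder
      .load()
      .filter(lit(false)) // filter out all records

    stripped.writeStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")       // placeholder
      .option("topic", "output-topic")                        // placeholder
      .option("checkpointLocation", "/tmp/checkpoint-repro")  // placeholder
      .start()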
