I am getting the error below when trying to run a PySpark structured streaming job that reads from Kafka:
```
    .start()
    ^^^^^^^
  File "C:\spark\spark-3.5.1-bin-hadoop3\python\lib\pyspark.zip\pyspark\sql\streaming\readwriter.py", line 1527, in start
  File "C:\spark\spark-3.5.1-bin-hadoop3\python\lib\py4j-0.10.9.7-src.zip\py4j\java_gateway.py", line 1322, in __call__
  File "C:\spark\spark-3.5.1-bin-hadoop3\python\lib\pyspark.zip\pyspark\errors\exceptions\captured.py", line 179, in deco
  File "C:\spark\spark-3.5.1-bin-hadoop3\python\lib\py4j-0.10.9.7-src.zip\py4j\protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o50.start.
: ExitCodeException exitCode=-1073741515:
```
My code looks like this:
```python
query = coordinates_df.writeStream \
    .foreachBatch(process_batch) \
    .format("console") \
    .option("checkpointLocation", "checkpoint_dir") \
    .outputMode("complete") \
    .start()

# Keep the job running indefinitely
query.awaitTermination()
```
I am running the code with the command below:

```shell
spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.1 kafka_pyspark.py
```
I have winutils.exe in my Hadoop home. The job runs fine if I use read instead of readStream and do not use writeStream. Can someone please help me resolve this? I am running this on Windows.
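For context: exit code -1073741515 is the signed form of the Windows status 0xC0000135 (STATUS_DLL_NOT_FOUND), which typically means a native library such as hadoop.dll (the companion to winutils.exe) or a Visual C++ runtime it depends on could not be loaded. Below is a minimal sanity-check sketch I could run before the job; the helper `missing_hadoop_files` is hypothetical (not part of my job), and it only checks the file layout under `%HADOOP_HOME%\bin`, not runtime DLL dependencies.

```python
import os

def missing_hadoop_files(hadoop_home):
    """Return the required native files (winutils.exe, hadoop.dll) that are
    absent from <hadoop_home>/bin. An empty list means the layout looks OK."""
    required = ["winutils.exe", "hadoop.dll"]
    bin_dir = os.path.join(hadoop_home, "bin")
    return [f for f in required if not os.path.isfile(os.path.join(bin_dir, f))]

if __name__ == "__main__":
    hadoop_home = os.environ.get("HADOOP_HOME", "")
    if not hadoop_home:
        print("HADOOP_HOME is not set")
    else:
        print("missing from %HADOOP_HOME%\\bin:", missing_hadoop_files(hadoop_home))
```

Even if this reports nothing missing, the DLL could still fail to load because of a missing MSVC runtime, so it is only a first-pass check.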