I am using this simple piece of code to read a stream of JSON files from a directory. The code works just fine in a Databricks notebook, but throws an error when I run it locally. I am using databricks-connect (version 8.1) to connect to the cluster and run the script on it.
from pyspark.sql.types import StructType
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ProcessSensorData").getOrCreate()

# Expected schema of the incoming JSON files
userschema = StructType().add("ID", "string").add("Created", "string")\
    .add("Data", "string").add("DeviceID", "string").add("Size", "string")

# Stream JSON files from the DBFS mount and sink them as parquet
df = spark.readStream.schema(userschema).json("dbfs:/mnt/")
df.writeStream.format("parquet")\
    .option("checkpointLocation", "dbfs:/mnt/parquet/demo_checkpoint1")\
    .option("path", "dbfs:/mnt/parquet/demo_parquet1")\
    .start()
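For context, the batch equivalent of this code runs fine locally through databricks-connect (a sketch of what I mean: the same schema and mount path, with "readStream" swapped for "read"; the write step here is my simplification to a plain batch write, since batch writes take no checkpoint):

df = spark.read.schema(userschema).json("dbfs:/mnt/")
df.write.mode("append").parquet("dbfs:/mnt/parquet/demo_parquet1")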
So the code works fine locally when I use "read" instead of "readStream", as shown above. I have tried reading the stream in different ways (using options, using format with load) and have confirmed my connection to the Databricks cluster. I have PySpark version 3.1.1 and Java 8. I always get the following error:
21/04/21 09:10:44 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/04/21 09:10:45 WARN MetricsSystem: Using default name SparkStatusTracker for source because neither spark.metrics.namespace nor spark.app.id is set.
Traceback (most recent call last):
File "/Users/dir/spark_process.py", line 6, in <module>
df = spark.readStream.schema(userschema).json("dbfs:/mnt/")
File "/Users/dir/venv/lib/python3.9/site-packages/pyspark/sql/streaming.py", line 631, in json
return self._df(self._jreader.json(path))
File "/Users/dir/venv/lib/python3.9/site-packages/py4j/java_gateway.py", line 1304, in __call__
return_value = get_return_value(
File "/Users/dir/venv/lib/python3.9/site-packages/pyspark/sql/utils.py", line 110, in deco
return f(*a, **kw)
File "/Users/dir/venv/lib/python3.9/site-packages/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o31.json.
: java.lang.UnsupportedOperationException
at com.databricks.sql.transaction.directory.DirectoryAtomicReadProtocol$.filterDirectoryListing(DirectoryAtomicReadProtocol.scala:28)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.listLeafFiles(InMemoryFileIndex.scala:375)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.$anonfun$bulkListLeafFiles$2(InMemoryFileIndex.scala:282)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at scala.collection.TraversableLike.map(TraversableLike.scala:238)
at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex$.bulkListLeafFiles(InMemoryFileIndex.scala:274)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.listLeafFiles(InMemoryFileIndex.scala:139)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.refresh0(InMemoryFileIndex.scala:102)
at org.apache.spark.sql.execution.datasources.InMemoryFileIndex.<init>(InMemoryFileIndex.scala:74)
at org.apache.spark.sql.execution.datasources.DataSource.createInMemoryFileIndex(DataSource.scala:620)
at org.apache.spark.sql.execution.datasources.DataSource.$anonfun$sourceSchema$2(DataSource.scala:296)
at org.apache.spark.sql.execution.datasources.DataSource.tempFileIndex$lzycompute$1(DataSource.scala:183)
at org.apache.spark.sql.execution.datasources.DataSource.tempFileIndex$1(DataSource.scala:183)
at org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:188)
at org.apache.spark.sql.execution.datasources.DataSource.sourceSchema(DataSource.scala:288)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo$lzycompute(DataSource.scala:137)
at org.apache.spark.sql.execution.datasources.DataSource.sourceInfo(DataSource.scala:137)
at org.apache.spark.sql.execution.streaming.StreamingRelation$.apply(StreamingRelation.scala:33)
at org.apache.spark.sql.streaming.DataStreamReader.loadInternal(DataStreamReader.scala:264)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:280)
at org.apache.spark.sql.streaming.DataStreamReader.json(DataStreamReader.scala:361)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:295)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:251)
at java.lang.Thread.run(Thread.java:748)
Process finished with exit code 1
If anyone can help me resolve this issue, it would be a great help. Thanks!
Databricks Connect does not support Structured Streaming, which is why the same code runs in a notebook but fails through databricks-connect; note that the stack trace fails inside com.databricks.sql.transaction.directory.DirectoryAtomicReadProtocol, a Databricks-internal class, rather than in open-source Spark. You can find this and the other limitations in the official Databricks Connect documentation.
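Not an official workaround, but one way to keep developing the streaming logic is to run it against a plain local SparkSession with local sample data, and only run the dbfs: version on the cluster itself (as a notebook or scheduled job). A minimal sketch, assuming local stand-in paths (/tmp/sensor_json and the /tmp output paths below are hypothetical, not from the question):

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType

# Local session instead of databricks-connect; Structured Streaming works here
spark = SparkSession.builder\
    .master("local[*]")\
    .appName("ProcessSensorDataLocal")\
    .getOrCreate()

userschema = StructType().add("ID", "string").add("Created", "string")\
    .add("Data", "string").add("DeviceID", "string").add("Size", "string")

# Hypothetical local stand-ins for the DBFS mount and output locations
df = spark.readStream.schema(userschema).json("/tmp/sensor_json")
query = df.writeStream.format("parquet")\
    .option("checkpointLocation", "/tmp/demo_checkpoint1")\
    .option("path", "/tmp/demo_parquet1")\
    .trigger(once=True)\
    .start()

# trigger(once=True) processes whatever is currently in the directory and
# stops, so the script terminates instead of streaming forever
query.awaitTermination()

Once you need the stream running against the real DBFS mount, the job has to execute on the cluster itself rather than through databricks-connect.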