I have a question about Hive metastore support for Delta Lake. I defined a metastore on a standalone Spark session with the following configurations:
pyspark --conf "spark.sql.catalog.spark_catalog=org.apache.spark.sql.delta.catalog.DeltaCatalog" --conf "spark.sql.extensions=io.delta.sql.DeltaSparkSessionExtension"
spark = SparkSession \
.builder \
.appName("Python Spark SQL Hive integration example") \
.config("spark.sql.warehouse.dir", '/mnt/data/db/medilake/') \
.config("spark.hadoop.datanucleus.autoCreateSchema", 'true') \
.enableHiveSupport() \
.getOrCreate()
and it all worked while I was in the session, per the doc: https://docs.delta.io/latest/delta-batch.html#-control-data-location&language-python
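(As an aside, the same Delta settings can also be set in the builder itself instead of the pyspark launch flags; a minimal sketch, assuming the delta-core jar is already on the classpath:)

# Sketch of the same setup with the Delta catalog settings in the builder,
# so a new session does not depend on how pyspark was launched
# (assumes the delta-core jar is on the classpath).
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL Hive integration example") \
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog") \
    .config("spark.sql.warehouse.dir", "/mnt/data/db/medilake/") \
    .config("spark.hadoop.datanucleus.autoCreateSchema", "true") \
    .enableHiveSupport() \
    .getOrCreate()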
Then, when I opened a new session, I got this error:
>>> spark.sql("SELECT * FROM BRONZE.user").show()
20/09/04 20:30:27 WARN ObjectStore: Failed to get database bronze, returning NoSuchObjectException
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark-3.0.0-bin-hadoop3.2/python/pyspark/sql/session.py", line 646, in sql
return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
File "/opt/spark-3.0.0-bin-hadoop3.2/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
File "/opt/spark-3.0.0-bin-hadoop3.2/python/pyspark/sql/utils.py", line 137, in deco
raise_from(converted)
File "<string>", line 3, in raise_from
pyspark.sql.utils.AnalysisException: Table or view not found: BRONZE.user; line 1 pos 14;
'Project [*]
+- 'UnresolvedRelation [BRONZE, user]
I can still see the data arranged in the folders, but the databases are now empty (they used to be bronze/silver/gold):
>>> spark.catalog.listDatabases()
[Database(name='default', description='Default Hive database', locationUri='file:/mnt/data/db/medilake/spark-warehouse')]
folders:
root@m:/mnt/data/db/medilake# ll
total 16
drwxr-xr-x 9 root root 4096 Sep 4 19:05 bronze.db
-rw-r--r-- 1 root root 708 Sep 4 19:45 derby.log
drwxr-xr-x 4 root root 4096 Sep 4 19:36 gold.db
drwxr-xr-x 5 root root 4096 Sep 4 19:45 metastore_db
conf:
>>> spark.conf.get("spark.sql.warehouse.dir")
'/mnt/data/db/medilake/'
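My guess at the cause, judging from the derby.log and metastore_db entries in the listing above: with the embedded Derby metastore, Spark creates metastore_db/ under whatever directory the session is launched from, so a session started from a different directory comes up with a fresh, empty catalog. A quick check of which metastore the new session would use:

import os

# With the embedded Derby metastore, metastore_db/ lives under the
# launch directory unless configured otherwise; if this path is not
# /mnt/data/db/medilake/metastore_db, the session built its own empty one.
candidate = os.path.join(os.getcwd(), "metastore_db")
print(candidate, os.path.isdir(candidate))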
How can I set the property so that it works from any working directory?
.config("spark.sql.warehouse.dir", '/mnt/data/db/medilake/')
Answer: just needed to follow the instructions. Setting spark.sql.warehouse.dir in spark-defaults.conf did the trick.
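For anyone hitting the same thing, a sketch of the relevant entries (paths taken from this setup; the Delta lines just save retyping the --conf flags on every launch):

# $SPARK_HOME/conf/spark-defaults.conf
spark.sql.warehouse.dir            /mnt/data/db/medilake/
spark.sql.extensions               io.delta.sql.DeltaSparkSessionExtension
spark.sql.catalog.spark_catalog    org.apache.spark.sql.delta.catalog.DeltaCatalog
# Optionally pin the embedded Derby metastore as well (my assumption, untested here):
# spark.hadoop.javax.jdo.option.ConnectionURL jdbc:derby:;databaseName=/mnt/data/db/medilake/metastore_db;create=true

With those in place, a plain pyspark session picks up the same warehouse no matter which directory it is started from.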