Spark SQL: INSERT statement with JDBC does not support default values


I am trying to read/write data from other databases using JDBC, following the documentation at https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html

But I found that Spark SQL does not work well with columns that have a DEFAULT value or AUTO_INCREMENT.

CREATE TEMPORARY VIEW jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:postgresql:dbserver",
  dbtable "schema.tablename",
  user 'username',
  password 'password'
)

INSERT INTO TABLE jdbcTable (id) VALUES (1)

Here is my DDL

CREATE TABLE `tablename`  (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `age` int(11) NULL DEFAULT 0,
  PRIMARY KEY (`id`) USING BTREE
) 

The error:

org.apache.spark.sql.AnalysisException: unknown requires that the data to be inserted have the same number of columns as the target table: target table has 2 column(s) but the inserted data has 1 column(s), including 0 partition column(s) having constant value(s).
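
The check itself can be satisfied by supplying a value for every column, something like the statement below, but then the id has to be generated by the writer, which is what I want to avoid:

-- Passes Spark's column-count check because both columns of the target table are supplied.
-- The id value (1 here) has to be chosen manually, so AUTO_INCREMENT is not used.
INSERT INTO TABLE jdbcTable VALUES (1, 0)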

Is there any way to support DEFAULT values or AUTO_INCREMENT? Thanks.


There is 1 answer below.


I have run into this same issue with DEFAULT columns and also with COMPUTED columns. If you are using SQL Server, you can consider an AFTER INSERT trigger; otherwise you may need to calculate the id on the INSERT side.
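
As a rough illustration of the trigger idea (a sketch only, assuming a SQL Server table named dbo.tablename with a nullable age column; Spark still has to send a placeholder value such as NULL for age):

-- Hypothetical SQL Server trigger: after Spark inserts a row with a NULL placeholder
-- for age, rewrite it to the intended default value.
CREATE TRIGGER trg_tablename_fill_defaults
ON dbo.tablename
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.age = 0
    FROM dbo.tablename AS t
    INNER JOIN inserted AS i ON i.id = t.id
    WHERE t.age IS NULL;
END;

For the id column itself, computing the next value on the INSERT side, for example with a MAX(id) + 1 lookup, is an option, though it is not safe under concurrent writers.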