I'm writing to report an issue I've encountered with the Debezium Oracle connector.
When the connector starts, it creates the topic and the initial snapshot captures the table's DDL as a "CREATE" statement. However, I've noticed that the type field in the captured JSON remains "CREATE" even for events that correspond to alterations of the table in Oracle.
Furthermore, once the topic is established, subsequent DDL operations (such as ALTER or TRUNCATE) and DML operations (INSERT, UPDATE, or DELETE) are not captured in the topic at all.
This discrepancy is hindering our ability to accurately track changes within our Oracle database.
Note: Using LogMiner directly (steps below), I can see that the DDL and DML changes are present in the redo log files.
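As a quick cross-check (not part of my original setup), the database's current SCN can be compared against the snapshot SCN that appears in the topic below (54966357755):
SQL> SELECT current_scn FROM v$database;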
-----------------------
Debezium Oracle Connector Version: 2.5
Oracle Database Version: 19c
Kafka Version: 3.2.3
Operating System: RHEL 8
Configuration steps in Oracle DB:
----------------------
Step 1: Switch the DB to ARCHIVELOG mode
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;
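To confirm the mode took effect (verification only, not part of my original steps), LOG_MODE should report ARCHIVELOG:
SQL> SELECT log_mode FROM v$database;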
Step 2: Enable Force Logging for Database
SQL> ALTER DATABASE FORCE LOGGING;
Step 3: Enable Force Logging for Tablespace
SQL> ALTER TABLESPACE KAFKA1 FORCE LOGGING;
Step 4: Enable Supplemental Logging for Database
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
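To verify the database-level setting (SUPPLEMENTAL_LOG_DATA_MIN should come back YES after the statement above):
SQL> SELECT supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_all FROM v$database;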
Step 5: Enable logging for the table:
SQL> ALTER TABLE KAFKA_ADMIN.KAFKA_SAMPLE_1 LOGGING;
Step 6: Enable supplemental logging for the table:
SQL> ALTER TABLE KAFKA_ADMIN.KAFKA_SAMPLE_1 ADD SUPPLEMENTAL LOG DATA (ALL, PRIMARY KEY, UNIQUE) COLUMNS;
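The table-level supplemental log groups created above can be checked through the DBA_LOG_GROUPS dictionary view (requires the appropriate privileges):
SQL> SELECT log_group_name, log_group_type, always FROM dba_log_groups WHERE owner = 'KAFKA_ADMIN' AND table_name = 'KAFKA_SAMPLE_1';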
Step 7: Force a log switch to generate a new archived redo log file:
SQL> ALTER SYSTEM SWITCH LOGFILE;
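The switch can be confirmed by checking that a new archived log appears (verification only):
SQL> SELECT sequence#, name, first_change#, next_change# FROM v$archived_log ORDER BY sequence# DESC;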
Step 8: Add redo logfile to the Oracle LogMiner:
SQL> exec dbms_logmnr.add_logfile( logfilename => '/visdata1/visdb/redo03.log', options => dbms_logmnr.new);
SQL> exec dbms_logmnr.add_logfile( logfilename => '/visdata1/visdb/redo01.log', options => dbms_logmnr.addfile);
SQL> exec dbms_logmnr.add_logfile( logfilename => '/visdata1/visdb/redo02.log', options => dbms_logmnr.addfile);
(Only the first call uses dbms_logmnr.new; the remaining files are added with dbms_logmnr.addfile so each call does not reset the file list.)
PL/SQL procedure successfully completed.
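The redo log file paths above are specific to this environment; they correspond to the online redo log members, which can be listed with:
SQL> SELECT l.group#, l.status, f.member FROM v$log l JOIN v$logfile f ON f.group# = l.group# ORDER BY l.group#;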
Step 9: Start Oracle LogMiner:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
PL/SQL procedure successfully completed.
Step 10: Query the V$LOGMNR_CONTENTS view to retrieve the changes captured by LogMiner:
SQL> SELECT SCN, username, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE seg_owner = 'KAFKA_ADMIN' and seg_name like 'KAFKA_SAMPLE_1';
SQL> SELECT SCN, username, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE TABLE_NAME = 'KAFKA_SAMPLE_1';
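A variant of the same query that makes the DDL/DML split explicit (OPERATION and TIMESTAMP are standard V$LOGMNR_CONTENTS columns):
SQL> SELECT scn, timestamp, operation, sql_redo FROM V$LOGMNR_CONTENTS WHERE seg_owner = 'KAFKA_ADMIN' AND table_name = 'KAFKA_SAMPLE_1' AND operation IN ('INSERT', 'UPDATE', 'DELETE', 'DDL');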
Steps Followed in Oracle DB:
---------------------
Creation of Table: Initially, we created a table in Oracle using standard SQL DDL commands.
Table Alteration: Subsequently, alterations were made to the table structure, such as modifying column lengths or adding constraints.
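To make this concrete, the statements looked roughly like the following (reconstructed from the DDL captured in the topic below; the INSERT is only an illustrative example of the DML that goes uncaptured, with made-up values):
SQL> CREATE TABLE KAFKA_ADMIN.KAFKA_SAMPLE_1 (
       NAME    VARCHAR2(50),
       AGE     NUMBER,
       SURNAME VARCHAR2(50),
       CONSTRAINT PK_KAFKA_SAMPLE_1 PRIMARY KEY (NAME, SURNAME)
     );
SQL> ALTER TABLE KAFKA_ADMIN.KAFKA_SAMPLE_1 MODIFY (SURNAME VARCHAR2(100));
SQL> INSERT INTO KAFKA_ADMIN.KAFKA_SAMPLE_1 (NAME, AGE, SURNAME) VALUES ('JOHN', 30, 'DOE');
SQL> COMMIT;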
Steps Followed in Kafka Connector:
-------------------------
Configuration Setup: The connector was registered with Kafka Connect using the following properties:
name=oracle-audit-connector
connector.class=io.debezium.connector.oracle.OracleConnector
database.hostname=10.16.1.106
database.port=1521
database.user=KAFKA_ADMIN
database.password=vision123
database.dbname=VISDB
database.history.kafka.bootstrap.servers=10.16.1.230:9092
table.include.list=KAFKA_ADMIN.KAFKA_SAMPLE_1
topic.prefix=oracle-audit-
schema.history.internal.kafka.topic=KAFKA_ADMIN.KAFKA_SAMPLE_1
schema.history.internal.kafka.bootstrap.servers=10.16.1.230:9092
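Not shown above: the database-side privileges for KAFKA_ADMIN. I cannot rule out a gap there; the streaming phase relies on the LogMiner-related grants described in the Debezium Oracle connector documentation, roughly along these lines (partial sketch only, run as a DBA; see the documentation for the full list):
SQL> GRANT CREATE SESSION, SELECT ANY TRANSACTION, SELECT ANY DICTIONARY, LOGMINING TO KAFKA_ADMIN;
SQL> GRANT SELECT_CATALOG_ROLE, EXECUTE_CATALOG_ROLE TO KAFKA_ADMIN;
SQL> GRANT EXECUTE ON DBMS_LOGMNR TO KAFKA_ADMIN;
SQL> GRANT SELECT ON V_$LOGMNR_CONTENTS TO KAFKA_ADMIN;
SQL> GRANT FLASHBACK ANY TABLE TO KAFKA_ADMIN;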
JSON received in the topic (the first message is from the initial snapshot; the second arrived after the table alteration, yet its type is still "CREATE"):
{
  "source" : {
    "server" : "oracle-audit-"
  },
  "position" : {
    "snapshot_scn" : "54966357755",
    "snapshot" : true,
    "scn" : "54966357755",
    "snapshot_completed" : false
  },
  "ts_ms" : 1708524975267,
  "databaseName" : "VISDB",
  "schemaName" : "KAFKA_ADMIN",
  "ddl" : "\n CREATE TABLE \"KAFKA_ADMIN\".\"KAFKA_SAMPLE_1\" \n (\t\"NAME\" VARCHAR2(50), \n\t\"AGE\" NUMBER, \n\t\"SURNAME\" VARCHAR2(50), \n\t CONSTRAINT \"PK_KAFKA_SAMPLE_1\" PRIMARY KEY (\"NAME\", \"SURNAME\")\n USING INDEX ENABLE, \n\t SUPPLEMENTAL LOG DATA (ALL) COLUMNS, \n\t SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS, \n\t SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS\n ) ;",
  "tableChanges" : [ {
    "type" : "CREATE",
    "id" : "\"VISDB\".\"KAFKA_ADMIN\".\"KAFKA_SAMPLE_1\"",
    "table" : {
      "defaultCharsetName" : null,
      "primaryKeyColumnNames" : [ "NAME", "SURNAME" ],
      "columns" : [ {
        "name" : "NAME",
        "jdbcType" : 12,
        "typeName" : "VARCHAR2",
        "typeExpression" : "VARCHAR2",
        "charsetName" : null,
        "length" : 50,
        "position" : 1,
        "optional" : false,
        "autoIncremented" : false,
        "generated" : false,
        "comment" : null,
        "hasDefaultValue" : false,
        "enumValues" : [ ]
      }, {
        "name" : "AGE",
        "jdbcType" : 2,
        "typeName" : "NUMBER",
        "typeExpression" : "NUMBER",
        "charsetName" : null,
        "length" : 0,
        "position" : 2,
        "optional" : true,
        "autoIncremented" : false,
        "generated" : false,
        "comment" : null,
        "hasDefaultValue" : true,
        "enumValues" : [ ]
      }, {
        "name" : "SURNAME",
        "jdbcType" : 12,
        "typeName" : "VARCHAR2",
        "typeExpression" : "VARCHAR2",
        "charsetName" : null,
        "length" : 50,
        "position" : 3,
        "optional" : false,
        "autoIncremented" : false,
        "generated" : false,
        "comment" : null,
        "hasDefaultValue" : false,
        "enumValues" : [ ]
      } ],
      "attributes" : [ ]
    },
    "comment" : null
  } ]
}
{
  "source" : {
    "server" : "oracle-audit-"
  },
  "position" : {
    "snapshot_scn" : "54966357755",
    "snapshot" : true,
    "scn" : "54966357755",
    "snapshot_completed" : false
  },
  "ts_ms" : 1708525596421,
  "databaseName" : "VISDB",
  "schemaName" : "KAFKA_ADMIN",
  "ddl" : "\n CREATE TABLE \"KAFKA_ADMIN\".\"KAFKA_SAMPLE_1\" \n (\t\"NAME\" VARCHAR2(50), \n\t\"AGE\" NUMBER, \n\t\"SURNAME\" VARCHAR2(100), \n\t CONSTRAINT \"PK_KAFKA_SAMPLE_1\" PRIMARY KEY (\"NAME\", \"SURNAME\")\n USING INDEX ENABLE, \n\t SUPPLEMENTAL LOG DATA (ALL) COLUMNS, \n\t SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS, \n\t SUPPLEMENTAL LOG DATA (UNIQUE INDEX) COLUMNS\n ) ;",
  "tableChanges" : [ {
    "type" : "CREATE",
    "id" : "\"VISDB\".\"KAFKA_ADMIN\".\"KAFKA_SAMPLE_1\"",
    "table" : {
      "defaultCharsetName" : null,
      "primaryKeyColumnNames" : [ "NAME", "SURNAME" ],
      "columns" : [ {
        "name" : "NAME",
        "jdbcType" : 12,
        "typeName" : "VARCHAR2",
        "typeExpression" : "VARCHAR2",
        "charsetName" : null,
        "length" : 50,
        "position" : 1,
        "optional" : false,
        "autoIncremented" : false,
        "generated" : false,
        "comment" : null,
        "hasDefaultValue" : false,
        "enumValues" : [ ]
      }, {
        "name" : "AGE",
        "jdbcType" : 2,
        "typeName" : "NUMBER",
        "typeExpression" : "NUMBER",
        "charsetName" : null,
        "length" : 0,
        "position" : 2,
        "optional" : true,
        "autoIncremented" : false,
        "generated" : false,
        "comment" : null,
        "hasDefaultValue" : true,
        "enumValues" : [ ]
      }, {
        "name" : "SURNAME",
        "jdbcType" : 12,
        "typeName" : "VARCHAR2",
        "typeExpression" : "VARCHAR2",
        "charsetName" : null,
        "length" : 100,
        "position" : 3,
        "optional" : false,
        "autoIncremented" : false,
        "generated" : false,
        "comment" : null,
        "hasDefaultValue" : false,
        "enumValues" : [ ]
      } ],
      "attributes" : [ ]
    },
    "comment" : null
  } ]
}
I expected the type field to change to "ALTER" once the table schema was altered, but that is not happening: the second message above does reflect the new SURNAME length (VARCHAR2(100)), yet its type is still the initial "CREATE".
Has anyone else experienced this issue or have any insights into why this might be happening? Any suggestions on how to address this would be greatly appreciated.