Schema changes cannot complete during Debezium snapshotting


I am using Debezium for Change Data Capture (CDC) on a PostgreSQL database.

I am running into an issue during Debezium's initial snapshot stage. Row-level changes such as UPDATE and DELETE work fine, but if I execute a schema change on a table that is in table.include.list, the statement never completes and I cannot access the table until I either shut down the connector or the snapshot of all tables finishes.
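
To illustrate (the table and column names below are placeholders, not my real schema), this is the kind of statement that hangs while the snapshot is running, followed by a query that can be run from another session to see which backend is blocking it:

-- Placeholder example: "public.orders" stands in for a table on table.include.list.
-- While the initial snapshot is running, a statement like this never returns:
ALTER TABLE public.orders ADD COLUMN extra_field text;

-- In another session, show which backends are blocked and by whom
-- (pg_blocking_pids() is available in PostgreSQL 9.6+):
SELECT pid, pg_blocking_pids(pid) AS blocked_by, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;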

After the snapshot completes and streaming CDC starts, there is no problem at all: I can truncate tables, add columns, and so on without any issues.

Does anyone know what is happening here? Does Postgres block schema changes during snapshotting?

According to the Debezium 2.4 documentation, all snapshot modes for PostgreSQL are lockless.
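
In case it helps, this is how I would check which relation-level locks the connector's snapshot session is actually holding (the 'debezium' user name below is only an assumption; substitute whatever database.user is configured to):

-- Relation-level locks held by the connector's session
-- ('debezium' is a placeholder for the configured database.user).
SELECT a.pid, a.usename, l.relation::regclass AS table_name, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE a.usename = 'debezium'
  AND l.locktype = 'relation';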

Please help! Thank you very much.

Below is my Postgres source connector configuration:

{
    "name": "main-connector-prod-1",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "tasks.max": "1",
        "topic.prefix": "sink-db-main-prod",
        "metrics.prefix": "sink-db-main-prod",
        "signal.kafka.topic": "sink-db-main-prod-signal",
        "signal.data.collection": "public.debezium_signal",
        "publication.name": "db_main_prod_publication",
        "slot.name": "db_main_prod",
        "snapshot.mode": "initial",
        "database.server.name": "postgres",
        "database.hostname": "10.148.0.6",
        "database.port": "***",
        "database.user": "***",
        "database.password": "***",
        "database.server.id": "1",
        "database.dbname": "***",
        "table.include.list": "public.***",
        "heartbeat.interval.ms": "5000",
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes.inventory",
        "max.request.size": "104857600",
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable": "true",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "true",
        "plugin.name": "pgoutput",
        "publication.autocreate.mode": "all_tables",
        "time.precision.mode": "connect",
        "max.batch.size": "40960",
        "max.queue.size": "163840",
        "offset.flush.timeout.ms": "60000",
        "offset.flush.interval.ms": "10000",
        "skipped.operations": "none",
        "snapshot.fetch.size": "51200",
        "snapshot.max.threads": "3"
    }
}

I've tried using AvroConverter rather than JsonConverter, but I don't think it will solve the problem.
