I have a Spring Boot application that consumes from topic-A. After it finishes the business logic, it needs to publish a result to topic-B. At the same time, it needs to do a ksqlDB INSERT operation against an existing table, ktable-01, e.g.
INSERT INTO table_01(ROWTIME, KEY_COL, COL_A) VALUES (1510923225000, 'key', 'A');
All the topics, KStreams, and KTables are on one on-prem Kafka cluster. I need the publish to topic-B and the insert into ktable-01 to happen in one atomic transaction: if either the publish or the insert fails, I want the Kafka transaction to preserve data integrity.
I checked online resources but could not find a clear statement about whether this is possible. I know that publish and subscribe can be controlled by a Kafka transaction, and that data integrity is maintained within stream/table operations. But I doubt whether consistency can also be kept between the Kafka broker and ksqlDB, even when they run on the same cluster.
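One workaround I'm considering, since ktable-01 is ultimately backed by a Kafka topic: skip the ksqlDB REST INSERT and have a single transactional producer write to both topic-B and the table's underlying topic, so both records commit or abort together. This is only a sketch under assumptions: the class, topic names, bootstrap address, and `transactional.id` are hypothetical, and it assumes writing directly to the table's source topic is acceptable in my setup.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AtomicDualWrite {
    // Hypothetical topic names for this sketch.
    static final String RESULT_TOPIC = "topic-B";
    static final String TABLE_TOPIC  = "ktable-01"; // assumption: the topic backing ktable-01

    // Build producer config for a transactional producer.
    static Properties buildTransactionalProps(String txId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("transactional.id", txId);     // enables the transactional API
        props.put("enable.idempotence", "true"); // required for transactions
        return props;
    }

    // Send both records inside one Kafka transaction: with read_committed
    // consumers, either both become visible or neither does.
    static void publishAtomically(String key, String result, String tableRow) {
        try (KafkaProducer<String, String> producer =
                 new KafkaProducer<>(buildTransactionalProps("dual-write-tx"))) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>(RESULT_TOPIC, key, result));
                producer.send(new ProducerRecord<>(TABLE_TOPIC, key, tableRow));
                producer.commitTransaction();
            } catch (Exception e) {
                producer.abortTransaction(); // neither write becomes visible
                throw e;
            }
        }
    }
}
```

The open question remains whether a ksqlDB `INSERT INTO ... VALUES` issued over its REST API can participate in such a transaction at all, since ksqlDB uses its own internal producer for that statement.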