What is the common/suggested practice for performing re-ingestion in a Spark Structured Streaming pipeline? For example, suppose there is a bug in the consumer streaming code that reads from a queue. In such a case, we as the consumer reading from the queue would need to re-ingest the data for a specific time period / offset range.
Do we create a separate job to handle re-ingestion for that time period, or do we stop the existing incremental streaming job, change its code to rerun from the specific time/offset and resume, and then suspend it again to revert to the earlier incremental code? (A sketch of the first option follows.)
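To make the first option concrete, here is a rough sketch of what I mean by a separate one-off backfill job, assuming the queue is Kafka and using a batch read over the bad offset range (the broker address, topic name, offsets, and output path below are placeholders, not our real values):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical one-off backfill: a *batch* read over the offsets that the buggy
// code processed, run with the fixed logic, while the regular incremental
// streaming job keeps running untouched.
object KafkaBackfill {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("kafka-backfill").getOrCreate()

    val df = spark.read
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // assumed broker
      .option("subscribe", "events")                    // assumed topic
      // Exact per-partition offset range to replay, in the JSON form the
      // Kafka source accepts for batch queries (offsets are placeholders).
      .option("startingOffsets", """{"events":{"0":12345,"1":67890}}""")
      .option("endingOffsets",   """{"events":{"0":23456,"1":78901}}""")
      .load()

    // ... apply the corrected parsing/transformation logic here ...

    df.write.mode("append").parquet("/data/events/backfill") // assumed sink

    spark.stop()
  }
}
```

Is something like this the recommended approach, or is it more common to patch and restart the existing streaming job with adjusted starting offsets/checkpoints?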