I create batches of images using the ImageDataGenerator class in TensorFlow.
I know that each batch of images differs slightly from epoch to epoch, because ImageDataGenerator randomly applies transformations to my dataset.
datagen_full_data_aug = ImageDataGenerator(**data_full_aug)
train_generator_images = datagen_full_data_aug.flow_from_dataframe(
    dataframe=train,
    directory=folder_images,
    x_col='filename',
    class_mode=None,
    shuffle=True,
    seed=seed,
    batch_size=batch_size,
    target_size=(image_size, image_size))
This is how I train/fit my model (batch_size is not passed to fit, since the batch size is already set on the generator):
history = model.fit(train_generator_images,
    steps_per_epoch=steps_per_epoch,
    epochs=EPOCHS,
    verbose=2,
    validation_data=validation_generator,
    validation_steps=val_steps_per_epoch,
    callbacks=[checkpoint])
My accuracy sometimes fluctuates noticeably during specific epochs.
(plot: IoU accuracy fluctuation across epochs)
I would like to save the individual batches of images for each epoch into separate folders (epoch_1, epoch_2, etc.) so that I can analyse which images might have made the accuracy fluctuate so much.
That way I could conclude which transformations might have influenced the model during a specific epoch.
Ideally, each saved image would reuse the name it has in my dataset, for sorting purposes.
How should I proceed?
I tried the save arguments of flow_from_dataframe:
save_to_dir=save_folder,  # a single output directory
save_prefix='',
save_format='png',
But this saves the images from all epochs into one folder, and I can't distinguish the images of the 1st epoch from those of the 2nd, etc.
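One idea I am considering (not sure it is the intended way) is a Keras callback that repoints the generator's save_to_dir attribute at a fresh epoch_&lt;n&gt; folder at the start of each epoch, since the iterator reads that attribute every time it yields a batch. Below is a minimal sketch of that logic; it uses a stand-in generator object so it can run without TensorFlow, but with TF installed the class would subclass tf.keras.callbacks.Callback, take train_generator_images, and be added to the callbacks list in model.fit. Note that, as far as I know, ImageDataGenerator names saved files as save_prefix plus a batch index and a random hash, so the original dataset filenames would still not be reused automatically.

```python
import os
import tempfile

class SaveBatchesPerEpoch:
    """Sketch of a callback: before each epoch, point the generator's
    save_to_dir at a fresh folder named epoch_<n>.
    With TensorFlow this would subclass tf.keras.callbacks.Callback."""

    def __init__(self, generator, root_dir):
        self.generator = generator
        self.root_dir = root_dir

    def on_epoch_begin(self, epoch, logs=None):
        # Keras numbers epochs from 0, so epoch + 1 gives epoch_1, epoch_2, ...
        epoch_dir = os.path.join(self.root_dir, f"epoch_{epoch + 1}")
        os.makedirs(epoch_dir, exist_ok=True)
        # The iterator saves every augmented image it yields into this folder.
        self.generator.save_to_dir = epoch_dir


# Stand-in for the real DataFrameIterator, just to exercise the logic.
class FakeGenerator:
    save_to_dir = None

root = tempfile.mkdtemp()
gen = FakeGenerator()
cb = SaveBatchesPerEpoch(gen, root)
for epoch in range(3):          # simulate three epochs starting
    cb.on_epoch_begin(epoch)
print(sorted(os.listdir(root)))  # → ['epoch_1', 'epoch_2', 'epoch_3']
```

If this approach is sound, I would then only need a way to map each saved file back to its original filename (perhaps via the generator's filenames attribute and the shuffle seed), but I am not sure how reliable that is.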