I am trying to train an autoencoder for image inpainting where the input images are the corrupted ones, and the output images are the ground truth.
The dataset used is organized as:
    /Dataset
        /corrupted
            img1.jpg
            img2.jpg
            ...
        /groundTruth
            img1.jpg
            img2.jpg
            ...
The number of images is relatively large. How can I feed the data to the model using Keras image data generators? I checked the flow_from_directory method but couldn't find a proper class_mode to use (each image in the 'corrupted' folder maps to the image with the same name in the 'groundTruth' folder).
If no pre-built image data generator provides the functionality you require, you can create your own custom data generator.

To do so, create a new data generator class by subclassing tf.keras.utils.Sequence. You are required to implement the __getitem__ and __len__ methods in your new class: __len__ must return the number of batches in your dataset, while __getitem__ must return the elements of a single batch as a tuple (inputs, targets). You can read the official docs for tf.keras.utils.Sequence. Below is a code example:
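This is a minimal sketch assuming the directory layout above; the class name InpaintingSequence, the 256x256 target size, and the [0, 1] pixel scaling are illustrative choices you would adapt to your setup.

    import os
    import numpy as np
    from tensorflow.keras.utils import Sequence
    from tensorflow.keras.preprocessing.image import load_img, img_to_array


    class InpaintingSequence(Sequence):
        """Yields (corrupted, ground-truth) image batches paired by file name."""

        def __init__(self, corrupted_dir, ground_truth_dir,
                     batch_size=32, target_size=(256, 256), shuffle=True):
            self.corrupted_dir = corrupted_dir
            self.ground_truth_dir = ground_truth_dir
            self.batch_size = batch_size
            self.target_size = target_size
            self.shuffle = shuffle
            # Keep only files that exist in both folders, matched by name
            self.filenames = sorted(
                f for f in os.listdir(corrupted_dir)
                if os.path.isfile(os.path.join(ground_truth_dir, f))
            )
            self.on_epoch_end()

        def __len__(self):
            # Number of batches per epoch
            return int(np.ceil(len(self.filenames) / self.batch_size))

        def __getitem__(self, idx):
            batch = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
            # Corrupted images are the model input, ground-truth images the target
            x = np.stack([self._load(self.corrupted_dir, name) for name in batch])
            y = np.stack([self._load(self.ground_truth_dir, name) for name in batch])
            return x, y

        def on_epoch_end(self):
            # Reshuffle the pair order between epochs
            if self.shuffle:
                np.random.shuffle(self.filenames)

        def _load(self, directory, name):
            img = load_img(os.path.join(directory, name), target_size=self.target_size)
            return img_to_array(img) / 255.0  # scale pixels to [0, 1]

You can then pass an instance of the generator directly to fit:

    train_gen = InpaintingSequence('/Dataset/corrupted', '/Dataset/groundTruth', batch_size=16)
    autoencoder.fit(train_gen, epochs=20)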
Hope the answer was helpful!