CelebA Dataset inaccessible using tfds.load()


I am trying to use the CelebA dataset in a deep learning project. I have the zipped folder from Kaggle. I wanted to unzip it and then split the images into training, validation, and testing sets, but found out that this would not be feasible on my not-so-powerful system.

So, to avoid wasting time, I tried loading CelebA through tensorflow-datasets instead. Unfortunately, the dataset is inaccessible, and the load fails with the following error:

(Code first)

ds = tfds.load('celeb_a', split='train', download=True)
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-69-d7b9371eb674> in <module>
----> 1 ds = tfds.load('celeb_a', split='train', download=True)

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\core\load.py in load(name, split, data_dir, batch_size, shuffle_files, download, as_supervised, decoders, read_config, with_info, builder_kwargs, download_and_prepare_kwargs, as_dataset_kwargs, try_gcs)
    344   if download:
    345     download_and_prepare_kwargs = download_and_prepare_kwargs or {}
--> 346     dbuilder.download_and_prepare(**download_and_prepare_kwargs)
    347 
    348   if as_dataset_kwargs is None:

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\core\dataset_builder.py in download_and_prepare(self, download_dir, download_config)
    383           self.info.read_from_directory(self._data_dir)
    384         else:
--> 385           self._download_and_prepare(
    386               dl_manager=dl_manager,
    387               download_config=download_config)

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\core\dataset_builder.py in _download_and_prepare(self, dl_manager, download_config)
   1020   def _download_and_prepare(self, dl_manager, download_config):
   1021     # Extract max_examples_per_split and forward it to _prepare_split
-> 1022     super(GeneratorBasedBuilder, self)._download_and_prepare(
   1023         dl_manager=dl_manager,
   1024         max_examples_per_split=download_config.max_examples_per_split,

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\core\dataset_builder.py in _download_and_prepare(self, dl_manager, **prepare_split_kwargs)
    959     split_generators_kwargs = self._make_split_generators_kwargs(
    960         prepare_split_kwargs)
--> 961     for split_generator in self._split_generators(
    962         dl_manager, **split_generators_kwargs):
    963       if str(split_generator.split_info.name).lower() == "all":

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\image\celeba.py in _split_generators(self, dl_manager)
    137     all_images = {
    138         os.path.split(k)[-1]: img for k, img in
--> 139         dl_manager.iter_archive(downloaded_dirs["img_align_celeba"])
    140     }
    141 

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\core\download\download_manager.py in iter_archive(self, resource)
    559     if isinstance(resource, six.string_types):
    560       resource = resource_lib.Resource(path=resource)
--> 561     return extractor.iter_archive(resource.path, resource.extract_method)
    562 
    563   def extract(self, path_or_paths):

c:\users\aman\appdata\local\programs\python\python38\lib\site-packages\tensorflow_datasets\core\download\extractor.py in iter_archive(path, method)
    221     An iterator of `(path_in_archive, f_obj)`
    222   """
--> 223   return _EXTRACT_METHODS[method](path)

KeyError: <ExtractMethod.NO_EXTRACT: 1>

Could someone explain what I am doing wrong?

On a side note: if this does not work, is there a way to convert the already-downloaded zip file from Kaggle into the required format without unzipping it and then iterating over each image individually? Basically, I cannot go down the unzip-then-split route for such a large dataset...
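(To make that concrete, something along these lines is what I have in mind -- a rough, untested sketch that streams images straight out of the archive with Python's zipfile module. The archive name and the split sizes below are just placeholders based on my Kaggle download:)

import zipfile
import tensorflow as tf

ARCHIVE = "img_align_celeba.zip"  # placeholder: whatever the Kaggle zip is called

def jpeg_bytes():
    # Stream raw JPEG bytes out of the zip without extracting anything to disk.
    with zipfile.ZipFile(ARCHIVE) as zf:
        for name in zf.namelist():
            if name.endswith(".jpg"):
                yield zf.read(name)

ds = tf.data.Dataset.from_generator(
    jpeg_bytes,
    output_signature=tf.TensorSpec(shape=(), dtype=tf.string),
)
ds = ds.map(lambda raw: tf.io.decode_jpeg(raw, channels=3))

# Split without ever writing individual image files (roughly 80/10/10).
train_ds = ds.take(160_000)
val_ds = ds.skip(160_000).take(20_000)
test_ds = ds.skip(180_000)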

TIA!


EDIT: I tried the same on Colab, but got a similar error there as well (screenshot of the Colab traceback omitted).

There are 2 answers below.

BEST ANSWER:

It seems like there is some sort of quota limit for downloading from Google Drive. Go to the Google Drive link shown in the error and make a copy of the file to your own Drive. You can then download that copy instead, through libraries such as gdown or google_drive_downloader.
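A minimal sketch with gdown, where YOUR_FILE_ID is a placeholder -- take the real ID from the share link of the copy in your Drive:

import gdown

# Placeholder ID -- replace with the ID of the copy in *your* Drive.
url = "https://drive.google.com/uc?id=YOUR_FILE_ID"
gdown.download(url, "img_align_celeba.zip", quiet=False)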

ANOTHER ANSWER:

Upgrade tfds to the nightly version; that worked for me.
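For reference, the upgrade is just:

pip install -U tfds-nightly

Then restart the runtime and retry the same call:

import tensorflow_datasets as tfds
ds = tfds.load('celeb_a', split='train', download=True)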