yield n files from disk


I am trying to read files from disk and split them into features and labels:

import os

def generator(data_path):
    x_text = []
    counter = 0
    _y = []
    for root, dirs, files in os.walk(data_path):
        for _file in files:
            if _file.endswith(".txt"):
                # Join against the directory being walked, not just data_path,
                # so files in subdirectories resolve correctly
                with open(os.path.join(root, _file), "r", encoding="UTF8", errors="ignore") as f:
                    _contents = [s.strip() for s in f.readlines()]
                x_text = x_text + _contents

                # One-hot label per file (assumes three classes, one file per class)
                y_examples = [0, 0, 0]
                y_examples[counter] = 1
                y_labels = [y_examples for s in _contents]
                counter += 1

                _y = _y + y_labels

    return [x_text, _y]

I have 3.5 GB of data on disk and can't read it all into memory at once. How can I modify this code to yield n files at a time for processing, so that I can consume batches like this?

for X_batch, y_batch in generator(data_path):
    feed_dict = {X: X_batch, y: y_batch}
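
What I have in mind is something like the following sketch: the same walking and labelling logic as above, but rewritten as a real Python generator that yields after every n files (batch_generator and the n parameter are just names I made up for illustration):

import os

def batch_generator(data_path, n=100):
    x_text, _y = [], []
    counter = 0
    files_in_batch = 0
    for root, dirs, files in os.walk(data_path):
        for _file in files:
            if _file.endswith(".txt"):
                with open(os.path.join(root, _file), "r", encoding="UTF8", errors="ignore") as f:
                    _contents = [s.strip() for s in f.readlines()]
                x_text += _contents

                # Same one-hot labelling as above (still assumes
                # three classes, one .txt file per class)
                y_examples = [0, 0, 0]
                y_examples[counter] = 1
                counter += 1
                _y += [y_examples for _ in _contents]

                files_in_batch += 1
                if files_in_batch == n:
                    yield x_text, _y        # hand one batch to the caller
                    x_text, _y = [], []     # drop references so memory can be reclaimed
                    files_in_batch = 0
    if x_text:
        yield x_text, _y                    # final partial batch

The loop above would then work unchanged, pulling one batch built from n files per iteration instead of the whole dataset.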

Is there a more efficient way to read this much data in TensorFlow?
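
I have also seen the tf.data API mentioned. As far as I understand, something like this sketch (TensorFlow 1.x style, with a batch size of 128 chosen arbitrarily) would stream lines from disk without ever loading the full 3.5 GB, though I don't see how to attach the per-file labels:

import os
import tensorflow as tf

filenames = [os.path.join(root, f)
             for root, _, files in os.walk(data_path)
             for f in files if f.endswith(".txt")]

dataset = tf.data.TextLineDataset(filenames)  # reads lines lazily from disk
dataset = dataset.batch(128)                  # 128 lines per training step
iterator = dataset.make_one_shot_iterator()
next_batch = iterator.get_next()

with tf.Session() as sess:
    while True:
        try:
            lines = sess.run(next_batch)      # one batch of raw text lines
        except tf.errors.OutOfRangeError:
            break                             # dataset exhausted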
