Keras custom data generator for a large hdf5 file which does not fit into memory

Posted 2020-05-21 04:38

I'm trying to use the pretrained InceptionV3 model to classify the food-101 dataset, which contains food images for 101 categories, 1000 per category. So far I've preprocessed this dataset into a single hdf5 file (I assumed this is beneficial compared to loading the images on the fly during training), which has the following tables inside:

[Image: Food-101 hdf5 file structure]

The data split is the standard 70% train, 20% validation, 10% test, so for example valid_img has a shape of 20200*299*299*3. The labels are one-hot encoded for Keras, so valid_labels has a shape of 20200*101.

This hdf5 file has a size of 27.1 GB, so it will not fit into my memory. (I have 8 GB of RAM, although effectively only about 4-5 GB is usable while running Ubuntu. My GPU is a GTX 960 with 2 GB of VRAM, and so far it looks like about 1.5 GB is available to Python when I start the training script.) I'm using the TensorFlow backend.

The first idea I had was to use model.train_on_batch() with a double nested for loop like this:

#Loading InceptionV3, adding my fully connected layers, compiling model...

dataset = h5py.File('/home/uzoltan/PycharmProjects/food-101/food-101_299x299.hdf5', 'r')
epochs = 50
for epoch in range(epochs):
    for i in range(100):  # 1000 images can fit in the memory easily, this could probably be range(10) too
        train_images = dataset["train_img"][i * 706:(i + 1) * 706, ...]
        train_labels = dataset["train_labels"][i * 706:(i + 1) * 706, ...]
        val_images = dataset["valid_img"][i * 202:(i + 1) * 202, ...]
        val_labels = dataset["valid_labels"][i * 202:(i + 1) * 202, ...]
        model.train_on_batch(x=train_images, y=train_labels, class_weight=None,
                             sample_weight=None)

My problem with this approach is that train_on_batch provides no support for validation or for batch shuffling, i.e. for making sure the batches are not fed in the same order every epoch.

So I looked towards model.fit_generator(), which has the nice property of providing all the same functionality as fit(), and with the built-in ImageDataGenerator you can do image augmentations (rotations, horizontal flips, etc.) on the CPU at the same time, so that your model can be more robust. My problem here is that, if I understand it correctly, the ImageDataGenerator.flow(x, y) method needs all the samples and labels at once, but my training/validation data won't fit into my RAM.

This is where I think custom data generators come into the picture, but after looking extensively at some examples I could find on the Keras GitHub/Issues pages, I still don't really get how I should implement a custom generator that reads batches of data from my hdf5 file. Can someone provide me with a good example or pointers? How could I couple the custom batch generator with the image augmentations? Or is it perhaps easier to implement some kind of manual validation and batch shuffling for train_on_batch()? If so, I could use some pointers there too.

4 Answers
干净又极端 · answered 2020-05-21 05:15

For anyone still looking for an answer, I made the following "crude wrapper" around ImageDataGenerator's apply_transform method.

from numpy.random import uniform, randint
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
import numpy as np

class CustomImagesGenerator:
    def __init__(self, x, zoom_range, shear_range, rescale, horizontal_flip, batch_size):
        self.x = x
        self.zoom_range = zoom_range
        self.shear_range = shear_range
        self.rescale = rescale
        self.horizontal_flip = horizontal_flip
        self.batch_size = batch_size
        self.__img_gen = ImageDataGenerator()
        self.__batch_index = 0

    def __len__(self):
        # Number of full batches per pass; used as steps_per_epoch when calling fit_generator below.
        return int(np.floor(self.x.shape[0] / self.batch_size))

    def __iter__(self):
        return self

    def next(self):
        return self.__next__()

    def __next__(self):
        start = self.__batch_index * self.batch_size
        stop = start + self.batch_size
        self.__batch_index += 1
        if stop > len(self.x):
            raise StopIteration
        transformed = np.array(self.x[start:stop])  # loads the batch from hdf5 into memory
        for i in range(len(transformed)):
            zoom = uniform(self.zoom_range[0], self.zoom_range[1])
            transformations = {
                'zx': zoom,
                'zy': zoom,
                'shear': uniform(-self.shear_range, self.shear_range),
                'flip_horizontal': self.horizontal_flip and bool(randint(0, 2))
            }
            transformed[i] = self.__img_gen.apply_transform(transformed[i], transformations)
        return transformed * self.rescale  # e.g. 1./255 to scale pixels to [0, 1]

It can be called like so:

import h5py
f = h5py.File("my_heavy_dataset_file.hdf5", 'r')
images = f['mydatasets/images']

my_gen = CustomImagesGenerator(
    images, 
    zoom_range=[0.8, 1], 
    shear_range=6, 
    rescale=1./255, 
    horizontal_flip=True, 
    batch_size=64
)

model.fit_generator(my_gen, steps_per_epoch=len(my_gen))
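Note that, as written, the generator only yields images, while fit_generator expects (inputs, targets) tuples. A minimal, hypothetical way to pair it with labels (my own addition, assuming a 'mydatasets/labels' dataset aligned row-for-row with the images) is to wrap it:

def with_labels(image_gen, labels, batch_size):
    # Pairs each image batch with the corresponding label rows; batches come
    # out of CustomImagesGenerator in order, starting at row 0.
    for batch_index, batch_x in enumerate(image_gen):
        start = batch_index * batch_size
        yield batch_x, labels[start:start + batch_size]

labels = f['mydatasets/labels']   # hypothetical dataset name
model.fit_generator(with_labels(my_gen, labels, batch_size=64),
                    steps_per_epoch=len(my_gen),
                    epochs=1)     # the wrapped generator is exhausted after one pass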
一纸荒年 Trace。 · answered 2020-05-21 05:19

You want to write a function which loads images from the HDF5 and then yields (not returns) them as a numpy array. Here is a simple example which uses OpenCV to load images directly from .png/.jpg files in a given directory:

import os
import random
import cv2
import numpy as np

INPUT_SHAPE = (299, 299)  # target size for cv2.resize (299x299, as in the question)

def generate_data(directory, batch_size):
    """Replaces Keras' native ImageDataGenerator."""
    i = 0
    file_list = os.listdir(directory)
    while True:
        image_batch = []
        for b in range(batch_size):
            if i == len(file_list):
                i = 0
                random.shuffle(file_list)
            sample = file_list[i]
            i += 1
            image = cv2.resize(cv2.imread(os.path.join(directory, sample)), INPUT_SHAPE)
            image_batch.append((image.astype(float) - 128) / 128)  # scale pixels to roughly [-1, 1]

        yield np.array(image_batch)

Obviously you will have to modify it to read from the HDF5 instead.

Once you have written your function, the usage is simply:

model.fit_generator(
    generate_data('~/my_data', batch_size),
    steps_per_epoch=len(os.listdir('~/my_data')) // batch_size)

Again, this has to be modified to reflect the fact that you are reading from an HDF5 file and not a directory.
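For illustration, a rough sketch of that HDF5 adaptation could look like the following (this is my own addition, not part of the original answer; the dataset names "train_img"/"train_labels" and the file name come from the question, and steps_per_epoch assumes the 70% train split mentioned there):

import h5py
import numpy as np

def generate_data_from_hdf5(h5_path, batch_size):
    """Yields (images, labels) batches straight from the HDF5 datasets."""
    db = h5py.File(h5_path, 'r')
    images = db['train_img']       # dataset names taken from the question's file
    labels = db['train_labels']
    n = images.shape[0]
    i = 0
    while True:
        if i + batch_size > n:
            i = 0                  # start over once the file is exhausted
        x = images[i:i + batch_size]    # contiguous slices are cheap reads in h5py
        y = labels[i:i + batch_size]
        i += batch_size
        yield (x.astype(float) - 128) / 128, y   # same normalisation as the example above

batch_size = 32
model.fit_generator(
    generate_data_from_hdf5('food-101_299x299.hdf5', batch_size),
    steps_per_epoch=70700 // batch_size)   # 70% of the 101,000 images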

等我变得足够好 · answered 2020-05-21 05:23

If I understood you correctly, you want to use data from HDF5 (which does not fit into memory) and at the same time use data augmentation on it.

I'm in the same situation as you, and I found this code that maybe can be helpful with a few modifications:

https://gist.github.com/wassname/74f02bc9134897e3fe4e60784f5aaa15
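As a rough sketch of how HDF5 reading and ImageDataGenerator-based augmentation can be combined (my own assumption-laden example, not the contents of the gist; it again uses the "train_img"/"train_labels" dataset names from the question, and the augmentation parameters are placeholders):

import h5py
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

def augmented_hdf5_generator(h5_path, batch_size):
    # Augmentation parameters here are arbitrary examples; pick whatever suits the data.
    augmenter = ImageDataGenerator(rotation_range=20, horizontal_flip=True, rescale=1. / 255)
    db = h5py.File(h5_path, 'r')
    images = db['train_img']
    labels = db['train_labels']
    n = images.shape[0]
    while True:
        # Crude shuffling: pick a random contiguous window, which h5py can read efficiently.
        start = np.random.randint(0, n - batch_size)
        x = images[start:start + batch_size]
        y = labels[start:start + batch_size]
        # flow() on this single in-memory batch applies the random transforms and rescaling.
        yield next(augmenter.flow(x, y, batch_size=batch_size, shuffle=False))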

别忘想泡老子 · answered 2020-05-21 05:35

This is my solution for shuffling the data per epoch with an h5 file; indices is the list of train or validation indices.

import json
import numpy as np
import h5py

def generator(h5path, indices, batchSize=128, is_train=True, aug=None):

    db = h5py.File(h5path, "r")
    with open("mean.json") as f:
        mean = json.load(f)
    meanV = np.array([mean["R"], mean["G"], mean["B"]])

    while True:

        np.random.shuffle(indices)
        for i in range(0, len(indices), batchSize):
            batch_indices = indices[i:i + batchSize]
            batch_indices.sort()  # h5py fancy indexing requires increasing indices

            by = db["labels"][batch_indices, :]
            bx = db["images"][batch_indices, :, :, :].astype("float32")  # cast so the mean subtraction works even if images are stored as uint8

            # subtract the per-channel dataset mean
            bx[:, :, :, 0] -= meanV[0]
            bx[:, :, :, 1] -= meanV[1]
            bx[:, :, :, 2] -= meanV[2]

            if is_train:

                #bx = random_crop(bx, (224,224))
                if aug is not None:
                    bx, by = next(aug.flow(bx, by, batchSize))

            yield (bx, by)


h5path = 'all_224.hdf5'
model.fit_generator(generator(h5path, train_indices, batchSize=batchSize, is_train=True, aug=aug),
                    steps_per_epoch=20000 // batchSize,
                    validation_data=generator(h5path, test_indices, is_train=False, batchSize=batchSize),
                    validation_steps=2424 // batchSize,
                    epochs=args.epoch,
                    max_queue_size=100,
                    callbacks=[checkpoint, early_stop])
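If you don't already have the index lists, one simple way to build them (my own addition; the 20,000/2,424 split sizes are taken from the steps above, and the total sample count is assumed):

import numpy as np

n_samples = 22424                      # assumed total number of rows in db["images"]
all_indices = np.arange(n_samples)
np.random.shuffle(all_indices)
train_indices = list(all_indices[:20000])
test_indices = list(all_indices[20000:])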