I have two sets of image patches, i.e. a training set and a testing set. Both have been written to LMDB files. I am running a convolutional neural network (CNN) on this data using Caffe.
The problem is that the data stored on the hard disk occupies a considerable amount of space, which hampers my efforts to introduce more training data with deliberate noise added to make my model more robust.
Is there a way to send image patches from my program directly to the CNN (in Caffe) without storing them in LMDB? I am currently using Python to generate the patches for the training set.
Other than defining custom Python layers, you can use the following options:
- use an `ImageData` layer: it has a `source` parameter (the name of a text file, each line giving an image filename and an integer label)
- use a `MemoryData` layer: it lets you load input images directly from memory into your network via the `set_input_arrays` method in Python. Be cautious with this layer: it only accepts single-valued labels, so you cannot use images as labels (e.g. in semantic segmentation)
- use a deploy version of your network and assign your data to the input blob directly
- use an HDF5 input layer (more or less like LMDB, but LMDB is more computationally efficient)
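For the `MemoryData` route, the arrays you pass to `set_input_arrays` must be C-contiguous `float32`, with data in N×C×H×W layout and labels shaped (N, 1, 1, 1). A minimal sketch of the packing step (the helper name `to_caffe_arrays` is my own; the commented solver calls assume pycaffe is installed):

```python
import numpy as np

def to_caffe_arrays(patches, labels):
    """Pack a list of HxWxC patches into the float32 NxCxHxW data array
    and (N, 1, 1, 1) label array that set_input_arrays expects."""
    data = np.stack([p.transpose(2, 0, 1) for p in patches]).astype(np.float32)
    lab = np.asarray(labels, dtype=np.float32).reshape(len(labels), 1, 1, 1)
    return np.ascontiguousarray(data), np.ascontiguousarray(lab)

# With a net whose input is a MemoryData layer (requires pycaffe):
# solver = caffe.SGDSolver('solver.prototxt')
# data, lab = to_caffe_arrays(patches, labels)
# solver.net.set_input_arrays(data, lab)
# solver.step(1)
```

For the deploy-style option, you can instead assign the packed array to the input blob (`net.blobs['data'].data[...] = data`) and call `net.forward()`.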
You can find the details of these layers here: http://caffe.berkeleyvision.org/tutorial/layers.html
There are examples available online as well.
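For the `ImageData` option, the `source` text file is trivial to generate from Python; a sketch (the function name `write_imagedata_source` is my own):

```python
def write_imagedata_source(pairs, path):
    """Write the listing file an ImageData layer's `source` parameter
    expects: one "<image path> <integer label>" pair per line."""
    with open(path, 'w') as f:
        for filename, label in pairs:
            f.write('{} {}\n'.format(filename, label))

# Matching layer definition in the prototxt (hypothetical paths):
# layer { name: "data" type: "ImageData" top: "data" top: "label"
#         image_data_param { source: "train.txt" batch_size: 32 } }
```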
You can write your own Python data layer. See the discussion here and an implementation of an input data layer for a video stream here.
Basically, you will need to add to your network description a layer like:
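The snippet is not shown above; a minimal sketch of such a definition (the `module` and `layer` names are placeholders for your own file and class):

```
layer {
  name: "data"
  type: "Python"
  top: "data"
  top: "label"
  python_param {
    module: "patch_data_layer"   # a .py file on your PYTHONPATH (hypothetical name)
    layer: "PatchDataLayer"      # class name inside that module (hypothetical)
    param_str: '{"batch_size": 32, "patch_size": 28}'
  }
}
```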
and implement the layer interface in Python:
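A sketch of that interface, matching the hypothetical `PatchDataLayer` named in the prototxt above (pycaffe is assumed for real use; the guarded import only lets the sketch be read without Caffe installed, and the random-noise batch is a stand-in for your own patch extraction):

```python
import json
import numpy as np

try:
    import caffe                   # pycaffe
    _LayerBase = caffe.Layer
except ImportError:                # allows inspecting the sketch without Caffe
    _LayerBase = object

class PatchDataLayer(_LayerBase):
    """Feeds batches of patches generated in memory -- no LMDB needed."""

    def setup(self, bottom, top):
        # param_str is passed through from python_param in the prototxt
        params = json.loads(self.param_str) if getattr(self, 'param_str', '') else {}
        self.batch_size = params.get('batch_size', 32)
        self.patch_size = params.get('patch_size', 28)

    def reshape(self, bottom, top):
        # top[0]: data blob, top[1]: label blob
        top[0].reshape(self.batch_size, 3, self.patch_size, self.patch_size)
        top[1].reshape(self.batch_size)

    def forward(self, bottom, top):
        # Generate a fresh batch each iteration; replace this noise with
        # your real patch extraction and noise-augmentation code.
        top[0].data[...] = np.random.randn(self.batch_size, 3,
                                           self.patch_size, self.patch_size)
        top[1].data[...] = np.random.randint(0, 10, self.batch_size)

    def backward(self, top, propagate_down, bottom):
        pass  # data layers have no gradient to propagate
```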