How to classify images using Spark and Caffe

Published 2019-02-19 12:21

Question:

I am using Caffe to do image classification on macOS, with Python.

Right now I know how to classify a list of images using Caffe with Python, but to make it faster I want to use Spark.

Therefore, I tried to apply the image classification to each element of an RDD created from a list of image paths. However, Spark does not allow me to do so.

Here is my code:

This is the code for image classification:

# display image name, class number, predicted label
def classify_image(image_path, transformer, net):
    image = caffe.io.load_image(image_path)
    transformed_image = transformer.preprocess('data', image)
    net.blobs['data'].data[...] = transformed_image
    output = net.forward()
    output_prob = output['prob'][0]
    pred = output_prob.argmax()

    labels_file = caffe_root + 'data/ilsvrc12/synset_words.txt'
    labels = np.loadtxt(labels_file, str, delimiter='\t')
    lb = labels[pred]

    image_name = image_path.split(images_folder_path)[1]

    result_str = 'image: '+image_name+'  prediction: '+str(pred)+'  label: '+lb
    return result_str

This is the code that builds the Caffe network and transformer and applies the classify_image method to each element of the RDD:

def main():
    sys.path.insert(0, caffe_root + 'python')
    caffe.set_mode_cpu()
    model_def = caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt'
    model_weights = caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'

    net = caffe.Net(model_def,
                model_weights,
                caffe.TEST)

    mu = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
    mu = mu.mean(1).mean(1)

    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})

    transformer.set_transpose('data', (2,0,1))
    transformer.set_mean('data', mu)
    transformer.set_raw_scale('data', 255)
    transformer.set_channel_swap('data', (2,1,0))

    net.blobs['data'].reshape(50,
                          3,
                          227, 227)

    image_list= []
    for image_path in glob.glob(images_folder_path+'*.jpg'):
        image_list.append(image_path)

    images_rdd = sc.parallelize(image_list)
    transformer_bc = sc.broadcast(transformer)
    net_bc = sc.broadcast(net)
    image_predictions = images_rdd.map(lambda image_path: classify_image(image_path, transformer_bc, net_bc))
    print image_predictions

if __name__ == '__main__':
    main()

As you can see, I tried to broadcast the Caffe objects with transformer_bc = sc.broadcast(transformer) and net_bc = sc.broadcast(net). The error is:

RuntimeError: Pickling of "caffe._caffe.Net" instances is not enabled
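The error occurs because sc.broadcast() pickles its argument on the driver, and caffe.Net wraps C++ state that does not support pickling. A minimal sketch of the same failure mode, using a hypothetical stand-in class rather than Caffe itself:

```python
import pickle

class UnpicklableHandle(object):
    # Hypothetical stand-in for an object wrapping C++ state, like caffe.Net;
    # only the failure mode is meant to match.
    def __reduce__(self):
        raise RuntimeError(
            'Pickling of "UnpicklableHandle" instances is not enabled')

# sc.broadcast() pickles its argument on the driver, which is where
# this RuntimeError is raised for caffe.Net.
try:
    pickle.dumps(UnpicklableHandle())
    err = None
except RuntimeError as e:
    err = str(e)
print(err)
```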

Before I added the broadcast, the error was:

Driver stacktrace.... Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):....

So, is there any way I can classify images using Caffe while also taking advantage of Spark's parallelism?

Answer 1:

When you work with complex, non-native objects, initialization has to be moved directly to the workers, for example with a singleton module:

net_builder.py:

import caffe

net = None

def build_net(*args, **kwargs):
    ...  # Initialize net here
    return net

def get_net(*args, **kwargs):
    global net
    if net is None:
        net = build_net(*args, **kwargs)
    return net

main.py:

import net_builder

sc.addPyFile("net_builder.py")

def classify_image(image_path, transformer, *args, **kwargs):
    net = net_builder.get_net(*args, **kwargs)

This means you'll have to distribute all required files (model definition, weights, and so on) to the workers as well. That can be done either manually or using the SparkFiles mechanism.
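The lazy-initialization part of this pattern can be checked locally without Spark or Caffe installed; here is a minimal sketch where a plain dict stands in for caffe.Net (the stand-in and its arguments are hypothetical):

```python
# net_builder-style lazy singleton; a plain dict stands in for caffe.Net
# so this sketch runs without Caffe or Spark installed.
net = None
build_calls = 0

def build_net(model_def, model_weights):
    global build_calls
    build_calls += 1
    return {"def": model_def, "weights": model_weights}  # stand-in net

def get_net(model_def, model_weights):
    global net
    if net is None:
        net = build_net(model_def, model_weights)
    return net

# Simulate many map() calls landing on the same worker process:
results = [get_net("deploy.prototxt", "weights.caffemodel") for _ in range(5)]
print(build_calls)  # the net is built once and then reused
```

The same idea is what makes the per-worker initialization cheap: each Python worker process pays the model-loading cost once, no matter how many images it classifies.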

On a side note, you should take a look at the SparkNet package.