How to extract a convolutional neural network from Keras into Networkx

Posted 2020-06-23 06:36

Question:

I'm interested in using the Networkx Python package to perform network analysis on convolutional neural networks. To achieve this I want to extract the edge and weight information from Keras model objects and put them into a Networkx Digraph object where it can be (1) written to a graphml file and (2) be subject to the graph analysis tools available in Networkx.

Before jumping in further, let me clarify how pooling should be handled. Pooling (for example, max or average pooling) aggregates the entries within a window, which creates an ambiguity about 'which' input entry should appear in the graph I want to create. To resolve this, I would like every possible choice to be included in the graph, since I can account for this later as needed.
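
To make "every possible choice" concrete, here is a minimal sketch of what I have in mind for a 2x2, stride-2 max-pooling layer (the node-naming scheme and the helper are just illustrative, not anything from Keras): every entry in each pooling window gets its own edge to the pooled output.

def pooling_edges(layer_name, in_height, in_width, channels, pool=2, stride=2):
    # yield one (in_node, out_node, weight) tuple per entry of every pooling window;
    # pooling has no learned weights, so 1.0 is just a placeholder
    for c in range(channels):
        for out_y in range(in_height // stride):
            for out_x in range(in_width // stride):
                out_node = f"{layer_name}_out_x{out_x}_y{out_y}_c{c}"
                for dy in range(pool):
                    for dx in range(pool):
                        in_node = f"{layer_name}_in_x{out_x * stride + dx}_y{out_y * stride + dy}_c{c}"
                        yield (in_node, out_node, 1.0)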

For the sake of example, let's consider doing this with VGG16. Keras makes it pretty easy to access the weights while looping over the layers.

from keras.applications.vgg16 import VGG16

model = VGG16()

for layer_index, layer in enumerate(model.layers):
    GW = layer.get_weights()
    if layer_index == 0:
        print(layer_index, layer.get_config()['name'], layer.get_config()['batch_input_shape'])
    elif GW:
        W, B =  GW
        print(layer_index, layer.get_config()['name'], W.shape, B.shape)
    else:
        print(layer_index, layer.get_config()['name'])

Which will print the following:

0 input_1 (None, 224, 224, 3)
1 block1_conv1 (3, 3, 3, 64) (64,)
2 block1_conv2 (3, 3, 64, 64) (64,)
3 block1_pool
4 block2_conv1 (3, 3, 64, 128) (128,)
5 block2_conv2 (3, 3, 128, 128) (128,)
6 block2_pool
7 block3_conv1 (3, 3, 128, 256) (256,)
8 block3_conv2 (3, 3, 256, 256) (256,)
9 block3_conv3 (3, 3, 256, 256) (256,)
10 block3_pool
11 block4_conv1 (3, 3, 256, 512) (512,)
12 block4_conv2 (3, 3, 512, 512) (512,)
13 block4_conv3 (3, 3, 512, 512) (512,)
14 block4_pool
15 block5_conv1 (3, 3, 512, 512) (512,)
16 block5_conv2 (3, 3, 512, 512) (512,)
17 block5_conv3 (3, 3, 512, 512) (512,)
18 block5_pool
19 flatten
20 fc1 (25088, 4096) (4096,)
21 fc2 (4096, 4096) (4096,)
22 predictions (4096, 1000) (1000,)

For the convolutional layers, I've read that the first tuple represents (filter_x, filter_y, filter_z, num_filters), where filter_x, filter_y, filter_z give the shape of the filter and num_filters is the number of filters. There's one bias term for each filter, so the second tuple in each of these rows (the bias shape) also equals the number of filters.
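
As a quick sanity check of that reading (just a small script against the same VGG16 model):

from keras.applications.vgg16 import VGG16

model = VGG16()

# block1_conv1: weights are (filter_x, filter_y, filter_z, num_filters)
W, B = model.layers[1].get_weights()
filter_x, filter_y, filter_z, num_filters = W.shape
print(filter_x, filter_y, filter_z, num_filters)  # 3 3 3 64
print(B.shape)                                    # (64,) -- one bias per filter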

While I've read explanations of how the convolutions within a convolutional neural network behave conceptually, I seem to be having a mental block when I get to handling the shapes of the layers in the model object.

Once I know how to loop over the edges of the Keras model, I should be able to easily code the construction of the Networkx object. The code might look loosely like the following, where keras_edges is an iterable of tuples formatted as (in_node, out_node, edge_weight).

import networkx as nx

g = nx.DiGraph()

g.add_weighted_edges_from(keras_edges)

nx.write_graphml(g, 'vgg16.graphml') 

So to be specific, how do I loop over all the edges in a way that accounts for the shape of the layers and the pooling in the way I described above?

Answer 1:

Keras doesn't have an edge element, and a Keras node is something totally different (a Keras node is an entire layer as it's used; it's the layer as presented in the graph of the model), so you'll have to enumerate the per-pixel edges yourself.

So, assuming you are using the smallest possible image (one the same size as the kernel), and that you're creating the nodes manually (sorry, I don't know how it works in networkx):

For a convolution that:

  • Has i input channels (channels in the image that comes in)
  • Has o output channels (the selected number of filters in keras)
  • Has kernel_size = (x, y)

You already know the weights, which are shaped (x, y, i, o).

You would have something like:

import numpy as np

#assuming a node here is one pixel from one channel only
#`weights` is the (x, y, i, o) kernel array from layer.get_weights()[0]
#`image` is the input to this layer, shaped (height, width, in_channels)

#kernel sizes x and y
kSizeX = weights.shape[0]
kSizeY = weights.shape[1]

#in and out channels
inChannels = weights.shape[2]
outChannels = weights.shape[3]

#number of sliding-window positions in x and y
stepsX = image.shape[0] - kSizeX + 1
stepsY = image.shape[1] - kSizeY + 1


#stores the final results
all_filter_results = []

for ko in range(outChannels): #for each output filter

    one_image_results = np.zeros((stepsX, stepsY))

    #for each position of the sliding window 
    #if you used the smallest size image, start here
    for pos_x in range(stepsX):      
        for pos_y in range(stepsY):

            #storing the results of a single step of a filter here:
            one_slide_nodes = []

            #for each weight in the filter
            for kx in range(kSizeX):
                for ky in range(kSizeY):
                    for ki in range(inChannels):

                        #the input node is a pixel in a single channel
                        in_node = image[pos_x + kx, pos_y + ky, ki]

                        #one multiplication, single weight x single pixel
                        one_slide_nodes.append(weights[kx, ky, ki, ko] * in_node)

                        #so, here, you have in_node and weights

            #the results of each step in the slide is the sum of one_slide_nodes:
            slide_result = sum(one_slide_nodes)
            one_image_results[pos_x, pos_y] = slide_result   
    all_filter_results.append(one_image_results)
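
The loop above multiplies pixel values, but the same indexing can be turned into the (in_node, out_node, edge_weight) tuples the question asks for. A rough sketch, with a node-naming scheme I made up for illustration:

keras_edges = []

for ko in range(outChannels):                    #for each output filter
    for pos_x in range(stepsX):                  #for each position of the sliding window
        for pos_y in range(stepsY):
            out_node = f"conv_out_x{pos_x}_y{pos_y}_c{ko}"
            for kx in range(kSizeX):             #for each weight in the filter
                for ky in range(kSizeY):
                    for ki in range(inChannels):
                        in_node = f"conv_in_x{pos_x + kx}_y{pos_y + ky}_c{ki}"
                        keras_edges.append((in_node, out_node, float(weights[kx, ky, ki, ko])))

Those tuples can then be passed straight to g.add_weighted_edges_from(keras_edges), as in the question.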