keras custom loss pure python (without keras backend)

Asked 2019-07-13 20:33

I am currently programming an autoencoder for image compression. I would like to use a custom loss function written in pure Python, i.e. without making use of Keras backend functions. Is this at all possible, and if so, how? If it is possible, I'd be very grateful for a minimal working example (MWE). Please look at this MWE, in particular the mse_keras function:

# -*- coding: utf-8 -*-

import matplotlib.pyplot as plt
import numpy as np
import keras.backend as K
from keras.datasets import mnist
from keras.models import Model, Sequential
from keras.layers import Input, Dense


def mse_keras(A,B):
    mse = K.mean(K.square(A - B), axis=-1)
    return mse


# Loads the training and test data sets (ignoring class labels)
(x_train, _), (x_test, _) = mnist.load_data()

# Scales the training and test data to range between 0 and 1.
max_value = float(x_train.max())
x_train = x_train.astype('float32') / max_value
x_test = x_test.astype('float32') / max_value


x_train.shape, x_test.shape
# ((60000, 28, 28), (10000, 28, 28))


x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))

(x_train.shape, x_test.shape)
# ((60000, 784), (10000, 784))


# input dimension = 784
input_dim = x_train.shape[1]
encoding_dim = 32

compression_factor = float(input_dim) / encoding_dim
print("Compression factor: %s" % compression_factor)

autoencoder = Sequential()
autoencoder.add(Dense(encoding_dim, input_shape=(input_dim,), activation='relu'))
autoencoder.add(Dense(input_dim, activation='sigmoid'))

autoencoder.summary()

input_img = Input(shape=(input_dim,))
encoder_layer = autoencoder.layers[0]
encoder = Model(input_img, encoder_layer(input_img))

encoder.summary()


autoencoder.compile(optimizer='adam', loss=mse_keras, metrics=['mse'])
history=autoencoder.fit(x_train, x_train,
                        epochs=3,
                        batch_size=256,
                        shuffle=True,
                        validation_data=(x_test, x_test))

num_images = 10
np.random.seed(42)
random_test_images = np.random.randint(x_test.shape[0], size=num_images)

decoded_imgs = autoencoder.predict(x_test)


#print(history.history.keys())

plt.figure()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])

plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()


plt.figure(figsize=(18, 4))

for i, image_idx in enumerate(random_test_images):
    # plot original image
    ax = plt.subplot(3, num_images, i + 1)
    plt.imshow(x_test[image_idx].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # plot reconstructed image
    ax = plt.subplot(3, num_images, 2*num_images + i + 1)
    plt.imshow(decoded_imgs[image_idx].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

The code above is an MWE for a custom loss function using the Keras backend. However, this is not what I want! I would like to substitute the mse_keras function in my code with something like this:

def my_mse(A,B):
    mse = ((A - B) ** 2).mean(axis=None)
    return mse

This is again just an MWE. It is pure Python and SciPy. NO KERAS BACKEND! Is it possible to use pure Python functions as loss functions? (I tried py_func, but it didn't work for me; a rough sketch of the kind of wrapping I attempted is below.) The reason I am asking is that eventually I would like to use a much more complicated loss function which is already implemented in Python, and I don't see how I could reimplement it using the Keras backend. (I also don't have the time to do that, to be honest.)
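
Roughly, the kind of py_func wrapping I tried looks like this (the helper names are only illustrative; the forward pass runs, but training fails because TensorFlow has no gradient registered for the wrapped op):

import numpy as np
import tensorflow as tf

def my_mse_numpy(y_true, y_pred):
    # plain numpy, evaluated on the CPU outside the graph
    return np.mean(np.square(y_true - y_pred), axis=-1).astype(np.float32)

def my_mse_pyfunc(y_true, y_pred):
    # tf.py_func embeds the numpy function in the graph, but no gradient
    # is defined for it, so model.fit() aborts with a "None for gradient" error
    return tf.py_func(my_mse_numpy, [y_true, y_pred], tf.float32)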

(For the curious: The functions which I would like to use as a loss function can be seen here: https://github.com/aizvorski/video-quality)

Any help would be greatly appreciated. The backend can be Theano or TensorFlow, I don't care. If it is possible, please provide an MWE in Python 3.x.

Many thanks in advance. Your help is much appreciated.

1 Answer

闹够了就滚 · answered 2019-07-13 21:04

You cannot use a pure Python function as a loss for Keras. The loss has to be expressed in backend (symbolic) operations so that Keras can compute gradients through it; in addition, since you probably train on a GPU while plain Python runs on the CPU, a Python loss would add overhead from transferring data between GPU and host memory.

From https://keras.io/losses/:

You can either pass the name of an existing loss function, or pass a TensorFlow/Theano symbolic function that returns a scalar for each data-point and takes the following two arguments: y_true, y_pred

Your function would then be (equivalent to your original mse_keras):

def my_mse(A,B):
    mse = K.mean(K.pow(A - B, 2), axis=None)
    return mse

However, check the Keras API: it wants a scalar for each data point, so taking the mean over all axes with axis=None will probably not work; reduce over the last axis instead (see the sketch below).
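
A per-sample version that matches what the API expects could look like this (it simply mirrors Keras' built-in mean squared error):

def my_mse(y_true, y_pred):
    # reduce only over the last (feature) axis so that one loss value
    # remains per sample in the batch
    return K.mean(K.square(y_true - y_pred), axis=-1)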

I had a quick look at the loss functions you linked, and implementing them with the Keras backend should be possible and not too difficult. Keras (or rather the TensorFlow backend behind it) has an interface quite similar to NumPy. To implement the losses, it might help to understand how the backend's (i.e. TensorFlow's) computational graph works.
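
As a rough illustration of how such a translation might look, here is a sketch of a PSNR-based loss written purely with backend ops (assuming, as in your MWE, that the images are scaled to [0, 1]; this only shows the NumPy-to-backend mapping and is not a drop-in replacement for the linked implementations):

def psnr_loss(y_true, y_pred):
    # numpy version would be: mse = np.mean((a - b) ** 2); psnr = 10 * np.log10(MAX ** 2 / mse)
    mse = K.mean(K.square(y_true - y_pred), axis=-1)
    # MAX = 1 because the data is scaled to [0, 1]; K.epsilon() avoids log(0)
    psnr = 10.0 * K.log(1.0 / (mse + K.epsilon())) / K.log(10.0)
    # higher PSNR means better reconstruction, so negate it to obtain a loss to minimize
    return -psnr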
