
How can I sort the values in a custom Keras / TensorFlow loss function?

Posted 2019-05-14 10:13

Question:

Introduction

I would like to implement a custom loss function in Keras, because I am not happy with the current results on my dataset. I think the reason is that the built-in loss functions focus on the whole dataset, whereas I want to focus only on the top values in my dataset. That is why I came up with the following idea for a custom loss function:

Custom Loss Function Idea

The custom loss function should take the four predictions with the highest values, subtract the corresponding true values, take the absolute value of each difference, multiply it by a weight, and add the result to the total loss sum.

For a better understanding of this custom loss function, I programmed it with plain Python lists. Hopefully this example makes it clearer:

The following example calculates the loss = 4*abs(0.7-0.5) + 3*abs(0.5-0.7) + 2*abs(0.4-0.45) + 1*abs(0.4-0.4) = 1.5 for i=0. (Ties between equal predictions are resolved by the true value, because Python sorts the (pred, true) tuples lexicographically.)

Then it divides the result by div_top, which in this example is 10 (so for i=0 it gives 0.15), repeats everything for all other i, and finally takes the average over all samples.

top = 4
div_top = 0.5*top*(top+1)


def own_loss(y_true, y_pred):
    loss_per_sample = [0]*len(y_pred)
    for i in range(len(y_pred)):
        # Sort the (pred, true) pairs of sample i by the predicted value.
        sorted_pred, sorted_true = (list(t) for t in zip(*sorted(zip(y_pred[i], y_true[i]))))
        # Weight the top `top` absolute errors: the largest prediction gets weight `top`.
        for k in range(top):
            loss_per_sample[i] += (top-k)*abs(sorted_pred[-1-k]-sorted_true[-1-k])
    # Normalize each sample by div_top and average over all samples.
    loss_per_sample = [t/div_top for t in loss_per_sample]
    return sum(loss_per_sample)/len(loss_per_sample)


y_pred = [[0.1, 0.4, 0.7, 0.4, 0.4, 0.5, 0.3, 0.2],
          [0.3, 0.8, 0.5, 0.3, 0.1, 0.0, 0.1, 0.5],
          [0.5, 0.6, 0.6, 0.8, 0.3, 0.6, 0.7, 0.1]]

y_true = [[0.2, 0.45, 0.5, 0.3, 0.4, 0.7, 0.22, 0.1],
          [0.4, 0.9, 0.3, 0.0, 0.2, 0.1, 0.11, 0.8],
          [0.4, 0.7, 0.4, 0.3, 0.4, 0.7, 0.6, 0.05]]

print(own_loss(y_true, y_pred)) # Output is 0.196667

Implementation to Keras

I would like to use this function as a custom loss function in Keras. That would look like this:

import numpy as np
from keras.datasets import boston_housing
from keras.layers import LSTM
from keras.models import Sequential
from keras.optimizers import RMSprop

(pre_x_train, pre_y_train), (x_test, y_test) = boston_housing.load_data()
"""
The following 8 lines are to format the dataset to a 3D numpy array
4*101*13. I do this so that it matches my real dataset with is formatted
to a 3D numpy array 47*731*179. It is not important to understand the following 
8 lines for the loss function itself.
"""
x_train = [[0]*101 for _ in range(4)]  # list comprehension avoids aliasing one shared inner list
y_train = [[0]*101 for _ in range(4)]
for i in range(4):
    for k in range(101):
        x_train[i][k] = pre_x_train[i*101+k]
        y_train[i][k] = pre_y_train[i*101+k]
train_x = np.array([np.array([np.array(k) for k in i]) for i in x_train])
train_y = np.array([np.array([np.array(k) for k in i]) for i in y_train])
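# Note: assuming the usual boston_housing split (404 training samples with
# 13 features each), a plain NumPy reshape would do the same more directly:
#   train_x = pre_x_train[:404].reshape(4, 101, 13)
#   train_y = pre_y_train[:404].reshape(4, 101)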


top = 4
div_top = 0.5*top*(top+1)


def own_loss(y_true, y_pred):
    loss_per_sample = [0]*len(y_pred)
    for i in range(len(y_pred)):
        sorted_pred, sorted_true = (list(t) for t in zip(*sorted(zip(y_pred[i], y_true[i]))))
        for k in range(top):
            loss_per_sample[i] += (top-k)*abs(sorted_pred[-1-k]-sorted_true[-1-k])
    loss_per_sample = [t/div_top for t in loss_per_sample]
    return sum(loss_per_sample)/len(loss_per_sample)


model = Sequential()
model.add(LSTM(units=64, batch_input_shape=(None, 101, 13), return_sequences=True))
model.add(LSTM(units=101, return_sequences=False, activation='linear'))
# compile works with loss='mean_absolute_error' but not with loss=own_loss
model.compile(loss=own_loss, optimizer=RMSprop())

model.fit(train_x, train_y, epochs=16, verbose=2, batch_size=1, validation_split=None, shuffle=False)

Obviously, the Keras example above won't work: Keras passes symbolic tensors to the loss function, so len(), sorted() and the Python loops in own_loss fail. But I have no clue how to get this to work.
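To illustrate (a minimal sketch, assuming standalone Keras on the TensorFlow 1.x backend):

from keras import backend as K

y = K.placeholder(shape=(None, 101))
print(type(y))  # a symbolic TensorFlow tensor, not a list of numbers
# len(y) raises a TypeError, and sorted()/zip() cannot trace tensors into
# the graph, so the list-based own_loss above cannot be compiled.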

Ways to solve the Problem

I read the following articles, trying to solve the problem:

Keras custom metric iteration

How to use a custom objective function for a model?

I also read the Keras backend page:

Keras Backends

And the TensorFlow tf.nn.top_k page:

tf.nn.top_k

The last one seems like the most promising approach to me, but after many different implementation attempts it still does not work: with top_k I can get the correct y_pred values, but I cannot get the corresponding y_true values.
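For illustration, here is the per-sample indexing I am after, sketched in plain NumPy (np.argsort with kind='stable' stands in for tf.nn.top_k; this is only an illustration, not a working TensorFlow loss):

import numpy as np

top = 4
y_pred = np.array([[0.1, 0.4, 0.7, 0.4, 0.4, 0.5, 0.3, 0.2]])
y_true = np.array([[0.2, 0.45, 0.5, 0.3, 0.4, 0.7, 0.22, 0.1]])

# Indices of the `top` largest predictions per sample, highest first.
top_ind = np.argsort(-y_pred, axis=-1, kind='stable')[:, :top]

# The missing piece: reuse the SAME indices to pick the true values.
pred_top = np.take_along_axis(y_pred, top_ind, axis=-1)
true_top = np.take_along_axis(y_true, top_ind, axis=-1)
print(pred_top)  # [[0.7 0.5 0.4 0.4]]
print(true_top)  # [[0.5 0.7 0.45 0.3]]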

Does anybody have an idea how I could implement the custom loss function?

Answer 1:

Assumption

  • tf.nn.top_k is used for sorting tensors. This means that, as the API document explains, "If two elements are equal, the lower-index element appears first."

Suggested solution

import tensorflow as tf

top = 4
div_top = 0.5*top*(top+1)

def getitems_by_indices(values, indices):
    # Row-wise gather: for every sample, pick the entries of `values`
    # at the positions given by `indices`.
    return tf.map_fn(
        lambda x: tf.gather(x[0], x[1]), (values, indices), dtype=values.dtype
    )

def own_loss(y_true, y_pred):
    # Top `top` predictions per sample (descending) and their indices.
    y_pred_top_k, y_pred_ind_k = tf.nn.top_k(y_pred, top)
    # True values at the same positions as the top predictions.
    y_true_top_k = getitems_by_indices(y_true, y_pred_ind_k)
    # Weighted absolute errors (weights top, top-1, ..., 1),
    # normalized by div_top and averaged over the batch.
    loss_per_sample = tf.reduce_mean(
        tf.reduce_sum(
            tf.abs(y_pred_top_k - y_true_top_k) *
                tf.range(top, 0, delta=-1, dtype=y_pred.dtype),
            axis=-1
        ) / div_top
    )
    return loss_per_sample

model = Sequential()
model.add(LSTM(units=64, batch_input_shape=(None, 101, 13), return_sequences=True))
model.add(LSTM(units=101, return_sequences=False, activation='linear'))
# compile now works, because own_loss operates on tensors
model.compile(loss=own_loss, optimizer=RMSprop())

model.train_on_batch(train_x, train_y)
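As a quick check, evaluating this loss on the toy data from the question (a sketch using a TF 1.x session; in TF 2.x eager mode the print alone would do) gives 0.2 rather than the list version's 0.196667. The small difference comes only from tie-breaking: tf.nn.top_k keeps the lower index first, while the list version sorts ties by the true value.

y_true_t = tf.constant([[0.2, 0.45, 0.5, 0.3, 0.4, 0.7, 0.22, 0.1],
                        [0.4, 0.9, 0.3, 0.0, 0.2, 0.1, 0.11, 0.8],
                        [0.4, 0.7, 0.4, 0.3, 0.4, 0.7, 0.6, 0.05]])
y_pred_t = tf.constant([[0.1, 0.4, 0.7, 0.4, 0.4, 0.5, 0.3, 0.2],
                        [0.3, 0.8, 0.5, 0.3, 0.1, 0.0, 0.1, 0.5],
                        [0.5, 0.6, 0.6, 0.8, 0.3, 0.6, 0.7, 0.1]])

with tf.Session() as sess:
    print(sess.run(own_loss(y_true_t, y_pred_t)))  # 0.2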

Comment

  • Is there any better implementation of getitems_by_indices()?
  • The current implementation of getitems_by_indices() uses Sungwoon Kim's idea.
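One candidate (an untested sketch, assuming TensorFlow >= 1.14, where tf.gather accepts a batch_dims argument) avoids tf.map_fn entirely:

def getitems_by_indices(values, indices):
    # Gather within each row: batch_dims=1 treats the first axis as the batch.
    return tf.gather(values, indices, batch_dims=1)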