I want to monitor the first dimension of y_pred (the batch dimension) by defining my own custom metric (using the Theano backend):
def shape_test(y_true, y_pred):
    return K.shape(y_pred)[0]
I was assuming that the first dimension of y_pred inside the custom metric function equals the mini-batch size. However, I get weird output. See a small reproducible example below.
# imports and definitions
import numpy
numpy.random.seed(1234)
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

neuron_num = 20
dim_input = 2
# batch size will be important below!
batch_size = 2048
TT = int(1e4)
# sample data
X = numpy.random.randn(TT, dim_input)
eps = numpy.random.randn(TT)
Y = 0.3*X[:, 0] + 0.5*X[:, 1] + eps
# note: the "os" slice starts at TT//2 + 1, so row TT//2 is dropped
x = {"is": X[:(TT//2), :], "os": X[(TT//2 + 1):, :]}
y = {"is": Y[:(TT//2)], "os": Y[(TT//2 + 1):]}
This is the custom metric as given above:
def shape_test(y_true, y_pred):
    return K.shape(y_pred)[0]
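As a sanity check, evaluating the metric directly with K.function (a minimal sketch outside of fit, reusing the imports above), it does return the batch dimension:

yp = K.placeholder(ndim=2)
# compile a backend function that evaluates the metric on a concrete array
get_dim = K.function([yp], [K.shape(yp)[0]])
print(get_dim([numpy.zeros((2048, 1))]))  # [array(2048)]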
Now define a simple NN:
sgd = SGD(lr=1e-2, nesterov=True)

model = Sequential()
model.add(Dense(neuron_num,
                input_dim=x["is"].shape[1],
                init="glorot_normal",
                activation="tanh"))
model.add(Dense(neuron_num, init="glorot_normal", activation="tanh"))
model.add(Dense(1, init="glorot_normal", activation="linear"))
model.compile(loss="mean_squared_error",
              optimizer=sgd,
              metrics=["mean_squared_error", shape_test])
print(model.fit(x["is"],
                y["is"],
                validation_data=(x["os"], y["os"]),
                nb_epoch=1,
                batch_size=batch_size,
                verbose=False).history)
This gives:
#{'loss': [1.834826689338684],
# 'mean_squared_error': [1.834826689338684],
# 'shape_test': [1841],
# 'val_loss': [1.4931119817522769],
# 'val_mean_squared_error': [1.4931119817522769],
# 'val_shape_test': [1841.1716343268654]}
I would have expected 'shape_test': [2048] rather than 'shape_test': [1841], since the batch size is 2048. This seems very weird. Is this possibly a bug?
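One pattern I did notice, although I don't know whether this is what Keras actually does internally: the numbers are consistent with a size-weighted average of the per-batch metric values over the epoch. The 5000 training rows split into batches of 2048, 2048 and 904, and the 4999 validation rows into 2048, 2048 and 903:

# assumption: the epoch value is the size-weighted mean of the per-batch values
train_sizes = [2048, 2048, 904]  # 5000 training rows, batch_size=2048
val_sizes = [2048, 2048, 903]    # 4999 validation rows
print(sum(s*s for s in train_sizes) / float(sum(train_sizes)))  # 1841.1648
print(sum(s*s for s in val_sizes) / float(sum(val_sizes)))      # 1841.1716343268654

The second number matches val_shape_test exactly, which makes me suspect averaging over uneven batches rather than a bug, but I would like to confirm.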
I am using Python 2.7.6, Keras==1.0.8, Theano==0.8.2, and the CPU.
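If it matters, the cross-check I have in mind (a sketch; I am assuming train_on_batch reports raw per-batch values without any epoch-level averaging) is:

# sketch: train_on_batch should return the raw per-batch metric values
out = model.train_on_batch(x["is"][:batch_size], y["is"][:batch_size])
print(model.metrics_names)  # ['loss', 'mean_squared_error', 'shape_test']
print(out)                  # I would expect shape_test to be exactly 2048 here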