I basically copied the code from the TensorFlow tutorial and adapted it to this model, which tries to train a neural network to recognize a "stairs" shape:
import numpy as np
import tensorflow as tf
import _pickle as cPickle
with open("var_x.txt", "rb") as fp: # Unpickling
var_x = cPickle.load(fp)
with open("var_y.txt", "rb") as fp: # Unpickling
var_y = cPickle.load(fp)
# Declare list of features, we only have one real-valued feature
def model_fn(features, labels, mode):
    # Build a linear model and predict values
    W = tf.get_variable("W", [4], dtype=tf.float64)
    b = tf.get_variable("b", [1], dtype=tf.float64)
    y = tf.sigmoid(W * features['x'] + b)
    # Loss sub-graph
    loss = tf.reduce_sum(tf.square(y - labels))
    # Training sub-graph
    global_step = tf.train.get_global_step()
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = tf.group(optimizer.minimize(loss),
                     tf.assign_add(global_step, 1))
    # EstimatorSpec connects subgraphs we built to the
    # appropriate functionality.
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=y,
        loss=loss,
        train_op=train)
estimator = tf.estimator.Estimator(model_fn=model_fn)
# define our data sets
x_train = np.array(var_x)
y_train = np.array(var_y)
input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": x_train}, y_train, batch_size=4, num_epochs=10, shuffle=True)
# train
estimator.train(input_fn=input_fn, steps=1000)
# Here we evaluate how well our model did.
print(estimator.get_variable_value("b"))
print(estimator.get_variable_value("W"))
new_samples = np.array(
    [255., 1., 255., 255.], dtype=np.float64)
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": new_samples},
    num_epochs=1,
    shuffle=False)
predictions = list(estimator.predict(input_fn=predict_input_fn))
print(predictions)
The problem is that when I try to predict a figure that should clearly be a stairs, [255., 1., 255., 255.], I get "ValueError: None values not supported.". The training works just fine (save for the fact that the weights it finds are not very similar to the ones here: http://blog.kaggle.com/2017/11/27/introduction-to-neural-networks/), but the predict method doesn't work. This code is mostly just a copy of the tensorflow example, adapted to a four-dimensional vector for x.
In your model_fn, you define the loss in every mode (train / eval / predict). This means that even in predict mode, the labels will be used and need to be provided. When you are in predict mode, you actually just need to return the predictions, so you can return early from the function:
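A minimal sketch of that change, keeping the rest of the question's model_fn as it was (the mode check and early return are the only additions):

def model_fn(features, labels, mode):
    W = tf.get_variable("W", [4], dtype=tf.float64)
    b = tf.get_variable("b", [1], dtype=tf.float64)
    y = tf.sigmoid(W * features['x'] + b)
    # In predict mode, labels is None, so return before the loss is built
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode=mode, predictions=y)
    # Loss and training sub-graphs are only needed for train/eval
    loss = tf.reduce_sum(tf.square(y - labels))
    global_step = tf.train.get_global_step()
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train = tf.group(optimizer.minimize(loss),
                     tf.assign_add(global_step, 1))
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=y,
        loss=loss,
        train_op=train)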
By the way, W * features['x'] returns a tensor of shape (4,); you will need to sum it before adding the bias.
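For example, something like this (a sketch; axis=-1 sums over the four feature values of each example before the bias is added):

y = tf.sigmoid(tf.reduce_sum(W * features['x'], axis=-1) + b)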