I am able to train my model and use ML Engine for prediction, but my results don't include any identifying information. This works fine when submitting one row at a time for prediction, but when submitting multiple rows I have no way of connecting the predictions back to the original input data. The GCP documentation discusses using instance keys, but I can't find any example code that trains and predicts using an instance key. Taking the GCP census example, how would I update the input functions to pass a unique ID through the graph, ignore it during training, yet return the unique ID with predictions? Or, alternatively, if anyone knows of a different example that already uses keys, that would help as well.
import multiprocessing

import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils


def serving_input_fn():
  feature_placeholders = {
      column.name: tf.placeholder(column.dtype, [None])
      for column in INPUT_COLUMNS
  }
  features = {
      key: tf.expand_dims(tensor, -1)
      for key, tensor in feature_placeholders.items()
  }
  return input_fn_utils.InputFnOps(
      features,
      None,  # labels are not needed at serving time
      feature_placeholders
  )
def generate_input_fn(filenames,
                      num_epochs=None,
                      shuffle=True,
                      skip_header_lines=0,
                      batch_size=40):
  # INPUT_COLUMNS, CSV_COLUMNS, CSV_COLUMN_DEFAULTS, UNUSED_COLUMNS,
  # LABEL_COLUMN and parse_label_column come from the census sample.
  def _input_fn():
    files = tf.concat([
        tf.train.match_filenames_once(filename)
        for filename in filenames
    ], axis=0)
    filename_queue = tf.train.string_input_producer(
        files, num_epochs=num_epochs, shuffle=shuffle)
    reader = tf.TextLineReader(skip_header_lines=skip_header_lines)
    _, rows = reader.read_up_to(filename_queue, num_records=batch_size)
    # decode_csv expects rank-2 input, so expand each row into a vector.
    row_columns = tf.expand_dims(rows, -1)
    columns = tf.decode_csv(row_columns, record_defaults=CSV_COLUMN_DEFAULTS)
    features = dict(zip(CSV_COLUMNS, columns))

    # Remove unused columns
    for col in UNUSED_COLUMNS:
      features.pop(col)

    if shuffle:
      features = tf.train.shuffle_batch(
          features,
          batch_size,
          capacity=batch_size * 10,
          min_after_dequeue=batch_size * 2 + 1,
          num_threads=multiprocessing.cpu_count(),
          enqueue_many=True,
          allow_smaller_final_batch=True
      )
    label_tensor = parse_label_column(features.pop(LABEL_COLUMN))
    return features, label_tensor
  return _input_fn
Update: I was able to use the suggested code from the answer below; I just needed to alter it slightly to update the output alternatives in the model_fn_ops instead of just the prediction dict. However, this only works if my serving input function is coded for JSON inputs, similar to this. My serving input function was previously modeled after the CSV serving input function in the Census Core Sample.
I think my problem comes from the build_standardized_signature_def function, and even more so from the is_classification_problem function that it calls. The input dict length using the CSV serving function is 1, so this logic ends up using the classification_signature_def, which only ends up displaying the scores (which, it turns out, are actually the probabilities). With the JSON serving input function, the input dict length is greater than 1, so the predict_signature_def is used instead, which includes all of the outputs.
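The alteration amounts to something like the following inside the wrapped model_fn (a minimal sketch; KEY, key and model_fn_ops are assumed to be defined as in the wrapper shown further below):

  # Add the key to the prediction dict AND to every output alternative,
  # so the exported serving signature carries it too.
  model_fn_ops.predictions[KEY] = key
  for _, outputs in model_fn_ops.output_alternatives.values():
    outputs[KEY] = key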
UPDATE: In version 1.3 the contrib estimators (tf.contrib.learn.DNNClassifier, for example) were changed to inherit from the core estimator class tf.estimator.Estimator, which, unlike its predecessor, hides the model function as a private class member, so you'll need to replace estimator.model_fn in the solution below with estimator._model_fn.
Josh's answer points you to the Flowers example, which is a good solution if you want to use a custom estimator. If you want to stick with a canned estimator (e.g. tf.contrib.learn.DNNClassifier), you can wrap it in a custom estimator that adds support for keys. (Note: I think it's likely canned estimators will gain key support when they move into core.) Here is a minimal sketch of such a wrapper, assuming TF 1.2's contrib.learn API, with KEY as an assumed constant naming the key feature:
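KEY = 'key'  # assumed name of the instance-key feature

def key_model_fn_gen(estimator):
  def _model_fn(features, labels, mode):
    # Pop the key so the wrapped model never sees it; tolerate input_fns
    # that don't supply one (see EDIT3 below).
    key = features.pop(KEY, None)
    # On TF 1.3+ use estimator._model_fn instead (see the UPDATE above).
    model_fn_ops = estimator.model_fn(
        features=features, labels=labels, mode=mode)
    if key is not None:
      # Pass the key through to the predictions...
      model_fn_ops.predictions[KEY] = key
      # ...and to the output alternatives, so the exported serving
      # signature includes it (the asker's alteration above).
      if model_fn_ops.output_alternatives:
        for _, outputs in model_fn_ops.output_alternatives.values():
          outputs[KEY] = key
    return model_fn_ops
  return _model_fn

my_key_estimator = tf.contrib.learn.Estimator(
    model_fn=key_model_fn_gen(
        tf.contrib.learn.DNNClassifier(
            hidden_units=[100, 70, 50, 25],   # illustrative values
            feature_columns=feature_columns,  # defined as in the census sample
            model_dir=model_dir)),
    model_dir=model_dir)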
my_key_estimator can then be used exactly like your DNNClassifier would be used, except it will expect a feature with the name 'key' from input_fns (prediction, evaluation and training).

EDIT2: You will also need to add the corresponding input tensor to the prediction input function of your choice. For example, a new JSON serving input fn would look like:
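(A sketch, assuming string-valued keys and the same KEY constant as above; use tf.int64 if your keys are integers.)

def key_serving_input_fn():
  feature_placeholders = {
      column.name: tf.placeholder(column.dtype, [None])
      for column in INPUT_COLUMNS
  }
  # Accept the instance key alongside the real features.
  feature_placeholders[KEY] = tf.placeholder(tf.string, [None])
  features = {
      key: tf.expand_dims(tensor, -1)
      for key, tensor in feature_placeholders.items()
  }
  return input_fn_utils.InputFnOps(
      features,
      None,  # labels are not needed at serving time
      feature_placeholders
  )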
(This is slightly different between 1.2 and 1.3, as tf.contrib.learn.InputFnOps is replaced with tf.estimator.export.ServingInputReceiver, and padding tensors to rank 2 is no longer necessary in 1.3.) Then ML Engine will send a tensor named "key" with your prediction request, which will be passed to your model and through to your predictions.
EDIT3: Modified key_model_fn_gen to support ignoring missing key values.

EDIT4: Added key for prediction.

Great question. The Cloud ML Engine flowers sample does this by using the tf.identity operation to pass a string straight through from input to output. Here are the relevant lines during graph construction, sketched in the style of that sample (tensor names such as image_bytes, prediction and scores follow the flowers sample and are illustrative):
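# Inputs: a placeholder for the key, recorded alongside the real inputs.
keys_placeholder = tf.placeholder(tf.string, shape=[None])
inputs = {
    'key': keys_placeholder.name,
    'image_bytes': tensors.input_jpeg.name
}
tf.add_to_collection('inputs', json.dumps(inputs))

# Outputs: tf.identity passes the key straight through the graph,
# so each prediction comes back with its key attached.
keys = tf.identity(keys_placeholder)
outputs = {
    'key': keys.name,
    'prediction': tensors.predictions[0].name,
    'scores': tensors.predictions[1].name
}
tf.add_to_collection('outputs', json.dumps(outputs))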
For batch prediction, you need to insert "key": "some_key_value" into your instance records. For online prediction, you would query the above graph with a JSON request like the following (a hypothetical request body; the image_bytes field follows the flowers sample):
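{"instances": [
  {"key": "first_key", "image_bytes": {"b64": "..."}},
  {"key": "second_key", "image_bytes": {"b64": "..."}}
]}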