Unknown Error Sending Data to Google Cloud ML Custom Prediction Routine

Published 2019-07-04 05:31

Question:

I am trying to write a custom ML prediction routine on AI Platform that gets text data from a client, does some custom preprocessing, passes it into the model, and runs the model. I was able to package and deploy this code on Google Cloud successfully. However, every time I send a request to it from Node.js, I get back data: { error: 'Prediction failed: unknown error.' }.

Here is my relevant custom prediction routine code. Note that I set instances to my text in the client and then tokenize and preprocess it in the custom prediction routine.

import os

import nltk
import tensorflow as tf


class CustomPredictor(object):
    def __init__(self, model, session):
        self.model = model
        self.sess = session

    @classmethod
    def from_path(cls, model_dir):
        m = Model(learning_rate=0.1)
        session = tf.Session()
        session.run(tf.global_variables_initializer())
        session.run(tf.local_variables_initializer())
        saver = tf.train.Saver(max_to_keep=0)
        saver.restore(session, os.path.join(model_dir, 'model.ckpt'))
        return cls(m, session)

    def predict(self, instances, **kwargs):
        utterance = nltk.word_tokenize(instances)
        utterance = self.preprocess_utterance(utterance)

        preds = self.sess.run(self.model['preds'],
                              feed_dict={'input_data': utterance})
        return preds

Here is my Node.js code:

   text_string = "Hello how are you?"
   google.auth.getApplicationDefault(function (err, authClient, projectId) {
        if (err) {
            console.log('Authentication failed because of ', err);
            return;
        }
        if (authClient.createScopedRequired && authClient.createScopedRequired()) {
            var scopes = ['https://www.googleapis.com/auth/cloud-platform'];
            authClient = authClient.createScoped(scopes);
        }
        var request = {
            name: "projects/" + projectId + "/models/classifier",
            resource: {"instances": [text_string]},

            // This is a "request-level" option
            auth: authClient
        };

        machinelearning.projects.predict(request, function (err, result) {


            if (err) {
                console.log(err);
            } else {
                console.log(result);
                res.status(200).send('Hello, world! This is the prediction: ' + JSON.stringify(result)).end();
            }
        });
    });

In this code I am just sending the text to the Google Cloud model. The request body is: '{"instances":["Hello how are you?"]}'

Does anyone have an idea of why it's failing?

If not, then does anyone have any idea of how I can debug this? An unknown error message is not useful at all.

Edit:

Here is the output from saved_model_cli with the --all option.

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['length_input'] tensor_info:
        dtype: DT_INT32
        shape: ()
        name: Placeholder_3:0
    inputs['seqlen'] tensor_info:
        dtype: DT_INT32
        shape: (-1)
        name: Placeholder_2:0
    inputs['indicator'] tensor_info:
        dtype: DT_INT32
        shape: (-1, 2)
        name: Placeholder_1:0
    inputs['input_data'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: Placeholder:0
    inputs['y'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: Placeholder_4:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['preds'] tensor_info:
        dtype: DT_INT32
        shape: (-1, -1)
        name: Cast:0
  Method name is: tensorflow/serving/predict

Based on this, I should be able to provide the following dictionary as input, but it does not work:

{"instances": [ { "input_data": [138, 30, 66], "length_input": 1, "indicator": [[0, 0]], "seqlen": [3], "y": [138, 30, 66] } ]}
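For context, the "instances" format groups per-instance values by input name into one batch per named input before anything reaches the graph. A rough sketch of that grouping (an illustration of the format only, not the service's actual code):

```python
def instances_to_columns(instances):
    # Group per-instance values by input name, so each named input
    # becomes a batch: instances[i][name] -> columns[name][i].
    columns = {}
    for instance in instances:
        for name, value in instance.items():
            columns.setdefault(name, []).append(value)
    return columns

columns = instances_to_columns(
    [{"input_data": [138, 30, 66], "length_input": 1,
      "indicator": [[0, 0]], "seqlen": [3], "y": [138, 30, 66]}])
# Each named input now carries a leading batch dimension of 1.
```

One thing that may be worth double-checking: length_input has shape () in the signature, i.e. no batch dimension, while the instances format adds one to every input.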

Answer 1:

I figured out the issue. It was not the formatting of the input data; it was NLTK. nltk.word_tokenize was throwing an error because the data files it needs for tokenization were not available. To solve this I had to either upload that data to Google Cloud or switch to a tokenization method that does not require any data files.
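For the second option, a data-free tokenizer can be built from the standard library alone. This is only a sketch (simple_tokenize is my own name, and it is cruder than nltk.word_tokenize), but it needs no corpus downloads:

```python
import re

def simple_tokenize(text):
    # Runs of word characters, or single punctuation marks; unlike
    # nltk.word_tokenize, this needs no external data (e.g. 'punkt').
    return re.findall(r"\w+|[^\w\s]", text)

tokens = simple_tokenize("Hello how are you?")
# ['Hello', 'how', 'are', 'you', '?']
```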

I don't know why the Google Cloud custom prediction routine service doesn't surface these errors to its users; in all my attempts it only ever returned "unknown error" whenever something went wrong. Had I known precisely what the error was, this would have been an easy fix.
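To make the next failure debuggable, one pragmatic workaround (an unofficial trick, not a documented API; DebuggablePredictor and the stubbed failure below are illustrative) is to catch everything inside predict and return the formatted traceback as the response, so the real server-side error reaches the client instead of "unknown error":

```python
import traceback

class DebuggablePredictor(object):
    # Sketch only: the real preprocessing and inference would go in
    # the try block; the raise simulates the NLTK failure above.
    def predict(self, instances, **kwargs):
        try:
            raise LookupError("Resource punkt not found.")
        except Exception:
            # The traceback comes back in the prediction payload,
            # where the client can log it.
            return [traceback.format_exc()]

result = DebuggablePredictor().predict(["Hello how are you?"])
```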



Answer 2:

I think you need:

{"instances": [
 {"input_data": "hello, how are you?"},
 {"input_data": "who is this?"}
]}

but we can confirm this if you share the output of calling saved_model_cli on your SavedModel files.