Import a simple Tensorflow frozen_model.pb file an

Posted 2020-06-04 03:57

I am trying to import a graph I exported from TensorFlow Python into TensorFlow C++. I have already successfully re-imported the graph in Python. Now I want to write the same code in C++, but I am not sure about the C++ API functions and their usage, as the documentation on the TensorFlow website is not good enough.

Here's the C++ code I found so far.

C++:

namespace tf = tensorflow;

tf::Session* session;

tf::Status status = tf::NewSession(tf::SessionOptions(), &session);
checkStatus(status);

tf::GraphDef graph_def;
status = ReadBinaryProto(tf::Env::Default(), "./models/frozen_model.pb", &graph_def);
checkStatus(status);

status = session->Create(graph_def);
checkStatus(status);

tf::Tensor x(tf::DT_FLOAT, tf::TensorShape());
tf::Tensor y(tf::DT_FLOAT, tf::TensorShape());

x.scalar<float>()() = 23.0;
y.scalar<float>()() = 19.0;

std::vector<std::pair<tf::string, tf::Tensor>> input_tensors = {{"x", x}, {"y", y}};
std::vector<std::string> vNames; // names of the graph nodes to fetch
vNames.push_back("prefix/input_neurons:0");
vNames.push_back("prefix/prediction_restore:0");
std::vector<tf::Tensor> output_tensors;

status = session->Run({}, vNames,  {}, &output_tensors);
checkStatus(status);

tf::Tensor output = output_tensors[0];
std::cout << "Success: " << output.scalar<float>() << "!" << std::endl;
session->Close();
return 0;
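The snippet above calls a checkStatus helper that is never shown. A minimal sketch of what it might look like (checkStatus is the asker's own name, not a TensorFlow API; it is templated here so it compiles on its own, relying only on the ok() and ToString() members that tensorflow::Status does provide):

```cpp
#include <cstdlib>
#include <iostream>

// Hypothetical checkStatus helper matching the calls above. Templated so it
// accepts tensorflow::Status (which has ok() and ToString()) without
// requiring the TensorFlow headers in this sketch.
template <typename StatusT>
void checkStatus(const StatusT& status) {
  if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;  // print the error message
    std::exit(EXIT_FAILURE);                      // abort on failure
  }
}
```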

The problem with the C++ code above is that it reports it cannot find any operation named prefix/input_neurons:0, even though the operation does exist in the graph: when I import the same graph in the Python code below, it works perfectly fine.

Here's the Python code to import the graph successfully.

Python (works perfectly fine):

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the 
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we can use again a convenient built-in function to import a graph_def into the 
    # current default Graph
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def, 
            input_map=None, 
            return_elements=None, 
            name="prefix", 
            op_dict=None, 
            producer_op_list=None
        )
    return graph

# We use our "load_graph" function
graph = load_graph("./models/frozen_model.pb")

# We can verify that we can access the list of operations in the graph
for op in graph.get_operations():
    print(op.name)     # <--- printing the operations snapshot below
    # prefix/Placeholder/inputs_placeholder
    # ...
    # prefix/Accuracy/predictions

# We access the input and output nodes
x = graph.get_tensor_by_name('prefix/input_neurons:0')
y = graph.get_tensor_by_name('prefix/prediction_restore:0')

# We launch a Session
with tf.Session(graph=graph) as sess:

    test_features = [[0.377745556,0.009904444,0.063231111,0.009904444,0.003734444,0.002914444,0.008633333,0.000471111,0.009642222,0.05406,0.050163333,7e-05,0.006528889,0.000314444,0.00649,0.043956667,0.016816667,0.001644444,0.016906667,0.00204,0.027342222,0.13864]]
    # compute the predicted output for test_x
    pred_y = sess.run(y, feed_dict={x: test_features})
    print(pred_y)

Update

I can print the operations from the Python script, and prefix/input_neurons shows up in the list.

The C++ code still fails with an error saying there is no operation named prefix/input_neurons:0 in the graph.

1 Answer

何必那么认真
2020-06-04 04:30

See the Session::Run reference: in C++ the arguments are, in order, the input feed dict, then the output (fetch) node names, then any other operations that need to be run, then the output vector (optionally followed by extra arguments, but it looks like you don't need them). This call should work:

status = session->Run({{"prefix/input_neurons:0", x}}, {"prefix/prediction_restore:0"}, {}, &output_tensors);
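To keep the four arguments straight, here is a tiny plain-C++ mock (no TensorFlow; all names here are hypothetical, and the real Run takes Tensors and returns a Status) that mirrors the shape of Session::Run and shows why putting the fetch names in the wrong slot produces a "not found" style failure:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative mock of tensorflow::Session::Run's argument order:
//   1. feeds:   (tensor name, value) pairs fed into the graph
//   2. fetches: names of tensors whose values should be returned
//   3. targets: ops to run without fetching anything (often empty)
//   4. outputs: receives one value per fetch name, in order
struct MockSession {
  std::map<std::string, float> values;  // stand-in for graph state

  bool Run(const std::vector<std::pair<std::string, float>>& feeds,
           const std::vector<std::string>& fetches,
           const std::vector<std::string>& targets,
           std::vector<float>* outputs) {
    (void)targets;  // unused in this mock
    for (const auto& f : feeds) values[f.first] = f.second;
    for (const auto& name : fetches) {
      auto it = values.find(name);
      if (it == values.end()) return false;  // "no operation named ..."
      outputs->push_back(it->second);
    }
    return true;
  }
};
```

With this mock, Run({{"prefix/input_neurons:0", 23.0f}}, {"prefix/input_neurons:0"}, {}, &out) succeeds, while feeding nothing and fetching an unknown name fails, loosely mirroring the asker's error.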

If you want to set x to the same values as in Python (there is very probably a way to do this without copying the data, but I don't know it), you can do this before calling Run():

std::vector<float> test_features = {0.377745556,0.009904444,0.063231111,0.009904444,0.003734444,0.002914444,0.008633333,0.000471111,0.009642222,0.05406,0.050163333,7e-05,0.006528889,0.000314444,0.00649,0.043956667,0.016816667,0.001644444,0.016906667,0.00204,0.027342222,0.13864};
int n_features = test_features.size();
x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({1, n_features}));
auto x_mapped = x.tensor<float, 2>();

for (int i = 0; i < n_features; i++)
{
    x_mapped(0, i) = test_features[i];
}
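As an aside, the mapped write above fills the tensor in row-major (C-order) storage, which is what TensorFlow tensors use; the same indexing with a plain std::vector, as a sketch with no TensorFlow involved:

```cpp
#include <vector>

// Row-major layout: element (r, c) of a rows x cols tensor sits at flat
// index r * cols + c. For the 1 x n_features tensor above, (0, i) is just i.
std::vector<float> fillRowMajor(const std::vector<float>& features) {
  const int cols = static_cast<int>(features.size());
  std::vector<float> flat(1 * cols);
  for (int c = 0; c < cols; ++c)
    flat[0 * cols + c] = features[c];  // mirrors x_mapped(0, i) = ...
  return flat;
}
```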

Let me know if this works better!
