Error loading TensorFlow graph with C API - tensorflow

I'm trying to use the TensorFlow C API to load and execute a graph. It keeps failing and I can't figure out why.
I first use this Python script to create a very simple graph and save it to a file.
import tensorflow as tf
graph = tf.Graph()
with graph.as_default():
    input = tf.placeholder(tf.float32, [10, 3], name='input')
    output = tf.reduce_sum(input**2, name='output')
tf.train.write_graph(graph, '.', 'test.pbtxt')
Then I use this C++ code to load it in.
#include <fstream>
#include <iostream>
#include <string>
#include <c_api.h>
using namespace std;
int main() {
    ifstream graphFile("test.pbtxt");
    string graphText((istreambuf_iterator<char>(graphFile)), istreambuf_iterator<char>());
    TF_Buffer* buffer = TF_NewBufferFromString(graphText.c_str(), graphText.size());
    TF_Graph* graph = TF_NewGraph();
    TF_ImportGraphDefOptions* importOptions = TF_NewImportGraphDefOptions();
    TF_Status* status = TF_NewStatus();
    TF_GraphImportGraphDef(graph, buffer, importOptions, status);
    cout << TF_GetCode(status) << endl;
    return 0;
}
The status code it prints is 3, or TF_INVALID_ARGUMENT. Which argument is invalid and why? I verified the file contents are loaded correctly into graphText, and all the other arguments are trivial.

First of all, I think you should write the graph in binary form with as_graph_def(); in your case:
with open('test.pb', 'wb') as f:
f.write(graph.as_graph_def().SerializeToString())
Apart from that, I recommend not using the C API directly, as it is error-prone and makes it easy to leak memory. Instead, I tried your code using cppflow, a C++ wrapper, and it works like a charm. I used the following code:
// Load model
Model model("../test.pb");
// Declare tensors by name
auto input = new Tensor(model, "input");
auto output = new Tensor(model, "output");
// Feed data
std::vector<float> data(30, 1);
input->set_data(data);
// Run and show
model.run(input, output);
std::cout << output->get_data<float>()[0] << std::endl;
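If you do want to stay on the plain C API: the underlying problem is that tf.train.write_graph defaults to the text (.pbtxt) format, while TF_GraphImportGraphDef expects a binary-serialized GraphDef, hence TF_INVALID_ARGUMENT. Below is a minimal sketch (not a drop-in for your build) that reads the binary test.pb written above, prints TF_Message(status) so the actual reason for a failure is visible, and frees the handles:
#include <fstream>
#include <iostream>
#include <string>
#include <c_api.h>

int main() {
    // Read the *binary* GraphDef; note std::ios::binary and the .pb file.
    std::ifstream graphFile("test.pb", std::ios::binary);
    std::string graphBytes((std::istreambuf_iterator<char>(graphFile)),
                           std::istreambuf_iterator<char>());

    TF_Buffer* buffer = TF_NewBufferFromString(graphBytes.c_str(), graphBytes.size());
    TF_Graph* graph = TF_NewGraph();
    TF_ImportGraphDefOptions* importOptions = TF_NewImportGraphDefOptions();
    TF_Status* status = TF_NewStatus();

    TF_GraphImportGraphDef(graph, buffer, importOptions, status);
    const TF_Code code = TF_GetCode(status);
    if (code != TF_OK) {
        // TF_Message explains *why* the argument was rejected.
        std::cerr << TF_Message(status) << std::endl;
    }

    // The C API requires manual cleanup of everything it hands out.
    TF_DeleteStatus(status);
    TF_DeleteImportGraphDefOptions(importOptions);
    TF_DeleteGraph(graph);
    TF_DeleteBuffer(buffer);
    return code == TF_OK ? 0 : 1;
}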

Related

Tensorflow Lite Model: Incompatible shapes for input and output array

I'm currently working on a TensorFlow Lite image classifier app that can recognize UNO cards. But when I run the float model in the ImageClassifier class, something goes wrong.
The error is the following:
java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (Identity) with shape [1, 10647, 4] to a Java object with shape [1, 15].
Here's the code that throws that error:
tflite.run(imgData, labelProbArray);
And this is how I have created imgData and labelProbArray:
private static final int DIM_BATCH_SIZE = 1;
private static final int DIM_PIXEL_SIZE = 3; //r+g+b = 1+1+1
static final int DIM_IMG_SIZE_X = 416;
static final int DIM_IMG_SIZE_Y = 416;
imgData = ByteBuffer.allocateDirect(DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE * 4); //The last value because size of float is 4
labelProbArray = new float[1][labelList.size()]; // {1, 15}
You can check the inputs and outputs of the .tflite file. Source.
I know you should create a buffer for the output values, but I tried to import this and it didn't work:
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;
Any ideas? Thank you so much for reading^^
Edit v2:
Thanks to yyoon I realised that I hadn't populated my model with metadata, so I ran this line in my cmd:
python ./metadata_writer_for_image_classifier_uno.py \
  --model_file=./model_without_metadata/custom.tflite \
  --label_file=./model_without_metadata/labels.txt \
  --export_directory=model_with_metadata
Before that, I modified this file with my data:
_MODEL_INFO = {
"custom.tflite":
ModelSpecificInfo(
name="UNO image classifier",
version="v1",
image_width=416,
image_height=416,
image_min=0,
image_max=255,
mean=[127.5],
std=[127.5],
num_classes=15)
}
And another error appeared:
ValueError: The number of output tensors (2) should match the number of output tensor metadata (1)
I don't know why my model has 2 output tensors...
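For reference, the shapes and names the converted model actually reports can be listed outside the app; a minimal, illustrative sketch assuming the TensorFlow Lite C++ API on desktop (custom.tflite as above):
#include <cstdio>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
    auto model = tflite::FlatBufferModel::BuildFromFile("custom.tflite");
    if (!model) return 1;
    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    // Print every output tensor's name and shape; a detection-style model
    // typically reports more than one output here.
    for (int idx : interpreter->outputs()) {
        const TfLiteTensor* t = interpreter->tensor(idx);
        std::printf("%s:", t->name);
        for (int d = 0; d < t->dims->size; ++d) std::printf(" %d", t->dims->data[d]);
        std::printf("\n");
    }
    return 0;
}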

A Good Way to Expose CUPY MemoryPointer in C/C++?

NumPy provides well-defined C APIs, so one can easily handle NumPy arrays in C/C++ space. For example, if I have a C function that takes C arrays (pointers) as arguments, I can just #include <numpy/arrayobject.h> and pass a NumPy array to it by accessing its data member (or use the C API PyArray_DATA).
Recently I wanted to achieve the same for CuPy, but I could not find a header file that I can include. To be specific, my goal is as follows:
I have some CUDA kernels and their callers written in C/C++. The callers run on host but take handles of memory buffers on device as arguments. The computed results of the callers are also stored on device.
I want to wrap the callers into Python functions so that I can control when to transfer data from device to host in Python. That means I have to wrap the resulted device memory pointers in Python objects. CuPy's ndarray is the best choice I can think of.
I can't use CuPy's user-defined-kernel mechanism because the functions I want to wrap are not directly CUDA kernels; they must contain host code.
Currently, I've found a workaround. I write the Python functions in Cython, taking CuPy arrays as inputs and returning CuPy arrays. I then cast the .data.ptr attribute to C's size_t type, and further cast it to whatever pointer type I need. Example code follows.
Example Code
//kernel.cu
#include <math.h>
__global__ void vecSumKernel(float *A, float *B, float *C, int n) {
int i = threadIdx.x + blockIdx.x * blockDim.x;
if (i < n)
C[i] = A[i] + B[i];
}
// This is the C function I want to wrap into Python.
// Notice it does not allocate any memory on device. I want that to be done by cupy.
extern "C" void vecSum(float *A_d, float *B_d, float *C_d, int n) {
int threadsPerBlock = 512;
if (threadsPerBlock > n) threadsPerBlock = n;
int nBlocks = (int)ceilf((float)n / (float)threadsPerBlock);
vecSumKernel<<<nBlocks, threadsPerBlock>>>(A_d, B_d, C_d, n);
}
//kernel.h
#ifndef KERNEL_H_
#define KERNEL_H_
void vecSum(float *A_d, float *B_d, float *C_d, int n);
#endif
# test_module.pyx
import cupy as cp
import numpy as np
cdef extern from "kernel.h":
void vecSum(float *A_d, float *B_d, float *C_d, int n)
cdef vecSum_wrapper(size_t aPtr, size_t bPtr, size_t cPtr, int n):
# here the Python int -- cp.ndarray.data.ptr -- is first cast to size_t,
# and then cast to (float *).
vecSum(<float*>aPtr, <float*>bPtr, <float*>cPtr, n)
# This is the Python function I want to use
# a, b are cupy arrays
def vec_sum(a, b):
a_ptr = a.data.ptr
b_ptr = b.data.ptr
n = a.shape[0]
output = cp.empty(shape=(n,), dtype=a.dtype)
c_ptr = output.data.ptr
vecSum_wrapper(a_ptr, b_ptr, c_ptr, n)
return output
Compile and Run
To compile, one can first compile kernel.cu into a static library, say, libVecSum. Then use Cython to compile test_module.pyx into test_module.c, and build the Python extension as usual.
# setup.py
from setuptools import Extension, setup
ext_module = Extension(
    "cupyExt.test_module",
    sources=["cupyExt/test_module.c"],
    library_dirs=["cupyExt/"],
    libraries=['libVecSum', 'cudart'])
setup(
    name="cupyExt",
    version="0.0.0",
    ext_modules = [ext_module],
)
It seems to work.
>>> import cupy as cp
>>> from cupyExt import test_module
>>> a = cp.ones(5, dtype=cp.float32) * 3
>>> b = cp.arange(5, dtype=cp.float32)
>>> c = test_module.vec_sum(a, b)
>>> print(c.device)
<CUDA Device 0>
>>> print(c)
[3. 4. 5. 6. 7.]
Any better ways?
I am not sure whether this approach is memory-safe, and the casting from .data.ptr to raw C pointers also feels fragile. I would like to hear people's thoughts and comments on this.
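Not an answer to the design question, but one small safety net that fits the current setup: before launching the kernel, the C++ side can at least verify that the integers arriving via .data.ptr really refer to device memory. A minimal sketch, assuming the CUDA runtime API (CUDA 10+, where cudaPointerAttributes exposes the type field used here); vecSumChecked is an illustrative name:
#include <cstdio>
#include <cuda_runtime.h>

// Forward declaration of the existing caller from kernel.cu.
extern "C" void vecSum(float *A_d, float *B_d, float *C_d, int n);

// Returns true if ptr refers to device or managed memory known to the CUDA runtime.
static bool isDevicePointer(const void *ptr) {
    cudaPointerAttributes attr;
    if (cudaPointerGetAttributes(&attr, ptr) != cudaSuccess) {
        cudaGetLastError();  // clear the sticky error older runtimes set for host pointers
        return false;
    }
    return attr.type == cudaMemoryTypeDevice || attr.type == cudaMemoryTypeManaged;
}

// Wrapper the Cython layer could call instead of vecSum directly.
extern "C" void vecSumChecked(float *A_d, float *B_d, float *C_d, int n) {
    if (!isDevicePointer(A_d) || !isDevicePointer(B_d) || !isDevicePointer(C_d)) {
        std::fprintf(stderr, "vecSumChecked: received a pointer that is not device memory\n");
        return;
    }
    vecSum(A_d, B_d, C_d, n);
}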

How to import a saved TensorFlow model trained using tf.estimator and predict on input data

I have saved the model using the tf.estimator method export_savedmodel, as follows:
export_dir="exportModel/"
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path="Model/model.ckpt-400")
How can I import this saved model and use it for predictions?
I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.
That particular example doesn't actually export a model, so let's do that (not needed for use case 1):
def serving_input_receiver_fn():
    """Build the serving inputs."""
    # The outer dimension (None) allows us to batch up inputs for
    # efficiency. However, it also means that if we want a prediction
    # for a single instance, we'll need to wrap it in an outer list.
    inputs = {"x": tf.placeholder(shape=[None, 4], dtype=tf.float32)}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)
Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a "canned" estimator (such as DNNClassifier). For a workaround, see the "Appendix: Workaround" section.
The code below references export_dir (return value from the export step) to emphasize that it is not "/path/to/model", but rather, a subdirectory of that directory whose name is a timestamp.
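If a client needs to resolve that timestamped subdirectory itself, here is a small illustrative helper (assuming C++17's <filesystem>; the function name is mine, not part of TensorFlow). It relies on the export names being epoch-second timestamps of equal length, so the lexicographically largest name is the newest export:
#include <filesystem>
#include <iostream>
#include <string>

std::string LatestExportDir(const std::string& export_base) {
    namespace fs = std::filesystem;
    std::string latest;
    for (const auto& entry : fs::directory_iterator(export_base)) {
        if (!entry.is_directory()) continue;
        const std::string name = entry.path().filename().string();
        if (name > latest) latest = name;
    }
    return latest.empty() ? export_base
                          : (fs::path(export_base) / latest).string();
}

int main() {
    std::cout << LatestExportDir("/path/to/model") << std::endl;
    return 0;
}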
Use Case 1: Perform prediction in the same process as training
This is a scikit-learn type of experience, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:
classifier.train(input_fn=train_input_fn, steps=2000)
# [...snip...]
predictions = list(classifier.predict(input_fn=predict_input_fn))
predicted_classes = [p["classes"] for p in predictions]
Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions
Python Client
Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(export_dir)
predictions = predict_fn(
{"x": [[6.4, 3.2, 4.5, 1.5],
[5.8, 3.1, 5.0, 1.7]]})
print(predictions['scores'])
Java Client
package dummy;
import java.nio.FloatBuffer;
import java.util.Arrays;
import java.util.List;
import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
public class Client {
public static void main(String[] args) {
Session session = SavedModelBundle.load(args[0], "serve").session();
Tensor x =
Tensor.create(
new long[] {2, 4},
FloatBuffer.wrap(
new float[] {
6.4f, 3.2f, 4.5f, 1.5f,
5.8f, 3.1f, 5.0f, 1.7f
}));
// Doesn't look like Java has a good way to convert the
// input/output name ("x", "scores") to their underlying tensor,
// so we hard code them ("Placeholder:0", ...).
// You can inspect them on the command-line with saved_model_cli:
//
// $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default
final String xName = "Placeholder:0";
final String scoresName = "dnn/head/predictions/probabilities:0";
List<Tensor> outputs = session.runner()
.feed(xName, x)
.fetch(scoresName)
.run();
// Outer dimension is batch size; inner dimension is number of classes
float[][] scores = new float[2][3];
outputs.get(0).copyTo(scores);
System.out.println(Arrays.deepToString(scores));
}
}
C++ Client
You'll likely want to use tensorflow::LoadSavedModel with Session.
#include <unordered_set>
#include <utility>
#include <vector>
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"
namespace tf = tensorflow;
int main(int argc, char** argv) {
const std::string export_dir = argv[1];
tf::SavedModelBundle bundle;
tf::Status load_status = tf::LoadSavedModel(
tf::SessionOptions(), tf::RunOptions(), export_dir, {"serve"}, &bundle);
if (!load_status.ok()) {
std::cout << "Error loading model: " << load_status << std::endl;
return -1;
}
// We should get the signature out of MetaGraphDef, but that's a bit
// involved. We'll take a shortcut like we did in the Java example.
const std::string x_name = "Placeholder:0";
const std::string scores_name = "dnn/head/predictions/probabilities:0";
auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));
auto matrix = x.matrix<float>();
matrix(0, 0) = 6.4;
matrix(0, 1) = 3.2;
matrix(0, 2) = 4.5;
matrix(0, 3) = 1.5;
matrix(1, 0) = 5.8;
matrix(1, 1) = 3.1;
matrix(1, 2) = 5.0;
matrix(1, 3) = 1.7;
std::vector<std::pair<std::string, tf::Tensor>> inputs = {{x_name, x}};
std::vector<tf::Tensor> outputs;
tf::Status run_status =
bundle.session->Run(inputs, {scores_name}, {}, &outputs);
if (!run_status.ok()) {
cout << "Error running session: " << run_status << std::endl;
return -1;
}
for (const auto& tensor : outputs) {
std::cout << tensor.matrix<float>() << std::endl;
}
}
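The hard-coded tensor names above ("Placeholder:0", etc.) can also be looked up from the signature stored in the loaded bundle's MetaGraphDef. A minimal sketch of that lookup; the key names ("serving_default", "x", "scores") mirror the export above and are assumptions about that particular model:
#include <iostream>
#include <string>
#include "tensorflow/cc/saved_model/loader.h"

namespace tf = tensorflow;

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    const std::string export_dir = argv[1];
    tf::SavedModelBundle bundle;
    TF_CHECK_OK(tf::LoadSavedModel(tf::SessionOptions(), tf::RunOptions(),
                                   export_dir, {"serve"}, &bundle));
    // signature_def() is a map<string, SignatureDef> in the MetaGraphDef proto.
    const auto& signature =
        bundle.meta_graph_def.signature_def().at("serving_default");
    std::cout << "input  x      -> " << signature.inputs().at("x").name() << "\n"
              << "output scores -> " << signature.outputs().at("scores").name()
              << std::endl;
    return 0;
}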
Use Case 3: Serve a model using TensorFlow Serving
Exporting models in a manner amenable to serving a Classification model requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow serving:
def serving_input_receiver_fn():
    """Build the serving inputs."""
    # The outer dimension (None) allows us to batch up inputs for
    # efficiency. However, it also means that if we want a prediction
    # for a single instance, we'll need to wrap it in an outer list.
    example_bytestring = tf.placeholder(
        shape=[None],
        dtype=tf.string,
    )
    features = tf.parse_example(
        example_bytestring,
        tf.feature_column.make_parse_example_spec(feature_columns)
    )
    return tf.estimator.export.ServingInputReceiver(
        features, {'examples': example_bytestring})

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)
The reader is referred to TensorFlow Serving's documentation for more instructions on how to set up TensorFlow Serving, so I'll only provide the client code here:
# Omitting a bunch of connection/initialization code...
# But at some point we end up with a stub whose lifecycle
# is generally longer than that of a single request.
stub = create_stub(...)
# The actual values for prediction. We have two examples in this
# case, each consisting of a single, multi-dimensional feature `x`.
# This data here is the equivalent of the map passed to the
# `predict_fn` in use case #2.
examples = [
tf.train.Example(
features=tf.train.Features(
feature={"x": tf.train.Feature(
float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),
tf.train.Example(
features=tf.train.Features(
feature={"x": tf.train.Feature(
float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),
]
# Build the RPC request.
predict_request = predict_pb2.PredictRequest()
predict_request.model_spec.name = "default"
predict_request.inputs["examples"].CopyFrom(
tensor_util.make_tensor_proto(examples, tf.float32))
# Perform the actual prediction.
stub.Predict(predict_request, PREDICT_DEADLINE_SECS)
Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).
Appendix: Working around Exports from Canned Models in TF 1.3
There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for use case 2 (the problem does not exist for "custom" estimators). Here is a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:
# Build 3 layer DNN with 10, 20, 10 units respectively.
class Wrapper(tf.estimator.Estimator):
    def __init__(self, **kwargs):
        dnn = tf.estimator.DNNClassifier(**kwargs)

        def model_fn(mode, features, labels):
            spec = dnn._call_model_fn(features, labels, mode)
            export_outputs = None
            if spec.export_outputs:
                export_outputs = {
                    "serving_default": tf.estimator.export.PredictOutput(
                        {"scores": spec.export_outputs["serving_default"].scores,
                         "classes": spec.export_outputs["serving_default"].classes})}
            # Replace the 3rd argument (export_outputs)
            copy = list(spec)
            copy[4] = export_outputs
            return tf.estimator.EstimatorSpec(mode, *copy)

        super(Wrapper, self).__init__(model_fn, kwargs["model_dir"], dnn.config)

classifier = Wrapper(feature_columns=feature_columns,
                     hidden_units=[10, 20, 10],
                     n_classes=3,
                     model_dir="/tmp/iris_model")
I don't think there is a bug with canned Estimators (or rather, if there ever was one, it has been fixed). I was able to successfully export a canned estimator model using Python and import it in Java.
Here is my code to export the model:
a = tf.feature_column.numeric_column("a");
b = tf.feature_column.numeric_column("b");
feature_columns = [a, b];
model = tf.estimator.DNNClassifier(feature_columns=feature_columns ...);
# To export
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns);
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec);
servable_model_path = model.export_savedmodel(servable_model_dir, export_input_fn, as_text=True);
To import the model in Java, I used the Java client code provided by rhaertel80 above and it works. Hope this also answers Ben Fowler's question above.
It appears that the TensorFlow team does not agree that there is a bug in version 1.3 using canned estimators for exporting a model under use case #2. I submitted a bug report here:
https://github.com/tensorflow/tensorflow/issues/13477
The response I received from TensorFlow is that the input must only be a single string tensor. It appears that there may be a way to consolidate multiple features into a single string tensor using serialized tf.Example protos, but I have not found a clear method to do this. If anyone has code showing how to do this, I would be appreciative.
You need to export the saved model using tf.contrib.export_savedmodel, and you need to define an input receiver function to pass input to. Later you can load the saved model (generally saved_model.pb) from disk and serve it.

In TensorFlow's C++ API, how to generate a graph file for visualization with TensorBoard?

There is a way to create a file with Python that can be visualized by TensorBoard (see here). I have tried this code and it works well.
import tensorflow as tf
a = tf.add(1, 2,)
b = tf.multiply(a, 3)
c = tf.add(4, 5,)
d = tf.multiply(c, 6,)
e = tf.multiply(4, 5,)
f = tf.div(c, 6,)
g = tf.add(b, d)
h = tf.multiply(g, f)
with tf.Session() as sess:
    print(sess.run(h))
with tf.Session() as sess:
    writer = tf.summary.FileWriter("output", sess.graph)
    print(sess.run(h))
    writer.close()
Now I am using the TensorFlow C++ API to create my computations. How can I visualize them with TensorBoard?
There seems to be a FileWriter interface in the C++ API as well, but I have not seen any example. Is it the same interface?
See my answer here, which gives you a 26-liner in C++ to do this:
#include <tensorflow/core/util/events_writer.h>
#include <string>
#include <iostream>
void write_scalar(tensorflow::EventsWriter* writer, double wall_time, tensorflow::int64 step,
                  const std::string& tag, float simple_value) {
    tensorflow::Event event;
    event.set_wall_time(wall_time);
    event.set_step(step);
    tensorflow::Summary::Value* summ_val = event.mutable_summary()->add_value();
    summ_val->set_tag(tag);
    summ_val->set_simple_value(simple_value);
    writer->WriteEvent(event);
}

int main(int argc, char const *argv[]) {
    std::string event_file = "./events";
    tensorflow::EventsWriter writer(event_file);
    // Start at i = 1 so the toy loss value 150.f / i stays finite.
    for (int i = 1; i <= 150; ++i)
        write_scalar(&writer, i * 20, i, "loss", 150.f / i);
    return 0;
}
Looks like you want tensorflow::EventsWriter from tensorflow/core/util/events_writer.h. You'll need to manually create an Event object to use it, though.
The Python code in tf.summary.FileWriter handles a lot of the details for you, so I'd suggest only using the C++ API if absolutely necessary... Is there a compelling reason to implement your training in C++?
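Since the question is specifically about getting the graph (rather than scalar curves) into TensorBoard, here is the missing piece as a sketch: an Event also has a graph_def field, so a GraphDef built with the C++ API can be written through the same EventsWriter. This assumes the tensorflow::Scope C++ API; the tiny graph is just an illustration:
#include <string>
#include "tensorflow/cc/framework/scope.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/util/events_writer.h"

int main() {
    // Build a tiny graph with the C++ API.
    tensorflow::Scope root = tensorflow::Scope::NewRootScope();
    auto a = tensorflow::ops::Const(root, 1);
    auto b = tensorflow::ops::Const(root, 2);
    auto sum = tensorflow::ops::Add(root.WithOpName("sum"), a, b);

    tensorflow::GraphDef graph_def;
    TF_CHECK_OK(root.ToGraphDef(&graph_def));

    // The Event proto has a graph_def field; TensorBoard's Graphs tab reads it.
    tensorflow::Event event;
    event.set_wall_time(0);
    event.set_step(0);
    graph_def.SerializeToString(event.mutable_graph_def());

    tensorflow::EventsWriter writer("./events");
    writer.WriteEvent(event);
    writer.Flush();
    return 0;
}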

Tensorflow: how to add a user custom op accepting two 1D vector tensors and outputting a scalar?

I'm trying the code below, but it does not work.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
using namespace tensorflow;
REGISTER_OP("Auc")
.Input("predicts: T1")
.Input("labels: T2")
.Output("z: double")
.Attr("T1: {float, double}")
.Attr("T2: {int32, int64}")
.SetIsCommutative()
.Doc(R"doc(
Given predicts and labels, output the AUC
)doc");
class AucOp : public OpKernel {
public:
explicit AucOp(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& predicts_tensor = context->input(0);
const Tensor& labels_tensor = context->input(1);
auto predicts = predicts_tensor.flat<double>();
auto labels = labels_tensor.flat<int32>();
// Create an output tensor
Tensor* output_tensor = NULL;
TensorShape output_shape;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &output_tensor));
output_tensor->flat<double>().setConstant(predicts(0) * labels(0));
}
};
REGISTER_KERNEL_BUILDER(Name("Auc").Device(DEVICE_CPU), AucOp);
test.py
predicts = tf.constant([0.8, 0.5, 0.12])
labels = tf.constant([-1, 1, 1])
output = tf.user_ops.auc(predicts, labels)
with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    print output.eval()
./test.py
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/core/common_runtime/direct_session.cc:60] Direct session inter op parallelism threads: 8
F ./tensorflow/core/public/tensor.h:453] Check failed: dtype() == DataTypeToEnum<T>::v() (1 vs. 2)
Aborted
The issue is that the predicts tensor in your Python program has type float, and your op registration accepts this as a valid type for the predicts input (since T1 can be float or double), but AucOp::Compute() assumes that the predicts input always has type double (in the call to predicts_tensor.flat<double>()). The tensorflow::Tensor class does not convert the type of elements in the tensor when you ask for values of a different type, and instead it raises a fatal error.
There are several possible solutions:
To get things working quickly, you could change the type of predicts in your Python program to tf.float64 (which is a synonym for double in the Python front-end):
predicts = tf.constant([0.8, 0.5, 0.12], dtype=tf.float64)
You could start by defining a simpler op that accepts inputs of a single type only:
REGISTER_OP("Auc")
.Input("predicts: double")
.Input("labels: int32")
...;
You could add code in the AucOp::Compute() method to test the input type and access the input values as appropriate. (Use this->input_type(i) to find the type of the ith input.)
You could define a templated class AucOp<TPredict, TLabel>, then use TypeConstraint<> in the REGISTER_KERNEL_BUILDER call to define specializations for each of the four valid combinations of prediction and label types. This would look something like:
REGISTER_KERNEL_BUILDER(Name("Auc")
.Device(DEVICE_CPU)
.TypeConstraint<float>("T1")
.TypeConstraint<int32>("T2"),
AucOp<float, int32>);
// etc. for AucOp<double, int32>, AucOp<float, int64>, and AucOp<double, int64>.
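For completeness, a minimal sketch of what the templated kernel in that last option might look like; it keeps the toy predicts(0) * labels(0) computation from the question rather than a real AUC, and assumes the same Op registration as above:
// Sketch of the templated kernel. TPredict is float or double, TLabel is
// int32 or int64; the computation mirrors the toy example in the question.
template <typename TPredict, typename TLabel>
class AucOp : public OpKernel {
 public:
  explicit AucOp(OpKernelConstruction* context) : OpKernel(context) {}
  void Compute(OpKernelContext* context) override {
    const Tensor& predicts_tensor = context->input(0);
    const Tensor& labels_tensor = context->input(1);
    auto predicts = predicts_tensor.flat<TPredict>();
    auto labels = labels_tensor.flat<TLabel>();
    // Allocate a scalar output, as in the original kernel.
    Tensor* output_tensor = nullptr;
    OP_REQUIRES_OK(context,
                   context->allocate_output(0, TensorShape({}), &output_tensor));
    output_tensor->scalar<double>()() =
        static_cast<double>(predicts(0)) * static_cast<double>(labels(0));
  }
};
// The four REGISTER_KERNEL_BUILDER calls shown above then pick the matching
// AucOp<TPredict, TLabel> instantiation for each (T1, T2) combination.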