Tensorflow C API placeholder / input variable setting

I am trying to use the TensorFlow C API to run an implementation of LeNet that has been saved from a Keras/TF model, but I am having consistent problems with setting the input. The relevant piece of code is:
// Load the image with OpenCV
CvMat * img = cvLoadImageM(argv[1], CV_LOAD_IMAGE_COLOR);
// Create a tensor from the image
int64_t dims4[] = {1, 1, 28, 28};
TF_Tensor * imgTensor = TF_NewTensor(TF_FLOAT, dims4, 4, img, 28*28*sizeof(float), NULL, NULL);
TF_Operation * init_op2 = TF_GraphOperationByName(graph, "conv2d_1_input");
TF_Operation * targets[] = {init_op2};
// Build up the inputs
TF_Output inp = {
    init_op2,
    0
};
TF_Output * inputs[] = {&inp};
TF_Tensor * input_values[] = {imgTensor};
printf("\nBefore\n");
TF_SessionRun(session, NULL,
              &inp, input_values, 1, // inputs
              NULL, NULL, 0,         // outputs
              &init_op2, 1,          // targets
              NULL,
              status);
printf("After\n");
printf("Status %d %s\n", TF_GetCode(status), TF_Message(status));
However, no matter how I try to build up the input tensor, I get the error status and message:
Status 3 You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float and shape [?,1,28,28]
[[Node: conv2d_1_input = Placeholder[_output_shapes=[[?,1,28,28]], dtype=DT_FLOAT, shape=[?,1,28,28], _device=...]()]]
Any suggestions on what I am doing wrong?

In your call to TF_SessionRun, you're also providing the conv2d_1_input operation as a "target". The error message could be improved, but it's basically complaining that you're asking the session to execute a placeholder operation, which isn't possible (see the note in the documentation for tf.placeholder).
Shouldn't you be asking for a different target or output tensor from the call to TF_SessionRun, with something like:
TF_Output out = { TF_GraphOperationByName(graph, "<name_of_output_tensor>"), 0 };
TF_Tensor * output_value = NULL;
TF_SessionRun(session, NULL,
              &inp, input_values, 1,  // inputs
              &out, &output_value, 1, // outputs
              NULL, 0,                // targets
              NULL, status);
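If you're not sure which name to use for <name_of_output_tensor>, one way to find it is to print the operation names from the original Keras model before exporting it. A minimal sketch, assuming tf.keras on TF 1.x (where tensors expose .op.name) and a hypothetical saved file lenet.h5:
from tensorflow import keras

# Load the Keras model that the graph was exported from.
# "lenet.h5" is a placeholder for your actual model file.
model = keras.models.load_model('lenet.h5')

# These operation names are the strings TF_GraphOperationByName expects,
# e.g. "conv2d_1_input" for the input placeholder.
print(model.input.op.name)
print(model.output.op.name)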
Hope that helps.

Related

Tensorflow Lite Model: Incompatible shapes for input and output array

I'm currently working on a TensorFlow Lite image classifier app that can recognize UNO cards, but when I run the float model in the ImageClassifier class, something goes wrong. The error is the following:
java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (Identity) with shape [1, 10647, 4] to a Java object with shape [1, 15].
Here's the code that throw that error:
tflite.run(imgData, labelProbArray);
And this is how I have created imgData and labelProbArray:
private static final int DIM_BATCH_SIZE = 1;
private static final int DIM_PIXEL_SIZE = 3; //r+g+b = 1+1+1
static final int DIM_IMG_SIZE_X = 416;
static final int DIM_IMG_SIZE_Y = 416;
imgData = ByteBuffer.allocateDirect(DIM_BATCH_SIZE * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE * 4); // * 4 because a float is 4 bytes
labelProbArray = new float[1][labelList.size()]; // {1, 15}
You can check the inputs and outputs of the .tflite file. Source.
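For reference, the same check can be done from Python with the TFLite interpreter. A minimal sketch, assuming the converted file is named custom.tflite:
import tensorflow as tf

# Load the converted model and list its input/output tensors.
interpreter = tf.lite.Interpreter(model_path="custom.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])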
I know you should create a buffer for the output values, but I tried to import this and it didn't work:
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;
Any ideas? Thank you so much for reading^^
Edit v2:
Thanks to yyoon I realised that I hadn't populated my model with metadata, so I ran this line in my cmd:
python ./metadata_writer_for_image_classifier_uno.py \
  --model_file=./model_without_metadata/custom.tflite \
  --label_file=./model_without_metadata/labels.txt \
  --export_directory=model_with_metadata
Before that, I modified this file with my data:
_MODEL_INFO = {
    "custom.tflite":
        ModelSpecificInfo(
            name="UNO image classifier",
            version="v1",
            image_width=416,
            image_height=416,
            image_min=0,
            image_max=255,
            mean=[127.5],
            std=[127.5],
            num_classes=15)
}
And another error appeared:
ValueError: The number of output tensors (2) should match the number of output tensor metadata (1)
I don't know why my model has two output tensors...

TensorflowJS: how to reset input/output shapes for pretrained model in TFJS

For a pre-trained model in Python we can reset the input/output shapes:
from tensorflow import keras
# Load the model
model = keras.models.load_model('models/generator.h5')
# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))
# Trace out the graph using the input:
outputs = model(inputs)
# Override the model:
model = keras.models.Model(inputs, outputs)
The source code
I'm trying to do the same in TFJS:
// Load the model
this.model = await tf.loadLayersModel('/assets/fast_srgan/model.json');
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
TFJS does not support one of the layers in the model:
...
u = keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(layer_input)
u = tf.nn.depth_to_space(u, 2) # <- TFJS does not support this layer
u = keras.layers.PReLU(shared_axes=[1, 2])(u)
...
I wrote my own:
import * as tf from '@tensorflow/tfjs';

export class DepthToSpace extends tf.layers.Layer {
  constructor() {
    super({});
  }

  computeOutputShape(shape: Array<number>) {
    // I think the issue is here,
    // because the error occurs during initialization of the model
    return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
  }

  call(input): tf.Tensor {
    const result = tf.depthToSpace(input[0], 2);
    return result;
  }

  static get className() {
    return 'TensorFlowOpLayer';
  }
}
Using the model:
tf.tidy(() => {
  let img = tf.browser.fromPixels(this.imgLr.nativeElement, 3);
  img = tf.div(img, 255);
  img = tf.expandDims(img, 0);
  let sr = this.model.predict(img) as tf.Tensor;
  sr = tf.mul(tf.div(tf.add(sr, 1), 2), 255).arraySync()[0];
  tf.browser.toPixels(sr as tf.Tensor3D, this.imgSrCanvas.nativeElement);
});
but I get the error:
Error: Input 0 is incompatible with layer p_re_lu: expected axis 1 of input shape to have value 96 but got shape 1,128,128,32.
The pre-trained model was trained on 96x96 pixel images. If I use a 96x96 image, it works, but other sizes (for example 128x128) don't. In Python, we can easily reset the input/output shapes. Why doesn't it work in JS?
To define a new model from the layers of the previous model, you need to use tf.model:
this.model = tf.model({inputs: inputs, outputs: outputs});
I tried to debug the DepthToSpace class above, and saw that when I do not try to rewrite the size, the computeOutputShape method runs only twice, whereas it runs 4 times when I try to reset the inputs/outputs. I then opened the model's JSON file, changed the inputs from [null, 96, 96, 32] to [null, 128, 128, 32], and removed these lines:
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
And now it works with 128x128 images. It looks like the piece of code above adds new layers instead of rewriting the existing ones.

How to use vectors created by P5 createVector as a tensor in tensorflow.js

I am using p5 to capture the vector path of a drawn line. All the vectors in the line are pushed into an array. I'm trying to use this array as a tensor, but I keep getting an error saying:
Error when checking model input: the Array of Tensors that you are passing to your model is not the size the model expected. Expected to see 1 Tensor(s), but instead got the following list of Tensor(s):
When I opened the array in the dev tools, each vector was printed like this:
0: Vector {p5: p5, x: 0.5150300601202404, y: -0.25450901803607207, z: 0}
Could it be the p5 reference in each vector that's giving me the error? Here's my model and fit code:
let vectorpath = []; //vector path array
// model, setting layers till next '-----'
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [2, 2], activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 2, activation: 'sigmoid'}));
console.log(JSON.stringify(model.outputs[0].shape));
model.weights.forEach(w => {
  console.log(w.name, w.shape);
});
// -----
// this is inside the draw function, so it is continually updated
const labels = tf.randomUniform([0, 1]);

function onBatchEnd(batch, logs) {
  console.log('Accuracy', logs.acc);
}

model.fit(vectorpath, labels, {
  epochs: 5,
  batchSize: 32,
  callbacks: {onBatchEnd}
}).then(info => {
  console.log('Final accuracy', info.history.acc);
});
What could be causing the error, and how can I fix it? I know the question's pretty vague, but I'm really just not sure.

Converting a Keras GRU model to tf-lite

I am trying to convert my custom Keras model, with two bidirectional GRU layers, to tf-lite for use on mobile devices. I converted my model to the protobuf format and tried to convert it with the code given by TensorFlow:
converter = tf.lite.TFLiteConverter.from_frozen_graph('gru.pb', input_arrays=['input_array'], output_arrays=['output_array'])
tflite_model = converter.convert()
When I execute this it runs for a bit and then I get the following error:
F tensorflow/lite/toco/tooling_util.cc:1455] Should not get here: 5
So I looked up that file and it states the following:
void MakeArrayDims(int num_dims, int batch, int height, int width, int depth,
                   std::vector<int>* out_dims) {
  CHECK(out_dims->empty());
  if (num_dims == 0) {
    return;
  } else if (num_dims == 1) {
    CHECK_EQ(batch, 1);
    *out_dims = {depth};
  } else if (num_dims == 2) {
    *out_dims = {batch, depth};
  } else if (num_dims == 3) {
    CHECK_EQ(batch, 1);
    *out_dims = {height, width, depth};
  } else if (num_dims == 4) {
    *out_dims = {batch, height, width, depth};
  } else {
    LOG(FATAL) << "Should not get here: " << num_dims;
  }
}
That check is consistent with my error, since I am using 5 dimensions: [Batch, Sequence, Height, Width, Channels].
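To confirm which tensors are 5-dimensional, the frozen graph can be scanned from Python. A minimal sketch, assuming TF 1.14 and my file gru.pb:
import tensorflow as tf

# Print every op output in the frozen graph whose rank exceeds 4;
# these are the arrays that hit the LOG(FATAL) branch above.
with tf.gfile.GFile('gru.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

tf.import_graph_def(graph_def, name='')
for op in tf.get_default_graph().get_operations():
    for out in op.outputs:
        if out.shape.ndims is not None and out.shape.ndims > 4:
            print(op.name, out.shape)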
Google didn't help me much with this issue, but maybe I am using the wrong search terms.
So is there any way to avoid this error, or does tf-lite simply not support sequences?
PS: I am using TensorFlow 1.14 with Python 3 in the given Docker container.

Tensorflow: how to add a user custom op accepting two 1D vector tensors and outputting a scalar?

I'm trying the code below, but it doesn't work.
#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"
using namespace tensorflow;
REGISTER_OP("Auc")
.Input("predicts: T1")
.Input("labels: T2")
.Output("z: double")
.Attr("T1: {float, double}")
.Attr("T2: {int32, int64}")
.SetIsCommutative()
.Doc(R"doc(
Given preidicts and labels output it's auc
)doc");
class AucOp : public OpKernel {
public:
explicit AucOp(OpKernelConstruction* context) : OpKernel(context) {}
void Compute(OpKernelContext* context) override {
// Grab the input tensor
const Tensor& predicts_tensor = context->input(0);
const Tensor& labels_tensor = context->input(1);
auto predicts = predicts_tensor.flat<double>();
auto labels = labels_tensor.flat<int32>();
// Create an output tensor
Tensor* output_tensor = NULL;
TensorShape output_shape;
OP_REQUIRES_OK(context, context->allocate_output(0, output_shape, &output_tensor));
output_tensor->flat<double>().setConstant(predicts(0) * labels(0));
}
};
REGISTER_KERNEL_BUILDER(Name("Auc").Device(DEVICE_CPU), AucOp);
test.py
predicts = tf.constant([0.8, 0.5, 0.12])
labels = tf.constant([-1, 1, 1])
output = tf.user_ops.auc(predicts, labels)

with tf.Session() as sess:
    init = tf.initialize_all_variables()
    sess.run(init)
    print output.eval()
./test.py
I tensorflow/core/common_runtime/local_device.cc:40] Local device intra op parallelism threads: 8
I tensorflow/core/common_runtime/direct_session.cc:60] Direct session inter op parallelism threads: 8
F ./tensorflow/core/public/tensor.h:453] Check failed: dtype() == DataTypeToEnum<T>::v() (1 vs. 2)
Aborted
The issue is that the predicts tensor in your Python program has type float, and your op registration accepts this as a valid type for the predicts input (since T1 can be float or double), but AucOp::Compute() assumes that the predicts input always has type double (in the call to predicts_tensor.flat<double>()). The tensorflow::Tensor class does not convert the type of elements in the tensor when you ask for values of a different type, and instead it raises a fatal error.
There are several possible solutions:
To get things working quickly, you could change the type of predicts in your Python program to tf.float64 (which is a synonym for double in the Python front-end):
predicts = tf.constant([0.8, 0.5, 0.12], dtype=tf.float64)
You could simplify the op registration so that it accepts inputs of a single type only:
REGISTER_OP("Auc")
.Input("predicts: double")
.Input("labels: int32")
...;
You could add code in the AucOp::Compute() method to test the input type and access the input values as appropriate. (Use this->input_type(i) to find the type of the ith input.)
You could define a templated class AucOp<TPredict, TLabel>, then use TypeConstraint<> in the REGISTER_KERNEL_BUILDER call to define specializations for each of the four valid combinations of prediction and label types. This would look something like:
REGISTER_KERNEL_BUILDER(Name("Auc")
.Device(DEVICE_CPU)
.TypeConstraint<float>("T1")
.TypeConstraint<int32>("T2"),
AucOp<float, int32>);
// etc. for AucOp<double, int32>, AucOp<float, int64>, and AucOp<double, int64>.