TensorFlow.js: how to reset input/output shapes for a pretrained model in TFJS

For a pre-trained model in Python, we can reset the input/output shapes:
from tensorflow import keras
# Load the model
model = keras.models.load_model('models/generator.h5')
# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))
# Trace out the graph using the input:
outputs = model(inputs)
# Override the model:
model = keras.models.Model(inputs, outputs)
The source code
I'm trying to do the same in TFJS:
// Load the model
this.model = await tf.loadLayersModel('/assets/fast_srgan/model.json');
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
TFJS does not support one of the layers in the model:
...
u = keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(layer_input)
u = tf.nn.depth_to_space(u, 2) # <- TFJS does not support this layer
u = keras.layers.PReLU(shared_axes=[1, 2])(u)
...
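The raw op does exist in tfjs-core; it is only the Keras-style layer wrapper that is missing, which is why wrapping it in a custom layer is viable at all. A quick sketch to illustrate:
const x = tf.randomNormal([1, 4, 4, 8]) as tf.Tensor4D;
// depthToSpace with block size 2 turns [1, 4, 4, 8] into [1, 8, 8, 2].
const y = tf.depthToSpace(x, 2);
console.log(y.shape); // [1, 8, 8, 2]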
I wrote my own:
import * as tf from '@tensorflow/tfjs';

export class DepthToSpace extends tf.layers.Layer {
    constructor() {
        super({});
    }
    computeOutputShape(shape: Array<number>) {
        // I think the issue is here,
        // because the error occurs during initialization of the model
        return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
    }
    call(input): tf.Tensor {
        const result = tf.depthToSpace(input[0], 2);
        return result;
    }
    static get className() {
        return 'TensorFlowOpLayer';
    }
}
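One detail that is easy to miss: for tf.loadLayersModel to be able to deserialize this class, it has to be registered under its className before loading, otherwise the load itself fails with "Unknown layer: TensorFlowOpLayer". A sketch (the module path is hypothetical):
import * as tf from '@tensorflow/tfjs';
// Hypothetical path to the class defined above.
import { DepthToSpace } from './depth-to-space';

// registerClass keys the layer by its static className,
// which must match the class_name stored in model.json.
tf.serialization.registerClass(DepthToSpace);

async function load(): Promise<tf.LayersModel> {
    return tf.loadLayersModel('/assets/fast_srgan/model.json');
}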
Using the model:
tf.tidy(() => {
    let img = tf.browser.fromPixels(this.imgLr.nativeElement, 3);
    img = tf.div(img, 255);
    img = tf.expandDims(img, 0);
    let sr = this.model.predict(img) as tf.Tensor;
    sr = tf.mul(tf.div(tf.add(sr, 1), 2), 255).arraySync()[0];
    tf.browser.toPixels(sr as tf.Tensor3D, this.imgSrCanvas.nativeElement);
});
but I get the error:
Error: Input 0 is incompatible with layer p_re_lu: expected axis 1 of input shape to have value 96 but got shape 1,128,128,32.
The pre-trained model was trained on 96x96 pixel images. If I use a 96x96 image, it works, but if I try another size (for example 128x128), it doesn't. In Python, we can easily reset the input/output shapes. Why doesn't this work in JS?

To define a new model from the layers of the previous model, you need to use tf.model:
this.model = tf.model({inputs: inputs, outputs: outputs});

I tried to debug this class:
import * as tf from '@tensorflow/tfjs';

export class DepthToSpace extends tf.layers.Layer {
    constructor() {
        super({});
    }
    computeOutputShape(shape: Array<number>) {
        return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
    }
    call(input): tf.Tensor {
        const result = tf.depthToSpace(input[0], 2);
        return result;
    }
    static get className() {
        return 'TensorFlowOpLayer';
    }
}
and saw that when I do not try to rewrite the sizes, the computeOutputShape method runs only twice, but it runs four times when I try to reset the inputs/outputs. So I opened the model's JSON file, changed the inputs from [null, 96, 96, 32] to [null, 128, 128, 32], and removed these lines:
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
And now it works with 128x128 images. It looks like the piece of code above adds new layers instead of rewriting the existing ones.
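If the goal is truly arbitrary spatial dimensions, a computeOutputShape that propagates unknown (null) dims instead of hard-coding them may avoid the fixed-size assumption entirely. A sketch of the method for the class above, with the channel count derived from the input rather than pinned at 32:
computeOutputShape(shape: Array<number>) {
    const block = 2;
    // depthToSpace with block size b maps [n, h, w, c]
    // to [n, h*b, w*b, c/(b*b)]; keep unknown dims as null.
    const h = shape[1] == null ? null : shape[1] * block;
    const w = shape[2] == null ? null : shape[2] * block;
    const c = shape[3] == null ? null : shape[3] / (block * block);
    return [shape[0], h, w, c];
}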

Related

Output probability of prediction in tensorflow.js

I have a model.json generated from TensorFlow via the tensorflow.js converter.
In the original implementation of the model in TensorFlow in Python, it is built like this:
model = models.Sequential([
    base_model,
    layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes)
])
In TensorFlow, the probability can be generated by score = tf.nn.softmax(predictions[0]), according to the tutorial on the official website.
How do I get this probability in tensorflow.js?
I have copied the code template as below:
$("#predict-button").click(async function () {
    if (!modelLoaded) { alert("The model must be loaded first"); return; }
    if (!imageLoaded) { alert("Please select an image first"); return; }
    let image = $('#selected-image').get(0);
    // Pre-process the image
    console.log("Loading image...");
    let tensor = tf.browser.fromPixels(image, 3)
        .resizeNearestNeighbor([224, 224]) // change the image size
        .expandDims()
        .toFloat();
    // RGB -> BGR
    let predictions = await model.predict(tensor).data();
    console.log(predictions);
    let top5 = Array.from(predictions)
        .map(function (p, i) { // this is Array.map
            return {
                probability: p,
                className: TARGET_CLASSES[i] // selecting the class name by index
            };
        })
        .sort(function (a, b) {
            return b.probability - a.probability;
        })
        .slice(0, 2);
    console.log(top5);
    $("#prediction-list").empty();
    top5.forEach(function (p) {
        $("#prediction-list").append(`<li>${p.className}: ${p.probability.toFixed(6)}</li>`);
    });
});
How should I modify the above code?
The output is just the same as the value of the variable 'predictions':
Float32Array(5)
0: -2.5525975227355957
1: 7.398464679718018
2: -3.252196788787842
3: 4.710395812988281
4: -4.636396408081055
and the top-2 list built from it:
0: {probability: 7.398464679718018, className: "Sunflower"}
1: {probability: 4.710395812988281, className: "Rose"}
Please help!!!
Thanks!
In order to extract the probabilities from the logits of the model using a softmax function, you can do the following.
This is the array of logits, which are also the predictions you get from the model:
const logits = [-2.5525975227355957, 7.398464679718018, -3.252196788787842, 4.710395812988281, -4.636396408081055]
You can call tf.softmax() on the array of values
const probabilities = tf.softmax(logits)
Result:
[0.0000446, 0.9362511, 0.0000222, 0.0636765, 0.0000056]
Then if you wanted to get the index with the highest probability you can make use of tf.argMax():
const results = tf.argMax(probabilities).dataSync()[0]
Result:
1
Edit
I am not too familiar with jQuery, so this might not be correct. But here is how I would list the probability of each output:
let probabilities = tf.softmax(predictions).dataSync();
$("#prediction-list").empty();
probabilities.forEach(function (p, i) {
    $("#prediction-list").append(
        `<li>${TARGET_CLASSES[i]}: ${p.toFixed(6)}</li>`
    );
});
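A variant of the same idea, as a sketch: tf.topk returns the k largest probabilities together with their indices, which avoids the manual map/sort (model, tensor, and TARGET_CLASSES are the names from the question):
const logits = model.predict(tensor) as tf.Tensor;
const probs = tf.softmax(logits);
// topk returns the k largest values and their indices along the last axis.
const {values, indices} = tf.topk(probs, 2);
const idx = indices.dataSync();
const top2 = Array.from(values.dataSync()).map((p, i) => ({
    probability: p,
    className: TARGET_CLASSES[idx[i]],
}));
console.log(top2);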

Tensorflow: converting H5 layer model to TFJS version leads to Unknown layer: TensorFlowOpLayer error when it works in TS

I'm trying to run the converted model from the repository: https://github.com/HasnainRaz/Fast-SRGAN. The conversion was successful, but when I tried to initialize the model, I saw the error: "Unknown layer: TensorFlowOpLayer.". If we investigate the saved model, we can see the TensorFlowOpLayer:
The model structure
As I understood it, it is this piece of code:
keras.layers.UpSampling2D(size=2, interpolation='bilinear')(layer_input).
I decided to write my own class "TensorFlowOpLayer".
import * as tf from '@tensorflow/tfjs';

export class TensorFlowOpLayer extends tf.layers.Layer {
    constructor() {
        super({});
    }
    computeOutputShape(shape: Array<number>) {
        return [1, null, null, 32];
    }
    call(input_3): tf.Tensor {
        const result = tf.layers.upSampling2d({ size: [2, 2], dataFormat: 'channelsLast', interpolation: 'bilinear' }).apply(input_3) as tf.Tensor;
        return result;
    }
    static get className() {
        return 'TensorFlowOpLayer';
    }
}
But it doesn't work. Can someone help me understand how to write the computeOutputShape method?
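For reference, computeOutputShape receives the layer's input shape (with null standing in for unknown dims) and must return the shape the layer will emit. For a 2x bilinear upsampling it might look like this sketch:
computeOutputShape(shape: Array<number>) {
    // Batch and channel dims pass through unchanged; spatial dims double.
    // Unknown (null) dims stay null so they remain dynamic.
    const [batch, h, w, c] = shape;
    return [batch, h == null ? null : h * 2, w == null ? null : w * 2, c];
}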
And a second misunderstanding: why do we see the following order of layers in the picture above:
Conv2D -> TensorFlowOpLayer -> PReLU
As I understood it, the TensorFlowOpLayer is the "UpSampling2D" from the Python code. The H5 model was inspected through https://netron.app.
u = keras.layers.UpSampling2D(size=2, interpolation='bilinear')(layer_input)
u = keras.layers.Conv2D(self.gf, kernel_size=3, strides=1, padding='same')(u)
u = keras.layers.PReLU(shared_axes=[1, 2])(u)
Initializing the model in TS:
async loadModel() {
    this.model = await tf.loadLayersModel('/assets/fast_srgan/model.json');
    const inputs = tf.layers.input({shape: [null, null, 32]});
    const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
    this.model = tf.model({inputs: inputs, outputs: outputs});
    console.log("Model has been loaded");
}
like in the Python code:
from tensorflow import keras
# Load the model
model = keras.models.load_model('models/generator.h5')
# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))
# Trace out the graph using the input:
outputs = model(inputs)
# Override the model:
model = keras.models.Model(inputs, outputs)
Then, this is how it is used:
tf.tidy(() => {
    let img = tf.browser.fromPixels(this.imgLr.nativeElement, 3);
    img = tf.div(img, 255.0);
    img = tf.image.resizeNearestNeighbor(img, [96, 96]);
    img = tf.expandDims(img, 0);
    let sr = this.model.predict(img) as tf.Tensor;
});
like in the Python code:
def predict(img):
    # Rescale to 0-1.
    lr = tf.math.divide(img, 255)
    # Get super resolution image
    sr = model.predict(tf.expand_dims(lr, axis=0))
    return sr[0]
When I added my own class "TensorFlowOpLayer", I got the following error:
"expected input1 to have shape [null,null,null,32] but got array with shape [1,96,96,3]."
Solved the issue. It was caused by a mismatch between the version of the code and the saved model: the author refactored the code but didn't regenerate the saved model. I rewrote the needed class:
import * as tf from '@tensorflow/tfjs';

export class DepthToSpace extends tf.layers.Layer {
    constructor() {
        super({});
    }
    computeOutputShape(shape: Array<number>) {
        return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
    }
    call(input): tf.Tensor {
        input = input[0];
        const result = tf.depthToSpace(input, 2);
        return result;
    }
    static get className() {
        return 'TensorFlowOpLayer';
    }
}
and it works.
The author's original code is:
u = keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(layer_input)
u = tf.nn.depth_to_space(u, 2)
u = keras.layers.PReLU(shared_axes=[1, 2])(u)

How to use vectors created by P5 createVector as a tensor in tensorflow.js

I am using p5 to return the vector path of a drawn line. All the vectors in the line are pushed into an array that holds all the vectors. I'm trying to use this as a tensor, but I keep getting an error saying:
Error when checking model input: the Array of Tensors that you are passing to your model is not the size the model expected. Expected to see 1 Tensor(s), but instead got the following list of Tensor(s):
When I opened the array on the dev tool, each vector was printed like this:
0: Vector {p5: p5, x: 0.5150300601202404, y: -0.25450901803607207, z: 0}
Could it be the p5 reference inside each vector that's giving me the error? Here's my model and fit code:
let vectorpath = []; // vector path array

// model, setting layers till next '-----'
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [2, 2], activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 2, activation: 'sigmoid'}));
console.log(JSON.stringify(model.outputs[0].shape));
model.weights.forEach(w => {
    console.log(w.name, w.shape);
});
// -----

// this is under the draw function so it is continually updated
const labels = tf.randomUniform([0, 1]);

function onBatchEnd(batch, logs) {
    console.log('Accuracy', logs.acc);
}

model.fit(vectorpath, labels, {
    epochs: 5,
    batchSize: 32,
    callbacks: {onBatchEnd}
}).then(info => {
    console.log('Final accuracy', info.history.acc);
});
What could be causing the error, and how can I fix it? The question's pretty vague, but I'm really just not sure.
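A hedged sketch of the conversion the error is pointing at: model.fit expects tensors, not an array of p5.Vector objects, so one option is to flatten the vectors into plain [x, y] rows first (this also assumes the first layer's inputShape is changed to [2], and that the labels get one row per sample):
// Sketch only: strip each p5.Vector down to a plain [x, y] pair.
const xs = tf.tensor2d(vectorpath.map(v => [v.x, v.y]));  // shape [n, 2]
const ys = tf.randomUniform([vectorpath.length, 2]);      // placeholder labels
model.fit(xs, ys, {epochs: 5, batchSize: 32, callbacks: {onBatchEnd}})
    .then(info => console.log('Final accuracy', info.history.acc));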

TFJS predict vs Python predict

I trained my model using Keras in Python and converted it to a tfjs model to use in my webapp. I also wrote a small prediction script in Python to validate my model on unseen data. In Python it works perfectly, but when I try to predict in my webapp it goes wrong.
This is the code I use in Python to create tensors and predict based on these created tensors:
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample_v.items()}
predictions = model.predict(input_dict)
classes = predictions.argmax(axis=-1)
In TFJS, however, it seems I can't pass a dict (or object) to the predict function, and if I write code to convert it to a tensor array (like I found in some places online), it still doesn't seem to work.
Object.keys(input).forEach((k) => {
    input[k] = tensor1d([input[k]]);
});
console.log(Object.values(input));
const prediction = await model.executeAsync(Object.values(input));
console.log(prediction);
If I do the above, I get the following error: The shape of dict['key_1'] provided in model.execute(dict) must be [-1,1], but was [1]
If I then convert it to this code:
const input = { ...track.audioFeatures };
Object.keys(input).forEach((k) => {
    input[k] = tensor2d([input[k]], [1, 1]);
});
console.log(Object.values(input));
I get the error that some dtypes have to be int32 but are float32. No problem, I can set the dtype manually:
const input = { ...track.audioFeatures };
Object.keys(input).forEach((k) => {
    if (k === 'int_key') {
        input[k] = tensor2d([input[k]], [1, 1], 'int32');
    } else {
        input[k] = tensor2d([input[k]], [1, 1]);
    }
});
console.log(Object.values(input));
I still get the same error, but if I print it, I can see the datatype is set to int32. I'm really confused as to why this is, why I can't just pass a dict (or object) like in Python, and how to fix the issues I'm having.
Edit 1: Complete Prediction Snippet
const model = await loadModel();
const input = { ...track.audioFeatures };
Object.keys(input).forEach((k) => {
    if (k === 'time_signature') {
        input[k] = tensor2d([parseInt(input[k], 10)], [1, 1], 'int32');
    } else {
        input[k] = tensor2d([input[k]], [1, 1]);
    }
});
console.log(Object.values(input));
const prediction = model.predict(Object.values(input));
console.log(prediction);
Edit 2: added full error message
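One thing that might help, as a sketch: a converted tf.GraphModel's predict also accepts a named tensor map, so the object itself can be passed, as long as the keys match the model's input names and each value has the [-1, 1]-compatible shape the error asks for:
const input: {[name: string]: tf.Tensor} = {};
Object.keys(track.audioFeatures).forEach((k) => {
    const v = track.audioFeatures[k];
    // Shape [1, 1] satisfies a [-1, 1] signature; the keys must match
    // the input names baked into the converted model.
    input[k] = k === 'time_signature'
        ? tf.tensor2d([[parseInt(v, 10)]], [1, 1], 'int32')
        : tf.tensor2d([[v]], [1, 1]);
});
const prediction = model.predict(input) as tf.Tensor;
console.log(prediction);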

How to import a saved TensorFlow model trained using tf.estimator and predict on input data

I have saved the model using the tf.estimator method export_savedmodel, as follows:
export_dir="exportModel/"
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
classifier.export_savedmodel(export_dir, input_receiver_fn, as_text=False, checkpoint_path="Model/model.ckpt-400")
How can I import this saved model and use it for predictions?
I tried to search for a good base example, but it appears the documentation and samples are a bit scattered for this topic. So let's start with a base example: the tf.estimator quickstart.
That particular example doesn't actually export a model, so let's do that (not needed for Use Case 1):
def serving_input_receiver_fn():
    """Build the serving inputs."""
    # The outer dimension (None) allows us to batch up inputs for
    # efficiency. However, it also means that if we want a prediction
    # for a single instance, we'll need to wrap it in an outer list.
    inputs = {"x": tf.placeholder(shape=[None, 4], dtype=tf.float32)}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)
Huge asterisk on this code: there appears to be a bug in TensorFlow 1.3 that doesn't allow you to do the above export on a "canned" estimator (such as DNNClassifier). For a workaround, see the "Appendix: Workaround" section.
The code below references export_dir (return value from the export step) to emphasize that it is not "/path/to/model", but rather, a subdirectory of that directory whose name is a timestamp.
Use Case 1: Perform prediction in the same process as training
This is a scikit-learn type of experience, and it is already exemplified by the sample. For completeness' sake, you simply call predict on the trained model:
classifier.train(input_fn=train_input_fn, steps=2000)
# [...snip...]
predictions = list(classifier.predict(input_fn=predict_input_fn))
predicted_classes = [p["classes"] for p in predictions]
Use Case 2: Load a SavedModel into Python/Java/C++ and perform predictions
Python Client
Perhaps the easiest thing to use if you want to do prediction in Python is SavedModelPredictor. In the Python program that will use the SavedModel, we need code like this:
from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(export_dir)
predictions = predict_fn(
    {"x": [[6.4, 3.2, 4.5, 1.5],
           [5.8, 3.1, 5.0, 1.7]]})
print(predictions['scores'])
Java Client
package dummy;

import java.nio.FloatBuffer;
import java.util.Arrays;
import java.util.List;

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class Client {
    public static void main(String[] args) {
        Session session = SavedModelBundle.load(args[0], "serve").session();

        Tensor x =
            Tensor.create(
                new long[] {2, 4},
                FloatBuffer.wrap(
                    new float[] {
                        6.4f, 3.2f, 4.5f, 1.5f,
                        5.8f, 3.1f, 5.0f, 1.7f
                    }));

        // Doesn't look like Java has a good way to convert the
        // input/output name ("x", "scores") to their underlying tensor,
        // so we hard code them ("Placeholder:0", ...).
        // You can inspect them on the command-line with saved_model_cli:
        //
        //   $ saved_model_cli show --dir $EXPORT_DIR --tag_set serve --signature_def serving_default
        final String xName = "Placeholder:0";
        final String scoresName = "dnn/head/predictions/probabilities:0";

        List<Tensor> outputs = session.runner()
            .feed(xName, x)
            .fetch(scoresName)
            .run();

        // Outer dimension is batch size; inner dimension is number of classes.
        float[][] scores = new float[2][3];
        outputs.get(0).copyTo(scores);
        System.out.println(Arrays.deepToString(scores));
    }
}
C++ Client
You'll likely want to use tensorflow::LoadSavedModel with Session.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

namespace tf = tensorflow;

int main(int argc, char** argv) {
  const std::string export_dir = argv[1];

  tf::SavedModelBundle bundle;
  tf::Status load_status = tf::LoadSavedModel(
      tf::SessionOptions(), tf::RunOptions(), export_dir, {"serve"}, &bundle);
  if (!load_status.ok()) {
    std::cout << "Error loading model: " << load_status << std::endl;
    return -1;
  }

  // We should get the signature out of MetaGraphDef, but that's a bit
  // involved. We'll take a shortcut like we did in the Java example.
  const std::string x_name = "Placeholder:0";
  const std::string scores_name = "dnn/head/predictions/probabilities:0";

  auto x = tf::Tensor(tf::DT_FLOAT, tf::TensorShape({2, 4}));
  auto matrix = x.matrix<float>();
  // First example.
  matrix(0, 0) = 6.4;
  matrix(0, 1) = 3.2;
  matrix(0, 2) = 4.5;
  matrix(0, 3) = 1.5;
  // Second example.
  matrix(1, 0) = 5.8;
  matrix(1, 1) = 3.1;
  matrix(1, 2) = 5.0;
  matrix(1, 3) = 1.7;

  std::vector<std::pair<std::string, tf::Tensor>> inputs = {{x_name, x}};
  std::vector<tf::Tensor> outputs;

  tf::Status run_status =
      bundle.session->Run(inputs, {scores_name}, {}, &outputs);
  if (!run_status.ok()) {
    std::cout << "Error running session: " << run_status << std::endl;
    return -1;
  }

  for (const auto& tensor : outputs) {
    std::cout << tensor.matrix<float>() << std::endl;
  }
  return 0;
}
Use Case 3: Serve a model using TensorFlow Serving
Exporting models in a manner amenable to serving a Classification model requires that the input be a tf.Example object. Here's how we might export a model for TensorFlow serving:
def serving_input_receiver_fn():
    """Build the serving inputs."""
    # The outer dimension (None) allows us to batch up inputs for
    # efficiency. However, it also means that if we want a prediction
    # for a single instance, we'll need to wrap it in an outer list.
    example_bytestring = tf.placeholder(
        shape=[None],
        dtype=tf.string,
    )
    features = tf.parse_example(
        example_bytestring,
        tf.feature_column.make_parse_example_spec(feature_columns)
    )
    return tf.estimator.export.ServingInputReceiver(
        features, {'examples': example_bytestring})

export_dir = classifier.export_savedmodel(
    export_dir_base="/path/to/model",
    serving_input_receiver_fn=serving_input_receiver_fn)
The reader is referred to TensorFlow Serving's documentation for more instructions on how to set up TensorFlow Serving, so I'll only provide the client code here:
# Omitting a bunch of connection/initialization code...
# But at some point we end up with a stub whose lifecycle
# is generally longer than that of a single request.
stub = create_stub(...)

# The actual values for prediction. We have two examples in this
# case, each consisting of a single, multi-dimensional feature `x`.
# This data here is the equivalent of the map passed to the
# `predict_fn` in use case #2.
examples = [
    tf.train.Example(
        features=tf.train.Features(
            feature={"x": tf.train.Feature(
                float_list=tf.train.FloatList(value=[6.4, 3.2, 4.5, 1.5]))})),
    tf.train.Example(
        features=tf.train.Features(
            feature={"x": tf.train.Feature(
                float_list=tf.train.FloatList(value=[5.8, 3.1, 5.0, 1.7]))})),
]

# Build the RPC request.
predict_request = predict_pb2.PredictRequest()
predict_request.model_spec.name = "default"
predict_request.inputs["examples"].CopyFrom(
    tensor_util.make_tensor_proto(examples, tf.float32))

# Perform the actual prediction.
stub.Predict(predict_request, PREDICT_DEADLINE_SECS)
Note that the key, examples, that is referenced in the predict_request.inputs needs to match the key used in the serving_input_receiver_fn at export time (cf. the constructor to ServingInputReceiver in that code).
Appendix: Working around Exports from Canned Models in TF 1.3
There appears to be a bug in TensorFlow 1.3 in which canned models do not export properly for Use Case 2 (the problem does not exist for "custom" estimators). Here's a workaround that wraps a DNNClassifier to make things work, specifically for the Iris example:
# Build 3 layer DNN with 10, 20, 10 units respectively.
class Wrapper(tf.estimator.Estimator):
    def __init__(self, **kwargs):
        dnn = tf.estimator.DNNClassifier(**kwargs)

        def model_fn(mode, features, labels):
            spec = dnn._call_model_fn(features, labels, mode)
            export_outputs = None
            if spec.export_outputs:
                export_outputs = {
                    "serving_default": tf.estimator.export.PredictOutput(
                        {"scores": spec.export_outputs["serving_default"].scores,
                         "classes": spec.export_outputs["serving_default"].classes})}
            # Replace the export_outputs field of the EstimatorSpec.
            copy = list(spec)
            copy[4] = export_outputs
            return tf.estimator.EstimatorSpec(mode, *copy)

        super(Wrapper, self).__init__(model_fn, kwargs["model_dir"], dnn.config)

classifier = Wrapper(feature_columns=feature_columns,
                     hidden_units=[10, 20, 10],
                     n_classes=3,
                     model_dir="/tmp/iris_model")
I don't think there is a bug with canned Estimators (or rather, if there ever was one, it has been fixed). I was able to successfully export a canned estimator model using Python and import it in Java.
Here is my code to export the model:
a = tf.feature_column.numeric_column("a")
b = tf.feature_column.numeric_column("b")
feature_columns = [a, b]

model = tf.estimator.DNNClassifier(feature_columns=feature_columns, ...)

# To export
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
export_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
servable_model_path = model.export_savedmodel(servable_model_dir, export_input_fn, as_text=True)
To import the model in Java, I used the Java client code provided by rhaertel80 above and it works. Hope this also answers Ben Fowler's question above.
It appears that the TensorFlow team does not agree that there is a bug in version 1.3 using canned estimators for exporting a model under use case #2. I submitted a bug report here:
https://github.com/tensorflow/tensorflow/issues/13477
The response I received from TensorFlow is that the input must only be a single string tensor. It appears that there may be a way to consolidate multiple features into a single string tensor using serialized TF.examples, but I have not found a clear method to do this. If anyone has code showing how to do this, I would be appreciative.
You need to export the saved model using tf.contrib.export_savedmodel, and you need to define an input receiver function to pass input to. Later you can load the saved model (generally saved_model.pb) from disk and serve it.
TensorFlow: How to predict from a SavedModel?