I trained a model on GCloud AutoML Vision, exported it as a TensorFlow.js model, and loaded it on application start. Looking at model.json, the model definitely expects a 224x224 image. I had to call tensor.reshape because the model rejected a prediction on a tensor of shape [224, 224, 3].
A base64 string comes in from the camera. I believe I am preparing the image correctly, but I have no way of knowing for sure.
const imgBuffer = decodeBase64(base64) // from 'base64-arraybuffer' package
const raw = new Uint8Array(imgBuffer)
const imageTensor = decodeJpeg(raw)
const resizedImageTensor = imageTensor.resizeBilinear([224, 224])
const reshapedImageTensor = resizedImageTensor.reshape([1, 224, 224, 3])
const res = model.predict(reshapedImageTensor)
console.log('response', res)
But the response I get doesn't seem to have much...
{
  "dataId": {},
  "dtype": "float32",
  "id": 293,
  "isDisposedInternal": false,
  "kept": false,
  "rankType": "2",
  "scopeId": 5,
  "shape": [
    1,
    1087
  ],
  "size": 1087,
  "strides": [
    1087
  ]
}
What does this type of response mean? Is there something I'm doing wrong?
What you are logging is the tensor object itself, not its values. You need to call dataSync() (or the asynchronous data()) to download the actual predictions of the model.
const res = model.predict(reshapedImageTensor);
const predictions = res.dataSync();
console.log('Predictions', predictions);
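The shape [1, 1087] tells you the model returns one score per class. A minimal sketch of pulling the top class out of that array (assuming a single-output classifier; the labels file exported alongside an AutoML model, often dict.txt, lists the class names in the same order):
const res = model.predict(reshapedImageTensor);
const scores = res.dataSync(); // Float32Array of length 1087, one score per class
// Find the index of the highest score.
let best = 0;
for (let i = 1; i < scores.length; i++) {
  if (scores[i] > scores[best]) best = i;
}
console.log('top class index:', best, 'score:', scores[best]);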
Related
I'm trying to get a custom TensorFlow.js model to work in JS, but I'm getting an error about the shape. Note that the model is already trained and converted to model.json, but I get the error below:
Error: Error: Error in concat4D: Shape of tensors[1] (1,30,40,256) does not match the shape of the rest (1,15,20,832) along the non-concatenated axis 1.
Below is the code:
const RGB = await imageToRgbaMatrix(imageUrl);
const model = await tf.loadLayersModel(modelUrl);
const ImageData = await tf.tensor([RGB]);
const predictions = model.predict(ImageData);
In brief, I am trying to run the TFJS model (model.json plus its binary weight file), and it gives:
Error in concat4D: Shape of tensors[1] (1,30,40,256) does not match the shape of the rest (1,15,20,832).
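One way to narrow a mismatch like this down is to compare the shape the converted model expects with the shape actually being fed to it. A minimal check reusing the names from the snippet above (the logged values are only illustrative):
const model = await tf.loadLayersModel(modelUrl);
// What the converted model expects, e.g. [null, 224, 224, 3].
console.log('expected input shape:', model.inputs[0].shape);

const RGB = await imageToRgbaMatrix(imageUrl);
const ImageData = tf.tensor([RGB]);
console.log('actual input shape:', ImageData.shape);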
For a pre-trained model in Python we can reset the input/output shapes:
from tensorflow import keras
# Load the model
model = keras.models.load_model('models/generator.h5')
# Define arbitrary spatial dims, and 3 channels.
inputs = keras.Input((None, None, 3))
# Trace out the graph using the input:
outputs = model(inputs)
# Override the model:
model = keras.models.Model(inputs, outputs)
The source code
I'm trying to do the same in TFJS:
// Load the model
this.model = await tf.loadLayersModel('/assets/fast_srgan/model.json');
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
TFJS does not support one of the layers in the model:
...
u = keras.layers.Conv2D(filters, kernel_size=3, strides=1, padding='same')(layer_input)
u = tf.nn.depth_to_space(u, 2) # <- TFJS does not support this layer
u = keras.layers.PReLU(shared_axes=[1, 2])(u)
...
I wrote my own:
import * as tf from '@tensorflow/tfjs';

export class DepthToSpace extends tf.layers.Layer {
  constructor() {
    super({});
  }

  computeOutputShape(shape: Array<number>) {
    // I think the issue is here
    // because the error occurs during initialization of the model
    return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
  }

  call(input): tf.Tensor {
    const result = tf.depthToSpace(input[0], 2);
    return result;
  }

  static get className() {
    return 'TensorFlowOpLayer';
  }
}
Using the model:
tf.tidy(() => {
  let img = tf.browser.fromPixels(this.imgLr.nativeElement, 3);
  img = tf.div(img, 255);
  img = tf.expandDims(img, 0);
  let sr = this.model.predict(img) as tf.Tensor;
  sr = tf.mul(tf.div(tf.add(sr, 1), 2), 255).arraySync()[0];
  tf.browser.toPixels(sr as tf.Tensor3D, this.imgSrCanvas.nativeElement);
});
but I get the error:
Error: Input 0 is incompatible with layer p_re_lu: expected axis 1 of input shape to have value 96 but got shape 1,128,128,32.
The pre-trained model was trained on 96x96 pixel images. If I use a 96x96 image, it works, but if I try other sizes (for example 128x128), it doesn't. In Python we can easily reset the input/output shapes. Why doesn't it work in JS?
To define a new model from the layers of the previous model, you need to use tf.model:
this.model = tf.model({inputs: inputs, outputs: outputs});
I tried to debug this class:
import * as tf from '@tensorflow/tfjs';

export class DepthToSpace extends tf.layers.Layer {
  constructor() {
    super({});
  }

  computeOutputShape(shape: Array<number>) {
    return [null, ...shape.slice(1, 3).map(x => x * 2), 32];
  }

  call(input): tf.Tensor {
    const result = tf.depthToSpace(input[0], 2);
    return result;
  }

  static get className() {
    return 'TensorFlowOpLayer';
  }
}
and saw that when I do not try to rewrite the size, the computeOutputShape method runs only twice, whereas it runs four times when I try to reset the inputs/outputs. I then opened the model's JSON file, changed the inputs from [null, 96, 96, 32] to [null, 128, 128, 32], and removed these lines:
// Define arbitrary spatial dims, and 3 channels.
const inputs = tf.layers.input({shape: [null, null, 3]});
// Trace out the graph using the input.
const outputs = this.model.apply(inputs) as tf.SymbolicTensor;
// Override the model.
this.model = tf.model({inputs: inputs, outputs: outputs});
Now it works with 128x128 images. It looks like the piece of code above adds new layers instead of rewriting the existing ones.
I am using p5 to record the vector path of a drawn line. All the vectors in the line are pushed into one array. I'm trying to use this array as a tensor, but I keep getting an error saying:
Error when checking model input: the Array of Tensors that you are passing to your model is not the size the model expected. Expected to see 1 Tensor(s), but instead got the following list of Tensor(s):
When I opened the array in the dev tools, each vector was printed like this:
0: Vector {p5: p5, x: 0.5150300601202404, y: -0.25450901803607207, z: 0}
Could it be the p5 reference in the vector objects that's giving me the error? Here's my model and fit code:
let vectorpath = []; //vector path array
// model, setting layers till next '-----'
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [2, 2], activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 2, activation: 'sigmoid'}));
console.log(JSON.stringify(model.outputs[0].shape));
model.weights.forEach(w => {
  console.log(w.name, w.shape);
});
// -----

// this is under the draw function so it is continually updated
const labels = tf.randomUniform([0, 1]);

function onBatchEnd(batch, logs) {
  console.log('Accuracy', logs.acc);
}

model.fit(vectorpath, labels, {
  epochs: 5,
  batchSize: 32,
  callbacks: {onBatchEnd}
}).then(info => {
  console.log('Final accuracy', info.history.acc);
});
What could be causing the error, and how can I fix it?
I know the question's pretty vague, but I'm really just not sure.
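One thing the error points at: model.fit expects tensors, not a plain array of p5.Vector objects. A minimal sketch of the conversion, assuming each point should contribute only its x and y components and that the first layer's inputShape is changed to [2] to match (both of these are assumptions, not taken from the question):
// Turn the p5.Vector array into a [numPoints, 2] tensor of x/y values.
const xs = tf.tensor2d(vectorpath.map(v => [v.x, v.y]));
// Placeholder labels with one row per point, matching the 2-unit output layer.
const ys = tf.randomUniform([vectorpath.length, 2]);

model.fit(xs, ys, {
  epochs: 5,
  batchSize: 32,
  callbacks: {onBatchEnd}
});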
I trained my model using Keras in Python and converted it to a TFJS model to use in my web app. I also wrote a small prediction script in Python to validate the model on unseen data. In Python it works perfectly, but when I try to predict in my web app it goes wrong.
This is the code I use in Python to create tensors and predict based on these created tensors:
input_dict = {name: tf.convert_to_tensor([value]) for name, value in sample_v.items()}
predictions = model.predict(input_dict)
classes = predictions.argmax(axis=-1)
In TFJS, however, it seems I can't pass a dict (or object) to the predict function, and if I write code to convert it to a tensor array (like I found in some places online), it still doesn't work.
Object.keys(input).forEach((k) => {
  input[k] = tensor1d([input[k]]);
});
console.log(Object.values(input));
const prediction = await model.executeAsync(Object.values(input));
console.log(prediction);
If I do the above, I get the following error: The shape of dict['key_1'] provided in model.execute(dict) must be [-1,1], but was [1]
If I then convert it to this code:
const input = { ...track.audioFeatures };
Object.keys(input).forEach((k) => {
  input[k] = tensor2d([input[k]], [1, 1]);
});
console.log(Object.values(input));
I get the error that some dtypes have to be int32 but are float32. No problem, I can set the dtype manually:
const input = { ...track.audioFeatures };
Object.keys(input).forEach((k) => {
  if (k === 'int_key') {
    input[k] = tensor2d([input[k]], [1, 1], 'int32');
  } else {
    input[k] = tensor2d([input[k]], [1, 1]);
  }
});
console.log(Object.values(input));
I still get the same error, but if I print it, I can see the datatype is set to int32.
I'm really confused as to why this is, why I can't just pass a dict (or object) like in Python, and how to fix the issues I'm having.
Edit 1: Complete Prediction Snippet
const model = await loadModel();
const input = { ...track.audioFeatures };
Object.keys(input).forEach((k) => {
  if (k === 'time_signature') {
    input[k] = tensor2d([parseInt(input[k], 10)], [1, 1], 'int32');
  } else {
    input[k] = tensor2d([input[k]], [1, 1]);
  }
});
console.log(Object.values(input));
const prediction = model.predict(Object.values(input));
console.log(prediction);
Edit 2: added the full error message
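For what it's worth, a minimal sketch of passing the features as a named-tensor map instead of an array, assuming the model was loaded with tf.loadGraphModel (whose predict accepts an object keyed by input name) and that the keys of track.audioFeatures already match the model's input names; both are assumptions, not confirmed by the snippets above:
const input = { ...track.audioFeatures };
const namedInputs = {};
Object.keys(input).forEach((k) => {
  // The error message asks for shape [-1, 1], i.e. a [batch, 1] tensor per feature.
  namedInputs[k] = k === 'time_signature'
    ? tensor2d([parseInt(input[k], 10)], [1, 1], 'int32')
    : tensor2d([input[k]], [1, 1]);
});

const prediction = model.predict(namedInputs);
console.log(prediction);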
I'm having trouble with the transition from TensorFlow in Python to TensorFlow.js when it comes to image preprocessing.
In Python:
single_coin = r"C:\temp\coins\20Saint-03o.jpg"
img = image.load_img(single_coin, target_size = (100, 100))
array = image.img_to_array(img)
x = np.expand_dims(array, axis=0)
vimage = np.vstack([x])
prediction = model.predict(vimage)
print(prediction[0])
I get the correct result
[2.8914417e-05 3.5085387e-03 1.9252902e-03 6.2635467e-05 3.7389682e-03
1.2983804e-03 7.4157811e-04 1.4608903e-04 2.7099697e-06 1.1844193e-02
1.3398369e-04 9.3798796e-03 9.7308388e-05 7.3931034e-05 1.9695959e-04
9.6496813e-05 4.2653349e-04 8.7305409e-05 8.1476872e-04 4.9094640e-04
1.3498703e-04 9.6476960e-01]
However, in TensorFlow.js, with the same image run through the following preprocessing function:
function preprocess(img) {
  const tensor = tf.browser.fromPixels(img);
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();
  const offset = tf.scalar(255.0);
  const normalized = tf.scalar(1.0).sub(resized.div(offset));
  const batched = normalized.expandDims(0);
  return batched;
}
I get the following result:
[0.044167134910821915,
0.04726826772093773,
0.04546305909752846,
0.04596292972564697,
0.044733788818120956,
0.04367975518107414,
0.04373137652873993,
0.044592827558517456,
0.045657724142074585,
0.0449688546359539,
0.04648510739207268,
0.04426411911845207,
0.04494940862059593,
0.0457320399582386,
0.045905906707048416,
0.04473186656832695,
0.04691491648554802,
0.04441603645682335,
0.04782886058092117,
0.04696653410792351,
0.045027654618024826,
0.04655187949538231]
I'm obviously not translating the preprocessing appropriately. Does anyone see what I'm missing?
There is no normalization applied in the Python code, but there is normalization in the JS code. Either apply the same normalization in Python as well, or remove the normalization from the JS code.
A similar answer has been given here.
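A minimal sketch of the JS preprocessing with that normalization removed, so it mirrors the Python pipeline above (same 100x100 resize, raw 0-255 pixel values, batch dimension added):
function preprocess(img) {
  const tensor = tf.browser.fromPixels(img);
  // Resize to the 100x100 model input and keep raw 0-255 values, as in the Python code.
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();
  // Add the batch dimension, mirroring np.expand_dims / np.vstack.
  return resized.expandDims(0);
}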