I am trying to get a custom TensorFlow.js model to work in JavaScript, but I am getting a shape error. Note that the model is already trained and converted to model.json. The error is below:
Error: Error: Error in concat4D: Shape of tensors[1] (1,30,40,256) does not match the shape of the rest (1,15,20,832) along the non-concatenated axis 1.
Below is the code:
const RGB = await imageToRgbaMatrix(imageUrl);
const model = await tf.loadLayersModel(modelUrl);
const imageData = tf.tensor([RGB]); // tf.tensor is synchronous, so no await is needed
const predictions = model.predict(imageData);
In brief, I am trying to run the TF.js model (model.json plus its binary weight file), and it gives:
Error in concat4D: Shape of tensors[1] (1,30,40,256) does not match the shape of the rest (1,15,20,832).
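A possible diagnostic, not from the original post: this concat mismatch usually means the input's height and width differ from the size the network was trained on, so two branches feeding a concatenation layer produce incompatible feature maps. A sketch for checking the declared input shape and resizing to it (expectedHeight and expectedWidth are placeholders for the size listed in model.json):
const model = await tf.loadLayersModel(modelUrl);
console.log(model.inputs[0].shape); // e.g. [null, height, width, channels]
// Hypothetical fix: resize to the expected spatial size before predicting.
let input = tf.tensor([RGB]); // [1, h, w, c]
input = tf.image.resizeBilinear(input, [expectedHeight, expectedWidth]);
const predictions = model.predict(input);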
I followed this tutorial and fine-tuned the model.
The part that saves the serving model looks like this:
saved_model_path = 'dogs_and_cats_yamnet/yamnet-model/00000001'
input_segment = tf.keras.layers.Input(shape=(), dtype=tf.float32, name='audio')
embedding_extraction_layer = hub.KerasLayer(yamnet_model_handle,
                                            trainable=False, name='yamnet')
_, embeddings_output, _ = embedding_extraction_layer(input_segment)
serving_outputs = my_model(embeddings_output)
serving_outputs = ReduceMeanLayer(axis=0, name='classifier')(serving_outputs)
serving_model = tf.keras.Model(input_segment, serving_outputs)
serving_model.save(saved_model_path, include_optimizer=False)
Then I followed this page, uploading the model to S3 and deploying it.
!tar -C "$PWD" -czf dogs_and_cats_yamnet.tar.gz dogs_and_cats_yamnet/
from sagemaker import Session
from sagemaker.tensorflow import TensorFlowModel

model_data = Session().upload_data(path="dogs_and_cats_yamnet.tar.gz", key_prefix="model")
model = TensorFlowModel(model_data=model_data, role=sagemaker_role, framework_version="2.3")
predictor = model.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
Deployment seems successful, but when I try to do inference,
import numpy as np

waveform = np.zeros((3 * 48000), dtype=np.float32)
result = predictor.predict(waveform)
the following error occurs.
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from primary with message "{
"error": "The first dimension of paddings must be the rank of inputs[1,2] [1,144000]\n\t [[{{node yamnet_frames/tf_op_layer_Pad/Pad}}]]"
I have no idea why this happens; I have been struggling with it for hours and have come up with no clue.
YAMNet works fine when I pull the model from TF Hub directly and run inference with it.
This is kind of a minor question I guess, but I would appreciate any helpful answers.
Thank you in advance.
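One thing worth trying, given the serving signature above (input_segment has shape=(), i.e. one scalar sample per instance): send the waveform as a flat list so the batch dimension becomes the sample axis, rather than letting the predictor add an extra leading dimension. This is a guess based on the [1,144000] in the error, not a confirmed fix.
import numpy as np

waveform = np.zeros(3 * 48000, dtype=np.float32)
# Each list element becomes one instance, so the model should receive
# a rank-1 input of shape [144000] rather than [1, 144000].
result = predictor.predict(waveform.tolist())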
I trained a model on GCloud AutoML Vision, exported it as a TensorFlow.js model, and loaded it on application start. Looking at the model.json, the model is definitely expecting a 224x224 image. I had to do the tensor.reshape because it rejected my tensor when I ran a prediction on a tensor of shape [224, 224, 3].
The Base64 string comes in from the camera. I believe I am preparing this image correctly, but I have no way of knowing for sure.
const imgBuffer = decodeBase64(base64) // from 'base64-arraybuffer' package
const raw = new Uint8Array(imgBuffer)
const imageTensor = decodeJpeg(raw)
const resizedImageTensor = imageTensor.resizeBilinear([224, 224])
const reshapedImageTensor = resizedImageTensor.reshape([1, 224, 224, 3])
const res = model.predict(reshapedImageTensor)
console.log('response', res)
But the response I get doesn't seem to have much...
{
  "dataId": {},
  "dtype": "float32",
  "id": 293,
  "isDisposedInternal": false,
  "kept": false,
  "rankType": "2",
  "scopeId": 5,
  "shape": [1, 1087],
  "size": 1087,
  "strides": [1087]
}
What does this type of response mean? Is there something I'm doing wrong?
You need to use dataSync() to download the actual predictions of the model.
const res = model.predict(reshapedImageTensor);
const predictions = res.dataSync();
console.log('Predictions', predictions);
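As a side note, not part of the original answer: dataSync() blocks the main thread, while the asynchronous data() performs the same download without blocking, and argMax gives the index of the highest-scoring class.
const res = model.predict(reshapedImageTensor);
const predictions = await res.data(); // non-blocking alternative to dataSync()
const topClass = res.argMax(-1).dataSync()[0]; // index of the best-scoring class
console.log('Top class', topClass, 'score', predictions[topClass]);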
I'm having trouble with the transition from TensorFlow in Python to TensorFlow.js with regard to image preprocessing.
In Python:
import numpy as np
from tensorflow.keras.preprocessing import image

single_coin = r"C:\temp\coins\20Saint-03o.jpg"
img = image.load_img(single_coin, target_size=(100, 100))
array = image.img_to_array(img)
x = np.expand_dims(array, axis=0)
vimage = np.vstack([x])
prediction = model.predict(vimage)
print(prediction[0])
I get the correct result
[2.8914417e-05 3.5085387e-03 1.9252902e-03 6.2635467e-05 3.7389682e-03
1.2983804e-03 7.4157811e-04 1.4608903e-04 2.7099697e-06 1.1844193e-02
1.3398369e-04 9.3798796e-03 9.7308388e-05 7.3931034e-05 1.9695959e-04
9.6496813e-05 4.2653349e-04 8.7305409e-05 8.1476872e-04 4.9094640e-04
1.3498703e-04 9.6476960e-01]
However, in TensorFlow.js, after putting the same image through the following preprocessing function:
function preprocess(img) {
  const tensor = tf.browser.fromPixels(img);
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();
  const offset = tf.scalar(255.0);
  const normalized = tf.scalar(1.0).sub(resized.div(offset));
  const batched = normalized.expandDims(0);
  return batched;
}
I get the following result:
[0.044167134910821915,
0.04726826772093773,
0.04546305909752846,
0.04596292972564697,
0.044733788818120956,
0.04367975518107414,
0.04373137652873993,
0.044592827558517456,
0.045657724142074585,
0.0449688546359539,
0.04648510739207268,
0.04426411911845207,
0.04494940862059593,
0.0457320399582386,
0.045905906707048416,
0.04473186656832695,
0.04691491648554802,
0.04441603645682335,
0.04782886058092117,
0.04696653410792351,
0.045027654618024826,
0.04655187949538231]
I'm obviously not translating the preprocessing appropriately. Does anyone see what I'm missing?
There is no normalization applied in the Python code, but there is normalization in the JS code. Either apply the same normalization in Python as well, or remove the normalization from the JS code.
A similar answer has been given here.
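For instance, a sketch of the JS preprocessing with the normalization removed, so that it matches the Python pipeline above, which feeds raw 0-255 pixel values:
function preprocess(img) {
  const tensor = tf.browser.fromPixels(img);
  // Resize only; no 1 - x/255 scaling, matching the Python code.
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();
  return resized.expandDims(0);
}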
I have a model saved in SavedModel format (.pb). After serving the model without problems, I try to make a prediction via TensorFlow Serving. TF Serving requires me to send the data as a list, otherwise the answer I receive is TypeError: Object of type 'ndarray' is not JSON serializable. But when I send a list, the response is an error.
The input is
import json
import requests

values = [1, 2, 3, 4, 5]
body = {"signature_name": "serving_default",
        "instances": [[values]]}
res = requests.post(url=url, data=json.dumps(body))
and the answer is { "error": "In[0] is not a matrix. Instead it has shape [1,1,5]\n\t [[{{node sequential/dense/Relu}}]]" }
I know the model works; the input without TensorFlow Serving is
value = np.array([1,2,3,4,5])
model.predict([[value]])
So the problem is: how can I use TensorFlow Serving if it requires a list as input, but the model requires an np.array as input?
I suppose you should do it this way:
value = <ndarray>
data = value.tolist()  # tolist() makes the array JSON-serializable
body = {
    "signature_name": "serving_default",
    "instances": data,
}
I'm trying to send a request to my object detection model on CloudML to get predictions. I'm following the guide at https://cloud.google.com/ml-engine/docs/online-predict, but I'm getting an error when submitting the request:
RuntimeError: Prediction failed: Error processing input: Expected uint8, got '\xf6>\x00\x01\x04\xa4d\x94...(more bytes)...\x00\x10\x10\x10\x04\x80\xd9' of type 'str' instead.
This is my code:
import base64

img = base64.b64encode(open("file.jpg", "rb").read()).decode('utf-8')
instance = {"b64": img}  # avoid naming this `json`, which shadows the stdlib module
result = predict_json(project, model, instance, "v1")
My fault, I forgot to add --input_type encoded_image_string_tensor when I exported the graph.
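For reference, assuming the graph was exported with the TF Object Detection API's export_inference_graph.py script (paths and the checkpoint number below are placeholders), the re-export would look something like:
python export_inference_graph.py \
    --input_type encoded_image_string_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt-XXXX \
    --output_directory path/to/exported_graph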