I am trying to learn Keras and have been playing with their example code. I am currently playing with this example, and at the very end I've added the following lines just to test saving and loading:
fnet.save( 'FNET' )
fnet = keras.models.load_model( 'FNET' )
It should just save and load, as it does with other models of theirs, but instead it throws the following error:
ValueError: Exception encountered when calling layer "f_net_decoder" (type FNetDecoder).
Could not find matching concrete function to call loaded from the SavedModel. Got:
Positional arguments (4 total):
* Tensor("inputs:0", shape=(None, None, 256), dtype=float32)
* Tensor("encoder_outputs:0", shape=(None, None, 256), dtype=float32)
* None
* False
Keyword arguments: {}
Expected these arguments to match one of the following 2 option(s):
Option 1:
Positional arguments (4 total):
* TensorSpec(shape=(None, None, 256), dtype=tf.float32, name='inputs')
* TensorSpec(shape=(None, None, 256), dtype=tf.float32, name='encoder_outputs')
* TensorSpec(shape=(None, None), dtype=tf.bool, name='mask')
* False
Keyword arguments: {}
Option 2:
Positional arguments (4 total):
* TensorSpec(shape=(None, None, 256), dtype=tf.float32, name='inputs')
* TensorSpec(shape=(None, None, 256), dtype=tf.float32, name='encoder_outputs')
* TensorSpec(shape=(None, None), dtype=tf.bool, name='mask')
* True
Keyword arguments: {}
Call arguments received:
• args=('tf.Tensor(shape=(None, None, 256), dtype=float32)',)
• kwargs={'encoder_outputs': 'tf.Tensor(shape=(None, None, 256), dtype=float32)', 'training': 'False'}
I'm super new to Keras and TensorFlow, so I'm not really sure what the issue is or why it's only showing up here. Any thoughts on how to fix it?
I used:
fnet.save( 'FNET' )
fnet = keras.models.load_model( 'FNET' )
and expected it to run as normal; instead it crashes.
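For what it's worth, one common way around this kind of SavedModel signature mismatch on custom layers is to save only the weights and rebuild the model from the original code before loading them, so no traced call signature has to be deserialized. A minimal sketch, assuming a hypothetical build_fnet() that wraps the model-construction code from the example:
# Save just the trained parameters, not the traced call signatures.
fnet.save_weights('fnet_weights.h5')
# Rebuild the architecture in code, then restore the parameters.
fnet = build_fnet()  # hypothetical: whatever code built fnet originally
fnet.load_weights('fnet_weights.h5')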
Related
I am trying to put a trained model into a Flask app.
The input images for the app are preprocessed in exactly the same way as they were for training the model. Still, I get this error:
ValueError: Input 0 is incompatible with layer functional_1: expected shape=(None, 112, 112, 3), found shape=(None, 224, 224, 3)
Model training:
'''
train_data.element_spec
>>(TensorSpec(shape=(None, 224, 224, 3), dtype=tf.float32, name=None),
TensorSpec(shape=(None, 400), dtype=tf.bool, name=None))
'''
Flask app:
'''
IMG_SIZE=224
BATCH_SIZE=32
def preprocess_image(image_path, img_size = 224):
    image = tf.io.read_file(image_path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize(image, size=[img_size, img_size])
    return image

def create_data_batches(x, batch_size=32):
    print('Creating test data batches....')
    x = [x]
    data = tf.data.Dataset.from_tensor_slices(tf.constant(x))
    data_batch = data.map(preprocess_image).batch(batch_size)
    return data_batch
'''
So where does this expected shape=(None, 112, 112, 3) come from? There is not a single '112' anywhere in the code, so what am I doing wrong?
I would really appreciate any help.
P.S. The model I am trying to work with is inception_v2.
P.P.S. The saved trained model format is .h5.
This error occurs because the model was trained with an image size of (112, 112, 3).
To solve the error, replace this:
def preprocess_image(image_path, img_size = 224):
with this:
def preprocess_image(image_path, img_size = 112):
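As a sanity check, you can also ask the trained model directly what input shape it was built with; a quick sketch, assuming the saved .h5 file from the question is named model.h5:
from tensorflow import keras

# Load the trained model and print the input shape it expects.
model = keras.models.load_model('model.h5')  # hypothetical filename
print(model.input_shape)  # (None, 112, 112, 3) for this model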
I am trying to follow a guide for transfer learning from a textbook using the code below, and I get the error message shown after it. I assume the input_shape does not match IMAGE_SHAPE, but I can't figure out the correct dimensions.
Code:
import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import matplotlib.pyplot as plt
module_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_160/feature_vector/4"
my_model = hub.KerasLayer(module_url)
classifier_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_160/feature_vector/4"
IMAGE_SHAPE = (224,224)
classifier = tf.keras.Sequential([hub.KerasLayer(classifier_url, input_shape = IMAGE_SHAPE+(3,))])
Error message:
graph_function = self._create_graph_function(args, kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/function.py:3289 _create_graph_function
capture_by_value=self._capture_by_value),
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:999 func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:672 wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/saved_model/function_deserialization.py:291 restored_function_body
"\n\n".join(signature_descriptions)))
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
Positional arguments (4 total):
* Tensor("inputs:0", shape=(None, 224, 224, 3), dtype=float32)
* False
* False
* 0.99
Keyword arguments: {}
Expected these arguments to match one of the following 4 option(s):
Option 1:
Positional arguments (4 total):
* TensorSpec(shape=(None, 160, 160, 3), dtype=tf.float32, name='inputs')
* False
* False
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Option 2:
Positional arguments (4 total):
* TensorSpec(shape=(None, 160, 160, 3), dtype=tf.float32, name='inputs')
* False
* True
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Option 3:
Positional arguments (4 total):
* TensorSpec(shape=(None, 160, 160, 3), dtype=tf.float32, name='inputs')
* True
* True
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
Option 4:
Positional arguments (4 total):
* TensorSpec(shape=(None, 160, 160, 3), dtype=tf.float32, name='inputs')
* True
* False
* TensorSpec(shape=(), dtype=tf.float32, name='batch_norm_momentum')
Keyword arguments: {}
You can find the documentation for the model you'd like to use at https://tfhub.dev/google/imagenet/mobilenet_v2_100_160/feature_vector/4. There, and in the error message, it says that the input image needs to be of shape (160, 160, 3):
classifier_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_160/feature_vector/4"
IMAGE_SHAPE = (160, 160)
classifier = tf.keras.Sequential([hub.KerasLayer(classifier_url, input_shape = IMAGE_SHAPE+(3,))])
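Alternatively, if your images arrive at a different size, you can resize them to the module's expected input right before prediction. A hedged sketch, assuming image is a float32 tensor scaled to [0, 1]:
import tensorflow as tf

image = tf.image.resize(image, (160, 160))               # match the module's input size
predictions = classifier.predict(image[tf.newaxis, ...]) # add the batch dimension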
I am trying to use the Unet model's output in another neural network. The problem is that I need to get the real shape, without None in place of the dimensions. Could you please tell me how I can do that? unet_model.summary() shows the real shape, but when I try to get the output tensor, its shape contains None:
unet_model = Unet(input_shape=(256,256,3),backbone_name='resnet50',encoder_weights='imagenet', decoder_block_type='transpose')
f_i = Input(shape=(256,256,3))
unet_model.call(f_i)
unet_model.layers[-1].output
# unet_model.summary()
....
sigmoid (Activation) (None, 256, 256, 1) 0 final_conv[0][0]
# unet_model.call(f_i)
<tf.Tensor 'sigmoid_5/Sigmoid:0' shape=(?, ?, ?, 1) dtype=float32>
# unet_model.layers[-1].output
<tf.Tensor 'sigmoid_5/Sigmoid:0' shape=(?, ?, ?, 1) dtype=float32>
I expect to receive a tensor with shape (None, 256, 256, 1).
I use tensorflow==1.14.0, keras==2.3.1
You can get the output shape of each layer of the model using the lines below:
for layer in unet_model.layers:
    print(layer.output_shape)
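It may also be worth calling the model like a layer instead of invoking .call() directly, since Model.__call__ runs Keras's shape-inference machinery while .call() bypasses it. A sketch under that assumption, using the names from the question:
f_i = Input(shape=(256, 256, 3))
out = unet_model(f_i)  # note: unet_model(f_i), not unet_model.call(f_i)
print(out.shape)       # should report (?, 256, 256, 1) once shapes are inferred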
I trained my customized ssd_mobilenet_v2 using TensorFlow2 Object Detection API.
After training completed, I used exporter_main_v2.py to export a saved_model of my customized model.
If I load the saved_model with TensorFlow2, it seems there are two kinds of output formats.
import tensorflow as tf
saved_model = tf.saved_model.load("saved_model")
detect_fn = saved_model.signatures["serving_default"]
print(detect_fn.outputs)
'''
[<tf.Tensor 'Identity:0' shape=(1, 100) dtype=float32>,
<tf.Tensor 'Identity_1:0' shape=(1, 100, 4) dtype=float32>,
<tf.Tensor 'Identity_2:0' shape=(1, 100) dtype=float32>,
<tf.Tensor 'Identity_3:0' shape=(1, 100, 7) dtype=float32>,
<tf.Tensor 'Identity_4:0' shape=(1, 100) dtype=float32>,
<tf.Tensor 'Identity_5:0' shape=(1,) dtype=float32>,
<tf.Tensor 'Identity_6:0' shape=(1, 1917, 4) dtype=float32>,
<tf.Tensor 'Identity_7:0' shape=(1, 1917, 7) dtype=float32>]
'''
print(detect_fn.structured_outputs)
'''
{'detection_classes': TensorSpec(shape=(1, 100), dtype=tf.float32, name='detection_classes'),
'detection_scores': TensorSpec(shape=(1, 100), dtype=tf.float32, name='detection_scores'),
'detection_multiclass_scores': TensorSpec(shape=(1, 100, 7), dtype=tf.float32, name='detection_multiclass_scores'),
'num_detections': TensorSpec(shape=(1,), dtype=tf.float32, name='num_detections'),
'raw_detection_boxes': TensorSpec(shape=(1, 1917, 4), dtype=tf.float32, name='raw_detection_boxes'),
'detection_boxes': TensorSpec(shape=(1, 100, 4), dtype=tf.float32, name='detection_boxes'),
'detection_anchor_indices': TensorSpec(shape=(1, 100), dtype=tf.float32, name='detection_anchor_indices'),
'raw_detection_scores': TensorSpec(shape=(1, 1917, 7), dtype=tf.float32, name='raw_detection_scores')}
'''
Then I tried to convert this saved_model to ONNX format using tf2onnx.
However, the output of onnxruntime is a list.
Judging by the shapes of the results in the list, I think the sequence is the same as detect_fn.outputs:
import numpy as np
import onnxruntime as rt
sess = rt.InferenceSession("model.onnx")
input_name = sess.get_inputs()[0].name
pred_onx = sess.run(None, {input_name: np.zeros((1,300,300,3), dtype=np.uint8)})
print(pred_onx) # a list
print([i.shape for i in pred_onx])
'''
[(1, 100),
(1, 100, 4),
(1, 100),
(1, 100, 7),
(1, 100),
(1,),
(1, 1917, 4),
(1, 1917, 7)]
'''
Because some of the results have the same shape as others, it becomes hard to tell them apart.
Is there any documentation about this relationship that I can refer to?
After looking closely at the values in the outputs, I found the mapping below.
def result_mapper(outputs):
    result_dict = dict()
    result_dict["num_detections"] = outputs[5]
    result_dict["raw_detection_scores"] = outputs[7]
    result_dict["raw_detection_boxes"] = outputs[6]
    result_dict["detection_multiclass_scores"] = outputs[3]
    result_dict["detection_boxes"] = outputs[1]
    result_dict["detection_scores"] = outputs[4]
    result_dict["detection_classes"] = outputs[2]
    result_dict["detection_anchor_indices"] = outputs[0]
    return result_dict
I had the same question, and after some hours of stepping through the debugger I found they are... in sorted order. The method determining the iteration order is here:
In the case of dict instances, the sequence consists of the values, sorted by
key to ensure deterministic behavior.
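Given that rule, the hand-built result_mapper above can be derived programmatically: sort the structured_outputs keys and zip them with the onnxruntime results. A short sketch using the names from the question:
# The structured_outputs keys, sorted, line up with the positional ONNX outputs.
sorted_keys = sorted(detect_fn.structured_outputs.keys())
named_outputs = dict(zip(sorted_keys, pred_onx))
print(named_outputs['num_detections'])  # same array as pred_onx[5]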
I have no idea how this error is arising. I'm trying to change the input format to an RNN and have printed out the tensors in the original version (which works) and the modified version (which crashes).
FUNCTIONAL:
LABEL= Tensor("concat_1:0", shape=(?, 2), dtype=float32, device=/device:CPU:0) (?, 2)
inputs=Tensor("concat:0", shape=(?, 8), dtype=float32, device=/device:CPU:0)
x=[<tf.Tensor 'split:0' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:1' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:2' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:3' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:4' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:5' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:6' shape=(?, 1) dtype=float32>,
<tf.Tensor 'split:7' shape=(?, 1) dtype=float32>]
last outputs=Tensor("rnn/rnn/basic_lstm_cell/mul_23:0", shape=(?, 3), dtype=float32)
PREDICTION Tensor("add:0", shape=(?, 2), dtype=float32)
LOSS Tensor("mean_squared_error/value:0", shape=(), dtype=float32)
BROKEN:
X= 5 Tensor("Const:0", shape=(49, 10), dtype=float32, device=/device:CPU:0)
labels= Tensor("Const_5:0", shape=(49, 10), dtype=float32)
OUTPUTS Tensor("rnn/rnn/basic_lstm_cell/mul_14:0", shape=(49, 5), dtype=float32)
PREDICTIONS Tensor("add:0", shape=(49, 10), dtype=float32)
LABELS Tensor("Const_5:0", shape=(49, 10), dtype=float32)
LOSS Tensor("mean_squared_error/value:0", shape=(), dtype=float32)
Here is the code for the model, which is the same for each of them:
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
outputs, _ = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)
outputs = outputs[-1]
print('-->OUTPUTS', outputs)
weight = tf.Variable(tf.random_normal([LSTM_SIZE, N_OUTPUTS]))
bias = tf.Variable(tf.random_normal([N_OUTPUTS]))
predictions = tf.matmul(outputs, weight) + bias
print('-->PREDICTIONS', predictions)
print('-->LABELS', labels)
loss = tf.losses.mean_squared_error(labels, predictions)
print('-->LOSS', loss)
train_op = tf.contrib.layers.optimize_loss(loss=loss, global_step=tf.train.get_global_step(), learning_rate=0.01, optimizer="SGD")
eval_metric_ops = {"rmse": tf.metrics.root_mean_squared_error(labels, predictions)}
TL;DR: Use x = tf.split( x, 10, axis = -1 ) to split x before feeding it.
TS;WM:
The error probably happens at tf.nn.static_rnn(), in the second line of your code (it would have been nice had you posted the error line number):
outputs, _ = tf.nn.static_rnn(lstm_cell, x, dtype=tf.float32)
The "broken" version tries to feed a tensor with shape ( 49, 10 ) whereas the working version is feeding a list of 8 tensors with shape ( ?, 1 ). The documentation says:
inputs: A length T list of inputs, each a Tensor of shape [batch_size, input_size], or a nested tuple of such elements.
In the previous line you define the lstm_cell with tf.contrib.rnn.BasicLSTMCell.__init__() (presumably, because the import lines are omitted from your code), which has the num_units argument filled by LSTM_SIZE (which is, again, omitted from your code):
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE, forget_bias=1.0)
So you have to get your ducks in a row. x has to be a list of ( batch_size, 1 ) tensors, which you can achieve with tf.split():
x = tf.split( x, 10, axis = -1 )
where I presume 10 to be the length of the data you're trying to feed, just based on the output you pasted.
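For completeness, a tiny TF1-style sketch (matching the question's graph-mode code) showing what the split produces:
import tensorflow as tf

x = tf.zeros((49, 10))               # stand-in for the "broken" input tensor
x_list = tf.split(x, 10, axis=-1)    # length-10 list of (49, 1) tensors
print(len(x_list), x_list[0].shape)  # -> 10 (49, 1)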