I have a problem when loading a converted TensorFlow Lite model into tf.lite.Interpreter (last line of the code below): I get an error (see below) whenever dilation_rate != 1. The code has to run on an embedded device and there are many steps that follow this piece of code, so the shortest way to get it working would be a work-around. Does anyone know a work-around to get this functionality working?
Traceback (most recent call last):
File "D:\training\tflite_test.py", line 16, in <module>
interpreter = tf.lite.Interpreter( model_content=tflite_model )
File "D:\Python\Python37\lib\site-packages\tensorflow\lite\python\interpreter.py", line 224, in __init__custom_op_registerers_by_func))
ValueError: tensorflow/lite/core/subgraph.cc BytesRequired number of elements overflowed.
Tensor 22 is invalidly specified in schema.
# TensorFlow 2.4.1
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv1D(9, 7, dilation_rate=1, padding="causal", activation='relu', input_shape=(10, 20)),
    tf.keras.layers.Conv1D(6, 7, dilation_rate=2, padding="causal", activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(12, activation='relu', name='hidden'),
    tf.keras.layers.Dense(1, activation='sigmoid', name='output')
])
model.save("tflite_test_model")

converter = tf.lite.TFLiteConverter.from_saved_model("tflite_test_model")
tflite_model = converter.convert()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
Many thanks!
Recent TF versions, including the TF 2.5 rc and the TF nightly builds, have a fix for supporting dilation rate != 1 in the TFLite conversion. Please try your code with a recent TF version.
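As a quick check after upgrading, a minimal sketch that reuses the tflite_test_model directory saved by the code above:

import tensorflow as tf  # expect 2.5.0rc or a recent tf-nightly

converter = tf.lite.TFLiteConverter.from_saved_model("tflite_test_model")
tflite_model = converter.convert()

# With the fix in place, loading the interpreter no longer raises the
# BytesRequired / "invalidly specified in schema" error.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print("OK on TF", tf.__version__)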
I've been trying to train a model on AWS SageMaker, as I found that my computer is no longer powerful enough to train my model in a reasonable amount of time. However, when I tried to load the model (after copy-pasting the code from my computer) I got an unexpected error.
After tinkering around for a little bit, I found that the very first Conv2D layer has a different output shape than it had on my computer.
SageMaker output dimensions:
(None, 128, 498, 3)
Expected output dimensions:
(None, 498, 498, 3)
My code is below:
import tensorflow as tf
from tensorflow import keras
model = keras.models.Sequential()
model.add(keras.Input(shape = (500,500,3)))
model.add(keras.layers.Conv2D(filters=128, kernel_size = (3,3), activation='relu'))
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.0001),
              loss=keras.losses.CategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.summary()
How can I fix this?
I came here because I had the same problem. I found the solution, but I am still really confused about it. I just want to mention that I use the same TensorFlow version (2.10) locally and on SageMaker, and EXACTLY the same code on both.
If you go to https://keras.io/api/layers/convolution_layers/convolution2d/
It states:
"Output shape
4+D tensor with shape: batch_shape + (filters, new_rows, new_cols) if data_format='channels_first' or 4+D tensor with shape: batch_shape + (new_rows, new_cols, filters) if data_format='channels_last'. rows and cols values might have changed due to padding."
So I forced the SageMaker version to `data_format='channels_last'`.
Now both versions, the local one and the AWS one, are consistent.
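For reference, a minimal sketch of that fix, passing data_format explicitly to the layer (the rest of the model is as in the question):

import tensorflow as tf
from tensorflow import keras

model = keras.models.Sequential()
model.add(keras.Input(shape=(500, 500, 3)))
# Force channels_last so the layer always treats the trailing axis as channels,
# regardless of the backend's default image data format.
model.add(keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation='relu',
                              data_format='channels_last'))
model.summary()  # Conv2D output: (None, 498, 498, 128)

Alternatively, tf.keras.backend.set_image_data_format('channels_last') sets the default globally instead of per layer.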
This is the code, from https://keras.io/examples/vision/image_classification_from_scratch/:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# generate a dataset
image_size = (180, 180)
batch_size = 32

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "PetImages",
    validation_split=0.2,
    subset="training",
    seed=1337,
    image_size=image_size,
    batch_size=batch_size,
)
The error is:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-bb7f2d14bf63> in <module>
3 batch_size = 32
4
----> 5 train_ds = tf.keras.preprocessing.image_dataset_from_directory(
6 "PetImages",
7 validation_split = 0.2,
AttributeError: module 'tensorflow.keras.preprocessing' has no attribute 'image_dataset_from_directory'
Is there some small detail I am overlooking?
It has been addressed under this issue.
The specific function (tf.keras.preprocessing.image_dataset_from_directory) is not available under TensorFlow v2.1.x or v2.2.0 yet. It is only available with the tf-nightly builds and exists in the source code of the master branch.
Too bad they didn't indicate it anywhere on the site. Better to use flow_from_directory for now, or switch to tf-nightly and carry on.
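For example, a rough flow_from_directory equivalent (a sketch assuming the same "PetImages" directory layout and a binary label setup; adjust class_mode to your task):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)

# Reads images from the class subfolders of PetImages, holding out 20% for validation.
train_gen = datagen.flow_from_directory(
    "PetImages",
    target_size=(180, 180),
    batch_size=32,
    class_mode="binary",
    subset="training",
    seed=1337,
)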
On v2.5.0 I got the same error using this code:
tf.keras.utils.image_dataset_from_directory(...)
Changing it to:
tf.keras.preprocessing.image_dataset_from_directory(...)
fixed my problem.
I also had the same problem. When I upgraded the TensorFlow version to 2.3.0, it worked.
I trained a simple MLP model using the new tf.keras (version 2.2.4-tf). Here is what the model looks like:
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model

activation = 'relu'  # placeholder; the original uses a variable defined elsewhere

input_layer = Input(batch_shape=(138, 28))
first_layer = Dense(30, activation=activation, name="first_dense_layer_1")(input_layer)
first_layer = Dropout(0.1)(first_layer, training=True)
second_layer = Dense(15, activation=activation, name="second_dense_layer")(first_layer)
out = Dense(1, name='output_layer')(second_layer)
model = Model(input_layer, out)
I get an error when I try to predict with prediction_result = model.predict(test_data, batch_size=138). test_data has shape (69, 28), so it is smaller than the batch_size of 138. Here is the error; the issue seems to come from the first dropout layer:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [138,30] vs. [69,30]
[[node model/dropout/dropout/mul_1 (defined at ./mlp_new_tf.py:471) ]] [Op:__inference_distributed_function_1700]
The same code works with no issues in older versions of Keras (2.2.4) and TensorFlow (1.12.0). How can I fix the issue? I don't have more test data, so I can't add more data points to the test set!
Since you are seeing the issue at prediction time, one way around it is to pad the test data to a multiple of your batch size. It shouldn't slow down prediction since the number of batches doesn't change; numpy.pad should do the trick.
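A minimal sketch of that padding (assuming test_data is a NumPy array of shape (69, 28) and model is the Keras model above):

import numpy as np

batch_size = 138
n = test_data.shape[0]        # 69 in the question
pad_rows = (-n) % batch_size  # rows needed to reach a multiple of batch_size

# Zero-pad along the sample axis only; the feature axis stays untouched.
padded = np.pad(test_data, ((0, pad_rows), (0, 0)), mode="constant")

# Predict on the padded array, then drop the predictions for the padded rows.
prediction_result = model.predict(padded, batch_size=batch_size)[:n]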
I am using TensorFlow 2 on Windows 10, and I downloaded a model from the TensorFlow Detection Model Zoo.
The model I am using is ssd.mobilenetv2.oid4.
The model details are:
[<tf.Tensor 'image_tensor:0' shape=(None, None, None, 3) dtype=uint8>]
Note: I also have the frozen_inference_graph.pb available, along with the config file and checkpoint.
I used a TFLiteConverter snippet to convert the saved_model.pb file to .tflite with a custom shape:
import tensorflow as tf

input_dir = "D:\\Models\\ssd_mobilenet_v2_oid_v4_2018_12_12\\saved_model"
model = tf.saved_model.load(input_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
concrete_func.inputs[0].set_shape([None, None, None, 3])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()
I get the following error:
Traceback (most recent call last):
File "C:\Users\Bhavin\Desktop\TensorFlow_pb_converter.py", line 10, in <module>
tflite_model = converter.convert()
File "C:\Users\Bhavin\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_core\lite\python\lite.py", line 428, in convert
"invalid shape '{1}'.".format(_get_tensor_name(tensor), shape_list))
ValueError: None is only supported in the 1st dimension. Tensor 'image_tensor' has invalid shape '[None, None, None, 3]'
I tried using toco and tflite_convert but I got the same error.
What am I doing wrong and how do I convert this pb file to tflite file?
Currently TensorFlow Lite doesn't support converting dynamic shapes except in the first dimension.
Consider setting an exact shape instead of None for the other dimensions.
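For example, a sketch of the same conversion with a fixed input size (the 300x300 resolution here is a hypothetical placeholder; use whatever size your pipeline feeds the model):

import tensorflow as tf

input_dir = "D:\\Models\\ssd_mobilenet_v2_oid_v4_2018_12_12\\saved_model"
model = tf.saved_model.load(input_dir)
concrete_func = model.signatures[tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY]

# Pin every dimension; only the first (batch) dimension may stay None.
concrete_func.inputs[0].set_shape([1, 300, 300, 3])

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
tflite_model = converter.convert()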
I have been using Keras (version 1.1.1) LSTM with Theano as backend without any problem. Now I would like to switch to TensorFlow (version 0.8.0) and could not get a simple example to work. The problem can be boiled down to the following code snippet, copied from this Keras-TensorFlow interface tutorial.
from keras.layers import LSTM
import tensorflow as tf

my_graph = tf.Graph()
with my_graph.as_default():
    x = tf.placeholder(tf.float32, shape=(None, 20, 64))
    y = LSTM(32)(x)
And I get the following error when the last line is executed:
File "/home/xxx/local/lib/python2.7/site-packages/Keras-1.1.1-py2.7.egg/keras/engine/topology.py", line 529, in call
return self.call(x, mask)
File "/home/xxx/local/lib/python2.7/site-packages/Keras-1.1.1-py2.7.egg/keras/layers/recurrent.py", line 227, in call
input_length=input_shape1)
File "/home/xxx/local/lib/python2.7/site-packages/Keras-1.1.1-py2.7.egg/keras/backend/tensorflow_backend.py", line 1306, in rnn
axes = [1, 0] + list(range(2, len(outputs.get_shape())))
File "/usr/local/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 462, in len
raise ValueError("Cannot take the length of Shape with unknown rank.")
ValueError: Cannot take the length of Shape with unknown rank.
Any suggestions?
You can't mix TensorFlow and Keras like that. Keras keeps track of the shape of its tensors separately from how TensorFlow does.
Try using x = Input(shape=(20, 64)) instead.
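i.e., a minimal sketch of the same snippet with a Keras Input (same Keras 1.1.1 API as in the question):

from keras.layers import Input, LSTM
from keras.models import Model

# Input carries full Keras shape metadata, unlike a raw tf.placeholder,
# so the LSTM can infer the rank of its input.
x = Input(shape=(20, 64))
y = LSTM(32)(x)
model = Model(input=x, output=y)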