Error while converting a Keras model with 2 inputs to TFLite - TensorFlow

I am trying to convert a TF2 Keras model to TFLite, but I get the following error:
ValueError: Invalid input size: expected 2 items got 1 items.
My network is Siamese: it has 2 inputs that are both fed into the same backbone:
input_shape = (image_size, image_size, 3)
left_input = tf.keras.layers.Input(shape=input_shape, name='left_input')
right_input = tf.keras.layers.Input(shape=input_shape, name='right_input')
# define base model:
general_input = tf.keras.layers.Input(shape=input_shape)
x = build_mobilenet(inputs=general_input)  # builds a standard MobileNet model
backbone_model = tf.keras.Model(general_input, x)
# run both inputs through the backbone:
left_features = backbone_model(left_input)
right_features = backbone_model(right_input)
output = tf.keras.layers.Subtract(name='diff')([left_features, right_features])
# continue with some more operations on the output tensor...
During training my dataset object returns a dictionary of inputs and the label: {'left_input': im_left, 'right_input': im_right}, label.
When trying to quantize the model I have a representative dataset object that returns only the inputs (without the label): return {'left_input': left, 'right_input': right}.
The TFLite code used for quantization:
data_generator = DataProvider(num_images=10)
model = tf.keras.models.load_model(float32_model_path, compile=False)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = data_generator
tflite_model = converter.convert()
The error occurs when calling converter.convert(). Does anyone understand what the issue could be?
Thanks!
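For reference, the documented pattern for a multi-input representative dataset is a generator that yields one list of arrays per calibration sample, ordered like model.inputs. A minimal sketch (random data stands in for the real DataProvider, whose interface is only assumed here):
import numpy as np

def representative_dataset():
    for _ in range(10):
        # one array per model input, in the order of model.inputs
        left = np.random.rand(1, image_size, image_size, 3).astype(np.float32)
        right = np.random.rand(1, image_size, image_size, 3).astype(np.float32)
        yield [left, right]

converter.representative_dataset = representative_dataset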

Related

Keras Autoencoder: Target data is missing. Model expects target data to be provided in fit()

I am encountering an error when fitting an autoencoder model. I'm trying to translate text between 2 languages.
Here is my code:
xLines = np.genfromtxt('data-1.ar', dtype='str', delimiter=',')
yLines = np.genfromtxt('data-1.en', dtype='str', delimiter=',')

vectorize_layer = TextVectorization(
    standardize=custom_standardization,
    max_tokens=maxTokens,
    split="whitespace",
    ngrams=None,
    output_mode="int",
    output_sequence_length=outputLength,
)
vectorize_layer.adapt(vocab_data)

def vectorize_text(text):
    text = tf.expand_dims(text, -1)
    return vectorize_layer(text)

arDataset = vectorize_text(xLines)
enDataset = vectorize_text(yLines)
print(arDataset.shape)
print(enDataset.shape)
# Encoder (encoder_inputs is assumed to be defined analogously to decoder_inputs)
encoder_inputs = keras.Input(shape=(None,), dtype="int64", name="encoder_inputs")
x = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=maxTokens,
    sequence_length=outputLength,
    embedding_dim=EMBED_DIM,
    mask_zero=True,
)(encoder_inputs)
encoder_outputs = keras_nlp.layers.TransformerEncoder(
    intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(inputs=x)
encoder = keras.Model(encoder_inputs, encoder_outputs)
# Decoder
decoder_inputs = keras.Input(shape=(None,), dtype="int64", name="decoder_inputs")
encoded_seq_inputs = keras.Input(shape=(None, EMBED_DIM), name="decoder_state_inputs")
x = keras_nlp.layers.TokenAndPositionEmbedding(
    vocabulary_size=maxTokens,
    sequence_length=outputLength,
    embedding_dim=EMBED_DIM,
    mask_zero=True,
)(decoder_inputs)
x = keras_nlp.layers.TransformerDecoder(
    intermediate_dim=INTERMEDIATE_DIM, num_heads=NUM_HEADS
)(decoder_sequence=x, encoder_sequence=encoded_seq_inputs)
x = keras.layers.Dropout(0.5)(x)
decoder_outputs = keras.layers.Dense(enChars, activation="softmax")(x)
decoder = keras.Model(
    [
        decoder_inputs,
        encoded_seq_inputs,
    ],
    decoder_outputs,
)
decoder_outputs = decoder([decoder_inputs, encoder_outputs])

transformer = keras.Model(
    [encoder_inputs, decoder_inputs],
    decoder_outputs,
    name="transformer",
)
transformer.summary()
transformer.compile(
    "rmsprop", loss="categorical_crossentropy", metrics=["accuracy"]
)
transformer.fit([arDataset, enDataset], epochs=100)
The error I'm receiving is:
ValueError: Target data is missing. Your model was compiled with
loss=categorical_crossentropy, and therefore expects target data to be
provided in fit().
The shape of both my datasets is (99, 22)
Update 1
I've tried changing the fit arguments but I'm still encountering a similar error.
transformer.fit(arDataset,enDataset, epochs=100)
transformer.fit(x=arDataset,y=enDataset, epochs=100)
ValueError: Layer "transformer" expects 2 input(s), but it received 1
input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0'
shape=(None, 22) dtype=int64>]
I'm not sure what the error means or how to resolve it.
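A minimal sketch of how a two-input sequence-to-sequence model like this is usually fed (assumptions: teacher forcing, with the decoder input shifted right and the target shifted left and one-hot encoded to match categorical_crossentropy; arDataset, enDataset and enChars are taken from the code above):
decoder_in = enDataset[:, :-1]                                 # decoder tokens, shifted right
decoder_target = tf.one_hot(enDataset[:, 1:], depth=enChars)   # targets, shifted left
transformer.fit(x=[arDataset, decoder_in], y=decoder_target, epochs=100)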

ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr

After a long search, none of the solutions I found work for me. I hope you can help me overcome this problem so I can continue my project.
The problem occurs while doing post-training integer quantization of a GRU model; it gives me the following error:
ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr.
(screenshot: GRU quantization error)
The code I am using:
converter = tf.lite.TFLiteConverter.from_saved_model(GRUMODEL_TF)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_gen():
    for sample in XX_data:
        sample = np.expand_dims(sample.astype(np.float32), axis=0)
        yield [sample]

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
converter.representative_dataset = representative_dataset_gen
model_tflite = converter.convert()
open(GRUMODEL_TFLITE, "wb").write(model_tflite)
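For comparison, a dynamic-range (weights-only) quantization sketch that drops the full-integer constraints; whether this avoids the parse error depends on the TF version, since full-integer quantization of fused RNN ops is not supported everywhere (an assumption, not a verified fix):
converter = tf.lite.TFLiteConverter.from_saved_model(GRUMODEL_TF)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# no int8 I/O and no representative dataset, so activations stay in float
model_tflite = converter.convert()
open(GRUMODEL_TFLITE, "wb").write(model_tflite)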

Evaluating a model with two inputs and one output in TensorFlow

I am trying to evaluate a model with 2 inputs and 1 output: each input goes to a separate pretrained model, and then the outputs from both models are averaged. I am using the same data for both inputs.
test_dir = 'D:\Graduation_project\Damage type not collected'
test_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
test_set = test_datagen.flow_from_directory(test_dir,
                                            class_mode='categorical',
                                            batch_size=16,
                                            target_size=(150, 150))
test_set1 = test_datagen.flow_from_directory(test_dir,
                                             class_mode='categorical',
                                             batch_size=16,
                                             target_size=(150, 150))
Loading the first model and renaming the layers:
def load_dense_model():
    densenet = tf.keras.models.load_model('D:\Graduation_project\saved models\damage_type_model.h5', compile=False)
    for i, layer in enumerate(densenet.layers):
        layer._name = 'Densenet_layer' + str(i)
    return densenet
Loading the second model:
def load_vgg19_model():
    vgg19 = tf.keras.models.load_model('D:\Graduation_project\saved models\damage_type_VGG19.h5', compile=False)
    return vgg19
Creating the ensemble model:
def ensamble_model(first_model, second_model):
    densenet = first_model()
    vgg19 = second_model()
    output_1 = densenet.get_layer('Densenet_layer613')
    output_2 = vgg19.get_layer('dense_4')
    avg = tf.keras.layers.Average()([output_1.output, output_2.output])
    model = Model(inputs=[densenet.input, vgg19.input], outputs=avg)
    return model

METRICS = [
    'accuracy',
    tf.metrics.TruePositives(name='tp'),
    tf.metrics.FalsePositives(name='fp'),
    tf.metrics.TrueNegatives(name='tn'),
    tf.metrics.FalseNegatives(name='fn'),
    tf.metrics.Precision(name='precision'),
    tf.metrics.Recall(name='recall'),
    tfa.metrics.F1Score(name='F1_Score', num_classes=5),
    tfa.metrics.MultiLabelConfusionMatrix(num_classes=5)
]
model = ensamble_model(load_dense_model, load_vgg19_model)
Compiling and evaluating the model:
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics='accuracy')
model.evaluate({'Densenet_layer0': test_set1, 'input_2': test_set})
evaluate() fails to run
ValueError: Failed to find data adapter that can handle input: (<class 'dict'> containing {"<class 'str'>"} keys and {"<class 'tensorflow.python.keras.preprocessing.image.DirectoryIterator'>"} values), <class 'NoneType'>
My guess is that your model is complaining because you are feeding it a dict/list of iterators that each yield one image, instead of feeding it an iterator that yields the image twice (once for each model).
What would happen if you wrapped your DirectoryIterator in a generator that feeds the data correctly?
def gen_itertest(test_dir):
    test_set = test_datagen.flow_from_directory(test_dir,
                                                class_mode='categorical',
                                                batch_size=16,
                                                target_size=(150, 150))
    for i in range(len(test_set)):
        x = test_set.next()
        yield [x[0], x[0]], x[1]  # twice the input, only once the label
and then you can feed this to evaluate:
testset = gen_itertest('D:\Graduation_project\Damage type not collected')
result = model.evaluate(testset)
I am not sure this will work, but since you haven't provided us with a minimal, reproducible example, I am not going to make one to test it.
Try calling evaluate() like this:
result = model.evaluate(x=[test_set1, test_set])
Then you could get the names of the metrics by doing something like this:
dict(zip(model.metrics_names, result))

Object Detection API v2 TFLite model post-training quantization

I am trying to convert and quantize a model trained with the Object Detection API v2 to run it on a Coral Dev Board.
It seems like there is still a big problem with exporting Object Detection models to TFLite, but I hope someone has some advice for me.
My converter looks like the following, and I am trying to convert "SSD MobileNet v2 320x320" from the Model Zoo v2:
def convertModel(input_dir, output_dir, pipeline_config="", checkpoint: int = -1, quantization=False):
    os.environ['CUDA_VISIBLE_DEVICES'] = '0'
    files = os.listdir(input_dir)
    if pipeline_config == "":
        pipeline_config = [pipe for pipe in files if pipe.endswith(".config")][0]
    pipeline_config_path = os.path.join(input_dir, pipeline_config)

    # Find latest or given checkpoint
    checkpoint_file = ""
    checkpointDir = os.path.join(input_dir, 'checkpoint')
    for chck in sorted(os.listdir(checkpointDir)):
        if chck.endswith(".index"):
            checkpoint_file = chck[:-6]
            # Stop search when the requested one was found
            if chck.endswith(str(checkpoint)):
                break
    print("#####################################")
    print(checkpoint_file)
    print("#####################################")
    # ckeckpint_file = [chck for chck in files if chck.endswith(f"{checkpoint}.meta")][0]
    trained_checkpoint_prefix = os.path.join(checkpointDir, checkpoint_file)

    configs = config_util.get_configs_from_pipeline_file(pipeline_config_path)
    detection_model = model_builder.build(configs['model'], is_training=False)
    ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
    ckpt.restore(trained_checkpoint_prefix).expect_partial()

    class MyModel(tf.keras.Model):
        def __init__(self, model):
            super(MyModel, self).__init__()
            self.model = model
            self.seq = tf.keras.Sequential([
                tf.keras.Input([300, 300, 3], 1),
            ])

        def call(self, x):
            x = self.seq(x)
            images, shapes = self.model.preprocess(x)
            prediction_dict = self.model.predict(images, shapes)
            detections = self.model.postprocess(prediction_dict, shapes)
            return detections

    km = MyModel(detection_model)
    y = km.predict(np.random.random((1, 300, 300, 3)).astype(np.float32))

    converter = tf.lite.TFLiteConverter.from_keras_model(km)
    if quantization:
        converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
        converter.target_spec.supported_ops = [tf.lite.OpsSet.SELECT_TF_OPS, tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
        converter.representative_dataset = _genDataset
    else:
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    converter.experimental_new_converter = True
    converter.allow_custom_ops = True
    tflite_model = converter.convert()
    open(os.path.join(output_dir, 'model.tflite'), 'wb').write(tflite_model)
My data generator loads about 100 images downloaded from the COCO dataset to generate sample inputs:
def _genDataset():
    sampleDir = os.path.join("Dataset", "Coco")
    for i in os.listdir(sampleDir):
        image = cv2.imread(os.path.join(sampleDir, i))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        image = cv2.resize(image, (300, 300))
        image = image.astype("float")
        image = np.expand_dims(image, axis=1)
        image = image.reshape(1, 300, 300, 3)
        yield [image.astype("float32")]
I tried to run the code with TF 2.2.0, which returned
RuntimeError: Max and min for dynamic tensors should be recorded during calibration
An update to TF 2.3.0 was supposed to help, but that then returns
<unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
tf.Size {device = ""}
I also tested tf-nightly (2.4.0), which again returns
RuntimeError: Max and min for dynamic tensors should be recorded during calibration
Right now this tf.Size operator seems to be the reason why I can't convert the model, because when I allow custom operations I can convert it to TFLite.
Sadly that is not a solution for me, because the Coral converter and my interpreter can't use a model with a missing custom op.
Does someone know if there is a possibility to remove this op in postprocessing, or to just ignore it during conversion?
Converting to TFLite without quantization and with tf.lite.OpsSet.TFLITE_BUILTINS works without problems.
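For what it's worth, the workflow documented in the Object Detection API repository for TF2 SSD models is to first export a TFLite-friendly SavedModel with object_detection/export_tflite_graph_tf2.py and then convert that, rather than wrapping the checkpoint in a custom Keras model. A sketch under that assumption (paths are placeholders, and uint8 I/O follows the Coral examples):
# Step 1 (shell, sketch only):
#   python object_detection/export_tflite_graph_tf2.py \
#       --pipeline_config_path=<pipeline.config> \
#       --trained_checkpoint_dir=<checkpoint_dir> \
#       --output_directory=<export_dir>
# Step 2: convert the exported SavedModel at <export_dir>/saved_model.
converter = tf.lite.TFLiteConverter.from_saved_model("<export_dir>/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# the representative generator must yield the exported model's input shape
# (320x320 for this config, not the 300x300 used in _genDataset above)
converter.representative_dataset = _genDataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)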

TFF: ValueError: Error when checking model target

I would like to implement image classification with TensorFlow Federated. When I create the model and pass it to the federated averaging process, I get an error that I can't understand. Here is part of my code implemented with TFF:
input_shape = (224, 224, 3)

def create_compiled_keras_model(input_shape, base_model='resnet18'):
    inputs = tf.keras.layers.Input(shape=(input_shape))
    base_encoder = tf.keras.applications.ResNet50(
        include_top=False, weights=None, input_tensor=None,
        input_shape=None, pooling='avg')
    base_encoder.training = True
    h = base_encoder(inputs)
    x = tf.keras.layers.Dense(2)(h)
    x = tf.keras.layers.Activation('relu')(x)
    x = tf.keras.layers.Dense(2)(x)
    model = tf.keras.Model(inputs=inputs, outputs=[h, x])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])
    return model

def model_fn():
    keras_model = create_compiled_keras_model(input_shape, base_model='resnet18')
    return tff.learning.from_compiled_keras_model(keras_model, sample_batch)

iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    server_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=1.0),
    client_weight_fn=None)
The error was:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), for inputs ['resnet50', 'dense_1'] but instead got the following list of 1 arrays: [<tf.Tensor 'Const_1:0' shape=(2, 2) dtype=float32>]...
Thanks for the help!
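The error complains that the model produces two output arrays (for ['resnet50', 'dense_1']) while only a single target tensor is available. For comparison, a single-output variant of the model head is sketched below (an assumption about the intent, since the feature output h may not need its own loss; the softmax is added so the output matches CategoricalCrossentropy's default from_logits=False):
def create_compiled_keras_model(input_shape, base_model='resnet18'):
    inputs = tf.keras.layers.Input(shape=input_shape)
    base_encoder = tf.keras.applications.ResNet50(
        include_top=False, weights=None, pooling='avg')
    h = base_encoder(inputs)
    x = tf.keras.layers.Dense(2)(h)
    x = tf.keras.layers.Activation('relu')(x)
    x = tf.keras.layers.Dense(2, activation='softmax')(x)
    # single output so the model's output structure matches one target tensor
    model = tf.keras.Model(inputs=inputs, outputs=x)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        loss=tf.keras.losses.CategoricalCrossentropy(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()])
    return model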