I tried to convert my CNN model .h5 file to a .tflite file using this code:
import tensorflow as tf
from tf.lite import TFLiteConverter
converter = lite.TFLiteConverter.from_saved_model('/drive/My Drive/FSD_modelV09A.h5')
tflite_model = converter.convert()
open("/drive/My Drive/FSD_modelV09A.tflite", "wb").write(tflite_model)
But then there's an error saying:
ModuleNotFoundError: No module named 'tf'
You cannot import from a module alias; you have to use the full module name:
from tensorflow.lite import TFLiteConverter
You can also just refer to tf.lite.TFLiteConverter directly in your code.
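Note also that from_saved_model expects a SavedModel directory rather than an .h5 file, so the conversion will still fail once the import is fixed. A minimal sketch for an .h5 model, loading it with Keras first and using from_keras_model instead:

import tensorflow as tf

# load the Keras model from the .h5 file, then convert it;
# from_saved_model would expect a SavedModel directory instead
model = tf.keras.models.load_model('/drive/My Drive/FSD_modelV09A.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
with open('/drive/My Drive/FSD_modelV09A.tflite', 'wb') as f:
    f.write(tflite_model)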
In from X import Y, the name X is resolved by the import system against actual module names, not local variables, so an alias like tf (created by import tensorflow as tf) can never be used in an import statement. Try importing from the original module name:
from tensorflow.lite import TFLiteConverter
I had code for loading a BERT model that used to run fine, but now it raises an error.
Here is the code:
model = load_trained_model_from_checkpoint(
    config_path,
    checkpoint_path,
    trainable=True,
    seq_len=SEQ_LEN,
    output_layer_num=4
)
Now the error it raises is:
AttributeError: 'tuple' object has no attribute 'layer'
The environment settings are as follows:
keras-bert=0.85.0
keras=2.4.3
tensorflow=1.15.2
Many thanks in advance
In your environment, when installing packages, try installing them without pinning specific versions:
pip install -q keras-bert
pip install keras
AttributeError: 'tuple' object has no attribute 'layer' typically occurs when you mix up keras and tensorflow.keras, as this answer explains.
See if that resolves your issue. Also, if you have the following in your code:
import keras
from keras import backend as K
Try changing them to:
from tensorflow import keras
from tensorflow.keras import backend as K
I hope that resolves your issue.
You can check this article for reference.
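For instance, a minimal sketch that keeps every Keras import on the tensorflow.keras namespace, so no standalone-keras objects get mixed in:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K  # instead of `from keras import backend as K`

# a tiny model built purely from tensorflow.keras classes
model = keras.Sequential([keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')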
I'm trying to learn how to use some ML stuff for Android. I got the Text Classification demo working, and it seems to run fine. So then I tried creating my own model.
The code I used to create my own model was this:
import numpy as np
import os
from tflite_model_maker import model_spec
from tflite_model_maker import text_classifier
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.text_classifier import AverageWordVecSpec
from tflite_model_maker.text_classifier import DataLoader
import tensorflow as tf
assert tf.__version__.startswith('2')
tf.get_logger().setLevel('ERROR')
spec = model_spec.get('mobilebert_classifier')
train_data = DataLoader.from_csv(
    filename='/path to file/train.csv',
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.export(export_dir='average_word_vec')
The code appeared to run fine and it created a model.tflite file for me. I then replaced the demo tflite file with mine. But when I run the demo I get the following error:
java.lang.AssertionError: Error occurred when initializing NLClassifier: Type mismatch for input tensor serving_default_input_type_ids:0. Requested STRING, got INT32.
    at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.initJniWithByteBuffer(Native Method)
    at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.access$100(NLClassifier.java:67)
    at org.tensorflow.lite.task.text.nlclassifier.NLClassifier$2.createHandle(NLClassifier.java:223)
    at org.tensorflow.lite.task.core.TaskJniUtils.createHandleFromLibrary(TaskJniUtils.java:91)
    at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromBufferAndOptions(NLClassifier.java:219)
    at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromFileAndOptions(NLClassifier.java:175)
    at org.tensorflow.lite.task.text.nlclassifier.NLClassifier.createFromFile(NLClassifier.java:150)
    at org.tensorflow.lite.examples.textclassification.client.TextClassificationClient.load(TextClassificationClient.java:44)
    at org.tensorflow.lite.examples.textclassification.MainActivity.lambda$onStart$1$MainActivity(MainActivity.java:67)
    at org.tensorflow.lite.examples.textclassification.-$$Lambda$MainActivity$eJaQnJq74KcmPEczFE5swJIGydg.run(Unknown Source:2)
What am I missing?
In your code you trained a MobileBERT model, but saved it to the path average_word_vec?
spec = model_spec.get('mobilebert_classifier')
model.export(export_dir='average_word_vec')
One possibility is that you are using the average_word_vec model but adding MobileBERT metadata, so the preprocessing doesn't match.
Could you follow the Model Maker tutorial and try again?
https://colab.sandbox.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb
Make sure to change the export path.
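For example, a sketch that keeps the demo's pipeline consistent by training and exporting with the average_word_vec spec (the error above shows the demo's NLClassifier requested a STRING input tensor, while a MobileBERT export takes INT32 token ids):

import tensorflow as tf
from tflite_model_maker import model_spec, text_classifier
from tflite_model_maker.text_classifier import DataLoader

# use the spec that matches what the demo app expects
spec = model_spec.get('average_word_vec')
train_data = DataLoader.from_csv(
    filename='/path to file/train.csv',  # path as in the question
    text_column='sentence',
    label_column='label',
    model_spec=spec,
    is_training=True)
model = text_classifier.create(train_data, model_spec=spec, epochs=10)
model.export(export_dir='average_word_vec')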
I have to try quantization on my model (tflite).
I want to change float32 to float16 through dynamic range quantization.
This is my code:
import tensorflow as tf
import json
import sys
import pprint
from tensorflow import keras
import numpy as np
converter = tf.lite.TFLiteConverter.from_saved_model('models')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_quant_model = converter.convert()
open("quant.tflite", "wb").write(tflite_quant_model)
On my MacBook there is a folder called 'models', which contains two tflite files.
When I execute the code, the following error occurs:
converter = tf.lite.TFLiteConverter.from_saved_model('quantization')
OSError: SavedModel file does not exist at: models/{saved_model.pbtxt|saved_model.pb}
I checked most of the related posts on Stack Overflow, but I couldn't find a solution.
Please review my code and give me some advice.
I uploaded my tflite file because I guess it might be necessary to check whether there is a problem with it.
This is my model(download link):
https://drive.google.com/file/d/13gft7bREsv2vZYFvfoCiP5ndxHkfGKIM/view?usp=sharing
Thank you so much.
The tf.lite.TFLiteConverter.from_saved_model function takes a TensorFlow SavedModel (.pb) as input. You are instead giving it TensorFlow Lite (.tflite) models, which necessarily leads to an error. If you want to convert your model to float16, the only way I know of is to take the original model in .pb format and convert it from there.
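A minimal sketch, assuming saved_model_dir contains the original SavedModel (a saved_model.pb plus a variables/ folder) rather than the .tflite files:

import tensorflow as tf

# point the converter at the SavedModel directory, not at .tflite files
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # request float16 weights
tflite_quant_model = converter.convert()
with open('quant.tflite', 'wb') as f:
    f.write(tflite_quant_model)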
This error happens because I used from astroNN.models import Galaxy10CNN and downgraded TensorFlow to 1.15.2 to prevent ImportError: cannot import name 'get_default_session', but now I see a new error related to the attribute 'Wrapper': AttributeError: module 'keras.layers' has no attribute 'Wrapper'.
Please advise. Thanks!
Use tf.keras.layers.Wrapper in TensorFlow 1.15 as:
import tensorflow as tf
tf.keras.layers.Wrapper(layer, **kwargs)
For more details on the library, please see here.
With tensorflow version 1.15.2 and astroNN version 1.0.1, a hacky way is to replace line 15 of the file /usr/local/lib/python3.7/dist-packages/astroNN/nn/layers.py (e.g., on Linux, with Colab) with:
Layer, Wrapper, InputSpec = tf.keras.layers.Layer, tf.keras.layers.Wrapper, tf.keras.layers.InputSpec
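After the edit, a quick sanity check (assuming tensorflow 1.15.2 and astroNN 1.0.1 as above):

import tensorflow as tf
print(tf.__version__)  # expect 1.15.2

# should now import cleanly, since Layer, Wrapper and InputSpec are
# taken from tf.keras.layers rather than the missing keras.layers names
from astroNN.models import Galaxy10CNN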
I'm having a problem similar to the one described here:
ValueError: Unknown layer: Functional
import tensorflow as tf
model = tf.keras.models.load_model("model.h5")
which throws: ValueError: Unknown layer: Functional.
I'm pretty sure this is because the h5 file was saved in TF 2.3.0 and I'm trying to load it in 2.2.0. I'd rather not convert using tf 2.3.0 directly, and I'm hoping to find a way of manually fixing the h5py file itself, or passing the right custom object to the model loader. I've noticed that it seems like it's just an extra key wherever the config file is stored, e.g. https://github.com/tensorflow/tensorflow/issues/41929
The problem is, I'm not sure how to manually get rid of the Functional layer in the h5 file. Specifically, I've tried:
import h5py
f = h5py.File("model.h5",'r')
print(f['model_weights'].keys())
which gives:
<KeysViewHDF5 ['concatenate_1', 'conv1d_3', 'conv1d_4', 'conv1d_5', 'dense_1', 'dropout_4', 'dropout_5', 'dropout_6', 'dropout_7', 'embedding_1', 'global_average_pooling1d_1', 'global_max_pooling1d_1', 'input_2']>
and I don't see the Functional layer anywhere. Where exactly is the config for the model stored in this file? E.g. I'm looking for something like {"class_name": "Functional", "config": {"name": "model", "layers":...}}
Question: is there a way I can manually edit the h5 file using h5py to get rid of the Functional layer?
Alternatively, can I pass a specific custom_objects={'Functional': ???} to the load_model function?
I've tried {'Functional': tf.keras.models.Model}, but that returns ('Keyword argument not understood:', 'groups'), because I think it's trying to load a model into weights?
I had a similar problem. The only way I could solve it, without changing the TensorFlow version and retraining the model, was to build the model structure again using the Keras API in TensorFlow 2.2.0 and then call:
model.load_weights(<h5 file>)
where the original h5 file was created using TensorFlow 2.3.0. If you already have the code that builds the model structure, this method should be relatively easy, since all you have to do is replace load_model(<h5 file>) with the line above.
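A minimal sketch, with a hypothetical build_model() standing in for whatever code originally defined the architecture:

import tensorflow as tf

def build_model():
    # hypothetical architecture: this must recreate the original model
    # layer for layer, or load_weights will fail
    inputs = tf.keras.Input(shape=(128,))
    x = tf.keras.layers.Dense(64, activation='relu')(inputs)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
    return tf.keras.Model(inputs, outputs)

model = build_model()
model.load_weights('model.h5')  # reads only the weights, skipping the Functional config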
Just change
from keras.models import load_model
to
from tensorflow.keras.models import load_model
and then call
load_model('model.h5', compile=False)