This is the error I am getting on Spyder 5.0.3:
File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\layers\core.py", line 1057, in _parse_function_from_config
    function = generic_utils.func_load(
File "C:\Users\HP\anaconda3\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py", line 457, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)
I have tried downgrading my libraries, but nothing has changed.
import tensorflow as tf
from tensorflow import keras

model_path = 'facenet_keras.h5'
# model = load_model(model_path)
# model = tf.keras.models.load_model(model_path)
# model = keras.models.load_model(model_path)
model = tf.keras.models.load_model(
    model_path, custom_objects=None, compile=True, options=None
)
I have tried all of these!
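One workaround worth trying: "bad marshal data" usually means the .h5 file stores byte-compiled lambda functions that were marshalled under a different Python version than the one loading them. If you can rebuild the architecture in code, you can skip deserialization entirely and load only the weights. A minimal sketch, assuming a build_facenet() helper that you would have to write yourself (it is not part of any library):

import tensorflow as tf

# Hypothetical helper: rebuild the FaceNet architecture in code
# (e.g. from the repo the .h5 came from) instead of deserializing it.
model = build_facenet()

# Weights are plain arrays, so loading them avoids marshal entirely.
model.load_weights('facenet_keras.h5')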
The line "tflite_model = converter.convert()" gives the AttributeError: 'str' object has no attribute 'call'.
CODE:
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model('///Users/theunskuhn/Desktop/Savedfile/basic_malaria_pos_neg_v3.h5')
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("basic_malaria_pos_neg_v3.tflite", "wb").write(tflite_model)
ERROR:
AttributeError: 'str' object has no attribute 'call'
The error points to line 4: "tflite_model = converter.convert()".
If you're using the TFLiteConverter API in TensorFlow 2.0 or above, TFLiteConverter.from_keras_model takes a Keras Model object, not the path of the model, which is a str.
First, load the model using tf.keras.models.load_model() and then pass this model to the TFLiteConverter API.
import tensorflow as tf
model = tf.keras.models.load_model('///Users/theunskuhn/Desktop/Savedfile/basic_malaria_pos_neg_v3.h5')
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("basic_malaria_pos_neg_v3.tflite", "wb").write(tflite_model)
The method TFLiteConverter.from_keras_model_file() was replaced by TFLiteConverter.from_keras_model() in TF 2.0. See the docs.
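As a quick sanity check on the result, you can load the converted file back with the TFLite Interpreter and inspect its tensors (a minimal sketch, using the output filename from the snippet above):

import tensorflow as tf

# Load the converted model and allocate its tensors.
interpreter = tf.lite.Interpreter(model_path="basic_malaria_pos_neg_v3.tflite")
interpreter.allocate_tensors()

# Print the expected input/output shapes and dtypes.
print(interpreter.get_input_details())
print(interpreter.get_output_details())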
I am going to train my model quantization-aware. However, tensorflow_model_optimization cannot quantize the tf.reshape function and throws an error.
TensorFlow version: 2.4.0-dev20200903
Python version: 3.6.9
The code:
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'
from tensorflow.keras.applications import VGG16
import tensorflow_model_optimization as tfmot
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
quantize_model = tfmot.quantization.keras.quantize_model
inputs = keras.Input(shape=(784,))
# img_inputs = keras.Input(shape=(32, 32, 3))
dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
outputs = tf.reshape(outputs, [-1, 2, 5])
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")
# keras.utils.plot_model(model, "my_first_model.png")
q_aware_model = quantize_model(model)
and the output:
Traceback (most recent call last):
File "<ipython-input-39-af601b78c010>", line 14, in <module>
q_aware_model = quantize_model(model)
File "/home/essys/.local/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 137, in quantize_model
annotated_model = quantize_annotate_model(to_quantize)
File "/home/essys/.local/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 210, in quantize_annotate_model
to_annotate, input_tensors=None, clone_function=_add_quant_wrapper)
...
File "/home/essys/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
TypeError: tf__call() got an unexpected keyword argument 'shape'
If somebody knows, please help.
The reason is that your layer is not yet supported for QAT. If you want to quantize it, you have to write the quantization yourself with quantize_annotate_layer, pass it through quantize_scope, and apply it to your model with quantize_apply, as described here: https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide?hl=en#quantize_custom_keras_layer
I have created a batch_norm_layer here as an example.
TensorFlow 2.x QAT support is still incomplete; consider using TF 1.x and adding FakeQuant ops after operators.
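A possible workaround for this particular model, assuming the target shape is static: express the reshape as a Keras layer (layers.Reshape) instead of a raw tf.reshape op, so every node in the graph is a layer that quantize_model can traverse. A minimal sketch (whether Reshape is quantized or just passed through depends on the tfmot version):

import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
# layers.Reshape takes the target shape without the batch dimension,
# so tf.reshape(outputs, [-1, 2, 5]) becomes Reshape((2, 5)).
outputs = layers.Reshape((2, 5))(outputs)
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")

q_aware_model = tfmot.quantization.keras.quantize_model(model)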
How do I use the Hugging Face create_optimizer method?
My code is as follows:
import tensorflow as tf
from transformers import RobertaConfig, TFRobertaForMaskedLM, create_optimizer
config = RobertaConfig()
optimizer, lr = create_optimizer(1e-4, 1000000, 10000, 0.1, 1e-6, 0.01)
training_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = TFRobertaForMaskedLM(config)
model.compile(optimizer=optimizer, loss=training_loss)
input = tf.random.uniform(shape=[1,25], maxval=100, dtype=tf.int32)
hist = model.fit(input, input, epochs=1, steps_per_epoch=1,verbose=0)
I am getting an error:
TypeError: apply_gradients() got an unexpected keyword argument
'experimental_aggregate_gradients'
I tried with TensorFlow 2.3.0 and 2.2.0, and transformers 3.0.2.
I added this line:
optimizer._HAS_AGGREGATE_GRAD = False
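To make the placement concrete, here is the snippet from the question with the flag set right after the optimizer is created (a sketch based on my understanding: the flag tells Keras the optimizer does not accept the experimental_aggregate_gradients keyword, so it falls back to the older apply_gradients() signature):

import tensorflow as tf
from transformers import RobertaConfig, TFRobertaForMaskedLM, create_optimizer

config = RobertaConfig()
optimizer, lr = create_optimizer(1e-4, 1000000, 10000, 0.1, 1e-6, 0.01)
# Opt out of the experimental_aggregate_gradients keyword that
# older transformers optimizers do not understand.
optimizer._HAS_AGGREGATE_GRAD = False

training_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model = TFRobertaForMaskedLM(config)
model.compile(optimizer=optimizer, loss=training_loss)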
I need to convert a TensorFlow .pb model to TensorFlow Lite using Google Colab.
The conversion procedure is as follows:
1) Upload the model:
from google.colab import files
pbfile = files.upload()
2) Convert it:
import tensorflow as tf
pb_file = 'data_513.pb'
tflite_file = 'data_513.tflite'
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    pb_file, ['ImageTensor'], ['SemanticPredictions'],
    input_shapes={"ImageTensor": [1, 513, 513, 3]})
tflite_model = converter.convert()
open(tflite_file, 'wb').write(tflite_model)
The conversion fails with the following error:
Check failed: array.data_type == array.final_data_type Array "ImageTensor" has mis-matching actual and final data types (data_type=uint8, final_data_type=float).
I think I may need to specify some extra commands to overcome this error, but I can't find any information about it.
I finally found the solution. Here is the snippet for others to use:
import tensorflow as tf
from google.colab import files

pb_file = 'model.pb'
tflite_file = 'model.tflite'

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    pb_file, ['ImageTensor'], ['SemanticPredictions'],
    input_shapes={"ImageTensor": [1, 513, 513, 3]})
converter.inference_input_type = tf.uint8
converter.quantized_input_stats = {'ImageTensor': (128, 127)}  # (mean, stddev)
tflite_model = converter.convert()
open(tflite_file, 'wb').write(tflite_model)

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

files.download(tflite_file)
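A note on the two numbers in quantized_input_stats, as I understand the converter docs: they are the (mean, stddev) used to map uint8 input values back to real values via real = (quantized - mean) / stddev, so (128, 127) maps [0, 255] to roughly [-1, 1]:

mean, stddev = 128, 127

def dequantize(q):
    # TFLite's uint8 input mapping: real = (quantized - mean) / stddev
    return (q - mean) / stddev

print(dequantize(0))    # ~ -1.008
print(dequantize(255))  # ~  1.0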
I'm using the macOS terminal to run the train.py file, which can be found in this GitHub link, but I keep getting the following error:
Arabic-NER $ python train.py
Using TensorFlow backend.
Loading Word Embedding model...
loaded model in 368.90753722190857 seconds
2019-12-18 21:11:19.329197: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7fb88580c330 executing computations on platform Host. Devices:
2019-12-18 21:11:19.329702: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
File "train.py", line 130, in <module>
train_model, crf_layer = model.build_model()
File "/Name Entity Recognition /HassanAzzam/Arabic-NER/model.py", line 21, in build_model
output_layer = crf_layer(bi_gru)
File "/Name Entity Recognition /HassanAzzam/Arabic-NER/env/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 75, in symbolic_fn_wrapper
return func(*args, **kwargs)
File "/Name Entity Recognition /HassanAzzam/Arabic-NER/env/lib/python3.6/site-packages/keras/engine/base_layer.py", line 475, in __call__
previous_mask = _collect_previous_mask(inputs)
File "/Name Entity Recognition /HassanAzzam/Arabic-NER/env/lib/python3.6/site-packages/keras/engine/base_layer.py", line 1441, in _collect_previous_mask
mask = node.output_masks[tensor_index]
AttributeError: 'Node' object has no attribute 'output_masks'
Also, here's model.py:
import tensorflow.keras
from tensorflow.keras.layers import Dense, Input, GRU, Embedding, Dropout, Activation, Masking
from tensorflow.keras.layers import Bidirectional, GlobalMaxPool1D, TimeDistributed
from tensorflow.keras.models import Model, Sequential
from keras_contrib.layers import CRF

def build_model():
    crf_layer = CRF(9)
    input_layer = Input(shape=(None, 300,))
    # embedding = Embedding(212, 20, input_length=None, mask_zero=True)(input_layer)
    mask_layer = Masking(mask_value=0., input_shape=(212, 300))(input_layer)
    bi_gru = Bidirectional(GRU(10, return_sequences=True))(mask_layer)
    bi_gru = TimeDistributed(Dense(10, activation="relu"))(bi_gru)
    output_layer = crf_layer(bi_gru)
    return Model(input_layer, output_layer), crf_layer
I'm using TensorFlow 2.0 and Python 3.6.6.
Note: in the train.py file, on line 8, I added tensorflow before keras.
So it's from tensorflow.keras.utils import to_categorical.
The full GitHub repository can be found here.
I solved the problem by installing the following versions:
Seqeval:
pip install seqeval==0.0.5
Keras:
pip install keras==2.2.4
Tensorflow:
pip install tensorflow==1.14.0
I also added from keras.models import * at the beginning of each file.
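For context, my understanding of why this works: the 'Node' object has no attribute 'output_masks' error comes from mixing standalone keras with tensorflow.keras in the same model, and keras_contrib's CRF is built against standalone Keras. With Keras 2.2.4 and TensorFlow 1.14 installed, the imports in model.py should all come from keras, roughly like this:

# All layer imports from standalone keras, to match keras_contrib's CRF.
from keras.layers import Dense, Input, GRU, Masking, Bidirectional, TimeDistributed
from keras.models import Model
from keras_contrib.layers import CRF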