Problem with running object_detection_tutorial TypeError: load() missing 2 required positional arguments - tensorflow

I'm pretty new to TensorFlow and I'm trying to run object_detection_tutorial. I'm getting a TypeError and don't know how to fix it.
This is the load_model function; the two arguments it is reportedly missing are documented as:
tags: Set of string tags to identify the required MetaGraphDef. These should correspond to the tags used when saving the variables using the SavedModel save() API.
export_dir: Directory in which the SavedModel protocol buffer and variables to be loaded are located.
def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)

    model_dir = pathlib.Path(model_dir)/"saved_model"

    model = tf.saved_model.load(str(model_dir))
    model = model.signatures['serving_default']

    return model
WARNING:tensorflow:From <ipython-input-9-f8a3c92a04a4>:11: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-12-e10c73a22cc9> in <module>
1 model_name = 'ssd_mobilenet_v1_coco_2017_11_17'
----> 2 detection_model = load_model(model_name)
<ipython-input-9-f8a3c92a04a4> in load_model(model_name)
9 model_dir = pathlib.Path(model_dir)/"saved_model"
10
---> 11 model = tf.saved_model.load(str(model_dir))
12 model = model.signatures['serving_default']
13
~/.local/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
322 'in a future version' if date is None else ('after %s' % date),
323 instructions)
--> 324 return func(*args, **kwargs)
325 return tf_decorator.make_decorator(
326 func, new_func, 'deprecated',
TypeError: load() missing 2 required positional arguments: 'tags' and 'export_dir'
Can you help me fix this and run my first object detector :D?

I had the same problem and I've been trying to solve it for a week now. I think the solution is this:
model = tf.compat.v2.saved_model.load(str(model_dir), None)
More detail (from the official website):
Load a SavedModel from export_dir.
tf.saved_model.load(
    export_dir,
    tags=None
)
Aliases:
tf.compat.v1.saved_model.load_v2
tf.compat.v2.saved_model.load
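Putting that together, a sketch of the load_model function from the question with just the loading call swapped for the v2 form (only that one line is changed; the rest is the original code):

import pathlib
import tensorflow as tf

def load_model(model_name):
    base_url = 'http://download.tensorflow.org/models/object_detection/'
    model_file = model_name + '.tar.gz'
    model_dir = tf.keras.utils.get_file(
        fname=model_name,
        origin=base_url + model_file,
        untar=True)
    model_dir = pathlib.Path(model_dir)/"saved_model"

    # tf.compat.v2.saved_model.load takes export_dir first and tags is
    # optional, so no extra positional arguments are required.
    model = tf.compat.v2.saved_model.load(str(model_dir), None)
    model = model.signatures['serving_default']
    return model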

I guessed it was a branch problem and using the tf_2_1_reference branch did the trick for me:
igian@iGians-MBP models % git checkout tf_2_1_reference
M research/object_detection/object_detection_tutorial.ipynb
Branch 'tf_2_1_reference' set up to track remote branch 'tf_2_1_reference' from 'origin'.
Switched to a new branch 'tf_2_1_reference'
igians@iGians-MBP models % jupyter notebook
Then I executed each Jupyter cell of the tutorial like a good newbie!
This is the branch i used: https://github.com/tensorflow/models/tree/tf_2_1_reference

If you would just like to make a prediction, you can also load the model as below:
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(model_dir)
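A usage sketch follows; note that tensorflow.contrib exists only in TF 1.x, and the "inputs"/"scores" keys below are placeholders that must match the input/output tensor names in your SavedModel's serving signature:

# The Predictor is a callable that takes a dict of input values and
# returns a dict of outputs; both key names here are hypothetical.
prediction = predict_fn({"inputs": input_batch})
print(prediction["scores"])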

Related

Not able to execute sample code provided in Hugging Face model card

When I try the sample code from Hugging Face I get the error below.
the code can be found from https://huggingface.co/facebook/tts_transformer-en-ljspeech
Code:
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface
import IPython.display as ipd
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "facebook/fastspeech2-en-ljspeech",
    arg_overrides={"vocoder": "hifigan", "fp16": False}
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator(model, cfg)
text = "Hello, this is a test run."
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
ipd.Audio(wav, rate=rate)
Error:
TypeError Traceback (most recent call last)
Input In [1], in <module>
10 model = models[0]
11 TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
---> 12 generator = task.build_generator(model, cfg)
14 text = "Hello, this is a test run."
16 sample = TTSHubInterface.get_model_input(task, text)
File ~/office/virtual_environments/eye_for_bliend/Images/fairseq/fairseq/tasks/text_to_speech.py:151, in TextToSpeechTask.build_generator(self, models, cfg, vocoder, **unused)
149 if vocoder is None:
150 vocoder = self.build_default_vocoder()
--> 151 model = models[0]
152 if getattr(model, "NON_AUTOREGRESSIVE", False):
153 return NonAutoregressiveSpeechGenerator(model, vocoder, self.data_cfg)
TypeError: 'TTSTransformerModel' object is not subscriptable
What worked for me was to put the model in a list when building the generator on line 12; build_generator indexes models[0] internally (line 151 in the traceback), so it expects a list even for a single model.
generator = task.build_generator([model], cfg)

keras.models.load_model() gives ValueError

I have saved the trained model and the weights as below.
model, history, score = fit_model(model, train_batches, val_batches, callbacks=[callback])
model.save('./model')
model.save_weights('./weights')
Then I tried to load the saved model in the following way:
if __name__ == '__main__':
    model = keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
    test_batches, nb_samples = test_gen(dataset_test_path, 32, img_width, img_height)
    predict, loss, acc = predict_model(model, test_batches, nb_samples)
    print(predict)
    print(acc)
    print(loss)
But it gives me an error. What should I do to overcome this?
Traceback (most recent call last):
File "test_pro.py", line 34, in <module>
model = keras.models.load_model('./model',compile= False,custom_objects={"F1Score": tfa.metrics.F1Score})
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
return saved_model_load.load(filepath, compile, options)
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 138, in load
keras_loader.load_layers()
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 379, in load_layers
self.loaded_nodes[node_metadata.node_id] = self._load_layer(
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 407, in _load_layer
obj, setter = revive_custom_object(identifier, metadata)
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 921, in revive_custom_object
raise ValueError('Unable to restore custom object of type {} currently. '
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements `get_config`and `from_config` when saving. In addition, please use the `custom_objects` arg when calling `load_model()`.
Looking at the source code for Keras, the error is raised when trying to load a model with a custom object:
def revive_custom_object(identifier, metadata):
  """Revives object from SavedModel."""
  if ops.executing_eagerly_outside_functions():
    model_class = training_lib.Model
  else:
    model_class = training_lib_v1.Model

  revived_classes = {
      constants.INPUT_LAYER_IDENTIFIER: (
          RevivedInputLayer, input_layer.InputLayer),
      constants.LAYER_IDENTIFIER: (RevivedLayer, base_layer.Layer),
      constants.MODEL_IDENTIFIER: (RevivedNetwork, model_class),
      constants.NETWORK_IDENTIFIER: (RevivedNetwork, functional_lib.Functional),
      constants.SEQUENTIAL_IDENTIFIER: (RevivedNetwork, models_lib.Sequential),
  }
  parent_classes = revived_classes.get(identifier, None)
  if parent_classes is not None:
    parent_classes = revived_classes[identifier]
    revived_cls = type(
        compat.as_str(metadata['class_name']), parent_classes, {})
    return revived_cls._init_from_metadata(metadata)  # pylint: disable=protected-access
  else:
    raise ValueError('Unable to restore custom object of type {} currently. '
                     'Please make sure that the layer implements `get_config`'
                     'and `from_config` when saving. In addition, please use '
                     'the `custom_objects` arg when calling `load_model()`.'
                     .format(identifier))
The method only works with custom objects of the types defined in revived_classes. As you can see, it currently supports only input layer, layer, model, network, and sequential custom objects.
In your code, you pass a tfa.metrics.F1Score class in the custom_objects argument, which is of type METRIC_IDENTIFIER and therefore not supported (probably because it doesn't implement the get_config and from_config functions, as the error output says):
keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
It's been a while since I last worked with Keras, but maybe you can follow what was proposed in this other related answer and wrap the call to tfa.metrics.F1Score in a method. Something like this (adjust it to your needs):
def f1(y_true, y_pred):
    metric = tfa.metrics.F1Score(num_classes=3, threshold=0.5)
    metric.update_state(y_true, y_pred)
    return metric.result()

keras.models.load_model('./model', compile=False, custom_objects={'f1': f1})
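If loading works with the wrapper, you can also register it on the training side so that the saved model references 'f1' in the first place. A sketch, assuming model is an already-built classifier (the optimizer and loss here are placeholders to adjust):

# Hypothetical training-side usage: compile with the wrapper before saving,
# so the saved config names 'f1' and load_model() can resolve it.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=[f1])
model.save('./model')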

Tensorflow how to change hub.Module() to local folder

how can I change:
BERT_MODEL = "https://tfhub.dev/google/bert_multi_cased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
    """Get the vocab file and casing info from the Hub module."""
    with tf.Graph().as_default():
        bert_module = hub.Module(BERT_MODEL)
        tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
        with tf.Session() as sess:
            vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
                                                  tokenization_info["do_lower_case"]])
    return bert.tokenization.FullTokenizer(
        vocab_file=vocab_file, do_lower_case=do_lower_case)

tokenizer = create_tokenizer_from_hub_module()
So that I can load a local BERT model without the hub.Module() call, as it doesn't work with a local path.
I downloaded a different TF1 pre-trained model from a different website, unzipped it, and stored it in /test/module/.
If I change the above to BERT_MODEL = "/test/module", how would I need to change the rest? I now get string errors, as tokenization_info = bert_module(signature="tokenization_info", as_dict=True) doesn't work.
Please help, I am new to TF. Note: I need to use TF1, not TF2.
Note: with the suggestion below I get:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-11-a98e44536f87> in <module>()
9 return vocab_file, do_lower_case
10
---> 11 print(get_bert_tokenizer_info("/tmp/local_copy"))
12 # Will print: (b'/tmp/local_copy/assets/vocab.txt', False)
4 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_hub/registry.py in __call__(self, *args, **kwargs)
43 raise RuntimeError(
44 "Missing implementation that supports: %s(*%r, **%r)" % (
---> 45 self._name, args, kwargs))
46
47
RuntimeError: Missing implementation that supports: loader(*('/tmp/local_copy',), **{})
hub.Module works with local uncompressed paths, so you can change BERT_MODEL to another path and reuse the same code.
Example:
Create local copy of the module:
mkdir /tmp/local_copy
wget "https://tfhub.dev/google/bert_multi_cased_L-12_H-768_A-12/1?tf-hub-format=compressed" -O "module.tar.gz"
tar -C /tmp/local_copy -xzvf module.tar.gz
Use the local copy of the module:
import tensorflow as tf
import tensorflow_hub as hub

def get_bert_tokenizer_info(bert_module):
    """Get the vocab file and casing info from the Hub module."""
    with tf.Graph().as_default():
        bert_module = hub.Module(bert_module)
        tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
        with tf.Session() as sess:
            vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
                                                  tokenization_info["do_lower_case"]])
        return vocab_file, do_lower_case

print(get_bert_tokenizer_info("/tmp/local_copy"))
# Will print: (b'/tmp/local_copy/assets/vocab.txt', False)

TensorFlow custom layer in high-level API: throws object has no attribute '_expects_mask_arg' error

I am trying to reconstruct an image based on three inputs from previous layers: normal (None, 128, 128, 3), albedo (None, 128, 128, 3), and lighting (27). But the code still raises an object has no attribute '_expects_mask_arg' error. I have presented my code here, in which I have implemented a custom layer using the TensorFlow v2 beta high-level API.
import math

class Reconstruction_Layer(tf.keras.layers.Layer):
    def __init__(self, input_shape):
        super(Reconstruction_Layer, self).__init__()
        #self.num_outputs = num_outputs
        #self.pixel = np.zeros((9), dtype=int)
        self.sphar = np.zeros((9), dtype=float)
        self.y = np.zeros((9), dtype=float)
        self.reconstructed_img = np.zeros((128, 128, 3), dtype=float)
        #self.y = tf.zeros([128, 128, 9])
        self.normal_light = np.zeros((128, 128, 9), dtype=float)
        self.y_temp = np.zeros((9), dtype=float)
        w_init = tf.random_normal_initializer()
        self.r_img = tf.Variable(initial_value=w_init(shape=input_shape), dtype='float32', trainable=True)

    def build(self, input_shape):
        super(MyLayer, self).build(input_shape)

    def call(self, input_layer):
        self.normal, self.albedo, self.light = input_layer
        for i in range(128):
            for j in range(128):
                #self.y = spherical_harmonic_calc(self.normal(i,j))
                self.pixel = self.normal[i, j, :]
                #self.normal_light(i,j) = self.y
                self.sphar[0] = (1/((4*math.pi)**0.5))
                self.sphar[1] = ((3/(4*math.pi))**0.5)*self.pixel[2]
                self.sphar[3] = (((3/(4*math.pi))**0.5)*self.pixel[1])
                self.sphar[4] = ((1/2)*((5/(4*math.pi))**0.5)*(3*(self.pixel[2]**2) - 1))
                self.sphar[5] = (3*((5/(12*math.pi))**0.5)*self.pixel[2]*self.pixel[0])
                self.sphar[6] = (3*((5/(12*math.pi))**0.5)*self.pixel[2]*self.pixel[1])
                self.sphar[7] = ((3/2)*((5/(12*math.pi))**0.5)*((self.pixel[0]**2)-(self.pixel[1]**2)))
                self.sphar[8] = (3*((5/(12*math.pi))**0.5)*self.pixel[0]*self.pixel[1])
                self.normal_light[i, j, :] = self.sphar

        for j in range(128):
            for k in range(128):
                for i in range(3):
                    self.reconstructed_img[j, k, i] = self.albedo[j, k, i] * tf.tensordot(self.normal_light[j, k], self.light[i*9:(i+1)*9], axes=1)

        self.reconstructed_img = tf.convert_to_tensor(self.reconstructed_img)
        self.r_img = self.reconstructed_img
        return self.r_img
"""
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-74-06759ef5b0b5> in <module>
1 import numpy as np
----> 2 x=Reconstruction_Layer((128,128,3))(d)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
580 # explicitly take priority.
581 input_masks = self._collect_input_masks(inputs, args, kwargs)
--> 582 if (self._expects_mask_arg and input_masks is not None and
583 not self._call_arg_was_passed('mask', args, kwargs)):
584 kwargs['mask'] = input_masks
AttributeError: 'Reconstruction_Layer' object has no attribute '_expects_mask_arg'
"""
I just had the same error, and it was due to me forgetting to call .__init__() after super(). You did call it, but this makes me think the error is due to wrong initialization of the base layer you are deriving from.
I notice that in the doc example it's not necessary to call build() on the base layer, and it works for me if you remove that function (it does nothing related to your layer, and it also references an undefined MyLayer class, which would raise a NameError once build() runs). A minimal corrected skeleton is sketched below.
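For reference, a minimal sketch of the layer with the base class initialized and the build() override removed; the spherical-harmonics logic from the question is elided as a comment:

import numpy as np
import tensorflow as tf

class Reconstruction_Layer(tf.keras.layers.Layer):
    def __init__(self, input_shape):
        # Initializing the base Layer sets up internal attributes
        # such as _expects_mask_arg.
        super(Reconstruction_Layer, self).__init__()
        w_init = tf.random_normal_initializer()
        self.r_img = tf.Variable(
            initial_value=w_init(shape=input_shape),
            dtype='float32',
            trainable=True)

    # No build() override: the base implementation is sufficient here.

    def call(self, input_layer):
        normal, albedo, light = input_layer
        # ... reconstruction logic from the question goes here ...
        return self.r_img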

How to initialize tf.metrics members in TensorFlow?

Below is a part of my project code.
with tf.name_scope("test_accuracy"):
    test_mean_abs_err, test_mean_abs_err_op = tf.metrics.mean_absolute_error(labels=label_pl, predictions=test_eval_predict)
    test_accuracy, test_accuracy_op = tf.metrics.accuracy(labels=label_pl, predictions=test_eval_predict)
    test_precision, test_precision_op = tf.metrics.precision(labels=label_pl, predictions=test_eval_predict)
    test_recall, test_recall_op = tf.metrics.recall(labels=label_pl, predictions=test_eval_predict)
    test_f1_measure = 2 * test_precision * test_recall / (test_precision + test_recall)

    tf.summary.scalar('test_mean_abs_err', test_mean_abs_err)
    tf.summary.scalar('test_accuracy', test_accuracy)
    tf.summary.scalar('test_precision', test_precision)
    tf.summary.scalar('test_recall', test_recall)
    tf.summary.scalar('test_f1_measure', test_f1_measure)

    # validation metric init op
    validation_metrics_init_op = tf.variables_initializer(
        var_list=[test_mean_abs_err_op, test_accuracy_op, test_precision_op, test_recall_op],
        name='validation_metrics_init')
However, when I run it, errors occur like this:
Traceback (most recent call last):
File "./run_dnn.py", line 285, in <module>
train(wnd_conf)
File "./run_dnn.py", line 89, in train
name='validation_metrics_init')
File "/export/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 1176, in variables_initializer
return control_flow_ops.group(*[v.initializer for v in var_list], name=name)
AttributeError: 'Tensor' object has no attribute 'initializer'
I realize that I cannot create a validation initializer like that. I want to re-calculate the corresponding metrics each time I save a new checkpoint and run a new round of validation, so I have to re-initialize the metrics to zero.
But how do I reset all these metrics to zero? Many thanks for your help!
I solved the problem in the following way after referring to the blog post (Avoiding headaches with tf.metrics).
# validation metrics
validation_metrics_var_scope = "validation_metrics"
test_mean_abs_err, test_mean_abs_err_op = tf.metrics.mean_absolute_error(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_accuracy, test_accuracy_op = tf.metrics.accuracy(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_precision, test_precision_op = tf.metrics.precision(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_recall, test_recall_op = tf.metrics.recall(labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_f1_measure = 2 * test_precision * test_recall / (test_precision + test_recall)
tf.summary.scalar('test_mean_abs_err', test_mean_abs_err)
tf.summary.scalar('test_accuracy', test_accuracy)
tf.summary.scalar('test_precision', test_precision)
tf.summary.scalar('test_recall', test_recall)
tf.summary.scalar('test_f1_measure', test_f1_measure)
# validation metric init op
validation_metrics_vars = tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope=validation_metrics_var_scope)
validation_metrics_init_op = tf.variables_initializer(var_list=validation_metrics_vars, name='validation_metrics_init')
A minimal working example that can be run line by line in a Python terminal:
import tensorflow as tf
s = tf.Session()
acc = tf.metrics.accuracy([0,1,0], [0.1, 0.9, 0.8])
ini = tf.variables_initializer(tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES))
s.run([ini])
s.run([acc])
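To see the reset behaviour, you can run the initializer again between rounds; a sketch continuing the example above (acc[0] is the value op, acc[1] the update op):

s.run(acc[1])         # update op: accumulates the running total/count
print(s.run(acc[0]))  # value op: current accuracy
s.run(ini)            # re-initialize the metric's local variables to zero
print(s.run(acc[0]))  # reads 0.0 again until the update op is run anew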