Model cannot be saved - universal-sentence-encoder-qa - tensorflow

I have spent a few hours troubleshooting this issue but have no clue... hope someone can help.
The error I got is below:
ValueError: Model XXXXXXX cannot be saved either because the input shape is not available or because the forward pass of the model is not defined. To define a forward pass, please override Model.call(). To specify an input shape, either call build(input_shape) directly, or call the model on actual data using Model(), Model.fit(), or Model.predict(). If you have a custom training step, please make sure to invoke the forward pass in train step through Model.__call__, i.e. model(inputs), as opposed to model.call().
import tensorflow as tf
import tensorflow_hub as hub

class queryEncoder(tf.keras.Model):
    def __init__(self):
        super().__init__()
        module = hub.load('https://tfhub.dev/google/universal-sentence-encoder-qa/3')
        self.encoder = module.signatures['question_encoder']

    def call(self, inputs):
        return self.encoder(input=tf.constant(inputs))['outputs']

qModel = queryEncoder()
qModel.save('use-query-encoder')
!gsutil cp use-query-encoder gs://ed-model-artifacts-xxyyzz
Actually, what I wanted to do is follow Google's YouTube tutorial to build a Q&A model with Vertex AI.
https://www.youtube.com/watch?v=5iSmX8sqtx8
But the tutorial didn't provide any source code, so I typed the code by following the video.
If I don't run qModel.save('use-query-encoder') but instead run qModel(["This is hello world"]), it seems to work - the embedding is returned successfully.
So the model seems to be working fine, but somehow it cannot be saved.
I read a lot of posts on Stack Overflow, but I still cannot solve my issue. Perhaps I am too new to Python and TF. Some posts said I should run fit() before save(), but in this case I don't know what parameters I should pass to fit()...
I am a newbie to Python, so perhaps I did something silly. If so, please point it out :)
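For reference, one direction I have not verified for this exact setup: wrap the hub signature in a plain tf.Module with a tf.function that has an explicit input signature, and export it with tf.saved_model.save() instead of Model.save(). A sketch only (the class name is just a placeholder):
import tensorflow as tf
import tensorflow_hub as hub

# unverified sketch: expose the encoder through a tf.function with an explicit
# string input signature, then export with tf.saved_model.save()
class QueryEncoderModule(tf.Module):
    def __init__(self):
        super().__init__()
        module = hub.load('https://tfhub.dev/google/universal-sentence-encoder-qa/3')
        self.encoder = module.signatures['question_encoder']

    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
    def __call__(self, queries):
        return self.encoder(input=queries)['outputs']

qModule = QueryEncoderModule()
tf.saved_model.save(qModule, 'use-query-encoder')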

Related

Use Tensorflow2 saved model for object detection

I'm quite new to object detection, but I managed to train my first TensorFlow custom model yesterday. I think it worked fine besides some warnings; at least I got my exported_model folder with the checkpoint, saved model, and pipeline.config. I built it with exporter_main_v2.py from TensorFlow. I just loaded some images of deer and want to try to detect some in different pictures.
That's what I would like to test now, but I don't know how. I already did an object detection tutorial with pre-trained models and it worked fine. I tried to just replace config_file_path, saved_model_path, and image_path with the paths pointing to my exported model, but it didn't work:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model\saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'
There are endless tutorials on how to train custom detection models, but I can't find a good explanation of how to manually test my exported model.
Thanks in advance!
EDIT: I need to know how to build a script where I can load a model I saved with TensorFlow's exporter_main_v2.py and an image I want to test this model on, and get a result, either as text or as rectangles drawn on the picture. I have seen many tutorials, but none of them works for me with a model I saved with exporter_main_v2.py.
From the error it looks like you have a model saved as .pb. If you want to do inference you can write something like this:
import tensorflow as tf

# load the model
model = tf.keras.models.load_model(my_model_dir)
# run batched inference on your test images
prediction = model.predict(x=x_test, ...)
You'll have to set x, which is the only mandatory argument; it is your test dataset (the images you want to obtain predictions for). Also, predict is useful when you have a large number of images to predict: it handles the prediction in a batched way, which avoids filling up memory. If you have just a few, you can directly use the __call__() method of your model, like this:
prediction = model(x_test, training=False)
More about predict can be found in the TensorFlow documentation.
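If tf.keras.models.load_model does not accept the exported directory (models exported with exporter_main_v2.py are generic SavedModels rather than Keras models), a rough sketch of the lower-level route is below; the paths, output keys, and threshold are assumptions based on the usual TF Object Detection API export format:
import numpy as np
import tensorflow as tf

# load the SavedModel exported by exporter_main_v2.py (example path)
detect_fn = tf.saved_model.load("exported_models/first_model/saved_model")

# read one test image and add a batch dimension
image = tf.io.decode_image(tf.io.read_file("test_images/deer.jpg"), channels=3)
input_tensor = tf.expand_dims(tf.cast(image, tf.uint8), axis=0)

# run detection; Object Detection API exports typically return these keys
detections = detect_fn(input_tensor)
boxes = detections["detection_boxes"][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(np.int32)

# print detections above an arbitrary confidence threshold
for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print(f"class {cls} score {score:.2f} box {box}")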

Fine Tuning Blenderbot

I have been trying to fine-tune a conversational model from Hugging Face: Blenderbot. I have tried the conventional method given on the official Hugging Face website, which says to do it using the trainer.train() method. I also tried it using the .compile() method. I have tried fine-tuning using PyTorch as well as TensorFlow on my dataset. Both methods seem to fail and give an error saying that there is no method called compile or train for the Blenderbot model.
I have also looked everywhere online for how Blenderbot could be fine-tuned on custom data, and nowhere is it properly explained in a way that runs without throwing an error. I have gone through YouTube tutorials, blogs, and Stack Overflow posts, but none of them answer this question. Hoping someone will respond here and help me out. I am also open to using other Hugging Face conversational models for fine-tuning.
Thank you! :)
Here are the links I am using to fine-tune the Blenderbot model:
Fine-tuning methods: https://huggingface.co/docs/transformers/training
Blenderbot: https://huggingface.co/docs/transformers/model_doc/blenderbot
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

mname = "facebook/blenderbot-400M-distill"
model = BlenderbotForConditionalGeneration.from_pretrained(mname)
tokenizer = BlenderbotTokenizer.from_pretrained(mname)

# FOR TRAINING:
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=small_train_dataset,
    eval_dataset=small_eval_dataset,
    compute_metrics=compute_metrics,
)
trainer.train()

# OR
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=tf.metrics.SparseCategoricalAccuracy(),
)
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)
None of these work! :(
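One detail I am unsure about: Trainer and TrainingArguments need to be imported from transformers for the first route, and compile()/fit() only exist on the TF model classes, so a TensorFlow attempt would presumably need TFBlenderbotForConditionalGeneration instead. A sketch only, not verified end to end (tf_train_dataset and tf_validation_dataset are placeholders for datasets I would still have to build):
import tensorflow as tf
from transformers import BlenderbotTokenizer, TFBlenderbotForConditionalGeneration

mname = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(mname)

# the TF* class is a keras.Model, so compile() and fit() are actually defined on it
model = TFBlenderbotForConditionalGeneration.from_pretrained(mname)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.metrics.SparseCategoricalAccuracy()],
)
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)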

I need to load weights that were saved in tensorflow 1.x into an identical model in tensorflow 2.x

So I have an old model with tensorflow 1.x code, and it includes too much stuff I don't need. All I need is the model itself, and I created the model in a way I'm almost certain is identical to the previous one (I checked a bunch of things).
I have the .data, .index, and .meta files, and I tried very many different things; it says that "a few things weren't saved" and then lists all of the weights (but not really the entire thing, because when the weights are too big it just adds three dots (...)).
I would LOVE to have someone tell me how I can use these in my new model.
I tried:
model.load_weights
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore(sess, "checkpoints/pix2pix-60")
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.Checkpoint(model=gen)
saver.restore(tf.train.latest_checkpoint('checkpoints')).assert_consumed()
I tried:
ck_path = tf.train.latest_checkpoint('checkpoints')
gen.load_weights(ck_path)
I tried:
from tensorflow.python.training import checkpoint_utils as cp
ckpt = cp.load_checkpoint('checkpoints/pix2pix--60')
and then tried to see what I can do with that
and I honestly think I tried a bunch more stuff
I honestly won't mind if someone can even just tell me how I can read the .index or .data files, so that I can just copy the weights out and deal with it from there
I would again really love some help,
Thanks!
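For what it's worth, a sketch of how the raw checkpoint contents can be inspected (the checkpoint directory is the one from the attempts above; the path is an assumption):
import tensorflow as tf

# sketch: list the variable names and shapes stored in the checkpoint files
ckpt_path = tf.train.latest_checkpoint('checkpoints')
reader = tf.train.load_checkpoint(ckpt_path)
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)
    # reader.get_tensor(name) returns that variable's values as a numpy array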
It seems that your TF1.x model is saved in ckpt format, and to restore a ckpt model you need to get the graph before loading the weights.
To convert it to a TF2.x model, you can instantiate the original model and then save it in the recommended saved_model format using the 2.x API.
You can continue with your second attempt: use compat.v1 to instantiate a default Session, then load the graph from the meta file, then load the weights; after this, your Session will contain your graph and the loaded weights, as in the sketch below.
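A minimal sketch of that loading step (the meta file path is taken from your attempt above):
import tensorflow as tf

# sketch: restore the TF1.x graph and weights into a compat.v1 Session
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))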
To convert to a 2.x model, you need to get the input and output tensors from the graph:
# you have loaded graph and weight into sess
sess.as_default()
g = sess.graph
# assuming that your input output names are "input:0", "output:0"
input_tensor = g.get_tensor_by_name("input:0")
output_tensor = g.get_tensor_by_name("output:0")
# then use tf2.x to save a saved_model format model
model = tf.keras.Model(input_tensor, output_tensor, name="tf2_model")
model.save("your_saved_dir")
A saved_model format model stores the whole graph and the weights; you can simply use
model = tf.saved_model.load("your_model_dir")
to instantiate the model for use.
OK, so I think I figured it out, although it was quite tedious.
In the tensorflow 1.x model all variables were created with tf.name_scope, and in tensorflow 2.x there is no such thing, so the variable names didn't match. I pretty much had to manually change the names so they would fit, and then it really did load the weights, like this:
checkpoint = tf.train.Checkpoint(model=gen)
checkpoint.restore('checkpoints/pix2pix--60').assert_consumed()
this also seemed to work:
gen.load_weights('checkpoints/pix2pix--60')
However, something is still not working correctly, since the output is not what I am expecting (i.e. what the output is like in the tensorflow 1.x model).
It may have something to do with the batch_normalization weights that aren't being loaded, but I checked, and in my current tf 2.x model they are non-trainable and exactly equal to the weights that aren't being loaded.
Another weird thing is that when I do gen.predict(x) it gives me a different outcome each time, so I guess the weights aren't being frozen or something...
So I have yet to understand what went wrong previously, but I do know that there have been many API changes from tf1 to tf2, including default parameters and more. What I eventually did, and it worked perfectly, was this:
tf_upgrade_v2 \
  --intree my_project/ \
  --outtree my_project_v2/ \
  --reportfile report.txt
as explained here
You just put all the pieces of code you want to convert in a folder named my_project, and it creates a folder named my_project_v2 with the tf1 code converted to tf2.

Outputting multiple loss components to tensorboard from tensorflow estimators

I am pretty new to tensorflow and I am struggling to get tensorboard to display some of my custom metrics. The model I am working with is a tf.estimator.Estimator, with an associated EstimatorSpec. The first new metric I am trying to log comes from my loss function, which is composed of two components: a loss for an age prediction (tf.float32) and a loss for a class prediction (one-hot/multiclass), which I add together to determine a total loss (my model predicts both a class and an age). The total loss is output just fine during training and shows up on tensorboard, but I would like to track the individual age and class prediction loss components as well.
I think a solution that is supposed to work is to add an eval_metric_ops argument to the EstimatorSpec, as described here (Custom eval_metric_ops in Estimator in Tensorflow). I have not been able to make this approach work, however. I defined a custom metric function that looks like this:
def age_loss_function(labels, ages_pred, ages_true):
    per_sample_age_loss = get_age_loss_per_sample(ages_pred, ages_true)  ### works fine
    #### The error happens on this line:
    mean_abs_age_diff, age_loss_update_fn = tf.metrics.Mean(per_sample_age_loss)
    ######
    return mean_abs_age_diff, age_loss_update_fn

eval_metric_ops = {"age_loss": age_loss_function}  #### Want to use this in EstimatorSpec
The instructions seem to say that I need both the error metric and the update function, which should both be returned from the tf.metrics call, as in examples like the one I linked. But this call fails for me with the error message:
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
I am probably just misusing the APIs. If someone can guide me on the proper usage I would really appreciate it. Thanks!
It looks like the problem was due to a version change. I had updated to tensorflow 2.0 while the instructions I was following were for 1.x. Using tf.compat.v1.metrics.mean() instead gets past this problem.
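A minimal sketch of the corrected metric function under that fix (assuming, as above, that get_age_loss_per_sample returns a per-example loss tensor):
import tensorflow as tf

def age_loss_function(labels, ages_pred, ages_true):
    per_sample_age_loss = get_age_loss_per_sample(ages_pred, ages_true)
    # tf.compat.v1.metrics.mean returns the (value_op, update_op) pair
    # that EstimatorSpec's eval_metric_ops expects
    return tf.compat.v1.metrics.mean(per_sample_age_loss)

# inside model_fn, for eval mode:
# eval_metric_ops = {"age_loss": age_loss_function(labels, ages_pred, ages_true)}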

Keras + Tensorflow model convert to coreml exits NameError: global name ... is not defined

I've adapted the VAE example from the keras site to train on my data, and everything runs fine. But I'm unable to convert to coreml. The error is:
NameError: global name 'batch_size' is not defined
Since batch_size clearly is defined in the python source, I'm guessing it has to do with how the conversion tool captures variable names. Does anyone know how I can fix it (or whether it is, indeed, possible to fix)?
Many thanks,
J.
I ran into a similar message when using parameters to construct the neural net. This should work:
from keras import models
batch_size = 50
model = models.load_model(filename, custom_objects={'batch_size': batch_size})
See also documentation: https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model