Use TensorFlow 2 saved model for object detection - tensorflow

I'm quite new to object detection, but I managed to train my first TensorFlow custom model yesterday. I think it worked fine besides some warnings; at least I got my exported_model folder with the checkpoint, saved model and pipeline.config. I built it with exporter_main_v2.py from TensorFlow. I just loaded some images of deer and want to try to detect some in different pictures.
That's what I would like to test now, but I don't know how. I already did an object detection tutorial with pre-trained models and it worked fine. I tried to just replace config_file_path, saved_model_path and image_path with the paths to my exported model, but it didn't work:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model\saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'
There are endless tutorials on how to train a custom detector, but I can't find a good explanation of how to manually test my exported model.
Thanks in advance!
EDIT: I need to know how to build a script where I can import a model I saved with TensorFlow's exporter_main_v2.py and an image I want to test the model on, and get a result, either as text or as rectangles drawn on the picture. I've seen many tutorials, but none of them works for me with a model saved with exporter_main_v2.py.

From the error it looks like you have a model saved as a .pb file. If you want to do inference, you can write something like this:
import tensorflow as tf

# load the model
model = tf.keras.models.load_model(my_model_dir)
# run batched inference on your test images
prediction = model.predict(x=x_test, ...)
You'll have to set x, which is the only mandatory argument. It is your test dataset (the images you want to obtain predictions for). Also, predict is useful when you have a large number of images to predict: it handles the prediction in a batched way, avoiding filling up memory. If you only have a few, you can directly use the __call__() method of your model, like this:
prediction = model(x_test, training=False)
More about predict can be found in the TensorFlow documentation.
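If the model was exported with the Object Detection API's exporter_main_v2.py, another option is to load the SavedModel directly with tf.saved_model.load and call it on a batched uint8 image tensor, instead of going through OpenCV's DNN module. A minimal sketch, assuming the saved_model path from the error message and a hypothetical test image file:

import numpy as np
import tensorflow as tf
from PIL import Image

# Point this at the folder that contains saved_model.pb
saved_model_path = r"D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model"
detect_fn = tf.saved_model.load(saved_model_path)

# Load a test image and add a batch dimension; detection models exported this
# way usually expect a uint8 tensor of shape [1, height, width, 3].
image = np.array(Image.open("deer_test.jpg"))  # hypothetical image file
input_tensor = tf.convert_to_tensor(image, dtype=tf.uint8)[tf.newaxis, ...]

detections = detect_fn(input_tensor)

# Boxes are normalized [ymin, xmin, ymax, xmax]; scores/classes are parallel arrays.
boxes = detections["detection_boxes"][0].numpy()
scores = detections["detection_scores"][0].numpy()
classes = detections["detection_classes"][0].numpy().astype(int)

for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print(f"class {cls} score {score:.2f} box {box}")

The printed boxes can then be drawn as rectangles on the image with OpenCV or matplotlib if you want a visual result.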

Related

I need to upload weights that were saved on tensorflow 1.x to an identical model in tensorflow 2.x

So I have an old model with TensorFlow 1.x code and it includes too much stuff I don't need. All I need is just the model, and I created the model in a way I'm almost certain is identical to the previous one (I checked a bunch of things).
I have the .data, .index and .meta files, and I tried very many different things. It says that "a few things weren't saved" and then lists all of the weights (but not really the entire thing, because when the weights are too big it just adds three dots (...)).
I would LOVE to have someone tell me how I can use that in my new model.
I tried:
model.load_weights
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore( "checkpoints/pix2pix-60")
I tried:
tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()
saver = tf.compat.v1.train.Checkpoint(model=gen)
saver.restore(tf.train.latest_checkpoint('checkpoints')).assert_consumed()
I tried:
ck_path = tf.train.latest_checkpoint('checkpoints')
gen.load_weights(ck_path)
I tried:
from tensorflow.python.training import checkpoint_utils as cp
ckpt = cp.load_checkpoint('checkpoints/pix2pix--60')
and then tried to see what I could do with that,
and I think I honestly tried a bunch of other things too.
I honestly wouldn't mind if someone could even just tell me how to read the .index or .data files so that I can copy the weights out and deal with it from there.
I would again really love some help,
Thanks!
It seems that your TF 1.x model is saved in ckpt format, and to restore a ckpt model you need to get the graph before loading the weights.
To convert it to a TF 2.x model, you can instantiate the original model and then save it in the recommended saved_model format using the 2.x API.
You can continue with your second attempt: use compat.v1 to instantiate a default Session, then load the graph from the meta file, then load the weights. After this, your Session will contain your graph and the loaded weights.
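A minimal sketch of that step, reusing the checkpoint paths from your attempts (note that saver.restore needs the session as its first argument):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()
sess = tf.compat.v1.Session()

# Rebuild the graph from the .meta file, then restore the weights into the session.
saver = tf.compat.v1.train.import_meta_graph('checkpoints/pix2pix-60.meta')
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))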
To convert to a 2.x model, you need to get the input and output tensors from the graph:
# you have loaded graph and weight into sess
sess.as_default()
g = sess.graph
# assuming that your input output names are "input:0", "output:0"
input_tensor = g.get_tensor_by_name("input:0")
output_tensor = g.get_tensor_by_name("output:0")
# then use tf2.x to save a saved_model format model
model = tf.keras.Model(input_tensor, output_tensor, name="tf2_model")
model.save("your_saved_dir")
A saved_model format model stores the whole graph and the weights, so you can simply use
model = tf.saved_model.load("your_model_dir")
to instantiate the model for use.
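If you just want to inspect which variable names and shapes the TF 1.x checkpoint actually contains (for example to match them against your rebuilt model), a minimal sketch using the checkpoint directory from the question:

import tensorflow as tf

ckpt_prefix = tf.train.latest_checkpoint('checkpoints')

# List every variable stored in the checkpoint together with its shape.
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)

# Read a single variable's values as a numpy array.
reader = tf.train.load_checkpoint(ckpt_prefix)
weights = reader.get_tensor('some/variable/name')  # placeholder; use a name printed above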
Ok, so I think I figured it out, although it was quite tedious.
In the TensorFlow 1.x model all variables were created with tf.name_scope, and in TensorFlow 2.x there is no such thing, so the variable names didn't match. I pretty much had to manually change the names so they would fit, and after that it really did load the weights, like this:
checkpoint = tf.train.Checkpoint(model=gen)
checkpoint.restore('checkpoints/pix2pix--60').assert_consumed()
This also seemed to work:
gen.load_weights('checkpoints/pix2pix--60')
However, something is still not working correctly, since the output is not what I am expecting (what the output looks like in the TensorFlow 1.x model).
It may have something to do with the batch_normalization weights that aren't being loaded, but I checked, and in my current TF 2.x model they are non-trainable and are equal to exactly the weights that aren't being loaded.
Another weird thing is that when I do gen.predict(x) it gives me a different outcome each time, so I guess the weights aren't frozen or something...
So I have yet to understand what went wrong previously, but I do know that there have been many changes in the API from TF 1 to TF 2, including default parameters and more. What I eventually did, which worked perfectly, was this:
tf_upgrade_v2 \
  --intree my_project/ \
  --outtree my_project_v2/ \
  --reportfile report.txt
as explained here:
you just put all the code you want to convert in the folder my_project, and it creates a folder named my_project_v2 with the TF 1 code converted to TF 2.

Can't save/export and load a Keras model that uses eager execution

I'm following the RNN text-generation tutorial with eager execution pretty much line for line. I've trained the model with my own dataset and have saved a low-loss checkpoint. I'm able to load the weights and generate text, but I want to export/save the model so that I can learn how to deploy one using Flask. However, I can't figure out how. The version I'm using is '1.14.0-rc1'.
The tutorial: https://www.tensorflow.org/tutorials/sequences/text_generation
I have been able to save the model as an HDF5 file, but I cannot load it. I've also tried disabling eager execution, but that causes problems with running the code later on. I have tried the following and a few more snippets, but those led nowhere as well:
new_model = keras.models.load_model("/content/gdrive/My Drive/ColabNotebooks/ckpt4/my_model.h5")
However, I get:
RuntimeError: tf.placeholder() is not compatible with eager execution.
Lastly, I found this in another post and tried it as well, but was met with another error:
tf.saved_model.save(model, "/content/gdrive/My Drive/Colab Notebooks/ckpt4/my_model.h5")
error:
AssertionError: Tried to export a function which references untracked object Tensor("StatefulPartitionedCall/args_2:0", shape=(), dtype=resource). TensorFlow objects (e.g. tf.Variable) captured by functions must be tracked by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly.

How to fine-tune using a pre-trained model in tf.estimator

I got a model converted from Caffe using the MMdnn tool; it converted the Caffe model into TensorFlow's saved_model style. It's a ResNet-18 model, and I just stripped out several of the last layers. I wish I could load this architecture in the model_fn of a tf.estimator and manually add some extra layers to do my job.
As the tutorial recommended, I could use the loader.load method to load the saved_model. But I want to use it in an estimator, and I need to define the architecture in the model_fn function. I searched SO and GitHub, but there isn't a very specific workflow for doing this. Could somebody help me out?
Here is one way of fine-tuning using tf.estimator:
Define your model using the SAME variable names/scopes as in your saved model.
Use tf.estimator's warm-start functionality to initialize your new model with the saved weights. Here is a code snippet:
if fine_tuning:
    ws = tf.estimator.WarmStartSettings(ckpt_to_initialize_from=path_saved_model,
                                        vars_to_warm_start='.*')
else:
    ws = None

estimator = tf.estimator.Estimator(model_fn=model_function,
                                   warm_start_from=ws,
                                   ...)
This will initialize any variable whose name is shared between your currently defined graph and the saved model.
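A minimal sketch of what the corresponding model_fn might look like; build_resnet18_features and NUM_CLASSES are placeholders for your own re-implementation of the converted backbone (under the same variable scopes as the saved model) and your label count:

import tensorflow as tf

def model_function(features, labels, mode):
    # Recreate the converted backbone under the SAME variable scopes as the
    # saved model so that warm start can match the variables by name.
    net = build_resnet18_features(features["image"])  # placeholder helper

    # Extra layers added on top; these get new names and train from scratch.
    logits = tf.compat.v1.layers.dense(net, units=NUM_CLASSES, name="new_head")

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={"logits": logits})

    loss = tf.compat.v1.losses.sparse_softmax_cross_entropy(labels, logits)
    optimizer = tf.compat.v1.train.AdamOptimizer()
    train_op = optimizer.minimize(
        loss, global_step=tf.compat.v1.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)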

Deploying model

I just finished training a categorizer model exactly as described in https://github.com/GoogleCloudPlatform/MiniCat, but I am not sure how to use the model to make predictions.
The trained model is in the directory Train.
The data is in the directory Data.
I'm really new to this and I don't know where to start. I read something about deploying models at https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models, but how do I even create a SavedModel?
Any answers will be appreciated.
So in the folder where you got the trained model, you just need to load that model in your session. First create a saver (you can also use it for loading):
train_saver = tf.train.Saver()
Now inside your session:
train_saver.restore(sess, 'path/to/model/doc_classifier_cnn_model.ckpt')
Then just feed the tensors with feed_dict.
The other option is to create a protobuf file (.pb), but to do that you will still have to load the model as I described.
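A minimal sketch of restoring the checkpoint and exporting a SavedModel from it (the tensor names 'input_x:0' and 'predictions:0' are placeholders; look up the real input/output names in the MiniCat graph):

import tensorflow as tf

# Assumes the MiniCat model-building code has already been run, so the graph
# exists in this session's default graph.
with tf.compat.v1.Session() as sess:
    train_saver = tf.compat.v1.train.Saver()
    train_saver.restore(sess, 'path/to/model/doc_classifier_cnn_model.ckpt')

    input_x = sess.graph.get_tensor_by_name('input_x:0')          # placeholder name
    predictions = sess.graph.get_tensor_by_name('predictions:0')  # placeholder name

    # Export a SavedModel directory that ML Engine / TF Serving can deploy.
    tf.compat.v1.saved_model.simple_save(
        sess, 'exported_saved_model',
        inputs={'input_x': input_x},
        outputs={'predictions': predictions})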

How to properly freeze a TensorFlow graph containing a LookupTable

I am working with a model that uses multiple lookup tables to transform the model input from text to feature ids. I am able to train the model fine, and I am able to load it via the JavaCPP bindings. I save checkpoints periodically with a default Saver object via the TensorFlow Supervisor.
When I try to run the model I get the following error:
Table not initialized.
[[Node: hash_table_Lookup_3 = LookupTableFind[Tin=DT_STRING, Tout=DT_INT64,
_class=["loc:#string_to_index_2/hash_table"], _output_shapes=[[-1]],
_device="/job:localhost/replica:0/task:0/cpu:0"]
(string_to_index_2/hash_table, ParseExample/ParseExample:5, string_to_index_2/hash_table/Const)]]
I prepare the model using the freeze_graph.py script as follows:
bazel-bin/tensorflow/python/tools/freeze_graph --input_graph=/tmp/tf/graph.pbtxt \
    --input_checkpoint=/tmp/tf/model.ckpt-0 --output_graph=/tmp/ticker_classifier.pb \
    --output_node_names=sigmoid --initializer_nodes=init_all_tables
As far as I can tell, specifying initializer_nodes has no effect on the resulting file. Am I running into something that is not currently supported? If not, is there something else I need to do to prepare the graph to be frozen?
I had the same problem when using C++ to invoke the TF API to run inference. It seems the reason is that I trained a model using tf.feature_column.categorical_column_with_hash_bucket, which needs to be initialized like this:
table_init_op = tf.tables_initializer(name="init_all_tables")
sess.run(table_init_op)
So when you want to freeze the model, you must append the name of table_init_op to the --output_node_names argument:
freeze_graph --input_graph=/tmp/tf/graph.pbtxt \
    --input_checkpoint=/tmp/tf/model.ckpt-0 \
    --output_graph=/tmp/ticker_classifier.pb \
    --output_node_names=sigmoid,init_all_tables \
    --initializer_nodes=init_all_tables
When you load and initialize the model in C++, you should first invoke the TF C++ API like this:
std::vector<Tensor> dummy_outputs;
// Run the table initializer once before any inference calls.
Status st = session->Run({}, {}, {"init_all_tables"}, &dummy_outputs);
Now you have initialized all tables and can do other things such as inference. This issue may also be helpful.
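If you load the frozen graph from Python instead of C++, the same initialization step applies; a minimal sketch using the output path from the freeze_graph command above:

import tensorflow as tf

# Load the frozen GraphDef produced by freeze_graph.
with tf.io.gfile.GFile('/tmp/ticker_classifier.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.compat.v1.Session() as sess:
    tf.compat.v1.import_graph_def(graph_def, name='')
    # Run the table initializer once before any inference, mirroring the C++ call.
    sess.run('init_all_tables')
    # ...then sess.run the 'sigmoid' output with your feed_dict for inference.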