Deploying model - tensorflow

I just finished training a categorizer model exactly the way described in https://github.com/GoogleCloudPlatform/MiniCat but I am not sure how to use the model to make predictions.
Trained model is in the directory Train
Data is in the directory Data
I'm really new to this and I don't know where to start. I read about deploying a model at https://cloud.google.com/ml-engine/docs/tensorflow/deploying-models, but how do I even create a SavedModel?
Any answers will be appreciated.

In the folder where you have the trained model, you just need to load that model into your session. First create a saver (you can also use it for loading):
train_saver = tf.train.Saver()
Now inside your session:
train_saver.restore(sess, 'path/to/model/doc_classifier_cnn_model.ckpt')
Then just feed the tensors with feed_dict.
Another option is to create a protobuf file (.pb), but to do so you will still have to load the model as described above.
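A minimal sketch of that flow, assuming a TF1-style checkpoint; the tensor names and the input batch below are hypothetical and need to be replaced with the actual names from the MiniCat training graph (you can list them with graph.get_operations()):

import tensorflow as tf

ckpt_path = 'path/to/model/doc_classifier_cnn_model.ckpt'

with tf.Session() as sess:
    # Rebuild the graph from the .meta file, then restore the weights
    train_saver = tf.train.import_meta_graph(ckpt_path + '.meta')
    train_saver.restore(sess, ckpt_path)

    graph = tf.get_default_graph()
    # Hypothetical tensor names -- replace with the real input/output names
    inputs = graph.get_tensor_by_name('input_x:0')
    predictions = graph.get_tensor_by_name('output/predictions:0')

    # my_documents is a placeholder for your preprocessed input batch
    result = sess.run(predictions, feed_dict={inputs: my_documents})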

Related

Use Tensorflow2 saved model for object detection

I'm quite new to object detection, but I managed to train my first custom TensorFlow model yesterday. I think it worked fine apart from some warnings; at least I got my exported_model folder with a checkpoint, saved model, and pipeline.config. I built it with exporter_main_v2.py from TensorFlow. I just loaded some images of deer and want to try to detect some in different pictures.
That's what I would like to test now, but I don't know how. I already did an object detection tutorial with pre-trained models and it worked fine. I tried to just replace config_file_path, saved_model_path and image_path with the paths pointing to my exported model, but it didn't work:
error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\tensorflow\tf_io.cpp:42: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse GraphDef file: D:\VSCode\Machine_Learning_Tests\Tensorflow\workspace\exported_models\first_model\saved_model\saved_model.pb in function 'cv::dnn::ReadTFNetParamsFromBinaryFileOrDie'
There are endless tutorials on how to train a custom detector, but I can't find a good explanation of how to manually test my exported model.
Thanks in advance!
EDIT: I need to know how to build a script where I can import a model I saved with TensorFlow's exporter_main_v2.py and an image I want to test this model on, and get a result, either as text or as rectangles drawn on the picture. I've seen many tutorials, but none of them works for me with a model saved with exporter_main_v2.py.
From the error it looks like you have a model saved as .pb. If you want to do inference you can write something like this:
# load the model
model = tf.keras.models.load_model(my_model_dir)
prediction = model.predict(x=x_test, ...)
You'll have to set x, which is the only mandatory argument. It is your test dataset (the images you want to obtain predictions for). Also, predict is useful when you have a large number of images to predict: it handles prediction in a batched way, avoiding filling up memory. If you have just a few, you can call the model directly via its __call__() method, like this:
prediction = model(x_test, training=False)
More about prediction can be found in the TensorFlow documentation.
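If load_model does not accept the exported directory, another common route for models exported with exporter_main_v2.py is to load the SavedModel directly and call it on a batched image tensor. This is only a sketch under that assumption; the paths, the image file, and the score threshold are placeholders:

import numpy as np
import tensorflow as tf
from PIL import Image

# Load the SavedModel exported by exporter_main_v2.py (path is a placeholder)
detect_fn = tf.saved_model.load('exported_models/first_model/saved_model')

# Read one test image and add a batch dimension
image = np.array(Image.open('test_deer.jpg'))
input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)

detections = detect_fn(input_tensor)

# Boxes are normalized [ymin, xmin, ymax, xmax]; scores and classes are per detection
boxes = detections['detection_boxes'][0].numpy()
scores = detections['detection_scores'][0].numpy()
classes = detections['detection_classes'][0].numpy().astype(int)

for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print(cls, score, box)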

what is equivalent to torch.load() in tensorflow?

I want to know if there is any way to see the parameters of models in TensorFlow. There is a command for this in PyTorch, i.e. torch.load('/filepath').
For a prediction context, you can do a
model = tf.keras.models.load_model(PATH, compile=True)
That works with both .h5 Keras models and models in the SavedModel format. Otherwise you might have to provide custom metrics and training code that you may not have in a prediction context.
For reference, check here: https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model
Provided that you already have a model saved at MODEL_PATH, this should do the trick:
model = tf.keras.models.load_model(MODEL_PATH)
model.summary()
Check this out for more info on saving and loading models.
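If the goal is specifically to inspect the parameters (roughly what torch.load() plus state_dict() gives you in PyTorch), here is a minimal sketch, assuming the model loads as above; MODEL_PATH and CKPT_PATH are placeholders:

import tensorflow as tf

model = tf.keras.models.load_model(MODEL_PATH)

# Iterate over all parameters and print their names and shapes;
# var.numpy() gives the actual values as a NumPy array
for var in model.weights:
    print(var.name, var.shape)

# For a raw checkpoint (not a full SavedModel), list the variables directly
for name, shape in tf.train.list_variables(CKPT_PATH):
    print(name, shape)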

Training a keras model on pretrained weights using load_weights()

I am using a custom Keras model in a Databricks environment.
For a custom Keras model, model.save('model.h5') does not work, because a custom model is not serializable. Instead it is recommended to use model.save_weights(path) as an alternative.
model.save_weights(pathDirectory) works. This yields three files (checkpoint, .data-00000-of-00001, .index) in pathDirectory.
For loading the weights, the following mechanism works fine:
model = Model()
model.load_weights(path)
But I want to continue training my model from the pretrained weights I just saved, i.e. save the model weights and then resume training from them afterwards.
When I load the model weights and run the training loop, I get this error: TypeError: 'CheckpointLoadStatus' object is not callable
After much research, I have found a workaround: we can also save the model using model.save("model.hpy5") and read the saved model back in Databricks. model.h5 does not work for customized models, but it works for standard models.
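For the original goal of resuming training from the saved weights, here is a minimal sketch, assuming Model() is the same custom-model class used when saving; pathDirectory and train_ds are placeholders. One common source of that TypeError is assigning the return value of load_weights back to the model variable, since load_weights returns a CheckpointLoadStatus rather than a model:

import tensorflow as tf

model = Model()
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# load_weights returns a CheckpointLoadStatus, not a model, so keep it
# in a separate variable instead of reassigning it to `model`
status = model.load_weights(pathDirectory)

# Continue training from the restored weights
model.fit(train_ds, epochs=5)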

how to do finetune using pre-trained model in tf.estimator

I got a model converted from Caffe using the MMdnn tool; it converted the Caffe model into TensorFlow's saved_model style. It's a ResNet-18 model, and I just stripped out several of the last layers. I wish I could load this architecture in the model_fn of a tf.estimator and manually add some extra layers to do my job.
The tutorial recommends using the loader.load method to load the saved_model, but I want to use it in an estimator, and I need to define the architecture in the model_fn function. I searched SO and GitHub, but there isn't a very specific workflow for doing this. Could somebody help me out?
Here is one way of fine-tuning using tf.estimator:
Define your model using the SAME variable names/scopes as in your saved model
Use tf.estimator's warm-start functionality to initialize your new model with the saved weights. Here is a code snippet:
if fine_tuning:
    ws = tf.estimator.WarmStartSettings(ckpt_to_initialize_from=path_saved_model,
                                        vars_to_warm_start='.*')
else:
    ws = None

estimator = tf.estimator.Estimator(model_fn=model_function,
                                   warm_start_from=ws,
                                   ...
                                   )
This will initialize any variables that share names between your currently defined graph and the saved model.
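A minimal sketch of what such a model_fn could look like, assuming TF1-style layers; the scope and layer names here are hypothetical and must match the names in your converted checkpoint for the warm start to pick them up (you can list them with tf.train.list_variables):

import tensorflow as tf

def model_function(features, labels, mode):
    # Backbone: reuse the same variable scope / layer names as the converted
    # ResNet-18 checkpoint (names below are hypothetical)
    with tf.variable_scope('resnet18'):
        net = tf.layers.conv2d(features['image'], 64, 7, strides=2, name='conv1')
        # ... remaining backbone layers, named to match the checkpoint ...

    # New layers: these names do not exist in the checkpoint, so they are
    # initialized from scratch rather than warm-started
    with tf.variable_scope('new_head'):
        net = tf.layers.flatten(net)
        logits = tf.layers.dense(net, 10, name='logits')

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'logits': logits})

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)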

Converting a model trained and saved with tf.estimator to .pb

I have a model trained with tf.estimator and it was exported after training as below
serving_input_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    feature_placeholders)
classifier.export_savedmodel(
    r'./path/to/model/trainedModel', serving_input_fn)
This gives me a saved_model.pb and a folder which contains weights as a .data file. I can reload the saved model using
predictor = tf.contrib.predictor.from_saved_model(r'./path/to/model/trainedModel')
I'd like to run this model on Android, and that requires the model to be in .pb format. How can I freeze this predictor for use on the Android platform?
I don't deploy to Android, so you might need to customize the steps a bit, but this is how I do this:
Run <tensorflow_root_installation>/python/tools/freeze_graph.py with the arguments --input_saved_model_dir=<path_to_the_savedmodel_directory>, --output_node_names=<full_name_of_the_output_node> (you can get the name of the output node from graph.pbtxt, although that's not the most convenient way), and --output_graph=frozen_model.pb
(optionally) Run <tensorflow_root_installation>/python/tools/optimize_for_inference.py with appropriate arguments. Alternatively you can look up the Graph Transform Tool and selectively apply optimizations.
At the end of step 1 you'll already have a frozen model with no variables left, which you can then deploy to Android.
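An alternative route to the same result from Python, rather than the freeze_graph.py script, is to load the SavedModel in a session and fold the variables into constants. This is only a sketch; the export directory (including its timestamped subfolder) and the output node name are placeholders:

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel produced by export_savedmodel (placeholder path)
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        './path/to/model/trainedModel/<timestamp>')

    # Replace all variables with constants holding their current values
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['<full_name_of_the_output_node>'])

with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())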