How to persist tflearn model.load

I am using tflearn and I am new to it. I created a model and saved it. I am also able to load that model. But I still have one question unanswered.
What does model.load actually do, and how can I persist the loaded model in memory even after the script ends?

model.load(model_file='path') loads a pre-saved model from the given path.
If you save a model using model.save('my_tf_model'), the built model is written to disk as my_tf_model.meta, my_tf_model.index, and my_tf_model.data-NNNNN-of-NNNNN.
You would then retrieve this model using model.load('my_tf_model') and use it for prediction.
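As for persistence: model.load only restores the weights into the current process, so nothing survives the script ending; the next process has to load again (or you keep a long-running service). A minimal sketch of the round trip, assuming a toy two-layer network (the graph must be rebuilt exactly as it was when the model was saved):

    import tflearn

    # Rebuild the graph exactly as it was at save time; model.load only
    # restores weights, not the network definition.
    net = tflearn.input_data(shape=[None, 4])
    net = tflearn.fully_connected(net, 8, activation='relu')
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net)

    model = tflearn.DNN(net)
    model.load('my_tf_model')  # reads my_tf_model.meta/.index/.data-* from disk

    # The restored weights live only in this process; a new script run
    # must call model.load again before predicting.
    print(model.predict([[0.1, 0.2, 0.3, 0.4]]))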

Related

How do I use the Object Detection API to evaluate my own custom model? What do I write into the config files?

I have a custom object detection model that I can call with model = MyModel() and model.load_weights(checkpoint), and I want to evaluate it using the Object Detection API.
From what I understood, there are two possibilities: either I use the legacy eval.py, where I don't know what to put into the pipeline_config file,
or I use the newer version implemented in model_main_tf2.py, but there I would have to save my model as model.config, and I don't know what to put in the pipeline file either.
Since my model is a YOLO model, it is not included in the sample configs yet.
https://github.com/tensorflow/models/tree/master/research/object_detection/configs/tf2
Would really appreciate the help!
You can't calculate the mAP using the Object Detection API because there's no pipeline.config file for YOLO.
However, you can check this repo out. It's a TensorFlow-based implementation of YOLOv3. They have working code for calculating mAP, which you can modify accordingly to calculate the mAP of your model.
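If you end up adapting the mAP code, the core computation is small. A hedged sketch of per-class average precision with PASCAL VOC 11-point interpolation, where detections (a list of (confidence, is_true_positive) pairs) and num_gt (the number of ground-truth boxes) are hypothetical inputs you would build from your model's output:

    import numpy as np

    def average_precision(detections, num_gt):
        # Rank detections by confidence, then accumulate true/false positives.
        detections = sorted(detections, key=lambda d: d[0], reverse=True)
        tp = np.cumsum([d[1] for d in detections])
        fp = np.cumsum([not d[1] for d in detections])
        recall = tp / num_gt
        precision = tp / (tp + fp)
        # 11-point interpolated AP: max precision at recall >= r for each r.
        ap = 0.0
        for r in np.linspace(0.0, 1.0, 11):
            p = precision[recall >= r].max() if np.any(recall >= r) else 0.0
            ap += p / 11
        return ap

mAP is then the mean of the per-class AP values.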

How to extract weights of DQN agent in TF-Agents framework?

I am using TF-Agents for a custom reinforcement learning problem, where I train a DQN (constructed using DqnAgent from the TF-Agents framework) on some features from my custom environment, and separately use a Keras convolutional model to extract these features from images. Now I want to combine these two models into a single model and use transfer learning, where I want to initialize the weights of the first part of the network (images-to-features) as well as the second part, which would have been the DQN layers in the previous case.
I am trying to build this combined model using keras.layers and compiling it with the TF-Agents tf_agents.networks.sequential class to bring it to the form required when passing it to the DqnAgent() class. (Let's call this statement (a).)
I am able to initialize the image feature extractor network's layers with the saved weights, since I stored them as a .h5 file and can obtain numpy arrays from it. So I am able to do the transfer learning for this part.
The problem is with the DQN layers. I saved the policy from the previous run using the prescribed TensorFlow SavedModel format (pb), which gives me a folder containing the model attributes. However, I am unable to view or extract the weights of my DQN this way, and the recommended tf.saved_model.load('policy_directory') is not really transparent about what data I can see regarding the policy. If I have to follow the transfer learning approach in statement (a), I need to extract the weights of my DQN and assign them to the new network. The documentation seems to be quite sparse for this case where transfer learning needs to be applied.
Can anyone help me with this by explaining how I can extract the weights from the SavedModel (the pb folder)? Or is there a better way to go about this problem?
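One avenue worth knowing: a SavedModel directory stores its weights as an ordinary checkpoint under <dir>/variables/, which tf.train.load_checkpoint can read directly. A minimal sketch, assuming the policy was saved to a hypothetical policy_directory:

    import tensorflow as tf

    # The SavedModel's weights sit in a checkpoint at variables/variables.
    reader = tf.train.load_checkpoint('policy_directory/variables/variables')
    shape_map = reader.get_variable_to_shape_map()

    # Discover the variable names of the DQN layers.
    for name, shape in shape_map.items():
        print(name, shape)

    # Pull a specific weight tensor out as a numpy array; it can then be
    # assigned to the matching layer of the new combined Keras network.
    some_name = next(iter(shape_map))
    weights = reader.get_tensor(some_name)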

Is it possible to run two TFLite models at the same time on a Flutter App? / make Teachable Machine recognize when an object is not present?

I am using a Teachable Machine model which I trained to recognize some specific objects. The issue, however, is that it does not recognize when there is nothing there; it always assumes that one of the objects is present. One potential solution I am considering is combining two models, such as adding the YOLO v2 TFLite model to the same app. Would this even be possible/efficient? If so, what would be the best way to do it?
If anyone knows a solution to get Teachable Machine to recognize when the object is not present, that would probably be a much better solution.
Your problem can be solved by making a model ensemble: train a classifier that learns to tell whether your specific objects are in the visual space at all, and then use your detection model.
However, I really recommend you upload your model to an online service and consume it via an API. As far as I know, the tflite package only supports MobileNet-based models well.
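Running two models at once is technically straightforward: you keep one interpreter per model. A Python sketch of the gating idea with tf.lite.Interpreter (file names, input shape, and the 0.5 threshold are all assumptions; the Flutter tflite plugin follows the same one-interpreter-per-model pattern):

    import numpy as np
    import tensorflow as tf

    classifier = tf.lite.Interpreter(model_path='teachable_machine.tflite')
    detector = tf.lite.Interpreter(model_path='yolo_v2.tflite')
    classifier.allocate_tensors()
    detector.allocate_tensors()

    def run(interpreter, image):
        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]
        interpreter.set_tensor(inp['index'], image.astype(inp['dtype']))
        interpreter.invoke()
        return interpreter.get_tensor(out['index'])

    frame = np.zeros((1, 224, 224, 3), dtype=np.float32)  # placeholder frame
    # Real YOLO output needs decoding; this max() check is only a stand-in
    # for "the detector found something".
    if run(detector, frame).max() > 0.5:
        print(run(classifier, frame))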
I had the same problem. Just create another class called whatever you want (for example, none) and put some non-related images in it, then train the model.
Now whenever there is nothing in the field of view, it should output none.

What is difference between a regular model checkpoint and a saved model in tensorflow?

There's a fairly clear difference between a model and a frozen model. As described in model_files, the relevant part is "Freezing":
...so there's the freeze_graph.py script that takes a graph definition and a set of checkpoints and freezes them together into a single file.
Is a "saved_model" most similar to a "frozen_model" (and not a saved "model_checkpoint")?
Is this defined somewhere in the docs that I'm missing?
In prior versions of TensorFlow we would save and restore model weights, but this seems to be in the context of a "model_checkpoint", not a "saved_model". Is that still correct?
I'm asking more for the design overview here, not implementation specifics.
A checkpoint file only contains the variables of a specific model and has to be loaded with either the exact same predefined graph or with a specific assignment_map to load only chosen variables. See https://www.tensorflow.org/api_docs/python/tf/train/init_from_checkpoint
A saved model is broader: it contains the graph, which can be loaded within a session, and training can be continued. A frozen graph, however, is serialized and cannot be used to continue training.
You can find all the info here: https://www.tensorflow.org/guide/saved_model
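To make the contrast concrete, a minimal TF1-style sketch (the toy graph and the paths 'ckpt/model' and 'export_dir' are assumptions): the Saver writes variables only, while a SavedModel packages graph and variables together.

    import tensorflow as tf

    # Toy graph: one placeholder, one variable, one op.
    x = tf.placeholder(tf.float32, [None, 3], name='x')
    w = tf.Variable(tf.ones([3]), name='w')
    y = tf.multiply(x, w, name='y')

    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Checkpoint: variables only. Restoring later requires rebuilding
        # this exact graph (or passing an assignment_map) first.
        saver.save(sess, 'ckpt/model')

        # SavedModel: graph + variables in one directory, loadable in a
        # fresh session without the graph-building code above.
        tf.saved_model.simple_save(sess, 'export_dir',
                                   inputs={'x': x}, outputs={'y': y})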

What's the diff between tf.import_graph_def and tf.train.import_meta_graph

When training, meta graph files are automatically saved in the model folder.
So what's the difference between a graph and a meta graph?
If I want to load a model and do inference without building the graph from scratch,
is using tf.train.import_meta_graph fine?
Well, I found the answer in
https://www.tensorflow.org/versions/r0.9/how_tos/meta_graph/index.html
and
How to load several identical models from save files into one session in Tensorflow
And you can also refer to this:
https://github.com/tensorflow/tensorflow/issues/4658
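For completeness, the import_meta_graph route for inference looks roughly like this (a sketch: the checkpoint prefix 'my_model' and the tensor names 'x:0' and 'y:0' are assumptions that depend on how the graph was saved):

    import tensorflow as tf

    with tf.Session() as sess:
        # import_meta_graph rebuilds the graph from the .meta file and
        # returns a Saver that can restore the variables.
        saver = tf.train.import_meta_graph('my_model.meta')
        saver.restore(sess, 'my_model')

        graph = tf.get_default_graph()
        x = graph.get_tensor_by_name('x:0')  # input tensor (assumed name)
        y = graph.get_tensor_by_name('y:0')  # output tensor (assumed name)
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))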