Evaluate the TensorFlow object detection model - tensorflow

How can I evaluate my object detection model in a simple and understandable way? I used TensorFlow's Object Detection API, but I didn't understand the TensorBoard graphs. Can I evaluate it manually?
Any help? :(

Welcome to StackOverflow!
In short, yes you can, but it could be quite time-consuming.
Here are the steps you might want to follow (assuming you have a basic understanding of TensorFlow graphs and sessions; otherwise please update your question):
Export your model to a frozen graph (*.pb file) via HERE. This step gives you an out-of-the-box model that you can load without any dependency on the Object Detection API.
Write a script to load your model (frozen graph) and perform inference. Some instructions can be found HERE. Make sure you use a tool such as Netron to check the input and output node names of your frozen graph.
Once you can run inference, you can compute metrics such as mAP on your own dataset and loop through all images to get the desired evaluation; a minimal loading/inference sketch follows.
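A minimal sketch of loading the frozen graph and running inference, assuming a graph exported by the Object Detection API with its standard node names; the file path and the dummy image below are placeholders:

import numpy as np
import tensorflow as tf

PATH_TO_FROZEN_GRAPH = 'exported_model/frozen_inference_graph.pb'  # placeholder path

# Load the frozen graph definition and import it into a new graph.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Standard tensor names in graphs exported by the Object Detection API.
    image_tensor = graph.get_tensor_by_name('image_tensor:0')
    boxes = graph.get_tensor_by_name('detection_boxes:0')
    scores = graph.get_tensor_by_name('detection_scores:0')
    classes = graph.get_tensor_by_name('detection_classes:0')

    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)  # replace with a real image batch
    out_boxes, out_scores, out_classes = sess.run(
        [boxes, scores, classes], feed_dict={image_tensor: image})
    # Match out_boxes/out_classes against your ground truth here to accumulate mAP.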

You could use a confusion matrix to evaluate your model on the test dataset.
After training the model on your dataset, export the inference graph for evaluation.
The link below walks you through the evaluation step by step.
Best of luck!
confusion_matrix
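As a rough sketch (not taken from the linked article), you can accumulate a confusion matrix with tf.confusion_matrix once you have matched each detection to a ground-truth class, e.g. by IoU threshold; NUM_CLASSES and the per-image label lists below are placeholders:

import tensorflow as tf

NUM_CLASSES = 5  # placeholder

gt_classes = tf.placeholder(tf.int64, [None])
pred_classes = tf.placeholder(tf.int64, [None])
cm_op = tf.confusion_matrix(gt_classes, pred_classes, num_classes=NUM_CLASSES)

with tf.Session() as sess:
    total_cm = None
    # Replace this list with (ground_truth, predicted) class IDs per image.
    for gt, pred in [([0, 1, 2], [0, 1, 1])]:
        cm = sess.run(cm_op, feed_dict={gt_classes: gt, pred_classes: pred})
        total_cm = cm if total_cm is None else total_cm + cm
    print(total_cm)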

Related

Different evaluation accuracy when loading BERT from checkpoint

For some reason, I get wildly different loss and accuracy when I evaluate my BERT test set right after training vs. when I load from a saved checkpoint. I thought it might have been my adaptation of BERT, so I tried modifying the run_classifier.py script as little as possible to fit my use case, and I'm still seeing this problem.
The only reason I can think of is that the model isn't loading correctly, but I don't know how to fix it. I believe I'm loading it the way it was originally intended: for the init_checkpoint parameter, I pass path/to/classifier/model.ckpt-{last_step}. There are three model files (meta, index, data), but there are also the checkpoint, events, and graph files. Do I need to be doing something with those other three files as well? I'm used to using Keras, and this pure TensorFlow saving/loading process seems unnecessarily convoluted to me.
Thank you in advance for any help/insight regarding BERT or pure tf saving/loading! If you're unfamiliar with BERT, here's the github link: BERT GitHub

Looking for clarification on "running" Tensorflow models

From TensorFlow's documentation, there seems to be a large array of options for "running", serving, testing, and predicting with a TensorFlow model. I've made a model very similar to MNIST, where it outputs a distribution from an image. For a beginner, what would be the easiest way to take one or a few images and send them through the model, getting an output prediction? It is mostly for experimentation purposes. Sorry if this is redundant, but all my research has led me to so many different ways of doing this, and the documentation doesn't really give any info on the pros and cons of the different methods. Thanks
I guess you are using placeholders for your model input and then using feed_dict to feed values into your model.
If that's the case, the simplest way is: once you have a trained model, save it using tf.train.Saver. Then you can have a test script where you restore your model and call sess.run on your output tensor with a feed_dict of whatever you want your input to be.
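A minimal sketch of that save/restore pattern; the tensor names 'input:0'/'output:0' and the checkpoint path are placeholders for whatever your own graph uses:

import numpy as np
import tensorflow as tf

CKPT = 'checkpoints/my_model.ckpt'  # placeholder; saved earlier with tf.train.Saver().save(sess, CKPT)

with tf.Session() as sess:
    # Restore the graph structure and the trained weights.
    saver = tf.train.import_meta_graph(CKPT + '.meta')
    saver.restore(sess, CKPT)

    graph = tf.get_default_graph()
    input_ph = graph.get_tensor_by_name('input:0')   # your input placeholder's name
    output = graph.get_tensor_by_name('output:0')    # your prediction op's name

    image = np.zeros((1, 28, 28, 1), dtype=np.float32)  # replace with your real image(s)
    prediction = sess.run(output, feed_dict={input_ph: image})
    print(prediction)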

What does freezing a graph in TensorFlow mean?

I am a beginner in NN APIs and TensorFlow.
I am trying to save my trained model in protobuf format (.pb); there are many blogs explaining how to save the model as a protobuf. One thing I did not understand is the importance of freezing the graph before saving it as a protobuf. I read that freezing converts variables to constants; does that mean the model is not trainable anymore?
What else will freezing do to the model?
What does the model lose after freezing?
Can anyone please explain or give some pointers on details of freezing?
This is only a partial answer to your question.
A frozen graph is easily optimizable. When doing inference (forward propagation), for instance, you can fuse some of the layers together. You can't do this with a graph split between variables and operations (an unfrozen graph). Why would you want to fuse layers together? There are multiple reasons. Going hardware-specific: it might be easier to compute a number of operations together in a group of tensors, tailored to the structure of your CPU or GPU. TensorRT, for instance, is a graph optimizer that starts from a frozen graph (more info on the graph optimizations done by TensorRT here: https://devblogs.nvidia.com/tensorrt-integration-speeds-tensorflow-inference/ ). It does general graph optimizations as well as hardware-specific ones.
As far as I understand, you can unfreeze a graph. I have only worked on optimizing graphs, so I haven't used this feature, but here is code for it: https://gist.github.com/tokestermw/795cc1fd6d0c9069b20204cbd133e36b
Here is another question that might be useful:
TensorFlow: Is there a way to convert a frozen graph into a checkpoint model?
It has not yet been answered though.
Freezing the model means producing a single file containing information about the graph and the checkpoint variables, with those variables (the trained weights) saved as constants within the graph structure. This eliminates the additional information saved in the checkpoint files, such as the gradients/optimizer state at each point, which is included so that the model can be reloaded and training continued from where you left off. As this is not needed when serving a model purely for inference, it is discarded during freezing. A frozen model is a single Google .pb file.
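A minimal freezing sketch (the checkpoint path and output node name below are placeholders for your own model):

import tensorflow as tf
from tensorflow.python.framework import graph_util

CKPT = 'model.ckpt'  # placeholder checkpoint prefix

with tf.Session() as sess:
    # Rebuild the graph from the .meta file and load the trained variables.
    saver = tf.train.import_meta_graph(CKPT + '.meta')
    saver.restore(sess, CKPT)

    # Convert every variable reachable from the output node into a constant.
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['output_node'])  # placeholder node name

    # Write the self-contained graph to a single .pb file.
    with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
        f.write(frozen_graph_def.SerializeToString())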

Using TensorFlow object detection API models at prediction

I have used the TensorFlow object detection API to train the SSD Inception model from scratch. The evaluation script shows that the model has learned something and now I want to use the model.
I have looked at the object detection ipynb that can feed single images to a trained model. However, this is for SSD with MobileNet. I have used the following line (after loading the meta graph) to print the tensor names of the TensorFlow model I trained.
print([str(op.name) for op in tf.get_default_graph().get_operations()])
But it does not contain the same input or output tensor names as in the ipynb. I have also searched through the code, but many functions point toward each other and it is difficult to find what I am looking for.
How can I find the tensor names I need? Or is there another method I do not know about?
To use the graph, you need to freeze/export it using this provided script. The resulting .pb file will contain the node names you need. I don't know why it's organized like that, but it is.
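As a quick sanity check you can list the node names in the exported file; graphs exported by the Object Detection API normally use image_tensor as the input and detection_boxes/detection_scores/detection_classes/num_detections as the outputs (the path below is a placeholder):

import tensorflow as tf

# Read the exported frozen graph and print every node name.
graph_def = tf.GraphDef()
with tf.gfile.GFile('exported/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name)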

Tensorflow - How to ignore certain labels

I'm trying to implement a fully convolutional network and train it on the Pascal VOC dataset. However, after reading up on the labels in the set, I see that I need to somehow ignore the "void" label. In Caffe, the softmax loss has an argument to ignore labels, so I'm wondering what the mechanism is, so I can implement something similar in TensorFlow.
Thanks
In TensorFlow you're feeding the data in via feed_dict, right? Generally you'd want to just pre-process the data and remove the unwanted samples - don't give them to TensorFlow at all.
My preferred approach is a producer-consumer model where you fire up a TensorFlow queue and load it with samples from a loader thread that simply skips enqueuing your void samples.
In training, your model dequeues samples from inside the graph (you don't use feed_dict in the optimize step). This way you're not bothering to write out a whole new dataset with the specific preprocessing step you're interested in today (tomorrow you're likely to find you want some other preprocessing step).
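Note that in Pascal VOC segmentation the "void" label appears as individual pixels rather than whole samples, so another option (closer to what Caffe's ignore_label does) is to keep the images and simply mask the void pixels out of the loss. A minimal sketch, assuming label value 255 for void and placeholder shapes:

import tensorflow as tf

VOID_LABEL = 255    # Pascal VOC convention for "void" pixels
NUM_CLASSES = 21    # placeholder

def masked_softmax_loss(logits, labels):
    # logits: [batch, H, W, NUM_CLASSES]; labels: [batch, H, W] integer class IDs.
    labels_flat = tf.reshape(labels, [-1])
    logits_flat = tf.reshape(logits, [-1, NUM_CLASSES])
    # Keep only the pixels whose label is not "void".
    valid = tf.not_equal(labels_flat, VOID_LABEL)
    valid_labels = tf.boolean_mask(labels_flat, valid)
    valid_logits = tf.boolean_mask(logits_flat, valid)
    losses = tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=valid_labels, logits=valid_logits)
    return tf.reduce_mean(losses)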
As a side comment, I think TensorFlow is a little more do-it-yourself than some other frameworks, but I tend to like that: it abstracts enough to be convenient, but not so much that you don't understand what's happening. "When you implement it, you understand it" - that's the motto that comes to mind with TensorFlow.