Evaluate a model created using Tensorflow Object Detection API - tensorflow

I trained a model using the TensorFlow Object Detection API for detecting swimming pools in satellite images. I used the 'faster_rcnn_inception_v2_coco_2018_01_28' model for training and generated a frozen inference graph (.pb). I want to evaluate the precision and recall of the model. Can someone tell me how I can do that, preferably without using pycocotools, as I was facing some issues with it? Any suggestions are welcome :)

From the Object Detection API you can run "eval.py" from "models/research/object_detection/legacy/".
You have to define an evaluation metric in your config file (see the supported evaluation protocols).
For example:
eval_config: {metrics_set: "coco_detection_metrics"}
The Pascal VOC metrics set ("pascal_voc_detection_metrics"), for example, then gives you the mean Average Precision (mAP).
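A minimal invocation sketch of the legacy evaluation script; the paths below are placeholders you would substitute with your own:

$ python models/research/object_detection/legacy/eval.py \
    --logtostderr \
    --pipeline_config_path=path/to/pipeline.config \
    --checkpoint_dir=path/to/training_checkpoints \
    --eval_dir=path/to/eval_output

The precision/recall values for the chosen metrics_set are printed in the logs and written as event files under --eval_dir, so you can also inspect them by pointing TensorBoard at that directory.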

Related

How to convert model trained on custom data-set for the Edge TPU board?

I have trained a model on my custom dataset using the TensorFlow Object Detection API. I run my "prediction" script and it works fine on the GPU. Now I want to convert the model to TF Lite and run it on the Google Coral Edge TPU board to detect my custom objects. I have gone through the documentation that the Google Coral website provides, but I found it very confusing.
How to convert and run it on the Google Coral Edge TPU Board?
Thanks
Without reading the documentation, it will be very hard to continue. I'm not sure what your "prediction script" means, but I'm assuming that the script loaded a .pb tensorflow model, loaded some image data, and ran inference on it to produce prediction results. That means you have a .pb tensorflow model at the "Frozen graph" stage of the following pipeline:
[Pipeline diagram: image taken from coral.ai.]
The next step would be to convert your .pb model to a "fully quantized .tflite model" using the post-training quantization technique; a sketch of that step is shown below, after the compiler example. The documentation to do that is given here. I also created a github gist containing an example of post-training quantization here. Once you have produced the .tflite model, you'll need to compile the model via the edgetpu_compiler. Although everything you need to know about the edgetpu compiler is in that link, for your purpose, compiling a model is as simple as:
$ edgetpu_compiler your_model_name.tflite
This creates a your_model_name_edgetpu.tflite model that is compatible with the EdgeTPU. If at this stage, instead of getting an edgetpu-compatible model, you are getting errors, that means your model did not meet the requirements posted in the model-requirements section.
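For reference, a minimal sketch of the post-training full-integer quantization step mentioned above, using the TF1-style converter. The input/output tensor names, the input shape, and the random calibration data are placeholders you would replace with your model's real values:

import numpy as np
import tensorflow as tf

# Placeholder tensor names and shape; use your model's actual values.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'frozen_inference_graph.pb',
    input_arrays=['input_tensor'],
    output_arrays=['output_tensor'],
    input_shapes={'input_tensor': [1, 224, 224, 3]})

def representative_dataset():
    # Should yield a few hundred real, preprocessed images so the
    # converter can calibrate the quantization ranges; random data
    # is used here only as a stand-in.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('your_model_name.tflite', 'wb') as f:
    f.write(converter.convert())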
Once you have produced a compiled model, you can then deploy it on an edgetpu device. Currently there are 2 main APIs that can be used to run inference with the model:
EdgeTPU API
  - python api
  - C++ api
tflite API
  - C++ api
  - python api
Ultimately, there are many demo examples showing how to run inference with the model here.
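For the tflite python API route, a minimal inference sketch; the model path is a placeholder, and it assumes the tflite_runtime package and the Edge TPU runtime (libedgetpu.so.1 on Linux) are installed:

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the compiled model with the Edge TPU delegate.
interpreter = Interpreter(
    model_path='your_model_name_edgetpu.tflite',
    experimental_delegates=[load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed one uint8 image of the expected input shape (dummy data here).
input_shape = input_details[0]['shape']
dummy_image = np.zeros(input_shape, dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], dummy_image)
interpreter.invoke()

# For SSD detection models there are typically four outputs:
# boxes, classes, scores, and the number of detections.
for detail in output_details:
    print(detail['name'], interpreter.get_tensor(detail['index']).shape)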
The previous answer works with general classification models, but not with TF object detection API trained models.
You cannot do post-training quantization with TF Lite converter on TF object detection API models.
In order to run object detection models on EdgeTPUs:
You must train the models in quantization-aware training mode, with this addition in the model config:
graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}
This might not work with all the models provided in the model zoo; try a quantized model first.
After training, export the frozen graph with: object_detection/export_tflite_ssd_graph.py
Run the tensorflow/lite/toco tool on the frozen graph to make it TFLite compatible
And finally run edgetpu_compiler on the .tflite file
You can find a more in-depth guide here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md
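A rough sketch of those export/convert/compile steps, assuming the SSD defaults from the linked guide (300x300 input, the normalized_input_image_tensor input and TFLite_Detection_PostProcess outputs); the paths and checkpoint number are placeholders:

$ python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_prefix=path/to/model.ckpt-XXXX \
    --output_directory=path/to/tflite_export \
    --add_postprocessing_op=true

$ bazel run -c opt tensorflow/lite/toco:toco -- \
    --input_file=path/to/tflite_export/tflite_graph.pb \
    --output_file=path/to/tflite_export/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_values=128 \
    --change_concat_input_ranges=false \
    --allow_custom_ops

$ edgetpu_compiler path/to/tflite_export/detect.tflite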

TF Lite object detection only returning 10 detections

I am using a custom object detection model with TensorFlow Lite on native Android. At this point I'm only detecting 2 custom objects. I am using the TensorFlow Object Detection API, and I have a pipeline in place that produces optimized .tflite files.
However, at inference time, the model only returns up to 10 individual detections. According to https://www.tensorflow.org/lite/models/object_detection/overview, this is expected. The problem is that my images have a relatively large object density. I need to be able to detect up to 30 individual objects per image/inference call.
If I change NUM_DETECTIONS in the sample Android app from the TF repo from 10 to, say, 20, I get a runtime exception due to shape mismatch. How can I produce .tflite files capable of yielding more than 10 object detection instances?
Thank you!
Unfortunately, since TFLite prefers static-shaped inputs/outputs, you would need to re-export a TFLite SSD graph with the required number of outputs. Instructions are here. While invoking object_detection/export_tflite_ssd_graph.py, you would need to pass in the parameter --max_detections=20. Then your change of NUM_DETECTIONS should work as expected.
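A sketch of the re-export step, with placeholder paths; --max_detections is the flag named above:

$ python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_prefix=path/to/model.ckpt-XXXX \
    --output_directory=path/to/tflite_export \
    --add_postprocessing_op=true \
    --max_detections=20

After converting the re-exported graph to .tflite and rebundling it into the app, setting NUM_DETECTIONS = 20 should then match the new output tensor shapes.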

Understanding exactly what the pretrained model does on the Tensorflow object detection API

I am trying to understand what I need from any pre-trained model used in the API regardless of any additional code found on the Tensorflow object detection API.
For example ssd_mobilenet_v1_coco_2017_11_17: from what I have understood, it is a model that is already trained to detect objects (there is classification to know the category of the object + regression to bound the objects with rectangles, and those rectangles are actually the x,y,w,h coordinates of the object).
How do we benefit from the regression output of that model (x,y,w,h coordinates) to use them in another model?
Let's assume we want to print out just the coordinates x,y,w,h of a detected object on an image without any need of the code of Tensorflow object detection API, how can we do that?
Certainly you can use the pretrained models provided in the tensorflow object detection model zoo without installing the object detection API. One alternative is to use OpenCV.
OpenCV provides both C++ and Python APIs to run .pb models generated by tensorflow. Here is a nice tutorial.
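A minimal sketch with OpenCV's dnn module, assuming an SSD-style frozen graph plus the matching .pbtxt graph description that OpenCV's tf_text_graph scripts generate; the file names, input size, and score threshold are placeholders:

import cv2

# Load the frozen TensorFlow graph and its OpenCV graph description (.pbtxt).
net = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')

image = cv2.imread('test_image.jpg')
h, w = image.shape[:2]

# SSD-style models typically expect a fixed-size input blob; 300x300 assumed here.
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()

# Each detection row: [batch_id, class_id, confidence, left, top, right, bottom]
# with coordinates normalized to [0, 1].
for detection in detections[0, 0]:
    score = float(detection[2])
    if score > 0.5:
        x1 = int(detection[3] * w)
        y1 = int(detection[4] * h)
        x2 = int(detection[5] * w)
        y2 = int(detection[6] * h)
        print(x1, y1, x2 - x1, y2 - y1, score)  # x, y, width, height, confidence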

Understanding what a model is in regards to Tensorflow and object detection

I'm starting to dive into tensor and object detection for a drone my friend and I are building. I keep seeing the word "model" thrown around and I'm sorry but I don't know what I should be picturing when I see the word "model" in terms of tensorflow and object detection.
Usually, in deep learning, a model is simply the architecture of a neural network. It defines the types of layers, the number of nodes, the connections, etc. Tensorflow uses a static graph, which describes your model architecture in terms of nodes and operations. As a start you can use the Keras API for defining your model.
https://keras.io/
Also read more about the TF graph at https://www.tensorflow.org/guide/graphs and take a look at the tutorials at https://www.tensorflow.org/tutorials
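As a tiny illustration of what "defining a model" means, a minimal Keras sketch (the layer types and sizes here are arbitrary):

import tensorflow as tf

# The "model" is just this stack of layers and their connections;
# training later fills in the weights.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.summary()  # prints the architecture: layer types, output shapes, parameter counts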

Customize MobileNet model architecture with Tensorflow Object Detection API

Tensorflow object detection API provides a number of pretrained object detection models to choose from. However, I would like to introduce modifications to the architecture of those models.
In particular, I would like to make Faster RCNN into a shallower network and use it to train my model. I want to gain speed even at the cost of some accuracy. MobileNet is too inaccurate for my application.
Is it possible to achieve this without having to implement everything from scratch?
Thank you.