MobileNetV1 trained on custom dataset: quantization size problem - TensorFlow

I am working on object detection software. I am using the TensorFlow Object Detection API in Python with MobileNetV1, and I have trained the model on my own dataset.
The frozen_inference_graph.pb file resulting from training on my dataset is about 22 MB.
I tried to convert it to TFLite with quantization, but it is still about 21.2 MB.
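For reference, the conversion I tried looked roughly like this (a sketch, not my exact script; the input/output node names assume the graph was exported with export_tflite_ssd_graph.py, and the exact quantization flag depends on the TF version):

import tensorflow as tf

# Post-training weight quantization of the exported SSD graph (TF 1.x style).
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "tflite_graph.pb",  # assumed to come from export_tflite_ssd_graph.py
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=["TFLite_Detection_PostProcess",
                   "TFLite_Detection_PostProcess:1",
                   "TFLite_Detection_PostProcess:2",
                   "TFLite_Detection_PostProcess:3"],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]})
converter.allow_custom_ops = True                     # detection post-processing is a custom op
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization (older TF: post_training_quantize = True)
tflite_model = converter.convert()
with open("detect_quant.tflite", "wb") as f:
    f.write(tflite_model)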
Is it normal that both files are over 20 MB? I have read from different sources that quantized MobileNet models are around 5 MB. Is it because I trained it on my custom dataset with new objects? And why does quantizing it not reduce the size (up to 4x smaller)?
Thank you for your help

Related

My model goes from 75 kB to 150 MB when converting from ONNX to tf-lite

I have a small ONNX model (generated from PyTorch) that only contains convolutions with Gaussian kernels, as well as upsampling/downsampling operations. As a result, it is only about 75 kB on disk in its ONNX form.
If I convert it to tf-lite for a ~1000x1500 image input size, it remains fairly small.
However, when converting it to tf-lite for a ~3000x4000 image input size, the exported model size jumps to 150 MB!
Why is that? Can I do something to address it?
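In case it helps, one way I can think of to see which tensors account for the size is to list the largest tensors in the converted file, something like this (a sketch; the file name is a placeholder):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_3000x4000.tflite")
interpreter.allocate_tensors()
# Sort tensors by element count to see which ones blow up with the input size.
details = sorted(interpreter.get_tensor_details(),
                 key=lambda d: -int(np.prod(d["shape"])))
for d in details[:10]:
    print(d["name"], d["shape"], d["dtype"])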
Thanks!

Image normalization before using ssd_mobilenet

I am trying to train ssd-mobilenet on my own dataset:
training images: 3400, size 1600x1200
test set: 800 images, size 1600x1200
tensorflow-gpu 1.13.1, GPU: 4 GB, CUDA 10.0, cuDNN 7
objects: road damage such as alligator cracks
After 197000 steps my training loss will not go below 2.
I have 2 questions:
Should I normalize my training and test images before using a pretrained model like ssd_mobilenet?
If yes, should I annotate the normalized images or the originals?
I really need help. Thanks in advance.
Should I normalize my training and test images before using a pretrained model like ssd_mobilenet?
No. Assuming you define your training pipeline correctly (see the examples in the TF Models repository), the Object Detection API will take care of the appropriate image transformations (scaling, padding, normalization, etc.) required to make the input compatible with the model.
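For illustration only, the preprocessing the API applies for SSD MobileNet amounts to resizing to the configured input size and then scaling pixels to [-1, 1], roughly like this (a sketch of the internal behaviour, which is why you should feed raw, unnormalized images):

import numpy as np

def ssd_mobilenet_style_normalize(resized_image_uint8):
    # Approximation of what the Object Detection API does internally after
    # resizing: map raw pixel values from [0, 255] to [-1, 1].
    x = resized_image_uint8.astype(np.float32)
    return (2.0 / 255.0) * x - 1.0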

Using ssd_inception_v2 to train at a different resolution

The dataset contains images of different sizes.
The pretrained weights are trained on 300x300 resolution.
I am training on widerface dataset where objects are as small as 15x15.
Q1. I want to train at 800x800 resolution. Do I need to resize all the images manually, or will this be done by TensorFlow automatically?
I am using the following command to train:
python3 /opt/github/models/research/object_detection/legacy/train.py --logtostderr --train_dir=/opt/github/object_detection_retraining/wider_face_checkpoint/ --pipeline_config_path=/opt/github/object_detection_retraining/models/ssd_inception_v2_coco_2018_01_28/pipeline.config
Q2. I also tried training using model_main.py, but after 1000 iterations it evaluates the dataset at every iteration.
I am using the following command to train:
python3 /opt/github/models/research/object_detection/model_main.py --num_train_steps=200000 --logtostderr --model_dir=/opt/github/object_detection_retraining/wider_face_checkpoint/ --pipeline_config_path=/opt/github/object_detection_retraining/models/ssd_inception_v2_coco_2018_01_28/pipeline.config
Q3. Also, can you suggest any model I should use for real-time face detection, apart from MobileNet and Inception?
Thanks.
Q1. No, you do not need to resize manually. See this detailed answer, and the config-editing sketch below the Q3 answer.
Q2. By 1000 iterations you mean steps, right? (An iteration is a complete pass over the dataset.) The model usually performs evaluation after a certain amount of time, e.g. every 10 minutes: the checkpoints are saved and the model is evaluated on the evaluation set.
Q3. SSD with MobileNet is one of the faster detectors; apart from that, you can try YOLO models for real-time detection.
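Regarding Q1, the input resolution is controlled by the image_resizer in your pipeline config, and the API resizes every image for you. A sketch of editing it programmatically (assuming the object_detection package is on your PYTHONPATH; editing pipeline.config by hand works just as well):

from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

config = pipeline_pb2.TrainEvalPipelineConfig()
with open("pipeline.config") as f:
    text_format.Merge(f.read(), config)

# Train at 800x800: every input image is resized to this shape automatically.
config.model.ssd.image_resizer.fixed_shape_resizer.height = 800
config.model.ssd.image_resizer.fixed_shape_resizer.width = 800

with open("pipeline_800x800.config", "w") as f:
    f.write(text_format.MessageToString(config))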

Retrain last inception or mobilenet layer to work with INPUT_SIZE 64x64 or 32x32

I want to retrain the last Inception or MobileNet layer so it classifies my own objects (about 5-15 classes).
I also want this to work with INPUT_SIZE == 64x64 or 32x32 (not 224 like the default Inception model).
I found some articles about retraining models:
https://hackernoon.com/creating-insanely-fast-image-classifiers-with-mobilenet-in-tensorflow-f030ce0a2991
https://medium.com/#daj/creating-an-image-classifier-on-android-using-tensorflow-part-3-215d61cb5fcd
For mobilenet they say
the input image size, either '224', '192', '160', or '128'
so I can't train with 64 or 32 (which is bad): https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py#L80
What about inception models? Can I somehow train models to work with small image input sizes (to get results faster)?
The objects I want to classify from such small images will already be cropped from their parent image (for example from camera frames); they could be traffic/road signs cropped by fast cascade classifiers (LBP/Haar) trained to detect everything that looks like a sign's shape (triangle, rhombus, circle).
So 64x64 images that fully contain only the object of interest should be enough for classification.
No, you still can: use the smallest option, which would be 128. It will just scale your 32x32 or 64x64 image up, which is fine.
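In other words, the only cost is an upscale of the crop, e.g. (a sketch with Pillow; file names are placeholders):

from PIL import Image

crop = Image.open("sign_crop_64x64.png")
# 128x128 is the smallest MobileNet input option, so upscale the crop to it.
upscaled = crop.resize((128, 128), Image.BILINEAR)
upscaled.save("sign_crop_128x128.png")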
It's not possible with the classifiers, but it is possible with the TensorFlow Object Detection API (you can set any input size): https://github.com/tensorflow/models/tree/master/research/object_detection

Tensorflow: Quantized graph not working for inception-resnet-v2 model

I quantized an inception-resnet-v2 model using https://www.tensorflow.org/performance/quantization#how_can_you_quantize_your_models.
The frozen graph (the input for quantization) is 224.6 MB and the quantized graph is 58.6 MB. I ran an accuracy test on a certain dataset: for the frozen graph the accuracy is 97.4%, whereas for the quantized graph it is 0%.
Is there a different way to quantize inception-resnet models, or is quantization not supported at all for inception-resnet?
I think they transitioned from quantize_graph to graph_transforms. Try using this:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms
And what did you use for the input nodes/output nodes when testing?
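If the node names are the issue, a quick way to double-check them is to list the ops in the quantized graph, e.g. (a sketch; the file name is a placeholder, and on older TF 1.x use tf.GraphDef()/tf.gfile.GFile instead):

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("quantized_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Placeholders are the candidate input nodes; the last node is usually the output.
for node in graph_def.node:
    if node.op == "Placeholder":
        print("input:", node.name)
print("candidate output:", graph_def.node[-1].name)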