I use a quantized model of DeepLab v3 on a Coral Dev Board. The results are not precise enough, so I want to use a non-quantized model.
I cannot find a non-quantized TFLite model of DeepLab, so I want to generate one myself.
I downloaded an xception65_coco_voc_trainval model from the TensorFlow GitHub:
xception65_coco_voc_trainval
Then I use this command to transform the input graph:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph="/path/to/xception65_coco_voc_trainval.pb" \
--out_graph="/path/to/xception65_coco_voc_trainval_flatten.pb" \
--inputs='ImageTensor' \
--outputs='SemanticPredictions' \
--transforms='
strip_unused_nodes(type=quint8, shape="1,513,513,3")
flatten_atrous_conv
fold_constants(ignore_errors=true, clear_output_shapes=false)
fold_batch_norms
fold_old_batch_norms
remove_device
sort_by_execution_order'
Then I generate the tflite file with this command:
tflite_convert \
--graph_def_file="/tmp/deeplab_mobilenet_v2_opt_flatten_static.pb" \
--output_file="/tmp/deeplab_mobilenet_v2_opt_flatten_static.tflite" \
--output_format=TFLITE \
--input_shape=1,513,513,3 \
--input_arrays="ImageTensor" \
--inference_type=FLOAT \
--inference_input_type=QUANTIZED_UINT8 \
--std_dev_values=128 \
--mean_values=128 \
--change_concat_input_ranges=true \
--output_arrays="SemanticPredictions" \
--allow_custom_ops
This command generates a tflite file. I download the file onto my Coral Dev Board and try to run it with this example from GitHub:
deeplab on coral
When I start the program there is an error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
The error comes from that line:
engine = BasicEngine(args.model)
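For reference, a quick host-side sanity check of the generated file would look roughly like this (a minimal sketch, assuming TensorFlow is installed on the desktop machine; it only verifies that the .tflite loads and reports its tensor shapes, independent of the Coral runtime):
# Host-side sanity check of the converted model (the path is the .tflite
# produced by tflite_convert above -- adjust it to your own file).
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="/tmp/deeplab_mobilenet_v2_opt_flatten_static.tflite")
interpreter.allocate_tensors()

print(interpreter.get_input_details())   # shape should match the 1,513,513,3 input given to tflite_convert
print(interpreter.get_output_details())  # the SemanticPredictions output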
I am converting several models from TensorFlow.js, Keras, and TensorFlow to TensorFlow Lite and then to TensorFlow Micro C-header files.
I can do the main conversions, but I have found little information about using tflite_convert for quantization.
I'm wondering if people could post working command-line examples. As far as I can tell we are encouraged to use Python to do the conversions, but I would prefer to stay on the command line.
I have summarized what I am working on here: https://github.com/hpssjellis/my-examples-for-the-arduino-portentaH7/tree/master/m09-Tensoflow/tfjs-convert-to-arduino-header.
This is what I have so far, and it works: it converts a saved TensorFlow.js model.json into a .pb file, which is converted to a .tflite and then to a C header that works on an Arduino-style microcontroller.
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras_saved_model ./model.json ./
tflite_convert --keras_model_file ./ --output_file ./model.tflite
xxd -i model.tflite model.h
But my files do not get any smaller when I try any quantization.
The tflite_convert command-line help at TensorFlow is not specific enough: https://www.tensorflow.org/lite/convert/cmdline
Here are some examples I have found using either tflite_convert or tensorflowjs_converter. Some seem to work on other people's models but do not seem to work on my own models:
tflite_convert --output_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/optimized_graph.tflite --graph_def_file=/home/wang/Downloads/deeplabv3_mnv2_pascal_train_aug/frozen_inference_graph.pb --inference_type=FLOAT --inference_input_type=QUANTIZED_UINT8 --input_arrays=ImageTensor --input_shapes=1,513,513,3 --output_arrays=SemanticPredictions --allow_custom_ops
tflite_convert --graph_def_file=<your_frozen_graph> \
--output_file=<your_chosen_output_location> \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--inference_type=QUANTIZED_UINT8 \
--output_arrays=<your_output_arrays> \
--input_arrays=<your_input_arrays> \
--mean_values=<mean of input training data> \
--std_dev_values=<standard deviation of input training data>
tflite_convert --graph_def_file=/tmp/frozen_cifarnet.pb \
--output_file=/tmp/quantized_cifarnet.tflite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--inference_type=QUANTIZED_UINT8 \
--output_arrays=CifarNet/Predictions/Softmax \
--input_arrays=input \
--mean_values 121 \
--std_dev_values 64
tflite_convert \
--graph_def_file=frozen_inference_graph.pb \
--output_file=new_graph.tflite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shape=1,600,600,3 \
--input_array=image_tensor \
--output_array=detection_boxes,detection_scores,detection_classes,num_detections \
--inference_type=QUANTIZED_UINT8 \
--inference_input_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_dev_values=127
tflite_convert --graph_def_file=~YOUR PATH~/yolov3-tiny.pb --output_file=~YOUR PATH~/yolov3-tiny.tflite --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --input_shape=1,416,416,3 --input_array=~YOUR INPUT NAME~ --output_array=~YOUR OUTPUT NAME~ --inference_type=FLOAT --input_data_type=FLOAT
tflite_convert \ --graph_def_file=built_graph/yolov2-tiny.pb \ --output_file=built_graph/yolov2_graph.lite \ --input_format=TENSORFLOW_GRAPHDEF \ --output_format=TFLITE \ --input_shape=1,416,416,3 \ --input_array=input \ --output_array=output \ --inference_type=FLOAT \ --input_data_type=FLOAT
tflite_convert --graph_def_file=frozen_inference_graph.pb --output_file=optimized_graph.lite --input_format=TENSORFLOW_GRAPHDEF --output_format=TFLITE --input_shape=1,1024,1024,3 --input_array=image_tensor --output_array=Softmax
tensorflowjs_converter --quantize_float16 --input_format=tf_hub 'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' ./
tensorflowjs_converter --control_flow_v2=True --input_format=tf_hub --quantize_uint8=* --strip_debug_ops=True --weight_shard_size_bytes=4194304 --output_node_names='Postprocessor/ExpandDims_1,Postprocessor/Slice' --signature_name 'serving_default' https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2 test
If anyone has working examples of quantization that they can explain, especially what is important to include and what is optional, that would be very helpful. I use Netron to visualize the models, so I should be able to see when a float input has been changed to an int8. A bit of an explanation would be helpful too.
I recently tried this set of commands, which compiled, but the quantized file was larger than the un-quantized file:
tensorflowjs_converter --input_format=tfjs_layers_model --output_format=keras_saved_model ./model.json ./saved_model
tflite_convert --keras_model_file ./saved_model --output_file ./model.tflite
xxd -i model.tflite model.h
tflite_convert --saved_model_dir=./saved_model \
--output_file=./model_int8.tflite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--inference_type=QUANTIZED_UINT8 \
--output_arrays=1,1 \
--input_arrays=1,2 \
--mean_value=128 \
--std_dev_value=127
xxd -i model_int8.tflite model_int8.h
The Python way is easy as well, and you can find official examples here:
https://www.tensorflow.org/lite/performance/post_training_quantization
There is an entire section on this. I think you didn't train the model yourself, so post-training quantization is what you are looking for.
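For example, post-training dynamic-range quantization from a SavedModel takes only a few lines of Python (a minimal sketch, assuming TF 2.x and the ./saved_model directory from your tensorflowjs_converter step; the output file name is just illustrative):
# Post-training dynamic-range quantization of a SavedModel (sketch).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # store weights as 8-bit
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
The resulting file can then be turned into a header with xxd -i model_quant.tflite model_quant.h, just like the float model.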
I'm trying to quantize the ssd_mobilenetv2_oidv4 model from the TensorFlow Object Detection model zoo, but after quantization the model stops working entirely.
To get the tflite graph, I ran
export_tflite_ssd_graph.py \
--pipeline_config_path=$CONFIG_FILE \
--trained_checkpoint_prefix=$CHECKPOINT_PATH \
--output_directory=$OUTPUT_DIR \
--add_postprocessing_op=true
Then to generate the tflite file, I ran
tflite_convert \
--graph_def_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_dev_values=128 \
--change_concat_input_ranges=false \
--allow_custom_ops \
--default_ranges_min=0 \
--default_ranges_max=6
Then I used this example Android app to test it. When I try running it, it just shows 10 bounding boxes that never move and that apparently detect a tortoise with 50% confidence. I'm not sure what all that means, but Tortoise is the first class in the label map, if that's relevant.
Anyone know what's going on?
Here's a screenshot of the quantized model in action:
I'm using MobileNetV2 and trying to get it working on Google Coral. Everything seems to work except the Coral web compiler, which throws a seemingly random error: Uncaught application failure. So I think the problem is in the intermediary steps required. For example, I'm using this with tflite_convert:
tflite_convert \
--graph_def_file=optimized_graph.pb \
--output_format=TFLITE \
--output_file=mobilenet_v2_new.tflite \
--inference_type=FLOAT \
--inference_input_type=FLOAT \
--input_arrays=input \
--output_arrays=final_result \
--input_shapes=1,224,224,3
What am I getting wrong?
This is most likely because your model is not quantized. Edge TPU devices do not currently support float-based model inference. For the best results, you should enable quantization during training (described in the link). However, you can also apply quantization during TensorFlow Lite conversion.
With post-training quantization, you sacrifice accuracy but can test something out more quickly. When you convert your graph to TensorFlow Lite format, set inference_type to QUANTIZED_UINT8. You'll also need to apply the quantization parameters (mean/range/std_dev) on the command line as well.
tflite_convert \
--graph_def_file=optimized_graph.pb \
--output_format=TFLITE \
--output_file=mobilenet_v2_new.tflite \
--inference_type=QUANTIZED_UINT8 \
--input_arrays=input \
--output_arrays=final_result \
--input_shapes=1,224,224,3 \
--mean_values=128 --std_dev_values=127 \
--default_ranges_min=0 --default_ranges_max=255
You can then pass the quantized .tflite file to the model compiler.
For more details on the Edge TPU model requirements, check out TensorFlow models on the Edge TPU.
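If you can use the Python converter instead of tflite_convert, the post-training full-integer path looks roughly like this (a sketch, not the exact commands from this answer; it assumes TF 2.x, a SavedModel export of your MobileNetV2, and a representative_data generator you supply, since the Edge TPU needs every tensor, including input and output, to be 8-bit; the 224x224x3 shape comes from the question above):
# Post-training full-integer quantization aimed at the Edge TPU (sketch;
# the paths, generator, and sample count are assumptions).
import tensorflow as tf

def representative_data():
    # Replace with ~100 real, preprocessed input images from your dataset.
    for _ in range(100):
        yield [tf.random.uniform([1, 224, 224, 3], 0.0, 1.0)]

converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # integer input for the Edge TPU
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

with open("mobilenet_v2_quant.tflite", "wb") as f:
    f.write(tflite_model)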
Disclaimer: this is my first time trying machine learning!
We have a requirement for automatic segmentation of objects in an image from the background. Through the internet we found that DeepLab would solve our purpose. We downloaded DeepLab from its official site and followed all the instructions mentioned there. We trained on the pascal_voc_2012 dataset with the command below:
python deeplab/train.py \
--logtostderr \
--training_number_of_steps=30000 \
--train_split="train" \
--model_variant="xception_65" \
--atrous_rates=6 \
--atrous_rates=12 \
--atrous_rates=18 \
--output_stride=16 \
--decoder_output_stride=4 \
--train_crop_size=513 \
--train_crop_size=513 \
--train_batch_size=1 \
--dataset="pascal_voc_seg" \
--tf_initial_checkpoint=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/checkpoint \
--train_logdir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train \
--dataset_dir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/tfrecord
Training finished after 50 hours. Then I started the evaluation using the command below:
python deeplab/eval.py \
--logtostderr \
--eval_split="val" \
--model_variant="xception_65" \
--atrous_rates=6 \
--atrous_rates=12 \
--atrous_rates=18 \
--output_stride=16 \
--decoder_output_stride=4 \
--eval_crop_size=513 \
--eval_crop_size=513 \
--dataset="pascal_voc_seg" \
--checkpoint_dir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train/ \
--eval_logdir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/eval/ \
--dataset_dir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/tfrecord
After executing the above command, it found one checkpoint correctly, but after that it just stays at this message:
"Waiting for checkpoint at
home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train/"
So I terminated the eval run after 2 hours and started the visualization with the command below:
python deeplab/vis.py \
--logtostderr \
--vis_split="val" \
--model_variant="xception_65" \
--atrous_rates=6 \
--atrous_rates=12 \
--atrous_rates=18 \
--output_stride=16 \
--decoder_output_stride=4 \
--vis_crop_size=513 \
--vis_crop_size=513 \
--dataset="pascal_voc_seg" \
--checkpoint_dir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train/ \
--vis_logdir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/vis/ \
--dataset_dir=/home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/tfrecord/
Visualization also executed for one checkpoint and then again showed the same message as eval:
"Waiting for checkpoint at
home/ktpl13/Desktop/models-master/research/deeplab/datasets/pascal_voc_seg/exp/train_on_train_set/train/"
Again I terminated the vis run. There is a folder generated under vis named "segmentation_results" which contains a "prediction.png" for each input image, and every prediction is a completely black image.
Now my questions are:
Did my evaluation and visualization actually finish, or am I doing something wrong?
Why are the predicted images all black?
About the eval waiting for another checkpoint: it's because by default it expects to run alongside the train process. To run the eval script only once, after training, add this flag to your eval command:
--max_number_of_evaluations=1
You can then view the resulting value using TensorBoard.
The vis script appears to be running correctly, as it is saving images to the directory. The all-black images are a different problem (e.g. dataset configuration, label weights, colormap removal, etc.).
For future reference, I ran into the same problem. After I found out what happened I laughed so hard.
Both eval and vis ran as expected.
For eval, right above your output of "waiting for checkpoints," there should be a line that says "miou [your model accuracy here]". It is a tiny line and easy to miss.
For vis, you will find your segmented result in the vis logdir you provided in your vis command.
More in depth: both eval and vis have successfully analyzed the network you trained, and as a feature they wait for more checkpoints in case you decide to train more networks to compare.
Hi, any idea how to convert a retrained image classifier for use with TensorFlow.js? The classifier comes from
https://www.tensorflow.org/hub/tutorials/image_retraining
mkdir ~/example_code
cd ~/example_code
curl -LO https://github.com/tensorflow/hub/raw/r0.1/examples/image_retraining/retrain.py
python retrain.py --image_dir ~/flower_photos
I try to convert the model using tensorflowjs_converter:
https://github.com/tensorflow/tfjs-converter
tensorflowjs_converter \
--input_format=tf_frozen_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
--saved_model_tags=serve \
/tmp/output_graph.pb \
/tmp/web_model
I get this error:
"graph." % repr(name))
KeyError: "The name 'MobilenetV1/Predictions/Reshape_1' refers to an Operation not in the graph."
It also fails for a MobileNet V1 model generated using this command:
python retrain.py \
--image_dir ~/flower_photos \
--tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/feature_vector/1
thanks
@Mustafa I think you're giving the wrong value in --output_node_names. Try going through the model using TensorBoard and you will find the value that has to be given here; it should be something like final_result (that is what it is in my case).
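If you want to check the node names without TensorBoard, a short Python sketch can list them from the frozen graph (assuming TensorFlow is installed and /tmp/output_graph.pb is the graph written by retrain.py, as used in the tensorflowjs_converter command above; adjust the path if yours differs):
# List every operation name in a frozen graph to find the real output node
# (for retrain.py it is usually "final_result", not the MobilenetV1 name).
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("/tmp/output_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    print(node.name)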