Retrain an Image Classifier in TensorFlow.js - tensorflow

Hi, any idea how to convert a retrained image classifier for use with TensorFlow.js?
I retrained the model following
https://www.tensorflow.org/hub/tutorials/image_retraining
mkdir ~/example_code
cd ~/example_code
curl -LO https://github.com/tensorflow/hub/raw/r0.1/examples/image_retraining/retrain.py
python retrain.py --image_dir ~/flower_photos
Then I try to convert the model using tensorflowjs_converter
https://github.com/tensorflow/tfjs-converter
tensorflowjs_converter \
--input_format=tf_frozen_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
--saved_model_tags=serve \
/tmp/output_graph.pb \
/tmp/web_model
I get this error:
"graph." % repr(name))
KeyError: "The name 'MobilenetV1/Predictions/Reshape_1' refers to an Operation not in the graph."
It also fails for a MobileNet V1 model generated using this command:
python retrain.py \
--image_dir ~/flower_photos \
--tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/feature_vector/1
Thanks.

@Mustafa I think you're giving the wrong value in --output_node_names. Try going through the model using TensorBoard and you will find the value that has to be given here; it should be something like final_result (that's what it is in my case).

Related

Use non-quantized DeepLab model on Coral board

I use a quantized model of DeepLab v3 on the Coral Dev Board. The result is not really precise, so I want to use a non-quantized model.
I can't find a tflite non-quantized model of DeepLab, so I want to generate it myself.
I downloaded an xception65_coco_voc_trainval model from the TensorFlow GitHub:
xception65_coco_voc_trainval
Then I use this command to transform the input:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph="/path/to/xception65_coco_voc_trainval.pb" \
--out_graph="/path/to/xception65_coco_voc_trainval_flatten.pb" \
--inputs='ImageTensor' \
--outputs='SemanticPredictions' \
--transforms='
strip_unused_nodes(type=quint8, shape="1,513,513,3")
flatten_atrous_conv
fold_constants(ignore_errors=true, clear_output_shapes=false)
fold_batch_norms
fold_old_batch_norms
remove_device
sort_by_execution_order'
Then I generate the tflite file with this command:
tflite_convert \
--graph_def_file="/tmp/deeplab_mobilenet_v2_opt_flatten_static.pb" \
--output_file="/tmp/deeplab_mobilenet_v2_opt_flatten_static.tflite" \
--output_format=TFLITE \
--input_shape=1,513,513,3 \
--input_arrays="ImageTensor" \
--inference_type=FLOAT \
--inference_input_type=QUANTIZED_UINT8 \
--std_dev_values=128 \
--mean_values=128 \
--change_concat_input_ranges=true \
--output_arrays="SemanticPredictions" \
--allow_custom_ops
This command generates a tflite file for me. I download the file onto my Coral Dev Board and try to run it.
I use this example on GitHub to try my DeepLab model on Coral:
deeplab on coral
When I start the program there is an error:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted
The error comes from this line:
engine = BasicEngine(args.model)

TensorFlow OID4 MobileNet model not quantizing correctly

I'm trying to quantize the ssd_mobilenetv2_oidv4 model from the TensorFlow object detection model zoo, but after quantization the model stops working entirely.
To get the tflite graph, I ran
export_tflite_ssd_graph.py \
--pipeline_config_path=$CONFIG_FILE \
--trained_checkpoint_prefix=$CHECKPOINT_PATH \
--output_directory=$OUTPUT_DIR \
--add_postprocessing_op=true
Then to generate the tflite file, I ran
tflite_convert \
--graph_def_file=$OUTPUT_DIR/tflite_graph.pb \
--output_file=$OUTPUT_DIR/detect.tflite \
--input_shapes=1,300,300,3 \
--input_arrays=normalized_input_image_tensor \
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
--inference_type=QUANTIZED_UINT8 \
--mean_values=128 \
--std_dev_values=128 \
--change_concat_input_ranges=false \
--allow_custom_ops \
--default_ranges_min=0 \
--default_ranges_max=6
Then I used this example Android app to test it. When I try running it, it just shows 10 bounding boxes that never move and that are apparently detecting a tortoise with 50% confidence. I'm not sure what all that means, but Tortoise is the first class in the label map, if that's relevant.
Anyone know what's going on?
Here's a screenshot of the quantized model in action:

Convert frozen graph to TFLite for Coral using tflite_convert

I'm using MobileNetV2 and trying to get it working for Google Coral. Everything seems to work except the Coral Web Compiler, which throws a random error, Uncaught application failure. So I think the problem is in the intermediary steps required. For example, I'm using this with tflite_convert:
tflite_convert \
--graph_def_file=optimized_graph.pb \
--output_format=TFLITE \
--output_file=mobilenet_v2_new.tflite \
--inference_type=FLOAT \
--inference_input_type=FLOAT \
--input_arrays=input \
--output_arrays=final_result \
--input_shapes=1,224,224,3
What am I getting wrong?
This is most likely because your model is not quantized. Edge TPU devices do not currently support float-based model inference. For the best results, you should enable quantization during training (described in the link). However, you can also apply quantization during TensorFlow Lite conversion.
With post-training quantization, you sacrifice accuracy but can test something out more quickly. When you convert your graph to TensorFlow Lite format, set inference_type to QUANTIZED_UINT8. You'll also need to supply the quantization parameters (mean/range/std_dev) on the command line.
tflite_convert \
--graph_def_file=optimized_graph.pb \
--output_format=TFLITE \
--output_file=mobilenet_v2_new.tflite \
--inference_type=QUANTIZED_UINT8 \
--input_arrays=input \
--output_arrays=final_result \
--input_shapes=1,224,224,3 \
--mean_values=128 --std_dev_values=127 \
--default_ranges_min=0 --default_ranges_max=255
You can then pass the quantized .tflite file to the model compiler.
For more details on the Edge TPU model requirements, check out TensorFlow models on the Edge TPU.

AttributeError: Flag --trained_checkpoint_prefix must be specified

I'm trying to use an object_detection model with TF 1.4. I realize that there is no longer a training directory containing the ckpt files, as there was in previous TF versions, so any suggestions on where the new location is, or should I use the ones from the old versions?
I'm using the command below:
python3 export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path samples/configs/faster_rcnn_resnet101_coco.config \
--trained_checkpoint_prefix model.ckpt \
--output_directory inference
You can download the full model sets, including the .ckpt files, from this link.

TensorFlow Inception FeedInputs: unable to find feed output input

I tried the Inception retraining tutorial on the TensorFlow site:
https://www.tensorflow.org/versions/r0.12/how_tos/image_retraining/
The bazel build completes successfully, but when I try to predict an image class with this command:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg
I get this error:
tensorflow/examples/label_image/main.cc:305] Running model failed: Not found: FeedInputs: unable to find feed output input
How can I solve this problem?
This thread helped me to fix this issue.
It seems that we need to provide --input_layer with TensorFlow 1.0+.
In your case, this should fix the problem:
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result \
--image=$HOME/flower_photos/daisy/21652746_cc379e0eea_m.jpg \
--input_layer=Mul
Are you using TensorFlow 1.0+? I had the same issue, but switching over to an earlier version (I used 0.12.0) resolved it. It must be something in the 1.0.0 update that broke the tutorial.