'tflite_convert' is not recognized as an internal or external command (in windows) - tensorflow

I'm trying to convert my saved_model.pb file (from the Object Detection API) to .tflite for ML Kit, but when I execute the command in cmd:
tflite_convert \
--output_file=/saved_model/maonani.tflite \
--saved_model_dir=/saved_model/saved_model
I get a response saying:
C:\Users\LENOVO-PC\tensorflow> tflite_convert \ --output_file=/saved_model/maonani.tflite \ --saved_model_dir=/saved_model/saved_model
'tflite_convert' is not recognized as an internal or external command,
operable program or batch file.
What should I do to make this work?

Below is an answer for a Linux system; hope it gives you ideas for Windows.
locate tflite_convert
If the output is
<TFLITE_CONVERT_PATH>/tflite_convert
where <TFLITE_CONVERT_PATH> depends on your TF installation, execute (2):
<TFLITE_CONVERT_PATH>/tflite_convert \
--output_file=/saved_model/maonani.tflite \
--saved_model_dir=/saved_model/saved_model
If the output is not found, reinstall TF; in my case it is TF v1.9.
If (2) works, you might add PATH=$PATH:<TFLITE_CONVERT_PATH>/bin to your ~/.bashrc.
If the "locate" command is not recognized, install it with
sudo apt-get install locate
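On Windows the same idea applies, though the tooling differs. A rough sketch (the Scripts path below is an assumption; adjust it to your Python installation): find the tflite_convert.exe that pip installed into your environment's Scripts folder, put that folder on PATH, and keep the command on one line, since cmd does not treat a trailing \ as a line continuation.
where tflite_convert
set PATH=%PATH%;C:\Users\LENOVO-PC\AppData\Local\Programs\Python\Python36\Scripts
tflite_convert --output_file=saved_model/maonani.tflite --saved_model_dir=saved_model/saved_model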

Related

Unable to run training command for LaBSE

I am trying to reproduce the fine-tuning stage of the LaBSE model (https://github.com/tensorflow/models/tree/master/official/projects/labse).
I have cloned the tensorflow/models repository and set the environment path as follows.
%env PYTHONPATH='/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models'
!echo $PYTHONPATH
output: '/env/python:/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models'
I installed the prerequisites:
!pip install -U tensorflow
!pip install -U "tensorflow-text==2.10.*"
!pip install tf-models-official
Then I try to run the LaBSE training command from the README.
python3 /content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py \
--experiment=labse/train \
--config_file=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/experiments/labse_bert_base.yaml \
--config_file=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/experiments/labse_base.yaml \
--params_override=${PARAMS} \
--model_dir=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models \
--mode=train_and_eval
Issue
I get the following error.
File "/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py", line 23, in
from official.projects.labse import config_labse
ModuleNotFoundError: No module named 'official.projects.labse'
The import statement from official.projects.labse import config_labse fails.
System information
I executed this on Colab as well as on a GPU machine; in both environments I get the same error.
I need to know why the import statement failed and what corrective action should be taken for this.
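One hedged observation, not from the original thread: the echoed value above still contains the single quotes, which suggests %env stored them as part of the literal PYTHONPATH, so Python is searching a path that does not exist. A minimal sketch of a workaround is to set the variable inline and unquoted when launching the script (paths taken from the question):
!PYTHONPATH=/content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models \
python3 /content/drive/MyDrive/Colab_Notebooks/p3_sentence_similiarity/models/official/projects/labse/train.py \
--experiment=labse/train \
... (remaining flags as in the question)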

Invalid argument --model_config_file_poll_wait_seconds

I'm trying to start tensorflow-serving with the following two options, as shown in the documentation:
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
The container does not start because it does not recognize the argument --model_config_file_poll_wait_seconds.
unknown argument: --model_config_file_poll_wait_seconds=60
usage: tensorflow_model_server
I'm on the latest Docker image (1.14.0), and the line is taken straight from the documentation:
https://www.tensorflow.org/tfx/serving/serving_config
Does this argument even work?
Many thanks.
It seems https://www.tensorflow.org/tfx/serving/serving_config is talking about code that has not been released as a new version yet, which is odd. I will ask about that.
That page is generated from this source:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/serving_config.md, which mentions the --model_config_file_poll_wait_seconds flag.
However, the same document for 1.14.0 has no mention of the flag:
https://github.com/tensorflow/serving/blob/1.14.0/tensorflow_serving/g3doc/serving_config.md
Try using the nightly tensorflow serving image and see if it works.
docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving:nightly \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
Just tried: TensorFlow Serving 2.1.0 supports it, while 1.14.0 doesn't.
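For reference, a minimal models.config of the kind that flag re-polls (a sketch; the model name and base path are placeholders, not from the thread):
model_config_list {
  config {
    name: 'my_model'
    base_path: '/models/my_model'
    model_platform: 'tensorflow'
  }
}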

Exporting a Trained Inference Graph TENSORFLOW

I have trained my custom model and want to export a trained inference graph.
I ran the following command:
INPUT_TYPE=image_tensor
PIPELINE_CONFIG_PATH=training/ssd_mobilenet_v1_pets.config
TRAINED_CKPT_PREFIX=training/model.ckpt-2509
EXPORT_DIR=training/new_model
python exporter.py \
--input_type=${INPUT_TYPE} \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
--output_directory=${EXPORT_DIR}
And I got the following output
W0819 22:08:54.649750 2680 deprecation_wrapper.py:119] From C:\Users\Aleksej\Anaconda3\envs\cocosynth4\lib\site-packages\object_detection-0.1-py3.6.egg\nets\mobilenet\mobilenet.py:397: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.
(cocosynth4) D:\yolo\models\research\object_detection> --input_type=${INPUT_TYPE} \
'--input_type' is not recognized as an internal or external command,
operable program or batch file.
(cocosynth4) D:\yolo\models\research\object_detection> --pipeline_config_path=${PIPELINE_CONFIG_PATH} \
'--pipeline_config_path' is not recognized as an internal or external command,
operable program or batch file.
(cocosynth4) D:\yolo\models\research\object_detection> --trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
'--trained_checkpoint_prefix' is not recognized as an internal or external command,
operable program or batch file.
(cocosynth4) D:\yolo\models\research\object_detection> --output_directory=${EXPORT_DIR}
'--output_directory' is not recognized as an internal or external command,
operable program or batch file.
I am running Windows 10 and Python 3.
Does anyone have any suggestions on how to solve this issue?
Fixed with this command. Windows cmd does not support \-style line continuations or ${VAR} expansion, so each continued line above was being executed as a separate command; putting everything on one line with literal paths avoids that:
python export_inference_graph.py --input_type image_tensor --pipeline_config_path training/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix training/model.ckpt-3970 --output_directory ships_inference_graph
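If you want to keep the variables and the multi-line layout under cmd, here is a sketch using cmd's own syntax (^ continuations and %VAR% expansion; the checkpoint number is taken from the answer above):
set INPUT_TYPE=image_tensor
set PIPELINE_CONFIG_PATH=training/ssd_mobilenet_v1_pets.config
set TRAINED_CKPT_PREFIX=training/model.ckpt-3970
set EXPORT_DIR=ships_inference_graph
python export_inference_graph.py ^
  --input_type=%INPUT_TYPE% ^
  --pipeline_config_path=%PIPELINE_CONFIG_PATH% ^
  --trained_checkpoint_prefix=%TRAINED_CKPT_PREFIX% ^
  --output_directory=%EXPORT_DIR%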

Facing error while running Docker on Tensorflow serving image

I have trained a TensorFlow Object Detection model. I am trying to make a REST request using the TensorFlow Serving image on Docker (following the instructions from https://github.com/tensorflow/serving).
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/"
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/my_model:/models/work_place_safety" \
-e MODEL_NAME=work_place_safety \
tensorflow/serving &
I am facing the below error message:
$ C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/Users/Desktop/models/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_work_place_safety;C"
doesn't exist and is not known to Docker.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I wonder why it's including ";C" at the end of the source path and throwing an error.
Any Help is much appreciated.
Thanks
Resolved the issue by adding a / before $ in Git Bash. (Git Bash's MSYS layer rewrites POSIX-style paths into Windows paths before handing them to Docker, which is what produced the mangled path ending in ";C"; the extra leading / suppresses that conversion.)
docker run -t --rm -p 8501:8501 \
-v /$TESTDATA/my_model:/models/my_model \
-e MODEL_NAME=my_model \
tensorflow/serving &
What is the value of my_model? Is it saved_model_work_place_safety?
Are you sure that your saved Object Detection model is in the folder saved_model_work_place_safety, and that this folder is under the path $(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/?
If it is not inside testdata, you should specify the correct path where saved_model_work_place_safety is present.
The folder structure should be something like this =>
saved_model_work_place_safety => a numeric version folder (e.g. 00000123, 1556272508, or 1) => the .pb file and the variables folder, as sketched below.
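Concretely, the mounted directory should look something like this (the version number 1 and the variables file names follow the standard SavedModel layout and are not taken from the thread):
saved_model_work_place_safety/
    1/
        saved_model.pb
        variables/
            variables.data-00000-of-00001
            variables.index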

Freeze TensorFlow graph to use in iOS app

I have the below files:
1. retrained_graph.pb
2. retrained_labels.txt
3. _retrain_checkpoint.meta
4. _retrain_checkpoint.index
5. _retrain_checkpoint.data-00000-of-00001
6. checkpoint
Command Executed:
python freeze_graph.py \
--input_graph=/Users/saurav/Desktop/example/tmp/retrained_graph.pb \
--input_checkpoint=./_retrain_checkpoint \
--output_graph=/Users/saurav/Desktop/example/tmp/frozen_graph.pb \
--output_node_names=softmax
Getting error:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 44: invalid start byte
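An aside, not from the original thread: this UnicodeDecodeError is typical of freeze_graph.py reading a binary GraphDef as text. If retrained_graph.pb is in binary format, passing --input_binary=true to the same command may resolve the error without the bazel rebuild described below:
python freeze_graph.py \
--input_graph=/Users/saurav/Desktop/example/tmp/retrained_graph.pb \
--input_binary=true \
--input_checkpoint=./_retrain_checkpoint \
--output_graph=/Users/saurav/Desktop/example/tmp/frozen_graph.pb \
--output_node_names=softmax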
Finally I found the answer. To freeze a graph you need to build freeze_graph with bazel.
1. Install bazel using Homebrew: brew install bazel
2. If you don't have Homebrew, install it first:
/usr/bin/ruby -e "$(curl -fsSL \
https://raw.githubusercontent.com/Homebrew/install/master/install)"
Clone tensorflow with git clone https://github.com/tensorflow/tensorflow
Change directory to tensorflow in the terminal.
Run ./configure. It asks a few questions; answer according to your needs (for most of them you can answer "No"). When it asks for the default path to Python, specify the path or just hit Enter.
Now build freeze_graph with bazel:
bazel build tensorflow/python/tools:freeze_graph
Keep the retrained graph and checkpoints in a folder, then run the bazel-built binary to freeze the graph:
bazel-bin/tensorflow/python/tools/freeze_graph \
--input_graph=YourDirectory/retrained_graph.pb \
--input_checkpoint=YourDirectory/_retrain_checkpoint \
--output_graph=YourDirectory/frozen_graph.pb \
--output_node_names=softmax