Following the tutorial on TensorFlow for Poets (Android) (https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/#0), I am attempting to use the Inception model instead of MobileNet.
I am trying to strip the DecodeJpeg op from the retrained model using strip_unused.py, but encountered the following error.
Error:
/home/user/tensorflow/bin/python: Error while finding spec for 'tensorflow.python.tools.strip_unused.py' (AttributeError: module 'tensorflow.python.tools.strip_unused' has no attribute '__path__')
Command line:
python -m tensorflow.python.tools.strip_unused.py --input_graph=tf_files/retrained_graph.pb --output_graph=tf_files/stripped_graph.pb --input_node_names="Mul" --output_node_names="final_result" --input_binary=true
Machine:
Ubuntu 16.04 LTS
Python 3.5.2
TensorFlow 1.4.1
Any assistance is greatly appreciated. Thanks!
This might be due to a typo: the .py file extension was included in the module path. Running the module without the extension seemed to work:
python -m tensorflow.python.tools.strip_unused --input_graph=tf_files/retrained_graph.pb --output_graph=tf_files/stripped_graph.pb --input_node_names="Mul" --output_node_names="final_result" --input_binary=true
Result: 997 ops in the final graph.
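To sanity-check the stripped graph, here is a minimal sketch using the TF 1.x API (the node names are the ones passed to strip_unused above):
import tensorflow as tf

# load the stripped GraphDef and confirm the expected nodes survived
graph_def = tf.GraphDef()
with tf.gfile.GFile('tf_files/stripped_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

node_names = {node.name for node in graph_def.node}
print('Mul' in node_names, 'final_result' in node_names)  # both should be True
print(len(graph_def.node), 'ops in the final graph')      # expect 997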
Related
I have trained an object detection model in TensorFlow.
My environment:
tf version == 1.15, network == ssd mobilenet v2
Now I want to convert my saved_model (.pb) file to tfjs (.json) format.
I followed the steps below:
pip install tensorflowjs==0.8.6 # not sure if it's compatible with tf version 1.15
Command:
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve exported_path/saved_model exported_path/web_model_path
Error: AttributeError: module 'keras_applications' has no attribute 'set_keras_submodules'
Then I downgraded the keras_applications version.
Now I am getting this error:
usage: TensorFlow.js model converters. [-h]
[--input_format {keras,tf_session_bundle,keras_saved_model,tf_hub,tf_saved_model,tensorflowjs,tf_frozen_model}]
[--output_format {keras,tensorflowjs}]
[--output_node_names OUTPUT_NODE_NAMES]
[--signature_name SIGNATURE_NAME]
[--saved_model_tags SAVED_MODEL_TAGS]
[--quantization_bytes {1,2}]
[--split_weights_by_layer] [--version]
[--skip_op_check SKIP_OP_CHECK]
[--strip_debug_ops STRIP_DEBUG_OPS]
[--output_json OUTPUT_JSON]
[input_path] [output_path]
TensorFlow.js model converters.: error: argument --output_format: invalid choice: 'tfjs_graph_model' (choose from 'keras', 'tensorflowjs')
So there is no tfjs_graph_model option for --output_format.
When I run pip install tensorflowjs without pinning a version, it installs tensorflowjs==3.3.0, which uninstalls my current TF 1.15 and installs a new TF 2.x version, something I need to avoid at all costs.
Can somebody please guide me on how to convert the saved_model to tfjs format with tensorflow==1.15?
Thanks in advance.
Could you please upgrade TensorFlow to the latest 2.x release (stable versions 2.6/2.7) and let us know if this issue persists? TF 1.x is no longer supported, which is the likely cause of these errors.
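For reference, if you do upgrade, the original command should work unchanged, since tfjs_graph_model is a supported --output_format in recent tensorflowjs releases (a sketch, not pinned to an exact version):
pip install --upgrade tensorflow tensorflowjs
tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve exported_path/saved_model exported_path/web_model_path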
I trained a custom CNN model using Keras with TensorFlow 2.2.0 as the backend. After that, I saved the model as a cp.ckpt folder containing assets, variables, and a .pb file in it. To convert it into IR, the OpenVINO documentation says to use the following command:
To convert such TensorFlow model:
Go to the <INSTALL_DIR>/deployment_tools/model_optimizer directory
Run the mo_tf.py script with a path to the SavedModel directory to convert a model:
python3 mo_tf.py --saved_model_dir <SAVED_MODEL_DIRECTORY>
So I went to that directory and tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt
There is no output of any kind, and no error either.
Also, I tried the following command:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model\cp.ckpt --output_dir C:\Users\vyas\Desktop\out
Still there is no output.
I am using tensorflow 2.2.0.
Can someone please help me with this?
--saved_model_dir must provide a path to the SavedModel directory.
Modify your command as follows:
python3 mo_tf.py --saved_model_dir C:\Users\vyas\Desktop\saved_model
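To double-check that the directory really is a SavedModel before running the Model Optimizer, you can inspect it with TensorFlow's saved_model_cli (a quick sanity check; this tool ships with TensorFlow, not with OpenVINO):
saved_model_cli show --dir C:\Users\vyas\Desktop\saved_model --all
This should print the serving signatures, input shapes, and output names; if it errors out, the directory is not a valid SavedModel.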
Does anyone know if TensorFlow Lite has GPU support for Python? I've seen guides for Android and iOS, but I haven't come across anything for Python. If tensorflow-gpu is installed and tensorflow.lite.python.interpreter is imported, will the GPU be used automatically?
According to this thread, it is not.
One solution is to convert the tflite model to ONNX and use onnxruntime-gpu.
Convert to ONNX with https://github.com/onnx/tensorflow-onnx:
pip install tf2onnx
python3 -m tf2onnx.convert --opset 11 --tflite path/to/model.tflite --output path/to/model.onnx
Then pip install onnxruntime-gpu
and run it like:
import onnxruntime

session = onnxruntime.InferenceSession('/path/to/model.onnx')
raw_output = session.run(['output_name'], {'input_name': img})
You can get the input names by:
for i in range(len(session.get_inputs())):
    print(session.get_inputs()[i].name)
and the output names the same way, replacing 'get_inputs' with 'get_outputs'.
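One caveat, assuming a fairly recent onnxruntime-gpu (roughly 1.9 or later): you may need to request the CUDA provider explicitly, otherwise the session can fall back to CPU:
import onnxruntime

# ask for CUDA first, with CPU as a fallback
session = onnxruntime.InferenceSession(
    '/path/to/model.onnx',
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
print(session.get_providers())  # confirm CUDAExecutionProvider is listed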
You can force the computation to take place on a GPU:
import numpy as np
import tensorflow as tf

with tf.device('/gpu:0'):
    for i in range(10):
        t = np.random.randint(len(x_test))  # x_test comes from your own data
        ...
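To first confirm that TensorFlow can see a GPU at all, a quick check (TF 2.x API; on TF 1.x use tf.test.is_gpu_available() instead):
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # an empty list means no GPU is visible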
Hope this helps.
I am trying to train a model on Google Cloud ML Engine with this command. I installed TensorFlow with Anaconda, but while training the model, this error appears:
ImportError: No module named Cython.Build
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-install-0eA9cj/pycocotools/
I also added this line to setup.py:
REQUIRED_PACKAGES = ['Pillow>=1.0', 'Matplotlib>=2.1', 'Cython>=0.28', 'pycocotools>=2.0.0']
But the problem is not solved. Does anyone have an idea how to solve it?
Thanks in advance...
I had the same error. This is what worked for me: I added an extra --packages flag to my gcloud command, pointing at packages I had downloaded from the pip web repository as archives:
--packages your-folder/Cython-0.28.1.tar.gz,your-folder/pycocotools-2.0.0.tar.gz --packages <your other object-detection and slim packages here>
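A hedged sketch of what the full submit command can look like with those flags (the job name, module, bucket, and region are placeholders):
gcloud ml-engine jobs submit training my_job \
    --module-name object_detection.train \
    --region us-central1 \
    --job-dir gs://your-bucket/train \
    --packages your-folder/Cython-0.28.1.tar.gz,your-folder/pycocotools-2.0.0.tar.gz \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz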
I am learning TensorFlow from this blog:
http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/
The code I am running is:
https://github.com/dennybritz/cnn-text-classification-tf/blob/master/train.py
I have installed TensorFlow from source in a virtual environment (CPU-only) using the following bazel build command: bazel build --config=mkl ...
Here is the exact error:
"2018-01-16 03:15:27.783040: F tensorflow/core/kernels/mkl_maxpooling_op.cc:157] Check failed: dnnPoolingCreateForward_F32( &prim_pooling_fwd, primAttr, algorithm, lt_user_input, params.kernel_size, params.kernel_stride, params.in_offset, dnnBorderZerosAsymm) == E_SUCCESS (-127 vs. 0)
Aborted
"
I have debugged the error to the line where sess.run is called. I believe it has something to do with mkl_maxpooling, as I had installed TensorFlow with the MKL optimizations for Intel CPUs.
Given below are the steps that I followed:
Built tensorflow 1.4 from source with MKL as mentioned in the question
Cloned the git repo "https://github.com/dennybritz/cnn-text-classification-tf.git"
Ran "python train.py" from the "cnn-text-classification-tf" directory (created by the git clone)
The code ran without any errors, so it seems like your TensorFlow was not properly built from source. Please confirm that there were no errors while building TensorFlow from source.
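For reference, the usual from-source build-and-install sequence for TF 1.x with MKL looks roughly like this (a sketch, per the TF 1.x build docs; adjust paths to your checkout):
bazel build --config=mkl --config=opt //tensorflow/tools/pip_package:build_pip_package
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl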