I'm profiling my model in TensorFlow 2.0. I'm able to capture a profile and view it in TensorBoard by following the instructions for tracing a function, using tf.summary.trace_on and tf.summary.trace_export:
https://www.tensorflow.org/tensorboard/r2/graphs
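For reference, a minimal sketch of that tracing flow, assuming TF 2.0's tf.summary API (the log directory and the traced function here are placeholders):

import tensorflow as tf

writer = tf.summary.create_file_writer('logdir')

@tf.function
def step(x):
    # Stand-in for one forward pass of the real model.
    return tf.nn.relu(tf.matmul(x, x))

tf.summary.trace_on(graph=True, profiler=True)
step(tf.random.normal([16, 16]))
with writer.as_default():
    tf.summary.trace_export(name='model_trace', step=0, profiler_outdir='logdir')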
Now I'd like to create a summary by operation type, similar to the node type summary in the tflite profiler (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/tools/benchmark).
TensorFlow 1.14's tf.profiler.profile looks to be able to do this with the 'cmd' option:
https://www.tensorflow.org/api_docs/python/tf/profiler/profile
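For context, a hedged sketch of that TF 1.x call, with a toy graph standing in for the real model; cmd='op' is what aggregates the stats per operation type:

import tensorflow as tf  # TF 1.14

x = tf.random_normal([32, 32])
y = tf.matmul(x, x)

run_meta = tf.RunMetadata()
with tf.Session() as sess:
    sess.run(y,
             options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
             run_metadata=run_meta)
    # Group the profile by op type, reporting time and memory.
    tf.profiler.profile(sess.graph,
                        run_meta=run_meta,
                        cmd='op',
                        options=tf.profiler.ProfileOptionBuilder.time_and_memory())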
How do I do this with TensorFlow 2.0?
While converting a model to tflite, I get this error:
"""
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CONV_2D, MAX_POOL_2D, MUL, RELU, SOFTMAX, SQUEEZE, SUB. Here is a list of operators for which you will need custom implementations: AdjustContrastv2, AdjustHue, AdjustSaturation, RandomUniform.
"""
How to resolve this?
tensorflow version: 1.13.1
You can use TF ops directly in TF Lite by enabling Select TF Ops.
I've confirmed that AdjustContrastv2, AdjustHue, AdjustSaturation are available via FlexDelegate.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/flex/allowlisted_flex_ops.cc#L35
To use this feature, you need TF 2.4 or higher. Since TF 2.4 is not released yet, you need to use the tf-nightly release.
FYI, regarding migrating from TF1 to TF2, please check https://www.tensorflow.org/guide/migrate
You may try adding the following lines to specify that your model can use ops from both the TF Lite builtins and TF:
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
Or, better, rewrite the ops that are not supported by the TF Lite builtins in terms of ops that are.
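For context, a hedged end-to-end sketch of the conversion with those flags set (the SavedModel path and file names are placeholders):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # ops with native TF Lite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels (Flex delegate)
]
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)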
We have trained a model in Google Cloud AutoML (a tool that we like a lot) and successfully exported it to GCS, and then created the model in BigQuery using the below command:
create or replace model my_dataset.my_bq_ml_model
options(model_type='tensorflow',
        model_path='my gcs path to exported tensorflow model')
However, when we use BigQuery ML to try to run some predictions using the model, we are unsure of how to format the multiple features that our model uses into the single "inputs" string that the exported TensorFlow model accepts in BigQuery.
select *
from ml.predict(model my_project.my_dataset.my_bq_ml_model,
  (
    select 'How do we format this?' as inputs
    from my_rows_to_predict
  ))
Has anyone done this yet?
This is similar to this question, which remains open:
Multi-column input to ML.PREDICT for a TensorFlow model in BigQuery ML
Thank you all.
After you load the model into BigQuery ML, click on the model in the BigQuery UI and switch over to the "Schema" tab. This should tell you what columns the model wants.
Alternatively, run the saved_model_cli program on the model (it's a Python program that comes with TensorFlow) to see what the supported signature is:
saved_model_cli show --dir $export_path --all
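If you'd rather stay in Python, a small sketch that reads the same information from a TF 2.x SavedModel (the path is a placeholder, and this assumes the default serving signature):

import tensorflow as tf

model = tf.saved_model.load('/path/to/exported/model')
sig = model.signatures['serving_default']
print(sig.structured_input_signature)  # input names/dtypes the model expects
print(sig.structured_outputs)          # output tensor specs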
I've been trying out the TensorFlow 2 alpha and I have been trying to freeze and export a model to a .pb GraphDef file.
In TensorFlow 1 I could do something like this:
# Freeze the graph.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess,
    sess.graph_def,
    output_node_names)

# Save the frozen graph to a .pb file.
with open('model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())
However, this doesn't seem possible anymore, as convert_variables_to_constants has been removed and the use of sessions is discouraged.
I looked around and found the freeze_graph utility
(https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py), which works with SavedModel exports.
Is there still some way to do this within Python, or am I meant to switch to using this tool now?
I have also faced this same problem while migrating from TensorFlow 1.x to the TensorFlow 2.0 beta.
This problem can be solved in two ways:
The first is to go through the TensorFlow 2.0 docs, look up each method you have used, and update the syntax line by line.
The second is to use Google's tf_upgrade_v2 script:
tf_upgrade_v2 --infile your_tf1_script_file --outfile converted_tf2_file
Run the command above to convert your TensorFlow 1.x script to TensorFlow 2.0; it should resolve most of these problems.
Alternatively, you can rename the method manually (by referring to the documentation):
Rename 'tf.graph_util.convert_variables_to_constants' to 'tf.compat.v1.graph_util.convert_variables_to_constants'
The main problem is that in TensorFlow 2.0 much of the syntax and many functions have changed; refer to the TensorFlow 2.0 docs or use Google's tf_upgrade_v2 script.
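A minimal sketch of that compat.v1 route in TF 2.x, with a toy graph standing in for your model:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

with tf.Session() as sess:
    # Toy graph; replace with your real model.
    x = tf.placeholder(tf.float32, [None, 4], name='x')
    w = tf.Variable(tf.ones([4, 2]), name='w')
    y = tf.identity(tf.matmul(x, w), name='y')
    sess.run(tf.global_variables_initializer())

    # Bake variables into constants, as in the TF1 snippet above.
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['y'])

with open('model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())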
Not sure if you've seen this TensorFlow 2.0 issue, but this response seems to be a work-around:
https://github.com/tensorflow/tensorflow/issues/29253#issuecomment-530782763
Note: this hasn't worked for my NLP model, but maybe it will work for you. The suggested work-around is to call model.save_weights('weights.h5') while in the TF 2.0 environment. Then create a new environment with TF 1.14 and do all the following steps in the TF 1.14 env. Build your model with model = create_model() and use model.load_weights('weights.h5') to load the weights back into your model. Then save the entire model with model.save('final_model.h5'). If you manage to have success with the above steps, follow the rest of the steps in the link to use freeze_graph.
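A hedged sketch of that two-environment flow; create_model here is a trivial stand-in for whatever builds your actual architecture:

import tensorflow as tf

def create_model():
    # Placeholder architecture; must be identical in both environments.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation='softmax', input_shape=(784,)),
    ])

# --- In the TF 2.0 environment ---
model = create_model()
model.save_weights('weights.h5')

# --- In a fresh TF 1.14 environment, with the same create_model ---
model = create_model()
model.load_weights('weights.h5')
model.save('final_model.h5')  # full model, ready for freeze_graph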
I have trained a model using Keras and saved it to a JSON file and an .h5 weights file. Then I need to run this on an Intel Neural Compute Stick, so I converted these two model and weight files to a .meta file for TensorFlow using my repo: https://github.com/anuragcp/to_ncs_graph.git
Prediction works without the NCS. But when creating a graph from this meta file using the mvNCCompile command, it gives an error:
[Error 5] Toolkit Error: Stage Details Not Supported: VarHandleOp
I have checked on both NCSDK v1 & v2, with the same result.
Any idea how to solve this?
I have been experimenting with the new 8-bit quantization feature available in TensorFlow. I could run the example given in the blog post (quantization of GoogLeNet) without any issue, and it works fine for me!
Now I would like to apply the same to a simpler network. So I used a pre-trained network for CIFAR-10 (which was trained in Caffe), extracted its parameters, created the corresponding graph in TensorFlow, initialized the weights with these pre-trained values, and finally saved it as a GraphDef object. See this IPython Notebook for the full procedure.
Now I applied the 8-bit quantization with the TensorFlow script as mentioned in Pete Warden's blog:
bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph --input=cifar.pb --output=qcifar.pb --mode=eightbit --bitdepth=8 --output_node_names="ArgMax"
Now I wanted to run the classification on this quantized network. So I loaded the new qcifar.pb into a TensorFlow session and passed the image (the same way I passed it to the original version). The full code can be found in this IPython Notebook.
But as you can see at the end, I am getting the following error:
NotFoundError: Op type not registered 'QuantizeV2'
Can anybody suggest what I am missing here?
Because the quantized ops and kernels are in contrib, you'll need to explicitly load them in your Python script. There's an example of that in the quantize_graph.py script itself:
# Register the quantized ops and kernels (from contrib) with the runtime
# before loading a graph that uses them.
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so
This is something that we should update the documentation to mention!
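Putting it together, a hedged sketch of loading the quantized graph once those modules are imported (file and node names follow the question; this assumes the contrib-era TF APIs):

import tensorflow as tf
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so

graph_def = tf.GraphDef()
with tf.gfile.GFile('qcifar.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Session() as sess:
    tf.import_graph_def(graph_def, name='')
    # 'ArgMax:0' matches the --output_node_names used during quantization;
    # the input tensor name depends on how the original graph was built.
    # preds = sess.run('ArgMax:0', feed_dict={'input:0': image_batch})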