Loading a saved model trained in an earlier TensorFlow version - tensorflow

I recently used the Google AutoML service to create a model.
Its output seems to be in SavedModel format. However, when I attempt to load it via tf.saved_model.load, it displays the following error:
Op type not registered 'TreeEnsembleSerialize' in binary ...
When I looked up this op, I found that it exists in tf.contrib.boosted_trees in TensorFlow 1.15, but since TensorFlow 2 removed tf.contrib, the op has been renamed to BoostedTreesSerializeEnsemble in tf.raw_ops.
My question is: is there any way to duplicate the op and rename it to TreeEnsembleSerialize, so that the saved model can be loaded without errors?
Thanks.

There are no significant compatibility concerns for saved models. TensorFlow 1.x saved_models work in TensorFlow 2.x, and TensorFlow 2.x saved_models work in TensorFlow 1.x as long as all the ops are supported.
For more information, see the TensorFlow documentation.
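If the ops are not supported in your TF2 binary, one workaround sketch (assuming, as noted above, that the op lives in tf.contrib.boosted_trees and that you can fall back to a TensorFlow 1.15 environment) is to import the contrib module before loading, since contrib ops are registered lazily on first access:

import tensorflow as tf  # TensorFlow 1.15
import tensorflow.contrib.boosted_trees  # noqa: F401 -- registers TreeEnsembleSerialize

# Load the SavedModel in a TF1 session (the directory path is hypothetical).
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], "/path/to/automl_model")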

Related

Facing issues converting from TensorFlow core to TensorFlow Lite

I am facing issues converting a TensorFlow model to TensorFlow Lite. From my research, I first need to save the model as a .pb file, and using this file it can be converted to TensorFlow Lite, but I am facing an error.
Among the TF graph representations, exporting as a saved model is recommended. The TFLiteConverter.from_saved_model API is more capable than the other conversion APIs. For example, the signature def API is only available through the saved model API, and resource and variant types are better supported there.
https://www.tensorflow.org/hub/exporting_tf2_saved_model
https://www.tensorflow.org/lite/convert
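A minimal conversion sketch (assuming TF 2.x; the SavedModel path is hypothetical):

import tensorflow as tf

# Convert a SavedModel directory to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)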

Dumping weights in TensorFlow Lite

I'm a new TensorFlow 2.0 user. My project requires me to investigate the weights of the neural network I created in TensorFlow (a super simple one). I think I know how to do it in the regular TensorFlow case, namely with model.save_weights(filename). I would like to do the same for a .tflite model, but I am having trouble. Instead of generating my own TensorFlow Lite model, I am using one of the many models provided online (https://www.tensorflow.org/lite/guide/hosted_model) to avoid having to troubleshoot my use of the TensorFlow Lite converter. Any thoughts?
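One possible sketch (assuming TF 2.x; the model path is hypothetical) is to enumerate the tensors of the .tflite file with the TFLite Interpreter; weights and biases appear among the constant tensors:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # hypothetical path
interpreter.allocate_tensors()

# List every tensor in the model with its name, shape, and dtype.
for detail in interpreter.get_tensor_details():
    print(detail["index"], detail["name"], detail["shape"], detail["dtype"])

# interpreter.get_tensor(index) reads a tensor's value; this works for
# tensors that hold constant data such as weights.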

How to run TensorFlow 2.0 model inference in Java?

I have a Java application that uses my old TensorFlow models. I used to convert the .h5 weights and .json model into a frozen graph in .pb.
I used code similar to this GitHub repo: https://github.com/amir-abdi/keras_to_tensorflow.
But that code is not compatible with TF 2.0 models.
I couldn't find any other resources.
Is it even possible?
Thank you :)
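It should be possible without a frozen graph: the TensorFlow Java API can load a SavedModel via SavedModelBundle.load. A sketch of the export side (file names are hypothetical, and this assumes the .json/.h5 pair came from Keras):

import tensorflow as tf

# Rebuild the Keras model from its .json architecture and .h5 weights.
with open("model.json") as f:
    model = tf.keras.models.model_from_json(f.read())
model.load_weights("weights.h5")

# Export as a SavedModel; in Java, load it with
# SavedModelBundle.load("exported_model", "serve").
tf.saved_model.save(model, "exported_model")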

How to use TensorFlow model comparison with tflite_diff_example_test

I have trained a model for detection, which does great when embedded in the TensorFlow sample app.
After freezing with export_tflite_ssd_graph and converting to tflite using toco, the results perform rather badly and show a huge "variance".
After reading this answer on a similar problem with loss of accuracy, I wanted to try tflite_diff_example_test on a TensorFlow Docker machine.
As the documentation is not very evolved right now, I built the tool referencing this SO post,
using:
bazel build tensorflow/contrib/lite/testing/tflite_diff_example_test.cc, which ran smoothly.
After figuring out all my needed input parameters, I tried the test script with the following commands:
~/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/bazel_tools/tools/test/test-setup.sh tensorflow/contrib/lite/testing/tflite_diff_example_test '--tensorflow_model=/tensorflow/shared/exported/tflite_graph.pb' '--tflite_model=/tensorflow/shared/exported/detect.tflite' '--input_layer=a,b,c,d' '--input_layer_type=float,float,float,float' '--input_layer_shape=1,3,4,3:1,3,4,3:1,3,4,3:1,3,4,3' '--output_layer=x,y'
and
bazel-bin/tensorflow/contrib/lite/testing/tflite_diff_example_test --tensorflow_model="/tensorflow/shared/exported/tflite_graph.pb" --tflite_model="/tensorflow/shared/exported/detect.tflite" --input_layer=a,b,c,d --input_layer_type=float,float,float,float --input_layer_shape=1,3,4,3:1,3,4,3:1,3,4,3:1,3,4,3 --output_layer=x,y
Both ways fail. Errors:
First way:
tflite_diff_example_test.cc:line 1: /bazel: Is a directory
tflite_diff_example_test.cc: line 3: syntax error near unexpected token '('
tflite_diff_example_test.cc: line 3: 'Licensed under the Apache License, Version 2.0 (the "License");'
/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/bazel_tools/tools/test/test-setup.sh: line 184: /tensorflow/: Is a directory
/root/.cache/bazel/_bazel_root/68a62076e91007a7908bc42a32e4cff9/external/bazel_tools/tools/test/test-setup.sh: line 276: /tensorflow/: Is a directory
Second way:
2018-09-10 09:34:27.650473: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Failed to create session. Op type not registered 'TFLite_Detection_PostProcess' in binary running on d36de5b65187. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
I would really appreciate any help that enables me to compare the output of two graphs using TensorFlow's given tests.
The second way you mentioned is the correct way to use tflite_diff. However, an object detection model containing the TFLite_Detection_PostProcess op cannot be run via tflite_diff.
tflite_diff runs the provided TensorFlow (.pb) model in the TensorFlow runtime and the provided TensorFlow Lite (.tflite) model in the TensorFlow Lite runtime. To run the .pb model in the TensorFlow runtime, all of its operations must be implemented in TensorFlow.
However, the TFLite_Detection_PostProcess op in your model is not implemented in the TensorFlow runtime; it is only available in the TensorFlow Lite runtime, so TensorFlow cannot resolve the op. Unfortunately, that means you cannot use the tflite_diff tool with this model.
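If you still want a rough sanity check, one sketch (assuming a TF build whose Python interpreter lives at tf.lite.Interpreter and registers the custom op; older contrib builds used tf.contrib.lite.Interpreter) is to run the .tflite model alone in the TFLite runtime, which does implement TFLite_Detection_PostProcess, on a random input:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(
    model_path="/tensorflow/shared/exported/detect.tflite")
interpreter.allocate_tensors()

# Feed a random input of the expected shape and inspect the outputs.
inp = interpreter.get_input_details()[0]
dummy = np.random.random_sample(tuple(inp["shape"])).astype(np.float32)
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

for out in interpreter.get_output_details():
    print(out["name"], interpreter.get_tensor(out["index"]).shape)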

How do I identify which Keras version is merged into the current TensorFlow?

I am trying to use Keras/TensorFlow, but some options are not supported (e.g. TensorBoard embeddings_freq). I want to know TensorFlow's merging policy for Keras, especially the synchronization schedule, and how to check which Keras version has been merged.
The Keras in tf.keras is a reimplementation of Keras, not a merge of a particular version. File issues if features you need are missing.
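You can check the bundled version at runtime; tf.keras exposes its own version string, independent of standalone Keras:

import tensorflow as tf

print(tf.__version__)        # TensorFlow release
print(tf.keras.__version__)  # version of the bundled tf.keras, e.g. '2.2.4-tf'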