How to check which TensorFlow version is compatible with the TensorFlow Model Garden? - tensorflow

In order to use a pre-trained model with TensorFlow, we clone the Model Garden for TensorFlow, then choose a model from the Model Zoo, for example, Detection Model Zoo: EfficientDet D0 512x512.
Is there any way to determine the right version of TensorFlow, e.g. 2.7.0, 2.7.1, or 2.8.0, that will reliably work with the aforementioned setup?
The documentation (README.md) doesn't seem to mention this requirement. Maybe it is implied somehow?
I checked setup.py for the Object Detection API, but there is still no clue:
\models\research\object_detection\packages\tf2\setup.py
REQUIRED_PACKAGES = [
    # Required for apache-beam with PY3
    'avro-python3',
    'apache-beam',
    'pillow',
    'lxml',
    'matplotlib',
    'Cython',
    'contextlib2',
    'tf-slim',
    'six',
    'pycocotools',
    'lvis',
    'scipy',
    'pandas',
    'tf-models-official>=2.5.1',
    'tensorflow_io',
    'keras'
]

I am not aware of a formal/quick way to determine the right TensorFlow version for a given Model Garden version (master branch). However, here is my workaround:
In the REQUIRED_PACKAGES above, we see tf-models-official>=2.5.1.
Checking the package history on pypi.org, the latest version as of 03.02.2022 is 2.8.0.
So when installing from this \models\research\object_detection\packages\tf2\setup.py file, pip will naturally fetch the latest version of tf-models-official, which is 2.8.0, thanks to the >= specifier.
However, for tf-models-official v2.8.0, the required packages are defined in tf-models-official-2.8.0\tf_models_official.egg-info\requires.txt (note: download the package and extract it, using the link).
There, we find:
tensorflow~=2.8.0
...meaning the required TensorFlow version is 2.8.*.
This may not be desired, e.g. on Colab, where the current preinstalled version is 2.7.0.
To work around this, we should use tf-models-official v2.7.0; notice that it matches the TensorFlow version. In version 2.7.0's requires.txt, we see tensorflow>=2.4.0, which is already satisfied by Colab's default TensorFlow version (2.7.0).
To make this workaround possible, change 'tf-models-official>=2.5.1' to 'tf-models-official==2.7.0' in \models\research\object_detection\packages\tf2\setup.py.
Caveat: this pin shouldn't affect the functionality of the Object Detection API, because it originally accepts any tf-models-official >= 2.5.1; we simply fix it to ==2.7.0 instead.
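The pins involved here follow the standard pip (PEP 440) semantics: `~=2.8.0` is a compatible-release specifier that accepts any 2.8.* patch release but not a different minor version, while `>=2.5.1` accepts anything newer, including later minor versions. A minimal sketch of the compatible-release matching rule (the helper names are mine, and the patch-level lower bound is omitted for simplicity):

```python
# Sketch of PEP 440 compatible-release matching: tensorflow~=2.8.0
# accepts any 2.8.* release, but not 2.7.x or 2.9.x.
def minor_series(version):
    """Return the 'major.minor' prefix of a version string, e.g. '2.8'."""
    major, minor, *_ = version.split(".")
    return f"{major}.{minor}"

def satisfies_compatible_release(installed, pinned):
    """True if `installed` satisfies a ~=pinned pin
    (same major.minor series; patch lower bound ignored here)."""
    return minor_series(installed) == minor_series(pinned)

print(satisfies_compatible_release("2.8.3", "2.8.0"))  # True
print(satisfies_compatible_release("2.7.0", "2.8.0"))  # False
```

This is why tf-models-official 2.8.0 drags in TensorFlow 2.8.*, and why pinning tf-models-official==2.7.0 (whose pin is the looser tensorflow>=2.4.0) keeps Colab's preinstalled 2.7.0 usable.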

Related

I get error: module 'tensorflow.keras.layers' has no attribute 'Normalization'

I use
layers.Normalization()
in Keras, inside a keras.Sequential model.
When I try to run it, I get the following error:
module 'tensorflow.keras.layers' has no attribute 'Normalization'
I've seen the command layers.Normalization() used in many code samples, so I don't know what's wrong. Did something change?
One reason can be that you are using a TensorFlow version older than the one required for that layer. There are two ways to get around this problem.
Upgrade TensorFlow as discussed above.
Or you can reference the layer under its older, experimental path:
tf.keras.layers.experimental.preprocessing.Normalization
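Either spelling of the layer computes the same thing: after adapt(), it standardizes each feature to zero mean and unit variance using statistics estimated from the adapted data. A NumPy sketch of that math (not the Keras implementation itself, just the computation the layer performs):

```python
import numpy as np

# What Normalization computes after adapt(data): (x - mean) / sqrt(var),
# with mean and variance estimated per feature from the adapted data.
data = np.array([[1.0], [2.0], [3.0], [4.0]])
mean = data.mean(axis=0)
var = data.var(axis=0)
normalized = (data - mean) / np.sqrt(var)

print(normalized.mean())  # ~0.0
print(normalized.std())   # ~1.0
```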
Check the version of TensorFlow you have:
import tensorflow as tf
print(tf.__version__)
tf.keras.layers.Normalization is available as of TensorFlow v2.6.0, so it might not work on earlier versions: https://www.tensorflow.org/api_docs/python/tf/keras/layers/Normalization
If you have an earlier version, you can upgrade using
pip install --upgrade tensorflow

What is the future of TensorFlow V1?

I like the modeling, programming, and coding style of TensorFlow v1. But since TensorFlow v2, the v1 API seems to be just a historic component of TensorFlow v2.
So could anybody describe the future of TensorFlow v1?
Will it stop being maintained, or even be removed, in future versions of TensorFlow?
The TensorFlow 1.15.0 release notes say:
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
There have been a few bugfix releases since (now at 1.15.3), which fix security vulnerabilities, but according to the 1.15.0 release notes, V1 development has essentially ended.
If your code does not use tf.contrib, it is still possible to use v1 code in TensorFlow 2. See the TensorFlow migration guide.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
It is not clear what the future of tensorflow.compat.v1 is, but the symbols exported by this module often point to the same implementations as in 2.x.

Workflow for converting .pb files to .tflite

Overview
I know this subject has been discussed many times, but I am having a hard time understanding the workflow, or rather, the variations of the workflow.
For example, imagine you are installing TensorFlow on Windows 10. The main goal is to train a custom model, convert it to TensorFlow Lite, and copy the converted .tflite file to a Raspberry Pi running TensorFlow Lite.
The confusion for me starts with the conversion process. After following along with multiple guides, it seems TensorFlow is often installed with pip or Anaconda. But then I see detailed tutorials which indicate it needs to be built from source in order to convert TensorFlow models to TFLite models.
To make things more interesting, I've also seen models which are converted via Python scripts, as seen here.
Question
So far I have seen 3 ways to do this conversion, and it could just be that I don't have a grasp on the full picture. Below are the abbreviated methods I have seen:
Build from source, and use the TensorFlow Lite Optimizing Converter (TOCO):
bazel run --config=opt tensorflow/lite/toco:toco -- --input_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/detect.tflite ...
Use the TensorFlow Lite Converter Python API:
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
with tf.io.gfile.GFile('model.tflite', 'wb') as f:
    f.write(tflite_model)
Use the tflite_convert CLI utilities:
tflite_convert --saved_model_dir=/tmp/mobilenet_saved_model --output_file=/tmp/mobilenet.tflite
I *think* I understand that options 2 and 3 are the same, in the sense that the tflite_convert utility is installed and can be invoked either from the command line or through a Python script. But is there a specific reason you should choose one over the other?
And lastly, what really confuses me is option 1. Maybe it's a version thing (1.x vs 2.x)? But what's the difference between the TensorFlow Lite Optimizing Converter (TOCO) and the TensorFlow Lite Converter? It appears that in order to use TOCO you have to build TensorFlow from source, so is there a reason you would use one over the other?
There is no difference in the output from the different conversion methods, as long as the parameters remain the same. The Python API is better if you want to generate TFLite models in an automated way (e.g. a Python script that runs periodically).
The TensorFlow Lite Optimizing Converter (TOCO) was the first version of the TF->TFLite converter. It was recently deprecated and replaced with a new converter that can handle more ops/models. So I wouldn't recommend using toco:toco via bazel, but rather use tflite_convert as mentioned here.
You should never have to build the converter from source, unless you are making some changes to it and want to test them out.

What is the difference between TensorFlow's contrib.slim.nets and models/slim/nets?

In the GitHub repositories, we have tensorflow/models containing slim, and we also have slim in tensorflow.contrib.slim.
They have similar names, functionality, and structure, and they provide similar nets, for example inception_v1.
Is there any reference for this brain split?
Why did they not just use a git submodule? Any discussion link?
Which is the most usable/stable/maintained?
Which is the real net used to train the pre-trained data: this one or this one?
Which one of those two is the real Slim Shady?
From https://github.com/tensorflow/models/blob/master/research/slim/slim_walkthrough.ipynb, under the section titled Installation and setup:
Since the stable release of TF 1.0, the latest version of slim has been available as tf.contrib.slim, although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here.

difference between version r0.11 and r0.9

The latest version of TensorFlow is r0.11; I am using 0.9.0. Before updating TensorFlow to the latest version, I would like to know whether there are any major changes that could cause my original code to fail to compile/run.
There are various API changes, mostly documented in the release notes. One undocumented change I ran into was that custom variable initializers must now accept a partition_info keyword argument in their signature.
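Concretely, an initializer callable written for 0.9 as `lambda shape, dtype: ...` needs the extra keyword from the newer releases on. A NumPy-only sketch of the newer callable shape (the initializer name is mine, for illustration):

```python
import numpy as np

# Hypothetical custom initializer with the newer signature: the extra
# partition_info keyword is supplied for partitioned variables and can
# simply be ignored by initializers that don't need it.
def constant_initializer(shape, dtype=np.float32, partition_info=None):
    """Fill a tensor of the given shape with the constant 0.1."""
    return np.full(shape, 0.1, dtype=dtype)

weights = constant_initializer((2, 3))
print(weights.shape)  # (2, 3)
```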
There are also some numeric changes. E.g., we have a model which trains on 0.9, doesn't train on 0.10, and trains again on 0.11. These changes are usually undocumented/unintended, and you just have to run your code and see.