I am running a small TensorFlow Probability (tfp) model with a Normal distribution as its output layer. The code works fine with tensorflow==2.4.0 and tensorflow-probability==0.12.2, but when I try to upgrade my TensorFlow version it breaks.
ERROR: AttributeError: 'Normal' object has no attribute 'shape'
python==3.8
tensorflow==2.7.0
tensorflow-addons==0.14.0
tensorflow-estimator==2.7.0
tensorflow-io-gcs-filesystem==0.24.0
tensorflow-probability==0.12.2 (tried every tfp version up to 0.16.x)
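For reference, a minimal sketch of the kind of model in question, since the post does not include code; the architecture, layer sizes and loss below are assumptions, following the usual tfp.layers.DistributionLambda pattern for a Normal output head:
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Hypothetical reproduction sketch: a tiny probabilistic regression model
# whose output layer is a Normal distribution (assumed architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2),  # predicts loc and (pre-softplus) scale
    tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :1],
                             scale=1e-3 + tf.math.softplus(t[..., 1:]))),
])

# Negative log-likelihood loss: the model output is a distribution object.
model.compile(optimizer="adam", loss=lambda y, dist: -dist.log_prob(y))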
I want to run code written for tensorflow == 1.15, but I have TensorFlow 2.7 installed on my system. Following the TensorFlow guide at https://www.tensorflow.org/guide/migrate/migrate_tf2, I use the following lines so that I can run the code on TensorFlow 2.7 without changes:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
But I do not know what to use instead of the following lines in my code, since, according to TensorFlow, "You can still run unmodified TF1.x code (except for contrib) against TF2 binary installations":
l2_reg = tf.contrib.layers.l2_regularizer(scale=self.beta)
xavier = tf.contrib.layers.xavier_initializer()
please help
tf.contrib was removed in TensorFlow 2.x, so these calls need direct replacements.
Replace tf.contrib.layers.l2_regularizer with tf.compat.v1.keras.regularizers.l2.
Replace tf.contrib.layers.xavier_initializer with tf.compat.v1.keras.initializers.glorot_normal (note that xavier_initializer() defaults to a uniform distribution, so tf.compat.v1.keras.initializers.glorot_uniform matches the default behaviour; glorot_normal corresponds to xavier_initializer(uniform=False)).
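Applied to the two lines from the question, the replacement would look roughly like this (a sketch; beta stands in for the self.beta value used in the original code):
import tensorflow as tf

beta = 0.01  # stand-in for self.beta from the question

l2_reg = tf.compat.v1.keras.regularizers.l2(beta)
# xavier_initializer() defaults to a uniform distribution, so glorot_uniform
# matches the default; use glorot_normal for xavier_initializer(uniform=False).
xavier = tf.compat.v1.keras.initializers.glorot_uniform()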
I have trained a multi_gpu_model using TensorFlow 1.13/1.14 and saved it with keras.model.save('<.hdf5>').
Now, after migrating to TensorFlow 2.4.1, in which Keras is integrated as tensorflow.keras, I cannot call tensorflow.keras.models.load_model as I did before, due to the following error:
AttributeError: module 'tensorflow.python.keras.backend' has no attribute 'slice'
After trying keras.models.load_model instead, and trying different versions of Keras (2.2.4 -> 2.4.1) and TensorFlow (2.2 -> 2.4.1), I still cannot load the model from my .hdf5 file with my TF 2.2+ code.
I do know that in TF 2.x we can train on distributed machines by using the "strategy" scope, and it does work, but I have a lot of "old" models that need to run in the same code base, which is now being migrated to TF 2.4.1.
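(For reference, the TF 2.x "strategy" scope mentioned above looks roughly like this; tf.distribute.MirroredStrategy is the multi-GPU replacement for multi_gpu_model, and the tiny model here is only illustrative.)
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs by default
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")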
Apparently the problem was not the TF versions, but the way I was saving my models in my TF 1.x code.
I used keras.multi_gpu_model for both training and saving, and this practice is wrong, as clearly stated in the Keras documentation:
"To save the multi-gpu model, use .save(fname) or .save_weights(fname)
with the template model (the argument you passed to multi_gpu_model),
rather than the model returned by multi_gpu_model."
So, after figuring this out, a method for model conversion using TF 1.x code was adopted (a sketch follows the list):
build your model from scratch, namely new_model
load your pre-trained multi_gpu_model, namely old_model
copy old_model's weights, which live in old_model.layers[3] (due to the wrong usage of multi_gpu_model), into new_model
save new_model as an .hdf5 file
use new_model.hdf5 everywhere, in TF 1.x and TF 2.x
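A sketch of those steps, run in the original TF 1.x environment (build_model and the file names are placeholders; the layer index is the one found above and may differ for other models):
from tensorflow import keras  # or "import keras", whichever produced the file

old_model = keras.models.load_model("old_multi_gpu_model.hdf5", compile=False)
template = old_model.layers[3]      # template model nested inside the multi_gpu wrapper

new_model = build_model()           # rebuild the same architecture from scratch
new_model.set_weights(template.get_weights())
new_model.save("new_model.hdf5")    # loads in both TF 1.x and TF 2.x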
While converting a model to tflite, I am getting this error:
"""
Some of the operators in the model are not supported by the standard TensorFlow Lite runtime and are not recognized by TensorFlow. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.lite.TFLiteConverter(). Here is a list of builtin operators you are using: ABS, ADD, CONV_2D, MAX_POOL_2D, MUL, RELU, SOFTMAX, SQUEEZE, SUB. Here is a list of operators for which you will need custom implementations: AdjustContrastv2, AdjustHue, AdjustSaturation, RandomUniform.
"""
How to resolve this?
tensorflow version: 1.13.1
You can use TF ops directly in TF Lite by enabling Select TF ops.
I've confirmed that AdjustContrastv2, AdjustHue, AdjustSaturation are available via FlexDelegate.
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/flex/allowlisted_flex_ops.cc#L35
To use this feature, you need TF 2.4 or higher. Since TF 2.4 is not available yet, you need to use the tf-nightly release.
FYI, regarding migrating from TF1 to TF2, please check https://www.tensorflow.org/guide/migrate
You may try adding the following lines to specify that your model can use ops from both the TF Lite builtins and TensorFlow:
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
Or, better, rewrite the ops that are not supported by the TF Lite builtins in terms of ops that are.
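For completeness, a full conversion sketch with Select TF ops enabled (this assumes TF 2.4+ and a SavedModel; saved_model_dir and the output file name are placeholders):
import tensorflow as tf

saved_model_dir = "path/to/saved_model"  # placeholder

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TF Lite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops via the Flex delegate
]
tflite_model = converter.convert()

with open("model_with_flex_ops.tflite", "wb") as f:
    f.write(tflite_model)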
I was able to run my Python program three weeks ago, but now every time I try to run it I get the following error:
AttributeError: module 'tensorflow' has no attribute 'placeholder'
I have tensorflow installed (version '2.0.0-alpha0').
I have read a couple of posts related to this issue. They say I should uninstall and reinstall TensorFlow. The problem is that I am running this on a cluster computer and I do not have sudo permissions.
Any idea?
In TensorFlow 2.0 there is no placeholder. You need to update your TF1.x code to TF2.0 code and then run it on your cluster. Please take a look at the official doc on converting TF1.x code to TF2.0.
In TF1.x code, you build a TensorFlow graph (a static graph) out of placeholders, constants and variables, then run it inside a tf.Session(). During that session you feed values for the placeholders and execute the static graph.
In TF2.0, models run eagerly as you enter commands, which is more Pythonic. Check more details about TF 2.0 here. Thanks!
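For example, a placeholder-fed graph becomes an ordinary (optionally tf.function-decorated) Python function in TF 2.0, with values passed as arguments instead of through feed_dict (a minimal illustrative sketch):
import tensorflow as tf

@tf.function  # optional: traces a graph, but no Session or placeholder is needed
def double(x):
    return x * 2.0

print(double(tf.constant([1.0, 2.0])))  # tf.Tensor([2. 4.], shape=(2,), dtype=float32)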
After including the tensorflow compat v1 libraries:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
use the v1 syntax like this:
X = tf.compat.v1.placeholder(dtype="float",shape=[None, n_H0, n_W0, n_C0])
Y = tf.compat.v1.placeholder(dtype="float",shape=[None, n_y])
In addition to @Vishnuvardhan Janapati's answer, you can update whole folders (the "*TREE" arguments) and/or individual files to TensorFlow 2 syntax. The upgrade script tf_upgrade_v2 is included automatically in TensorFlow 1.13 and later.
tf_upgrade_v2 [-h] [--infile INPUT_FILE] [--outfile OUTPUT_FILE]
[--intree INPUT_TREE] [--outtree OUTPUT_TREE]
[--copyotherfiles COPY_OTHER_FILES] [--inplace]
[--reportfile REPORT_FILENAME] [--mode {DEFAULT,SAFETY}]
[--print_all]
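For example, to convert a whole project directory and write a report (the directory and file names here are placeholders):
tf_upgrade_v2 --intree my_project/ --outtree my_project_v2/ --reportfile report.txt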
Running the conversion fixed the "placeholder" error. Note: it also fixes similar complaints of the form module 'tensorflow' has no attribute 'xxxxx', not just the "placeholder" one.
Calling the disable_v2_behavior() function is not necessary. Just use:
import tensorflow as tf
tf.compat.v1.placeholder()
Switching to the compat.v1 import worked for me:
#libraries
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
If this doesn't work, you may need to reinstall TensorFlow.
I hope it helps.
I am trying to convert my premade DNN model to a tflite file using the function:
from tensorflow.contrib.lite.python import convert_saved_model
convert_saved_model.convert(saved_model_dir=saved_model, output_tflite="/TF_Lite_Model")
I have the latest version of TensorFlow installed, 1.10.
I am using Ubuntu 16.04.
The error is the following:
AttributeError: module 'tensorflow.contrib.lite.python.convert_saved_model' has no attribute 'convert'
The API for converting SavedModels to TensorFlow Lite FlatBuffers is TocoConverter.from_saved_model as documented here. The documentation has been copied below.
To provide a general explanation: from_saved_model is a classmethod that returns a TocoConverter object, and TocoConverter has a convert method. convert_saved_model is a module (as the error message says) and therefore does not provide a convert function of its own.
Copied from documentation:
The following example shows how to convert a SavedModel into a TensorFlow Lite FlatBuffer.
import tensorflow as tf
converter = tf.contrib.lite.TocoConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
For more complex SavedModels, the optional parameters that can be passed into TocoConverter.from_saved_model() are input_arrays, input_shapes, output_arrays, tag_set and signature_key. Details of each parameter are available by running help(tf.contrib.lite.TocoConverter).
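For example (a sketch only; the tensor names and shape below are placeholders for the tensors in your own SavedModel):
converter = tf.contrib.lite.TocoConverter.from_saved_model(
    saved_model_dir,
    input_arrays=["input_tensor"],
    input_shapes={"input_tensor": [1, 224, 224, 3]},
    output_arrays=["softmax_tensor"])
tflite_model = converter.convert()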
I had to compile the tflite contrib module myself, as it was missing from my installation.