Why do I get AttributeError: module 'tensorflow' has no attribute 'placeholder'? - tensorflow

I was able to run my python program three weeks ago but now every time I try to run it, I get the following error:
AttributeError: module 'tensorflow' has no attribute 'placeholder'
I have tensorflow installed (version '2.0.0-alpha0').
I have read a couple of posts related to this issue. They say I should uninstall and reinstall TensorFlow. The problem is that I am running this on a cluster computer and I do not have sudo permissions.
Any idea?

In TensorFlow 2.0, there is no placeholder. You need to update your TF1.x code to TF2.0 code and then run it on your cluster. Please take a look at the official doc on converting TF1.x code to TF2.0.
In TF1.x code, you build a TensorFlow graph (a static graph) out of placeholders, constants, and variables, and then run it in a session via tf.Session(). During that session, you provide the values for the placeholders and execute the static graph.
In TF2.0, models run eagerly as you enter commands, which is more Pythonic. Check out more details about TF 2.0 here. Thanks!
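As a rough pure-Python analogy (this is illustrative only, not real TensorFlow code), the TF1.x pattern describes a computation first and feeds values later, while TF2.0 computes immediately:

```python
# TF1.x style: build the computation first, feed values at "session" time.
def build_graph():
    # "placeholder" here is just a name we promise to fill in later
    def run(feed):
        return feed["x"] * 2 + 1
    return run

graph = build_graph()          # nothing is computed yet
result_v1 = graph({"x": 3})    # values supplied when the graph is run

# TF2.0 style: eager execution -- an ordinary function call computes at once.
def model(x):
    return x * 2 + 1

result_v2 = model(3)

print(result_v1, result_v2)  # both give 7
```

Same result either way; the difference is *when* the computation happens, which is why placeholder-based code has no direct equivalent under eager execution.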

After importing the TensorFlow v1 compatibility module:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
use the v1 syntax like this (since tf is already aliased to compat.v1, no extra prefix is needed):
X = tf.placeholder(dtype="float", shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(dtype="float", shape=[None, n_y])

In addition to @Vishnuvardhan Janapati's answer, you can upgrade whole folders ("*TREE") and/or individual files to TensorFlow 2. The upgrade tool tf_upgrade_v2 is included automatically in TensorFlow 1.13 and later.
tf_upgrade_v2 [-h] [--infile INPUT_FILE] [--outfile OUTPUT_FILE]
              [--intree INPUT_TREE] [--outtree OUTPUT_TREE]
              [--copyotherfiles COPY_OTHER_FILES] [--inplace]
              [--reportfile REPORT_FILENAME] [--mode {DEFAULT,SAFETY}]
              [--print_all]
The conversion fixes the "placeholder" error.
Note: this fixes similar complaints of the form module 'tensorflow' has no attribute 'xxxxx' as well (not just "placeholder").
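For intuition, much of what the upgrade tool does amounts to systematic API renames. A minimal sketch of that idea (the real tf_upgrade_v2 uses an AST-based converter and also rewrites arguments; the rename table below is only a hypothetical excerpt):

```python
import re

# Hypothetical excerpt of the v1 -> v2 rename table; the real tool
# covers hundreds of symbols.
RENAMES = {
    "tf.placeholder": "tf.compat.v1.placeholder",
    "tf.Session": "tf.compat.v1.Session",
    "tf.graph_util.convert_variables_to_constants":
        "tf.compat.v1.graph_util.convert_variables_to_constants",
}

def upgrade_source(source):
    # Replace longest names first so a short name never clobbers a longer one.
    for old in sorted(RENAMES, key=len, reverse=True):
        source = re.sub(re.escape(old) + r"\b", RENAMES[old], source)
    return source

old_code = "x = tf.placeholder(tf.float32)\nsess = tf.Session()"
print(upgrade_source(old_code))
```

This is why the tool fixes the whole family of "module 'tensorflow' has no attribute ..." errors at once: each missing v1 symbol is mapped to its compat.v1 location.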

Calling the disable_v2_behavior() function is not necessary. Just use:
import tensorflow as tf
tf.compat.v1.placeholder()
(Note that tf.compat.v1.placeholder() is still not compatible with eager execution, so this form only works inside a graph context or with eager execution disabled.)

Changing the import worked for me:
# libraries
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
If this doesn't work, you may need to reinstall TensorFlow.
I hope it helps.

Related

keras-bert load_trained_model_from_checkpoint error

I had code for loading a BERT model that used to execute very well, but now it raises an error.
Here is the code:
model = load_trained_model_from_checkpoint(
    config_path,
    checkpoint_path,
    trainable=True,
    seq_len=SEQ_LEN,
    output_layer_num=4
)
The error it now raises is:
AttributeError: 'tuple' object has no attribute 'layer'
The environment settings are as follows:
keras-bert=0.85.0
keras=2.4.3
tensorflow=1.15.2
Many thanks in advance
In your environment, when installing packages, try installing them without pinning specific versions:
pip install -q keras-bert
pip install keras
AttributeError: 'tuple' object has no attribute 'layer' basically occurs when you mix up keras and tensorflow.keras, as this answer explains.
See if that resolves your issue. Also, if you have the following in your code:
import keras
from keras import backend as K
Try changing them to:
from tensorflow import keras
from tensorflow.keras import backend as K
I hope that resolves your issue.
You can check this article for reference.
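The underlying failure mode is generic: two parallel copies of a class hierarchy don't recognise each other's instances. A pure-Python illustration of why mixing the two import roots breaks (the module names here are stand-ins, not the real packages):

```python
import types

# Simulate two parallel libraries that each define "the same" class
# independently, the way standalone keras and tensorflow.keras did.
keras_standalone = types.ModuleType("keras_standalone")
tf_keras = types.ModuleType("tf_keras")

class LayerA:          # stands in for keras.layers.Layer
    pass

class LayerB:          # stands in for tensorflow.keras.layers.Layer
    pass

keras_standalone.Layer = LayerA
tf_keras.Layer = LayerB

layer = keras_standalone.Layer()

# Code written against tf_keras rejects the object, even though both
# classes play the same role -- hence the errors when the two are mixed.
print(isinstance(layer, tf_keras.Layer))  # False
```

Picking one import root (all keras, or all tensorflow.keras) and sticking to it avoids the mismatch entirely.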

AttributeError: module 'keras.optimizers' has no attribute 'Adam'

When I use "optimizer = keras.optimizers.Adam(learning_rate)" I get this error:
"AttributeError: module 'keras.optimizers' has no attribute 'Adam'". I am using Python 3.8, Keras 2.6, and the TensorFlow 1.13.2 backend to run the program. Please help me resolve this!
Use tf.keras.optimizers.Adam(learning_rate) instead of keras.optimizers.Adam(learning_rate)
As per the documentation, try importing keras into your code like this:
>>> from tensorflow import keras
This has helped me as well.
Make sure you've imported tensorflow:
import tensorflow as tf
Then use
tf.optimizers.Adam(learning_rate)
There are a few ways to solve this, given that you are using Keras 2.6 together with TensorFlow:
use (from keras.optimizer_v2.adam import Adam as Adam), but go through the function documentation once to specify your learning rate and beta values
you can also use (Adam = keras.optimizers.Adam)
or (import tensorflow as tf) and then (Adam = tf.keras.optimizers.Adam)
Use the form that suits the environment you have set up.
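What actually raises the AttributeError is plain attribute lookup on a module: that version's keras.optimizers simply doesn't expose a name Adam, while tf.keras.optimizers does. A small sketch of how a dotted path resolves (using a made-up stand-in namespace, not the real packages):

```python
from functools import reduce
from types import SimpleNamespace

# Stand-in for the tf.keras namespace; only the path used below exists.
fake_tf = SimpleNamespace(
    keras=SimpleNamespace(
        optimizers=SimpleNamespace(
            Adam=lambda learning_rate: ("Adam", learning_rate)
        )
    )
)

def resolve(root, dotted):
    # "keras.optimizers.Adam" -> a getattr chain, exactly what Python does
    return reduce(getattr, dotted.split("."), root)

opt = resolve(fake_tf, "keras.optimizers.Adam")(learning_rate=0.001)
print(opt)  # prints ('Adam', 0.001)

# A path whose final attribute is missing fails the way the question does:
try:
    resolve(fake_tf, "keras.optimizers.SGD")
except AttributeError as e:
    print("AttributeError:", e)
```

So the fixes above all do the same thing: point the lookup at a namespace that really does contain Adam.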
I think you are using Keras directly. Instead of importing from keras.distribute, import from tensorflow.keras.distribute.
I hope this helps you; it is working for me.

Exporting a frozen graph .pb file in Tensorflow 2

I've been trying out the TensorFlow 2 alpha and I have been trying to freeze a model and export it to a .pb GraphDef file.
In Tensorflow 1 I could do something like this:
# Freeze the graph.
frozen_graph_def = tf.graph_util.convert_variables_to_constants(
    sess,
    sess.graph_def,
    output_node_names)
# Save the frozen graph to .pb file.
with open('model.pb', 'wb') as f:
    f.write(frozen_graph_def.SerializeToString())
However, this doesn't seem possible anymore, as convert_variables_to_constants has been removed and the use of sessions is discouraged.
I looked and found there is the freeze graph util
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py that works with SavedModel exports.
Is there still some way to do it within Python, or am I meant to switch to this tool now?
I also faced this same problem while migrating from TensorFlow 1.x to TensorFlow 2.0 beta.
This problem can be solved by two methods:
The first is to go to the TensorFlow 2.0 docs, search for the methods you have used, and change the syntax line by line.
The second is to use Google's tf_upgrade_v2 script:
tf_upgrade_v2 --infile your_tf1_script_file --outfile converted_tf2_file
Run the command above to convert your TensorFlow 1.x script to TensorFlow 2.0; it should solve most of these problems.
Alternatively, you can rename the method manually by referring to the documentation:
rename 'tf.graph_util.convert_variables_to_constants' to 'tf.compat.v1.graph_util.convert_variables_to_constants'
The main problem is that in TensorFlow 2.0 much of the syntax and many functions have changed, so try referring to the TensorFlow 2.0 docs or use Google's tf_upgrade_v2 script.
Not sure if you've seen this Tensorflow 2.0 issue, but this response seems to be a work-around:
https://github.com/tensorflow/tensorflow/issues/29253#issuecomment-530782763
Note: this hasn't worked for my NLP model, but maybe it will work for you. The suggested work-around is to call model.save_weights('weights.h5') while in the TF 2.0 environment. Then create a new environment with TF 1.14 and do all the following steps in that TF 1.14 env. Build your model with model = create_model() and use model.load_weights('weights.h5') to load the weights back into your model. Then save the entire model with model.save('final_model.h5'). If those steps succeed, follow the rest of the steps in the link to use freeze_graph.
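The pattern behind that work-around is generic: persist only the weights, rebuild the architecture from code in the other environment, then load the weights back. A pure-Python sketch of the idea (the file name, model shape, and helper functions here are made up for illustration, not Keras APIs):

```python
import json
import os
import tempfile

def create_model():
    # Stand-in for rebuilding the same architecture from code in the new env.
    return {"arch": "2-layer", "weights": None}

def save_weights(model, path):
    # Persist only the learned parameters, not the architecture.
    with open(path, "w") as f:
        json.dump(model["weights"], f)

def load_weights(model, path):
    with open(path) as f:
        model["weights"] = json.load(f)

# "TF 2.0 environment": train, then save only the weights.
trained = {"arch": "2-layer", "weights": [0.5, -1.2, 3.0]}
path = os.path.join(tempfile.mkdtemp(), "weights.json")
save_weights(trained, path)

# "TF 1.14 environment": rebuild the model from code, restore the weights.
model = create_model()
load_weights(model, path)
print(model["weights"])  # [0.5, -1.2, 3.0]
```

This works across incompatible library versions precisely because the weights file carries no version-specific graph structure; the architecture comes from your own code each time.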

WARNING from Tensorflow when creating VGG16

I am using Keras to create a deep learning model. When I create a VGG16 model, the model is created but I get the following warning.
vgg16_model = VGG16()
Why does this warning happen, and how can I resolve it?
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
It looks like there's an open git issue to clean this up in the keras code:
https://github.com/tensorflow/minigo/issues/740
You should be safe to ignore the warning, I don't believe you can change it without modifying the TF repo. You can disable warnings as mentioned here:
tf.logging.set_verbosity(tf.logging.ERROR)
You can use the function below to avoid these warnings. First, you must make the appropriate imports:
import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
import tensorflow as tf

def tf_no_warning():
    """
    Make Tensorflow less verbose
    """
    try:
        tf.logging.set_verbosity(tf.logging.ERROR)
        os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
    except ImportError:
        pass
And then call the above function at the beginning of the code.
tf_no_warning()
So, the method colocate_with is a context manager that makes sure the operation or tensor you are about to create is placed on the same device the reference operation is on. The warning says that it is deprecated and that placement will from now on be handled automatically. In a future version of TensorFlow the method will be removed, so you can either update your code now (it will still run currently) or later (once you upgrade TensorFlow, code calling this method will no longer run).
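For intuition about what such a context manager does, here is a minimal pure-Python sketch of device colocation (the device stack below is a hypothetical stand-in, not TensorFlow's internals):

```python
from contextlib import contextmanager

_device_stack = ["cpu:0"]  # hypothetical stand-in for placement state

@contextmanager
def colocate_with(device):
    # Everything created inside the with-block is placed on `device`;
    # the previous placement is restored on exit, even on error.
    _device_stack.append(device)
    try:
        yield
    finally:
        _device_stack.pop()

def current_device():
    return _device_stack[-1]

with colocate_with("gpu:0"):
    inside = current_device()   # 'gpu:0'
outside = current_device()      # back to 'cpu:0'
print(inside, outside)
```

"Handled automatically by placer" means the framework now maintains this kind of placement state itself, so user code no longer needs to enter the context manager explicitly.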

how to use the MNIST dataset on linux using tensorflow

I'm new to machine learning and I am following TensorFlow's tutorial to create some simple neural networks that learn the MNIST data.
I want to run code that recognises handwritten digits using the MNIST data, but I don't know how to run it. Should I download the data to my machine, extract it into a folder, and set the path in the code, or does TensorFlow contain the data? When I do import input_data I get
No module named 'input_data', and when I do
from tensorflow.examples.tutorials.mnist import input_data ==> No module named 'tensorflow.examples'
PS: when I do import tensorflow as tf I get no error, so I think TensorFlow itself is fine.
Could you help me please? For example, I want to run the code below; what should I do?
https://github.com/hwalsuklee/tensorflow-mnist-cnn
If you cannot import tensorflow.examples, I'm guessing something went wrong with the installation. Try reinstalling TensorFlow with the latest version.
You don't need to download the data on your own; TensorFlow will put it in the path you provide. But first, try these steps:
I'm currently using tf 1.2.0 and I'm not getting that error.
If you want to know which version you have installed:
import tensorflow as tf
print(tf.__version__)
After everything is installed try:
from tensorflow.examples.tutorials.mnist import input_data
input_data.read_data_sets("./data/", one_hot=True)
That should copy the data to a "data" folder inside your working folder (the "data" folder will be created and all the files will be available there).
If the above lines of code run with no errors, you should be able to run the example.
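read_data_sets follows a download-if-missing pattern: if the files already exist in the target folder it reuses them, otherwise it fetches them. A minimal sketch of that behaviour (the file names and the fetch step below are placeholders, not the real MNIST archives):

```python
import os
import tempfile

def fetch(filename):
    # Placeholder for the real network download of an MNIST archive.
    return b"fake-mnist-bytes"

def read_data_sets(data_dir, filenames=("train-images.gz", "train-labels.gz")):
    os.makedirs(data_dir, exist_ok=True)
    downloaded = []
    for name in filenames:
        path = os.path.join(data_dir, name)
        if not os.path.exists(path):      # reuse the cached copy if present
            with open(path, "wb") as f:
                f.write(fetch(name))
            downloaded.append(name)
    return downloaded

data_dir = os.path.join(tempfile.mkdtemp(), "data")
first = read_data_sets(data_dir)    # first call downloads both files
second = read_data_sets(data_dir)   # second call is a cache hit: downloads nothing
print(first, second)
```

This is why you can point the real helper at "./data/" and run the example repeatedly: only the first run touches the network.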