I have a conda environment with TensorFlow 2.0.0-beta1 installed. However, whenever I import tensorflow and attempt to enable eager execution, I get this error:
AttributeError: module 'tensorflow' has no attribute 'enable_eager_execution'
The only code that I have run for this is:
import tensorflow as tf
print(tf.__version__)
tf.enable_eager_execution()
Is this an error with the tensorflow 2.0 beta module or an issue with my installation?
In TensorFlow 2.0, the enable_eager_execution method has been moved to the tf.compat.v1 module. The following works on tensorflow-2.0.0-beta1:
tf.compat.v1.enable_eager_execution()
In TensorFlow 2.0, eager execution is enabled by default, so you don't need to enable it in your program. For example:
import tensorflow as tf
t = tf.constant([5.0])
Now you can directly view the value of the tensor without using a session object.
print(t)
# tf.Tensor([5.], shape=(1,), dtype=float32)
You can also convert the tensor to a NumPy array:
numpy_array = t.numpy()
print(numpy_array)
# [5.]
You can also disable eager execution in TensorFlow 2 (tested on tensorflow-2.0.0-beta1; this might not work in future versions):
tf.compat.v1.disable_eager_execution()
t2 = tf.constant([5.0])
print(t2)
# Tensor("Const:0", shape=(1,), dtype=float32)
Calling the numpy() method on a tensor after eager execution is disabled throws an error:
AttributeError: 'Tensor' object has no attribute 'numpy'
One caveat to keep in mind when disabling eager execution: once it is disabled, it cannot be re-enabled in the same program, because tf.enable_eager_execution must be called at program startup. Calling it after disabling eager execution throws an error:
ValueError: tf.enable_eager_execution must be called at program startup.
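A minimal sketch of that behaviour (again tested on tensorflow-2.0.0-beta1; later versions may differ):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
# Attempting to switch eager execution back on in the same process fails:
tf.compat.v1.enable_eager_execution()
# ValueError: tf.enable_eager_execution must be called at program startup.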
I just updated TensorFlow to version 2.10.
Python v3.10.7
import tensorflow as tf
print(tf.version.VERSION)
2.10.0
exit
Previously, the model was saving fine; now I am getting this warning:
C:\Users\Master\anaconda3\envs\gputensorflow\lib\site-packages\tensorflow\python\keras\utils\generic_utils.py:494: CustomMaskWarning: Custom mask layers require a config and must override get_config. When loading, the custom mask layer must be passed to the custom_objects argument.
warnings.warn('Custom mask layers require a config and must override '
Code is as follows:
model.save(os.path.join(new_folder, 'model_weights.h5'))
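For reference, a minimal sketch of what the warning asks for; the MyMaskLayer class and the shapes here are hypothetical stand-ins for the custom mask layer, not code from the model above:
import tensorflow as tf

# Hypothetical custom layer; overriding get_config lets Keras serialize it on save
class MyMaskLayer(tf.keras.layers.Layer):
    def __init__(self, units=8, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        config = super().get_config()
        config.update({"units": self.units})
        return config

inputs = tf.keras.Input(shape=(4,))
model = tf.keras.Model(inputs, MyMaskLayer()(inputs))
model.save("model_weights.h5")

# When loading, the custom layer must be passed via the custom_objects argument:
restored = tf.keras.models.load_model(
    "model_weights.h5", custom_objects={"MyMaskLayer": MyMaskLayer}
)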
I had code for loading a BERT model that ran fine, but now it raises an error. Here is the code:
model = load_trained_model_from_checkpoint(
    config_path,
    checkpoint_path,
    trainable=True,
    seq_len=SEQ_LEN,
    output_layer_num=4
)
Now the error it raises is:
AttributeError: 'tuple' object has no attribute 'layer'
The environment settings are as follows:
keras-bert=0.85.0
keras=2.4.3
tensorflow=1.15.2
Many thanks in advance
When installing packages in your environment, try installing them without pinning specific versions:
pip install -q keras-bert
pip install keras
AttributeError: 'tuple' object has no attribute 'layer' basically occurs when you mix up keras and tensorflow.keras, as this answer explains.
See if that resolves your issue. Also, if you have the following in your code:
import keras
from keras import backend as K
Try changing them to:
from tensorflow.python import keras
import tensorflow.keras.backend as K
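Put together, a minimal self-consistent sketch of keeping everything on the tensorflow side (the model itself is just illustrative):
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow.keras.backend as K

# Every Keras import comes from the tensorflow namespace; mixing in
# a bare `import keras` is what triggers the tuple/.layer error.
inputs = tf.keras.Input(shape=(16,))
x = layers.Dense(8, activation="relu")(inputs)
outputs = layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)
print(K.int_shape(outputs))  # backend utilities from the same namespace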
I hope that resolves your issue.
You can check this article for reference.
I am trying to load Universal Sentence Encoder and this is my code snippet:
import tensorflow as tf
import tensorflow_hub as hub
import os, requests, tarfile
def extractUSEEmbeddings(words):
    # Extracts USE embeddings.
    # Replace `USE_folder` with any directory on your machine where you want USE to be downloaded.
    try:
        embed = hub.KerasLayer(USE_folder)
    except Exception as e:
        print("Downloading USE embeddings...")
        r = requests.get("https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed")
        open("USE.tar.gz", "wb").write(r.content)
        tar = tarfile.open("USE.tar.gz", "r:gz")
        tar.extractall(path=USE_folder)
        tar.close()
        os.remove("USE.tar.gz")
        embed = hub.KerasLayer(USE_folder)
    word_embeddings = embed(words)
    return word_embeddings.numpy()
I get the error 'Tensor' object has no attribute 'numpy'. When I run the same code in a Jupyter notebook, with the same versions of tensorflow (2.2.0) and tensorflow-hub (0.9.0), I do not get any error and it works perfectly fine.
I printed the type of the Tensor in both cases, and realized that this is because I get an Eager Tensor (tensorflow.python.framework.ops.EagerTensor) in Jupyter, which has a numpy method, whereas in my script the Tensor is of type tensorflow.python.framework.ops.Tensor. However, I am now unable to figure out how to switch on eager execution in my script, since in TF 2.x it is supposed to be enabled by default.
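Here is a minimal sketch of the type check I am describing (TF 2.x):
import tensorflow as tf

t = tf.constant([1.0, 2.0])
print(tf.executing_eagerly())  # True in eager mode, False in graph mode
print(type(t))                 # EagerTensor when eager; plain Tensor otherwise
if tf.executing_eagerly():
    print(t.numpy())           # .numpy() only exists on EagerTensor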
I have tried all the solutions given in this thread, but none of them work for me.
Why am I not getting an Eager Tensor when run through the terminal, but get it through Jupyter? Does my problem have anything to do with the fact that I am using tensorflow-hub here, and is that why none of the solutions are working for me? Most importantly, how do I convert Tensor in tf 2.x to a numpy array?
I'm running tf2.0 in a conda environment, and would like to display a tensor in a figure.
plt.imshow(tmp)
TypeError: Image data of dtype object cannot be converted to float
tmp.dtype
tf.float32
So I tried converting it to a numpy array, but...
print(tmp.numpy())
AttributeError: 'Tensor' object has no attribute 'numpy'
tmp.eval()
ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`
I've read elsewhere that this is because I need an active session or eager execution. Eager execution should be enabled by default in tf2.0, but...
print(tf.__version__)
2.0.0-alpha0
tf.executing_eagerly()
False
tf.enable_eager_execution()
AttributeError: module 'tensorflow' has no attribute 'enable_eager_execution'
tf.compat.v1.enable_eager_execution()
None
tf.executing_eagerly()
False
sess = tf.Session()
AttributeError: module 'tensorflow' has no attribute 'Session'
I tried upgrading to 2.0.0b1, but the results were exactly the same (except tf.__version__).
Edit:
According to this answer, the problem is probably that I am trying to debug a function which is inside a tf.data.Dataset.map() call, which works with static graphs. So perhaps the question becomes "how do I debug these functions?"
The critical insight for me was that running the tf.data.Dataset.map() function builds a graph, and the graph is executed later as part of a data pipeline. So it is more about code generation, and eager execution doesn't apply. Besides the lack of eager execution, building a graph has other restrictions, including that all inputs and outputs must be tensors. Tensors don't support item assignment operations such as T[0] += 1.
Item assignment is a fairly common use case, so there is a straightforward solution: tf.py_function (previously tf.py_func). py_function works with numpy arrays as inputs and outputs, so you're free to make use of other numpy functions which have not yet been included in the tensorflow library.
As usual, there is a trade-off: a py_function is interpreted on the fly by the python interpreter. So it won't be as fast as pre-compiled tensor operations. More importantly, the interpreter threads are not aware of each other, so there may be parallelisation issues.
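As a minimal sketch (the dataset values and the add_one_to_first helper are made up for illustration), wrapping a numpy-based function in tf.py_function lets you use item assignment inside a Dataset.map() pipeline:
import tensorflow as tf

def add_one_to_first(x):
    arr = x.numpy()   # inside py_function the input arrives as an EagerTensor
    arr[0] += 1       # item assignment works on numpy arrays, unlike tf.Tensor
    return arr

ds = tf.data.Dataset.from_tensor_slices([[1.0, 2.0], [3.0, 4.0]])
ds = ds.map(lambda x: tf.py_function(add_one_to_first, inp=[x], Tout=tf.float32))

for item in ds:
    print(item.numpy())  # [2. 2.] then [4. 4.]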
There's a helpful explanation and demonstration of a py_function in the documentation: https://www.tensorflow.org/beta/guide/data
I'm struggling to run TensorFlow (v1.1) code multiple times in Jupyter Notebook.
For example, I execute this simple code snippet that creates an encoding layer for a seq2seq model:
# Construct encoder layer (LSTM)
encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units)
encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs_embedded,
    dtype=tf.float32, time_major=False
)
The first time is totally fine; my encoder is created.
However, if I rerun it (no matter what changes I've applied), I get this error:
Attempt to have a second RNNCell use the weights of a variable scope that already has weights
It's very annoying as it forces me to restart the kernel every time I want to change a layer.
Can someone explain why this happens and how I can fix it?
Thanks!
You are trying to build the exact same graph twice and therefore TensorFlow complains because the variables already exist in the default graph.
What you could do is call tf.reset_default_graph() before building the graph a second time, to ensure you create a fresh graph when required.
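For example, a sketch against TF 1.x (the placeholder shape and unit count are illustrative):
import tensorflow as tf

def build_encoder(inputs_embedded, hidden_units):
    encoder_cell = tf.contrib.rnn.LSTMCell(hidden_units)
    return tf.nn.dynamic_rnn(
        encoder_cell, inputs_embedded,
        dtype=tf.float32, time_major=False
    )

tf.reset_default_graph()  # drop previously created variables before rebuilding
encoder_inputs_embedded = tf.placeholder(tf.float32, [None, None, 32])
encoder_outputs, encoder_final_state = build_encoder(encoder_inputs_embedded, 64)

Rerunning this cell is now safe, because each run starts from an empty default graph.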
Just in case, I would also suggest using an interactive session as described here in the Start TensorFlow InteractiveSession section:
import tensorflow as tf
sess = tf.InteractiveSession()