NotImplementedError: Cannot convert a symbolic Tensor (bidirectional/forward_lstm/strided_slice:0) to a numpy array - tensorflow 2.0

sequence_input = Input(shape=(max_len,), dtype="int32")
embedded_sequences = Embedding(vocab_size, 128, input_length=max_len,
                               mask_zero=True)(sequence_input)
lstm = Bidirectional(LSTM(64, dropout=0.5, return_sequences=True))(embedded_sequences)
The third line of code gives the following error:
Cannot convert a symbolic Tensor (bidirectional/forward_lstm/strided_slice:0) to a numpy array.
This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
While searching for a solution to this error, I saw a lot of answers on Stack Overflow telling me to downgrade numpy to a version below 1.20.
But since I use featuretools, I need numpy 1.20 or higher.
So my question is: is there currently any way to fix this error without downgrading numpy?
(My TensorFlow version is 2.3.0 and my numpy version is 1.23.)

I solved this error by changing the CUDA version and installing the latest TensorFlow (2.3 -> 2.10).
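For what it's worth, here is roughly how I verified that the upgraded setup no longer hits the error. This is a minimal sketch of my own check: max_len and vocab_size are placeholder values, and the version comments reflect my environment rather than an official compatibility matrix.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM, Bidirectional

print(np.__version__)   # 1.23.x in my case
print(tf.__version__)   # 2.10.x after the upgrade

# Same structure as the snippet in the question, with placeholder sizes
max_len, vocab_size = 40, 1000
sequence_input = Input(shape=(max_len,), dtype="int32")
embedded_sequences = Embedding(vocab_size, 128, mask_zero=True)(sequence_input)
lstm = Bidirectional(LSTM(64, dropout=0.5, return_sequences=True))(embedded_sequences)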

Related

Installing Tensorflow, NumPy and Pandas on the same machine

If I try to install TensorFlow on my machine, it'll install numpy 1.19.5.
If I try to install pandas, it'll install numpy 1.22.
If I stick with numpy 1.19.5 and try to import pandas, I get a complaint from pandas:
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
If I go with numpy 1.22, I get
NotImplementedError: Cannot convert a symbolic Tensor (lstm_2/strided_slice:0) to a numpy array
I've heard this means that TensorFlow can't run with numpy 1.22.
So what am I supposed to do to have pandas and tensorflow working at the same time?
You can use pandas version 1.3.5, which works with later TensorFlow versions (2.6 through 2.9) and numpy >= 1.17.3.
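If it helps, a quick sanity check after pinning those versions might look like the sketch below; the expected ranges come from this answer and are a suggestion, not an official compatibility matrix.
import numpy as np
import pandas as pd
import tensorflow as tf

# Expect pandas 1.3.5, numpy >= 1.17.3, and a TensorFlow release in the 2.6-2.9 range
print("numpy:", np.__version__)
print("pandas:", pd.__version__)
print("tensorflow:", tf.__version__)

# Building an LSTM layer is the call that fails when numpy and TensorFlow are mismatched
tf.keras.Sequential([tf.keras.layers.LSTM(8, input_shape=(4, 3))])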

How do I convert a tensor to a Numpy array within a Tensorflow model?

I've been trying to make an LSTM model using this code provided in a Deeplearning.AI tutorial on Coursera.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(10000, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
However, I get this error:
"Cannot convert a symbolic Tensor (bidirectional_4/forward_lstm_4/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported"
If I drop the bidirectional layers, the code runs fine.
I know that there are several other questions dealing with the issue of converting tensors to numpy arrays. However, I can't find one that addresses my issue, because:
a) They all deal with doing the conversion outside of a model. My issue is that the model fails to even be instantiated, because one layer seems to have trouble talking to another, and I haven't found a solution that deals with that; and
b) This is the exact same code that runs just fine inside a Colab notebook (with the same TF version as on my desktop) but fails on my desktop.
Thanks,
It turns out that this is a problem with numpy versions 1.20 and above. When I downgraded my numpy version to 1.19.5, it worked just fine.
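In case someone else hits the same Colab-vs-desktop mismatch, this is the kind of check I would run first to see which interpreter and numpy version the failing script actually uses (nothing TensorFlow-specific):
import sys
import numpy as np

print(sys.executable)   # which Python environment the script is really running in
print(np.__version__)   # numpy >= 1.20 reproduced the error here; 1.19.5 did not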

Cannot convert a symbolic Tensor (lstm/strided_slice:0) to a numpy array on a Pi 4 with a 32-bit OS, during LSTM implementation

While running an LSTM model, I'm getting this error when my code calls the following:
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
The error is:
NotImplementedError: Cannot convert a symbolic Tensor (lstm/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
I'm using:
tensorflow 2.4.0
numpy 1.20.0
pandas 1.3.4
python 3.7.3
platform: Pi 4 + 32-bit OS
Try using a lower version of numpy, i.e. 1.19.5 or 1.19:
pip install numpy==1.19.5
or
pip install numpy==1.19
This problem usually occurs when you try to evaluate something involving a symbolic Tensor (for example, a tensor array) with non-symbolic types like NumPy. It is quite difficult to avoid, because you might use symbolic tensors such as tf.zeros() or tf.ones() as parameters of your model, and these interact with NumPy internally. Try a lower version of NumPy, or the latest version of TensorFlow, i.e. TensorFlow Core v2.6.0.
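To illustrate the symbolic vs. eager distinction described above, here is a small standalone sketch (not taken from the question):
import numpy as np
import tensorflow as tf

# An eager tensor holds concrete values, so converting it to numpy is fine
eager = tf.ones((2, 2))
print(np.asarray(eager))

# A symbolic tensor is a graph-mode placeholder with no values yet;
# passing one to a NumPy call is what triggers errors like the one above
symbolic = tf.keras.Input(shape=(2,))
# np.asarray(symbolic)  # would raise: symbolic tensors cannot be converted to numpy arrays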
I solved this problem by changing my OS from a 32-bit OS to a 64-bit OS. I tried the 32-bit OS + numpy 1.19.5 + TF 2.4.0 and was receiving this error constantly; after changing to a 64-bit OS with numpy 1.19.5 and TF 2.4.0, it is working now.
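For anyone unsure which flavour they are running, a generic way to check the OS/interpreter bitness from Python (not specific to the Pi):
import platform
import struct

print(platform.machine())         # e.g. armv7l for 32-bit ARM, aarch64 for 64-bit ARM
print(struct.calcsize("P") * 8)   # pointer size of the running Python: 32 or 64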

NotImplementedError: Cannot convert a symbolic Tensor to a numpy array

The code below used to work last year, but updates in keras/tensorflow/numpy broke it. It now outputs the exception below. Does anyone know how to make it work again?
I'm using:
Tensorflow 2.4.1
Keras 2.4.3
Numpy 1.20.1
Python 3.9.1
import numpy as np
from keras.layers import LSTM, Embedding, Input, Bidirectional
dim = 30
max_seq_length = 40
vecs = np.random.rand(45,dim)
input_layer = Input(shape=(max_seq_length,))
embedding_layer = Embedding(len(vecs), dim, weights=[vecs], input_length=max_seq_length, trainable=False, name="layerA")(input_layer)
lstm_nobi = LSTM(max_seq_length, return_sequences=True, activation="linear", name="layerB")
lstm = Bidirectional(lstm_nobi, name="layerC")(embedding_layer)
Complete output of the script above: https://pastebin.com/DsQNWVwz
Shortened output:
2021-02-10 17:51:13.037468: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-02-10 17:51:13.037899: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-02-10 17:51:13.038418: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Traceback (most recent call last):
File "/run/media/volker/DATA/configruns/load/./test.py", line 13, in <module>
lstm = Bidirectional(lstm_nobi, name="layerC")(embedding_layer)
... omitted, see pastebin ...
File "/usr/lib/python3.9/site-packages/tensorflow/python/framework/ops.py", line 852, in __array__
raise NotImplementedError(
NotImplementedError: Cannot convert a symbolic Tensor (layerC/forward_layerB/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
Installing numpy 1.19.5 works for me even with Python 3.9
pip install -U numpy==1.19.5
My environment is Windows, and since I do not have the Visual C++ compiler installed, I rely on a third-party whl file for the installation:
pip install -U https://mirrors.aliyun.com/pypi/packages/bc/40/d6f7ba9ce5406b578e538325828ea43849a3dfd8db63d1147a257d19c8d1/numpy-1.19.5-cp39-cp39-win_amd64.whl#sha256=0eef32ca3132a48e43f6a0f5a82cb508f22ce5a3d6f67a8329c81c8e226d3f6e
Solution: Use Python 3.8, because Python 3.9 is not supported by Tensorflow.
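If you go the interpreter route, it is worth confirming which Python the failing environment actually uses before reinstalling TensorFlow; a trivial check (TF 2.4.x wheels targeted Python 3.6-3.8, to the best of my knowledge):
import sys

print(sys.version)        # full interpreter version string
print(sys.version_info)   # e.g. (3, 8, ...) is within the range TF 2.4.x supports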

Convert Tensor to numpy array in TF 2.x

I am trying to load Universal Sentence Encoder and this is my code snippet:
import tensorflow as tf
import tensorflow_hub as hub
import os, requests, tarfile

def extractUSEEmbeddings(words):
    # Extracts USE embeddings
    # Replace `USE_folder` with any directory on your machine where you want USE to be downloaded
    try:
        embed = hub.KerasLayer(USE_folder)
    except Exception as e:
        print("Downloading USE embeddings...")
        r = requests.get("https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed")
        open("USE.tar.gz", "wb").write(r.content)
        tar = tarfile.open("USE.tar.gz", "r:gz")
        tar.extractall(path=USE_folder)
        tar.close()
        os.remove("USE.tar.gz")
        embed = hub.KerasLayer(USE_folder)
    word_embeddings = embed(words)
    return word_embeddings.numpy()
I get the error 'Tensor' object has no attribute 'numpy'. When I run the same code on Jupyter notebook, with the same versions of tensorflow (2.2.0) and tensorflow-hub (0.9.0), I do not get any error and it works perfectly fine.
I printed the type of Tensor in both cases, and realized that this is because I get an Eager Tensor (tensorflow.python.framework.ops.EagerTensor) in Jupyter, which has a numpy method whereas in my script, the Tensor is of type tensorflow.python.framework.ops.Tensor. However, I am now unable to figure out how to switch on Eager Execution in my script, since in TF 2.x it is supposed to be enabled by default.
I have tried all the solutions given in this thread, but none of them work for me.
Why am I not getting an Eager Tensor when running through the terminal, but I do get one through Jupyter? Does my problem have anything to do with the fact that I am using tensorflow-hub here, and is that why none of the solutions are working for me? Most importantly, how do I convert a Tensor in TF 2.x to a numpy array?
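For reference, this is the small standalone check I would use to see whether eager execution is actually on in a given environment, which is the distinction described above (independent of the USE/hub code):
import tensorflow as tf

print(tf.executing_eagerly())   # True by default in TF 2.x scripts; False inside graph contexts

t = tf.constant([1.0, 2.0])
print(type(t))                  # EagerTensor when eager execution is on
print(t.numpy())                # .numpy() only exists on eager tensors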