I am trying to load Universal Sentence Encoder and this is my code snippet:
import tensorflow as tf
import tensorflow_hub as hub
import os, requests, tarfile
def extractUSEEmbeddings(words):
    # Extracts USE embeddings
    # Replace `USE_folder` with any directory on your machine where you want USE to be downloaded
    try:
        embed = hub.KerasLayer(USE_folder)
    except Exception as e:
        print("Downloading USE embeddings...")
        r = requests.get("https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed")
        open("USE.tar.gz", "wb").write(r.content)
        tar = tarfile.open("USE.tar.gz", "r:gz")
        tar.extractall(path=USE_folder)
        tar.close()
        os.remove("USE.tar.gz")
        embed = hub.KerasLayer(USE_folder)
    word_embeddings = embed(words)
    return word_embeddings.numpy()
I get the error 'Tensor' object has no attribute 'numpy'. When I run the same code in a Jupyter notebook, with the same versions of tensorflow (2.2.0) and tensorflow-hub (0.9.0), I do not get any error and it works perfectly fine.
I printed the type of the tensor in both cases and realized that this is because I get an eager tensor (tensorflow.python.framework.ops.EagerTensor) in Jupyter, which has a numpy method, whereas in my script the tensor is of type tensorflow.python.framework.ops.Tensor. However, I am now unable to figure out how to switch on eager execution in my script, since in TF 2.x it is supposed to be enabled by default.
I have tried all the solutions given in this thread, but none of them work for me.
Why am I not getting an Eager Tensor when run through the terminal, but get it through Jupyter? Does my problem have anything to do with the fact that I am using tensorflow-hub here, and is that why none of the solutions are working for me? Most importantly, how do I convert Tensor in tf 2.x to a numpy array?
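For reference, here is a minimal check of the execution mode that can be dropped at the top of the script; it is independent of tensorflow-hub and only meant as a diagnostic sketch:
import tensorflow as tf

# In TF 2.x eager execution is on by default, but something in the import chain
# (for example a call to tf.compat.v1.disable_eager_execution()) can turn it off.
print("Eager execution:", tf.executing_eagerly())

t = tf.constant([1.0, 2.0])
print(type(t))  # EagerTensor when eager mode is active, a plain graph Tensor otherwise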
I'm trying to run the GAN in the link with my own dataset. First of all, I wanted to try it with the MNIST dataset and see the results. I am running it on Colab. When I use the versions of tensorflow and keras that come with Colab, the outputs are noisy and the results are bad. An example from epoch 1400:
But when I downgrade tensorflow to 2.2.0 and keras to 2.3.1, the results are very good. An example from epoch 1350:
Then, when I ran it with my own dataset without changing the library versions that come with Colab, I still got noisy, bad results. So I updated the versions as before. However, I now get the following error:
FailedPreconditionError: Error while reading resource variable _AnonymousVar45 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar45/N10tensorflow3VarE does not exist.
[[node mul_1/ReadVariableOp (defined at /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:3009) ]] [Op:__inference_keras_scratch_graph_5103]
Function call stack: keras_scratch_graph
If this error were caused by the tensorflow and keras versions, I think I would have gotten the same error when I tried MNIST, so I couldn't find the source of the error. Maybe it has to do with the way I load my data; the existing library versions had no problem with it, though. Anyway, here is how I load the data:
import zipfile  # unzipping
import glob  # finding image paths
import numpy as np  # creating numpy arrays
from skimage.io import imread  # reading images
from skimage.transform import resize  # resizing images

# 1. Unzip images
path = '/content/gdrive/My Drive/gan/RealImages.zip'
with zipfile.ZipFile(path, 'r') as zip_ref:
    zip_ref.extractall('/content/gdrive/My Drive/gan/extracted')

# 2. Obtain paths of the images (.jpg files here)
img_list = sorted(glob.glob('/content/gdrive/My Drive/gan/extracted/RealImages/RealImages/*.jpg'))
print(img_list)

# 3. Read images & convert to numpy arrays
## create a placeholder numpy array
IMG_SIZE = 28
x_data = np.empty((len(img_list), IMG_SIZE, IMG_SIZE, 3), dtype=np.float32)

## read and convert to arrays
for i, img_path in enumerate(img_list):
    # read image
    img = imread(img_path)
    print(img_path)
    # resize image (3 channels for RGB; 1 would be gray-scale)
    img = resize(img, output_shape=(IMG_SIZE, IMG_SIZE, 3), preserve_range=True)
    # save to numpy array
    x_data[i] = img
Then, I changed the old line:
(X_train, _), (_, _) = mnist.load_data()
to:
X_train=x_data
I can't figure out what I did wrong. I would be very happy if you could help.
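For completeness, since mnist.load_data() returns grayscale uint8 arrays of shape (60000, 28, 28), here is a sketch of how I could convert x_data to that layout, in case the extra channel dimension or the dtype is what matters (the grayscale conversion is just a naive mean over channels):
import numpy as np

# (N, 28, 28, 3) float32 in [0, 255] -> (N, 28, 28) uint8, mirroring mnist.load_data()
x_gray = x_data.mean(axis=-1)
X_train = np.clip(x_gray, 0, 255).astype(np.uint8)
print(X_train.shape, X_train.dtype)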
from tensorflow.keras.layers import Input, Embedding, Bidirectional, LSTM

sequence_input = Input(shape=(max_len,), dtype="int32")
embedded_sequences = Embedding(vocab_size, 128, input_length=max_len, mask_zero=True)(sequence_input)
lstm = Bidirectional(LSTM(64, dropout=0.5, return_sequences=True))(embedded_sequences)
The last line (the Bidirectional LSTM) gives the following error:
Cannot convert a symbolic Tensor (bidirectional/forward_lstm/strided_slice:0) to a numpy array.
This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported
While looking for a solution to this error, I saw a lot of answers on Stack Overflow saying to downgrade numpy to a version below 1.20.
But since I use featuretools, I need to keep numpy at version 1.20 or higher.
So my question is: is there currently any way to fix this error without downgrading the numpy version?
(My tensorflow version is 2.3.0 and my numpy version is 1.23.)
I solved this error by changing the CUDA version and installing the latest TensorFlow (2.3 -> 2.10).
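For example, after the upgrade the setup can be sanity-checked like this (just a sketch; the exact CUDA/driver combination depends on your machine):
import numpy as np
import tensorflow as tf

print(tf.__version__)   # expect 2.10.x after the upgrade
print(np.__version__)   # newer TF releases work with numpy >= 1.20, so featuretools can keep its requirement
print(tf.config.list_physical_devices("GPU"))  # confirms TF can see the GPU under the new CUDA version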
I'm having a problem similar to the one described here:
ValueError: Unknown layer: Functional
import tensorflow as tf
model = tf.keras.models.load_model("model.h5")
which throws: ValueError: Unknown layer: Functional.
I'm pretty sure this is because the h5 file was saved with TF 2.3.0 and I'm trying to load it with 2.2.0. I'd rather not convert it using tf 2.3.0 directly; I'm hoping to find a way of manually fixing the h5 file itself, or passing the right custom object to the model loader. It seems like it's just an extra key wherever the model config is stored, e.g. https://github.com/tensorflow/tensorflow/issues/41929
The problem is, I'm not sure how to manually get rid of the Functional layer in the h5 file. Specifically, I've tried:
import h5py
f = h5py.File("model.h5",'r')
print(f['model_weights'].keys())
which gives:
<KeysViewHDF5 ['concatenate_1', 'conv1d_3', 'conv1d_4', 'conv1d_5', 'dense_1', 'dropout_4', 'dropout_5', 'dropout_6', 'dropout_7', 'embedding_1', 'global_average_pooling1d_1', 'global_max_pooling1d_1', 'input_2']>
and I don't see the Functional layer anywhere. Where exactly is the config for the model stored in this file? E.g. I'm looking for something like {"class_name": "Functional", "config": {"name": "model", "layers":...}}
Question: is there a way I can manually edit the h5 file using h5py to get rid of the Functional layer?
Alternatively, can I pass a specific custom_objects={'Functional': ???} to the load_model function?
I've tried {'Functional': tf.keras.models.Model}, but that returns ('Keyword argument not understood:', 'groups'), I think because it's trying to load a model into weights?
I had a similar problem. The only way I could solve it without changing the TensorFlow version and retraining the model was to build the model structure again using the Keras API in TensorFlow 2.2.0 and then call:
model.load_weights(<h5 file>)
where the original h5 file was created using TensorFlow 2.3.0. If you already have the code that builds the model structure, this method should be relatively easy, since all you have to do is replace load_model(<h5 file>) with the line above.
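For example, a minimal sketch of this approach, where build_my_model is a placeholder for whatever code originally defined the architecture, and the optimizer/loss are just examples:
model = build_my_model()          # rebuild the exact same architecture, layer for layer
model.load_weights("model.h5")    # only the weights are read from the TF 2.3.0 h5 file
model.compile(optimizer="adam", loss="binary_crossentropy")  # recompile as needed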
Just change
from keras.models import load_model
to
from tensorflow.keras.models import load_model
and then call
load_model('model.h5', compile=False)
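Put together, that looks like this (the compile settings are only an example, and only needed if you train or evaluate afterwards):
from tensorflow.keras.models import load_model

model = load_model('model.h5', compile=False)  # skip restoring the saved optimizer/loss config
model.compile(optimizer='adam', loss='binary_crossentropy')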
I wanted to try out the embeddings provided in tensorflow-hub, the 'universal-sentence-encoder' to be specific. I tried the examples provided (https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb) and they worked fine. So I tried to do the same with the 'multilingual' model, but every time the multilingual model is loaded, the Colab kernel fails and restarts. What is the problem and how can I get around it?
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
import tf_sentencepiece
import sentencepiece
# Import the Universal Sentence Encoder's TF Hub module
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-multilingual/1")  # This is where the kernel dies.
print("imported model")

# Compute a representation for each message, showing various lengths supported.
word = "코끼리"
sentence = "나는 한국어로 쓰여진 문장이야."
paragraph = (
    "동해물과 백두산이 마르고 닳도록. "
    "하느님이 보우하사 우리나라 만세~")
messages = [word, sentence, paragraph]

# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)

with tf.Session() as session:
    session.run([tf.global_variables_initializer(), tf.tables_initializer()])
    message_embeddings = session.run(embed(messages))

    for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
        print("Message: {}".format(messages[i]))
        print("Embedding size: {}".format(len(message_embedding)))
        message_embedding_snippet = ", ".join(
            (str(x) for x in message_embedding[:3]))
        print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
I had similar issues with the multilingual sentence encoder. I resolved them by pinning the tensorflow version to 1.14.0 and tf-sentencepiece to 0.1.83, so before running your code in Colab try:
!pip3 install tensorflow==1.14.0
!pip3 install tensorflow-hub
!pip3 install sentencepiece
!pip3 install tf-sentencepiece==0.1.83
I was able to replicate your problem in Colab, and with these versions the model loaded correctly.
It seems to be a compatibility problem between sentencepiece and tensorflow; check for updates on this issue here.
Let us know how it goes. Best of luck and I hope this helps.
EDIT: If tensorflow version 1.14.0 does not work, change it to 1.13.1. This problem should be resolved once compatibility between tensorflow and sentencepiece is figured out.
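For instance, you can confirm the pinned versions before loading the module (a quick sanity check, assuming the installs above):
import pkg_resources
import tensorflow as tf

print(tf.__version__)  # expect 1.14.0 (or 1.13.1, per the edit above)
print(pkg_resources.get_distribution("tf-sentencepiece").version)  # expect 0.1.83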
Summary
My question is composed of:
A context in which I present my project, my working environment and my workflow
The detailed problem
The relevant parts of my code
The solutions I have tried to solve my problem
A restatement of the question
Context
I've written a Python Keras implementation of a downgraded version of the original Super-Resolution GAN. Now I want to test it using Google Firebase Machine Learning Kit, by hosting it on Google's servers. That's why I have to convert my Keras program to a TensorFlow Lite one.
Environment and workflow (with the problem)
I'm training my program in the Google Colab environment: there, I've installed TF 2.0.0-beta1 (this choice is motivated by this incorrect answer: https://datascience.stackexchange.com/a/57408/78409).
Workflow (and problem):
I write locally my Python Keras program, keeping in mind that it will run on TF 2. So I use TF 2 imports, for example: from tensorflow.keras.optimizers import Adam and also from tensorflow.keras.layers import Conv2D, BatchNormalization
I send my code to my Drive
I run my Google Colab notebook without any problem: TF 2 is used.
I get the output model in my Drive, and I download it.
I try to convert this model to the TFLite format by executing the following CLI command: tflite_convert --output_file=srgan.tflite --keras_model_file=srgan.h5. Here the problem appears.
The problem
Instead of outputting the TFLite-converted model from the TF (Keras) model, the previous command outputs this error:
ValueError: Unknown loss function:build_vgg19_loss_network
The function build_vgg19_loss_network is a custom loss function that I've implemented and that must be used by the GAN.
Parts of the code that raise this problem
Presenting the custom loss function
The custom loss function is implemented like that:
def build_vgg19_loss_network(ground_truth_image, predicted_image):
    loss_model = Vgg19Loss.define_loss_model(high_resolution_shape)
    return mean(square(loss_model(ground_truth_image) - loss_model(predicted_image)))
Compiling the generator network with my custom loss function
generator_model.compile(optimizer=the_optimizer, loss=build_vgg19_loss_network)
What I've tried to do in order to solve the problem
As I read on StackOverflow (see the link at the beginning of this question), TF 2 was supposed to be enough to output a Keras model that would be correctly processed by my tflite_convert CLI. But obviously it isn't.
As I read on GitHub, I tried to manually register my custom loss function among Keras' loss functions by adding these lines:
import tensorflow.keras.losses
tensorflow.keras.losses.build_vgg19_loss_network = build_vgg19_loss_network
It didn't work.
I also read on GitHub that I could pass custom objects to Keras' load_model function, but I only want to use the compile function, not load_model.
My final question
I want to make only minor changes to my code, since it currently works fine. So I don't want, for example, to replace compile with load_model. Given this constraint, could you please help me make the tflite_convert CLI work with my custom loss function?
Since you are saying that the TFLite conversion fails because of a custom loss function, you can save the model file without keeping the optimizer details. To do that, set the include_optimizer parameter to False as shown below:
model.save('model.h5', include_optimizer=False)
Now, if all the layers inside your model are convertible, they should get converted into TFLite file.
Edit:
You can then convert the h5 file like this:
import tensorflow as tf
model = tf.keras.models.load_model('model.h5') # srgan.h5 for you
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
The usual practice for overcoming unsupported operators in TFLite conversion is documented here.
I had the same error. I recommend changing the loss to "mse" since you already have a well-trained model and you don't need to train with the .tflite file.
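For example, a sketch of that workaround, reusing generator_model and the_optimizer from the question and a built-in loss in place of the custom one:
import tensorflow as tf

# Recompile the already-trained generator with a built-in loss, so neither saving
# nor TFLite conversion has to resolve build_vgg19_loss_network.
generator_model.compile(optimizer=the_optimizer, loss='mse')
generator_model.save('srgan.h5', include_optimizer=False)

converter = tf.lite.TFLiteConverter.from_keras_model(generator_model)
tflite_model = converter.convert()
open('srgan.tflite', 'wb').write(tflite_model)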