Using hub.text_embedding_column with tf.contrib.estimator.RNNClassifier - tensorflow

I'm trying to use a module off Tensorflow Hub (a word embedding module) with tf.contrib.estimator.RNNClassifier.
My desired model:
embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Running that returns the following error:
ValueError: All feature_columns must be of type _SequenceDenseColumn.
You can wrap a sequence_categorical_column with an embedding_column or indicator_column.
Given (type <class 'tensorflow_hub.feature_column._TextEmbeddingColumn'>):
_TextEmbeddingColumn(key='title_description', module_spec=<tensorflow_hub.native_module._ModuleSpec object at 0x7fb0102a5a90>, trainable=False)
A working model
Using the TF Hub module works fine with:
estimator = tf.estimator.DNNClassifier(
    hidden_units=[32, 16],
    feature_columns=[embedded_text_feature_column])
Is it possible to use the nnlm module with RNNClassifier?

The code for your desired model runs without error in Google Colab with TensorFlow 1.15.
Please find the working code below:
!pip install tensorflow==1.15

import tensorflow as tf
import tensorflow_hub as hub

embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Here is the link to the GitHub gist of the Colab.
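If construction still fails on your TensorFlow version, note that the error message itself suggests the supported route: wrap a sequence categorical column in an embedding column. A sketch of that alternative (it forgoes the pretrained nnlm vectors, assumes the "description" feature is fed as an already-tokenized sequence, and uses an illustrative bucket size and embedding dimension):

import tensorflow as tf

# Hash each token of the tokenized "description" sequence into buckets,
# then learn an embedding per bucket instead of using the pretrained module.
seq_column = tf.feature_column.sequence_categorical_column_with_hash_bucket(
    "description", hash_bucket_size=10000)
embedded_seq = tf.feature_column.embedding_column(seq_column, dimension=128)

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_seq],
    num_units=[32, 16])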

Related

How to print the layers of the tensorflow 2 saved_model

I am using TensorFlow 2.6.2 and I downloaded the model from the TensorFlow 2 Model Zoo.
I am able to load the model using this:
import tensorflow as tf

if __name__ == "__main__":
    try:
        model = tf.saved_model.load(
            "/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
    except Exception as e:
        print(e)
But unfortunately I am not able to see all the layers of the model using the code below:
for v in model.trainable_variables:
    print(v.name)
which should ideally print all the layers in the network, but I am getting the following error:
print(model.trainable_variables)
AttributeError: '_UserObject' object has no attribute 'trainable_variables'
Can someone please tell me what I am doing wrong here?
I was able to print the variables using this:
loaded = tf.saved_model.load("/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
infer = loaded.signatures["serving_default"]
for v in infer.trainable_variables:
    print(v.name)
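For reference, the top-level object returned by tf.saved_model.load is a plain _UserObject when the SavedModel was not exported from a Keras model, so it has no trainable_variables attribute; the variables are instead captured by the signature's concrete function. A small extension of the snippet above that also prints shapes (illustrative):

import tensorflow as tf

loaded = tf.saved_model.load(
    "/home/user/git/models_zoo/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model/")
infer = loaded.signatures["serving_default"]

# Variables captured by the serving signature's concrete function.
print(len(infer.trainable_variables), "trainable variables")
for v in infer.trainable_variables:
    print(v.name, v.shape)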

"Bayesian Methods for Hackers" jupyter notebook not working

I am reading the online TensorFlow Probability (TFP) version of "Bayesian Methods for Hackers".
But when I execute the first cell of Ch2_MorePyMC_TFP.ipynb,
the following error occurs:
AttributeError: module 'tensorflow' has no attribute 'contrib'
I suppose this version of the "Bayesian Methods for Hackers" jupyter notebook was written for TF1.
Do you have an easy fix or an updated version of this jupyter notebook working with TF2?
Some of the contrib functions were removed and some were merged into TensorFlow core, so you need to find their current equivalents.
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
print(tf.__version__) # 2.5.0
print(tfp.__version__) # 0.12.1
For example, the first cell's contrib functions are available in TensorFlow Probability and can be rewritten as:
parameter = tfd.Exponential(rate=1., name="poisson_param").sample()
rv_data_generator = tfd.Poisson(parameter, name="data_generator")
data_generator = rv_data_generator.sample()

data_generator_ = tf.nest.pack_sequence_as(
    data_generator,
    [t.numpy() if tf.is_tensor(t) else t
     for t in tf.nest.flatten(data_generator)])
print("Value of sample from data generator random variable:", data_generator_)
Other TF1 operations can be replaced like this:
with tf.compat.v1.variable_scope(tf.compat.v1.get_variable_scope(), reuse=tf.compat.v1.AUTO_REUSE):
    step_size = tf.compat.v1.get_variable(
        name='step_size',
        initializer=tf.constant(0.5, dtype=tf.float32),
        trainable=False,
        use_resource=True
    )
More info can be found in the documentation.
Frightera, I have problems getting rid of the following error:
module 'tensorflow' has no attribute 'variable_scope'
at this cell:
# Initialize the step_size. (It will be automatically adapted.)
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
    step_size = tf.get_variable(
        name='step_size',
        initializer=tf.constant(0.5, dtype=tf.float),
        trainable=False,
        use_resource=True
    )
Do you have any clue how to replace this one?
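Note that the tf.compat.v1 snippet earlier in this answer is exactly that replacement; also, tf.float is not a valid dtype, so use tf.float32 as in that snippet.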

How to use ELMO Embeddings as the First Embedding Layer in tf 2.0 Keras using tf-hub?

I am trying to build an NER model in Keras using ELMo embeddings, so I stumbled across this tutorial and started implementing it. I got lots of errors; some of them are as follows:
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K

sess = tf.Session()
K.set_session(sess)

elmo_model = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)
sess.run(tf.global_variables_initializer())
sess.run(tf.tables_initializer())

def ElmoEmbedding(x):
    return elmo_model(inputs={"tokens": tf.squeeze(tf.cast(x, tf.string)),
                              "sequence_len": tf.constant(batch_size * [max_len])},
                      signature="tokens", as_dict=True)["elmo"]

input_text = Input(shape=(max_len,), dtype=tf.string)
embedding = Lambda(ElmoEmbedding, output_shape=(None, 1024))(input_text)
It gives me AttributeError: module 'tensorflow' has no attribute 'Session'. If I comment out the sess = ... lines and run it, it gives me AttributeError: module 'keras.backend' has no attribute 'set_session'.
Then again, the ELMo line gives me RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled.
I have the following configurations:
tf.__version__
'2.3.1'
keras.__version__
'2.4.3'
import sys
sys.version
'3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)]'
How can I use ELMO Embeddings in Keras Model?
You are using the old TensorFlow 1.x syntax, but you have TensorFlow 2 installed.
This is the new way to do ELMo in TF2:
Extracting ELMo features using tensorflow and convert them to numpy
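For reference, a minimal sketch of that TF2-style usage (the elmo/3 handle, the "default" signature, and the "elmo" output key below follow the linked answer; treat them as assumptions to verify):

import tensorflow as tf
import tensorflow_hub as hub

# Load the TF1-format ELMo module through the TF2 loading API.
elmo = hub.load("https://tfhub.dev/google/elmo/3")

# The "default" signature takes a batch of untokenized sentences.
embeddings = elmo.signatures["default"](tf.constant(["the cat sat on the mat"]))["elmo"]
print(embeddings.numpy().shape)  # (batch_size, max_tokens, 1024)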

Convert Tensor to numpy array in TF 2.x

I am trying to load Universal Sentence Encoder and this is my code snippet:
import tensorflow as tf
import tensorflow_hub as hub
import os, requests, tarfile

# Replace `USE_folder` with any directory on your machine where you want USE to be downloaded
USE_folder = "USE"

def extractUSEEmbeddings(words):
    # Extracts USE embeddings, downloading the module on first use
    try:
        embed = hub.KerasLayer(USE_folder)
    except Exception:
        print("Downloading USE embeddings...")
        r = requests.get("https://tfhub.dev/google/universal-sentence-encoder-large/5?tf-hub-format=compressed")
        open("USE.tar.gz", "wb").write(r.content)
        tar = tarfile.open("USE.tar.gz", "r:gz")
        tar.extractall(path=USE_folder)
        tar.close()
        os.remove("USE.tar.gz")
        embed = hub.KerasLayer(USE_folder)
    word_embeddings = embed(words)
    return word_embeddings.numpy()
I get the error 'Tensor' object has no attribute 'numpy'. When I run the same code in a Jupyter notebook, with the same versions of tensorflow (2.2.0) and tensorflow-hub (0.9.0), I do not get any error and it works perfectly fine.
I printed the type of the tensor in both cases, and realized that this is because I get an eager tensor (tensorflow.python.framework.ops.EagerTensor) in Jupyter, which has a numpy method, whereas in my script the tensor is of type tensorflow.python.framework.ops.Tensor. However, I am now unable to figure out how to switch on eager execution in my script, since in TF 2.x it is supposed to be enabled by default.
I have tried all the solutions given in this thread, but none of them work for me.
Why am I not getting an Eager Tensor when run through the terminal, but get it through Jupyter? Does my problem have anything to do with the fact that I am using tensorflow-hub here, and is that why none of the solutions are working for me? Most importantly, how do I convert Tensor in tf 2.x to a numpy array?
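For reference, one way to narrow this down is to check eager mode at the call site and, if some import disabled it, re-enable it at the very top of the script before any graph work (a diagnostic sketch, not a confirmed fix):

import tensorflow as tf

print(tf.executing_eagerly())  # False here would explain the missing .numpy()

# If another module disabled eager execution, re-enabling it first thing
# (before any other TensorFlow calls) is the usual escape hatch:
# tf.compat.v1.enable_eager_execution()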

Table not initialized issue using @tf.function while loading TF Hub model

I am trying to load the TF Hub model and predict the output using the @tf.function decorator. It throws the following error: tensorflow.python.framework.errors_impl.FailedPreconditionError: Table not initialized.
TF version - 2.1.0
TF Hub version - 0.8.0
Note: It works without the @tf.function decorator.
import tensorflow as tf
import tensorflow_hub as hub

image_tensor = tf.constant(2.0, shape=[1, 298, 298, 3])

@tf.function
def run_function(method, args):
    return method(args)

detector = hub.KerasLayer("https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1",
                          signature_outputs_as_dict=True)
detector_output = run_function(detector, image_tensor)
class_names = detector_output["detection_class_entities"]
print(class_names)
Does anyone know the reason why it is not working with @tf.function?
You are using a TensorFlow 1 Hub model in hub.KerasLayer, which is meant for TF 2.0 models.
On TensorFlow Hub, you can find a toggle button to view TF Hub models for specific TensorFlow versions.
To make it work using hub.KerasLayer, change the URL to either of the following TF 2.0 MobileNet versions:
https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4
https://tfhub.dev/google/imagenet/mobilenet_v2_050_96/classification/4
Or, if you have to use the exact URL as in your example, use hub.Module instead of hub.KerasLayer.
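For reference, a minimal sketch using one of the TF2 handles above (note these are image classifiers, so the output is a logits tensor rather than the detector's output dict; the 224x224 input size and [0, 1] value range are assumptions based on the module family):

import tensorflow as tf
import tensorflow_hub as hub

image_tensor = tf.constant(0.5, shape=[1, 224, 224, 3])  # pixel values in [0, 1]

@tf.function
def run_function(method, args):
    return method(args)

classifier = hub.KerasLayer(
    "https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4")
logits = run_function(classifier, image_tensor)
print(logits.shape)  # (1, 1001) ImageNet logits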