Table not initialized issue using @tf.function while loading TF Hub model - tensorflow

I am trying to load a TF Hub model and predict the output using the @tf.function decorator, but it throws tensorflow.python.framework.errors_impl.FailedPreconditionError: Table not initialized.
TF version - 2.1.0
TF Hub version - 0.8.0
Note: it works without the @tf.function decorator.
import tensorflow as tf
import tensorflow_hub as hub

image_tensor = tf.constant(2.0, shape=[1, 298, 298, 3])

@tf.function
def run_function(method, args):
    return method(args)

detector = hub.KerasLayer("https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1",
                          signature_outputs_as_dict=True)
detector_output = run_function(detector, image_tensor)
class_names = detector_output["detection_class_entities"]
print(class_names)
Does anyone know why it does not work with @tf.function?

You are loading a TensorFlow 1 Hub model with hub.KerasLayer, which is meant for TF2 models.
On TensorFlow Hub, you can find a toggle button to filter models by TensorFlow version.
To make it work with hub.KerasLayer, change the URL to either of the following TF2 MobileNet versions:
https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/4
https://tfhub.dev/google/imagenet/mobilenet_v2_050_96/classification/4
Or, if you have to use the exact URL from your example, use hub.Module instead of hub.KerasLayer.
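For completeness, here is a minimal sketch of the hub.Module route (hedged: hub.Module only runs in TF1 graph mode, so under TF 2.x you have to disable eager execution and run a Session with explicit variable and table initializers, which is also what makes the "Table not initialized" error go away):

import tensorflow.compat.v1 as tf
import tensorflow_hub as hub

tf.disable_eager_execution()  # hub.Module needs TF1 graph mode

# Build the graph: the TF1 Hub module plus a dummy input image.
detector = hub.Module("https://tfhub.dev/google/openimages_v4/ssd/mobilenet_v2/1")
image_tensor = tf.constant(2.0, shape=[1, 298, 298, 3])
detector_output = detector(image_tensor, as_dict=True)

with tf.Session() as sess:
    # Initializing the tables is the step the @tf.function version was missing.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(detector_output["detection_class_entities"]))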

Related

"Bayesian Methods for Hackers" jupyter notebook not working

I am reading the online TensorFlow Probability (TFP) version of "Bayesian Methods for Hackers".
But when I execute the first cell of Ch2_MorePyMC_TFP.ipynb,
the following error occurs:
AttributeError: module 'tensorflow' has no attribute 'contrib'
I suppose this version of the "Bayesian Methods for Hackers" notebook was written for TF1.
Do you have an easy fix, or an updated version of this notebook that works with TF2?
Some of the contrib functions were removed and some were merged into TensorFlow core, so you need to find the equivalent version of each.
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
print(tf.__version__) # 2.5.0
print(tfp.__version__) # 0.12.1
For example, the first contrib functions are available through TensorFlow Probability and can be rewritten as:
parameter = tfd.Exponential(rate=1., name="poisson_param").sample()
rv_data_generator = tfd.Poisson(parameter, name="data_generator")
data_generator = rv_data_generator.sample()

data_generator_ = tf.nest.pack_sequence_as(
    data_generator,
    [t.numpy() if tf.is_tensor(t) else t
     for t in tf.nest.flatten(data_generator)])
print("Value of sample from data generator random variable:", data_generator_)
For other TF operations, you can replace them with their tf.compat.v1 equivalents like this:
with tf.compat.v1.variable_scope(tf.compat.v1.get_variable_scope(), reuse=tf.compat.v1.AUTO_REUSE):
    step_size = tf.compat.v1.get_variable(
        name='step_size',
        initializer=tf.constant(0.5, dtype=tf.float32),
        trainable=False,
        use_resource=True
    )
More info can be found in the documentation.
Frightera, I have problems getting rid of the following error:
module 'tensorflow' has no attribute 'variable_scope'
at this cell:
# Initialize the step_size. (It will be automatically adapted.)
with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
    step_size = tf.get_variable(
        name='step_size',
        initializer=tf.constant(0.5, dtype=tf.float32),
        trainable=False,
        use_resource=True
    )
Do you have any clue how to replace this one?
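As a side note, the tf.compat.v1 snippet earlier in this answer is the direct replacement for that cell. In fully eager TF2 code, a simpler sketch (my suggestion, not from the original thread) is to drop variable scopes entirely:

import tensorflow as tf

# In TF2, variable scopes and get_variable are no longer needed;
# a plain tf.Variable plays the same role as the old resource variable.
step_size = tf.Variable(0.5, dtype=tf.float32, trainable=False, name='step_size')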

How to use ELMO Embeddings as the First Embedding Layer in tf 2.0 Keras using tf-hub?

I am trying to build an NER model in Keras using ELMo embeddings. I stumbled across this tutorial and started implementing it, but got lots of errors. Here is my code:
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K

sess = tf.Session()
K.set_session(sess)

elmo_model = hub.Module("https://tfhub.dev/google/elmo/2", trainable=True)
sess.run(tf.global_variables_initializer())
sess.run(tf.tables_initializer())

def ElmoEmbedding(x):
    return elmo_model(inputs={"tokens": tf.squeeze(tf.cast(x, tf.string)),
                              "sequence_len": tf.constant(batch_size * [max_len])},
                      signature="tokens", as_dict=True)["elmo"]

input_text = Input(shape=(max_len,), dtype=tf.string)
embedding = Lambda(ElmoEmbedding, output_shape=(None, 1024))(input_text)
It gives me AttributeError: module 'tensorflow' has no attribute 'Session'. If I comment out the sess = ... lines and run again, it gives AttributeError: module 'keras.backend' has no attribute 'set_session'.
Then the hub.Module line gives RuntimeError: Exporting/importing meta graphs is not supported when eager execution is enabled. No graph exists when eager execution is enabled.
I have the following configurations:
tf.__version__
'2.3.1'
keras.__version__
'2.4.3'
import sys
sys.version
'3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)]'
How can I use ELMo embeddings in a Keras model?
You are using the old TensorFlow 1.x syntax, but you have TensorFlow 2 installed.
This is the new way to do ELMo in TF2:
Extracting ELMo features using tensorflow and convert them to numpy
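In outline, that TF2 approach looks roughly like this (a sketch, assuming tensorflow_hub >= 0.7 so that TF1 Hub modules can be loaded as SavedModels):

import tensorflow as tf
import tensorflow_hub as hub

# Load the TF1 Hub module as a TF2 SavedModel; no Session or initializers needed.
elmo = hub.load("https://tfhub.dev/google/elmo/3")

# The "default" signature takes a batch of raw sentence strings.
sentences = tf.constant(["the cat is on the mat", "dogs are in the fog"])
outputs = elmo.signatures["default"](sentences)
print(outputs["elmo"].shape)  # per-token embeddings: (batch, max_tokens, 1024)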

How to retrain the efficientnet-lite model provided by TF Hub on TF 2.1

I am using tensorflow-gpu 2.1.0, installed from pip.
import tensorflow as tf
import tensorflow_hub as hub

tf.keras.backend.set_learning_phase(True)
module_url = "https://tfhub.dev/tensorflow/efficientnet/lite0/classification/2"
module2 = tf.keras.Sequential([
    hub.KerasLayer(module_url, trainable=False, input_shape=(224, 224, 3))])
output1 = module2(tf.ones(shape=(1, 224, 224, 3)))
print(module2.summary())
When I set trainable=True, the call raises an error.
So, is it impossible to retrain it on TF 2.1?
The EfficientNet-Lite models on TFHub are based on TensorFlow 1 and are thus subject to many restrictions under TF2, including the fine-tuning limitation you've discovered. The EfficientNet models were updated to TF2, but we're still waiting for their Lite counterparts.
https://www.tensorflow.org/hub/model_compatibility
https://github.com/tensorflow/hub/issues/751
UPDATE: Beginning October 5, 2021, the EfficientNet-Lite models on TFHub are available for TensorFlow 2.
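With the TF2 release, fine-tuning should follow the usual Keras pattern, roughly as sketched below (hedged: check the model page on tfhub.dev for the current TF2 URL and version; the Dense head and class count here are placeholders):

import tensorflow as tf
import tensorflow_hub as hub

# A sketch assuming a TF2-compatible EfficientNet-Lite SavedModel.
module_url = "https://tfhub.dev/tensorflow/efficientnet/lite0/classification/2"
model = tf.keras.Sequential([
    hub.KerasLayer(module_url, trainable=True, input_shape=(224, 224, 3)),
    tf.keras.layers.Dense(5, activation="softmax"),  # hypothetical 5-class head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()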

FailedPreconditionError: Error while reading resource variable module/bilm/CNN_proj/W_proj from Container: localhost

I am trying to use pre-trained ELMo embeddings in a Jupyter notebook with Python 3.7.
Tensorflow version - 1.14.0
This is my code:
def ElmoEmbeddingLayer(x):
    print(x.shape)
    module = hub.Module("https://tfhub.dev/google/elmo/3", trainable=False)
    embeddings = module(tf.squeeze(tf.cast(x, tf.string)), signature="default", as_dict=True)["elmo"]
    return embeddings

elmo_dim = 1024
elmo_input = Input(shape=(None,), dtype=tf.string)
elmo_embedding = Lambda(ElmoEmbeddingLayer, output_shape=(None, elmo_dim))(elmo_input)
x = Dense(1)(elmo_embedding)
x = Activation('relu')(x)
model = Model(inputs=[elmo_input], outputs=x)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))
However, I'm getting the following runtime error:
FailedPreconditionError: Error while reading resource variable module/bilm/CNN_proj/W_proj from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/module/bilm/CNN_proj/W_proj/N10tensorflow3VarE does not exist.
[[{{node lambda/module_apply_default/bilm/MatMul_9/ReadVariableOp}}]]
To use model pieces from TF Hub in building a Keras model, use the hub.KerasLayer class. It implements Keras's way of collecting variables for initialization. With tensorflow_hub 0.7.0 (and preferably tensorflow 1.15), you can also use it for older TF Hub modules (like the https://tfhub.dev/google/elmo/3 in your example), subject to some caveats, see tensorflow.org/hub/migration_tf2
For context: the older hub.Module class is for building models in the classic TF1 way (like tf.layers). It implements the old-style collection of variables for initialization via the GLOBAL_VARIABLES collection of the tf.Graph; those are what go uninitialized in your case. (You could try to initialize them manually in the Session returned by tf.compat.v1.keras.backend.get_session(), but that gets weird.)
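For illustration, a minimal sketch of the hub.KerasLayer route (hedged: assumes tensorflow_hub >= 0.7; the signature/output_key arguments select the per-token "elmo" output, and the layer takes one whole sentence string per example rather than pre-tokenized input):

import tensorflow as tf
import tensorflow_hub as hub

# A sketch, not the exact code from this thread: hub.KerasLayer lets Keras
# collect and run the module's variable and table initializers.
elmo_layer = hub.KerasLayer("https://tfhub.dev/google/elmo/3",
                            trainable=False,
                            signature="default",
                            output_key="elmo")  # per-token embeddings

elmo_input = tf.keras.Input(shape=(), dtype=tf.string)  # one sentence per example
elmo_embedding = elmo_layer(elmo_input)                 # (batch, max_tokens, 1024)
x = tf.keras.layers.Dense(1, activation="relu")(elmo_embedding)
model = tf.keras.Model(inputs=elmo_input, outputs=x)
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])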

Using hub.text_embedding_column with tf.contrib.estimator.RNNClassifier

I'm trying to use a module from TensorFlow Hub (a word embedding module) with tf.contrib.estimator.RNNClassifier.
My desired model
embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Running that returns the following error:
ValueError: All feature_columns must be of type _SequenceDenseColumn.
You can wrap a sequence_categorical_column with an embedding_column or indicator_column.
Given (type <class 'tensorflow_hub.feature_column._TextEmbeddingColumn'>): _TextEmbeddingColumn(key='title_description', module_spec=<tensorflow_hub.native_module._ModuleSpec object at 0x7fb0102a5a90>, trainable=False)
A working model
Using the TF Hub module works fine with:
estimator = tf.estimator.DNNClassifier(
    hidden_units=[32, 16],
    feature_columns=[embedded_text_feature_column])
Is it possible to use the nnlm module with RNNClassifier?
The code corresponding to your desired model seems to work without error in Google Colab with TensorFlow version 1.15.
Please find the working code below:
!pip install tensorflow==1.15

import tensorflow as tf
import tensorflow_hub as hub

embedded_text_feature_column = hub.text_embedding_column(
    key="description",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

estimator = tf.contrib.estimator.RNNClassifier(
    sequence_feature_columns=[embedded_text_feature_column],
    num_units=[32, 16])
Here is the link to the GitHub gist of the working Colab.