AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention' - tensorflow

I am trying to run my code, but it throws the following error:
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention'
I updated my TensorFlow version to 1.0.0, but the upgrade did not solve my problem. I also searched Google for this error, but I did not find a working solution.
Here is the relevant part of the code; please have a look.
# Getting the training and test predictions
training_predictions, test_predictions = seq2seq_model(tf.reverse(inputs, [-1]),
                                                       targets,
                                                       keep_prob,
                                                       batch_size,
                                                       sequence_length,
                                                       len(answerswords2int),
                                                       len(questionswords2int),
                                                       encoding_embedding_size,
                                                       decoding_embedding_size,
                                                       rnn_size,
                                                       num_layers,
                                                       questionswords2int)
C:\Users\Maniech\Anaconda3\lib\site-packages\tensorflow_core\python\client\session.py:1750: UserWarning: An interactive session is already active. This can cause out-of-memory errors in some cases. You must explicitly call `InteractiveSession.close()` to release resources held by the other session(s).
warnings.warn('An interactive session is already active. This can '
Traceback (most recent call last):
File "<ipython-input-8-aecd893a8ef5>", line 37, in <module>
questionswords2int)
File "C:/Users/Maniech/Desktop/Deep NLP AZ/chatbot.py", line 292, in seq2seq_model
batch_size)
File "C:/Users/Maniech/Desktop/Deep NLP AZ/chatbot.py", line 258, in decoder_rnn
batch_size)
File "C:/Users/Maniech/Desktop/Deep NLP AZ/chatbot.py", line 201, in decode_training_set
attention_keys, attention_values, attention_score_function, attention_construct_function = tf.contrib.seq2seq.prepare_attention(attention_states, attention_option = "bahdanau", num_units = decoder_cell.output_size)
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'prepare_attention'
Any help is appreciated.

Related

keras.models.load_model() gives ValueError

I have saved the trained model and the weights as below.
model, history, score = fit_model(model, train_batches, val_batches, callbacks=[callback])
model.save('./model')
model.save_weights('./weights')
Then I tried to load the saved model in the following way:
if __name__ == '__main__':
    model = keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
    test_batches, nb_samples = test_gen(dataset_test_path, 32, img_width, img_height)
    predict, loss, acc = predict_model(model, test_batches, nb_samples)
    print(predict)
    print(acc)
    print(loss)
But it gives me an error. What should I do to overcome this?
Traceback (most recent call last):
File "test_pro.py", line 34, in <module>
model = keras.models.load_model('./model',compile= False,custom_objects={"F1Score": tfa.metrics.F1Score})
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/save.py", line 212, in load_model
return saved_model_load.load(filepath, compile, options)
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 138, in load
keras_loader.load_layers()
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 379, in load_layers
self.loaded_nodes[node_metadata.node_id] = self._load_layer(
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 407, in _load_layer
obj, setter = revive_custom_object(identifier, metadata)
File "/home/dcs2016csc007/.local/lib/python3.8/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 921, in revive_custom_object
raise ValueError('Unable to restore custom object of type {} currently. '
ValueError: Unable to restore custom object of type _tf_keras_metric currently. Please make sure that the layer implements `get_config`and `from_config` when saving. In addition, please use the `custom_objects` arg when calling `load_model()`.
Looking at the source code for Keras, the error is raised when trying to load a model with a custom object:
def revive_custom_object(identifier, metadata):
  """Revives object from SavedModel."""
  if ops.executing_eagerly_outside_functions():
    model_class = training_lib.Model
  else:
    model_class = training_lib_v1.Model

  revived_classes = {
      constants.INPUT_LAYER_IDENTIFIER: (
          RevivedInputLayer, input_layer.InputLayer),
      constants.LAYER_IDENTIFIER: (RevivedLayer, base_layer.Layer),
      constants.MODEL_IDENTIFIER: (RevivedNetwork, model_class),
      constants.NETWORK_IDENTIFIER: (RevivedNetwork, functional_lib.Functional),
      constants.SEQUENTIAL_IDENTIFIER: (RevivedNetwork, models_lib.Sequential),
  }
  parent_classes = revived_classes.get(identifier, None)

  if parent_classes is not None:
    parent_classes = revived_classes[identifier]
    revived_cls = type(
        compat.as_str(metadata['class_name']), parent_classes, {})
    return revived_cls._init_from_metadata(metadata)  # pylint: disable=protected-access
  else:
    raise ValueError('Unable to restore custom object of type {} currently. '
                     'Please make sure that the layer implements `get_config`'
                     'and `from_config` when saving. In addition, please use '
                     'the `custom_objects` arg when calling `load_model()`.'
                     .format(identifier))
The method only works with custom objects of the types defined in revived_classes. As you can see, it currently supports only input layer, layer, model, network, and sequential custom objects.
In your code, you pass a tfa.metrics.F1Score class in the custom_objects argument, which is of type METRIC_IDENTIFIER and therefore not supported (probably because it doesn't implement the get_config and from_config functions, as the error output says):
keras.models.load_model('./model', compile=False, custom_objects={"F1Score": tfa.metrics.F1Score})
It's been a while since I last worked with Keras, but maybe you can try to follow what was proposed in this other related answer and wrap the call to tfa.metrics.F1Score in a function. Something like this (adjust it to your needs):
def f1(y_true, y_pred):
    # Wrap the stateful metric in a plain function so Keras can restore it by name.
    metric = tfa.metrics.F1Score(num_classes=3, threshold=0.5)
    metric.update_state(y_true, y_pred)
    return metric.result()

keras.models.load_model('./model', compile=False, custom_objects={'f1': f1})
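For completeness, here is how the wrapper would be wired in when compiling and saving the model in the first place, so that the saved metric matches the 'f1' key passed to custom_objects later (a hypothetical sketch; the optimizer and loss are assumptions, not from the question):
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=[f1])  # saved under the function name 'f1'
model.save('./model')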

TPUEstimator error -- AttributeError: module 'tensorflow.contrib.tpu.python.ops.tpu_ops' has no attribute 'cross_replica_sum'

I have written a TensorFlow program using the TPUEstimator, but I am having problems running it in use_tpu=False mode. I would like to run it on my local computer to make sure that all the operations are TPU-compatible. The code works fine with the normal Estimator. Here is my master code:
import logging
from tensorflow.contrib.tpu.python.tpu import tpu_config, tpu_estimator, tpu_optimizer
from tensorflow.contrib.cluster_resolver import TPUClusterResolver
from capser_7_model_fn import *
from capser_7_input_fn import *
import subprocess
from absl import flags

flags.DEFINE_bool(
    'use_tpu', False,
    'Use TPUs rather than plain CPUs')
tf.flags.DEFINE_string(
    "tpu", default='$TPU_NAME',
    help="The Cloud TPU to use for training. This should be either the name "
         "used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 "
         "url.")
tf.flags.DEFINE_string("model_dir", LOGDIR, "Estimator model_dir")
flags.DEFINE_integer(
    'save_checkpoints_secs', 1000,
    'Interval (in seconds) at which the model data '
    'should be checkpointed. Set to 0 to disable.')
flags.DEFINE_integer(
    'save_summary_steps', 100,
    'Number of steps which must have run before showing summaries.')
tf.flags.DEFINE_integer("iterations", 1000,
                        "Number of iterations per TPU training loop.")
tf.flags.DEFINE_integer("num_shards", 8, "Number of shards (TPU chips).")
tf.flags.DEFINE_integer("batch_size", 1024,
                        "Mini-batch size for the training. Note that this "
                        "is the global batch size and not the per-shard batch.")
FLAGS = tf.flags.FLAGS

if FLAGS.use_tpu:
    my_project_name = subprocess.check_output(['gcloud', 'config', 'get-value', 'project'])
    my_zone = subprocess.check_output(['gcloud', 'config', 'get-value', 'compute/zone'])
    cluster_resolver = TPUClusterResolver(
        tpu=[FLAGS.tpu],
        zone=my_zone,
        project=my_project_name)
    master = TPUClusterResolver(tpu=[os.environ['TPU_NAME']]).get_master()
else:
    master = ''

my_tpu_run_config = tpu_config.RunConfig(
    master=master,
    model_dir=FLAGS.model_dir,
    save_checkpoints_secs=FLAGS.save_checkpoints_secs,
    save_summary_steps=FLAGS.save_summary_steps,
    session_config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True),
    tpu_config=tpu_config.TPUConfig(iterations_per_loop=FLAGS.iterations, num_shards=FLAGS.num_shards),
)

# create estimator for model (the model is described in capser_7_model_fn)
capser = tpu_estimator.TPUEstimator(model_fn=model_fn_tpu,
                                    config=my_tpu_run_config,
                                    use_tpu=FLAGS.use_tpu,
                                    train_batch_size=batch_size,
                                    params={'model_batch_size': batch_size_per_shard})

# train model
logging.getLogger().setLevel(logging.INFO)  # to show info about training progress
capser.train(input_fn=train_input_fn_tpu, steps=n_steps)
I have a capsule network defined in model_fn_tpu, which returns the TPUEstimator spec. The optimizer is a standard AdamOptimizer. I have made all the changes explained at https://www.tensorflow.org/guide/using_tpu#optimizer to make my code compatible with TPUEstimator. (For reference, that optimizer change wraps the optimizer in a CrossShardOptimizer; the sketch below is a minimal reconstruction with an assumed learning rate, not the exact code.)
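# Optimizer change from the TPU guide: wrap the base optimizer so that
# gradients are summed across TPU shards. This wrapper is what ends up
# calling tpu_ops.cross_replica_sum in the traceback below.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)  # assumed learning rate
optimizer = tpu_optimizer.CrossShardOptimizer(optimizer)
loss_training_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
I get the following error: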
Traceback (most recent call last):
File "C:/Users/doerig/PycharmProjects/capser/TPU_playground.py", line 85, in <module>
capser.train(input_fn=train_input_fn_tpu, steps=n_steps)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 856, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 831, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\contrib\tpu\python\tpu\tpu_estimator.py", line 2016, in _model_fn
features, labels, is_export_mode=is_export_mode)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\contrib\tpu\python\tpu\tpu_estimator.py", line 1121, in call_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\contrib\tpu\python\tpu\tpu_estimator.py", line 1317, in _call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "C:\Users\doerig\PycharmProjects\capser\capser_7_model_fn.py", line 101, in model_fn_tpu
**output_decoder_deconv_params)
File "C:\Users\doerig\PycharmProjects\capser\capser_model.py", line 341, in capser_model
loss_training_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step(), name="training_op")
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\python\training\optimizer.py", line 424, in minimize
name=name)
File "C:\Users\doerig\AppData\Local\Continuum\Anaconda2\envs\tensorflow\lib\site-packages\tensorflow\contrib\tpu\python\tpu\tpu_optimizer.py", line 113, in apply_gradients
summed_grads_and_vars.append((tpu_ops.cross_replica_sum(grad), var))
AttributeError: module 'tensorflow.contrib.tpu.python.ops.tpu_ops' has no attribute 'cross_replica_sum'
Any ideas to solve this problem? Thank you in advance!
I suspect this is either a bug in the version of TensorFlow you are using + Windows, or else an issue with your build of TensorFlow.
For example, when I chase down the file tensorflow\contrib\tpu\python\tpu\tpu_optimizer.py in the TF 1.4 branch, I see that tpu_ops is imported as:
from tensorflow.contrib.tpu.python.ops import tpu_ops
and if you chase that to the relevant file, you see:
if platform.system() != "Windows":
  # pylint: disable=wildcard-import,unused-import,g-import-not-at-top
  from tensorflow.contrib.tpu.ops.gen_tpu_ops import *
  from tensorflow.contrib.util import loader
  from tensorflow.python.platform import resource_loader
  # pylint: enable=wildcard-import,unused-import,g-import-not-at-top

  _tpu_ops = loader.load_op_library(
      resource_loader.get_path_to_datafile("_tpu_ops.so"))
else:
  # We have already built the appropriate libraries into the binary via CMake
  # if we have built contrib, so we don't need this
  pass
Following up with the other TF branches that existed at the time of this posting, we see similar comments in 1.5, in 1.6, in 1.7, in 1.8, and in 1.9.
I strongly suspect this would not occur under Linux, but I might test this later and edit this answer.

tensorflow - ValueError: The shape for decoder/while/Merge_12:0 is not an invariant for the loop

I use tf.contrib.seq2seq.dynamic_decode for decoder training:
prediction, final_decoder_state, _ = dynamic_decode(
    custom_decoder
)
with a custom decoder:
custom_decoder = CustomDecoder(decoder_cell, helper, decoder_init_state)
and a helper:
helper = CustomTrainingHelper(batch_size, targets, stop_targets,
                              num_outs, outputs_per_step, 1.0, False)
And dynamic_decode raises the following error:
Traceback (most recent call last):
File "E:/tasks/text_to_speech/tts/tf_seq2seq.py", line 95, in <module>
custom_decoder
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 304, in dynamic_decode
swap_memory=swap_memory)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 3224, in while_loop
result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2956, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2930, in _BuildLoop
next_vars.append(_AddNextAndBackEdge(m, v))
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 688, in _AddNextAndBackEdge
_EnforceShapeInvariant(m, v)
File "C:\Users\User\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 632, in _EnforceShapeInvariant
(merge_var.name, m_shape, n_shape))
ValueError: The shape for decoder/while/Merge_12:0 is not an invariant for the loop. It enters the loop with shape (10, 1), but has shape (?, 1) after one iteration. Provide shape invariants using either the `shape_invariants` argument of tf.while_loop or set_shape() on the loop variables.
batch_size is equal to 10. As I understand it, the issue is in tf.while_loop and batch_size. How is it possible to fix this error? Thanks in advance.
You provided too little information to say anything specific. Please follow https://stackoverflow.com/help/mcve in the future.
In general, this error is telling you the following. By default, TensorFlow checks that the variables passed from one iteration of the while loop to the next don't change shape. In your case, the decoder/while/Merge_12:0 tensor originally had a shape of (10, 1), but after one iteration it became (?, 1), meaning that TensorFlow can no longer infer the size of the first dimension.
If you know that the first dimension is really 10, you can use Tensor.set_shape to tell this to TensorFlow.
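A minimal sketch of that fix (the names here are hypothetical; the real loop lives inside dynamic_decode, so in practice the set_shape call would go into the custom decoder's step function that produces the offending tensor):
import tensorflow as tf

batch_size = 10

def cond(i, state):
    return i < 5

def body(i, state):
    # A reshape through a dynamic tf.shape() loses the static batch dimension,
    # reproducing the (10, 1) -> (?, 1) invariant error.
    new_state = tf.reshape(state + 1.0, [tf.shape(state)[0], 1])
    # Re-assert the shape we know to be true, so the invariant check passes.
    new_state.set_shape([batch_size, 1])
    return i + 1, new_state

i0 = tf.constant(0)
state0 = tf.zeros([batch_size, 1])
_, final_state = tf.while_loop(cond, body, [i0, state0])

# Alternatively, declare the first dimension as genuinely dynamic:
# _, final_state = tf.while_loop(
#     cond, body, [i0, state0],
#     shape_invariants=[i0.get_shape(), tf.TensorShape([None, 1])])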

Supervisor does not initialize parameters successfully

An exception is thrown when I try to initialize parameters with tf.train.Supervisor, where the net is constructed with TensorLayer. The code is like this:
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sv = tf.train.Supervisor(logdir=FLAGS.summary_dir, save_summaries_secs=0, saver=None)
with sv.managed_session(config=config) as sess:
    init_ = sess.run(init_op)
    net.print_params()
    net.print_layers()
    tl.layers.print_all_variables()
and the exception is:
Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`
Traceback (most recent call last):
File "/home/recsys/anaconda3/envs/tf1.2/lib/python3.6/site-packages/tensorlayer/layers.py", line 309, in print_params
val = p.eval()
File "/home/recsys/anaconda3/envs/tf1.2/lib/python3.6/site-packages/tensorflow/python/ops/variables.py", line 463, in eval
return self._variable.eval(session=session)
File "/home/recsys/anaconda3/envs/tf1.2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 606, in eval
return _eval_using_default_session(self, feed_dict, self.graph, session)
File "/home/recsys/anaconda3/envs/tf1.2/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3914, in _eval_using_default_session
raise ValueError("Cannot evaluate tensor using `eval()`: No default "
ValueError: Cannot evaluate tensor using `eval()`: No default session is registered. Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "MainEntryPtb.py", line 65, in <module>
train_rnn(FLAGS)
File "/home/recsys/wangjian/learntf/TFTemplate/examples/ptb.py", line 122, in train_rnn
net.print_params()
File "/home/recsys/anaconda3/envs/tf1.2/lib/python3.6/site-packages/tensorlayer/layers.py", line 313, in print_params
raise Exception("Hint: print params details after tl.layers.initialize_global_variables(sess) or use network.print_params(False).")
Exception: Hint: print params details after tl.layers.initialize_global_variables(sess) or use network.print_params(False).
It seems that when I run the initialize op with init_ = sess.run(init_op), it does not really work: if the parameters had been initialized successfully, net.print_params() would not throw an exception. I also tried sv = tf.train.Supervisor(logdir=FLAGS.summary_dir, save_summaries_secs=0, saver=None, init_op=tf.global_variables_initializer()) to initialize the parameters, but it also failed.
I use TensorFlow v1.2.1 and Python 3.6.
This GitHub issue explains the problem clearly.
This is not a bug, nor a case of the Supervisor failing to initialize parameters.
You should just change your session into an InteractiveSession, or use with sess.as_default(); then this problem will be solved.
If you want to know why this problem occurs, refer to the TensorLayer source code: the definition of print_params contains
val = p.eval(session=session)
and eval() without an explicit session only works with an InteractiveSession or inside a with sess.as_default(): block.
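For example, a minimal sketch of the sess.as_default() fix applied to the code from the question (managed_session already runs variable initialization, so the explicit init_op run is not needed here):
sv = tf.train.Supervisor(logdir=FLAGS.summary_dir, save_summaries_secs=0, saver=None)
with sv.managed_session(config=config) as sess:
    with sess.as_default():  # registers sess as the default session for eval()
        net.print_params()
        net.print_layers()
        tl.layers.print_all_variables()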

tf.contrib.slim.get_variables_to_restore() returns an empty list

Running the code below, tf.contrib.slim.get_variables_to_restore() returns an empty list [] for all_vars, which then causes a failure when calling tf.train.Saver. The detailed error message is shown below.
Am I missing anything?
>>> import tensorflow as tf
>>> inception_exclude_scopes = ['InceptionV3/AuxLogits', 'InceptionV3/Logits', 'global_step', 'final_ops']
>>> inception_checkpoint_file = '/Users/morgan.du/git/machine-learning/projects/capstone/yelp/model/inception_v3_2016_08_28.ckpt'
>>> with tf.Session(graph=tf.Graph()) as sess:
...     init_op = tf.global_variables_initializer()
...     sess.run(init_op)
...     reader = tf.train.NewCheckpointReader(inception_checkpoint_file)
...     var_to_shape_map = reader.get_variable_to_shape_map()
...     all_vars = tf.contrib.slim.get_variables_to_restore(exclude=inception_exclude_scopes)
...     inception_saver = tf.train.Saver(all_vars)
...     inception_saver.restore(sess, inception_checkpoint_file)
...
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "/Users/morgan.du/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1051, in __init__
self.build()
File "/Users/morgan.du/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1072, in build
raise ValueError("No variables to save")
ValueError: No variables to save
The problem here seems to be that your graph is empty, i.e. it does not contain any variables. You create a new graph on the line with tf.Session(graph=tf.Graph()):, and none of the following lines creates a tf.Variable object.
To restore a pre-trained TensorFlow model, you need to do one of three things (a sketch of option 2 follows this list):
1. Rebuild the model graph, by executing the same Python graph building code that was used to train the model in the first place.
2. Load a "MetaGraph" that contains information about how to reconstruct the graph structure and model variables. See this tutorial for more details on how to create and use a MetaGraph. MetaGraphs are often created alongside checkpoint files, and typically have the extension .meta.
3. Load a "SavedModel", which contains a "MetaGraph". See the documentation here for more details.