I've built the following TensorArray:
ta = tf.TensorArray(
    dtype=tf.float32,
    size=0,
    dynamic_size=True,
    element_shape=tf.TensorShape([None, None])
)
and called ta = ta.write(idx, my_tensor) inside a while_loop.
When evaluating the output = ta.stack() tensor in a session, I receive this error message:
ValueError: Cannot use '.../TensorArrayWrite/TensorArrayWriteV3' as
input to '.../TensorArrayStack_1/TensorArraySizeV3' because
'.../TensorArrayWrite/TensorArrayWriteV3' is in a while loop. See info
log for more details.
I don't understand this error message. Could you please help me?
Update: A minimal example might be difficult to come up with, but here is what I am doing: I use a reference to the ta TensorArray inside the cell_input_fn of AttentionWrapper. This callback is used in AttentionWrapper's call method, where another TensorArray named alignment_history is written. The while_loop code is therefore not written by me; it is part of the TF dynamic RNN computation, tf.nn.dynamic_rnn.
Not sure if this is what's biting you, but you have to make sure your while_loop body takes the TensorArray as input and emits an updated one as output, and you have to use the final version of the TensorArray after the while_loop:
def fn(ta_old):
    return ta_old.write(...)

ta_final = tf.while_loop(..., body=fn, loop_vars=[tf.TensorArray(...)])
values = ta_final.stack()
Specifically, you should never access ta_old outside of fn().
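To make the pattern concrete, here is a minimal, self-contained sketch (TF1.x graph mode, with made-up element shapes and loop bounds) of writing to a TensorArray inside tf.while_loop and stacking only the final array:

import tensorflow as tf

def body(i, ta):
    # Write a dummy (2, 3) element at position i and return the
    # updated TensorArray handle as a loop variable.
    value = tf.fill([2, 3], tf.cast(i, tf.float32))
    return i + 1, ta.write(i, value)

i0 = tf.constant(0)
ta0 = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
_, ta_final = tf.while_loop(lambda i, ta: i < 5, body, [i0, ta0])
values = ta_final.stack()  # stack only the array returned by while_loop

with tf.Session() as sess:
    print(sess.run(values))  # shape (5, 2, 3)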
I am calling a model in a function detect:
def detect(img):
    detector_output = detector(tf.reshape(img, (1, img.shape[0], img.shape[1], img.shape[2])))
    classes = detector_output['detection_classes'][0].numpy()
    most_likely = tf.convert_to_tensor(classes[0])
    box = detector_output['detection_boxes'][0][0]
    box = tf.math.multiply(box, [img.shape[0], img.shape[1], img.shape[0], img.shape[1]])
    box = tf.cast(box, tf.int16)
    return (box, most_likely)
This is called in another function, reads, via the tf.data.Dataset map API:
dataset = dataset.map(reads, num_parallel_calls = AUTO).batch(32)
I think the issue is that this TensorFlow Hub model (or all object detection models I could find) does not support batching.
Calling the function via reads by itself works fine,
except that if I use the tf.function decorator, then, weirdly, even detect(img) by itself throws the same error.
I tried several models from here with the same result.
detector needs the shape with the 1 dimension up front.
I know there should be some reverse flatten() or squeeze(), but I couldn't find it; apologies for the bad style!
The issue is also likely in the reshaping.
The full error:
TypeError: Failed to convert object of type <class 'tuple'> to Tensor. Contents: (1, None, None, 3). Consider casting elements to a supported type.
Edit: I fixed the error by using tf.expand_dims instead of the reshape above.
I'd still be glad for a good explanation to better understand what went wrong.
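The likely explanation: inside tf.function (and dataset.map traces its function the same way), img.shape is the static shape, whose height and width entries are None, so the Python tuple (1, None, None, 3) handed to tf.reshape cannot be converted to a shape tensor. Here is a minimal sketch of the two usual workarounds, assuming an image with unknown static height and width:

import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([None, None, 3], tf.float32)])
def add_batch_dim(img):
    # At trace time img.shape is (None, None, 3), so building a shape
    # tuple from it fails: None cannot become a tensor element.
    batched = tf.expand_dims(img, axis=0)  # adds the leading 1 symbolically
    # Equivalent alternative using the runtime shape tensor:
    # batched = tf.reshape(img, tf.concat([[1], tf.shape(img)], axis=0))
    return batched

img = tf.random.uniform([480, 640, 3])
print(add_batch_dim(img).shape)  # (1, 480, 640, 3)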
Thank you for your help!
I am interested in a feature or hacky solution that allows every TensorFlow (specifically TF1.x) op name to include the file name and line number where the op is defined, in an automated fashion across the entire code base. This would greatly facilitate tracking down the place where an op raises an error, such as in the situation below:
File "tensorflow/contrib/distribute/python/mirrored_strategy.py", line 633, in _update
assert isinstance(var, values.DistributedVariable), var
AssertionError: Tensor("floordiv_2:0", shape=(), dtype=int64, device=/job:chief/replica:0/task:0/device:GPU:0)
Right now the best I can do is to take a wild guess where a floordiv might occur, but honestly I have no clue at the moment.
The easiest way might be to show the graph in TensorBoard and then search for the failing op by name. From the names of parent scopes or preceding operations you would probably be able to tell which layer is failing.
If not that, wrap your layer calls and model construction in name scopes. Hopefully, if it is your tensor failing and not part of the optimizer or something else, you will see where to look.
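For instance, a minimal sketch of wrapping construction code in name scopes (the layer names and shapes here are made up), so that op names in TensorBoard reveal where each op comes from:

import tensorflow as tf

inputs = tf.placeholder(tf.float32, [None, 64])

with tf.name_scope('encoder'):
    with tf.name_scope('layer_1'):
        h1 = tf.layers.dense(inputs, 128, activation=tf.nn.relu)
    with tf.name_scope('layer_2'):
        logits = tf.layers.dense(h1, 10)

# Ops now appear as encoder/layer_1/... and encoder/layer_2/... in the graph.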
If you are dedicated to tagging every op with a file name and line number, you can try to monkey-patch the tf.Operation constructor with a scope. It should be something along these lines:
from inspect import getframeinfo, stack

import tensorflow as tf

def scopify_ops(func):
    def wrapper(*args, **kwargs):
        caller = getframeinfo(stack()[1][0])
        # name_scope names may only contain [A-Za-z0-9_.\-/],
        # so avoid characters like ':' in the scope string
        path = "%s-%d" % (caller.filename.replace('/', '.'), caller.lineno)
        print("Caller info:", path)
        with tf.name_scope(path):
            return func(*args, **kwargs)
    return wrapper

tf.Operation.__init__ = scopify_ops(tf.Operation.__init__)
I have a tf.contrib.lookup.HashTable declared inside a TensorFlow Estimator model_fn. As the session is not directly available to us in Estimators, I am stuck with not being able to initialize the table. I am aware that, when not used with Estimators, the table can be initialized with table.init.run() using the session.
I tried to initialize the table using a SessionRunHook which I was already using for another purpose. I pass the table init op as an argument to session.run in the before_run function, but the table is still not initialized. I also tried passing tf.tables_initializer() instead, but that did not work either. Another option I tried without success is the tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, ...) call.
# SessionRunHook code below
class SaveToCSVHook(tf.train.SessionRunHook):
    def begin(self):
        graph = tf.get_default_graph()
        samples_weights_table = graph.get_tensor_by_name('samples_weights_table:0')
        self.samples_weights_table_init_op = samples_weights_table.init
        self.table_init_op = tf.tables_initializer()  # also tried passing this to self.args instead - same result though
        tf.add_to_collection(tf.GraphKeys.TABLE_INITIALIZERS, samples_weights_table.init)

    def after_create_session(self, session, coord):
        self.args = {'table_init_op': self.samples_weights_table_init_op}

    def before_run(self, run_context):
        return tf.train.SessionRunArgs(self.args)

    def after_run(self, run_context, run_values):
        print(f"Got Values: {run_values.results}")
# Estimator model_fn code below
def model_fn(...):
    samples_weights_table = tf.contrib.lookup.HashTable(
        tf.contrib.lookup.KeyValueTensorInitializer(
            keys, values,
            key_dtype=tf.string,
            value_dtype=tf.float32,
            name='samples_weights_table_init_op'),
        -1.0,
        name='samples_weights_table')
I get the error:
FailedPreconditionError (see above for traceback): Table not initialized
which obviously means the table is not getting initialized.
If anyone is interested in the answer: the hash table need not be explicitly initialized when used with Estimators. Tables are initialized by default by high-level APIs like Estimators. The error goes away when the initializer code is removed, and the table works as expected.
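A minimal sketch of what that looks like (the keys, values, and feature name here are hypothetical; the point is that there is no explicit initialization anywhere):

import tensorflow as tf

def model_fn(features, labels, mode):
    # Estimators collect the table initializer via
    # GraphKeys.TABLE_INITIALIZERS and run it automatically when the
    # session is created, so no manual table.init.run() is needed.
    samples_weights_table = tf.contrib.lookup.HashTable(
        tf.contrib.lookup.KeyValueTensorInitializer(
            tf.constant(['pos', 'neg']),
            tf.constant([2.0, 1.0]),
            key_dtype=tf.string,
            value_dtype=tf.float32),
        default_value=-1.0,
        name='samples_weights_table')

    # 'class_name' is a hypothetical string feature.
    weights = samples_weights_table.lookup(features['class_name'])

    loss = tf.losses.mean_squared_error(labels, weights)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)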
Suppose I have a training input pipeline which finally generates train_x and train_y using tf.train.shuffle_batch. I export the meta graph and re-import it in another code file. Now I want to detach the input pipeline, i.e., train_x and train_y, and connect new test_x and test_y tensors. How can I accomplish this using tf.contrib.graph_editor?
EDIT: As suggested by @iga, I remap my input using input_map:
filenames = tf.train.match_filenames_once(FLAGS.data_dir + '*', name='matching_filenames')
if FLAGS.ckpt != '':
    latest = FLAGS.log_dir + FLAGS.ckpt
else:
    latest = tf.train.latest_checkpoint(FLAGS.log_dir)
if not latest or not os.path.exists(latest + '.meta'):
    print("checkpoint " + latest + " does not exist")
    sys.exit(1)
saver = tf.train.import_meta_graph(latest + '.meta',
                                   input_map={'matching_filenames:0': filenames},
                                   import_scope='import')
g = tf.get_default_graph()
but I get the following error:
ValueError: graph_def is invalid at node u'matching_filenames/Assign':
Input tensor 'matching_filenames:0' Cannot convert a tensor of type
string to an input of type string_ref.
Is there any elegant way to resolve this?
For this task, you should be able to just use the input_map argument to tf.import_graph_def (https://www.tensorflow.org/api_docs/python/tf/import_graph_def). If you are using import_meta_graph, you can pass input_map in its kwargs and it will get passed down to import_graph_def.
RESPONSE TO EDIT: I am assuming that your original graph (the one you are deserializing) had the same matching_filenames variable. Quite confusingly, the tensor name "matching_filenames:0" actually refers to the tensor going from the VariableV2 op to the Assign op. The type of this edge is string_ref and you don't really want to break that edge.
The output from a variable typically goes through an identity op called matching_filenames/read. This is what you want to use as the key in your input_map. For the value, you want the corresponding read tensor of your new filenames variable. So your call should probably look like:
tf.train.import_meta_graph(latest + '.meta',
                           input_map={'matching_filenames/read': filenames.read_value()},
                           import_scope='import')
In general, variables are fairly complicated. If this does not work, you can use some placeholder op and feed the names into it manually.
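A sketch of that fallback (the placeholder name and the fetched tensor are hypothetical; latest is the checkpoint path from the snippet above):

import tensorflow as tf

# Remap the variable's read tensor onto a placeholder and feed the
# file names in manually at run time.
filenames_ph = tf.placeholder(tf.string, shape=[None], name='filenames_ph')
saver = tf.train.import_meta_graph(latest + '.meta',
                                   input_map={'matching_filenames/read': filenames_ph},
                                   import_scope='import')

with tf.Session() as sess:
    saver.restore(sess, latest)
    sess.run('import/some_op:0',  # hypothetical tensor to evaluate
             feed_dict={filenames_ph: ['/data/test-00000', '/data/test-00001']})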
I am developing an RNN and am using TensorFlow 1.1. I got the following error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: The node 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Variable/Assign' has inputs from different frames. The input 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Identity_3' is in frame 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/'. The input 'model/att_seq2seq/encode/pocmru_rnn_encoder/rnn/while/Variable' is in frame ''.
The error is caused by the lambda function in the dynamic RNN method together with a piece of code in my RNN: TensorFlow's rnn.py (dynamic_rnn / _dynamic_rnn_loop / _time_step) uses a lambda function to call the RNN's call method while looping over all inputs.
My code:
if type(myObject) != tf.Variable:
    tp = tf.Variable(myObject, validate_shape=False)
else:
    tp = myObject
Logically, I repeatedly use tf.scatter_nd_update to update myObject. The pseudocode is like myObject = scatter_nd_update(myObject, indices, updates). Since tf.scatter_nd_update requires a Variable as its argument and returns a tensor, I need to wrap the tensor back into a Variable; hence the code above (test for a Variable, then wrap). How should I modify my code to make it work? Thanks!
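For context, the frame error arises because a tf.Variable (and its Assign initializer) is created inside the while_loop body, so its inputs span two frames. A minimal sketch of the usual workaround, with made-up shapes, is to create the variable once outside the loop and only update it inside:

import tensorflow as tf

# Created once, OUTSIDE the loop: the init/Assign ops live in the outer frame.
state = tf.Variable(tf.zeros([10, 5]))

def body(i):
    # Update the pre-existing variable; creating a tf.Variable here would
    # place its Assign op in the loop frame and reproduce the
    # "inputs from different frames" error.
    update = tf.scatter_nd_update(state, tf.reshape(i, [1, 1]), tf.ones([1, 5]))
    with tf.control_dependencies([update]):
        return i + 1

loop = tf.while_loop(lambda i: i < 10, body, [tf.constant(0)])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(loop)
    print(sess.run(state))  # every row updated to ones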