In TensorFlow, what is the difference between a tensor that has a type ending in _ref and a tensor that does not?

The docs say:
In addition, variants of these types with the _ref suffix are defined
for reference-typed tensors.
What exactly does this mean? What are reference-typed tensors and how do they differ from standard ones?

A reference-typed tensor is mutable. The most common way to create a reference-typed tensor is to define a tf.Variable: defining a tf.Variable whose initial value has dtype tf.float32 will create a reference-typed tensor with dtype tf.float32_ref. You can mutate a reference-typed tensor by passing it as the first argument to tf.assign().
(Note that reference-typed tensors are something of an implementation detail in the present version of TensorFlow. We'd encourage you to use higher-level wrappers like tf.Variable, which may migrate to alternative representations for mutable state in the future.)
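A minimal sketch of the distinction, using the TF 1.x API surface via tf.compat.v1 (in a genuine TF 1.x install, plain tf.Variable behaves the same); `use_resource=False` forces the classic reference variable:

```python
import tensorflow.compat.v1 as tf  # TF 1.x API surface
tf.disable_eager_execution()

# A classic (non-resource) variable is backed by a reference-typed tensor.
v = tf.Variable([1.0, 2.0], use_resource=False)
t = tf.constant([1.0, 2.0])  # an ordinary, immutable tensor

# v.dtype is float32_ref; t.dtype is plain float32.
assign_op = tf.assign(v, t * 2.0)  # legal: the first argument is reference-typed
```

Passing `t` as the first argument of tf.assign() would fail, because an ordinary tensor has no reference type to mutate.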

Related

Tflite: Resize output tensor based on input tensor contents

I am writing a custom op that outputs a tensor whose shape depends on the values of the input tensor. The problem is that we don't have access to the tensor values in the Prepare method. We can get the tensor shapes but the values are not available. How do I implement this?
On a related note, how do I support outputting a tensor with partially specified shape? The tensor would need to be allocated during the eval function, but I don't see an API to allocate tensors at run time.

Can I make a custom tf.estimator without using feature columns?

i.e., can I directly feed tensors with numerical data into the model_fn of a custom estimator?
Creating a numeric column and doing a mapping from keys to values seems like overkill to me, since I only work with image data.
Yes. The features argument to your model_fn can simply be a tensor, or a dict mapping strings to tensors in the case of multiple inputs. This also means that your input_fn can simply return such objects.
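A hedged sketch of what that looks like (TF 1.x Estimator API via tf.compat.v1; the tiny network and the shapes are made up for illustration). `features` arrives as a raw image tensor, with no feature columns anywhere:

```python
import tensorflow.compat.v1 as tf

def input_fn():
    # Return a plain tensor (or a dict of tensors) -- no feature columns.
    images = tf.random.uniform([8, 28, 28, 1])
    labels = tf.zeros([8], dtype=tf.int32)
    return images, labels

def model_fn(features, labels, mode):
    # `features` is the raw image tensor produced by input_fn.
    flat = tf.layers.flatten(features)
    logits = tf.layers.dense(flat, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
        loss, global_step=tf.train.get_or_create_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Build the graph once to show the pieces fit together.
with tf.Graph().as_default():
    images, labels = input_fn()
    spec = model_fn(images, labels, tf.estimator.ModeKeys.TRAIN)
```

In practice you would pass model_fn to tf.estimator.Estimator(model_fn=model_fn) and call its train() method with input_fn.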

Tensorflow: difference get_tensor_by_name vs get_operation_by_name?

The answer here says that one returns an operation while the other returns a tensor. That is pretty obvious from the name and from the documentation. However, suppose I do the following:
logits = tf.add(tf.matmul(inputs, weights), biases, name='logits')
I am following the pattern described in TensorFlow Mechanics 101. Should I restore it as an operation or as a tensor? I am afraid that if I restore it as a tensor I will only get the last computed values for the logits; nonetheless, the post here seems to suggest that there is no difference, or that I should just use get_tensor_by_name. The idea is to compute the logits for a new set of inputs and then make predictions accordingly.
Short answer: you can use both get_operation_by_name() and get_tensor_by_name(). Long answer:
tf.Operation
When you call
op = graph.get_operation_by_name('logits')
... it returns an instance of type tf.Operation, a node in the computational graph that performs some computation on its inputs and produces one or more outputs. In this case, it's an Add op.
One can always evaluate an op in a session, and if the op needs some placeholder values to be fed in, the engine will force you to provide them. Some ops, e.g. reading a variable, don't have any dependencies and can be executed without placeholders.
In your case, (I assume) logits are computed from the input placeholder x, so logits has no value without a particular x.
tf.Tensor
On the other hand, calling
tensor = graph.get_tensor_by_name('logits:0')
... returns an object tensor, which has the type tf.Tensor:
Represents one of the outputs of an Operation.
A Tensor is a symbolic handle to one of the outputs of an Operation.
It does not hold the values of that operation's output, but instead
provides a means of computing those values in a TensorFlow tf.Session.
So, in other words, tensor evaluation is the same as operation execution, and all the restrictions described above apply as well.
Why is a Tensor useful? A Tensor can be passed as an input to another Operation, thus forming the graph. But in your case, you can assume that both entities refer to the same node.
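A small sketch (TF 1.x API via tf.compat.v1; the tensor and placeholder names here are assumptions for illustration) showing that the two lookups land on the same node:

```python
import tensorflow.compat.v1 as tf

graph = tf.Graph()
with graph.as_default():
    inputs = tf.placeholder(tf.float32, [None, 3], name='inputs')
    weights = tf.ones([3, 2])
    biases = tf.zeros([2])
    logits = tf.add(tf.matmul(inputs, weights), biases, name='logits')

op = graph.get_operation_by_name('logits')     # tf.Operation (the Add node)
tensor = graph.get_tensor_by_name('logits:0')  # tf.Tensor, output 0 of that node

# Either way, evaluating requires feeding the placeholder.
with tf.Session(graph=graph) as sess:
    result = sess.run(tensor, feed_dict={inputs: [[1.0, 2.0, 3.0]]})
```

Here `tensor.op is op`, which is exactly the sense in which the two lookups are interchangeable for restoring the logits.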

Shape assertions and declarations in TensorFlow

I use tf.strided_slice to get one value out of a 1-D tensor. Unfortunately, the inferred shape is ?. How can I assert/declare that it has shape [1]?
P.S. I used reshape instead, but it might have performance implications in some cases.
Use x.set_shape() to provide additional information about the shape of this tensor that cannot be inferred from the graph alone.
You can get more information from the FAQ:
The tf.Tensor.set_shape method updates the static shape of a Tensor
object, and it is typically used to provide additional shape
information when this cannot be inferred directly. It does not change
the dynamic shape of the tensor.
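A brief sketch (TF 1.x API via tf.compat.v1) of set_shape() applied to exactly this strided-slice case:

```python
import tensorflow.compat.v1 as tf

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, shape=[None])  # 1-D tensor, unknown length
    y = tf.strided_slice(x, [1], [2])             # inferred static shape: (?,)
    shape_before = y.shape.as_list()

    y.set_shape([1])  # declare what we know; the dynamic shape is unchanged
    shape_after = y.shape.as_list()
```

Unlike tf.reshape, this adds no op to the graph, so there is no runtime cost; it only refines the static shape metadata.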

tensorflow how to average several IndexedSlicesValue?

I defined an RNN model in TensorFlow. One of the gradients returned by compute_gradients is of type IndexedSlices while the others are of type Tensor. After I call session.run(compute_gradients ...), the returned value corresponding to the IndexedSlices is an IndexedSlicesValue. I have two questions:
How can I average several IndexedSlicesValue values?
How can I serialize an IndexedSlicesValue and send it to another machine through a socket?
Thank you very much!
IndexedSlices is really an encoding of a sparse tensor, using a pair of dense tensors. It probably comes from the gradient of a tf.gather operation. There is some API documentation about IndexedSlices here that may help: https://www.tensorflow.org/api_docs/python/tf/IndexedSlices
I don't know of much code that works with IndexedSlices directly; typically they are an internal detail used as part of the gradient code. Depending on the data sizes, the easiest way to work with them might be to convert them into a dense Tensor and process/send that.
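For the averaging part, a hedged sketch in NumPy following that densify-first approach (the `param_size` argument, the first dimension of the variable the gradient belongs to, is an assumption of this sketch, as is using pickle for the socket transfer):

```python
import numpy as np

def densify(isv, param_size):
    # isv.indices is 1-D; isv.values has one row per index. Scatter-add the
    # rows so duplicate indices accumulate, as they do in an IndexedSlices.
    dense = np.zeros((param_size,) + isv.values.shape[1:], dtype=isv.values.dtype)
    np.add.at(dense, isv.indices, isv.values)
    return dense

def average_indexed_slices(slices_list, param_size):
    # Densify each IndexedSlicesValue, then take the elementwise mean.
    return np.mean([densify(s, param_size) for s in slices_list], axis=0)

# For the socket question: once dense, the result is an ordinary ndarray,
# so e.g. pickle.dumps(avg) or avg.tobytes() will serialize it.
```

If the embedding matrix is large and the slices are sparse, densifying may be wasteful; in that case you could instead concatenate the indices and values of all the IndexedSlicesValue objects and divide the values by the count, keeping the result sparse.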