Difference between Tensorflow Operation and Tensor? - tensorflow

I am confused about the difference between TensorFlow Operation and Tensor objects. More specifically, what is the relationship between them, and what is the design philosophy behind each?
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
w = tf.Variable(tf.random_uniform([2, 2]))
y = tf.matmul(x, w)
output = tf.nn.softmax(y, name="output")
output
<tf.Tensor 'output_7:0' shape=(2, 2) dtype=float32>
output2 = tf.get_default_graph().get_operation_by_name("output")
output2
<tf.Operation 'output' type=Softmax>
If I pass output2 to sess.run([output2]), I get None. Is there a way to convert output2 back to output?
I am a PyTorch user; what would the analogue of Operation and Tensor be in PyTorch?

I've not used PyTorch, but you can think of it like the method and the variable of a Layer class: the operation is the method, and the tensor is the variable that holds the data. So when you run sess.run([output2]), you are trying to fetch the method rather than the variable that holds its result.
To get the tensor from the name of the operation, you can use:
output2 = tf.get_default_graph().get_tensor_by_name("output:0")
The :0 is the output index: it refers to the first output tensor of the operation named output (ops with multiple outputs expose them as :1, :2, and so on). If you create several ops with the same requested name, TensorFlow renames the later ones output_1, output_2, etc., which is why the tensor above shows up as output_7:0.
Edit: another thing to note is that in TensorFlow sess.run([output]) fetches the value of output; it does not feed anything to the graph. Values are fed to the graph via a feed_dict (feed dictionary).
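For completeness, here is a minimal sketch (assuming the op really is named "output" in the default graph, as in output2 above) showing that the same tensor can also be recovered directly from the Operation via its outputs attribute:
import tensorflow as tf

op = tf.get_default_graph().get_operation_by_name("output")
# An Operation's `outputs` attribute lists the Tensor(s) it produces,
# so op.outputs[0] is the same object that get_tensor_by_name("output:0") returns.
out_tensor = op.outputs[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # w above is a tf.Variable
    print(sess.run(out_tensor))                    # returns values instead of None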

Related

How to perform mathematical operation on regression output layer

I have a simple regression neural network like this:
from scipy.spatial.transform import Rotation as R
import tensorflow as tf

def nn_model():
    inp = tf.keras.layers.Input(shape=[80, 80, 3])
    x = tf.keras.layers.Dense(64, activation='relu')(inp)
    x = tf.keras.layers.Dense(64, activation='relu')(x)
    out = tf.keras.layers.Dense(1, activation="linear")(x)
    ####### Perform math operation here
    r = R.from_euler('z', out.numpy(), degrees=True)
    rMat = r.as_matrix()
    #######
    return tf.keras.Model(inputs=inp, outputs=rMat)
I want to perform a mathematical operation on the output regression layer 'out' inside the network. Is it possible to access its value from inside the NN? Running the code above gives this error:
AttributeError: 'KerasTensor' object has no attribute 'numpy'
Keras layers do not output tensor values; they output a tensor specification (a KerasTensor) that carries the shape, dtype and other attributes of the previous layer.
So no, it's not possible to access the value of a layer, because it has no value.
What you can do instead is use a Lambda layer, which lets you apply arbitrary Python code to the "real" output of the layer.
r = tf.keras.layers.Lambda(lambda x: R.from_euler('z', x))(out)
Note that I'm not sure this will work, as the function inside the lambda should preferably use tensorflow operations, and should be differentiable.
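If the goal is specifically a rotation about the z-axis, one option that stays differentiable (my own sketch, untested against the original model; z_rotation_matrix is a hypothetical helper) is to build the matrix from TensorFlow ops instead of scipy:
import numpy as np
import tensorflow as tf

def z_rotation_matrix(angle_deg):
    # angle_deg: tensor of shape (batch, 1), rotation about the z-axis in degrees
    theta = angle_deg * (np.pi / 180.0)
    c, s = tf.cos(theta), tf.sin(theta)
    zero, one = tf.zeros_like(c), tf.ones_like(c)
    # stack the rows of the 3x3 rotation matrix -> shape (batch, 3, 3)
    return tf.stack([
        tf.concat([c, -s, zero], axis=-1),
        tf.concat([s, c, zero], axis=-1),
        tf.concat([zero, zero, one], axis=-1),
    ], axis=1)

rMat = tf.keras.layers.Lambda(z_rotation_matrix)(out)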

Make Keras model output a constant of certain shape

I want a Keras model which always outputs a constant value of a desired output shape.
def build_model(input_shape, output_shape):
    inp = tf.keras.layers.Input(shape=input_shape)
    x = tf.keras.backend.constant(1, shape=output_shape)
    output = tf.keras.layers.Lambda(lambda x: x)(x)
    model = tf.keras.Model(inputs=inp, outputs=output)
    return model
model = build_model((512,512,3), (512,512,32))
I get the following error:
Output tensors to a Model must be the output of a TensorFlow Layer (thus holding past layer metadata). Found: Tensor("Const_3:0", shape=(512, 512, 32), dtype=float32)
How can I fix it?
Update
Input and output are indeed not connected. I want to test the performance of my processing pipeline with the lowest possible GPU load. I figured that always outputting the same value without doing any computation would barely use the GPU, while still making sure that my data is loaded properly (input layer).
The issue was indeed that output and input need to be connected. I couldn't use an activation layer because the output has a different shape than the input. So I ended up concatenating the input 11 times and slicing it again, which gives an output of the correct shape with 0 trainable parameters.
The final model building function looks like this:
def build_model(input_shape=(512, 512, 3)):
    inp = tf.keras.layers.Input(shape=input_shape)
    lamb = tf.keras.layers.Lambda(
        lambda x: tf.slice(tf.concat([x] * 11, axis=3), begin=(0, 0, 0, 0), size=(-1, 512, 512, 32)))
    output = lamb(inp)
    model = tf.keras.Model(inputs=inp, outputs=output)
    return model
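A lighter alternative (my own sketch, not from the original thread) keeps the graph connected by reading only the batch size from the input and broadcasting a constant to the desired shape:
import tensorflow as tf

def build_constant_model(input_shape=(512, 512, 3), output_shape=(512, 512, 32)):
    inp = tf.keras.layers.Input(shape=input_shape)
    # The Lambda depends on the input only through its batch dimension,
    # so Keras still sees the output as connected to the input layer.
    out = tf.keras.layers.Lambda(
        lambda x: tf.ones(tf.stack([tf.shape(x)[0], *output_shape])))(inp)
    return tf.keras.Model(inputs=inp, outputs=out)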

Tensorflow: what exactly does tf.gradients() return

Quick question as I'm kind of confused here.
Let's say we have a simple graph:
a = tf.Variable(tf.truncated_normal(shape=[200, 1], mean=0., stddev=.5))
b = tf.Variable(tf.truncated_normal(shape=[200, 100], mean=0., stddev=.5))
add = a+b
add
<tf.Tensor 'add:0' shape=(200, 100) dtype=float32> #shape is because of broadcasting
So I've got a node that takes in 2 tensors, and produces 1 tensor as an output. Let's now run tf.gradients on it
tf.gradients(add, [a, b])
[<tf.Tensor 'gradients/add_grad/Reshape:0' shape=(200, 1) dtype=float32>,
<tf.Tensor 'gradients/add_grad/Reshape_1:0' shape=(200, 100) dtype=float32>]
So we get gradients exactly in the shape of the input tensors. But... why?
It's not as if there's a single scalar metric with respect to which we can take partial derivatives. Shouldn't the gradients map every single value of the input tensors to every single value of the output tensor, effectively giving a 200x1x200x100 gradient (a full Jacobian) for input a?
This is just a simple example where every element of the output tensor depends on only one value from tensor b and one row from tensor a. However, if we did something more complicated, like running a Gaussian blur on a tensor, then the gradients would surely have to be bigger than the input tensor.
What am I getting wrong here?
By default, tf.gradients(ys, xs) takes the gradient of the scalar you get by summing all elements of all tensors passed as ys. So each returned gradient is d(sum(ys))/dx, which always has the same shape as x.
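A quick way to see this, reusing the a/b example above (a TF1-style sketch): the result is identical to explicitly reducing the output to a scalar first, and the gradient values show how the broadcast fans out.
import tensorflow as tf

a = tf.Variable(tf.truncated_normal(shape=[200, 1], mean=0., stddev=.5))
b = tf.Variable(tf.truncated_normal(shape=[200, 100], mean=0., stddev=.5))
add = a + b

grads = tf.gradients(add, [a, b])                          # implicit sum over `add`
grads_explicit = tf.gradients(tf.reduce_sum(add), [a, b])  # same thing, spelled out

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ga, gb = sess.run(grads)
    # Each element of a is broadcast into 100 output elements, so d(sum)/da is
    # 100 everywhere; each element of b is used once, so d(sum)/db is 1 everywhere.
    print(ga[0, 0], gb[0, 0])   # 100.0 1.0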

Cannot interpret feed_dict key as Tensor

I have a list of placeholders called "enqueue_ops" and a list of methods called "feed_fns", each of which returns a feed_dict.
The queue runner of my graph is defined as:
queue_runner = feeding_queue_runner.FeedingQueueRunner(
    queue=queue, enqueue_ops=enqueue_ops,
    feed_fns=feed_fns)
However I got an error of
TypeError: Cannot interpret feed_dict key as Tensor: The name 'face_detection/x1' refers to an Operation, not a Tensor. Tensor names must be of the form "<op_name>:<output_index>".
But why is it looking at my feed_dict keys, when my feed_dict values are tensors that it doesn't seem to care about?
Thanks!!!
In TensorFlow, if you want to restore a graph and use it, you should give your variables, placeholders, operations, etc. unique names before saving the graph.
For an example see below.
W = tf.Variable(0.1, name='W')
X = tf.placeholder(tf.float32, (None, 2), name='X')
mult = tf.multiply(W,X,name='mult')
Then, once the graph is saved, you can restore and use it as follows. Remember to refer to your tensors by their names as strings, and when fetching or feeding the value of a tensor, append :0 to the name, since TensorFlow requires the "op_name:output_index" format.
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('your_model.meta')
    new_saver.restore(sess, tf.train.latest_checkpoint('./'))
    print(sess.run('mult:0', feed_dict={'X:0': [[1, 4], [2, 9]]}))
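Applied to the error above: the keys of the dict returned by each feed function must be Tensors (or "op_name:index" strings), not Operations. A hypothetical sketch, assuming 'face_detection/x1' names a placeholder in the default graph and batch_values holds your data:
x1 = tf.get_default_graph().get_tensor_by_name('face_detection/x1:0')   # note the :0

def feed_fn():
    # keys are Tensors (or "name:0" strings), not Operations
    return {x1: batch_values}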

tensorflow dynamically create placeholders

At each iteration I want to dynamically decide how many placeholders I need, and then feed data to them. Is that possible, and how? I tried to create the whole model (placeholders, loss, optimizer) inside the epoch loop, but that gave an uninitialized-variables error.
At present I have n=5 placeholders, each of shape=(1, k), in a list, and I feed data to them. But n needs to be defined dynamically while feeding data inside the epoch loop.
Maybe you misunderstood what a tensor is.
If you think of a tensor as a multi-dimensional list, you can see that having a dynamic number of placeholders, each with shape [1, k], makes no sense.
Instead, use a single tensor: define your input placeholder with shape [None, 1, k].
placeholder_ = tf.placeholder(tf.float32, [None, 1, k])
This statement defines a placeholder of type tf.float32 holding an undefined number of elements (the None part), each of shape [1, k].
At every iteration, you feed the placeholder with the right values, e.g. by running
result = sess.run(defined_op, feed_dict={
    placeholder_: numpy_ndarray_with_N_elements_with_shape_1_k
})
That way you don't define new nodes in the computational graph at every iteration (which simply doesn't work); you just feed the existing graph the desired values.
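A minimal end-to-end sketch of that pattern (my own example; defined_op is just a reduce_sum for illustration), feeding a different number of [1, k] slices on each iteration:
import numpy as np
import tensorflow as tf

k = 4
placeholder_ = tf.placeholder(tf.float32, [None, 1, k])
defined_op = tf.reduce_sum(placeholder_)

with tf.Session() as sess:
    for n in (5, 3, 8):   # the "number of placeholders" changes per iteration
        batch = np.random.rand(n, 1, k).astype(np.float32)
        print(sess.run(defined_op, feed_dict={placeholder_: batch}))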