Using gather on argmax is different from taking max - TensorFlow

I'm trying to train a double-DQN algorithm in TensorFlow and it doesn't work. To make sure everything is fine, I wanted to test something: that using tf.gather on the argmax is exactly the same as taking the max. Let's say I have a network called target_network.
First, let's take the max:
next_qvalues_target1 = target_network.get_symbolic_qvalues(next_obs_ph) #returns tensor of qvalues
next_state_values_target1 = tf.reduce_max(next_qvalues_target1, axis=1)
Next, let's try it a different way, using argmax and gather:
next_qvalues_target2 = target_network.get_symbolic_qvalues(next_obs_ph) #returns same tensor of qvalues
chosen_action = tf.argmax(next_qvalues_target2, axis=1)
next_state_values_target2 = tf.gather(next_qvalues_target2, chosen_action)
diff = tf.reduce_sum(next_state_values_target1) - tf.reduce_sum(next_state_values_target2)
next_state_values_target1 and next_state_values_target2 are supposed to be completely identical, so running the session should output diff = 0, but it does not.
What am I missing?
Thanks.

Found out what went wrong. chosen_action has shape (n,), so I thought that using gather with it on a tensor of shape (n, 4) would give a result of shape (n, 1). It turns out this isn't true, because gather takes complete rows. I needed to turn chosen_action into a tensor of shape (n, 2): instead of [action1, action2, action3, ...] it has to be [[0, action1], [1, action2], [2, action3], ...], and then use gather_nd instead of gather to pick out specific elements from next_qvalues_target2.
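Here is a minimal sketch of that fix; the placeholder and its shape (None, 4) are assumptions for illustration:
import tensorflow as tf
next_qvalues_target2 = tf.placeholder(tf.float32, shape=(None, 4)) #stand-in for the network output
chosen_action = tf.argmax(next_qvalues_target2, axis=1) #shape (n,), dtype int64
#build indices of shape (n, 2): [[0, action1], [1, action2], ...]
row_indices = tf.cast(tf.range(tf.shape(next_qvalues_target2)[0]), tf.int64)
indices = tf.stack([row_indices, chosen_action], axis=1)
next_state_values_target2 = tf.gather_nd(next_qvalues_target2, indices) #shape (n,)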


TypeError: 'TensorShape' object is not callable

I am new to TensorFlow programming. I was digging into some functions and got this error in the snippet:
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))
    print(sess_1.run(e.shape()))
Error found:
Traceback (most recent call last):
  File "C:/Users/Ashu/PycharmProjects/untitled/Bored.py", line 15, in <module>
    print(sess_1.run(e.shape()))
TypeError: 'TensorShape' object is not callable
I didn't find this covered here, so can anyone please clarify this silly doubt, as I am a new learner? Sorry for any typing mistakes!
I have one more doubt: when I simply use the eval() function it doesn't print anything in PyCharm; I had to use it along with the print() method. And when the print() method is used, it doesn't print the dtype of the tensor, it simply prints the tensor or Python object value. Why am I not getting output in a format like array([1., 1.], dtype=float32)? Is this the PyCharm way of printing tensors in the new version, or is it something I am doing wrong? So excited to know the reason behind this; please help, and pardon me if I am wrong anywhere.
One confusing aspect of TensorFlow for beginners is that there are two types of shape: the dynamic shape, given by tf.shape(x), and the static shape, given by x.shape (assuming x is a tensor). While they represent the same concept, they are used very differently.
The static shape is the shape of a tensor known at graph construction time. It's a data type in its own right, but it can be converted to a list using as_list().
x = tf.placeholder(tf.float32, shape=(None, 3, 4))
static_shape = x.shape
shape_list = x.shape.as_list()
print(shape_list) # [None, 3, 4]
y = tf.reduce_sum(x, axis=1)
print(y.shape.as_list()) # [None, 4]
During operations, TensorFlow tracks static shapes as best it can. In the above example, y's shape was calculated from the partially known shape of x. Note we haven't even created a session, but the static shape is still known.
Since the batch size is not known, you can't use the first entry of the static shape in calculations.
z = tf.reduce_sum(x) / tf.cast(x.shape.as_list()[0], tf.float32) # ERROR
(We could have divided by x.shape.as_list()[1], since that dimension is known statically, but that wouldn't demonstrate anything here.)
If we need to use a value which is not known statically - i.e. at graph construction time - we can use the dynamic shape of x. The dynamic shape is a tensor - like other tensors in tensorflow - which is evaluated using a session.
z = tf.reduce_sum(x) / tf.cast(tf.shape(x)[0], tf.float32) # all good!
You can't call as_list on the dynamic shape, nor can you inspect its values without going through a session evaluation.
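For example, a quick sketch of evaluating the dynamic shape of the placeholder above (the fed batch size of 2 is arbitrary):
import numpy as np
with tf.Session() as sess:
    print(sess.run(tf.shape(x), feed_dict={x: np.zeros((2, 3, 4))})) # [2 3 4]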
As stated in the documentation, you can only call a session's run method with tensors, operations, or lists of tensors/operations. Your last line of code calls e.shape() as if it were a function, but e.shape is a TensorShape object, not a callable, so Python raises the TypeError before sess_1.run even executes. To inspect the shape, use e.shape directly (no parentheses), or evaluate tf.shape(e) in the session.
When you call print with a tensor, the system prints the tensor's content. If you want to print the tensor's type, use code like print(type(tensor)).
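For reference, here is a corrected version of the snippet from the question (outputs shown in comments):
import tensorflow as tf
with tf.Session() as sess_1:
    c = tf.constant(5)
    d = tf.constant(6)
    e = c + d
    print(sess_1.run(e))            # 11
    print(e.shape)                  # () - the static shape of a scalar
    print(sess_1.run(tf.shape(e)))  # [] - the dynamic shape, itself a tensor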

Logical AND/OR in Keras Backend

Tensorflow has tf.logical_and() and tf.logical_or() for comparing two boolean tensors, i.e. tf.logical_and(x, y) == True if x == True and y == True (doc). I can't find anything like this in the Keras backend though. They have keras.backend.any() and .all(), but these are for aggregation within a tensor, not between tensors. I've been resorting to workarounds with nested K.switch() functions, but it is painfully inelegant.
Let x and y be boolean keras tensors of the same shape.
To take elementwise or, do the following:
keras.backend.any(keras.backend.stack([x, y], axis=0), axis=0)
To take elementwise and, do the following:
keras.backend.all(keras.backend.stack([x, y], axis=0), axis=0)
Here keras.backend.stack([x, y], axis=0) stacks x and y into a new tensor with an additional dimension at position 0. After that, keras.backend.any takes a logical or along the new dimension, and keras.backend.all takes the logical and.
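A quick sketch of this in action, with made-up boolean values:
from keras import backend as K
x = K.cast(K.constant([1, 0, 1, 0]), 'bool')
y = K.cast(K.constant([1, 1, 0, 0]), 'bool')
elementwise_or = K.any(K.stack([x, y], axis=0), axis=0)
elementwise_and = K.all(K.stack([x, y], axis=0), axis=0)
print(K.eval(elementwise_or))   # [ True  True  True False]
print(K.eval(elementwise_and))  # [ True False False False]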
My solution (perhaps not the best, because I haven't found others either) is:
A = K.cast(someBooleanTensor, K.floatx())
B = K.cast(anotherBooleanTensor, K.floatx())
A_and_B = A * B #this is also something I use a lot for gathering elements
A_or_B = 1 - ((1 - A) * (1 - B))
But thinking about it now... I never tested python operators... perhaps they work?
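For what it's worth, with the TensorFlow backend the Python arithmetic operators do work elementwise on Keras tensors, since they are backend tensors under the hood. A quick check of the casting trick above, with made-up values:
from keras import backend as K
A = K.cast(K.constant([1, 0, 1, 0]), K.floatx())
B = K.cast(K.constant([1, 1, 0, 0]), K.floatx())
print(K.eval(A * B))                   # [1. 0. 0. 0.] -> logical and
print(K.eval(1 - ((1 - A) * (1 - B)))) # [1. 1. 1. 0.] -> logical or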

What does tf.gather_nd intuitively do?

Can you intuitively explain or give more examples about tf.gather_nd for indexing and slicing into high-dimensional tensors in Tensorflow?
I read the API docs, but they are kept quite concise and I find it hard to follow the function's concept.
Ok, so think about it like this:
You are providing a list of index values, and each one indexes into the provided tensor to pull out a slice. The first dimension of the indices you provide enumerates the lookups you will perform. Let's pretend that the tensor is just a list of lists.
[[0]] means you want to get one specific slice (list) at index 0 in the provided tensor. Just like this:
[tensor[0]]
[[0], [1]] means you want to get two specific slices, at indices 0 and 1, like this:
[tensor[0], tensor[1]]
Now what if the tensor has more than one dimension? We do the same thing:
[[0, 0]] means you want to get one slice at index [0, 0], i.e. element 0 of the 0-th list. Like this:
[tensor[0][0]]
[[0, 1], [2, 3]] means you want to return two slices at the indices provided. Like this:
[tensor[0][1], tensor[2][3]]
I hope that makes sense. I used Python indexing to show what the equivalent would look like on a list of lists.
You provide a tensor and indices representing locations in that tensor. It returns the elements of the tensor corresponding to the indices you provide.
EDIT: An example
import tensorflow as tf
sess = tf.Session()
x = [[1,2,3],[4,5,6]]
y = tf.gather_nd(x, [[1,1],[1,2]])
print(sess.run(y))
[5 6]
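To stretch the intuition a little further, here is a made-up example on a 3-D tensor: when an index row is shorter than the tensor's rank, gather_nd returns a slice rather than a single element.
t = [[[1, 2], [3, 4]],
     [[5, 6], [7, 8]]] # shape (2, 2, 2)
print(sess.run(tf.gather_nd(t, [[0, 1]])))    # [[3 4]] - a row slice
print(sess.run(tf.gather_nd(t, [[1, 0, 1]]))) # [6] - a single element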

Creating new vector in tensorflow from argmax performed on another tensor

I have a tensor of shape (?, 3) that looks like [x, y, z], and I need to create a function that takes its argmax, creates a new vector, and assigns values with respect to the dimension and the argmax.
Example:
f(y):
    v = tf.variable(tf.zeros(y.get_shape()))
    index = tf.argmax(y)
    v[index] = 1.0
    return v
Unfortunately this doesn't work and I can't figure out how one can do it.
Are you sure that you want to create and assign to a tf.Variable here? It would probably be simpler to use the tf.one_hot() op (available from version 0.8 onwards) to build the result functionally, as you wouldn't have to worry about initialization, etc. For example, you could do the following:
def f(y):
    index = tf.argmax(y, 1)
    return tf.one_hot(index, tf.shape(y)[1], 1.0, 0.0)
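A quick usage sketch with made-up values:
import tensorflow as tf
y = tf.constant([[0.1, 0.7, 0.2],
                 [0.5, 0.3, 0.2]])
with tf.Session() as sess:
    print(sess.run(f(y)))
    # [[0. 1. 0.]
    #  [1. 0. 0.]]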

How to expand a Tensorflow Variable

Is there any way to make a TensorFlow Variable larger? Like, let's say I wanted to add a neuron to a layer of a neural network in the middle of training. How would I go about doing that? An answer to this question told me how to change the shape of the variable to expand it to fit another row of weights, but I don't know how to initialize those new weights.
I figure another way of going about this might involve combining variables, as in initializing the weights first in a second variable and then adding that in as a new row or column of the first variable, but I can't find anything that lets me do that either.
There are various ways you could accomplish this.
1) The second answer in that post (https://stackoverflow.com/a/33662680/5548115) explains how you can change the shape of a variable by calling 'assign' with validate_shape=False. For example, you could do something like
# Assume var is [m, n]
# Add the new 'data' of shape [1, n] with new values
new_neuron = tf.constant(...)
# If concatenating to add a row, concat on the first dimension.
# If new_neuron was [m, 1], you would concat on the second dimension.
new_variable_data = tf.concat(0, [var, new_neuron]) # [m+1, n]
resize_var = tf.assign(var, new_variable_data, validate_shape=False)
Then when you run resize_var, the data pointed to by 'var' will now have the updated data.
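A minimal runnable sketch of this approach, using the same pre-1.0 API as above (the new row's values here are made up):
import tensorflow as tf
var = tf.Variable(tf.zeros([5, 3]))                 # [m, n] = [5, 3]
new_neuron = tf.constant([[1.0, 2.0, 3.0]])         # new row of weights, shape [1, 3]
new_variable_data = tf.concat(0, [var, new_neuron]) # [6, 3]
resize_var = tf.assign(var, new_variable_data, validate_shape=False)
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(resize_var).shape) # (6, 3)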
2) You could also create a large initial variable and call tf.slice on different regions of the variable as training progresses, since you can dynamically change the 'begin' and 'size' arguments of slice.
You can simply use tf.concat to expand a TensorFlow variable; see the api_docs for details. Note that the result of tf.concat is an ordinary tensor, not a Variable.
v1 = tf.Variable(tf.zeros([5, 3]), dtype=tf.float32)
v2 = tf.Variable(tf.zeros([1, 3]), dtype=tf.float32)
v3 = tf.concat(0, [v1, v2])
Figured it out. It's kind of a roundabout process, but it's the only one I can tell actually works. You need to first unpack the variables, then append the new variable to the end, then pack them back together.
If you're expanding along the first dimension, it's rather short: only 7 lines of actual code.
#the first variable is 5x3
v1 = tf.Variable(tf.zeros([5, 3], dtype=tf.float32), name="1")
#the second variable is 1x3
v2 = tf.Variable(tf.zeros([1, 3], dtype=tf.float32), name="2")
#unpack the first variable into a list of size 3 tensors
#there should be 5 tensors in the list
change_shape = tf.unpack(v1)
#unpack the second variable into a list of size 3 tensors
#there should be 1 tensor in this list
change_shape_2 = tf.unpack(v2)
#for each tensor in the second list, append it to the first list
for i in range(len(change_shape_2)):
    change_shape.append(change_shape_2[i])
#repack the list of tensors into a single tensor
#the shape of this resultant tensor should be [6, 3]
final = tf.pack(change_shape)
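To sanity-check the result, assuming the era-appropriate tf.initialize_all_variables:
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(final).shape) # (6, 3)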
If you want to expand along the second dimension, it gets somewhat longer.
#First variable, 5x3
v3 = tf.Variable(tf.zeros([5, 3], dtype=tf.float32))
#second variable, 5x1
v4 = tf.Variable(tf.zeros([5, 1], dtype=tf.float32))
#unpack tensors into lists of size 3 tensors and size 1 tensors, respectively
#both lists will hold 5 tensors
change = tf.unpack(v3)
change2 = tf.unpack(v4)
#for each tensor in the first list, unpack it into its own list
#this should make a 2d array of size 1 tensors, array will be 5x3
changestep2 = []
for i in range(len(change)):
    changestep2.append(tf.unpack(change[i]))
#do the same thing for the second tensor
#2d array of size 1 tensors, array will be 5x1
change2step2 = []
for i in range(len(change2)):
    change2step2.append(tf.unpack(change2[i]))
    #for each tensor in this row, append it onto the corresponding row list in the first array
    for j in range(len(change2step2[i])):
        changestep2[i].append(change2step2[i][j])
    #pack the extended row list back into a tensor
    changestep2[i] = tf.pack(changestep2[i])
#pack the list of tensors into a single tensor
#the shape of this resultant tensor should be [5, 4]
final2 = tf.pack(changestep2)
I don't know if there's a more efficient way of doing this, but it works, as far as it goes. Expanding along further dimensions would require more layers of lists, as necessary.
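(Note for later readers: in TensorFlow 1.0 and later, tf.pack and tf.unpack were renamed tf.stack and tf.unstack.)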