How to create a Tensor with specific values?

I have a tensor with shape NxM.
I'd like to create another tensor with the same shape, filled with ones up until a certain column (might be different for each row) and the rest of it filled with another value (let's say 10 for the example).
How do I do that?

Something like this can help you:
import tensorflow as tf

input = tf.Variable([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]], dtype=tf.float32)
indices = tf.constant([1, 4, 2])  # per-row count of leading columns to fill with ones
X = tf.ones_like(input)
Y = tf.constant(10, dtype=tf.float32, shape=input.shape)
# sequence_mask is True for the first indices[i] columns of row i
result = tf.where(tf.sequence_mask(indices, tf.shape(input)[1]), X, Y)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(input))
    print(sess.run(indices))
    print(sess.run(result))
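For the example above, tf.sequence_mask([1, 4, 2], 5) marks the first 1, 4 and 2 entries of the three rows as True, so the last print should show:
[[ 1. 10. 10. 10. 10.]
 [ 1.  1.  1.  1. 10.]
 [ 1.  1. 10. 10. 10.]]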

Related

Tensorflow conv2d on RGB image

From the accepted answer in this question, given the following input and kernel matrices, the output of tf.nn.conv2d is
[[14  6]
 [ 6 12]]
which makes sense. However, when I make the input and kernel matrices have 3 channels each (by repeating each original matrix), and run the same code:
import numpy as np
import tensorflow as tf

# the previous input
i_grey = np.array([
    [4, 3, 1, 0],
    [2, 1, 0, 1],
    [1, 2, 4, 1],
    [3, 1, 0, 2]
])
# copy to 3 dimensions
i_rgb = np.repeat(np.expand_dims(i_grey, axis=0), 3, axis=0)
# convert to tensor
i_rgb = tf.constant(i_rgb, dtype=tf.float32)
# make kernel depth match input; same process as input
k = np.array([
    [1, 0, 1],
    [2, 1, 0],
    [0, 0, 1]
])
k_rgb = np.repeat(np.expand_dims(k, axis=0), 3, axis=0)
# convert to tensor
k_rgb = tf.constant(k_rgb, dtype=tf.float32)
Here's what my input and kernel matrices look like at this point.
# reshape input to format: [batch, in_height, in_width, in_channels]
image_rgb = tf.reshape(i_rgb, [1, 4, 4, 3])
# reshape kernel to format: [filter_height, filter_width, in_channels, out_channels]
kernel_rgb = tf.reshape(k_rgb, [3, 3, 3, 1])
conv_rgb = tf.squeeze(tf.nn.conv2d(image_rgb, kernel_rgb, [1, 1, 1, 1], "VALID"))
with tf.Session() as sess:
    conv_result = sess.run(conv_rgb)
    print(conv_result)
I get the final output:
[[35. 15.]
[35. 26.]]
But I was expecting the original output multiplied by 3:
[[42. 18.]
[18. 36.]]
because from my understanding, each channel of the kernel is convolved with each channel of the input, and the resultant matrices are summed to get the final output.
Am I missing something from this process or the tensorflow implementation?
Reshape is a tricky function. It will produce the shape you want, but it can easily scramble the values together. In cases like yours, one should avoid reshape as much as possible.
In this particular case it is better to duplicate the arrays along a new axis instead. In the [batch, in_height, in_width, in_channels] format, channels is the last dimension, so that is the axis to pass to repeat(). The following code should better reflect the logic behind it:
i_grey = np.expand_dims(i_grey, axis=0)  # add batch dim
i_grey = np.expand_dims(i_grey, axis=3)  # add channel dim
i_rgb = np.repeat(i_grey, 3, axis=3)     # duplicate along the channels dim
And likewise with the filters:
k = np.expand_dims(k, axis=2)    # input channels dim
k = np.expand_dims(k, axis=3)    # output channels dim
k_rgb = np.repeat(k, 3, axis=2)  # duplicate along the input channels dim
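Putting it together, here is a minimal end-to-end sketch (assuming TensorFlow 1.x, since the question uses tf.Session) that builds the 3-channel input and kernel by repeating along the channel dimensions and confirms the output is three times the single-channel result:
import numpy as np
import tensorflow as tf

i_grey = np.array([[4, 3, 1, 0], [2, 1, 0, 1], [1, 2, 4, 1], [3, 1, 0, 2]], dtype=np.float32)
k = np.array([[1, 0, 1], [2, 1, 0], [0, 0, 1]], dtype=np.float32)
image_rgb = np.repeat(i_grey[np.newaxis, :, :, np.newaxis], 3, axis=3)  # [1, 4, 4, 3]
kernel_rgb = np.repeat(k[:, :, np.newaxis, np.newaxis], 3, axis=2)      # [3, 3, 3, 1]
conv_rgb = tf.squeeze(tf.nn.conv2d(tf.constant(image_rgb), tf.constant(kernel_rgb), [1, 1, 1, 1], "VALID"))
with tf.Session() as sess:
    print(sess.run(conv_rgb))  # [[42. 18.] [18. 36.]], i.e. 3x the single-channel output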

use tf.shape() on tensorflow placeholder

Let's look at this simple made-up TF operation:
import numpy as np
import tensorflow as tf

data = np.random.rand(1, 2, 3)
x = tf.placeholder(tf.float32, shape=[None, None, None], name='x_pl')
out = x
print('shape:', tf.shape(out))
sess = tf.Session()
sess.run(out, feed_dict={x: data})
and the print is:
shape: Tensor("Shape_13:0", shape=(3,), dtype=int32)
I read that you should use tf.shape() to get the 'dynamic' shape of the tensor, which seems to be what I need, but why is the shape shape=(3,)?
Why isn't it (1, 2, 3), since that should be determined when the session is run?
Suppose this is part of a neural network where I need to know the last dimension of x, for example to pass x into a Dense layer, for which the last dimension of x needs to be known.
How do I do it then?
It is because tf.shape() is an op and you have to run it within a session.
data = np.random.rand(1, 2, 3)
x = tf.placeholder(tf.float32, shape=[None, None, None], name='x_pl')
out = x
print('shape:', tf.shape(out))  # prints the symbolic shape tensor, not its value
z = tf.shape(out)
sess = tf.Session()
out_, z_ = sess.run([out, z], feed_dict={x: data})
print(f"shape of out: {z_}")
will return
shape: Tensor("Shape:0", shape=(3,), dtype=int32)
shape of out: [1 2 3]
Even if you look at the example from the docs (https://www.tensorflow.org/api_docs/python/tf/shape):
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.shape(t)
If you run it just like that it will return something like
<tf.Tensor 'Shape_4:0' shape=(3,) dtype=int32>
but if you run it within a session then you will get the expected result
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
print(sess.run(tf.shape(t)))
[2 2 3]
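As for feeding x into a Dense layer: the layer's weights need the size of the last dimension at graph-construction time, so that dimension must be statically known. A minimal sketch (assuming TF 1.x, with the last dimension fixed to 3 instead of None):
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, None, 3], name='x_pl')
print(x.get_shape().as_list()[-1])  # 3: static shape, available without running a session
print(tf.shape(x)[-1])              # a tensor: the dynamic value only exists at run time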

Element-wise assignment in tensorflow

In numpy, this can be done easily:
>>> img
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]], dtype=int32)
>>> img[img>5] = [1,2,3,4]
>>> img
array([[1, 2, 3],
       [4, 5, 1],
       [2, 3, 4]], dtype=int32)
However, no similar operation seems to exist in tensorflow.
You can never assign a value to a tensor in tensorflow, since a change in a tensor's value is not traceable by backpropagation, but you can still derive a new tensor from the original one. Here is a solution:
import tensorflow as tf
tf.enable_eager_execution()

img = tf.constant(list(range(1, 10)), shape=[3, 3])
replace_mask = img > 5
keep_mask = tf.logical_not(replace_mask)
keep = tf.boolean_mask(img, keep_mask)  # values that stay unchanged
keep_index = tf.where(keep_mask)        # their positions
replace_index = tf.where(replace_mask)  # positions to overwrite
replace = tf.random_uniform((tf.shape(replace_index)[0],), 0, 10, tf.int32)  # new values
updates = tf.concat([keep, replace], axis=0)
indices = tf.concat([keep_index, replace_index], axis=0)
# rebuild the full tensor from (index, value) pairs
result = tf.scatter_nd(tf.cast(indices, tf.int32), updates, shape=tf.shape(img))
Actually there is a way to achieve this. Very similar to @Jie.Zhou's answer, you can replace tf.constant with tf.Variable, and then replace tf.scatter_nd with tf.scatter_nd_update.
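A minimal sketch of that variant (assuming TF 1.x graph mode, and that exactly four elements exceed 5, as in the numpy example above):
import tensorflow as tf

img = tf.Variable([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=tf.int32)
indices = tf.where(img > 5)          # row-major order: [1,2], [2,0], [2,1], [2,2]
updates = tf.constant([1, 2, 3, 4])  # one new value per matched position
assign_op = tf.scatter_nd_update(img, indices, updates)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(assign_op))       # [[1 2 3] [4 5 1] [2 3 4]]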

Tensorflow: select specific elements from each row of a tensor for a NN with variable labels

I'm trying to build a neural network where the labels and the number of labels change with the input. For example, I could have a final layer of 10 units that represent the logits of their classes, but sometimes I will only need units [1,3,4] to calculate the cross entropy, sometimes units [3,4,5,7], etc.
I tried using different combinations of map_fn, gather, py_func and while_loop, but none of them seems to fit my case. Another way might be to list all possible label combinations (I call them network heads) and find some conditional construct that lets me choose one based on the value of a placeholder, but I'm not sure how to implement that.
For example:
x = tf.placeholder(dtype=tf.float32, shape=[None, 3])
y = tf.placeholder(dtype=tf.int32, shape=[None, 3])
... to_do ...
with tf.Session() as sess:
    sess.run(to_do, feed_dict={x: [[1, 3, 4], [3, 7, 8]], y: [[1, 0, 0], [0, 1, 1]]})
Here I need something that returns [[1], [7, 8]].
Never mind, there was a very easy way to get the probabilities I needed for the cross entropy:
x = tf.placeholder(dtype=tf.float32, shape=[None, 3])
y = tf.placeholder(dtype=tf.int32, shape=[None, 3])
# masked softmax: exponentiate only the logits selected by y, zero out the rest
probabilities = tf.where(tf.equal(y, 1), tf.exp(x), tf.zeros_like(x))
normalizing_sum = tf.reduce_sum(probabilities, 1, keep_dims=True)
probabilities /= normalizing_sum
with tf.Session() as sess:
    res = sess.run(probabilities, feed_dict={x: [[1, 3, 4], [3, 7, 8]], y: [[1, 0, 0], [0, 1, 1]]})
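For this feed, res comes out as approximately [[1., 0., 0.], [0., 0.269, 0.731]]: the first row has a single selected unit, which therefore gets all the probability mass, while the second row is the softmax of logits 7 and 8 over the two selected units.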

reshape tensor between two layers

I have a neural network in which I built my own layer, and it gives a result with the shape A = [10, 5].
I want to feed the result to another layer which takes input with the shape B = [10, 9, 5].
The input B is based on the previous result A, for example, by selecting 9 different rows from A, 10 times over, making a new tensor with the shape [10, 9, 5].
Is there a way to do that?
A for loop will do:
a = tf.constant([[1, 2, 7], [3, 4, 8], [5, 6, 9]])
tensor_list = []
pick_times = 3
for i in range(pick_times):
    # pick every row except row i
    pick_rows = [j for j in range(pick_times) if i != j]
    tensor_list.append(tf.gather(a, pick_rows))
concated_tensor = tf.concat(tensor_list, 0)
result = tf.reshape(concated_tensor, [3, 2, 3])
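Evaluated in a session, result is [[[3 4 8] [5 6 9]] [[1 2 7] [5 6 9]] [[1 2 7] [3 4 8]]]: slice i stacks the two rows of a other than row i, which is the same select-9-rows-10-times idea at a [3, 2, 3] scale.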
Convert the tensor A (the output of the layer) into a numpy array with:
a = sess.run(A)
For the sake of the example I'll use:
a = np.random.uniform(0, 5, [10])
Then:
# choose which element will be left out
out = np.random.randint(5, size=[10])
# array of different output rows, each without one random element
b = []
for i in range(10):
    b.append(np.delete(a, out[i]))
# stack them all together
B = tf.stack(b)
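Note that the round trip through numpy detaches B from the graph, so gradients will not flow back into A. If the row selections can be expressed as an index array, a graph-only alternative is a single tf.gather with 2-D indices; a sketch, assuming A has shape [10, 5] and each of the 10 picks drops one row:
import numpy as np
import tensorflow as tf

A = tf.placeholder(tf.float32, shape=[10, 5])  # stands in for the layer output
# for pick i, keep the 9 row indices j != i (any [10, 9] index array works here)
idx = np.array([[j for j in range(10) if j != i] for i in range(10)])
B = tf.gather(A, idx)  # gathering with a [10, 9] index array yields shape [10, 9, 5]
with tf.Session() as sess:
    print(sess.run(tf.shape(B), feed_dict={A: np.random.rand(10, 5)}))  # [10  9  5]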