TensorFlow equivalent of NumPy's array[indices] = scalar indexing

I have a tensor and an index tensor of the same rank. I want to set the values of the tensor that correspond to the indices in the index tensor to a certain scalar. How do I do this?
In other words, I'm looking for the Tensorflow equivalent of the following Numpy operation:
array[indices] = scalar
In my concrete case we're talking about a 1D tensor:
mask = tf.zeros_like(some_1D_tensor)
(e.g. mask = [0, 0, 0, 0, 0])
Let indices be a 1D tensor that contains the indices of mask that I'd like to set to the scalar value 1. So I want:
mask[indices] = 1
(e.g. for indices = [1, 3] the output should be mask == [0, 1, 0, 1, 0])

I don't know whether it wasn't there before or I just hadn't seen it, but the general-case equivalent of
array[indices] = scalar
is
tensor = tf.scatter_nd_update(tensor, indices, updates)
Note that tf.scatter_nd_update() mutates a tf.Variable; for a plain (immutable) tensor you can build the result directly with tf.scatter_nd(indices, updates, shape).
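A minimal sketch of the 1D mask case using tf.scatter_nd, TF 1.x style (the shape [5] and indices [1, 3] are taken from the example above):
import tensorflow as tf

indices = tf.constant([[1], [3]])   # positions to set; note the trailing axis of size 1
updates = tf.ones([2])              # the scalar value 1, once per index
mask = tf.scatter_nd(indices, updates, shape=[5])

with tf.Session() as sess:
    print(sess.run(mask))           # [0. 1. 0. 1. 0.]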

Related

creating a mask tensor from an index tensor

The problem is, I have an indices tensor with shape [batch_size, seq_len, k] and every element in this tensor is in range [0, hidden_dim). I want to create a mask tensor with shape [batch_size, seq_len, hidden_dim] where every element indexed by the indices tensor is 1 and other elements are 0. k is smaller than hidden_dim. For example:
indices = [[[0],[1],[2]]] #batch_size=1, seq_len=3, k=1
mask = tf.zeros(shape=(1,3,3)) #batch_size=1, seq_len=3, hidden_dim = 3
How can I get a target mask tensor whose elements indicated by the indices are 1, i.e.:
target_mask = [[[1, 0, 0], [0, 1, 0], [0, 0, 1]]]
This can be accomplished using tf.one_hot, e.g.:
mask = tf.one_hot(indices, depth=hidden_dim, axis=-1) # [batch, seq_len, k, hidden_dim]
It wasn't clear to me what you'd like to happen to k. tf.one_hot() keeps that axis as is, i.e. you get a one-hot (delta) distribution for each [batch-index, seq-index, k-index] tuple; to turn that into your target mask, you can reduce over the k axis, as in the sketch below.
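A sketch, assuming the k axis should be collapsed into a single mask per position (reducing with tf.reduce_max is one way to do that; the values come from the question):
import tensorflow as tf

indices = tf.constant([[[0], [1], [2]]])           # [batch=1, seq_len=3, k=1]
hidden_dim = 3

one_hot = tf.one_hot(indices, depth=hidden_dim)    # [1, 3, 1, hidden_dim]
mask = tf.reduce_max(one_hot, axis=2)              # [1, 3, hidden_dim], k collapsed

with tf.Session() as sess:
    print(sess.run(mask))  # [[[1. 0. 0.] [0. 1. 0.] [0. 0. 1.]]]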

Understanding INDArray dimension reshaping for Tensorflow Object detection models

Trying to load a TensorFlow-trained model into Deeplearning4j, which fails with the following error:
IllegalStateException: Invalid array shape: cannot associate an array with shape [38880] with a placeholder of shape [-1, -1, -1, 3]:shape is wrong rank or does not match on one or more dimensions
var arr: INDArray = Nd4j.create(data) //.reshape(1, -1, -1, 3);
arr = Nd4j.pile(arr, arr)
sd.associateArrayWithVariable(arr, sd.variables.get(0))
The Python model was loaded like this:
# Load image using OpenCV and
# expand image dimensions to have shape: [1, None, None, 3]
# i.e. a single-column array, where each item in the column has the pixel RGB value
image = cv2.imread(PATH_TO_IMAGE)
image_expanded = np.expand_dims(image, axis=0)
Please explain whichever of these you know:
1) What does [1, None, None, 3] mean in terms of Python arrays?
2) What does np.expand_dims(image, axis=0) do in Python?
3) What should the Deeplearning4j reshape(1, -1, -1, 3) be?
You're mixing two different concepts here: TF placeholders and imperative numpy-like reshape.
In your case, the model expects a 4D input tensor with shape [-1, -1, -1, 3]. For a human, that reads as [Any, Any, Any, 3]. But you're trying to feed it a tensor of shape [38880], i.e. rank 1.
Now to your questions.
1) See above: -1 is treated as "Any".
2) This function inserts a new dimension of size 1 at the given axis, i.e. if you have shape [38880], expand_dims at axis=0 makes it [1, 38880] (see the sketch below).
3) Nope, that's wrong. You should not use that as your shape. You have an image there, so you should specify the actual dimensions of your image, e.g. [1, 800, 600, 3].
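A minimal illustration of np.expand_dims (the 800 x 600 image size is hypothetical):
import numpy as np

image = np.zeros((800, 600, 3))           # H x W x RGB, as loaded by cv2.imread
batched = np.expand_dims(image, axis=0)   # insert a batch axis of size 1 in front
print(batched.shape)                      # (1, 800, 600, 3)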

Converting sparse tensor dense shape into an integer value in tensorflow

If I want to get the shape of a normal tensor in TensorFlow and store the values in a list, I would use the following:
a_shape = [a.shape[0].value, a.shape[1].value]
If I'm not mistaken, .value converts a Dimension object into a plain Python integer.
With sparse tensors, I tried the following:
a_sparse_shape = [a.dense_shape[0].value, a.dense_shape[1].value]
However, I get the error message:
'Tensor' object has no attribute 'value'
Does anyone have any alternate solutions?
Yes, there is an alternative:
import tensorflow as tf
tensor = tf.random_normal([2, 2, 2, 3])
tensor_shape = tensor.get_shape().as_list()
print(tensor_shape)
# [2, 2, 2, 3]
Same for sparse tensors:
sparse_tensor = tf.SparseTensor(indices=[[0, 0], [1, 1]],
                                values=[1, 2],
                                dense_shape=[2, 2])
sparse_tensor_shape = sparse_tensor.get_shape().as_list()
print(sparse_tensor_shape)
# [2, 2]
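The original error comes from the fact that dense_shape is a regular Tensor, so a.dense_shape[0] is itself a Tensor and has no .value attribute. If the shape is only known at runtime, you can evaluate that tensor instead (a sketch, reusing sparse_tensor from above):
with tf.Session() as sess:
    print(sess.run(sparse_tensor.dense_shape))   # [2 2], as a NumPy array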

Why does softmax_cross_entropy_with_logits_v2 return a cost even when logits and labels are the same?

I have tested softmax_cross_entropy_with_logits_v2 with random numbers:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 5])
y = tf.placeholder(tf.float32, shape=[None, 5])
softmax = tf.nn.softmax_cross_entropy_with_logits_v2(logits=x, labels=y)

with tf.Session() as sess:
    feedx = [[0.1, 0.2, 0.3, 0.4, 0.5], [0., 0., 0., 0., 1.]]
    feedy = [[1., 0., 0., 0., 0.], [0., 0., 0., 0., 1.]]
    softmax = sess.run(softmax, feed_dict={x: feedx, y: feedy})
    print("softmax", softmax)
console "softmax [1.8194163 0.9048325]"
What I understood about this function was that it only returns a cost when logits and labels differ.
So why does it return 0.9048325 for the second row, where they are identical?
The way tf.nn.softmax_cross_entropy_with_logits_v2 works is that it first applies softmax to your x array to turn it into probabilities:
p_i = exp(x_i) / sum_j exp(x_j)
where i indexes your array. The output of tf.nn.softmax_cross_entropy_with_logits_v2 is then the dot product between -log(p) and the labels:
loss = -sum_i y_i * log(p_i)
Since the labels are either 0 or 1, only the term where the label equals one contributes. So for your first sample, the softmax probability of the first index is
p_0 = exp(0.1) / (exp(0.1) + exp(0.2) + exp(0.3) + exp(0.4) + exp(0.5)) ≈ 0.1621
and the output is
loss = -log(0.1621) ≈ 1.8194
Your second sample gives a different value, since x[0] differs from x[1].
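A quick NumPy check of the computation above (a sketch; the numbers reproduce the console output in the question):
import numpy as np

def softmax_xent(x, y):
    p = np.exp(x) / np.sum(np.exp(x))   # softmax of the logits
    return -np.sum(y * np.log(p))       # cross entropy against the labels

print(softmax_xent(np.array([0.1, 0.2, 0.3, 0.4, 0.5]),
                   np.array([1., 0., 0., 0., 0.])))   # ~1.8194
print(softmax_xent(np.array([0., 0., 0., 0., 1.]),
                   np.array([0., 0., 0., 0., 1.])))   # ~0.9048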
tf.nn.softmax_cross_entropy_with_logits_v2, as per the documentation, expects unscaled inputs, because it performs a softmax operation on the logits internally. Your second input [0, 0, 0, 0, 1] is therefore internally softmaxed to roughly [0.15, 0.15, 0.15, 0.15, 0.4], and the cross entropy between that distribution and the true label [0, 0, 0, 0, 1] is the 0.9048 you get.

Spatial Pyramid Pooling - Input Size Error (? - None)

I've been trying to implement the Spatial Pyramid Pooling (https://arxiv.org/abs/1406.4729), but I've been having a problem with the input size.
My input has shape (batch_size, None, n_feature_maps) and I have the following code:
self.y_conv_unstacked = tf.unstack(self.conv_output, axis=0)
self.y_maxpool = []
for tensor in self.y_conv_unstacked:
    for size_pool in self.out_pool_size:
        self.w_strd = self.w_size = math.ceil(float(tensor.get_shape()[1]) / size_pool)
        self.pad_w = int(size_pool * self.w_size - tensor.get_shape()[1])
        self.padded_tensor = tf.pad(tensor, tf.constant([[0, 0], [0, 0], [0, self.pad_w], [0, 0]]))
        self.max_pool = tf.nn.max_pool(self.padded_tensor, ksize=[1, 1, self.w_size, 1], strides=[1, 1, self.w_strd, 1], padding='SAME')
        self.spp_tensor = tf.concat([self.spp_tensor, tf.reshape(self.max_pool, [1, size_pool, self.n_fm1])], axis=1)
    self.y_maxpool.append(self.spp_tensor)
Since the inputs in the batch have different sizes, I am unstacking them and pooling each tensor separately. However, tensor.get_shape()[1] returns "?", and tensor.get_shape().as_list()[1] returns None.
I would like to know how I can work around this undefined size. Is it possible to get the tensor's shape at runtime?
Edit: Using tf.shape, I get a tensor. How can I use this tensor to create the ksize, strides and paddings I need?
I would like to know how I can work around this undefined size. Is it possible to get the tensor's shape at runtime?
Use the tf.shape() op to get the dynamic shape of a tensor, instead of x.get_shape(), which returns the static shape of x.
This is explained in detail here.
In the above code, replace tensor.get_shape()[1] with tf.shape(tensor)[1].
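A small sketch of the difference, and of feeding the dynamic value into an op that accepts tensors (TF 1.x style; the shapes and the pad-to-a-multiple-of-4 target are hypothetical). Note that, as far as I know, tf.nn.max_pool in TF 1.x requires plain Python ints for ksize and strides, so dynamic values only help for ops like tf.pad that take tensor arguments:
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, None, 128])

print(x.get_shape().as_list())   # [None, None, 128]: static shape, dims unknown
dyn_len = tf.shape(x)[1]         # int32 scalar tensor, resolved at run time

# tf.pad accepts a tensor for its paddings, so the pad amount can be dynamic:
pad_amount = tf.floormod(-dyn_len, 4)                  # pad length up to a multiple of 4
padded = tf.pad(x, [[0, 0], [0, pad_amount], [0, 0]])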