Getting ValueError during assignment of rows in tensorflow

I have the following piece of code:
norm_embed = tf.sqrt(tf.reduce_sum(tf.multiply(embeddings, embeddings), 1))
comparison = tf.greater(norm_embed, tf.constant(1.))
cond_assignment = tf.assign(embeddings, tf.where(comparison, embeddings/norm_embed, embeddings))
What I'm trying to do: I have a matrix embeddings of shape [V, k], and I want to normalize the rows whose norm is greater than 1.
However, I get ValueError:
ValueError: Dimensions must be equal, but are 2 and 11202 for 'truediv_1' (op: 'RealDiv') with input shapes: [11202,2], [11202].
I understand that the matrix norm_embed has shape [V], but when dividing a [V, k] matrix by a [V] vector, the latter should be broadcast. I don't understand why that isn't happening. I also tried reshaping the vector to [V, 1], but it didn't help.
Why do I receive a ValueError during normalization? Are there other ways to normalize rows only when their norm exceeds a value?

There are two errors with the code.
First, we need to reshape norm_embed to have two dimensions, so that the division broadcasts row-wise:
norm_embed = tf.reshape(norm_embed, [V, 1])
Second, we need to reshape the comparison tensor back to one dimension, because tf.where expects a 1-D condition when selecting whole rows:
comparison = tf.reshape(comparison, [V])
After that, the code works:
norm_embed = tf.sqrt(tf.reduce_sum(tf.multiply(embeddings, embeddings), 1))
norm_embed = tf.reshape(norm_embed, [V, 1])
comparison = tf.greater(norm_embed, tf.constant(1.))
comparison = tf.reshape(comparison, [V])
replacement = tf.where(comparison, embeddings/norm_embed, embeddings)
cond_assignment = tf.assign(embeddings, replacement)
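Here is a minimal, self-contained version of the fixed code, as a sketch assuming TF1-style graph execution (via tf.compat.v1 on newer installs); V and k are small illustrative values:

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

V, k = 5, 2
embeddings = tf.Variable(np.random.uniform(-2., 2., (V, k)).astype(np.float32))

norm_embed = tf.sqrt(tf.reduce_sum(tf.multiply(embeddings, embeddings), 1))  # [V]
norm_embed = tf.reshape(norm_embed, [V, 1])           # [V, 1]: division now broadcasts row-wise
comparison = tf.greater(norm_embed, tf.constant(1.))  # [V, 1]
comparison = tf.reshape(comparison, [V])              # [V]: tf.where wants a 1-D row selector
replacement = tf.where(comparison, embeddings / norm_embed, embeddings)
cond_assignment = tf.assign(embeddings, replacement)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(cond_assignment))  # rows that had norm > 1 are now unit-norm

As an aside, tf.clip_by_norm(embeddings, 1., axes=1) implements the same "rescale only rows whose norm exceeds 1" logic in a single op, if a built-in fits your use case.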

Related

How to reshape a tensor with unknown dimensions?

I encountered a problem reshaping an intermediate 4D TensorFlow tensor X into a 3D tensor Y, where
X is of shape ( batch_size, nb_rows, nb_cols, nb_filters )
Y is of shape ( batch_size, nb_rows*nb_cols, nb_filters )
batch_size = None
Of course, when nb_rows and nb_cols are known integers, I can reshape X without any problem. However, in my application I need to deal with the case
nb_rows = nb_cols = None
What should I do? I tried Y = tf.reshape( X, (-1, -1, nb_filters)) but it clearly fails to work.
To me, this operation is well-defined, because it always squeezes the two middle axes into a single one while keeping the first and last axes unchanged. Can anyone help me?
In this case you can access the dynamic shape of X through tf.shape(X):
shape = [tf.shape(X)[k] for k in range(4)]
Y = tf.reshape(X, [shape[0], shape[1]*shape[2], shape[3]])
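This works because tf.shape(X) is evaluated at run time, whereas tf.reshape allows at most one -1 in its shape argument, which is why (-1, -1, nb_filters) fails. A minimal runnable sketch (TF1-style graph mode; the concrete feed shape is illustrative):

import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

nb_filters = 16
X = tf.placeholder(tf.float32, shape=(None, None, None, nb_filters))

shape = [tf.shape(X)[k] for k in range(4)]  # dynamic dims, known only at run time
Y = tf.reshape(X, [shape[0], shape[1] * shape[2], shape[3]])

with tf.Session() as sess:
    y = sess.run(Y, feed_dict={X: np.zeros((2, 3, 4, nb_filters), np.float32)})
    print(y.shape)  # (2, 12, 16)

Equivalently, since only one dimension now needs to be inferred, Y = tf.reshape(X, [tf.shape(X)[0], -1, nb_filters]) also works.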

ValueError: Dimensions must be equal in Tensorflow/Keras

My code is as follows:
v = tf.Variable(initial_value=v, trainable=True)
v.shape is (1, 768)
In the model:
inputs_sents = keras.Input(shape=(50,3))
inputs_events = keras.Input(shape=(50,768))
x_1 = tf.matmul(v,tf.transpose(inputs_events))
x_2 = tf.matmul(x_1,inputs_sents)
But I got an error,
ValueError: Dimensions must be equal, but are 768 and 50 for '{{node BatchMatMulV2_3}} = BatchMatMulV2[T=DT_FLOAT, adj_x=false, adj_y=false](BatchMatMulV2_3/ReadVariableOp, Transpose_3)' with input shapes: [1,768], [768,50,?]
I think it is taking the batch dimension into account? But how should I deal with this?
v is a trainable vector (or a 2-D array whose first dimension is 1); I want it to be trained during the training process.
PS: This is the result I got using the code provided by the first answer. I think it is incorrect, because Keras already takes the batch dimension into account.
Plus, from the keras documentation,
shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
https://keras.io/api/layers/core_layers/input/
Should I rewrite my code without Keras?
The batch dimension is denoted by None:
import numpy as np
import tensorflow as tf
from tensorflow import keras
inputs_sents = keras.Input(shape=(None,1,3))
inputs_events = keras.Input(shape=(None,1,768))
v = np.ones(shape=(1,768), dtype=np.float32)
v = tf.Variable(initial_value=v, trainable=True)
x_1 = tf.matmul(v,tf.transpose(inputs_events))
x_2 = tf.matmul(x_1,inputs_sents)
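In line with the PS above, a sketch that keeps Keras's implicit batch dimension instead of adding another None axis could look like the following. It assumes the intent is v @ transpose(events) followed by @ sents per batch element; tf.matmul(..., transpose_b=True) replaces the bare tf.transpose (which would also flip the batch axis, the cause of the original error), and tf.matmul broadcasts v across the batch:

import numpy as np
import tensorflow as tf
from tensorflow import keras

inputs_sents = keras.Input(shape=(50, 3))     # (batch, 50, 3)
inputs_events = keras.Input(shape=(50, 768))  # (batch, 50, 768)

v = tf.Variable(initial_value=np.ones((1, 768), dtype=np.float32), trainable=True)

x_1 = tf.matmul(v, inputs_events, transpose_b=True)  # (batch, 1, 50)
x_2 = tf.matmul(x_1, inputs_sents)                   # (batch, 1, 3)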

How to create Keras ZeroTensor of specific shape

I am a total beginner with tensorflow.keras and I am wondering how I could create a constant zero tensor of a specific shape.
For example with this:
zeros = tf.keras.backend.zeros((someTensor.shape[0], someTensor.shape[1], someTensor.shape[2], channels))
concat = tf.keras.backend.concatenate([someTensor, zeros], axis=3)
The operation tf.keras.backend.zeros fails with:
ValueError: Cannot convert a partially known TensorShape to a Tensor
I guess that's because the batch size is unknown during graph building. How can I create a zero tensor, or any other constant tensor, when I don't know the batch size at that moment? Or is there some kind of unknown(?) value that I can specify?
The problem is that you are passing a tuple that mixes unknown dimensions and integers; a partially known shape cannot be converted to a tensor at graph-build time.
You should:
from tensorflow.keras import backend as K

shape = K.shape(someTensor)
ch = K.variable([channels]) #I think K.constant also works.
newShape = K.concatenate([shape[:3], ch])
zeros = K.zeros(newShape)
Now, if this doesn't work because of unknown shapes, a dirty workaround would be:
#if someTensor is 3D
zeros = K.zeros_like(someTensor)
zeros = K.stack([zeros] * channels, axis=-1)
#if someTensor is 4D
zeros = K.zeros_like(someTensor[:,:,:,0])
zeros = K.stack([zeros]*channels, axis=-1)
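Alternatively, since tf.zeros (unlike K.zeros, which builds a variable) accepts a dynamic shape tensor, you can construct the shape at run time. A sketch assuming someTensor is 4D and channels is a Python int; zeros_matching is an illustrative helper name:

import tensorflow as tf

def zeros_matching(someTensor, channels):
    # first three dims taken dynamically, channel count appended statically
    dyn_shape = tf.concat([tf.shape(someTensor)[:3], [channels]], axis=0)
    return tf.zeros(dyn_shape, dtype=someTensor.dtype)

# usage:
# zeros = zeros_matching(someTensor, channels)
# concat = tf.keras.backend.concatenate([someTensor, zeros], axis=3)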

Multiply matrix with vector (element-wise) Tensorflow

I am not sure how to put this question into a title, but here is an example of the thing I need help with in TensorFlow.
For example:
matrix_1 shape = [4,2]
matrix_2 shape = [4,1]
matrix_1 * matrix_2
[[1,2],
[3,4],
[5,6],
[7,8]]
*
[[0.1],
[0.2],
[0.3],
[0.4]]
= [[0.1,0.2],
[0.6,0.8],
[1.5,1.8],
[2.8,3.2]]
Is there any algorithm to achieve this?
Thank you
This is the error that I am getting from the simplified problem example above:
ValueError: Dimensions must be equal, but are 784 and 100 for 'mul_13' (op: 'Mul') with input shapes: [100,784], [100]
The standard tf.multiply(matrix_1, matrix_2) operation (or the shorthand syntax matrix_1 * matrix_2) will perform exactly the computation that you want on matrix_1 and matrix_2.
However, it looks like the error message you are seeing is because matrix_2 has shape [100], whereas it must be [100, 1] to get the elementwise broadcasting behavior. Use tf.reshape(matrix_2, [100, 1]) or tf.expand_dims(matrix_2, 1) to convert it to the correct shape.
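A minimal runnable check of that fix, using the shapes from the example above (TF 2.x eager mode; the [100, 784] case from the error is analogous):

import tensorflow as tf

matrix_1 = tf.constant([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])  # [4, 2]
matrix_2 = tf.constant([0.1, 0.2, 0.3, 0.4])                      # [4]

# a [4] vector does not broadcast row-wise against [4, 2]; make it [4, 1] first
result = matrix_1 * tf.reshape(matrix_2, [4, 1])
# equivalently: matrix_1 * tf.expand_dims(matrix_2, 1)
print(result)  # [[0.1, 0.2], [0.6, 0.8], [1.5, 1.8], [2.8, 3.2]]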

Slicing a tensor with indices of tensor

In a mini-batch, I have a hidden word embedding tensor h, where shape(h) is (?, 300), and an attention word embedding tensor attention_words, where shape(attention_words) is (?, num, 300). Now I want to find the closest word in attention_words to each word in h, using cosine distance as the measure:
normed_attention_words = tf.nn.l2_normalize(attention_words, dim=2)
normed_hidden = tf.nn.l2_normalize(h, dim=1)
normed_hidden = tf.expand_dims(normed_hidden, 1) # (?,1,300)
#num is the number of attention words
normed_hidden = tf.tile(normed_hidden, [1, num, 1]) #(?,num,300)
cosine_similarity = tf.reduce_sum(tf.multiply(normed_attention_words, normed_hidden), 2)
closest_words = tf.argmax(cosine_similarity, 1)
Hence, shape(closest_words) is (?,). I want to use the tensor closest_words as indices to slice the original attention_words. I have tried many times but failed (especially with tf.gather_nd, because the gradient for gather_nd was not implemented). How can I solve this?
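No answer is included above. One common workaround for the missing gather_nd gradient in older TensorFlow versions is to select each row with a one-hot mask, which stays differentiable with respect to attention_words; gather_closest is an illustrative helper name:

import tensorflow as tf

def gather_closest(attention_words, closest_words, num):
    # attention_words: (?, num, 300); closest_words: (?,) integer indices
    mask = tf.one_hot(closest_words, depth=num, dtype=attention_words.dtype)  # (?, num)
    return tf.reduce_sum(attention_words * tf.expand_dims(mask, -1), axis=1)  # (?, 300)

In more recent TensorFlow versions, tf.gather(attention_words, closest_words, axis=1, batch_dims=1) performs the same per-row selection directly, and tf.gather_nd now has a registered gradient.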