Multiply matrix with vector (element-wise) in TensorFlow

I am not sure how to put this question into a title, but I will show an example of what I need help with in TensorFlow.
For example:
matrix_1 shape = [4,2]
matrix_2 shape = [4,1]
matrix_1 * matrix_2
[[1,2],
[3,4],
[5,6],
[7,8]]
*
[[0.1],
[0.2],
[0.3],
[0.4]]
= [[0.1,0.2],
[0.6,0.8],
[1.5,1.8],
[2.8,3.2]]
Is there any algorithm to achieve this?
Thank you
This is the error I am getting from my actual code (the example above is a simplified version of the problem):
ValueError: Dimensions must be equal, but are 784 and 100 for 'mul_13' (op: 'Mul') with input shapes: [100,784], [100]

The standard tf.multiply(matrix_1, matrix_2) operation (or the shorthand syntax matrix_1 * matrix_2) will perform exactly the computation that you want on matrix_1 and matrix_2.
However, it looks like the error message you are seeing is because matrix_2 has shape [100], whereas it must be [100, 1] to get the elementwise broadcasting behavior. Use tf.reshape(matrix_2, [100, 1]) or tf.expand_dims(matrix_2, 1) to convert it to the correct shape.
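For reference, here is a minimal sketch of that fix applied to the small example from the question (the numbers are just the ones shown above; the expected output is in the comments):
import tensorflow as tf

matrix_1 = tf.constant([[1., 2.],
                        [3., 4.],
                        [5., 6.],
                        [7., 8.]])            # shape [4, 2]
matrix_2 = tf.constant([0.1, 0.2, 0.3, 0.4])  # shape [4] -- this shape triggers the error

# Give matrix_2 a trailing dimension of 1 so it broadcasts across the columns.
matrix_2 = tf.expand_dims(matrix_2, 1)        # shape [4, 1]

result = matrix_1 * matrix_2                  # same as tf.multiply(matrix_1, matrix_2)
print(result)
# [[0.1, 0.2], [0.6, 0.8], [1.5, 1.8], [2.8, 3.2]] (up to float rounding)
In the original error, the same idea applies: reshape the [100] tensor to [100, 1] before multiplying it with the [100, 784] tensor.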

Related

ValueError: Dimensions must be equal in Tensorflow/Keras

My code is as follows:
v = tf.Variable(initial_value=v, trainable=True)
v.shape is (1, 768)
In the model:
inputs_sents = keras.Input(shape=(50,3))
inputs_events = keras.Input(shape=(50,768))
x_1 = tf.matmul(v,tf.transpose(inputs_events))
x_2 = tf.matmul(x_1,inputs_sents)
But I got an error,
ValueError: Dimensions must be equal, but are 768 and 50 for
'{{node BatchMatMulV2_3}} = BatchMatMulV2[T=DT_FLOAT, adj_x=false, adj_y=false](BatchMatMulV2_3/ReadVariableOp, Transpose_3)'
with input shapes: [1,768], [768,50,?]
I think this is because of the batch dimension, but how should I deal with it?
v is a trainable vector (or a 2-D array whose first dimension is 1); I want it to be trained during the training process.
PS: This is the result I got using the code provided by the first answer. I think it is incorrect, because Keras already takes the first batch dimension into account.
Plus, from the keras documentation,
shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
https://keras.io/api/layers/core_layers/input/
Should I rewrite my code without Keras?
The shape of a batch is denoted by None:
import numpy as np
import tensorflow as tf
from tensorflow import keras

inputs_sents = keras.Input(shape=(None,1,3))
inputs_events = keras.Input(shape=(None,1,768))
v = np.ones(shape=(1,768), dtype=np.float32)
v = tf.Variable(initial_value=v, trainable=True)
x_1 = tf.matmul(v,tf.transpose(inputs_events))
x_2 = tf.matmul(x_1,inputs_sents)
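For what it's worth, here is a sketch of one way to keep the Keras batch dimension out of the products, using tf.einsum so that only the per-example axes are contracted (the batch size of 4 and the random tensors are placeholders I made up; the subscripts reflect my reading of the intended shapes):
import tensorflow as tf

v = tf.Variable(tf.ones((1, 768)), trainable=True)
events = tf.random.normal((4, 50, 768))   # stands in for inputs_events (batch of 4)
sents = tf.random.normal((4, 50, 3))      # stands in for inputs_sents

# (1, 768) x (4, 50, 768) -> (4, 1, 50): contract over the 768-axis of each example
x_1 = tf.einsum('ie,bse->bis', v, events)
# (4, 1, 50) x (4, 50, 3) -> (4, 1, 3)
x_2 = tf.einsum('bis,bsk->bik', x_1, sents)
print(x_2.shape)  # (4, 1, 3)
An equivalent route is tf.matmul(v, tf.transpose(events, perm=[0, 2, 1])), which transposes only the last two axes and lets tf.matmul broadcast v over the batch.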

Triple tensor product with Tensorflow

Suppose I have a matrix A and two vectors x,y, of appropriate dimensions. I want to compute the dot product x' * A * y, where x' denotes the transpose. This should result in a scalar.
Is there a convenient API function in Tensorflow to do this?
(Note that I am using Tensorflow 2).
Use tf.linalg.tensordot(); see the documentation.
As you mentioned in the question, you are trying to compute a dot product. Plain tf.matmul() will not work directly here, because it expects matrix (rank-2) operands rather than 1-D vectors.
Demo code snippet
import tensorflow as tf
A = tf.constant([[1,4,6],[2,1,5],[3,2,4]])
x = tf.constant([3,2,7])
result = tf.linalg.tensordot(tf.transpose(x), A, axes=1)
result = tf.linalg.tensordot(result, x, axes=1)
print(result)
And the result will be
>>>tf.Tensor(532, shape=(), dtype=int32)
A few points I want to mention here:
Don't forget the axes argument inside tf.linalg.tensordot().
When you create tf.zeros(5), you get a rank-1 tensor of shape (5,), i.e. [0, 0, 0, 0, 0]; transposing it gives you back the same tensor. If you instead create tf.zeros((5, 1)), you get a column vector of shape (5, 1), i.e. [[0], [0], [0], [0], [0]], and transposing it does change the shape. I still recommend the snippet above; for a dot product you don't have to worry much about this (see the short illustration after these points).
If you are still facing issues, I will be very happy to help.
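A quick illustration of that shape point (just zeros, as in the example):
import tensorflow as tf

a = tf.zeros(5)        # shape (5,)  -- rank-1 tensor
b = tf.zeros((5, 1))   # shape (5, 1) -- column vector

print(tf.transpose(a).shape)  # (5,)   -- transposing a rank-1 tensor is a no-op
print(tf.transpose(b).shape)  # (1, 5) -- the column becomes a row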
Just do the following,
import tensorflow as tf
x = tf.constant([1,2])
a = tf.constant([[2,3],[3,4]])
y = tf.constant([2,3])
z = tf.reshape(tf.matmul(tf.matmul(x[tf.newaxis,:], a), y[:, tf.newaxis]),[])
print(z.numpy())
Returns
>>> 49
Just use tf.transpose and multiplication operator like this:
tf.transpose(x) * A * y
Based on your example:
x = tf.zeros(5)
A = tf.zeros((5,5))
How about
x = tf.expand_dims(x, -1)
tf.matmul(tf.matmul(x, A, transpose_a=True), x)
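Putting that together into something runnable (the zero-valued tensors are just the ones from the example, so the result is the 1x1 zero matrix):
import tensorflow as tf

x = tf.zeros(5)
A = tf.zeros((5, 5))

x = tf.expand_dims(x, -1)                                  # (5,) -> (5, 1)
result = tf.matmul(tf.matmul(x, A, transpose_a=True), x)   # x' A x
print(result)  # tf.Tensor([[0.]], shape=(1, 1), dtype=float32)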

Getting ValueError during assignment of rows in tensorflow

I have the following piece of code:
norm_embed = tf.sqrt(tf.reduce_sum(tf.multiply(embeddings, embeddings), 1))
comparison = tf.greater(norm_embed, tf.constant(1.))
cond_assignment = tf.assign(embeddings, tf.where(comparison, embeddings/norm_embed, embeddings))
What I'm trying to do: I have a matrix embeddings of shape [V, k], and I want to normalize those rows whose norm is greater than 1.
However, I get ValueError:
ValueError: Dimensions must be equal, but are 2 and 11202 for
'truediv_1' (op: 'RealDiv') with input shapes: [11202,2], [11202].
I understand that norm_embed has shape [V], but when dividing a [V, k] matrix by a [V] vector I expected the latter to be broadcast; I don't understand why that isn't happening. I also tried reshaping the vector to [V, 1], but it didn't help.
Why do I get a ValueError during normalization? Are there other ways to normalize rows when their norm exceeds 1?
There are two problems with the code.
First, norm_embed needs to be reshaped to two dimensions so that it broadcasts row-wise against embeddings:
norm_embed = tf.reshape(norm_embed, [V, 1])
Second, the comparison tensor needs to be reshaped back to one dimension, so that tf.where selects whole rows:
comparison = tf.reshape(comparison, [V])
After that, the code works:
norm_embed = tf.sqrt(tf.reduce_sum(tf.multiply(embeddings, embeddings), 1))
norm_embed = tf.reshape(norm_embed, [V, 1])
comparison = tf.greater(norm_embed, tf.constant(1.))
comparison = tf.reshape(comparison, [V])
replacement = tf.where(comparison, embeddings/norm_embed, embeddings)
cond_assignment = tf.assign(embeddings, replacement)
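As an aside, the same behaviour (rescale only the rows whose L2 norm exceeds 1) can be written more compactly with tf.clip_by_norm and its axes argument; this is a TF 2-style sketch, not part of the original answer:
import tensorflow as tf

V, k = 11202, 2
embeddings = tf.Variable(tf.random.normal([V, k]))

# Clip each row (axis 1) to a maximum L2 norm of 1; rows already below 1 are left untouched.
clipped = tf.clip_by_norm(embeddings, clip_norm=1.0, axes=[1])
cond_assignment = embeddings.assign(clipped)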

Tensorflow how to increment the dimension of a variable

When we do tf.nn.embedding_lookup, it returns a 2-D tensor (each label becomes a vector, not a matrix):
In [244]: one_hot_label = tf.nn.embedding_lookup(np.eye(vocab_size), Y[labels_i])
In [245]: one_hot_label
Out[245]: <tf.Tensor 'embedding_lookup_43975:0' shape=(20, 8000) dtype=float64>
I need to reshape this (20,8000) tensor into (20,8000,1). How should I do it?
I'm not asking how to hard-code (20, 8000, 1) using tf.reshape; I'm asking in general how to convert 2-D to 3-D or higher.
You can use tf.expand_dims: this operation inserts a dimension of 1 into the tensor's shape.
one_hot_label = tf.expand_dims(one_hot_label, axis=2)
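For the general 2-D-to-3-D case, a small sketch (the shapes are just the ones from the question; the zeros stand in for the lookup result):
import tensorflow as tf

one_hot_label = tf.zeros((20, 8000))             # stands in for the embedding_lookup result

a = tf.expand_dims(one_hot_label, axis=2)        # (20, 8000, 1) -- new last axis
b = tf.expand_dims(one_hot_label, axis=0)        # (1, 20, 8000) -- new leading axis
c = tf.reshape(one_hot_label, (20, 8000, 1))     # same as `a`, via reshape
d = one_hot_label[..., tf.newaxis]               # (20, 8000, 1) -- indexing shorthand

print(a.shape, b.shape, c.shape, d.shape)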

TensorFlow: Getting a Sub-Tensor from a Tensor Using Indexing

A is a tensorflow.tensor with shape (2261,)
I want to get a new tensor from the following indices of A: [10, 20, 30]
I tried all of the following, but none of them work:
A[[10,20,30]]
# *** ValueError: Index out of range using input dim 1; input has only 1 dims for 'strided_slice' (op: 'StridedSlice') with input shapes: [2261], [3], [3], [3].
A[10,20,30]
# same error as above
A[numpy.array([10,20,30])]
# *** ValueError: Shape must be rank 1 but is rank 2 for 'strided_slice' (op: 'StridedSlice') with input shapes: [2261], [1,3], [1,3], [1].
A[10]
# <tf.Tensor 'strided_slice:0' shape=() dtype=float32> - not an error but a shapeless tensor
A[tensorflow.constant(10)]
# same problem as above
Why are these not working, and what can I do?
C = tf.nn.embedding_lookup(A, B)
where B is a tensor with the values [10,20,30]
For reference: https://www.tensorflow.org/api_docs/python/nn/embeddings
I think what you are looking for is the gather function.
B = tf.constant([10, 20, 30])
tf.gather(A, B)
https://www.tensorflow.org/api_docs/python/tf/gather
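A quick, self-contained check of the gather approach (A here is just a range so the gathered values are easy to recognize; tf.nn.embedding_lookup from the earlier answer returns the same values):
import tensorflow as tf

A = tf.range(2261, dtype=tf.float32)   # stands in for the (2261,) tensor
B = tf.constant([10, 20, 30])

sub = tf.gather(A, B)
print(sub)  # tf.Tensor([10. 20. 30.], shape=(3,), dtype=float32)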
I don't think fancy indexing like that is supported yet in TensorFlow. Keep an eye on https://github.com/tensorflow/tensorflow/issues/206 for updates (it may be tracked elsewhere too).
If you'd like to see what is available, there is some documentation for tf.Tensor.__getitem__.