TensorFlow batch sparse multiply

I would like to multiply a sparse tensor by a dense tensor but do so within a batch.
For example, I have a sparse tensor with a dense shape of (20, 65536, 65536), where 20 is the batch size. I would like to multiply each (65536, 65536) matrix in the batch with the corresponding (65536, 1) column taken from a dense tensor of shape (20, 65536). tf.sparse_tensor_dense_matmul only accepts a rank-2 sparse tensor. Is there a way to perform this over a batch?
I would like to avoid converting the sparse matrix to a dense matrix if possible due to memory constraints.

Assuming that a is a sparse tensor with shape (20, 65536, 65536) and b a dense tensor with shape (20, 65536), you could perform the batch sparse-dense matrix multiplication as follows:
y_sparse = tf.sparse.reduce_sum_sparse(a * b[:, None, :], axis=-1)
This solution inserts a new second dimension in tensor b so that it broadcasts against a. The batch matrix multiplication is then carried out as an elementwise sparse-dense multiplication followed by a sparse reduction along the last axis.
If b has a third dimension, i.e. it is a batch of matrices, you can multiply its columns individually and concatenate the results afterwards:
multiplied_dims = []
for i in range(b.shape[-1]):
    multiplied_dims.append(tf.expand_dims(tf.sparse.reduce_sum(a * b[:, :, i][:, None, :], axis=-1), -1))
result = tf.concat(multiplied_dims, -1)
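A minimal sketch of the vector case (assuming TF 2.x eager execution; the small shapes and the names batch, n and dense_a are stand-ins for this example only), checking the broadcast-and-reduce trick against a dense reference:
import numpy as np
import tensorflow as tf
# Small stand-ins for the (20, 65536, 65536) sparse tensor and the (20, 65536) dense tensor.
batch, n = 2, 4
dense_a = np.random.rand(batch, n, n) * (np.random.rand(batch, n, n) > 0.7)
a = tf.sparse.from_dense(tf.constant(dense_a, dtype=tf.float32))
b = tf.constant(np.random.rand(batch, n), dtype=tf.float32)
# Broadcast b against the sparse tensor, then sum over the last axis.
y = tf.sparse.reduce_sum(a * b[:, None, :], axis=-1)  # shape (batch, n)
# Dense reference: ordinary batched matmul.
expected = tf.squeeze(tf.matmul(tf.sparse.to_dense(a), b[:, :, None]), axis=-1)
print(np.allclose(y.numpy(), expected.numpy()))  # True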

The answer is simple: reshape the sparse tensor to rank 2 first, and then multiply it by the dense matrix. Something like this would work:
sparse_tensor_rank2 = tf.sparse_reshape(sparse_tensor, [-1, 65536])

Related

Tabular data: Implementing a custom tensor layer without resorting to iteration

I have an idea for a tensor operation that would not be difficult to implement via iteration with batch size one. However, I would like to parallelize it as much as possible.
I have two tensors with shape (n, 5) called X and Y. X is actually supposed to represent 5 one-dimensional tensors with shape (n, 1): (x_1, ..., x_5). Ditto for Y.
I would like to compute a tensor with shape (n, 25) where each column represents the output of the tensor operation f(x_i, y_j), where f is fixed for all 1 <= i, j <= 5. The operation f has output shape (n, 1), just like x_i and y_j.
I feel it is important to clarify that f is essentially a fully-connected layer from the concatenated [...x_i, ...y_j] tensor with shape (1, 10), to an output layer with shape (1, 5).
Again, it is easy to see how to do this manually with iteration and slicing, but that is probably very slow. Performing this operation in batches, where the tensors X, Y now have shape (n, 5, batch_size), is also desirable, particularly for mini-batch gradient descent.
It is difficult to really articulate here why I desire to create this network; I feel it is suited for my domain of 'itemized tabular data' and cuts down significantly on the number of weights per operation, compared to a fully connected network.
Is this possible using TensorFlow? Certainly not using just Keras.
Below is an example in NumPy, per AloneTogether's request:
import numpy as np
features = 16
batch_size = 256
X_batch = np.random.random((features, 5, batch_size))
Y_batch = np.random.random((features, 5, batch_size))
# one tensor operation to reduce weights in this custom 'layer'
f = np.random.random((features, 2 * features))
for b in range(batch_size):
    X = X_batch[:, :, b]
    Y = Y_batch[:, :, b]
    for i in range(5):
        x_i = X[:, i:i+1]
        for j in range(5):
            y_j = Y[:, j:j+1]
            x_i_y_j = np.concatenate([x_i, y_j], axis=0)
            # f(x_i, y_j)
            # implemented by a fully-connected layer
            f_i_j = np.matmul(f, x_i_y_j)
All operations you need (concatenation and matrix multiplication) can be batched.
The difficult part is that you want to concatenate the features of every item in X with the features of every item in Y (all combinations).
My recommended solution is to expand the dimensions of X to [batch, features, 5, 1] and the dimensions of Y to [batch, features, 1, 5].
Then tf.repeat() both tensors so their shapes become [batch, features, 5, 5].
Now you can concatenate X and Y. You will have a tensor of shape [batch, 2*features, 5, 5]. Observe that this way all combinations are built.
The next step is matrix multiplication. tf.matmul() can also do batch matrix multiplication, but I use tf.einsum() here because it gives more control over which dimensions are treated as batch dimensions.
Full code:
import tensorflow as tf
import numpy as np
batch_size = 3
features = 6
items = 5
x = np.random.uniform(size=[batch_size, features, items])
y = np.random.uniform(size=[batch_size, features, items])
f = np.random.uniform(size=[2 * features, features])
x_reps = tf.repeat(x[:, :, :, tf.newaxis], items, axis=3)
y_reps = tf.repeat(y[:, :, tf.newaxis, :], items, axis=2)
xy_conc = tf.concat([x_reps, y_reps], axis=1)
f_i_j = tf.einsum("bfij, fg->bgij", xy_conc, f)
f_i_j = tf.reshape(f_i_j, [batch_size, features, items * items])
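As a quick sanity check (a sketch assuming TF 2.x eager execution; the explicit loop below is only for comparison and is not part of the answer), the einsum result matches a per-pair computation, where f of shape [2*features, features] is applied as f.T @ [x_i; y_j]:
import numpy as np
import tensorflow as tf
batch_size, features, items = 3, 6, 5
x = np.random.uniform(size=[batch_size, features, items])
y = np.random.uniform(size=[batch_size, features, items])
f = np.random.uniform(size=[2 * features, features])
# Vectorised version from the answer above.
x_reps = tf.repeat(x[:, :, :, tf.newaxis], items, axis=3)
y_reps = tf.repeat(y[:, :, tf.newaxis, :], items, axis=2)
xy_conc = tf.concat([x_reps, y_reps], axis=1)
f_i_j = tf.einsum("bfij,fg->bgij", xy_conc, f)
# Reference: one (i, j) pair at a time.
ref = np.zeros([batch_size, features, items, items])
for b in range(batch_size):
    for i in range(items):
        for j in range(items):
            pair = np.concatenate([x[b, :, i], y[b, :, j]])  # shape (2*features,)
            ref[b, :, i, j] = f.T @ pair                     # shape (features,)
print(np.allclose(f_i_j.numpy(), ref))  # True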

Given a batch of n images, how to scalar multiply each image by a different scalar in tensorflow?

Assume we have two TensorFlow tensors:
input and weights.
input is a tensor of n images, say. So its shape is [n, H, W, C].
weights is a simple list of n scalar weights: [w1 w2 ... wn]
The aim is to scalar-multiply each image by its corresponding weight.
How would one do that?
I tried to use tf.nn.conv2d with 1x1 kernels, but I do not know how to reshape our rank-1 weight tensor into the required rank-4 kernel tensor.
Any help would be appreciated.
Thanks to user zihaozhihao:
The answer is to change the shape of weights to (-1, 1, 1, 1) and then multiply it with input.
weights = tf.reshape(weights, (-1, 1, 1, 1))
weighted_input = input * weights
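For instance (a minimal sketch assuming TF 2.x; the sizes are arbitrary), broadcasting then does the per-image scaling:
import tensorflow as tf
# Hypothetical sizes: n = 4 images of shape 2 x 2 x 3.
input = tf.random.uniform([4, 2, 2, 3])
weights = tf.constant([0.1, 0.5, 1.0, 2.0])
weights = tf.reshape(weights, (-1, 1, 1, 1))  # shape (4, 1, 1, 1)
weighted_input = input * weights              # each weight broadcasts over H, W, C
print(weighted_input.shape)                   # (4, 2, 2, 3)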

How to multiply very large sparse matrix in tensorflow?

I have a very large sparse tensor input_a with shape [20k, 20k], and a dense tensor input_b with shape [20k, 512]. nnz(input_a) is about 400k (there are about 400k nonzero values in input_a). I would like to multiply these two tensors to get the output tensor output with shape [20k, 512]. If I use tf.sparse_tensor_dense_matmul to multiply input_a and input_b, I get an error (with TensorFlow v1.8):
Cannot use GPU when output.shape[1] * nnz(a) > 2^31
If output.shape[1] were 20k, the error would make sense, because 20k * 400k = 8,000,000,000 > 2^31 = 2,147,483,648. But output.shape[1] should be 512 in this case, and 512 * 400k is only about 2 * 10^8, which is well below 2^31.
My code:
# input_a: shape [20k, 20k]
# input_b: shape [20k, 512]
# output:  shape [20k, 512]
output = tf.sparse_tensor_dense_matmul(input_a, input_b)
Then I will get the error message.
If I transform input_a into a dense tensor and then use tf.matmul, it works, but it might cost a lot of memory and time:
dense_input_a = tf.sparse_tensor_to_dense(input_a)
output = tf.matmul(dense_input_a, input_b, a_is_sparse=True)
Why does the error occur, and how should I multiply these two tensors efficiently? Thanks!
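One possible workaround, consistent with the error message but not taken from this thread (a sketch assuming TF 1.x graph mode, with input_a and input_b as defined above): since only the GPU kernel is ruled out, the sparse-dense matmul op can be pinned to the CPU.
import tensorflow as tf
# Sketch: place only the sparse-dense matmul on the CPU, since the error
# merely refuses the GPU kernel when output.shape[1] * nnz(a) > 2^31.
with tf.device('/cpu:0'):
    output = tf.sparse_tensor_dense_matmul(input_a, input_b)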

How can I select the rows of an nd Tensor, with indexes stored in another Tensor?

I'm trying to slice a Tensor of shape (?, 32, 32) along the first dimension. I have to select two rows with indexes stored in another Tensor of shape (1, 2). I want something like array[list of indexes, :, :] in numpy.
How can I do it? I need this operation to compute a loss inside the model_fn function, passed to my custom Tensorflow Estimator.
I solved it using tf.gather_nd. I reshaped the tensor containing the indexes with:
ids = tf.reshape(tensor_with_indexes, shape=(-1, 1))
and then I applied:
new_tensor = tf.gather_nd(original_tensor, ids)
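A small self-contained illustration (a sketch with made-up shapes; the names mirror the answer):
import tensorflow as tf
# Hypothetical batch of 5 matrices of shape (32, 32), and two row indices.
original_tensor = tf.random.uniform([5, 32, 32])
tensor_with_indexes = tf.constant([[1, 3]])           # shape (1, 2)
ids = tf.reshape(tensor_with_indexes, shape=(-1, 1))  # shape (2, 1)
new_tensor = tf.gather_nd(original_tensor, ids)       # shape (2, 32, 32)
print(new_tensor.shape)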

Rowwise division with batch data

Given a tensor A of shape [?, n, l] and a tensor D of shape [?, n], I want to divide each row of each [n, l] slice of A by the corresponding scalar from D.
I thought I could somehow do this by using the broadcasting behavior of tf.div. Unfortunately, I am stuck.
Here you need to extend D to match the dimension of A:
res = A / D[..., tf.newaxis]
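For example (a minimal sketch with arbitrary shapes):
import tensorflow as tf
# Hypothetical shapes: batch of 2, n = 3 rows, l = 4 columns.
A = tf.random.uniform([2, 3, 4])
D = tf.random.uniform([2, 3]) + 1.0   # keep divisors away from zero
res = A / D[..., tf.newaxis]          # D broadcasts over the last axis
print(res.shape)                      # (2, 3, 4)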