How to convert this numpy one-liner into Tensorflow backend code?

I have multiple depth maps which show a car from different angles. I need to calculate how well they match together in my loss function, so I have to reproject them into a different view. The depth maps live in a cube that is scaled relative to the length of the vehicle. The images have the shape (256, 256). I have already written the code to convert them to a point cloud of shape (256*256, 3) with backend functions. I can reproject this point cloud to the side view with numpy like this:
reProj = np.zeros((256, 256), np.float32)
# scatter: use the y and z coordinates as pixel indices, the depth (x) as the value
reProj[pointCloud[:, 1], pointCloud[:, 2]] = pointCloud[:, 0]
How can I convert this into keras backend code? I suspect there should be a gather somewhere in there, but I just cannot get it working.
Example: source depth image and the reprojected result (images omitted).
Thanks for your help!
Edit: Minimal working example with data: https://easyupload.io/rwutwa

You can do this by using tf.matmul(). The first input will be your point cloud; from the dimensions I am assuming you are storing a 3D vector (x, y, z) for every pixel. The second input will be the 3x3 rotation matrix corresponding to the projection you need. Keep in mind this works for any angle you want, you just need to define the 3x3 matrix.
If I understand your data correctly, you need to rotate 90 degrees over the x axis, so the matrix would be
1  0  0
0  0 -1
0  1  0
Read more on rotation matrices here: https://en.wikipedia.org/wiki/Rotation_matrix. Just go to the three-dimensional section and see what you need.
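A minimal sketch of the idea (the variable names and the (256*256, 3) point-cloud shape are assumptions based on the question, not tested against the asker's data):
import tensorflow as tf

# 90-degree rotation about the x axis, matching the matrix above
rot_x90 = tf.constant([[1., 0.,  0.],
                       [0., 0., -1.],
                       [0., 1.,  0.]])

point_cloud = tf.random.normal((256 * 256, 3))  # stand-in for the real point cloud
# rotate every point at once: (N, 3) @ (3, 3)^T -> (N, 3)
rotated = tf.matmul(point_cloud, rot_x90, transpose_b=True)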

So I finally figured it out, I was just thinking about it wrong. It is not a gather operation, it is a scatter. This works perfectly now!
import tensorflow as tf
from tensorflow.keras import backend as K

# stack the y and z coordinates into (row, col) index pairs, clipped to bounds
indices = K.stack([p[:, 1], p[:, 2]], -1)
indices = K.reshape(indices, (256, 256, 2))
indices = K.clip(indices, 0, 256 - 1)
updates = K.reshape(p[:, 0], (256, 256))  # the depth values (x) to scatter
# scatter with max so overlapping points keep the largest depth (dtypes must match)
reProj = tf.tensor_scatter_nd_max(tf.zeros((256, 256), tf.int32), indices, updates)

Related

How to do 2D Convolution only at a specific location?

This question has been asked multiple times, but I still could not find what I was looking for. Imagine
data = np.random.rand(N,N)    # shape N x N
kernel = np.random.rand(3,3)  # shape M x M (here M = 3)
I know convolution typically means placing the kernel all over the data. But in my case N and M are of the order of 10000. So I wish to get the value of the convolution at a specific location in the data, say at (10, 37), without doing unnecessary calculations at all locations. So the output will be just a number. The main goal is to reduce the computation and memory expenses. Is there any built-in function that does this with minimal adjustments?
Indeed, applying the convolution at a particular position amounts to the mere sum over the entries of a (pointwise) multiplication of the submatrix in data and the flipped kernel. Here is a reproducible example.
Code
N = 1000
M = 3
np.random.seed(777)
data = np.random.rand(N,N) #shape N x N
kernel= np.random.rand(M,M) #shape M x M
# Pointwise convolution = pointwise product
data[10:10+M,37:37+M]*kernel[::-1, ::-1]
>array([[0.70980514, 0.37426475, 0.02392947],
        [0.24387766, 0.1985901 , 0.01103323],
        [0.06321042, 0.57352696, 0.25606805]])
with output
conv = np.sum(data[10:10+M,37:37+M]*kernel[::-1, ::-1])
conv
>2.45430578
The kernel is flipped by definition of the convolution, as explained here, and this was kindly pointed out by Warren Weckesser. Thanks!
The key is to make sense of the index you provided. I assumed it refers to the upper left corner of the sub-matrix in data. However, it can refer to the midpoint as well when M is odd.
Concept
A different example with N=7 and M=3 exemplifies the idea and is presented here for the kernel
kernel = np.array([[3,0,-1], [2,0,1], [4,4,3]])
which, when flipped, yields
kernel[::-1, ::-1]
> array([[ 3,  4,  4],
         [ 1,  0,  2],
         [-1,  0,  3]])
EDIT 1:
Please note that the lecturer in this video does not explicitly mention that flipping the kernel is required before the pointwise multiplication to adhere to the mathematically proper definition of convolution.
EDIT 2:
For large M and a target index close to the boundary of data, a ValueError: operands could not be broadcast together with shapes ... might be thrown. Padding the matrix data with zeros prevents this (although it increases the memory requirement), i.e.
data = np.pad(data, pad_width=M, mode='constant')
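For example, a small helper along these lines (the function name conv_at and the upper-left-corner index convention are illustrative, not from the original answer) pads first and shifts the index accordingly:
import numpy as np

def conv_at(data, kernel, i, j):
    # Convolution value with (i, j) as the upper-left corner of the sub-matrix;
    # zero-padding keeps indices near the boundary safe.
    M = kernel.shape[0]
    padded = np.pad(data, pad_width=M, mode='constant')
    # the padding shifts every original index by M
    return np.sum(padded[i + M:i + 2 * M, j + M:j + 2 * M] * kernel[::-1, ::-1])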

Creating a matrix with certain conditions

I am trying to create a matrix using PyTorch of size 32x10x1.
The conditions that I need to fulfill are that
torch.mean(a, dim=0) # size is 10x1 and should be almost 0
torch.mean(a, dim=1) # size is 32x1 and should be almost 0
This is a noise matrix for GANs and I am trying to sample it from a normal distribution. I tried using torch.MultiVariateNormal() but it didn't give me a matrix of that shape.
Is there any other function, or something in numpy or scikit, to get this kind of matrix?
Use numpy.random.normal
import numpy.random as npr
mean = 0
std_dev = 0.1
size = (32, 10, 1)
mat = npr.normal(loc=mean, scale=std_dev, size=size)
and set the mean and standard deviation as desired to keep the values close to 0.
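As a quick sanity check (illustrative, reusing the mat variable above), both axis means come out close to 0, matching the two conditions in the question:
import numpy as np
print(np.abs(mat.mean(axis=0)).max())  # mean over dim 0, shape (10, 1), values near 0
print(np.abs(mat.mean(axis=1)).max())  # mean over dim 1, shape (32, 1), values near 0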
(Graph showing the effect of the mean and standard deviation on the normal distribution omitted. Image by Inductiveload, Public Domain, https://commons.wikimedia.org/w/index.php?curid=3817954)

How does the gradient of the sum trick work to get maxpooling positions in keras?

The keras examples directory contains a lightweight version of a stacked what-where autoencoder (SWWAE) which they train on MNIST data. (https://github.com/fchollet/keras/blob/master/examples/mnist_swwae.py)
In the original SWWAE paper, the authors compute the what and where using soft functions. However, in the keras implementation, they use a trick to get these locations. I would like to understand this trick.
Here is the code of the trick.
def getwhere(x):
    '''Calculate the 'where' mask that contains switches indicating which
    index contained the max value when MaxPool2D was applied. Using the
    gradient of the sum is a nice trick to keep everything high level.'''
    y_prepool, y_postpool = x
    return K.gradients(K.sum(y_postpool), y_prepool)  # How exactly does this line work?
Here y_prepool is an MxN matrix and y_postpool is an M/2 x N/2 matrix (let's assume canonical pooling with a window of 2 pixels).
I have verified that the output of getwhere() is a bed of nails matrix where the nails indicate the position of the max (the local argmax if you will).
Can someone construct a small example demonstrating how getwhere works using this "trick"?
Let's focus on the simplest example, without really talking about convolutions. Say we have a vector
x = [1 4 2]
which we max-pool over (with a single, big window), we get
mx = 4
mathematically speaking, it is:
mx = x[argmax(x)]
now, the "trick" to recover one hot mask used by pooling is to do
magic = d mx / dx
there is no gradient for argmax; however, it "passes" the corresponding gradient to the element of the vector at the location of the maximum element, so:
d mx / dx = [d mx/dx[1], d mx/dx[2], d mx/dx[3]] = [0, 1, 0]
as you can see, the gradient for all non-maximum elements is zero (due to argmax), and "1" appears at the maximum value because dx/dx = 1.
Now for "proper" maxpool you have many pooling regions, connected to many input locations, thus taking analogous gradient of sum of pooled values, will recover all the indices.
Note, however, that this trick will not work if you have heavily overlapping kernels: you might end up with values bigger than "1". Basically, if a pixel is max-pooled by K kernels, then it will have the value K, not 1. For example:
    [ 1, 2, 3]
x = [13, 3, 1]
    [ 4, 2, 9]
if we max-pool with a 2x2 window (stride 1) we get
mx = [13, 3]
     [13, 9]
and the gradient trick gives you
        [0, 0, 1]
magic = [2, 0, 0]
        [0, 0, 1]
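Here is a minimal runnable sketch of the trick (assuming TensorFlow 2.x eager mode; which element receives the gradient for tied maxima is implementation-dependent):
import tensorflow as tf

x = tf.Variable([[[[ 1.], [2.], [3.]],
                  [[13.], [3.], [1.]],
                  [[ 4.], [2.], [9.]]]])  # shape (1, 3, 3, 1), NHWC layout
with tf.GradientTape() as tape:
    # 2x2 max pooling with stride 1 gives the 2x2 mx above
    pooled = tf.nn.max_pool2d(x, ksize=2, strides=1, padding='VALID')
    total = tf.reduce_sum(pooled)
magic = tape.gradient(total, x)
print(tf.squeeze(magic))  # ~ [[0 0 1], [2 0 0], [0 0 1]] (tie-breaking may vary)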

What is the best way to multiply tensors in TensorFlow?

Suppose that I have tensors x[i,j,k] and y[p,q] in a graph. What is the correct way to specify the tensor z[i,j,k,p,q] = x[i,j,k] * y[p,q]? This is the coordinate representation of the tensor product of x and y. I can get the job done using a combination of tf.expand_dims, tf.mult and tf.tile, but I feel like there should be a better way...
I think you can get away without the tile operation using broadcasting.
x_reshaped = tf.reshape(x, (i, j, k, 1, 1))
y_reshaped = tf.reshape(y, (1, 1, 1, p, q))
z = x_reshaped * y_reshaped
When a dimension has size 1 and does not match the size of the other tensor's corresponding dimension, it is broadcast automatically along that dimension and the product is carried out. Tile is often unnecessary; I actually don't think I have ever used tile in tensorflow. Here I also used reshape rather than expand_dims, but the result is the same either way.
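A self-contained sketch (shapes chosen arbitrarily for illustration); tf.einsum expresses the same outer product in one call, which may read more clearly:
import tensorflow as tf

i, j, k, p, q = 2, 3, 4, 5, 6
x = tf.random.normal((i, j, k))
y = tf.random.normal((p, q))
# broadcasting approach from above
z = tf.reshape(x, (i, j, k, 1, 1)) * tf.reshape(y, (1, 1, 1, p, q))
# equivalent einsum formulation
z2 = tf.einsum('ijk,pq->ijkpq', x, y)
print(z.shape, float(tf.reduce_max(tf.abs(z - z2))))  # (2, 3, 4, 5, 6) ~0.0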

Vectorizing multiplication of matrices with different shapes in numpy/tensorflow

I have a 4x4 input matrix and I want to multiply every 2x2 slice with a weight stored in a 3x3 weight matrix. Please see the attached image for an example:
In the image, the colored section of the 4x4 input matrix is multiplied by the same colored section of the 3x3 weight matrix and stored in the 4x4 output matrix. When the slices overlap, the output takes the sum of the overlaps (e.g. the blue+red).
I am trying to perform this operation in Tensorflow 2.0 using eager tensors (which can be treated as numpy arrays). This is what I've written to perform this operation and it produces the expected output.
inputm = np.ones([4,4])   # initialize 4x4 input matrix
weightm = np.ones([3,3])  # initialize 3x3 weight matrix
outputm = np.zeros([4,4]) # initialize blank 4x4 output matrix
# iterate through each weight
for i in range(weightm.shape[0]):
    for j in range(weightm.shape[1]):
        outputm[i:i+2, j:j+2] += weightm[i,j] * inputm[i:i+2, j:j+2]
However, I don't think this is efficient, since I am iterating through the weight matrix one element at a time, and this will be extremely slow when I need to perform this on large matrices of 500x500. I am having a hard time identifying a way to vectorize this operation, maybe by tiling the weight matrix to be the same shape as the input matrix and performing a single matrix multiplication. I have also thought about flattening the matrix, but I'm still not able to see a way to do this more efficiently.
Any advice will be much appreciated. Thanks in advance!
Alright, I think I have a solution, but it involves using both numpy operations (e.g. np.repeat) and TensorFlow 2.0 operations (i.e. tf.segment_sum). And to warn you, this is not the clearest or most elegant solution in the world, but it was the most elegant I could come up with. So here goes.
The main culprit in your problem is the weight matrix. If you manipulate it into a 4x4 matrix (with the correct sum of weights at each position), you get a weight matrix you can element-wise multiply with the input. And that's my solution. Note that this is designed for the 4x4 problem, and you should be able to extend it relatively easily to the 500x500 matrix.
import numpy as np
import tensorflow as tf
a = np.array([[1,2,3,4],[4,3,2,1],[1,2,3,4],[4,3,2,1]])
w = np.array([[5,4,3],[3,4,5],[5,4,3]])
# We make weights to a 6x6 matrix by repeating 2 times on both axis
w_rep = np.repeat(w,2,axis=0)
w_rep = np.repeat(w_rep,2,axis=1)
# Let's now jump in to tensorflow
tf_a = tf.constant(a)
tf_w = tf.constant(w_rep)
tf_segments = tf.constant([0,1,1,2,2,3])
# This is the trickiest bit: here we use segment_sum to achieve what we need.
# segment_sum gives the sum of segments along the very first dimension of a matrix,
# so you apply it to the weight matrix twice: once on the original and once on the transpose.
tf_w2 = tf.math.segment_sum(tf_w, tf_segments)
tf_w2 = tf.transpose(tf_w2)
tf_w2 = tf.math.segment_sum(tf_w2, tf_segments)
tf_w2 = tf.transpose(tf_w2)
print(tf_w2*a)
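For the example above, the effective weight matrix tf_w2 works out to [[5, 9, 7, 3], [8, 16, 16, 8], [8, 16, 16, 8], [5, 9, 7, 3]], i.e. each entry is the sum of w[i, j] over every 2x2 window that covers that position, which is exactly what the original loop accumulates.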
PS: I will try to include an illustration of what's going on here in a future edit. But I reckon that will take some time.
After realising @thushv89's trick, I realised you can get the same result by convolving the weight matrix with a matrix of ones:
import numpy as np
from scipy.signal import convolve2d
a = np.ones([4,4]) # initialize 4x4 input matrix
w = np.ones([3,3]) # initialize 3x3 weight matrix
b = np.multiply(a, convolve2d(w, np.ones((2,2))))
print(b)
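As a quick cross-check (illustrative, with random data rather than the all-ones matrices above), the convolution trick reproduces the original loop:
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
a = rng.random((4, 4))
w = rng.random((3, 3))

# reference: the original double loop
out_loop = np.zeros((4, 4))
for i in range(3):
    for j in range(3):
        out_loop[i:i+2, j:j+2] += w[i, j] * a[i:i+2, j:j+2]

# vectorised: effective per-pixel weights via a full 2x2 convolution
out_fast = a * convolve2d(w, np.ones((2, 2)))
print(np.allclose(out_loop, out_fast))  # True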