How to do an FFT convolution? How to do the normalization? - numpy

In Python, we can do a convolution with numpy.fft. For example, to calculate the gravitational lensing signal of the SIS model, we can define the convergence $\kappa$ as
$\kappa(\theta) = \frac{\theta_{\rm E}}{2|\theta|}$,
and then calculate the deflection angle $\alpha$ by the convolution $\vec{\alpha}(\vec{\theta}) = \frac{1}{\pi} \int d^2\theta'\, \kappa(\vec{\theta}')\, \frac{\vec{\theta}-\vec{\theta}'}{|\vec{\theta}-\vec{\theta}'|^2}$. Theoretically, the deflection is $\vec{\alpha}(\vec{\theta}) = \theta_{\rm E}\frac{\vec{\theta}}{|\vec{\theta}|}$. But when I try to calculate this with numpy.fft, I am puzzled by a normalization factor. For example,
npix = 2048     # mesh grid number
thetaE = 0.5    # Einstein radius (a constant)
dtheta = thetaE/10  # grid resolution
theta_x, theta_y, theta = mesh_theta(npix, dtheta)  # grid positions; theta is a 2048x2048 array of |theta|
kappa = thetaE/2./theta  # kappa mesh, 2048x2048
kern_alpha_x, kern_alpha_y = kernal_alpha(theta_x, theta_y)  # kernel meshes, 2048x2048
###zero padding should be used###
kappa_fft = np.fft.fft2(kappa)
kern_alpha_x_fft = np.fft.fft2(kern_alpha_x)
kern_alpha_y_fft = np.fft.fft2(kern_alpha_y)
alpha_x = np.fft.fftshift(np.fft.ifft2(kappa_fft*kern_alpha_x_fft)).real
alpha_y = np.fft.fftshift(np.fft.ifft2(kappa_fft*kern_alpha_y_fft)).real
As shown above, $\alpha_{\rm x}$ and $\alpha_{\rm y}$ can be calculated by convolving $\kappa$ with $K_{\alpha_{\rm x}}$ and $K_{\alpha_{\rm y}}$, and the deflection is $|\alpha| = \sqrt{\alpha_{\rm x}^2+\alpha_{\rm y}^2}$. However, when I check the results in alpha_x and alpha_y, it seems a normalization factor is missing. If I multiply by dtheta*dtheta, i.e. np.sqrt(alpha_x**2 + alpha_y**2)*dtheta*dtheta, the result seems right. Why must this dtheta*dtheta normalization be applied? Thanks.
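(A minimal 1D sketch of my own, not part of the original question, of where the factor comes from: the FFT product computes a plain sum over grid points, while the continuous convolution is an integral, so the Riemann-sum area element — dtheta**2 in 2D, dx in 1D — must be applied by hand. Here a Gaussian is convolved with itself and compared against the analytic result:)
import numpy as np

n, dx = 1024, 0.05
x = (np.arange(n) - n//2) * dx
f = np.exp(-x**2)
# circular FFT convolution of f with itself; fftshift because f is
# centred in the array rather than at index 0
conv = np.fft.fftshift(np.fft.ifft(np.fft.fft(f)**2)).real
analytic = np.sqrt(np.pi/2) * np.exp(-x**2/2)  # exact (f*f)(x)
print(np.abs(conv*dx - analytic).max())  # tiny: matches once dx is applied
print(np.abs(conv - analytic).max())     # large without the dx factor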

Related

How to vectorize an exponential probability function

I believe the code below is a somewhat correct implementation of this exponential heatmap function:
def expfunc(image, landmark, sigma=6):
    # image: array of shape (512, 512); landmark: array of shape (2,)
    a = np.sqrt(np.log(2)/2)/sigma
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            prob = np.exp(-a*(np.abs(i-landmark[0])+np.abs(j-landmark[1])))
            if prob > 0.01:
                image[i][j] = prob
            else:
                image[i][j] = 0
    return image
My questions are:
How could I vectorize this code?
This probability function assigns a value to every pixel, so how should I handle the very small values? Right now I am using a threshold of 0.01, below which pixels are set to zero.
Let me know if this works for you:
i = np.arange(image.shape[0])
j = np.arange(image.shape[1])
prob = np.exp(-a*(np.abs(i[:,None]-landmark[0])+np.abs(j-landmark[1])))
image = np.where(prob>0.01, prob, 0)
First compute the array prob for all of the indices i and j. Then prob has the same shape as image, and you can redefine image based on the values of prob using numpy.where.
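A quick way to convince yourself the two versions agree (my own check, using a smaller hypothetical image and landmark):
import numpy as np

landmark = np.array([30, 40])
sigma = 6
a = np.sqrt(np.log(2)/2)/sigma

loop_img = expfunc(np.zeros((64, 64)), landmark, sigma)  # original loop version

i = np.arange(64)
j = np.arange(64)
prob = np.exp(-a*(np.abs(i[:, None] - landmark[0]) + np.abs(j - landmark[1])))
vec_img = np.where(prob > 0.01, prob, 0)

print(np.allclose(loop_img, vec_img))  # True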

Implementing backpropagation gradient descent using scipy.optimize.minimize

I am trying to train an autoencoder NN (3 layers - 2 visible, 1 hidden) using numpy and scipy on the MNIST digits dataset. The implementation is based on the notation given here. Below is my code:
def autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, data):
    """
    The input theta is a 1-dimensional array because scipy.optimize.minimize expects
    the parameters being optimized to be a 1d array.
    First convert theta from a 1d array to the (W1, W2, b1, b2)
    matrix/vector format, so that this follows the notation convention of the
    lecture notes and tutorial.
    You must compute the:
        cost : scalar representing the overall cost J(theta)
        grad : array representing the corresponding gradient of each element of theta
    """
    training_size = data.shape[1]
    # unroll theta to get (W1, W2, b1, b2)
    W1 = theta[0:hidden_size*visible_size]
    W1 = W1.reshape(hidden_size, visible_size)
    W2 = theta[hidden_size*visible_size:2*hidden_size*visible_size]
    W2 = W2.reshape(visible_size, hidden_size)
    b1 = theta[2*hidden_size*visible_size:2*hidden_size*visible_size + hidden_size]
    b2 = theta[2*hidden_size*visible_size + hidden_size:2*hidden_size*visible_size + hidden_size + visible_size]
    # feedforward pass
    a_l1 = data
    z_l2 = W1.dot(a_l1) + numpy.tile(b1, (training_size, 1)).T
    a_l2 = sigmoid(z_l2)
    z_l3 = W2.dot(a_l2) + numpy.tile(b2, (training_size, 1)).T
    a_l3 = sigmoid(z_l3)
    # backprop
    delta_l3 = numpy.multiply(-(data - a_l3), numpy.multiply(a_l3, 1 - a_l3))
    delta_l2 = numpy.multiply(W2.T.dot(delta_l3), numpy.multiply(a_l2, 1 - a_l2))
    b2_derivative = numpy.sum(delta_l3, axis=1)/training_size
    b1_derivative = numpy.sum(delta_l2, axis=1)/training_size
    W2_derivative = numpy.dot(delta_l3, a_l2.T)/training_size + lambda_*W2
    # print(W2_derivative.shape)
    W1_derivative = numpy.dot(delta_l2, a_l1.T)/training_size + lambda_*W1
    W1_derivative = W1_derivative.reshape(hidden_size*visible_size)
    W2_derivative = W2_derivative.reshape(visible_size*hidden_size)
    b1_derivative = b1_derivative.reshape(hidden_size)
    b2_derivative = b2_derivative.reshape(visible_size)
    grad = numpy.concatenate((W1_derivative, W2_derivative, b1_derivative, b2_derivative))
    cost = 0.5*numpy.sum((data - a_l3)**2)/training_size + 0.5*lambda_*(numpy.sum(W1**2) + numpy.sum(W2**2))
    return cost, grad
I have also implemented a function to estimate the numerical gradient and verify the correctness of my implementation (below).
def compute_gradient_numerical_estimate(J, theta, epsilon=0.0001):
    """
    :param J: a loss (cost) function that computes the real-valued loss given parameters and data
    :param theta: array of parameters
    :param epsilon: amount to vary each parameter in order to estimate
                    the gradient by numerical difference
    :return: array of numerical gradient estimate
    """
    gradient = numpy.zeros(theta.shape)
    eps_vector = numpy.zeros(theta.shape)
    for i in range(0, theta.size):
        eps_vector[i] = epsilon
        cost1, grad1 = J(theta + eps_vector)
        cost2, grad2 = J(theta - eps_vector)
        gradient[i] = (cost1 - cost2)/(2*epsilon)
        eps_vector[i] = 0
    return gradient
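For reference, a check like this can be wired up as follows (hypothetical usage, assuming theta and the other variables are set up as in the question):
J = lambda t: autoencoder_cost_and_grad(t, visible_size, hidden_size, lambda_, patches_train)
num_grad = compute_gradient_numerical_estimate(J, theta)
cost, ana_grad = J(theta)
print(numpy.linalg.norm(num_grad - ana_grad))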
The norm of the difference between the numerical estimate and the one computed by the function is around 6.87165125021e-09, which seems acceptable. My main problem is getting the "L-BFGS-B" gradient descent algorithm to work via the scipy.optimize.minimize function, as below:
# theta is the 1-D array of(W1,W2,b1,b2)
J = lambda x: utils.autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, patches_train)
options_ = {'maxiter': 4000, 'disp': False}
result = scipy.optimize.minimize(J, theta, method='L-BFGS-B', jac=True, options=options_)
I get the below output from this:
scipy.optimize.minimize() details:
fun: 90.802022224079778
hess_inv: <16474x16474 LbfgsInvHessProduct with dtype=float64>
jac: array([ -6.83667742e-06, -2.74886002e-06, -3.23531941e-06, ...,
1.22425735e-01, 1.23425062e-01, 1.28091250e-01])
message: b'ABNORMAL_TERMINATION_IN_LNSRCH'
nfev: 21
nit: 0
status: 2
success: False
x: array([-0.06836677, -0.0274886 , -0.03235319, ..., 0. ,
0. , 0. ])
Now, this post seems to indicate that the error means the gradient implementation may be wrong, but my numerical gradient estimate seems to confirm that my implementation is correct. I have tried varying the initial weights using a uniform distribution as specified here, but the problem persists. Is there anything wrong with my backprop implementation?
Turns out the issue was a very silly mistake in this line:
J = lambda x: utils.autoencoder_cost_and_grad(theta, visible_size, hidden_size, lambda_, patches_train)
The lambda declares the parameter x but never uses it in its body, so the optimizer's updated parameter vector was never being passed whenever J was invoked; every call saw the initial theta.
This fixed it:
J = lambda x: utils.autoencoder_cost_and_grad(x, visible_size, hidden_size, lambda_, patches_train)
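To make the contract concrete, here is a minimal self-contained sketch (my own example, not the asker's code): with jac=True, scipy.optimize.minimize expects the callable to return a (cost, gradient) tuple evaluated at the x it passes in:
import numpy as np
import scipy.optimize

def cost_and_grad(x):
    cost = np.sum((x - 3.0)**2)
    grad = 2.0*(x - 3.0)
    return cost, grad

result = scipy.optimize.minimize(cost_and_grad, np.zeros(5), method='L-BFGS-B', jac=True)
print(result.x)  # approximately [3. 3. 3. 3. 3.]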

TensorFlow loss function zeroes out after first epoch

I am trying to implement a discriminative loss function for instance segmentation of images based on this paper: https://arxiv.org/pdf/1708.02551.pdf (This link is just for the readers' reference; I don't expect anyone to read it to help me out!)
My problem: Once I move from a simple loss function to a more complicated one (like you see in the attached code snippet), the loss function zeroes out after the first epoch. I checked the weights, and almost all of them seem to hover closely around -300. They are not exactly identical, but very close to each other (differing only in the decimal places).
Relevant code that implements the discriminative loss function:
def regDLF(y_true, y_pred):
    global alpha
    global beta
    global gamma
    global delta_v
    global delta_d
    global image_height
    global image_width
    global nDim
    y_true = tf.reshape(y_true, [image_height*image_width])
    X = tf.reshape(y_pred, [image_height*image_width, nDim])
    uniqueLabels, uniqueInd = tf.unique(y_true)
    numUnique = tf.size(uniqueLabels)
    Sigma = tf.unsorted_segment_sum(X, uniqueInd, numUnique)
    ones_Sigma = tf.ones((tf.shape(X)[0], 1))
    ones_Sigma = tf.unsorted_segment_sum(ones_Sigma, uniqueInd, numUnique)
    mu = tf.divide(Sigma, ones_Sigma)
    Lreg = tf.reduce_mean(tf.norm(mu, axis=1))
    T = tf.norm(tf.subtract(tf.gather(mu, uniqueInd), X), axis=1)
    T = tf.divide(T, Lreg)
    T = tf.subtract(T, delta_v)
    T = tf.clip_by_value(T, 0, T)
    T = tf.square(T)
    ones_Sigma = tf.ones_like(uniqueInd, dtype=tf.float32)
    ones_Sigma = tf.unsorted_segment_sum(ones_Sigma, uniqueInd, numUnique)
    clusterSigma = tf.unsorted_segment_sum(T, uniqueInd, numUnique)
    clusterSigma = tf.divide(clusterSigma, ones_Sigma)
    Lvar = tf.reduce_mean(clusterSigma, axis=0)
    mu_interleaved_rep = tf.tile(mu, [numUnique, 1])
    mu_band_rep = tf.tile(mu, [1, numUnique])
    mu_band_rep = tf.reshape(mu_band_rep, (numUnique*numUnique, nDim))
    mu_diff = tf.subtract(mu_band_rep, mu_interleaved_rep)
    mu_diff = tf.norm(mu_diff, axis=1)
    mu_diff = tf.divide(mu_diff, Lreg)
    mu_diff = tf.subtract(2*delta_d, mu_diff)
    mu_diff = tf.clip_by_value(mu_diff, 0, mu_diff)
    mu_diff = tf.square(mu_diff)
    numUniqueF = tf.cast(numUnique, tf.float32)
    Ldist = tf.reduce_mean(mu_diff)
    L = alpha*Lvar + beta*Ldist + gamma*Lreg
    return L
Question: I know it's hard to understand what the code does without reading the paper, but I have a couple of questions:
Is there something glaringly wrong with the loss function defined above?
Does anyone have a general idea as to why the loss function could zero out after the first epoch?
Thank you very much for your time and help!
I think your problem suffers from tf.norm, which is not safe (zeros somewhere in the vector lead to nan in its gradients).
It would be better to replace tf.norm by this custom function:
def tf_norm(inputs, axis=1, epsilon=1e-7, name='safe_norm'):
    squared_norm = tf.reduce_sum(tf.square(inputs), axis=axis, keep_dims=True)
    safe_norm = tf.sqrt(squared_norm + epsilon)
    return tf.identity(safe_norm, name=name)
In your Ldist calculation you use tf.tile and tf.reshape to find the distance between different cluster means in the following manner (suppose we have three clusters):
mu_1 - mu_1
mu_2 - mu_1
mu_3 - mu_1
mu_1 - mu_2
mu_2 - mu_2
mu_3 - mu_2
mu_1 - mu_3
mu_2 - mu_3
mu_3 - mu_3
The problem is that your distance vector contains zero vectors, and you perform a norm operation afterwards. tf.norm becomes numerically unstable because its gradient divides by the length of the vector, so the gradient comes out as nan or inf. See this github issue.
The solution would be to remove those zero vectors, in the fashion of this Stack Overflow question.
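A tiny numpy sketch (my own, not from the answer) of why the zero rows are fatal: the gradient of the L2 norm is x/||x||, which is 0/0 at the zero vector, and the epsilon trick keeps it finite:
import numpy as np

x = np.zeros(3)
norm = np.sqrt(np.sum(x**2))
with np.errstate(invalid='ignore'):
    print(x/norm)  # [nan nan nan] -- the unsafe gradient direction

eps = 1e-7
safe = np.sqrt(np.sum(x**2) + eps)
print(x/safe)      # [0. 0. 0.] -- finite with the epsilon trick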

Visualizing output of convolutional layer in tensorflow

I'm trying to visualize the output of a convolutional layer in tensorflow using the function tf.image_summary. I'm already using it successfully in other instances (e.g. visualizing the input image), but I have some difficulty reshaping the output correctly here. I have the following conv layer:
img_size = 256
x_image = tf.reshape(x, [-1,img_size, img_size,1], "sketch_image")
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
So the output of h_conv1 would have the shape [-1, img_size, img_size, 32]. Just using tf.image_summary("first_conv", tf.reshape(h_conv1, [-1, img_size, img_size, 1])) doesn't account for the 32 different kernels, so I'm basically slicing through different feature maps here.
How can I reshape them correctly? Or is there another helper function I could use for including this output in the summary?
I don't know of a helper function but if you want to see all the filters you can pack them into one image with some fancy uses of tf.transpose.
So if you have a tensor that's images x ix x iy x channels
>>> V = tf.Variable()
>>> print V.get_shape()
TensorShape([Dimension(-1), Dimension(256), Dimension(256), Dimension(32)])
So in this example ix = 256, iy=256, channels=32
first slice off 1 image, and remove the image dimension
V = tf.slice(V,(0,0,0,0),(1,-1,-1,-1)) #V[0,...]
V = tf.reshape(V,(iy,ix,channels))
Next add a couple of pixels of zero padding around the image
ix += 4
iy += 4
V = tf.image.resize_image_with_crop_or_pad(V, iy, ix)
Then reshape so that instead of 32 channels you have 4x8 channels; let's call them cy=4 and cx=8.
V = tf.reshape(V,(iy,ix,cy,cx))
Now the tricky part. tf seems to return results in C-order, numpy's default.
The current order, if flattened, would list all the channels for the first pixel (iterating over cx and cy) before listing the channels of the second pixel (incrementing ix), going across a row of pixels (ix) before incrementing to the next row (iy).
We want the order that would lay out the images in a grid.
So you go across a row of an image (ix) before stepping along the row of channels (cx); when you hit the end of the row of channels you step to the next row in the image (iy), and when you run out of rows in the image you increment to the next row of channels (cy). So:
V = tf.transpose(V,(2,0,3,1)) #cy,iy,cx,ix
Personally I prefer np.einsum for fancy transposes, for readability, but it's not in tf yet.
newtensor = np.einsum('yxYX->YyXx',oldtensor)
Anyway, now that the pixels are in the right order, we can safely flatten it into a 2d tensor:
# image_summary needs 4d input
V = tf.reshape(V,(1,cy*iy,cx*ix,1))
Try tf.image_summary on that; you should get a grid of little images.
Below is an image of what one gets after following all the steps here.
In case someone would like to "jump" to numpy and visualize "there", here is an example of how to display both the weights and the processing result. All transformations are based on the previous answer by mdaoust.
# to visualize 1st conv layer Weights
vv1 = sess.run(W_conv1)
# to visualize 1st conv layer output
vv2 = sess.run(h_conv1,feed_dict = {img_ph:x, keep_prob: 1.0})
vv2 = vv2[0,:,:,:] # in case of a batch output, slice the first image
def vis_conv(v, ix, iy, ch, cy, cx, p=0):
    v = np.reshape(v, (iy, ix, ch))
    ix += 2
    iy += 2
    npad = ((1, 1), (1, 1), (0, 0))
    v = np.pad(v, pad_width=npad, mode='constant', constant_values=p)
    v = np.reshape(v, (iy, ix, cy, cx))
    v = np.transpose(v, (2, 0, 3, 1))  # cy, iy, cx, ix
    v = np.reshape(v, (cy*iy, cx*ix))
    return v
# W_conv1 - weights
ix = 5 # data size
iy = 5
ch = 32
cy = 4 # grid from channels: 32 = 4x8
cx = 8
v = vis_conv(vv1,ix,iy,ch,cy,cx)
plt.figure(figsize = (8,8))
plt.imshow(v,cmap="Greys_r",interpolation='nearest')
# h_conv1 - processed image
ix = 30 # data size
iy = 30
v = vis_conv(vv2,ix,iy,ch,cy,cx)
plt.figure(figsize = (8,8))
plt.imshow(v,cmap="Greys_r",interpolation='nearest')
You can try to get the convolution layer activation image this way:
h_conv1_features = tf.unpack(h_conv1, axis=3)
h_conv1_imgs = tf.expand_dims(tf.concat(1, h_conv1_features), -1)
This gets one vertical stripe with all the feature maps concatenated vertically.
If you want them padded (in my case of ReLU activations, padded with a white line):
h_conv1_features = tf.unpack(h_conv1, axis=3)
h_conv1_max = tf.reduce_max(h_conv1)
h_conv1_features_padded = map(lambda t: tf.pad(t-h_conv1_max, [[0,0],[0,1],[0,0]])+h_conv1_max, h_conv1_features)
h_conv1_imgs = tf.expand_dims(tf.concat(1, h_conv1_features_padded), -1)
I personally try to tile every 2d-filter in a single image.
To do this (if I'm not terribly mistaken, since I'm quite new to DL) I found it helpful to exploit the depth_to_space function, since it takes a 4d tensor
[batch, height, width, depth]
and produces an output of shape
[batch, height*block_size, width*block_size, depth/(block_size*block_size)]
where block_size is the number of "tiles" in the output image. The only limitation is that the depth should be the square of block_size, which must be an integer; otherwise it cannot "fill" the resulting image correctly.
A possible solution could be padding the depth of the input tensor up to a depth that the method accepts, but I still haven't tried this.
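A small shape check of that claim (my own sketch, using the TF1-era API seen elsewhere in this thread): with 16 channels and block_size=4 the depth is exactly block_size squared, so the result collapses to a single tiled channel:
import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 8, 8, 16])
tiled = tf.depth_to_space(x, block_size=4)
print(tiled.get_shape())  # (1, 32, 32, 1)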
Another way, which I think is very easy, is using the get_operation_by_name function. I had a hard time visualizing the layers with other methods, but this helped me.
# first, find out the operations; many of those are micro-operations such as add etc.
graph = tf.get_default_graph()
graph.get_operations()
# choose the relevant operation
op_name = '...'
op = graph.get_operation_by_name(op_name)
out = sess.run([op.outputs[0]], feed_dict={x: img_batch, is_training: False})
# img_batch is a single image whose dimensions are (1,n,n,1).
# out is the output of the layer; do whatever you want with the output
# in my case, I wanted to see the output of a convolution layer
out2 = np.array(out)
print(out2.shape)
# determine rows, cols, fig size etc.
for each_depth in range(out2.shape[4]):
    fig.add_subplot(rows, cols, each_depth+1)
    plt.imshow(out2[0, 0, :, :, each_depth], cmap='gray')
For example, below are the input (a colored cat) and the output of the second conv layer in my model.
Note that I am aware this question is old and there are easier methods with Keras, but for people who use an old model from other people (such as me), this may be useful.

Checking the gradient when doing gradient descent

I'm trying to implement a feed-forward backpropagating autoencoder (training with gradient descent) and wanted to verify that I'm calculating the gradient correctly. This tutorial suggests calculating the derivative of each parameter one at a time: grad_i(theta) = (J(theta_i+epsilon) - J(theta_i-epsilon)) / (2*epsilon). I've written a sample piece of code in Matlab to do just this, but without much luck -- the differences between the gradient calculated from the derivative and the gradient numerically found tend to be largish (>> 4 significant figures).
If anyone can offer any suggestions, I would greatly appreciate the help (either with my calculation of the gradient or with how I perform the check). Because I've greatly simplified the code to make it more readable, I haven't included biases, and am no longer tying the weight matrices.
First, I initialize the variables:
numHidden = 200;
numVisible = 784;
low = -4*sqrt(6./(numHidden + numVisible));
high = 4*sqrt(6./(numHidden + numVisible));
encoder = low + (high-low)*rand(numVisible, numHidden);
decoder = low + (high-low)*rand(numHidden, numVisible);
Next, given some input image x, do feed-forward propagation:
a = sigmoid(x*encoder);
z = sigmoid(a*decoder); % (reconstruction of x)
The loss function I'm using is the standard sum(0.5*(z - x).^2):
% first calculate the error by finding the derivative of sum(0.5*(z-x).^2),
% which is (f(h)-x).*f'(h), where z = f(h), h = a*decoder, and
% f is the sigmoid. Since the derivative of the sigmoid is
% sigmoid.*(1 - sigmoid), we get:
error_0 = (z - x).*z.*(1-z);
% The gradient \Delta w_{ji} = error_j*a_i
gDecoder = error_0'*a;
% not important, but included for completeness
% do back-propagation one layer down
error_1 = (error_0*encoder).*a.*(1-a);
gEncoder = error_1'*x;
And finally, check that the gradient is correct (in this case, just do it for the decoder):
epsilon = 10e-5;
check = gDecoder(:); % the values we obtained above
for i = 1:size(decoder(:), 1)
    % calculate J+
    theta = decoder(:); % unroll
    theta(i) = theta(i) + epsilon;
    decoderp = reshape(theta, size(decoder)); % re-roll
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jp = sum(0.5*(z - x).^2);
    % calculate J-
    theta = decoder(:);
    theta(i) = theta(i) - epsilon;
    decoderp = reshape(theta, size(decoder));
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jm = sum(0.5*(z - x).^2);
    grad_i = (Jp - Jm) / (2*epsilon);
    diff = abs(grad_i - check(i));
    fprintf('%d: %f <=> %f: %f\n', i, grad_i, check(i), diff);
end
Running this on the MNIST dataset (for the first entry) gives results such as:
2: 0.093885 <=> 0.028398: 0.065487
3: 0.066285 <=> 0.031096: 0.035189
5: 0.053074 <=> 0.019839: 0.033235
6: 0.108249 <=> 0.042407: 0.065843
7: 0.091576 <=> 0.009014: 0.082562
Do not sigmoid on both a and z. Just use it on z.
a = x*encoder;
z = sigmoid(a*decoderp);
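Independently of that fix, a compact numpy re-check can help pin down which gradient is wrong. A sketch of my own (mirroring the shapes above: x a row vector, encoder of size (visible, hidden), decoder of size (hidden, visible)); note the analytic decoder gradient a'*delta must have the same shape as decoder:
import numpy as np

def sigmoid(h):
    return 1.0/(1.0 + np.exp(-h))

rng = np.random.default_rng(0)
x = rng.random((1, 8))
encoder = 0.1*rng.standard_normal((8, 5))
decoder = 0.1*rng.standard_normal((5, 8))

def loss(dec):
    a = sigmoid(x @ encoder)
    z = sigmoid(a @ dec)
    return 0.5*np.sum((z - x)**2)

# analytic gradient wrt decoder: delta = (z-x)*z*(1-z), dJ/ddecoder = a' * delta
a = sigmoid(x @ encoder)
z = sigmoid(a @ decoder)
delta = (z - x)*z*(1 - z)
g_analytic = a.T @ delta  # shape (5, 8), matching decoder

# central-difference check on one entry
eps = 1e-5
E = np.zeros_like(decoder)
E[2, 3] = eps
g_numeric = (loss(decoder + E) - loss(decoder - E))/(2*eps)
print(g_analytic[2, 3], g_numeric)  # these should agree to many decimals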