How to calculate a shifted-distance Gaussian map efficiently in numpy

import numpy as np

def build_gaussian_map(s, point, sigma=25):
    x, y = point[0], point[1]
    gmap = np.zeros(s)
    for row in range(s[0]):
        for col in range(s[1]):
            gmap[row][col] = 1 / (2 * np.pi * sigma * sigma) * np.exp(-((x - row) * (x - row) + (y - col) * (y - col)) / (2 * sigma * sigma))
    return gmap
s - 2D array shape
point - point coordinates
I am calculating a Gaussian distance map centered at a certain point of the image (point). Can I do it somehow using matrix operations?
Result map example:

import numpy as np

def build_gaussian_map(s, point, sigma=25):
    x, y = point[0], point[1]
    gmap = np.zeros(s)
    for row in range(s[0]):
        for col in range(s[1]):
            gmap[row][col] = 1 / (2 * np.pi * sigma * sigma) * np.exp(-((x - row) * (x - row) + (y - col) * (y - col)) / (2 * sigma * sigma))
    return gmap

def build_gaussian_map2(shape, point, sigma=25):
    x, y = point[0], point[1]
    row, col = np.indices(shape)
    gmap = 1 / (2 * np.pi * sigma * sigma) * np.exp(-((x - row) * (x - row) + (y - col) * (y - col)) / (2 * sigma * sigma))
    return gmap

def main():
    s = (1000, 1000)
    result1 = build_gaussian_map(s, (100, 100))
    result2 = build_gaussian_map2(s, (100, 100))
    assert np.all(result1 == result2)

main()
Profiling results (line_profiler; columns: Line #, Hits, Time, Per Hit, % Time, Line Contents):
24                                              def main():
25       1          3.0        3.0     0.0          s = (1000, 1000)
26       1    6126705.0  6126705.0    98.2          result1 = build_gaussian_map(s, (100, 100))
27       1     105593.0   105593.0     1.7          result2 = build_gaussian_map2(s, (100, 100))

def gaussian_map(shape, point, sigma=20):
    a = np.arange(shape[0])
    b = np.arange(shape[1])
    # indexing='ij' keeps the grids in (row, column) order, so the result also
    # matches the loop version for non-square shapes (the default 'xy' indexing
    # would transpose it)
    x_grid, y_grid = np.meshgrid(a, b, indexing='ij')
    return 1 / (2 * np.pi * sigma * sigma) * np.exp(-((x_grid - point[0]) * (x_grid - point[0]) + (y_grid - point[1]) * (y_grid - point[1])) / (2 * sigma * sigma))
I came up with this function. It seems to be efficient.
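For reference, here is an alternative sketch (my own, not from the thread) that uses broadcasting instead of building full index grids: a column vector of row indices and a row vector of column indices broadcast to the full map without the intermediate arrays that np.indices or np.meshgrid allocate. The result matches build_gaussian_map up to floating-point rounding.

import numpy as np

def gaussian_map_broadcast(shape, point, sigma=25):
    rows = np.arange(shape[0])[:, None]   # column vector of row indices, shape (H, 1)
    cols = np.arange(shape[1])[None, :]   # row vector of column indices, shape (1, W)
    d2 = (point[0] - rows) ** 2 + (point[1] - cols) ** 2
    return np.exp(-d2 / (2 * sigma * sigma)) / (2 * np.pi * sigma * sigma)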

Related

what is the difference between s[:] and s if s is a torch.Tensor [duplicate]

import numpy as np
import time
import torch
# d2l: helper module from the book "Dive into Deep Learning" (assumed to be imported)

features, labels = d2l.get_data_ch7()

def init_adam_states():
    v_w, v_b = torch.zeros((features.shape[1], 1), dtype=torch.float32), torch.zeros(1, dtype=torch.float32)
    s_w, s_b = torch.zeros((features.shape[1], 1), dtype=torch.float32), torch.zeros(1, dtype=torch.float32)
    return ((v_w, s_w), (v_b, s_b))

def adam(params, states, hyperparams):
    beta1, beta2, eps = 0.9, 0.999, 1e-6
    for p, (v, s) in zip(params, states):
        v[:] = beta1 * v + (1 - beta1) * p.grad.data
        s = beta2 * s + (1 - beta2) * p.grad.data**2
        v_bias_corr = v / (1 - beta1 ** hyperparams['t'])
        s_bias_corr = s / (1 - beta2 ** hyperparams['t'])
        p.data -= hyperparams['lr'] * v_bias_corr / (torch.sqrt(s_bias_corr) + eps)
    hyperparams['t'] += 1

def train_ch7(optimizer_fn, states, hyperparams, features, labels, batch_size=10, num_epochs=2):
    # Initialize the model
    net, loss = d2l.linreg, d2l.squared_loss
    w = torch.nn.Parameter(torch.tensor(np.random.normal(0, 0.01, size=(features.shape[1], 1)), dtype=torch.float32),
                           requires_grad=True)
    b = torch.nn.Parameter(torch.zeros(1, dtype=torch.float32), requires_grad=True)

    def eval_loss():
        return loss(net(features, w, b), labels).mean().item()

    ls = [eval_loss()]
    data_iter = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(features, labels), batch_size, shuffle=True)
    for _ in range(num_epochs):
        start = time.time()
        print(w)
        print(b)
        for batch_i, (X, y) in enumerate(data_iter):
            l = loss(net(X, w, b), y).mean()  # use the mean loss
            # zero the gradients
            if w.grad is not None:
                w.grad.data.zero_()
                b.grad.data.zero_()
            l.backward()
            optimizer_fn([w, b], states, hyperparams)  # update the model parameters
            if (batch_i + 1) * batch_size % 100 == 0:
                ls.append(eval_loss())  # record the current training error every 100 samples
    # print the result and plot
    print('loss: %f, %f sec per epoch' % (ls[-1], time.time() - start))
    d2l.set_figsize()
    d2l.plt.plot(np.linspace(0, num_epochs, len(ls)), ls)
    d2l.plt.xlabel('epoch')
    d2l.plt.ylabel('loss')

train_ch7(adam, init_adam_states(), {'lr': 0.01, 't': 1}, features, labels)
I want to implement the Adam algorithm in the code above, and I am confused about the function named adam.
v = beta1 * v + (1 - beta1) * p.grad.data
s = beta2 * s + (1 - beta2) * p.grad.data**2
When I use the code above, the loss function curve is figure 1.
figure 1
v[:] = beta1 * v + (1 - beta1) * p.grad.data
s = beta2 * s + (1 - beta2) * p.grad.data**2
or
v = beta1 * v + (1 - beta1) * p.grad.data
s[:] = beta2 * s + (1 - beta2) * p.grad.data**2
When I use the code above, the loss function curve is figure 2.
figure 2
v[:] = beta1 * v + (1 - beta1) * p.grad.data
s[:] = beta2 * s + (1 - beta2) * p.grad.data**2
When I use the code above, the loss function curve is figure 3.
figure 3
The loss function curve in case 3 has always been smoother than that in case 1.
The loss function curve in case 2 sometimes does not converge.
Why are they different?
To answer the first question,
v = beta1 * v + (1 - beta1) * p.grad.data
is an out-of-place operation. Remember that Python variables are references to objects. By assigning a new value to the variable v, the underlying object that v referred to before this assignment is not changed; instead, the expression beta1 * v + (1 - beta1) * p.grad.data produces a new tensor, which v then refers to.
On the other hand
v[:] = beta1 * v + (1 - beta1) * p.grad.data
is an in-place operation. After this operation v still refers to the same underlying object, and the elements of that tensor are modified and replaced with the values of the new tensor beta1 * v + (1 - beta1) * p.grad.data.
Take a look at the following 3 lines to see why this matters
for p, (v, s) in zip(params, states):
    v[:] = beta1 * v + (1 - beta1) * p.grad.data
    s[:] = beta2 * s + (1 - beta2) * p.grad.data**2
v and s are actually referring to tensors which are stored in states. If we do in-place operations then the values in states are changed to reflect the value assigned to v[:] and s[:].
If out-of-place operations are used then the values in states remain unchanged.
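To make the difference concrete, here is a small standalone sketch (my own, not from the thread; NumPy arrays are used for brevity, and torch tensors behave the same way). A one-element list plays the role of states, and only the in-place assignment changes the stored array:

import numpy as np

states = [np.zeros(3)]   # plays the role of `states` in the training loop
v = states[0]            # v refers to the same array object as states[0]

v = v + 1.0              # out-of-place: v is rebound to a brand-new array
print(states[0])         # [0. 0. 0.] -- the stored state is unchanged

v = states[0]
v[:] = v + 1.0           # in-place: the elements of the stored array are overwritten
print(states[0])         # [1. 1. 1.] -- the stored state now reflects the update

In the adam function this matters because the first and second moments have to accumulate across iterations; a rebound v or s leaves the tensor stored in states at zero, so the next call effectively starts from scratch.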

Errors on parameters using scipy.curve_fit

I am fitting the following function (variables A, D, μ and τ), where x and E are fixed:
n(t) = A / sqrt(4 * pi * D * t) * exp( -(x - μ*E*t)**2 / (4*D*t) - t/τ )
I created some example data using the equation and added some noise. The fit looks very good and has a low chi-squared; however, the errors from the covariance matrix are odd: some are very large whereas others are smaller. What am I doing wrong?
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Constants
E_field = 1
x = 1

def function(t, A, D, μ, τ):
    return A/np.sqrt(4*np.pi*D*t) * np.exp(-pow(x-μ*E_field*t, 2)/(4*D*t) - t/τ)

def chi(E, O):
    return np.sum(np.ma.masked_invalid(pow(O-E, 2)/E))

def fit(t, n, m, p0):
    ddof = n.size - m
    popt, pcov = curve_fit(function, t, n, p0=p0)
    fitted_n = function(t, *popt)
    reduced_χ_squared = chi(n, fitted_n) / ddof
    σ = np.sqrt(np.diag(pcov))
    return popt, σ, reduced_χ_squared

# Choose random variables to generate data
x, t = 1, np.linspace(0.01, 5, num=100)
A, D, μ, τ = 1, 0.2, 1, 1
n = function(t, A, D, μ, τ)
n_noise = n + 0.005 * np.random.normal(size=n.size)
n_noise += abs(min(n_noise))  # Shift data to lie on y = 0

p0 = [1, 0.25, 1, 1]
vars, σ, reduced_χ_squared = fit(t, n_noise, 4, p0)
fitted_A, fitted_D, fitted_μ, fitted_τ = vars
σ_A, σ_D, σ_μ, σ_τ = σ
fitted_n = function(t, *vars)

fig, ax = plt.subplots()
ax.plot(t, n_noise)
ax.plot(t, fitted_n)
#ax.text(0.82, 0.75, "χᵣ²={:.4f}".format(reduced_χ_squared), transform = ax.transAxes)
ax.legend(["Observed n", "Expected n"])
print("Fitted parameters: A = {:.4f}, D = {:.4f}, μ = {:.4f}, τ = {:.4f}".format(*vars))
print("Fitted parameter errors: σ_A = {:.4f}, σ_D = {:.4f}, σ_μ = {:.4f}, σ_τ = {:.4f}".format(*σ))
print("Reduced χ² = {:.4f}".format(reduced_χ_squared))
Running this code gives me the following output
As mentioned in my comment above, correlation is a big problem here. The biggest problem, though, is that you are fitting more parameters than required.
Let us transform:
A = exp( alpha ), i.e. alpha = log( A )
delta = 4 * D
epsilon = mu * E
We then get:
1 / sqrt( pi * delta * t ) * exp( -( x**2 + epsilon**2 * t**2 - 2 * x * epsilon * t ) / ( delta * t ) - t / tau + alpha )
= 1 / sqrt( pi * delta * t ) * exp( -( x**2 + epsilon**2 * t**2 - 2 * x * epsilon * t ) / ( delta * t ) - delta / tau * t**2 / ( delta * t ) + delta * alpha * t / ( delta * t ) )
= 1 / sqrt( pi * delta * t ) * exp( -( x**2 + epsilon**2 * t**2 - 2 * x * epsilon * t + delta / tau * t**2 - delta * alpha * t ) / ( delta * t ) )
= 1 / sqrt( pi * delta * t ) * exp( -( x**2 + ( epsilon**2 + delta / tau ) * t**2 - ( 2 * x * epsilon + delta * alpha ) * t ) / ( delta * t ) )
Now renaming:
( epsilon**2 + delta / tau ) -> gamma**2
( 2 * x * epsilon + delta * alpha ) -> eta
we get
= 1 / sqrt( pi * delta * t ) * exp( -( x**2 + gamma**2 * t**2 - eta * t ) / ( delta * t ) )
So there are actually only 3 parameters to fit and it looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Constants
E_field = 1
x = 1

def function(t, A, D, μ, τ):
    return A/np.sqrt(4*np.pi*D*t) * np.exp(-pow(x-μ*E_field*t, 2)/(4*D*t) - t/τ)

def alt_func(t, gamma, eta, delta):
    return np.exp(-(x**2 + gamma**2 * t**2 - eta * t) / (delta * t)) / np.sqrt(np.pi * delta * t)

# Choose random variables to generate data
x, t = 1, np.linspace(0.01, 5, num=100)
A, D, μ, τ = 1, 0.2, 1, 1
n = function(t, A, D, μ, τ)
n_noise = n + 0.005 * np.random.normal(size=n.size)
n_noise += abs(min(n_noise))  # Shift data to lie on y = 0

guess = [1.34, 2, .8]
palt, covalt = curve_fit(alt_func, t, n_noise)
print(covalt)
print(palt)

yt = alt_func(t, *palt)
yg = alt_func(t, *guess)
yorg = function(t, A, D, μ, τ)

fig, ax = plt.subplots()
ax.plot(t, n_noise)
ax.plot(t, yg)
ax.plot(t, yt, ls="--")
ax.plot(t, yorg, ls=":")
plt.show()
This has a reasonable covariance matrix. One can get the original parameters back easily via error propagation.
Alternatively, it should be enough to fix A = 1 and fit only the three remaining parameters in the original function.
Concerning the transformation and back-calculation, one has to keep in mind that this is of course a map from R³ to R⁴, so it is naturally not unique either. Again one can just fix one value, or one might try to spread the error evenly between the parameters, or who knows...
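As an illustration of the back-calculation, here is a minimal sketch (my assumptions, not part of the original answer: A is fixed to 1, so alpha = log A = 0, and E_field = x = 1 as in the example) that recovers D, μ and τ from the fitted gamma, eta and delta. Propagating covalt through these expressions, e.g. with a Jacobian or the uncertainties package, then gives the errors on the original parameters.

gamma, eta, delta = palt

D_rec = delta / 4.0                         # delta = 4 * D
eps_rec = eta / (2.0 * x)                   # eta = 2*x*epsilon + delta*alpha, with alpha = 0
mu_rec = eps_rec / E_field                  # epsilon = mu * E
tau_rec = delta / (gamma**2 - eps_rec**2)   # gamma**2 = epsilon**2 + delta / tau

print(D_rec, mu_rec, tau_rec)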

Numpy - linear algebra

I have two matrices: quantities and displacements.
The problem is as follows:
[0.011 * x + 0.0295 * y + 0.080 * w + 0.182 * z] = [-4.31, 8.15, 0.83]
[0.011 * x + 0.0220 * y + 0.098 * w + 0.180 * z] = [-3.70, 6.30, 1.03]
[0.013 * x + 0.0230 * y + 0.108 * w + 0.172 * z] = [-3.89, 6.33, 0.52]
[0.013 * x + 0.0230 * y + 0.105 * w + 0.175 * z] = [-3.38, 5.55, 0.54]
In numpy:
quantities = np.matrix([[0.011, 0.0295, 0.080, 0.182], [0.011, 0.022, 0.098, 0.180], [0.013, 0.023, 0.108, 0.172], [0.013, 0.023, 0.105, 0.175]])
displacements = np.matrix([[-4.31, 8.15, 0.83], [-3.7, 6.3, 1.03], [-3.89, 6.33, 0.52], [-3.38, 5.55, 0.54]])
To obtain the displacement [-4.37, 7.44, 1.01], what are the values of x, y, w, z used?
That is:
[ax + by + cw + dz] = [-4.37, 7.44, 1.01]
What are the values of a, b, c and d?
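There is no answer in this thread, but one possible reading (an assumption on my part, not from the original post) is: first solve quantities · V = displacements for a 4×3 matrix V whose rows are the vectors x, y, w, z, then look for weights a, b, c, d whose combination of those rows gives the target displacement. A sketch in NumPy, using least squares for the second, under-determined step:

import numpy as np

quantities = np.array([[0.011, 0.0295, 0.080, 0.182],
                       [0.011, 0.0220, 0.098, 0.180],
                       [0.013, 0.0230, 0.108, 0.172],
                       [0.013, 0.0230, 0.105, 0.175]])
displacements = np.array([[-4.31, 8.15, 0.83],
                          [-3.70, 6.30, 1.03],
                          [-3.89, 6.33, 0.52],
                          [-3.38, 5.55, 0.54]])

# Rows of V are the vectors x, y, w, z (each of length 3).
V = np.linalg.solve(quantities, displacements)

# Find [a, b, c, d] such that [a, b, c, d] @ V is as close as possible to the target.
# Three equations, four unknowns: lstsq returns the minimum-norm solution.
target = np.array([-4.37, 7.44, 1.01])
coeffs, residuals, rank, sv = np.linalg.lstsq(V.T, target, rcond=None)
print(coeffs)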

Octave fminunc "trust region become excessively small"

I am trying to run a linear regression using fminunc to optimize my parameters. However, while the code never fails, fminunc seems to run only once and not converge. The exit flag that fminunc returns is -3, which, according to the documentation, means "The trust region radius became excessively small". What does this mean and how can I fix it?
This is my main:
load('data.mat');
% returns matrix X, a matrix of data

% Initialize parameters
[m, n] = size(X);
X = [ones(m, 1), X];
initialTheta = zeros(n + 1, 1);
alpha = 1;
lambda = 0;

costfun = @(t) costFunction(t, X, surv, lambda, alpha);
options = optimset('GradObj', 'on', 'MaxIter', 1000);
[theta, cost, info] = fminunc(costfun, initialTheta, options);
And the cost function:
function [J, grad] = costFunction(theta, X, y, lambda, alpha)
%COSTFUNCTION Implements a logistic regression cost function.
%   [J grad] = COSTFUNCTION(initialParameters, X, y, lambda) computes the cost
%   and the gradient for the logistic regression.
%
  m = size(X, 1);
  J = 0;
  grad = zeros(size(theta));

  % un-regularized
  z = X * theta;
  J = (-1 / m) * y' * log(sigmoid(z)) + (1 - y)' * log(1 - sigmoid(z));
  grad = (alpha / m) * X' * (sigmoid(z) - y);

  % regularization
  theta(1) = 0;
  J = J + (lambda / (2 * m)) * (theta' * theta);
  grad = grad + alpha * ((lambda / m) * theta);
endfunction
Any help is much appreciated.
There are a few issues with the code above:
Using fminunc means you don't have to provide an alpha. Remove all instances of it from the code, and your gradient expressions should look like the following:
grad = (1 / m) * X' * (sigmoid(z) - y);
and
grad = grad + ((lambda / m) * theta); % This isn't quite correct, see below
In the regularization of grad, you can't use theta directly, because the theta for j = 0 must not be regularized. There are a number of ways to do this, but here is one:
temp = theta;
temp(1) = 0;
grad = grad + ((lambda / m) * temp);
You are missing a set of brackets in your cost function. The (-1 / m) is being applied to only a portion of the rest of the equation. It should look like:
J = (-1 / m) * ( y' * log(sigmoid(z)) + (1 - y)' * log(1 - sigmoid(z)) );
And finally, as a nit, a lambda value of 0 means that your regularization does nothing.
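Putting the three fixes together, here is a minimal NumPy sketch (my own, not code from the thread; the corrected Octave version follows the same structure): no alpha, the intercept excluded from regularization, and the (-1 / m) applied to the whole bracketed term.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_function(theta, X, y, lam):
    """Regularized logistic-regression cost and gradient."""
    m = X.shape[0]
    h = sigmoid(X @ theta)
    # (-1 / m) multiplies the whole bracketed term
    J = (-1.0 / m) * (y @ np.log(h) + (1 - y) @ np.log(1 - h))
    grad = (1.0 / m) * X.T @ (h - y)
    # Regularization, excluding the intercept term theta[0]
    temp = theta.copy()
    temp[0] = 0.0
    J += (lam / (2.0 * m)) * (temp @ temp)
    grad += (lam / m) * temp
    return J, grad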

Quaternion addition like 3ds/gmax does with its quats

A project I'm working on needs a function which mimics 3ds/gmax's quaternion addition. A test case of (quat 1 2 3 4)+(quat 3 5 7 9) should equal (quat 20 40 54 2). These quats are in xyzw.
So, I figure it's basic algebra, given the clean numbers. It's got to be something like this multiply function, since it doesn't involve sin/cos:
const quaternion &operator *=(const quaternion &q)
{
    float x = v.x, y = v.y, z = v.z, sn = s*q.s - v*q.v;
    v.x = y*q.v.z - z*q.v.y + s*q.v.x + x*q.s;
    v.y = z*q.v.x - x*q.v.z + s*q.v.y + y*q.s;
    v.z = x*q.v.y - y*q.v.x + s*q.v.z + z*q.s;
    s = sn;
    return *this;
}
source
But I don't understand how sn = s*q.s - v*q.v is supposed to work: s is a float, v is a vector. Multiply two vectors and add the result to a float?
I'm not even sure which terms of direction/rotation/orientation these values represent, but if the function satisfies the quat values above, it'll work.
Found it. Turns out to be known as multiplication. Addition is multiplication. Up is sideways. Not confusing at all :/
fn qAdd q1 q2 = (
    x1 = q1.x
    y1 = q1.y
    z1 = q1.z
    w1 = q1.w
    x2 = q2.x
    y2 = q2.y
    z2 = q2.z
    w2 = q2.w
    w = (w1 * w2) - (x1 * x2) - (y1 * y2) - (z1 * z2)
    x = (w1 * x2) + (x1 * w2) + (y1 * z2) - (z1 * y2)
    y = (w1 * y2) + (y1 * w2) + (z1 * x2) - (x1 * z2)
    z = (w1 * z2) + (z1 * w2) + (x1 * y2) - (y1 * x2)
    return (quat x y z w)
)
Swapping q1 and q2 yields different results, unlike ordinary addition or multiplication (quaternion multiplication is not commutative).
source
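As a side note, v*q.v in the C++ snippet above is presumably an overloaded dot product, so sn is the scalar part w1*w2 - v1·v2 of the same Hamilton product. A quick NumPy check (my own sketch, not part of the original post) confirms that the formulas reproduce the test case (quat 1 2 3 4) "+" (quat 3 5 7 9) = (quat 20 40 54 2) in xyzw order:

import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of two quaternions given in (x, y, z, w) order."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1*x2 + x1*w2 + y1*z2 - z1*y2,   # x
        w1*y2 + y1*w2 + z1*x2 - x1*z2,   # y
        w1*z2 + z1*w2 + x1*y2 - y1*x2,   # z
        w1*w2 - x1*x2 - y1*y2 - z1*z2,   # w
    ])

print(quat_mul((1, 2, 3, 4), (3, 5, 7, 9)))   # -> [20. 40. 54.  2.]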