computing cumulative distribution of a conditional probability distribution - numpy

I have a conditional probability of z given m, p(z|m), whose coefficients are chosen so that the integral over z in [0, 1.5] and over m in [18, 28] equals one.
import numpy as np

def p(z, m):
    if m < 21.25:
        E = {'ft': 0.55, 'alpha': 2.99, 'z0': 0.191, 'km': 0.089, 'kt': 0.25}
        S = {'ft': 0.39, 'alpha': 2.15, 'z0': 0.121, 'km': 0.093, 'kt': -0.175}
        I = {'ft': 0.06, 'alpha': 1.77, 'z0': 0.045, 'km': 0.096, 'kt': -0.9196}
    else:
        E = {'ft': 0.25, 'alpha': 1.957, 'z0': 0.321, 'km': 0.196, 'kt': 0.565}
        S = {'ft': 0.61, 'alpha': 1.598, 'z0': 0.291, 'km': 0.167, 'kt': 0.155}
        I = {'ft': 0.14, 'alpha': 0.964, 'z0': 0.170, 'km': 0.129, 'kt': 0.1759}
    # the same functional form applies to all three components
    Evalue = E['ft'] * np.exp(-E['kt'] * (m - 18)) * z**E['alpha'] * np.exp(-(z / (E['z0'] + E['km'] * (m - 18)))**E['alpha'])
    Svalue = S['ft'] * np.exp(-S['kt'] * (m - 18)) * z**S['alpha'] * np.exp(-(z / (S['z0'] + S['km'] * (m - 18)))**S['alpha'])
    Ivalue = I['ft'] * np.exp(-I['kt'] * (m - 18)) * z**I['alpha'] * np.exp(-(z / (I['z0'] + I['km'] * (m - 18)))**I['alpha'])
    return Evalue + Svalue + Ivalue
I would like to draw samples from this distribution, so I built a grid of points in the (z, m) plane to estimate the cumulative distribution. The cumulative integral over m reaches one, but the cumulative integral over z does not reach one at the upper edge, and I don't understand why it does not converge to one.
from scipy import integrate

grid_m = np.linspace(18, 28, 1000)
grid_z = np.linspace(0, 1.5, 1000)
dz = np.diff(grid_z[:2])

# get cdf on grid, use cumtrapz
prob_zgm = np.empty((grid_z.shape[0], grid_m.shape[0]), float)
for i in range(grid_z.shape[0]):
    for j in range(grid_m.shape[0]):
        prob_zgm[i, j] = p(grid_z[i], grid_m[j])

pr = np.column_stack((np.zeros(prob_zgm.shape[0]), prob_zgm))
dm = np.diff(grid_m[:2])
cdf_zgm = integrate.cumtrapz(pr, dx=dm, axis=1)
cdf = integrate.cumtrapz(pr, dx=dz, axis=0)
Which assumption could cause this inconsistency, or am I computing something incorrectly?
Update: the cumulative distribution cdf_zgm is shown in a plot which is not reproduced here.
To get the inverse of the CDF, this is the approach I have used:
from scipy import interpolate

# fix bounds of cdf_zgm
cdf_zgm[:, 0] = 0
cdf_zgm[:, -1] = 1

# interpolate the data using a linear spline onto "grid_q" samples
grid_q = np.linspace(0, 1, 200)
grid_qm = np.empty((len(grid_m), len(grid_q)), float)
for i in range(len(grid_m)):
    grid_qm[i] = interpolate.interp1d(cdf_zgm[i], grid_z)(grid_q)

# build 2d interpolation for z as a function of (q, m)
z_interp = interpolate.interp2d(grid_q, grid_m, grid_qm)

# sample magnitudes (dist_m is defined elsewhere), then invert the CDF
ng = 20000
r = dist_m.rvs(ng)
rvs_u = np.random.rand(ng)
rvs_z = np.asarray([z_interp(rvs_u[i], r[i]) for i in range(len(rvs_u))]).ravel()
Is it the right approach to force the boundaries of the CDF to one?

I don't know what's wrong with that code. But here are a couple of different ideas to try:
(1) Just sum the array elements instead of trying to compute the numerical integrals. It is simpler that way. (Summing the array elements is essentially computing a rectangle-rule approximation, which, as it turns out, is actually more accurate here than the trapezoidal rule.)
(2) Instead of trying to create a whole 2-d array at once, write a function which creates just a 1-d slice of p(z | m) for a given value of m. Then just sum those elements to get the cumulative probability.
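For example, here is a minimal sketch of idea (2), reusing p and grid_z from the question. The function name cdf_z_given_m is just for illustration, and the division by the total is my addition to force the last CDF value to be exactly one:
def cdf_z_given_m(m, grid_z):
    # 1-d slice of the density at fixed m
    pdf = np.array([p(z, m) for z in grid_z])
    # running rectangle-rule sum; dividing by the total makes cdf[-1] == 1
    cdf = np.cumsum(pdf)
    return cdf / cdf[-1]

# conditional CDF of z at, say, m = 20
cdf20 = cdf_z_given_m(20.0, grid_z)
print(cdf20[-1])   # 1.0 by construction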


how to vectorize exponential probability function

I believe the code below is a more or less correct implementation of this exponential heatmap function:
def expfunc(image, landmark, sigma=6):  # image: array of shape (512, 512), landmark: array of shape (2,)
    a = np.sqrt(np.log(2) / 2) / sigma
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            prob = np.exp(-a * (np.abs(i - landmark[0]) + np.abs(j - landmark[1])))
            if prob > 0.01:
                image[i][j] = prob
            else:
                image[i][j] = 0
    return image
My questions are:
How can I vectorize this code?
The probability function assigns a value to every pixel, so how should I handle the very small values? At the moment I am zeroing everything below a threshold of 0.01.
Let me know if this works for you:
i = np.arange(image.shape[0])
j = np.arange(image.shape[1])
prob = np.exp(-a*(np.abs(i[:,None]-landmark[0])+np.abs(j-landmark[1])))
image = np.where(prob>0.01, prob, 0)
First compute the array prob for all of the indices i and j. Then prob has the same shape as image, and you can redefine image based on the values of prob using numpy.where.
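For completeness, here is a sketch of the whole thing wrapped into a function (the names expfunc_vectorized and threshold are illustrative, not from the original code); it returns a new array holding the same values the double loop would have written into image:
import numpy as np

def expfunc_vectorized(shape, landmark, sigma=6, threshold=0.01):
    a = np.sqrt(np.log(2) / 2) / sigma
    i = np.arange(shape[0])
    j = np.arange(shape[1])
    # i as a column against j as a row broadcasts to a full (H, W) grid
    prob = np.exp(-a * (np.abs(i[:, None] - landmark[0]) + np.abs(j - landmark[1])))
    # zero out the negligible tail instead of keeping tiny values everywhere
    return np.where(prob > threshold, prob, 0.0)

heatmap = expfunc_vectorized((512, 512), np.array([100, 200]))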

A Simple Bayesian Network with a Coin-Flipping Problem

I am trying to implement a Bayesian network and solve a regression problem using PyMC3. In my model, I have a fair coin as the parent node C. If the parent node is H, the child node D follows the normal distribution N(5, 0.2); if T, it follows N(0, 0.5). (The illustration of the network is not reproduced here.)
To simulate this network, I generated a sample dataset and tried doing Bayesian regression using the code below. Currently, the model does regression only for the child node as if the parent node does not exist. I would greatly appreciate it if anyone can let me know how to implement the conditional probability P(D|C). Ultimately, I am interested in finding the probability distribution for mu1 and mu2. Thank you!
import numpy as np
import pymc3 as pm
import theano
from numpy.random import normal
from scipy.stats import bernoulli

# Generate data for coin flip P(C) and store in c1
theta_real = 0.5  # unknown value in a real experiment
n_sample = 10
c1 = bernoulli.rvs(p=theta_real, size=n_sample)

# Generate data for normal distribution P(D|C) and store in d1
np.random.seed(123)
mu1 = 0
sigma1 = 0.5
mu2 = 5
sigma2 = 0.2
d1 = []
for index, item in enumerate(c1):
    if item == 0:
        d1.extend(normal(mu1, sigma1, 1))
    else:
        d1.extend(normal(mu2, sigma2, 1))

# I start building the PyMC3 model here
c1_tensor = theano.shared(np.array(c1))
d1_tensor = theano.shared(np.array(d1))
with pm.Model() as model:
    # define prior for c1. I am not sure how to do this.
    # c1_present = pm.Categorical('c1', observed=c1_tensor)
    # how do I incorporate P(D | C)?
    mu_prior = pm.Normal('mu', mu=2, sd=2, shape=1)
    sigma_prior = pm.HalfNormal('sigma', sd=2, shape=1)
    y_likelihood = pm.Normal('y', mu=mu_prior, sd=sigma_prior, observed=d1_tensor)
You could use a Dirichlet distribution as the prior for the coin bias (the mixture weights) and a NormalMixture likelihood for the two Gaussians. In the following snippet I changed the fairness of the coin and increased the number of coin tosses, but you could adjust these any way you want:
import numpy as np
import pymc3 as pm
from scipy.stats import bernoulli

# Generate data for coin flip P(C) and store in c1
theta_real = 0.2  # unknown value in a real experiment
n_sample = 2000
c1 = bernoulli.rvs(p=theta_real, size=n_sample)

# Generate data for normal distribution P(D|C) and store in d1
np.random.seed(123)
mu1 = 0
sigma1 = 0.5
mu2 = 5
sigma2 = 0.2
d1 = []
for index, item in enumerate(c1):
    if item == 0:
        d1.extend(np.random.normal(mu1, sigma1, 1))
    else:
        d1.extend(np.random.normal(mu2, sigma2, 1))

with pm.Model() as model:
    w = pm.Dirichlet('p', a=np.ones(2))
    mu = pm.Normal('mu', 0, 20, shape=2)
    sigma = np.array([0.5, 0.2])
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=np.array(d1))
    trace = pm.sample()

pm.summary(trace)
This will give you the following:
mean sd mc_error hpd_2.5 hpd_97.5 n_eff Rhat
mu__0 4.981222 0.023900 0.000491 4.935044 5.027420 2643.052184 0.999637
mu__1 -0.007660 0.004946 0.000095 -0.017388 0.001576 2481.146286 1.000312
p__0 0.213976 0.009393 0.000167 0.195602 0.231803 2245.905021 0.999302
p__1 0.786024 0.009393 0.000167 0.768197 0.804398 2245.905021 0.999302
The parameters are recovered nicely, as the traceplots (not reproduced here) also show.
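For reference, the traceplots and the table above can be reproduced with something along these lines (assuming the trace object from the model above):
import matplotlib.pyplot as plt

pm.traceplot(trace)        # posterior densities and sampling traces for mu and p
print(pm.summary(trace))   # the table shown above
plt.show()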
The above implementation will give you the posterior of theta_real, mu1 and mu2, but I could not get convergence when I added sigma1 and sigma2 as parameters to be estimated from the data (even though the prior was quite narrow):
with pm.Model() as model:
    w = pm.Dirichlet('p', a=np.ones(2))
    mu = pm.Normal('mu', 0, 20, shape=2)
    sigma = pm.HalfNormal('sigma', sd=2, shape=2)
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=np.array(d1))
    trace = pm.sample()

print(pm.summary(trace))
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (4 chains in 4 jobs)
NUTS: [sigma, mu, p]
Sampling 4 chains: 100%|██████████| 4000/4000 [00:10<00:00, 395.57draws/s]
The acceptance probability does not match the target. It is 0.883057127209148, but should be close to 0.8. Try to increase the number of tuning steps.
The gelman-rubin statistic is larger than 1.4 for some parameters. The sampler did not converge.
The estimated number of effective samples is smaller than 200 for some parameters.
mean sd mc_error ... hpd_97.5 n_eff Rhat
mu__0 1.244021 2.165433 0.216540 ... 5.005507 2.002049 212.596596
mu__1 3.743879 2.165122 0.216510 ... 5.012067 2.002040 235.750129
p__0 0.643069 0.248630 0.024846 ... 0.803369 2.004185 30.966189
p__1 0.356931 0.248630 0.024846 ... 0.798632 2.004185 30.966189
sigma__0 0.416207 0.125435 0.012517 ... 0.504110 2.009031 17.333177
sigma__1 0.271763 0.125539 0.012533 ... 0.497208 2.007779 19.217223
[6 rows x 7 columns]
Based on that, you will most likely need to reparametrize if you also want to estimate the two standard deviations from this data.
This answer is to supplement #balleveryday's answer, which suggests the Gaussian mixture model; I had some trouble getting the symmetry breaking to work, though. Admittedly, the symmetry breaking in the official example is done in the context of Metropolis-Hastings sampling, whereas I think NUTS might be a little more sensitive to encountering impossible values (not sure). Here's what worked for me:
import numpy as np
import pymc3 as pm
from scipy.stats import bernoulli
import theano.tensor as tt

# everything should reproduce
np.random.seed(123)
n_sample = 2000

# Generate data for coin flip P(C) and store in c1
theta_real = 0.2  # unknown value in a real experiment
c1 = bernoulli.rvs(p=theta_real, size=n_sample)

# Generate data for normal distribution P(D|C) and store in d1
mu1, mu2 = 0, 5
sigma1, sigma2 = 0.5, 0.2
d1 = np.empty_like(c1, dtype=np.float64)
d1[c1 == 0] = np.random.normal(mu1, sigma1, np.sum(c1 == 0))
d1[c1 == 1] = np.random.normal(mu2, sigma2, np.sum(c1 == 1))

with pm.Model() as gmm_asym:
    # mixture vector
    w = pm.Dirichlet('p', a=np.ones(2))
    # Gaussian parameters (testval helps start off ordered)
    mu = pm.Normal('mu', 0, 20, shape=2, testval=[-10, 10])
    sigma = pm.HalfNormal('sigma', sd=2, shape=2)
    # break symmetry, forcing mu[0] < mu[1]
    order_means_potential = pm.Potential('order_means_potential',
                                         tt.switch(mu[1] - mu[0] < 0, -np.inf, 0))
    # observed
    pm.NormalMixture('like', w=w, mu=mu, sigma=sigma, observed=d1)
    # reproducible sampling
    tr_gmm_asym = pm.sample(tune=2000, target_accept=0.9, random_seed=20191121)
This produces samples with the statistics
mean sd mc_error hpd_2.5 hpd_97.5 n_eff Rhat
mu__0 0.004549 0.011975 0.000226 -0.017398 0.029375 2425.487301 0.999916
mu__1 5.007663 0.008993 0.000166 4.989247 5.024692 2181.134002 0.999563
p__0 0.789983 0.009091 0.000188 0.773059 0.808062 2417.356539 0.999788
p__1 0.210017 0.009091 0.000188 0.191938 0.226941 2417.356539 0.999788
sigma__0 0.497322 0.009103 0.000186 0.480394 0.515867 2227.397854 0.999358
sigma__1 0.191310 0.006633 0.000141 0.178924 0.204859 2286.817037 0.999614
and the traceplots (not reproduced here).

How to optimize the linear coefficients for numpy arrays in a maximization function?

I need to find the coefficients for three numpy arrays that maximize my evaluation function.
I have a target array called train['target'] and three prediction arrays named array1, array2 and array3.
I want to find the best linear coefficients x, y, z for these three arrays, i.e. the ones that maximize the function
roc_auc_score(train['target'], x*array1 + y*array2 + z*array3)
The function above is largest when the prediction is closest to the target,
i.e. when x*array1 + y*array2 + z*array3 is close to train['target'].
The constraints are x, y, z >= 0 and x, y, z <= 1.
Basically, I am trying to find the weights x, y, z for the three arrays that bring
x*array1 + y*array2 + z*array3 as close as possible to train['target'].
Any help in getting this would be appreciated.
I used pulp.LpProblem('Giapetto', pulp.LpMaximize) to do the maximization. It works for ordinary numbers, integers etc., but it fails when I try to use it with arrays.
import numpy as np
import pulp

# create the LP object, set up as a maximization problem
prob = pulp.LpProblem('Giapetto', pulp.LpMaximize)

# set up decision variables
x = pulp.LpVariable('x', lowBound=0)
y = pulp.LpVariable('y', lowBound=0)
z = pulp.LpVariable('z', lowBound=0)

score = roc_auc_score(train['target'], x*array1 + y*array2 + z*array3)
prob += score
coef = x + y + z
prob += (coef == 1)

# solve the LP using the default solver
optimization_result = prob.solve()

# make sure we got an optimal solution
assert optimization_result == pulp.LpStatusOptimal

# display the results
for var in (x, y, z):
    print('Optimal weekly number of {} to produce: {:1.0f}'.format(var.name, var.value()))
I am getting an error at the line
score = roc_auc_score(train['target'], x*array1 + y*array2 + z*array3)
TypeError: unsupported operand type(s) for /: 'int' and 'LpVariable'
I can't get past this line when using arrays. I am not sure whether my approach is correct. Any help with optimizing the function would be appreciated.
When you add sums of array elements to a PuLP model, you have to use built-in PuLP constructs like lpSum to do it -- you can't just add arrays together (as you discovered).
So your score definition should look something like this:
score = pulp.lpSum([train['target'][i] - (x * array1[i] + y * array2[i] + z * array3[i]) for i in arr_ind])
A few notes about this:
[+] You didn't provide the definition of roc_auc_score so I just pretended that it equals the sum of the element-wise difference between the target array and the weighted sum of the other 3 arrays.
[+] I suspect your actual calculation for roc_auc_score is nonlinear; more on this below.
[+] arr_ind is a list of the indices of the arrays, which I created like this:
# build array index
arr_ind = range(len(array1))
[+] You also didn't include the arrays, so I created them like this:
array1 = np.random.rand(10, 1)
array2 = np.random.rand(10, 1)
array3 = np.random.rand(10, 1)
train = {}
train['target'] = np.ones((10, 1))
Here is my complete code, which compiles and executes, though I'm sure it doesn't give you the result you are hoping for, since I just guessed about target and roc_auc_score:
import numpy as np
import pulp

# create the LP object, set up as a maximization problem
prob = pulp.LpProblem('Giapetto', pulp.LpMaximize)

# dummy arrays since arrays weren't in OP code
array1 = np.random.rand(10, 1)
array2 = np.random.rand(10, 1)
array3 = np.random.rand(10, 1)

# build array index
arr_ind = range(len(array1))

# set up decision variables
x = pulp.LpVariable('x', lowBound=0)
y = pulp.LpVariable('y', lowBound=0)
z = pulp.LpVariable('z', lowBound=0)

# dummy roc_auc_score since roc_auc_score wasn't in OP code
train = {}
train['target'] = np.ones((10, 1))
score = pulp.lpSum([train['target'][i] - (x * array1[i] + y * array2[i] + z * array3[i]) for i in arr_ind])

prob += score
coef = x + y + z
prob += coef == 1

# solve the LP using the default solver
optimization_result = prob.solve()

# make sure we got an optimal solution
assert optimization_result == pulp.LpStatusOptimal

# display the results
for var in (x, y, z):
    print('Optimal weekly number of {} to produce: {:1.0f}'.format(var.name, var.value()))
Output:
Optimal weekly number of x to produce: 0
Optimal weekly number of y to produce: 0
Optimal weekly number of z to produce: 1
Process finished with exit code 0
Now, if your roc_auc_score function is nonlinear, you will have additional troubles. I would encourage you to try to formulate the score in a way that is linear, possibly using additional variables (for example, if you want the score to be an absolute value).
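As an illustration of that absolute-value idea, here is a sketch of a plain least-absolute-error fit on dummy data (it is not the OP's AUC metric, which is not a linear function of x, y, z): introduce one auxiliary variable t_i per row, constrain it from both sides so that t_i >= |target_i - prediction_i| at the optimum, and minimize the sum of the t_i.
import numpy as np
import pulp

# dummy data, as in the answer above
array1, array2, array3 = np.random.rand(10), np.random.rand(10), np.random.rand(10)
target = np.ones(10)
n = len(target)

prob = pulp.LpProblem('abs_fit', pulp.LpMinimize)
x = pulp.LpVariable('x', lowBound=0, upBound=1)
y = pulp.LpVariable('y', lowBound=0, upBound=1)
z = pulp.LpVariable('z', lowBound=0, upBound=1)
t = [pulp.LpVariable('t_{}'.format(i), lowBound=0) for i in range(n)]

# objective: minimize the sum of absolute errors
prob += pulp.lpSum(t)

# two linear constraints per row give t_i >= |target_i - prediction_i|
for i in range(n):
    pred = x * float(array1[i]) + y * float(array2[i]) + z * float(array3[i])
    prob += t[i] >= float(target[i]) - pred
    prob += t[i] >= pred - float(target[i])
prob += x + y + z == 1

prob.solve()
print(x.value(), y.value(), z.value())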

Plot random points a specified distance apart

I'm trying to come up with a function that plots n points inside the unit circle, but I need them to be sufficiently spread out.
i.e. something that looks like points scattered fairly evenly over the disc (example figure not shown here).
Is it possible to write a function with two parameters, n (the number of points) and min_d (the minimum distance apart), such that the points are:
a) roughly equidistant, and
b) no pairwise distance is smaller than the given min_d?
The problem with sampling from a uniform distribution is that two points could end up almost on top of each other, which I do not want to happen. I need this kind of input for a network diagram representing node clusters.
EDIT: I have found an answer to a) here: Generator of evenly spaced points in a circle in python, but b) still eludes me.
At the time this answer was provided, the question asked for random numbers. This answer thus gives a solution drawing random numbers. It ignores any edits made to the question afterwards.
One may simply draw random points and, for each one, check whether the minimum-distance condition is fulfilled. If not, the point is discarded. This can be repeated until a list is filled with enough points or some break condition is met.
import numpy as np
import matplotlib.pyplot as plt

class Points():
    def __init__(self, n=10, r=1, center=(0, 0), mindist=0.2, maxtrials=1000):
        self.success = False
        self.n = n
        self.r = r
        self.center = np.array(center)
        self.d = mindist
        self.points = np.ones((self.n, 2)) * 10 * r + self.center
        self.c = 0
        self.trials = 0
        self.maxtrials = maxtrials
        self.tx = "rad: {}, center: {}, min. dist: {} ".format(self.r, center, self.d)
        self.fill()

    def dist(self, p, x):
        if len(p.shape) > 1:
            return np.sqrt(np.sum((p - x)**2, axis=1))
        else:
            return np.sqrt(np.sum((p - x)**2))

    def newpoint(self):
        x = (np.random.rand(2) - 0.5) * 2
        x = x * self.r - self.center
        if self.dist(self.center, x) < self.r:
            self.trials += 1
            if np.all(self.dist(self.points, x) > self.d):
                self.points[self.c, :] = x
                self.c += 1

    def fill(self):
        while self.trials < self.maxtrials and self.c < self.n:
            self.newpoint()
        self.points = self.points[self.dist(self.points, self.center) < self.r, :]
        if len(self.points) == self.n:
            self.success = True
        self.tx += "\n{} of {} found ({} trials)".format(len(self.points), self.n, self.trials)

    def __repr__(self):
        return self.tx

center = (0, 0)
radius = 1
p = Points(n=40, r=radius, center=center)

fig, ax = plt.subplots()
x, y = p.points[:, 0], p.points[:, 1]
plt.scatter(x, y)
ax.add_patch(plt.Circle(center, radius, fill=False))
ax.set_title(p)
ax.relim()
ax.autoscale_view()
ax.set_aspect("equal")
plt.show()
If the number of points has to be fixed, you may try to find this number of points for decreasing minimum distances until the desired number is reached.
In the following case we look for 60 points and start with a minimum distance of 0.6, which is decreased stepwise by 0.05 until a solution is found. Note that this will not necessarily be the optimum solution, as only maxtrials retries are made in each step. Increasing maxtrials will of course bring us closer to the optimum, but requires more runtime.
center = (0, 0)
radius = 1
mindist = 0.6
step = 0.05

success = False
while not success:
    mindist -= step
    p = Points(n=60, r=radius, center=center, mindist=mindist)
    print(p)
    if p.success:
        break

fig, ax = plt.subplots()
x, y = p.points[:, 0], p.points[:, 1]
plt.scatter(x, y)
ax.add_patch(plt.Circle(center, radius, fill=False))
ax.set_title(p)
ax.relim()
ax.autoscale_view()
ax.set_aspect("equal")
plt.show()
Here the solution is found for a minimum distance of 0.15.

Can I implement a gradient descent for arbitrary convex loss function?

I have a loss function I would like to try and minimize:
import numpy as np

def lossfunction(X, b, lambs):
    B = b.reshape(X.shape)
    penalty = np.linalg.norm(B, axis=1)**0.5
    return np.linalg.norm(np.dot(X, B) - X) + lambs * penalty.sum()
Gradient descent, or similar methods, might be useful. I can't calculate the gradient of this function analytically, so I am wondering how I can numerically calculate the gradient for this loss function in order to implement a descent method.
NumPy has a gradient function, but it requires me to pass a scalar field evaluated at predetermined points.
You could try scipy.optimize.minimize.
For your case a sample call would be (x0 is the initial guess for X, which minimize treats as the variable to optimize over):
from scipy.optimize import minimize
result = minimize(lossfunction, x0, args=(b, lambs), method='Nelder-Mead')
You could estimate the derivative numerically by a central difference:
def derivative(fun, X, b, lambs, h):
    return (fun(X + 0.5*h, b, lambs) - fun(X - 0.5*h, b, lambs)) / h
And use it like this:
# assign values to X, b, lambs
# set the value of h
h = 0.001
print(derivative(lossfunction, X, b, lambs, h))
The code above is valid for a one-dimensional X; some modifications are needed to account for a multidimensional vector X:
def gradient(fun, X, b, lambs, h):
    res = []
    for i in range(len(X)):
        t1 = list(X)
        t1[i] = t1[i] + 0.5*h
        t2 = list(X)
        t2[i] = t2[i] - 0.5*h
        res = res + [(fun(t1, b, lambs) - fun(t2, b, lambs)) / h]
    return res
Forgive the naivety of the code, I barely know how to write any Python :-)
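If you then want an actual descent loop, a minimal sketch could look like this. It is a NumPy variant of the central-difference idea above rather than code from the question: num_gradient and gradient_descent are illustrative names, lr and n_iter are untuned example values, and the objective at the bottom is a toy function, not the loss from the question.
import numpy as np

def num_gradient(fun, x, h=1e-5, args=()):
    # central-difference gradient of a scalar function of a 1-d numpy array
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = 0.5 * h
        g[i] = (fun(x + e, *args) - fun(x - e, *args)) / h
    return g

def gradient_descent(fun, x0, args=(), lr=0.01, n_iter=500):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - lr * num_gradient(fun, x, args=args)   # step opposite the gradient
    return x

# toy check: minimizing ||x - 1||^2 should drive x towards all ones
print(gradient_descent(lambda x: np.sum((x - 1.0)**2), np.zeros(3)))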