I was looking for some help to code the equation of an ellipse within WinBUGS. I need to form a bivariate ellipse using the p1's in my data. I tried to use the equation (X - mu)' Sigma^-1 (X - mu), where X is the bivariate normal variable, mu is the vector of means, and Sigma^-1 is the inverse of the variance-covariance matrix. In my example the p1's are bivariate normal with mean gamma and precision matrix T (the inverse of sigma2). The commented-out lines marked with ** show what I tried, but it does not work. Here's the WinBUGS code:
model
{
for (j in 1 : Nf)
{
p1[j, 1:2 ] ~ dmnorm(gamma[1:2 ], T[1:2 ,1:2 ])
# gamma is the MVN mean or mean of logit (p)
#T is the precision matrix inverse sigma of MVN or logit(p)
# precision equals reciprocal of variance
# precision matrix is the matrix inverse of the covariance matrix
for (i in 1:2)
{
logit(p[j,i])<-p1[j,i]
Y[j,i] ~ dbin(p[j,i],n)
wp[j,i] <- p[j,i]*dbw[j,i]
}
sumwp[j] <- sum(wp[j, ])
#X_mu[j,1:2]<-p1[j,1:2]-gamma[1:2]**
#ell[j]<-((t(p1[j,1:2]-gamma[1:2]))*T[1:2,1:2]*(p1[j.1:2]-gamma[1:2]))**
X_mu[j,1]<-p1[j,1]-gamma[1]
X_mu[j,2]<-p1[j,2]-gamma[2]
T1[j,1]<-inprod(T[1,],X_mu[j,])
T1[j,2]<-inprod(T[2,],X_mu[j,])
ell[j,1]<-inprod2(X_mu[j,1],T1[j,1])
ell[j,2]<-inprod2(X_mu[j,2],T1[j,2])
#ell[j]<-((t(p1[j,1:2]-gamma[1:2]))*T
}
# Hyper-priors:
gamma[1:2] ~ dmnorm(mn[1:2],prec[1:2 ,1:2])
T[1:2 ,1:2] ~ dwish(R[1:2 ,1:2], 2)
sigma2[1:2, 1:2] <-inverse(T[,])
#sigma2 is the covariance matrix
rho <- sigma2[1,2]/sqrt(sigma2[1,1]*sigma2[2,2])
#rho is the correlation coefficient
for (i in 1:2)
{
expit[i]<-exp(gamma[i])/(1+exp(gamma[i]))
}
}
# Data
list(Nf =20, mn=c(-0.69, -1.06), n=60,
prec = structure(.Data = c(.001, 0,
0, .001),.Dim = c(2, 2)),
R = structure(.Data = c(.001, 0,
0, .001),.Dim = c(2, 2)),
Y= structure(.Data=c(32,13,
32,12,
10,4,
28,11,
10,5,
25,10,
4,1,
16,5,
28,10,
21,7,
19,9,
18,12,
31,12,
13,3,
10,4,
18,7,
3,2,
27,5,
8,1,
8,4),.Dim = c(20, 2)),
dbw=structure(.Data=c(0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25,
0.25,0.25
),.Dim=c(20,2))
)
The * operator won't multiply matrices and vectors, just scalars. Unfortunately there's no general matrix product function in WinBUGS. Instead you could use two calls to the "inprod" function (or the faster "inprod2") to take the inner product of each row of T with (X - mu), giving a new (temporary) vector node. Then use another inprod to take the inner product of that vector with (X - mu), giving your ell[j]. Or, if speed is a concern, just write the quadratic form out by hand; according to some reports this may be faster.
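For instance, a sketch of what the corrected block inside the j loop could look like, reusing the names from the question (untested; the final commented line is the hand-written 2 x 2 version):
X_mu[j,1] <- p1[j,1] - gamma[1]
X_mu[j,2] <- p1[j,2] - gamma[2]
T1[j,1] <- inprod(T[1,], X_mu[j,])
T1[j,2] <- inprod(T[2,], X_mu[j,])
ell[j] <- inprod(X_mu[j,], T1[j,])
# or, written out by hand:
# ell[j] <- X_mu[j,1]*(T[1,1]*X_mu[j,1] + T[1,2]*X_mu[j,2]) + X_mu[j,2]*(T[2,1]*X_mu[j,1] + T[2,2]*X_mu[j,2])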
I am new to scala and I desperately need some guidance on the following problem:
I have a dataframe like the one below (some elements may be NULL)
val dfDouble = Seq(
(1.0, 1.0, 1.0, 3.0),
(1.0, 2.0, 0.0, 0.0),
(1.0, 3.0, 1.0, 1.0),
(1.0, 4.0, 0.0, 2.0)).toDF("m1", "m2", "m3", "m4")
dfDouble.show
+---+---+---+---+
| m1| m2| m3| m4|
+---+---+---+---+
|1.0|1.0|1.0|3.0|
|1.0|2.0|0.0|0.0|
|1.0|3.0|1.0|1.0|
|1.0|4.0|0.0|2.0|
+---+---+---+---+
I need to get the following statistics out of this dataframe:
a vector that contains the mean of each column (some elements might be NULL, and I want to calculate the mean using only the non-NULL elements); I would also like to refer to each element of the vector by name; for example, vec_mean["m1_mean"] would return the first element
vec_mean: Vector(m1_mean, m2_mean, m3_mean, m4_mean)
a variance-covariance matrix that is (4 x 4), where the diagonals are var(m1), var(m2),..., and the off-diagonals are cov(m1,m2), cov(m1,m3) ... Here, I would also like to only use the non-NULL elements in the variance-covariance calculation
A vector that contains the number of non-NULL values for each column
vec_n: Vector(m1_n, m2_n, m3_n, m4_n)
A vector that contains the standard deviation of each column
vec_stdev: Vector(m1_stde, m2_stde, m3_stde, m4_stde)
In R I would convert everything to a matrix and then the rest is easy. But in Scala, I'm unfamiliar with matrices, and there are apparently multiple types of matrices, which are confusing (DenseMatrix, IndexedMatrix, etc.)
Edited: apparently it makes a difference whether the content of the dataframe is Double or Int. Revised the elements to be Double.
Used the following command per the suggested answer and it worked!
val rdd = dfDouble0.rdd.map {
case a: Row => (0 until a.length).foldRight(Array[Double]())((b, acc) =>
{ val k = a.getAs[Double](b)
if(k == null)
acc.+:(0.0)
else acc.+:(k)}).map(_.toDouble)
}
You can work with Spark's RowMatrix. It has operations such as computing the covariance matrix using each row as an observation, the column means, variances, etc. The only thing you have to know is how to build it from a DataFrame.
It turns out that a DataFrame in Spark contains a schema that represents the type of information that can be stored in it, and it is not only an array of floating point numbers. So the first thing is to transform this DF to an RDD of vectors (dense vectors in this case).
Having this DF:
val df = Seq(
(1, 1, 1, 3),
(1, 2, 0, 0),
(1, 3, 1, 1),
(1, 4, 0, 2),
(1, 5, 0, 1),
(2, 1, 1, 3),
(2, 2, 1, 1),
(2, 3, 0, 0)).toDF("m1", "m2", "m3", "m4")
Convert it to an RDD of dense vectors. There must be dozens of ways of doing this. One could be:
val rdd = df.rdd.map {
case a: Row =>
(0 until a.length).foldRight(Array[Int]())((b, acc) => {
val k = a.getAs[Int](b)
if(k == null) acc.+:(0) else acc.+:(k)
}).map(_.toDouble)
}
As you can see in your IDE, the inferred type is RDD[Array[Double]]. Now convert this to an RDD[DenseVector]. As simple as doing:
val rowsRdd = rdd.map(Vectors.dense(_))
And now you can build your Matrix:
val mat: RowMatrix = new RowMatrix(rowsRdd)
Once you have the matrix, you can easily compute the different metrics per column:
println("Mean: " + mat.computeColumnSummaryStatistics().mean)
println("Variance: " + mat.computeColumnSummaryStatistics().variance)
It gives:
Mean: [1.375,2.625,0.5,1.375]
Variance:
[0.26785714285714285,1.9821428571428572,0.2857142857142857,1.4107142857142858]
You can read more about the capabilities of Spark and these distributed types in the docs: https://spark.apache.org/docs/latest/mllib-data-types.html#data-types-rdd-based-api
You can also compute the covariance matrix, do the SVD, etc.
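For the remaining statistics from the question, here is a rough sketch building on the mat object above (a caveat: numNonzeros counts non-zero entries, which only approximates the non-NULL counts here because NULLs were mapped to 0.0, so genuine zeros in the data would be missed):
val summary = mat.computeColumnSummaryStatistics()
val cov = mat.computeCovariance()                      // 4 x 4 variance-covariance matrix
val counts = summary.numNonzeros                       // per-column non-zero counts (see caveat above)
val stdevs = summary.variance.toArray.map(math.sqrt)   // per-column standard deviations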
I have a system given by this recursive relationship: x_t = A_t x_{t-1} + b_t. I wish to compute x_t for all t, with A_t, b_t and x_0 given. Is there a built-in function for that? If I use a loop it would be extremely slow. Thanks!
There is sort of a way. Let's say you have your A matrices in a 3D tensor with shape (T, N, N), where T is the total number of time steps and N is the size of your vector. Similarly, B values are in a 2D tensor (T, N). The first step in the computation would be:
x1 = A[0] @ x0 + B[0]
Where @ denotes the matrix product. But you can convert this into a single matrix product. Suppose we add a value 1 at the end of x0, and we call that x0p (for prime):
x0p = tf.concat([x0, [1]], axis=0)
And now we build a new 3D tensor Ap with shape (T, N+1, N+1), such that for each A[i] we concatenate B[i] as a new column, and then we add a row with N zeros and a single one at the end:
AwithB = tf.concat([A, tf.expand_dims(B, 2)], axis=2)
AnewRow = tf.concat([tf.zeros((T, 1, N), A.dtype), tf.ones((T, 1, 1), A.dtype)], axis=2)
Ap = tf.concat([AwithB, AnewRow], axis=1)
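To see why this works, each augmented matrix applies one affine step as a single linear map on the extended state:
[ x_new ]   [ A[i]  B[i] ]   [ x_old ]
[   1   ] = [  0      1  ] @ [   1   ]
so chaining the Ap matrices chains the whole recurrence.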
As it turns out, you can now say:
x1p = Ap[0] @ x0p
And therefore:
x2p = Ap[1] @ x1p = Ap[1] @ Ap[0] @ x0p
So we just need the product of all the matrices in Ap along the first dimension, with later time steps multiplied on the left. Unfortunately, there does not seem to be a direct operation to compute that in TensorFlow, but you can do it relatively fast with tf.scan:
Ap_prod = tf.scan(lambda a, x: tf.matmul(x, a), Ap)[-1]  # accumulates Ap[t] @ ... @ Ap[0]
And with that you just have to do:
xtp = Ap_prod @ x0p
Here is a proof of concept (the code is tweaked to support single examples and batches, either in the A and B values or in the x)
import tensorflow as tf
def compute_state(a, b, x):
s = tf.shape(a)
t = s[-3]
n = s[-1]
# Add final 1 to x
xp = tf.concat([x, tf.ones_like(x[..., :1])], axis=-1)
# Add B column to A
a_b = tf.concat([a, tf.expand_dims(b, axis=-1)], axis=-1)
# Make new final row for A
a_row = tf.concat([tf.zeros_like(a[..., :1, :]),
tf.ones_like(a[..., :1, :1])], axis=-1)
# Add new row to A
ap = tf.concat([a_b, a_row], axis=-2)
# Compute matrix product reduction
ap_prod = tf.scan(lambda a, x: tf.matmul(x, a), ap)[..., -1, :, :]
# Compute final result
outp = tf.linalg.matvec(ap_prod, xp)
return outp[..., :-1]
#Test
tf.random.set_seed(0)
a = tf.random.uniform((10, 5, 5), -1, 1)
b = tf.random.uniform((10, 5), -1, 1)
x = tf.random.uniform((5,), -1, 1)
y = compute_state(a, b, x)
# Also works with batches of (a, b) or x
a = tf.random.uniform((100, 10, 5, 5), -1, 1)
b = tf.random.uniform((100, 10, 5), -1, 1)
x = tf.random.uniform((100, 5), -1, 1)
y = compute_state(a, b, x)
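For a rough sanity check of compute_state against the straightforward loop (a sketch covering only the unbatched case; the tolerance is illustrative):
import numpy as np
import tensorflow as tf

np.random.seed(0)
A = np.random.uniform(-1, 1, (10, 5, 5)).astype(np.float32)
B = np.random.uniform(-1, 1, (10, 5)).astype(np.float32)
x0 = np.random.uniform(-1, 1, 5).astype(np.float32)

# reference: the slow loop x_t = A_t x_{t-1} + b_t
ref = x0
for t in range(A.shape[0]):
    ref = A[t] @ ref + B[t]

out = compute_state(tf.constant(A), tf.constant(B), tf.constant(x0)).numpy()
print(np.allclose(ref, out, atol=1e-4))  # should print True up to float32 error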
I want to create an n*n symmetric matrix and train it in TensorFlow. Effectively I should only train (n+1)*n/2 parameters. How should I do this?
I saw some previous threads which suggest do the following:
X = tf.Variable(tf.random_uniform([d,d], minval=-.1, maxval=.1, dtype=tf.float64))
X_symm = 0.5 * (X + tf.transpose(X))
However, this means I have to train n*n variables, not n*(n+1)/2 variables.
Even if there is no built-in function to achieve this, a snippet of self-written code would help!
Thanks!
You can use tf.matrix_band_part(input, 0, -1) to create an upper triangular matrix from a square one, so this code would allow you to train on n(n+1)/2 variables, although it still has you create n*n:
X = tf.Variable(tf.random_uniform([d,d], minval=-.1, maxval=.1, dtype=tf.float64))
X_upper = tf.matrix_band_part(X, 0, -1)
X_symm = 0.5 * (X_upper + tf.transpose(X_upper))
Referring to gdelab's answer: in TensorFlow 2.x, you have to use the following code.
X_upper = tf.linalg.band_part(X, 0, -1)
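As a quick illustration of why only n(n+1)/2 parameters are effectively trained (a sketch of mine for TF 2.x eager mode, not from the answers above): the gradient of any loss built from X_symm never reaches the entries of X below the diagonal.
import tensorflow as tf

d = 3
X = tf.Variable(tf.random.uniform([d, d], minval=-.1, maxval=.1, dtype=tf.float64))
with tf.GradientTape() as tape:
    X_upper = tf.linalg.band_part(X, 0, -1)
    X_symm = 0.5 * (X_upper + tf.transpose(X_upper))
    loss = tf.reduce_sum(X_symm ** 2)
grad = tape.gradient(loss, X)
print(grad.numpy())  # the strictly lower-triangular part should be all zeros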
gdelab's answer is correct and will work, since a neural network can adjust the 0.5 factor by itself. I aimed for a solution where the neural network actually only has (n+1)*n/2 output neurons. The following function transforms these into a symmetric matrix:
def create_symmetric_matrix(x, n):
    # mirror the entries after the first n and append them, giving n*n values per row
    x_rev = tf.reverse(x[:, n:], [1])
    xc = tf.concat([x, x_rev], axis=1)
    # reshape each row into an n x n matrix
    x_res = tf.reshape(xc, [-1, n, n])
    # keep the upper triangle (with diagonal); build the lower triangle from its transpose with a zeroed diagonal
    x_upper_triangular = tf.linalg.band_part(x_res, 0, -1)
    x_lower_triangular = tf.linalg.set_diag(
        tf.transpose(x_upper_triangular, perm=[0, 2, 1]),
        tf.zeros([tf.shape(x)[0], n], dtype=tf.float32))
    return x_upper_triangular + x_lower_triangular
with x a tensor of shape [batch, n*(n+1)/2] and n the size of the output matrix.
The code is inspired by tfp.math.fill_triangular.
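A small usage example of my own (the expected output follows the fill_triangular-style layout; for n = 3 a row of n*(n+1)/2 = 6 inputs is needed):
import tensorflow as tf

x = tf.constant([[1., 2., 3., 4., 5., 6.]])   # shape [batch=1, 6]
m = create_symmetric_matrix(x, 3)
print(m[0].numpy())
# expected:
# [[1. 2. 3.]
#  [2. 5. 6.]
#  [3. 6. 4.]]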
I've implemented the Bayesian Probabilistic Matrix Factorization algorithm using pymc3 in Python. I also implemented its precursor, Probabilistic Matrix Factorization (PMF). See my previous question for a reference to the data used here.
I'm having trouble drawing MCMC samples using the NUTS sampler. I initialize the model parameters using the MAP from PMF, and the hyperparameters using Gaussian random draws sprinkled around 0. However, I get a PositiveDefiniteError when setting up the step object for the sampler. I've verified that the MAP estimate from PMF is reasonable, so I expect it has something to do with the way the hyperparameters are being initialized. Here is the PMF model:
import pymc3 as pm
import numpy as np
import pandas as pd
import theano
import scipy as sp
data = pd.read_csv('jester-dense-subset-100x20.csv')
n, m = data.shape
test_size = m / 10
train_size = m - test_size
train = data.copy()
train.ix[:,train_size:] = np.nan # remove test set data
train[train.isnull()] = train.mean().mean() # mean value imputation
train = train.values
test = data.copy()
test.ix[:,:train_size] = np.nan # remove train set data
test = test.values
# Low precision reflects uncertainty; prevents overfitting
alpha_u = alpha_v = 1/np.var(train)
alpha = np.ones((n,m)) * 2 # fixed precision for likelihood function
dim = 10 # dimensionality
# Specify the model.
with pm.Model() as pmf:
pmf_U = pm.MvNormal('U', mu=0, tau=alpha_u * np.eye(dim),
shape=(n, dim), testval=np.random.randn(n, dim)*.01)
pmf_V = pm.MvNormal('V', mu=0, tau=alpha_v * np.eye(dim),
shape=(m, dim), testval=np.random.randn(m, dim)*.01)
pmf_R = pm.Normal('R', mu=theano.tensor.dot(pmf_U, pmf_V.T),
tau=alpha, observed=train)
# Find mode of posterior using optimization
start = pm.find_MAP(fmin=sp.optimize.fmin_powell)
And here is BPMF:
n, m = data.shape
dim = 10 # dimensionality
beta_0 = 1 # scaling factor for lambdas; unclear on its use
alpha = np.ones((n,m)) * 2 # fixed precision for likelihood function
logging.info('building the BPMF model')
std = .05 # how much noise to use for model initialization
with pm.Model() as bpmf:
# Specify user feature matrix
lambda_u = pm.Wishart(
'lambda_u', n=dim, V=np.eye(dim), shape=(dim, dim),
testval=np.random.randn(dim, dim) * std)
mu_u = pm.Normal(
'mu_u', mu=0, tau=beta_0 * lambda_u, shape=dim,
testval=np.random.randn(dim) * std)
U = pm.MvNormal(
'U', mu=mu_u, tau=lambda_u, shape=(n, dim),
testval=np.random.randn(n, dim) * std)
# Specify item feature matrix
lambda_v = pm.Wishart(
'lambda_v', n=dim, V=np.eye(dim), shape=(dim, dim),
testval=np.random.randn(dim, dim) * std)
mu_v = pm.Normal(
'mu_v', mu=0, tau=beta_0 * lambda_v, shape=dim,
testval=np.random.randn(dim) * std)
V = pm.MvNormal(
'V', mu=mu_v, tau=lambda_v, shape=(m, dim),
testval=np.random.randn(m, dim) * std)
# Specify rating likelihood function
R = pm.Normal(
'R', mu=theano.tensor.dot(U, V.T), tau=alpha,
observed=train)
# `start` is the start dictionary obtained from running find_MAP for PMF.
for key in bpmf.test_point:
if key not in start:
start[key] = bpmf.test_point[key]
with bpmf:
step = pm.NUTS(scaling=start)
At the last line, I get the following error:
PositiveDefiniteError: Scaling is not positive definite. Simple check failed. Diagonal contains negatives. Check indexes [ 0 2 ... 2206 2207 ]
As I understand it, I can't use find_MAP with models that have hyperpriors like BPMF. This is why I'm attempting to initialize with the MAP values from PMF, which uses point estimates for the parameters on U and V rather than parameterized hyperpriors.
Unfortunately the Wishart distribution is non-functional. I recently added a warning here: https://github.com/pymc-devs/pymc3/commit/642f63973ec9f807fb6e55a0fc4b31bdfa1f261e
See here for more discussions on this tricky distribution: https://github.com/pymc-devs/pymc3/issues/538
You could confirm that that's the source by fixing the covariance matrix. If that's the case, I'd try using the LKJ prior distribution: https://github.com/pymc-devs/pymc3/blob/master/pymc3/examples/LKJ_correlation.py
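For reference, a rough sketch of what an LKJ-based prior on the user-side covariance could look like (this assumes a newer pymc3 where pm.LKJCholeskyCov and pm.expand_packed_triangular exist; the variable names here are mine, not from the question):
import numpy as np
import pymc3 as pm

dim, n = 10, 100
with pm.Model() as model:
    # LKJ prior on the correlation structure, half-Cauchy priors on the scales
    packed_chol_u = pm.LKJCholeskyCov('packed_chol_u', n=dim, eta=2.,
                                      sd_dist=pm.HalfCauchy.dist(2.5))
    chol_u = pm.expand_packed_triangular(dim, packed_chol_u)
    # user feature matrix gets its MvNormal prior through the Cholesky factor
    U = pm.MvNormal('U', mu=np.zeros(dim), chol=chol_u, shape=(n, dim))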
I have a conditional probability of z given m, p(z|m), where the coefficients are chosen so that the integral over z in [0, 1.5] and over m in [18, 28] equals one.
def p(z,m):
if (m<21.25):
E = { 'ft':0.55, 'alpha': 2.99, 'z0':0.191, 'km':0.089, 'kt':0.25 }
S = { 'ft':0.39, 'alpha': 2.15, 'z0':0.121, 'km':0.093, 'kt':-0.175 }
I={ 'ft':0.06, 'alpha': 1.77, 'z0':0.045, 'km':0.096, 'kt':-0.9196 }
Evalue=E['ft']*np.exp(-1*E['kt']*(m-18))*z**E['alpha']*np.exp(-1*(z/(E['z0']+E['km']*(m-18)))**E['alpha'])
Svalue=S['ft']*np.exp(-1*S['kt']*(m-18))*z**S['alpha']*np.exp(-1*(z/(S['z0']+S['km']*(m-18)))**S['alpha'])
Ivalue=I['ft']*np.exp(-1*I['kt']*(m-18))*z**I['alpha']*np.exp(-1*(z/(I['z0']+I['km']*(m-18)))**I['alpha'])
value=Evalue+Svalue+Ivalue
elif(m>=21.25):
E = { 'ft':0.25, 'alpha': 1.957, 'z0':0.321, 'km':0.196, 'kt':0.565 }
S = { 'ft':0.61, 'alpha': 1.598, 'z0':0.291, 'km':0.167, 'kt':0.155 }
I = { 'ft':0.14, 'alpha': 0.964, 'z0':0.170, 'km':0.129, 'kt':0.1759 }
Evalue=E['ft']*np.exp(-1*E['kt']*(m-18))*z**E['alpha']*np.exp(-1*(z/(E['z0']+E['km']*(m-18)))**E['alpha'])
Svalue=S['ft']*np.exp(-1*S['kt']*(m-18))*z**S['alpha']*np.exp(-1*(z/(S['z0']+S['km']*(m-18)))**S['alpha'])
Ivalue=I['ft']*np.exp(-1*I['kt']*(m-18))*z**I['alpha']*np.exp(-1*(z/(I['z0']+I['km']*(m-18)))**I['alpha'])
value=Evalue+Svalue+Ivalue
return value
I would like to draw samples from this distribution, so I made a grid of points in the z-m plane to estimate the cumulative distribution. The cumulative integral over m reaches one, but the cumulative integral over z doesn't reach one at the edge. I don't know why it doesn't converge to one.
import numpy as np
from scipy import integrate, interpolate
grid_m = np.linspace(18, 28, 1000)
grid_z = np.linspace(0, 1.5, 1000)
dz = np.diff(grid_z[:2])
# get cdf on grid, use cumtrapz
prob_zgm=np.empty((grid_z.shape[0], grid_m.shape[0]),float)
for i in range(grid_z.shape[0]):
for j in range(grid_m.shape[0]):
prob_zgm[i,j]=p(grid_z[i],grid_m[j])
pr = np.column_stack((np.zeros(prob_zgm.shape[0]),prob_zgm))
dm = np.diff(grid_m[:2])
cdf_zgm = integrate.cumtrapz(pr, dx=dm, axis=1)
cdf = integrate.cumtrapz(pr, dx=dz, axis=0)
Which assumption might cause this inconsistency, or am I computing something wrongly?
Update: here is what the cumulative distribution cdf_zgm looks like (figure omitted).
Then, to get the inverse of the probability (the quantile function), this is the approach I used:
# fix bounds of cdf_zgm
cdf_zgm[:, 0] = 0
cdf_zgm[:, -1] = 1
#Interpolate the data using a linear spline to "grid_q" samples
grid_q = np.linspace(0, 1, 200)
grid_qm = np.empty((len(grid_m), len(grid_q)), float)
for i in range(len(grid_m)):
grid_qm[i] = interpolate.interp1d(cdf_zgm[i], grid_z)(grid_q)
# build 2d interpolation for z as function of (q,m)
z_interp = interpolate.interp2d(grid_q, grid_m, grid_qm)
#sample magnitude
ng=20000
r = dist_m.rvs(ng)
rvs_u = np.random.rand(ng)
rvs_z = np.asarray([z_interp(rvs_u[i], r[i]) for i in range(len(rvs_u))]).ravel()
Is it the right approach to fix the boundaries of the CDF to one?
I don't know what's wrong with that code. But here are a couple of different ideas to try:
(1) Just sum the array elements instead of trying to compute the numerical integrals. It is simpler that way. (Summing the array elements is essentially computing a rectangle rule approximation which, as it turns out, is actually more accurate than the trapezoidal rule.)
(2) Instead of trying to create a whole 2-d array at once, write a function which creates just a 1-d slice of p(z | m) for a given value of m. Then just sum those elements to get the cumulative probability.
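For suggestion (2), a hedged sketch using the p(z, m) function from the question (the function name and grid sizes here are mine): build a 1-d slice of p(z | m) for a fixed m, turn its cumulative sum into a normalized CDF, and invert it on the grid.
import numpy as np

def sample_z_given_m(m, n_samples=1000, n_grid=1000):
    grid_z = np.linspace(0.0, 1.5, n_grid)
    pz = np.array([p(z, m) for z in grid_z])   # 1-d slice of p(z | m) on the z grid
    cdf = np.cumsum(pz)
    cdf /= cdf[-1]                             # normalize so the CDF ends exactly at 1
    u = np.random.rand(n_samples)
    return np.interp(u, cdf, grid_z)           # inverse-CDF sampling on the grid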