From what I know, for any square real matrix A, a matrix generated with the following should be a positive semidefinite (PSD) matrix:
Q = A @ A.T
I have this matrix A, which is sparse and not symmetric. However, regardless of the properties of A, I think the matrix Q should be PSD.
However, upon using np.linalg.eigvals, I get the following:
np.sort(np.linalg.eigvals(Q))
>>>array([-1.54781185e+01+0.j, -7.27494242e-04+0.j, 2.09363431e-04+0.j, ...,
3.55351888e+15+0.j, 5.82221014e+17+0.j, 1.78954577e+18+0.j])
I think the complex eigenvalues result from the numerical instability of the operation. Using scipy.linalg.eigh, which takes advantage of the fact that the matrix is symmetric, gives,
np.sort(eigh(Q, eigvals_only=True))
>>>array([-3.10854357e+01, -6.60108485e+00, -7.34059692e-01, ...,
3.55351888e+15, 5.82221014e+17, 1.78954577e+18])
which again, contains negative eigenvalues.
My goal is to perform a Cholesky decomposition of the matrix Q; however, I keep getting an error message saying that Q is not positive definite, which is confirmed by the negative eigenvalues shown above.
Does anyone know why the matrix is not PSD? Thank you.
Of course that's a numerical problem, but I would say that Q is probably still PSD.
Notice that the largest eigenvalue is about 1.7e18 while the most negative one is only about -3.1e1, so the ratio of their magnitudes is on the order of 1e17. In double precision you will probably find that min(L) + max(L) == max(L) returns True, meaning that the minimum value is negligible compared to the maximum.
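As a quick check of that claim (a tiny sketch using the two eigenvalues quoted above; the variable names are only for illustration):
lam_min, lam_max = -3.1e1, 1.7e18
print(lam_min + lam_max == lam_max)  # True: the gap between adjacent float64 values near 1.7e18 is a few hundred, so adding -31 changes nothing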
What I would suggest is to compute the Cholesky factorization of a slightly shifted version of the matrix, e.g.:
# shift Q by a small multiple of the identity, scaled by Q's norm and machine epsilon
d = np.linalg.norm(Q) * np.finfo(Q.dtype).eps
I = np.eye(len(Q))
L = np.linalg.cholesky(Q + d * I)
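As a quick sanity check (using the factor L computed above), you can verify that the shift is negligible at the scale of Q; I would expect the relative reconstruction error to be around machine epsilon:
err = np.linalg.norm(L @ L.T - Q) / np.linalg.norm(Q)
print(err)  # d is only ~eps * ||Q||, so this should be tiny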
I know that the Hessian matrix is used in a kind of second-derivative test for functions involving more than one independent variable. How does one find the maximum or minimum of such a function? Is it found using the eigenvalues of the Hessian matrix or its principal minors?
You should have a look here:
https://en.wikipedia.org/wiki/Second_partial_derivative_test
For an n-dimensional function f, find an x where the gradient grad f = 0. This is a critical point.
Then, the 2nd derivatives tell whether x marks a local minimum, a maximum or a saddle point.
The Hessian H is the matrix of all second partial derivatives of f.
1) For the 2D case, the determinant and the minors of the Hessian are relevant.
2) For the nD case, it might involve computing the eigenvalues of the Hessian H (if H is invertible) as part of checking H for being positive (or negative) definite.
In fact, the shortcut in 1) is generalized by 2).
For numeric calculations, some kind of optimization strategy can be used for finding x where grad f = 0.
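To make that concrete, here is a minimal NumPy/SciPy sketch for an assumed example function f(x, y) = x^2 + x*y + 3*y^2 (the function, its gradient and its Hessian below are made up purely for illustration):
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return x**2 + x*y + 3*y**2

def grad(v):
    x, y = v
    return np.array([2*x + y, x + 6*y])

# 1) find a critical point, i.e. a point where grad f = 0
res = minimize(f, x0=np.array([1.0, 1.0]), jac=grad)
x_crit = res.x

# 2) classify it via the eigenvalues of the Hessian (constant for this quadratic)
H = np.array([[2.0, 1.0],
              [1.0, 6.0]])
eigs = np.linalg.eigvalsh(H)

if np.all(eigs > 0):
    print(x_crit, "is a local minimum")
elif np.all(eigs < 0):
    print(x_crit, "is a local maximum")
else:
    print(x_crit, "is a saddle point (or the test is inconclusive)")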
I'm trying to use linalg to find $P^{500}$, where $P$ is a 9x9 matrix, but Python displays the following:
Matrix full of inf
I think this is too much for this method, so my question is: is there another library to find $P^{500}$? Must I surrender?
Thank you all in advance
Use the eigendecomposition and then raise the matrix of eigenvalues to the power, like this (MATLAB code below). You still end up getting inf in the first column. Unless you control the matrix through its eigenvalues, I believe this cannot be avoided; in other words, your eigenvalues have to be bounded. You can generate a random matrix with given eigenvalues via the Schur decomposition, putting the eigenvalues along the diagonal; I have a post about generating a matrix with given eigenvalues. This should be the way that method works anyway.
% Generate random 9x9 matrix
n = 9;
A = randn(n);

% Eigendecomposition: A = V*D*inv(V)
[V, D] = eig(A);

p = 500;
Dp = D^p;           % power of the (diagonal) eigenvalue matrix
Ap = V*Dp/V;        % A^p = V*D^p*inv(V)

% Compare with MATLAB's built-in matrix power
Ap1 = mpower(A, p);
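For reference, here is a rough NumPy translation of the same idea (an illustrative sketch only; whether anything overflows to inf depends on the spectral radius of the particular random draw):
import numpy as np

rng = np.random.default_rng(0)
n = 9
A = rng.standard_normal((n, n))

w, V = np.linalg.eig(A)                   # A = V @ diag(w) @ inv(V)
p = 500
Ap_eig = (V * w**p) @ np.linalg.inv(V)    # A^p = V @ diag(w**p) @ inv(V)

Ap_direct = np.linalg.matrix_power(A, p)  # repeated-squaring matrix power, for comparison
print(np.abs(w).max(), np.isinf(Ap_direct).any(), np.isinf(Ap_eig).any())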
NumPy arrays have homogeneous data types, and the maximum of the double-precision float datatype is
>>> np.finfo('d').max
1.7976931348623157e+308
>>> _**0.002
4.135322944991858
>>> np.array(4.135)**500
1.7288485271474026e+308
>>> np.array(4.136)**500
__main__:1: RuntimeWarning: overflow encountered in power
inf
So if there is an inner product that results in a value higher than approximately 4.135, it is going to blow up under repeated multiplication; and once it blows up, the next products are multiplied with infinities, so more and more entries become inf until everything is inf.
Metahominid's suggestion certainly helps but it will not solve the issue if your eigenvalues are larger than this value. In general, you need to use specialized high-precision tools to get correct results.
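One possible (slow but exact) route along those lines is to do the arithmetic with Python's arbitrary-precision rationals instead of float64. The 2x2 matrix and the helper below are made up purely as a sketch; np.dot is used because it accepts object-dtype arrays:
import numpy as np
from fractions import Fraction

# made-up example matrix with exact rational entries (object dtype, so no float overflow)
P = np.array([[Fraction(1, 2), Fraction(1, 2)],
              [Fraction(1, 4), Fraction(3, 4)]], dtype=object)

def mat_pow(M, p):
    # exponentiation by squaring using np.dot, which handles object-dtype arrays
    n = M.shape[0]
    result = np.array([[Fraction(int(i == j)) for j in range(n)] for i in range(n)], dtype=object)
    base = M.copy()
    while p:
        if p & 1:
            result = np.dot(result, base)
        base = np.dot(base, base)
        p >>= 1
    return result

P500 = mat_pow(P, 500)
print(float(P500[0, 0]))  # convert the exact result to float only at the end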
I'm hoping to generate new "fake" data from the data I already have with numpy.random.multivariate_normal.
With n samples and d features in an n x d pandas DataFrame:
means = data.mean(axis=0)
covariances = data.cov()
variances = data.var()
means.shape, covariances.shape, variances.shape
>>> ((16349,), (16349, 16349), (16349,))
This looks fine, but the covariance matrix covariances isn't positive semidefinite, which is a requirement of numpy.random.multivariate_normal.
x = np.linalg.eigvals(covariances)
np.all(x >= 0)
>>> False
len([y for y in x if y < 0]) # negative eigenvalues
>>> 4396
len([y for y in x if y > 0]) # positive eigenvalues
>>> 4585
len([y for y in x if y == 0]) # zero eigenvalues.
>>> 7368
However, Wikipedia says
In addition, every covariance matrix is positive semi-definite.
Which leads me to wonder whether pandas.DataFrame.cov gets you a real covariance matrix. Here's the function's implementation. It seems to mostly defer to numpy.cov which also seems to promise a covariance matrix.
Can someone clear this up for me? Why is pandas.DataFrame.cov() not positive semidefinite?
Updated question:
From the first answer, it seems like all the negative eigenvalues are tiny. The author of that answer suggests clipping these eigenvalues, but it's still unclear to me how to sensibly generate a proper covariance matrix with this information.
I can imagine using pd.DataFrame.cov(), doing eigendecomposition to get eigenvectors and values, clipping the values, and then multiplying those matrices to get a new covariance matrix, but that seems quite precarious. Is that done in practice, or is there a better way?
Probably what's happening is that the result is positive-semidefinite, to within the accuracy of the computation. For example:
In [71]: np.linalg.eigvals(np.cov(np.random.random((5,5))))
Out[71]:
array([ 1.87557170e-01, 9.98250875e-02, 6.85211153e-02,
1.01062281e-02, -5.99164839e-18])
has a negative eigenvalue, but the magnitude is small.
So in your shoes I'd verify that the magnitude of the violations was small, and then clip to zero.
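A minimal sketch of that clip-and-rebuild procedure (assuming covariances and means are the objects from the question; the helper name is made up):
import numpy as np

def clip_to_psd(cov):
    cov = np.asarray(cov, dtype=float)
    w, V = np.linalg.eigh(cov)         # symmetric eigendecomposition: real eigenvalues and vectors
    w_clipped = np.clip(w, 0.0, None)  # clip the slightly negative eigenvalues to zero
    cov_psd = (V * w_clipped) @ V.T    # rebuild the matrix from the clipped spectrum
    return (cov_psd + cov_psd.T) / 2   # re-symmetrize against round-off

# cov_psd = clip_to_psd(covariances.to_numpy())
# samples = np.random.multivariate_normal(means, cov_psd, size=100)
Note that numpy.random.multivariate_normal also takes check_valid and tol arguments, so relaxing its PSD check may be enough on its own when the violations really are at round-off level.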
I have high (100) dimensional data. I want to get the eigenvectors of the covariance matrix of the data.
Cov = numpy.cov(data)
EVs = numpy.linalg.eigvals(Cov)
I get a vector containing some eigenvalues which are complex numbers. This is mathematically impossible. Granted, the imaginary parts of the complex numbers are very small but it still causes issues later on. Is this a numerical issue? If so, does the issue lie with cov, eigvals function or both?
To give more color on that, I did the same calculation in Mathematica, which of course gives a correct result. It turns out there are some eigenvalues which are very close to zero but not quite zero, and numpy gets all of these wrong (in magnitude, and it turns some of them into complex numbers).
I was facing a similar issue: np.linalg.eigvals was returning a complex vector in which the imaginary part was quasi-zero everywhere.
Using np.linalg.eigvalsh instead fixed it for me.
I don't know the exact reason, but most probably it is a numerical issue and eigvalsh seems to handle it whereas eigvals doesn't. Note that the ordering of the actual eigenvalues may differ.
The following snippet illustrates the fix:
import numpy as np
from numpy.linalg import eigvalsh, eigvals
D = 10
MUL = 100
EPS = 1e-8
x = np.random.rand(1, D) * MUL
x -= x.mean()
S = np.matmul(x.T, x)  # rank-1 Gram matrix: symmetric and PSD in exact arithmetic
# adding epsilon*I avoids negative eigenvalues due to numerical error
# since the matrix is actually positive semidef. (useful for cholesky etc)
S += np.eye(D, dtype=np.float64) * EPS
print(sorted(eigvalsh(S)))
print(sorted(eigvals(S)))
I have two large square sparse matrices, A & B, and need to compute the following: A * B^-1 in the most efficient way. I have a feeling that the answer involves using scipy.sparse, but can't for the life of me figure it out.
After extensive searching, I have run across the following thread: Efficient numpy / lapack routine for product of inverse and sparse matrix? but can't figure out what the most efficient way would be.
Someone suggested using the LU decomposition which is built into the sparse module of scipy, but when I try to do LU on a sample matrix it says the result is singular (although when I just do A * B^-1 I get an answer). I have also heard someone suggest using linalg.spsolve(), but I can't figure out how to implement this as it requires a vector as the second argument.
If it helps, once I have the solution s.t. A * B^-1 = C, I only need to know the value of one row of the matrix C. The matrices will be roughly 1000x1000 to 1500x1500.
Actually 1000x1000 matrices are not that large. You can compute the inverse of such a matrix using numpy.linalg.inv(B) in less than 1 second on a modern desktop computer.
But you can be much more efficient if you rewrite your problem taking into account the fact that you only need one row of C (this is actually very often the case).
Let us write d_i = [0 0 0 ... 0 1 0 ... 0], a vector with a single 1 in the i-th position.
You can write, if ^t denotes the transpose :
AB^-1 = C <=> A = CB <=> A^t = B^t C^t
For the i-th row :
A^t d_i = B^t C^t d_i <=> a_i = B^t c_i
So you have a linear inverse problem which can be solved using numpy.linalg.solve
ci = np.linalg.solve(B.T, A[i])  # A[i] is the i-th row of A
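Since A and B are sparse, the same row-wise solve can stay sparse via scipy.sparse.linalg.spsolve. A minimal sketch with made-up matrices (the density, the row index i and the identity shift that keeps B nonsingular are only for illustration):
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 1000
A = sp.random(n, n, density=0.01, format='csr', random_state=rng)
B = sp.random(n, n, density=0.01, format='csc', random_state=rng) + sp.identity(n)

i = 42
a_i = A.getrow(i).toarray().ravel()  # i-th row of A as a dense vector

# solve B^t c_i = a_i, so c_i is the i-th row of C = A * B^-1
c_i = spsolve(B.T.tocsc(), a_i)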