I have high-dimensional (100-dimensional) data. I want to get the eigenvectors of the covariance matrix of the data.
Cov = numpy.cov(data)
EVs = numpy.linalg.eigvals(Cov)
I get a vector containing some eigenvalues which are complex numbers. This is mathematically impossible. Granted, the imaginary parts of the complex numbers are very small, but it still causes issues later on. Is this a numerical issue? If so, does the issue lie with the cov function, the eigvals function, or both?
To give more color on that, I did the same calculation in Mathematica, which of course gives a correct result. It turns out there are some eigenvalues which are very close to zero but not quite zero, and numpy gets all of these wrong (both in magnitude, and it turns some of them into complex numbers).
I was facing a similar issue: np.linalg.eigvals was returning a complex vector in which the imaginary part was quasi-zero everywhere.
Using np.linalg.eigvalsh instead fixed it for me.
I don't know the exact reason, but most probably it is a numerical issue and eigvalsh seems to handle it whereas eigvals doesn't. Note that the ordering of the actual eigenvalues may differ.
The following snippet illustrates the fix:
import numpy as np
from numpy.linalg import eigvalsh, eigvals
D = 10
MUL = 100
EPS = 1e-8
x = np.random.rand(1, D) * MUL
x -= x.mean()
S = np.matmul(x.T, x)
# adding epsilon*I avoids negative eigenvalues due to numerical error
# since the matrix is actually positive semidef. (useful for cholesky etc)
S += np.eye(D, dtype=np.float64) * EPS
print(sorted(eigvalsh(S)))
print(sorted(eigvals(S)))
From what I know, for any square real matrix A, a matrix generated with the following should be a positive semidefinite (PSD) matrix:
Q = A @ A.T
I have this matrix A, which is sparse and not symmetric. However, regardless of the properties of A, I think the matrix Q should be PSD.
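(In exact arithmetic this holds for any real A: for every vector $x$, $x^\top Q x = x^\top A A^\top x = \|A^\top x\|_2^2 \ge 0$.)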
However, upon using np.linalg.eigvals, I get the following:
np.sort(np.linalg.eigvals(Q))
>>>array([-1.54781185e+01+0.j, -7.27494242e-04+0.j, 2.09363431e-04+0.j, ...,
3.55351888e+15+0.j, 5.82221014e+17+0.j, 1.78954577e+18+0.j])
I think the complex eigenvalues result from the numerical instability of the operation. Using scipy.linalg.eigh, which takes advantage of the fact that the matrix is symmetric, gives,
np.sort(eigh(Q, eigvals_only=True))
>>>array([-3.10854357e+01, -6.60108485e+00, -7.34059692e-01, ...,
3.55351888e+15, 5.82221014e+17, 1.78954577e+18])
which again, contains negative eigenvalues.
My goal is to perform a Cholesky decomposition of the matrix Q; however, I keep getting an error message saying that the matrix Q is not positive definite, which is again confirmed by the negative eigenvalues shown above.
Does anyone know why the matrix is not PSD? Thank you.
Of course that's a numerical problem, but I would say that Q is probably still PSD.
Notice that the largest eigenvalue is about 1.7e18 while the smallest is about 3.1e1, so their ratio is about 5e16, which is larger than the reciprocal of float64's machine epsilon (about 4.5e15). In floating point, min(L) + max(L) == max(L) will therefore return True, meaning that the minimum value is negligible compared to the maximum.
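You can check this directly in float64:
import numpy as np
lam_min, lam_max = 3.1e1, 1.7e18
print(lam_min + lam_max == lam_max)   # True: lam_min is below float64's resolution at this scale
print(np.finfo(float).eps * lam_max)  # ~3.8e2, already larger than |lam_min| = 31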
What I would suggest to you is to compute Cholesky on a slightly shifted version of the matrix.
e.g.
d = np.linalg.norm(Q) * np.finfo(Q.dtype).eps
I = np.eye(len(Q))
np.linalg.cholesky(Q + d * I)
I'm trying to use linalg to find $P^{500}$ where $P$ is a 9x9 matrix, but Python displays the following:
Matrix full of inf
I think this is too much for this method, so my question is: is there another library to find $P^{500}$? Or must I surrender?
Thank you all in advance
Use the eigendecomposition and then exponentiate the matrix of eigenvalues, like this. You still end up getting an inf in the first column. Unless you control the eigenvalues of the matrix, I believe this will keep happening; in other words, your eigenvalues have to be bounded. You can generate a random matrix with prescribed eigenvalues via the Schur decomposition, putting the eigenvalues along the diagonal; I have a post about generating a matrix with given eigenvalues. That should be how this method works anyway.
% Generate random 9x9 matrix
n=9;
A = randn(n);
[V,D] = eig(A);
p = 500;
Dp = D^p;
Ap = V*Dp*V^(-1);
Ap1 = mpower(A,p);
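A rough NumPy sketch of the same idea (assuming A is diagonalizable; np.linalg.matrix_power is only there for comparison):
import numpy as np

np.random.seed(0)
n, p = 9, 500
A = np.random.randn(n, n)

w, V = np.linalg.eig(A)                    # A = V @ diag(w) @ inv(V)
Ap = V @ np.diag(w**p) @ np.linalg.inv(V)  # so A**p = V @ diag(w**p) @ inv(V)

Ap_direct = np.linalg.matrix_power(A, p)   # built-in repeated-squaring power, for comparison
print(np.isinf(Ap).any(), np.isinf(Ap_direct).any())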
NumPy arrays have homogeneous data types, and the maximum of the default float64 datatype is
>>> np.finfo('d').max
1.7976931348623157e+308
>>> _**0.002
4.135322944991858
>>> np.array(4.135)**500
1.7288485271474026e+308
>>> np.array(4.136)**500
__main__:1: RuntimeWarning: overflow encountered in power
inf
So if the inner products at each step come out larger than roughly 4.135, raising to the 500th power is going to blow up; and once it blows up, the next product gets multiplied by infinities, so more and more entries become inf until everything is inf.
Metahominid's suggestion certainly helps but it will not solve the issue if your eigenvalues are larger than this value. In general, you need to use specialized high-precision tools to get correct results.
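For example, mpmath can raise a small matrix to an integer power in arbitrary precision; a minimal sketch (the entries of P below are placeholders, not your matrix):
from mpmath import mp, matrix

mp.dps = 50                                    # work with 50 significant decimal digits
P = matrix([[mp.mpf(i + j + 1) for j in range(9)] for i in range(9)])  # placeholder 9x9 matrix
P500 = P ** 500                                # entries grow far past the float64 maximum, but mpmath keeps track
print(P500[0, 0])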
Is there a way to choose the x/y output axes range from np.fft.fft2?
I have a piece of code computing the diffraction pattern of an aperture. The aperture is defined in a 2k x 2k pixel array. The diffraction pattern is basically the inner part of the 2D FT of the aperture. np.fft.fft2 gives me an output array of the same size as the input, but with some preset range of the x/y axes. Of course I can zoom in using the image viewer, but by then I have already lost detail. What is the solution?
Thanks,
Gert
import numpy as np
import matplotlib.pyplot as plt
r= 500
s= 1000
y,x = np.ogrid[-s:s+1, -s:s+1]
mask = x*x + y*y <= r*r
aperture = np.ones((2*s+1, 2*s+1))
aperture[mask] = 0
plt.imshow(aperture)
plt.show()
ffta= np.fft.fft2(aperture)
plt.imshow(np.log(np.abs(np.fft.fftshift(ffta))**2))
plt.show()
Unfortunately, much of the speed and accuracy of the FFT come from the outputs being the same size as the input.
The conventional way to increase the apparent resolution in the output Fourier domain is by zero-padding the input: np.fft.fft2(aperture, [4 * (2*s+1), 4 * (2*s+1)]) tells the FFT to pad your input to be 4 * (2*s+1) pixels tall and wide, i.e., four times larger along each axis (sixteen times the number of pixels).
Begin aside I say "apparent" resolution because the actual amount of data you have hasn't increased, but the Fourier transform will appear smoother because zero-padding in the input domain causes the Fourier transform to interpolate the output. In the example above, any feature that could be seen with one pixel will be shown with four pixels. Just to make this fully concrete, this example shows that every fourth pixel of the zero-padded FFT is numerically the same as every pixel of the original unpadded FFT:
# Generate your `ffta` as above, then
N = 2 * s + 1
Up = 4
fftup = np.fft.fft2(aperture, [Up * N, Up * N])
relerr = lambda dirt, gold: np.abs((dirt - gold) / gold)
print(np.max(relerr(fftup[::Up, ::Up] , ffta))) # ~6e-12.
(That relerr is just a simple relative error, which you want to be close to machine precision, around 2e-16. The largest error between every 4th sample of the zero-padded FFT and the unpadded FFT is 6e-12 which is quite close to machine precision, meaning these two arrays are nearly numerically equivalent.) End aside
Zero-padding is the most straightforward way around your problem, but it does cost you a lot of memory, and it is frustrating because you might only care about a tiny, tiny part of the transform. There's an algorithm called the chirp z-transform (CZT, or colloquially the "zoom FFT") which can do this. If your input is N samples wide (for you, 2*s+1) and you want just M samples of the FFT's output evaluated anywhere, it computes three Fourier transforms of size N + M - 1 to obtain the desired M samples of the output. This would solve your problem too, since you can ask for M samples in the region of interest, and it wouldn't require prohibitively much memory, though it would need at least 3x more CPU time. The downside is that a solid implementation of CZT isn't in Numpy/Scipy yet: see the scipy issue and the code it references. Matlab's CZT seems reliable, if that's an option; Octave-forge has one too, and the Octave people usually try hard to match/exceed Matlab.
But if you have the memory, zero-padding the input is the way to go.
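For example, here is a sketch of zooming into the central part of the diffraction pattern with zero-padding, continuing from your aperture code above (the upsampling factor Up and the crop half-width are arbitrary choices):
N = 2 * s + 1
Up = 4                                       # upsample the output grid by 4x along each axis
fftup = np.fft.fftshift(np.fft.fft2(aperture, [Up * N, Up * N]))

c = Up * N // 2                              # index of the zero-frequency bin after fftshift
half = 200                                   # half-width of the region of interest, in output pixels
core = fftup[c - half:c + half, c - half:c + half]

plt.imshow(np.log(np.abs(core) ** 2))
plt.show()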
I'm trying to solve a large eigenvalue problem with Scipy where the matrix A is dense, but I can compute its action on a vector without having to assemble A explicitly. So, in order to avoid memory issues when the matrix A gets big, I'd like to use the sparse solver scipy.sparse.linalg.eigs with a LinearOperator that implements this action.
Applying eigs to an explicit numpy array A works fine. However, if I apply eigs to a LinearOperator instead then the iterative solver fails to converge. This is true even if the matvec method of the LinearOperator is simply matrix-vector multiplication with the given matrix A.
A minimal example illustrating the failure is attached below (I'm using shift-invert mode because I am interested in the smallest few eigenvalues). This computes the eigenvalues of a random matrix A just fine, but fails when applied to a LinearOperator that is directly converted from A. I tried to fiddle with the parameters for the iterative solver (v0, ncv, maxiter) but to no avail.
Am I missing something obvious? Is there a way to make this work? Any suggestions would be highly appreciated. Many thanks!
Edit: I should clarify what I mean by "make this work" (thanks, Dietrich). The example below uses a random matrix for illustration. However, in my application I know that the eigenvalues are almost purely imaginary (or almost purely real if I multiply the matrix by 1j). I'm interested in the 10-20 smallest-magnitude eigenvalues, but the algorithm doesn't behave well (i.e., never stops even for small-ish matrix sizes) if I specify which='SM'. Therefore I'm using shift-invert mode by passing the parameters sigma=0.0, which='LM'. I'm happy to try a different approach so long as it allows me to compute a bunch of smallest-magnitude eigenvalues.
from scipy.sparse.linalg import eigs, LinearOperator, aslinearoperator
import numpy as np
# Set a seed for reproducibility
np.random.seed(0)
# Size of the matrix
N = 100
# Generate a random matrix of size N x N
# and compute its eigenvalues
A = np.random.random_sample((N, N))
eigvals = eigs(A, sigma=0.0, which='LM', return_eigenvectors=False)
print(eigvals)
# Convert the matrix to a LinearOperator
A_op = aslinearoperator(A)
# Try to solve the same eigenproblem again.
# This time it produces an error:
#
# ValueError: Error in inverting M: function gmres did not converge (info = 1000).
eigvals2 = eigs(A_op, sigma=0.0, which='LM', return_eigenvectors=False)
I tried running your code but without passing the sigma parameter to eigs(), and it ran without problems (read the eigs() docs for that parameter's meaning). I didn't see the benefit of it in your example.
eigs can already find the smallest eigenvalues first: set which='SM'.
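For reference, a minimal sketch of the first suggestion: drop sigma so that eigs only needs matrix-vector products and works directly on the LinearOperator (which='SM' can be substituted per the second suggestion, though ARPACK may then converge slowly, as noted in the question):
from scipy.sparse.linalg import eigs, aslinearoperator
import numpy as np

np.random.seed(0)
N = 100
A = np.random.random_sample((N, N))
A_op = aslinearoperator(A)

# No sigma, so no inner linear solves are needed; this returns the
# largest-magnitude eigenvalues using only matvec calls.
vals = eigs(A_op, k=6, which='LM', return_eigenvectors=False)
print(vals)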
Consider the following simple piece of code:
import numpy as np
A = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
eye4 = np.eye(4, dtype=float) # 4x4 identity
H1 = np.kron(A,eye4)
w1,v1 = np.linalg.eig(H1)
H1copy = np.dot(np.dot(v1,np.diag(w1)),np.transpose(v1)) # reconstructing from eigvals and eigvecs
H2 = np.kron(eye4,A)
w2,v2 = np.linalg.eig(H2)
H2copy = np.dot(np.dot(v2,np.diag(w2)),np.transpose(v2))
print(np.sum((H1-H1copy)**2)) # sum of squares of elements
print(np.sum((H2-H2copy)**2))
It produces the output
1.06656622138
8.7514256673e-30
This is very perplexing. These two matrices differ only in the order of the Kronecker product, and yet the accuracy is so low for just one of them. Further, a norm-square error greater than 1.066 is, in my view, completely unacceptable. What is going wrong here?
Further, what is the best way to work around this issue, given that the eigenvalue decomposition is a small part of code that has to be run many (>100) times?
Your matrices are symmetric. Use eigh instead of eig.
If you use eig, the transpose of v1 is not necessarily equal to the inverse of v1.
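A minimal sketch of that fix on H1 from the question (v1 from eigh is orthogonal, so its transpose really is its inverse):
import numpy as np

A = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
eye4 = np.eye(4, dtype=float)
H1 = np.kron(A, eye4)

w1, v1 = np.linalg.eigh(H1)          # symmetric/Hermitian eigensolver
H1copy = v1 @ np.diag(w1) @ v1.T     # valid because v1 is orthogonal

print(np.sum((H1 - H1copy)**2))      # down at machine-precision level, ~1e-29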