Is there a way to calculate the fractional power of a batch/3D array (e.g. shape: 4*3*3) using any scientific computing library?
I've come across scipy.linalg.fractional_matrix_power, but it doesn't seem to work for batches of matrices. Currently I'm using a list comprehension to iterate over the batch, but that doesn't seem very efficient.
Is there any workaround or library to parallelize the task?
D_nsqrt = fractional_matrix_power(D, -0.5)
The code above throws the error: ValueError: expected square array_like input.
But the following works fine:
D_nsqrt = fractional_matrix_power(D[0], -0.5)
Shape of D : 4*3*3
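One possible batched workaround (a sketch with a random placeholder for D, assuming each matrix in the batch is symmetric positive definite, as degree matrices typically are): np.linalg.eigh accepts stacked (..., M, M) arrays, so the whole batch can be decomposed at once and the fractional power applied to the eigenvalues.

import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(0)
D = rng.random((4, 3, 3))
D = D @ D.transpose(0, 2, 1) + 3 * np.eye(3)    # make every 3x3 slice SPD

# Batched eigendecomposition: eigh broadcasts over the leading batch axis.
w, V = np.linalg.eigh(D)                        # w: (4, 3), V: (4, 3, 3)
D_nsqrt = (V * w[:, None, :] ** -0.5) @ V.transpose(0, 2, 1)

# Cross-check one slice against SciPy's reference implementation.
print(np.allclose(D_nsqrt[0], fractional_matrix_power(D[0], -0.5)))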
I'm trying to use linalg to find $P^{500}$ where $P$ is a 9x9 matrix, but Python displays the following:
Matrix full of inf
I think this is too much for this method, so my question is: is there another library to find $P^{500}$? Must I surrender?
Thank you all in advance
Use the eigendecomposition and then raise the matrix of eigenvalues to the power, like this. You still end up getting an inf in the first column: unless you constrain the matrix through its eigenvalues, I believe this will keep happening. In other words, your eigenvalues have to be bounded. You can generate a random matrix with given eigenvalues via the Schur decomposition, putting the eigenvalues along the diagonal; this is a post I have about generating a matrix with given eigenvalues. This should be the way that method works anyway.
% Generate a random 9x9 matrix and diagonalize it
n = 9;
A = randn(n);
[V,D] = eig(A);          % A = V*D*inv(V)

% Raise the eigenvalues to the power, then transform back
p = 500;
Dp = D^p;
Ap = V*Dp/V;             % A^p = V*D^p*inv(V)

% Direct matrix power for comparison
Ap1 = mpower(A,p);
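A rough NumPy sketch of the same idea (my own illustration, assuming P is diagonalizable; the matrix is rescaled so its spectral radius stays below 1, since otherwise the entries of $P^{500}$ overflow exactly as described in the question):

import numpy as np

np.random.seed(0)
n, p = 9, 500
P = np.random.randn(n, n)
P /= 1.1 * np.max(np.abs(np.linalg.eigvals(P)))   # bound the spectral radius

w, V = np.linalg.eig(P)                  # P = V @ diag(w) @ inv(V)
Pp = (V * w**p) @ np.linalg.inv(V)       # P^p = V @ diag(w^p) @ inv(V)

print(np.allclose(Pp, np.linalg.matrix_power(P, p)))  # cross-check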
NumPy arrays have homogeneous data types, and the largest value the double-precision float type can hold is
>>> np.finfo('d').max
1.7976931348623157e+308
>>> _**0.002
4.135322944991858
>>> np.array(4.135)**500
1.7288485271474026e+308
>>> np.array(4.136)**500
__main__:1: RuntimeWarning: overflow encountered in power
inf
So if an inner product along the way produces a value higher than approximately 4.135, the computation is going to blow up; and once one entry overflows, the next products are multiplied with infinities, so more and more entries become inf until everything is inf.
Metahominid's suggestion certainly helps, but it will not solve the issue if your eigenvalues are larger than this value. In general, you need specialized high-precision tools to get correct results.
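For illustration, a sketch of one such high-precision route (assuming mpmath is installed; the example matrix is only a placeholder): mpmath's arbitrary-precision numbers are not limited to the ~1.8e308 exponent range, so the power can be formed without overflow.

from mpmath import mp, matrix

mp.dps = 60                                # 60 significant decimal digits
n, p = 9, 500

# Placeholder 9x9 matrix whose 500th power overflows double precision.
P = matrix(n, n)
for i in range(n):
    for j in range(n):
        P[i, j] = mp.mpf(i + j + 1) / 10   # entries 0.1 .. 1.7

P500 = P**p                                # integer matrix power
print(P500[0, 0])                          # huge, but no overflow in mpmath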
I have the following equation:
Where M is a [Dx3] matrix and V is a [DxD] matrix. Each of these forms a [3x3] block in a larger [3Kx3K] matrix, indexed by i, j. For now, I'm wondering if anyone has come across doing a reduce-sum of this form in TensorFlow - I'm still getting used to the API structure!
There are a few key parameters associated with linear regression, e.g. adjusted R squared, coefficients, p-values, R squared, multiple R, etc. When using Google's TensorFlow API to implement linear regression, how are these parameters mapped? Is there any way we can get the values of these parameters during or after model execution?
From my experience, if you want to have these values while your model runs, you have to hand-code them using TensorFlow functions. If you want them after the model has run, you can use SciPy or other implementations. Below are some examples of how you might go about coding R^2, MAPE and RMSE:
total_error = tf.reduce_sum(tf.square(tf.subtract(y, tf.reduce_mean(y))))
unexplained_error = tf.reduce_sum(tf.square(tf.subtract(y, prediction)))
R_squared = tf.subtract(tf.divide(total_error, unexplained_error), 1.0)
R = tf.multiply(tf.sign(R_squared), tf.sqrt(tf.abs(unexplained_error)))
MAPE = tf.reduce_mean(tf.abs(tf.divide(tf.subtract(y, prediction), y)))
RMSE = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(y, prediction))))
I believe the formula for R2 should be the following. Note that it would go negative when the network is so bad that it does a worse job than the mere average as a predictor:
total_error = tf.reduce_sum(tf.square(tf.subtract(y, tf.reduce_mean(y))))
unexplained_error = tf.reduce_sum(tf.square(tf.subtract(y, pred)))
R_squared = tf.subtract(1.0, tf.divide(unexplained_error, total_error))
Adjusted_R_squared = 1.0 - (1.0 - R_squared) * (n - 1) / (n - k - 1)
where n is the number of observations and k is the number of features.
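As a quick sanity check (my own sketch, assuming TensorFlow 2 eager execution; scikit-learn is used only as an independent reference), this formula reproduces the textbook value on a tiny example:

import tensorflow as tf
from sklearn.metrics import r2_score

y = tf.constant([3.0, -0.5, 2.0, 7.0])
pred = tf.constant([2.5, 0.0, 2.0, 8.0])

total_error = tf.reduce_sum(tf.square(tf.subtract(y, tf.reduce_mean(y))))
unexplained_error = tf.reduce_sum(tf.square(tf.subtract(y, pred)))
R_squared = tf.subtract(1.0, tf.divide(unexplained_error, total_error))

print(float(R_squared), r2_score(y.numpy(), pred.numpy()))  # both ~0.9486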
You should not use a formula for R Squared. This exists in Tensorflow Addons. You will only need to extend it to Adjusted R Squared.
I would strongly recommend against using a recipe to calculate r-squared itself! The examples I've found do not produce consistent results, especially with just one target variable. This gave me enormous headaches!
The correct thing to do is to use tensorflow_addons.metrics.RSquare(). TensorFlow Addons is available on PyPI and its documentation is part of the TensorFlow docs. All you have to do is set y_shape to the shape of your output, which is often (1,) for a single output variable.
Then you can use what RSquare() returns in your own metric that handles the adjustments.
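A minimal sketch of that approach (assuming tensorflow_addons is installed and that your version of RSquare still accepts the y_shape argument; the data and n_features are placeholders):

import tensorflow as tf
import tensorflow_addons as tfa

y_true = tf.constant([[3.0], [-0.5], [2.0], [7.0]])
y_pred = tf.constant([[2.5], [0.0], [2.0], [8.0]])
n_features = 1                                   # k in the formula above

r2 = tfa.metrics.RSquare(y_shape=(1,))           # one output variable
r2.update_state(y_true, y_pred)
r_squared = r2.result()

n = tf.cast(tf.shape(y_true)[0], tf.float32)     # number of observations
adjusted_r_squared = 1.0 - (1.0 - r_squared) * (n - 1.0) / (n - n_features - 1.0)
print(float(r_squared), float(adjusted_r_squared))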
I'm trying to solve a large eigenvalue problem with SciPy where the matrix A is dense, but I can compute its action on a vector without having to assemble A explicitly. So, in order to avoid memory issues when the matrix A gets big, I'd like to use the sparse solver scipy.sparse.linalg.eigs with a LinearOperator that implements this action.
Applying eigs to an explicit numpy array A works fine. However, if I apply eigs to a LinearOperator instead then the iterative solver fails to converge. This is true even if the matvec method of the LinearOperator is simply matrix-vector multiplication with the given matrix A.
A minimal example illustrating the failure is attached below (I'm using shift-invert mode because I am interested in the smallest few eigenvalues). This computes the eigenvalues of a random matrix A just fine, but fails when applied to a LinearOperator that is directly converted from A. I tried to fiddle with the parameters for the iterative solver (v0, ncv, maxiter) but to no avail.
Am I missing something obvious? Is there a way to make this work? Any suggestions would be highly appreciated. Many thanks!
Edit: I should clarify what I mean by "make this work" (thanks, Dietrich). The example below uses a random matrix for illustration. However, in my application I know that the eigenvalues are almost purely imaginary (or almost purely real if I multiply the matrix by 1j). I'm interested in the 10-20 smallest-magnitude eigenvalues, but the algorithm doesn't behave well (i.e., never stops even for small-ish matrix sizes) if I specify which='SM'. Therefore I'm using shift-invert mode by passing the parameters sigma=0.0, which='LM'. I'm happy to try a different approach so long as it allows me to compute a bunch of smallest-magnitude eigenvalues.
from scipy.sparse.linalg import eigs, LinearOperator, aslinearoperator
import numpy as np
# Set a seed for reproducibility
np.random.seed(0)
# Size of the matrix
N = 100
# Generate a random matrix of size N x N
# and compute its eigenvalues
A = np.random.random_sample((N, N))
eigvals = eigs(A, sigma=0.0, which='LM', return_eigenvectors=False)
print(eigvals)
# Convert the matrix to a LinearOperator
A_op = aslinearoperator(A)
# Try to solve the same eigenproblem again.
# This time it produces an error:
#
# ValueError: Error in inverting M: function gmres did not converge (info = 1000).
eigvals2 = eigs(A_op, sigma=0.0, which='LM', return_eigenvectors=False)
I tried running your code without passing the sigma parameter to eigs(), and it ran without problems (read the eigs() docs for its meaning). I didn't see the benefit of it in your example.
eigs can already find the smallest eigenvalues first: set which='SM'.
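One more option, not taken from the answers above (a sketch that assumes A - sigma*I can be factorized; a dense LU is used here purely for illustration): eigs also accepts an OPinv operator that applies (A - sigma*I)^-1, which sidesteps the internal iterative inverse that fails to converge in the example.

import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, aslinearoperator, eigs

np.random.seed(0)
N = 100
A = np.random.random_sample((N, N))
sigma = 0.0

# Factorize (A - sigma*I) once and wrap the solve as a LinearOperator.
lu = lu_factor(A - sigma * np.eye(N))
OPinv = LinearOperator((N, N), matvec=lambda x: lu_solve(lu, x),
                       dtype=A.dtype)

eigvals = eigs(aslinearoperator(A), k=6, sigma=sigma, which='LM',
               OPinv=OPinv, return_eigenvectors=False)
print(eigvals)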
I am currently trying to implement eigenfaces with NumPy, but it seems to struggle on my 32-bit Linux system (I use 32-bit because of the formerly poor support for Flash and Java on 64-bit; my processor is 64-bit), because when trying to multiply two vectors to get a matrix (vector * transposed vector), NumPy gives me
ValueError: broadcast dimensions too large.
I read that this is due to too little memory and could be solved with 64-bit. Is there some way to circumvent this? The matrix would be 528000*528000 elements. According to my paper, this big matrix is needed for the covariance matrix (summing up all these huge matrices and then dividing by the number of matrices).
My piece of code looks like this (I do not understand why NumPy gives me a matrix at all, because to my understanding of matrices the product looks the wrong way round (horizontal * vertical), but it worked with smaller examples):
tmp = []
for face in faces:  # just an array of all face vectors (len = 528000)
    diff = np.subtract(averageFace, face)
    diff = np.asmatrix(diff)
    tmp.append(np.multiply(diff, np.transpose(diff)))
C = np.divide(np.sum(tmp, axis=0), len(tmp))
As pv already elaborated, it is not really practically feasible to produce such a huge covariance matrix.
But please note that the eigenvectors (explained in your drexel link) of $\phi \phi^T$ and $\phi^T \phi$ are related, and this is the key to making the problem manageable. See more on this topic under Eigenface.
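A small sketch of that trick (my own illustration; variable names follow the question, the sizes are placeholders, and faces is assumed to hold one face per row):

import numpy as np

num_faces, num_pixels = 20, 528000
faces = np.random.rand(num_faces, num_pixels)      # one face vector per row
averageFace = faces.mean(axis=0)

Phi = faces - averageFace                          # (M, P) with M << P
S = Phi @ Phi.T / num_faces                        # small (M, M) matrix

w, v = np.linalg.eigh(S)                           # eigenpairs of the small matrix
# If (Phi Phi^T) v = lam * v, then Phi^T v is an eigenvector of the huge
# (P, P) covariance matrix Phi^T Phi, so that matrix never has to be formed.
keep = w > 1e-10 * w.max()                         # drop the near-zero mode from centering
eigenfaces = Phi.T @ v[:, keep]                    # one eigenface per column
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # normalize columns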