Differences between the numpy and scipy matrix inversion functions

My question is rather simple: what is the difference between the numpy.linalg.inv and scipy.linalg.inv functions for matrix inversion?
Is the scipy function just a wrapper around the numpy one?
Efficiency, numerical stability, speed... which one should I prefer?
Thanks!

From the SciPy Documentation you get the following information:
scipy.linalg vs numpy.linalg
scipy.linalg contains all the functions in numpy.linalg, plus some other more advanced ones not contained in numpy.linalg.
Another advantage of using scipy.linalg over numpy.linalg is that it is always compiled with BLAS/LAPACK support, while for numpy this is optional. Therefore, the scipy version might be faster depending on how numpy was installed.
Therefore, unless you don’t want to add scipy as a dependency to your numpy program, use scipy.linalg instead of numpy.linalg
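For reference, a quick sanity check (a minimal sketch, not from the original answer; actual speed depends on how your numpy/scipy builds link against BLAS/LAPACK):
import numpy as np
from numpy.linalg import inv as np_inv
from scipy.linalg import inv as sp_inv

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

# both routines return the same inverse up to floating-point round-off
print(np.allclose(np_inv(A), sp_inv(A)))      # True
print(np.allclose(A @ np_inv(A), np.eye(2)))  # True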
I hope this helps!

Related

optimize an array for interpolation

I am using an array (51x51x181) to do a 3D interpolation in Python (and I can calculate any point in between if needed).
I need to reduce the size of the array and would like to do this with the least amount of error possible.
Below you find an example, with the error function I would like to improve on. The number of values in the array should stay the same; however, the Angles and Shifts in the example do not have to be equally spaced.
import numpy as np
from scipy.interpolate import RegularGridInterpolator
import itertools

# coarse grid the interpolator is built on
Angles = np.linspace(0, 360, 10)
Shifts = np.linspace(0, 100, 10)
Data = np.sin(np.deg2rad(Angles[:, None] + Shifts[None, :]))
interp = RegularGridInterpolator((Angles, Shifts), Data, bounds_error=False, fill_value=None)

def errorfunc():
    # finer grid used to measure the interpolation error (RMSE)
    Angles = np.linspace(0, 360, 50)
    Shifts = np.linspace(0, 100, 50)
    Function_Results = np.sin(np.deg2rad(Angles[:, None] + Shifts[None, :]).flatten())
    Data_interp = interp(np.array(list(itertools.product(Angles, Shifts))))
    Error = np.sqrt(np.mean(np.square(Function_Results - Data_interp)))
    return Error
I could not find a feasible optimizer in scipy (I tried some, with poor performance). Is there a standard way to do this?

Object arrays not supported on numpy with mkl?

I recently switched from numpy compiled with OpenBLAS to numpy compiled with MKL. In pure numeric operations there was a clear speed-up for matrix multiplication. However, when I ran some code I have been using which multiplies matrices containing sympy variables, I now get the error
'Object arrays are not currently supported'
Does anyone have information on why this is the case for MKL and not for OpenBLAS?
From the numpy 1.17.0 release notes:
Support of object arrays in matmul
It is now possible to use matmul (or the @ operator) with object arrays. For instance, it is now possible to do:
import numpy as np
from fractions import Fraction

a = np.array([[Fraction(1, 2), Fraction(1, 3)], [Fraction(1, 3), Fraction(1, 2)]])
b = a @ a
Are you using @ (matmul or dot)? A numpy array containing sympy objects will be object dtype. Math on object arrays depends on delegating the action to the object's own methods. It cannot be performed by the fast compiled libraries, which only work with C types such as float and double.
As a general rule you should not be trying to mix numpy and sympy. Math is hit-or-miss, and never fast. Use sympy's own Matrix module, or lambdify the sympy expressions for numeric work, as in the sketch below.
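For example, a minimal sketch of the lambdify route (the symbols and expression here are made up for illustration):
import numpy as np
import sympy as sp

x, y = sp.symbols('x y')
M = sp.Matrix([[x, y], [y, x]])   # keep the symbolic math inside sympy
P = M * M

# for numeric work, turn the symbolic result into a numpy-backed function
f = sp.lambdify((x, y), P, modules='numpy')
print(f(1.0, 2.0))                # plain float ndarray; the fast compiled routines apply again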
What's the MKL version? You may have to explore this with the creator of that compilation.

Does PyTorch have a RandomState-like object for random number generation?

In numpy I can do
import numpy as np
rs = np.random.RandomState(seed=0)
and then pass that object around, e.g. for dependency injection.
Does PyTorch have a similar interface? I can't find anything in the docs, but maybe I'm missing something.
The closest thing would be torch.manual_seed, which sets the seed for generating random numbers and returns a torch.Generator. This thread has more information; apparently there may be some inconsistencies depending on whether you are using a GPU or a CPU.
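Roughly, both patterns look like this (passing a dedicated torch.Generator via the generator= keyword is my suggestion for the closest RandomState analogue, not something stated in the thread above):
import torch

# module-level seeding; manual_seed returns the default torch.Generator
default_gen = torch.manual_seed(0)

# or create a dedicated generator and pass it around explicitly,
# similar in spirit to numpy's RandomState
g = torch.Generator()
g.manual_seed(0)
print(torch.randn(3, generator=g))   # many sampling functions accept generator=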

Convert Breeze Matrix to Numpy Array

Is it possible to convert a Breeze dense matrix to a numpy array using Spark?
I have here a Breeze dense matrix I want to convert to a numpy array.
Here is a way that works correctly but is slow/inefficient (it creates multiple copies). I used the Zeppelin spark and pyspark interpreters (I guess Toree should also work):
In Spark:
%spark
import breeze.linalg._
import breeze.numerics._
z.put("matrix", DenseMatrix.eye[Double](4));
z.get("matrix")
Then in Python:
%pyspark
import numpy as np
def breeze2numpy(breeze_matrix):
    # pull the column-major data out via a (copied) Python list, then reshape
    data = list(breeze_matrix.copy().data())
    return np.array(data).reshape(breeze_matrix.rows(), breeze_matrix.cols(), order='F')
breeze2numpy(z.z.get("matrix"))
This works but will be impractical for big datasets (because of the multiple copies involved via a Python list). It would be nice to have a zero-copy method using Python's buffer protocol, like there is for C++ Eigen matrix --> numpy array.

Why the difference between octave's prctile and numpy's percentile?

I've been rewriting a Matlab/Octave program in numpy and ran across a difference in some resultant values.
This occurs with both the percentile/prctile and the standard-deviation functions.
In Numpy:
>>> import matplotlib.mlab as ml
>>> import numpy
>>> t = numpy.linspace(0,100, 100)
>>> numpy.percentile(t,95)
95.0
>>> numpy.std(t)
29.157646512850626
>>> ml.prctile(t,95)
95.000000000000014
In Octave:
octave:1> t = linspace(0,100,100)';
octave:2> prctile(t,95)
ans = 95.454545
octave:3> std(t)
ans = 29.304537
Although the array values of 't' are the same, the results differ more than I would expect.
In the numpy help (help(numpy.std)) they specifically mention that the algorithm is:
std = sqrt(mean(abs(x - x.mean())**2))
So I implemented that in Octave and got the exact answer numpy gives. So it seems the standard-deviation functions differ.
But why/how? And which is correct? (if there is such a thing)
And what about prctile/percentile?
Just in case, since I'm on Linux (aptosid)...
GNU Octave, version 3.6.2
numpy.__version__ is '1.6.2rc1'
Numpy simply uses a different algorithm when the percentile lies between two data points. Octave, Matlab and R always center it exactly between two points when needed (I believe); numpy does a bit more than that. If you check http://en.wikipedia.org/wiki/Percentile you will see there are a couple of ways to calculate percentiles. A rough comparison of the two conventions is sketched below.
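For this particular array, a sketch of both interpolation conventions (the Octave/Matlab midpoint rule below is my reading of the Wikipedia description, not taken from the original post):
import numpy as np

t = np.linspace(0, 100, 100)
n = len(t)
q = 0.95

# numpy's default: linear interpolation at fractional index (n - 1) * q
idx = (n - 1) * q                    # 94.05
lo, frac = int(idx), idx - int(idx)
print(t[lo] + frac * (t[lo + 1] - t[lo]))   # 95.0, matches numpy.percentile(t, 95)

# Octave/Matlab-style prctile: sample values sit at probabilities
# (i + 0.5) / n, with linear interpolation in between (assumed convention)
idx = n * q - 0.5                    # 94.5
lo, frac = int(idx), idx - int(idx)
print(t[lo] + frac * (t[lo + 1] - t[lo]))   # ~95.4545, matches Octave's prctile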
It seems like Octave assumes ddof=1, at least by default, and numpy uses 0 by default:
>>> numpy.std(t, ddof=0)
29.157646512850633
>>> numpy.std(t, ddof=1)
29.304537349375785