I need to solve linear systems of varying sizes. Sometimes the size may be 0 or 1, in which case errors occur. For example,
import numpy as np
from numpy.linalg import solve
from scipy.sparse.linalg import spsolve
A1 = np.array([[1,2],[2,1]])
b1 = np.array([[1],[1]])
A2 = np.array([[1]])
b2 = np.array([[1]])
Unexpected results occur when calling spsolve or solve:
sage: solve(A1,b1)
array([[ 0.33333333],
       [ 0.33333333]])
sage: solve(A2,b2)
array([[ 1.]])
sage: spsolve(A1,b1)
array([ 0.33333333, 0.33333333])
sage: spsolve(A2,b2)
ValueError: object of too small depth for desired array
Notice that the call "spsolve(A1,b1)" actually yields a row vector. Is there any way to force it to be a column vector? Also, the error raised by "spsolve(A2,b2)" is very strange, since the sizes of A2 and b2 are not zero.
spsolve does not return a 2D array but a 1D vector.
Use numpy.atleast_2d to inflate the vector, e.g., in your example
In [10]: np.atleast_2d(spsolve(A1,b1)).T
Out[10]:
array([[ 0.33333333],
       [ 0.33333333]])
and .T to get a column (2D) vector. This probably also solves your second issue, which relates to the depth of the result vector.
(I don't use sage, so I can't reproduce your error.)
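As a general workaround (a sketch of my own, not from the answer above; solve_as_column is a hypothetical helper name), you can wrap spsolve so it always returns a 2D column vector and sidesteps the 1x1 edge case:
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

def solve_as_column(A, b):
    # Hypothetical helper: always return the solution as an (n, 1) column vector.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).ravel()
    if A.shape == (1, 1):
        x = b / A[0, 0]  # tiny system: avoid spsolve's small-size edge cases
    else:
        x = spsolve(csr_matrix(A), b)
    return np.atleast_2d(x).T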
From linear algebra we know that the eigenvectors of any symmetric matrix (let's call it A) are orthonormal, meaning if M is the matrix of all eigenvectors, we should obtain |det(M)| = 1. I had hoped to see this in numpy.linalg.eig, but got the following behaviour:
import numpy as np
def get_det(A, func, decimals=12):
    eigenvectors = func(A)[1]
    return np.round(np.absolute(np.linalg.det(eigenvectors)), decimals=decimals)
n = 100
x = np.meshgrid(*(2 * [np.linspace(0, 2 * np.pi, n)]))
A = np.sin(x[0]) + np.sin(x[1])
A += A.T # This step is redundant; just to convince everyone that it's symmetric
print(get_det(A, np.linalg.eigh), get_det(A, np.linalg.eig))
Output
>>> 1.0 0.0
As you can see, numpy.linalg.eigh gives the correct result, while numpy.linalg.eig apparently returns a near non-invertible matrix. I presume that it comes from the fact that it has a degenerate eigenvalue (which is 0 in this case) and the corresponding eigenspace is not orthonormal and therefore the total determinant is not 1. In the following example, where there's (usually) no degenerate eigenvalue, the results are indeed the same:
import numpy as np
n = 100
A = np.random.randn(n, n)
A += A.T
print(get_det(A, np.linalg.eigh), get_det(A, np.linalg.eig))
Output
>>> 1.0 1.0
Now, regardless of whether my guess was correct or not (i.e. the difference between eig and eigh comes from the degeneracy of the eigenvalues), I would like to know if there is a way to get a full dimensional eigenvector matrix (i.e. the one that maximises the determinant) using numpy.linalg.eig, because I'm now working with a near-symmetric matrix, but not entirely symmetric, and it gives me a lot of problems if the eigenvector matrix is not invertible. Thank you for your help in advance!
To find the linearly independent eigenvectors, you could try Matrix.rref(), which computes the reduced row-echelon form of the matrix formed by the eigenvectors.
Consider a matrix which is not diagonalizable, i.e., whose eigenvectors do not span the full space:
array([[0., 1., 0.],
       [0., 0., 1.],
       [0., 0., 0.]])
We can find its linearly independent eigenvectors using rref():
import numpy as np
import sympy

A = np.diag(np.ones(2), 1)  # the 3x3 matrix shown above
v = np.linalg.eig(A)[1]
# Round away tiny numerical noise (~1e-16) so rref detects the true rank.
result = sympy.Matrix(np.round(v, decimals=10)).rref()
v[:, result[1]]  # result[1] holds the pivot column indices
which returns
array([[1.],
       [0.],
       [0.]])
See also: python built-in function to do matrix reduction
Find a linearly independent set of vectors that spans the same subspace of R^3 as that spanned
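Separately, if (as in the question) the matrix is only nearly symmetric, a common workaround, sketched here as my own suggestion rather than part of the answer above, is to symmetrize it and use numpy.linalg.eigh, whose eigenvector matrix is always orthogonal and hence has |det| = 1:
import numpy as np

def orthogonal_eigvecs(A):
    # Symmetrize; for a nearly symmetric A this is only a small perturbation.
    A_sym = 0.5 * (A + A.T)
    w, V = np.linalg.eigh(A_sym)  # columns of V are orthonormal, so |det(V)| == 1
    return w, V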
I have a regression model that I fit with scikit-learn's LinearRegression module.
To extract the coefficients, I used the code:
coefficients = model.coef_
It produced the following array with a shape of (1, 10):
[[-4.72307152e-05  1.29731143e-04  8.75483702e-05 -6.28749019e-04
   1.75096740e-04 -3.30209379e-06  1.35937650e-03  3.89048429e-11
   8.48406857e-03 -1.36499030e-05]]
Now, I would like to save the array to a pd.Series. I am taking the following approach:
features = ["f1", "f2", "f3", "f4", "f5", "f6", "f7", "f8", "f9", "f10"]
model_coefs = pd.Series(coefficients, index=features)
And, the system gives me the following error:
ValueError: Length of passed values is 1, index implies 10.
What I have tried:
Transposing the underlying array, coefficients, to give it a length of 10.
Reshaping the array to give it a shape of (10,1).
But nothing seems to work. I am not sure where I am going wrong.
For your case you want to flatten the array, so .ravel() should do the trick. For example:
pd.Series(np.zeros((1, 10)).ravel(), index=features)
It's strange that the coefficients come out with shape (1, 10); when I run the basic sklearn example here (with multiple features), my coefficients are 1-D:
In [27]: regr.coef_
Out[27]:
array([ 3.03499549e-01, -2.37639315e+02,  5.10530605e+02,  3.27736980e+02,
       -8.14131709e+02,  4.92814588e+02,  1.02848452e+02,  1.84606489e+02,
        7.43519617e+02,  7.60951722e+01])
In [28]: regr.coef_.shape
Out[28]: (10,)
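For what it's worth (my own note, not part of the answer above): scikit-learn's LinearRegression stores coef_ with shape (n_features,) when the target y passed to fit() is 1-D, and (n_targets, n_features) when y is 2-D, so a (1, 10) shape usually means y had shape (n_samples, 1). A minimal sketch with random stand-in data:
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.random.randn(50, 10)
y = np.random.randn(50)

print(LinearRegression().fit(X, y).coef_.shape)                 # (10,)
print(LinearRegression().fit(X, y.reshape(-1, 1)).coef_.shape)  # (1, 10)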
NumPy's eigenvector solution differs from Wolfram Alpha and my personal calculation by hand.
>>> import numpy.linalg
>>> import numpy as np
>>> numpy.linalg.eig(np.array([[-2, 1], [2, -1]]))
(array([-3.,  0.]), array([[-0.70710678, -0.4472136 ],
        [ 0.70710678, -0.89442719]]))
Wolfram Alpha https://www.wolframalpha.com/input/?i=eigenvectors+%7B%7B-2,1%7D,%7B%2B2,-1%7D%7D and my personal calculation give the eigenvectors (-1, 1) and (2, 1). The NumPy solution however differs.
NumPy's calculated eigenvalues, however, are confirmed by Wolfram Alpha and my personal calculation.
So, is this a bug in NumPy, or is my understanding of the math too simple? A similar thread, Numpy seems to produce incorrect eigenvectors, attributes the main difference to rounding/scaling of the eigenvectors, but here the deviation between the solutions would be massive.
Regards
numpy.linalg.eig normalizes the eigenvectors and returns them as the columns of the result:
eig_vectors = np.linalg.eig(np.array([[-2, 1], [2, -1]]))[1]
vec_1 = eig_vectors[:,0]
vec_2 = eig_vectors[:,1]
Now these two vectors are just normalized versions of the vectors you calculated, i.e.
print(vec_1 * np.sqrt(2)) # where root 2 is the magnitude of [-1, 1]
print(vec_2 * np.sqrt(5)) # where root 5 is the magnitude of [2, 1]
So, bottom line, both sets of calculations are equivalent; NumPy just likes to normalize the results.
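As a quick sanity check (my addition, not from the answer), you can confirm that each returned column really satisfies A v = λ v:
import numpy as np

A = np.array([[-2, 1], [2, -1]])
w, V = np.linalg.eig(A)
# Column i of V should satisfy A @ V[:, i] == w[i] * V[:, i].
print(np.allclose(A @ V, V * w))  # True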
So first off, I think what I'm trying to achieve is some sort of Cartesian product but elementwise, across the columns only.
What I'm trying to do is, if you have multiple 2D arrays of size [ (N,D1), (N,D2), (N,D3)...(N,Dn) ]
The result should be a combinatorial product across axis=1, such that the final result is of shape (N, D), where D = D1*D2*D3*...*Dn.
e.g.
A = np.array([[1,2],
              [3,4]])
B = np.array([[10,20,30],
              [5,6,7]])
cartesian_product( [A,B], axis=1 )
>> np.array([[ 1*10, 1*20, 1*30, 2*10, 2*20, 2*30 ],
             [ 3*5,  3*6,  3*7,  4*5,  4*6,  4*7 ]])
and extendable to cartesian_product([A,B,C,D...], axis=1)
e.g.
A = np.array([[1,2],
              [3,4]])
B = np.array([[10,20],
              [5,6]])
C = np.array([[50, 0],
              [60, 8]])
cartesian_product( [A,B,C], axis=1 )
>> np.array([[ 1*10*50, 1*10*0, 1*20*50, 1*20*0, 2*10*50, 2*10*0, 2*20*50, 2*20*0],
             [ 3*5*60,  3*5*8,  3*6*60,  3*6*8,  4*5*60,  4*5*8,  4*6*60,  4*6*8]])
I have a working solution that essentially creates an empty (N, D) matrix and then broadcasts a columnwise vector product for each column, inside nested for loops over each matrix in the provided list. Clearly this is horrible once the arrays get larger!
Is there an existing solution within numpy or tensorflow for this? Potentially one that is efficiently parallelizable (a tensorflow solution would be wonderful, but numpy is OK; as long as the vector logic is clear, it shouldn't be hard to make a tf equivalent).
I'm not sure if I need to use einsum, tensordot, meshgrid, or some combination thereof to achieve this. I have a solution, but only for single-dimension vectors, from https://stackoverflow.com/a/11146645/2123721, even though that solution claims to work for arrays of arbitrary dimension (which appears to mean vectors). With that one I can do a .prod(axis=1), but again this is only valid for vectors.
thanks!
Here's one approach that does this iteratively, in an accumulating manner, using broadcasting after extending dimensions for each pair from the list of arrays for elementwise multiplication -
L = [A,B,C] # list of arrays
n = L[0].shape[0]
out = (L[1][:,None]*L[0][:,:,None]).reshape(n,-1)
for i in L[2:]:
    out = (i[:,None]*out[:,:,None]).reshape(n,-1)
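A hedged generalization of that loop into a reusable helper (cartesian_product_cols is my own name, not from the answer); later arrays vary fastest, matching the expected outputs above:
import numpy as np

def cartesian_product_cols(arrays):
    # Accumulate broadcasted columnwise products over the list of (N, Di) arrays.
    n = arrays[0].shape[0]
    out = arrays[0]
    for a in arrays[1:]:
        out = (a[:, None] * out[:, :, None]).reshape(n, -1)
    return out

A = np.array([[1, 2], [3, 4]])
B = np.array([[10, 20], [5, 6]])
C = np.array([[50, 0], [60, 8]])
print(cartesian_product_cols([A, B, C]))  # shape (2, 8), as in the example above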
I want to normalize the pixel values of an image to the range [0, 1] for each channel (R, G, B).
Minimal Example
#!/usr/bin/env python
import numpy as np
import scipy.misc
from sklearn import preprocessing
original = scipy.misc.imread('Crocodylus-johnsoni-3.jpg')
scipy.misc.imshow(original)
transformed = np.zeros(original.shape, dtype=np.float64)
scaler = preprocessing.MinMaxScaler()
for channel in range(3):
    transformed[:, :, channel] = scaler.fit_transform(original[:, :, channel])
scipy.misc.imsave("transformed.jpg", transformed)
What happens
Taking https://commons.wikimedia.org/wiki/File:Crocodylus-johnsoni-3.jpg,
I get the following "normalized" result:
As you can see, there are lines from top to bottom on the right side. What happened there? It seems to me that the normalization went wrong. If so: how do I fix it?
In scikit-learn, a two-dimensional array with shape (m, n) is usually interpreted as a collection of m samples, with each sample having n features.
MinMaxScaler.fit_transform() transforms each feature, so each column of your array is transformed independently of the others. That results in the vertical "stripes" in the image.
It looks like you intended to scale each color channel independently. To do that using MinMaxScaler, reshape the input so that each channel becomes one column. That is, if the original image has shape (m, n, 3), reshape it to (m*n, 3) before passing it to the fit_transform() method, and then restore the shape of the result to create the transformed array.
For example,
ascolumns = original.reshape(-1, 3)
t = scaler.fit_transform(ascolumns)
transformed = t.reshape(original.shape)
With this, transformed looks like this:
The image looks exactly like the original, because it turns out that in the array original, the minimum and maximum are 0 and 255, respectively, in each channel:
In [41]: original.min(axis=(0, 1))
Out[41]: array([0, 0, 0], dtype=uint8)
In [42]: original.max(axis=(0, 1))
Out[42]: array([255, 255, 255], dtype=uint8)
So all fit_transform does in this case is transform all the input values to the floating point range [0.0, 1.0] uniformly. If the minimum or maximum was different in one of the channels, the transformed image would look different.
By the way, it is not difficult to perform the transform using pure numpy. (I'm using Python 3, so in the following, the division automatically casts the result to floating point. If you are using Python 2, you'll need to convert one of the arguments to floating point, or use from __future__ import division.)
In [58]: omin = original.min(axis=(0, 1), keepdims=True)
In [59]: omax = original.max(axis=(0, 1), keepdims=True)
In [60]: xformed = (original - omin)/(omax - omin)
In [61]: np.allclose(xformed, transformed)
Out[61]: True
(One potential problem with that method is that it will produce a divide-by-zero warning and NaN values if one of the channels is constant, because then one of the values in omax - omin will be 0.)
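To guard against that (a sketch of my own, not from the answer above), clamp the zero ranges before dividing:
import numpy as np

def minmax_scale_channels(img):
    # Per-channel min-max scaling to [0.0, 1.0]; constant channels map to 0.0.
    img = img.astype(np.float64)
    omin = img.min(axis=(0, 1), keepdims=True)
    omax = img.max(axis=(0, 1), keepdims=True)
    rng = omax - omin
    rng[rng == 0] = 1.0  # avoid dividing by zero for constant channels
    return (img - omin) / rng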