I only managed to extract one diagonal using Numpy einsum. How do I get the other diagonals like [6, 37, 68, 99] with help of einsum?
import numpy as np

x = np.arange(1, 26).reshape(5, 5)
y = np.arange(26, 51).reshape(5, 5)
z = np.arange(51, 76).reshape(5, 5)
t = np.arange(76, 101).reshape(5, 5)
p = np.arange(101, 126).reshape(5, 5)
a4 = np.array([x, y, z, t, p])
Extracting one diagonal:
>>> np.einsum('iii->i', a4)
array([  1,  32,  63,  94, 125])
I don't have any "easy" solution using einsum but it is quite simple with a for loop:
import numpy as np
# Generation of a 3x3x3 matrix
x = np.arange(1, 10).reshape(3,3)
y = np.arange(11, 20).reshape(3,3)
z = np.arange(21, 30).reshape(3,3)
M = np.array([x, y, z])
# Generation of the index
I = np.arange(0,len(M))
# Generation of all the possible diagonals
for ii in [1, -1]:
    for jj in [1, -1]:
        print(M[I[::ii], I[::jj], I])
# OUTPUT:
# [ 1 15 29]
# [ 7 15 23]
# [21 15 9]
# [27 15 3]
We fix the index of the last dimension and we find all the possible combinations of backward and forward indexing for the other dimensions.
Do you realize that this einsum is the same as:
In [64]: a4=np.arange(1,126).reshape(5,5,5)
In [65]: i=np.arange(5)
In [66]: a4[i,i,i]
Out[66]: array([ 1, 32, 63, 94, 125])
It should be easy to tweak the indices to get other diagonals.
In [73]: a4[np.arange(4),np.arange(1,5),np.arange(4)]
Out[73]: array([ 6, 37, 68, 99])
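If several of these shifted diagonals are needed, the same indexing idea can be wrapped in a small helper (a sketch; shifted_diag is a made-up name):
import numpy as np

def shifted_diag(a, offsets):
    # start each axis at its own offset and walk all axes in lockstep
    n = min(size - off for size, off in zip(a.shape, offsets))
    idx = tuple(np.arange(off, off + n) for off in offsets)
    return a[idx]

a4 = np.arange(1, 126).reshape(5, 5, 5)
print(shifted_diag(a4, (0, 0, 0)))   # [  1  32  63  94 125]  (main diagonal)
print(shifted_diag(a4, (0, 1, 0)))   # [ 6 37 68 99]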
That `iii->i` producing the main diagonal is more of a happy accident than a designed feature. Don't try to push it.
I am trying to create a Python program in which the user inputs a set of data and the program outputs a graph with a line/polynomial that best fits the data.
This is the code:
from matplotlib import pyplot as plt
import numpy as np
x = []
y = []
x_num = 0
while True:
    sequence = int(input("Input 1 number in the sequence, type 9040321 to stop"))
    if sequence == 9040321:
        poly = np.polyfit(x, y, deg=2, rcond=None, full=False, w=None, cov=False)
        plt.plot(poly)
        plt.scatter(x, y, c="blue", label="data")
        plt.legend()
        plt.show()
        break
    else:
        y.append(sequence)
        x.append(x_num)
        x_num += 1
I used the polynomial fit after entering 1, 2, 4, 8 as four separate inputs. Matplotlib produced a graph, but for a degree of 2 the output was the following image:
This is clearly not correct, but I am unsure what the problem is. I think it has something to do with the degree; however, when I change the degree to 3, it still does not fit. I am looking for a curve like y = sqrt(x) that goes through each of the points, and when that is not possible, the line that fits best.
Edit: I added a print(poly) statement and, for the input above, it prints [0.75 0.05 1.05]. I do not know what to make of this.
Approximation by a second degree polynomial
np.polyfit gives the coefficients of a polynomial close to the given points. To plot the polynomial as a smooth curve with matplotlib, you need to calculate a lot of x,y pairs. Using np.linspace(start, stop, numsteps) for the xs, numpy's vectorization allows calculating all the corresponding ys in one go. E.g. ys = a * x**2 + b * x + c.
from matplotlib import pyplot as plt
import numpy as np
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 2, 4, 8, 16, 32, 64]
plt.scatter(x, y, color='crimson', label='given points')
poly = np.polyfit(x, y, deg=2, rcond=None, full=False, w=None, cov=False)
xs = np.linspace(min(x), max(x), 100)
ys = poly[0] * xs ** 2 + poly[1] * xs + poly[2]
plt.plot(xs, ys, color='dodgerblue', label=f'$({poly[0]:.2f})x^2+({poly[1]:.2f})x + ({poly[2]:.2f})$')
plt.legend()
plt.show()
Higher degree approximating polynomials
Given N points, a polynomial of degree N-1 can pass exactly through each of them. Here is an example with 7 points and polynomials of up to degree 6:
from matplotlib import pyplot as plt
import numpy as np
x = [0, 1, 2, 3, 4, 5, 6]
y = [1, 2, 4, 8, 16, 32, 64]
plt.scatter(x, y, color='black', zorder=3, label='given points')
for degree in range(0, len(x)):
    poly = np.polyfit(x, y, deg=degree, rcond=None, full=False, w=None, cov=False)
    xs = np.linspace(min(x) - 0.5, max(x) + 0.5, 100)
    ys = sum(poly_i * xs**i for i, poly_i in enumerate(poly[::-1]))
    plt.plot(xs, ys, label=f'degree {degree}')
plt.legend()
plt.show()
Another example
x = [0, 1, 2, 3, 4]
y = [1, 1, 6, 5, 5]
import numpy as np
import matplotlib.pyplot as plt
x = [1, 2, 3, 4]
y = [1, 2, 4, 8]
coeffs = np.polyfit(x, y, 2)
print(coeffs)
poly = np.poly1d(coeffs)
print(poly)
x_cont = np.linspace(0, 4, 81)
y_cont = poly(x_cont)
plt.scatter(x, y)
plt.plot(x_cont, y_cont)
plt.grid(1)
plt.show()
Executing the code, you get the graph above, and this is printed in the terminal:
[ 0.75 -1.45 1.75]
      2
0.75 x - 1.45 x + 1.75
It seems to me that you had false expectations about the output of polyfit.
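For reference, a minimal sketch of how the plotting step in the original loop could be fixed, assuming the same 1, 2, 4, 8 inputs from the question: evaluate the fitted polynomial with np.polyval over a fine grid instead of passing the coefficient array itself to plt.plot.
from matplotlib import pyplot as plt
import numpy as np

x = [0, 1, 2, 3]
y = [1, 2, 4, 8]
poly = np.polyfit(x, y, deg=2)
xs = np.linspace(min(x), max(x), 100)
plt.plot(xs, np.polyval(poly, xs), label="degree-2 fit")  # evaluate, don't plot the coefficients
plt.scatter(x, y, c="blue", label="data")
plt.legend()
plt.show()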
In numpy / PyTorch, I have two matrices, e.g. X=[[1,2],[3,4],[5,6]], Y=[[1,1],[2,2]]. I would like to dot product every row of X with every row of Y, and have the results
[[3, 6],[7, 14], [11,22]]
How do I achieve this? Thanks!
I think this is what you are looking for:
import numpy as np
x= [[1,2],[3,4],[5,6]]
y= [[1,1],[2,2]]
x = np.asarray(x) #convert list to numpy array
y = np.asarray(y) #convert list to numpy array
product = np.dot(x, y.T)
.T transposes the matrix, which is necessary in this case so the shapes line up for the multiplication (because of the way dot products are defined). print(product) will output:
[[ 3 6]
[ 7 14]
[11 22]]
Using einsum
np.einsum('ij,kj->ik', X, Y)
array([[ 3, 6],
[ 7, 14],
[11, 22]])
In PyTorch, you can achieve this using torch.mm(a, b) or torch.matmul(a, b), as shown below:
import numpy as np
import torch

x = np.array([[1,2],[3,4],[5,6]])
y = np.array([[1,1],[2,2]])
x = torch.from_numpy(x)
y = torch.from_numpy(y)
# print(torch.matmul(x, torch.t(y)))
print(torch.mm(x, torch.t(y)))
output:
tensor([[ 3, 6],
[ 7, 14],
[11, 22]], dtype=torch.int32)
I have the following equation:
result = (v - mu)^T Sigma^(-1) (v - mu)
where v, mu ∈ R^3, Sigma ∈ R^(3x3), and the result is a scalar value. Implementing this in numpy is no problem:
result = np.transpose(v - mu) @ Sigma_inv @ (v - mu)
Now I have a bunch of v-vectors (let's call them V ∈ R^(3xn)) and I would like to execute the above equation in a vectorized manner so that, as a result, I get a new vector Result ∈ R^(1xn).
# pseudocode
Result = np.zeros((n, 1))
for i, v in enumerate(V.T):
    Result[i, :] = np.transpose(v - mu) @ Sigma_inv @ (v - mu)
I looked at np.vectorize but the documentation suggests that it's just the same as looping over all entries, which I would prefer not to do. What would be an elegant vectorized solution?
As a side note: n might be quite large, and an n x n matrix will certainly not fit into my memory!
Edit: working code sample
import numpy as np
S = np.array([[1, 2], [3, 4]])
V = np.array([[10, 11, 12, 13, 14, 15], [20, 21, 22, 23, 24, 25]])
Res = np.zeros((V.shape[1], 1))
for i in range(V.shape[1]):
    v = np.transpose(np.atleast_2d(V[:, i]))
    Res[i, :] = (np.transpose(v) @ S @ v)[0][0]
print(Res)
Using a combination of matrix-multiplication and np.einsum -
np.einsum('ij,ij->j',V,S.dot(V))
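As a quick sanity check against the loop from the question's working code sample (same S and V assumed here):
import numpy as np

S = np.array([[1, 2], [3, 4]])
V = np.array([[10, 11, 12, 13, 14, 15], [20, 21, 22, 23, 24, 25]])

res_einsum = np.einsum('ij,ij->j', V, S.dot(V))
res_loop = np.array([V[:, i] @ S @ V[:, i] for i in range(V.shape[1])])  # v^T S v per column
print(np.allclose(res_einsum, res_loop))   # True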
Does this work for you?
res = np.diag(V.T @ S @ V).reshape(-1, 1)
It seems to provide the same result as you want.
import numpy as np
S = np.array([[1, 2], [3, 4]])
V = np.array([[10, 11, 12, 13, 14, 15], [20, 21, 22, 23, 24, 25]])
Res = np.zeros((V.shape[1], 1))
for i in range(V.shape[1]):
    v = np.transpose(np.atleast_2d(V[:, i]))
    Res[i, :] = (np.transpose(v) @ S @ v)[0][0]
res = np.diag(V.T @ S @ V).reshape(-1, 1)
print(np.all(np.isclose(Res, res)))
# output: True
Although there is probably a more memory efficient solution using np.einsum.
Here is a simple solution:
import numpy as np
S = np.array([[1, 2], [3,4]])
V = np.array([[10, 11, 12, 13, 14, 15],[20, 21, 22, 23, 24, 25]])
Res = np.sum((V.T @ S) * V.T, axis=1)
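(V.T @ S) * V.T multiplies each row of V.T (i.e. each vector v) elementwise with the corresponding row of V.T @ S, so summing along axis=1 yields v^T S v for every column of V without ever forming an n x n matrix.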
These are multiplications of stacks of matrices/vectors. numpy.matmul can do that after bringing S and V into the correct shapes:
S = S[np.newaxis, :, :]
VT = V.T[:, np.newaxis, :]
V = VT.transpose(0, 2, 1)
tmp = np.matmul(S, V)
Res = np.matmul(VT, tmp)
print(Res)
#[[[2700]]
# [[3040]]
# [[3400]]
# [[3780]]
# [[4180]]
# [[4600]]]
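Note that Res here has shape (n, 1, 1), because matmul produces a stack of 1x1 results; Res.ravel() (or Res[:, 0, 0]) gives the flat length-n vector if that is what you need.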
We have the following sets of data which are already given to us: A, B, C represent variances and D, E, F represent covariances. I would like to arrange these sets of data in matrix form:
matrix:    Z  Y  X
        Z  A  D  F
        Y  D  B  E
        X  F  E  C
How can I arrange the sets of data in matrix form, considering that I don't know the number of variances/covariances in advance?
Then I would like to multiply the resulting matrix:
    matrix * (G, H, I) * (G, H, I)^T
where the last factor is a column vector.
The second question is: how do I multiply matrices of dimensions 3x3 by 1x3 and by 3x1?
You can use numpy.matrix and numpy.array to create your own matrix and arrays,
In [1]: import numpy as np
   ...: matrix1 = np.matrix([[1, 4, 6], [4, 2, 5], [6, 5, 3]])
   ...: array1 = np.array([7, 8, 9])
Second question: now use numpy.transpose and broadcasting to build the outer product of array1 with itself,
In [2]: matrix2 = array1*np.transpose([array1])
In [3]: matrix2
Out[3]: array([[49, 56, 63],
[56, 64, 72],
[63, 72, 81]])
Finally, multiply both matrix with numpy.matmul,
In [4]: matrix3 = np.matmul(matrix1, matrix2)
In [5]: matrix3
Out[5]: matrix([[651, 744, 837],
[623, 712, 801],
[763, 872, 981]])
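To address the "unknown number of variances/covariances" part of the question, here is a sketch that builds the symmetric matrix for any size (build_cov_matrix is a hypothetical helper name; covariances are given row by row for the upper triangle) and reproduces the same product as above:
import numpy as np

def build_cov_matrix(variances, covariances):
    # n variances on the diagonal, n*(n-1)/2 covariances mirrored into both triangles
    n = len(variances)
    m = np.diag(np.asarray(variances, dtype=float))
    iu = np.triu_indices(n, k=1)
    m[iu] = covariances
    m[(iu[1], iu[0])] = covariances
    return m

M = build_cov_matrix([1, 2, 3], [4, 6, 5])   # same values as matrix1 above
g = np.array([7, 8, 9])
print(np.matmul(M, np.outer(g, g)))          # same result as matrix3 above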
I am having an issue with IPython / NumPy. I want to do the following operation:
x^T.x
with x a vector and x^T the transpose of x. x is extracted from a txt file with the instruction:
x = np.loadtxt('myfile.txt')
The problem is that if I use the transpose function
np.transpose(x)
and then use the shape function to check the size of x, I get the same dimensions for x and x^T. NumPy reports the size with an uppercase L after each dimension, e.g.
print x.shape
print np.transpose(x).shape
(3L, 5L)
(3L, 5L)
Does anybody know how to solve this, and compute x^T.x as a matrix product?
Thank you!
What np.transpose does is reverse the shape tuple: feed it an array of shape (m, n) and it returns an array of shape (n, m); feed it an array of shape (n,) and it returns the same array with shape (n,).
What you are implicitly expecting is for numpy to treat your 1D vector as a 2D array of shape (1, n), which would get transposed into an (n, 1) column vector. NumPy will not do that on its own, but you can tell it that's what you want, e.g.:
>>> a = np.arange(4)
>>> a
array([0, 1, 2, 3])
>>> a.T
array([0, 1, 2, 3])
>>> a[np.newaxis, :].T
array([[0],
[1],
[2],
[3]])
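With that 2-D column in hand, the product the question asks for follows directly (continuing the example above):
>>> col = a[np.newaxis, :].T
>>> col.T @ col                  # x^T.x, as a 1x1 array
array([[14]])
>>> (col.T @ col).item()         # as a plain scalar
14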
As explained by others, transposition won't "work" like you want it to for 1D arrays.
You might want to use np.atleast_2d to have a consistent scalar product definition:
def vprod(x):
y = np.atleast_2d(x)
return np.dot(y.T, y)
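For a 1-D input this returns the n x n outer product, since np.atleast_2d turns x into a 1 x n row; swap the dot order for the 1 x 1 scalar. A small illustration (assuming numpy imported as np and vprod defined as above):
>>> x = np.array([1, 2, 3])
>>> vprod(x)
array([[1, 2, 3],
       [2, 4, 6],
       [3, 6, 9]])
>>> np.dot(np.atleast_2d(x), np.atleast_2d(x).T)   # the scalar x^T.x, as a 1x1 array
array([[14]])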
I had the same problem; I used numpy.matrix to solve it:
# assuming x is a list or a numpy 1d-array
>>> x = [1,2,3,4,5]
# convert it to a numpy matrix
>>> x = np.matrix(x)
>>> x
matrix([[1, 2, 3, 4, 5]])
# take the transpose of x
>>> x.T
matrix([[1],
[2],
[3],
[4],
[5]])
# use * for the matrix product
>>> x*x.T
matrix([[55]])
>>> (x*x.T)[0,0]
55
>>> x.T*x
matrix([[ 1, 2, 3, 4, 5],
[ 2, 4, 6, 8, 10],
[ 3, 6, 9, 12, 15],
[ 4, 8, 12, 16, 20],
[ 5, 10, 15, 20, 25]])
While using numpy matrices may not be the best way to represent your data from a coding perspective, it's pretty good if you are going to do a lot of matrix operations!
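Note that the NumPy documentation now recommends plain ndarrays over np.matrix for new code; the same computations can be done with an explicit 2-D array and the @ operator (a small sketch):
import numpy as np

x = np.array([[1, 2, 3, 4, 5]])   # an explicit 1x5 row, as an ordinary 2-D array
print(x @ x.T)                    # [[55]]  the scalar product as a 1x1 array
print((x @ x.T).item())           # 55
print(x.T @ x)                    # the 5x5 outer product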
For starters, the L just means that the type is a long int. This shouldn't be an issue. You'll have to give additional information about your problem, though, since I cannot reproduce it with a simple test case:
In [1]: import numpy as np
In [2]: a = np.arange(12).reshape((4,3))
In [3]: a
Out[3]:
array([[ 0, 1, 2],
[ 3, 4, 5],
[ 6, 7, 8],
[ 9, 10, 11]])
In [4]: a.T #same as np.transpose(a)
Out[4]:
array([[ 0, 3, 6, 9],
[ 1, 4, 7, 10],
[ 2, 5, 8, 11]])
In [5]: a.shape
Out[5]: (4, 3)
In [6]: np.transpose(a).shape
Out[6]: (3, 4)
There is likely something subtle going on with your particular case which is causing problems. Can you post the contents of the file that you're reading into x?
This is either the inner or outer product of the two vectors, depending on the orientation you assign to them. Here is how to calculate either without changing x.
import numpy
x = numpy.array([1, 2, 3])
inner = x.dot(x)           # scalar: 1*1 + 2*2 + 3*3 = 14
outer = numpy.outer(x, x)  # 3x3 matrix with entries x[i]*x[j]
The file 'myfile.txt' contains lines such as
5.100000 3.500000 1.400000 0.200000 1
4.900000 3.000000 1.400000 0.200000 1
Here is the code I run:
import numpy as np
data = np.loadtxt('iris.txt')
x = data[1,:]
print x.shape
print np.transpose(x).shape
print x*np.transpose(x)
print np.transpose(x)*x
And I get as a result
(5L,)
(5L,)
[ 24.01 9. 1.96 0.04 1. ]
[ 24.01 9. 1.96 0.04 1. ]
I would expect one of the two last results to be a scalar instead of a vector, because x^T.x (or x.x^T) should give a scalar.
b = np.array([1, 2, 2])
print(b)
print(np.transpose([b]))
print("rows, cols: ", b.shape)
print("rows, cols: ", np.transpose([b]).shape)
Results in
[1 2 2]
[[1]
[2]
[2]]
rows, cols: (3,)
rows, cols: (3, 1)
Here (3,) means a 1-D array with 3 elements; there is no second dimension.
However, if you want the transpose of a matrix A, np.transpose(A) is the solution. In short, wrapping in [] adds a dimension: it turns a vector into a matrix, and a matrix into a higher-dimensional tensor.
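A short demonstration of that last point (a sketch):
import numpy as np

A = np.arange(6).reshape(2, 3)
print(A.shape)                  # (2, 3)
print(np.transpose(A).shape)    # (3, 2)  ordinary 2-D transpose behaves as expected
print(np.transpose([A]).shape)  # (3, 2, 1)  wrapping in [] first adds a leading axis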