First off, I know there is an identical question with an answer on SO here: FFT in Matlab and numpy / scipy give different results
But the answer given there does not work on the test I did.
When I do an FFT with numpy.fft, I get the following result:
In [30]: numpy.fft.fft(numpy.array([1+0.5j, 3+0j, 2+0j, 8+3j]))
Out[30]: array([ 14.+3.5j, -4.+5.5j, -8.-2.5j, 2.-4.5j])
which is identical to the output of Octave (in my case):
octave:39> fft([1+0.5j,3+0j,2+0j,8+3j])
ans =
Columns 1 through 3:
14.0000 + 3.5000i -4.0000 + 5.5000i -8.0000 - 2.5000i
Column 4:
2.0000 - 4.5000i
But if I transpose the list in Octave and Python, I get:
In [9]: numpy.fft.fft(numpy.array([1+0.5j, 3+0j, 2+0j, 8+3j]).transpose())
Out[9]: array([ 14.+3.5j, -4.+5.5j, -8.-2.5j, 2.-4.5j])
and for Octave:
octave:40> fft([1+0.5j,3+0j,2+0j,8+3j]')
ans =
14.0000 - 3.5000i
2.0000 + 4.5000i
-8.0000 + 2.5000i
-4.0000 - 5.5000i
I also tried to reshape in Python, but this results in:
In [33]: numpy.fft.fft(numpy.reshape(numpy.array([1+0.5j,3+0j,2+0j,8+3j]), (4,1)))
Out[33]:
array([[ 1.+0.5j],
[ 3.+0.j ],
[ 2.+0.j ],
[ 8.+3.j ]])
How do I get the same result in Python as in Octave?
(I don't have MATLAB to test; otherwise I would check whether it returns the same as Octave, just to be sure.)
Why NumPy and Octave gave different results:
The inputs were different. The ' operator in Octave returns the complex conjugate transpose, not the plain transpose (which is .'):
octave:6> [1+0.5j,3+0j,2+0j,8+3j]'
ans =
1.0000 - 0.5000i
3.0000 - 0.0000i
2.0000 - 0.0000i
8.0000 - 3.0000i
So to make NumPy's result match Octave's:
In [115]: np.fft.fft(np.array([1+0.5j, 3+0j, 2+0j, 8+3j]).conj()).reshape(-1, 1)
Out[115]:
array([[ 14.-3.5j],
[ 2.+4.5j],
[ -8.+2.5j],
[ -4.-5.5j]])
octave:7> fft([1+0.5j,3+0j,2+0j,8+3j]')
ans =
14.0000 - 3.5000i
2.0000 + 4.5000i
-8.0000 + 2.5000i
-4.0000 - 5.5000i
In NumPy, the transpose of a 1D array is the same 1D array.
That's why fft(np.array([1+0.5j, 3+0j, 2+0j, 8+3j]).transpose()) returns a 1D array.
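A quick sketch illustrating both points, i.e. that .T (or .transpose()) is a no-op on a 1D array, and that conj() reproduces what Octave's ' does to the values:
import numpy as np

a = np.array([1+0.5j, 3+0j, 2+0j, 8+3j])
print(a.T.shape)  # (4,) -- transposing a 1D array changes nothing
print(a.conj())   # [1.-0.5j 3.-0.j 2.-0.j 8.-3.j] -- the conjugate that Octave's ' produces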
Reshaping after taking the FFT of a 1D array:
You could take the FFT first and then reshape. To make a 1D array 2-dimensional, you could use reshape to obtain a column-like array of shape (4, 1), or use np.atleast_2d followed by a transpose:
In [115]: np.fft.fft(np.array([1+0.5j, 3+0j, 2+0j, 8+3j]).conj()).reshape(-1, 1)
Out[115]:
array([[ 14.-3.5j],
[ 2.+4.5j],
[ -8.+2.5j],
[ -4.-5.5j]])
or
In [116]: np.atleast_2d(np.fft.fft(np.array([1+0.5j, 3+0j, 2+0j, 8+3j]).conj())).T
Out[116]:
array([[ 14.-3.5j],
[ 2.+4.5j],
[ -8.+2.5j],
[ -4.-5.5j]])
Taking the FFT of a 2D array:
np.fft.fft takes the FFT over the last axis by default.
This is why reshaping to shape (4, 1) did not work. Instead, reshape the array to (1, 4):
In [117]: np.fft.fft(np.reshape(np.array([1+0.5j,3+0j,2+0j,8+3j]), (1,4)).conj()).T
Out[117]:
array([[ 14.-3.5j],
[ 2.+4.5j],
[ -8.+2.5j],
[ -4.-5.5j]])
Or you could use np.matrix to make a 2D matrix of shape (1, 4).
Again the FFT is taken over the last axis and returns an array of shape (1, 4), which you can then transpose to get the desired result:
In [121]: np.fft.fft(np.matrix([1+0.5j, 3+0j, 2+0j, 8+3j]).conj()).T
Out[121]:
array([[ 14.-3.5j],
[ 2.+4.5j],
[ -8.+2.5j],
[ -4.-5.5j]])
This, perhaps, gives you the neatest syntax. But be aware that this passes an np.matrix as input and returns an np.ndarray as output.
As Warren Weckesser pointed out, if you already have a 2D NumPy array, and wish to take the FFT of its columns, then you could pass axis=0 to the call to np.fft.fft.
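For example, assuming you start from a conjugated column vector of shape (4, 1) (a sketch reusing the same data):
conj_col = np.array([1+0.5j, 3+0j, 2+0j, 8+3j]).conj().reshape(4, 1)
np.fft.fft(conj_col, axis=0)
# array([[ 14.-3.5j],
#        [  2.+4.5j],
#        [ -8.+2.5j],
#        [ -4.-5.5j]])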
Also, the matrix class (unlike the ndarray class) has an H property, which returns the complex conjugate transpose. Thus
In [114]: np.fft.fft(np.matrix([1+0.5j, 3+0j, 2+0j, 8+3j]).H, axis=0)
Out[114]:
array([[ 14.-3.5j],
[ 2.+4.5j],
[ -8.+2.5j],
[ -4.-5.5j]])
Related
I am trying to optimize a transformation problem and let numpy do as much heavy-lifting as possible.
In my case I have a range of coordinate sets that each have to be dotted with corresponding indexed roll/pitch/yaw values.
Programmatically it looks like this:
In [1]: import numpy as np
   ...: from ds.tools.math import rotation_array
   ...: from math import pi
   ...:
   ...: rpy1 = rotation_array(pi, 0.001232, 1.234243)
   ...: rpy2 = rotation_array(pi/1, 1.325, 0.5674543)
   ...:
   ...: a1 = np.array([[-9.64996132, -5.42488639, -3.08443],
   ...:                [-8.08814188, -4.56431952, -3.01381]])
   ...:
   ...: a2 = np.array([[-6.91346292, -3.91137259, -2.82621],
   ...:                [-4.34534536, -2.34535546, -4.87692]])
In [2]: rpy1
Out[2]:
array([[ 3.30235500e-01,  9.43897768e-01, -1.23199969e-03],
       [ 9.43898485e-01, -3.30235750e-01,  1.22464587e-16],
       [-4.06850342e-04, -1.16288264e-03, -9.99999241e-01]])
In [3]: rpy2
Out[3]:
array([[ 2.05192356e-01,  1.30786082e-01, -9.69943863e-01],
       [ 5.37487075e-01, -8.43271987e-01,  2.97991829e-17],
       [-8.17926489e-01, -5.21332290e-01, -2.43328794e-01]])
Then I dot the coordinates in a1 with rpy1 and a2 with rpy2
In [4]: a1.dot(rpy1)
Out[4]:
array([[-8.30604694, -7.31349869, 3.09631641],
[-6.97801968, -6.12357288, 3.0237723 ]])
In [5]: a2.dot(rpy2)
Out[5]:
array([[-1.20926993, 3.86756074, 7.3933692 ],
[ 1.83673215, 3.95195774, 5.40143613]])
Instead of iterating over lists of a's and rpy's, I want to do the whole thing in one operation. I was hoping to get that effect with the following code, so that each set of coordinates in a12 would be dotted with the correspondingly indexed array from rpy_a.
But as is clear from the output, I am getting more than I was hoping for:
In [6]: rpy_a = np.array([rpy1, rpy2])
...:
...: a12 = np.array([a1, a2])
In [7]: a12.dot(rpy_a)
Out[7]:
array([[[[-8.30604694, -7.31349869, 3.09631641],
[-2.37306761, 4.92058705, 10.1104514 ]],
[[-6.97801968, -6.12357288, 3.0237723 ],
[-1.6478126 , 4.36234287, 8.57839034]]],
[[[-5.9738597 , -5.23064061, 2.83472524],
[-1.20926993, 3.86756074, 7.3933692 ]],
[[-3.64678058, -3.32137028, 4.88226976],
[ 1.83673215, 3.95195774, 5.40143613]]]])
What I need is:
array([[[-8.30604694, -7.31349869, 3.09631641],
[-6.97801968, -6.12357288, 3.0237723 ]],
[[-1.20926993, 3.86756074, 7.3933692 ],
[ 1.83673215, 3.95195774, 5.40143613]]])
Can anyone tell me how to achieve this?
EDIT:
Runnable example:
import numpy as np
rpy1 = np.array([[ 3.30235500e-01, 9.43897768e-01, -1.23199969e-03],
[ 9.43898485e-01, -3.30235750e-01, 1.22464587e-16],
[-4.06850342e-04, -1.16288264e-03, -9.99999241e-01]])
rpy2 = np.array([[ 2.05192356e-01, 1.30786082e-01, -9.69943863e-01],
[ 5.37487075e-01, -8.43271987e-01, 2.97991829e-17],
[-8.17926489e-01, -5.21332290e-01, -2.43328794e-01]])
a1 = np.array([[-9.64996132, -5.42488639, -3.08443],
[-8.08814188, -4.56431952, -3.01381]])
a2 = np.array([[-6.91346292, -3.91137259, -2.82621],
[-4.34534536, -2.34535546, -4.87692]])
print(a1.dot(rpy1))
# array([[-8.30604694, -7.31349869, 3.09631641],
# [-6.97801968, -6.12357288, 3.0237723 ]])
print(a2.dot(rpy2))
# array([[-1.20926993, 3.86756074, 7.3933692 ],
# [ 1.83673215, 3.95195774, 5.40143613]])
rpy_a = np.array([rpy1, rpy2])
a12 = np.array([a1, a2])
print(a12.dot(rpy_a))
# Result:
# array([[[[-8.30604694, -7.31349869, 3.09631641],
# [-2.37306761, 4.92058705, 10.1104514 ]],
# [[-6.97801968, -6.12357288, 3.0237723 ],
# [-1.6478126 , 4.36234287, 8.57839034]]],
# [[[-5.9738597 , -5.23064061, 2.83472524],
# [-1.20926993, 3.86756074, 7.3933692 ]],
# [[-3.64678058, -3.32137028, 4.88226976],
# [ 1.83673215, 3.95195774, 5.40143613]]]])
# Need:
# array([[[-8.30604694, -7.31349869, 3.09631641],
# [-6.97801968, -6.12357288, 3.0237723 ]],
# [[-1.20926993, 3.86756074, 7.3933692 ],
# [ 1.83673215, 3.95195774, 5.40143613]]])
Assuming you want to treat an arbitrary number of arrays rpy1, rpy2, ..., rpyn and a1, a2, ..., an, I suggest explicit first-axis concatenation with explicit broadcasting, simply in the spirit of "explicit is better than implicit":
a12 = np.concatenate([_a[None, ...] for _a in (a1, a2)], axis=0)
rpy_a = np.concatenate([_a[None, ...] for _a in (rpy1, rpy2)], axis=0)
This is equal to:
a12 = np.array([a1, a2])
rpy_a = np.array([rpy1, rpy2])
np.array requires less code and is also faster than my explicit approach, but I just like defining the axis explicitly so that everyone reading the code can guess the resulting shape without executing it.
Whatever path you choose, the important part is the following:
np.einsum('jnk,jkm->jnm', a12, rpy_a)
# Out:
array([[[-8.30604694, -7.3134987 , 3.09631641],
[-6.97801969, -6.12357288, 3.0237723 ]],
[[-1.20926993, 3.86756074, 7.3933692 ],
[ 1.83673215, 3.95195774, 5.40143613]]])
Using the Einstein summation convention, you can define a matrix multiplication (np.matmul, equal to np.dot for 2D arrays) to be executed along a specific axis.
In this case, we define the concatenation axis j (the first dimension, axis 0) as the shared batch axis, along which the operation 'nk,km->nm' (equal to np.matmul; see the signature description of the out parameter) is performed.
It is also possible to simply use np.matmul (or the Python operator @) for the same result:
np.matmul(a12, rpy_a)
a12 @ rpy_a
But again: For the general case, where the concatenation axis or shapes may change, the more explicit np.einsum is preferable. If you know that no changes will be made to shapes etc., np.matmul should be preferred (less code and faster).
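As a quick sanity check (a sketch using the arrays defined above), all three approaches agree with the per-pair loop of dot products:
loop = np.array([a.dot(r) for a, r in zip((a1, a2), (rpy1, rpy2))])
assert np.allclose(loop, np.einsum('jnk,jkm->jnm', a12, rpy_a))
assert np.allclose(loop, a12 @ rpy_a)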
return tf.sets.intersection(set_1, set_2)
I got the following error message:
return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: TypeError: object of type 'RaggedTensor' has no len()
My set_1 and set_2 look like the following:
set_1 -> <tf.RaggedTensor [[0.1733333319425583, 0.2866666615009308, 1.5666667222976685, 1.3966666460037231],
                           [0.5233333110809326, 0.1433333307504654, 0.9599999785423279, 0.5533333420753479]]>
set_2 -> tf.Tensor(
[[-0.03684211 -0.03684211  0.06315789  0.06315789]
 [-0.05755278 -0.05755278  0.08386857  0.08386857]
 [-0.05755278 -0.02219744  0.08386857  0.04851323]
 ...
 [ 0.          0.          1.          1.        ]
 [-0.1363961   0.18180195  1.1363961   0.81819805]
 [ 0.18180195 -0.1363961   0.81819805  1.1363961 ]], shape=(8732, 4), dtype=float64)
set_1 is a ragged tensor and set_2 is a regular tensor, because of:
new_boxes = tf.ragged.constant(new_boxes)
dataset = tf.data.Dataset.from_tensor_slices((images, new_boxes, labels))
This won't work if I don't convert new_boxes to a ragged tensor.
I want to find the intersection of the two sets set_1 and set_2.
How should I fix it, and how should I approach it?
That operation is not supported for ragged tensors, but it works with sparse tensors, so you can just convert the ragged tensor into that with .to_sparse():
return tf.sets.intersection(set_1.to_sparse(), set_2)
You could also convert it into a regular tensor with .to_tensor(), but that would be more expensive, and would also require you to find a default_value that does not appear in the data.
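A minimal runnable sketch of the idea, using hypothetical small integer sets (tf.sets operations are defined for integer and string dtypes):
import tensorflow as tf

set_1 = tf.ragged.constant([[1, 2, 3], [4, 5]])
set_2 = tf.constant([[2, 3, 9], [5, 6, 7]])
result = tf.sets.intersection(set_1.to_sparse(), set_2)
print(tf.sparse.to_dense(result))  # [[2 3] [5 0]] -- row-wise intersections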
The following is a test for conv2d_transpose.
import tensorflow as tf
import numpy as np
x = tf.constant(np.array([[
[[-67], [-77]],
[[-117], [-127]]
]]), tf.float32)
# shape = (3, 3, 1, 1) -> (height, width, output_channels, input_channels) - 3x3x1 filter
f = tf.constant(np.array([
[[[-1]], [[2]], [[-3]]],
[[[4]], [[-5]], [[6]]],
[[[-7]], [[8]], [[-9]]]
]), tf.float32)
conv = tf.nn.conv2d_transpose(x, f, output_shape=(1, 5, 5, 1), strides=[1, 2, 2, 1], padding='VALID')
The result:
tf.Tensor(
[[[[ 67.]
[ -134.]
[ 278.]
[ -154.]
[ 231.]]
[[ -268.]
[ 335.]
[ -710.]
[ 385.]
[ -462.]]
[[ 586.]
[ -770.]
[ 1620.]
[ -870.]
[ 1074.]]
[[ -468.]
[ 585.]
[-1210.]
[ 635.]
[ -762.]]
[[ 819.]
[ -936.]
[ 1942.]
[-1016.]
[ 1143.]]]], shape=(1, 5, 5, 1), dtype=float32)
To my understanding, it should work as described in Figure 4.5 in the doc.
Therefore, the first element (conv[0,0,0,0]) should be -67*-9 = 603. Why does it turn out to be 67?
The result may be explained by the following image. But why is the convolution kernel inverted?
To explain this best, I have made a draw.io figure to explain the results that you obtained.
I guess the above illustration might help explain why the first element of the transposed convolution feature map is 67.
A key thing to note:
Unlike traditional convolution, in transposed convolution each element of the filter is multiplied by an element of the input feature map, and the results of those individual multiplications (the intermediate feature maps) are overlaid on one another to create the final feature map. The stride determines how far apart the overlays are. In our case, stride = 2, hence the filter moves by 2 in both the x and y dimensions after each step over the original downsampled feature map.
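A minimal NumPy sketch of this scatter-and-accumulate view (an illustration, not TensorFlow's actual implementation) reproduces the full 5x5 output above, including conv[0,0,0,0] = 67:
import numpy as np

x = np.array([[ -67,  -77],
              [-117, -127]], dtype=float)
f = np.array([[-1,  2, -3],
              [ 4, -5,  6],
              [-7,  8, -9]], dtype=float)
stride = 2
out = np.zeros((5, 5))
# Each input element scatters a copy of the (un-flipped) filter into the
# output, offset by the stride; overlapping contributions are summed.
for i in range(2):
    for j in range(2):
        out[i*stride:i*stride+3, j*stride:j*stride+3] += x[i, j] * f
print(out[0, 0])  # 67.0 = x[0, 0] * f[0, 0] = (-67) * (-1)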
I have a 2D array, say
x = np.random.rand(10, 3)
array([[ 0.51158246, 0.51214272, 0.1107923 ],
[ 0.5210391 , 0.85308284, 0.63227215],
[ 0.57239625, 0.06276943, 0.1069803 ],
[ 0.71627613, 0.66454443, 0.56771438],
[ 0.24595493, 0.01007568, 0.84959605],
[ 0.99158904, 0.25034553, 0.00144037],
[ 0.43292656, 0.9247424 , 0.5123086 ],
[ 0.07224077, 0.57230282, 0.88522979],
[ 0.55665913, 0.20119776, 0.58865823],
[ 0.55129624, 0.26226446, 0.63070611]])
Then I find the indexes of maximum elements along the columns:
indexes = np.argmax(x, axis=0)
array([5, 6, 7])
So far so good.
But how do I actually get those elements? That is, how do I get some_operation(x, indexes) == [0.99158904, 0.9247424, 0.88522979]?
Note that I need both the indexes and the associated values.
The best I could come up with was x[indexes, range(x.shape[1])], but it looks kinda complicated and inefficient. Is there a more idiomatic way?
You can use np.amax to find max value along an axis.
Using your example (x is the original array in your post):
In[1]: np.argmax(x, axis=0)
Out[1]:
array([5, 6, 7], dtype=int64)
In[2]: np.amax(x, axis=0)
Out[2]:
array([ 0.99158904, 0.9247424 , 0.88522979])
Documentation link
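Since both the indexes and the values are needed, a quick sketch confirming that the fancy-indexing approach from the question agrees with np.amax:
import numpy as np

x = np.random.rand(10, 3)
indexes = np.argmax(x, axis=0)
values = x[indexes, np.arange(x.shape[1])]  # pick the max of each column
assert np.array_equal(values, np.amax(x, axis=0))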
I have a question on numpy.linalg.eig().
Here's my covariance matrix after normalizing/standardizing the data.
lr_cov = np.cov(lr_norm, rowvar = False, ddof = 0)
lr_cov
array([[ 0.95454545, 0.88156287, 0.8601369 ],
[ 0.88156287, 0.95454545, 0.87367031],
[ 0.8601369 , 0.87367031, 0.95454545]])
I use the eig() function as below -- no problems here.
eig_val, eig_vec = np.linalg.eig(lr_cov)
eig_vec
array([[-0.57694452, -0.6184592 , 0.53351967],
[-0.57990975, -0.14982268, -0.80078577],
[-0.57518668, 0.77140222, 0.27221115]])
eig_val
array([ 2.69815538, 0.09525935, 0.07022164])
But when I proceed to sanity check that (Covariance Matrix)*(Eigen vector) = (Eigen Value)*(Eigen Vector), the LHS and RHS in this case don't match up.
lr_cov*eig_vec
array([[-0.55071977, -0.54521067, 0.45889996],
[-0.5112269 , -0.14301256, -0.69962276],
[-0.49473928, 0.67395122, 0.25983791]])
eig_val*eig_vec
array([[-1.55668595, -0.05891402, 0.03746463],
[-1.5646866 , -0.01427201, -0.05623249],
[-1.55194302, 0.07348327, 0.01911511]])
What am I doing incorrectly?
Two points:
1. * is element-wise multiplication. Use the dot() method for matrix multiplication.
2. eig_val is a 1D array. Convert it to a 2D square diagonal array with np.diag(eig_val).
Example:
In [70]: cov
Out[70]:
array([[ 0.95454545, 0.88156287, 0.8601369 ],
[ 0.88156287, 0.95454545, 0.87367031],
[ 0.8601369 , 0.87367031, 0.95454545]])
In [71]: eig_val, eig_vec = np.linalg.eig(cov)
In [72]: cov.dot(eig_vec)
Out[72]:
array([[-1.55668595, -0.05891401, 0.03746463],
[-1.56468659, -0.01427202, -0.05623249],
[-1.55194302, 0.07348327, 0.01911511]])
In [73]: eig_vec.dot(np.diag(eig_val))
Out[73]:
array([[-1.55668595, -0.05891401, 0.03746463],
[-1.56468659, -0.01427202, -0.05623249],
[-1.55194302, 0.07348327, 0.01911511]])
In the last line, np.diag(eig_val) is on the right in order to multiply each column of eig_vec by the corresponding eigenvalue.
If you take advantage of NumPy's broadcasting, you don't have to use np.diag(eig_val), and you can use element-wise multiplication (in either order, since element-wise multiplication is commutative):
In [75]: eig_vec * eig_val # element-wise multiplication with broadcasting
Out[75]:
array([[-1.55668595, -0.05891401, 0.03746463],
[-1.56468659, -0.01427202, -0.05623249],
[-1.55194302, 0.07348327, 0.01911511]])
In [76]: eig_val * eig_vec
Out[76]:
array([[-1.55668595, -0.05891401, 0.03746463],
[-1.56468659, -0.01427202, -0.05623249],
[-1.55194302, 0.07348327, 0.01911511]])
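Putting the sanity check together, a minimal sketch:
import numpy as np

cov = np.array([[0.95454545, 0.88156287, 0.8601369 ],
                [0.88156287, 0.95454545, 0.87367031],
                [0.8601369 , 0.87367031, 0.95454545]])
eig_val, eig_vec = np.linalg.eig(cov)
# A v = lambda v, checked column by column via broadcasting:
assert np.allclose(cov.dot(eig_vec), eig_vec * eig_val)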