Numpy element-wise addition with multiple arrays - numpy

I'd like to know if there is a more efficient/Pythonic way to add multiple numpy arrays (2D) than the following:
def sum_multiple_arrays(list_of_arrays):
    a = np.zeros(shape=list_of_arrays[0].shape)  # initialize array of 0s
    for array in list_of_arrays:
        a += array
    return a
P.S.: I am aware of np.add(), but it works with only two arrays.

np.sum(list_of_arrays, axis=0)
should work. Or
np.add.reduce(list_of_arrays).
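For example, with a couple of made-up 2D arrays (just to illustrate; any list of equal-shaped arrays works the same way):
import numpy as np

list_of_arrays = [np.ones((2, 3)), np.full((2, 3), 2.0), np.full((2, 3), 3.0)]
print(np.sum(list_of_arrays, axis=0))  # elementwise sum over the whole list
print(np.add.reduce(list_of_arrays))   # same result via the ufunc's reduce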

Related

How to vectorize this operation in numpy?

I have a 2D array s and I want to calculate differences elementwise between every pair of rows, i.e. d[i, j] = s[i] - s[j].
Since this cannot be written as a single matrix multiplication, I was wondering what the proper way to vectorize it is.
You can use broadcasting for that: d = s[:, None, :] - s[None, :, :]. Note that None (equivalent to np.newaxis) lets you create a new dimension; NumPy then implicitly performs the broadcasting between the two arrays.
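A quick sketch of the shapes involved, with made-up data:
import numpy as np

s = np.arange(12.0).reshape(4, 3)         # 4 rows of length 3
d = s[:, None, :] - s[None, :, :]         # shape (4, 4, 3): d[i, j] == s[i] - s[j]
print(d.shape)                            # (4, 4, 3)
print(np.allclose(d[1, 2], s[1] - s[2]))  # True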

How to check the presence of a given numpy array in a larger-shape numpy array?

I guess the title of my question might not be very clear..
I have a small array, say a = ([[0,0,0],[0,0,1],[0,1,1]]). Then I have a bigger array of a higher dimension, say b = ([[[2,2,2],[2,0,1],[2,1,1]],[[0,0,0],[3,3,1],[3,1,1]],[...]]).
I'd like to check if one of the elements of a can be found in b. In this case, I'd find that the first element of a [0,0,0] is indeed in b, and then I'd like to retrieve the corresponding index in b.
I'd like to do that without looping, since from the little I understand about numpy arrays, they are not meant to be iterated over in the classic way. In other words, I need it to be very fast, because my actual arrays are quite big.
Any idea?
Thanks a lot!
Arnaud.
I don't know of a direct way, but here's a function that works around the problem:
import numpy as np

def find_indices(val, arr):
    # First take a mean over the lowest level of each sub-array,
    # then compare these to eliminate the majority of entries.
    mb = np.mean(arr, axis=2)
    ma = np.mean(val)
    Y = np.argwhere(mb == ma)
    indices = []
    # Then run a quick loop on the remaining candidates to
    # eliminate sub-arrays whose elements don't match in order.
    for i in range(len(Y)):
        idx = (Y[i, 0], Y[i, 1])
        if np.array_equal(val, arr[idx]):
            indices.append(idx)
    return indices

# Sample arrays
a = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 1]])
b = np.array([[[6, 5, 4], [0, 0, 1], [2, 3, 3]],
              [[2, 5, 4], [6, 5, 4], [0, 0, 0]],
              [[2, 0, 2], [3, 5, 4], [5, 4, 6]],
              [[6, 5, 4], [0, 0, 0], [2, 5, 3]]])

print(find_indices(a[0], b))
# [(1, 2), (3, 1)]
print(find_indices(a[1], b))
# [(0, 1)]
The idea is to use the mean of each sub-array and compare it with the mean of the input; np.argwhere() is the key here. That removes most of the unwanted candidates, but I did need a loop on the remainder to weed out sub-arrays whose mean matches but whose element order doesn't (this shouldn't be too memory-consuming). You'll probably want to customise it further, but I hope this helps.
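As a side note, if the rows you are matching always lie along the last axis, a fully vectorized sketch (not part of the answer above) is to broadcast the comparison and reduce over that axis:
import numpy as np

a = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 1]])
b = np.array([[[6, 5, 4], [0, 0, 1], [2, 3, 3]],
              [[2, 5, 4], [6, 5, 4], [0, 0, 0]],
              [[2, 0, 2], [3, 5, 4], [5, 4, 6]],
              [[6, 5, 4], [0, 0, 0], [2, 5, 3]]])

# Keep the (i, j) positions where all three values match the row we are looking for
print(np.argwhere((b == a[0]).all(axis=-1)))  # [[1 2] [3 1]]
print(np.argwhere((b == a[1]).all(axis=-1)))  # [[0 1]]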

Construct NumPy matrix row by row

I'm trying to construct a 2D NumPy array from values in an extant 2D NumPy array using an iterative process. Using ordinary python lists the process I'm describing would look like so:
coords = ...  # data from file contained in a 2D list
d = ...       # integer
edges = []
for i in range(d+1):
    for j in range(i+1, d+1):
        edge = coords[j] - coords[i]
        edges.append(edge)
However, the NumPy array imposes restrictions that do not permit the process shown above. Below I try to do the same thing using NumPy arrays, and it should immediately be clear where the problems are:
coords = np.genfromtxt('Energies.txt', dtype=float, skip_header=1)
d = ...  # integer
# how to initialize?
for i in range(d+1):
    for j in range(i+1, d+1):
        edge = coords[j] - coords[i]
        # how to append?
Because .append does not exist for NumPy arrays I need to rely on concatenate or stack instead. But these functions are designed to join existing arrays, and I don't have anything to concatenate or stack until after the first iteration of my loop. So I suppose I need to change my data flow, but I'm unsure how to go about this.
Any help would be greatly appreciated. Thanks in advance.
That function is numpy.meshgrid [1]; it does this by default.
[1] https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.meshgrid.html
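For completeness, a minimal sketch of two other common patterns (assuming coords is an (n, m) float array with d < n; this is not taken from the answer above): collect the rows in a plain Python list and convert once at the end, or vectorize the pairwise differences with np.triu_indices:
import numpy as np

coords = np.arange(12, dtype=float).reshape(4, 3)  # made-up stand-in for the file data
d = 3

# Pattern 1: append to a plain list, convert to an array once at the end
edges = []
for i in range(d + 1):
    for j in range(i + 1, d + 1):
        edges.append(coords[j] - coords[i])
edges = np.array(edges)

# Pattern 2: fully vectorized using the upper-triangle index pairs
i, j = np.triu_indices(d + 1, k=1)
edges_vec = coords[j] - coords[i]

print(np.allclose(edges, edges_vec))  # True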

Numpy: Search for an array A with same pattern in a larger array B

I have two 1D numpy arrays, A (small) and B (large):
A=np.array([6,7,8,9,10])
B=np.array([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,10])
I want to check whether the elements of array A appear, in the same order, within array B, and get the index in B at which the match of A starts.
Index value returned = 6
Do we have any inbuilt numpy function to perform such an operation?
I have also encountered this problem sometimes. I think the fastest way, especially for big numpy arrays, would be to convert them to strings and search there.
Here is the code I use:
b = np.array([6, 7, 8, 9, 10])
a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 10])
a.tobytes().index(b.tobytes()) // a.itemsize  # tostring() is deprecated; tobytes() is the equivalent
I found a nice solution, given by @EdSmith in Finding Patterns in a Numpy Array.
In short, the process is:
Take the length of the array being searched for (in my example, A).
Go through the entire length of the array being searched in (in my example, B), using np.where and np.all.
This is not my code but the code found at the above link; simple and easy. I'll just alter it a bit to fit my example above. Hope it helps someone :)
Thanks to @EdSmith.
import numpy as np

A = np.array([6,7,8,9,10])
B = np.array([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,10])
N = len(A)
possibles = np.where(B == A[0])[0]
solns = []
for p in possibles:
    check = B[p:p+N]
    if np.all(check == A):
        solns.append(p)
print(solns)
Output
[6]
Try this:
import numpy as np
A=np.array([6,7,8,9,10])
B=np.array([0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,10])
r = np.ones_like(B)
for x in range(len(A)):
    r *= np.roll(B == A[x], -x)
# first matching index, answer: 6
print(np.where(r)[0][0])
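On NumPy 1.20 or newer there is also sliding_window_view, which turns the search into a single comparison (a sketch, not taken from the answers above):
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

A = np.array([6, 7, 8, 9, 10])
B = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 10])

# Every length-len(A) window of B, compared against A in one shot
windows = sliding_window_view(B, len(A))          # shape (len(B) - len(A) + 1, len(A))
print(np.nonzero((windows == A).all(axis=1))[0])  # [6]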

Iterating over multidimensional arrays (images) with numpy - python

Hi!
I have two images (same dimensions) as numpy arrays, imgA and imgB.
I would like to iterate over each row and column and get something like this:
for i in range(0, h-1):
    for j in range(0, w-1):
        final[i][j] = imgA[i, j] - imgB[i - k[i], j]
where h and w are the height and the width of the image, and k is an array of dimension [h*w].
I have seen this topic:
Iterating over a numpy array
but it doesn't work with images; I get the error: too many values to unpack.
Is there any way to do that with numpy and Python 2.7?
Thanks
Edit
Let me try to explain myself better.
I have two images in LAB color space.
These images are (288, 384, 3).
Now I would like to compute deltaE, so I could do it like this (splitting the two arrays):
imgLabL=np.dsplit(imgL,3)
imgLabR=np.dsplit(imgR,3)
imgLl=imgLabL[0]
imgLa=imgLabL[1]
imgLb=imgLabL[2]
imgRl=imgLabR[0]
imgRa=imgLabR[1]
imgRb=imgLabR[2]
delta=np.sqrt(((imgLl-imgRl)**2) + ((imgLa - imgRa)**2) + ((imgLb - imgRb)**2) )
Up to now everything is fine.
But now I have this array k of size (288, 384).
So now I need a new delta but with a shifted x axis: for the pixel at imgRl(0, 0) I want to use the pixel at imgLl(0 + k, 0).
Do you see my problem better now?
I'm pretty sure that whatever it is you are trying to do can be vectorized and run without any loops in it. But the way your code is written, it is no surprise that it doesn't work...
If k is an array of shape (h, w), then k[i] is an array of shape (w,). When you do i - k[i], numpy does its broadcasting magic and you get an array of shape (w,). So you are indexing imgB with an array of shape (w,) and a single integer. Because one of the items in the indexing is an array, fancy indexing kicks in. So, assuming imgB also has shape (h, w, 1), the return value of imgB[i - k[i], j] will not be an array of shape (1,) but an array of shape (w, 1). When you then try to subtract that from imgA[i, j], which is an array of shape (1,), broadcasting magic works again, and you get an array of shape (w, 1).
We do not know what final is. But if it is an array of shape (h, w, 1), like imgA and imgB, then final[i][j] is an array of shape (1,), and you are trying to assign to it an array of shape (w, 1), which does not fit. Hence the "operand requires a reduction, but reduction is not enabled" error message.
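A tiny sketch of the shapes described above (made-up sizes; it assumes imgA and imgB have shape (h, w, 1) and k has shape (h, w)):
import numpy as np

h, w = 4, 5
imgA = np.zeros((h, w, 1))
imgB = np.zeros((h, w, 1))
k = np.zeros((h, w), dtype=int)

i, j = 1, 2
print((i - k[i]).shape)                        # (5,) -- an array, so fancy indexing kicks in
print(imgB[i - k[i], j].shape)                 # (5, 1)
print(imgA[i, j].shape)                        # (1,)
print((imgA[i, j] - imgB[i - k[i], j]).shape)  # (5, 1): cannot be assigned to final[i][j]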
EDIT
You don't really need to split your arrays to compute DeltaE...
def deltaE(a, b):
    return np.sqrt(((a - b)**2).sum(axis=-1))

delta = deltaE(imgLabL, imgLabR)
I still don't understand what you want to do in the second case... If you want to compare the two images displaced along the x-axis, I would suggest using np.roll:
deltaE(imgLabL, np.roll(imgLabR, k, axis=0))
will have at position (r, c) the deltaE between the pixel (r, c) of imgLabL and the pixel (r - k, c) of imgLabR. Is that what you want?
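If the shift really is a different k per pixel rather than a single offset, one fully vectorized sketch uses fancy indexing (my reading of what is wanted; it wraps at the border with a modulo, like np.roll does):
import numpy as np

h, w = 288, 384
imgLl = np.random.rand(h, w)           # stand-ins for the two L channels
imgRl = np.random.rand(h, w)
k = np.random.randint(0, 5, (h, w))    # per-pixel shift along the first axis

rows = np.arange(h)[:, None]           # shape (h, 1)
cols = np.arange(w)[None, :]           # shape (1, w)
shifted = imgLl[(rows + k) % h, cols]  # pixel (i, j) taken from imgLl[(i + k[i, j]) % h, j]
delta = imgRl - shifted
print(delta.shape)                     # (288, 384)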
I usually use numpy.nditer, the docs for which are here and have many examples. Briefly:
import numpy as np

a = np.ones([4, 4])
it = np.nditer(a)
for elem in it:
    pass  # do stuff with elem
You can also use C-style iteration, i.e.
while not it.finished:
    # do stuff
    it.iternext()
The latter form is useful if you need to access the indices of your arrays (add flags=['multi_index'] and read it.multi_index). In your situation, I would zip your two images together to create an array of shape [2, h, w] and then iterate over this, filling an empty array with the results of the computation.
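A small sketch of that idea (assumed shapes; this uses nditer's multi-operand form with multi_index rather than a literal [2, h, w] stack):
import numpy as np

imgA = np.arange(6.0).reshape(2, 3)
imgB = np.ones((2, 3))
final = np.empty_like(imgA)

# Iterate over both images in lockstep, keeping track of the (i, j) index
it = np.nditer([imgA, imgB], flags=['multi_index'])
for a_val, b_val in it:
    final[it.multi_index] = a_val - b_val

print(final)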