NumPy Convert Elements Across Dimensions

I have 3d numpy array of the following shape:
(3600L, 7200L, 3L)
If any element in any dimension is 0, how can I convert the elements in the same position in other two dimensions into 0?

An element that is 0 has a coordinate along each of the dimensions. I'll illustrate with a small 2d array:
In [1240]: M=np.arange(9).reshape(3,3)
In [1241]: M
Out[1241]:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
In [1242]: M[0,0]
Out[1242]: 0
One element is 0, at row 0 and column 0. I can set the rest of that row and that column to 0 with:
In [1243]: M[0,:]=0
In [1244]: M[:,0]=0
In [1245]: M
Out[1245]:
array([[0, 0, 0],
       [0, 4, 5],
       [0, 7, 8]])
You can generalize this to 3d and larger arrays, as long as you know the coordinates of the element in all dimensions. With a 3d array,
M[i,:,:]=0
actually sets all the values in a plane (2d) to 0. Similarly for M[:,j,:] and M[:,:,k].
np.where gives the coordinates that match some condition:
In [1248]: I=np.where(M==0)
In [1249]: M[I[0],:]=0
In [1250]: M[:,I[1]]=0
In [1251]: M
Out[1251]:
array([[0, 0, 0],
       [0, 4, 5],
       [0, 7, 8]])
In [1252]: I
Out[1252]: (array([0], dtype=int32), array([0], dtype=int32))
This works whether the match is one element, none, or several. Here it's just one.
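The same recipe carries over to the 3d case in the question. Here is a minimal sketch on a small made-up array; the shape and the planted zero are assumptions for illustration (the original array would be (3600, 7200, 3)):
import numpy as np

A = np.arange(1, 61).reshape(3, 4, 5)   # no zeros to start with
A[1, 2, 3] = 0                          # plant a single zero

I = np.where(A == 0)                    # tuple of three index arrays, one per axis
A[I[0], :, :] = 0                       # zero the matching plane along axis 0
A[:, I[1], :] = 0                       # ... along axis 1
A[:, :, I[2]] = 0                       # ... along axis 2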

Related

Indices in Numpy and MATLAB

I have a piece of code in Matlab that I want to convert into Python/numpy.
I have a matrix ind which has the dimensions (32768, 24). I have another matrix X which has the dimensions (98304, 6). When I perform the operation
result = X(ind)
the shape of the matrix is (32768, 24).
But in numpy, when I perform the same operation
result = X[ind]
I get the shape of the result matrix as (32768, 24, 6).
I would greatly appreciate it if someone can help me understand why I get these two different results and how I can fix it. I want to get the shape (32768, 24) for the result matrix in numpy as well.
In Octave, if I define:
>> X=diag([1,2,3,4])
X =
Diagonal Matrix
   1   0   0   0
   0   2   0   0
   0   0   3   0
   0   0   0   4
>> idx = [6 7;10 11]
idx =
    6    7
   10   11
then the indexing selects a block:
>> X(idx)
ans =
   2   0
   0   3
The numpy equivalent is
In [312]: X=np.diag([1,2,3,4])
In [313]: X
Out[313]:
array([[1, 0, 0, 0],
       [0, 2, 0, 0],
       [0, 0, 3, 0],
       [0, 0, 0, 4]])
In [314]: idx = np.array([[5,6],[9,10]]) # shifted for 0 base indexing
In [315]: np.unravel_index(idx,(4,4)) # raveled to unraveled conversion
Out[315]:
(array([[1, 1],
        [2, 2]]),
 array([[1, 2],
        [1, 2]]))
In [316]: X[_] # this indexes with a tuple of arrays
Out[316]:
array([[2, 0],
       [0, 3]])
another way:
In [318]: X.flat[idx]
Out[318]:
array([[2, 0],
       [0, 3]])
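One caveat worth flagging: MATLAB/Octave linear indices run down the columns (column-major), while numpy's flat order is row-major. The diagonal example happens to be symmetric, so both orders give the same block; in general, passing order='F' to np.unravel_index reproduces MATLAB exactly. A sketch, assuming ind holds MATLAB-style 1-based linear indices:
import numpy as np

X = np.diag([1, 2, 3, 4])
ind = np.array([[6, 7], [10, 11]]) - 1                 # shift to 0-based
result = X[np.unravel_index(ind, X.shape, order='F')]  # column-major, like X(ind)
The result has the same shape as ind, so the question's (32768, 24) ind would give a (32768, 24) result.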

how to get row indices where row slice contains a single value (0)

With the numpy array
arr = np.array([[1, 1, 0, 0, 0, 1], [1, 1, 0, 0, 1, 1], [1, 1, 0, 0, 0, 1]])
I would like to get the indices of all rows where the row slice 2:5 contains all zeros.
In the above example, it should return rows 0 and 2.
I tried:
zero_indices = np.where(not np.any(arr[:,2:5]))
but it doesn't seem to work.
I'm trying to do this over a large array with several million rows.
Try this:
np.nonzero((~arr[:,2:5].astype(bool)).all(1))[0]
Out[133]: array([0, 2], dtype=int32)
Or
np.nonzero((arr[:,2:5] == 0).all(1))[0]
Out[139]: array([0, 2], dtype=int32)
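For what it's worth, the original attempt fails because Python's not collapses np.any(arr[:, 2:5]) into a single scalar for the whole slice, so the per-row information is gone before np.where ever sees it. Reducing along axis 1 keeps one boolean per row; a minimal check on the sample array:
import numpy as np

arr = np.array([[1, 1, 0, 0, 0, 1],
                [1, 1, 0, 0, 1, 1],
                [1, 1, 0, 0, 0, 1]])

zero_rows = np.nonzero(~arr[:, 2:5].any(axis=1))[0]   # one bool per row
print(zero_rows)                                      # [0 2]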

recarray with lists: how to reference first element in list

I want to copy contents of a few fields in a record array into a ndarray (both type float64).
I know how to do this when the recarray data has a single value in each field:
my_ndarray[:,0]=my_recarray['X'] #(for field 'X')
Now I have a recarray with a list of 5 floats in each field, and I only want to
copy the first element of each list.
When I use the above with the new recarray (and list), I get this error:
ValueError: could not broadcast input array from shape (92,5) into shape (92)
That makes total sense (in hindsight).
I thought I could get just the first element of each with this:
my_ndarray[:,0]=my_recarray['X'][0] #(for field 'X')
I get this error:
ValueError: could not broadcast input array from shape (5) into shape (92)
I sort of understand: numpy is taking only the first row (5 elements) and trying to broadcast it into a 92-element column.
So now I'm wondering how to get the first element of each list down the 92-element column. Scratching my head....
Thanks in advance for advice.
My guess is that the recarray has a dtype where one of the fields has shape 5:
In [48]: dt = np.dtype([('X',int,5),('Y',float)])
In [49]: arr = np.zeros(3, dtype=dt)
In [50]: arr
Out[50]:
array([([0, 0, 0, 0, 0], 0.), ([0, 0, 0, 0, 0], 0.),
       ([0, 0, 0, 0, 0], 0.)], dtype=[('X', '<i8', (5,)), ('Y', '<f8')])
Accessing this field by name produces an array that is (3,5) shape (analogous to your (92,5)):
In [51]: arr['X']
Out[51]:
array([[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]])
This could be described as a list of 5 items for each record, but indexing with the field name produces a 2d array, which can be indexed like any 2d numpy array.
Let's set those values to something interesting:
In [52]: arr['X'] = np.arange(15).reshape(3,5)
In [53]: arr
Out[53]:
array([([ 0,  1,  2,  3,  4], 0.), ([ 5,  6,  7,  8,  9], 0.),
       ([10, 11, 12, 13, 14], 0.)],
      dtype=[('X', '<i8', (5,)), ('Y', '<f8')])
We can fetch the first column of this field with:
In [54]: arr['X'][:,0]
Out[54]: array([ 0, 5, 10])
If you have several fields with a structure like this, you'll probably have to access each one by name. There's a limit to what you can do with multi-field indexing.
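A minimal sketch of that per-field loop, with hypothetical field names 'X', 'Y', 'Z' each holding 5 floats (the names and sizes are made up to mirror the (92, 5) case):
import numpy as np

dt = np.dtype([('X', 'f8', 5), ('Y', 'f8', 5), ('Z', 'f8', 5)])
rec = np.zeros(92, dtype=dt)                # stand-in for the recarray

out = np.empty((92, 3))
for col, name in enumerate(['X', 'Y', 'Z']):
    out[:, col] = rec[name][:, 0]           # first element of each 5-item field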

scipy: Adding a sparse vector to a specific row of a sparse matrix

In python, what is the best way to add a CSR vector to a specific row of a CSR matrix? I found one workaround here, but wondering if there is a better/more efficient way to do this. Would appreciate any help.
Given an NxM CSR matrix A and a 1xM CSR matrix B, and a row index i, the goal is to add B to the i-th row of A efficiently.
The obvious indexed addition does work. It gives an efficiency warning, but that doesn't mean it is the slowest way, just that you shouldn't count on doing this repeatedly. It suggests working with the lil format, but conversion to that and back probably takes more time than performing the addition on the csr matrix.
In [1049]: B.A
Out[1049]:
array([[0, 9, 0, 0, 1, 0],
       [2, 0, 5, 0, 0, 9],
       [0, 2, 0, 0, 0, 0],
       [2, 0, 0, 0, 0, 0],
       [0, 9, 5, 3, 0, 7],
       [1, 0, 0, 8, 9, 0]], dtype=int32)
In [1051]: B[1,:] += np.array([1,0,1,0,0,0])
/usr/local/lib/python3.5/dist-packages/scipy/sparse/compressed.py:730: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [1052]: B
Out[1052]:
<6x6 sparse matrix of type '<class 'numpy.int32'>'
with 17 stored elements in Compressed Sparse Row format>
In [1053]: B.A
Out[1053]:
array([[0, 9, 0, 0, 1, 0],
       [3, 0, 6, 0, 0, 9],
       [0, 2, 0, 0, 0, 0],
       [2, 0, 0, 0, 0, 0],
       [0, 9, 5, 3, 0, 7],
       [1, 0, 0, 8, 9, 0]])
As your linked question shows, it is possible to act directly on the attributes of the sparse matrix. The code there also shows why there's an efficiency warning: in the general case it has to rebuild the matrix attributes.
lil is more efficient for row replacement because it just has to change a sublist in the matrix .data and .rows attributes. A change in one row doesn't change the attributes of any of the others.
That said, if your addition has the same sparsity as the original row, it is possible to change specific elements of the data attribute without reworking .indices or .indptr. Drawing on the linked code,
A.data[idx_start_row : idx_end_row]
is the slice of A.data that will be changed. You need, of course, the corresponding slice from the 'vector'.
Starting again from the original B of In [1049]:
In [1085]: B.indptr
Out[1085]: array([ 0, 2, 5, 6, 7, 11, 14], dtype=int32)
In [1086]: B.data
Out[1086]: array([9, 1, 2, 5, 9, 2, 2, 9, 5, 3, 7, 1, 8, 9], dtype=int32)
In [1087]: B.indptr[[1,2]] # row 1
Out[1087]: array([2, 5], dtype=int32)
In [1088]: B.data[2:5]
Out[1088]: array([2, 5, 9], dtype=int32)
In [1089]: B.indices[2:5] # row 1 column indices
Out[1089]: array([0, 2, 5], dtype=int32)
In [1090]: B.data[2:5] += np.array([1,2,3])
In [1091]: B.A
Out[1091]:
array([[ 0,  9,  0,  0,  1,  0],
       [ 3,  0,  7,  0,  0, 12],
       [ 0,  2,  0,  0,  0,  0],
       [ 2,  0,  0,  0,  0,  0],
       [ 0,  9,  5,  3,  0,  7],
       [ 1,  0,  0,  8,  9,  0]], dtype=int32)
Notice where the changed values, [3,7,12], are in the lil format:
In [1092]: B.tolil().data
Out[1092]: array([[9, 1], [3, 7, 12], [2], [2], [9, 5, 3, 7], [1, 8, 9]], dtype=object)
csr / csc matrices are efficient for most operations, including addition (O(nnz)). However, small changes that affect the sparsity structure, such as your example, or even switching a single position from 0 to 1, are not, because they require an O(nnz) reorganisation of the representation. Values and indices are packed; inserting one means everything above it has to move.
If you do just a single such operation, my guess is that you can't easily beat scipy's implementation. However, if you are adding multiple rows, for example, it may be worthwhile to first make a sparse matrix of them and then add that in one go (see the sketch after the snippets below).
Creating a csr matrix by hand from rows, say, is not that difficult. For example, if your rows are dense and in order:
row_numbers, indices = np.where(rows)   # rows: dense 2d block of the new rows
data = rows[row_numbers, indices]
# true_row_numbers: their target rows in the big matrix (increasing); N: its row count
indptr = np.searchsorted(np.r_[true_row_numbers[row_numbers], N], np.arange(N+1))
If you have a collection of sparse rows and their row numbers:
data = np.r_[tuple(r.data for r in rows)]
indices = np.r_[tuple(r.indices for r in rows)]
jumps = np.add.accumulate([0] + [r.nnz for r in rows])
indptr = np.repeat(jumps, np.diff(np.r_[-1, true_row_numbers, N]))
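And here is a hedged sketch of the "add several rows in one go" idea using stock scipy pieces instead of hand-built attributes; the matrix A, the update rows, and their target positions are all made-up assumptions:
import numpy as np
from scipy import sparse

A = sparse.random(6, 6, density=0.3, format='csr')       # assumed target matrix
rows = [sparse.random(1, 6, density=0.5, format='csr') for _ in range(2)]
true_row_numbers = [1, 4]                                # assumed target rows

B = sparse.vstack(rows).tocoo()                          # stack the update rows
B = sparse.coo_matrix((B.data, (np.asarray(true_row_numbers)[B.row], B.col)),
                      shape=A.shape)                     # move them into place
A = A + B.tocsr()                                        # one addition, no warning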

Default value when indexing outside of a numpy array, even with non-trivial indexing

Is it possible to look up entries from an nd array without throwing an IndexError?
I'm hoping for something like:
>>> a = np.arange(10) * 2
>>> a[[-4, 2, 8, 12]]
IndexError
>>> wrap(a, default=-1)[[-4, 2, 8, 12]]
[-1, 4, 16, -1]
>>> wrap(a, default=-1)[200]
-1
Or possibly more like get_with_default(a, [-4, 2, 8, 12], default=-1)
Is there some builtin way to do this? Can I ask numpy not to throw the exception and return garbage, which I can then replace with my default value?
np.take with clip mode sort of does this:
In [155]: a
Out[155]: array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
In [156]: a.take([-4,2,8,12],mode='raise')
...
IndexError: index 12 is out of bounds for size 10
In [157]: a.take([-4,2,8,12],mode='wrap')
Out[157]: array([12, 4, 16, 4])
In [158]: a.take([-4,2,8,12],mode='clip')
Out[158]: array([ 0, 4, 16, 18])
Except you don't have much control over the return value: here indexing with 12 returned 18, the last value, and -4 was treated as out of bounds in the other direction, returning 0, the first value.
One way of adding the defaults is to pad a with them first:
In [174]: a = np.arange(10) * 2
In [175]: ind=np.array([-4,2,8,12])
In [176]: np.pad(a, [1,1], 'constant', constant_values=-1).take(ind+1, mode='clip')
Out[176]: array([-1, 4, 16, -1])
Not exactly pretty, but a start.
This is my first post on any stack exchange site so forgive me for any stylistic errors (hopefully there are only stylistic errors). I am interested in the same feature but could not find anything from numpy better than np.take mentioned by hpaulj. Still, np.take doesn't do exactly what's needed. Alfe's answer works but would need some elaboration in order to handle n-dimensional inputs. The following is another workaround that generalizes to the n-dimensional case. The basic idea is similar to the one used by Alfe: create a new index with the out-of-bounds indices masked out (in my case) or disguised (in Alfe's case) and use it to index the input array without raising an error.
def take(a, indices, default=0):
    # initialize mask; will broadcast to the length of indices[0] in the first iteration
    mask = True
    for i, ind in enumerate(indices):
        # each element of the mask is only True if all indices at that position are in bounds
        mask = mask & (0 <= ind) & (ind < a.shape[i])
    # create in_bound indices
    in_bound = [ind[mask] for ind in indices]
    # initialize result with the default value
    result = default * np.ones(len(mask), dtype=a.dtype)
    # set elements indexed by in_bound to their appropriate values in a
    result[mask] = a[tuple(in_bound)]
    return result
And here is the output from Eric's sample problem:
>>> a = np.arange(10)*2
>>> indices = (np.array([-4,2,8,12]),)
>>> take(a,indices,default=-1)
array([-1, 4, 16, -1])
You can restrict the indexes to the valid range of the array you want to index by using np.maximum() and np.minimum().
Example:
I have a heatmap like
h = np.array([[ 2,  3,  1],
              [ 3, -1,  5]])
and I have a palette of RGB values I want to use to color the heatmap. The palette only names colors for the values 0..4:
p = np.array([[0, 0, 0],  # black
              [0, 0, 1],  # blue
              [1, 0, 1],  # purple
              [1, 1, 0],  # yellow
              [1, 1, 1]]) # white
Now I want to color my heatmap using the palette:
p[h]
Currently this leads to an error because of the values -1 and 5 in the heatmap:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: index 5 is out of bounds for axis 0 with size 5
But I can limit the range of the heatmap:
p[np.maximum(np.minimum(h, 4), 0)]
This works and gives me the result:
array([[[1, 0, 1],
        [1, 1, 0],
        [0, 0, 1]],

       [[1, 1, 0],
        [0, 0, 0],
        [1, 1, 1]]])
If you really need to have a special value for the indexes which are out of bounds, you could implement your proposed get_with_default() like this:
def get_with_default(values, indexes, default=-1):
    return np.concatenate([[default], values, [default]])[
        np.maximum(np.minimum(indexes, len(values)), -1) + 1]
a = np.arange(10) * 2
get_with_default(a, [-4, 2, 8, 12], default=-1)
Will return:
array([-1, 4, 16, -1])
as wanted.