Counting zeros in a rolling window - numpy array (including NaNs) - pandas

I am trying to find a way of counting zeros in a rolling window over a numpy array.
Using pandas I can get it using:
df[['demand']].apply(lambda x: (x == 0).rolling(7).sum()).fillna(0)
or
df['demand'].transform(lambda x: x.rolling(7).apply(lambda w: 7 - np.count_nonzero(w))).fillna(0)
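For concreteness, a slightly simpler equivalent on toy data (my example, not from the question):

import pandas as pd

df = pd.DataFrame({'demand': [10, 20, 30, 5, 6, 0, 0, 0]})
# incomplete leading windows come out as NaN, which fillna(0) then zeroes
counts = (df['demand'] == 0).rolling(7).sum().fillna(0)
print(counts.tolist())   # [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 3.0]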
In numpy, using the rolling-window code from here:
def rolling_window(a, window_size):
    shape = (a.shape[0] - window_size + 1, window_size) + a.shape[1:]
    strides = (a.strides[0],) + a.strides
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
arr = np.asarray([10, 20, 30, 5, 6, 0, 0, 0])
np.count_nonzero(rolling_window(arr==0, 7), axis=1)
Output:
array([2, 3])
However, I also need the first 6 entries (where the window is incomplete and pandas would give NaN), filled with zeros:
Expected output:
array([0, 0, 0, 0, 0, 0, 2, 3])

I think an efficient one would be with 1D convolution -
def sum_occurences_windowed(arr, W):
    K = np.ones(W, dtype=int)
    # 'full' convolution; the first len(arr) entries are the windows ending at each index
    out = np.convolve(arr == 0, K)[:len(arr)]
    out[:W-1] = 0  # zero out the incomplete leading windows
    return out
Sample run -
In [42]: arr
Out[42]: array([10, 20, 30, 5, 6, 0, 0, 0])
In [43]: sum_occurences_windowed(arr,W=7)
Out[43]: array([0, 0, 0, 0, 0, 0, 2, 3])
Timings on varying-length arrays with a window of 7, including count_rolling from @Quang Hoang's post.
Using the benchit package (a few benchmarking tools packaged together; disclaimer: I am its author) to benchmark the proposed solutions.
import benchit
funcs = [sum_occurences_windowed, count_rolling]
in_ = {n:(np.random.randint(0,5,(n)),7) for n in [10,20,50,100,200,500,1000,2000,5000]}
t = benchit.timings(funcs, in_, multivar=True, input_name='Length')
t.plot(logx=True, save='timings.png')
Extending to generic n-dim arrays
from scipy.ndimage import convolve1d
def sum_occurences_windowed_ndim(arr, W, axis=-1):
    K = np.ones(W, dtype=int)
    out = convolve1d((arr == 0).astype(int), K, axis=axis, origin=-(W//2))
    out.swapaxes(axis, 0)[:W-1] = 0  # zero out incomplete leading windows along `axis`
    return out
So, on a 2D array, for counting along each row, use axis=1 and for cols, axis=0 and so on.
Sample run -
In [155]: np.random.seed(0)
In [156]: a = np.random.randint(0,3,(3,10))
In [157]: a
Out[157]:
array([[0, 1, 0, 1, 1, 2, 0, 2, 0, 0],
       [0, 2, 1, 2, 2, 0, 1, 1, 1, 1],
       [0, 1, 0, 0, 1, 2, 0, 2, 0, 1]])
In [158]: sum_occurences_windowed_ndim(a, W=7)
Out[158]:
array([[0, 0, 0, 0, 0, 0, 3, 2, 3, 3],
       [0, 0, 0, 0, 0, 0, 2, 1, 1, 1],
       [0, 0, 0, 0, 0, 0, 4, 3, 4, 3]])
# Verify with earlier 1D solution
In [159]: np.vstack([sum_occurences_windowed(i,7) for i in a])
Out[159]:
array([[0, 0, 0, 0, 0, 0, 3, 2, 3, 3],
       [0, 0, 0, 0, 0, 0, 2, 1, 1, 1],
       [0, 0, 0, 0, 0, 0, 4, 3, 4, 3]])
Let's test out our original 1D input array -
In [187]: arr
Out[187]: array([10, 20, 30, 5, 6, 0, 0, 0])
In [188]: sum_occurences_windowed_ndim(arr, W=7)
Out[188]: array([0, 0, 0, 0, 0, 0, 2, 3])
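On NumPy 1.20+ the same windowing can be done without hand-computed strides via sliding_window_view; a minimal sketch of mine (not from either answer):

import numpy as np

def count_rolling_swv(arr, W):
    # read-only view of shape (len(arr) - W + 1, W); no data is copied
    windows = np.lib.stride_tricks.sliding_window_view(arr, W)
    out = np.zeros(arr.shape[0], dtype=int)
    out[W-1:] = (windows == 0).sum(axis=1)   # counts for complete windows only
    return out

arr = np.asarray([10, 20, 30, 5, 6, 0, 0, 0])
print(count_rolling_swv(arr, 7))             # [0 0 0 0 0 0 2 3]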

I would modify the function as follows:
def count_rolling(a, window_size):
    shape = (a.shape[0] - window_size + 1, window_size) + a.shape[1:]
    strides = (a.strides[0],) + a.strides
    rolling = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)
    out = np.zeros_like(a)
    out[window_size-1:] = (rolling == 0).sum(1)
    return out
arr = np.asarray([10, 20, 30, 5, 6, 0, 0, 0])
count_rolling(arr,7)
Output:
array([0, 0, 0, 0, 0, 0, 2, 3])

Related

Identify vectors being a multiple of another in rectangular matrix

Given an n×m matrix of integers (n > m), I'd like to identify rows that are a multiple of a single other row, so not a linear combination of multiple other rows.
I could scale all rows by their norm and find unique rows, but that is prone to floating-point issues and would also not detect vectors that are opposites of each other (pointing in the other direction).
Any ideas?
Example
A = array([[-1, -1,  0,  0],
           [-1, -1,  0,  1],
           [-1,  0, -1,  0],
           [-1,  0,  0,  0],
           [-1,  0,  0,  1],
           [-1,  0,  1,  1],
           [-1,  1, -1,  0],
           [-1,  1,  0,  0],
           [-1,  1,  1,  0],
           [ 0, -1,  0,  0],
           [ 0, -1,  0,  1],
           [ 0, -1,  1,  0],
           [ 0, -1,  1,  1],
           [ 0,  0, -1,  0],
           [ 0,  0,  0,  1],
           [ 0,  0,  1,  0],
           [ 0,  1, -1,  0],
           [ 0,  1,  0,  0],
           [ 0,  1,  0,  1],
           [ 0,  1,  1,  0],
           [ 0,  1,  1,  1],
           [ 1, -1,  0,  0],
           [ 1, -1,  1,  0],
           [ 1,  0,  0,  0],
           [ 1,  0,  0,  1],
           [ 1,  0,  1,  0],
           [ 1,  0,  1,  1],
           [ 1,  1,  0,  0],
           [ 1,  1,  0,  1],
           [ 1,  1,  1,  0]])
For example, rows 0 and -3 just point in opposite directions (multiply one by -1 to make them equal).
You can normalize each row by dividing it by its GCD:
import numpy as np
def normalize(a):
    return a // np.gcd.reduce(a, axis=1, keepdims=True)
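One caveat of my own, not raised in the answer: np.gcd.reduce returns 0 for an all-zero row, so the division would emit a runtime warning; a guarded variant:

def normalize_safe(a):
    g = np.gcd.reduce(a, axis=1, keepdims=True)
    g[g == 0] = 1      # an all-zero row has gcd 0; leave it unchanged
    return a // g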
And you can define a distance that considers opposite vectors as equal:
def distance(a, b):
    equal = np.all(a == b) or np.all(a == -b)
    return 0 if equal else 1
Then you can use standard clustering methods:
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
def cluster(a):
    norm_a = normalize(a)
    distances = pdist(norm_a, metric=distance)
    return fcluster(linkage(distances), t=0.5)
For example:
>>> A = np.array([( 1, 2, 3, 4),
... ( 0, 2, 4, 8),
... (-1, -2, -3, -4),
... ( 0, 1, 2, 4),
... (-1, 2, -3, 4),
... ( 2, -4, 6, -8)])
>>> cluster(A)
array([2, 3, 2, 3, 1, 1], dtype=int32)
Interpretation: cluster 1 is formed by rows 4 and 5, cluster 2 by rows 0 and 2, and cluster 3 by rows 1 and 3.
You can take advantage of the fact that the inner product of two normalized, linearly dependent vectors is 1 or -1, so the code could look like this:
>>> A_normalized = (A.T/np.linalg.norm(A, axis=-1)).T
>>> M = np.absolute(np.einsum('ix,jx->ij', A_normalized, A_normalized))
>>> i, j = np.where(np.isclose(M, 1))
>>> i, j = i[i < j], j[i < j] # Remove repetitions
>>> print(i, j)
output: [ 0 2 3 6 7 9 11 13] [27 25 23 22 21 17 16 15]
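A pure-numpy alternative (a sketch of mine, not from either answer above): bring every row to a canonical form by dividing out the gcd and flipping the sign so the first non-zero entry is positive, then group identical canonical rows with np.unique:

import numpy as np

def canonical(a):
    g = np.gcd.reduce(a, axis=1, keepdims=True)
    g[g == 0] = 1                                  # keep all-zero rows intact
    norm = a // g
    # sign of the first non-zero entry of each row (treated as +1 for all-zero rows)
    first_nz = norm[np.arange(len(norm)), np.argmax(norm != 0, axis=1)]
    return norm * np.where(first_nz < 0, -1, 1)[:, None]

A = np.array([( 1,  2,  3,  4),
              ( 0,  2,  4,  8),
              (-1, -2, -3, -4),
              ( 0,  1,  2,  4),
              (-1,  2, -3,  4),
              ( 2, -4,  6, -8)])
_, labels = np.unique(canonical(A), axis=0, return_inverse=True)
print(labels)   # [2 0 2 0 1 1]; rows sharing a label are multiples of each other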

Set all elements left to index to one, right of index to zero for list of indices

Say I have an array of indices:
np.array([1, 3, 2, 4])
How do I create the following matrix, where all elements up to and including the index are ones and everything to the right of it is zeros?
[[1, 1, 0, 0, 0, 0],
 [1, 1, 1, 1, 0, 0],
 [1, 1, 1, 0, 0, 0],
 [1, 1, 1, 1, 1, 0]]
1*(np.arange(6) <= arr[:,None])
# array([[1, 1, 0, 0, 0, 0],
#        [1, 1, 1, 1, 0, 0],
#        [1, 1, 1, 0, 0, 0],
#        [1, 1, 1, 1, 1, 0]])
This broadcasts the array of 6 elements across the rows and the array of indices across the columns. The 1* converts boolean to int.
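Equivalently, with an explicit cast instead of the 1* trick (a stylistic variant, not from the answer):

import numpy as np

arr = np.array([1, 3, 2, 4])
# (6,) broadcast against (4, 1) gives a (4, 6) boolean mask
out = (np.arange(6) <= arr[:, None]).astype(int)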

Is there a way to slice out multiple 2D numpy arrays from one 2D numpy array in one batch operation?

I have a numpy array heatmap of shape (img_height, img_width) and another array bboxes of shape (K, 4), where K is a number of bounding boxes.
Each bounding box is defined like so: [x_top_left, y_top_left, width, height].
Here's an example of such array:
bboxes = np.array([
    [0, 0, 4, 7],
    [3, 4, 3, 4],
    [7, 2, 3, 7]
])
heatmap is initially filled with zeros.
What I need to do is put the value 1 in each bounding box's corresponding region.
The resulting heatmap should be:
heatmap = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
])
Important things to note:
axis 0 corresponds to image height
axis 1 corresponds to image width
I've already solved this using python for loop, like so:
for bbox in bboxes:
    # rows y_top_left : y_top_left + height, cols x_top_left : x_top_left + width
    heatmap[bbox[1] : bbox[1] + bbox[3], bbox[0] : bbox[0] + bbox[2]] = 1
I would like to avoid using python for loops (if it's possible) and be able to do something like this:
heatmap[bboxes[:,1] : bboxes[:,1] + bboxes[:,3], bboxes[:,0]:bboxes[:,0] + bboxes[:,2]] = 1
Is there a way of doing such multiple slicing in numpy?
I am aware of numpy integer array indexing, but to generate such indices I am also unable to avoid python for loops.
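This question has no vectorized answer in this excerpt; one possible broadcasting sketch of my own (the helper name is hypothetical, and it trades the loop for a (K, H, W) boolean temporary, so it only pays off for modest image sizes or box counts):

import numpy as np

def fill_boxes(heatmap, bboxes):
    # interval masks per box along each axis, combined by broadcasting into
    # per-box (H, W) masks, then OR-ed over the K boxes
    H, W = heatmap.shape
    x, y, w, h = bboxes.T                                                        # each (K,)
    rows_in = (np.arange(H) >= y[:, None]) & (np.arange(H) < (y + h)[:, None])   # (K, H)
    cols_in = (np.arange(W) >= x[:, None]) & (np.arange(W) < (x + w)[:, None])   # (K, W)
    heatmap[(rows_in[:, :, None] & cols_in[:, None, :]).any(axis=0)] = 1
    return heatmap

bboxes = np.array([[0, 0, 4, 7], [3, 4, 3, 4], [7, 2, 3, 7]])
heatmap = fill_boxes(np.zeros((10, 11), dtype=int), bboxes)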

How can I keep the patches that contain all elements equal to 1

from sklearn.feature_extraction.image import extract_patches
import numpy as np
data = np.array([[1, 1, 0, 0, 0, 0, 1, 0],
                 [1, 1, 1, 0, 0, 1, 1, 0],
                 [1, 1, 0, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 0, 0, 1],
                 [1, 1, 0, 0, 0, 0, 1, 0],
                 [1, 1, 0, 0, 0, 0, 0, 0]])
patches = extract_patches(data, patch_shape=(2, 2))
How can I keep the patches that contain all elements equal to 1?
From the corrections to your post, I believe you might be looking for a way to detect where submatrices of shape (2,2) are all ones. Anywhere that condition isn't fulfilled should be zero, but priority should be given to the submatrices where it is fulfilled, because submatrices can overlap.
In that case, you're most likely interested in the staggered grid of that matrix that has a one in the center of each 2x2 submatrix whenever the 4 elements of that submatrix are all ones:
>>> import numpy as np
>>> from sklearn.feature_extraction.image import extract_patches # similar to numpy's stride_tricks
>>>
>>> data = np.array([[1, 1, 0, 0, 0, 0, 1, 0],
... [1, 1, 1, 0, 0, 1, 1, 0],
... [1, 1, 0, 1, 1, 0, 0, 0],
... [0, 0, 0, 1, 1, 0, 0, 0],
... [0, 0, 0, 1, 1, 0, 0, 1],
... [1, 1, 0, 0, 0, 0, 1, 0],
... [1, 1, 0, 0, 0, 0, 0, 0]])
>>>
>>> # to take boundary effects into account, append ones to the right and bottom
... # modify this to `np.zeros` if boundaries are to be set to zero
... data2 = np.ones((data.shape[0]+1, data.shape[1]+1))
>>> data2[:-1,:-1] = data
>>> vert = np.logical_and(data2[:-1,:], data2[1:,:])
>>> dual = np.logical_and(vert[:,:-1], vert[:,1:]) # dual is now the "dual" graph/staggered grid of the data2 array
>>> patches = extract_patches(data2, patch_shape=(2, 2)) # could've used numpy stride_tricks too
>>> patches[dual==0] = 0
>>> patches[dual] = 1 # Give precedence to the dual positives
>>> data2[:-1, :-1].astype(np.uint8)
array([[1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 1, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0]], dtype=uint8)
For completeness, this staggered-grid form of the matrix could also be obtained easily with a correlation with a np.ones((2,2)) kernel. However, that is computationally heavier, because a lot more work has to be done (multiplications and summations) rather than simple bit operations. The method above will outperform a correlation-based method in terms of speed.
The staggered grid dual above could also be generated in the following way:
patches = extract_patches(data, patch_shape=(2, 2))
dual = patches.all(axis=-1).all(axis=-1)
And you would obtain the final result with:
dual = patches.all(axis=-1).all(axis=-1)
patches[~dual] = 0
patches[dual] = 1
It differs from the previous method in what happens at the boundaries though.
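For reference, the np.ones((2,2)) correlation mentioned earlier can also be written as plain shifted-slice sums (my sketch, reusing the data array from this question, not code from the answer); it produces the same staggered grid as the patches.all(...) construction just above:

import numpy as np

# sum of each 2x2 neighbourhood; a value of 4 means the whole block is ones
dual = (data[:-1, :-1] + data[1:, :-1] +
        data[:-1, 1:] + data[1:, 1:]) == 4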
Here's an alternative method, using minimum_filter and maximum_filter from scipy.ndimage. (The description in the question is still too vague, for me anyway, so this is based on the result shown in @OliverW.'s answer.)
In [138]: from scipy.ndimage import minimum_filter, maximum_filter
In [139]: data
Out[139]:
array([[1, 1, 0, 0, 0, 0, 1, 0],
       [1, 1, 1, 0, 0, 1, 1, 0],
       [1, 1, 0, 1, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 0, 0, 1],
       [1, 1, 0, 0, 0, 0, 1, 0],
       [1, 1, 0, 0, 0, 0, 0, 0]])
In [140]: m = minimum_filter(data, size=(2,2), mode='constant', origin=(-1,-1))
In [141]: result = maximum_filter(m, size=(2,2), mode='constant')
In [142]: result
Out[142]:
array([[1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 1, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 0, 0, 0],
       [0, 0, 0, 1, 1, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 0, 0, 0, 0, 0, 0]])

Convert string to integer pandas dataframe index

I have a pandas dataframe with a MultiIndex. Unfortunately one of the indices gives years as strings,
e.g. '2010', '2011'
how do I convert these to integers?
More concretely
MultiIndex(levels=[[u'2010', u'2011'], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]],
           labels=[[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...],
                   [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, ...]],
           names=[u'Year', u'Month'])
df_cbs_prelim_total.index.set_levels(df_cbs_prelim_total.index.get_level_values(0).astype('int'))
seems to do it, but not inplace. Any proper way of changing them?
Cheers,
Mike
It will probably be cleaner to do this before you assign it as the index (as @EdChum points out), but when you already have it as the index, you can indeed use set_levels to alter one of the levels of your multi-index. A bit cleaner than your code (you can use index.levels[..]):
In [165]: idx = pd.MultiIndex.from_product([[1,2,3], ['2011','2012','2013']])
In [166]: idx
Out[166]:
MultiIndex(levels=[[1, 2, 3], [u'2011', u'2012', u'2013']],
           labels=[[0, 0, 0, 1, 1, 1, 2, 2, 2], [0, 1, 2, 0, 1, 2, 0, 1, 2]])
In [167]: idx.levels[1]
Out[167]: Index([u'2011', u'2012', u'2013'], dtype='object')
In [168]: idx = idx.set_levels(idx.levels[1].astype(int), level=1)
In [169]: idx
Out[169]:
MultiIndex(levels=[[1, 2, 3], [2011, 2012, 2013]],
           labels=[[0, 0, 0, 1, 1, 1, 2, 2, 2], [0, 1, 2, 0, 1, 2, 0, 1, 2]])
You have to reassign it to save the changes (as is done above; in your case this would be df_cbs_prelim_total.index = df_cbs_prelim_total.index.set_levels(...)).
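On current pandas the repr shows codes instead of labels, but the same reassignment pattern applies; a minimal sketch of mine on a toy frame:

import pandas as pd

df = pd.DataFrame(
    {'val': range(6)},
    index=pd.MultiIndex.from_product([['2010', '2011'], [1, 2, 3]],
                                     names=['Year', 'Month']),
)
# set_levels returns a new index, so reassign rather than expecting in-place
df.index = df.index.set_levels(df.index.levels[0].astype(int), level='Year')
print(df.index.levels[0])   # Index([2010, 2011], dtype='int64')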