Cython Typing List of Strings - numpy

I'm trying to use cython to improve the performance of a loop, but I'm running
into some issues declaring the types of the inputs.
How do I include a field in my typed struct which is a string that can be
either 'front' or 'back'?
I have a np.recarray that looks like the following (note that the length of the
recarray is unknown at compile time):
import numpy as np
weights = np.recarray(4, dtype=[('a', np.int64), ('b', np.str_, 5), ('c', np.float64)])
weights[0] = (0, "front", 0.5)
weights[1] = (0, "back", 0.5)
weights[2] = (1, "front", 1.0)
weights[3] = (1, "back", 0.0)
as well as inputs of a list of strings and a pandas.Timestamp
import pandas as pd
ts = pd.Timestamp("2015-01-01")
contracts = ["CLX16", "CLZ16"]
I am trying to cythonize the following loop
def ploop(weights, contracts, timestamp):
    cwts = []
    for gen_num, position, weighting in weights:
        if weighting != 0:
            if position == "front":
                cntrct_idx = gen_num
            elif position == "back":
                cntrct_idx = gen_num + 1
            else:
                raise ValueError("transition.columns must contain "
                                 "'front' or 'back'")
            cwts.append((gen_num, contracts[cntrct_idx], weighting, timestamp))
    return cwts
My attempt involved typing the weights input as a struct in Cython,
in a file struct_test.pyx, as follows:
import numpy as np
cimport numpy as np

cdef packed struct tstruct:
    np.int64_t gen_num
    char[5] position
    np.float64_t weighting

def cloop(tstruct[:] weights_array, contracts, timestamp):
    cdef tstruct w
    cdef int k
    cdef int cntrct_idx
    cwts = []
    for k in xrange(len(weights_array)):
        w = weights_array[k]
        if w.weighting != 0:
            if w.position == "front":
                cntrct_idx = w.gen_num
            elif w.position == "back":
                cntrct_idx = w.gen_num + 1
            else:
                raise ValueError("transition.columns must contain "
                                 "'front' or 'back'")
            cwts.append((w.gen_num, contracts[cntrct_idx], w.weighting,
                         timestamp))
    return cwts
But I am receiving runtime errors, which I believe are related to the
char[5] position.
import pyximport
pyximport.install()
import struct_test
struct_test.cloop(weights, contracts, ts)
ValueError: Does not understand character buffer dtype format string ('w')
In addition, I am a bit unclear on how I would go about typing contracts as
well as timestamp.

Your ploop (without the timestamp variable) produces:
In [226]: ploop(weights, contracts)
Out[226]: [(0, 'CLX16', 0.5), (0, 'CLZ16', 0.5), (1, 'CLZ16', 1.0)]
Equivalent function without a loop:
def ploopless(weights, contracts):
    arr_contracts = np.array(contracts)  # to allow array indexing
    wgts1 = weights[weights['c'] != 0]
    mask = wgts1['b'] == 'front'
    wgts1['b'][mask] = arr_contracts[wgts1['a'][mask]]
    mask = wgts1['b'] == 'back'
    wgts1['b'][mask] = arr_contracts[wgts1['a'][mask] + 1]
    return wgts1.tolist()
In [250]: ploopless(weights, contracts)
Out[250]: [(0, 'CLX16', 0.5), (0, 'CLZ16', 0.5), (1, 'CLZ16', 1.0)]
I'm taking advantage of the fact that the returned list of tuples has the same (int, str, float) layout as the input weights array, so I'm just making a copy of weights and replacing selected values of the b field.
Note that I use the field-selection index before the mask one. The boolean mask produces a copy, so we have to be careful about indexing order.
I'm guessing that the loop-less array version will be competitive in time with cloop (on realistic arrays). The string and list operations in cloop probably limit its speedup.
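On the ValueError itself: numpy stores np.str_ fields as unicode ('<U5', buffer format 'w'), which Cython's buffer protocol does not understand, whereas a packed-struct char[5] field corresponds to the fixed-width bytes dtype 'S5'. Here is a minimal, untested sketch of that workaround; the bytes-comparison details inside cloop are an assumption on my part:
import numpy as np

# Cast the recarray so field 'b' holds raw bytes ('S5') instead of unicode.
bweights = weights.astype([('a', np.int64), ('b', 'S5'), ('c', np.float64)])

# Inside cloop, the comparisons would then use bytes literals, e.g.
#     if w.position == b"front": ...
# Beware that 'S5' pads shorter values like "back" with NUL bytes, so the
# char[5] field may need trimming before comparison.
struct_test.cloop(bweights, contracts, ts)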

Related

How to get the indices of x smallest elements in a large numpy matrix/multi-dimensional array (works for any number of dimensions)?

Given a large numpy matrix/multi-dimensional array, what is the best and fastest way to get the indices of the x smallest elements?
from typing import Tuple
import numpy as np

def get_indices_of_k_smallest_as_array(arr: np.ndarray, k: int) -> np.ndarray:
    idx = np.argpartition(arr.ravel(), k)
    return np.array(np.unravel_index(idx, arr.shape))[:, range(k)].transpose().tolist()

def get_indices_of_k_smallest_as_tuple(arr: np.ndarray, k: int) -> Tuple:
    idx = np.argpartition(arr.ravel(), k)
    return tuple(np.array(np.unravel_index(idx, arr.shape))[:, range(min(k, 0), max(k, 0))])
This answer gives the correct indices, but those indices aren't sorted by the size of the elements. That's just how the introselect algorithm used by np.argpartition under the hood works (https://en.wikipedia.org/wiki/Introselect).
It would be nice if the return value were also sorted by element size, e.g. index 0 of the return points to the smallest element, index 1 points to the 2nd smallest element, etc.
Here's how to do it with sorting. Keep in mind that sorting the results after np.argpartition is going to be much faster than sorting the entire multi-dimensional array.
def get_indices_of_k_smallest_as_array(arr: np.ndarray, k: int) -> np.ndarray:
    ravel_array = arr.ravel()
    indices_on_ravel = np.argpartition(ravel_array, k)
    sorted_indices_on_ravel = sorted(indices_on_ravel, key=lambda x: ravel_array[x])
    sorted_indices_on_original = np.array(np.unravel_index(sorted_indices_on_ravel, arr.shape))[:, range(k)].transpose().tolist()
    # for the fun of numpy indexing, you can do it this way too
    # indices_on_original = np.array(np.unravel_index(indices_on_ravel, arr.shape))[:, range(k)].transpose().tolist()
    # sorted_indices_on_original = sorted(indices_on_original, key=lambda x: arr[tuple(np.array(x).T)])
    return sorted_indices_on_original

def get_indices_of_k_smallest_as_tuple(arr: np.ndarray, k: int) -> Tuple:
    ravel_array = arr.ravel()
    indices_on_ravel = np.argpartition(ravel_array, k)
    sorted_indices_on_ravel = sorted(indices_on_ravel, key=lambda x: ravel_array[x])
    sorted_indices_on_original = tuple(
        np.array(np.unravel_index(sorted_indices_on_ravel, arr.shape))[:, range(min(k, 0), max(k, 0))]
    )
    return sorted_indices_on_original
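A quick check on a made-up 2x3 array (my example, not part of the original answer), showing that the index pairs now come back ordered by element value:
import numpy as np

arr = np.array([[5., 1., 8.],
                [2., 9., 3.]])
# The three smallest values are 1., 2., 3., located at (0,1), (1,0), (1,2).
print(get_indices_of_k_smallest_as_array(arr, 3))
# [[0, 1], [1, 0], [1, 2]]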

PYQGIS: How to use QgsRasterFileWriter.writeRaster to create raster from numpy array

I am trying to use the method writeRaster from qgis.core.QgsRasterFileWriter to create a single-band raster of floats and NaNs, but according to the documentation I need to provide these inputs:
writeRaster(
    self,                                    # OK
    pipe: QgsRasterPipe,                     # Q1
    nCols: int,                              # OK
    nRows: int,                              # OK
    outputExtent: QgsRectangle,              # Q2
    crs: QgsCoordinateReferenceSystem,       # OK
    feedback: QgsRasterBlockFeedback = None  # OK
) → QgsRasterFileWriter.WriterError
I have 2 questions here:
Q1: What is a QgsRasterPipe, how to use it and what is its purpose?
The documentation says: Constructor for QgsRasterPipe. Base class for processing modules.
The few examples of writeRaster I found online just initialize this object, so what do I need to provide in the pipe argument?
Q2: The argument outputExtent of type QgsRectangle seems to be the bounding area of my raster: QgsRectangle(x_min, y_min, x_max, y_max). But here is my question: where do I declare the values of the pixels?
Here is the script (not working) I have for the moment:
import os
import numpy
from qgis.core import (
    QgsMapLayer,
    QgsRasterFileWriter,
    QgsCoordinateReferenceSystem,
    QgsRasterPipe,
    QgsRectangle,
)

def write_to_geotiff(data: list, filename: str, epsg: str, layer: str = None) -> None:
    x_data = data[0]
    y_data = data[1]
    z_data = data[2]
    nx, ny = len(x_data), len(y_data)
    QgsRasterFileWriter.writeRaster(
        QgsRasterPipe(),
        nCols=nx,
        nRows=ny,
        outputExtent=QgsRectangle(
            min(x_data),
            min(y_data),
            max(x_data),
            max(y_data)
        ),
        crs=QgsCoordinateReferenceSystem(f"epsg:{epsg}"),
    )

if __name__ == "__main__":
    filename = r"C:\Users\vince\Downloads\test.gpkg"
    x_data = numpy.asarray([0, 1, 2])
    y_data = numpy.asarray([0, 1])
    z_data = numpy.asarray([
        [0.1, numpy.nan],
        [0.5, 139.5],
        [150.98, numpy.nan],
    ])
    epsg = "4326"
    write_to_geotiff(
        [x_data, y_data, z_data],
        filename,
        epsg
    )
I saw this answer for Q1; the data goes in the pipe variable. But I don't know how to create a QgsRasterBlock from my numpy array...
I got it working using the method QgsRasterFileWriter.createOneBandRaster, which creates a provider.
You can then get a block of the provider, of type QgsRasterBlock, and use its setValue method to assign values.
from qgis.core import Qgis  # needed for the Float32 data type

writer = QgsRasterFileWriter(filename)
provider = QgsRasterFileWriter.createOneBandRaster(
    writer,
    dataType=Qgis.Float32,
    width=nx,
    height=ny,
    extent=QgsRectangle(
        min(x_data),
        min(y_data),
        max(x_data),
        max(y_data)
    ),
    crs=QgsCoordinateReferenceSystem(f"epsg:{epsg}"),
)
provider.setNoDataValue(1, -1)
provider.setEditable(True)
block = provider.block(
    bandNo=1,
    boundingBox=provider.extent(),
    width=provider.xSize(),
    height=provider.ySize()
)
for ix in range(nx):
    for iy in range(ny):
        value = z_data[ix][iy]
        if numpy.isnan(value):  # skip NaN cells so they stay as no-data
            continue
        block.setValue(iy, ix, value)
provider.writeBlock(
    block=block,
    band=1,
    xOffset=0,
    yOffset=0
)
provider.setEditable(False)
This will create a TIFF file (result screenshot omitted).
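A side note on the NaN test (my observation): the loop originally used `value == numpy.nan`, which is always False because NaN never compares equal to anything, so it can never skip the missing cells; numpy.isnan is the reliable check, which is why the loop above uses it.
import numpy
print(numpy.nan == numpy.nan)   # False: NaN compares unequal even to itself
print(numpy.isnan(numpy.nan))   # True: the correct way to detect NaN cells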

What is wrong with my cython implementation of erosion operation of mathematical morphology

I have produced a naive implementation of "erosion". The performance is not relevant since I am just trying to understand the algorithm. However, the output of my implementation does not match the one I get from scipy.ndimage. What is wrong with my implementation?
Here is my implementation with a small test case:
import numpy as np
from PIL import Image
# a small image to play with a cross structuring element
imgmat = np.array([
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,1,1,0,0,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,1,1,1,0,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,1,0,1,1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,1,0,1,1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,1,1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,1,1,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0],
])
imgmat2 = np.where(imgmat == 0, 0, 255).astype(np.uint8)
imarr = Image.fromarray(imgmat2).resize((100, 200))
imarr = np.array(imarr)
imarr = np.where(imarr == 0, 0, 1)
se_mat3 = np.array([
[0,1,0],
[1,1,1],
[0,1,0]
])
se_mat31 = np.where(se_mat3 == 1, 0, 1)
The resulting imarr is shown below (figure omitted).
My implementation of erosion:
%%cython -a
import numpy as np
cimport numpy as cnp

cdef erosionC(cnp.ndarray[cnp.int_t, ndim=2] img,
              cnp.ndarray[cnp.int_t, ndim=2] B, cnp.ndarray[cnp.int_t, ndim=2] X):
    """
    X: image coordinates
    struct_element_mat: black and white image, the black region is considered
    as the shape of the structuring element

    This operation checks whether (B *includes* X) = $B \subset X$
    as defined in
    Serra (Jean), "Introduction to mathematical morphology",
    Computer Vision, Graphics, and Image Processing,
    vol. 35, no. 3 (September 1986), p. 283-305.
    URL: https://linkinghub.elsevier.com/retrieve/pii/0734189X86900022
    doi: 10.1016/0734-189X(86)90002-2
    Consulted 6 August 2020.
    """
    cdef cnp.ndarray[cnp.int_t, ndim=1] a, x, bx
    cdef cnp.ndarray[cnp.int_t, ndim=2] Bx, B_frame, Xcp, b
    cdef bint check
    a = B[0]  # get an anchor point from the structuring element coordinates
    B_frame = B - a  # express the SE coordinates with respect to the anchor point
    Xcp = X.copy()
    b = img.copy()
    for x in X:  # X contains the foreground coordinates in the image
        Bx = B_frame + x  # translate the relative coordinates, taking the foreground coordinate as anchor point
        check = True  # this is erosion, so if any SE coordinate is not in the foreground we consider it a miss
        for bx in Bx:  # Bx contains all the translated coordinates of the SE
            if bx not in Xcp:
                check = False
        if check:
            b[x[0], x[1]] = 1  # there is a hit
        else:
            b[x[0], x[1]] = 0  # there is no hit
    return b

def erosion(img: np.ndarray, struct_el_mat: np.ndarray, foregroundValue=0):
    B = np.argwhere(struct_el_mat == 0)
    X = np.argwhere(img == foregroundValue)
    nimg = erosionC(img, B, X)
    return np.where(nimg == 1, 255, 0)
The calling code for both is:
from scipy import ndimage as nd

err = nd.binary_erosion(imarr, se_mat3)
imerrCustom = erosion(imarr, se_mat31, foregroundValue=1)
err and imerrCustom produce visibly different results (output images omitted).
In the end I am still not sure about it, but after reading several more papers I assume that my interpretation of X as foreground coordinates was an error; it should probably have been the entire image that is iterated over.
As I have stated, I am not sure whether this interpretation is correct either, but I made a new implementation which iterates over the whole image, and it gives a more plausible result. I am sharing it here, hoping that it might help someone:
%%cython -a
import numpy as np
cimport numpy as cnp

cdef dilation_c(cnp.ndarray[cnp.uint8_t, ndim=2] X,
                cnp.ndarray[cnp.uint8_t, ndim=2] SE):
    """
    X: boolean image
    SE: structuring element matrix
    origin: coordinate of the origin of the structuring element

    This operation checks whether (B *hits* X) = $B \cap X \neq \emptyset$
    as defined in
    Serra (Jean), "Introduction to mathematical morphology",
    Computer Vision, Graphics, and Image Processing,
    vol. 35, no. 3 (September 1986), p. 283-305.
    URL: https://linkinghub.elsevier.com/retrieve/pii/0734189X86900022
    doi: 10.1016/0734-189X(86)90002-2
    Consulted 6 August 2020.

    The algorithm adapts DILDIRECT of
    Najman (Laurent) and Talbot (Hugues),
    Mathematical morphology: from theory to applications,
    2013, ISBN 9781118600788, p. 329
    to the formula given in
    Jähne (Bernd), Digital image processing,
    6th rev. and ext. ed., Berlin; New York, 2005.
    TA1637 .J34 2005. ISBN 978-3-540-24035-8.
    """
    cdef cnp.ndarray[cnp.uint8_t, ndim=2] O
    cdef int r, c, X_rows, X_cols, SE_rows, SE_cols, se_r, se_c
    cdef list conds
    cdef bint b, p
    O = np.zeros_like(X)
    X_rows, X_cols = X.shape[:2]
    SE_rows, SE_cols = SE.shape[:2]
    # a boolean convolution
    for r in range(0, X_rows - SE_rows):
        for c in range(0, X_cols - SE_cols):
            conds = []
            for se_r in range(SE_rows):
                for se_c in range(SE_cols):
                    b = <bint>SE[se_r, se_c]
                    p = <bint>X[se_r + r, se_c + c]
                    conds.append(b and p)
            O[r, c] = <cnp.uint8_t>any(conds)
    return O

def dilation_erosion(
        img: np.ndarray,
        struct_el_mat: np.ndarray,
        foregroundValue: int = 1,
        isErosion: bool = False):
    """
    img: image matrix
    struct_el_mat: NxN mesh grid of the structuring element whose center is the SE's origin;
        the structuring element is encoded as 1
    foregroundValue: value to be considered as foreground in the image
    """
    B = struct_el_mat.astype(np.uint8)
    if isErosion:
        X = np.where(img == foregroundValue, 0, 1).astype(np.uint8)
    else:
        X = np.where(img == foregroundValue, 1, 0).astype(np.uint8)
    nimg = dilation_c(X, B)
    foreground, background = (255, 0) if foregroundValue == 1 else (0, 1)
    if isErosion:
        return np.where(nimg == 1, background, foreground).astype(np.uint8)
    else:
        return np.where(nimg == 1, foreground, background).astype(np.uint8)
    # return nimg
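A quick way to eyeball the new version against scipy, reusing imarr and se_mat3 from the test case above (my snippet, not part of the original answer); note that dilation_erosion expects the structuring element encoded with 1s, unlike the inverted se_mat31 used by the first version:
from scipy import ndimage as nd

ref = nd.binary_erosion(imarr, se_mat3)
out = dilation_erosion(imarr, se_mat3, foregroundValue=1, isErosion=True)
# dilation_c anchors the SE at the top-left corner of the window rather than
# its center, so the custom output is shifted relative to scipy's; compare
# the regions visually rather than element-wise.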

Unravel Index loops forever

I am doing some work using image processing and sparse coding. Problem is, the following code works only on some images.
Here is the image that it works perfectly on: (image omitted)
And here is the image that it loops forever on: (image omitted)
Here is the code:
import cv2
import numpy as np
import networkx as nx
from preproc import Preproc

# From https://github.com/vicariousinc/science_rcn/blob/master/science_rcn/learning.py
def sparsify(bu_msg, suppress_radius=3):
    """Make a sparse representation of the edges by greedily selecting features from the
    output of preprocessing layer and suppressing overlapping activations.

    Parameters
    ----------
    bu_msg : 3D numpy.ndarray of float
        The bottom-up messages from the preprocessing layer.
        Shape is (num_feats, rows, cols)
    suppress_radius : int
        How many pixels in each direction we assume this filter
        explains when included in the sparsification.

    Returns
    -------
    frcs : see train_image.
    """
    frcs = []
    img = bu_msg.max(0) > 0
    while True:
        r, c = np.unravel_index(img.argmax(), img.shape)
        print(r, c)
        if not img[r, c]:
            break
        frcs.append((bu_msg[:, r, c].argmax(), r, c))
        img[r - suppress_radius:r + suppress_radius + 1,
            c - suppress_radius:c + suppress_radius + 1] = False
    return np.array(frcs)

if __name__ == '__main__':
    img = cv2.imread('https://i.stack.imgur.com/Nb08A.png', 0)
    img2 = cv2.imread('https://i.stack.imgur.com/2MW93.png', 0)
    prp = Preproc()
    bu_msg = prp.fwd_infer(img)
    frcs = sparsify(bu_msg)
and the accompanying preprocessing code:
"""A pre-processing layer of the RCN model. See Sec S8.1 for details.
"""
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.ndimage.filters import gaussian_filter
from scipy.signal import fftconvolve
class Preproc(object):
"""
A simplified preprocessing layer implementing Gabor filters and suppression.
Parameters
----------
num_orients : int
Number of edge filter orientations (over 2pi).
filter_scale : float
A scale parameter for the filters.
cross_channel_pooling : bool
Whether to pool across neighboring orientation channels (cf. Sec S8.1.4).
Attributes
----------
filters : [numpy.ndarray]
Kernels for oriented Gabor filters.
pos_filters : [numpy.ndarray]
Kernels for oriented Gabor filters with all-positive values.
suppression_masks : numpy.ndarray
Masks for oriented non-max suppression.
"""
def __init__(self,
num_orients=16,
filter_scale=2.,
cross_channel_pooling=False):
self.num_orients = num_orients
self.filter_scale = filter_scale
self.cross_channel_pooling = cross_channel_pooling
self.suppression_masks = generate_suppression_masks(filter_scale=filter_scale,
num_orients=num_orients)
def fwd_infer(self, img, brightness_diff_threshold=18.):
"""Compute bottom-up (forward) inference.
Parameters
----------
img : numpy.ndarray
The input image.
brightness_diff_threshold : float
Brightness difference threshold for oriented edges.
Returns
-------
bu_msg : 3D numpy.ndarray of float
The bottom-up messages from the preprocessing layer.
Shape is (num_feats, rows, cols)
"""
filtered = np.zeros((len(self.filters),) + img.shape, dtype=np.float32)
for i, kern in enumerate(self.filters):
filtered[i] = fftconvolve(img, kern, mode='same')
localized = local_nonmax_suppression(filtered, self.suppression_masks)
# Threshold and binarize
localized *= (filtered / brightness_diff_threshold).clip(0, 1)
localized[localized < 1] = 0
if self.cross_channel_pooling:
pooled_channel_weights = [(0, 1), (-1, 1), (1, 1)]
pooled_channels = [-np.ones_like(sf) for sf in localized]
for i, pc in enumerate(pooled_channels):
for channel_offset, factor in pooled_channel_weights:
ch = (i + channel_offset) % self.num_orients
pos_chan = localized[ch]
if factor != 1:
pos_chan[pos_chan > 0] *= factor
np.maximum(pc, pos_chan, pc)
bu_msg = np.array(pooled_channels)
else:
bu_msg = localized
# Setting background to -1
bu_msg[bu_msg == 0] = -1.
return bu_msg
#property
def filters(self):
return get_gabor_filters(
filter_scale=self.filter_scale, num_orients=self.num_orients, weights=False)
#property
def pos_filters(self):
return get_gabor_filters(
filter_scale=self.filter_scale, num_orients=self.num_orients, weights=True)
def get_gabor_filters(size=21, filter_scale=4., num_orients=16, weights=False):
"""Get Gabor filter bank. See Preproc for parameters and returns."""
def _get_sparse_gaussian():
"""Sparse Gaussian."""
size = 2 * np.ceil(np.sqrt(2.) * filter_scale) + 1
alt = np.zeros((int(size), int(size)), np.float32)
alt[int(size // 2), int(size // 2)] = 1
gaussian = gaussian_filter(alt, filter_scale / np.sqrt(2.), mode='constant')
gaussian[gaussian < 0.05 * gaussian.max()] = 0
return gaussian
gaussian = _get_sparse_gaussian()
filts = []
for angle in np.linspace(0., 2 * np.pi, num_orients, endpoint=False):
acts = np.zeros((size, size), np.float32)
x, y = np.cos(angle) * filter_scale, np.sin(angle) * filter_scale
acts[int(size / 2 + y), int(size / 2 + x)] = 1.
acts[int(size / 2 - y), int(size / 2 - x)] = -1.
filt = fftconvolve(acts, gaussian, mode='same')
filt /= np.abs(filt).sum() # Normalize to ensure the maximum output is 1
if weights:
filt = np.abs(filt)
filts.append(filt)
return filts
def generate_suppression_masks(filter_scale=4., num_orients=16):
"""
Generate the masks for oriented non-max suppression at the given filter_scale.
See Preproc for parameters and returns.
"""
size = 2 * int(np.ceil(filter_scale * np.sqrt(2))) + 1
cx, cy = size // 2, size // 2
filter_masks = np.zeros((num_orients, size, size), np.float32)
# Compute for orientations [0, pi), then flip for [pi, 2*pi)
for i, angle in enumerate(np.linspace(0., np.pi, num_orients // 2, endpoint=False)):
x, y = np.cos(angle), np.sin(angle)
for r in range(1, int(np.sqrt(2) * size / 2)):
dx, dy = round(r * x), round(r * y)
if abs(dx) > cx or abs(dy) > cy:
continue
filter_masks[i, int(cy + dy), int(cx + dx)] = 1
filter_masks[i, int(cy - dy), int(cx - dx)] = 1
filter_masks[num_orients // 2:] = filter_masks[:num_orients // 2]
return filter_masks
def local_nonmax_suppression(filtered, suppression_masks, num_orients=16):
"""
Apply oriented non-max suppression to the filters, so that only a single
orientated edge is active at a pixel. See Preproc for additional parameters.
Parameters
----------
filtered : numpy.ndarray
Output of filtering the input image with the filter bank.
Shape is (num feats, rows, columns).
Returns
-------
localized : numpy.ndarray
Result of oriented non-max suppression.
"""
localized = np.zeros_like(filtered)
cross_orient_max = filtered.max(0)
filtered[filtered < 0] = 0
for i, (layer, suppress_mask) in enumerate(zip(filtered, suppression_masks)):
competitor_maxs = maximum_filter(layer, footprint=suppress_mask, mode='nearest')
localized[i] = competitor_maxs <= layer
localized[cross_orient_max > filtered] = 0
return localized
The problem I found was that np.unravel_index returns all the positions of features for the first image, whereas it only returns (0, 0) continuously for the second. My hypothesis is that it is either a problem with the preprocessing code, or it is a bug in the np.unravel_index function itself, but I am not too sure.
Okay, so it turns out there is an underlying problem when calling argmax on the image. I rewrote the sparsification script to avoid argmax, and it works exactly the same. It should now work with any image.
def sparsify(bu_msg, suppress_radius=3):
    """Make a sparse representation of the edges by greedily selecting features from the
    output of preprocessing layer and suppressing overlapping activations.

    Parameters
    ----------
    bu_msg : 3D numpy.ndarray of float
        The bottom-up messages from the preprocessing layer.
        Shape is (num_feats, rows, cols)
    suppress_radius : int
        How many pixels in each direction we assume this filter
        explains when included in the sparsification.

    Returns
    -------
    frcs : see train_image.
    """
    frcs = []
    img = bu_msg.max(0) > 0
    for (r, c), _ in np.ndenumerate(img):
        if img[r, c]:
            frcs.append((bu_msg[:, r, c].argmax(), r, c))
            img[r - suppress_radius:r + suppress_radius + 1,
                c - suppress_radius:c + suppress_radius + 1] = False
    return np.array(frcs)
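One plausible mechanism for the endless (0, 0) in the original while-True version (my demonstration, not from the original post): when the strongest feature sits within suppress_radius of the top or left border, the suppression slice gets a negative start index, which numpy interprets as counting from the end, producing an empty slice that never clears the pixel. The ndenumerate rewrite avoids the infinite loop because it advances through the image regardless.
import numpy as np

img = np.zeros((10, 10), dtype=bool)
img[0, 0] = True                                    # an active feature on the border
r, c = np.unravel_index(img.argmax(), img.shape)    # -> (0, 0)
# With r = c = 0 and suppress_radius = 3, the suppression slice is
# img[-3:4, -3:4]; numpy wraps the -3 to index 7, so the slice is empty
# and the assignment is a no-op:
img[r - 3:r + 4, c - 3:c + 4] = False
print(img[r, c])   # still True -> argmax keeps returning (0, 0) forever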

Iterator with memory?

I'm working on an application which uses a Markov chain.
An example of this code follows:
chain = MarkovChain(order=1)
train_seq = ["", "hello", "this", "is", "a", "beautiful", "world"]
for i, word in enumerate(train_seq):
    chain.train(previous_state=train_seq[i-1], next_state=word)
What I am looking for is to iterate over train_seq, but to keep the N last elements.
for states in unknown(train_seq, order=1):
    # states should be a list of states, with states[-1] the newest word,
    # and states[:-1] the previous occurrences of the iteration.
    chain.train(*states)
Hope the description of my problem is clear enough.
window will give you n items from iterable at a time.
from collections import deque

def window(iterable, n=3):
    it = iter(iterable)
    d = deque(maxlen=n)
    for elem in it:
        d.append(elem)
        yield tuple(d)

print [x for x in window([1, 2, 3, 4, 5])]
# [(1,), (1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
If you want the same number of items even the first few times,
from collections import deque
from itertools import islice

def window(iterable, n=3):
    it = iter(iterable)
    d = deque(islice(it, n - 1), n)
    for elem in it:
        d.append(elem)
        yield tuple(d)

print [x for x in window([1, 2, 3, 4, 5])]
will do that.
seq = [1, 2, 3, 4, 5, 6, 7]
for w in zip(seq, seq[1:]):
    print w
You can also do the following to create arbitrarily-sized tuples:
tuple_size = 2
for w in zip(*(seq[i:] for i in range(tuple_size))):
    print w
edit: But it's probably better to use the iterator-based izip:
from itertools import izip

tuple_size = 4
for w in izip(*(seq[i:] for i in range(tuple_size))):
    print w
I tried this on my system with seq being 10,000,000 integers and the results were fairly instant.
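Note that the izip variant only avoids materializing the zipped list; each seq[i:] slice is still a full copy of the sequence, which is what the next answer removes with itertools.tee.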
Improving upon yan's answer to avoid copies:
from itertools import izip, tee

def staggered_iterators(sequence, count):
    iterator = iter(sequence)
    for i in xrange(count):
        result, iterator = tee(iterator)
        yield result
        next(iterator)

tuple_size = 4
for w in izip(*staggered_iterators(seq, tuple_size)):
    print w
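As a quick sanity check (my addition, reusing seq from above), the staggered-iterator version should produce the same windows as the slice-based izip version:
seq = [1, 2, 3, 4, 5, 6, 7]
expected = [(1, 2, 3, 4), (2, 3, 4, 5), (3, 4, 5, 6), (4, 5, 6, 7)]
assert list(izip(*staggered_iterators(seq, 4))) == expected
assert list(izip(*(seq[i:] for i in range(4)))) == expected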