I want to do a voxel-based measurement of spherical objects represented in a numpy array. Because of the sampling, these spheres appear as groups of cubes. I want to simulate the error introduced by this grid restriction. Is there any way to paint a 3D sphere into a numpy grid to run my simulations on? (So, basically, a sphere of unit size would be a single point in the array.)
Or is there another way of calculating the error introduced by sampling?
In 2-D that seems to be easy...
The most direct approach is to create a bounding-box array holding, at each point, the squared distance to the center of the sphere:
>>> radius = 3
>>> r2 = np.arange(-radius, radius+1)**2
>>> dist2 = r2[:, None, None] + r2[:, None] + r2
>>> volume = np.sum(dist2 <= radius**2)
>>> volume
123
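Since the question is about the error introduced by sampling, a natural follow-up is to compare this voxel count with the analytic volume 4/3·π·r³ over a range of radii. A minimal sketch (the helper name voxel_sphere_volume is mine):

import numpy as np

def voxel_sphere_volume(radius):
    # Count voxels whose center lies within `radius` of the sphere center.
    r2 = np.arange(-radius, radius + 1) ** 2
    dist2 = r2[:, None, None] + r2[:, None] + r2
    return np.sum(dist2 <= radius ** 2)

for radius in (1, 3, 10, 30):
    exact = 4 / 3 * np.pi * radius ** 3
    voxels = voxel_sphere_volume(radius)
    print(radius, voxels, (voxels - exact) / exact)

The relative error should shrink as the radius grows, which is exactly the grid-restriction effect you want to quantify.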
The 2D case is easier to visualize:
>>> dist2 = r2[:, None] + r2
>>> (dist2 <= radius**2).astype(int)
array([[0, 0, 0, 1, 0, 0, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 1, 0, 0, 0]])
>>> np.sum(dist2 <= radius**2)
29
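If you want to paint the sphere at an arbitrary position inside a larger grid rather than in a tight bounding box, the same squared-distance mask works with np.ogrid. A sketch (the function name and signature are my own):

import numpy as np

def paint_sphere(grid, center, radius, value=1):
    # Mark every voxel whose center is within `radius` of `center`.
    cz, cy, cx = center
    z, y, x = np.ogrid[:grid.shape[0], :grid.shape[1], :grid.shape[2]]
    mask = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    grid[mask] = value
    return grid

grid = paint_sphere(np.zeros((20, 20, 20), dtype=int), (10, 10, 10), 3)
print(grid.sum())  # 123, matching the bounding-box count above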
I have a challenge with tensor indexing in PyTorch. The problem is very simple: given a tensor, create an index tensor that indexes its maximum values per column.
x = T.tensor([[0, 3, 0, 5, 9, 8, 2, 0],
              [0, 4, 9, 6, 7, 9, 1, 0]])
Given this tensor, I would like to build a boolean mask for indexing its maximum values per column. To be specific, I do not need its maximum values, torch.max(x, dim=0), nor its indices, torch.argmax(x, dim=0), but a boolean mask for indexing another tensor based on the maximum values in this one. My ideal output would be:
# Input tensor
x
tensor([[0, 3, 0, 5, 9, 8, 2, 0],
        [0, 4, 9, 6, 7, 9, 1, 0]])

# Ideal output: boolean mask tensor
idx
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
        [0, 1, 1, 1, 0, 1, 0, 0]])
I know that values_max = x[idx] and values_max = x.max(dim=0).values are equivalent, but I am not looking for values_max; I am looking for idx.
I have built a solution, but it seems too complex, and I am sure torch has an optimized way to do this. I tried torch.index_select with the output of x.argmax(dim=0) but failed, so I built a custom solution that feels too cumbersome; I am asking for help to do this in a vectorized / torch-native way.
You can perform this operation by first extracting the index of the column-wise maximum of your tensor with torch.argmax, setting keepdim to True:
>>> x.argmax(0, keepdim=True)
tensor([[0, 1, 1, 1, 0, 1, 0, 0]])
Then you can use torch.scatter to place 1s in a zero tensor at the designated indices:
>>> torch.zeros_like(x).scatter(0, x.argmax(0,True), value=1)
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
        [0, 1, 1, 1, 0, 1, 0, 0]])
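One caveat: argmax marks exactly one entry per column, so columns with ties (like the all-zero columns here) get a single 1. If you would rather mark every tied maximum, comparing against the column-wise max is an alternative (my sketch, not part of the scatter approach above):

import torch as T

x = T.tensor([[0, 3, 0, 5, 9, 8, 2, 0],
              [0, 4, 9, 6, 7, 9, 1, 0]])

# Marks all positions equal to the column max, so tied columns get several 1s.
idx = (x == x.max(dim=0, keepdim=True).values).int()
print(idx)
# tensor([[1, 0, 0, 0, 1, 0, 1, 1],
#         [1, 1, 1, 1, 0, 1, 0, 1]], dtype=torch.int32)
# Note: columns 0 and 7 differ from the ideal output above because both rows tie at 0.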
Given an n×m matrix (n > m) of integers, I'd like to identify rows that are a multiple of a single other row, not a linear combination of multiple other rows.
I could scale all rows to unit length and find unique rows, but that is prone to floating-point issues and would also not detect vectors that are opposites of each other (pointing in the other direction).
Any ideas?
Example
A = array([[-1, -1, 0, 0],
           [-1, -1, 0, 1],
           [-1, 0, -1, 0],
           [-1, 0, 0, 0],
           [-1, 0, 0, 1],
           [-1, 0, 1, 1],
           [-1, 1, -1, 0],
           [-1, 1, 0, 0],
           [-1, 1, 1, 0],
           [ 0, -1, 0, 0],
           [ 0, -1, 0, 1],
           [ 0, -1, 1, 0],
           [ 0, -1, 1, 1],
           [ 0, 0, -1, 0],
           [ 0, 0, 0, 1],
           [ 0, 0, 1, 0],
           [ 0, 1, -1, 0],
           [ 0, 1, 0, 0],
           [ 0, 1, 0, 1],
           [ 0, 1, 1, 0],
           [ 0, 1, 1, 1],
           [ 1, -1, 0, 0],
           [ 1, -1, 1, 0],
           [ 1, 0, 0, 0],
           [ 1, 0, 0, 1],
           [ 1, 0, 1, 0],
           [ 1, 0, 1, 1],
           [ 1, 1, 0, 0],
           [ 1, 1, 0, 1],
           [ 1, 1, 1, 0]])
For example, rows 0 and -3 just point in opposite directions (multiply one by -1 to make them equal).
You can normalize each row by dividing it by its GCD:
import numpy as np
def normalize(a):
    return a // np.gcd.reduce(a, axis=1, keepdims=True)
And you can define a distance that considers opposite vectors as equal:
def distance(a, b):
    equal = np.all(a == b) or np.all(a == -b)
    return 0 if equal else 1
Then you can use standard clustering methods:
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
def cluster(a):
    norm_a = normalize(a)
    distances = pdist(norm_a, metric=distance)
    return fcluster(linkage(distances), t=0.5)
For example:
>>> A = np.array([( 1, 2, 3, 4),
... ( 0, 2, 4, 8),
... (-1, -2, -3, -4),
... ( 0, 1, 2, 4),
... (-1, 2, -3, 4),
... ( 2, -4, 6, -8)])
>>> cluster(A)
array([2, 3, 2, 3, 1, 1], dtype=int32)
Interpretation: cluster 1 is formed by rows 4 and 5, cluster 2 by rows 0 and 2, and cluster 3 by rows 1 and 3.
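One caveat worth adding (my note, not part of the answer above): a row of all zeros has GCD 0, so normalize would divide by zero. A guarded variant, assuming all-zero rows should pass through unchanged:

def normalize(a):
    g = np.gcd.reduce(a, axis=1, keepdims=True)
    g[g == 0] = 1  # leave all-zero rows untouched instead of dividing by zero
    return a // g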
You can take advantage of the fact that the inner product of two normalized, linearly dependent vectors is 1 or -1, so the code could look like this:
>>> A_normalized = (A.T/np.linalg.norm(A, axis=-1)).T
>>> M = np.absolute(np.einsum('ix,jx->ij', A_normalized, A_normalized))
>>> i, j = np.where(np.isclose(M, 1))
>>> i, j = i[i < j], j[i < j] # Remove repetitions
>>> print(i, j)
[ 0  2  3  6  7  9 11 13] [27 25 23 22 21 17 16 15]
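As a possible follow-up (my sketch, reusing i and j from above): every index in j duplicates, up to sign and scale, the row listed at the same position in i, so keeping one representative per direction could look like this:

# Drop the later member of each dependent pair, keeping one representative.
keep = np.setdiff1d(np.arange(len(A)), j)
A_unique = A[keep]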
Say I have a list of Indices:
np.array([1, 3, 2, 4])
How do I create the following matrix, where all elements up to and including the index are ones and those to the right of it are zeros?
[[1, 1, 0, 0, 0, 0],
 [1, 1, 1, 1, 0, 0],
 [1, 1, 1, 0, 0, 0],
 [1, 1, 1, 1, 1, 0]]
1 * (np.arange(6) <= arr[:, None])
# array([[1, 1, 0, 0, 0, 0],
#        [1, 1, 1, 1, 0, 0],
#        [1, 1, 1, 0, 0, 0],
#        [1, 1, 1, 1, 1, 0]])
This broadcasts the array of 6 column positions across the rows and the array of indices across the columns; the comparison is True wherever the position is at or before the index. The 1* converts boolean to int.
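Put together as a self-contained snippet (naming the question's index array arr explicitly):

import numpy as np

arr = np.array([1, 3, 2, 4])  # the indices from the question
mask = 1 * (np.arange(6) <= arr[:, None])
print(mask)
# [[1 1 0 0 0 0]
#  [1 1 1 1 0 0]
#  [1 1 1 0 0 0]
#  [1 1 1 1 1 0]]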
I'm a noob in Python!
I'd like to get the sequences and the anomaly labels together like this:
keep only the normal sequences (if the value of the anomaly column is 0, it's a normal sequence)
turn the normal sequences into a numpy array (without the anomaly column)
Each row (sequence) is one session, so in this case there are 6 independent sequences.
Each element represents some specific activity.
sequence = np.array([[5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 300, 200, 100]])
anomaly = np.array((0, 0, 0, 0, 0, 1))
I got these two variables and have to select only the normal sequences.
Here is the code I tried:
# sequence to dataframe
empty_df = pd.DataFrame(columns=['Sequence'])
empty_df.reset_index()
for i in range(sequence.shape[0]):
    empty_df = empty_df.append({"Sequence": sequence[i]}, ignore_index=True)

# concat anomaly
anomaly_df = pd.DataFrame(anomaly)
df = pd.concat([empty_df, anomaly_df], axis=1)
df.columns = ['Sequence', 'anomaly']
df
I didn't want to use pd.DataFrame(sequence) directly, because that splits each sequence across six separate columns.
Anyway, after making df, I tried to select the normal sequences:

# selecting normal sequences
normal = df[df['anomaly'] == 0]['Sequence']
# back to numpy, only the Sequence column
normal = normal.to_numpy()
normal.shape

And this array has a different shape from the variable sequence: sequence.shape is (6, 6), but normal.shape is (5,).
I want to get (5, 6). I tried reshape, but it didn't work.
Can someone help me with this?
If anything in my question is unclear, please leave a comment. I appreciate it.
I am not quite sure what you need, but here is what you could do:
import pandas as pd
df = pd.DataFrame({'sequence':sequence.tolist(), 'anomaly':anomaly})
df
                   sequence  anomaly
0        [5, 1, 1, 0, 0, 0]        0
1        [5, 1, 1, 0, 0, 0]        0
2        [5, 1, 1, 0, 0, 0]        0
3        [5, 1, 1, 0, 0, 0]        0
4        [5, 1, 1, 0, 0, 0]        0
5  [5, 1, 1, 300, 200, 100]        1
Then convert the column to a list and create the array from it.
Try:
normal = df.loc[df['anomaly'].eq(0), 'Sequence']
normal = np.array(normal.tolist())
print(normal.shape)
# (5,6)
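A side note (my suggestion, not from either answer above): if you only need the (5, 6) array, plain boolean indexing on the original arrays avoids the DataFrame round-trip entirely:

import numpy as np

sequence = np.array([[5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 0, 0, 0],
                     [5, 1, 1, 300, 200, 100]])
anomaly = np.array([0, 0, 0, 0, 0, 1])

normal = sequence[anomaly == 0]  # keep rows whose anomaly flag is 0
print(normal.shape)              # (5, 6)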
I have a numpy array heatmap of shape (img_height, img_width) and another array bboxes of shape (K, 4), where K is the number of bounding boxes. Each bounding box is defined like so: [x_top_left, y_top_left, width, height].
Here's an example of such an array:
bboxes = np.array([
    [0, 0, 4, 7],
    [3, 4, 3, 4],
    [7, 2, 3, 7]
])
heatmap is initially filled with zeros.
What I need to do is put the value 1 in each bounding box's corresponding place.
The resulting heatmap should be:
heatmap = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    [1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
])
Important things to note:
axis 0 corresponds to image height
axis 1 corresponds to image width
I've already solved this using a Python for loop, like so:

for bbox in bboxes:
    # rows y_top_left : y_top_left + height, cols x_top_left : x_top_left + width
    heatmap[bbox[1] : bbox[1] + bbox[3], bbox[0] : bbox[0] + bbox[2]] = 1
I would like to avoid Python for loops (if possible) and be able to do something like this:
heatmap[bboxes[:,1] : bboxes[:,1] + bboxes[:,3], bboxes[:,0]:bboxes[:,0] + bboxes[:,2]] = 1
Is there a way of doing such multiple slicing in numpy?
I am aware of numpy integer array indexing, but generating such indices also seems to require Python for loops.
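The thread ends here, but for completeness, one possible vectorized route (a sketch of mine, not an accepted answer): build a (K, H, W) stack of box masks via broadcasting and reduce with any. It trades the loop for memory, K full-size masks, so the explicit loop may still be preferable when K is large:

import numpy as np

bboxes = np.array([[0, 0, 4, 7],
                   [3, 4, 3, 4],
                   [7, 2, 3, 7]])
img_height, img_width = 10, 11

x, y, w, h = bboxes.T                        # each has shape (K,)
rows = np.arange(img_height)[None, :, None]  # shape (1, H, 1)
cols = np.arange(img_width)[None, None, :]   # shape (1, 1, W)

# (K, H, W) boolean stack: True where the cell lies inside box k.
inside = ((rows >= y[:, None, None]) & (rows < (y + h)[:, None, None]) &
          (cols >= x[:, None, None]) & (cols < (x + w)[:, None, None]))
heatmap = inside.any(axis=0).astype(int)  # collapse over boxes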