Getting the count of items under the diagonal in numpy

I have a correlation matrix, and I want to get the count of items below the diagonal, preferably in numpy.
[[1, 0, 0, 0, 0],
[.35, 1, 0, 0, 0],
[.42, .31, 1, 0, 0],
[.25, .38, .41, 1, 0],
[.21, .36, .46, .31, 1]]
I want it to return 10. Or, to return the mean of all numbers under the diagonal.

Setup
a = np.array([[1. , 0. , 0. , 0. , 0. ],
[0.35, 1. , 0. , 0. , 0. ],
[0.42, 0.31, 1. , 0. , 0. ],
[0.25, 0.38, 0.41, 1. , 0. ],
[0.21, 0.36, 0.46, 0.31, 1. ]])
numpy.tril_indices will give the indices of all elements under the diagonal (if you provide an offset of -1), and from there, it becomes as simple as indexing and calling mean and size
n, m = a.shape
idx = np.tril_indices(n=n, k=-1, m=m)
a[idx]
# array([0.35, 0.42, 0.31, 0.25, 0.38, 0.41, 0.21, 0.36, 0.46, 0.31])
a[idx].mean()
# 0.346
a[idx].size
# 10

A more primitive and bulky answer, since numpy already provides np.tril_indices as user3483203 mentioned, but the indices you want on each row iteration i are the following (in terms of [row, col] indices):
(i=0)
[1,0] (i=1)
[2,0] [2,1] (i=2)
[3,0] [3,1] [3,2] (i=3)
...
This is essentially the zip of the list [i, i, i, ...] = [i]*i (i repetitions of i) with [0, 1, ..., i-1] = range(i). So, iterating over the rows of the table, you can build the indices for each iteration and apply the operation of your choice.
Example setup:
test = np.array(
[[1, 0, 0, 0, 0],
[.35, 1, 0, 0, 0],
[.42, .31, 1, 0, 0],
[.25, .38, .41, 1, 0],
[.21, .36, .46, .31, 1]])
Function definition:
def countdiag(myarray):
    numvals = 0
    totsum = 0
    for i in range(myarray.shape[0]):      # row iteration
        colc = np.array(range(i))          # column indices for row i
        rowc = np.array([i] * i)           # row indices (i repeated i times)
        if any(rowc):                      # skip row 0, which has no sub-diagonal entries
            print(np.sum(myarray[rowc, colc]))
            print(len(myarray[rowc, colc]))
            numvals += len(myarray[rowc, colc])
            totsum += np.sum(myarray[rowc, colc])
        print(list(zip([i] * i, np.arange(i))))
    mean = totsum / numvals
    print(mean)
    return mean, numvals
Test:
In [165]: countdiag(test)
[]
0.35
1
[(1, 0)]
0.73
2
[(2, 0), (2, 1)]
1.04
3
[(3, 0), (3, 1), (3, 2)]
1.34
4
[(4, 0), (4, 1), (4, 2), (4, 3)]
0.346
Out[165]:
(0.346, 10)

Related

Identify vectors being a multiple of another in rectangular matrix

Given an n×m matrix (n > m) of integers, I'd like to identify rows that are a multiple of a single other row, so not a linear combination of multiple other rows.
I could scale each row by its norm and find unique rows, but that is prone to floating-point issues and would also not detect vectors that are opposites of each other (pointing in the other direction).
Any ideas?
Example
A = array([[-1, -1, 0, 0],
[-1, -1, 0, 1],
[-1, 0, -1, 0],
[-1, 0, 0, 0],
[-1, 0, 0, 1],
[-1, 0, 1, 1],
[-1, 1, -1, 0],
[-1, 1, 0, 0],
[-1, 1, 1, 0],
[ 0, -1, 0, 0],
[ 0, -1, 0, 1],
[ 0, -1, 1, 0],
[ 0, -1, 1, 1],
[ 0, 0, -1, 0],
[ 0, 0, 0, 1],
[ 0, 0, 1, 0],
[ 0, 1, -1, 0],
[ 0, 1, 0, 0],
[ 0, 1, 0, 1],
[ 0, 1, 1, 0],
[ 0, 1, 1, 1],
[ 1, -1, 0, 0],
[ 1, -1, 1, 0],
[ 1, 0, 0, 0],
[ 1, 0, 0, 1],
[ 1, 0, 1, 0],
[ 1, 0, 1, 1],
[ 1, 1, 0, 0],
[ 1, 1, 0, 1],
[ 1, 1, 1, 0]])
For example, rows 0 and -3 just point in opposite directions (multiply one by -1 to make them equal).
You can normalize each row by dividing it by its GCD:
import numpy as np
def normalize(a):
    return a // np.gcd.reduce(a, axis=1, keepdims=True)
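For instance, on two rows borrowed from the example further down (a quick sanity check, not part of the original answer):
>>> normalize(np.array([[0, 2, 4, 8], [2, -4, 6, -8]]))
array([[ 0,  1,  2,  4],
       [ 1, -2,  3, -4]])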
And you can define a distance that considers opposite vectors as equal:
def distance(a, b):
    equal = np.all(a == b) or np.all(a == -b)
    return 0 if equal else 1
Then you can use standard clustering methods:
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
def cluster(a):
    norm_a = normalize(a)
    distances = pdist(norm_a, metric=distance)
    return fcluster(linkage(distances), t=0.5)
For example:
>>> A = np.array([( 1, 2, 3, 4),
... ( 0, 2, 4, 8),
... (-1, -2, -3, -4),
... ( 0, 1, 2, 4),
... (-1, 2, -3, 4),
... ( 2, -4, 6, -8)])
>>> cluster(A)
array([2, 3, 2, 3, 1, 1], dtype=int32)
Interpretation: cluster 1 is formed by rows 4 and 5, cluster 2 by rows 0 and 2, and cluster 3 by rows 1 and 3.
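If you prefer the groups as explicit row indices, a small follow-up sketch (not part of the original answer) collects them from the labels:
>>> labels = cluster(A)
>>> {int(lab): np.flatnonzero(labels == lab).tolist() for lab in np.unique(labels)}
{1: [4, 5], 2: [0, 2], 3: [1, 3]}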
You can take advantage of the fact that the inner product of two normalized, linearly dependent vectors is 1 or -1, so the code could look like this:
>>> A_normalized = (A.T/np.linalg.norm(A, axis=-1)).T
>>> M = np.absolute(np.einsum('ix,jx->ij', A_normalized, A_normalized))
>>> i, j = np.where(np.isclose(M, 1))
>>> i, j = i[i < j], j[i < j] # Remove repetitions
>>> print(i, j)
[ 0  2  3  6  7  9 11 13] [27 25 23 22 21 17 16 15]
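To read those two index arrays as matched pairs (just a formatting step on the result above):
>>> list(zip(i.tolist(), j.tolist()))
[(0, 27), (2, 25), (3, 23), (6, 22), (7, 21), (9, 17), (11, 16), (13, 15)]
Each pair (r, s) says that row r of A is a scalar multiple of row s; for instance rows 0 and 27 (row -3) are opposites, matching the example in the question.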

how to create 0-1 matrix from probability matrix

(In any language) For a research project, I am stuck on how to convert a matrix P of probability values to a matrix A such that A_ij = 1 with probability P_ij and 0 otherwise. I have looked through various random number generator documentation, but have been unable to figure out how to do this.
If I understand correctly:
In [11]: p = np.random.uniform(size=(5,5))
In [12]: p
Out[12]:
array([[ 0.45481883, 0.21242567, 0.3124863 , 0.00485797, 0.31970718],
[ 0.91995847, 0.29907277, 0.59154085, 0.85847147, 0.13227595],
[ 0.91914631, 0.5495813 , 0.58648856, 0.08037582, 0.23005148],
[ 0.12464628, 0.70657028, 0.75975869, 0.77632964, 0.24587041],
[ 0.69259133, 0.183515 , 0.65500547, 0.19526148, 0.26975325]])
In [13]: a = (p.round(1)==0.7).astype(np.int8)
In [14]: a
Out[14]:
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[1, 0, 1, 0, 0]], dtype=int8)
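For the question exactly as stated (A_ij = 1 with probability P_ij, independently for each cell), a minimal sketch is to draw a uniform random matrix of the same shape as P and compare elementwise; the names P and A here are just the question's notation:
P = np.random.uniform(size=(5, 5))                          # example probability matrix
A = (np.random.uniform(size=P.shape) < P).astype(np.int8)   # each cell is 1 with probability P_ij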

How to set cells of matrix from matrix of columns indexes

I'd like to build a kernel from a list of positions and list of kernel centers. The kernel should be an indicator of the TWO closest centers to each position.
> x = np.array([0.1, .49, 1.9, ]).reshape((3,1)) # Positions
> c = np.array([-2., 0.1, 0.2, 0.4, 0.5, 2.]) # centers
print x
print c
[[ 0.1 ]
[ 0.49]
[ 1.9 ]]
[-2. 0.1 0.2 0.4 0.5 2. ]
What I'd like to get out is:
array([[ 0, 1, 1, 0, 0, 0], # Index 1,2 closest to 0.1
[ 0, 0, 0, 1, 1, 0], # Index 3,4 closest to 0.49
[ 0, 0, 0, 0, 1, 1]]) # Index 4,5 closest to 1.9
I can get:
> dist = np.abs(x-c)
array([[ 2.1 , 0. , 0.1 , 0.3 , 0.4 , 1.9 ],
[ 2.49, 0.39, 0.29, 0.09, 0.01, 1.51],
[ 3.9 , 1.8 , 1.7 , 1.5 , 1.4 , 0.1 ]])
and:
> np.argsort(dist, axis=1)[:,:2]
array([[1, 2],
[4, 3],
[5, 4]])
Here I have a matrix of column indexes, but I can't see how to use them to set values of those columns in another matrix (using efficient numpy operations).
idx = np.argsort(dist, axis=1)[:,:2]
z = np.zeros(dist.shape)
z[idx]=1 # NOPE
z[idx,:]=1 # NOPE
z[:,idx]=1 # NOPE
One way would be to initialize a zeros array and then index with advanced-indexing -
out = np.zeros(dist.shape,dtype=int)
out[np.arange(idx.shape[0])[:,None],idx] = 1
Alternatively, we could play around with extending dimensions to use broadcasting and come up with a one-liner -
out = (idx[...,None] == np.arange(dist.shape[1])).any(1).astype(int)
For performance, I would suggest using np.argpartition to get those indices -
idx = np.argpartition(dist, 2, axis=1)[:,:2]
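If your NumPy has np.put_along_axis (1.15+), the scatter step can also be written without the explicit row-index helper; a small sketch, not part of the answer above:
out = np.zeros(dist.shape, dtype=int)
np.put_along_axis(out, idx, 1, axis=1)   # set out[r, idx[r, k]] = 1 for every row r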

Compare all values in one column with all values in another column and return indexes

I am interested in comparing all values from one dataframe column with all values from a second column, and then generating a list or a subset df with the values from a third column that are adjacent to the first column's hits. Hopefully this example will explain it better:
For a simplified example, say I generate the following pandas dataframe:
fake_df=pd.DataFrame({'m':[100,120,101,200,201,501,350,420,525,500],
'n':[10.0,11.0,10.2,1.0,2.0,1.1,3.0,1.0,2.0,1.0],
'mod':[101.001,121.001,102.001,201.001,202.001,502.001,351.001,421.001,526.001,501.001]})
print fake_df
What I am interested in doing is finding all values in column 'm' that are within 0.1 of any value in
column 'mod' and return the values in column 'n' that correspond to the column 'm' hits. So for the above code, the return would be:
10.2, 2.0, 1.1
(since 101, 201 and 501 all have close hits in column 'mod').
I have found ways to compare across the same row, but not like above. Is there a way to do this in pandas without extensive loops?
Thanks!
I don't know of such a method in pandas, but when you extend your scope to include
numpy, two options come to mind.
Easy/Expensive Method
If you can live with N**2 memory overhead, you can do numpy broadcasting to
find out all "adjacent" elements in one step:
In [25]: fake_df=pd.DataFrame({'m':[100,120,101,200,201,501,350,420,525,500],
'n':[10.0,11.0,10.2,1.0,2.0,1.1,3.0,1.0,2.0,1.0],
'mod':[101.001,121.001,102.001,201.001,202.001,502.001,351.001,421.001,526.001,501.001]})
In [26]: mvals = fake_df['m'].values
In [27]: modvals = fake_df['mod'].values
In [28]: is_close = np.abs(mvals - modvals[:, np.newaxis]) <= 0.1; is_close.astype(int)
Out[28]:
array([[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])
As we care only about 'm' values that have an adjacent 'mod', aggregate over axis=0:
In [29]: is_close.any(axis=0).astype(int)
Out[29]: array([0, 0, 1, 0, 1, 1, 0, 0, 0, 0])
Or, to pull the corresponding 'n' values directly:
In [30]: fake_df.ix[is_close.any(axis=0), 'n']
Out[30]:
2 10.2
4 2.0
5 1.1
Name: n, dtype: float64
Efficient/Complex Method
To find adjacent elements in less than O(N**2) without any hashing/rounding
tricks, you have to do some sorting:
In [103]: modvals_sorted = np.sort(modvals)
In [104]: next_indices = np.searchsorted(modvals_sorted, mvals)
You have indices of the next elements, but they may point beyond the original
array, so you need one extra NaN at the end to avoid an IndexError. The same
logic applies to the previous elements, which are at next_indices - 1: to avoid
indexing before the first element we must prepend one NaN, too. Note the + 1 that arises because one NaN has been added at the beginning.
In [105]: modvals_sorted_plus = np.r_[np.nan, modvals_sorted, np.nan]
In [106]: nexts = modvals_sorted_plus[next_indices + 1]
In [107]: prevs = modvals_sorted_plus[(next_indices - 1) + 1]
Now it's trivial. Note that we already have prevs <= mvals <= nexts, so we
don't need to use np.abs. Also, all missing elements are NaN, and comparing with them yields False, which does not affect the .any() aggregation.
In [108]: adjacent = np.c_[prevs, mvals, nexts]; adjacent
Out[108]:
array([[ nan, 100. , 101.001],
[ 102.001, 120. , 121.001],
[ nan, 101. , 101.001],
[ 121.001, 200. , 201.001],
[ 121.001, 201. , 201.001],
[ 421.001, 501. , 501.001],
[ 202.001, 350. , 351.001],
[ 351.001, 420. , 421.001],
[ 502.001, 525. , 526.001],
[ 421.001, 500. , 501.001]])
In [109]: (np.diff(adjacent, axis=1) <= 0.1).any(axis=1)
Out[109]: array([False, False, True, False, True, True, False, False, False, False], dtype=bool)
In [110]: mask = (np.diff(adjacent, axis=1) <= 0.1).any(axis=1)
In [112]: fake_df.ix[mask, 'n']
Out[112]:
2 10.2
4 2.0
5 1.1
Name: n, dtype: float64
Try the following:
# I assume all arrays involved to be or to be converted to numpy arrays
import numpy as np
m = np.array([100,120,101,200,201,501,350,420,525,500])
n = np.array([10.0,11.0,10.2,1.0,2.0,1.1,3.0,1.0,2.0,1.0])
mod = np.array([101.001,121.001,102.001,201.001,202.001,502.001,351.001,421.001,526.001,501.001])
res = []
# for each entry in mod, look in m for "close" values
for i in range(len(mod)):
    # for each hit, store entry from n in result list
    res.extend(n[np.fabs(mod[i] - m) <= 0.1])
# cast result to numpy array
res = np.array(res)
print res
The output is
[ 10.2 2. 1.1]
I'll be making use of numpy (imported as np), which pandas uses under the hood. np.isclose returns a boolean indexer: for each value of fake_df["mod"], there is a True or False indicating whether the value m is within atol of it.
>>> for i, m in fake_df["m"].iteritems():
...     indices = np.isclose(m, fake_df["mod"], atol=0.1)
...     if any(indices):
...         print fake_df["n"][i]
Using the DataFrame you gave produces the output:
10.2
2.0
1.1
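The same np.isclose test can also be broadcast over all rows at once, avoiding the Python loop (a vectorized sketch of the idea above, using .values and .loc rather than the older .ix):
mask = np.isclose(fake_df['m'].values[:, None],
                  fake_df['mod'].values, atol=0.1).any(axis=1)   # per-'m' hit mask
print(fake_df.loc[mask, 'n'])   # 10.2, 2.0, 1.1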

Fastest way to compare neighboring elements in multi-dimensional numpy array

What is the fastest way to compare neighboring elements in a 3-dimensional array?
Assume I have a numpy array of shape (4, 4, 4). I want to loop in the k-direction and compare elements in pairs. So, compare all neighboring elements and, if they are not equal, assign a value (here 111) to the element at the lower index. Essentially this:
if array[(0, 0, 0)] != array[(0, 0, 1)]:
    array[(0, 0, 0)] = 111
Thus, the comparisons would be:
(0, 0, 0) and (0, 0, 1)
(0, 0, 1) and (0, 0, 2)
(0, 0, 2) and (0, 0, 3)
... for all i and j ...
However, I want to do this for every i and j in the array and writing a standard Python for loop for this on huge arrays with millions of cells is incredibly slow. Is there a more 'standard' numpy way to do this without the explicit for loop?
Maybe there's some trick using the slicing step (e.g. array[:,:,::2], array[:,:,1::2])?
Try np.diff.
import numpy as np
a = np.arange(9).reshape(3, 3)
A = np.array([a, a, a + 1]).T
same_with_neighbor_on_last_axis = np.diff(A, axis=-1) == 0
print A
print same_with_neighbor_on_last_axis
A is constructed to have 2 consecutive equal entries along the third axis,
>>> print A
array([[[0, 0, 1],
[3, 3, 4],
[6, 6, 7]],
[[1, 1, 2],
[4, 4, 5],
[7, 7, 8]],
[[2, 2, 3],
[5, 5, 6],
[8, 8, 9]]])
The output vector then yields
>>> print same_with_neighbor_on_last_axis
[[[ True False]
[ True False]
[ True False]]
[[ True False]
[ True False]
[ True False]]
[[ True False]
[ True False]
[ True False]]]
Using the axis keyword, you can choose whichever axis you need to do this operation on. If you need it along all of them, you can loop over the axes. Under the hood, np.diff does little more than the following:
np.diff(A, axis=-1) == A[..., 1:] - A[..., :-1]
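To also perform the assignment from the question (write 111 into the lower-indexed cell of every unequal neighbouring pair along the last axis), a short sketch that builds on the same diff, applied here to the example array A:
neq = np.diff(A, axis=-1) != 0   # True where A[..., k] != A[..., k+1]
A[..., :-1][neq] = 111           # A[..., :-1] is a view, so this writes into A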