A = [[2,2,4,2,2,2],
     [2,6,2,2,2,2],
     [2,2,2,2,8,2]]
I want matrix B to be equal to:
B = [[0,0,4,0,0,0],
     [0,6,0,0,0,0],
     [0,0,0,0,8,0]]
So I want to find the maximum value of each row and replace other values with 0. Is there any way to do this without using for loops?
Thanks in advance for your comments.
Instead of looking at the argmax, you could take the max values for each row directly, then mask the elements which are lower and replace them with zeros:
In place, this would look like:
>>> A[A < A.max(1, keepdims=True)] = 0
>>> A
array([[0, 0, 4, 0, 0, 0],
[0, 6, 0, 0, 0, 0],
[0, 0, 0, 0, 8, 0]])
An out of place alternative is to use np.where:
>>> np.where(A == A.max(1, keepdims=True), A, 0)
array([[0, 0, 4, 0, 0, 0],
[0, 6, 0, 0, 0, 0],
[0, 0, 0, 0, 8, 0]])
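For completeness, the argmax route also works without loops, although, unlike the mask above, it keeps only the first maximum of each row when there are ties (a minimal sketch):
import numpy as np

A = np.array([[2, 2, 4, 2, 2, 2],
              [2, 6, 2, 2, 2, 2],
              [2, 2, 2, 2, 8, 2]])

# Keep only the (first) per-row maximum, zero out everything else.
B = np.zeros_like(A)
rows = np.arange(A.shape[0])
cols = A.argmax(axis=1)
B[rows, cols] = A[rows, cols]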
(In any language) For a research project, I am stuck on how to convert a matrix P of probability values to a matrix A such that A_ij = 1 with probability P_ij and 0 otherwise. I have looked through the documentation of various random number generators, but have been unable to figure out how to do this.
If I understand correctly:
In [11]: p = np.random.uniform(size=(5,5))
In [12]: p
Out[12]:
array([[ 0.45481883, 0.21242567, 0.3124863 , 0.00485797, 0.31970718],
[ 0.91995847, 0.29907277, 0.59154085, 0.85847147, 0.13227595],
[ 0.91914631, 0.5495813 , 0.58648856, 0.08037582, 0.23005148],
[ 0.12464628, 0.70657028, 0.75975869, 0.77632964, 0.24587041],
[ 0.69259133, 0.183515 , 0.65500547, 0.19526148, 0.26975325]])
In [13]: a = (p.round(1)==0.7).astype(np.int8)
In [14]: a
Out[14]:
array([[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[1, 0, 1, 0, 0]], dtype=int8)
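More directly, if P is the given matrix of probabilities, you can draw a uniform random matrix of the same shape and compare it against P, so that each entry ends up 1 with probability P_ij (a minimal sketch, assuming P is a NumPy array):
import numpy as np

P = np.random.uniform(size=(5, 5))   # stand-in for your probability matrix
A = (np.random.uniform(size=P.shape) < P).astype(np.int8)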
What is a generic, efficient algorithm to find the minimal subset of columns in a discrete-valued matrix that makes the rows unique?
For example, consider this matrix (with named columns):
a b c d
2 1 0 0
2 0 0 0
2 1 2 2
1 2 2 2
2 1 1 0
Each row in the matrix is unique. However, if we remove columns a and d we maintain that same property.
I could enumerate all possible combinations of the columns; however, that will quickly become intractable as my matrix grows. Is there a faster, optimal algorithm for doing this?
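For reference, the brute-force enumeration looks roughly like this (a sketch using itertools.combinations; the names are just for illustration). It tries smaller subsets first, so the first valid subset it finds is a minimum, but the number of subsets grows exponentially with the column count:
from itertools import combinations

def unique_rows(M, cols):
    # rows restricted to `cols` are unique iff their "signatures" are all distinct
    return len({tuple(row[c] for c in cols) for row in M}) == len(M)

def brute_force_min_columns(M):
    n = len(M[0])
    for k in range(1, n + 1):                  # smallest subsets first
        for cols in combinations(range(n), k):
            if unique_rows(M, cols):
                return cols                    # first hit is a minimum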
Actually, my original formulation wasn't very good. This is better as a set cover.
import pulp

# Input data
A = [
    [2, 1, 0, 0],
    [2, 0, 0, 0],
    [2, 1, 2, 2],
    [1, 2, 2, 2],
    [2, 1, 1, 0]
]

# Preprocess the data a bit.
# Bikj = 1 if Aij != Akj, 0 otherwise
B = []
for i in range(len(A)):
    Bi = []
    for k in range(len(A)):
        Bik = [int(A[i][j] != A[k][j]) for j in range(len(A[i]))]
        Bi.append(Bik)
    B.append(Bi)

model = pulp.LpProblem('Tim', pulp.LpMinimize)

# Variables turn on and off columns.
x = [pulp.LpVariable('x_%d' % j, cat=pulp.LpBinary) for j in range(len(A[0]))]

# Every pair of rows must differ in at least one selected column.
for i in range(len(A)):
    for k in range(i + 1, len(A)):
        model += sum(B[i][k][j] * x[j] for j in range(len(A[i]))) >= 1

model.setObjective(pulp.lpSum(x))

assert model.solve() == pulp.LpStatusOptimal
print([xi.value() for xi in x])
An observation: if M has unique rows without both columns i and j, then it has unique rows without column i and without column j independently (in other words, adding a column to a matrix with unique rows cannot make the rows not unique). Therefore, you should be able to find the minimum (not just minimal) solution by using a depth first search.
def has_unique_rows(M):
    return len(set([tuple(i) for i in M])) == len(M)

def remove_cols(M, cols):
    ret = []
    for row in M:
        new_row = []
        for i in range(len(row)):
            if i in cols:
                continue
            new_row.append(row[i])
        ret.append(new_row)
    return ret

def minimum_unique_rows(M):
    if not has_unique_rows(M):
        raise ValueError("M must have unique rows")
    cols = list(range(len(M[0])))

    def _cols_to_remove(M, removed_cols=(), max_removed_cols=()):
        for i in set(cols) - set(removed_cols):
            new_removed_cols = removed_cols + (i,)
            new_M = remove_cols(M, new_removed_cols)
            if not has_unique_rows(new_M):
                continue
            if len(new_removed_cols) > len(max_removed_cols):
                max_removed_cols = new_removed_cols
            return _cols_to_remove(M, new_removed_cols, max_removed_cols)
        return max_removed_cols

    removed_cols = _cols_to_remove(M)
    return remove_cols(M, removed_cols), removed_cols
(note that my variable naming is terrible)
Here it is on your matrix:
In [172]: rows = [
.....: [2, 1, 0, 0],
.....: [2, 0, 0, 0],
.....: [2, 1, 2, 2],
.....: [1, 2, 2, 2],
.....: [2, 1, 1, 0]
.....: ]
In [173]: minimum_unique_rows(rows)
Out[173]: ([[1, 0], [0, 0], [1, 2], [2, 2], [1, 1]], (0, 3))
I generated a random matrix (using sympy.randMatrix) which is shown below
⎡0 1 0 1 0 1 1⎤
⎢ ⎥
⎢0 1 1 2 0 0 2⎥
⎢ ⎥
⎢1 0 1 1 1 0 0⎥
⎢ ⎥
⎢1 2 2 1 1 2 2⎥
⎢ ⎥
⎢2 0 0 0 0 1 1⎥
⎢ ⎥
⎢2 0 2 2 1 1 0⎥
⎢ ⎥
⎢2 1 2 1 1 0 1⎥
⎢ ⎥
⎢2 2 1 2 1 0 1⎥
⎢ ⎥
⎣2 2 2 1 1 2 1⎦
(note that sorting the rows of M helps a lot in checking these things by hand)
In [224]: M1 = [[0, 1, 0, 1, 0, 1, 1], [0, 1, 1, 2, 0, 0, 2], [1, 0, 1, 1, 1, 0, 0], [1, 2, 2, 1, 1, 2, 2], [2, 0, 0, 0, 0, 1, 1], [2, 0, 2, 2, 1, 1, 0], [2, 1, 2, 1, 1, 0, 1], [2, 2, 1, 2, 1, 0, 1], [2, 2, 2, 1, 1, 2, 1]]
In [225]: minimum_unique_rows(M1)
Out[225]: ([[1, 1, 1], [2, 0, 2], [1, 0, 0], [1, 2, 2], [0, 1, 1], [2, 1, 0], [1, 0, 1], [2, 0, 1], [1, 2, 1]], (0, 1, 2, 4))
Here's a brute-force check that it's the minimum answer (actually there are quite a few minimums).
In [229]: from itertools import combinations
In [230]: print([has_unique_rows(remove_cols(M1, r)) for r in combinations(range(7), 6)])
[False, False, False, False, False, False, False]
In [231]: print([has_unique_rows(remove_cols(M1, r)) for r in combinations(range(7), 5)])
[False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False, False]
In [232]: print([has_unique_rows(remove_cols(M1, r)) for r in combinations(range(7), 4)])
[False, True, False, False, False, False, False, False, False, False, True, False, False, False, False, False, True, False, False, False, False, False, False, False, True, False, False, True, False, False, False, False, False, True, True]
Here is my greedy solution. (Yes, it fails your "optimal" criterion.) Randomly pick a column that can be safely thrown away and throw it away. Keep going until no more such columns remain. I'm sure is_valid could be optimized.
rows = [
    [2, 1, 0, 0],
    [2, 0, 0, 0],
    [2, 1, 2, 2],
    [1, 2, 2, 2],
    [2, 1, 1, 0]
]
col_names = [0, 1, 2, 3]

def is_valid(rows, col_names):
    # it's valid if every row has a distinct "signature"
    signatures = { tuple(row[col] for col in col_names) for row in rows }
    return len(signatures) == len(rows)

import random

def minimal_distinct_columns(rows, col_names):
    col_names = col_names[:]
    random.shuffle(col_names)
    for i, col in enumerate(col_names):
        fewer_col_names = col_names[:i] + col_names[(i+1):]
        if is_valid(rows, fewer_col_names):
            return minimal_distinct_columns(rows, fewer_col_names)
    return col_names
Since it's greedy, it doesn't always get the best answer, but it should be relatively speedy (and simple).
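A quick way to exercise it (a hypothetical driver; the shuffle means the order of the returned columns can vary, but for the matrix above it ends at columns 1 and 2, i.e. b and c):
print(sorted(minimal_distinct_columns(rows, col_names)))   # -> [1, 2]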
Although I'm sure there are better approaches, this fondly reminded me of some Genetic Algorithms stuff I did in the 90s. I wrote up a quick version using R's GA package.
library(GA)

matrix_to_minimize <- matrix(c(2,2,1,1,2,
                               1,0,1,2,1,
                               0,0,2,2,1,
                               0,0,2,2,0), ncol=4)

evaluate <- function(indices) {
  if (all(indices == 0)) {
    return(0)
  }
  selected_cols <- matrix_to_minimize[, as.logical(indices), drop=FALSE]
  are_unique <- nrow(selected_cols) == nrow(unique(selected_cols))
  if (are_unique == FALSE) {
    return(0)
  }
  retval <- (1/sum(as.logical(indices)))
  return(retval)
}

ga_results <- ga("binary", evaluate,
                 nBits=ncol(matrix_to_minimize),
                 popSize=10 * ncol(matrix_to_minimize), # why not
                 maxiter=1000,
                 run=10) # probably want to play with this

print("Best Solution: ")
print(ga_results@solution)
I don't know that it's good or optimal, but I bet it will provide a reasonably good answer in a reasonable amount of time. :)
I am interested in comparing all values from one dataframe column with all values from a second column, and then generating a list or a subset dataframe with the values from a third column that correspond to the hits in the first column. Hopefully this example will explain it better:
For a simplified example, say I generate the following pandas dataframe:
fake_df=pd.DataFrame({'m':[100,120,101,200,201,501,350,420,525,500],
                      'n':[10.0,11.0,10.2,1.0,2.0,1.1,3.0,1.0,2.0,1.0],
                      'mod':[101.001,121.001,102.001,201.001,202.001,502.001,351.001,421.001,526.001,501.001]})
print fake_df
What I am interested in doing is finding all values in column 'm' that are within 0.1 of any value in column 'mod', and returning the values in column 'n' that correspond to those column 'm' hits. So for the above code, the return would be:
10.2, 2.0, 1.1
(since 101, 201, and 501 all have close hits in column 'mod').
I have found ways to compare across the same row, but not like above. Is there a way to do this in pandas without extensive loops?
Thanks!
I don't know of such a method in pandas, but when you extend your scope to include numpy, two options come to mind.
Easy/Expensive Method
If you can live with N**2 memory overhead, you can use numpy broadcasting to find all "adjacent" elements in one step:
In [25]: fake_df=pd.DataFrame({'m':[100,120,101,200,201,501,350,420,525,500],
                               'n':[10.0,11.0,10.2,1.0,2.0,1.1,3.0,1.0,2.0,1.0],
                               'mod':[101.001,121.001,102.001,201.001,202.001,502.001,351.001,421.001,526.001,501.001]})
In [26]: mvals = fake_df['m'].values
In [27]: modvals = fake_df['mod'].values
In [28]: is_close = np.abs(mvals - modvals[:, np.newaxis]) <= 0.1; is_close.astype(int)
Out[28]:
array([[0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0]])
As we only care about the 'm' values that have an adjacent 'mod' value, aggregate over axis=0 (the 'mod' axis):
In [29]: is_close.any(axis=0).astype(int)
Out[29]: array([0, 0, 1, 0, 1, 1, 0, 0, 0, 0])
Or, pulling the corresponding 'n' values directly:
In [30]: fake_df.loc[is_close.any(axis=0), 'n']
Out[30]:
2 10.2
4 2.0
5 1.1
Name: n, dtype: float64
Efficient/Complex Method
To find adjacent elements in less than O(N**2) without any hashing/rounding
tricks, you have to do some sorting:
In [103]: modvals_sorted = np.sort(modvals)
In [104]: next_indices = np.searchsorted(modvals_sorted, mvals)
You have the indices of the next elements, but they may point beyond the original array, so you need one extra NaN at the end to avoid an IndexError. The same logic applies to the previous elements, which are at next_indices - 1: to avoid indexing before the first element we must prepend one NaN, too. Note the + 1 that arises because one NaN has been added at the beginning.
In [105]: modvals_sorted_plus = np.r_[np.nan, modvals_sorted, np.nan]
In [106]: nexts = modvals_sorted_plus[next_indices + 1]
In [107]: prevs = modvals_sorted_plus[(next_indices - 1) + 1]
Now it's trivial. Note that we already have prevs <= mvals <= nexts, so we don't need np.abs. Also, all missing elements are NaN, and comparing with them results in False, which doesn't alter the result of any operation.
In [108]: adjacent = np.c_[prevs, mvals, nexts]; adjacent
Out[108]:
array([[ nan, 100. , 101.001],
[ 102.001, 120. , 121.001],
[ nan, 101. , 101.001],
[ 121.001, 200. , 201.001],
[ 121.001, 201. , 201.001],
[ 421.001, 501. , 501.001],
[ 202.001, 350. , 351.001],
[ 351.001, 420. , 421.001],
[ 502.001, 525. , 526.001],
[ 421.001, 500. , 501.001]])
In [109]: (np.diff(adjacent, axis=1) <= 0.1).any(axis=1)
Out[109]: array([False, False, True, False, True, True, False, False, False, False], dtype=bool)
In [110]: mask = (np.diff(adjacent, axis=1) <= 0.1).any(axis=1)
In [112]: fake_df.loc[mask, 'n']
Out[112]:
2 10.2
4 2.0
5 1.1
Name: n, dtype: float64
Try the following:
# I assume all arrays involved are, or can be converted to, numpy arrays
import numpy as np

m = np.array([100,120,101,200,201,501,350,420,525,500])
n = np.array([10.0,11.0,10.2,1.0,2.0,1.1,3.0,1.0,2.0,1.0])
mod = np.array([101.001,121.001,102.001,201.001,202.001,502.001,351.001,421.001,526.001,501.001])

res = []
# for each entry in mod, look in m for "close" values
for i in range(len(mod)):
    # for each hit, store the corresponding entry from n in the result list
    res.extend(n[np.fabs(mod[i]-m)<=0.1])

# cast result to numpy array
res = np.array(res)
print res
The output is
[ 10.2 2. 1.1]
I'll be making use of numpy (imported as np), which pandas uses under the hood. np.isclose returns a boolean array: for each value of df["mod"], it tells you whether the value m is within atol of it.
>>> for i, m in df["m"].iteritems():
...     indices = np.isclose(m, df["mod"], atol=0.1)
...     if any(indices):
...         print df["n"][i]
Using the DataFrame you gave produces the output:
10.2
2.0
1.1
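The same idea also works without the Python-level loop: broadcast np.isclose over all pairs and collapse with any (a sketch mirroring the broadcasting answer above; `df` here is the question's fake_df, as in the loop above):
import numpy as np

mask = np.isclose(df["mod"].values[:, None], df["m"].values, atol=0.1).any(axis=0)
print(df["n"][mask])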
What is the fastest way to compare neighboring elements in a 3-dimensional array?
Assume I have a numpy array of shape (4, 4, 4). I want to loop in the k-direction and compare elements in pairs. So, compare all neighboring elements and, if they are not equal, assign a value to the lower-index element. Essentially this:
if array[(0, 0, 0)] != array[(0, 0, 1)]:
    array[(0, 0, 0)] = 111
Thus, the comparisons would be:
(0, 0, 0) and (0, 0, 1)
(0, 0, 1) and (0, 0, 2)
(0, 0, 2) and (0, 0, 3)
... for all i and j ...
However, I want to do this for every i and j in the array and writing a standard Python for loop for this on huge arrays with millions of cells is incredibly slow. Is there a more 'standard' numpy way to do this without the explicit for loop?
Maybe there's some trick using the slicing step (e.g. array[:,:,::2], array[:,:,1::2])?
Try np.diff.
import numpy as np
a = np.arange(9).reshape(3, 3)
A = np.array([a, a, a + 1]).T
same_with_neighbor_on_last_axis = np.diff(A, axis=-1) == 0
print A
print same_with_neighbor_on_last_axis
A is constructed to have 2 consecutive equal entries along the third axis,
>>>print A
array([[[0, 0, 1],
[3, 3, 4],
[6, 6, 7]],
[[1, 1, 2],
[4, 4, 5],
[7, 7, 8]],
[[2, 2, 3],
[5, 5, 6],
[8, 8, 9]]])
The output array is then
>>>print same_with_neighbor_on_last_axis
[[[ True False]
[ True False]
[ True False]]
[[ True False]
[ True False]
[ True False]]
[[ True False]
[ True False]
[ True False]]]
Using the axis keyword, you can choose whichever axis you need to do this operation on. If you need all of them, you can use a loop. np.diff does little more than the following:
np.diff(A, axis=-1) == A[..., 1:] - A[..., :-1]
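To actually perform the assignment from the question (write 111 into the lower-index element of every unequal pair), the same mask can be combined with a slice; a minimal sketch on a hypothetical (4, 4, 4) array:
import numpy as np

arr = np.random.randint(0, 3, size=(4, 4, 4))   # hypothetical example data

# True wherever an element differs from its right-hand neighbour along the last axis
differs = np.diff(arr, axis=-1) != 0

# write 111 into the lower-index element of each unequal pair
arr[..., :-1][differs] = 111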