Positions of the maximum in Pandas

I have a pandas dataframe and I want to retrieve the positions (row, column) of the maximum value in the dataframe. How can I do this?

Sample:
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'B': [4,5,4,5,5,4],
    'C': [7,8,9,4,2,3],
    'D': [1,3,5,7,1,0],
    'E': [5,3,6,9,2,4],
})
First, if necessary, keep only the numeric columns with DataFrame.select_dtypes:
df = df.select_dtypes(np.number)
To return the position of the first max value in the DataFrame, use DataFrame.stack with Series.idxmax:
print (df.stack().idxmax())
(2, 'C')
If performance is important, get the indices by comparing against the max value with numpy.where, and take the first match by indexing:
r, c = np.where(df == df.values.max())
print ((df.index[r[0]], df.columns[c[0]]))
(2, 'C')
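Another option (a sketch, not in the original answer) is to take the flat argmax and convert it back to a positional (row, column) pair with numpy.unravel_index:
# flat index of the first maximum, converted back to positional (row, col)
r, c = np.unravel_index(np.argmax(df.to_numpy()), df.shape)
print ((df.index[r], df.columns[c]))
(2, 'C')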
If you need all max values:
s = df.stack()
print (s.index[s == s.max()].tolist())
[(2, 'C'), (3, 'E')]
And with the numpy.where indices from above:
print (list(zip(df.index[r], df.columns[c])))
[(2, 'C'), (3, 'E')]

Combinations & Numpy

I need to rate each combination in order to get the best one.
I completed the first step, but it is not optimized at all.
When the value of RQ or NBPIS or NBSER is big, my code takes much too long.
Do you have an idea how to get the same result much faster?
Thank you very much.
import numpy as np
from itertools import combinations, combinations_with_replacement

#USER SETTINGS
RQ = ['A','B','C','D','E','F','G','H']
NBPIS = 3
NBSER = 3

#CODE
Combi1 = np.array(list(combinations_with_replacement(RQ, NBPIS)))
Combi2 = combinations_with_replacement(Combi1, NBSER)
Combi3 = np.array([])
Compt = 0
First = 0
for X in Combi2:
    Long = 0
    Compt = Compt + 1
    Y = np.array(X)
    for Z in RQ:
        Long = Long + 1
        if Z not in Y:
            break
        elif Long == len(RQ):
            if First == 0:
                Combi3 = Y
                Combi3 = np.expand_dims(Combi3, axis=0)
                First = 1
            else:
                Combi3 = np.append(Combi3, [Y], axis=0)

#RESULTS
print(Combi3)
print(Combi3.shape)
print(Compt)
Assuming your code produces the desired result, the first step to optimizing it is refactoring. This might help others to jump in and help as well.
Let's start making a function of it.
#USER SETTINGS
RQ = ['A','B','C','D','E','F','G','H']
NBPIS = 3
NBSER = 3

def your_code():
    Combi1 = np.array(list(combinations_with_replacement(RQ, NBPIS)))
    Combi2 = combinations_with_replacement(Combi1, NBSER)
    Combi3 = np.array([])
    Compt = 0
    First = 0
    for X in Combi2:
        Long = 0
        Compt = Compt + 1
        Y = np.array(X)
        for Z in RQ:
            Long = Long + 1
            if Z not in Y:
                break
            elif Long == len(RQ):
                if First == 0:
                    Combi3 = Y
                    Combi3 = np.expand_dims(Combi3, axis=0)
                    First = 1
                else:
                    Combi3 = np.append(Combi3, [Y], axis=0)
    shape = Combi3.shape
    size = Compt
    return Combi3, shape, size
Refactoring
Notice that Compt is equal to len(Combi2), so turning Combi2 into a numpy array will help to simplify the code. This also allows the variable Y to be replaced by X. Also, there is no need for Combi1 to be a numpy array, since it is only consumed by combinations_with_replacement.
def your_code_refactored():
    Combi1 = combinations_with_replacement(RQ, NBPIS)
    Combi2 = np.array(list(combinations_with_replacement(Combi1, NBSER)))
    Combi3 = np.array([])
    First = 0
    for X in Combi2:
        Long = 0
        for Z in RQ:
            Long = Long + 1
            if Z not in X:
                break
            elif Long == len(RQ):
                if First == 0:
                    Combi3 = X
                    Combi3 = np.expand_dims(Combi3, axis=0)
                    First = 1
                else:
                    Combi3 = np.append(Combi3, [X], axis=0)
    shape = Combi3.shape
    size = len(Combi2)
    return Combi3, shape, size
The next step is to refactor how Combi3 is created and populated. The variable First is used to expand Combi3's dimensions in the first iteration only, so this logic can be simplified as,
def your_code_refactored():
    Combi1 = combinations_with_replacement(RQ, NBPIS)
    Combi2 = np.array(list(combinations_with_replacement(Combi1, NBSER)))
    Combi3 = np.empty((0, NBPIS, NBSER))
    for X in Combi2:
        Long = 0
        for Z in RQ:
            Long = Long + 1
            if Z not in X:
                break
            elif Long == len(RQ):
                Combi3 = np.append(Combi3, [X], axis=0)
    shape = Combi3.shape
    size = len(Combi2)
    return Combi3, shape, size
It seems Combi3 is populated only with combinations that contain at least one of each element from RQ. This is accomplished by testing whether each element of RQ is in X, which is basically checking whether RQ is a subset of X. So it can be simplified further,
def your_code_refactored():
    Combi1 = combinations_with_replacement(RQ, NBPIS)
    Combi2 = np.array(list(combinations_with_replacement(Combi1, NBSER)))
    Combi3 = np.empty((0, NBPIS, NBSER))
    unique_RQ = set(RQ)
    for X in Combi2:
        if unique_RQ.issubset(X.flatten()):
            Combi3 = np.append(Combi3, [X], axis=0)
    shape = Combi3.shape
    size = len(Combi2)
    return Combi3, shape, size
This looks much simpler. Time to make it faster, maybe :)
Optimizing
One way this code can be optimized is to replace np.append with list.append. From numpy's documentation we see that np.append reallocates a larger and larger array each time it is called. The code might be sped up with list.append, since lists over-allocate memory. So the code can be rewritten with a list comprehension.
def your_code_refactored_and_optimized():
    Combi1 = combinations_with_replacement(RQ, NBPIS)
    Combi2 = np.array(list(combinations_with_replacement(Combi1, NBSER)))
    unique_RQ = set(RQ)
    Combi3 = np.array([X for X in Combi2 if unique_RQ.issubset(X.flatten())])
    shape = Combi3.shape
    size = len(Combi2)
    return Combi3, shape, size
Testing
Now we can time both versions to see whether it runs faster.
import timeit
n = 5
print(timeit.timeit('your_code()', globals=globals(), number=n))
print(timeit.timeit('your_code_refactored_and_optimized()', globals=globals(), number=n))
This isn't much a gain in speed but it's something :)
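It is also worth a quick sanity check that both versions return the same combinations (a small addition, assuming both functions above are defined):
a, _, _ = your_code()
b, _, _ = your_code_refactored_and_optimized()
# expect True if the refactor preserved behavior
print(np.array_equal(a, b))
True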
I have an idea to reduce execution time by removing unnecessary combinations. To simplify, consider the following example with:
RQ=['A','B','C']
NBPIS=3
NBSER=3
Currently, with:
Combi1 = combinations_with_replacement(RQ,NBPIS)
print(list(Combi1))
[('A', 'A', 'A'), ('A', 'A', 'B'), ('A', 'A', 'C'), ('A', 'B', 'B'),
('A', 'B', 'C'), ('A', 'C', 'C'), ('B', 'B', 'B'), ('B', 'B', 'C'),
('B', 'C', 'C'), ('C', 'C', 'C')]
But with:
Combi1 = list(list(combinations(RQ,W)) for W in range(1,NBPIS+1))
print(Combi1)
[[('A',), ('B',), ('C',)], [('A', 'B'), ('A', 'C'), ('B', 'C')],
[('A', 'B', 'C')]]
Problem with:
Combi1 = list(list(combinations(RQ,W)) for W in range(1,NBPIS+1))
Error message:
Combi3 = np.array([X for X in Combi2 if
unique_RQ.issubset(X.flatten())])
TypeError: unhashable type: 'list'
But with:
Combi1 = (combinations(RQ, W) for W in range(1, NBPIS+1))
print(Combi3)
[]
Questions:
For Combi1, instead of:
[[('A',), ('B',), ('C',)], [('A', 'B'), ('A', 'C'), ('B', 'C')], [('A', 'B', 'C')]]
how to get this?
[('A',), ('B',), ('C',), ('A', 'B'), ('A', 'C'), ('B', 'C'), ('A', 'B', 'C')]
For Combi3, is it possible to get an array with rows of different sizes? Instead of:
[[['A' 'A' 'A'] ['A' 'A' 'A'] ['A' 'B' 'C']]...
obtain this?
[[['A'] ['A'] ['A' 'B' 'C']]...

pandas: Calculate the rowwise max of categorical columns

I have a DataFrame containing 2 columns of ordered categorical data (of the same category). I want to construct another column that contains the categorical maximum of the first 2 columns. I set up the following.
import pandas as pd
from pandas.api.types import CategoricalDtype
import numpy as np
cats = CategoricalDtype(categories=['small', 'normal', 'large'], ordered=True)
data = {
    'A': ['normal', 'small', 'normal', 'large', np.nan],
    'B': ['small', 'normal', 'large', np.nan, 'small'],
    'desired max(A,B)': ['normal', 'normal', 'large', 'large', 'small']
}
df = pd.DataFrame(data).astype(cats)
The columns can be compared, although the np.nan items are problematic, as running the following code shows.
df['A'] > df['B']
The manual suggests that max() works on categorical data, so I try to define my new column as follows.
df[['A', 'B']].max(axis=1)
This yields a column of NaN. Why?
The following code constructs the desired column using the comparability of the categorical columns. I still don't know why max() fails here.
dfA = df['A']
dfB = df['B']
conditions = [dfA.isna(), (dfB.isna() | (dfA >= dfB)), True]
cases = [dfB, dfA, dfB]
df['maxAB'] = np.select(conditions, cases)
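An alternative that keeps the categorical dtype (a sketch, not from the original post): work on the integer category codes, where NaN is encoded as -1, and map the row-wise maximum code back through the categories:
# each column's codes: -1 for NaN, otherwise the ordered position of the label
codes = df[['A', 'B']].apply(lambda s: s.cat.codes)
# a real category (code >= 0) always beats NaN (-1) in the row-wise max
df['maxAB'] = pd.Categorical.from_codes(codes.max(axis=1), dtype=cats)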
Columns A and B hold string category labels, so you need to assign an integer value to each of these categories first.
# size string -> integer value mapping
size2int_map = {
    'small': 0,
    'normal': 1,
    'large': 2
}
# integer value -> size string mapping
int2size_map = {
    0: 'small',
    1: 'normal',
    2: 'large'
}
# create columns containing the integer value for each size string
for c in ['A', 'B']:
    df['%s_int' % c] = df[c].map(size2int_map)
# apply the int2size map back to get the string sizes back
print(df[['A_int', 'B_int']].max(axis=1).map(int2size_map))
and you should get
0 normal
1 normal
2 large
3 large
4 small
dtype: object

pandas filter with replace and search

I have such a dataframe.
I want to filter rows and replace values based on the filter, applying two different operations depending on the replacement value.
How can I do this? My current approach does not work.
import pandas as pd
import re
df = pd.DataFrame(data={'a': ['aa', 'bb', 'c banana a dupa', 'dd'],
                        'b': [r'\w', r'\d', r'[ab-c]', r'[^c b]']})
df['filter'] = (df['a'] > 2).replace({True: f"dupa {df['b']}", False: re.search(df['b'], df['a'])})
I think you want numpy.where, because replace is used for replacement by scalars. To compare lengths use Series.str.len, to prepend a value use +, and to search use apply:
import numpy as np

df = pd.DataFrame(data={'a': ['aa', 'bb', 'c banana a dupa', 'dd'],
                        'b': [r'\w', r'\d', r'[ab-c]', r'[^c b]']})
df['filter'] = np.where(df['a'].str.len() > 2,
                        "dupa " + df['b'],
                        df.apply(lambda x: re.search(x['b'], x['a']), axis=1))
print (df)
a b filter
0 aa \w <re.Match object; span=(0, 1), match='a'>
1 bb \d None
2 c banana a dupa [ab-c] dupa [ab-c]
3 dd [^c b] <re.Match object; span=(0, 1), match='d'>
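If the matched text is wanted rather than the Match object (an assumption about the desired output), a small variation extracts group(0) when a match is found:
df['filter'] = np.where(df['a'].str.len() > 2,
                        "dupa " + df['b'],
                        df.apply(lambda x: (m.group(0) if (m := re.search(x['b'], x['a'])) else None),
                                 axis=1))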

New dataframe creation within loop and append of the results to the existing dataframe

I am trying to create conditional subsets of rows and columns from a DataFrame and append them to existing dataframes that match the structure of the subsets. The new subsets of data need to be stored in the smaller dataframes, and the names of these smaller dataframes need to be dynamic. Below is an example.
#Sample Data
import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1,2,3,4,5,6,7], 'b': [4,5,6,4,3,4,6], 'c': [1,2,2,4,2,1,7],
                   'd': [4,4,2,2,3,5,6], 'e': [1,3,3,4,2,1,7], 'f': [1,1,2,2,1,5,6]})

#Function to apply to create the subsets of data - I would need to apply a
#function like this to many combinations of columns
def f1(df, input_col1, input_col2):
    #Subset of rows
    t = df[df[input_col1] >= 3]
    #Subset of columns
    t = t[[input_col1, input_col2]]
    t = t.sort_values([input_col1], ascending=False)
    return t

#I want to create 3 different dataframes t1, t2, and t3, but I would like to
#create them in the loop - not via individual function calls.
#These individual calls are just examples of what I am trying to achieve via the loop:
#t1 = f1(df, 'a', 'b')
#t2 = f1(df, 'c', 'd')
#t3 = f1(df, 'e', 'f')

#These are empty dataframes to which I would like to append the resulting
#subsets of data
column_names = ['col1','col2']
g1 = pd.DataFrame(np.empty(0, dtype=[('col1', 'f8'), ('col2', 'f8')]))
g2 = pd.DataFrame(np.empty(0, dtype=[('col1', 'f8'), ('col2', 'f8')]))
g3 = pd.DataFrame(np.empty(0, dtype=[('col1', 'f8'), ('col2', 'f8')]))

list1 = ['a', 'c', 'e']
list2 = ['b', 'd', 'f']
t = {}
g = {}

#This is what I want in the end - I would like to call the function inside of
#the loop, create new dataframes dynamically and then append them to the
#existing dataframes, but I am getting errors. Is it possible to do?
for c in range(1, 4, 1):
    for i, j in zip(list1, list2):
        t['t'+str(c)] = f1(df, i, j)
        g['g'+str(c)] = g['g'+str(c)].append(t['t'+str(c)], ignore_index=True)
I guess you want to create t1,t2,t3 dynamically.
You can use globals().
g1 = pd.DataFrame(np.empty(0, dtype=[('a', 'f8'), ('b', 'f8')]))
g2 = pd.DataFrame(np.empty(0, dtype=[('c', 'f8'), ('d', 'f8')]))
g3 = pd.DataFrame(np.empty(0, dtype=[('e', 'f8'), ('f', 'f8')]))
list1 = ['a', 'c', 'e']
list2 = ['b', 'd', 'f']
for c in range(1, 4, 1):
    globals()['t' + str(c)] = f1(df, list1[c-1], list2[c-1])
    globals()['g' + str(c)] = globals()['g' + str(c)].append(globals()['t' + str(c)])
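As an aside (a sketch, not from the original answer): a plain dict keyed by name avoids globals(), and pd.concat replaces DataFrame.append, which was removed in pandas 2.0:
g = {}
for c, (i, j) in enumerate(zip(list1, list2), start=1):
    t = f1(df, i, j)
    # concatenate the new subset onto the matching accumulator frame
    g['g%d' % c] = pd.concat([g.get('g%d' % c, pd.DataFrame()), t],
                             ignore_index=True)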

How to assign a column to dataframe as weights for each row and then sample the dataframe according to those weights?

I am trying to implement a weighted random selection in a dataframe. I used the code below to build the dataframe:
import pandas as pd
from numpy import exp
import random
moves = [(1, 2), (1, 3), (1, 4), (2, 1), (2, 3), (2, 4)]
data = {'moves': list(map(lambda i: moves[i] if divmod(i, len(moves))[0] != 1 else moves[divmod(i, len(moves))[1]],
                          [i for i in range(2 * len(moves))])),
        'player': list(map(lambda i: 1 if i >= len(moves) else 2,
                           [i for i in range(2 * len(moves))])),
        'wins': [random.randint(0, 2) for i in range(2 * len(moves))],
        'playout_number': [random.randint(0, 1) for i in range(2 * len(moves))]
        }
frame = pd.DataFrame(data)
and then I created a list and inserted it as the new column 'weight':
total = sum(map(lambda a, b: exp(a/b) if b != 0 else 0, frame['wins'], frame['playout_number']))
weights = list(map(lambda a, b: exp(a/b) / total if b != 0 else 0, frame['wins'], frame['playout_number']))
frame = frame.assign(weight=weights)
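As an aside (a sketch, not from the original post), the same weights can be built vectorized instead of with map and lambdas:
import numpy as np

# same computation: exp(wins/playouts), 0 where playout_number == 0,
# then normalized so the weights sum to 1
ratio = frame['wins'] / frame['playout_number'].where(frame['playout_number'] != 0)
frame['weight'] = np.exp(ratio).fillna(0)
frame['weight'] /= frame['weight'].sum()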
Now I want to select a random row based on each row's weight in the new column.
The problem is that I want to use pandas.DataFrame.sample(weights=weight), but I don't know how. I could do it with numpy.random.choice (via its p parameter), but I'd prefer to keep using pandas library functions.
I appreciate any help in advance.
You can use the parameters n or frac together with weights in sample.
The weights parameter accepts any array-like, so it is possible to pass a list:
df = frame.sample(n=1, weights=weights)
Or column of df (Series):
#select 1 row - n=1
df = frame.sample(n=1, weights=frame.weight)
print (df)
moves player playout_number wins weight
6 (1, 2) 1 1 2 0.258325
#select 20% rows - frac=0.2
df = frame.sample(frac=0.2, weights=frame.weight)
print (df)
moves player playout_number wins weight
5 (2, 4) 2 1 2 0.221747
4 (2, 3) 2 1 1 0.081576