Groupby pandas calculate percentage

I have the following groupby object after running:
grouped_mask=L2014_2.groupby(['state'])
grouped_mask.mask.value_counts()
state mask
AL False 105931
True 77
AR False 67788
True 1774
AZ False 90068
True 151
CA False 586184
True 4
CO False 75188
True 14360
CT False 78270
True 1
Now I need to calculate the percentage of True values in each state. Is there a way to do this?

You can also set the normalize parameter to obtain the relative frequencies:
grouped_mask.mask.value_counts(normalize=True)
Just multiply by 100 to get the percentages.
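For instance, a minimal sketch building on the same grouped object from the question:
pct = grouped_mask['mask'].value_counts(normalize=True) * 100
pct.round(2)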

You can groupby on the first level and then apply a lambda that divides the True/False counts by the group sum:
In [20]:
df.groupby(level=0).apply(lambda x: x/x.sum() * 100)
Out[20]:
Count
state mask
AL False 99.927364
True 0.072636
AR False 97.449757
True 2.550243
AZ False 99.832629
True 0.167371
CA False 99.999318
True 0.000682
CO False 83.963908
True 16.036092
CT False 99.998722
True 0.001278
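For reference, df here is the counts from the question wrapped in a DataFrame; a sketch of how it could have been built (the Count column name is taken from the output above):
df = grouped_mask['mask'].value_counts().to_frame('Count')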
To filter the above to get just the False labels, you can use advanced indexing with slices:
In [33]:
gp = df.groupby(level=0).apply(lambda x: x/x.sum() * 100)
gp.loc(axis=0)[slice(None),False]
Out[33]:
Count
state mask
AL False 99.927364
AR False 97.449757
AZ False 99.832629
CA False 99.999318
CO False 83.963908
CT False 99.998722

gp=grouped_mask.mask.value_counts().groupby(level=0).apply(lambda x: 100*x/float(x.sum()))
state mask
AL False 94.37
True 5.63
AR False 73.85
True 26.15
AZ False 91.88
True 8.12
CA False 99.57
True 0.43
CO False 64.66
True 35.34
gp.sort_index(level=0)
gp.loc(axis=0)[slice(None),False]
KeyError: 'MultiIndex Slicing requires the index to be fully lexsorted tuple len (2), lexsort depth (1)'
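The error means the MultiIndex is only lexsorted to depth 1 while this slicing needs both levels sorted; note also that sort_index returns a new object rather than sorting in place. A sketch of two fixes (the inner level name 'mask' is assumed from the output above):
gp = gp.sort_index()  # sort all levels and assign the result back
gp.loc(axis=0)[slice(None), False]
# or take a cross-section on the inner level, which avoids the lexsort requirement:
gp.xs(False, level='mask')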


Consolidating columns by the number before the decimal point in the column name

I have the following dataframe (three example columns below):
import pandas as pd
array = {'25.2': [False, True, False], '25.4': [False, False, True], '27.78': [True, False, True]}
df = pd.DataFrame(array)
25.2 25.4 27.78
0 False False True
1 True False False
2 False True True
I want to create a new dataframe with consolidated column names, i.e. combine 25.2 and 25.4 into a new 25 column. If any of the values in the separate columns is True, then the value in the new column is True.
Expected output:
25 27
0 False True
1 True False
2 True True
Any ideas?
Use rename() + groupby() + sum():
df = (df.rename(columns=lambda x: x.split('.')[0])
        .groupby(axis=1, level=0).sum().astype(bool))
Or in two steps:
df.columns = [x.split('.')[0] for x in df]
# or:
# df.columns = df.columns.str.replace(r'\.\d+', '', regex=True)
df = df.groupby(axis=1, level=0).sum().astype(bool)
output:
25 27
0 False True
1 True False
2 True True
Note: if the column labels are numeric (floats) rather than strings, you can use np.floor() (as in the next answer) instead of split().
Another way:
>>> df.T.groupby(np.floor(df.columns.astype(float))).sum().astype(bool).T
25.0 27.0
0 False True
1 True False
2 True True

Pandas True False Matching

For a table with a boolean column col_1, I would like to generate the 'desired_output' column. One way to state the rule:
All the True values from col_1 are carried straight across to desired_output.
In desired_output, place a True value above any existing True value.
Code I have tried:
df['desired_output']=df.col_1.apply(lambda x: True if x.shift()==True else False)
You can OR the original values with the shifted values using the bitwise | operator and Series.shift:
d = {"col1":[False,True,True,True,False,True,False,False,True,False,False,False]}
df = pd.DataFrame(d)
df['new'] = df.col1 | df.col1.shift(-1)
print (df)
col1 new
0 False True
1 True True
2 True True
3 True True
4 False True
5 True True
6 False False
7 False True
8 True True
9 False False
10 False False
11 False False
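A small variation: plain shift introduces a NaN at the end and upcasts the column, so to stay in boolean dtype you can use shift's fill_value parameter (available since pandas 0.24):
df['new'] = df.col1 | df.col1.shift(-1, fill_value=False)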
Try this:
df['desired_output'] = df['col_1']
df.loc[1:, 'desired_output'] = df.col_1[1:].values | df.col_1[:-1].values
print(df)
In case the values are saved as strings ('True' / 'False'):
Input:
col_1
0 True
1 True
2 False
3 True
4 True
5 False
6 False
7 True
8 False
Code:
df['desired'] = df['col_1']
for i, e in enumerate(df['col_1']):
    if e == 'True':
        # a 'True' in row i sets 'desired' in the row above
        df.at[i-1, 'desired'] = df.at[i, 'col_1']
# row 0 being 'True' writes to a new label -1 appended at the end; drop it again
df = df[:(len(df)-1)]
df
Output:
col_1 desired
0 True True
1 True True
2 False True
3 True True
4 True True
5 False False
6 False True
7 True True
8 False False
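The loop can also be avoided by converting the strings to booleans and reusing the shift trick from the first answer; a sketch, assuming the values are exactly the strings 'True'/'False':
b = df['col_1'].eq('True')
df['desired'] = b | b.shift(-1, fill_value=False)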

How to vectorize in Pandas when values depend on prior values

I'd like to use Pandas to implement a function that keeps a running balance, but I'm not sure it can be vectorized for speed.
In short, the problem I'm trying to solve is to keep track of consumption, generation, and the "bank" of over-generation.
"consumption" means how much is used in a given time period.
"generation" is how much is generated.
When generation is greater than consumption, the homeowner can "bank" the extra generation, to be applied in subsequent time periods; they can draw on it in a later month when consumption exceeds generation.
This will be for many entities, hence the "id" field. The time sequence is defined by "Order".
Very basic example:
Month 1 generates 13, consumes 8 -> therefore banks 5.
Month 2 generates 8, consumes 10 -> therefore uses 2 from the bank, and still has 3 left over.
Month 3 generates 7, consumes 20 -> exhausts the remaining 3 from the bank, and has nothing left banked.
Code
import numpy as np
import pandas as pd
id = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2]
order = [1,2,3,4,5,6,7,8,9,18,11,12,13,14,15,1,2,3,4,5,6,7,8,9,10,11]
consume = [10, 17, 20, 11, 17, 19, 20, 10, 10, 19, 14, 12, 10, 14, 13, 19, 12, 17, 12, 18, 15, 14, 15, 20, 16, 15]
generate = [20, 16, 17, 21, 9, 13, 10, 16, 12, 10, 9, 9, 15, 13, 100, 15, 18, 16, 10, 16, 12, 12, 13, 20, 10, 15]
df = pd.DataFrame(list(zip(id, order, consume, generate)),
                  columns=['id', 'Order', 'Consume', 'Generate'])
begin_bal = [0,10,9,6,16,8,2,0,6,8,0,0,0,5,4,0,0,6,5,3,1,0,0,0,0,0]
end_bal = [10,9,6,16,8,2,0,6,8,0,0,0,5,4,91,0,6,5,3,1,0,0,0,0,0,0]
withdraw = [0,1,3,0,8,6,2,0,0,8,0,0,0,1,4,0,0,1,2,2,1,0,0,0,0,0]
df_solution = pd.DataFrame(list(zip(id, order, consume, generate, begin_bal, end_bal, withdraw)),
                           columns=['id', 'Order', 'Consume', 'Generate', 'begin_bal', 'end_bal', 'Withdraw'])
def bank(df):
    # deposit all excess when generation exceeds consumption
    deposit = (df['Generate'] > df['Consume']) * (df['Generate'] - df['Consume'])
    df['end_bal'] = 0
    # beginning balance = prior period ending balance
    df = df.sort_values(by=['id', 'Order'])
    df['begin_bal'] = df['end_bal'].shift(periods=1)
    df.loc[df['Order'] == 1, 'begin_bal'] = 0  # set first month beginning balance of each customer to 0
    # calculate withdrawal
    df['Withdraw'] = 0
    ok_to_withdraw = df['Consume'] > df['Generate']
    df.loc[ok_to_withdraw, 'Withdraw'] = np.minimum(df.loc[ok_to_withdraw, 'begin_bal'],
                                                    df.loc[ok_to_withdraw, 'Consume'] -
                                                    df.loc[ok_to_withdraw, 'Generate'] -
                                                    deposit[ok_to_withdraw])
    # ending balance = beginning balance + deposit - withdraw
    df['end_bal'] = df['begin_bal'] + deposit - df['Withdraw']
    return df
df = bank(df)
df.head()
id Order Consume Generate end_bal begin_bal Withdraw
0 1 1 10 20 10.0 0.0 0.0
1 1 2 17 16 0.0 0.0 0.0
2 1 3 20 17 0.0 0.0 0.0
3 1 4 11 21 10.0 0.0 0.0
4 1 5 17 9 0.0 0.0 0.0
df_solution.head()
id Order Consume Generate begin_bal end_bal Withdraw
0 1 1 10 20 0 10 0
1 1 2 17 16 10 9 1
2 1 3 20 17 9 6 3
3 1 4 11 21 6 16 0
4 1 5 17 9 16 8 9
I tried to implement this with various iterations of cumsum and shift... but the fact remains that the value of each row seems like it needs to be recalculated based on the prior row, and I'm not sure this is possible to vectorize.
Code to generate some test datasets:
import random

def generate_testdata():
    random.seed(42*42)
    np.random.seed(42*42)
    numids = 10
    numorders = 12
    id = []
    order = []
    for i in range(numids):
        id = id + [i]*numorders
        order = order + list(range(1, numorders+1))
    consume = np.random.uniform(low=10, high=40, size=numids*numorders)
    generate = np.random.uniform(low=10, high=40, size=numids*numorders)
    df = pd.DataFrame(list(zip(id, order, consume, generate)),
                      columns=['id', 'Order', 'Consume', 'Generate'])
    return df
Here is a numpy-ish approach, mostly because I'm not that familiar with pandas:
The idea is to first compute the unconstrained ("free") cumulative sum and then subtract the running minimum wherever it is negative.
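A toy, single-group sketch of that idea (made-up deltas, assuming a single id):
import numpy as np
delta = np.array([5, -7, 4, -2])                  # generate - consume per period
acc = delta.cumsum()                              # unconstrained balance: [ 5 -2  2  0]
acc -= np.minimum(0, np.minimum.accumulate(acc))  # clamped at zero:       [ 5  0  4  2]
The extra machinery in the full function below exists only to keep the cumulative minimum of one id from leaking into the next.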
import numpy as np
import pandas as pd
id = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,2,2,2]
order = [1,2,3,4,5,6,7,8,9,18,11,12,13,14,15,1,2,3,4,5,6,7,8,9,10,11]
consume = [10, 17, 20, 11, 17, 19, 20, 10, 10, 19, 14, 12, 10, 14, 13, 19, 12, 17, 12, 18, 15, 14, 15, 20, 16, 15]
generate = [20, 16, 17, 21, 9, 13, 10, 16, 12, 10, 9, 9, 15, 13, 8, 15, 18, 16, 10, 16, 12, 12, 13, 20, 10, 15]
df = pd.DataFrame(list(zip(id, order, consume, generate)),
                  columns=['id', 'Order', 'Consume', 'Generate'])
begin_bal = [0,10,9,6,16,8,2,0,6,8,0,0,0,5,4,0,0,6,5,3,1,0,0,0,0,0]
end_bal = [10,9,6,16,8,2,0,6,8,0,0,0,5,4,0,0,6,5,3,1,0,0,0,0,0,0]
withdraw = [0,1,3,0,9,6,2,0,0,8,0,0,0,1,4,0,0,1,2,2,1,0,0,0,0,0]
df_solution = pd.DataFrame(list(zip(id, order, consume, generate, begin_bal, end_bal, withdraw)),
                           columns=['id', 'Order', 'Consume', 'Generate', 'begin_bal', 'end_bal', 'Withdraw'])
def f(df):
    # find block boundaries
    ids = df["id"].values
    bnds, = np.where(np.diff(ids, prepend=ids[0]-1, append=ids[-1]+1))
    # find raw balance change
    delta = (df["Generate"] - df["Consume"]).values
    # find offset, so cumulative min does not interfere across ids
    safe_total = (np.minimum(delta.min(), 0) - 1) * np.diff(bnds[:-1])
    # must apply offset just before group switch, so it aligns the first
    # begin_bal, not end_bal, of the next group
    # also keep a copy of original values at switches
    delta_orig = delta[bnds[1:-1]-1]
    delta[bnds[1:-1]-1] += safe_total - np.add.reduceat(delta, bnds[:-2])
    # form free cumsum
    acc = delta.cumsum()
    # correct
    acc -= np.minimum(0, np.minimum.accumulate(acc))
    # write solution back to df
    shft = np.empty_like(acc)
    shft[1:] = acc[:-1]
    shft[0] = 0
    # reinstate last end_bal of each group
    acc[bnds[1:-1]-1] = np.maximum(0, shft[bnds[1:-1]-1] + delta_orig)
    df["begin_bal"] = shft
    df["end_bal"] = acc
    df["Withdraw"] = np.maximum(0, df["begin_bal"] - df["end_bal"])
Test:
f(df)
df == df_solution
Prints:
id Order Consume Generate begin_bal end_bal Withdraw
0 True True True True True True True
1 True True True True True True True
2 True True True True True True True
3 True True True True True True True
4 True True True True True True False
5 True True True True True True True
6 True True True True True True True
7 True True True True True True True
8 True True True True True True True
9 True True True True True True True
10 True True True True True True True
11 True True True True True True True
12 True True True True True True True
13 True True True True True True True
14 True True True True True True True
15 True True True True True True True
16 True True True True True True True
17 True True True True True True True
18 True True True True True True True
19 True True True True True True True
20 True True True True True True True
21 True True True True True True True
22 True True True True True True True
23 True True True True True True True
24 True True True True True True True
25 True True True True True True True
There is one False but that appears to be a typo in the expected output provided.
Using @PaulPanzer's logic, here is a pandas version.
def CalcEB(x):
    delta = x['Generate'] - x['Consume']
    return delta.cumsum() - delta.cumsum().cummin().clip(-np.inf, 0)

df['end_bal'] = df.groupby('id', as_index=False).apply(CalcEB).values
df['begin_bal'] = df.groupby('id')['end_bal'].shift().fillna(0)
df['Withdraw'] = (df['begin_bal'] - df['end_bal']).clip(0, np.inf)
df_pandas = df.copy()
#Note the typo mentioned by Paul Panzer
df_pandas.reindex(df_solution.columns, axis=1) == df_solution
Output (check dataframes)
id Order Consume Generate begin_bal end_bal Withdraw
0 True True True True True True True
1 True True True True True True True
2 True True True True True True True
3 True True True True True True True
4 True True True True True True False
5 True True True True True True True
6 True True True True True True True
7 True True True True True True True
8 True True True True True True True
9 True True True True True True True
10 True True True True True True True
11 True True True True True True True
12 True True True True True True True
13 True True True True True True True
14 True True True True True True True
15 True True True True True True True
16 True True True True True True True
17 True True True True True True True
18 True True True True True True True
19 True True True True True True True
20 True True True True True True True
21 True True True True True True True
22 True True True True True True True
23 True True True True True True True
24 True True True True True True True
25 True True True True True True True
I am not sure I understood your question fully, but I will give it a go.
I will re-phrase what I understood...
1. Source data
There is source data, which is a DataFrame with four columns:
id - ID number of an entity
order - indicates the sequence of periods
consume - how much was consumed during the period
generate - how much was generated during the period
2. Calculations
For each id, we want to calculate:
diff which is the difference between generate and consume for each period
opening balance which is the closing balance from the previous order
closing balance which is the cumulative sum of the diff
3. Code
I will try to solve this with groupby, cumsum and shift.
# Make sure the df is sorted
df = df.sort_values(['id', 'Order'])
df['diff'] = df['Generate'] - df['Consume']
df['closing_balance'] = df.groupby('id')['diff'].cumsum()
# Opening balance equals the closing balance from the previous period
df['opening_balance'] = df.groupby('id')['closing_balance'].shift(1)
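Note that shift(1) leaves NaN in each id's first row; if a zero opening balance is wanted there, a fill keeps the arithmetic clean (a small addition, not in the original answer):
df['opening_balance'] = df['opening_balance'].fillna(0)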
If I misunderstood something, feel free to correct me and I will try to come up with a better answer.
In particular, I wasn't sure how to handle the closing_balance going negative. Should it show a negative balance? Should it cancel out the "debts"?

Drawing bar charts from boolean fields

I have three boolean fields, whose True counts are shown below. I want to draw a bar chart that has:
Offline_RetentionByTime with 37528
Offline_RetentionByCount with 29640
Offline_RetentionByCapacity with 3362
How can I achieve that?
I think you can apply value_counts to create a new df1 and then use DataFrame.plot.bar:
df = pd.DataFrame({'Offline_RetentionByTime':[True,False,True, False],
'Offline_RetentionByCount':[True,False,False,True],
'Offline_RetentionByCapacity':[True,True,True, False]})
print (df)
Offline_RetentionByCapacity Offline_RetentionByCount Offline_RetentionByTime
0 True True True
1 True False False
2 True False True
3 False True False
df1 = df.apply(pd.value_counts)
print (df1)
Offline_RetentionByCapacity Offline_RetentionByCount \
True 3 2
False 1 2
Offline_RetentionByTime
True 2
False 2
df1.plot.bar()
If you need to plot only the True values, select them with loc:
df1.loc[True].plot.bar()
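Alternatively, since the columns are boolean, True sums as 1, so the True counts can be plotted without value_counts at all; a minimal sketch:
df.sum().plot.bar()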

Use a Series to select rows from a df in pandas

Continued from this thread: get subsection of df based on multiple conditions
I would like to pull particular rows based on multiple conditions, which are stored in a Series object.
columns = ['is_net', 'is_pct', 'is_mean', 'is_wgted', 'is_sum']
index = ['a','b','c','d']
data = [['True','True','False','False', 'False'],
['True','True','True','False', 'False'],
['True','True','False','False', 'True'],
['True','True','False','True', 'False']]
df = pd.DataFrame(columns=columns, index=index, data=data)
df
is_net is_pct is_mean is_wgted is_sum
a True True False False False
b True True True False False
c True True False False True
d True True False True False
My conditions:
d={'is_net': 'True', 'is_sum': 'True'}
s=pd.Series(d)
Expected output:
is_net is_pct is_mean is_wgted is_sum
c True True False False True
My failed attempt:
(df == s).all(axis=1)
a False
b False
c False
d False
dtype: bool
Not sure why 'c' is False when the two conditions were met.
Note, I can achieve the desired results like this but I would rather use the Series method.
df[(df['is_net']=='True') & (df['is_sum']=='True')]
As you only have 2 conditions we can sum these and filter the df:
In [55]:
df[(df == s).sum(axis=1) == 2]
Out[55]:
is_net is_pct is_mean is_wgted is_sum
c True True False False True
This works because booleans convert to 1 and 0 for True and False:
In [56]:
(df == s).sum(axis=1)
Out[56]:
a 1
b 1
c 2
d 1
dtype: int64
You could modify your solution a little by adding a subset of your columns:
In [219]: df[(df == s)[['is_net', 'is_sum']].all(axis=1)]
Out[219]:
is_net is_pct is_mean is_wgted is_sum
c True True False False True
or:
In [219]: df[(df == s)[s.index].all(axis=1)]
Out[219]:
is_net is_pct is_mean is_wgted is_sum
c True True False False True
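For context on why the original attempt returned all False: df == s aligns s against df's columns, so the three columns absent from s compare unequal in every row, and .all(axis=1) over all five columns can then never be True. Subsetting with s.index, as above, restricts the check to just the stated conditions.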