Pandas Equivalent of Excel COUNTIFS

I've read through some previous questions and am having trouble implementing the solutions. Here is my table:
Value  Bool
abc    TRUE
abc    TRUE
bca    TRUE
bca    FALSE
asd    FALSE
asd    FALSE
I want this:
Value  Bool   Count
abc    TRUE   2
abc    TRUE   2
bca    TRUE   1
bca    FALSE  1
asd    FALSE  0
asd    FALSE  0
For each group of terms in Value, I want to count the number of occurrences of TRUE, which is a boolean in my df.
In Excel you can do this with COUNTIFS. Can someone please show me the way in pandas?

Try with groupby transform:
df['Count'] = df.groupby('Value')['Bool'].transform('sum')
print(df)
  Value   Bool  Count
0   abc   True    2.0
1   abc   True    2.0
2   bca   True    1.0
3   bca  False    1.0
4   asd  False    0.0
5   asd  False    0.0
Or:
df['Count'] = df.groupby('Value')['Bool'].transform(lambda x: x.sum())
print(df)
  Value   Bool  Count
0   abc   True      2
1   abc   True      2
2   bca   True      1
3   bca  False      1
4   asd  False      0
5   asd  False      0
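Since Bool is a boolean column, transform('sum') simply counts the True values in each group. If your pandas version returns a float Count (as in the first output above), a minimal sketch of forcing integers, assuming the same column names:
import pandas as pd

df = pd.DataFrame({'Value': ['abc', 'abc', 'bca', 'bca', 'asd', 'asd'],
                   'Bool': [True, True, True, False, False, False]})

# Sum the booleans per group (each True counts as 1), then cast to int.
df['Count'] = df.groupby('Value')['Bool'].transform('sum').astype(int)
print(df)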

Pandas easy API to find out all inf or nan cells?

I have searched Stack Overflow about this, but all the answers are too complex.
I want to output the row and column info for all cells that are inf or NaN.
You can replace np.inf with missing values, test them with DataFrame.isna, then test for at least one True with DataFrame.any and pass the masks to DataFrame.loc to get a sub-DataFrame:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': list('abcdef'),
    'B': [4, 5, 4, 5, 5, np.inf],
    'C': [7, np.nan, 9, 4, 2, 3],
    'D': [1, 3, 5, 7, 1, 0],
    'E': [np.inf, 3, 6, 9, 2, np.nan],
    'F': list('aaabbb')
})
print(df)
   A    B    C  D    E  F
0  a  4.0  7.0  1  inf  a
1  b  5.0  NaN  3  3.0  a
2  c  4.0  9.0  5  6.0  a
3  d  5.0  4.0  7  9.0  b
4  e  5.0  2.0  1  2.0  b
5  f  inf  3.0  0  NaN  b
m = df.replace(np.inf, np.nan).isna()
print(m)
       A      B      C      D      E      F
0  False  False  False  False   True  False
1  False  False   True  False  False  False
2  False  False  False  False  False  False
3  False  False  False  False  False  False
4  False  False  False  False  False  False
5  False   True  False  False   True  False
df = df.loc[m.any(axis=1), m.any()]
print(df)
     B    C    E
0  4.0  7.0  inf
1  5.0  NaN  3.0
5  inf  3.0  NaN
Or, if you need the index and column names in a DataFrame, use DataFrame.stack with Index.to_frame:
s = df.replace(np.inf, np.nan).stack(dropna=False)
df1 = s[s.isna()].index.to_frame(index=False)
print(df1)
   0  1
0  0  E
1  1  C
2  5  B
3  5  E
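If you want readable names on that frame, you can rename the two columns afterwards; a minimal sketch following the answer's approach, where the names row and column are hypothetical choices for illustration:
import numpy as np
import pandas as pd

df = pd.DataFrame({'B': [4, np.inf], 'C': [7, np.nan]})

# Keep NaN cells when stacking, then turn the MultiIndex into a frame.
s = df.replace(np.inf, np.nan).stack(dropna=False)
df1 = s[s.isna()].index.to_frame(index=False)
df1.columns = ['row', 'column']  # hypothetical names, purely for readability
print(df1)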

How to get a count of the non-duplicates in a column

Here is my code to get the duplicates; how do I negate the logic below to count the non-duplicates?
df.duplicated(subset='col', keep='last').sum()
len(df['col'])-len(df['col'].drop_duplicates())
I think you need DataFrame.duplicated with keep=False to mark all duplicates, then invert the mask and sum it to count the Trues:
import pandas as pd

df = pd.DataFrame({'col': [1, 2, 2, 3, 3, 3, 4, 5, 5]})
print(df.duplicated(subset='col', keep=False))
0    False
1     True
2     True
3     True
4     True
5     True
6    False
7     True
8     True
dtype: bool
print(~df.duplicated(subset='col', keep=False))
0     True
1    False
2    False
3    False
4    False
5    False
6     True
7    False
8    False
dtype: bool
print((~df.duplicated(subset='col', keep=False)).sum())
2
Another solution uses Series.drop_duplicates with keep=False and the length of the resulting Series:
print(df['col'].drop_duplicates(keep=False))
0    1
6    4
Name: col, dtype: int64
print(len(df['col'].drop_duplicates(keep=False)))
2
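For comparison, Series.value_counts offers an equivalent route: count how many values occur exactly once. A minimal sketch with the same data:
import pandas as pd

df = pd.DataFrame({'col': [1, 2, 2, 3, 3, 3, 4, 5, 5]})

# Values whose frequency is exactly 1 are the non-duplicates.
print((df['col'].value_counts() == 1).sum())  # 2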

Converting boolean to zero-or-one, for all elements in an array

I have the following dataset with boolean columns:
         date     hr  energy
0    5-Feb-18  False   False
1   29-Jan-18  False   False
2    6-Dec-17   True   False
3   16-Nov-17  False   False
4   14-Nov-17   True    True
5   25-Oct-17  False   False
6   24-Oct-17  False   False
7    5-Oct-17  False   False
8    3-Oct-17  False   False
9   26-Sep-17  False   False
10  13-Sep-17   True   False
11   7-Sep-17  False   False
12  31-Aug-17  False   False
I want to multiply each boolean column by 1 to turn it into a dummy variable.
I tried:
df = df.iloc[:, 1:]
for col in df:
    col = col * 1
but the columns remain boolean. Why?
Iterating over a DataFrame yields the column labels, so rebinding the loop variable col never writes anything back to the DataFrame. Just use:
df.iloc[:, 1:] = df.iloc[:, 1:].astype(int)
df
Out[477]:
         date  hr  energy
0    5-Feb-18   0       0
1   29-Jan-18   0       0
2    6-Dec-17   1       0
3   16-Nov-17   0       0
4   14-Nov-17   1       1
5   25-Oct-17   0       0
6   24-Oct-17   0       0
7    5-Oct-17   0       0
8    3-Oct-17   0       0
9   26-Sep-17   0       0
10  13-Sep-17   1       0
11   7-Sep-17   0       0
12  31-Aug-17   0       0
For future cases beyond True or False, if you want to convert categorical values into numerical ones, you can always use the replace function:
df.iloc[:, 1:] = df.iloc[:, 1:].replace({True: 1, False: 0})
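If you do want the loop from the question to work, assign back by column label; a minimal sketch, assuming the column layout from the question:
import pandas as pd

df = pd.DataFrame({'date': ['5-Feb-18', '6-Dec-17'],
                   'hr': [False, True],
                   'energy': [False, False]})

# Iterating over a DataFrame yields labels, so write back explicitly.
for col in df.columns[1:]:
    df[col] = df[col] * 1  # multiplying a boolean Series by 1 gives ints

print(df)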

Count How Many Columns in Dataframe before NaN

I want to count how many columns of data there are (in a pd.DataFrame) before the first NaN in each row. My data:
df
    0  1  2  3  4    5    6    7    8    9   10   11   12   13
Id
A   1  1  2  3  3  NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN  NaN
B   6  6  7  7  8    9   10  NaN  NaN  NaN  NaN  NaN  NaN  NaN
C   1  2  3  3  4    5    6    6    7    7    8    9   10  NaN
My desired output:
df_result
    count
Id
A       5
B       7
C      13
Thank you in advance for the answer.
Use the following (note the sample data here adds values after the first NaN to show that only the columns before the first NaN are counted):
print(df)
   0  1  2  3  4    5     6    7    8    9   10   11    12    13
A  1  1  2  3  3  NaN   NaN  NaN  NaN  NaN  NaN  NaN   NaN  54.0
B  6  6  7  7  8  9.0  10.0  NaN  NaN  NaN  NaN  NaN   5.0   NaN
C  1  2  3  3  4  5.0   6.0  6.0  7.0  7.0  8.0  9.0  10.0   NaN
df = df.isnull().cumsum(axis=1).eq(0).sum(axis=1)
print(df)
A     5
B     7
C    13
dtype: int64
Detail:
First check for NaNs:
print(df.isnull())
       0      1      2      3      4      5      6      7      8      9  \
A  False  False  False  False  False   True   True   True   True   True
B  False  False  False  False  False  False  False   True   True   True
C  False  False  False  False  False  False  False  False  False  False
      10     11     12     13
A   True   True   True  False
B   True   True  False   True
C  False  False  False   True
Get the cumulative sum - True is processed as 1, False as 0:
print(df.isnull().cumsum(axis=1))
   0  1  2  3  4  5  6  7  8  9  10  11  12  13
A  0  0  0  0  0  1  2  3  4  5   6   7   8   8
B  0  0  0  0  0  0  0  1  2  3   4   5   5   6
C  0  0  0  0  0  0  0  0  0  0   0   0   0   1
Compare with 0:
print(df.isnull().cumsum(axis=1).eq(0))
      0     1     2     3     4      5      6      7      8      9     10  \
A  True  True  True  True  True  False  False  False  False  False  False
B  True  True  True  True  True   True   True  False  False  False  False
C  True  True  True  True  True   True   True   True   True   True   True
      11     12     13
A  False  False  False
B  False  False  False
C   True   True  False
Sum the boolean mask - each True counts as 1:
print(df.isnull().cumsum(axis=1).eq(0).sum(axis=1))
A     5
B     7
C    13
dtype: int64
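To match the desired df_result layout exactly, wrap the per-row counts in a DataFrame with a count column; a minimal sketch, with the frame and column names taken from the question:
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 1, 2, np.nan], [6, 6, 7, 7]],
                  index=pd.Index(['A', 'B'], name='Id'))

# Count columns before the first NaN, then name the result column.
df_result = df.isnull().cumsum(axis=1).eq(0).sum(axis=1).to_frame('count')
print(df_result)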

Returning the same initial input when filtering a dataframe's values

I have the following dataframe, which I obtained with pandas' read_html function.
A        1.48        2.64    1.02         2.46   2.73
B       658.4        14.33    7.41        15.35   8.59
C        3.76         2.07    4.61         2.26   2.05
D   513854.86         5.70    0.00         5.35  30.16
I would like to remove the rows that contain a value over 150, so I did df1 = df[df > 150]; however, it returns the same table.
Then I thought to include the decimals in the route, route = pd.read_html(https//route, decimal='.'), but it keeps returning the same initial dataframe with no filters.
This would be my desired output:
A        1.48        2.64    1.02         2.46   2.73
C        3.76         2.07    4.61         2.26   2.05
Need:
print(df)
   0          1      2     3      4      5
0  A       1.48   2.64  1.02   2.46   2.73
1  B     658.40  14.33  7.41  15.35   8.59
2  C       3.76   2.07  4.61   2.26   2.05
3  D  513854.86   5.70  0.00   5.35  30.16
df1 = df[~(df.iloc[:, 1:] > 150).any(axis=1)]
print(df1)
   0     1     2     3     4     5
0  A  1.48  2.64  1.02  2.46  2.73
2  C  3.76  2.07  4.61  2.26  2.05
Or:
df1 = df[(df.iloc[:, 1:] <= 150).all(axis=1)]
print(df1)
   0     1     2     3     4     5
0  A  1.48  2.64  1.02  2.46  2.73
2  C  3.76  2.07  4.61  2.26  2.05
Explanation:
First select all columns except the first with iloc:
print(df.iloc[:, 1:])
           1      2     3      4      5
0       1.48   2.64  1.02   2.46   2.73
1     658.40  14.33  7.41  15.35   8.59
2       3.76   2.07  4.61   2.26   2.05
3  513854.86   5.70  0.00   5.35  30.16
Then compare to get a boolean DataFrame:
print(df.iloc[:, 1:] > 150)
       1      2      3      4      5
0  False  False  False  False  False
1   True  False  False  False  False
2  False  False  False  False  False
3   True  False  False  False  False
print(df.iloc[:, 1:] <= 150)
       1     2     3     4     5
0   True  True  True  True  True
1  False  True  True  True  True
2   True  True  True  True  True
3  False  True  True  True  True
Then use all to check whether every value in a row is True,
or any to check whether at least one value in a row is True:
print((df.iloc[:, 1:] > 150).any(axis=1))
0    False
1     True
2    False
3     True
dtype: bool
print((df.iloc[:, 1:] <= 150).all(axis=1))
0     True
1    False
2     True
3    False
dtype: bool
Last, invert the first Series with ~ and filter by boolean indexing.
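As a side note on why the original attempt seemed to change nothing: indexing a DataFrame with a DataFrame-shaped boolean mask keeps the same shape and only blanks out non-matching cells with NaN, rather than dropping rows; dropping rows needs a per-row boolean Series, which is what any/all with axis=1 produce. (Another common cause with read_html is that the numbers were parsed as strings, so check df.dtypes and convert with astype(float) if needed.) A minimal sketch with hypothetical data:
import pandas as pd

df = pd.DataFrame({0: ['A', 'B'], 1: [1.48, 658.4], 2: [2.64, 14.33]})
numeric = df.iloc[:, 1:]

# A DataFrame-shaped mask keeps the same shape and only blanks cells:
print(numeric[numeric > 150])

# A row-wise Series mask actually drops rows:
print(df[~(numeric > 150).any(axis=1)])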