pandas - how to select rows based on a conjunction of a non-indexed column?

Consider the following DataFrame -
In [47]: dati
Out[47]:
                       x      y
frame face lmark
1     NaN  NaN       NaN    NaN
300   0.0  1.0     745.0  367.0
           2.0     753.0  411.0
           3.0     759.0  455.0
2201  0.0  1.0     634.0  395.0
           2.0     629.0  439.0
           3.0     630.0  486.0
How can we select the rows where dati['x'] > 629.5 for all rows sharing the same value in the 'frame' column? For this example, I would expect the result to be
                       x      y
frame face lmark
300   0.0  1.0     745.0  367.0
           2.0     753.0  411.0
           3.0     759.0  455.0
because column 'x' of 'frame' 2201, 'lmark' 2.0 is not greater than 629.5

Use GroupBy.transform with GroupBy.all to test whether all values are True per group, then filter with boolean indexing:
df = dati[(dati['x'] > 629.5).groupby(level=0).transform('all')]
print(df)
                       x      y
frame face lmark
300   0.0  1.0     745.0  367.0
           2.0     753.0  411.0
           3.0     759.0  455.0
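For reference, a minimal self-contained sketch of this approach; the index tuples and values below are reconstructed from the example output above (the all-NaN row at frame 1 is omitted for brevity):

import pandas as pd

# Rebuild the example MultiIndex frame: (frame, face, lmark) -> (x, y)
idx = pd.MultiIndex.from_tuples(
    [(300, 0.0, 1.0), (300, 0.0, 2.0), (300, 0.0, 3.0),
     (2201, 0.0, 1.0), (2201, 0.0, 2.0), (2201, 0.0, 3.0)],
    names=['frame', 'face', 'lmark'])
dati = pd.DataFrame({'x': [745.0, 753.0, 759.0, 634.0, 629.0, 630.0],
                     'y': [367.0, 411.0, 455.0, 395.0, 439.0, 486.0]},
                    index=idx)

# Keep only the frames where every row satisfies the condition
mask = (dati['x'] > 629.5).groupby(level=0).transform('all')
print(dati[mask])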

Related

pandas DataFrame column manipulation using previous row value

I have below pandas DataFrame
   color  direction  Total
0   -1.0        1.0    NaN
1    1.0        1.0      0
2    1.0        1.0      0
3    1.0        1.0      0
4   -1.0        1.0    NaN
5    1.0       -1.0    NaN
6    1.0        1.0      0
7    1.0        1.0      0
I am trying to update the Total column based on the logic below:
if df['color'] == 1.0 and df['direction'] == 1.0, then Total should be the previous row's Total + 1; if the previous row's Total is NaN, then 0 + 1.
Note: I tried to read the previous row's Total using df['Total'].shift() + 1, but it didn't work.
Expected DataFrame.
   color  direction  Total
0   -1.0        1.0    NaN
1    1.0        1.0      1
2    1.0        1.0      2
3    1.0        1.0      3
4   -1.0        1.0    NaN
5    1.0       -1.0    NaN
6    1.0        1.0      1
7    1.0        1.0      2
You can create a sub-group key with cumsum of the NaN flags, then group by that key together with color and direction and use cumcount:
df.loc[df.Total.notnull(), 'Total'] = df.groupby([df['Total'].isna().cumsum(), df['color'], df['direction']]).cumcount() + 1
df
Out[618]:
color direction Total
0 -1.0 1.0 NaN
1 1.0 1.0 1.0
2 1.0 1.0 2.0
3 1.0 1.0 3.0
4 -1.0 1.0 NaN
5 1.0 -1.0 NaN
6 1.0 1.0 1.0
7 1.0 1.0 2.0
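A self-contained sketch of the same idea, with the data typed in from the question (column names and values assumed from the tables above):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'color':     [-1.0, 1.0, 1.0, 1.0, -1.0,  1.0, 1.0, 1.0],
    'direction': [ 1.0, 1.0, 1.0, 1.0,  1.0, -1.0, 1.0, 1.0],
    'Total':     [np.nan, 0, 0, 0, np.nan, np.nan, 0, 0]})

# A new sub-group starts at every NaN in Total (isna().cumsum());
# within each (sub-group, color, direction) the running count becomes the new Total
df.loc[df.Total.notnull(), 'Total'] = (
    df.groupby([df['Total'].isna().cumsum(), df['color'], df['direction']])
      .cumcount() + 1)
print(df)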

How do I sum each column based on a condition on another column, without iterating over the columns, in a pandas DataFrame

I have a data frame as below:
Preg Glucose BloodPressure SkinThickness Insulin Outcome
0 1.0 85.0 66.0 29.0 0.0 0.0
1 8.0 183.0 64.0 0.0 0.0 0.0
2 1.0 89.0 66.0 23.0 94.0 1.0
3 0.0 137.0 40.0 35.0 168.0 1.0
4 5.0 116.0 74.0 0.0 0.0 1.0
I would like a Pythonic way to sum each column separately based on a condition on one of the columns. I could do it by iterating over the df columns, but I'm sure there is a better way I'm not familiar with.
Specifically, for the data I have, I'd like to sum each column's values for rows where the last column 'Outcome' equals 1. In the end, I should get the result below:
Preg Glucose BloodPressure SkinThickness Insulin Outcome
0 6.0 342.0 180.0 58.0 262.0 0.0
Any ideas?
Here is a solution to get the expected output:
sum_df = df.loc[df.Outcome == 1.0].sum().to_frame().T
sum_df.Outcome = 0.0
Output:
Preg Glucose BloodPressure SkinThickness Insulin Outcome
0 6.0 342.0 180.0 58.0 262.0 0.0
Documentation:
loc: access a group of rows / columns by labels or boolean array
sum: sums each column by default and returns a Series indexed by the column names.
to_frame: converts a Series to a DataFrame.
.T: the transpose accessor; transposes the DataFrame back into a single row.
Use np.where:
df1[np.where(df1['Outcome'] == 1,True,False)].sum().to_frame().T
Output
Preg Glucose BloodPressure SkinThickness Insulin Outcome
0 6.0 342.0 180.0 58.0 262.0 3.0
Will these work for you?
df1.loc[~(df1['Outcome'] == 0)].groupby('Outcome').agg('sum').reset_index()
or
df1.loc[df1.Outcome == 1.0].sum().to_frame().T
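Putting it together, a minimal runnable sketch of the filter-then-sum approach (values typed in from the question's table):

import pandas as pd

df = pd.DataFrame({
    'Preg':          [1.0, 8.0, 1.0, 0.0, 5.0],
    'Glucose':       [85.0, 183.0, 89.0, 137.0, 116.0],
    'BloodPressure': [66.0, 64.0, 66.0, 40.0, 74.0],
    'SkinThickness': [29.0, 0.0, 23.0, 35.0, 0.0],
    'Insulin':       [0.0, 0.0, 94.0, 168.0, 0.0],
    'Outcome':       [0.0, 0.0, 1.0, 1.0, 1.0]})

# Keep the rows where Outcome == 1, sum each column,
# then turn the resulting Series back into a one-row frame
sum_df = df.loc[df.Outcome == 1.0].sum().to_frame().T
sum_df.Outcome = 0.0  # the summed Outcome (3.0) is not meaningful, so reset it
print(sum_df)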

Sum of NaNs to equal NaN (not zero)

I can add a TOTAL column to this DF using df['TOTAL'] = df.sum(axis=1), and it adds the row elements like this:
col1 col2 TOTAL
0 1.0 5.0 6.0
1 2.0 6.0 8.0
2 0.0 NaN 0.0
3 NaN NaN 0.0
However, I would like the total of the bottom row to be NaN, not zero, like this:
col1 col2 TOTAL
0 1.0 5.0 6.0
1 2.0 6.0 8.0
2 0.0 NaN 0.0
3 NaN NaN NaN
Is there a performant way to achieve this?
Add parameter min_count=1 to DataFrame.sum:
min_count : int, default 0
The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA.
New in version 0.22.0: Added with the default being 0. This means the sum of an all-NA or empty Series is 0, and the product of an all-NA or empty Series is 1.
df['TOTAL'] = df.sum(axis=1, min_count=1)
print (df)
col1 col2 TOTAL
0 1.0 5.0 6.0
1 2.0 6.0 8.0
2 0.0 NaN 0.0
3 NaN NaN NaN
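A minimal sketch reproducing the behaviour, assuming the two-column frame shown in the question:

import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': [1.0, 2.0, 0.0, np.nan],
                   'col2': [5.0, 6.0, np.nan, np.nan]})

# min_count=1 requires at least one non-NA value per row,
# so the all-NaN row sums to NaN instead of 0
df['TOTAL'] = df.sum(axis=1, min_count=1)
print(df)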

Pandas: replace outliers in all columns with nan

I have a data frame with 3 columns, for ex
c1,c2,c3
10000,1,2
1,3,4
2,5,6
3,1,122
4,3,4
5,5,6
6,155,6
I want to replace the outliers in all the columns, i.e. the values that lie outside 2 sigma, with NaN. Using the code below, I can create a DataFrame without the rows that contain outliers.
df[df.apply(lambda x: np.abs(x - x.mean()) / x.std() < 2).all(axis=1)]
c1,c2,c3
1,3,4
2,5,6
4,3,4
5,5,6
I can find the outliers for each column separately and replace them with NaN, but that would not be the best way, as the number of lines of code grows with the number of columns. There must be a better way of doing this. Maybe take the boolean output from the command above and replace the True values with NaN.
Any suggestions, many thanks.
pandas
Use pd.DataFrame.mask
df.mask(df.sub(df.mean()).div(df.std()).abs().gt(2))
c1 c2 c3
0 NaN 1.0 2.0
1 1.0 3.0 4.0
2 2.0 5.0 6.0
3 3.0 1.0 NaN
4 4.0 3.0 4.0
5 5.0 5.0 6.0
6 6.0 NaN 6.0
numpy
v = df.values
mask = np.abs((v - v.mean(0)) / v.std(0)) > 2
pd.DataFrame(np.where(mask, np.nan, v), df.index, df.columns)
c1 c2 c3
0 NaN 1.0 2.0
1 1.0 3.0 4.0
2 2.0 5.0 6.0
3 3.0 1.0 NaN
4 4.0 3.0 4.0
5 5.0 5.0 6.0
6 6.0 NaN 6.0
lb = df.quantile(0.01)
ub = df.quantile(0.99)
df_new = df[(df < ub) & (df > lb)]
df_new
I am using a quantile-based method to detect the outliers. First it calculates the lower and upper bounds of the df using the quantile function. Then, based on the condition that all values should fall between the lower and upper bounds, it returns a new df with the outlier values replaced by NaN.
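For completeness, a runnable sketch of the 2-sigma mask approach on the example data (values assumed from the question):

import pandas as pd

df = pd.DataFrame({'c1': [10000, 1, 2, 3, 4, 5, 6],
                   'c2': [1, 3, 5, 1, 3, 5, 155],
                   'c3': [2, 4, 6, 122, 4, 6, 6]})

# Replace with NaN every value that lies more than 2 standard deviations
# from its column mean
z = df.sub(df.mean()).div(df.std()).abs()
print(df.mask(z.gt(2)))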

Pandas element-wise min max against a series along one axis

I have a Dataframe:
df =
A B C D
DATA_DATE
20170103 5.0 3.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 1.0 NaN 2.0 3.0
And I have a series
s =
DATA_DATE
20170103 4.0
20170104 0.0
20170105 2.2
I'd like to run an element-wise max() function and align s along the columns of df. In other words, I want to get
result =
A B C D
DATA_DATE
20170103 5.0 4.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 2.2 NaN 2.2 3.0
What is the best way to do this? I've checked single column comparison and series to series comparison but haven't found an efficient way to run dataframe against a series.
Bonus: Not sure if the answer will be self-evident from above, but how to do it if I want to align s along the rows of df (assume dimensions match)?
Data:
In [135]: df
Out[135]:
A B C D
DATA_DATE
20170103 5.0 3.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 1.0 NaN 2.0 3.0
In [136]: s
Out[136]:
20170103 4.0
20170104 0.0
20170105 2.2
Name: DATA_DATE, dtype: float64
Solution:
In [66]: df.clip_lower(s, axis=0)
C:\Users\Max\Anaconda4\lib\site-packages\pandas\core\ops.py:1247: RuntimeWarning: invalid value encountered in greater_equal
result = op(x, y)
Out[66]:
A B C D
DATA_DATE
20170103 5.0 4.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 2.2 NaN 2.2 3.0
We can use the following hack to get rid of the RuntimeWarning:
In [134]: df.fillna(np.inf).clip_lower(s, axis=0).replace(np.inf, np.nan)
Out[134]:
A B C D
DATA_DATE
20170103 5.0 4.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 2.2 NaN 2.2 3.0
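Note that clip_lower has since been deprecated and removed in newer pandas versions; on recent releases the equivalent call should be (a hedged suggestion, not from the original answer):

df.clip(lower=s, axis=0)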
This is called broadcasting and can be done as follows:
import numpy as np
np.maximum(df, s[:, None])
Out:
A B C D
DATA_DATE
20170103 5.0 4.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 2.2 NaN 2.2 3.0
Here, s[:, None] adds a new axis to s; the same can be achieved with s[:, np.newaxis]. With that extra axis the two can be broadcast together, because the shapes (3, 4) and (3, 1) are compatible: the length-1 axis is stretched to match.
Note the difference between s and s[:, None]:
s.values
Out: array([ 4. , 0. , 2.2])
s[:, None]
Out:
array([[ 4. ],
[ 0. ],
[ 2.2]])
s.shape
Out: (3,)
s[:, None].shape
Out: (3, 1)
An alternative would be:
df.mask(df.le(s, axis=0), s, axis=0)
Out:
A B C D
DATA_DATE
20170103 5.0 4.0 NaN NaN
20170104 NaN NaN NaN 1.0
20170105 2.2 NaN 2.2 3.0
This reads: Compare df and s. Where df is larger, use df, and otherwise use s.
While there may be better solutions for your problem, I believe this should give you what you need:
for c in df.columns:
    df[c] = pd.concat([df[c], s], axis=1).max(axis=1)
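One caveat: max(axis=1) skips NaN by default, so this loop also fills cells that are NaN in df with the value from s, unlike the expected output above. If the NaNs should be preserved, passing skipna=False should do it (a hedged variation, not part of the original answer):

for c in df.columns:
    df[c] = pd.concat([df[c], s], axis=1).max(axis=1, skipna=False)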