Drop rows in pandas dataframe depending on order and NaN - pandas

I am using pandas to import a dataframe, and want to drop certain rows before grouping the information.
How do I go from the following (example):
Name1 Name2 Name3
0 A1 B1 1
1 NaN NaN 2
2 NaN NaN 3
3 NaN B2 4
4 NaN NaN 5
5 NaN NaN 6
6 NaN B3 7
7 NaN NaN 8
8 NaN NaN 9
9 A2 B4 1
10 NaN NaN 2
11 NaN NaN 3
12 NaN B5 4
13 NaN NaN 5
14 NaN NaN 6
15 NaN B6 7
16 NaN NaN 8
17 NaN NaN 9
to:
Name1 Name2 Name3
0 A1 B1 1
3 NaN B2 4
6 NaN B3 7
8 NaN NaN 9
9 A2 B4 1
12 NaN B5 4
15 NaN B6 7
17 NaN NaN 9
(My actual case consists of several thousand lines with the same structure as the example)
I have tried removing rows with NaN in Name2 using df = df[df['Name2'].notna()], but then I get this:
Name1 Name2 Name3
0 A1 B1 1
3 NaN B2 4
6 NaN B3 7
9 A2 B4 1
12 NaN B5 4
15 NaN B6 7
I also need to keep lines 8 and 17 in the example above.

Assuming you want to keep the rows that are either:
not NA in column "Name2",
or the last row of a group (i.e. the row just before a non-NA "Name1", or the last row of the data).
You can use boolean indexing:
# is the row non-NA in Name2?
m1 = df['Name2'].notna()
# is it the last row of a group?
m2 = df['Name1'].notna().shift(-1, fill_value=True)
# keep the row if either condition is True
out = df[m1 | m2]
Output:
Name1 Name2 Name3
0 A1 B1 1
3 NaN B2 4
6 NaN B3 7
8 NaN NaN 9
9 A2 B4 1
12 NaN B5 4
15 NaN B6 7
17 NaN NaN 9
Intermediates:
Name1 Name2 Name3 m1 m2 m1|m2
0 A1 B1 1 True False True
1 NaN NaN 2 False False False
2 NaN NaN 3 False False False
3 NaN B2 4 True False True
4 NaN NaN 5 False False False
5 NaN NaN 6 False False False
6 NaN B3 7 True False True
7 NaN NaN 8 False False False
8 NaN NaN 9 False True True
9 A2 B4 1 True False True
10 NaN NaN 2 False False False
11 NaN NaN 3 False False False
12 NaN B5 4 True False True
13 NaN NaN 5 False False False
14 NaN NaN 6 False False False
15 NaN B6 7 True False True
16 NaN NaN 8 False False False
17 NaN NaN 9 False True True
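A self-contained version of the above, with the example data rebuilt from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Name1': ['A1'] + [np.nan] * 8 + ['A2'] + [np.nan] * 8,
    'Name2': ['B1', np.nan, np.nan, 'B2', np.nan, np.nan, 'B3', np.nan, np.nan,
              'B4', np.nan, np.nan, 'B5', np.nan, np.nan, 'B6', np.nan, np.nan],
    'Name3': list(range(1, 10)) * 2,
})

m1 = df['Name2'].notna()                             # row has a Name2 value
m2 = df['Name1'].notna().shift(-1, fill_value=True)  # next row starts a new group (or end of data)
out = df[m1 | m2]
print(out.index.tolist())  # [0, 3, 6, 8, 9, 12, 15, 17]
```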

You can use the thresh argument in df.dropna.
import numpy as np
import pandas as pd

# toy data
data = {'name1': [np.nan, np.nan, np.nan, np.nan], 'name2': [np.nan, 1, 2, np.nan], 'name3': [1, 2, 3, 4]}
df = pd.DataFrame(data)
name1 name2 name3
0 NaN NaN 1
1 NaN 1.0 2
2 NaN 2.0 3
3 NaN NaN 4
To remove rows with 2+ NaN, just do this:
df.dropna(thresh=2)
name1 name2 name3
1 NaN 1.0 2
2 NaN 2.0 3
If you want to keep lines 8 and 17, you may want to first save them separately in another variable, add them back to df afterwards with pd.concat (df.append is deprecated and was removed in pandas 2.0), and then re-sort by index.
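A minimal sketch of that idea with the toy data above, using pd.concat; the choice of row 3 as the "row to keep" is just for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'name1': [np.nan] * 4,
                   'name2': [np.nan, 1, 2, np.nan],
                   'name3': [1, 2, 3, 4]})

saved = df.loc[[3]]               # rows to keep no matter what
cleaned = df.dropna(thresh=2)     # keeps rows with at least 2 non-NaN values
out = pd.concat([cleaned, saved]).sort_index()
print(out.index.tolist())  # [1, 2, 3]
```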

Related

Use condition in a dataframe to replace values in another dataframe with nan

I have a dataframe that contains concentration values for a set of samples as follows:
Sample Ethanol Acetone Formaldehyde Methane
A      20      20      20           20
A      30      23      20           nan
A      20      23      nan          nan
A      nan     20      nan          nan
B      21      46      87           54
B      23      74      nan          54
B      23      67      nan          53
B      23      nan     nan          33
C      23      nan     nan          66
C      22      nan     nan          88
C      22      nan     nan          90
C      22      nan     nan          88
I have second dataframe that contains the proportion of concentration values that are not missing in the first dataframe:
Sample Ethanol Acetone Formaldehyde Methane
A      0.75    1       0.5          0.25
B      1       0.75    0.25         1
C      1       0       0            1
I would like to replace value in the first dataframe with nan when the condition in the second dataframe is 0.5 or less. Hence, the resulting dataframe would look like that below. Any help would be great!
Sample Ethanol Acetone Formaldehyde Methane
A      20      20      nan          nan
A      30      23      nan          nan
A      20      23      nan          nan
A      nan     20      nan          nan
B      21      46      nan          54
B      23      74      nan          54
B      23      67      nan          53
B      23      nan     nan          33
C      23      nan     nan          66
C      22      nan     nan          88
C      22      nan     nan          90
C      22      nan     nan          88
Is this what you are looking for:
>>> df2.set_index('Sample').mask(lambda x: x <= 0.5) \
.mul(df1.set_index('Sample')).reset_index()
Sample Ethanol Acetone Formaldehyde Methane
0 A 15.0 20.00 NaN NaN
1 A 22.5 23.00 NaN NaN
2 A 15.0 23.00 NaN NaN
3 A NaN 20.00 NaN NaN
4 B 21.0 34.50 NaN 54.0
5 B 23.0 55.50 NaN 54.0
6 B 23.0 50.25 NaN 53.0
7 B 23.0 NaN NaN 33.0
8 C 23.0 NaN NaN 66.0
9 C 22.0 NaN NaN 88.0
10 C 22.0 NaN NaN 90.0
11 C 22.0 NaN NaN 88.0
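Note that this multiplies the kept concentrations by the proportions (e.g. 20 × 0.75 = 15.0). If the goal is to keep the original values unchanged, as in the desired output, a sketch along these lines should work (toy subset of the data; column names from the question):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Sample': ['A', 'A', 'B'],
                    'Ethanol': [20.0, 30.0, 21.0],
                    'Formaldehyde': [20.0, np.nan, 87.0]})
df2 = pd.DataFrame({'Sample': ['A', 'B'],
                    'Ethanol': [0.75, 1.0],
                    'Formaldehyde': [0.5, 0.25]})

cols = df1.columns.drop('Sample')
# broadcast the per-sample proportions to one row per observation
keep = df2.set_index('Sample').gt(0.5).reindex(df1['Sample']).to_numpy()
out = df1.copy()
out[cols] = df1[cols].where(keep)   # NaN wherever the proportion is <= 0.5
```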

pandas how to get row index satisfying certain condition in a vectorized way?

I have a time-series dataframe containing market price and order information. Every entry has a corresponding stoploss, and I want to find the bar index at which each entry order's stoploss is triggered. If the market price >= stoploss, the stop is triggered, and I want to record which entry order that stop belongs to. Each entry is identified by its entry bar index: for example, the order with entry price 99 at bar 1 is entry order 1, entry price 98 at bar 2 is entry order 2, and entry price 103 at bar 5 is entry order 5, etc.
The original dataframe is like:
entry price index entryprice stoploss
0 0 100 0 NaN NaN
1 1 99 1 99.0 102.0
2 1 98 2 98.0 101.0
3 0 100 3 NaN NaN
4 0 101 4 NaN NaN
5 1 103 5 103.0 106.0
6 0 105 6 NaN NaN
7 0 104 7 NaN NaN
8 0 106 8 NaN NaN
9 1 103 9 103.0 106.0
10 0 100 10 NaN NaN
11 0 104 11 NaN NaN
12 0 108 12 NaN NaN
13 0 110 13 NaN NaN
The code is:
import pandas as pd
df = pd.DataFrame(
{'price':[100,99,98,100,101,103,105,104,106,103,100,104,108,110],
'entry': [0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0],})
df['index'] = df.index
df['entryprice'] = df['price'].where(df.entry==1)
df['stoploss'] = df['entryprice'] + 3
To find where the stoploss is triggered for each order, I do it with apply. I keep an external list, stoplist, holding every stoploss order (with its entry order index) that has not yet been triggered. Each row of df is passed to the function, which compares the market price with every stop in stoplist; whenever the condition is met, the entry order index is assigned to that row and the stop is removed from stoplist.
The code is like:
def Stop(row, stoplist):
    output = None
    # walk backwards so pop() does not shift upcoming indices
    for i in range(len(stoplist) - 1, -1, -1):
        ix, stop = stoplist[i]
        if row['price'] >= stop:
            output = ix
            stoplist.pop(i)
    # note: NaN != None is always True, so test for NaN explicitly
    if pd.notna(row['stoploss']):
        stoplist.append((row['index'], row['stoploss']))
    return output
import pandas as pd
df = pd.DataFrame(
{'price':[100,99,98,100,101,103,105,104,106,103,100,104,108,110],
'entry': [0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0],})
df['index'] = df.index
df['entryprice'] = df['price'].where(df.entry==1)
df['stoploss'] = df['entryprice'] + 3
stoplist = []
df['stopix'] = df.apply(lambda row: Stop(row, stoplist), axis=1)
print(df)
The final output is:
entry price index entryprice stoploss stopix
0 0 100 0 NaN NaN NaN
1 1 99 1 99.0 102.0 NaN
2 1 98 2 98.0 101.0 NaN
3 0 100 3 NaN NaN NaN
4 0 101 4 NaN NaN 2.0
5 1 103 5 103.0 106.0 1.0
6 0 105 6 NaN NaN NaN
7 0 104 7 NaN NaN NaN
8 0 106 8 NaN NaN 5.0
9 1 103 9 103.0 106.0 NaN
10 0 100 10 NaN NaN NaN
11 0 104 11 NaN NaN NaN
12 0 108 12 NaN NaN 9.0
13 0 110 13 NaN NaN NaN
The last column, stopix, is what I wanted. The only problem with this solution is that apply is not very efficient, and I am wondering if there is a vectorized way to do this. Any better-performing solution would be helpful, because efficiency is critical to me.
Thanks
Here's my take:
import numpy as np

# mark the block starting at each entry
blocks = df.stoploss.notna().cumsum()
# mark where the price is at or above the active stop level (not the entry price)
higher = df['stoploss'].ffill().le(df.price)
# group `higher` by block
g = higher.groupby(blocks)
# first row of each block (idxmin returns the first False)
idx = g.transform('idxmin')
# keep idx only at the first trigger within each block
df['stopix'] = np.where(g.cumsum().eq(1), idx, np.nan)
Output:
entry price index entryprice stoploss stopix
0 0 100 0 NaN NaN NaN
1 1 99 1 99.0 102.0 NaN
2 1 98 2 98.0 101.0 NaN
3 0 100 3 NaN NaN NaN
4 0 101 4 NaN NaN 2.0
5 1 103 5 103.0 106.0 NaN
6 0 105 6 NaN NaN NaN
7 0 104 7 NaN NaN NaN
8 0 106 8 NaN NaN 5.0
9 1 103 9 103.0 106.0 NaN
10 0 100 10 NaN NaN NaN
11 0 104 11 NaN NaN NaN
12 0 108 12 NaN NaN 9.0
13 0 110 13 NaN NaN NaN
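A self-contained version of this answer (note that, as the output above shows, it marks only the first trigger within each entry block, so row 5 differs from the apply result):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {'price': [100, 99, 98, 100, 101, 103, 105, 104, 106, 103, 100, 104, 108, 110],
     'entry': [0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0]})
df['entryprice'] = df['price'].where(df['entry'] == 1)
df['stoploss'] = df['entryprice'] + 3

blocks = df['stoploss'].notna().cumsum()         # block id, incremented at each entry
higher = df['stoploss'].ffill().le(df['price'])  # price at or above the active stop
g = higher.groupby(blocks)
idx = g.transform('idxmin')                      # first row (the entry row) of each block
df['stopix'] = np.where(g.cumsum().eq(1), idx, np.nan)
```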

Create a new ID column based on conditions in other column using pandas

I am trying to make a new column 'ID' which should give a unique ID to each run of non-NaN values in the 'Data' column. If the non-null values come right after each other, the ID stays the same. I have shown below how the final Id column should look, for reference. Could anyone guide me on this?
Id Data
0 NaN
0 NaN
0 NaN
1 54
1 55
0 NaN
0 NaN
2 67
0 NaN
0 NaN
3 33
3 44
3 22
0 NaN
.groupby the cumsum to get consecutive groups, using where to mask the NaN rows. .ngroup then assigns consecutive IDs. It is also possible with rank.
s = df.Data.isnull().cumsum().where(df.Data.notnull())
df['ID'] = df.groupby(s).ngroup().add(1).fillna(0).astype(int)
# or: df['ID'] = s.rank(method='dense').fillna(0).astype(int)
Output:
Data ID
0 NaN 0
1 NaN 0
2 NaN 0
3 54.0 1
4 55.0 1
5 NaN 0
6 NaN 0
7 67.0 2
8 NaN 0
9 NaN 0
10 33.0 3
11 44.0 3
12 22.0 3
13 NaN 0
Using factorize
v = pd.factorize(df.Data.isnull().cumsum()[df.Data.notnull()])[0] + 1
df.loc[df.Data.notnull(), 'Newid'] = v
df.Newid.fillna(0, inplace=True)
df
Id Data Newid
0 0 NaN 0.0
1 0 NaN 0.0
2 0 NaN 0.0
3 1 54.0 1.0
4 1 55.0 1.0
5 0 NaN 0.0
6 0 NaN 0.0
7 2 67.0 2.0
8 0 NaN 0.0
9 0 NaN 0.0
10 3 33.0 3.0
11 3 44.0 3.0
12 3 22.0 3.0
13 0 NaN 0.0
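For reference, the same result with an explicit "block start" counter, a sketch assuming the column name from the question:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Data': [np.nan, np.nan, np.nan, 54, 55, np.nan, np.nan, 67,
                            np.nan, np.nan, 33, 44, 22, np.nan]})

notna = df['Data'].notna()
# a new block starts at a non-null value whose predecessor is null
starts = notna & ~notna.shift(fill_value=False)
df['ID'] = np.where(notna, starts.cumsum(), 0)
print(df['ID'].tolist())  # [0, 0, 0, 1, 1, 0, 0, 2, 0, 0, 3, 3, 3, 0]
```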

Concatenating dataframe that have different number of rows

I have a dataframe df = df[['A', 'B', 'C']] with 3 columns and 2000 rows
Then I have another set of data with only 200 rows
How can I add this into df['D'] such that these 200 rows appear only at the tail of the 2000 rows?
So that df['D'] is NaN for rows 0-1799, and rows 1800-1999 (the last 200) hold the values
Been trying various ways without success... thank you
data with 200 rows in this format
[[ 0.43628979]
[ 0.43454027]
[ 0.43552566]
[ 0.43542767]
[ 0.43331838]
...
I believe you need join, after changing the index of df2 to the last index values of df1:
np.random.seed(100)
df1 = pd.DataFrame(np.random.randint(10, size=(20,3)), columns=list('ABC'))
print (df1)
A B C
0 8 8 3
1 7 7 0
2 4 2 5
3 2 2 2
4 1 0 8
5 4 0 9
6 6 2 4
7 1 5 3
8 4 4 3
9 7 1 1
10 7 7 0
11 2 9 9
12 3 2 5
13 8 1 0
14 7 6 2
15 0 8 2
16 5 1 8
17 1 5 4
18 2 8 3
19 5 0 9
df2 = pd.DataFrame(np.random.randint(10, size=(2,5)), columns=list('werty'))
print (df2)
w e r t y
0 3 6 3 4 7
1 6 3 9 0 4
df2.index = df1.index[-len(df2.index):]
df = df1.join(df2)
print (df)
A B C w e r t y
0 8 8 3 NaN NaN NaN NaN NaN
1 7 7 0 NaN NaN NaN NaN NaN
2 4 2 5 NaN NaN NaN NaN NaN
3 2 2 2 NaN NaN NaN NaN NaN
4 1 0 8 NaN NaN NaN NaN NaN
5 4 0 9 NaN NaN NaN NaN NaN
6 6 2 4 NaN NaN NaN NaN NaN
7 1 5 3 NaN NaN NaN NaN NaN
8 4 4 3 NaN NaN NaN NaN NaN
9 7 1 1 NaN NaN NaN NaN NaN
10 7 7 0 NaN NaN NaN NaN NaN
11 2 9 9 NaN NaN NaN NaN NaN
12 3 2 5 NaN NaN NaN NaN NaN
13 8 1 0 NaN NaN NaN NaN NaN
14 7 6 2 NaN NaN NaN NaN NaN
15 0 8 2 NaN NaN NaN NaN NaN
16 5 1 8 NaN NaN NaN NaN NaN
17 1 5 4 NaN NaN NaN NaN NaN
18 2 8 3 3.0 6.0 3.0 4.0 7.0
19 5 0 9 6.0 3.0 9.0 0.0 4.0
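For the single-column case in the question, a direct assignment via .loc also works; a small sketch (the 3-value array stands in for the 200-row data):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': range(10), 'B': range(10), 'C': range(10)})
tail = np.array([0.43628979, 0.43454027, 0.43552566])  # stand-in for the 200 rows

df['D'] = np.nan                            # start with an all-NaN column
df.loc[df.index[-len(tail):], 'D'] = tail   # fill only the last rows
```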

Boxplot with pandas and groupby

I have the following dataset sample:
0 1
0 0 0.040158
1 2 0.500642
2 0 0.005694
3 1 0.065052
4 0 0.034789
5 2 0.128495
6 1 0.088816
7 1 0.056725
8 0 -0.000193
9 2 -0.070252
10 2 0.138282
11 2 0.054638
12 2 0.039994
13 2 0.060659
14 0 0.038562
And need a box and whisker plot, grouped by column 0. I have the following:
plt.figure()
grouped = df.groupby(0)
grouped.boxplot(column=1)
plt.savefig('plot.png')
But I end up with three subplots. How can I place all three on one plot?
Thanks.
In pandas 0.16.0 and later, you can simply do this (the grouping column here is the integer label 0):
df.boxplot(column=1, by=0)
Result: a single figure with one box per group (plot image not shown).
I don't believe you need to use groupby.
df2 = df.pivot(columns=df.columns[0], index=df.index)
df2.columns = df2.columns.droplevel()
>>> df2
0 0 1 2
0 0.040158 NaN NaN
1 NaN NaN 0.500642
2 0.005694 NaN NaN
3 NaN 0.065052 NaN
4 0.034789 NaN NaN
5 NaN NaN 0.128495
6 NaN 0.088816 NaN
7 NaN 0.056725 NaN
8 -0.000193 NaN NaN
9 NaN NaN -0.070252
10 NaN NaN 0.138282
11 NaN NaN 0.054638
12 NaN NaN 0.039994
13 NaN NaN 0.060659
14 0.038562 NaN NaN
df2.boxplot()