Drop all of a group's rows when a condition is met? - pandas

I have a pandas DataFrame with a two-level grouping on 'col0' and 'col1'. All I want to do is drop all of a group's rows if a specified value in another column is repeated, or if that value does not exist in the group (i.e. keep only the groups in which the specified value occurs exactly once). For example:
The original data frame:
import pandas as pd

df = pd.DataFrame({'col0':['A','A','A','A','A','B','B','B','B','B','B','B','c'],
                   'col1':[1,1,2,2,2,1,1,1,1,2,2,2,1],
                   'col2':[1,2,1,2,3,1,2,1,2,2,2,2,1]})
I need to keep the rows of the groups (['A',1], ['A',2], ['c',1]) from this original df.
The desired dataframe:
    col0  col1  col2
0      A     1     1
1      A     1     2
2      A     2     1
3      A     2     2
4      A     2     3
12     c     1     1
I tried this step:
df.groupby(['col0','col1']).apply(lambda x: (x['col2']==1).sum()==1)
where the result is
col0  col1
A     1         True
      2         True
B     1        False
      2        False
c     1         True
dtype: bool
How do I create the desired df based on this boolean Series?

You can do it as below:
import numpy as np

m = (df.groupby(['col0','col1'])['col2']
       .transform(lambda x: np.where(x.eq(1).sum()==1, x, np.nan))
       .dropna().index)
df.loc[m]
Or:
df[df.groupby(['col0','col1'])['col2'].transform(lambda x: x.eq(1).sum()==1)]
    col0  col1  col2
0      A     1     1
1      A     1     2
2      A     2     1
3      A     2     2
4      A     2     3
12     c     1     1
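For what it's worth, GroupBy.filter expresses the "keep only the groups where the condition holds" idea directly, without building an index or a boolean mask first; a minimal sketch:

# keep the rows of every (col0, col1) group whose col2 contains 1 exactly once
df.groupby(['col0', 'col1']).filter(lambda g: g['col2'].eq(1).sum() == 1)

It returns the original rows (indexes 0-4 and 12 here) with their index preserved, exactly as in the output above.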

Related

How to merge two datasets on incomplete columns?

I want to merge two datasets on the 'key1' and 'key2' columns so that, in the case of a missing value in, for example, the 'key2' column, it takes all combinations of the second key that belong to the first key. Here is an example:
def merge_nan_as_any(mask, data, on, how):
...
mask = pd.DataFrame({'key1': [1, 1, 2, 2],
                     'key2': [None, 3, 1, 2],
                     'value2': [1, 2, 3, 4]})
data = pd.DataFrame({'key1': [1, 1, 1, 2, 2, 2],
                     'key2': [1, 2, 3, 1, 2, 3],
                     'value1': [1, 2, 3, 4, 5, 6]})
result = merge_nan_as_any(mask, data, on=['key1', 'key2'], how='left')
result = pd.DataFrame({'key1': [1, 1, 1, 1, 2, 2],
                       'key2': [1, 2, 3, 3, 1, 2],
                       'value2': [1, 1, 1, 2, 3, 4],
                       'value1': [1, 2, 3, 3, 4, 5]})
There is a missing value for the second key, so it should take all rows from the second dataset that satisfy the condition: key1 must equal 1, and key2 may be any second-key value from the second dataset. How to do that?
The first obvious solution that came to my mind is to iterate over the first dataset and filter out the combinations that satisfy the condition; the second is to split the first dataset into several ones that have NaNs in the same columns and merge each of them on the columns that have values.
But I don't like these solutions and guess there is a more elegant way to do what I want.
I will appreciate any help!
Simple approach: merge on key1/key2 for the non-NaN values, merge on key1 only for the NaN values, and concat:
m = mask['key2'].notna()
result = pd.concat([data.merge(mask[~m].drop(columns='key2'), on='key1'),
                    data.merge(mask[m], on=['key1', 'key2'])],
                   ignore_index=True)
Output:
   key1  key2  value1  value2
0     1     1       1       1
1     1     2       2       1
2     1     3       3       1
3     1     3       3       2
4     2     1       4       3
5     2     2       5       4
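If you want this packaged behind the merge_nan_as_any signature sketched in the question, a minimal wrapper could look like the following; it is a sketch that assumes exactly one nullable key column ('key2'), and the how parameter is accepted only for signature compatibility:

import pandas as pd

def merge_nan_as_any(mask, data, on, how='left'):
    nullable = 'key2'  # assumption: the only key allowed to be NaN
    strict = [k for k in on if k != nullable]
    m = mask[nullable].notna()
    # NaN keys match anything: merge those rows on the strict keys only
    wildcard = data.merge(mask[~m].drop(columns=nullable), on=strict)
    # non-NaN keys must match exactly
    exact = data.merge(mask[m], on=on)
    return pd.concat([wildcard, exact], ignore_index=True)

result = merge_nan_as_any(mask, data, on=['key1', 'key2'])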
I would begin by filling the null values with a list of all unique values from the other dataframe. Then, explode it to get all possible combinations and transform back to numeric. Finally, merge them both achieving the expected output:
mask['key2'] = (mask['key2']
                .fillna(' '.join(str(x) for x in data['key2'].unique()))
                .astype(str)
                .str.split(' '))
mask = mask.explode('key2')
mask['key2'] = pd.to_numeric(mask['key2'])
pd.merge(mask, data, on=['key1', 'key2'], how='left')
Outputting:
   key1  key2  value2  value1
0     1     1       1       1
1     1     2       1       2
2     1     3       1       3
3     1     3       2       3
4     2     1       3       4
5     2     2       4       5
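A variant of the same explode idea that avoids round-tripping the keys through strings (useful if the key values could themselves contain spaces); a hedged sketch, assuming pd is already imported:

# put the full list of candidate keys directly into the NaN cells
uniq = data['key2'].unique().tolist()
mask['key2'] = mask['key2'].apply(lambda v: uniq if pd.isna(v) else [v])
mask = mask.explode('key2')
mask['key2'] = pd.to_numeric(mask['key2'])  # restore a numeric dtype for the merge
pd.merge(mask, data, on=['key1', 'key2'], how='left')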
Or use pandasql, which lets you express the join as SQL:
from pandasql import sqldf

sqldf("""
select data.*, mask.value2
from mask left join data
on mask.key1 = data.key1
   and (mask.key2 = data.key2 or mask.key2 is null)
""", globals())
Output:
   key1  key2  value1  value2
0     1     1       1       1
1     1     2       2       1
2     1     3       3       1
3     1     3       3       2
4     2     1       4       3
5     2     2       5       4

pandas creating new columns for each value in categorical columns

I have a pandas dataframe with some numeric and some categorical columns. I want to create a new column for each value of every categorical column and give that column a value of 1 in every row where that value occurs and 0 in every row where it does not. So the df is something like this -
col1  col2  col3
A     P     1
B     P     3
A     Q     7
expected result is something like this:
col1  col2  col3  A  B  P  Q
A     P     1     1  0  1  0
B     P     3     0  1  1  0
A     Q     7     1  0  0  1
Is this possible? Can someone please help me?
Use df.select_dtypes, pd.get_dummies with pd.concat:
# First select all columns which have object dtypes
In [826]: categorical_cols = df.select_dtypes('object').columns
# Create one-hot encoding for the above cols and concat with df
In [817]: out = pd.concat([df, pd.get_dummies(df[categorical_cols])], axis=1)
In [818]: out
Out[818]:
  col1 col2  col3  col1_A  col1_B  col2_P  col2_Q
0    A    P     1       1       0       1       0
1    B    P     3       0       1       1       0
2    A    Q     7       1       0       0       1
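Two optional tweaks, assuming a reasonably recent pandas: get_dummies takes prefix/prefix_sep if you want the bare column names A, B, P, Q from the expected output, and dtype=int guarantees 0/1 integers rather than booleans:

out = pd.concat(
    [df, pd.get_dummies(df[categorical_cols], prefix='', prefix_sep='', dtype=int)],
    axis=1,
)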

pandas group by and concatenate n rows

I have:
dd = pd.DataFrame({'col2':[1,1,1,1,2,2,2,2],'col':['one','fish','two','fish','left','foot','right','foot']})
I want to group by col2 and concatenate every 3 rows of col to get:
   col2              col
0     1     one fish two
1     1             fish
2     2  left foot right
3     2             foot
I am trying
dd.groupby(dd.index // 3).agg(' '.join)
to no avail.
Any suggestions?
Use GroupBy.cumcount to build a counter per group, then integer-divide it by 3 and aggregate with join:
s = dd.groupby('col2').cumcount()
df = (dd.groupby([s // 3, 'col2'], sort=False)['col']
        .agg(' '.join)
        .reset_index(level=1)
        .reset_index(drop=True))
print(df)
   col2              col
0     1     one fish two
1     1             fish
2     2  left foot right
3     2             foot
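To see why this works, it can help to print the intermediate counter: cumcount numbers the rows within each col2 group, and integer division by 3 buckets them into chunks of three (the output in the comments is illustrative):

print(dd.assign(counter=s, chunk=s // 3))
#    col2    col  counter  chunk
# 0     1    one        0      0
# 1     1   fish        1      0
# 2     1    two        2      0
# 3     1   fish        3      1
# 4     2   left        0      0
# 5     2   foot        1      0
# 6     2  right        2      0
# 7     2   foot        3      1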

How to make pandas work for cross multiplication

I have 3 data frames:
df1
id,k,a,b,c
1,2,1,5,1
2,3,0,1,0
3,6,1,1,0
4,1,0,5,0
5,1,1,5,0
df2
name,a,b,c
p,4,6,8
q,1,2,3
df3
type,w_ave,vac,yak
n,3,5,6
v,2,1,4
From the multiplication, using pandas and numpy, I want to get the following output in df1:
id,k,a,b,c,w_ave,vac,yak
1,2,1,5,1,16,15,18
2,3,0,1,0,0,3,6
3,6,1,1,0,5,4,7
4,1,0,5,0,0,11,14
5,1,1,5,0,13,12,15
The conditions are:
The value of the new column will be (not actual code, just the idea):
df1["w_ave"][1] = df3["w_ave"]["v"]+ df1["a"][1]*df2["a"]["q"]+df1["b"][1]*df2["b"]["q"]+df1["c"][1]*df2["c"]["q"]
for output["w_ave"][1]= 2 +(1*1)+(5*2)+(1*3)
df3["w_ave"]["v"]=2
df1["a"][1]=1, df2["a"]["q"]=1 ;
df1["b"][1]=5, df2["b"]["q"]=2 ;
df1["c"][1]=1, df2["c"]["q"]=3 ;
Which means:
- a new column will be added to df1 for each column of df3, taking its name from df3.
- for each row of df1, the values of a, b, c will be multiplied by the same-named values from df2, and summed together with the corresponding value from df3.
- only the columns of df1 whose names match columns of df2 are multiplied; unmatched columns, like df1[k], are not.
- however, if there is a 0 in df1["a"], the corresponding output will be zero.
I am struggling with this; it was tough to explain, too. My attempts are very silly, and I know this one will not work, but I have added it anyway:
import pandas as pd, numpy as np

data1 = "Sample_data1.csv"
data2 = "Sample_data2.csv"
data3 = "Sample_data3.csv"
folder = '~Sample_data/'
df1 = pd.read_csv(folder + data1)
df2 = pd.read_csv(folder + data2)
df3 = pd.read_csv(folder + data3)
df1 = df2 * df1
Ok, so this will in no way resemble your desired output, but vectorizing the formula you provided:
df2 = df2.set_index("name")
df3 = df3.set_index("type")
df1["w_ave"] = (df3.loc["v", "w_ave"]
                + df1["a"].mul(df2.loc["q", "a"])
                + df1["b"].mul(df2.loc["q", "b"])
                + df1["c"].mul(df2.loc["q", "c"]))
Outputs:
   id  k  a  b  c  w_ave
0   1  2  1  5  1     16
1   2  3  0  1  0      4
2   3  6  1  1  0      5
3   4  1  0  5  0     12
4   5  1  1  5  0     13
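Extending the same vectorized formula to every column of df3, and applying the asker's "zero when df1['a'] is 0" rule, could look like the sketch below; it assumes df2/df3 are already indexed by name/type as above, that row 'q' of df2 supplies the weights, and that row 'v' of df3 supplies the offsets:

# one dot product per row of df1, shared by all of df3's columns
weights = df2.loc['q', ['a', 'b', 'c']]
dot = df1[['a', 'b', 'c']].mul(weights, axis=1).sum(axis=1)
for col in df3.columns:  # w_ave, vac, yak
    df1[col] = df3.loc['v', col] + dot
# the asker's extra rule: zero out w_ave wherever a == 0, matching the expected output
df1.loc[df1['a'].eq(0), 'w_ave'] = 0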

Map column names if data is same in two dataframes

I have two pandas dataframes
df1 =  A  B  C
       1  2  3
       2  3  4
       3  4  5
df2 =  X  Y  Z
       1  2  3
       2  3  4
       3  4  5
I need to map based on the data: if the data in two columns is the same, map the column names.
Output = col1  col2
         A     X
         B     Y
         C     Z
I cannot find any built-in function to support this, hence simply loop over all columns:
import pandas as pd

pairs = []
for col1 in df1.columns:
    for col2 in df2.columns:
        if df1[col1].equals(df2[col2]):
            pairs.append((col1, col2))
output = pd.DataFrame(pairs, columns=['col1', 'col2'])
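If the frames are wide, the nested loop compares every column pair. A possible speedup, as a sketch that assumes the columns hold hashable, NaN-free values, is to index one frame's columns by their values, so each column of df1 needs one dict lookup instead of a scan over all of df2:

# map tuple-of-values -> column name for df2, then look each df1 column up
lookup = {tuple(df2[c]): c for c in df2.columns}
pairs = [(c1, lookup[tuple(df1[c1])])
         for c1 in df1.columns if tuple(df1[c1]) in lookup]
output = pd.DataFrame(pairs, columns=['col1', 'col2'])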