pandas: appending a row to a dataframe with values derived using a user-defined formula applied to selected columns

I have a dataframe as
df = pd.DataFrame(np.random.randn(5,4),columns=list('ABCD'))
I can use the following to perform traditional calculations like mean(), sum(), etc.:
df.loc['calc'] = df[['A','D']].iloc[2:4].mean(axis=0)
Now I have two questions:
How can I apply a formula (like exp(mean()) or 2.5*mean()/sqrt(max())) to columns 'A' and 'D' for rows 2 to 4?
How can I append a row to the existing df where two values are the mean() of A and D and the other two values are the result of a specific formula applied to C and B?

Q1:
You can use .apply() and lambda functions.
df.iloc[2:4,[0,3]].apply(lambda x: np.exp(np.mean(x)))
df.iloc[2:4,[0,3]].apply(lambda x: 2.5*np.mean(x)/np.sqrt(max(x)))
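For reference, a minimal sketch (assuming the question's own random sample data) that stores one of these formula results as a new labelled row, following the same df.loc['calc'] pattern used in the question:
import numpy as np
import pandas as pd

# a sketch assuming the question's setup
df = pd.DataFrame(np.random.randn(5, 4), columns=list('ABCD'))

# apply the formula to columns A and D over the iloc[2:4] slice and store the
# result as a new row; columns B and C stay NaN in that row
df.loc['calc'] = df.iloc[2:4, [0, 3]].apply(lambda x: np.exp(np.mean(x)))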
Q2:
You can build dictionaries, combine them, and add the result as a row.
The first one is the mean; the second one is some custom function.
ad = dict(df[['A', 'D']].mean())
bc = dict(df[['B', 'C']].apply(lambda x: x.sum()*45))
Combine them:
ad.update(bc)
df = df.append(ad, ignore_index=True)
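Note that DataFrame.append was removed in pandas 2.0; on newer versions the same combined dict can be appended with pd.concat instead:
# ad already holds both the A/D means and the B/C results
new_row = pd.DataFrame([ad])
df = pd.concat([df, new_row], ignore_index=True)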

Related

Proper way to join data based on conditions

I want to add a new column (name: conc) to a dataframe "table", which uses the values in columns (plate, ab) to look up the numeric value from the dataframe "concs".
Below is what I mean, with the dataframe "exp" used to show what I expect the data to look like.
What is the proper way to do this? Is it some kind of multiple condition, or do I need to reshape the concs dataframe somehow?
Use DataFrame.melt with a left join for the new column concs; if there is no match, NaN is created:
exp = concs.melt('plate', var_name='ab', value_name='concs').merge(table,on=['plate', 'ab'], how='left')
The solution can be simplified: if both DataFrames have the same column names 'plate' and 'ab' and you need to merge on both, it is possible to omit the on parameter:
exp = concs.melt('plate', var_name='ab', value_name='concs').merge(table, how='left')
First melt the concs dataframe and then merge with table:
out = (concs.melt(id_vars=['plate'],
                  value_vars=concs.columns.drop('plate').tolist(),
                  var_name='ab')
            .merge(table, on=['plate', 'ab'])
            .rename(columns={'value': 'concs'}))
Or just make good use of the parameters of melt, as in jezrael's answer:
out = (concs.melt(id_vars=['plate'],
                  value_name='concs',
                  var_name='ab')
            .merge(table, on=['plate', 'ab']))
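For illustration, a small end-to-end sketch of the melt-plus-merge approach, using hypothetical sample frames since the original data is not shown here:
import pandas as pd

# hypothetical stand-ins for the question's frames
concs = pd.DataFrame({'plate': ['p1', 'p2'],
                      'ab1': [0.1, 0.2],
                      'ab2': [0.3, 0.4]})
table = pd.DataFrame({'plate': ['p1', 'p1', 'p2'],
                      'ab': ['ab1', 'ab2', 'ab1'],
                      'signal': [10, 20, 30]})

# reshape concs from wide to long, then left-join table on the (plate, ab) keys
exp = concs.melt('plate', var_name='ab', value_name='concs').merge(table, how='left')
print(exp)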

After groupby, using agg, how to get one element based on a condition on other columns

I am using groupby to process many columns with different functions.
Here I use only one column, but I can't choose an element based on a condition on another column.
import pandas as pd
data = {'a':['A','C','E','J'],'b':[1,2,3,4]}
df = pd.DataFrame(data, index=[1,1,1,1])
# pseudocode -- this is what I would like agg to do:
df.groupby(level=0).agg({
    'b': 'sum',
    'b': select element from b where a = 'C'
})
The goal is to use agg to get this:
df.groupby(level=0).apply(lambda x:x.loc[x.a=='C','b'])
df.groupby(level=0).b.first()
df.groupby(level=0).b.sum()
   f  first  sum
1  2      1   10
No, you cannot use agg with multiple columns. agg aggregates the values of a single column; if you need conditions based on a separate column, you have to use apply.
df.groupby(level=0).apply(lambda x: pd.Series([x.loc[x.a == "C", 'b'].values[0],
                                               x.b.iloc[0],
                                               x.b.sum()],
                                              index=['f', 'first', 'sum']))
Output:
   f  first  sum
1  2      1   10
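For completeness, the same approach as a self-contained snippet using the question's sample data:
import pandas as pd

data = {'a': ['A', 'C', 'E', 'J'], 'b': [1, 2, 3, 4]}
df = pd.DataFrame(data, index=[1, 1, 1, 1])

out = df.groupby(level=0).apply(
    lambda x: pd.Series([x.loc[x.a == 'C', 'b'].values[0],  # b where a == 'C'
                         x.b.iloc[0],                        # first value of b
                         x.b.sum()],                         # sum of b
                        index=['f', 'first', 'sum']))
print(out)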

How to concat 3 dataframes with each into sequential columns

I'm trying to understand how to concat three individual dataframes (i.e. df1, df2, df3) into a new dataframe, say df4, whereby each individual dataframe occupies its own column(s) in left-to-right order.
I've tried using concat with axis = 1 to do this, but it appears not possible to automate this with a single action.
Table1_updated = pd.DataFrame(columns=['3P','2PG-3Io','3Io'])
Table1_updated=pd.concat([get_table1_3P,get_table1_2P_max_3Io,get_table1_3Io])
Note that with the exception of get_table1_2P_max_3Io, which has two columns, all other dataframes have one column.
For example,
get_table1_3P =
get_table1_2P_max_3Io =
get_table1_3Io =
Ultimately, I would like to see the following:
I believe you need to first concat and then change the order using a list of column names:
Table1_updated=pd.concat([get_table1_3P,get_table1_2P_max_3Io,get_table1_3Io], axis=1)
Table1_updated = Table1_updated[['3P','2PG-3Io','3Io']]
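A runnable sketch with hypothetical stand-in frames (the real ones are not shown here), just to illustrate the concat-then-reorder idea:
import pandas as pd

# hypothetical data: two one-column frames and one two-column frame, as described
get_table1_3P = pd.DataFrame({'3P': [1, 2, 3]})
get_table1_2P_max_3Io = pd.DataFrame({'2PG-3Io': [4, 5, 6], 'other': [0, 0, 0]})
get_table1_3Io = pd.DataFrame({'3Io': [7, 8, 9]})

# concat side by side, then keep only the wanted columns in left-to-right order
Table1_updated = pd.concat([get_table1_3P, get_table1_2P_max_3Io, get_table1_3Io], axis=1)
Table1_updated = Table1_updated[['3P', '2PG-3Io', '3Io']]
print(Table1_updated)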

Creating a dataframe column resulting from Groupby and transform

I have a dataframe of 4 columns: textID, A, B, C.
I would like to create a groupby object, calculate the 5th percentile on column C, and then add this column (calling it 'quantile') back to the original dataframe.
I have the following code, which works when the groupby is on one column:
df2['quantile'] = df2.C.groupby(df2.itextID).transform(lambda x: x.quantile(q=0.5))
Question 1:
How can this be extended so the groupby object uses two columns, i.e. textID and A?
Question 2:
Can the groupby object be created first and then the transform applied?
i.e.
# Create groupby object, extract top 4 rows in each group
grp = df2.groupby('textID').head(4)
# ??? how to apply the transform to column C?
(Can square-bracket notation be used rather than dot notation?)
An alternative is to pass a list of column names inside groupby and specify the column to process after the groupby, for transform or another function:
df2['quantile'] = (df2.groupby(['itextID', 'A'])['C']
                      .transform(lambda x: x.quantile(q=0.5)))
Here grp is a DataFrame, not a groupby object, because GroupBy.head returns a DataFrame:
grp = df2.groupby('textID').head(4)
But it is possible to create a groupby object by removing .head(4):
grp = df2.groupby('textID')
And then use head:
df = grp.head(4)
Or transform:
df2['new'] = grp['C'].transform(lambda x: x.quantile(q=0.5))
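To tie it together, a minimal sketch of the two-key groupby transform on hypothetical sample data (column names follow the code in this thread):
import pandas as pd

# hypothetical sample data
df2 = pd.DataFrame({'itextID': ['t1', 't1', 't2', 't2', 't2'],
                    'A': ['x', 'x', 'x', 'y', 'y'],
                    'B': [0, 1, 2, 3, 4],
                    'C': [1.0, 3.0, 2.0, 4.0, 6.0]})

# group by two keys and broadcast each group's quantile of C back to every row
df2['quantile'] = df2.groupby(['itextID', 'A'])['C'].transform(lambda x: x.quantile(q=0.5))
print(df2)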

Add a column using another column as its index [duplicate]

Is it possible to only merge some columns? I have a DataFrame df1 with columns x, y, z, and df2 with columns x, a, b, c, d, e, f, etc.
I want to merge the two DataFrames on x, but I only want to merge columns df2.a, df2.b - not the entire DataFrame.
The result would be a DataFrame with x, y, z, a, b.
I could merge then delete the unwanted columns, but it seems like there is a better method.
You want to use TWO brackets, so if you are doing a VLOOKUP sort of action:
df = pd.merge(df,df2[['Key_Column','Target_Column']],on='Key_Column', how='left')
This will give you everything in the original df + add that one corresponding column in df2 that you want to join.
You could merge the sub-DataFrame (with just those columns):
df2[list('xab')] # df2 but only with columns x, a, and b
df1.merge(df2[list('xab')])
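A quick sketch with hypothetical frames matching the question's column layout:
import pandas as pd

# hypothetical data
df1 = pd.DataFrame({'x': [1, 2, 3], 'y': ['p', 'q', 'r'], 'z': [True, False, True]})
df2 = pd.DataFrame({'x': [1, 2, 4], 'a': [10, 20, 40], 'b': [0.1, 0.2, 0.4], 'c': [7, 8, 9]})

# merge only x, a and b from df2; the result has columns x, y, z, a, b
result = df1.merge(df2[list('xab')])
print(result)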
If you want to drop column(s) from the target data frame, but the column(s) are required for the join, you can do the following:
df1 = df1.merge(df2[['a', 'b', 'key1']], how='left',
                left_on='key2', right_on='key1').drop(columns=['key1'])
The .drop(columns = 'key1') part will prevent 'key1' from being kept in the resulting data frame, despite it being required to join in the first place.
You can use .iloc to select specific columns across all rows and merge on that selection. An example is below:
pandas.merge(dataframe1, dataframe2.iloc[:, 0:5], how='left', on='key')
In this example, you are merging dataframe1 and dataframe2. You have chosen to do a left join on 'key'. For dataframe2 you have used .iloc, which lets you select the rows and columns you want by position. Using :, you're selecting all rows, and 0:5 selects the first 5 columns. You could use .loc to select columns by name, but if you're dealing with long column names, .iloc may be better.
This is to merge selected columns from two tables.
If table_1 contains columns t1_a, t1_b, t1_c, ..., id, ..., t1_z,
and table_2 contains columns t2_a, t2_b, t2_c, ..., id, ..., t2_z,
and only t1_a, id, t2_a are required in the final table, then
mergedCSV = table_1[['t1_a', 'id']].merge(table_2[['t2_a', 'id']], on='id', how='left')
# save resulting output file
mergedCSV.to_csv('output.csv',index = False)
Slight extension of the accepted answer for multi-character column names, using inner join by default:
df1 = df1.merge(df2[["Key_Column", "Target_Column1", "Target_Column2"]])
This assumes that Key_Column is the only column both dataframes have in common.