I have a dataframe and need to apply the same calculations to many columns; currently I'm doing it manually.
Is there a good, elegant way to do this?
import numpy as np
import pandas as pd

tt = pd.DataFrame(data={'Status': ['green', 'green', 'red', 'blue', 'red', 'yellow', 'black'],
                        'Group': ['A', 'A', 'B', 'C', 'A', 'B', 'C'],
                        'City': ['Toronto', 'Montreal', 'Vancouver', 'Toronto', 'Edmonton', 'Winnipeg', 'Windsor'],
                        'Sales': [13, 6, 16, 8, 4, 3, 1],
                        'Counts': [100, 200, 50, 30, 20, 10, 300]})
ss = tt.groupby('Group').agg({'Sales': ['count', 'mean', np.median],
                              'Counts': ['count', 'mean', np.median]})
ss.columns = ['_'.join(col).strip() for col in ss.columns.values]
So the result is
How could I apply the same calculations (count, mean, median) to many columns at once if I have a very large dataframe?
In pandas, agg accepts either a single method or a list of methods to apply to the relevant columns and returns a summary of the outputs. Passing a list of functions to the aggregator applies the same set of functions to every column in the group-by result. In your case you were passing a dictionary, which forces you to handle each column individually and makes the whole thing very manual. Happy to explain further if anything is unclear.
ss = tt.groupby('Group').agg(['count', 'mean', 'median'])
ss.columns = ['_'.join(col).strip() for col in ss.columns.values]
ss
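If you only need these statistics for a subset of columns, a dict comprehension keeps it just as compact. A minimal sketch, assuming Sales and Counts are the columns of interest:
cols = ['Sales', 'Counts']  # assumption: the numeric columns you care about
ss = tt.groupby('Group').agg({c: ['count', 'mean', 'median'] for c in cols})
ss.columns = ['_'.join(col).strip() for col in ss.columns.values]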
I have the following DataFrame with 3 columns a, b, c. I grouped the DF by c
dfByC = groupby(df, [:C])
How can I select a group from dfByC for a certain value of c?
Do:
dfByC[(the_value_you_have,)]
or
dfByC[(C=the_value_you_have,)]
or
dfByC[Dict(:C => the_value_you_have)]
In essence - you can do such selection by passing a Tuple, a NamedTuple or a dictionary.
The reason why it is not allowed to just write dfByC[the_value_you_have] is that you can also index a GroupedDataFrame by an integer, which returns the consecutive group, so we need some wrapper to disambiguate. Also, if you group by multiple columns, you need some wrapper to keep the values together.
Group selection by grouping-variable value is also fast, so you can safely write code that performs millions of such lookups and it will be efficient.
Normally when I have to make aggregations, I use something like the following code in PySpark:
import pyspark.sql.functions as f

df_aggregate = df.groupBy('id')\
    .agg(f.mean('value_col').alias('value_col_mean'))
Now I actually want to compute the average or mean on multiple subsets of the dataframe df (i.e. on different time windows, for example a mean for the last year, a mean for the last 2 years, etc.). I understand I could do df.filter(f.col(filter_col) >= condition).groupBy.... for every subset, but instead I would prefer to do this in one 'go'.
Is it possible to apply the filtering within the .agg(..) part of PySpark?
Edit
Example data for one id looks like (the real data contains many values for id):
You can put the conditions inside a when statement, and put them all inside .agg:
import pyspark.sql.functions as f

df_aggregate = df.withColumn('value_col', f.regexp_replace('value_col', ',', '.'))\
    .groupBy('id')\
    .agg(f.mean(f.when(last_year_condition, f.col('value_col'))).alias('value_col_mean_last_year'),
         f.mean(f.when(last_two_years_condition, f.col('value_col'))).alias('value_col_mean_last_two_years'))
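This works because when() without otherwise() produces null for rows outside the window, and mean() ignores nulls, so each aggregate only sees its own subset. The conditions themselves could be defined roughly like this; a sketch only, assuming a date column named date_col (the column name and the exact window logic are assumptions, not from the question):
import pyspark.sql.functions as f

# Hypothetical window conditions; 'date_col' is an assumed column name.
last_year_condition = f.col('date_col') >= f.add_months(f.current_date(), -12)
last_two_years_condition = f.col('date_col') >= f.add_months(f.current_date(), -24)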
I have a dataframe of monthly returns for 1,000 stocks, with ids as column names (the monthly-returns dataframe).
I need to select only the columns that match the values in another dataframe, which holds the ids I want (the permno list).
I'm sure this is really quite simple, but I have been struggling for 2 days and if someone has an easy solution it would be so very much appreciated. Thank you.
You could convert the single-column permno list dataframe (osr_curr_permnos) into a list, and then use that list to select certain columns from your main dataframe (all_rets).
To convert the osr_curr_permnos column "0" into a list, you can use .to_list()
Then, you can use that list to slice all_rets and .copy() to make a fresh copy of it into a new dataframe.
The python code might look something like:
keep = osr_curr_permnos['0'].to_list()
selected_rets = all_rets[keep].copy()
"keep" would be a list, and "selected_rets" would be your new dataframe.
If there's a chance that osr_curr_permnos would have duplicates, you'll want to filter those out:
keep = osr_curr_permnos['0'].drop_duplicates().to_list()
selected_rets = all_rets[keep].copy()
As I expected, the answer was more simple than I was making it. Basically, I needed to take the integer values in my permnos list and recast those as strings.
osr_curr_permnos['0'] = osr_curr_permnos['0'].apply(str)
keep = osr_curr_permnos['0'].values
Then I can use that to select columns from my returns dataframe which had string values as column headers.
all_rets[keep]
It was all just a mismatch of int vs. string.
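An equivalent shortcut (just a sketch, reusing the same frames as above) is to cast the whole permno column to strings in one go and slice directly:
keep = osr_curr_permnos['0'].astype(str).tolist()
selected_rets = all_rets[keep].copy()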
I have two CSV files and I would like to combine and sort their data based on one common column.
Here are the data1.csv and data2.csv files:
The data3.csv is the output file where I need the data to be combined and sorted as below:
How can I achieve this?
Here's what I think you want to do here:
I created two dataframes with simple types, assume the first column is like your timestamp:
import pandas as pd

df1 = pd.DataFrame([[1, 1], [2, 2], [7, 10], [8, 15]], columns=['timestamp', 'A'])
df2 = pd.DataFrame([[1, 5], [4, 7], [6, 9], [7, 11]], columns=['timestamp', 'B'])
c = df1.merge(df2, how='outer', on='timestamp')
print(c)
The outer merge causes each contributing DataFrame to be fully present in the output even if not matched to the other DataFrame.
The result is that you end up with a DataFrame with a timestamp column and the dependent data from each of the source DataFrames.
Caveats:
You have repeating timestamps in your second sample, which I assume may be due to the fact you do not show enough resolution. You would not want true duplicate records for this merge solution, as we assume timestamps are unique.
I have not repeated the timestamp column here a second time, but it is easy to add in another timestamp column based on whether column A or B is notnull() if you really need to have two timestamp columns. Pandas merge() has an indicator option which would show you the source of the timestamp if you did not want to rely on columns A and B.
In the post you show two output columns named "timestamp". Generally you would not output two columns with the same name, since they would only be distinguished by position (or colour), which are not properties you should rely on.
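Putting it together for the original CSV files, the round trip might look roughly like this; the file names come from the question, but the common column name ('timestamp') is an assumption since the actual headers are not shown:
import pandas as pd

key = 'timestamp'  # assumed name of the common column

df1 = pd.read_csv('data1.csv')
df2 = pd.read_csv('data2.csv')

combined = df1.merge(df2, how='outer', on=key).sort_values(key)
combined.to_csv('data3.csv', index=False)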
import pandas as pd

n1 = pd.DataFrame({'zhanghui': [1, 2, 3, 4], 'wudi': [17, 'gx', 356, 23], 'sas': [234, 51, 354, 123]})
n2 = pd.DataFrame({'zhanghui_x': [1, 2, 3, 5], 'wudi': [17, 23, 'sd', 23], 'wudi_x': [17, 23, 'x356', 23],
                   'wudi_y': [17, 23, 'y356', 23], 'ddd': [234, 51, 354, 123]})
The code above defines two DataFrame objects. I want to use the 'zhanghui' field from n1 and the 'zhanghui_x' field from n2 as the "on" fields to merge n1 and n2, so my code looks like this:
n1.merge(n2, how='inner', left_on='zhanghui', right_on='zhanghui_x')
and the resulting columns look like this:
sas wudi_x zhanghui ddd wudi_y wudi_x wudi_y zhanghui_x
Some duplicate columns appeared, such as 'wudi_x' and 'wudi_y'.
So is this a pandas merge problem, or did I use pd.merge incorrectly?
From the pandas documentation, the merge() function has the following signature:
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
left_index=False, right_index=False, sort=True,
suffixes=('_x', '_y'), copy=True, indicator=False,
validate=None)
where suffixes denotes the suffix strings to be attached to 'overlapping' columns, with defaults '_x' and '_y'.
I'm not sure if I understood your follow-up question correctly, but;
#case1
if the first dataFrame has column 'column_name_x' and the second dataFrame has column 'column_name', then there are no overlapping columns and therefore no suffixes are attached.
#case2
if the first dataFrame has columns 'column_name' and 'column_name_x', and the second dataFrame also has column 'column_name', the default suffixes are attached to the overlapping columns, so the first frame's 'column_name' becomes 'column_name_x' and duplicates an already existing column.
You can, however, pass None as one (not both) of the suffixes to ensure that the column names of that dataFrame remain as-is.
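A minimal sketch of both options, reusing the frames from the question (the custom suffix strings are only illustrative, and passing None for one side needs a reasonably recent pandas):
# Explicit, non-clashing suffixes for overlapping columns.
m1 = n1.merge(n2, how='inner', left_on='zhanghui', right_on='zhanghui_x',
              suffixes=('_n1', '_n2'))

# Keep n1's column names untouched; only n2's overlapping columns get a suffix.
m2 = n1.merge(n2, how='inner', left_on='zhanghui', right_on='zhanghui_x',
              suffixes=(None, '_n2'))
print(m1.columns.tolist())
print(m2.columns.tolist())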
Your approach is right; pandas automatically appends suffixes (_x, _y, etc.) to columns that would otherwise duplicate headers already present after the merge.
You can first select which columns to merge and then proceed:
cols_to_use = n2.columns - n1.columns
n1.merge(n2[cols_to_use], how='inner', left_on='zhanghui', right_on='zhanghui_x')
result columns:
sas wudi zhanghui ddd wudi_x wudi_y zhanghui_x
When I tried to run cols_to_use = n2.columns - n1.columns, it gave me a TypeError:
cannot perform __sub__ with this index type: <class 'pandas.core.indexes.base.Index'>
Then I tried the code below:
cols_to_use = [i for i in list(n2.columns) if i not in list(n1.columns)]
It worked fine, and the resulting columns are:
sas wudi zhanghui ddd wudi_x wudi_y zhanghui_x
So #S Ringne's method really resolved my problem.
=============================================
Pandas simply adds a suffix such as '_x' to resolve duplicate column names when merging two DataFrame objects.
But what happens if a name of the form 'a-column-name' + '_x' already exists in one of the frames? I used to think pandas would check whether such a name already appears, but apparently it does not perform this check?
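One way to see what pandas actually does here is to try it on a tiny example where a suffixed name already exists; this is just an experiment sketch, not a claim about what the output will be:
import pandas as pd

# 'a' overlaps between the two frames, and the left frame already contains 'a_x'.
left = pd.DataFrame({'key': [1, 2], 'a': [10, 20], 'a_x': [0, 0]})
right = pd.DataFrame({'key': [1, 2], 'a': [30, 40]})

merged = left.merge(right, on='key')
print(merged.columns.tolist())  # inspect how the suffixed names collide (or not)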