pandas time-weighted average groupby in panel data - pandas

Hi, I have a panel data set that looks like:
stock  date   time   spread1  weight  spread2
VOD    01-01  9:05   0.01     0.03    ...
VOD    01-01  9.12   0.03     0.05    ...
VOD    01-01  10.04  0.02     0.30    ...
VOD    01-02  11.04  0.02     0.05
...    ...    ...    ...      ...
BAT    01-01  0.05   0.04     0.03
BAT    01-01  0.07   0.05     0.03
BAT    01-01  0.10   0.06     0.04
I want to calculate the weighted average of spread1 for each stock on each day. I can break the solution into several steps: apply groupby and agg to get the sum of spread1*weight for each stock and day in dataframe1, then calculate the sum of weight for each stock and day in dataframe2, and finally merge the two data sets and compute the weighted average of spread1.
My question is: is there any simpler way to calculate the weighted average of spread1 here? I also have spread2, spread3 and spread4, so I want to write as little code as possible. Thanks

IIUC, you need to transform the result back onto the original rows, but using .transform with output that depends on two columns is tricky. We write our own function, to which we pass the Series of spread values s and the original DataFrame df so that we can also use the weights:
import numpy as np

def weighted_avg(s, df):
    return np.average(s, weights=df.loc[df.index.isin(s.index), 'weight'])

df['spread1_avg'] = df.groupby(['stock', 'date']).spread1.transform(weighted_avg, df)
Output:
  stock   date   time  spread1  weight  spread1_avg
0   VOD  01-01   9:05     0.01    0.03     0.020526
1   VOD  01-01   9.12     0.03    0.05     0.020526
2   VOD  01-01  10.04     0.02    0.30     0.020526
3   VOD  01-02  11.04     0.02    0.05     0.020000
4   BAT  01-01   0.05     0.04    0.03     0.051000
5   BAT  01-01   0.07     0.05    0.03     0.051000
6   BAT  01-01   0.10     0.06    0.04     0.051000
If needed for multiple columns:
gp = df.groupby(['stock', 'date'])
for col in [f'spread{i}' for i in range(1, 5)]:
    df[f'{col}_avg'] = gp[col].transform(weighted_avg, df)
Alternatively, if you don't need to transform back and want one value per stock-date:
import pandas as pd

def my_avg2(gp):
    avg = np.average(gp.filter(like='spread'), weights=gp.weight, axis=0)
    return pd.Series(avg, index=[col for col in gp.columns if col.startswith('spread')])
### Create some dummy data
df['spread2'] = df.spread1+1
df['spread3'] = df.spread1+12.1
df['spread4'] = df.spread1+1.13
df.groupby(['stock', 'date'])[['weight'] + [f'spread{i}' for i in range(1,5)]].apply(my_avg2)
# spread1 spread2 spread3 spread4
#stock date
#BAT 01-01 0.051000 1.051000 12.151000 1.181000
#VOD 01-01 0.020526 1.020526 12.120526 1.150526
# 01-02 0.020000 1.020000 12.120000 1.150000
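For completeness, the multi-step idea from the question (sum of spread*weight per group divided by the sum of weight) can also be written without a custom function; a minimal sketch, assuming the same column names as above:
import pandas as pd

spread_cols = [f'spread{i}' for i in range(1, 5)]
num = df[spread_cols].mul(df['weight'], axis=0).groupby([df['stock'], df['date']]).sum()
den = df.groupby(['stock', 'date'])['weight'].sum()
wavg = num.div(den, axis=0)   # one weighted average per stock-date and per spread column
This reproduces the per-group values shown above (e.g. 0.020526 for VOD on 01-01).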

Related

return list by dataframe linear interpolation

I have a dataframe that has, let's say, 5 entries:
   moment  stress  strain
0    0.12      13    0.11
1    0.23      14    0.12
2    0.56      15    0.56
I would like to get a 1D float list in the order [moment, stress, strain], based on linear interpolation at strain = 0.45.
I have read a couple of threads about the interpolate() method from pandas, but it is used when you have NaN entries to fill in.
How do I accomplish a similar task in my case?
Thank you
One method is to add a new row with NaN values to your dataframe and sort it:
new_row = pd.DataFrame([{"moment": np.nan, "stress": np.nan, "strain": 0.45}])
df = pd.concat([df, new_row], ignore_index=True)  # df.append was removed in pandas 2.0
df = df.sort_values(by="strain").set_index("strain")
df = df.interpolate(method="index")
print(df)
Prints:
moment stress
strain
0.11 0.1200 13.00
0.12 0.2300 14.00
0.45 0.4775 14.75
0.56 0.5600 15.00
To get the values back:
df = df.reset_index()
print(
    df.loc[df.strain == 0.45, ["moment", "stress", "strain"]]
    .to_numpy()
    .tolist()[0]
)
Prints:
[0.47750000000000004, 14.75, 0.45]
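Alternatively, if the original frame's strain column is already sorted ascending, np.interp gives the same numbers without modifying the frame; a minimal sketch under that assumption:
import numpy as np

target = 0.45
moment = np.interp(target, df["strain"], df["moment"])   # 0.4775
stress = np.interp(target, df["strain"], df["stress"])   # 14.75
result = [moment, stress, target]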

Merging 2 or more data frames and transposing the result

I have several DFs derived from a pandas resampling (binning) process using the code below;
df2 = df.resample(rule=timedelta(milliseconds=250))[('diffA')].mean().dropna()
df3 = df.resample(rule=timedelta(milliseconds=250))[('diffB')].mean().dropna()
.. etc
Every DF has a 'time' column in datetime format (example: 2019-11-22 13:18:00.000) and a second column containing a number (e.g. 0.06). Different DFs have different 'time' bins. I am trying to concatenate all DFs into one, where certain elements of the resulting DF may contain NaN.
The datetime format of the DFs gives an error when using:
method 1) df4 = pd.merge(df2, df3, left_on='time', right_on='time')
method 2) pd.pivot_table(df2, values='diffA', index=['time'], columns='time').reset_index()
When the DFs have been combined, I also want to transpose the resulting DF, where:
Rows: are 'DiffA', 'DiffB', etc.
Columns: are the time bins accordingly.
I have tried the transpose() method with individual DFs, just to try, but I get an error as my time index is in datetime format.
Once that is in place, I am looking for a method to extract rows from the resulting transposed DF as individual data series.
Please advise how I can achieve the above with some guidance; I appreciate any feedback. Thank you so much for your help.
Data frames (2, for example):
time DiffA
2019-11-25 08:18:01.250 0.06
2019-11-25 08:18:01.500 0.05
2019-11-25 08:18:01.750 0.04
2019-11-25 08:18:02.000 0
2019-11-25 08:18:02.250 0.22
2019-11-25 08:18:02.500 0.06
time DiffB
2019-11-26 08:18:01.250 0.2
2019-11-27 08:18:01.500 0.05
2019-11-25 08:18:01.000 0.6
2019-11-25 08:18:02.000 0.01
2019-11-25 08:18:02.250 0.8
2019-11-25 08:18:02.500 0.5
The resulting merged DF should be as follows (text only):
time ( first row )
2019-11-25 08:18:01.000,
2019-11-25 08:18:01.250,
2019-11-25 08:18:01.500,
2019-11-25 08:18:01.750,
2019-11-25 08:18:02.000,
2019-11-25 08:18:02.250,
2019-11-25 08:18:02.500,
2019-11-26 08:18:01.250,
2019-11-27 08:18:01.500
(second row)
diffA nan 0.06 0.05 0.04 0 0.22 0.06 nan nan
(third row)
diffB 0.6 nan nan nan 0.01 0.8 0.5 0.2 0.05
Solution
The core logic: you need an outer join on the column 'time' to merge each of the sampled dataframes together. Finally, setting 'time' as the index and transposing completes the solution.
I will use the dummy data I created below to make the solution reproducible.
Note: I have used df as the final dataframe and df0 as the original dataframe; my df0 is your df.
column_names = list('ABCDE')  # placeholder names for the sampled columns (not given in the original answer)
df = pd.DataFrame()
for i, column_name in zip(range(5), column_names):
    if i == 0:
        df = df0.sample(n=10, random_state=i).rename(columns={'data': f'df{column_name}'})
    else:
        df_other = df0.sample(n=10, random_state=i).rename(columns={'data': f'df{column_name}'})
        df = pd.merge(df, df_other, on='time', how='outer')
print(df.set_index('time').T)
Dummy Data
import numpy as np
import pandas as pd

# dummy data:
df0 = pd.DataFrame()
df0['time'] = pd.date_range(start='2020-02-01', periods=15, freq='D')
df0['data'] = np.random.randint(0, high=9, size=15)
print(df0)
Output:
time data
0 2020-02-01 6
1 2020-02-02 1
2 2020-02-03 7
3 2020-02-04 0
4 2020-02-05 8
5 2020-02-06 8
6 2020-02-07 1
7 2020-02-08 6
8 2020-02-09 2
9 2020-02-10 6
10 2020-02-11 8
11 2020-02-12 3
12 2020-02-13 0
13 2020-02-14 1
14 2020-02-15 0
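Applied to the question's own resampled data, the same idea gets even shorter if df2 and df3 are the Series produced by the resample calls (named 'diffA' and 'diffB', indexed by time); a sketch under that assumption:
import pandas as pd

# pd.concat outer-joins on the shared time index; .T then puts diffA/diffB on the rows
wide = pd.concat([df2, df3], axis=1).sort_index().T
diffA_series = wide.loc['diffA']   # extract one row back out as its own Series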

Pandas 'multi-index' issue in merging dataframes

I have a panel dataset as df
stock year date return
VOD 2017 01-01 0.05
VOD 2017 01-02 0.03
VOD 2017 01-03 0.04
... ... ... ....
BAT 2017 01-01 0.05
BAT 2017 01-02 0.07
BAT 2017 01-03 0.10
So I use this code to get the mean and the skewness of the return for each stock in each year.
df2=df.groupby(['stock','year']).mean().reset_index()
df3=df.groupby(['stock','year']).skew().reset_index()
df2 and df3 look fine.
df2 is like (after I change the column name)
stock year mean_return
VOD 2017 0.09
BAT 2017 0.14
... ... ...
df3 is like (after I change the column name)
stock year return_skewness
VOD 2017 -0.34
BAT 2017 -0.04
... ... ...
The problem is that when I tried to merge df2 and df3 using
want = pd.merge(df2, df3, on=['stock', 'year'], how='outer')
Python gave me
'The column label 'stock' is not unique.
For a multi-index, the label must be a tuple with elements corresponding to each level.'
which confuses me a lot.
I can use want = pd.merge(df2, df3, left_index=True, right_index=True, how='outer') to merge df2 and df3, but after that I have to rename the columns, as the column names end up in parentheses (tuples).
Is there any convenient way to merge df2 and df3? Thanks
Better is to use agg, specifying the aggregating functions as a list of (new_name, function) tuples and selecting the column to aggregate right after the groupby:
df3 = (df.groupby(['stock', 'year'])['return']
         .agg([('mean_return', 'mean'), ('return_skewness', 'skew')])
         .reset_index())
print (df3)
stock year mean_return return_skewness
0 BAT 2017 0.073333 0.585583
1 VOD 2017 0.040000 0.000000
Your solution can be changed by removing reset_index, renaming the Series instead, and concatenating at the end; also select the 'return' column for aggregation:
s2=df.groupby(['stock','year'])['return'].mean().rename('mean_return')
s3=df.groupby(['stock','year'])['return'].skew().rename('return_skewness')
df3 = pd.concat([s2, s3], axis=1).reset_index()
print (df3)
stock year mean_return return_skewness
0 BAT 2017 0.073333 0.585583
1 VOD 2017 0.040000 0.000000
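On pandas 0.25 or newer, named aggregation produces the same flat column names in a single call; a minimal sketch:
df3 = (df.groupby(['stock', 'year'])
         .agg(mean_return=('return', 'mean'),
              return_skewness=('return', 'skew'))
         .reset_index())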
EDIT:
If you need to aggregate all numeric columns, remove the ['return'] selection after the groupby, and then use map with join to flatten the MultiIndex columns:
print (df)
stock year date return col
0 VOD 2017 01-01 0.05 1
1 VOD 2017 01-02 0.03 8
2 VOD 2017 01-03 0.04 9
3 BAT 2017 01-01 0.05 1
4 BAT 2017 01-02 0.07 4
5 BAT 2017 01-03 0.10 3
df3 = df.groupby(['stock','year']).agg(['mean','skew'])
print (df3)
return col
mean skew mean skew
stock year
BAT 2017 0.073333 0.585583 2.666667 -0.935220
VOD 2017 0.040000 0.000000 6.000000 -1.630059
df3.columns = df3.columns.map('_'.join)
df3 = df3.reset_index()
print (df3)
stock year return_mean return_skew col_mean col_skew
0 BAT 2017 0.073333 0.585583 2.666667 -0.935220
1 VOD 2017 0.040000 0.000000 6.000000 -1.630059
Your solution would then be changed to:
df2=df.groupby(['stock','year']).mean().add_prefix('mean_')
df3=df.groupby(['stock','year']).skew().add_prefix('skew_')
df3 = pd.concat([df2, df3], axis=1).reset_index()
print (df3)
stock year mean_return mean_col skew_return skew_col
0 BAT 2017 0.073333 2.666667 0.585583 -0.935220
1 VOD 2017 0.040000 6.000000 0.000000 -1.630059
An easier way to bypass this issue:
df2.to_clipboard(index=False)
df2clip=pd.read_clipboard(sep='\t')
df3.to_clipboard(index=False)
df3clip=pd.read_clipboard(sep='\t')
Then merge 2 df again:
pd.merge(df2clip,df3clip,on=['stock','year'],how='outer')
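If the clipboard round-trip is undesirable (it depends on a system clipboard being available), flattening the column labels before merging should achieve the same, assuming df2/df3 ended up with MultiIndex or tuple column labels from the groupby:
for d in (df2, df3):
    d.columns = ['_'.join(map(str, c)).rstrip('_') if isinstance(c, tuple) else c
                 for c in d.columns]
want = pd.merge(df2, df3, on=['stock', 'year'], how='outer')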

How to access results from extractall on a dataframe

I have a dataframe df in which the column df.Type has dimension information about physical objects. The numbers appear inside a text string which I have successfully extracted using this code:
dftemp = df.Type.str.extractall(r"([-+]?\d*\.\d+|\d+)").astype(float)
But now, the problem is that results appear as:
              0
Unit match
5    0     0.02
     1     0.03
6    0     0.02
     1     0.02
7    0     0.02
...
How can I multiply these successive numbers (e.g. 0.02 * 0.03 = 0.006) and insert the result into the original dataframe df as a new column, say df.Area for each value of df.Type?
Thanks for your ideas!
I think you can do it with unstack and then prod along axis=1 like
print (dftemp.unstack().prod(axis=1))
then if I'm not mistaken, Unit is the name of the index in df, so I would say that
df['Area'] = dftemp.unstack().prod(axis=1)
should create the column you are looking for.
With an example:
df = pd.DataFrame({'Type': ['bla 0.03 dddd 0.02 jjk', 'bli 0.02 kjhg 0.02 wait']},
                  index=pd.Index([5, 6], name='Unit'))
df['Area'] = (df.Type.str.extractall(r"([-+]?\d*\.\d+|\d+)").astype(float)
                .unstack().prod(axis=1))
print (df)
Type Area
Unit
5 bla 0.03 dddd 0.02 jjk 0.0006
6 bli 0.02 kjhg 0.02 wait 0.0004
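An equivalent sketch that skips the reshape by taking the product within each Unit directly on the first index level (this assumes the index is named 'Unit', as in the example above):
df['Area'] = (df.Type.str.extractall(r"([-+]?\d*\.\d+|\d+)")
                .astype(float)[0]
                .groupby(level='Unit')
                .prod())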

SQL linear interpolation based on lookup table

I need to build linear interpolation into an SQL query, using a joined table containing lookup values (more like lookup thresholds, in fact). As I am relatively new to SQL scripting, I have searched for an example code to point me in the right direction, but most of the SQL scripts I came across were for interpolating between dates and timestamps and I couldn't relate these to my situation.
Basically, I have a main data table with many rows of decimal values in a single column, for example:
Main_Value
0.33
0.12
0.56
0.42
0.1
Now, I need to yield interpolated data points for each of the rows above, based on a joined lookup table with 6 rows, containing non-linear threshold values and the associated linear normalized values:
Threshold_Level Normalized_Value
0 0
0.15 20
0.45 40
0.60 60
0.85 80
1 100
So for example, if the value in the Main_Value column is 0.45, the query will lookup its position in (or between) the nearest Threshold_Level, and interpolate this based on the adjacent value in the Normalized_Value column (which would yield a value of 40 in this example).
I really would be grateful for any insight into building a SQL query around this, especially as it has been hard to track down any SQL examples of linear interpolation using a joined table.
It has been pointed out that I could use some sort of rounding, so I have included a more detailed table below. I would like the SQL query to lookup each Main_Value (from the first table above) where it falls between the Threshold_Min and Threshold_Max values in the table below, and return the 'Normalized_%' value:
Threshold_Min Threshold_Max Normalized_%
0.00 0.15 0
0.15 0.18 5
0.18 0.22 10
0.22 0.25 15
0.25 0.28 20
0.28 0.32 25
0.32 0.35 30
0.35 0.38 35
0.38 0.42 40
0.42 0.45 45
0.45 0.60 50
0.60 0.63 55
0.63 0.66 60
0.66 0.68 65
0.68 0.71 70
0.71 0.74 75
0.74 0.77 80
0.77 0.79 85
0.79 0.82 90
0.82 0.85 95
0.85 1.00 100
For example, if the value from the Main_Value table is 0.52, it falls between Threshold_Min 0.45 and Threshold_Max 0.60, so the Normalized_% returned is 50%. The problem is that the Threshold_Min and Max values are not linear. Could anyone point me in the direction of how to script this?
Assuming that for each Main_Value you want the Normalized_Value of the nearest threshold at or below it, you can do it like this:
select t1.Main_Value, max(t2.Normalized_Value) as Normalized_Value
from #t1 t1
inner join #t2 t2 on t1.Main_Value >= t2.Threshold_Level
group by t1.Main_Value
Replace #t1 and #t2 with the correct table names.