I have a data frame that contains per-hour kWh energy consumption (Consumption) for a house (ID), for a duration of a few months, e.g.:
ID Consumption
DateTime
2016-07-01 01:00:00 1642 0.703400
2016-07-01 02:00:00 1642 0.724033
2016-07-01 03:00:00 1642 0.747300
2016-07-01 04:00:00 1642 0.830450
2016-07-01 05:00:00 1642 0.704917
2016-07-01 06:00:00 1642 0.708467
2016-07-01 07:00:00 1642 0.806533
2016-07-01 08:00:00 1642 0.774483
2016-07-01 09:00:00 1642 0.724833
2016-07-01 10:00:00 1642 0.721900
2016-07-01 11:00:00 1642 0.729450
2016-07-01 12:00:00 1642 0.757233
2016-07-01 13:00:00 1642 0.744667
Here DateTime is the index, of type DatetimeIndex. My objective is to find the mean consumption and variance for each hour across the week, i.e. 24*7 = 168 hours:
HourOfWeek Consumption
1 0.703400
2 0.724033
...
168 0.876923
I have tried
print (df.groupby(df.index.week)['Consumption'].transform('mean'))
However, this doesn't give the right results. How can this be done in pandas? Any help would be much appreciated.
Even if late: I had a similar issue and I don't think the answer below is correct; it rather should be
df.groupby((df.index.dayofweek) * 24 + (df.index.hour)).mean().rename_axis('HourOfWeek')
With the answer below you end up with unwanted combinations, since the assigned groups are not unique: e.g. Monday 2 pm is grouped together with Tuesday 1 am, and so on.
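For completeness, here is a minimal sketch of the corrected, unique-key grouping, assuming df is indexed by the DatetimeIndex shown in the question; it also returns the variance the question asks for:
import pandas as pd

# unique key per hour of the week: Monday 00:00 -> 1, Sunday 23:00 -> 168
hour_of_week = df.index.dayofweek * 24 + df.index.hour + 1

# mean and variance of consumption for each of the 168 weekly hours
stats = (df.groupby(hour_of_week)['Consumption']
           .agg(['mean', 'var'])
           .rename_axis('HourOfWeek'))
print(stats)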
I think you need groupby with dayofweek and hour, but you need to add 1 because the first value is 0 in both. Then aggregate with mean:
df1 = (df.groupby((df.index.dayofweek + 1) * (df.index.hour + 1))['Consumption']
         .mean()
         .rename_axis('HourOfWeek')
         .reset_index())
print (df1)
HourOfWeek Consumption
0 10 0.703400
1 15 0.724033
2 20 0.747300
3 25 0.830450
4 30 0.704917
5 35 0.708467
6 40 0.806533
7 45 0.774483
8 50 0.724833
9 55 0.721900
10 60 0.729450
11 65 0.757233
12 70 0.744667
I tried countless answers to similar problems here on SO but couldn't find anything that works for this scenario. It's driving me nuts.
I have these two Dataframes:
df_op:
   index  Date                 Close   Name  LogRet
   0      2022-11-29 00:00:00  240.33  MSFT  -0.0059
   1      2022-11-29 00:00:00  280.57  QQQ   -0.0076
   2      2022-12-13 00:00:00  342.46  ADBE   0.0126
   3      2022-12-13 00:00:00  256.92  MSFT   0.0173
df_quotes:
   index  Date                 Close   Name
   72     2022-11-29 00:00:00  141.17  AAPL
   196    2022-11-29 00:00:00  240.33  MSFT
   73     2022-11-30 00:00:00  148.03  AAPL
   197    2022-11-30 00:00:00  255.14  MSFT
   11     2022-11-30 00:00:00  293.36  QQQ
   136    2022-12-01 00:00:00  344.11  ADBE
   198    2022-12-01 00:00:00  254.69  MSFT
   12     2022-12-02 00:00:00  293.72  QQQ
I would like to add a column to df_op that indicates the close of the stock in df_quotes 2 days later. For example, the first row of df_op should become:
   index  Date                 Close   Name  LogRet   Next
   0      2022-11-29 00:00:00  240.33  MSFT  -0.0059  254.69
In other words:
for each row in df_op, find the corresponding Name in df_quotes with Date of 2 days later and copy its Close to df_op in column 'Next'.
I tried tens of combinations like this without success:
df_quotes[df_quotes['Date'].isin(df_op['Date'] + pd.DateOffset(days=2)) & df_quotes['Name'].isin(df_op['Name'])]
How can I do this without resorting to loops?
Try this:
#first convert to datetime
df_op['Date'] = pd.to_datetime(df_op['Date'])
df_quotes['Date'] = pd.to_datetime(df_quotes['Date'])
#merge on Date and Name, but the date is offset 2 business days
(pd.merge(df_op,
          df_quotes[['Date', 'Close', 'Name']].rename({'Close': 'Next'}, axis=1),
          left_on=['Date', 'Name'],
          right_on=[df_quotes['Date'] - pd.tseries.offsets.BDay(2), 'Name'],
          how='left')
   .drop(['Date_x', 'Date_y'], axis=1))
Output:
Date index Close Name LogRet Next
0 2022-11-29 0 240.33 MSFT -0.0059 254.69
1 2022-11-29 1 280.57 QQQ -0.0076 NaN
2 2022-12-13 2 342.46 ADBE 0.0126 NaN
3 2022-12-13 3 256.92 MSFT 0.0173 NaN
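If calendar days rather than business days are wanted (the question only says "2 days later", so the business-day offset above is an interpretation), a similar sketch shifts df_op's dates forward by a fixed Timedelta instead; the TargetDate helper column is an assumption for illustration, not part of the original answer:
import pandas as pd

# shift each df_op date forward two calendar days, then look up that day's close
shifted = df_op.assign(TargetDate=df_op['Date'] + pd.Timedelta(days=2))
result = (shifted.merge(df_quotes[['Date', 'Name', 'Close']]
                            .rename(columns={'Close': 'Next'}),
                        left_on=['TargetDate', 'Name'],
                        right_on=['Date', 'Name'],
                        how='left',
                        suffixes=('', '_q'))
                 .drop(columns=['TargetDate', 'Date_q']))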
I have two pandas dataframes as follows:
ts1
Out[50]:
soil_moisture_ids41
date_time
2007-01-07 05:00:00 0.1830
2007-01-07 06:00:00 0.1825
2007-01-07 07:00:00 0.1825
2007-01-07 08:00:00 0.1825
2007-01-07 09:00:00 0.1825
... ...
2017-10-10 20:00:00 0.0650
2017-10-10 21:00:00 0.0650
2017-10-10 22:00:00 0.0650
2017-10-10 23:00:00 0.0650
2017-10-11 00:00:00 0.0650
[94316 rows x 3 columns]
and the other one is
ts2
Out[51]:
soil_moisture_ids42
date_time
2016-07-20 00:00:00 0.147
2016-07-20 01:00:00 0.148
2016-07-20 02:00:00 0.149
2016-07-20 03:00:00 0.150
2016-07-20 04:00:00 0.152
... ...
2019-12-31 19:00:00 0.216
2019-12-31 20:00:00 0.216
2019-12-31 21:00:00 0.215
2019-12-31 22:00:00 0.215
2019-12-31 23:00:00 0.215
[30240 rows x 3 columns]
You can see that from 2007-01-07 to 2016-07-19 only ts1 has data points, and from 2016-07-20 to 2017-10-11 the two time series overlap. Now I want to combine these two data frames. During the overlapped period, I want the mean of ts1 and ts2; during the non-overlapped periods (2007-01-07 to 2016-07-19 and 2017-10-12 to 2019-12-31), the value at each time stamp should be taken from whichever of ts1 or ts2 has data. So how can I do it?
Thanks!
Use concat with an aggregate mean: if a timestamp has only one value you get that same value back, and if it has multiple you get their mean. The resulting DatetimeIndex is also sorted:
s = pd.concat([ts1, ts2]).groupby(level=0).mean()
Just store the concatenated series first and then apply the mean, i.e. merged_ts = pd.concat([ts1, ts2]) and then mean_ts = merged_ts.groupby(level=0).mean()
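A self-contained toy example of this behaviour (index values and numbers invented for illustration; note this assumes both frames share the same column name, or are Series — with different column names like soil_moisture_ids41/42 you would rename one first):
import pandas as pd

idx1 = pd.date_range('2016-07-19 23:00', periods=2, freq='h')
idx2 = pd.date_range('2016-07-20 00:00', periods=2, freq='h')
ts1 = pd.DataFrame({'soil_moisture': [0.10, 0.20]}, index=idx1)
ts2 = pd.DataFrame({'soil_moisture': [0.30, 0.40]}, index=idx2)

# overlapping stamps are averaged, non-overlapping ones pass through unchanged
combined = pd.concat([ts1, ts2]).groupby(level=0).mean()
print(combined)
# 2016-07-19 23:00 -> 0.10
# 2016-07-20 00:00 -> (0.20 + 0.30) / 2 = 0.25
# 2016-07-20 01:00 -> 0.40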
I have 2 columns of data in a pandas DataFrame, with the "DateTime" column in format YYYY-MM-DD HH:MM:SS. This is the first 24 hours, but the df covers one full year, i.e. 8784 x 2.
BAFFIN BAY DateTime
8759 8.112838 2016-01-01 00:00:00
8760 7.977169 2016-01-01 01:00:00
8761 8.420204 2016-01-01 02:00:00
8762 9.515370 2016-01-01 03:00:00
8763 9.222840 2016-01-01 04:00:00
8764 8.872423 2016-01-01 05:00:00
8765 8.776145 2016-01-01 06:00:00
8766 9.030668 2016-01-01 07:00:00
8767 8.394983 2016-01-01 08:00:00
8768 8.092915 2016-01-01 09:00:00
8769 8.946967 2016-01-01 10:00:00
8770 9.620883 2016-01-01 11:00:00
8771 9.535951 2016-01-01 12:00:00
8772 8.861761 2016-01-01 13:00:00
8773 9.077692 2016-01-01 14:00:00
8774 9.116074 2016-01-01 15:00:00
8775 8.724343 2016-01-01 16:00:00
8776 8.916940 2016-01-01 17:00:00
8777 8.920438 2016-01-01 18:00:00
8778 8.926278 2016-01-01 19:00:00
8779 8.817666 2016-01-01 20:00:00
8780 8.704014 2016-01-01 21:00:00
8781 8.496358 2016-01-01 22:00:00
8782 8.434297 2016-01-01 23:00:00
I am trying to calculate daily averages of the "BAFFIN BAY" column, and I've tried these approaches:
davg_df2 = df2.groupby(pd.Grouper(freq='D', key='DateTime')).mean()
davg_df2 = df2.groupby(pd.Grouper(freq='1D', key='DateTime')).mean()
davg_df2 = df2.groupby(by=df2['DateTime'].dt.date).mean()
All of these approaches yield the same answer, as shown below:
BAFFIN BAY
DateTime
2016-01-01 6.008044
However, if you do the math, the correct average for 2016-01-01 is 8.813134. Thank you kindly for your help. I'm assuming the grouping is just by day (24 hrs) to make consecutive daily averages, but the 3 approaches above are clearly pulling in other data from my 8784 x 2 DF.
I just ran your df with this code and I get 8.813134:
df['DateTime'] = pd.to_datetime(df['DateTime'])
df = df.groupby(by=pd.Grouper(freq='D', key='DateTime')).mean()
print(df)
Output:
BAFFIN BAY
DateTime
2016-01-01 8.813134
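For what it's worth, an equivalent sketch using resample instead of groupby (resample is not from the original answer; same assumption that DateTime has been converted to datetime first):
import pandas as pd

df['DateTime'] = pd.to_datetime(df['DateTime'])
# resample needs a DatetimeIndex, so set it first, then average per calendar day
davg_df2 = df.set_index('DateTime').resample('D').mean()
print(davg_df2)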
I have a table with date (DATE), left time (VARCHAR2(4)) and arrival time (VARCHAR2(4)). Times are stored in 24-hour format as hhmm. If a person travels 3 times a day, what will be the query to calculate the total travel time in a day? I am using Oracle 11g. Kindly help. Thank you.
Convert each value to a number of minutes; for example, '0930' becomes 9*60 + 30 = 570 minutes. Note that Oracle's function is substr, not substring:
select to_number(substr(time, 1, 2)) * 60 + to_number(substr(time, 3, 2)) as minutes
Your query would sum the arrival-minus-left difference per trip, something like:
select person,
       sum(  to_number(substr(arrival_time, 1, 2)) * 60 + to_number(substr(arrival_time, 3, 2))
           - to_number(substr(left_time, 1, 2)) * 60 - to_number(substr(left_time, 3, 2))) as minutes
from t
group by person;
I see no reason to convert this back to a string -- or to even store the value as a string instead of as a number. But if you need to, you can reverse the process to get a string.
There are two answers. If you want to sum the time by date only, it can be done as:
select curr_date,
       sum(24 * (to_date(arrival_time, 'HH24:mi:ss') - to_date(left_time, 'HH24:mi:ss'))) as difference
from sql_prac
group by curr_date;
The sample output is as follows:
select curr_date,left_time,arrival_time from sql_prac;
CURR_DATE LEFT_TIME ARRIVAL_TIME
--------- -------------------- --------------------
30-JUN-17 00:00:00 15:00:00
30-JUL-17 03:30:00 11:30:00
30-AUG-17 03:00:00 12:30:00
30-SEP-17 04:00:00 17:00:00
30-JUN-17 00:00:00 15:00:00
30-JUL-17 03:30:00 11:30:00
30-AUG-17 03:00:00 12:30:00
30-SEP-17 04:00:00 17:00:00
30-SEP-17 04:00:00 17:00:00
9 rows selected
select curr_date,
       sum(24 * (to_date(arrival_time, 'HH24:mi:ss') - to_date(left_time, 'HH24:mi:ss'))) as difference
from sql_prac
group by curr_date;
CURR_DATE DIFFERENCE
--------- ----------
30-JUN-17 30
30-JUL-17 16
30-SEP-17 39
30-AUG-17 19
If you want to sum it by person and date, then it can be done as:
select dept, curr_date,
       sum(24 * (to_date(arrival_time, 'HH24:mi:ss') - to_date(left_time, 'HH24:mi:ss'))) as difference
from sql_prac
group by dept, curr_date
order by dept;
The sample output is as follows. The data in the table is:
select dept,curr_date,left_time,arrival_time from sql_prac;
DEPT CURR_DATE LEFT_TIME ARRIVAL_TIME
-------------------- --------- -------------------- --------------------
A 30-SEP-17 04:00:00 17:00:00
B 30-SEP-17 04:00:00 17:00:00
C 30-AUG-17 03:00:00 12:30:00
D 30-DEC-17 04:00:00 17:00:00
A 30-SEP-17 04:00:00 17:00:00
B 30-JUL-17 03:30:00 11:30:00
C 30-AUG-17 03:00:00 12:30:00
D 30-SEP-17 04:00:00 17:00:00
R 30-SEP-17 04:00:00 17:00:00
Data fetched using the query:
select dept, curr_date,
       sum(24 * (to_date(arrival_time, 'HH24:mi:ss') - to_date(left_time, 'HH24:mi:ss'))) as difference
from sql_prac
group by dept, curr_date
order by dept;
DEPT CURR_DATE DIFFERENCE
-------------------- --------- ----------
A 30-SEP-17 26
B 30-JUL-17 8
B 30-SEP-17 13
C 30-AUG-17 19
D 30-SEP-17 13
D 30-DEC-17 13
R 30-SEP-17 13
I would like to group a Pandas dataframe by hour disregarding the date.
My data:
id opened_at count sum
2016-07-01 07:02:05 1 46.14
154 2016-07-01 07:34:02 1 479
2016-07-01 10:10:01 1 127.14
2016-07-02 12:01:04 1 8.14
2016-07-02 12:00:50 1 18.14
I am able to group by hour with date taken into account by using the following.
groupByLocationDay = df.groupby([df.id,
pd.Grouper(key='opened_at', freq='3h')])
I get the following
id opened_at count sum
2016-07-01 06:00:00 2 4296.14
154 2016-07-01 09:00:00 46 43716.79
2016-07-01 12:00:00 169 150827.14
2016-07-02 12:00:00 17 1508.14
2016-07-02 09:00:00 10 108.14
How can I group by hour only, so that it would look like the following?
id opened_at count sum
06:00:00 2 4296.14
154 09:00:00 56 43824.93
12:00:00 203 152335.28
The original data is on an hourly basis, and I need to aggregate it at a 3-hour frequency.
Thanks!
you can do it this way:
In [134]: df
Out[134]:
id opened_at count sum
0 154 2016-07-01 07:02:05 1 46.14
1 154 2016-07-01 07:34:02 1 479.00
2 154 2016-07-01 10:10:01 1 127.14
3 154 2016-07-02 12:01:04 1 8.14
4 154 2016-07-02 12:00:50 1 18.14
5 154 2016-07-02 08:34:02 1 479.00
In [135]: df.groupby(['id', df.opened_at.dt.hour // 3 * 3]).sum()
Out[135]:
count sum
id opened_at
154 6 3 1004.14
9 1 127.14
12 2 26.28
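If the bin labels should look like times (06:00:00, 09:00:00, ...) as in the desired output, one way is to reformat the hour level of the index afterwards. A sketch, not from the original answer; it assumes only the count and sum columns should be aggregated:
# same grouping as above, selecting the numeric columns explicitly
out = df.groupby(['id', df.opened_at.dt.hour // 3 * 3])[['count', 'sum']].sum()

# relabel the hour level, e.g. 6 -> '06:00:00', to match the desired output
out.index = out.index.set_levels(
    ['%02d:00:00' % h for h in out.index.levels[1]], level='opened_at')
print(out)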