What is timestamp granularity computing in pandas?

I have the following dataset
import numpy as np
import pandas as pd

df = pd.DataFrame({'timestamp': pd.date_range('1/1/2020', '3/1/2020 23:59', freq='12h'),
                   'col1': np.random.randint(100, size=122)}).sort_values('timestamp')
I want to compute daily, weekly and monthly sums of col1. If I use 'W' granularity for the timestamp column I receive a ValueError: <Week: weekday=6> is a non-fixed frequency, and I read that it is recommended to use 7D, 30D etc.
My question is: how does pandas compute 7D or 30D granularity? If I add another column
df['timestamp2']= df.timestamp.dt.floor('30D')
df.groupby('timestamp2')[['col1']].sum()
I get the following result:
            col1
timestamp2
2019-12-10   778
2020-01-09  3100
2020-02-08  2470
Why does pandas return those dates if my minimum date is Jan 1, 2020 and my maximum timestamp is Mar 1, 2020?

The origin is the POSIX origin: 1970-01-01. By using .floor('30D') the allowable values are 1970-01-01, 1970-01-31, ... and all other 30-day multiples. Your dates fall into the bins that start at the 608th, 609th and 610th multiples.
pd.to_datetime('1970-01-01') + pd.DateOffset(days=30*608)
#Timestamp('2019-12-10 00:00:00')
pd.to_datetime('1970-01-01') + pd.DateOffset(days=30*609)
#Timestamp('2020-01-09 00:00:00')
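You can verify which multiple a given timestamp floors to:
epoch = pd.Timestamp('1970-01-01')
# number of whole 30-day periods between the POSIX origin and the first observation
n = (pd.Timestamp('2020-01-01') - epoch) // pd.Timedelta('30D')
n                                   # 608
epoch + n * pd.Timedelta('30D')     # Timestamp('2019-12-10 00:00:00'), i.e. the floored value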
If what you want is instead 30D periods from your first observation, then resample is how you can aggregate:
df.resample('30D', on='timestamp')['timestamp'].agg(['min', 'max'])
min max
timestamp
2020-01-01 2020-01-01 2020-01-30 12:00:00 # starts from 1st date
2020-01-31 2020-01-31 2020-02-29 12:00:00
2020-03-01 2020-03-01 2020-03-01 12:00:00
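For the daily, weekly and monthly sums asked about above, one way (a sketch using the same df; the variable names are only illustrative) is to resample on the timestamp column:
# sums in bins anchored at the first observation
daily   = df.resample('D', on='timestamp')['col1'].sum()
weekly  = df.resample('7D', on='timestamp')['col1'].sum()
monthly = df.resample('30D', on='timestamp')['col1'].sum()
# unlike dt.floor, resample also accepts calendar frequencies such as 'W'
weekly_calendar = df.resample('W', on='timestamp')['col1'].sum()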

Related

minimum value of a resample (not 0)

I have a dataframe (df) indexed by dates (freq: 15 minutes); a small example:
datetime             Value
2019-09-02 16:15:00   0.00
2019-09-02 16:30:00   3.07
2019-09-02 16:45:00   1.05
I want to resample my dataframe to a frequency of 1 month and calculate the min value in each month, like this:
df_min = df.resample('1M').min()
Up to this point all is good, but I need the min value to not be 0, so I want something like min(i > 0), but I don't know how to get it.
Here is one way to do it (assumption: datetime is the index):
import numpy as np

# replace 0 with NaN so zeros are ignored, then take the monthly min
df_min = df.replace(0, np.nan).resample('1M').min()
            Value
datetime
2019-09-30   1.05
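An equivalent sketch that skips the explicit replace, assuming the same indexed frame: mask the zeros only for the aggregation.
# treat 0 as missing, then take the monthly min
df_min = df.mask(df == 0).resample('1M').min()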

Interpolate hourly load for selected months of a year from the same months of the previous and the next year in python pandas?

I have the following three dataframes:
df1:
date_time system_load
01-01-2017 00:00:00 208111
01-01-2017 01:00:00 208311
01-01-2017 02:00:00 208311
01-01-2017 03:00:00 208011
............... ...
31-12-2017 20:00:00 208611
31-12-2017 21:00:00 208411
31-12-2017 22:00:00 208111
31-12-2017 23:00:00 208911
The system load values in df1 are complete.
df2:
date_time system_load
01-01-2018 00:00:00 208111
01-01-2018 01:00:00 208311
01-01-2018 02:00:00 208311
01-01-2018 03:00:00 208011
............... ...
31-12-2018 20:00:00 209611
31-12-2018 21:00:00 209411
31-12-2018 22:00:00 209111
31-12-2018 23:00:00 209911
The system load values in df2 are missing from 06-03-2018 20:00:00 up to 24-10-2018 22:00:00.
df3:
date_time system_load
01-01-2019 00:00:00 309119
01-01-2019 01:00:00 309391
01-01-2019 02:00:00 309811
01-01-2019 03:00:00 309711
............... ...
31-12-2019 20:00:00 309611
31-12-2019 21:00:00 309411
31-12-2019 22:00:00 309111
31-12-2019 23:00:00 309911
The system load values in df3 are complete.
What I want is to interpolate, in a suitable way, the missing hourly records in df2 using the corresponding df1 and df3 hourly records (06-03-2017 20:00:00 up to 24-10-2017 22:00:00 and 06-03-2019 20:00:00 up to 24-10-2019 22:00:00, respectively). Based on "Pierre D"'s valuable comment I attached my scaled data.
Here is a very basic strategy that just takes data from neighboring years to fill the missing values. The offset is chosen to be precisely 52 weeks, so as to reflect possible weekly seasonality.
# get the whole series together, and resample to have missing data as NaN:
s = pd.concat([df1, df2, df3])['system_load'].resample('H').asfreq()
offset = 52 * 7 * 24 # 52 weeks, 7 days/week, 24 hours/day
filler = pd.concat([s.shift(offset), s.shift(-offset)], axis=1).mean(axis=1)
out = s.where(~s.isna(), filler)
# optional: make a new df2 with the filled values
df2mod = out.truncate(
    before='2018',
    after=pd.Timestamp('2019') - pd.Timedelta(1)   # one nanosecond before 2019, i.e. the end of 2018
).to_frame('system_load')
Notes:
out contains the "filled" series for the whole system_load using neighboring years.
we use pandas.DataFrame.mean() to build the filler series as the mean of the two neighboring years, in a way that handles NaN (e.g. if one of the two years is NaN, the mean is simply the other, non-NaN value).
this is one of the most basic ways of filling the missing data, and likely won't fool a careful observer. Depending on the intended usage of the reconstructed data, a more elaborate strategy should be considered. Data reconstruction is an active field of research, and there are sophisticated methods in the literature. For example, one could use a GAN to build a resulting series that would be very hard to discriminate from real data.
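A tiny illustration of that skipna behaviour of mean(axis=1):
import numpy as np
import pandas as pd

demo = pd.concat([pd.Series([1.0, np.nan, np.nan]),
                  pd.Series([3.0, 5.0, np.nan])], axis=1)
demo.mean(axis=1)
# 0    2.0   both years present -> their average
# 1    5.0   one year missing   -> the other value is used
# 2    NaN   both missing       -> stays NaN, so the gap remains in `out`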

pandas groupby several criteria

I have a dataframe that contains every minute of a year.
I need to simplify it to an hourly basis, keeping only the hours of the year and the maximum of the Reserved and Used columns for each hour.
I made this, which works, but not entirely for my purposes:
df = df.assign(date=df.date.dt.round('H'))
df1 = df.groupby('date').agg({'Reserved': ['max'], 'Used': ['max'] }).droplevel(1, axis=1).reset_index()
which just groups the minutes into hours.
date Reserved Used
0 2020-01-01 00:00:00 2176 0.0
1 2020-01-01 01:00:00 2176 0.0
2 2020-01-01 02:00:00 2176 0.0
3 2020-01-01 03:00:00 2176 0.0
4 2020-01-01 04:00:00 2176 0.0
... ... ... ...
8780 2020-12-31 20:00:00 3450 50.0
8781 2020-12-31 21:00:00 3450 0.0
8782 2020-12-31 22:00:00 3450 0.0
8783 2020-12-31 23:00:00 3450 0.0
8784 2021-01-01 00:00:00 3450 0.0
Now I need to group it further to plot several curves, each containing only 24 points (one for every hour of the day), based on several criteria:
average used and reserved for the whole year (so to group together every 00 hour, every 01 hour, etc.)
average used and reserved for every month (so to group every 00 hour, 01 hour etc for each month individually)
average used and reserved for weekdays and for weekends
I know this is just a similar groupby to the one above, but I am somehow missing the logic of doing it.
Could anybody help?
Thanks.
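One possible sketch of those groupbys, assuming the hourly frame produced above is df1 with a datetime column date (the grouping keys follow the three criteria listed; the variable names are only illustrative):
# whole-year profile: one point per hour of day
hourly_avg = df1.groupby(df1['date'].dt.hour)[['Reserved', 'Used']].mean()

# per-month profiles: 24 points for each month
monthly_hourly_avg = (df1.groupby([df1['date'].dt.month, df1['date'].dt.hour])
                         [['Reserved', 'Used']].mean())

# weekday vs. weekend profiles: 24 points for each group
is_weekend = df1['date'].dt.dayofweek >= 5
weekend_hourly_avg = df1.groupby([is_weekend, df1['date'].dt.hour])[['Reserved', 'Used']].mean()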

expand datetime data in pandas like interpolation

I have the following data of dates, and every date is assigned the value 1.
Is there a way to get an hourly DateTime series in pandas such that all the values are 0 except for the ones I have in my xls file?
It is similar to interpolating, but interpolating just interpolates, whereas here I want the rest of the dates to be filled with 0. I want the entire 24 hours of the dates below to be assigned the value 1. I tried to do it with a for loop, but it takes far too long and is impractical.
Use the pandas datetime accessor pd.Series.dt.date to extract the date part from datetime objects, and then use .isin() to match the values.
from datetime import date, datetime, timedelta
import pandas as pd

# sample data
df = pd.DataFrame({  # list of dates
    "date": [date(2020, 10, 2), date(2020, 10, 4)]
})
df_hr = pd.DataFrame({  # list of hours from Oct. 1 to 4
    "hr": [datetime(2020, 10, 1, 0, 0) + i * timedelta(hours=1) for i in range(24 * 4)]
})
df_hr["flag"] = 0
df_hr.loc[df_hr["hr"].dt.date.isin(df["date"]), "flag"] = 1
# show the first and last hour of each day
df_hr.loc[[0,23,24,47,48,71,72,95]]
Out[111]:
hr flag
0 2020-10-01 00:00:00 0
23 2020-10-01 23:00:00 0
24 2020-10-02 00:00:00 1
47 2020-10-02 23:00:00 1
48 2020-10-03 00:00:00 0
71 2020-10-03 23:00:00 0
72 2020-10-04 00:00:00 1
95 2020-10-04 23:00:00 1

How to fill pandas dataframe with max() values

I have a dataframe where each day starts at 7:00 and ends at 22:10 in 5 minute intervals.
The df contains around 200 days (weekend days and some specific days are excluded).
Date Time Volume
0 2019-09-03 07:00:00 70000 778
1 2019-09-03 07:05:00 70500 1267
2 2019-09-03 07:10:00 71000 1208
3 2019-09-03 07:15:00 71500 715
4 2019-09-03 07:20:00 72000 372
I need another column, let's call it 'lastdayVolume', with the maximum Volume value of the prior day.
For example, if on 2019-09-03 (between 7:00 and 22:10) the maximum volume value in a single row is 50000, then I need every row of 2019-09-04 to have the value 50000 in column 'lastdayVolume'.
How would you do this without decreasing the length of the dataframe?
Have you tried
df.resample('1D', on='Date').max()
This should give you one row per day with the maximum value for that day.
EDIT: To combine that with the old data, you can use a left join. It's a bit messy, but
pd.merge(df,
         df.resample('1D', on='Date')['Volume'].max().rename('lastdayVolume'),
         left_on=pd.to_datetime((df['Date'] - pd.Timedelta('1d')).dt.date),
         right_index=True, how='left')
Out[54]:
Date Time Volume lastdayVolume
0 2019-09-03 07:00:00 70000 778 800.0
1 2019-09-03 07:05:00 70500 1267 800.0
2 2019-09-03 07:10:00 71000 1208 800.0
3 2019-09-03 07:15:00 71500 715 800.0
4 2019-09-03 07:20:00 72000 372 800.0
0 2019-09-02 08:00:00 70000 800 NaN
seems to work out.
Equivalently you can use the slightly shorter
df.join(df.resample('1D', on='Date')['Volume'].max().rename('lastdayVolume'), on=pd.to_datetime((df['Date'] - pd.Timedelta('1d')).dt.date))
here.
The first DataFrame is your old one, the second is the one calculated above (with appropriate renaming). For the values to merge on, the left side uses your 'Date' column, which contains timestamps, offset by one day and converted to an actual date; the right side simply uses the index.
The left join ensures you don't accidentally drop rows if you have no transactions the day before.
EDIT 2: To find the maximum in a certain time range, you can use
df.set_index('Date').between_time('15:30:00', '22:10:00')
to filter the DataFrame. Afterwards resample as before
df.join(df.set_index('Date').between_time('15:30:00', '22:10:00').resample('1D')...
where the on parameter in the resample is no longer necessary as the Date went into the index.
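A simpler sketch of the same idea, assuming Date is a datetime64 column as above: compute each day's maximum once and map every row onto the previous calendar day's value (rows whose prior day has no data get NaN, just as with the left join).
# daily maximum keyed by the normalized (midnight) timestamp of each day
daily_max = df.groupby(df['Date'].dt.normalize())['Volume'].max()
# look up the previous calendar day's maximum for every row
df['lastdayVolume'] = (df['Date'].dt.normalize() - pd.Timedelta('1d')).map(daily_max)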