Why datetime indices in two data frames are not matching

This is a follow-on to my most recent question. Thanks to Bob Haffner I have almost got a solution for updating columns in one df using values in another df.
I have df1:
                   shares
trans_date symbol
2011-01-13 AAPL     -1500
           IBM       4000
2011-01-26 GOOG      1000
2011-02-02 XOM      -4000
2011-02-10 XOM       4000
2011-03-03 GOOG     -1000
           IBM      -2200
2011-05-03 IBM       1500
2011-06-03 IBM      -3300
2011-06-10 AAPL      1200
2011-08-01 GOOG         0
2011-12-20 AAPL     -1200
df1 info:
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 12 entries, (2011-01-13 00:00:00, AAPL) to (2011-12-20 00:00:00, AAPL)
Data columns (total 1 columns):
shares 12 non-null int64
dtypes: int64(1)
memory usage: 192.0+ bytes
And this is df2:
                  0
2011-01-13 GOOG   0
           AAPL   0
           XOM    0
           IBM    0
           _CASH  0
2011-01-26 GOOG   0
           AAPL   0
           XOM    0
           IBM    0
           _CASH  0
2011-02-02 GOOG   0
           AAPL   0
           XOM    0
           IBM    0
           _CASH  0
2011-02-10 GOOG   0
           AAPL   0
           XOM    0
           IBM    0
           _CASH  0
2011-03-03 GOOG   0
           AAPL   0
           XOM    0
           IBM    0
           _CASH  0
...
df2 info:
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 65 entries, (2011-01-13, GOOG) to (2011-12-20, _CASH)
Data columns (total 1 columns):
0 65 non-null int64
dtypes: int64(1)
memory usage: 1.0+ KB
None
I then issue a df.update command:
df2_updated = df2.update(df1)
This returns None. I cannot understand why the indices are not matching. I am grateful for any help.
Drew Yallop
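A note on the call above: DataFrame.update modifies the calling frame in place and returns None, so assigning its result gives None regardless of index alignment. It also matches on column names as well as the index (df2's only column is named 0, while df1's is shares), so nothing would be copied even with matching indices. A minimal sketch with stand-in frames:
import pandas as pd

a = pd.DataFrame({'shares': [1, 2]}, index=['x', 'y'])
b = pd.DataFrame({'shares': [10]}, index=['x'])

a.update(b)   # in place; returns None -- do not assign the result
print(a)      # row 'x' now holds 10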

Related

Get the value in a dataframe based on a value and a date in another dataframe

I tried countless answers to similar problems here on SO but couldn't find anything that works for this scenario. It's driving me nuts.
I have these two Dataframes:
df_op:

index  Date                 Close   Name  LogRet
0      2022-11-29 00:00:00  240.33  MSFT  -0.0059
1      2022-11-29 00:00:00  280.57  QQQ   -0.0076
2      2022-12-13 00:00:00  342.46  ADBE   0.0126
3      2022-12-13 00:00:00  256.92  MSFT   0.0173
df_quotes:

index  Date                 Close   Name
72     2022-11-29 00:00:00  141.17  AAPL
196    2022-11-29 00:00:00  240.33  MSFT
73     2022-11-30 00:00:00  148.03  AAPL
197    2022-11-30 00:00:00  255.14  MSFT
11     2022-11-30 00:00:00  293.36  QQQ
136    2022-12-01 00:00:00  344.11  ADBE
198    2022-12-01 00:00:00  254.69  MSFT
12     2022-12-02 00:00:00  293.72  QQQ
I would like to add a column to df_op that indicates the close of the stock in df_quotes 2 days later. For example, the first row of df_op should become:
index  Date                 Close   Name  LogRet   Next
0      2022-11-29 00:00:00  240.33  MSFT  -0.0059  254.69
In other words:
for each row in df_op, find the row in df_quotes with the same Name and a Date 2 days later, and copy its Close into df_op's 'Next' column.
I tried tens of combinations like this without success:
df_quotes[df_quotes['Date'].isin(df_op['Date'] + pd.DateOffset(days=2)) & df_quotes['Name'].isin(df_op['Name'])]
How can I do this without resorting to loops?
Try this:
# first convert to datetime
df_op['Date'] = pd.to_datetime(df_op['Date'])
df_quotes['Date'] = pd.to_datetime(df_quotes['Date'])

# merge on Date and Name, with the quote date offset back 2 business days
(pd.merge(df_op,
          df_quotes[['Date', 'Close', 'Name']].rename({'Close': 'Next'}, axis=1),
          left_on=['Date', 'Name'],
          right_on=[df_quotes['Date'] - pd.tseries.offsets.BDay(2), 'Name'],
          how='left')
 .drop(['Date_x', 'Date_y'], axis=1))
Output:
Date index Close Name LogRet Next
0 2022-11-29 0 240.33 MSFT -0.0059 254.69
1 2022-11-29 1 280.57 QQQ -0.0076 NaN
2 2022-12-13 2 342.46 ADBE 0.0126 NaN
3 2022-12-13 3 256.92 MSFT 0.0173 NaN
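An aside, not part of the answer above: when the exact offset date may be missing from df_quotes (the NaN rows in the output), pd.merge_asof can instead take the last available quote on or before the target date, per Name. A sketch with toy frames abridged from the question; under this scheme the QQQ row matches its 2022-11-30 quote rather than NaN:
import pandas as pd

# toy frames abridged from the question
df_op = pd.DataFrame({
    'Date': pd.to_datetime(['2022-11-29', '2022-11-29', '2022-12-13', '2022-12-13']),
    'Close': [240.33, 280.57, 342.46, 256.92],
    'Name': ['MSFT', 'QQQ', 'ADBE', 'MSFT'],
    'LogRet': [-0.0059, -0.0076, 0.0126, 0.0173],
})
df_quotes = pd.DataFrame({
    'Date': pd.to_datetime(['2022-11-29', '2022-11-30', '2022-11-30',
                            '2022-12-01', '2022-12-01', '2022-12-02']),
    'Close': [240.33, 148.03, 293.36, 344.11, 254.69, 293.72],
    'Name': ['MSFT', 'AAPL', 'QQQ', 'ADBE', 'MSFT', 'QQQ'],
})

# target = two calendar days ahead; merge_asof then picks, per Name, the
# last quote on or before the target (both sides must be sorted on the keys)
left = (df_op.assign(target=df_op['Date'] + pd.Timedelta(days=2))
             .sort_values('target'))
right = (df_quotes.rename(columns={'Date': 'quote_date', 'Close': 'Next'})
                  .sort_values('quote_date'))

out = pd.merge_asof(left, right, left_on='target', right_on='quote_date',
                    by='Name', direction='backward')
print(out.drop(columns=['target', 'quote_date']))
A tolerance= window (e.g. tolerance=pd.Timedelta(days=3)) can cap how stale a matched quote is allowed to be.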

Pandas Shift Date Time Columns Back One Hour

I have data in a DataFrame (df1) that starts and ends as shown below. I'm trying to shift the "0" and "1" columns back one hour so that the date and time start at hour == 0, not hour == 1.
data starts (df1) -
0 1 2 3 4 5 6 7
0 20160101 100 7.977169 109404.0 20160101 100 4.028678 814.0
1 20160101 200 8.420204 128546.0 20160101 200 4.673662 2152.0
2 20160101 300 9.515370 165931.0 20160101 300 8.019863 8100.0
data ends (df1) -
0 1 2 3 4 5 6 7
8780 20161231 2100 4.198906 11371.0 20161231 2100 0.995571 131.0
8781 20161231 2200 4.787433 19083.0 20161231 2200 1.029809 NaN
8782 20161231 2300 3.987506 9354.0 20161231 2300 0.900942 NaN
8783 20170101 0 3.284947 1815.0 20170101 0 0.899262 NaN
I need the date and time shifted back one hour, so the timestamp marks the beginning of the hour rather than the end -
0 1 2 3 4 5 6 7
0 20160101 000 7.977169 109404.0 20160101 100 4.028678 814.0
1 20160101 100 8.420204 128546.0 20160101 200 4.673662 2152.0
2 20160101 200 9.515370 165931.0 20160101 300 8.019863 8100.0
and ends like this -
0 1 2 3 4 5 6 7
8780 20161231 2000 4.198906 11371.0 20161231 2100 0.995571 131.0
8781 20161231 2100 4.787433 19083.0 20161231 2200 1.029809 NaN
8782 20161231 2200 3.987506 9354.0 20161231 2300 0.900942 NaN
8783 20161231 2300 3.284947 1815.0 20170101 0 0.899262 NaN
And I have no real idea how to accomplish this or how to research it. Thank you.
It would be better to create a proper datetime object and then subtract the hour, which correctly handles rolling back across day boundaries. We can then use dt.strftime to re-create your object (string) columns.
s = pd.to_datetime(
    df[0].astype(str) + df[1].astype(str).str.zfill(4), format="%Y%m%d%H%M"
)
0 2016-01-01 01:00:00
1 2016-01-01 02:00:00
2 2016-01-01 03:00:00
8780 2016-12-31 21:00:00
8781 2016-12-31 22:00:00
8782 2016-12-31 23:00:00
8783 2017-01-01 00:00:00
dtype: datetime64[ns]
df[1] = (s - pd.DateOffset(hours=1)).dt.strftime("%H%M").str.lstrip("0").str.zfill(3)
df[0] = (s - pd.DateOffset(hours=1)).dt.strftime("%Y%m%d")
print(df)
0 1 2 3 4 5 6 7
0 20160101 000 7.977169 109404.0 20160101 100 4.028678 814.0
1 20160101 100 8.420204 128546.0 20160101 200 4.673662 2152.0
2 20160101 200 9.515370 165931.0 20160101 300 8.019863 8100.0
8780 20161231 2000 4.198906 11371.0 20161231 2100 0.995571 131.0
8781 20161231 2100 4.787433 19083.0 20161231 2200 1.029809 NaN
8782 20161231 2200 3.987506 9354.0 20161231 2300 0.900942 NaN
8783 20161231 2300 3.284947 1815.0 20170101 0 0.899262 NaN
Use DataFrame.shift to shift columns 0 and 1, backfill the resulting missing value in column 0 with Series.bfill, fill the NaN in column 1 with .fillna, and finally join the result back onto the remaining columns of df1 with DataFrame.join:
df2 = df1[['0', '1']].shift()
df2['0'] = df2['0'].bfill()
df2['1'] = df2['1'].fillna('000')
df2 = df2.join(df1.loc[:, '2':])
# print(df2)
0 1 2 3 4 5 6 7
0 20160101 000 7.977169 109404.0 20160101 100 4.028678 814.0
1 20160101 100 8.420204 128546.0 20160101 200 4.673662 2152.0
2 20160101 200 9.515370 165931.0 20160101 300 8.019863 8100.0
...
8780 20160101 300 4.198906 11371.0 20161231 2100 0.995571 131.0
8781 20161231 2100 4.787433 19083.0 20161231 2200 1.029809 NaN
8782 20161231 2200 3.987506 9354.0 20161231 2300 0.900942 NaN
8783 20161231 2300 3.284947 1815.0 20170101 0 0.899262 NaN
You can do plain subtraction in pandas (provided the data in your dataframe is not string-typed). Here is an example of how it can be done:
import pandas as pd

df = pd.DataFrame()
df['time'] = [0, 100, 500, 2100, 2300, 0]  # creating the dataframe
df['time'] = df['time'] - 100              # this is what you want to do
Now every time value is reduced by 100.
There is one edge case: subtracting from 0 gives -100 as the time. You can fix that like this:
# wrap the -100 values back around to 23:00
df.loc[df['time'] == -100, 'time'] = 2300
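As an aside, starting again from the raw values, the subtraction and the wrap-around can be combined in one vectorized step with a modulo (a sketch; note the date column would still need rolling back on rows that wrap past midnight):
import pandas as pd

df = pd.DataFrame({'time': [0, 100, 500, 2100, 2300, 0]})
df['time'] = (df['time'] - 100) % 2400   # 0 -> 2300, 100 -> 0, 2300 -> 2200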

How do I access only specific entries of a dataframe having date as index

This is the tail of my DataFrame (around 1000 entries):
Open Close High Change mx_profitable
Date
2018-06-06 263.00 270.15 271.4 7.15 8.40
2018-06-08 268.95 273.00 273.9 4.05 4.95
2018-06-11 273.30 274.00 278.4 0.70 5.10
2018-06-12 274.00 282.85 284.4 8.85 10.40
I need to pick out only the entries on certain dates, for example the 25th of every month.
I think you need DatetimeIndex.day with boolean indexing:
df[df.index.day == 25]
Sample:
rng = pd.date_range('2017-04-03', periods=1000)
df = pd.DataFrame({'a': range(1000)}, index=rng)
print(df.head())
a
2017-04-03 0
2017-04-04 1
2017-04-05 2
2017-04-06 3
2017-04-07 4
df1 = df[df.index.day == 25]
print(df1.head())
a
2017-04-25 22
2017-05-25 52
2017-06-25 83
2017-07-25 113
2017-08-25 144
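If several days of the month are needed at once, the same idea extends with Index.isin (a sketch, reusing the sample frame above):
# keep rows whose day-of-month is the 10th or the 25th
df2 = df[df.index.day.isin([10, 25])]
print(df2.head())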

How do I get the rows where a certain value occurs?

I have orders_df:
Symbol Order Shares
Date
2011-01-10 AAPL BUY 1500
2011-01-13 AAPL SELL 1500
2011-01-13 IBM BUY 4000
2011-01-26 GOOG BUY 1000
2011-02-02 XOM SELL 4000
2011-02-10 XOM BUY 4000
2011-03-03 GOOG SELL 1000
2011-03-03 IBM SELL 2200
2011-05-03 IBM BUY 1500
2011-06-03 IBM SELL 3300
2011-08-01 GOOG BUY 55
2011-08-01 GOOG SELL 55
I want to have a variable that maps Date to the number of SELL orders on that date. I also want a symmetric variable for BUY.
I tried doing it for all Orders by doing
num_orders_per_day = orders_df.groupby(['Date']).size()
and got:
Date
2011-01-10 1
2011-01-13 2
2011-01-26 1
2011-02-02 1
2011-02-10 1
2011-03-03 2
2011-05-03 1
2011-06-03 1
2011-08-01 2
but that is not the desired output.
What I want is sells_on_a_day:
2011-01-13 1
2011-02-02 1
2011-03-03 2
2011-06-03 1
2011-08-01 1
and then a similar buys_on_a_day variable.
First filter with boolean indexing and then get the count:
num_sells_per_day = (orders_df[orders_df['Order'] == 'SELL']
                     .groupby(level=0).size().reset_index(name='count'))
print(num_sells_per_day)
Date count
0 2011-01-13 1
1 2011-02-02 1
2 2011-03-03 2
3 2011-06-03 1
4 2011-08-01 1
Alternative:
num_sells_per_day = (orders_df.query("Order == 'SELL'")
                              .groupby(level=0)
                              .size()
                              .reset_index(name='count'))
print(num_sells_per_day)
Date count
0 2011-01-13 1
1 2011-02-02 1
2 2011-03-03 2
3 2011-06-03 1
4 2011-08-01 1
It is also possible to create both columns together; NaNs appear only where some values are missing:
df1 = orders_df.groupby(['Date','Order']).size().unstack()
print(df1)
Order BUY SELL
Date
2011-01-10 1.0 NaN
2011-01-13 1.0 1.0
2011-01-26 1.0 NaN
2011-02-02 NaN 1.0
2011-02-10 1.0 NaN
2011-03-03 NaN 2.0
2011-05-03 1.0 NaN
2011-06-03 NaN 1.0
2011-08-01 1.0 1.0
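As a follow-up sketch (assuming the orders_df from the question): passing fill_value=0 to unstack replaces the NaNs with zeros and yields both requested variables at once:
counts = orders_df.groupby(['Date', 'Order']).size().unstack(fill_value=0)
buys_on_a_day = counts['BUY']    # per-date BUY counts, 0 where no buys
sells_on_a_day = counts['SELL']  # per-date SELL counts, 0 where no sells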

convert hourly time periods into 15-minute time periods

I have a dataframe like this:
df = pd.read_csv("fileA.csv", dtype=str, delimiter=";", skiprows = None, parse_dates=['Date'])
Date Buy Sell
0 01.08.2009 01:00 15 25
1 01.08.2009 02:00 0 30
2 01.08.2009 03:00 10 18
But I need this one (in 15-minute periods):
Date Buy Sell
0 01.08.2009 01:00 15 25
1 01.08.2009 01:15 15 25
2 01.08.2009 01:30 15 25
3 01.08.2009 01:45 15 25
4 01.08.2009 02:00 0 30
5 01.08.2009 02:15 0 30
6 01.08.2009 02:30 0 30
7 01.08.2009 02:45 0 30
8 01.08.2009 03:00 10 18
....and so on.
I have tried df.resample(), but it did not work. Does someone know a nice pandas method?
If fileA.csv looks like this:
Date;Buy;Sell
01.08.2009 01:00;15;25
01.08.2009 02:00;0;30
01.08.2009 03:00;10;18
then you could parse the data with
df = pd.read_csv("fileA.csv", delimiter=";", parse_dates=['Date'])
so that df will look like this:
In [41]: df
Out[41]:
Date Buy Sell
0 2009-01-08 01:00:00 15 25
1 2009-01-08 02:00:00 0 30
2 2009-01-08 03:00:00 10 18
You might want to check df.info() to make sure you successfully parsed your data into a DataFrame with three columns, and that the Date column has dtype datetime64[ns]. Since the repr(df) you posted prints the date in a different format and the column headers do not align with the data, there is a good chance that the data has not yet been parsed properly. If that's true and you post some sample lines from the csv, we should be able help you parse the data into a DataFrame.
In [51]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3 entries, 0 to 2
Data columns (total 3 columns):
Date 3 non-null datetime64[ns]
Buy 3 non-null int64
Sell 3 non-null int64
dtypes: datetime64[ns](1), int64(2)
memory usage: 96.0 bytes
Once you have the DataFrame correctly parsed, upsampling to 15-minute periods can be done with asfreq, forward-filling the missing values:
In [50]: df.set_index('Date').asfreq('15T', method='ffill')
Out[50]:
Buy Sell
2009-01-08 01:00:00 15 25
2009-01-08 01:15:00 15 25
2009-01-08 01:30:00 15 25
2009-01-08 01:45:00 15 25
2009-01-08 02:00:00 0 30
2009-01-08 02:15:00 0 30
2009-01-08 02:30:00 0 30
2009-01-08 02:45:00 0 30
2009-01-08 03:00:00 10 18
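One caveat worth adding: 01.08.2009 is a day-first date, so pandas' default parser reads it as January 8 rather than August 1, which is exactly what the Out[41] above shows. If the data is day-first, parse it explicitly (a sketch):
# assuming the dates in fileA.csv are day-first (01.08.2009 = 1 Aug 2009)
df = pd.read_csv("fileA.csv", delimiter=";", parse_dates=['Date'], dayfirst=True)
# equivalently: df['Date'] = pd.to_datetime(df['Date'], format='%d.%m.%Y %H:%M')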