How to set an index within multiindex for datetime? - pandas

I have this df:
                       open     high      low    close    volume
date       symbol
2014-02-20 AAPL     69.9986  70.5252  69.4746  69.7569  76529103
           MSFT     33.5650  33.8331  33.4087  33.7259  27541038
2014-02-21 AAPL     69.9727  70.2061  68.8967  68.9821  69757247
           MSFT     33.8956  34.2619  33.8241  33.9313  38030656
2014-02-24 AAPL     68.7063  69.5954  68.6104  69.2841  72364950
           MSFT     33.6723  33.9269  33.5382  33.6723  32143395
which is returned from here:
from datetime import datetime
from iexfinance.stocks import get_historical_data
from pandas_datareader import data
import matplotlib.pyplot as plt
import pandas as pd
start = '2014-01-01'
end = datetime.today().utcnow()
symbol = ['AAPL', 'MSFT']
prices = pd.DataFrame()
datasets_test = []
for d in symbol:
    data_original = data.DataReader(d, 'iex', start, end)
    data_original['symbol'] = d
    prices = pd.concat([prices, data_original], axis=0)
prices = prices.set_index(['symbol'], append=True)
prices.sort_index(inplace=True)
when trying to get the day of the week:
A['day_of_week'] = features.index.get_level_values('date').weekday
I get error:
AttributeError: 'Index' object has no attribute 'weekday'
I tried to change the date index to datetime with
prices['date'] = pd.to_datetime(prices['date'])
but got this error:
KeyError: 'date'
Any idea how to keep the two index levels, date + symbol, but change one of them (date) to datetime so I can get the day of the week?

Looks like the date level of the index contains strings, not datetime objects. One solution is to reset all MultiIndex levels into columns, convert the date column to datetime, and set the MultiIndex back. Then you can proceed with pandas datetime accessors like .weekday in the usual way.
prices = prices.reset_index()
prices['date'] = pd.to_datetime(prices['date'])
prices = prices.set_index(['date', 'symbol'])
prices.index.get_level_values('date').weekday
Int64Index([3, 3, 4, 4, 0, 0, 1, 1, 2, 2,
...
1, 1, 2, 2, 3, 3, 4, 4, 1, 1],
dtype='int64', name='date', length=2516)
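As an alternative sketch (assuming prices keeps its MultiIndex with a level named 'date' in the first position, as above), you can convert the date level in place without resetting the index:
# Convert only the 'date' level to datetime, leaving 'symbol' untouched
prices.index = prices.index.set_levels(
    pd.to_datetime(prices.index.levels[0]), level='date')
prices['day_of_week'] = prices.index.get_level_values('date').weekday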

Related

How can I add several columns within a dataframe (broadcasting)?

import numpy as np
import pandas as pd
data = [[30, 19, 6], [12, 23, 14], [8, 18, 20]]
df = pd.DataFrame(data = data, index = ['A', 'B', 'C'], columns = ['Bulgary', 'Robbery', 'Car Theft'])
df
I get the following:
   Bulgary  Robbery  Car Theft
A       30       19          6
B       12       23         14
C        8       18         20
I would like to assign:
df['Total'] = df['Bulgary'] + df['Robbery'] + df['Car Theft']
But does this operation have to be done manually? I am looking for a function that can handle this conveniently.
#pseudocode
#df['Total'] = df.Some_Column_Adding_Function([0:3])
#df['Total'] == df['Bulgary'] + df['Robbery'] + df['Car Theft'] returns True
Similarly, how do I add across rows?
Use sum:
df['Total'] = df.sum(axis=1)
Or if you want a subset of columns:
df['Total'] = df[df.columns[0:3]].sum(axis=1)
# or df['Total'] = df[['Bulgary', 'Robbery', 'Car Theft']].sum(axis=1)
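The question also asks how to add across rows; a minimal sketch (assuming you want a Total row summing each column):
# sum down each column and append the result as a new row labelled 'Total'
df.loc['Total'] = df.sum(axis=0)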

Generating one NumPy array for each DataFrame row

I'm attempting to plot stock market trades against a plot of the particular stock using mplfinance.plot(). I keep a record of all my trades using jstock, which stores them in a CSV file:
"Code","Symbol","Date","Units","Purchase Price","Current Price","Purchase Value","Current Value","Gain/Loss Price","Gain/Loss Value","Gain/Loss %","Broker","Clearing Fee","Stamp Duty","Net Purchase Value","Net Gain/Loss Value","Net Gain/Loss %","Comment"
"ASO","Academy Sports and Outdoors, Inc.","Sep 13, 2021","25.0","45.85","46.62","1146.25","1165.5","0.769999999999996","19.25","1.6793893129770994","0.0","0.0","0.0","1146.25","19.25","1.6793893129770994",""
"ASO","Academy Sports and Outdoors, Inc.","Aug 26, 2021","15.0","41.3","46.62","619.5","699.3","5.32","79.79999999999995","12.881355932203384","0.0","0.0","0.0","619.5","79.79999999999995","12.881355932203384",""
"ASO","Academy Sports and Outdoors, Inc.","Jun 3, 2021","10.0","37.48","46.62","374.79999999999995","466.2","9.14","91.40000000000003","24.386339381003214","0.0","0.0","0.0","374.79999999999995","91.40000000000003","24.386339381003214",""
"RMBS","Rambus Inc.","Nov 24, 2021","2.0","26.99","26.99","53.98","53.98","0.0","0.0","0.0","0.0","0.0","0.0","53.98","0.0","0.0",""
I can get this data easily enough using
myportfolio = pd.read_csv(PORTFOLIO_LOCATION, parse_dates=[2])
But I need to create individual lists for each trade that match the day-by-day stock price:
Date,High,Low,Open,Close,Volume,Adj Close
2020-12-01,17.020000457763672,16.5,16.799999237060547,16.8799991607666,990900,16.8799991607666
2020-12-02,17.31999969482422,16.290000915527344,16.65999984741211,16.40999984741211,1200500,16.40999984741211
and I have a normal DataFrame containing this. So far this is what I have:
for i in myportfolio.groupby("Code"):
    (code, j) = i
    if code == "ASO":  # just testing it against one stock
        simp = pd.DataFrame(columns=["Date", "Units", "Price"],
                            data=j[["Date", "Units", "Purchase Price"]].values,
                            index=j[["Date"]])
        df = pd.read_csv("ASO-2020-12-01-2021-12-01.csv", index_col=0, parse_dates=True)
        # df.lookup(simp["Date"])
        df.insert(0, 'row_num', range(0, len(df)))
        k = df.loc[simp["Date"]]['row_num']
        trades = []
        for index, m in k.iteritems():
            t = np.zeros((df.shape[0], 1))
            t.fill(np.nan)
            t[m] = simp[index]["Price"]
            trades.append(t.to_list())
But I receive a KeyError: Timestamp('2021-09-17 00:00:00')
Any ideas of how to fix this?
Addendum 1:
import pandas as pd
trade_data = [['ASO', '5/5/21', 10], ['ASO', '5/7/21', 12], ['RBLX', '5/7/21', 15]]
trade_df = pd.DataFrame(trade_data, columns = ['Code', 'Date', 'Price'])
trade_df['Date'] = pd.to_datetime(trade_df['Date'])
trade_df
Code Date Price
0 ASO 2021-05-05 10
1 ASO 2021-05-07 12
2 RBLX 2021-05-07 15
aso_data = [['5/5/21', 12, 5, 10, 7], ['5/6/21', 15, 7, 13, 8], ['5/7/21', 17, 10, 15, 11]]
aso_df = pd.DataFrame(aso_data, columns = ['Date', 'High', 'Low', 'Open', 'Close'])
aso_df['Date'] = pd.to_datetime(aso_df['Date'])
aso_df
Date High Low Open Close
0 2021-05-05 12 5 10 7
1 2021-05-06 15 7 13 8
2 2021-05-07 17 10 15 11
So I want to create two NumPy arrays for ASO (one for each trade) and one for the RBLX trade. For ASO I should have two NumPy arrays that look like [10, NaN, NaN] and [NaN, NaN, 12].
You want a list of lists, right? There is no need to loop:
df_list = df.values.tolist()
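For the addendum example, here is a minimal sketch (assuming the trade_df and aso_df frames shown above) that builds one NaN-padded array per ASO trade, looping only over the trades rather than over the price rows:
import numpy as np

aso_trades = trade_df[trade_df['Code'] == 'ASO']
arrays = []
for _, row in aso_trades.iterrows():
    # one slot per price row, NaN by default
    t = np.full(len(aso_df), np.nan)
    # place the trade price on the day it was made
    t[(aso_df['Date'] == row['Date']).to_numpy()] = row['Price']
    arrays.append(t)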
Just in case another novice such as myself surfs in with a similar problem:
import numpy as np
import pandas as pd
import mplfinance as mpf

for i in myportfolio.groupby(["Code"]):
    (code, j) = i
    if code == "ASO":  # just testing it against one stock
        df = pd.read_csv("ASO-2020-12-01-2021-12-01.csv", index_col=0, parse_dates=True)
        df.insert(0, 'row_num', range(0, len(df)))
        k = df.loc[j["Date"]]['row_num']
        trades = []
        for index, m in j.iterrows():
            t = np.zeros((df.shape[0], 1))
            t.fill(np.nan)
            t[int(df.loc[m["Date"]]['row_num'])] = m["Purchase Price"]
            asplot = mpf.make_addplot(t, type="scatter", color='red', marker="D")
            trades.append(asplot)
        mpf.plot(df, type='candle', addplot=trades)
This produced an okay graph showing my entry points. Good luck!

Pd['Column1'] > Pd['Column2'] KeyError: 0

I am trying to write a function that loops through a pandas DataFrame and checks whether column1 > column2; if so, it appends 1 to a list, which is then returned.
I import finance data from Yahoo Finance, calculate the moving average and the 2-standard-deviation bands, and assign them to the SMA, Upper and Lower columns.
import pandas as pd
import pandas_datareader as web
import matplotlib.pyplot as plt
%matplotlib inline
from datetime import datetime
import numpy as np
end = datetime(2021, 1, 16)
start = datetime(2020 , 1, 16)
symbols = ['ETH-USD']
stock_df = web.get_data_yahoo(symbols, start, end)
period = 20
# Simple Moving Average
stock_df['SMA'] = stock_df['Close'].rolling(window=period).mean()
# Standard deviation
stock_df['STD'] = stock_df['Close'].rolling(window=period).std()
# Upper Bollinger Band
stock_df['Upper'] = stock_df['SMA'] + (stock_df['STD'] * 2)
# Lower Bollinger Band
stock_df['Lower'] = stock_df['SMA'] - (stock_df['STD'] * 2)
# List of columns
column_list = ['Close', 'SMA', 'Upper', 'Lower']
stock_df[column_list].plot(figsize=(12.2,6.4)) #Plot the data
plt.title('ETH-USD')
plt.ylabel('USD Price ($)')
plt.show();
#Create a new data frame, Period for calculation, removes NAN's
bolldf = stock_df[period-1:]
#Show the new data frame
bolldf
The function loops through the rows, compares the columns, and appends to the buy/sell signal list when the condition is met:
def signal(df):
    buy_signal = []
    sell_signal = []
    for i in range(len(df['Close'])):
        if df['Close'][i] > df['Upper'][i]:
            buy_signal.append(1)
    return buy_signal

buy_signal = signal(bolldf)
buy_signal
Info about the error (abridged traceback):
KeyError: 0

During handling of the above exception, another exception occurred:
---> 12 buy_signal = greaterthan(stock_df)
----> 8 if bolldf['Close'][i] > bolldf['Open'][i]:
KeyError: 0
When I attempt this function on the columns df['Upper'] > df['Lower'], or df['SMA'] < df['Lower'] for example, it works as expected; it's only when using the columns from the original data that it does not work.
Any help would be amazing. Thank you.
Since the DataFrame has MultiIndex columns, the columns must also be specified in MultiIndex (tuple) form. You can check the contents of the columns with bolldf.columns, and the function works after modifying it as follows.
bolldf.columns
MultiIndex([('Adj Close', 'ETH-USD'),
            (    'Close', 'ETH-USD'),
            (     'High', 'ETH-USD'),
            (      'Low', 'ETH-USD'),
            (     'Open', 'ETH-USD'),
            (   'Volume', 'ETH-USD'),
            (      'SMA',        ''),
            (      'STD',        ''),
            (    'Upper',        ''),
            (    'Lower',        '')],
           names=['Attributes', 'Symbols'])
def signal(df):
    buy_signal = []
    sell_signal = []
    for i in range(len(df[('Close', 'ETH-USD')])):
        if df[('Close', 'ETH-USD')][i] > df[('Upper', '')][i]:
            buy_signal.append(1)
    return buy_signal
buy_signal = signal(bolldf)
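Alternatively, here is a sketch (under the assumption that only the single 'ETH-USD' symbol was downloaded) that flattens the column MultiIndex once, so the original function can keep using plain column labels:
# Drop the 'Symbols' level so columns become plain labels such as 'Close' and 'Upper'
bolldf.columns = bolldf.columns.droplevel('Symbols')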

Add a column of minutes to a datetime in pandas

I have a dataframe with a start time and the length of operation. I'm trying to figure out how to add the length (in minutes) to the start time in order to get the end time of the session. I've run a few different variations of the same general idea and keep getting the same error, "unsupported type for timedelta minutes component: Series". The code extract is below:
data = {'Name': ['John', 'Peter'],
        'Start': [2, 2],
        'Length': [120, 90],
        }
df = pd.DataFrame.from_records(data)
df['Start'] = pd.to_datetime(df['Start'])
df['Length'] = pd.to_datetime(df['Length'])
df["tdiffinmin"] = df['Start'].apply(lambda x: x + pd.DateOffset(minutes = df["Length"]))
I've also tried the following as other methods of doing this math and keep getting similar errors.
df["tdiffinmin"] = df['Start'].apply(lambda x: x -pd.DateOffset(minutes = df["Length"]))
df["tdiffinmin"] = (df['Start']. + timedelta(minutes = df["Length"])).dt.total_seconds() / 60
df['tdiffinmin'] = df['Start'] - pd.DateOffset(minutes = df["Length"])
The full code reads from a data set (excel sheet or CSV), populates a Dataframe, and this is some of the math I am doing. Originally it was done with Start and Stop times, so I know something similar is possible. In the dataset, Length is in minutes and Start is a date and time, so datetime is necessary.
You should convert Length into timedelta, not datetime:
df['Start'] = pd.to_datetime(df['Start'])
df['Length'] = pd.to_timedelta(df['Length'], unit='min')
df['tdiffinmin'] = df['Start'] + df['Length']
Output:
Length Name Start tdiffinmin
0 02:00:00 John 1970-01-01 00:00:00.000000002 1970-01-01 02:00:00.000000002
1 01:30:00 Peter 1970-01-01 00:00:00.000000002 1970-01-01 01:30:00.000000002
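Note that the sample Start values are just the integer 2, which pd.to_datetime interprets as nanoseconds since the epoch, hence the 1970 timestamps above. With real timestamps the same pattern applies; a minimal sketch with made-up times:
import pandas as pd

df = pd.DataFrame({'Name': ['John', 'Peter'],
                   'Start': ['2021-01-16 09:00', '2021-01-16 10:30'],
                   'Length': [120, 90]})
df['Start'] = pd.to_datetime(df['Start'])
df['End'] = df['Start'] + pd.to_timedelta(df['Length'], unit='min')
# John ends at 11:00, Peter at 12:00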

Get day of year from a string date in pandas dataframe

I want to turn my date string into the day of year... I tried this code:
import pandas as pd
import datetime
data = pd.DataFrame()
data = pd.read_csv(xFilename, sep=",")
and get this DataFrame
Index        Date     Tmin    Tmax
    0  1950-01-02  -16.508  -2.096
    1  1950-01-03   -6.769   0.875
    2  1950-01-04   -1.795   8.859
    3  1950-01-05    1.995   9.487
    4  1950-01-06  -17.738  -9.766
I tried this...
convert = lambda x: x.DatetimeIndex.dayofyear
data['Date'].map(convert)
with this error:
AttributeError: 'str' object has no attribute 'DatetimeIndex'
I expect the new values to match 1950-01-02 = 2, 1950-01-03 = 3, and so on.
Thanks for your help, and sorry, I'm new to Python.
I think you need to pass the parse_dates parameter to read_csv and then call Series.dt.dayofyear:
data = pd.read_csv(xFilename, parse_dates=["Date"])
data['dayofyear'] = data['Date'].dt.dayofyear
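If the file is already loaded with the dates as strings, a quick sketch of the equivalent conversion (assuming the column is named Date, as in the sample):
# convert the string column to datetime first, then use the .dt accessor
data['Date'] = pd.to_datetime(data['Date'])
data['dayofyear'] = data['Date'].dt.dayofyear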