I pull historical data for a large universe of stocks and ETFs daily. Quandl has pretty good free coverage of US Equities, but they do not have historical data for ETFs so I use the Google API as a backup for Quandl.
The recent Google Finance "renovation" hasn't left me with a great alternative, so I am trying to apply Brad Solomon's work (thanks Brad, link below) to a list of symbols. I assume it can't be done without a loop, given that he is building URLs one symbol at a time. Any clever ideas welcome.
Related question: How come pandas_datareader for google doesn't work?
Thanks.
Under the hood, pandas-datareader is looping through each symbol that you pass and making http requests one by one.
Here's the function that does that in the base class, from which the google- and yahoo-related classes inherit: base._DailyBaseReader._dl_mult_symbols.
The magic is that these are appended to a list and then aggregated into a pandas Panel.
I would note, however, that Panel is deprecated and you can get the same functionality in a DataFrame with a MultiIndex, a structure that's technically 2-dimensional but replicates higher dimensionalities in practice.
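To illustrate that claim, here is a minimal sketch (illustrative prices, not pulled from Google) of how a dict of per-symbol frames collapses into a MultiIndex-column DataFrame, which is essentially what the Panel used to hold:
import pandas as pd

idx = pd.to_datetime(['2010-01-04', '2010-01-05'])
frames = {
    'AAPL': pd.DataFrame({'Close': [30.57, 30.63]}, index=idx),
    'GE': pd.DataFrame({'Close': [15.45, 15.53]}, index=idx),
}

# Concatenating along the columns yields (symbol, field) MultiIndex columns,
# the 2-dimensional stand-in for the deprecated 3-dimensional Panel.
wide = pd.concat(frames, axis=1)
print(wide['AAPL', 'Close'])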
So, here's the barebones of what you could do, below. Please note I'm skipping a lot of the functionality embedded within the package itself, such as parsing string dates to datetime.
import datetime
from io import StringIO

import requests
from pandas.io.common import urlencode
import pandas as pd

BASE = 'http://finance.google.com/finance/historical'


def get_params(sym, start, end):
    params = {
        'q': sym,
        'startdate': start.strftime('%Y/%m/%d'),
        'enddate': end.strftime('%Y/%m/%d'),
        'output': "csv"
    }
    return params


def build_url(sym, start, end):
    params = get_params(sym, start, end)
    return BASE + '?' + urlencode(params)


def get_one_data(sym, start=None, end=None):
    if not start:
        start = datetime.datetime(2010, 1, 1)
    if not end:
        end = datetime.datetime.today()
    url = build_url(sym, start, end)
    data = requests.get(url).text
    return pd.read_csv(StringIO(data), index_col='Date',
                       parse_dates=True).sort_index()


def get_multiple(sym, start=None, end=None, return_type='Panel'):
    if isinstance(sym, str):
        return get_one_data(sym, start=start, end=end)
    elif isinstance(sym, (list, tuple, set)):
        res = {}
        for s in sym:
            res[s] = get_one_data(s, start, end)
        # The actual module also implements a 'passed' and 'failed'
        # check here and also uses chunking to get around
        # data retrieval limits (I believe)
        if return_type.lower() == 'panel':
            return pd.Panel(res).swapaxes('items', 'minor')
        elif return_type.lower() == 'mi':  # MultiIndex DataFrame
            return pd.concat(res, axis=1)
An example:
syms = ['AAPL', 'GE']
data = get_multiple(syms, return_type='mi')
# Here's how you would filter down to Close prices
# on MultiIndex columns
data.xs('Close', axis=1, level=1)
AAPL GE
Date
2010-01-04 30.57 15.45
2010-01-05 30.63 15.53
2010-01-06 30.14 15.45
2010-01-07 30.08 16.25
2010-01-08 30.28 16.60
...
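For what it's worth, the 'passed'/'failed' bookkeeping and chunking that the comment inside get_multiple alludes to could be sketched roughly like this (the function name and chunk size are my own choices, not the actual pandas-datareader implementation):
def get_multiple_chunked(symbols, start=None, end=None, chunk_size=25):
    # Fetch symbols in chunks, keeping track of which requests succeeded.
    passed, failed = {}, []
    for i in range(0, len(symbols), chunk_size):
        for s in symbols[i:i + chunk_size]:
            try:
                passed[s] = get_one_data(s, start=start, end=end)
            except Exception:
                failed.append(s)
    return pd.concat(passed, axis=1), failed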
Related
I'm using the HTC anemometer and it gives me data in the following order, wherein two of the columns are merged into one and there's some useless data I want to exclude.
The data looks like this:
"NO.","T&RH","DATA","UNIT","TIME"
1," 27�C 70.5%",0,"m/s","30-11-2020\15:33:34"
2," 27�C 70.5%",0,"m/s","30-11-2020\15:33:35"
3," 27�C 70.5%",0,"m/s","30-11-2020\15:33:36"
4," 27�C 70.5%",0,"m/s","30-11-2020\15:33:37"
...
...
When I try to load it into a pandas data-frame, there are all kinds of weird errors.
I've come up with the following code to clean the data and export it as a df.
import pandas as pd
def _formathtc(text_data: list) -> pd.DataFrame:
    data = []
    for l in text_data:
        d = []
        l = l.split(",")
        try:
            _, t, h = l[1].strip('"').split(" ")
            d.append(t.replace("°C", ""))
            d.append(h.replace("%", ""))
            d.append(l[2])
            d.append(l[-1].strip('\n'))
            data.append(d)
        except Exception as e:
            pass
    df = pd.DataFrame(data=data)
    df.columns = ['temp', 'relhum', 'data', 'time']
    return df

def gethtc(filename: str) -> pd.DataFrame:
    text_data = open(filename, "r", encoding="iso-8859-1").readlines()
    return _formathtc(text_data)
df = gethtc(somefilename)
My problem is that the operations shown above run in linear time, i.e., as the file grows in size, so does the time taken to extract the info and build the data-frame.
How can I make it more efficient?
You can use pd.read_csv in place of the DataFrame constructor here. There are a ton of options (including encoding, engine, and quotechar, which may be helpful). At the very least pandas does all the parsing for you, and it probably has better performance (especially when setting engine="c"). If this doesn't help with performance, I'm not sure there is a better native pandas option:
df = pd.read_csv("htc.csv", engine="c")
df["TIME"] = pd.to_datetime(df.TIME.str.replace("\\", " ", regex=False))
df["T&RH"] = df['T&RH'].str.replace("�", "")
output:
NO. T&RH DATA UNIT TIME
0 1 27C 70.5% 0 m/s 2020-11-30 15:33:34
1 2 27C 70.5% 0 m/s 2020-11-30 15:33:35
2 3 27C 70.5% 0 m/s 2020-11-30 15:33:36
3 4 27C 70.5% 0 m/s 2020-11-30 15:33:37
The post-processing is optional of course, but I don't think it should slow things down much.
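A quick sketch of what passing those options might look like (the encoding and quotechar values here are guesses based on how the question opens the file, not something I've verified against the device's output):
import pandas as pd

df = pd.read_csv(
    "htc.csv",
    engine="c",             # C parser, usually the fastest
    encoding="iso-8859-1",  # same encoding the question uses to open the file
    quotechar='"',          # fields like "30-11-2020\15:33:34" are double-quoted
)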
I am new to Pandas and doing some analysis on a csv file. I have successfully read the csv and shown all the details. I have two columns of object type. I have done a groupby on those two columns and got the result. I now need to extract the endPoint from the event_description series into a new column. Below is a sample up to the groupby operation; however, I am stuck on extracting the http endpoint. Currently endPoint shows up blank in some rows, while in others it contains an http url.
import pandas as pd
data = pd.read_csv('/Users/temp/Downloads/sample.csv')
data.head()
grouped_df = data.groupby([ "event_type", "event_description"])
grouped_df.first()
Sample:
a = '{"endPoint":"https://link.json","responseCode":200}'
b = '{"endPoint":"","responseCode":200}'
c = 'app'
df = pd.DataFrame({'event_description':[a,b,c]})
print (df)
event_description
0 {"endPoint":"https://link.json","responseCode"...
1 {"endPoint":"","responseCode":200}
2 app
Use a custom function with try and except, because some of the data is not valid json:
import json
import numpy as np

def get_endPoint(x):
    try:
        return json.loads(x)['endPoint']
    except Exception:
        return np.nan
df['endPoint'] = df['event_description'].apply(get_endPoint)
print (df)
event_description endPoint
0 {"endPoint":"https://link.json","responseCode"... https://link.json
1 {"endPoint":"","responseCode":200}
2 app NaN
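If the end goal is the new column on the original frame from the question, the same function could presumably be applied there before (or instead of) the groupby, using the column names shown in the question:
# Assumes `data` and its columns from the question, plus get_endPoint defined above.
data['endPoint'] = data['event_description'].apply(get_endPoint)
grouped_df = data.groupby(["event_type", "event_description"])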
To answer my own question, which I recently posed, I am considering using a closure that holds the current state to compute the rolling window (which in my case has width 2). Something along the lines of:
def test(init_value):
    def my_fcn(x, y):
        nonlocal init_value
        actual_value = (x + y) * init_value
        init_value = actual_value
        return init_value
    return my_fcn
where my_fcn is a dummy function used for testing. The function could then be initialised through actual_fcn = test(0), where we assume the initial value is zero, for example. Finally, one could use the function through ddf.apply (where ddf is the actual dask dataframe).
Now the question: this would work if the order of the computations is preserved; otherwise everything would be scrambled. I have not tested it, and even if it passed I could not be 100% sure it will always preserve the order. So the question is:
Does dask dataframe's apply method preserve rows order?
Any other ideas? Any help highly appreciated.
Apparently yes. I am using dask 1.0.0.
The following code:
import numpy as np
import pandas as pd
import dask.dataframe as dd
number_of_components = 30
df = pd.DataFrame(np.random.randint(0,number_of_components,size=(number_of_components, 4)), columns=list('ABCD'))
my_data_frame = dd.from_pandas(df, npartitions=1)

def sumPrevious(previousState):
    def getValue(row):
        nonlocal previousState
        something = row['A'] - previousState
        previousState = row['A']
        return something
    return getValue

given_func = sumPrevious(1)
out = my_data_frame.apply(given_func, axis=1, meta=float).compute()
behaves as expected. There is a big caveat: if the previous state is provided by reference (i.e. it is some object of some class), then the user should be careful with assignments inside the nested function that update the previous state, since updating an object passed by reference has side effects visible outside the closure.
Rigorously speaking, this example does not prove that order is preserved under all circumstances, so I would still be interested to know whether I can rely on this assumption.
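To make that caveat concrete, here is a minimal, dask-free sketch of the side effect being warned about when the state is a mutable object updated in place (the names here are mine):
def make_getter(previous_state):
    # previous_state is a mutable object, e.g. a one-element list
    def get_value(a):
        diff = a - previous_state[0]
        previous_state[0] = a   # in-place update, visible to every holder of the list
        return diff
    return get_value

state = [1]
g = make_getter(state)
print(g(5), g(7))  # 4 2
print(state)       # [7] -- the caller's list was modified as a side effect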
I would like to use the resampling function from pandas, but applying my own custom function. The problem I'm facing is that the custom function returns a pandas DataFrame instead of a single array.
The following example illustrate my problem:
>>> import pandas as pd
>>> import numpy as np
>>> def f(data):
... return ((1+data).cumprod(axis=0)-1)
...
>>> data = np.random.randn(1000, 3)
>>> index = pd.date_range("20170101", periods=1000, freq="B")
>>> df = pd.DataFrame(data=data, index=index)
Now suppose I want to resample the business days to business end month frequency:
>>> resampler = df.resample("BM")
If I now apply my function f, I don't get the desired result. I would like to get the last row of the output from f.
>>> resampler.apply(f)
This is because the cumprod in my function f returns a pandas DataFrame. I could write f such that it returns just the last row. However, I would like to use this function in other places as well to return the whole DataFrame. This could be solved by introducing a flag like "last_row" in the function f that controls whether to return the complete frame or just the last row, but this solution seems rather ugly.
Just define your function f with a last_row parameter. You can default it to False so that it returns the entire dataframe; when True, it returns the last row:
def f(data, last_row=False):
    df = ((1+data).cumprod(axis=0)-1)
    if last_row:
        return df.iloc[-1]
    return df
Get the last row
df.resample('BM').apply(f, last_row=True)
0 1 2
2017-01-31 0.185662 -0.580058 -1.004879
2017-02-28 -1.004035 -0.999878 17.059846
2017-03-31 -0.995280 -1.000001 -1.000507
2017-04-28 -1.000656 -240.369487 -1.002645
2017-05-31 47.646827 -72.042190 -1.000016
....
Return all the rows as you already did.
df.resample('BM').apply(f)
I think you could refactor in the following way, which will be much faster for larger dataframes:
(1+df).resample('BM').prod() - 1
0 1 2
2017-01-31 -0.999436 -1.259078 -1.000215
2017-02-28 -1.221404 0.342863 9.841939
2017-03-31 -0.820196 -1.002598 -0.450662
2017-04-28 -1.000299 2.739184 -1.035557
2017-05-31 -0.999986 -0.920445 -2.103289
That gives the same answer as @TedPetrou's, although you can't tell here because we used different random seeds; you can easily test this yourself. The reason prod() can replace cumprod() is that taking the last row of a cumulative product within each resample bin is equivalent to taking the plain product over that bin. Admittedly this answer is part intuition and part reverse engineering, and I will update it as I double-check things...
For this relatively small dataframe with 1,000 rows, this way is only around twice as fast, but if you increase the rows you'll find this way scales much better (about 250x faster at 10,000 rows).
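If you want to check the equivalence on your own data, here is a quick sketch reusing f and df from the earlier snippets (a numerical spot check, not a proof or a benchmark):
import numpy as np

left = (1 + df).resample('BM').prod() - 1
right = df.resample('BM').apply(f, last_row=True)

# The two should agree up to floating-point noise.
print(np.allclose(left, right))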
Alternative approaches: These give different answers from the above (and from each other) but I wonder if they might be closer to what you are looking for?
(1+df).resample('BM').mean().expanding().apply( lambda x: x.prod() - 1)
(1+df).expanding().apply( lambda x: x.prod() - 1).resample('BM').mean()
I need to create a Pandas DataFrame from a large file with space delimited values and a row structure that depends on the number of columns.
Raw data looks like this:
2008231.0 4891866.0 383842.0 2036693.0 4924388.0 375170.0
The values may appear on one line or several; line breaks are ignored.
End result looks like this, if number of columns is three:
[(u'2008231.0', u'4891866.0', u'383842.0'),
(u'2036693.0', u'4924388.0', u'375170.0')]
Splitting the file into rows depends on the number of columns, which is stated in the meta part of the file.
Currently I split the file into one big list and split it into rows:
from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
(code is from itertools examples)
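For concreteness, the usage looks roughly like this (3 being the column count read from the meta part):
# Read all whitespace-separated tokens into one big list, then regroup into rows.
tokens = open('raw_data.csv').read().split()
rows = list(grouper(3, tokens))
# roughly: [('2008231.0', '4891866.0', '383842.0'), ('2036693.0', '4924388.0', '375170.0')]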
The problem is, I end up with multiple copies of the data in memory. With 500MB+ files this eats up memory fast, and Pandas has some trouble reading lists this big with large MultiIndexes.
How can I use Pandas file reading functionality (read_csv, read_table, read_fwf) with this kind of data?
Or is there another way of reading data into Pandas without auxiliary data structures?
Although it is possible to create a custom file-like object, this will be very slow compared to the normal usage of pd.read_table:
import pandas as pd
import re
filename = 'raw_data.csv'
class FileLike(file):
    """ Modeled after FileWrapper
    http://stackoverflow.com/a/14279543/190597 (Thorsten Kranz)
    """
    def __init__(self, *args):
        super(FileLike, self).__init__(*args)
        self.buffer = []

    def next(self):
        if not self.buffer:
            line = super(FileLike, self).next()
            self.buffer = re.findall(r'(\S+\s+\S+\s+\S+)', line)
        if self.buffer:
            line = self.buffer.pop()
        return line

with FileLike(filename, 'r') as f:
    df = pd.read_table(f, header=None, delimiter='\s+')
    print(len(df))
When I try using FileLike on a 5.8M file (consisting of 200000 lines), the above code takes 3.9 seconds to run.
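Note that FileLike subclasses Python 2's built-in file type. On Python 3, a rough equivalent (my own sketch, not part of the original answer, and it buffers the regrouped text in memory) would be to regroup the tokens with a generator and feed read_csv an in-memory buffer:
import io
import re
import pandas as pd

def regrouped_lines(path, n=3):
    # Same regex idea as FileLike above, generalized to n columns per row.
    pattern = re.compile(r'\S+(?:\s+\S+){%d}' % (n - 1))
    with open(path) as fh:
        for line in fh:
            for part in pattern.findall(line):
                yield part

buf = io.StringIO('\n'.join(regrouped_lines('raw_data.csv')))
df = pd.read_csv(buf, header=None, sep=r'\s+')
print(len(df))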
If I instead preprocess the data (splitting each line into 2 lines and writing the result to disk):
import fileinput
import sys
import re
filename = 'raw_data.csv'
for line in fileinput.input([filename], inplace=True, backup='.bak'):
    for part in re.findall(r'(\S+\s+\S+\s+\S+)', line):
        print(part)
then you can of course load the data normally into Pandas using pd.read_table:
with open(filename, 'r') as f:
    df = pd.read_table(f, header=None, delimiter='\s+')
    print(len(df))
The time required to rewrite the file was ~0.6 seconds, and now loading the DataFrame took ~0.7 seconds.
So, it appears you will be better off rewriting your data to disk first.
I don't think there is a way to separate rows with the same delimiter as columns.
One way around this is to reshape (this will most likely be a copy rather than a view, to keep the data contiguous) after creating a Series using read_csv:
s = pd.read_csv(file_name, lineterminator=' ', header=None)
df = pd.DataFrame(s.values.reshape(len(s)/n, n))
In your example:
In [1]: s = pd.read_csv('raw_data.csv', lineterminator=' ', header=None, squeeze=True)
In [2]: s
Out[2]:
0 2008231
1 4891866
2 383842
3 2036693
4 4924388
5 375170
Name: 0, dtype: float64
In [3]: pd.DataFrame(s.values.reshape(len(s)/3, 3))
Out[3]:
0 1 2
0 2008231 4891866 383842
1 2036693 4924388 375170
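One more hedged alternative, if the file really contains only the numeric values (i.e. the meta part is stripped or skipped first): let numpy read every whitespace-separated number in one pass, ignoring line breaks, and reshape once. Here n is the column count taken from the meta section:
import numpy as np
import pandas as pd

n = 3  # column count from the file's meta part (assumption)

# np.fromfile with a text separator treats all whitespace, including newlines,
# as delimiters, so rows spanning several lines are not a problem.
values = np.fromfile('raw_data.csv', sep=' ')
df = pd.DataFrame(values.reshape(-1, n))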