Importing time series datasets to MATLAB (all values are displayed as NaN)

I am stuck trying to run an economic model in MATLAB, at the data-importing step. For most of my code I'm using a freeware toolbox called IRIS.
I have a quarterly dataset with 14 variables and 160 data points. Essentially the dataset is a 161×15 matrix, including the dates (column 1) and the variable names (cells B1:O1).
The command for loading data in IRIS is
d = dbload('filename.csv')
but this isn't working: although MATLAB creates a 1×1 struct called d with a field for each variable, all cells display NaN (not a number).
Why is this happening?
I checked the tutorials on the IRIS toolbox website and tried loading a sample dataset from there using this command, but it leads to the same problem. Everywhere I checked, including the MATLAB help, this seems to be the correct command to use with IRIS, but somehow it isn't working.
I also tried loading the data directly using MATLAB functions rather than IRIS. The command I'm using is
d = dataset('XLSFile','filename.xls','ReadVarNames',true)
This works, and I can see all the variable names, but MATLAB can't read the dates. I tried xlsread and importdata as well, but they don't read the variable names. Is there any way for me to load the entire Excel sheet with both the variable names and the dates?
It would be best if I could get the IRIS command to work, since the rest of my code would be compatible with that.
The dataset looks somewhat like this:
HO_GDP HO_CPI HO_CPI HO_RS HO_ER HO_POIL....
4/1/1970 82.33 85.01 55.00 99.87 08.77
7/1/1970 54.22 8.98 25.22 95.11 91.77
10/1/1970 85.41 85.00 85.22 95.34 55.00
1/1/1971 85.99 899 8.89 85.1

You can use the TEXTSCAN function to read the CSV file in MATLAB:
%# some options
numCols = 15; %# number of columns
opts = {'Delimiter',',', 'MultipleDelimsAsOne',true, 'CollectOutput',true};
%# open file for reading
fid = fopen('filename.csv','rt');
%# read header line
headers = textscan(fid, repmat('%s',1,numCols), 1, opts{:});
%# read rest of data rows
%# 1st column as string, the other 14 as floating point
data = textscan(fid, ['%s' repmat('%f',1,numCols-1)], opts{:});
%# close file
fclose(fid);
%# collect data
headers = headers{1};
data = [datenum(data{1},'mm/dd/yyyy') data{2}];
The result for the above sample you posted (assuming values are comma-separated):
>> headers
headers =
'HO_GDP' 'HO_CPI' 'HO_CPI' 'HO_RS' 'HO_ER' 'HO_POIL'
>> data
data =
7.1962e+05 82.33 85.01 55 99.87 8.77
7.1971e+05 54.22 8.98 25.22 95.11 91.77
7.198e+05 85.41 85 85.22 95.34 55
7.1989e+05 85.99 899 8.89 85.1 0
Note how, in the last line of the code, we convert the date column to serial date numbers so that we can store the entire dataset in one numeric matrix. You can always go back to a string representation of the dates using the DATESTR function:
>> datestr(data(:,1))
ans =
01-Apr-1970
01-Jul-1970
01-Oct-1970
01-Jan-1971
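As an aside: if you end up using plain MATLAB rather than the dataset class, readtable (available in R2013b and later) keeps both the header names and the date column. A minimal sketch, with the column handling hedged since it depends on your MATLAB version:
T = readtable('filename.csv');   %# variable names are taken from the header row
dates = T{:,1};                  %# text or datetime, depending on MATLAB version
vals  = T{:,2:end};              %# numeric matrix holding the 14 series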

Related

Issue when importing dataset: Rows that contain more elements/columns than the previous row are divided between two rows

For a project I receive datasets in the form of text files. These text files are generated by the measuring software of a machine. The data in the files is separated by spaces and has no header. An example of a row:
Mo 27.06.2022 12:01:11 MP2 mv:(mean. 5s): 4,824 mg/mü org.C
When loading this data using
my_data <- read.table("File.txt", header = FALSE, sep = "", dec = ",", fill=TRUE, na.strings=c("","NA"))
I obtain 9 columns in the following format (example), as intended.
|Mo|27.06.2022|12:01:11|MP2|mv:(mean.| 5s):| 4,824| mg/mü| org.C|
However, sometimes the data set starts with a notification from the machine (example):
Mo 27.06.2022 11:42:04 {SE14} service requestend
When this happens, the 'regular' 9-column rows are separated across two rows (example):
Row 1: Mo|27.06.2022|11:58:26|MP1|mv:(mean.| 5s):|
Row 2: 7,858| mg/mü |org.C
How do I tell R not to perform this separation across two rows? As I understand it, this happens because, earlier in the text file, an input of only 6 columns is recognized.
This is a script that we will use for years to come, so help is greatly appreciated!
I've tried removing the fill argument from the read.table call, I have tried removing the na.strings, and of course I looked for answers on Stack Overflow, but I was not able to find this specific problem.
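A minimal sketch of one possible workaround, assuming the measurement rows always have exactly 9 fields: filter the lines before read.table() ever sees them, so the 6-field notification rows can no longer influence the column count.
lines  <- readLines("File.txt")
fields <- lengths(strsplit(lines, "\\s+"))   # number of space-separated fields per line
keep   <- lines[fields == 9]                 # drop the 6-field notification rows
my_data <- read.table(text = keep, header = FALSE, sep = "",
                      dec = ",", na.strings = c("", "NA"))  # fill= no longer needed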

Trying to load an hdf5 table with dataframe.to_hdf before I die of old age

This sounds like it should be REALLY easy to answer with Google but I'm finding it impossible to answer the majority of my nontrivial pandas/pytables questions this way. All I'm trying to do is to load about 3 billion records from about 6000 different CSV files into a single table in a single HDF5 file. It's a simple table, 26 fields, mixture of strings, floats and ints. I'm loading the CSVs with df = pandas.read_csv() and appending them to my hdf5 file with df.to_hdf(). I really don't want to use df.to_hdf(data_columns = True) because it looks like that will take about 20 days versus about 4 days for df.to_hdf(data_columns = False). But apparently when you use df.to_hdf(data_columns = False) you end up with some pile of junk that you can't even recover the table structure from (or so it appears to my uneducated eye). Only the columns that were identified in the min_itemsize list (the 4 string columns) are identifiable in the hdf5 table, the rest are being dumped by data type into values_block_0 through values_block_4:
table = h5file.get_node('/tbl_main/table')
print(table.colnames)
['index', 'values_block_0', 'values_block_1', 'values_block_2', 'values_block_3', 'values_block_4', 'str_col1', 'str_col2', 'str_col3', 'str_col4']
And any query like df = pd.DataFrame.from_records(table.read_where(condition)) fails with error "Exception: Data must be 1-dimensional"
So my questions are: (1) Do I really have to use data_columns = True which takes 5x as long? I was expecting to do a fast load and then index just a few columns after loading the table. (2) What exactly is this pile of garbage I get using data_columns = False? Is it good for anything if I need my table back with query-able columns? Is it good for anything at all?
This is how you can create an HDF5 file from CSV data using PyTables. (You could also use a similar process to create the HDF5 file with h5py.)
1. In a loop, read each CSV file with np.genfromtxt into a NumPy array.
2. After reading the first CSV file, write the data with the .create_table() method, referencing the array created in Step 1.
3. For additional CSV files, append the data with the .append() method, referencing the array created in Step 1.
4. End of loop.
Updated on 6/2/2019 to read a date field (mm/dd/YYYY) and convert it to a datetime object. Note the changes to the genfromtxt() arguments! The data used is added below the updated code.
import numpy as np
import tables as tb
from datetime import datetime

csv_list = ['SO_56387241_1.csv', 'SO_56387241_2.csv']
my_dtype = np.dtype([('a', int), ('b', 'S20'), ('c', float), ('d', float), ('e', 'S20')])

with tb.open_file('SO_56387241.h5', mode='w') as h5f:
    for PATH_csv in csv_list:
        # names=True takes the field names from the CSV header row
        csv_data = np.genfromtxt(PATH_csv, names=True, dtype=my_dtype, delimiter=',', encoding=None)
        # convert the date string in the fifth field (named 'my_date' via the CSV header)
        for row in csv_data:
            datetime_object = datetime.strptime(row['my_date'].decode('UTF-8'), '%m/%d/%Y')
            row['my_date'] = datetime_object
        if '/CSV_Data' in h5f:
            dset = h5f.root.CSV_Data
            dset.append(csv_data)
        else:
            dset = h5f.create_table('/', 'CSV_Data', obj=csv_data)
        dset.flush()
# the with block closes the file, so no explicit h5f.close() is needed
Data for testing:
SO_56387241_1.csv:
my_int,my_str,my_float,my_exp,my_date
0,zero,0.0,0.00E+00,01/01/1980
1,one,1.0,1.00E+00,02/01/1981
2,two,2.0,2.00E+00,03/01/1982
3,three,3.0,3.00E+00,04/01/1983
4,four,4.0,4.00E+00,05/01/1984
5,five,5.0,5.00E+00,06/01/1985
6,six,6.0,6.00E+00,07/01/1986
7,seven,7.0,7.00E+00,08/01/1987
8,eight,8.0,8.00E+00,09/01/1988
9,nine,9.0,9.00E+00,10/01/1989
SO_56387241_2.csv:
my_int,my_str,my_float,my_exp,my_date
10,ten,10.0,1.00E+01,01/01/1990
11,eleven,11.0,1.10E+01,02/01/1991
12,twelve,12.0,1.20E+01,03/01/1992
13,thirteen,13.0,1.30E+01,04/01/1993
14,fourteen,14.0,1.40E+01,04/01/1994
15,fifteen,15.0,1.50E+01,06/01/1995
16,sixteen,16.0,1.60E+01,07/01/1996
17,seventeen,17.0,1.70E+01,08/01/1997
18,eighteen,18.0,1.80E+01,09/01/1998
19,nineteen,19.0,1.90E+01,10/01/1999
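To verify the result, the table created above can be read back and queried directly; a short sketch (file and table names as in the code above, the condition is just an example):
import tables as tb
with tb.open_file('SO_56387241.h5', mode='r') as h5f:
    tbl = h5f.root.CSV_Data
    print(tbl.colnames)                  # real field names, no values_block_* here
    rows = tbl.read_where('my_int > 5')  # returns a NumPy structured array
    print(rows['my_str'])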

How to generate a FITS file from scratch

In this post they explain how to generate a FITS file from an ASCII file. However, I would also like to know how to put my own header and data into a FITS file. (Converting ASCII Table to FITS image)
For example, when I open a spectral FITS file with astropy (downloaded from a telescope), I can access the data and the header separately, i.e.:
In [1]:hdu = fits.open('observation.fits', memmap=True)
In [2]:header = hdu[0].header
In [3]:header
Out [3]:
SIMPLE = T / conforms to FITS standard
BITPIX = 8
NAXIS = 1
NAXIS1 = 47356
EXTEND = T
DATE = 'date' / file creation date (YYYY-MM-DDThh:mm:ss UT)
ORIGIN = 'XXX ' / European Southern Observatory
TELESCOP= 'XXX' / ESO Telescope Name
INSTRUME= 'Instrument' / Instrument used.
OBJECT = 'ABC ' / Original target.
RA = 30.4993 / xx:xx:xx.x RA (J2000) pointing
DEC = -20.0009 / xx:xx:xx.x DEC (J2000) pointing
CTYPE1 = 'WAVE ' / wavelength axis in nm
CRPIX1 = 0. / Reference pixel in z
CRVAL1 = 298.903594970703 / central wavelength
CDELT1 = 0.0199999995529652 / nm per pixel
CUNIT1 = 'nm ' / spectral unit
..
bla bla
..
END
In [4]:data = hdu[0].data
In [5]:data
Out [5]:array([ 1000, 1001, 1002, ...,
5.18091546e-13, 4.99434453e-13, 4.91280864e-13])
Let's assume I have data like below:
WAVE FLUX
1000 2.02e-12
1001 3.03e-12
1002 4.04e-12
..
bla bla
..
So, I'd like to generate a spectral FITS file with my own data (and its own header).
Mini question: now let's assume I generate the spectral FITS file correctly, but I realised that I forgot to take the logarithm of the WAVE values on the x axis (1000, 1001, 1002, ...). How can I do that without touching the FLUX values on the y axis (2.02e-12, 3.03e-12, 4.04e-12)?
FITS files are organized as one or more HDUs (Header Data Units), each consisting, as the name suggests, of one data object (generally a single array for an observation, though sometimes something else, like a table) and the header of metadata that goes with that data.
To create a file from scratch, especially an image, the simplest way is to directly create an ImageHDU object:
>>> from astropy.io import fits
>>> hdu = fits.ImageHDU()
Just as with an HDU read from an existing file, this HDU has a (mostly empty) header, and an empty data attribute that you can then assign to:
>>> hdu.data = np.array(<some numpy array>)
>>> hdu.header['TELESCOP'] = 'Gemini'
When you're satisfied you can write the HDU out to a file with:
>>> hdu.writeto('filename.fits')
(Note: A lot of the documentation you'll see demonstrates a more complex process of creating an HDUList object, appending the HDU to the HDU list, and then writing the full HDU list. This is only necessary if you're creating a multi-extension FITS file. For a single HDU, you can use hdu.writeto directly and the framework will handle the other structural details.)
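For the WAVE/FLUX table in the question, a binary table HDU may be a better fit than an image. A minimal sketch (the values come from the question; the file name and the OBJECT card are just illustrations):
import numpy as np
from astropy.io import fits

wave = np.array([1000.0, 1001.0, 1002.0])
flux = np.array([2.02e-12, 3.03e-12, 4.04e-12])
hdu = fits.BinTableHDU.from_columns([
    fits.Column(name='WAVE', format='D', array=wave),
    fits.Column(name='FLUX', format='D', array=flux),
])
hdu.header['OBJECT'] = 'ABC'    # your own metadata cards are set the same way
hdu.writeto('my_spectrum.fits')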
In general you don't need to manipulate the headers that describe the format of the data itself; that part is automatic and should not be touched by hand (FITS has the unfortunate misfeature of mixing information about data structure with actual metadata). You can see more examples of how to manipulate FITS data here: http://docs.astropy.org/en/stable/generated/examples/index.html#astropy-io
Your other question pertains to manipulating the WCS (World Coordinate System) of the image, and in particular for spectral data this can be non-trivial. I would ask a separate question about that with more details about what you hope to accomplish.

Importing a TermDocumentMatrix into R

I am working on a qualitative analysis project with the tm package in R. I have built a corpus and created a term-document matrix and, long story short, I need to edit my term-document matrix and conflate some of its rows. To do this I exported it out of R using
write.csv()
I then imported the CSV file back into R but am struggling to figure out how to get R to read it as a TermDocumentMatrix or DocumentTermMatrix.
I tried the suggestions in the following example code, to no avail.
It seems to keep reading my matrix as if it were a corpus, with each cell as a single document.
# change this file location to suit your machine
file_loc <- "C:\\Documents and Settings\\Administrator\\Desktop\\Book1.csv"
# change TRUE to FALSE if you have no column headings in the CSV
x <- read.csv(file_loc, header = TRUE)
require(tm)
corp <- Corpus(DataframeSource(x))
dtm <- DocumentTermMatrix(corp)
Is there any way to import a CSV matrix so that it is read as a TermDocumentMatrix or DocumentTermMatrix, without R treating each cell as a document?
You're not reading documents, so skip the Corpus() step. This should work directly:
myDTM <- as.DocumentTermMatrix(x, weighting = weightTf)
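One caveat, since the CSV itself isn't shown: x must be a matrix whose dimnames carry the terms and documents. If the first CSV column holds the term or document names, something along these lines (an untested sketch) may be needed first:
x <- as.matrix(read.csv(file_loc, header = TRUE, row.names = 1))
myDTM <- as.DocumentTermMatrix(x, weighting = weightTf)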
For next time, consider saving the TDM object as .RData, as this will not require conversion and is also much more efficient.
If you want to keep the exact format of any data, I would recommend using the save() function.
You can save any R object into a .RData file, and when you want to retrieve the data you can use the load() function.
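A minimal sketch of that round trip with a toy term-document matrix (the file name is arbitrary):
library(tm)
docs <- Corpus(VectorSource(c("first document", "second document")))
tdm  <- TermDocumentMatrix(docs)
save(tdm, file = "tdm.RData")   # the TermDocumentMatrix class is preserved
rm(tdm)
load("tdm.RData")               # `tdm` is back, no conversion required
inspect(tdm)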

Read API Into different cells Matlab

I am using an API (a link to sample data can be found HERE). The way I have it now, using urlread, it reads all of the data into one cell. How do I make it read into multiple cells? The ultimate goal is to extract location_name, so if you could help me with that too, that'd be great!
The sample data is provided as JSON, so you want a JSON parser, for example this one.
You use it like this:
>> url = 'http://www3.septa.org/hackathon/locations/get_locations.php?lon=-75.1903105&lat=39.9601978&type=rail_stations&radius=5';
>> contents = urlread(url);
>> data = parse_json(contents);
>> data = data{1}; % for some reason it returns a cell array with one element...
>> data{1}
ans =
location_id: 90004
location_name: '30th Street Station'
location_lat: '39.9566667'
location_lon: '-75.1816667'
distance: '0.5184'
location_type: 'rail_stations'
location_data: [1x1 struct]
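To get location_name for every returned station, something like this should work (a sketch against the structure shown above; newer MATLAB releases also ship webread and jsondecode, which make the manual parsing step unnecessary):
%# data is a cell array of structs, one per station
names = cellfun(@(s) s.location_name, data, 'UniformOutput', false);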