pandas dataframe for matrix of values

I have 3 things:
A time series: a 1D array of a certain length.
A matrix of stellar flux values with as many columns as the time series has entries (each star in the field was observed according to the time array), and ~3000 rows deep, as there are ~3000 observed stars in this field.
An array of ~3000 star IDs to go with the ~3000 time-series flux recordings mentioned above.
I'm trying to turn all of this into a pandas.DataFrame so I can extract time-series features with the tsfresh module.
Does anyone have an idea of how to do this? It should read somewhat like a table, with a row of IDs as headers, a column of time values, and ~3000 columns of flux values for the stars.
I've seen examples of this done in the tsfresh documentation, i.e. multiple 'value' columns (in this case they would be flux values), but no indication of how to construct them.
This data frame will then be used for machine learning if that makes any difference.
Many thanks for any help that can be offered!
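For concreteness, here is one way to build the layout described above. This is a minimal sketch: the array names and sizes (time, flux, star_ids) are assumptions for illustration, and the final melt step produces the flat id/time/value layout that tsfresh also accepts.
import numpy as np
import pandas as pd

# Hypothetical inputs; names and sizes are assumptions for illustration.
n_stars, n_times = 3000, 500
time = np.linspace(0.0, 30.0, n_times)            # 1D time-series array
flux = np.random.rand(n_stars, n_times)           # one row of fluxes per star
star_ids = [f"star_{i}" for i in range(n_stars)]  # one ID per star

# Wide table: one 'time' column plus one flux column per star ID.
df = pd.DataFrame(flux.T, columns=star_ids)
df.insert(0, "time", time)

# Long ("flat") format with id/sort/value columns for tsfresh:
long_df = df.melt(id_vars="time", var_name="id", value_name="flux")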

Related

Extra Row in Dataframe

I am attempting to create a dataframe from two vectors.
Each vector comprises SIX elements, which were the products of previous steps; here they are:
CLsummary<-c(MaxCL, MinCL, MeanCL, MedianCL, RangeCL, SDCL)
PRsummary<-c(MaxPR, MinPR, MeanPR, MedianPR, RangePR, SDPR)
But when I create the data frame, like below, I get SEVEN rows of data:
FHsummary<-data.frame(CLsummary, PRsummary)
Specifically, the fifth row seems to be a duplicate of the MinCL and MinPR data.
What am I doing wrong?
Thank you!
When I just replicate your code, using strings for the objects in the vectors, the final dataframe has 6 rows as expected. You should take a look at the objects that make up your vectors: what type are they, and how were they made? My guess is that one of the objects in each vector is itself a vector with two elements.

Slice dataframe according to unique values into many smaller dataframes

I have a large dataframe (14,000 rows). The columns include 'title', 'x' and 'y' as well as other random data.
For a particular title, I've written code that performs an analysis using the x and y values for a subset of this data (the specifics are unimportant here).
For this title (which is something like "Part number Y1-17") there are about 80 rows.
At the moment I have only worked out how to get my code to work on 1 subset of titles (i.e. one set of rows with the same title) at a time. For this I've been making a smaller dataframe out of my big one using:
df = pd.read_excel(r"mydata.xlsx")
a = df.loc[df['title'].str.contains('Y1-17')]
But given that there are about 180 of these smaller datasets to analyse, I don't want to do this manually.
My question is: is there a way to make all of the smaller dataframes automatically, by slicing the data on the unique 'title' values? In all the help I've found, it seems you need to specify the 'title' to make a subset; I want to subset all of them without listing every title name.
I've searched quite a lot and haven't found anything, however I am a beginner so it's very possible I've missed some really basic way of doing this.
I'm not sure if it's important information, but the modules I'm working with are pandas and numpy.
Thanks for any help!
You can use pandas groupby.
For example:
df_dict = {title: group for title, group in df.groupby('title', sort=False)}
This creates a dictionary of DataFrames, each containing all the columns and only the rows pertaining to one unique value of title.
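You can then loop over the dictionary to run your analysis on every subset; analyse below is a hypothetical placeholder for your own routine:
for title, subset in df_dict.items():
    analyse(subset)  # runs on the rows for one title at a time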

LeavePGroupsOut For multidimensional array

I am working on a research problem and, because the dataset of subjects is small, I am trying to implement leave-N-out style analyses.
Currently I am doing this ad hoc, and I stumbled upon scikit-learn's LeavePGroupsOut function.
I read the docs, but I can't work out how to use it with a multidimensional array.
My data are the following: I have 50 subjects, around 20 entries per subject (not fixed), and 20 features per entry, with a ground-truth value (0 or 1) for every entry.
Well the documentation is actually pretty clear:
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeavePGroupsOut.html#sklearn.model_selection.LeavePGroupsOut
In your case you need to stack your data so that you can provide a group index for every entry. Your feature array will then have shape 50*20 data points by 20 features, i.e. (1000, 20), so your group array also needs to have shape (1000,).
Then you need to define the cross validation via
lpgo = LeavePGroupsOut(n_groups=n_groups)
It's important to note that this will yield all possible combinations of left-out test groups.
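For illustration, a minimal sketch with random data shaped as in the question (a fixed 20 entries per subject is a simplification; all names here are assumptions):
import numpy as np
from sklearn.model_selection import LeavePGroupsOut

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))        # 50 subjects * 20 entries, 20 features
y = rng.integers(0, 2, size=1000)      # ground-truth value per entry
groups = np.repeat(np.arange(50), 20)  # subject index for every entry

lpgo = LeavePGroupsOut(n_groups=2)     # leave 2 subjects out per split
for train_idx, test_idx in lpgo.split(X, y, groups):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # fit and evaluate a model here
    break  # only the first of many splits shown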

Data padding multidimensional statistics

I have a dataframe with columns for different characteristics (properties) of stars and rows for measurements of different stars, something like this:
property    A      A_error    B      B_error    C      C_error    ...
star1
star2
star3
...
In some measurements the error for a specific property is -1.00, which means the measurement was faulty.
In such a case I want to discard the measurement.
One way to do so is by eliminating the entire row (along with the other properties, whose errors were not -1.00).
I think it's possible instead to fill in the faulty measurement with a value generated from the distribution of all the other measurements, meaning: given the other properties, which are fine, this property should take the value that reduces the error of the entire dataset.
Is there a proper name for the idea I'm referring to?
How would you apply such an algorithm?
I'm a student on a solo project, so I would really appreciate answers that also elaborate on the theory (:
Edit
After further reading, I think what I was referring to is called regression imputation.
So I guess my question is: how can I implement multidimensional linear regression on a dataframe in the most efficient way?
Thanks!
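For what it's worth, scikit-learn's IterativeImputer implements this kind of round-robin regression imputation. A minimal sketch, assuming a -1.00 in an *_error column flags the paired measurement as faulty (the column names and values are made up):
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Hypothetical frame: each property column paired with an *_error column.
df = pd.DataFrame({
    "A": [1.0, 2.0, 3.0, 4.0], "A_error": [0.1, -1.00, 0.2, 0.1],
    "B": [4.0, 5.0, 6.0, 7.0], "B_error": [0.3, 0.1, -1.00, 0.2],
})

# Mask each measurement whose error flag is -1.00 as missing.
for col in ["A", "B"]:
    df.loc[df[col + "_error"] == -1.00, col] = np.nan

# Each column with missing values is regressed on the others and imputed.
imputer = IterativeImputer(random_state=0)
df[["A", "B"]] = imputer.fit_transform(df[["A", "B"]])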

Looping through columns to conduct data manipulations in a data frame

One struggle I have with Python pandas is repeating the same coding scheme for a large number of columns. For example, the code below creates a new column age_b in a data frame called data. How do I easily loop through a long list (100s or even 1000s) of numeric columns, do the exact same thing to each, with each newly created column named after the existing one plus a prefix or suffix string such as "_b"?
labels = [1,2,3,4,5]
data['age_b'] = pd.cut(data['age'], bins=5, labels=labels)
In general, I have many simple data frame column manipulations or calculations, and it's easy to write the code for one column. However, I often want to repeat the same process for dozens of columns, and that's when I get bogged down, because most functions or manipulations work for one column but aren't easily repeated across many. It would be nice if someone could suggest a looping code "structure". Thanks!
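One common structure is a plain loop over the numeric column names, building each new name from the old one. A minimal sketch (the sample frame is made up):
import numpy as np
import pandas as pd

# Hypothetical frame standing in for 'data' from the question.
data = pd.DataFrame(np.random.rand(100, 3), columns=["age", "income", "score"])

labels = [1, 2, 3, 4, 5]
# Apply the same binning to every numeric column; each new column
# gets the old name plus a '_b' suffix.
for col in data.select_dtypes(include="number").columns:
    data[col + "_b"] = pd.cut(data[col], bins=5, labels=labels)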