How to get scikit-learn to find a simple non-linear relationship - pandas

I have some data in a pandas dataframe (although pandas is not the point of this question). As an experiment I made column ZR as column Z divided by column R. As a first step using scikit-learn I wanted to see if I could predict ZR from the other columns (which should be possible, as I just made it from R and Z). My steps have been:
import numpy as np
from sklearn import linear_model, preprocessing

columns = ['R', 'T', 'V', 'X', 'Z']
for c in columns:
    results[c] = preprocessing.scale(results[c])
results['ZR'] = preprocessing.scale(results['ZR'])
labels = results["ZR"].values
features = results[columns].values
regr = linear_model.LinearRegression()
regr.fit(features, labels)
print(regr.coef_)
print(np.mean((regr.predict(features) - labels) ** 2))
This gives
[ 0.36472515 -0.79579885 -0.16316067 0.67995378 0.59256197]
0.458552051342
I think the preprocessing is wrong, as it destroys the Z/R relationship. What's the right way to preprocess in this situation?
Is there some way to get near 100% accuracy? Linear regression is the wrong tool, as the relationship is non-linear.
The five features are highly correlated in my data. Is non-negative least squares implemented in scikit-learn? (I can see it mentioned in the mailing list but not the docs.) My aim would be to get as many coefficients set to zero as possible.

You should easily be able to get a decent fit using random forest regression, without any preprocessing, since it is a non-linear method:
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=10, max_features=2)
model.fit(features, labels)
You can play with the parameters to get better performance.
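If you want to tune those parameters systematically, here is a minimal sketch using scikit-learn's GridSearchCV; the grid values are arbitrary starting points, not recommendations for this data:
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# arbitrary example grid; widen or narrow it based on your data
param_grid = {'n_estimators': [10, 50, 100], 'max_features': [1, 2, 3]}
search = GridSearchCV(RandomForestRegressor(), param_grid, cv=5)
search.fit(features, labels)
print(search.best_params_, search.best_score_)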

The solution is not as simple and can be heavily influenced by your data.
If your variables R and Z are bounded (for example, 0 < R < 1 and -3 < Z < 2) then you should be able to get a good estimate of the output variable using a neural network.
Using a neural network you should be able to estimate your output even without preprocessing the data, using all the variables as input.
(Of course, here you will have to solve a minimization problem.)
Sklearn does not implement neural networks, so you should use pybrain or fann.
If you want to preprocess the data in order to make the minimization problem easier, you can try to extract the right features from the predictor matrix.
I do not think there are many tools for non-linear feature selection. I would try to estimate the important variables from your dataset, in this order:
1. lasso (see the sketch below)
2. sparse PCA
3. decision trees (you can actually use them for feature selection), but I would avoid this as much as possible
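As a rough sketch of the lasso step, reusing the features/labels arrays from the question (alpha=0.1 is an arbitrary starting value, not tuned):
from sklearn.linear_model import Lasso

lasso = Lasso(alpha=0.1)  # larger alpha drives more coefficients to zero
lasso.fit(features, labels)
print(lasso.coef_)  # zero entries mark features the model dropped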
If this is a toy problem, I would suggest you move towards something more standard.
You can find a lot of examples on Google.

Related

A huge number of discrete features

I'm developing a regression model, but I ran into a problem when preparing the data: 17 out of 20 features are categorical, and each of them has many categories. Using one-hot encoding, my data table is transformed into a 10000x6000 table. How should I prepare this type of data?
I used PCA to try to reduce the dimensionality, but even 70% of the variance requires 2500 features. That's why I'm asking here.
Unfortunately, I can't attach the dataset, as it is confidential.
How do I prepare the data to achieve the best results in the learning process?
Can the data be mapped more accurately in a non-linear manner? If so, you might want to try using an autoencoder for dimensionality reduction.
One thing to note about PCA is that it computes an orthogonal projection of the data onto a linear subspace, so it can only give a linear mapping of the data. Autoencoders, on the other hand, can give you a non-linear mapping, and so are able to represent a greater amount of the variance in the data in fewer dimensions. Just be sure to use non-linear activation functions in your autoencoder architecture.
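As a rough sketch of what such an autoencoder could look like in Keras (the layer sizes 6000 -> 256 -> 64 are placeholders, not tuned values, and X stands for your one-hot matrix):
from tensorflow import keras

input_dim = 6000   # width of the one-hot encoded table from the question
encoding_dim = 64  # assumed target dimensionality

inputs = keras.Input(shape=(input_dim,))
encoded = keras.layers.Dense(256, activation='relu')(inputs)
encoded = keras.layers.Dense(encoding_dim, activation='relu')(encoded)
decoded = keras.layers.Dense(256, activation='relu')(encoded)
decoded = keras.layers.Dense(input_dim, activation='sigmoid')(decoded)

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(X, X, epochs=20, batch_size=32)  # reconstruct the inputs

encoder = keras.Model(inputs, encoded)
X_reduced = encoder.predict(X)  # the 64-dimensional representation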
It really depends on exactly what you are trying to do. Getting a covariance matrix (and also a PCA decomposition) will give you great insight into which classes tend to come together (and this requires one-hot encoded categories), but training a model off of that might be problematic.
In general, it really depends on the model you want to use.
One option would be a random forest. They can definitely be used for regression, though they need to be trained specifically for that. SKLearn has a class just for this:
https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
The benefit of a random forest is that it is great for tabular data (as is the case here), and it can easily be trained using numerical values for class features, meaning your data vector need only be of dimension 20!
Decision tree models (such as random forests) have been shown to outperform deep learning in many cases, and this may be one of them.
TL;DR: If you use a random forest, it can learn even with numerical values for categories, and you can avoid creating incredibly large data vectors.
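A minimal sketch of that approach, assuming your data live in a DataFrame df with a 'target' column (both names are placeholders for your actual data):
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import OrdinalEncoder

cat_cols = df.select_dtypes(include='object').columns  # the 17 categorical features
df[cat_cols] = OrdinalEncoder().fit_transform(df[cat_cols])

model = RandomForestRegressor(n_estimators=100)
model.fit(df.drop(columns='target'), df['target'])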

Understanding Time2Vec embedding for implementing this as a keras layer

The paper Time2Vec (link; the relevant theory is in section 4) shows an approach to including a time embedding for features to improve model performance. I would like to give this a try. I found an implementation as a Keras layer, which I changed a little bit. Basically it creates two matrices for one feature:
(1) linear = w * x + b
(2) periodic = sin(w * x + b)
Currently I choose this feature manually. Concerning the paper, there are a few things I don't understand. The first thing is the term k as the number of sinusoids. The authors use up to 64 sinusoids. What does this mean? I have just 1 sinusoid at the moment, right? Secondly, I'm about to put every feature I have through the sine transformation; for my dataset that would make 6 (sinusoid) periodic features. The authors use only one linear term. How should I choose the feature for the linear term? Unfortunately the code from the paper is not available anymore. Has anyone worked with time embeddings, or even with this particular approach?
From my limited understanding, the linear transformation of time is a fixed element of the produced embedding, and the parameter k lets you select how many different learned periodic time representations you want to use in your model. So the resulting embedding has a size of k+1 elements.
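To make this concrete, here is a minimal sketch of a Time2Vec Keras layer following section 4 of the paper: one learned linear term plus k learned sinusoids, so the output has k+1 elements per input time step:
import tensorflow as tf
from tensorflow import keras

class Time2Vec(keras.layers.Layer):
    def __init__(self, k, **kwargs):
        super().__init__(**kwargs)
        self.k = k  # number of sinusoids

    def build(self, input_shape):
        # single linear term: w0 * t + b0
        self.w0 = self.add_weight(shape=(1,), initializer='uniform', name='w0')
        self.b0 = self.add_weight(shape=(1,), initializer='uniform', name='b0')
        # k periodic terms: sin(w_i * t + b_i)
        self.w = self.add_weight(shape=(1, self.k), initializer='uniform', name='w')
        self.b = self.add_weight(shape=(1, self.k), initializer='uniform', name='b')

    def call(self, t):
        # t: (batch, 1) -> output: (batch, k + 1)
        linear = self.w0 * t + self.b0
        periodic = tf.sin(tf.matmul(t, self.w) + self.b)
        return tf.concat([linear, periodic], axis=-1)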

Linear regression graph interpretation

I have a histogram showing the frequency of some data.
I have two types of files: Pdbs and Uniprots. Each Uniprot file is associated with a certain number of Pdbs. So this histogram shows how many Uniprot files are associated with 0 Pdb files, 1 Pdb file, 2 Pdb files, ..., up to 80 Pdb files.
Y-axis is in a log scale.
I did a regression on the same dataset and this is the result.
Here is the code I'm using for the regression graph:
# Fitting Simple Linear Regression to the Training set
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

regressor = LinearRegression()
x = np.array(x).reshape((-1, 1))
y = np.array(y)
regressor.fit(x, y)
# Predicting the Test set results
y = regressor.predict(x)
# Visualizing the Training set results
plt.scatter(x, y, color = 'red')
plt.plot(x, regressor.predict(x), color = 'blue')
plt.title('Uniprot vs Pdb')
plt.xlabel('Pdbs')
plt.ylabel('Uniprot')
plt.savefig('regression_test.png')
plt.show()
Can you help me interpret the regression graph?
I can understand that as the number of Pdbs increases, there will be less Uniprots associated with them.
But why is it going negative on the y-axis? Is this normal?
The correct way to interpret this linear regression is "this linear regression is 90% meaningless." In fact, some of that 90% is worse than meaningless, it's downright misleading, as you have pointed out with the negative y values. OTOH, there is about 10% of it that we can interpret to good effect, but you have to know what you're looking for.
The Why: Amongst other often less apparent things, one of the assumptions of a linear regression model is that the data are more-or-less linear. If your data aren't linear with some very regular "noise" added in, then all bets are off. Your data aren't linear. They're not even close. So all bets are off.
Since all bets are off, it is helpful to examine the sorts of things that we might otherwise have wanted to do with a linear regression model. The hardest thing is extrapolation, which is predicting y outside of the original x range. Your model's abilities at extrapolation are pretty well illustrated by its behavior at the endpoints. This is where you noticed "hey, my graph is all negative!". This is, in a very simplistic sense, because you took a linear model, fit it to data that did not satisfy the "linear" assumption, and then tried to make it do the hardest thing for a model to do.
The second hardest thing for a model to do is interpolation, which is making predictions inside the original x range. This linear regression isn't very good at that either.
Further down the list: if we simply look at the slope of the linear regression line, we can get a general idea of whether our data are increasing or decreasing. Note that even this bet is off if your data aren't linear. However, it generally works out in a not-entirely-useless sort of way for large classes of even non-linear real-world data. So, this one thing your linear regression model gets kind of right: your data are decreasing, and the linear model is also decreasing. That's the 10% I spoke of previously.
What to do: Try to fit a better model. You say that you log-transformed your original data, but it doesn't look like that helped much. In general, the whole point of "transforming" data is to make it look linear. The log transform is helpful for exponential data. If your starting data didn't look exponential-like, then the log transform probably isn't going to help. Since you are trying to do density estimation, you almost certainly want to fit a probability distribution to this stuff, for which you don't even need to do a transform to make the data linear. Here is another Stack Overflow answer with details about how to fit a beta distribution to data. However, there are many options.
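For reference, a minimal SciPy sketch of that idea (the counts array and the rescaling into (0, 1) are assumptions about your data, and the beta distribution is just one candidate among many):
import numpy as np
from scipy import stats

counts = np.array(y)  # Pdbs-per-Uniprot frequencies, assumed to be your raw data
data = (counts + 0.5) / (counts.max() + 1)  # squash into the open interval (0, 1)
a, b, loc, scale = stats.beta.fit(data)
print(a, b, loc, scale)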
Can you help me interpret the regression graph?
Linear regression tries to build a line between the x-variables and a target y-variable that approximates the 'real' values as closely as possible (a graph of this can also be found here: https://en.wikipedia.org/wiki/Linear_regression):
The line here is the blue line, and the original points are the black dots. The goal is to minimize the error (black dots to blue line) over all black dots.
The regression line is the blue line. That means you can describe a Uniprot with a linear equation y = m*x + b, which has constant values m=0.1 (example) and b=0.2 (example), with x = Pdbs.
I can understand that as the number of Pdbs increases, there will be less Uniprots associated with them. But why is it going negative on the y-axis?
This is normal; you could plot this line out to -10000000 Pdbs or whatever. It is just an equation, not a real line.
But there is one mistake in your plot: you need to plot the original black dots as well, don't you?
y = regressor.predict(x)
plt.scatter(x, y, color = 'red')
This is wrong; you should use the original values instead, to get a plot like my graphic. Something like:
y = df['Uniprot']
plt.scatter(x, y, color = 'red')
should help you understand it.
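Putting it together, a corrected sketch (the column names df['Pdbs'] and df['Uniprot'] are assumptions about your dataframe):
x = np.array(df['Pdbs']).reshape((-1, 1))
y = np.array(df['Uniprot'])
regressor.fit(x, y)
plt.scatter(x, y, color='red')                   # the original data points
plt.plot(x, regressor.predict(x), color='blue')  # the fitted regression line
plt.show()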

How to build a neural network that infers a set of values for the most important feature?

My task here is to find a way to get suggested values for the most important feature or features. By changing the features to the suggested values, I want the classification result to change as well.
Snapshot of dataset
The following are the procedures that I have tried so far:
Import dataset (shape: 1162 by 22)
Build a simple neural network (2 hidden layers)
Since the dependent variable is simply either 0 or 1 (a classification problem), I one-hot encoded the variable, so it's either [0, 1] or [1, 0].
After splitting into train & test data, I trained my NN model and got an accuracy of 77.8%.
To know which feature (out of 21) is the most important in determining 0 or 1, I trained the data using a random forest classifier (scikit-learn), also got 77.8% accuracy, and then used the 'feature_importances_' attribute offered by the random forest classifier.
As a result, I found out that a feature named 'a_L4' ranks the highest in terms of relative feature importance.
The feature 'a_L4' is allowed to have a value from 0 to 360 since it represents an angle. In the original dataset, 'a_L4' takes only 12 values: [5, 50, 95, 120, 140, 160, 185, 230, 235, 275, 320, 345].
I augmented the original dataset by directly adding all 12 possible values for each case, giving a new dataset of shape (1162x12 by 22).
I imported the augmented dataset and tested it on the previously trained NN model. The result was a FAILURE. There was hardly any change in the classification, meaning almost no '1's switched to '0's.
My conclusion was that changing the values of 'a_L4' was not enough to bring about a change in the classification. So I additionally did the same procedure again for the 2nd most important feature, which in this case was 'b_L7_p1'.
So, writing out all the possible values that the two most important features can have, the new dataset becomes of shape (1162x12x6 by 22). 'b_L7_p1' is allowed to have only 6 different values, hence the multiplication by 6.
Again the result was a FAILURE.
So, my question is: what might I have done wrong in the procedure described above? Do I need to keep searching for more important features and augment the data with all the possible values they can have? Since this is a tedious task with multiple manual steps that leads to a huge dataset, I wish there were a way to construct an inference-based NN model that can directly give out the suggested values of a certain feature or features.
I am relatively new to this field of research, so could anyone please tell me some keywords that I should search for? I cannot find any work or papers regarding this issue on Google.
Thanks in advance.
In this case I would approach the problem in the following way:
Normalize the whole dataset. As you can see from the dataset, your features have different scales. It is critically important to bring all features onto the same scale. Have a look at: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
The second thing I would do is train and evaluate a model (it can be whatever you want) to get a so-called baseline model.
Then, I would try PCA to see whether all features are really needed. Maybe you are introducing unnecessary sparsity into the model. See: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
For example, if you set n_components in PCA to 0.99, then you are reducing the number of features while retaining 99% of the explained variance.
Then I would train the model to see whether there is any improvement. Please note that the normalization alone should already yield an improvement.
If I wanted to see from the dataset itself which features are important, I would use: https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html This selects a specified number of features based on a statistical test, for example: https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.chi2.html
Train a model and evaluate it again to see whether there is some improvement.
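A combined sketch of these steps, assuming X is your feature matrix and y your labels (k=10 is an arbitrary choice, and f_classif is substituted for chi2 because chi2 requires non-negative inputs, which standard scaling breaks):
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X_scaled = StandardScaler().fit_transform(X)

# keep enough components to explain 99% of the variance
X_reduced = PCA(n_components=0.99).fit_transform(X_scaled)

# or: pick the k most informative original features
X_selected = SelectKBest(f_classif, k=10).fit_transform(X_scaled, y)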
Also, you should be aware that NNs can perform feature engineering by themselves, so computing feature importance beforehand is somewhat redundant.
Let me know whether you will see any improvements.

Why is tf.transpose so important in a RNN?

I've been reading the docs to learn TensorFlow and have been struggling on when to use the following functions and their purpose.
tf.split()
tf.reshape()
tf.transpose()
My guess so far is that:
tf.split() is used because inputs must be a sequence.
tf.reshape() is used to make the shapes compatible (incorrect shapes tend to be a common problem / mistake for me). I used numpy for this before, but I'll probably stick to tf.reshape() now. I am not sure if there is a difference between the two.
tf.transpose() swaps the rows and columns, from my understanding. If I don't use tf.transpose(), my loss doesn't go down. If the parameter values are incorrect, the loss doesn't go down. So the purpose of me using tf.transpose() is so that my loss goes down and my predictions become more accurate.
This bothers me tremendously because I'm using tf.transpose() only because I have to, with no understanding of why it's such an important factor. I'm assuming that if it's not used correctly, the inputs and labels can end up in the wrong positions, making it impossible for the model to learn. If this is true, how can I go about using tf.transpose() so that I am not reliant on figuring out the parameter values via trial and error?
Question
Why do I need tf.transpose()?
What is the purpose of tf.transpose()?
Answer
Why do I need tf.transpose()? I can't imagine why you would need it unless you coded your solution from the beginning to require it. For example, suppose I have 120 student records with 50 stats per student, and I want to use that to try to make a linear association with their chance of taking each of 3 classes. I'd state it like so:
c = r x m
r = the records, a matrix with a shape of [120x50]
m = the induction matrix, with a shape of [50x3]
c = the chance of each student taking each of the three courses, a matrix with a shape of [120x3]
Now if, instead of making m [50x3], we goofed and made it [3x50], then we'd have to transpose it before multiplication.
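In TensorFlow terms, a tiny sketch of that fix (the shapes follow the example above):
import tensorflow as tf

r = tf.random.normal([120, 50])    # 120 students, 50 stats each
m_bad = tf.random.normal([3, 50])  # the "goofed" induction matrix

c = tf.matmul(r, tf.transpose(m_bad))  # transpose restores [50, 3], so c is [120, 3]
print(c.shape)  # (120, 3)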
What is the purpose of tf.transpose()?
Sometimes you just need to swap rows and columns, as above. Wikipedia has a fantastic page on it. The transpose also has some useful algebraic properties for matrix math, such as how it distributes over a product in reverse order ((AB)^T = B^T A^T) and how it commutes with the inverse ((A^T)^-1 = (A^-1)^T).
Summary
I don't think I've ever used tf.transpose in any CNN I've written.