There are about 20 features in my data, and I wonder if there is any way to check how each column influences the final prediction. For example, if a column has a negative impact on the prediction, it would be better to get rid of it.
sklearn has the DecisionTreeClassifier, which exposes the attribute feature_importances_.
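For a quick look, here is a minimal sketch (the make_classification toy data stands in for your 20-feature dataset):

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy stand-in for your data: 20 numeric features, binary target
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)

# One score per column, summing to 1. A near-zero score means the tree
# barely used that feature, which is a hint it may be safe to drop.
for i, score in enumerate(clf.feature_importances_):
    print(f"feature {i}: {score:.3f}")

Note that the importances are non-negative; a feature that hurts generalization shows up as low importance rather than a negative score.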
I also want to know the answer. Maybe this can help us.
I want to predict a time series value (regression task), but I need to tell the machine that recent observations in a batch relate more strongly to the label than older ones.
In other words, I want to weight the input values. How can this be done?
You can use the autocorrelation of Y with itself at a time lag, like:

from pandas.plotting import autocorrelation_plot

time_lag = 4
x_train = dataframe[0:-time_lag]  # inputs: all rows except the last time_lag
y_train = dataframe[time_lag:]    # targets: the same series shifted by the lag

# autocorrelation_plot expects a single series of values
autocorrelation_plot(dataframe)
You will notice that the autocorrelation decreases as the lag grows.
However, a properly trained neural network will learn this without being explicitly programmed: since the correlation with recent values is naturally stronger, the corresponding weights will end up larger in absolute value.
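If you do want to weight observations explicitly, many sklearn estimators accept a sample_weight argument to fit. A minimal sketch, where the exponential decay rate of 0.9 is an illustrative choice and the toy X, y stand in for your lagged features and labels:

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy stand-in: rows ordered oldest to newest
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.1, size=100)

# Exponentially decaying weights: the most recent rows count the most
decay = 0.9
weights = decay ** np.arange(len(y))[::-1]

model = LinearRegression()
model.fit(X, y, sample_weight=weights)  # weighted least squares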
I have a dataframe which contains three more or less significant correlations between the target column and other columns (LinarRegressionModel.coef_ from sklearn shows 57, 97 and 79), and I don't know which model to choose: should I use only the most correlated column for the regression, or a regression with all three predictors? Is there any way to compare the models' effectiveness? Sorry, I'm very new to data analysis; I couldn't google any tools for this task.
First of all, you must know that when we choose the best model to apply to new data, we are choosing the model that best fits out-of-sample data, that is, samples that were not present during training; after all, you want to predict new cases. In your case, a new number.
So how can we do this? The best approach is to use metrics that help us decide which model fits our dataset better.
There are several common metrics for regression:
MAE: Mean absolute error is the mean of the absolute values of the errors. It is the easiest metric to understand, since it is just the average error.
MSE: Mean squared error is the mean of the squared errors. It is more popular than the mean absolute error because it puts more weight on large errors.
RMSE: Root mean squared error is the square root of the mean squared error. It is one of the most popular evaluation metrics because it is expressed in the same units as the response y, making it easy to interpret.
RAE: Relative absolute error takes the total absolute error and normalizes it by the total absolute error of a simple predictor that always predicts the mean of y.
You can work with any of these, but I highly recommend using MSE and RMSE.
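As an illustration of comparing the one-predictor and three-predictor models, a minimal sketch using cross-validated RMSE (the toy dataframe and the column names a, b, c stand in for your data):

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy stand-in: three predictors and a target
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=['a', 'b', 'c'])
df['target'] = 2 * df['a'] + 0.5 * df['b'] + rng.normal(size=200)

# Compare the single most correlated predictor against all three
for cols in (['a'], ['a', 'b', 'c']):
    scores = cross_val_score(LinearRegression(), df[cols], df['target'],
                             scoring='neg_mean_squared_error', cv=5)
    print(cols, 'RMSE:', np.sqrt(-scores.mean()))

Whichever option gives the lower cross-validated RMSE is the better bet for out-of-sample data.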
ML beginner here.
I have a dataset containing GPA, GRE, TOEFL, and SOP & LOR rankings (out of 5), etc. (all numerical), and a final column that states whether or not the applicant was admitted to a university (0 or 1), which is what we'll use as y_train.
I'm supposed to not just predict the labels, but also calculate the probability of each person being admitted.
Edit: following the first comment, I built a logistic regression model, and with some googling I found predict_proba from sklearn and tried implementing it. There weren't any syntax errors, but the values given by predict_proba were horribly wrong.
Link: https://github.com/tarunn2799/gre-pred/blob/master/GRE%20Admission%20Probability-%20Extraaedge.ipynb
Please help me find where I've gone wrong, and share any tips to reduce the loss.
thank you!
I read your notebook, but I'm confused about why you think the predict_proba results are horribly wrong.
Is the prediction accuracy not good, or is the format of predict_proba not what you expected?
You could use sklearn.metrics.accuracy_score() and sklearn.metrics.confusion_matrix() to check your predicted labels, or use sklearn.metrics.roc_auc_score() to check the result of predict_proba. Checking both the train and test parts is better.
I think the format of predict_proba is correct; alternatively, you could try predict_log_proba() to calculate the log probabilities.
Hope this helps.
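A minimal sketch of those checks, with toy data standing in for the notebook's dataset:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in for the admissions data
X, y = make_classification(n_samples=400, n_features=7, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Hard labels for accuracy and the confusion matrix
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))

# predict_proba returns one column per class; take P(admitted = 1)
proba = clf.predict_proba(X_test)[:, 1]
print(roc_auc_score(y_test, proba))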
I have some data in a pandas dataframe (although pandas is not the point of this question). As an experiment I made a column ZR as column Z divided by column R. As a first step with scikit-learn I wanted to see if I could predict ZR from the other columns (which should be possible, as I just made it from R and Z). My steps have been:
import numpy as np
from sklearn import preprocessing, linear_model

# Scale each predictor column and the target to zero mean, unit variance
columns = ['R', 'T', 'V', 'X', 'Z']
for c in columns:
    results[c] = preprocessing.scale(results[c])
results['ZR'] = preprocessing.scale(results['ZR'])

labels = results['ZR'].values
features = results[columns].values

regr = linear_model.LinearRegression()
regr.fit(features, labels)
print(regr.coef_)
print(np.mean((regr.predict(features) - labels) ** 2))  # training MSE
This gives
[ 0.36472515 -0.79579885 -0.16316067 0.67995378 0.59256197]
0.458552051342
The preprocessing seems wrong, as I think it destroys the Z/R relationship. What's the right way to preprocess in this situation?
Is there some way to get near 100% accuracy? Linear regression is the wrong tool, as the relationship is nonlinear.
The five features are highly correlated in my data. Is non-negative least squares implemented in scikit-learn? (I can see it mentioned on the mailing list but not in the docs.) My aim is to get as many coefficients set to zero as possible.
You should easily be able to get a decent fit using random forest regression, without any preprocessing, since it is a nonlinear method:
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=10, max_features=2)
model.fit(features, labels)
You can play with the parameters to get better performance.
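On the non-negative least squares sub-question: scipy has long shipped an NNLS solver, so a minimal sketch (the toy A, b stand in for your scaled features and labels; note NNLS constrains coefficients to be non-negative, and some may land exactly at zero):

import numpy as np
from scipy.optimize import nnls

# Toy stand-in for the feature matrix and labels
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 5))
b = A @ np.array([1.0, 0.0, 0.5, 0.0, 2.0]) + rng.normal(scale=0.1, size=100)

coef, residual_norm = nnls(A, b)  # solves min ||Ax - b|| subject to x >= 0
print(coef)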
The solution is not that easy and can be heavily influenced by your data.
If your variables R and Z are bounded (for example 0 < R < 1 and -3 < Z < 2), then you should be able to get a good estimate of the output variable using a neural network.
Using a neural network you should be able to estimate your output even without preprocessing the data, using all the variables as input.
(Of course, here you will have to solve a minimization problem.)
Sklearn does not implement neural networks, so you should use pybrain or fann.
If you want to preprocess the data to make the minimization problem easier, you can try to extract the right features from the predictor matrix.
I do not think there are many tools for nonlinear feature selection. I would try to estimate the important variables from your dataset, in this order:
1. lasso (see the sketch below)
2. sparse PCA
3. decision trees (you can actually use them for feature selection), but I would avoid this as much as possible
If this is a toy problem, I would suggest you move towards something more standard.
You can find a lot of examples on Google.
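A minimal sketch of the lasso step, assuming the features and labels from the question; the alpha value is an illustrative choice and would need tuning (for example with LassoCV):

from sklearn.linear_model import Lasso

# The L1 penalty drives uninformative coefficients exactly to zero
lasso = Lasso(alpha=0.1)
lasso.fit(features, labels)

# Non-zero coefficients mark the features worth keeping
for name, coef in zip(['R', 'T', 'V', 'X', 'Z'], lasso.coef_):
    print(name, coef)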
A common practice when I build models in Abaqus is to fit the material property: I try out all the possible material properties, look at the surface deflection given by the model, and find the one that best matches our experimental observation. In practice, I compare the model output with the experimental data, compute an R-squared value, and try to minimize -1. * R2.
I have been using the scipy optimization toolbox to do this with Abaqus. However, there is one question: there are cases where the model does not converge for certain parameters the optimizer tries. In these cases, what value should I assign to R2? Should I set it to -1. * numpy.inf or -1. * numpy.nan (assuming import numpy as np)?
Moreover, there are situations where I use optimization functions that do not support general constraints, for example modulus_1 > modulus_2; if the optimizer asks me to evaluate a point where modulus_1 <= modulus_2, can I just return -1. * np.inf or -1. * np.nan as a penalty?
The problem is that there is no way to know a priori where in the parameter space the model will fail to converge. Any help would really be appreciated. Thank you so much!
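For what it's worth, a minimal sketch of one common workaround: return a large finite penalty rather than inf or nan, since many scipy optimizers behave badly with non-finite objective values. Here run_abaqus_model and ConvergenceError are hypothetical placeholders for your Abaqus wrapper:

from scipy.optimize import minimize

PENALTY = 1e6  # large but finite; inf or nan can derail the optimizer

def objective(params):
    modulus_1, modulus_2 = params
    # Constraint handled via penalty instead of the optimizer
    if modulus_1 <= modulus_2:
        return PENALTY
    try:
        r2 = run_abaqus_model(modulus_1, modulus_2)  # hypothetical wrapper
    except ConvergenceError:                         # hypothetical exception
        return PENALTY
    return -1.0 * r2  # minimizing -R2 maximizes R2

result = minimize(objective, x0=[2.0, 1.0], method='Nelder-Mead')

A derivative-free method such as Nelder-Mead tends to cope with these discontinuous penalty walls better than gradient-based methods do.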