Using Variance Threshold with normalized variance - pandas

We know that zero-variance or low-variance features should be dropped to reduce model complexity. However, I have come to learn that comparing the variances of features directly can be difficult.
For example, the features in my DataFrame all have different medians, different variances, and different ranges. Also, higher values in a distribution tend to come with bigger variances. So, to make a fair comparison, can we normalize all features by dividing them by their mean, like so:
normalized_df = df / df.mean()
I have seen this technique in a DataCamp course, where it is suggested that after a normalization like the above we can choose a lower variance threshold, such as 0.005, to make a fair comparison in feature selection. I was wondering whether this is correct.
If it is, what kind of threshold should be chosen for normalized features?
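For what it's worth, here is a minimal sketch of the idea with scikit-learn's VarianceThreshold; the DataFrame, column names, and the 0.005 threshold are made-up placeholders, not a recommendation:

import pandas as pd
from sklearn.feature_selection import VarianceThreshold

# Made-up data: features on very different scales.
df = pd.DataFrame({
    "age":    [25, 32, 47, 51, 38],
    "income": [40_000, 52_000, 61_000, 58_000, 45_000],
    "rooms":  [2, 3, 3, 4, 2],
})

# Normalize by the mean so variances become comparable across features.
normalized_df = df / df.mean()

# Keep only features whose normalized variance exceeds the threshold.
selector = VarianceThreshold(threshold=0.005)
selector.fit(normalized_df)
selected_cols = normalized_df.columns[selector.get_support()]
print(selected_cols.tolist())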

Related

sjmisc::merge_imputations() averages across imputed datasets, which seems unjustified?

The sjmisc package has a function, sjmisc::merge_imputations(), which merges multiple imputed data frames from mice::mids() objects into a single data frame by computing the mean or selecting the most likely imputed value.
I think this is what Stef van Buuren cautions against in Section 5.1.2, "Not recommended workflow: Averaging the data"?
the procedure ignores the between-imputation variability, and hence shares all the drawbacks of single imputation
Instead, they advocate for mice::with() and mice::pool().
So when might one use sjmisc::merge_imputations() ?
If:
The researcher either only cares about means (not about correlations or other more complicated relationships between variables), or is willing to assume that the imputation models were "true" models.
The researcher only cares about point estimates, and is less concerned about the uncertainty in those estimates (variance, standard errors, confidence intervals, hypothesis tests, coefficients of variation).
There is only a small amount of missing data.
Then averaging the imputed values can be a reasonable fix. Averaging the imputed values is basically a version of stochastic regression imputation, although note that as the number of imputations increases, it converges to simple regression imputation. It's still wrong, but it may be a practical method. The sjmisc package documentation quotes Burns et al. (2011), https://doi.org/10.1016/j.jclinepi.2010.10.011. From that article:
There were practical benefits in providing DYNOPTA investigators an averaged imputation score as it precludes the necessity for investigators to run MICE for different projects using the MMSE, the need to obtain software capable of combining and analyzing multiple imputed datasets, and many investigators are unfamiliar with MI analysis techniques.
Compare also van Buuren, Section 1.3.5.
If you have the ability to use proper pooling methods, I would recommend using those instead.
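For reference, the pooling step that merge_imputations() skips combines within- and between-imputation variability via Rubin's rules. In a standard formulation, with $m$ imputations and per-imputation estimates $\hat{Q}_j$:

$$\bar{Q} = \frac{1}{m}\sum_{j=1}^{m}\hat{Q}_j, \qquad B = \frac{1}{m-1}\sum_{j=1}^{m}\bigl(\hat{Q}_j - \bar{Q}\bigr)^2, \qquad T = \bar{U} + \Bigl(1 + \frac{1}{m}\Bigr)B,$$

where $\bar{U}$ is the average within-imputation variance and $T$ the total variance of $\bar{Q}$. Averaging the imputed data first and running a single analysis effectively drops the $B$ term, which is exactly the between-imputation variability referred to in the quote above.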

Remove Outliers from a Multitrace in PYMC3

I have a model which has 3 parameters A, n, and Beta.
I did a Bayesian analysis using pymc3 and got the posterior distributions of the parameters in a multitrace called "trace". Is there any way to remove the outliers of A (and thus the corresponding values of n and Beta) from the multitrace?
Stating that specific values of A are outliers implies that you have enough "domain expertise" to know that the ranges these values fall into have a very low probability of occurrence in the experiment/system you are modelling.
You could therefore narrow your chosen prior distribution for A, such that these "outliers" remain in the tails of the distribution.
Reducing the overall model entropy with such an informative prior choice is somewhat risky, but it can be considered a valid approach if you know that values in these specific ranges simply do not occur in real-life experiments.
Once Bayes' rule is applied, your posterior distribution will put much less weight on these ranges and should better reflect the actual system behaviour.
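As a rough illustration of that prior-narrowing idea, here is a sketch in PyMC3; the model form, data, and prior bounds are entirely made up, so substitute your own likelihood and domain-informed limits:

import numpy as np
import pymc3 as pm
import theano.tensor as tt

# Hypothetical data; replace with your actual observations.
x = np.linspace(0.1, 10.0, 50)
y_obs = 2.0 * x**0.8 * np.exp(-0.3 * x) + np.random.normal(0.0, 0.1, x.size)

with pm.Model() as model:
    # Informative, truncated prior: ranges you consider "outliers" are pushed
    # into (or beyond) the tails. The bounds here are illustrative only.
    A = pm.TruncatedNormal("A", mu=2.0, sigma=0.5, lower=0.5, upper=5.0)
    n = pm.HalfNormal("n", sigma=1.0)
    beta = pm.HalfNormal("Beta", sigma=1.0)

    mu = A * tt.power(x, n) * tt.exp(-beta * x)
    pm.Normal("y", mu=mu, sigma=0.1, observed=y_obs)

    trace = pm.sample(1000, tune=1000)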

Isn't it dangerous to apply Min Max Scaling to the test set?

Here's the situation I am worried about.
Say I have a model trained with min-max scaled data. I want to test my model, so I also scaled the test dataset with the old scaler that was fitted in the training stage. However, my new test data turned out to contain a new minimum, so the scaler returned negative values.
As far as I know, the minimum and maximum aren't very stable values, especially in volatile datasets such as cryptocurrency data. In this case, should I update my scaler? Or should I retrain my model?
I happen to disagree with #Sharan_Sundar. The point of scaling is to bring all of your features onto a single scale, not to rigorously ensure that they lie in the interval [0,1]. This can be very important, especially when considering regularization techniques that penalize large coefficients (whether they be linear regression coefficients or neural network weights). The combination of feature scaling and regularization helps to ensure your model generalizes to unobserved data.
Scaling based on your "test" data is not a great idea because in practice, as you pointed out, you can easily observe new data points that don't lie within the bounds of your original observations. Your model needs to be robust to this.
In general, I would recommend considering different scaling routines. scikit-learn's MinMaxScaler is one, as is StandardScaler (subtract the mean and divide by the standard deviation). In the case where your target variable, cryptocurrency price, can vary over multiple orders of magnitude, it might be worth using a logarithm to scale some of your variables. This is where data science becomes an art -- there's not necessarily a 'right' answer here.
(EDIT) - Also see: Do you apply min max scaling separately on training and test data?
Ideally you should scale first and only then split into test and train. But it's not preferable to use a min-max scaler with data whose min and max values can vary dynamically with significant variance in a real-time scenario.
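To make the workflow from the answer above concrete, here is a minimal sketch with scikit-learn; the data is randomly generated and just for illustration. The scaler is fitted on the training split only and reused on the test split, so test values outside the training range simply fall outside [0, 1]:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Made-up, heavy-tailed feature matrix standing in for price/volume features.
rng = np.random.default_rng(42)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(1000, 3))

X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

# Fit on the training data only; reuse the same fitted scaler for the test data.
scaler = MinMaxScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Test values below the training minimum (or above the maximum) will land
# outside [0, 1]; that is expected, and the model should be robust to it.
print(X_test_scaled.min(), X_test_scaled.max())

# A StandardScaler on log-transformed features is often a more stable
# alternative when the min/max drift over time.
alt = StandardScaler().fit(np.log(X_train))
X_test_alt = alt.transform(np.log(X_test))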

Select important features then impute or first impute then select important features?

I have a dataset with lots of features (mostly categorical Yes/No features) and lots of missing values.
One of the techniques for dimensionality reduction is to generate a large and carefully constructed set of trees against a target attribute and then use each attribute's usage statistics to find the most informative subset of features. That is, we generate a large set of very shallow trees, with each tree being trained on a small fraction of the total number of attributes. If an attribute is often selected as the best split, it is most likely an informative feature to retain.
I am also using an imputer to fill the missing values.
My doubt is about the order of the above two steps. Which of the two (dimensionality reduction and imputation) should be done first, and why?
From a mathematical perspective you should always avoid data imputation (in the sense: use it only if you have to). In other words, if you have a method that can work with missing values, use it; if you do not, you are left with data imputation.
Data imputation is nearly always heavily biased; this has been shown many times, and I believe I even read a paper about it that is ~20 years old. In general, in order to do statistically sound data imputation you need to fit a very good generative model. Just imputing the most common value, the mean, etc. makes assumptions about the data of similar strength to those of Naive Bayes.
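As a rough sketch of the shallow-trees selection idea described in the question (not the imputation step), here is one way it could look with scikit-learn; the data is randomly generated and every parameter value is a placeholder:

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: X holds already-encoded Yes/No features, y is the target.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.integers(0, 2, size=(500, 20)),
                 columns=[f"f{i}" for i in range(20)])
y = (X["f0"] + X["f3"] + rng.integers(0, 2, 500) > 1).astype(int)

# Many very shallow trees, each seeing only a small random subset of features.
forest = RandomForestClassifier(n_estimators=500, max_depth=2,
                                max_features=0.1, random_state=0).fit(X, y)

# Count how often each feature is actually used as a split across the forest.
usage = np.zeros(X.shape[1], dtype=int)
for tree in forest.estimators_:
    used = tree.tree_.feature[tree.tree_.feature >= 0]  # negative values mark leaves
    np.add.at(usage, used, 1)

ranking = pd.Series(usage, index=X.columns).sort_values(ascending=False)
print(ranking.head(10))  # most frequently used (likely most informative) features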

How to estimate the Scoring Scheme in Pairwise Alignment

I'm not a specialist in bioinformatics. I want to align two nucleotide sequences using a global alignment method. Each sequence is a combination of the letters {A, C, T, G}.
The problem is that I don't know how to choose the best scoring scheme (substitutions and gap penalties).
Currently, I'm using the values +1, -1, and -2 for match, mismatch, and gap penalty. I'm also aware that the number of transitions in human DNA is larger than the number of transversions.
My question is how to estimate the scores and penalties (match, mismatch, and gap) based on my dataset. Is there any statistical model that can help?
To answer this question we would need to know the dataset well and your exact scope, but generally match/mismatch may be represented as +1/-1; note that this does not distinguish between transitions and transversions.
For that, I advise you to take a look at this model and at the Kimura model.
Finally, for the penalty, you may use a "low, medium, or high" gap penalty according to how divergent the sequences are. I mean that if the organisms are closely related you may use a low gap penalty, and a high penalty for more divergent organisms; so the gap penalty depends on how divergent the sequences you are aligning are.
As for knowing whether the sequences are divergent or not, as I said it depends on your data, but you may take a look at these examples of some sequences: link1, link2, link3, link4, and link5
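For completeness, here is a minimal sketch of applying the +1/-1/-2 scheme from the question with Biopython's PairwiseAligner; the sequences are made up, and once you settle on scores that reflect your data's divergence they just drop in here:

from Bio import Align

# Made-up example sequences; substitute your own.
seq1 = "ACTGACCTGAACGT"
seq2 = "ACTGACGTGACGT"

aligner = Align.PairwiseAligner()
aligner.mode = "global"          # Needleman-Wunsch style global alignment
aligner.match_score = 1.0        # +1 for a match
aligner.mismatch_score = -1.0    # -1 for a mismatch (transitions and transversions treated alike)
aligner.open_gap_score = -2.0    # -2 gap penalty
aligner.extend_gap_score = -2.0  # same penalty for extending a gap (no affine gap model here)

alignments = aligner.align(seq1, seq2)
best = alignments[0]
print(best.score)
print(best)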