What kind of feature vectors does featuretools / DFS generate? - data-science

Are the feature vectors generated by featuretools/DFS dense or sparse or does it depend on something?

The sparseness of feature vectors generated by Featuretools will in general depend on the EntitySet in question and the primitives chosen.
Primitives are meant to give back dense information. While it's possible (but not helpful) to construct example EntitySets that will make the output of a primitive sparse, it's more common for a primitive to give back no information than sparse information.
However, certain primitives and workflows are more likely to give back sparse output than others. A big one to watch out for is feature encoding, which uses one-hot encoding. Because that generates a vector with 1s only where a certain value occurs, an infrequently occurring categorical value is immediately converted into a sparse vector. Using Where aggregation primitives can sometimes have similar results.
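For instance, here is a minimal sketch using Featuretools' bundled demo data. The primitive choices are arbitrary, and depending on the Featuretools version the target argument is named target_dataframe_name (newer releases) or target_entity (older ones):

```python
import featuretools as ft

# Small demo EntitySet that ships with Featuretools
es = ft.demo.load_mock_customer(return_entityset=True)

# DFS with ordinary aggregation/transform primitives -> mostly dense output
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="customers",
    agg_primitives=["mean", "count", "mode"],
    trans_primitives=["month"],
)

# encode_features one-hot encodes categorical features; a category that occurs
# rarely becomes a column that is almost entirely zeros, i.e. sparse.
fm_encoded, features_encoded = ft.encode_features(feature_matrix, feature_defs)
print(feature_matrix.shape, "->", fm_encoded.shape)
```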

Related

Using PCA on Part of Dataframe

I want to apply a clustering algorithm to a dataframe that contains a lot of features (32 columns).
Some of the features are encoded using a one-hot encoder.
I want to use PCA (Principal Component Analysis) to reduce the dimensionality and make the machine learning process easier.
Is it possible to apply PCA to just some columns of the dataframe and keep the other columns as they are, and then use a machine learning model?
Or is it obligatory to apply PCA to the whole dataframe before clustering?
I guess there should be no issue with doing what you describe.
What this does, effectively, is merge some of the objects' features into fewer ones, while still using the other, non-merged features alongside the merged ones. I don't know what effect that would have on the outcome; it might be good to run a correlation to see whether the unmerged features add anything to the PCA-merged ones. You might find that they basically duplicate what is already there.
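As a concrete sketch of one way to do that with scikit-learn (the column names, number of components, and number of clusters are made-up placeholders): ColumnTransformer applies PCA only to the one-hot encoded columns and passes the remaining columns through unchanged before clustering.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn.cluster import KMeans

df = pd.read_csv("data.csv")                                   # hypothetical file
onehot_cols = [c for c in df.columns if c.startswith("cat_")]  # hypothetical naming

preprocess = ColumnTransformer(
    transformers=[("pca", PCA(n_components=5), onehot_cols)],
    remainder="passthrough",          # keep the non-encoded columns as they are
)

pipeline = Pipeline([
    ("prep", preprocess),
    ("cluster", KMeans(n_clusters=4, n_init=10)),
])
labels = pipeline.fit_predict(df)
```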
Since clustering is an exploratory method, you can basically do whatever you want. It is of course advisable to have a reason for doing so, as it otherwise ends up as simple trial and error, and if you find a result, you won't be able to describe why you got there. It is possible (or even likely for some data sets) that there are multiple ways to cluster them, so you should base your decisions on what you already know about the data, so that they can be justified in those terms.
Running random trial-and-error clustering until you find a structure makes it a bit difficult to come up with a good explanation why that structure is valid.

how are histograms constructed in sklearn's HistGradientBoostingClassifier to decide on best split point

Both lightgbm and sklearn's HistGradientBoostingClassifier estimators use histograms to decide on the best splits for continuous features.
Is it possible to explain intuitively (or with an example) how the histograms are created and how they help to decide on a split point faster at a node?
I have looked for answers extensively on the Internet but could not find any simple or intuitive explanation of how the histograms are constructed.
I am not sure, but it could be related to how regression trees are constructed in XGBoost. For a continuous feature, you construct a histogram, decide on the split (e.g. weight < 70 kg), construct a regression tree, and compute the similarity score as well as the gain. However, when the range of values in the continuous feature is quite large, it is computationally expensive to try all the possible split values. In that case, XGBoost makes the split using quantiles, which involves dividing all the observations into equally sized sets.
I guess sklearn's HistGradientBoostingClassifier might use a similar optimization for coming up with the best split.
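To make the binning idea concrete, here is a rough NumPy sketch of quantile-based binning. This is not sklearn's actual internal code, just an illustration of how the candidate split points shrink from "every unique value" to "a fixed number of bin edges":

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(size=100_000)      # one continuous feature with a wide value range

max_bins = 255                       # the default in sklearn's HistGradientBoosting* estimators
edges = np.quantile(x, np.linspace(0, 1, max_bins + 1)[1:-1])  # interior bin edges
binned = np.searchsorted(edges, x)   # each sample mapped to an integer bin id

# Instead of evaluating ~100,000 unique thresholds, the tree only has to
# evaluate at most max_bins - 1 candidate splits, each scored from the
# gradient/hessian sums accumulated per bin.
print(len(np.unique(x)), "raw thresholds ->", len(edges), "binned thresholds")
```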

How Date and String columns are treated in graphlab

I have a large data set in which some of the columns are Dates and others are categorical data like Status, Department Name, and Country Name.
How is this data treated in graphlab when I call the graphlab.linear_regression.create method? Do I have to pre-process this data and convert it into numbers, or can I provide it to graphlab directly?
Graphlab is mostly used for computing on tabular and graph-based datasets, and has high scalability and performance. In graphlab.linear_regression.create, graphlab has a built-in ability to understand the type of data and to pick the most suitable linear regression solver for optimizing results. For example, when both the target and the features are numeric, graphlab most of the time uses Newton's method. Similarly, depending on the dataset, it recognizes what is needed and chooses a method accordingly.
Now, about preprocessing: graphlab only takes an SFrame for learning, and the data needs to be parsed correctly beforehand. While creating an SFrame, unprocessed or malformed data is always surfaced and throws an error, so in order to run any learning you need clean data. If the SFrame accepts the data, along with the target and features you want to learn on, you are good to go, but pre-processing and cleaning the data is always recommended. It is also good practice to do feature engineering before any learning algorithm, and redefining data types before learning is always recommended for accuracy.
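As a hedged sketch of that workflow against the GraphLab Create API (the file name, column names, target, and date format below are invented for illustration):

```python
import graphlab as gl

# Load the raw table into an SFrame first; malformed rows are reported here
sf = gl.SFrame.read_csv("records.csv")

# String columns such as Status or Country Name can be passed to the model as-is;
# a Date column is usually more useful once parsed (str_to_datetime) and expanded
# into parts such as year and month (split_datetime) beforehand.
sf["Date"] = sf["Date"].str_to_datetime("%Y-%m-%d")

model = gl.linear_regression.create(
    sf,
    target="Amount",                      # hypothetical numeric target column
    features=["Status", "Department Name", "Country Name"],
)
```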
About your point on how data is treated in graphlab, I would say: it depends! Some datasets are tabular and are treated accordingly, and some have a graph structure. Graphlab performs very well when it comes to regression trees and boosted classifiers, which follow the decision-tree concept and are quite time- and resource-consuming in libraries other than graphlab.
For me, graphlab performed very well when building a recommendation engine where I had a dataset of nodes and edges, and a boosted tree classifier with 18 iterations also worked flawlessly in quite scalable time. I must say that even for tree-structured data, graphlab performs very well. I hope this answer helps.

Encoding invariance for deep neural network

I have a set of data, 2D matrices (like greyscale pictures),
and I use a CNN as the classifier.
I would like to know whether there is any study or experience of the impact on accuracy
when the encoding is changed from the traditional encoding.
I suppose yes; the question is rather which transformations of the encoding leave the accuracy invariant and which ones deteriorate it.
To clarify, this mainly concerns the quantization process of the raw data into input data.
EDIT:
Quantizing the raw data into input data is already a pre-processing of the data, adding or removing some features (even minor ones). The impact of this quantization process on accuracy in real DNN computations does not seem very clear.
Maybe some research is available.
I'm not aware of any research specifically dealing with quantization of input data, but you may want to check out some related work on quantization of CNN parameters: http://arxiv.org/pdf/1512.06473v2.pdf. Depending on what your end goal is, the "Q-CNN" approach may be useful for you.
My own experience with using various quantizations of the input data for CNNs has been that there's a heavy dependency between the degree of quantization and the model itself. For example, I've played around with using various interpolation methods to reduce image sizes and reducing the color palette size, and in the end, I discovered that each variant required a different tuning of hyper-parameters to achieve optimal results. Generally, I found that minor quantization of data had a negligible impact, but there was a knee in the curve where throwing away additional information dramatically impacted the achievable accuracy. Unfortunately, I'm not aware of any way to determine what degree of quantization will be optimal without experimentation, and even deciding what's optimal involves a trade-off between efficiency and accuracy which doesn't necessarily have a one-size-fits-all answer.
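As a toy illustration of that kind of input quantization (the level counts are arbitrary and not taken from the experiments above):

```python
import numpy as np

def quantize(img, levels=16):
    """Map 0-255 grey values onto `levels` evenly spaced grey values."""
    step = 256 / levels
    return (np.floor(img / step) * step + step / 2).astype(np.uint8)

img = np.random.randint(0, 256, size=(28, 28), dtype=np.uint8)  # stand-in image
coarse = quantize(img, levels=8)
print(len(np.unique(img)), "grey levels ->", len(np.unique(coarse)))
```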
On a theoretical note, keep in mind that CNNs need to be able to find useful, spatially-local features, so it's probably reasonable to assume that any encoding that disrupts the basic "structure" of the input would have a significantly detrimental effect on the accuracy achievable.
In usual practice -- a discrete classification task in a classic implementation -- it will have no effect. However, the critical point is in the initial computations for back-propagation. The classic definition depends only on strict equality of the predicted and ground-truth classes: a simple right/wrong evaluation. Changing the class coding has no effect on whether or not a prediction equals the training class.
However, this function can be altered. If you change the code to have something other than a right/wrong scoring, something that depends on the encoding choice, then encoding changes can most definitely have an effect. For instance, if you're rating movies on a 1-5 scale, you likely want 1 vs 5 to contribute a higher loss than 4 vs 5.
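A minimal sketch of such a distance-aware scoring for an ordinal 1-5 rating (purely illustrative, not a drop-in loss for any particular framework):

```python
import numpy as np

def ordinal_error(y_true, y_pred):
    # Penalise by how far apart the ratings are, so predicting 4 for a true 5
    # costs far less than predicting 1 for a true 5.
    return np.abs(np.asarray(y_true) - np.asarray(y_pred)).mean()

print(ordinal_error([5, 5], [4, 1]))   # (1 + 4) / 2 = 2.5
```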
Does this reasonably deal with your concerns?
I see now. My answer above is useful ... but not for what you're asking. I had my eye on the classification encoding; you're wondering about the input.
Please note that asking for off-site resources is a classic off-topic question category. I am unaware of any such research -- for what little that is worth.
Obviously, there should be some effect, as you're altering the input data. The effect would be dependent on the particular quantization transformation, as well as the individual application.
I do have some limited-scope observations from general big-data analytics.
In our typical environment, where the data were scattered with some inherent organization within their natural space (F dimensions, where F is the number of features), we often use two simple quantization steps: (1) Scale all feature values to a convenient integer range, such as 0-100; (2) Identify natural micro-clusters, and represent all clustered values (typically no more than 1% of the input) by the cluster's centroid.
This speeds up analytic processing somewhat. Given the fine-grained clustering, it has little effect on the classification output. In fact, it sometimes improves the accuracy minutely, as the clustering provides wider gaps among the data points.
Take with a grain of salt, as this is not the main thrust of our efforts.
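For what it is worth, here is a rough sketch of those two steps; the library choices and counts are assumptions, and the micro-cluster step is simplified to snapping every point onto the nearest of many fine-grained centroids:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((5_000, 8))                          # stand-in for F-dimensional data

# (1) scale every feature to a convenient integer range, here 0-100
mins, maxs = X.min(axis=0), X.max(axis=0)
X_scaled = np.round((X - mins) / (maxs - mins) * 100).astype(int)

# (2) collapse fine-grained clusters onto their centroids (a simplification of
# replacing only true micro-clusters, which affect ~1% of the input)
km = KMeans(n_clusters=200, n_init=1, random_state=0).fit(X_scaled)
X_quantized = km.cluster_centers_[km.labels_]
```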

What is the advantage of the paperboat format in performance optimization of ML?

The PaperBoat format claims to provide a better dataset representation for machine learning routines. I'd like to understand the nature of its optimization. I understand that using an integer representation for model attributes means faster processing of the data set; what are the other improvements?
Also, how does one tune an ML algorithm to work with this file format?
I don't know if this format really provides a better representation, but I can speculate about why it can be more efficient.
First, as they state in the format description, "Having data of the same precision consecutive enables hardware vectorization."; consider also Wikipedia: "Vector processing techniques have since been added to almost all modern CPU designs".
Second, their format allows you to mix sparse and non-sparse features, but since all sparse features are placed consecutively, it is possible to treat them as a sparse matrix and optimize learning methods such as conjugate gradient accordingly.
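As an illustration of that second point (not tied to the actual PaperBoat reader): once the block of sparse features is available as a sparse matrix, iterative solvers such as conjugate gradient can work on it directly, without ever densifying it.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
X = sp.random(5_000, 200, density=0.01, format="csr", random_state=0)  # sparse features
y = rng.normal(size=5_000)

# Ridge-regression normal equations (X^T X + lambda I) w = X^T y, solved with
# conjugate gradient while keeping X in sparse form throughout.
A = (X.T @ X) + 0.1 * sp.identity(200)
b = X.T @ y
w, info = cg(A, b)
print("converged" if info == 0 else "did not converge", w.shape)
```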
how to tune an ML algorithm to work with this file format?
What do you mean by ML algorithm tuning? The learning algorithm doesn't know, and doesn't need to know, anything about the file format of the dataset, and you can't increase or decrease accuracy by knowing the file format. In theory, you can speed up a concrete optimization algorithm (like gradient descent) if you can rely on some properties of the data (and, I guess, Ismion PaperBoat does this), but I don't think you can tune it yourself.