SAS BigQuery Logistic Regression - google-bigquery

Is there a way I can match SAS logistic regression results with BigQuery ML logistic regression results (coefficient / intercept values for the same data)?
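For comparison's sake, here is a minimal sketch of pulling the fitted coefficients out of BigQuery ML via the Python client. The dataset, table, and column names are placeholders, and regularization and the automatic data split are turned off (an assumption about what's needed) so that both tools fit a plain maximum-likelihood model on identical rows:

```python
# A minimal sketch, assuming a table `mydataset.train_data` with a 0/1 column
# `label` and numeric feature columns; all names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

train_sql = """
CREATE OR REPLACE MODEL `mydataset.logreg_model`
OPTIONS (
  model_type = 'logistic_reg',
  data_split_method = 'NO_SPLIT',  -- fit on all rows, as SAS does
  l1_reg = 0,                      -- no regularization, to mirror plain MLE
  l2_reg = 0
) AS
SELECT label, feature1, feature2
FROM `mydataset.train_data`
"""
client.query(train_sql).result()  # wait for training to finish

# ML.WEIGHTS exposes the fitted coefficients, including the intercept row.
for row in client.query(
    "SELECT * FROM ML.WEIGHTS(MODEL `mydataset.logreg_model`)"
).result():
    print(row.processed_input, row.weight)
```

Even then, check which outcome level each tool models: SAS PROC LOGISTIC by default models the probability of the first ordered response level, so coefficient signs may be flipped relative to BigQuery ML.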

Related

Regression Loss Function Working Perfectly on My Classification Model

I have built a model in TensorFlow that detects what type of shot a table tennis player is performing. After building my neural network, the model I am dealing with seems to be a multi-label classification model. Binary cross-entropy and categorical cross-entropy gave bad loss and accuracy, while MSE and MAE gave 98% accuracy and 0.004 loss in both cases.
Why is this happening, although I have supervised learning data with 3 output labels, as shown in the figure below:
[Figure: the dataset I have collected, showing the 3 output labels]
If your learner has .98 for R squared (if I understand you well), it is likely that you're overfitting and will hence have poor test predictions. Prediction errors that low are typically symptomatic of overfitting... but honestly, this is likely a better question for Cross Validated.
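As an aside, if the task really is multi-label (several labels can be active at once), the usual Keras pairing is a sigmoid output with binary cross-entropy rather than MSE/MAE. A minimal sketch, with the input dimension and hidden width as placeholder assumptions:

```python
# Multi-label setup: one sigmoid unit per label, binary cross-entropy loss.
# The input dimension (20) and hidden width (64) are placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(3, activation="sigmoid"),  # 3 independent labels
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    # plain accuracy is misleading for multi-label targets; per-label AUC is safer
    metrics=[tf.keras.metrics.AUC(multi_label=True, num_labels=3)],
)
```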

PCA for Recurrent Neural Networks (LSTM) - Shall I use PCA for target variables too?

I have a seasonal time-series dataset containing 3 target variables and n feature variables. I am trying to apply PCA before feeding the data to a simple LSTM.
The operations I do are the following:
Split train - validation - test
Standard-scale (force mean=0 & std=1) the train dataset (including targets and features)
Apply PCA to only the features of the train dataset
Transform, through the PCA fitted in step 3, the feature variables from validation and test
Here is where I get lost: what should I do with the target variables in validation and test?
... more neural networks pre-processing and building the architecture of the LSTM
My question is: how do I scale / normalize the target variables? Through a PCA too? Through an independent scaler (standard, min-max, etc.)? If I leave the target values as the originals, I get overfitting in my LSTM.
The most disappointing part is that without the PCA, the LSTM I've built shows no overfitting.
Thanks a lot for your help!
I know this comes late...
As far as I know, you should not apply PCA to the target variables; PCA is meant to reduce dimensionality of the feature variables.
Just as you fit the PCA on the train dataset and then applied that same transformation to validation and test, do the same with the scaler: fit it on the train dataset only, then reuse it on the other splits.
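A minimal sketch of that split, with placeholder arrays standing in for the real data: features get the scaler + PCA fitted on train, while targets get their own scaler and are never PCA-projected.

```python
# Fit all preprocessing on the training split only, then reuse the fitted
# transformers on validation/test; targets are scaled but never PCA-projected.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train, X_val = rng.normal(size=(100, 8)), rng.normal(size=(30, 8))
y_train, y_val = rng.normal(size=(100, 3)), rng.normal(size=(30, 3))

x_scaler = StandardScaler().fit(X_train)        # mean=0, std=1 on train only
pca = PCA(n_components=4).fit(x_scaler.transform(X_train))

X_train_p = pca.transform(x_scaler.transform(X_train))
X_val_p = pca.transform(x_scaler.transform(X_val))   # same fitted transforms

y_scaler = StandardScaler().fit(y_train)        # independent target scaler
y_train_s = y_scaler.transform(y_train)
y_val_s = y_scaler.transform(y_val)
# after predicting, invert with y_scaler.inverse_transform(predictions)
```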

What if I predict on data that is in the training dataset?

I'm developing a recommender system using NCF (neural collaborative filtering) in a somewhat modified way.
My situation is that the data I predict on occasionally includes data used in training.
For example, my training set has 100,000 rows, and through negative sampling some unobserved interactions are added to the training set.
I then want to predict on all the unobserved interactions with the trained model, so some of the negatively sampled interactions end up in both the training data and the prediction data.
Will this cause any problem?
Should I remove the negatively sampled training examples from the prediction data?
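For illustration, a minimal sketch of the overlap the question describes, with hypothetical (user, item) pairs; dropping the negatively sampled training pairs from the candidate set is just a set difference:

```python
# Hypothetical interaction sets to illustrate the overlap described above.
observed = {(1, 10), (1, 11), (2, 10)}          # positives used in training
neg_sampled = {(1, 12), (2, 13)}                # negatives added for training
train_pairs = observed | neg_sampled

all_pairs = {(u, i) for u in (1, 2) for i in (10, 11, 12, 13)}
unobserved = all_pairs - observed               # what we want to score

overlap = unobserved & neg_sampled              # pairs seen as train negatives
candidates = unobserved - neg_sampled           # option: exclude them
print(overlap, candidates)
```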

sampled_softmax_loss vs negative sampling

I am working on a text autoencoder and want to use negative sampling to train the model. I want to know the difference between negative sampling and sampled softmax.
Thanks in advance
https://www.tensorflow.org/extras/candidate_sampling.pdf
According to TensorFlow, negative sampling corresponds to a logistic loss, while sampled softmax corresponds to a softmax loss.
Both of them, at the core, pick a sample of negative examples to compute the loss on and update the gradients.
For your model, use them if your output space is very large (many classes) AND the regular loss is too slow to compute. If the output has few classes, there's not much gain; if training is fast anyway, why bother with approximations?
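In TensorFlow both appear as drop-in training-time losses: tf.nn.nce_loss is the logistic / negative-sampling variant and tf.nn.sampled_softmax_loss is the softmax variant. A minimal sketch, where the vocabulary size, embedding dimension, and tensor shapes are placeholder assumptions:

```python
# Both losses take the same inputs: an output weight matrix and bias over all
# classes, the hidden activations, and the true labels; they differ only in
# whether the sampled scores feed a logistic (NCE) or a softmax loss.
import tensorflow as tf

num_classes, dim, batch = 10_000, 128, 32       # placeholder sizes
weights = tf.Variable(tf.random.normal([num_classes, dim]))
biases = tf.Variable(tf.zeros([num_classes]))
hidden = tf.random.normal([batch, dim])         # e.g. decoder output
labels = tf.random.uniform([batch, 1], maxval=num_classes, dtype=tf.int64)

nce = tf.nn.nce_loss(weights, biases, labels, hidden,
                     num_sampled=64, num_classes=num_classes)
sampled_sm = tf.nn.sampled_softmax_loss(weights, biases, labels, hidden,
                                        num_sampled=64,
                                        num_classes=num_classes)
# at evaluation time, switch back to the full softmax over all classes
```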

Tensorflow: getting calibrated probability output

How do I calibrate the probability outputs from a TensorFlow Estimator? Is there a way to perform isotonic regression or Platt scaling using TensorFlow?
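TensorFlow itself doesn't ship isotonic regression or Platt scaling; one common workaround is to post-process the Estimator's predicted probabilities with scikit-learn on a held-out calibration set. A minimal sketch, where the arrays of model scores and labels are placeholders:

```python
# Calibrate raw model probabilities on a held-out set; both calibrators map
# uncalibrated scores p -> calibrated probabilities.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

# placeholders: probabilities from the Estimator and true 0/1 labels
p_val = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.9])
y_val = np.array([0, 0, 1, 1, 0, 1])

# Isotonic regression: monotone, non-parametric mapping
iso = IsotonicRegression(out_of_bounds="clip").fit(p_val, y_val)
p_iso = iso.predict(p_val)

# Platt-style scaling: a logistic regression fit on the scores' log-odds
logit = np.log(p_val / (1 - p_val)).reshape(-1, 1)
platt = LogisticRegression().fit(logit, y_val)
p_platt = platt.predict_proba(logit)[:, 1]
print(p_iso, p_platt)
```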