Model evaluation predictions from VertexAI AutoMLTabularTrainingJob - google-cloud-sdk

I am following the Python API documentation to create an AutoMLTabularTrainingJob: https://cloud.google.com/vertex-ai/docs/training/automl-api#tabular. It trains successfully with a dataset that consists of train/valid/test splits. I can also get the model evaluation metrics by following this documentation: https://cloud.google.com/vertex-ai/docs/training/evaluating-automl-models
However, I am unable to find a way to get the raw predictions that are used to generate the model evaluation metrics. Any pointers would be much appreciated!
Do I have to request batch predictions again? https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions#tabular
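If it matters, this is roughly what I would run to re-request batch predictions over the test split with the SDK (a sketch pieced together from the batch-prediction docs above; the project, model ID, and bucket paths are placeholders):
from google.cloud import aiplatform

aiplatform.init(project='my-project', location='us-central1')
model = aiplatform.Model('projects/my-project/locations/us-central1/models/MODEL_ID')

# Run batch prediction over the held-out test rows and wait for completion.
batch_job = model.batch_predict(
    job_display_name='eval-split-predictions',
    gcs_source='gs://my-bucket/test_split.csv',
    instances_format='csv',
    gcs_destination_prefix='gs://my-bucket/predictions/',
    predictions_format='jsonl',
)
batch_job.wait()
# Row-level predictions land as JSONL files under gcs_destination_prefix.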

Related

TF Serving Predict API Output Interpretation

Is the TensorFlow Serving (TFS) Predict API output the same as the tf.keras.model.predict method (i.e. the outputs of the model according to the compiled metrics)?
For example, if we have a tf.keras.model compiled with BinaryAccuracy metric, will the output of the TFS predict API be a list of binary accuracy values for each one of the inputs of the predict request?
Thanks in advance!
I am not able to clearly understand your question about compiled metrics and the output prediction of the model, but here's a comparison of the outputs from the Keras predict method and TF Serving's Predict API.
The prediction output format is similar for both Keras and the TF Serving Predict API: each emits, for every data point, a list of probabilities of that point belonging to each class.
Consider a 10-class classification model to which you send 4 data points for prediction. The output will be of shape 4x10, where for each data point the predicted result contains the probability of that data point belonging to each class (0–9).
Here's a sample prediction:
predictions = [
    [8.66183618e-05, 1.06925681e-05, 1.40683464e-04, 4.31487868e-09,
     7.31811961e-05, 6.07917445e-06, 9.99673367e-01, 7.10965661e-11,
     9.43153464e-06, 1.98050812e-10],
    [6.35617238e-04, 9.08200348e-10, 3.23482091e-05, 4.98994159e-05,
     7.29685112e-08, 4.77315152e-05, 4.25152575e-06, 4.23201502e-10,
     9.98981178e-01, 2.48882337e-04],
    [9.99738038e-01, 3.85520025e-07, 1.05982785e-04, 1.47284098e-07,
     5.99268958e-07, 2.26216093e-06, 1.17733900e-04, 2.74483864e-05,
     3.30203284e-06, 4.03360673e-06],
    [3.42538192e-06, 2.30619257e-09, 1.29460409e-06, 7.04832928e-06,
     2.71432992e-08, 1.95419183e-03, 9.96945918e-01, 1.80040043e-12,
     1.08795590e-03, 1.78136176e-07]]
You can take a look at the output of the make_prediction() function in this reference to understand how the Predict API in TF Serving works. Thank you!
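To make the comparison concrete, here is a minimal sketch of calling the REST Predict API (assuming a TF Serving instance at localhost:8501 serving a model under the hypothetical name 'my_model'); its 'predictions' field has the same per-class probability shape as Keras model.predict:
import json
import numpy as np
import requests

x = np.random.rand(4, 28, 28).tolist()  # 4 MNIST-like data points

# POST the batch to TF Serving's REST Predict endpoint.
resp = requests.post(
    'http://localhost:8501/v1/models/my_model:predict',
    data=json.dumps({'instances': x}),
)
predictions = resp.json()['predictions']
print(len(predictions), len(predictions[0]))  # 4 x num_classes, as above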

Outputting multiple loss components to tensorboard from tensorflow estimators

I am pretty new to tensorflow and I am struggling to get tensorboard to display some of my custom metrics. The model I am working with is a tf.estimator.Estimator, with an associated EstimatorSpec. The first new metric I am trying to log is from my loss function, which is composed of two components: a loss for an age prediction (tf.float32) and a loss for a class prediction (one-hot/multiclass), which I add together to determine a total loss (my model is predicting both a class and an age). The total loss is output just fine during training and shows up on tensorboard, but I would like to track the individual age and the class prediction loss components as well.
I think a solution that is supposed to work is to add an eval_metric_ops argument to the EstimatorSpec as described here (Custom eval_metric_ops in Estimator in Tensorflow). I have not been able to make this approach work, however. I defined a custom metric function that looks like this:
def age_loss_function(labels, ages_pred, ages_true):
    per_sample_age_loss = get_age_loss_per_sample(ages_pred, ages_true)  # works fine
    # The error happens on this line:
    mean_abs_age_diff, age_loss_update_fn = tf.metrics.Mean(per_sample_age_loss)
    return mean_abs_age_diff, age_loss_update_fn

eval_metric_ops = {"age_loss": age_loss_function}  # Want to use this in EstimatorSpec
The instructions seem to say that I need both the error metric and the update function, which should both be returned from the tf.metrics call, as in examples like the one I linked. But this command fails for me with the error message:
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.
I am probably just misusing the APIs. If someone can guide me on the proper usage I would really appreciate it. Thanks!
It looks like the problem came from a version change. I had updated to TensorFlow 2.0 while the instructions I was following were for 1.x. Using tf.compat.v1.metrics.mean() instead gets past this problem.
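For anyone hitting the same error, here is a minimal sketch of that fix (the per-sample loss here is a stand-in absolute age difference; adapt it to your own get_age_loss_per_sample). Note also that eval_metric_ops expects the (value_op, update_op) tuple itself, not the function:
import tensorflow as tf

def age_loss_metric(age_labels, age_predictions):
    # Hypothetical per-sample loss; stands in for get_age_loss_per_sample.
    per_sample_age_loss = tf.abs(
        tf.cast(age_labels, tf.float32) - tf.cast(age_predictions, tf.float32))
    # The TF 1.x-style op returns the (value_op, update_op) pair that
    # eval_metric_ops expects, unlike the TF 2.x tf.metrics.Mean class.
    return tf.compat.v1.metrics.mean(per_sample_age_loss)

# In model_fn, pass the result of calling the function, not the function:
# eval_metric_ops = {"age_loss": age_loss_metric(age_labels, age_predictions)}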

why are my tensorflow events files empty?

I am running the TensorFlow Object Detection API and using the SSD_mobilenet model. I have the model.ckpt as well as the graph.pbtxt in my training dir, but I found that the events files in my training dir are empty. It seems that no data was written to them. Could anyone help me, please?
TensorFlow event files are generated from the summaries that you add in your code.
For example, suppose you are training a convolutional neural network for recognizing MNIST digits. You'd like to record how the learning rate varies over time and how the objective function is changing. Collect these by attaching tf.summary.scalar ops to the nodes that output the learning rate and loss, respectively. Then give each scalar summary a meaningful tag, like 'learning rate' or 'loss function'.
For example:
# Add a scalar summary for the snapshot loss.
tf.summary.scalar('loss', loss)
Please refer to the link below:
https://www.tensorflow.org/guide/summaries_and_tensorboard
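Here is a slightly fuller TF 1.x-style sketch (with a toy loss, for illustration only) of the plumbing that actually writes data into the events file:
import tensorflow as tf

# Toy model: minimize x^2 so there is a loss to log.
x = tf.Variable(3.0)
loss = tf.square(x)
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

tf.summary.scalar('loss', loss)   # attach the scalar summary
merged = tf.summary.merge_all()   # merge all summaries into one op

with tf.Session() as sess:
    writer = tf.summary.FileWriter('training/', sess.graph)
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        summary, _ = sess.run([merged, train_op])
        writer.add_summary(summary, step)  # this writes to the events file
    writer.close()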

How to get both loss and model output at once, on a batch of data in Keras?

I'm using Keras w/ Tensorflow backend to train a NN.
I'm using train_on_batch for training, which returns the loss on the given batch. How do I also get the output classification for that batch? (I'd like to do some visualisations of the output.)
To do that I currently make another call to predict to get the model output, but that's redundant, since train_on_batch has already passed the input batch forward.
In Caffe, when an image is fed forward, the intermediate layer outputs stay stored in net.blobs, but in Keras/TensorFlow it seems that if we want to get an intermediate output we have to rerun the computational graph for each intermediate output we want to access on CPU, as described here. Is there a way to access many/all intermediate layers' outputs without rerunning the graph for each?
I don't mind having a tensorflow-specific workaround.
If you use the functional API, this is pretty straightforward.
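For instance (a minimal sketch with made-up layer sizes; probe is just an illustrative name), a second functional-API model that shares the trained layers returns any intermediate outputs you want in a single forward pass:
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(4,))
hidden = layers.Dense(8, activation='relu')(inputs)
outputs = layers.Dense(3, activation='softmax')(hidden)
model = keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Shares weights with `model`, so no retraining; one predict() call
# returns every requested intermediate output at once.
probe = keras.Model(inputs, [hidden, outputs])

x = np.random.rand(16, 4).astype('float32')
y = np.random.randint(0, 3, size=(16,))
loss = model.train_on_batch(x, y)
hidden_act, preds = probe.predict(x)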
In addition to @MohamedEzz's answer, you can create a custom callback which performs the operations you require during the training process. Callbacks have methods such as on_epoch_end, on_epoch_begin, on_train_end, and so on, which run your code at those points.
This way you can preserve the batch.
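A hypothetical sketch of such a callback, which predicts on a fixed batch at the end of every epoch during fit() (BatchVisualiser is an illustrative name):
from tensorflow import keras

class BatchVisualiser(keras.callbacks.Callback):
    """Runs predict on a held-out batch after each epoch for visualisation."""
    def __init__(self, x_batch):
        super().__init__()
        self.x_batch = x_batch

    def on_epoch_end(self, epoch, logs=None):
        preds = self.model.predict(self.x_batch, verbose=0)
        print('epoch', epoch, 'first prediction:', preds[0])

# Usage: model.fit(x, y, epochs=5, callbacks=[BatchVisualiser(x[:8])])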

LibSVM training error

I'm using the Java version of libsvm for regression, for prediction purposes.
After training on my data set, the generated model shows the support vectors, but no indication of the training error rate.
I would like to know if it's possible to find the training error on my training set. Is there a function I can call, or a class attribute I can use, to find it?
Thank you,
You will have to create an svm_problem from your training set and call svm_predict(..) on it, then compute the MSE (mean squared error). Note, though, that LibSVM tends to perform quite poorly on regression datasets compared to neural networks.
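As a rough illustration, here is the same flow using libsvm's Python bindings (the Java API follows the same svm_train/svm_predict pattern; 'train.txt' is a placeholder path, and the import assumes the pip libsvm package):
from libsvm.svmutil import svm_read_problem, svm_train, svm_predict

y, x = svm_read_problem('train.txt')   # your training set, libsvm format
model = svm_train(y, x, '-s 3')        # -s 3 selects epsilon-SVR (regression)

# Predict back on the training data itself to measure the training error.
# For regression, svm_predict reports MSE and squared correlation directly.
p_labels, (acc, mse, scc), p_vals = svm_predict(y, x, model)
print('training MSE:', mse)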