Tensorflow: getting calibrated probability output

How do I perform calibration on TensorFlow probability outputs from an Estimator? Is there a way to perform isotonic regression or Platt scaling using TensorFlow?
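For illustration, one common approach is to collect the model's held-out probabilities and calibrate them outside TensorFlow with scikit-learn. A minimal sketch, assuming `probs` and `labels` are hypothetical numpy arrays gathered from `estimator.predict()` on a validation set:

```python
# A minimal sketch (not Estimator-specific): calibrate held-out probabilities
# with scikit-learn. `probs` and `labels` are hypothetical stand-ins for what
# you would collect from estimator.predict() on a validation set.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

probs = np.array([0.1, 0.4, 0.35, 0.8])   # uncalibrated P(y=1) from the model
labels = np.array([0, 0, 1, 1])           # true binary labels

# Isotonic regression: fits a monotone mapping from raw scores to probabilities.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(probs, labels)
calibrated_iso = iso.predict(probs)

# Platt scaling: fit a logistic regression on the raw scores (often on logits).
platt = LogisticRegression()
platt.fit(probs.reshape(-1, 1), labels)
calibrated_platt = platt.predict_proba(probs.reshape(-1, 1))[:, 1]
```

The fitted calibrator is then applied to the probabilities the Estimator produces at serving time.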

Related

Tensorflow op: ParseExampleV2 is super slow

When using native TensorFlow for training and prediction/serving, we found that the bottleneck of training/serving speed is the ParseExampleV2 op, i.e. the op that transforms a vector of tf.Example protos (as strings) into typed tensors. (Screenshot: time spent on each TensorFlow op within the graph, based on a logistic regression model.)
Is there any solution to that problem?
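For illustration, one commonly suggested mitigation is to batch the serialized protos before parsing, so each ParseExampleV2 call handles a whole vector of records instead of one record at a time. A rough sketch, assuming TF 2.x and a hypothetical `feature_spec` and TFRecord file name:

```python
# A rough sketch: batch the raw serialized strings first, then parse a whole
# batch per call. `feature_spec` and "train.tfrecord" are hypothetical.
import tensorflow as tf

feature_spec = {
    "x": tf.io.FixedLenFeature([10], tf.float32),
    "label": tf.io.FixedLenFeature([1], tf.int64),
}

def parse_batch(serialized_batch):
    # One parse call per batch of protos rather than per example.
    return tf.io.parse_example(serialized_batch, feature_spec)

dataset = (
    tf.data.TFRecordDataset(["train.tfrecord"])   # hypothetical file name
    .batch(1024)                                  # batch the raw strings first
    .map(parse_batch, num_parallel_calls=tf.data.AUTOTUNE)
    .prefetch(tf.data.AUTOTUNE)
)
```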

Converting tensorflow dataset to numpy array

I have an autoencoder defined using tf.keras in TensorFlow 1.15. I cannot upgrade to TensorFlow 2.0 for some specific reasons.
This particular autoencoder is used for anomaly detection. I currently compute the AUC score of the autoencoder as follows:
1. All anomalous inputs are labelled 1 and all normal inputs are labelled 0. This is y_true.
2. I feed the autoencoder with unseen inputs and then measure the reconstruction error, like so: errors = np.mean(np.square(data - model.predict(data)), axis=-1)
3. The mean of this array is then said to be the predicted label, y_pred.
4. I then compute the AUC using auc = metrics.roc_auc_score(y_true, y_pred).
This approach works well. I now need to move towards using tf.data.Dataset to feed in my data; previously, it was numpy arrays. The issue is that I am unable to convert the tf.data.Dataset to a numpy array and hence unable to compute the mean squared error as seen in step 2.
Once I have a tf.data.Dataset, I feed it for prediction like so: results = model.predict(x_test)
This yields a numpy array, results. I want to compute the mean squared error of results with x_test. However, x_test is of type tf.data.Dataset. So the question is: how can I convert a tf.data.Dataset to a numpy array in TensorFlow 1.15, or what is an alternative method to do this?
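For illustration, a minimal sketch of one way to drain a tf.data.Dataset into a numpy array in TF 1.15, using a one-shot iterator and a Session (`model` and `x_test` are the names from the question; adapt the loop if the dataset yields tuples rather than plain tensors):

```python
# Minimal sketch for TF 1.15: pull a tf.data.Dataset back into numpy via a
# one-shot iterator and a Session. Assumes x_test yields plain tensors.
import numpy as np
import tensorflow as tf  # 1.15

iterator = tf.compat.v1.data.make_one_shot_iterator(x_test)
next_batch = iterator.get_next()

batches = []
with tf.compat.v1.Session() as sess:
    while True:
        try:
            batches.append(sess.run(next_batch))  # each run yields a numpy batch
        except tf.errors.OutOfRangeError:
            break  # dataset exhausted

x_test_np = np.concatenate(batches, axis=0)
errors = np.mean(np.square(x_test_np - model.predict(x_test_np)), axis=-1)
```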

Pytorch equivalent features in tensorflow?

I was recently reading some PyTorch code and came across the loss.backward() and optimizer.step() functions. Are there any equivalents of these using tensorflow/keras?
The equivalent of loss.backward() in TensorFlow is tf.GradientTape(). TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of the "recorded" computation using reverse-mode differentiation.
The equivalent of optimizer.step() in TensorFlow is minimize(), which minimizes the loss by updating the variable list. Calling minimize() takes care of both computing the gradients and applying them to the variables.
If you want to process the gradients before applying them, you can instead use the optimizer in three steps (a minimal sketch follows these steps):
1. Compute the gradients with tf.GradientTape.
2. Process the gradients as you wish.
3. Apply the processed gradients with apply_gradients().
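A minimal sketch of those three steps in TF 2.x eager mode (the model, optimizer, data, and gradient clipping below are hypothetical examples, not part of the question):

```python
# Sketch of the GradientTape / apply_gradients training step.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_fn = tf.keras.losses.MeanSquaredError()

x = tf.random.normal([32, 4])   # dummy inputs
y = tf.random.normal([32, 1])   # dummy targets

# 1. Compute the gradients with tf.GradientTape (the loss.backward() analogue).
with tf.GradientTape() as tape:
    loss = loss_fn(y, model(x))
grads = tape.gradient(loss, model.trainable_variables)

# 2. Process the gradients as you wish, e.g. clip them.
grads = [tf.clip_by_norm(g, 1.0) for g in grads]

# 3. Apply the processed gradients (the optimizer.step() analogue).
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```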
Hope this answers your question. Happy Learning.

Float ops found in quantized TensorFlow MobileNet model

As you can see in the screenshot of a quantized MobileNet model implemented in TensorFlow, there are still some float operations. The quantization is done in TensorFlow via the graph_transforms tools.
The red ellipse in the image has its description in the right-hand-side text box. The "depthwise" node is a "DepthwiseConv2dNative" operation that expects "DT_FLOAT" inputs.
Although the lower Relu6 performs an 8-bit quantized operation, its result has to go through "(Relu6)", which is a "Dequantize" op, in order to produce "DT_FLOAT" inputs for the depthwise convolution.
Why are depthwise conv operations left out by the TF graph_transforms tools? Thank you.
Unfortunately there isn't a quantized version of depthwise conv in standard TensorFlow, so it falls back to the float implementation with conversions before and after. For a full eight-bit implementation of MobileNet, you'll need to look at TensorFlow Lite, which you can learn more about here:
https://www.tensorflow.org/mobile/tflite/
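For reference, a rough sketch of what full-integer post-training quantization looks like with the current TF 2.x TFLite converter (this is the modern API rather than the old TF Mobile tooling; `saved_model_dir` and `representative_data` are hypothetical placeholders):

```python
# Rough sketch: full-integer post-training quantization with TFLite.
import tensorflow as tf

def representative_dataset():
    # Hypothetical calibration data; a few hundred representative samples.
    for sample in representative_data:
        yield [sample.astype("float32")]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict ops to int8 kernels so nothing falls back to float implementations.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
```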

Tensorflow: Quantized graph not working for inception-resnet-v2 model

I did quantization on the inception-resnet-v2 model using https://www.tensorflow.org/performance/quantization#how_can_you_quantize_your_models.
The size of the frozen graph (the input for quantization) is 224.6 MB and the quantized graph is 58.6 MB. I ran an accuracy test on a certain dataset: for the frozen graph the accuracy is 97.4%, whereas for the quantized graph it is 0%.
Is there a different way to quantize the model for inception-resnet versions? Or is quantization not supported at all for the inception-resnet model?
I think they transitioned from quantize_graph to graph_transforms. Try using this:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms
And what did you use for the input nodes/output nodes when testing?
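For illustration, a rough sketch of driving graph_transforms from Python in TF 1.x. The node names, file names, and exact transform list are hypothetical and must match your frozen graph; the TransformGraph helper is assumed to be available at this import path in your build:

```python
# Rough sketch: apply graph_transforms to a frozen graph from Python (TF 1.x).
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

with tf.io.gfile.GFile("frozen_inception_resnet_v2.pb", "rb") as f:  # hypothetical path
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

transforms = [
    "strip_unused_nodes",
    "fold_constants(ignore_errors=true)",
    "fold_batch_norms",
    "quantize_weights",  # weight-only quantization; a safer first step than quantize_nodes
]

quantized_def = TransformGraph(
    graph_def,
    ["input"],                                    # hypothetical input node name
    ["InceptionResnetV2/Logits/Predictions"],     # hypothetical output node name
    transforms,
)

with tf.io.gfile.GFile("quantized_graph.pb", "wb") as f:
    f.write(quantized_def.SerializeToString())
```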