According to the GitHub page for TensorFlow Fairness Indicators:
Fairness Indicators enables easy computation of commonly-identified fairness metrics for binary and multiclass classifiers.
However, I'm unable to find any example using tf-fairness-indicators for a multiclass classifier/output. The examples mentioned here and here demonstrate the usage only for binary classification, without giving any idea of how to extend this to multiclass classifiers.
Is there any reference or documented approach for extending this to multiclass classifiers?
Let's say I have a PyTorch model describing the evolution of some multidimensional system based on its own state x and an external actuator u, so x_(t+1) = f(x_t, u_t), with f being the neural network from PyTorch.
Now I want to solve a dynamic optimization problem to find an optimal sequence of u values that minimizes an objective depending on x. Something like this:
min   sum_t phi(x_t)
s.t.  x_(t+1) = f(x_t, u_t)
Additionally, I have upper and lower bounds on some of the variables in x.
Is there an easy way to do this using a dynamic optimization toolbox like Pyomo or GEKKO?
I already wrote some code that transforms a feedforward neural network into a NumPy function, which can then be passed as a constraint to Pyomo. The problem with this approach is that it requires significant reprogramming effort every time the structure of the neural network changes, so quick testing becomes difficult. Integrating recurrent neural networks also becomes difficult, because the hidden cell states would have to be added as additional variables to the optimization problem.
I think a good solution could be to do the function evaluations and gradient calculations in PyTorch and somehow pass the results to the dynamic optimizer; roughly what I have in mind is sketched below. I'm just not sure how to hook it up to a solver.
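For example, something like this (a sketch with a dummy network and placeholder dimensions) would hand function values and Jacobians from torch to an external solver:

```python
import torch

STATE_DIM, CTRL_DIM = 4, 2  # placeholder dimensions

# Dummy stand-in for the trained dynamics model x_(t+1) = f(x_t, u_t)
f = torch.nn.Sequential(
    torch.nn.Linear(STATE_DIM + CTRL_DIM, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, STATE_DIM),
)

def step(x, u):
    return f(torch.cat([x, u]))

def eval_with_jacobians(x_t, u_t):
    """Return x_(t+1) plus the Jacobians w.r.t. x_t and u_t as NumPy arrays,
    which is what a gradient-based NLP solver would need for the constraint."""
    x_next = step(x_t, u_t)
    jac_x, jac_u = torch.autograd.functional.jacobian(step, (x_t, u_t))
    return x_next.detach().numpy(), jac_x.numpy(), jac_u.numpy()

x1, Jx, Ju = eval_with_jacobians(torch.zeros(STATE_DIM), torch.zeros(CTRL_DIM))
```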
Thanks a lot for your help!
TensorFlow and PyTorch models can't be directly integrated into GEKKO at the moment. However, I believe you can retrieve the derivatives from TensorFlow and PyTorch, which allows you to pass them to GEKKO.
There is a GEKKO Brain module, with examples in the links below, including one that uses a GEKKO feedforward neural network for dynamic optimization.
GEKKO Brain Feedforward neural network examples
MIMO MPC example with GEKKO neural network model
A recurrent neural network library for the GEKKO Brain module is currently being developed; it will make it easy to use all of GEKKO's dynamic optimization functions.
In the meantime, you can use a sequential method by wrapping the TensorFlow or PyTorch model in an available optimization solver, such as the SciPy optimize module.
Check out the link below for a dynamic optimization example with a Keras LSTM model and scipy.optimize.
Keras LSTM MPC
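As a rough illustration of that sequential approach (a sketch, not the linked example: the network, horizon, stage cost, and bounds are all placeholders), you can let torch compute the objective and its gradient and hand both to scipy.optimize.minimize:

```python
import numpy as np
import torch
from scipy.optimize import minimize

STATE_DIM, CTRL_DIM, HORIZON = 4, 2, 10   # placeholder problem sizes

# Placeholder PyTorch dynamics model x_(t+1) = f(x_t, u_t)
f = torch.nn.Sequential(
    torch.nn.Linear(STATE_DIM + CTRL_DIM, 32),
    torch.nn.Tanh(),
    torch.nn.Linear(32, STATE_DIM),
)

x_start = torch.zeros(STATE_DIM)

def objective(u_flat):
    """Roll the model forward over the horizon; return cost and its gradient."""
    u = torch.tensor(u_flat.reshape(HORIZON, CTRL_DIM),
                     dtype=torch.float32, requires_grad=True)
    x, cost = x_start, torch.tensor(0.0)
    for t in range(HORIZON):
        x = f(torch.cat([x, u[t]]))
        cost = cost + (x ** 2).sum()      # placeholder stage cost phi(x_t)
    cost.backward()                        # gradient via torch autograd
    return cost.item(), u.grad.numpy().ravel().astype(np.float64)

u_init = np.zeros(HORIZON * CTRL_DIM)
bounds = [(-1.0, 1.0)] * u_init.size       # placeholder actuator bounds
res = minimize(objective, u_init, jac=True, bounds=bounds, method="L-BFGS-B")
```

Note that in this sequential form only the bounds on u are handled directly by the solver; bounds on x would have to enter the objective as penalty terms.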
I have a regression model based on various independent features that eventually predicts a value, trained with a custom loss function, somewhat similar to the one in the link below.
https://www.evergreeninnovations.co/blog-quantile-loss-function-for-machine-learning/
The current model is built using the TensorFlow library, but now I want to use MXNet because of the speed and other advantages it provides. How do I write similar logic in MXNet with a custom loss function?
Simple regression with L2 loss is featured in two well-known tutorials; you can pick either of them and customize the loss:
In the D2L.ai book (used at many universities):
https://d2l.ai/chapter_linear-networks/linear-regression-gluon.html
In The Straight Dope (a guide to Gluon, the Python API of MXNet); a lot of that guide went into D2L.ai:
https://gluon.mxnet.io/chapter02_supervised-learning/linear-regression-gluon.html
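For a quantile (pinball) loss like the one in the linked blog post, a minimal Gluon sketch might look like this (the network, quantile, and data are placeholders):

```python
from mxnet import autograd, gluon, nd

net = gluon.nn.Dense(1)                 # placeholder linear regression model
net.initialize()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.01})

def quantile_loss(y_hat, y, q):
    """Pinball loss: q*(y - y_hat) when under-predicting, (q-1)*(y - y_hat) otherwise."""
    diff = y - y_hat
    return nd.mean(nd.maximum(q * diff, (q - 1) * diff))

X = nd.random.normal(shape=(64, 3))     # dummy features
y = nd.random.normal(shape=(64, 1))     # dummy targets

for epoch in range(10):
    with autograd.record():
        loss = quantile_loss(net(X), y, q=0.9)
    loss.backward()
    trainer.step(batch_size=X.shape[0])
```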
I am trying to extract feature importances from a model built in Python using tf.estimator.BoostedTreesRegressor.
It looks like the standard way to achieve this is to iterate over all the trees in the ensemble and compute statistics from the feature importances of each tree.
Examples: sklearn, xgboost. I have not found how to do this in TensorFlow.
This is not possible at the moment using TensorFlow's premade BoostedTreesRegressor or BoostedTreesClassifier Estimators.
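For comparison, the pattern the question describes is exposed directly in scikit-learn (a sketch with synthetic data):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic data, just to illustrate the attribute the question refers to
X, y = make_regression(n_samples=200, n_features=5, random_state=0)

model = GradientBoostingRegressor().fit(X, y)
print(model.feature_importances_)  # per-feature importances aggregated over all trees
```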
I am currently learning to build neural networks with TensorFlow, and the library provides a very convenient way to create one with the DNNClassifier estimator, as in this tutorial: https://www.tensorflow.org/get_started/premade_estimators.
However, I can't see how to choose the final threshold applied to the output layer before making the prediction.
For instance, let's say we have a binary classifier between 'KO' and 'OK'. The end of the neural network computes the probabilities of each class for a specific sample, for instance [0.4, 0.6] (so 40% that the answer is 'KO' and 60% that the answer is 'OK'). I assume the DNN uses a default threshold of 0.5, so it will answer 'OK' here. But I want to change this threshold to 0.8, so that if the DNN is not at least 80% sure about 'OK', it will answer 'KO' (in order to tune the FP rate and the FN rate).
How can we do that?
Thanks in advance for your help.
The premade estimators are somewhat rigid. The DNNClassifier, for example, does not provide a mechanism to change the loss function or the thresholding logic used to turn probabilities into class predictions, as you've discovered.
To modify the logic of how predictions are generated, or to modify your loss function, you'll have to create a custom Estimator. This tutorial walks you through that process.
If you haven't invested too much time learning the Estimator API yet, I recommend you also acquaint yourself with Keras, another high-level API for building and training deep learning models in TensorFlow; you may find it easier to build custom models with Keras than with Estimators.
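That said, if all you need is a different decision threshold rather than a different loss, you can apply it yourself to the predicted probabilities. A minimal Keras sketch (dummy model and data; the 0.8 cutoff is from your example):

```python
import numpy as np
import tensorflow as tf

# Placeholder binary classifier with a sigmoid output giving P(label == 'OK')
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

X = np.random.rand(8, 4).astype("float32")   # dummy inputs
probs = model.predict(X).ravel()

THRESHOLD = 0.8                              # answer 'OK' only when >= 80% sure
labels = np.where(probs >= THRESHOLD, "OK", "KO")
```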
In TensorFlow, you can perform either classification or linear regression to train your inputs against the labels. Is it possible to perform some classification on your inputs (as pre-processing, not necessarily using TensorFlow) to determine whether to run linear regression in TensorFlow?
For example, in an image denoising task, suppose you have found that your linear regression algorithm provides a good smoothing effect on edges but at the same time removes the details of texture objects. You would therefore like to perform a binary classification to determine whether an input is a texture object: if it is not, run the linear regression algorithm in TensorFlow; otherwise, leave the texture object untouched.
I understand TensorFlow supports transfer learning, so I guess one possible solution is to perform the binary classification using TensorFlow and transfer the "texture classification" knowledge to instruct TensorFlow to apply the linear regression algorithm only when the input is not a texture object? Please correct me if I am wrong, as I am not too sure whether the above task is doable in TensorFlow (it would be great if you could describe how to do this in detail if it is :-) ).
I guess an alternative solution is to use some binary classification outside TensorFlow and filter out (remove) the texture inputs before passing the rest to TensorFlow; a rough sketch of this is below.
Could you kindly tell me which of the above solutions (or any other solution) is better (if doable) for this scenario? Any suggestions are welcome.
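To make the alternative concrete, here is roughly what I imagine (a sketch; texture_classifier and denoise_regressor are hypothetical models exposing a predict() method):

```python
import numpy as np

def gated_denoise(patches, texture_classifier, denoise_regressor):
    """Apply the regression-based denoiser only to non-texture patches.

    Texture patches are passed through unchanged so their details are
    preserved; everything else goes through the regression model.
    """
    patches = np.asarray(patches)
    is_texture = texture_classifier.predict(patches).ravel().astype(bool)
    out = patches.copy()
    out[~is_texture] = denoise_regressor.predict(patches[~is_texture])
    return out
```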