I have a regression model that uses various independent features to predict a value and is trained with a custom loss function, somewhat similar to the one in the link below.
https://www.evergreeninnovations.co/blog-quantile-loss-function-for-machine-learning/
The current model is built with the TensorFlow library, but now I want to use MXNet because of the speed and other advantages it provides. How do I write similar logic in MXNet with a custom loss function?
Simple regression with L2 loss is featured in two well-known tutorials; you can pick either one and customize the loss (a sketch follows after the links below):
In the D2L.ai book (used at many universities):
https://d2l.ai/chapter_linear-networks/linear-regression-gluon.html
In The Straight Dope (a guide to Gluon, the Python API of MXNet; much of that guide was folded into D2L.ai):
https://gluon.mxnet.io/chapter02_supervised-learning/linear-regression-gluon.html
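As an example, here is a minimal sketch of a quantile (pinball) loss written as a custom Gluon Loss subclass; the quantile value, layer size, and synthetic data are purely illustrative, and you would swap in your own network and data from either tutorial above.

```python
from mxnet import autograd, gluon, nd
from mxnet.gluon import nn
from mxnet.gluon.loss import Loss

class QuantileLoss(Loss):
    """Pinball loss: penalizes under- and over-prediction asymmetrically."""
    def __init__(self, quantile=0.5, weight=None, batch_axis=0, **kwargs):
        super(QuantileLoss, self).__init__(weight, batch_axis, **kwargs)
        self._quantile = quantile

    def hybrid_forward(self, F, pred, label):
        error = label - pred
        loss = F.maximum(self._quantile * error, (self._quantile - 1) * error)
        return F.mean(loss, axis=self._batch_axis, exclude=True)

# Illustrative usage with a one-layer regression network and fake data.
net = nn.Dense(1)
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': 0.01})
loss_fn = QuantileLoss(quantile=0.9)

X = nd.random.uniform(shape=(64, 3))
y = nd.random.uniform(shape=(64, 1))
with autograd.record():
    loss = loss_fn(net(X), y)
loss.backward()
trainer.step(X.shape[0])
```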
Is there any equivalent/alternative library to fastai in TensorFlow for easier training and debugging of deep learning models, including analysis of the results of a trained model?
fastai is built on top of PyTorch; I am looking for a similar library in TensorFlow.
The obvious choice would be to use tf.keras.
It is bundled with TensorFlow and is becoming its official "high-level" API, to the point where, in TF 2, you would probably have to go out of your way to avoid using it at all (a tiny example follows the quote below).
It is clearly the inspiration for fastai, which aims to make PyTorch as easy to use as Keras makes TensorFlow, as the authors have mentioned time and again:
Unfortunately, Pytorch was a long way from being a good option for part one of the course, which is designed to be accessible to people with no machine learning background. It did not have anything like the clear simple API of Keras for training models. Every project required dozens of lines of code just to implement the basics of training a neural network. Unlike Keras, where the defaults are thoughtfully chosen to be as useful as possible, Pytorch required everything to be specified in detail. However, we also realised that Keras could be even better. We noticed that we kept on making the same mistakes in Keras, such as failing to shuffle our data when we needed to, or vice versa. Also, many recent best practices were not being incorporated into Keras, particularly in the rapidly developing field of natural language processing. We wondered if we could build something that could be even better than Keras for rapidly training world-class deep learning models.
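To give a feel for that high-level workflow, here is a minimal, illustrative tf.keras model; the layer sizes, input shape, and data are made-up placeholders.

```python
import numpy as np
import tensorflow as tf

# A compact tf.keras classifier: compile() + fit() covers the whole
# training loop, which is the convenience the fastai authors describe above.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(20,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

X = np.random.rand(256, 20)             # placeholder features
y = np.random.randint(0, 2, size=256)   # placeholder binary labels
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)
```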
I am trying to extract feature importance from a model built in Python using tf.estimator.BoostedTreesRegressor.
It looks like the standard way to achieve this is to iterate over all trees in the ensemble and compute statistics from the importances in each tree.
There are examples of this in sklearn and xgboost, but I have not found how to do it in TensorFlow.
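For reference, this is roughly the pattern I mean, sketched with scikit-learn on synthetic data (not TensorFlow):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# In scikit-learn the per-feature importances are already aggregated
# over all trees in the ensemble; this is the kind of output I'm after.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = GradientBoostingRegressor(n_estimators=50).fit(X, y)
print(model.feature_importances_)  # one importance score per feature
```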
This is not possible at the moment using TensorFlow's premade BoostedTreesRegressor or BoostedTreesClassifier estimators.
I am currently learning to build neural networks with TensorFlow. The library provides a very convenient way to create one with the DNNClassifier estimator, as in this tutorial: https://www.tensorflow.org/get_started/premade_estimators.
However, I can't see how to choose the final threshold applied to the output layer before making the prediction:
For instance, let's say we have a binary classifier between 'KO' and 'OK'. The end of the neural network computes the probabilities of each class for a specific sample, for instance [0.4, 0.6] (so 40% that the answer is 'KO' and 60% that the answer is 'OK'). I assume the DNN uses a threshold of 0.5 by default, so it will answer 'OK' here. But I want to change this threshold to 0.8, so that unless the DNN is at least 80% sure of 'OK', it will answer 'KO' (in order to tune the FP rate and FN rate).
How can we do that?
Thanks in advance for your help.
The premade estimators are somewhat rigid. The DNNClassifier, for example, does not provide a mechanism to change the loss function or to obtain the logits/probabilities output by the classifier, as you've discovered.
To modify the logic of how predictions are generated, or to modify your loss function, you'll have to create a custom Estimator. This tutorial walks you through that process.
If you haven't invested too much time learning how to use the Estimator API yet, I recommend you also acquaint yourself with Keras, another high-level API for building and training deep learning models in TensorFlow; you might find it easier to build custom models with Keras rather than Estimators.
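As a rough sketch of what such a custom Estimator could look like, following the pattern of the custom Estimator tutorial: the hidden-layer sizes, parameter names, and the 0.8 threshold below are illustrative assumptions, not a prescribed recipe.

```python
import tensorflow as tf

def model_fn(features, labels, mode, params):
    # Build the network from whatever feature columns you define.
    net = tf.feature_column.input_layer(features, params['feature_columns'])
    for units in params.get('hidden_units', [32, 16]):
        net = tf.layers.dense(net, units, activation=tf.nn.relu)
    logits = tf.layers.dense(net, 2, activation=None)  # classes ['KO', 'OK']
    probabilities = tf.nn.softmax(logits)

    if mode == tf.estimator.ModeKeys.PREDICT:
        # Custom decision rule: answer 'OK' (class 1) only if P(OK) >= threshold.
        threshold = params.get('threshold', 0.8)
        class_ids = tf.cast(probabilities[:, 1] >= threshold, tf.int64)
        predictions = {'class_ids': class_ids,
                       'probabilities': probabilities,
                       'logits': logits}
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss)

    optimizer = tf.train.AdamOptimizer()
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

# Hypothetical usage (my_feature_columns is whatever columns you already use):
# classifier = tf.estimator.Estimator(
#     model_fn=model_fn,
#     params={'feature_columns': my_feature_columns, 'threshold': 0.8})
```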
This is a newbie question for the tensorflow experts:
I am reading a lot of data from a power transformer connected to an array of solar panels using Arduinos. My question is: can I use TensorFlow to predict future power generation?
I am completely new to TensorFlow. If you can point me to something similar that I can start with, or to any GitHub repo doing similar predictive modeling, that would help.
Edit: Kyle pointed me to the MNIST data, which I believe is an image dataset. Again, I'm not sure whether TensorFlow is the right computation library for this problem, or whether it only works on image datasets.
thanks, Rajesh
You can certainly use TensorFlow to solve your problem.
TensorFlow™ is an open source software library for numerical computation using data flow graphs.
So it works not only on image datasets but also on other kinds of data. Don't worry about this.
As for prediction: first you need to train a model (such as a linear regression) on your dataset, then predict with it. Tutorial code can be found on the TensorFlow homepage, and a minimal sketch follows below.
Get your hands dirty and you will find it works on your dataset.
Good luck.
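As an illustration only, here is a tiny tf.keras regression; the data below is synthetic stand-in data, not real sensor readings, and the two features are hypothetical (e.g. irradiance and temperature).

```python
import numpy as np
import tensorflow as tf

# Fake sensor readings and fake power values, just to show the workflow.
X = np.random.rand(1000, 2).astype('float32')           # e.g. irradiance, temperature
y = (3.0 * X[:, 0] + 0.5 * X[:, 1]).astype('float32')   # stand-in power output

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer='sgd', loss='mse')
model.fit(X, y, epochs=10, verbose=0)

print(model.predict(X[:5]))  # predicted power for the first 5 readings
```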
You can absolutely use TensorFlow to predict time series. There are plenty of examples out there, like this one. And this is a really interesting one on using RNN to predict basketball trajectories.
In general, TF is a very flexible platform for solving problems with machine learning. You can create any kind of network you can think of and train it to act as a model for your process. Depending on what kind of cost you define and how you train it, you can build a network that classifies data into categories, predicts a time series several steps forward, and does other cool stuff.
There is, sadly, no short answer for how to do this, but that's just because the possibilities are endless! Have fun!
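If you want the RNN flavor mentioned above, here is a hedged sketch of windowed forecasting with an LSTM in tf.keras; the window size, layer size, and sine-wave stand-in data are all assumptions you would replace with your own series.

```python
import numpy as np
import tensorflow as tf

# Predict the next reading from the previous 24 readings (window size is an assumption).
series = np.sin(np.linspace(0, 100, 2000)).astype('float32')  # stand-in time series
window = 24
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')
model.fit(X[..., None], y, epochs=5, verbose=0)

print(model.predict(X[-1:][..., None]))  # forecast for the next step
```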
Can I train a model in C++ with TensorFlow? I don't see any optimizers exposed in its C++ API. Are the optimizers written in Python? If not, how can I train a graph in C++? I'm able to import a Python-trained graph in C++, but I want to write the code fully in C++ (training and inference).
I have found an example training file in the official repository:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/tutorials/example_trainer.cc
I believe this is only basic training of some sort, without access to all the optimizations that the Python API has. I will keep looking around for more info.
Auto-differentiation is currently not implemented in TensorFlow's C++ API, so training complex models in C++ is a huge task. They say they are working on it: https://github.com/tensorflow/tensorflow/issues/4130