I would like to implement a random forest regression via TensorFlow 2.3, but I cannot find any example for that. Is it possible to do random forest regression via TensorFlow 2.3?
The same problem with SVM and SVR :/
I cannot use sklearn, because the running system is written in Go. Maybe I could fit the random forest regression via sklearn, but how would I then read the model via TensorFlow? I think that is not possible.
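For what it's worth, TensorFlow itself does not ship a random forest, but the separate tensorflow_decision_forests package wraps one behind the Keras API (it may require a newer TensorFlow than 2.3, so this is only a sketch; the CSV path and label column are placeholders):

```python
import pandas as pd
import tensorflow_decision_forests as tfdf  # pip install tensorflow_decision_forests

# Hypothetical training data: any DataFrame with a numeric "target" column.
df = pd.read_csv("train.csv")
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(
    df, label="target", task=tfdf.keras.Task.REGRESSION)

model = tfdf.keras.RandomForestModel(task=tfdf.keras.Task.REGRESSION)
model.fit(train_ds)

# Exports a regular SavedModel directory.
model.save("rf_saved_model")
```

The export is a standard SavedModel, which the TensorFlow Go bindings can in principle load; whether it actually runs from Go depends on the TF-DF custom inference ops being available in that runtime, so treat this as a starting point rather than a confirmed path.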
Machine learning frameworks comprise, amongst other things, the following functions:
augmentations
metrics and losses
These functions are simple transformations of tensors and seem rather framework-independent. However, TensorFlow's categorical crossentropy loss, for example, uses TensorFlow-specific functions like tf.convert_to_tensor() or tf.cast(), so it cannot easily be used in PyTorch. Also, to my knowledge, TensorFlow strongly prefers to work with TensorFlow tensors rather than numpy arrays, so that it can build TensorFlow graphs.
Are there any existing efforts or ideas on how to write such functions so that they can be used in both frameworks? I'm thinking of pure numpy functions which can somehow be converted to either TensorFlow or PyTorch.
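One pattern that comes to mind (a sketch, not an established library convention): keep the math in a single function and inject the few backend primitives it needs, since numpy, TensorFlow, and PyTorch all expose equivalents of them:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, log_fn, sum_fn, clip_fn, eps=1e-7):
    """Backend-agnostic crossentropy: the caller injects the three primitives."""
    y_pred = clip_fn(y_pred, eps, 1.0 - eps)     # guard against log(0)
    return -sum_fn(y_true * log_fn(y_pred), -1)  # reduce over the class axis

# numpy:      categorical_crossentropy(t, p, np.log, np.sum, np.clip)
# TensorFlow: categorical_crossentropy(t, p, tf.math.log, tf.reduce_sum, tf.clip_by_value)
# PyTorch:    categorical_crossentropy(t, p, torch.log, torch.sum, torch.clamp)
```

Because only injected primitives touch the tensors, the same function participates in either framework's autodiff graph without any conversion to numpy in between.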
I have a regression model based on various independent features that predicts a value, trained with a custom loss function, somewhat similar to the one in the link below.
https://www.evergreeninnovations.co/blog-quantile-loss-function-for-machine-learning/
The current model is built using the TensorFlow library, but now I want to use MXNet because of the speed and other advantages it provides. How can I write similar logic in MXNet with a custom loss function?
Simple regression with L2 loss is featured in two well-known tutorials; you can pick either one and customize the loss (see the sketch after the links):
In the D2L.ai book (used at many universities):
https://d2l.ai/chapter_linear-networks/linear-regression-gluon.html
In The Straight Dope (a guide to the Python API of MXNet, gluon; a lot of that guide went into D2L.ai):
https://gluon.mxnet.io/chapter02_supervised-learning/linear-regression-gluon.html
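Since the linked blog post is about a quantile (pinball) loss, here is a sketch of how one might express it as a custom Gluon loss; the class name and the 0.9 quantile are illustrative, not anything from the original post:

```python
import mxnet as mx
from mxnet import gluon

class QuantileLoss(gluon.loss.Loss):
    """Pinball loss: penalizes over- and under-prediction asymmetrically."""

    def __init__(self, quantile=0.9, weight=None, batch_axis=0, **kwargs):
        super(QuantileLoss, self).__init__(weight, batch_axis, **kwargs)
        self._quantile = quantile

    def hybrid_forward(self, F, pred, label):
        error = label - pred
        # max(q * e, (q - 1) * e) is the standard quantile-loss formulation.
        loss = F.maximum(self._quantile * error, (self._quantile - 1) * error)
        return F.mean(loss, axis=self._batch_axis, exclude=True)

# Used like any built-in Gluon loss:
# loss_fn = QuantileLoss(quantile=0.9)
# with mx.autograd.record():
#     loss = loss_fn(net(X), y)
# loss.backward()
```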
I am a new TensorFlow 2.0 user. My project requires me to investigate the weights of the neural network I created in TensorFlow (a super simple one). I think I know how to do it in the regular TensorFlow case, namely with model.save_weights(filename). I would like to do the same for a .tflite model, but I am having trouble. Instead of generating my own TensorFlow Lite model, I am using one of the many models provided online (https://www.tensorflow.org/lite/guide/hosted_model) to avoid having to troubleshoot my use of the TensorFlow Lite converter. Any thoughts?
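One approach that may work (a sketch, assuming a downloaded file named model.tflite): the tf.lite.Interpreter exposes tensor metadata for everything in the model, and constant tensors such as weights can be read out as numpy arrays:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_tensor_details():
    try:
        weights = interpreter.get_tensor(detail["index"])  # numpy array
        print(detail["name"], detail["shape"], weights.dtype)
    except ValueError:
        # Intermediate (non-constant) tensors hold no data outside an
        # invocation and raise ValueError; skip them.
        pass
```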
I am trying to extract feature importances from a model built in Python using tf.estimator.BoostedTreesRegressor.
It looks like the standard way to achieve this is to iterate over all trees in the ensemble and compute statistics from each tree's importances.
There are examples of this in sklearn and xgboost, but I have not found how to address this in TensorFlow.
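For reference, the per-tree aggregation pattern that sklearn uses looks roughly like this (a toy sketch with synthetic data, not anything TensorFlow-specific):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# Iterate over the trees and average their per-feature importances by hand:
per_tree = np.stack([tree.feature_importances_ for tree in model.estimators_])
print(per_tree.mean(axis=0))       # manual average over trees
print(model.feature_importances_)  # built-in attribute, matches the average
```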
This is not possible at the moment using TensorFlow's premade BoostedTreesRegressor or BoostedTreesClassifier estimators.
Can I train a model in C++ with TensorFlow? I don't see any optimizers exposed in its C++ API. Are the optimizers written in Python? If not, how can I train a graph in C++? I'm able to import a Python-trained graph into C++, but I want to write the code fully in C++ (training and inference).
I have found an example training file in the official repository:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/cc/tutorials/example_trainer.cc
I do believe this is only basic training of some sort, without access to all the optimizers the Python API has. I will keep looking around for more info.
Auto-differentiation is currently not implemented in the C++ API of TensorFlow, so training complex models in C++ is a huge task. They say they are working on it: https://github.com/tensorflow/tensorflow/issues/4130