Can I run a model trained using tensorflow on mxnet? - tensorflow

I have models trained with TensorFlow. Can I use MXNet in forward-only mode to run them?
https://github.com/dmlc/nnvm says this should be possible in the future, but is the support available today?

MXNet doesn't have a TensorFlow model converter yet. It does have a Caffe-to-MXNet converter, so if you can first convert your TensorFlow model to Caffe, that route would work:
https://github.com/dmlc/mxnet/tree/master/tools/caffe_converter
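Once converted, MXNet can run the model in forward-only mode. A minimal sketch, assuming the converter produced a checkpoint with prefix "converted" at epoch 0 and a single input named "data" (these names and shapes are placeholders):

    import mxnet as mx

    # Load the converted checkpoint (prefix/epoch are hypothetical)
    sym, arg_params, aux_params = mx.model.load_checkpoint('converted', 0)

    # Bind for inference only: for_training=False skips gradient buffers
    mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
    mod.bind(data_shapes=[('data', (1, 3, 224, 224))], for_training=False)
    mod.set_params(arg_params, aux_params, allow_missing=True)

    # Forward pass on a dummy batch
    batch = mx.io.DataBatch([mx.nd.random.uniform(shape=(1, 3, 224, 224))])
    mod.forward(batch, is_train=False)
    print(mod.get_outputs()[0].asnumpy())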

Related

Converting Tensorflow Lite Model to Tensorflow Model

Is there any way to convert a TensorFlow Lite model to a normal TensorFlow model that I can use with TensorFlow.js?
This is not supported by the official TF.js converters, as it is considered a one-way conversion. But this tool is pretty amazing, although not the easiest to set up: https://github.com/PINTO0309/tflite2tensorflow
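Whichever converter you try, it helps to first inspect the TFLite flatbuffer with TensorFlow's own interpreter to see which inputs, outputs, and tensors you are dealing with. A short sketch ('model.tflite' is a placeholder path):

    import tensorflow as tf

    # Load the .tflite flatbuffer and list its tensors
    interpreter = tf.lite.Interpreter(model_path='model.tflite')
    interpreter.allocate_tensors()

    print(interpreter.get_input_details())
    print(interpreter.get_output_details())
    for t in interpreter.get_tensor_details():
        print(t['index'], t['name'], t['shape'], t['dtype'])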

How to do fine tuning on TFlite model

I would like to fine-tune a model on my own data. However, the model is distributed in the TFLite format. Is there any way to extract the model architecture and parameters from the tflite file?
One approach could be to convert the TFLite file to another format and import it into a deep learning framework that supports training.
Something like ONNX, using tflite2onnx, and then import into a framework of your choice. Not all frameworks can import from ONNX (PyTorch, for example, cannot). I believe you can train with ONNX Runtime and with MXNet; I'm unsure whether you can train using TensorFlow.
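As a rough sketch of that route, tflite2onnx exposes a one-call Python API (file paths here are placeholders), and ONNX Runtime can then load the result to verify the conversion:

    import tflite2onnx
    import onnxruntime as ort

    # TFLite -> ONNX (paths are placeholders)
    tflite2onnx.convert('model.tflite', 'model.onnx')

    # Sanity-check the converted graph with ONNX Runtime
    sess = ort.InferenceSession('model.onnx')
    print([i.name for i in sess.get_inputs()])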
I'm not sure I understand what you need, but if you want to know the exact architecture of your model you can use Netron to find out. It renders a graph view of the model's layers and their parameters.
And for your information, TensorFlow Lite models are not meant to be fine-tuned. You need to fine-tune a classic TensorFlow model and then convert it to TensorFlow Lite.
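A minimal sketch of that workflow, assuming a hypothetical Keras backbone and a new classification head (your own data pipeline goes where the commented fit call is):

    import tensorflow as tf

    # Fine-tune a regular Keras model, then convert the result to TFLite
    base = tf.keras.applications.MobileNetV2(include_top=False, pooling='avg',
                                             weights='imagenet')
    base.trainable = False  # freeze the backbone, train only the new head

    model = tf.keras.Sequential([base, tf.keras.layers.Dense(10, activation='softmax')])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
    # model.fit(train_ds, epochs=3)  # fine-tune on your own data here

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open('finetuned.tflite', 'wb') as f:
        f.write(converter.convert())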

BidirectionalRNN in tensorflow keras for tensorflow lite

I want to convert a model that uses a bidirectional RNN to a TensorFlow Lite model.
What would be an equivalent way of achieving the same effect as tf.keras.layers.Bidirectional by writing lower-level code?
Some of the derivations for that code can be found in this course.
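For reference, a bidirectional layer can be written by hand as two unidirectional RNNs, one running over the reversed sequence, with the outputs concatenated. A sketch of the idea (layer sizes and shapes are placeholders):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(None, 16))  # (time, features), placeholder shape

    # Forward pass over the sequence
    fwd = tf.keras.layers.LSTM(32, return_sequences=True)(inputs)

    # Backward pass: go_backwards=True consumes the input in reverse and emits
    # outputs in reverse time order, so flip them back before concatenating
    bwd = tf.keras.layers.LSTM(32, return_sequences=True, go_backwards=True)(inputs)
    bwd = tf.reverse(bwd, axis=[1])

    # Roughly equivalent to Bidirectional(LSTM(32, return_sequences=True))
    outputs = tf.keras.layers.Concatenate()([fwd, bwd])
    model = tf.keras.Model(inputs, outputs)

Note that this sketch ignores masking for variable-length sequences, which tf.keras.layers.Bidirectional otherwise handles for you.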

Can I quantize my tensorflow graph for the full version of TF, not tflite?

I need to quantize my model for use in the full version of TensorFlow, and I cannot find out how to do this (in the official manual on model quantization, the model is saved in the tflite format).
AFAIK the only supported quantization scheme in TensorFlow is TFLite. What do you plan to do with a quantized TensorFlow graph? If it is inference only, why not simply use TFLite?
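For completeness, the supported post-training quantization path goes through the TFLite converter. A sketch, assuming an exported SavedModel directory (the paths are placeholders):

    import tensorflow as tf

    # Post-training quantization via the TFLite converter
    converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    with open('model_quant.tflite', 'wb') as f:
        f.write(converter.convert())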

How to import trained DNNClassifier using C_API

I have trained a DNNClassifier using Python (a conda TensorFlow installation). The trained model needs to be used for evaluation via the C API. Is there a way to load both the graph and the weights of the trained model using the C API?
There is a way to load h5 models and other data through the C API; some googling could help. I've found this article to be helpful.
As for DNNClassifier on the C API, I think you would have to implement it manually using raw tensor operations in the C API. CMIIW (correct me if I'm wrong).
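One concrete route: export the estimator as a SavedModel from Python, which the C API can then load with TF_LoadSessionFromSavedModel. A TF 1.x-style sketch (feature names, shapes, and the export directory are all placeholders; on TF versions before 1.14 the method is export_savedmodel):

    import tensorflow as tf

    feature_columns = [tf.feature_column.numeric_column('x', shape=[4])]
    classifier = tf.estimator.DNNClassifier(feature_columns=feature_columns,
                                            hidden_units=[10, 10], n_classes=3)
    # classifier.train(input_fn=...)  # train on your own data first

    # Build a serving signature and export a SavedModel for the C API to load
    def serving_input_fn():
        features = {'x': tf.placeholder(tf.float32, shape=[None, 4], name='x')}
        return tf.estimator.export.ServingInputReceiver(features, features)

    export_dir = classifier.export_saved_model('export', serving_input_fn)
    # On the C side, pass export_dir to TF_LoadSessionFromSavedModel along with
    # the "serve" tag, then look up the signature's input/output ops by name.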