How to convert a trained PyTorch model to a Keras model? - tensorflow

I am aware of the ONNX package but I am not sure how to use it here. The tutorial on their GitHub page helps convert to TensorFlow, but I want to convert to Keras!
Any help would be appreciated.

Related

Tensorflow object detection api model to tflite

Is there a way to convert TensorFlow Object Detection API models to TFLite (for example, a Faster R-CNN model to a TFLite model)?
If yes, I would like to know how.
Thanks
Please go to https://www.tensorflow.org/lite/convert for general conversion instructions. If you're looking for models to start with, go to https://www.tensorflow.org/hub.

How was the ssd_mobilenet_v1 tflite model in TFHub trained?

How do I find more info on how the ssd_mobilenet_v1 tflite model on TFHub was trained?
Was it trained in such a way that made it easy to convert it to tflite by avoiding certain ops not supported by tflite? Or was it trained normally, and then converted using the tflite converter with TF Select and the tips on this github issue?
Also, does anyone know if there's an equivalent mobilenet tflite model trained on OpenImagesV6? If not, what's the best starting point for training one?
I am not sure about the exact origin of the model, but it looks like it does have TFLite-compatible ops. In my experience, the best place to start for TFLite-compatible SSD models is the TF2 Detection Zoo. You can convert any of the SSD models using these instructions.
To train your own model, you can follow these instructions that leverage Google Cloud.

How to do fine-tuning on a TFLite model

I would like to fine-tune a model on my own data. However, the model is distributed in tflite format. Is there any way to extract the model architecture and parameters from the tflite file?
One approach could be to convert the TFLite file to another format and import it into a deep learning framework that supports training.
Something like ONNX, using tflite2onnx, and then import into a framework of your choice. Not all frameworks can import from ONNX (PyTorch, for example, cannot). I believe you can train with ONNX Runtime and MXNet. I'm unsure whether you can train using TensorFlow.
I'm not sure I understand what you need, but if you want to know the exact architecture of your model, you can use Netron to find out.
You will get a graphical view of the model's layers and connections.
And for your information, TensorFlow Lite is not meant to be fine-tuned. You need to fine-tune a classic TensorFlow model and then convert it to TensorFlow Lite.
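A sketch of that workflow: fine-tune a regular Keras model, then convert the result to TensorFlow Lite. The backbone choice, head size, and data here are placeholder assumptions:

```python
import tensorflow as tf

# Backbone to fine-tune; in practice pass weights="imagenet" to start from
# pretrained weights (weights=None here just keeps the sketch lightweight).
base = tf.keras.applications.MobileNetV2(
    weights=None, include_top=False, input_shape=(96, 96, 3))
base.trainable = False  # freeze the backbone, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # your class count
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, epochs=5)  # your own data here

# After fine-tuning, convert the Keras model to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```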

How to convert Dlib weights into tflite format?

I want to convert Dlib weights for face detection, face landmarks, and face recognition that are in .dat format into .tflite format. TensorFlow Lite requires input in TensorFlow saved_model, frozen graph (.pb), or Keras model (.h5) format. Conversion of Dlib .dat to any of these will also work. Can anyone help me with how to do it, and are there converted files available?
I think you're on the right track. You should try to convert the Dlib model to a TensorFlow frozen graph, then convert the frozen graph to TensorFlow Lite format following the guide.
Have you tried this? Did you run into any problems when running tflite_convert? If you have further questions, please update the original question with detailed error messages.

How to import trained DNNClassifier using C_API

I have trained a DNNClassifier using Python (conda TensorFlow installation). The trained model needs to be used for evaluation with the C_API. Is there a way to load both the graph and the weights of the trained model using the C_API?
There is a way to load a saved model and its weights through the C API; some googling could help. I've found this article to be helpful.
As for running DNNClassifier through the C API, I think you would have to implement the evaluation manually using raw tensor operations in the C API. Correct me if I'm wrong.
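One hedged sketch of the Python side of that workflow: export the network as a SavedModel, which the C API can then load (graph and weights together) with TF_LoadSessionFromSavedModel using the "serve" tag. The hand-built DNN below is a stand-in for the trained estimator; all names and shapes are placeholders:

```python
import tensorflow as tf

class DNN(tf.Module):
    """A small feed-forward classifier, stand-in for a trained DNNClassifier."""

    def __init__(self):
        super().__init__()
        self.w1 = tf.Variable(tf.random.normal([4, 16]))
        self.b1 = tf.Variable(tf.zeros([16]))
        self.w2 = tf.Variable(tf.random.normal([16, 3]))
        self.b2 = tf.Variable(tf.zeros([3]))

    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def __call__(self, x):
        h = tf.nn.relu(tf.matmul(x, self.w1) + self.b1)
        return tf.nn.softmax(tf.matmul(h, self.w2) + self.b2)

# The resulting directory is what TF_LoadSessionFromSavedModel consumes
# on the C side (with tag "serve").
tf.saved_model.save(DNN(), "export_dir")
```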