Is it possible to quantize a TensorFlow Lite model to 8-bit weights without the original HDF5 file? - tensorflow

I'm trying to compile a tflite model with the edgetpu compiler to make it compatible with Google's Coral USB Accelerator, but when I run edgetpu_compiler the_model.tflite I get a "Model not quantized" error.
I then wanted to quantize the tflite model to an 8-bit integer format, but I don't have the model's original .h5 file.
Is it possible to quantize a tflite-converted model to an 8-bit format?

@garys Unfortunately, TensorFlow doesn't have an API to quantize a float tflite model. For post-training quantization, the only API they have goes from a full TensorFlow model (.pb, hdf5, h5, saved_model, ...) to tflite. The quantization happens during the tflite conversion itself, so to my knowledge there isn't a way to do this.
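For reference, this is what that conversion-time quantization looks like when the original model is still available; a minimal sketch, assuming a SavedModel directory at ./saved_model and a representative-sample generator you would supply yourself (the input shape here is a placeholder):

    import numpy as np
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")

    def representative_dataset():
        # Yield a few hundred real inputs so the converter can
        # calibrate int8 ranges; random data is just a stand-in.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Full integer quantization, which the Edge TPU compiler requires.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())

Without the original model, the float tflite flatbuffer is the end of that pipeline, not an input to it.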

Related

How to do transfer learning or fine-tuning on a TensorFlow Lite (tflite) model

I have a Darknet YOLOv4 model for object detection, trained on the COCO dataset and converted to a tflite file.
I want to retrain it on my traffic-sign dataset (GTSDB).
How should I do that with my tflite file?
I don't want to retrain in Darknet and then convert to tflite; I want to do the transfer learning directly from the tflite file.

How was the ssd_mobilenet_v1 tflite model in TFHub trained?

How do I find more info on how the ssd_mobilenet_v1 tflite model on TFHub was trained?
Was it trained in such a way that made it easy to convert to tflite by avoiding certain ops not supported by tflite? Or was it trained normally and then converted using the tflite converter with TF Select and the tips in this GitHub issue?
Also, does anyone know if there's an equivalent mobilenet tflite model trained on OpenImagesV6? If not, what's the best starting point for training one?
I am not sure about the exact origin of the model, but it looks like it does use TFLite-compatible ops. In my experience, the best place to start for TFLite-compatible SSD models is the TF2 Detection Zoo. You can convert any of the SSD models using these instructions.
To train your own model, you can follow these instructions that leverage Google Cloud.
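As a rough sketch of the conversion step those instructions describe: after exporting a TFLite-friendly SavedModel with the zoo's export_tflite_graph_tf2.py script, the conversion itself is short (the directory name below is a placeholder):

    import tensorflow as tf

    # Directory produced by export_tflite_graph_tf2.py (placeholder path).
    converter = tf.lite.TFLiteConverter.from_saved_model(
        "tflite_export/saved_model")
    tflite_model = converter.convert()

    with open("ssd_mobilenet.tflite", "wb") as f:
        f.write(tflite_model)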

Can I quantize my TensorFlow graph for the full version of TF, not tflite?

I need to quantize my model for use in the full version of TensorFlow, and I cannot find out how to do this (in the official manual on model quantization, the model is saved in the tflite format).
AFAIK the only supported quantization scheme in TensorFlow is tflite. What do you plan to do with a quantized TensorFlow graph? If it is inference only, why not simply use tflite?
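To show how little the tflite route asks for, here is a minimal dynamic-range quantization sketch, assuming a SavedModel at ./saved_model (a placeholder path):

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("./saved_model")
    # Dynamic-range quantization: weights become int8, activations stay float.
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    with open("model_dynamic_range.tflite", "wb") as f:
        f.write(converter.convert())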

Can you convert a .tflite model file to .coreml - or back to a TensorFlow .pb file or Keras .h5 file?

General question: is there tooling to convert from tflite format to any other format?
I'm trying to convert a Keras model to a CoreML model, but I can't because the model uses a layer type unsupported by CoreML (GaussianNoise). Converting the Keras .h5 model to a .tflite is simple, removes the offending layer (which is only used in training anyway), and performs some other optimisations. But it doesn't seem possible to convert out of the resultant tflite to any other format, and coremltools doesn't support tflite.
I thought I could load the tflite model into a TensorFlow session, save a .pb from there, and convert that to CoreML using coremltools, but I can't see a way to load the tflite model into a TensorFlow session. I saw the documentation linked to in this question, but that seems to use the tflite interpreter to read the tflite model, rather than a "true" TensorFlow session.
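For reference, the interpreter-based loading mentioned at the end looks roughly like this; it lets you run the model, but it does not give you a graph you can re-export as a .pb (the file name is a placeholder):

    import numpy as np
    import tensorflow as tf

    # Load the tflite flatbuffer with the interpreter, not a session.
    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Run one inference with dummy data shaped like the model's input.
    dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()
    result = interpreter.get_tensor(output_details[0]["index"])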

How to convert Dlib weights into tflite format?

I want to convert Dlib weights for face detection, face landmarks, and face recognition, which are in .dat format, into .tflite format. TensorFlow Lite requires input in TensorFlow SavedModel, frozen graph (.pb), or Keras model (.h5) format; converting the Dlib .dat files to any of these would also work. Can anyone help me with how to do this, and are there converted files available?
"TensorFlow Lite requires input in TensorFlow SavedModel, frozen graph (.pb), or Keras model (.h5) format; converting the Dlib .dat files to any of these would also work."
I think you're on the right track. You should try to convert Dlib to a TensorFlow frozen graph first, then convert the frozen graph to TensorFlow Lite format following the guide.
Have you tried this? Did you run into any problems when running tflite_convert? If you have further questions, please update the original question with detailed error messages.
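Once you have a frozen graph, the conversion step itself is short. A sketch using the TF1 compat converter, with placeholder file and tensor names you would replace with your graph's real input/output names:

    import tensorflow as tf

    # Placeholder file and tensor names -- replace with your own.
    converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file="dlib_converted_frozen.pb",
        input_arrays=["input"],
        output_arrays=["output"],
    )
    tflite_model = converter.convert()

    with open("dlib_model.tflite", "wb") as f:
        f.write(tflite_model)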