How to do transfer learning on or fine-tune a TensorFlow Lite (tflite) model - tensorflow

I have a Darknet YOLOv4 object detection model, trained on the COCO dataset, that has been converted to a tflite file.
I want to retrain it on my traffic sign dataset (GTSDB).
How should I do that starting from my tflite file?
I don't want to retrain in Darknet and then convert to tflite again. I want to do transfer learning directly from the tflite file.

Related

How was the ssd_mobilenet_v1 tflite model in TFHub trained?

How do I find more info on how the ssd_mobilenet_v1 tflite model on TFHub was trained?
Was it trained in a way that made it easy to convert to tflite by avoiding certain ops not supported by tflite? Or was it trained normally and then converted using the tflite converter, with TF Select and the tips in this GitHub issue?
Also, does anyone know if there's an equivalent mobilenet tflite model trained on OpenImagesV6? If not, what's the best starting point for training one?
I am not sure about the exact origin of the model, but it does look like it has TFLite-compatible ops. From my experience, the best place to start for TFLite-compatible SSD models is the TF2 Detection Zoo. You can convert any of the SSD models using these instructions.
To train your own model, you can follow these instructions that leverage Google Cloud.
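The conversion path mentioned above boils down to pointing `tf.lite.TFLiteConverter` at an exported SavedModel. A minimal sketch, using a tiny Keras model as a stand-in for a real Detection Zoo export (the zoo path in the comment is a placeholder, not an actual directory layout):

```python
import tensorflow as tf

# Tiny stand-in model; a real SSD model would first be exported from the
# TF2 Detection Zoo checkpoint to a SavedModel directory.
inputs = tf.keras.Input(shape=(32, 32, 3))
outputs = tf.keras.layers.Conv2D(4, 3)(inputs)
model = tf.keras.Model(inputs, outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For a Detection Zoo export you would instead use:
# converter = tf.lite.TFLiteConverter.from_saved_model("exported/saved_model")
tflite_model = converter.convert()

# The result is a flatbuffer you write straight to disk.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

If the model contains ops without TFLite equivalents, the converter will error out at `convert()`; that is where TF Select (`tf.lite.OpsSet.SELECT_TF_OPS`) comes in.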

Is it possible to quantize a Tensorflow Lite model to 8-bit weights without the original HDF5 file?

I'm trying to compile a tflite model with the edgetpu compiler to make it compatible with Google's Coral USB key, but when I run edgetpu_compiler the_model.tflite I get a Model not quantized error.
I then wanted to quantize the tflite model to an 8-bit integer format, but I don't have the model's original .h5 file.
Is it possible to quantize a tflite-converted model to an 8-bit format?
@garys unfortunately, TensorFlow doesn't have an API to quantize a float tflite model. For post-training quantization, the only API they have takes a full TensorFlow model (.pb, hdf5, h5, saved_model, ...) and produces a tflite file. The quantization happens during tflite conversion, so to my knowledge there isn't a way to do this.
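To illustrate what "quantization happens during conversion" means in practice: if you do have the original model, the Edge TPU-compatible full-integer path is configured on the converter before `convert()` is called. A sketch with a tiny stand-in Keras model and a synthetic calibration dataset (shapes and data are illustrative only):

```python
import numpy as np
import tensorflow as tf

# Stand-in for the original full model -- this is exactly what you need
# and cannot recover from an already-converted float .tflite file.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(8,)),
    tf.keras.layers.Dense(4),
])

def representative_dataset():
    # Calibration samples; in practice, draw a few hundred real inputs.
    for _ in range(100):
        yield [np.random.rand(1, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full int8 ops and int8 input/output, as the Edge TPU compiler expects.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
quantized_model = converter.convert()
```

The resulting `quantized_model` bytes are what `edgetpu_compiler` accepts; a float tflite file skips this whole calibration step, which is why it can't be quantized after the fact.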

How to generate a prototxt file for a tensorflow frozen model?

I want to analyze a CNN in the NetScope CNN analyzer, but unfortunately it only accepts Caffe prototxt files. So is there any way to convert a TensorFlow frozen model to a Caffe model in order to generate a prototxt file?

Can you convert a .tflite model file to .coreml - or back to a Tensorflow .pb file or keras h5 file?

General question: is there tooling to convert from tflite format to any other format?
I'm trying to convert a Keras model to a CoreML model, but I can't because the model uses a layer type unsupported by CoreML (Gaussian Noise). Converting the Keras .h5 model to a .tflite is simple, removes the offending layer (which is only used in training anyway), and performs some other optimisations. But it doesn't seem possible to convert out of the resultant tflite to any other format, and coremltools doesn't support tflite.

I thought I could probably load the model from tflite into a TensorFlow session, save a .pb from there, and convert that to CoreML using coremltools, but I can't see a way to load the tflite model into a TensorFlow session. I saw the documentation linked to in this question, but that seems to use the tflite interpreter to read the tflite model, rather than a "true" TensorFlow session.
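For context on the interpreter-vs-session distinction raised above: `tf.lite.Interpreter` is the supported way to read and run a .tflite flatbuffer, and it never rebuilds a TensorFlow graph, which is why there is no session to save a .pb from. A minimal sketch, using a tiny Keras model converted in-memory for illustration:

```python
import numpy as np
import tensorflow as tf

# Build and convert a tiny model so the example is self-contained.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# The interpreter executes the flatbuffer directly -- no TF graph involved.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

You can inspect tensors and run inference this way, but the op-level TensorFlow graph needed by coremltools is gone after conversion.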

How to convert a frozen inference graph and checkpoint to SavedModel

I have a frozen inference graph (frozen_inference_graph.pb) and a checkpoint (model.ckpt.data-00000-of-00001, model.ckpt.index). How do I deploy these to TensorFlow Serving? Serving needs the SavedModel format; how do I convert to it?
I am studying TensorFlow and found that DeepLab v3+ provides a PASCAL VOC 2012 model. I can run training, evaluation, and visualization on my local PC, but I don't know how to deploy it for serving.
Have you tried export_inference_graph.py?
Prepares an object detection tensorflow graph for inference using model configuration and a trained checkpoint. Outputs inference graph, associated checkpoint files, a frozen inference graph and a SavedModel.
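If you only have the frozen .pb and not the original export script, a frozen GraphDef can also be wrapped in a SavedModel by hand. A hedged sketch using the TF1 compatibility API; it builds a tiny GraphDef inline to stay self-contained, and the tensor names `input:0` / `output:0` are illustrative stand-ins for your graph's real input and output names (inspect them with a tool like Netron):

```python
import os
import tempfile
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Tiny stand-in GraphDef; for a real model you would instead parse the file:
#   graph_def = tf.GraphDef()
#   graph_def.ParseFromString(open("frozen_inference_graph.pb", "rb").read())
g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [None, 4], name="input")
    tf.identity(x * 2.0, name="output")
graph_def = g.as_graph_def()

# TF Serving expects a numeric version subdirectory under the model dir.
export_dir = os.path.join(tempfile.mkdtemp(), "1")

with tf.Session(graph=tf.Graph()) as sess:
    tf.import_graph_def(graph_def, name="")
    inp = sess.graph.get_tensor_by_name("input:0")    # assumed name
    out = sess.graph.get_tensor_by_name("output:0")   # assumed name
    # simple_save writes a SavedModel with a serving_default signature.
    tf.saved_model.simple_save(
        sess, export_dir, inputs={"input": inp}, outputs={"output": out})
```

The resulting directory contains `saved_model.pb` plus a `variables/` folder and can be pointed at directly by `tensorflow_model_server`.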