Convert frozen graph to tensorflow-js format - tensorflow

I have an SSD model (trained on a custom dataset) using the Google Object Detection API. I have frozen a checkpoint, which generates a couple of files (including a *.pb file).
Question: How do I convert that frozen inference graph into a web-friendly format that can be used by tf-js?
(PS: The official website does mention an example along similar lines, but it expects the SavedModel format, not a frozen graph.)

I found the answer. This is a two-step conversion process: (1) export the checkpoint to a frozen graph with input_type set to encoded_image_string_tensor (help); (2) then run the TensorFlow.js converter on the result, as sketched below.
(Note: It is possible that step 2 will fail because not all layers are supported for conversion.)
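For reference, the two steps might look roughly like this; the paths and output node names below are placeholders, and older versions of tensorflowjs_converter accept a frozen graph via --input_format=tf_frozen_model:
# Step 1: export the checkpoint as a frozen inference graph (Object Detection API)
python object_detection/export_inference_graph.py \
    --input_type=encoded_image_string_tensor \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_prefix=path/to/model.ckpt \
    --output_directory=exported_graph
# Step 2: convert the frozen graph to the TensorFlow.js web format
tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_node_names='detection_boxes,detection_scores,detection_classes,num_detections' \
    exported_graph/frozen_inference_graph.pb \
    web_model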

Related

How to train custom object detection with tfrecord file

I want to train an object detection model. I have annotated the data using Roboflow, exported it as TFRecords, and also got the .pbtxt label map, but after that I have no clue how to train a CNN model from scratch with just 2 or 3 hidden layers. I don't understand how to use that TFRecord to fit the model I have created. Please help me out.
TFRecord files are usually used with TensorFlow Object Detection. It's pretty old and I haven't seen it used in practice recently, but there's a TensorFlow Object Detection tutorial here that uses these TFRecord files.
If there isn't a particular reason you need to use TF Object Detection, I'd recommend using a newer and better-supported model like YOLOv5 or YOLOv7; see the sketch below.
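If you go the YOLOv5 route, for example, Roboflow can export the same annotations in YOLO format, and training is roughly one command (assuming the repo is cloned locally; the dataset YAML path, image size, batch size, and epoch count are placeholders):
git clone https://github.com/ultralytics/yolov5 && cd yolov5
pip install -r requirements.txt
# Fine-tune a small pretrained model on the Roboflow-exported dataset
python train.py --img 640 --batch 16 --epochs 100 \
    --data path/to/your_dataset.yaml --weights yolov5s.pt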

Can't manage to open TensorFlow SavedModel for usage in Keras

I'm kinda new to TensorFlow and Keras, so please excuse any accidental stupidity, but I have an issue. I've been trying to load in models from the TensorFlow Detection Zoo, but haven't had much success.
I can't figure out how to read these saved_model folders (they contain a saved_model.pb file, and an assets and variables folder) so that they're accepted by Keras. Nor can I figure out a way to convert these models so that they may be loaded in. I've tried converting the SavedModel to ONNX and then converting the ONNX model to Keras, but that didn't work. Loading the original model as a saved_model and then trying to save this loaded model in another format gave me no success either.
Since you are new to TensorFlow (and, I guess, deep learning), I would suggest you stick with the Object Detection API, because the detection zoo models interface best with it. If you have already downloaded the model, you just need to export it using the exporter_main_v2.py script. This article explains it very well: link.
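For reference, exporting a downloaded (or fine-tuned) detection-zoo model with exporter_main_v2.py looks roughly like this; the config path, checkpoint directory, and output directory are placeholders:
python object_detection/exporter_main_v2.py \
    --input_type=image_tensor \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_dir=path/to/checkpoint \
    --output_directory=exported_model
The resulting exported_model/saved_model directory is then typically loaded with tf.saved_model.load rather than the Keras load_model API, since detection-zoo models are not Keras models.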

Convert Subclassed Speech Recognition Model to Tensorflow.js

I have a subclassed Speech Recognition model (link) with which I'd like to run inference on my node.js server. I am trying to convert it using tfjs, but because it's a subclassed model I'm getting the following error:
NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.
I am following the official tutorial, which doesn't cover this scenario. And surprisingly I couldn't find much info on the web apart from a closed issue.
Any ideas on how to convert a Subclassed Model to tensorflowjs?
Ok, so I was specifically trying to convert a speech recognition model (link above) and it seems that most such models aren't supported at the moment by tfjs (including mozilla's deepspeech).
It will specifically throw this error:
ValueError: Unsupported Ops in the model before optimization
AudioSpectrogram
The command used in this case was:
tensorflowjs_converter path/to/qnet15/ path/to/qnet15/converted/ --input_format=tf_saved_model --output_format=tfjs_graph_model
This error can be silenced, however, by adding the --skip_op_check flag to the above command. It will generate the expected model.json with its corresponding weight binaries after a bunch of warnings.
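That is, something like:
tensorflowjs_converter path/to/qnet15/ path/to/qnet15/converted/ --input_format=tf_saved_model --output_format=tfjs_graph_model --skip_op_check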
But if you then try inference in Node, the same error occurs:
Promise {
<rejected> TypeError: Unknown op 'AudioSpectrogram'. File an issue at https://github.com/tensorflow/tfjs/issues so we can add it, or register a custom execution with tf.registerOp()
The model is basically useless. There has been an open issue for this feature for some years now.
Instead of using model.save_weights() as in the tutorial, you should use the other option, model.save("my_model_dir"); you can check here for confirmation.
After saving the model directory, you can convert it using the TensorFlow.js converter (here my_model_dir is the input path and converted_model the output path):
$ tensorflowjs_converter \
    --input_format=tf_saved_model \
    my_model_dir \
    converted_model

How to convert model trained on custom data-set for the Edge TPU board?

I have trained my custom data-set using the TensorFlow Object Detection API. I run my "prediction" script and it works fine on the GPU. Now I want to convert the model to TensorFlow Lite and run it on the Google Coral Edge TPU Board to detect my custom objects. I have gone through the documentation that the Google Coral Board website provides, but I found it very confusing.
How to convert and run it on the Google Coral Edge TPU Board?
Thanks
Without reading the documentation, it will be very hard to continue. I'm not sure what your "prediction script" means, but I'm assuming that the script loaded a .pb TensorFlow model, loaded some image data, and ran inference on it to produce prediction results. That means you have a .pb TensorFlow model at the "Frozen graph" stage of the following pipeline:
Image taken from coral.ai.
The next step would be to convert your .pb model to a "fully quantized .tflite model" using the post-training quantization technique. The documentation for doing that is given here. I also created a GitHub gist containing an example of post-training quantization here. Once you have produced the .tflite model, you'll need to compile it via the edgetpu_compiler. Although everything you need to know about the edgetpu compiler is in that link, for your purpose, compiling a model is as simple as:
$ edgetpu_compiler your_model_name.tflite
This creates a your_model_name_edgetpu.tflite model that is compatible with the EdgeTPU. If, at this stage, you get errors instead of an EdgeTPU-compatible model, it means your model did not meet the requirements posted in the model-requirements section.
Once you have produced a compiled model, you can deploy it on an EdgeTPU device. Currently there are two main APIs that can be used to run inference with the model:
EdgeTPU API
    Python API
    C++ API
tflite API
    C++ API
    Python API
Ultimately, there are many demo examples to run inference on the model here.
The previous answer works with general classification models, but not with models trained with the TF Object Detection API.
You cannot do post-training quantization with the TF Lite converter on TF Object Detection API models.
In order to run object detection models on EdgeTPUs:
You must train the models in quantization-aware training mode, with this addition in the model config:
graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}
This might not work with all the models provided in the model zoo; try a quantized model first.
After training, export the frozen graph with object_detection/export_tflite_ssd_graph.py.
Run the tensorflow/lite/toco tool on the frozen graph to make it TFLite-compatible.
And finally run edgetpu_compiler on the .tflite file, as sketched below.
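For reference, those steps might look roughly like this for a 300x300 SSD; the paths are placeholders and the flags follow the guide linked below:
# Export a TFLite-compatible frozen graph from the quantization-aware checkpoint
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_prefix=path/to/model.ckpt \
    --output_directory=exported_tflite \
    --add_postprocessing_op=true
# Convert the frozen graph to a fully quantized .tflite model
toco \
    --graph_def_file=exported_tflite/tflite_graph.pb \
    --output_file=exported_tflite/detect.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_dev_values=128 \
    --allow_custom_ops
# Compile the quantized model for the EdgeTPU
edgetpu_compiler exported_tflite/detect.tflite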
You can find a more in-depth guide here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md

Using TensorFlow object detection API models at prediction

I have used the TensorFlow object detection API to train the SSD Inception model from scratch. The evaluation script shows that the model has learned something and now I want to use the model.
I have looked at the object detection ipynb that can feed single images to a trained model. However, this is for SSD with MobileNet. I have used the following line (after loading the meta graph) to print the tensor names of the TensorFlow model I trained.
print([str(op.name) for op in tf.get_default_graph().get_operations()])
But it does not contain the same input or output tensor names as in the ipynb. I have also searched through the code, but many functions point toward each other and it is difficult to find what I am looking for.
How can I find the tensor names I need? Or is there another method I do not know about?
To use the graph, you need to freeze/export it using this provided script. The resulting .pb file will contain the nodes you need; a sketch of the export command is below. I don't know why it's organized like that, but it is.
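For reference, the export might look roughly like this; the paths are placeholders:
python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=path/to/pipeline.config \
    --trained_checkpoint_prefix=path/to/model.ckpt \
    --output_directory=exported_graph
The exported frozen graph exposes the standard Object Detection API tensor names, e.g. image_tensor:0 as input and detection_boxes:0, detection_scores:0, detection_classes:0, and num_detections:0 as outputs.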