The TensorFlow.js model used does not load properly - tensorflow

I trained the model following this tutorial, but it doesn't work; it seems to be too new:
Custom object detection in the browser using TensorFlow.js
Training a model for custom object detection (TF 2.x) on Google Colab
After I load the model in this example, I get some errors:
https://github.com/tensorflow/tfjs-models/tree/master/coco-ssd
Error: https://github.com/tensorflow/tfjs/issues/4638
I don't understand the reason for the error, and I also don't understand the model's output format.
Here is working sample code:
https://github.com/mtrucc/react-ts-fpkqes
Here is the working model:
https://github.com/mtrucc/react-ts-fpkqes/tree/main/public/mask_model
This is the model I trained myself and converted using the command below; it doesn't work:
https://github.com/mtrucc/react-ts-fpkqes/tree/main/public/mask_model2
You can switch models by modifying the path in the code:
https://github.com/mtrucc/react-ts-fpkqes/blob/6cb2ce09f4c44673098749041244b0f998f8d8f4/src/components/mask.js#L35
Here is my conversion command:
tensorflowjs_converter --input_format=tf_saved_model \
--output_format=tfjs_graph_model \
/content/gdrive/MyDrive/customTF2/data/inference_graph/saved_model \
/content/gdrive/MyDrive/customTF2/data/inference_graph/saved_model_js
I don't know where the problem is. If you need more details, I can add them.

Related

How to convert a YOLOv5 model to TensorFlow.js

Is it possible to convert a YOLOv5 PyTorch model to a TensorFlow.js model?
I am developing an object detection web app, so I have trained a model using YOLOv5, but now I am looking for the correct method to convert that model to TF.js.
I think this is what you are looking for: https://github.com/zldrobit/tfjs-yolov5-example
Inside the YOLOv5 repo, run the export.py command:
python export.py --weights yolov5s.pt --include tfjs
Then cd into the above-linked repo and copy the exported weights folder into public:
cp ./yolov5s_web_model public/web_model
Don't forget, you'll have to change the names array in src/index.js to match your custom model.
But unfortunately, it seems painfully slow at about 1-2 seconds. I don't think I was able to get WebGL working.
It takes a few interim steps, but most of the time it works:
Export PyTorch to ONNX
Convert ONNX to TF Saved Model
Convert TF Saved Model to TFJS Graph Model
When converting from ONNX to TF, you might need to adjust the target version if you run into unsupported ops.
Also, make sure to set the input resolution to a fixed value; dynamic input shapes get messed up in this multi-step conversion. A sketch of these steps follows below.
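Below is a rough Python sketch of those three steps, assuming the onnx and onnx-tf packages are installed; the model variable, the 640x640 input resolution, the opset version, and all paths are placeholders to adjust for your project, and the final TensorFlow.js step is the usual tensorflowjs_converter call.
import torch
import onnx
from onnx_tf.backend import prepare

# Step 1: export the PyTorch model to ONNX with a fixed input resolution
# (dynamic shapes tend to get messed up in this multi-step conversion).
dummy_input = torch.zeros(1, 3, 640, 640)  # placeholder resolution
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=12)

# Step 2: convert the ONNX model to a TensorFlow SavedModel.
tf_rep = prepare(onnx.load("model.onnx"))
tf_rep.export_graph("saved_model")

# Step 3 (shell): convert the SavedModel to a TFJS graph model:
#   tensorflowjs_converter --input_format=tf_saved_model \
#     --output_format=tfjs_graph_model saved_model web_model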

How can I convert the model I trained with Tensorflow (python) for use with TensorflowJS without involving IBM cloud (from the step I'm at now)?

What I'm trying to do
I'm trying to learn TensorFlow object recognition and as usual with new things, I scoured the web for tutorials. I don't want to involve any third party cloud service or web development framework, I want to learn to do it with just native JavaScript, Python, and the TensorFlow library.
What I have so far
So far, I've followed a TensorFlow object detection tutorial (accompanied by a 5+ hour video) to the point where I've trained a model in Tensorflow (python) and want to convert it to run in a browser via TensorflowJS. I've also tried other tutorials and haven't seemed to find one that explains how to do this without a third party cloud / tool and React.
I know in order to use this model with tensorflow.js my goal is to get files like:
group1-shard1of2.bin
group1-shard2of2.bin
labels.json
model.json
I've gotten to the point where I created my TFRecord files and started training:
py Tensorflow\models\research\object_detection\model_main_tf2.py --model_dir=Tensorflow\workspace\models\my_ssd_mobnet --pipeline_config_path=Tensorflow\workspace\models\my_ssd_mobnet\pipeline.config --num_train_steps=100
It seems after training the model, I'm left with:
files named checkpoint, ckpt-1.data-00000-of-00001, ckpt-1.index, pipeline.config
the pre-trained model (which I believe isn't the file that changes during training, right?) ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8
I'm sure it's not hard to get from this step to the files I need, but I honestly browsed a lot of documentation, tutorials, and Google results and didn't see an example of doing it without some third party cloud service. Maybe it's in the documentation and I'm missing something obvious.
The project directory structure looks like this: (screenshot of the project directory omitted)
Where I've looked for an answer
For some reason, frustratingly, every single tutorial I've found (including the one linked above) for using a pre-trained Tensorflow model for object detection via TensorFlowJS has required the use of IBM Cloud and ReactJS. Maybe they're all copying from some tutorial they found and now all the tutorials include this, I don't know. What I do know is I'm building an Electron.js desktop app and object detection shouldn't require network connectivity assuming the compute is happening on the user's device. To clarify: I'm creating an app where the user trains the model, so it's not just a matter of one time conversion. I want to be able to train with Python Tensorflow and convert the model to run on JavaScript Tensorflow without any cloud API.
So I stopped looking for tutorials and tried looking directly at the documentation at https://github.com/tensorflow/tfjs.
When you get to the section about importing pre-trained models, it says:
Importing pre-trained models
We support porting pre-trained models from:
TensorFlow SavedModel
Keras
So I followed that link to Tensorflow SavedModel, which brings us to a project called tfjs-converter. That repo says:
This repository has been archived in favor of tensorflow/tfjs.
This repo will remain around for some time to keep history but all
future PRs should be sent to tensorflow/tfjs inside the tfjs-core
folder.
All history and contributions have been preserved in the monorepo.
Which sounds a bit like a circular reference to me, considering it's directing me to the page that just told me to go here. So at this point you're wondering well is this whole library deprecated, will it work or what? I look around in this repo anyway, into: https://github.com/tensorflow/tfjs-converter/tree/master/tfjs-converter
It says:
A 2-step process to import your model:
A python pip package to convert a TensorFlow SavedModel or TensorFlow Hub module to a web friendly format. If you already have a converted model, or are using an already hosted model (e.g. MobileNet), skip this step.
JavaScript API, for loading and running inference.
And basically says to create a venv and do:
pip install tensorflowjs
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_format=tfjs_graph_model \
--signature_name=serving_default \
--saved_model_tags=serve \
/mobilenet/saved_model \
/mobilenet/web_model
But wait, are the checkpoint files I have a "TensorFlow SavedModel"? This doesn't seem clear, the documentation doesn't explain. So I google it, find the documentation, and it says:
You can save and load a model in the SavedModel format using the
following APIs:
Low-level tf.saved_model API. This document describes how to use this
API in detail. Save: tf.saved_model.save(model, path_to_dir)
The linked API reference elaborates somewhat:
tf.saved_model.save(
    obj, export_dir, signatures=None, options=None
)
with an example:
class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[], dtype=tf.float32)])
    def add(self, x):
        return x + x

model = Adder()
tf.saved_model.save(model, '/tmp/adder')
But so far, this isn't familiar at all. I don't understand how to take the results of my training process so far (the checkpoints) and load them into a model variable so I can pass it to this function.
This passage seems important:
Variables must be tracked by assigning them to an attribute of a
tracked object or to an attribute of obj directly. TensorFlow objects
(e.g. layers from tf.keras.layers, optimizers from tf.train) track
their variables automatically. This is the same tracking scheme that
tf.train.Checkpoint uses, and an exported Checkpoint object may be
restored as a training checkpoint by pointing
tf.train.Checkpoint.restore to the SavedModel's "variables/"
subdirectory.
And it might be the answer, but I'm not really clear on what it means as far as being "restored", or where I go from there, if that's even the right step to take. All of this is very confusing to someone learning TF which is why I looked for a tutorial that does it, but again, I can't seem to find one without third party cloud services / React.
Please help me connect the dots.
You can convert your model to TensorFlowJS format without any cloud services. I have laid out the steps below.
I'm sure it's not hard to get from this step to the files I need.
The checkpoints you see are in tf.train.Checkpoint format (relevant source code that creates these checkpoints in the object detection model code). This is different from the SavedModel and Keras formats.
We will go through these steps:
Checkpoint (current) --> SavedModel --> TensorFlowJS
Converting from tf.train.Checkpoint to SavedModel
Please see the script models/research/object_detection/export_inference_graph.py to convert the Checkpoint files to SavedModel.
The code below is taken from the docs of that script. Please adjust the paths to your specific project. --input_type should remain as image_tensor.
python export_inference_graph.py \
--input_type image_tensor \
--pipeline_config_path path/to/ssd_inception_v2.config \
--trained_checkpoint_prefix path/to/model.ckpt \
--output_directory path/to/exported_model_directory
In the output directory, you should see a saved_model directory. We will use this in the next step.
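For reference, a typical SavedModel directory looks roughly like this (exact file names can vary):
saved_model/
├── saved_model.pb
├── assets/
└── variables/
    ├── variables.data-00000-of-00001
    └── variables.index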
Converting SavedModel to TensorFlowJS
Follow the instructions at https://github.com/tensorflow/tfjs/tree/master/tfjs-converter, specifically paying attention to the "TensorFlow SavedModel example". The example conversion code is copied below. Please modify the input and output paths for your project. The --signature_name and --saved_model_tags might have to be changed, but hopefully not.
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_format=tfjs_graph_model \
--signature_name=serving_default \
--saved_model_tags=serve \
/mobilenet/saved_model \
/mobilenet/web_model
Using the TensorFlowJS model
I know in order to use this model with tensorflow.js my goal is to get files like:
group1-shard1of2.bin
group1-shard2of2.bin
labels.json
model.json
The steps above should create these files for you, though I don't think labels.json will be created. I am not sure what that file should contain. TensorFlowJS will use model.json to construct the inference graph, and it will load the weights from the .bin files.
Because we converted a TensorFlow SavedModel to a TensorFlowJS model, we will need to load the JS model with tf.loadGraphModel(). See the tfjs converter page for more information.
Note that for TensorFlowJS, there is a difference between a TensorFlow SavedModel and a Keras SavedModel. Here, we are dealing with a TensorFlow SavedModel.
The JavaScript code to run the model is probably out of scope for this answer, but I would recommend reading this TensorFlowJS tutorial. I have included a representative JavaScript portion below.
import * as tf from '@tensorflow/tfjs';
import {loadGraphModel} from '@tensorflow/tfjs-converter';
const MODEL_URL = 'model_directory/model.json';
const model = await loadGraphModel(MODEL_URL);
const cat = document.getElementById('cat');
model.execute(tf.browser.fromPixels(cat));
Extra notes
... Which sounds a bit like a circular reference to me,
The TensorFlowJS ecosystem has been consolidated in the tensorflow/tfjs GitHub repository. The tfjs-converter documentation lives there now. You can create a pull request to https://github.com/tensorflow/tfjs to fix the SavedModel link to point to the tensorflow/tfjs repository.

Convert Subclassed Speech Recognition Model to Tensorflow.js

I have a subclassed speech recognition model (link) with which I'd like to make inferences on my Node.js server. I am trying to convert it using tfjs, but because it's a subclassed model I'm getting the following error:
NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using `save_weights`.
I am following the official tutorial, which doesn't cover this scenario. And surprisingly, I couldn't find much info on the web apart from a closed issue.
Any ideas on how to convert a Subclassed Model to tensorflowjs?
OK, so I was specifically trying to convert a speech recognition model (link above), and it seems that most such models aren't supported at the moment by tfjs (including Mozilla's DeepSpeech).
It will specifically throw this error:
ValueError: Unsupported Ops in the model before optimization
AudioSpectrogram
The command used in this case was:
tensorflowjs_converter path/to/qnet15/ path/to/qnet15/converted/ --input_format=tf_saved_model --output_format=tfjs_graph_model
This error can be silenced, however, by adding the --skip_op_check flag to the above command. It will generate the expected model.json with its corresponding weight binaries after a bunch of warnings.
But if you try inference in Node, the same error occurs:
Promise {
<rejected> TypeError: Unknown op 'AudioSpectrogram'. File an issue at https://github.com/tensorflow/tfjs/issues so we can add it, or register a custom execution with tf.registerOp()
The model is basically useless. There has been an open issue for this feature for some years now.
Instead of using model.save_weights() as in the tutorial, you should use the other option, which is model.save("my_model_dir"); you can check here for confirmation.
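As a minimal sketch (assuming model is the subclassed Keras model from the question):
# Saving in the TensorFlow SavedModel format avoids the HDF5 limitation
# that the error message describes for subclassed models.
model.save("my_model_dir", save_format="tf")
# Alternatively: tf.saved_model.save(model, "my_model_dir")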
After saving the model directory, you would want to convert it using the TensorFlow.js converter:
$ tensorflowjs_converter \
--input_format=tf_saved_model \
my_model_dir \
converted_model  # my_model_dir is the input path, converted_model the output path

Tensorflow2 Object Detection API: PR-Curve

I have trained a model using the Tensorflow 2 Object Detection API on a custom dataset with the model_main_tf2.py script, exported it with export_inference_graph.py and evaluated it with model_main_tf2.py by setting the checkpoint_dir flag.
I would like to see the PR-curve for my exported model. However, I can't seem to find a way to do this. I found an answer here but I don't understand how this would be integrated in the TF Object Detection API.
Edit: I've run TensorBoard and it shows me the standard COCO metrics (mAP, mAP@.5IOU, mAP@.75IOU, etc.). However, clicking "PR-curve" in the top right corner leaves the board empty (screenshot omitted).

How to convert model trained on custom data-set for the Edge TPU board?

I have trained a model on my custom data-set using the TensorFlow Object Detection API. I run my "prediction" script and it works fine on the GPU. Now, I want to convert the model to TensorFlow Lite and run it on the Google Coral Edge TPU Board to detect my custom objects. I have gone through the documentation that the Google Coral Board website provides, but I found it very confusing.
How to convert and run it on the Google Coral Edge TPU Board?
Thanks
Without reading the documentation, it will be very hard to continue. I'm not sure what your "prediction script" means, but I'm assuming that the script loaded a .pb tensorflow model, loaded some image data, and ran inference on it to produce prediction results. That means you have a .pb tensorflow model at the "Frozen graph" stage of the conversion pipeline (pipeline image taken from coral.ai).
The next step would be to convert your .pb model to a "fully quantized .tflite model" using the post-training quantization technique. The documentation for doing that is given here, and I also created a GitHub gist containing an example of post-training quantization here; a rough sketch of that step is included after the compiler command below. Once you have produced the .tflite model, you'll need to compile the model via the edgetpu_compiler. Although everything you need to know about the edgetpu compiler is in that link, for your purpose, compiling a model is as simple as:
$ edgetpu_compiler your_model_name.tflite
This creates a your_model_name_edgetpu.tflite model that is compatible with the Edge TPU. If, at this stage, you are getting errors instead of an Edge TPU compatible model, that means your model did not meet the requirements posted in the model requirements section.
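Here is the rough post-training quantization sketch mentioned above. It is only an illustration: the SavedModel path, the 300x300 input shape, and the random calibration data are placeholders you would replace with your own.
import tensorflow as tf

def representative_data_gen():
    # Placeholder calibration data: in practice, yield a few hundred real,
    # preprocessed images from your training set instead of random tensors.
    for _ in range(100):
        yield [tf.random.uniform([1, 300, 300, 3])]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Force full-integer quantization, which the Edge TPU compiler requires.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("your_model_name.tflite", "wb") as f:
    f.write(tflite_model)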
Once you have produced a compiled model, you can then deploy it on an Edge TPU device. Currently there are 2 main APIs that can be used to run inference with the model:
EdgeTPU API
  Python API
  C++ API
tflite API
  C++ API
  Python API
Ultimately, there are many demo examples to run inference on the model here.
The previous answer works with general classification models, but not with TF object detection API trained models.
You cannot do post-training quantization with TF Lite converter on TF object detection API models.
In order to run object detection models on Edge TPUs:
You must train the models in quantization-aware training mode, with this addition in the model config:
graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}
This might not work with all the models provided in the model-zoo, try a quantized model first.
After training, export the frozen graph with: object_detection/export_tflite_ssd_graph.py
Run the tensorflow/lite/toco tool on the frozen graph to make it TFLite compatible (see the sketch after this list)
And finally, run edgetpu_compiler on the .tflite file
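For the toco/TFLite step, here is a hedged Python sketch using the TF1 converter API. The tensor names and the 300x300 input shape are the documented defaults for SSD models, so adjust them to your pipeline and prefer the exact flags from the guide linked below.
import tensorflow as tf

# Convert the tflite_graph.pb produced by export_tflite_ssd_graph.py into a TFLite model.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tflite_graph.pb",
    input_arrays=["normalized_input_image_tensor"],
    output_arrays=[
        "TFLite_Detection_PostProcess",
        "TFLite_Detection_PostProcess:1",
        "TFLite_Detection_PostProcess:2",
        "TFLite_Detection_PostProcess:3",
    ],
    input_shapes={"normalized_input_image_tensor": [1, 300, 300, 3]},
)
# The detection post-processing op is a custom TFLite op.
converter.allow_custom_ops = True
# For a quantization-aware-trained graph you would also set converter.inference_type
# and converter.quantized_input_stats; see the guide linked below for the exact values.
tflite_model = converter.convert()
with open("detect.tflite", "wb") as f:
    f.write(tflite_model)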
You can find a more in-depth guide here:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md