Distributed retraining (TF & Google Coral) - tensorflow

Assume I have tens of Google Coral devices doing object detection (all using the same trained model). Every once in a while we will retrain one device for a new object (transfer learning), let's say that device is Coral1. How would I transfer what Coral1 learned to all the other devices, without having to retrain each of them?
The devices can be Google Coral or any other device.

Since we are assuming that all the devices start out running inference on the same model,
whenever any device learns something new, the updated trained model should be pushed to all the other devices so they can start recognizing the new objects too.
No individual device needs to train on its own, as long as we maintain synchronization among the devices.
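A minimal sketch of that push step, assuming the devices are reachable over SSH; the IP addresses, the mendel user, and the my-detector.service name are placeholders for illustration:

    # Push the recompiled Edge TPU model to every device and restart the inference service.
    # Device IPs, paths, and the service name are assumptions, not part of any Coral tooling.
    import subprocess

    DEVICES = ["192.168.1.101", "192.168.1.102"]      # hypothetical Coral device addresses
    MODEL = "retrained_model_edgetpu.tflite"          # model recompiled for the Edge TPU

    for host in DEVICES:
        subprocess.run(["scp", MODEL, f"mendel@{host}:/home/mendel/models/"], check=True)
        subprocess.run(["ssh", f"mendel@{host}",
                        "sudo systemctl restart my-detector.service"], check=True)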
For further information, go through this link

You shouldn't be "retraining a device", but rather retraining a model. Take a look at this guide on how to re-train a model:
https://coral.withgoogle.com/docs/edgetpu/retrain-detection/
Once you've finished retraining a model, you can scp it onto the other boards and reload it.
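As a rough sketch of the reload step on each Coral board, assuming the model was copied to /home/mendel/models/ and the standard tflite_runtime / libedgetpu packages from the Coral docs are installed:

    # Re-create the interpreter against the newly copied model with the Edge TPU delegate.
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(
        model_path="/home/mendel/models/retrained_model_edgetpu.tflite",  # assumed path
        experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
    )
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

From here the device runs inference exactly as before, just with the updated weights.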

Related

How to use the models under tensorflow/models/research/object_detection/models?

I'm looking into training an object detection network using TensorFlow, and I had a look at the TF2 Model Zoo. I noticed that there are noticeably fewer models there than in the directory /models/research/models/, including the MobileDet with SSDLite developed for the Jetson Xavier.
To clarify, the readme says that there is a MobileDet GPU with SSDLite, and that the model and checkpoints trained on COCO are provided, yet I couldn't find them anywhere in the repo.
How is one supposed to use those models?
I already have a custom-trained MobileDetv3 for image classification, and I was hoping to find a way to turn the network into an object detection network, in accordance with the MobileDetv3 paper. If this is not straightforward, training a network from scratch would be OK too; I just need to know where to start.
If you plan to use the object detection API, you can't use your existing model. You have to choose from a list of models here for v2 and here for v1
The documentation is very well maintained, and the steps to train, validate, or run inference (test) on custom data are very well explained here by the TensorFlow team. The link is meant for TensorFlow v2. However, if you wish to use v1, the process is fairly similar, and there are numerous blogs/videos explaining how to go about it.
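As a quick sketch of what using a TF2 Detection Zoo model looks like once it's downloaded and extracted (the model name and input size below are placeholders; pick whichever zoo entry you need):

    # Load an exported detection SavedModel from the TF2 Detection Zoo and run it on a batch.
    import numpy as np
    import tensorflow as tf

    detect_fn = tf.saved_model.load("ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")

    image = np.zeros((1, 320, 320, 3), dtype=np.uint8)     # stand-in for a real image batch
    detections = detect_fn(tf.constant(image))

    boxes = detections["detection_boxes"][0].numpy()        # [N, 4] normalized box coordinates
    scores = detections["detection_scores"][0].numpy()      # [N] confidence scores
    classes = detections["detection_classes"][0].numpy()    # [N] COCO class ids

Fine-tuning on your own data then goes through the pipeline config and training scripts described in the linked documentation.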

TF Lite Retraining on Mobile

Let's assume I made an app that has machine learning in it using a tflite file.
Is it possible that I could retrain this model right inside the app?
I have tried to use the Model Maker provided by TensorFlow, but apart from that, I don't think there's any other way to retrain the model from within the app I made.
Do you mean training on the device once the app is deployed? If yes, TFLite currently doesn't support training in general. But there's some experimental work in this direction with limited support, as shown in https://github.com/tensorflow/examples/blob/master/lite/examples/model_personalization.
Currently, the retraining of a TFLite model, as you found out w/ Model Maker, has to happen offline w/ TF before the app is deployed.
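For reference, a hedged sketch of that offline route with Model Maker (run on a desktop before shipping the app; the flower_photos/ folder with one sub-folder per class is an assumption for illustration):

    # Retrain an image classifier offline with TFLite Model Maker, then export a .tflite file
    # that gets bundled into the next app release.
    from tflite_model_maker import image_classifier
    from tflite_model_maker.image_classifier import DataLoader

    data = DataLoader.from_folder("flower_photos/")
    train_data, test_data = data.split(0.9)

    model = image_classifier.create(train_data)   # fine-tunes a default EfficientNet-Lite backbone
    loss, accuracy = model.evaluate(test_data)

    model.export(export_dir=".")                  # writes model.tflite for the app to load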

Which implementation is preferred for real time video object detection with Tensorflow

I want to implement real-time video object detection where streams of video frames are fed to a detection serving system. I am considering two ways of implementing the system: (1) using the TensorFlow Serving system (TF Serving), or (2) using tensorflow session.run(). I was wondering which implementation fits the following scenario better?
Video streams arrive at the detection system at random times. Each video stream lasts for some time.
The system must support concurrent video object detection in real time, but the GPU nodes are limited, which means more than one detection process may run on a GPU concurrently.
Multiple DNN models can be used for detection. When a new video stream arrives, one of the DNN models is selected for inference. It is desired that the detection system can change the model selection decision on the fly.
The system should be able to set a resource allocation (maximum GPU utilization) for each serving model.
Thank you!
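One note that may help when weighing the TF Serving option: the client chooses the model per request, so switching models per stream doesn't require reloading anything on the server. A minimal sketch, assuming a TF Serving instance on localhost:8501 with a model named ssd_mobilenet_v2 already loaded (host, port, model name, input size, and frame encoding are all placeholders):

    # Send one batch of frames to whichever served model the scheduler picks for this stream.
    import json
    import numpy as np
    import requests

    def detect(frame_batch, model_name):
        # TF Serving exposes each loaded model at /v1/models/<name>:predict
        url = f"http://localhost:8501/v1/models/{model_name}:predict"
        response = requests.post(url, data=json.dumps({"instances": frame_batch}))
        response.raise_for_status()
        return response.json()["predictions"]

    # frame_batch must match the served model's input signature; the model name can differ
    # per stream and be changed on the fly without touching the server.
    frame_batch = np.zeros((1, 300, 300, 3), dtype=np.uint8).tolist()   # stand-in frame
    results = detect(frame_batch, model_name="ssd_mobilenet_v2")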

Run Faster-rcnn on mobile iOS

I have a Faster R-CNN model that I trained and that works on my Google Cloud instance with a GPU (trained with the Google models API).
I want to run it on mobile. I found some GitHub repos that show how to run SSD MobileNet, but I could not find one that runs Faster R-CNN.
Real time is not my concern for now.
I have an iPhone 6 with iOS 11.4.
The model can be run with Metal, Core ML, TensorFlow Lite...
But for a POC I need it to run on mobile without training a new network.
any help?
Thanks!
Faster R-CNN requires a number of custom layers that are not available in Metal, CoreML, etc. You will have to implement these custom layers yourself (or hire someone to implement them for you, wink wink).
I'm not sure if TF-lite will work. It only supports a limited number of operations on iOS, so chances are it won't have everything that Faster R-CNN needs. But that would be the first thing to try. If that doesn't work, I would try a Core ML model with custom layers.
See here info about custom layers in Core ML: http://machinethink.net/blog/coreml-custom-layers/
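If you want to give the TF-Lite route a try first, a sketch of the conversion attempt looks roughly like this (the SavedModel path is a placeholder, conversion may still fail on ops Faster R-CNN needs, and allowing select TF ops also means linking the corresponding TFLite runtime into the iOS app):

    # Try converting the exported Faster R-CNN SavedModel to TFLite, falling back to full
    # TensorFlow ops for anything the TFLite builtins don't cover.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("exported_faster_rcnn/saved_model")
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,   # standard TFLite kernels
        tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TensorFlow ops where needed
    ]
    converter.allow_custom_ops = True

    tflite_model = converter.convert()
    with open("faster_rcnn.tflite", "wb") as f:
        f.write(tflite_model)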

Tensorflow object detection API not detecting all objects

I am attempting to use the TensorFlow Object Detection API. To check things out, I have made use of a pretrained model and attempted to run it on an image that I created.
But I see that the API does not detect all the objects in the image (even though they are all the same image of the dog). I used the ssd_mobilenet_v1_coco pretrained model.
I have attached the final output image with the detected objects.
Output image with the detected objects
Any pointers on why that might be happening? Where should I start looking to improve this?
The Tensorflow Object Detection API comes with 5 pre-trained models, each with a trade-off between speed and accuracy. Single Shot Detectors (SSD) are designed for speed, not accuracy, which is why SSD is the preferred model for mobile devices or real-time video detection.
Running your image of 5 dogs through an R-FCN model, rfcn_resnet101_coco_11_06_2017, which is designed for greater accuracy at the cost of speed, detects all 5 dogs. However, this model isn't designed for real-time detection, as it will struggle to push through a respectable fps at best.
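If you want to reproduce that, a sketch of swapping the checkpoint in the standard object_detection demo looks like this (assuming the usual tutorial layout, with the tarball name taken from the model zoo entry above):

    # Download and extract the R-FCN checkpoint, then point the demo at its frozen graph.
    import tarfile
    import urllib.request

    MODEL_NAME = "rfcn_resnet101_coco_11_06_2017"
    DOWNLOAD_BASE = "http://download.tensorflow.org/models/object_detection/"

    urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_NAME + ".tar.gz", MODEL_NAME + ".tar.gz")
    with tarfile.open(MODEL_NAME + ".tar.gz") as tar:
        tar.extractall()

    PATH_TO_FROZEN_GRAPH = MODEL_NAME + "/frozen_inference_graph.pb"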