Run Faster R-CNN on mobile iOS - TensorFlow

I have a Faster R-CNN model that I trained, and it works on my Google Cloud instance with a GPU (trained with the Google models API).
I want to run it on mobile. I found some GitHub repositories that show how to run SSD MobileNet, but I could not find one that runs Faster R-CNN.
Real time is not my concern for now.
I have an iPhone 6 running iOS 11.4.
The model can run with Metal, Core ML, TensorFlow Lite...
but for a POC I need it to run on mobile without training a new network.
Any help?
Thanks!

Faster R-CNN requires a number of custom layers that are not available in Metal, Core ML, etc. You will have to implement these custom layers yourself (or hire someone to implement them for you, wink wink).
I'm not sure if TF-Lite will work. It only supports a limited number of operations on iOS, so chances are it won't have everything that Faster R-CNN needs. But that would be the first thing to try. If that doesn't work, I would try a Core ML model with custom layers.
See here for info about custom layers in Core ML: http://machinethink.net/blog/coreml-custom-layers/
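If you do try the TF-Lite route first, the conversion attempt is short enough to sketch. This is a minimal sketch, assuming the Object Detection API has exported a SavedModel for your model; the paths are placeholders, and conversion may still fail on unsupported ops:

```python
import tensorflow as tf

# Placeholder path to the SavedModel exported by the Object Detection API.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")

# Faster R-CNN uses ops the built-in TF-Lite op set does not cover, so allow
# the converter to fall back to full TensorFlow ops where needed.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # ops with native TF-Lite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to full TensorFlow ops
]

tflite_model = converter.convert()
with open("faster_rcnn.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that using SELECT_TF_OPS means your iOS app also has to link the Select TF ops runtime, which increases the binary size considerably.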

Related

How to use the models under tensorflow/models/research/object_detection/models?

I'm looking into training an object detection network using TensorFlow, and I had a look at the TF2 Model Zoo. I noticed that there are noticeably fewer models there than in the directory /models/research/models/, including the MobileDet with SSDLite developed for the Jetson Xavier.
To clarify, the README says that there is a MobileDet GPU with SSDLite, and that the model and checkpoints trained on COCO are provided, yet I couldn't find them anywhere in the repo.
How is one supposed to use those models?
I already have a custom-trained MobileDetv3 for image classification, and I was hoping to see a way to turn the network into an object detection network, in accordance with the MobileDetv3 paper. If that is not straightforward, training a network from scratch would be okay too; I just need to know where to start.
If you plan to use the Object Detection API, you can't use your existing model. You have to choose from a list of models: here for v2 and here for v1.
The documentation is very well maintained, and the steps to train, validate, or run inference (test) on custom data are very well explained here by the TensorFlow team. That link is for TensorFlow v2; however, if you wish to use v1, the process is fairly similar, and there are numerous blogs/videos explaining how to go about it.
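As a concrete starting point, here is a minimal inference sketch, assuming you have downloaded and unpacked one of the TF2 Detection Zoo archives (the model directory name below is just an example of the layout those archives ship with):

```python
import numpy as np
import tensorflow as tf

# Example model directory from the TF2 Detection Zoo; swap in whichever
# archive you actually downloaded and unpacked.
detect_fn = tf.saved_model.load(
    "ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model")

# Stand-in for a real image: the API expects a uint8 batch of shape [1, H, W, 3].
image = np.zeros((1, 320, 320, 3), dtype=np.uint8)
detections = detect_fn(tf.convert_to_tensor(image))

# The Object Detection API returns a dict of batched tensors.
boxes = detections["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
scores = detections["detection_scores"][0].numpy()
print(boxes[scores > 0.5])
```

Retraining on your own data then follows the same flow as the official tutorial: point a pipeline.config at the zoo checkpoint and your TFRecords, and launch a training job.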

CreateML what kind of ObjectDetector Network is trained?

I used Create ML to train a new custom object detector.
Everything worked well so far.
Now I am just wondering: what kind of network is trained in the background?
Is it something like YOLO or MobileNet?
I did not find anything in the official documentation:
https://developer.apple.com/documentation/createml#overview
There are two options:
1. TinyYOLOv2.
2. Transfer learning, which uses a built-in feature extractor model (VisionFeaturePrint.Objects). This is available with Create ML in Xcode 12.

Distributed retraining (TF & Google Coral)

Assuming I have tens of Google Coral devices doing object detection (using the same trained model), every once in a while we will retrain one device for a new object (transfer learning). Let's say this device is Coral1. Now I wonder: how would I transfer what Coral1 learned to all the other devices (without the need to retrain those devices)?
For sure, the devices could be Google Coral or any other device.
Since we are assuming that all the devices run inference on the same model at the start, whenever any device learns something new, the updated trained model should be pushed to all the other devices so they can start recognizing the new objects. No device needs to train individually if we can maintain this synchronization among the devices.
For further information, go through this link.
You shouldn't be "retraining a device", but rather retraining a model. Take a look at this guide on how to re-train a model:
https://coral.withgoogle.com/docs/edgetpu/retrain-detection/
Once you've finished retraining the model, you can scp it onto the other boards and reload it.
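The push step is simple enough to script. Here is a minimal sketch, assuming the boards are reachable over SSH; the hostnames, paths, and service name are placeholders (the reload command in particular is hypothetical and depends on how your boards actually load the model):

```python
import subprocess

# Placeholder addresses for the other Coral boards ("mendel" is the
# default user on the Coral Dev Board).
BOARDS = ["mendel@coral2.local", "mendel@coral3.local"]
MODEL = "retrained_model_edgetpu.tflite"  # output of the retraining guide
REMOTE_PATH = "/home/mendel/models/"      # hypothetical directory the boards load from

for board in BOARDS:
    # Copy the updated model to the board.
    subprocess.run(["scp", MODEL, f"{board}:{REMOTE_PATH}"], check=True)
    # Hypothetical reload step: restart whatever service runs inference.
    subprocess.run(["ssh", board, "sudo systemctl restart inference.service"],
                   check=True)
```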

Already implemented neural network on Google Cloud Platform

I have implemented a neural network model using Python and TensorFlow, which normally runs on my own computer.
Now I would like to train it on new datasets on the Google Cloud Platform. Do you think it is possible? Do I need to change my code?
Thank you very much for your help!
Google Cloud offers the Cloud ML Engine service, which allows you to train your models and perform predictions without the need to run and maintain an instance with the required software.
In order to run the TensorFlow NN models you already have, you will not need to change your code; you will only have to package the trainer appropriately, as described in the documentation, and run an ML Engine job that performs the training itself. Once you have your model, you can also deploy it in the same service and later get predictions, with different options depending on your requirements (urgency in getting the predictions, data set sources, etc.).
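For reference, the packaging step mostly amounts to laying out your code as a Python package. A minimal sketch, assuming your training entry point lives in trainer/task.py (the package and module names are the conventional ones from the ML Engine docs, but you can choose your own):

```python
# setup.py at the root of your trainer package.
from setuptools import find_packages, setup

setup(
    name="trainer",
    version="0.1",
    packages=find_packages(),
    install_requires=[],  # dependencies beyond what the ML Engine runtime provides
    description="Trainer package for a Cloud ML Engine training job",
)
```

You would then submit the job with gcloud (for example, gcloud ml-engine jobs submit training, pointing --package-path at the trainer directory and --module-name at trainer.task).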
Alternatively, as suggested in the comments, you can always launch a Compute Engine instance and run your TensorFlow model there as if you were doing it locally on your computer. However, I would strongly recommend the approach I proposed earlier, as you will save some money (you are only charged for your usage, i.e. training jobs and/or predictions) and will not need to configure an instance from scratch.

Real Time Object detection using TensorFlow

I have just started experimenting with deep learning and computer vision technologies. I came across this awesome tutorial. I set up the TensorFlow environment using Docker, trained my own set of objects, and it gave good accuracy when I tested it.
Now I want to make this more real-time. For example, instead of giving an image of an object as the input, I want to utilize a webcam and make it recognize the object with the help of TensorFlow. Can you guys point me to the right place to start this work?
You may want to look at TensorFlow Serving so that you can decouple compute from sensors (and distribute the computation), or our C++ API. Beyond that, TensorFlow was written emphasizing throughput rather than latency, so batch samples as much as you can. You don't need to run TensorFlow on every frame, so input from a webcam should definitely be in the realm of possibilities. Making the network smaller and buying better hardware are popular options.
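For a local proof of concept before bringing in Serving, the usual pattern is an OpenCV capture loop feeding frames to the detector. A minimal sketch, assuming you have exported your trained detector as a SavedModel (the path is a placeholder):

```python
import cv2
import numpy as np
import tensorflow as tf

# Placeholder path to the detection SavedModel you exported after training.
detect_fn = tf.saved_model.load("exported_model/saved_model")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # The model expects RGB; OpenCV captures BGR.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    detections = detect_fn(tf.convert_to_tensor(rgb[np.newaxis, ...]))

    h, w = frame.shape[:2]
    boxes = detections["detection_boxes"][0].numpy()   # normalized coordinates
    scores = detections["detection_scores"][0].numpy()
    for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
        if score < 0.5:
            continue
        cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                      (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

If the frame rate is too low, run detection only every few frames and reuse the last boxes in between, which is usually acceptable for a demo.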