I am trying to serve a deep learning model on mobile. My React Native app using TensorFlow.js takes about a minute just to load the model. The model is about 175 MB (roughly 30 million parameters). This is my first time trying to run a model on mobile, and I couldn't find any good performance data for TensorFlow.js on React Native.
Is my model too large to expect reasonably quick loading and inference times on React Native? Is this a hardware limitation or a framework one? I read that tfjs-react-native uses WebGL, which would be slower than direct access to the phone's GPU, so could I use Core ML or something similar and expect better times?
As an extra question (I'll also make a separate post): another route I'm considering is moving inference to a web browser for laptops/desktops. Could I expect browser WebGL to perform as well as using the computer's GPU directly?
Yes, ML Kit and Core ML are much faster than TF.js.
You can also create your model, convert it to TF Lite, then deploy it to Firebase and use your model on iOS and Android on the native side (currently @react-native-firebase/ml does not support custom models).
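For the conversion step, a minimal sketch assuming TF 2.x and a trained Keras model (the file names are placeholders):

```python
import tensorflow as tf

# Load the trained Keras model (path is a placeholder for illustration).
model = tf.keras.models.load_model("my_model.h5")

# Convert to TF Lite; default optimizations apply dynamic-range quantization,
# which can shrink a large float model considerably before shipping to mobile.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the flatbuffer; this is the file you would upload to Firebase.
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)
```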
Related
I have an FYP project (a social media app like Instagram) that requires me to create a simple recommendation system. I've trained a cosine-similarity model on my dataset using Python, but I'm at a loss as to what to do next. How can I integrate my trained ML model into React Native, or is there a better and easier way to build a recommendation system?
I tried reading documentation and watching videos, but I still don't seem to be able to grasp some of the more difficult concepts. I would greatly appreciate instructions or steps on what to learn after training my model, or whether I need some library or package, etc. [not sure if this is the appropriate forum for this inquiry]
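To make the question concrete: the simplest route I can think of is keeping the Python model on a small server and calling it from the React Native app over HTTP, rather than running it inside React Native. A minimal sketch of what I mean, with made-up names throughout (Flask, `item_vectors.npy`, and the `/recommend` endpoint are all placeholders, not something I have working):

```python
import numpy as np
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical: one embedding row per post, precomputed offline.
vectors = np.load("item_vectors.npy")                       # shape: (num_items, dim)
unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

@app.route("/recommend/<int:item_id>")
def recommend(item_id):
    scores = unit @ unit[item_id]        # cosine similarity vs. all items
    top = np.argsort(-scores)[1:11]      # top 10, skipping the item itself
    return jsonify(top.tolist())

if __name__ == "__main__":
    app.run()
```

The React Native side would then just `fetch()` that endpoint, so no ML runs on the device at all.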
I came across this guide (https://developers.google.com/ml-kit/vision/pose-detection/classifying-poses) and I want to develop a cross-platform app with pose classification using React Native.
I haven't been able to find any wrapper for React Native. Is one going to be developed one day?
I thought about using the Flutter one, but it seems it doesn't include the pose detection library?
ML Kit itself does not currently have a plan to provide React Native or Flutter wrappers. Some developers have come up with their own wrappers, e.g. https://pub.dev/packages/google_ml_kit. They are not officially endorsed by ML Kit, and your mileage may vary.
For React Native we have a wrapper for ML Kit (https://github.com/a7medev/react-native-ml-kit), but pose detection is not implemented yet.
I want to do this without running recognition on a server or in the cloud. I have already trained a TensorFlow Lite model. I've seen tflite-react-native, but it works only with still images, not a real-time camera stream. I'm wondering whether it's even possible to capture and recognize a custom object in real time without streaming the video to a backend. Any advice and thoughts are very much appreciated.
I want to use the Kaggle flower dataset (4,242 images) for a flower recognition app with react-native-tensorflow.
However, I am currently stuck on what is probably a silly mistake.
After the data preparation from the official Kaggle notebook (https://www.kaggle.com/jrmst102/flow-ers-data-preparation/notebook), I am unable to convert the resulting flowers.npz and labels.npz files into the tensorflow_graph.pb and tensorflow_label.txt formats.
So my questions are as follows:
How do I transform the flowers.npz and labels.npz files into tensorflow_graph.pb and tensorflow_label.txt, respectively (see the sketch at the end of this post for the kind of pipeline I mean)?
Or, how do I properly load this flower dataset into a React Native application for testing purposes with react-native-tensorflow?
Googling all around did not help me.
Any advice is appreciated!
P.S. In case it matters, I am developing the app using CRNA (Create React Native App).
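For concreteness, here is roughly the pipeline I'm asking about: train a model on the .npz arrays, freeze it to a .pb, and write the labels file. This is only a minimal sketch, and everything in it is an assumption on my part: the TF 1.x frozen-graph API, numpy's default "arr_0" keys inside the .npz archives, the toy model, and the output node name. The class names are the five from the Kaggle flowers dataset.

```python
import numpy as np
import tensorflow as tf  # sketched against TF 1.x, which the .pb workflow dates from

flowers = np.load("flowers.npz")["arr_0"]   # image arrays from the notebook
labels = np.load("labels.npz")["arr_0"]     # integer class labels

# The .npz files are only data; a model has to be trained on them first.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=flowers.shape[1:]),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax", name="output"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(flowers, labels, epochs=5)

# Freeze the trained graph into tensorflow_graph.pb (TF 1.x API;
# "output/Softmax" is an assumed node name and may differ).
sess = tf.keras.backend.get_session()
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ["output/Softmax"])
tf.io.write_graph(frozen, ".", "tensorflow_graph.pb", as_text=False)

# tensorflow_label.txt is just one class name per line.
with open("tensorflow_label.txt", "w") as f:
    f.write("\n".join(["daisy", "dandelion", "rose", "sunflower", "tulip"]))
```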
I have a Faster R-CNN model that I trained and that works on my Google Cloud instance with a GPU (trained with the Google models API).
I want to run it on mobile. I found some GitHub projects that show how to run SSD MobileNet, but I could not find one that runs Faster R-CNN.
Real time is not my concern for now.
I have an iPhone 6 on iOS 11.4.
The model can be run with Metal, Core ML, TensorFlow Lite...
but for a POC I need it to run on mobile without training a new network.
Any help?
Thanks!
Faster R-CNN requires a number of custom layers that are not available in Metal, Core ML, etc. You will have to implement these custom layers yourself (or hire someone to implement them for you, wink wink).
I'm not sure if TF Lite will work. It only supports a limited number of operations on iOS, so chances are it won't have everything Faster R-CNN needs. But that would be the first thing to try. If that doesn't work, I would try a Core ML model with custom layers.
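A minimal sketch of that first attempt, assuming a recent TensorFlow and a SavedModel export (the path is a placeholder). Enabling Select TF ops lets layers missing from the TF Lite builtins fall back to full TensorFlow kernels, though on iOS that requires the larger select-ops runtime, and the conversion may still fail on control flow or truly custom ops:

```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("faster_rcnn_saved_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # the standard TF Lite op set
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TensorFlow ops where needed
]

# If convert() succeeds, this is the model to try on-device.
with open("faster_rcnn.tflite", "wb") as f:
    f.write(converter.convert())
```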
See here for info about custom layers in Core ML: http://machinethink.net/blog/coreml-custom-layers/