I want to use the Kaggle flowers dataset (4242 uploaded images) for a flower recognition app with react-native-tensorflow.
However, I am currently stuck on what is probably a simple mistake.
After following the data preparation from the official Kaggle notebook (https://www.kaggle.com/jrmst102/flow-ers-data-preparation/notebook), I am unable to convert the resulting flowers.npz and labels.npz files into the tensorflow_graph.pb and tensorflow_label.txt formats.
So my questions are as follows:
How do I transform flowers.npz and labels.npz into tensorflow_graph.pb and tensorflow_label.txt, respectively?
Or, how do I properly load a model trained on this dataset into a React Native application, for testing purposes with react-native-tensorflow?
Googling all around did not help me.
Any advice is appreciated!
P.S. In case it matters, I am developing the app using CRNA (Create React Native App).
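For reference, here is a minimal sketch of what I understand the conversion to involve (the npz keys, the tiny placeholder model, and the layer names are my assumptions; as far as I can tell, react-native-tensorflow loads a frozen graph plus a plain-text label file):

```python
# Minimal sketch: train a small classifier on the npz arrays, freeze it to
# tensorflow_graph.pb, and write tensorflow_label.txt.
# Assumptions: arrays live under the default "arr_0" keys, and the model
# below is a placeholder for whatever the notebook actually trains.
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

images = np.load("flowers.npz")["arr_0"]   # assumed key
labels = np.load("labels.npz")["arr_0"]    # assumed key

x = tf.placeholder(tf.float32, [None, 224, 224, 3], name="input")
logits = tf.layers.dense(tf.layers.flatten(x), 5)  # 5 flower classes
tf.nn.softmax(logits, name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop on (images, labels) omitted ...
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output"])
    with tf.gfile.GFile("tensorflow_graph.pb", "wb") as f:
        f.write(frozen.SerializeToString())

# One class name per line, in the order used for the label indices.
with open("tensorflow_label.txt", "w") as f:
    f.write("\n".join(["daisy", "dandelion", "rose", "sunflower", "tulip"]))
```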
Related
I have an FYP project (a social media app like Instagram) that requires me to create a simple recommendation system. I've built a cosine-similarity model on my dataset using Python, but I'm at a loss about what to do next. How can I integrate my trained ML model into React Native, or is there a better and easier way to make a recommendation system?
I tried reading documentation and watching videos, but I still can't seem to grasp some of the more difficult concepts. I would greatly appreciate instructions or steps on what to learn after training my model, or whether I have to use some library or package, etc. [Not sure if this is the appropriate forum for this inquiry.]
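For context, one pattern I've seen suggested is to keep the model on a small server and have the app call it over HTTP; a minimal Flask sketch of what I think that would look like (item_vectors.npy and the endpoint shape are my assumptions):

```python
# Minimal sketch: serve cosine-similarity recommendations over HTTP so the
# React Native app only needs a fetch() call, with no model on the device.
# Assumes the item feature vectors were saved as item_vectors.npy.
import numpy as np
from flask import Flask, jsonify
from sklearn.metrics.pairwise import cosine_similarity

app = Flask(__name__)
vectors = np.load("item_vectors.npy")  # shape: (num_items, num_features)

@app.route("/recommend/<int:item_id>")
def recommend(item_id):
    # Similarity of the requested item to every other item.
    sims = cosine_similarity(vectors[item_id:item_id + 1], vectors)[0]
    top = np.argsort(sims)[::-1][1:11]  # top 10, skipping the item itself
    return jsonify({"item_id": item_id, "recommendations": top.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```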
I came across this guide, https://developers.google.com/ml-kit/vision/pose-detection/classifying-poses, and I want to develop a cross-platform app with pose classification using React Native.
I haven't been able to find any React Native wrapper. Is one going to be developed someday?
I thought about using the Flutter one, but it seems it doesn't include the pose detection library.
ML Kit itself does not currently have a plan to provide React Native or Flutter wrappers. Some developers have come up with their own wrappers, e.g. https://pub.dev/packages/google_ml_kit. They are not officially endorsed by ML Kit, so your mileage may vary.
For React Native, we have an ML Kit wrapper (https://github.com/a7medev/react-native-ml-kit), but pose detection is not implemented yet.
I am trying to serve a deep learning model on mobile. My React Native app using TensorFlow takes about a minute just to load. My model is about 175 MB (about 30 million parameters). This is my first time trying to run a model on mobile, and I couldn't find any good performance data for TensorFlow.js on React Native.
Is my model too large to expect reasonably quick loading and inference times on React Native? Is this a hardware limitation or a framework one? I read that tfjs-react-native uses WebGL, which would be slower than direct access to the mobile GPU, so could I use Core ML or something like that and expect better times?
As an extra question (I'll also make a separate post), another route I'm considering is moving inference to a web browser for laptops/desktops. Could I expect browser WebGL to perform as well as using the computer's GPU directly?
Yes, ML Kit and Core ML are much faster than TF.js.
Also, you can create your model, convert it to TF Lite, then deploy it to Firebase and use it on iOS and Android on the native side (currently @react-native-firebase/ml does not support custom models).
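For example, a minimal conversion sketch (assuming your model is available as a SavedModel; the paths are placeholders):

```python
# Minimal sketch: convert a SavedModel to TF Lite with post-training
# (dynamic-range) quantization to shrink the file before shipping it.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Dynamic-range quantization typically cuts the weight size roughly 4x (float32 to int8), which should help both download and load time.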
I want to do this without recognition on a server or in the cloud. I have already trained a TensorFlow Lite model. I've seen tflite-react-native, but it works only with still images, not a real-time camera stream. I'm wondering whether it's even possible to capture and recognize a custom object in real time without streaming the video to a backend. Any advice and thoughts are very appreciated.
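For what it's worth, here is the kind of per-frame timing check I plan to run with the TF Lite Interpreter before wiring anything into the app (the model path and uint8 input are assumptions from my setup):

```python
# Minimal sketch: time per-frame inference with the TF Lite Interpreter to
# see whether the model is fast enough for a live camera feed at all.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detect.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in for a camera frame (assumes a uint8-quantized input tensor).
frame = np.random.randint(0, 255, tuple(inp["shape"]), dtype=np.uint8)

start = time.time()
for _ in range(50):
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
print("avg ms per frame:", (time.time() - start) / 50 * 1000)
```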
I am currently working on a project with a hospital where I need to detect facial features through an iPhone app to determine whether any facial deformities exist.
For example, I found https://github.com/auduno/clmtrackr, which shows facial feature detection points. I thought I could maybe look at the code and port it to Objective-C. The problem is that when I tested clmtrackr on a face with a deformity, it did not work as intended.
You can also check it here: http://www.auduno.com/clmtrackr/clm_image.html
I also tried another image; both tests were inconsistent in detecting all the feature points the library can detect.
Do you know of any API that could do this? Or do you know what techniques I should look up so that I can build one myself?
Thank you
There are several libraries for facial landmark detection:
Dlib (C++ / Python)
CLM-Framework (C++)
Face++ (FacePlusPlus): Web API
OpenCV. Here's a tutorial: http://www.learnopencv.com/computer-vision-for-predicting-facial-attractiveness/
You can read more at: http://www.learnopencv.com/facial-landmark-detection/
You can use dlib, since its face detection algorithm is fast and it also includes a pre-trained model (a minimal usage sketch follows at the end of this answer):
https://github.com/davisking/dlib/
https://github.com/davisking/dlib-models
For integration with iOS, refer to "how to build DLIB for iOS".
Alternatively, you could check out OpenFace: just download the binaries (http://www.cl.cam.ac.uk/~tb346/software/OpenFace_0.2_win_x86.zip) and you're ready to go with the command-line arguments (https://github.com/TadasBaltrusaitis/OpenFace/wiki/Command-line-arguments).
Note: I would not prefer OpenCV, since its training process and results are not so consistent.
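A minimal dlib landmark sketch, assuming you have downloaded shape_predictor_68_face_landmarks.dat from the dlib-models repository above (file names are placeholders):

```python
# Minimal sketch: detect 68 facial landmark points with dlib and OpenCV.
# Assumes shape_predictor_68_face_landmarks.dat from the dlib-models repo.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

image = cv2.imread("face.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    landmarks = predictor(gray, face)
    for i in range(68):
        p = landmarks.part(i)
        cv2.circle(image, (p.x, p.y), 2, (0, 255, 0), -1)  # draw each point

cv2.imwrite("landmarks.jpg", image)
```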