Can the TFLite training approach mentioned in the blog post below be deployed to iOS?
Based on this article: "This new feature is available in TensorFlow 2.7 and later and is currently available for Android apps. (iOS support will be added in the future.)"
https://blog.tensorflow.org/2021/11/on-device-training-in-tensorflow-lite.html
I'm trying to do some object tracking on a video using Google Colab, but I'm facing the issue below: tracking is only done in the first frame of the video and not in the rest. I'm working with exactly the same files and the same commands both on my computer and on Google Colab.
[Screenshots: the expected tracking output vs. the output produced on Google Colab]
It seems the TensorFlow version caused this problem. Here is my solution:
!pip install tensorflow==2.3.0
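After the install completes, restart the Colab runtime so the downgraded version is actually loaded. A quick sanity check (assuming a stock Colab environment):

import tensorflow as tf
print(tf.__version__)  # should print 2.3.0 after the runtime restart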
I have trained a custom YOLOX model on google colab and want to convert it from .onnx to .ncnn.
I'm using the following as directions: https://github.com/Megvii-BaseDetection/YOLOX/blob/main/demo/ncnn/cpp/README.md#step4
Step 1 requires building ncnn with directions: https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos
That wiki page gives instructions for building on different platforms.
My question: Which instructions should I use to build ncnn on Google Colab?
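For what it's worth, a Colab VM is an Ubuntu Linux machine, so the build-for-linux section of that wiki is the one that applies. A minimal sketch of running it in Colab cells (dependency list taken from the ncnn wiki; not verified on Colab specifically):

# install build dependencies (Colab already ships with git, cmake and build-essential)
!apt-get install -y libprotobuf-dev protobuf-compiler libomp-dev

# clone ncnn with its submodules and build it; tools such as onnx2ncnn end up under build/tools
!git clone --recursive https://github.com/Tencent/ncnn.git
!cd ncnn && mkdir -p build && cd build && cmake -DCMAKE_BUILD_TYPE=Release .. && make -j$(nproc)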
I am looking for a way to integrate a TensorFlow.js model into my React Native application, which is built using Expo.
The model needs to **be able to access the camera** to **detect real-time** sign-language letters.
My current solution:
Train a model via Google's Teachable Machine platform and use their code.
The platform supplies a Web script that I uploaded to the cloud.
You can see the website here
Using 'react-native-webview', I was able to present the site inside my app:
<WebView source={{uri: 'https://whatever.tiiny.site/'}} style={{ marginTop: 20 }} />
However, it feels like cheating and doesn't look very good.
I also built my own React.js project with the sign-language model and tried to convert it to React Native, but that failed as well.
I know there are tflite-react-native and @tensorflow/tfjs-react-native packages, and I have read their documentation time and again, but I wasn't able to adapt them to my needs.
BTW, I also found this project:
https://github.com/expo/examples/tree/master/with-tfjs-camera
which is very close to what I need, but they are using '@tensorflow-models/mobilenet' and I need to use my own TensorFlow model.
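For reference, swapping mobilenet out for a custom model in that Expo example generally comes down to bundling the model.json and weights file exported by Teachable Machine and loading them through bundleResourceIO from @tensorflow/tfjs-react-native. A rough sketch (the asset paths are assumptions, and Metro has to be configured to bundle .bin files, per the tfjs-react-native README):

import * as tf from '@tensorflow/tfjs';
import { bundleResourceIO } from '@tensorflow/tfjs-react-native';

// model.json and weights.bin as exported by Teachable Machine (paths are hypothetical)
const modelJson = require('./assets/model/model.json');
const modelWeights = require('./assets/model/weights.bin');

async function loadSignLanguageModel() {
  await tf.ready(); // wait for the tfjs backend to initialize
  return tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights));
}

The same cameraWithTensors wrapper used in that Expo example can then feed camera frames to this model instead of mobilenet.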
Relevant/similar posts:
how to use teachable machine model in react native expo
I see the React Native library hasn't been updated lately: https://github.com/spokestack/react-native-spokestack
Is it still being supported?
Version 2.1.2 is still supported! There will be a 3.0 soon™ with text-to-speech support and other shiny new features TBD.
NNAPI is available on Android 8.1, but I want to use NNAPI on Android 7 and 8 (arm64).
NNAPI is used by TensorFlow Lite.
Where can I download libneuralnetworks.so?
Unfortunately, NNAPI is only available on devices running Android 8.1 or later, and it currently has no support library for older devices.
If your primary use case is TensorFlow Lite, you can rely on its CPU implementation on older devices. In fact, if you enable NNAPI delegation in TFLite, it will look for libneuralnetworks.so and use it when available, and fall back to its CPU implementation when libneuralnetworks.so is not available.
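For context, this is roughly what that looks like with the TFLite Java API (a sketch assuming the standard org.tensorflow:tensorflow-lite dependency; modelBuffer stands in for your model's MappedByteBuffer):

import org.tensorflow.lite.Interpreter;

Interpreter.Options options = new Interpreter.Options();
options.setUseNNAPI(true); // TFLite looks for libneuralnetworks.so and silently falls back to its CPU kernels if it's absent
Interpreter interpreter = new Interpreter(modelBuffer, options);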