Is it possible to train an EfficientDet-Lite model using the Google Coral TPU accelerator on a Raspberry Pi 4 (64-bit Bullseye OS)?
Related
Can the TFLite training approach mentioned in the blog post below be deployed to iOS?
Based on this article, "This new feature is available in TensorFlow 2.7 and later and is currently available for Android apps. (iOS support will be added in the future.)"
https://blog.tensorflow.org/2021/11/on-device-training-in-tensorflow-lite.html
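As far as I understand, the model-side workflow described in that blog post is platform-independent; the limitation the article mentions is on the runtime side (Android only for now). A minimal sketch of that workflow, assuming TensorFlow 2.7+ and a toy Keras model — the class name, signature names, and tensor shapes below are illustrative, not taken from the article:

import tensorflow as tf

class TrainableModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.model = tf.keras.Sequential([
            tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
            tf.keras.layers.Dense(3),
        ])
        self.loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
        self.optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

    # Training step exported as a named signature so the TFLite runtime can call it.
    @tf.function(input_signature=[
        tf.TensorSpec([None, 4], tf.float32),
        tf.TensorSpec([None], tf.int32),
    ])
    def train(self, x, y):
        with tf.GradientTape() as tape:
            logits = self.model(x)
            loss = self.loss_fn(y, logits)
        grads = tape.gradient(loss, self.model.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.model.trainable_variables))
        return {"loss": loss}

    # Inference step, also exported as a named signature.
    @tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
    def infer(self, x):
        return {"logits": self.model(x)}

m = TrainableModel()
tf.saved_model.save(m, "/tmp/trainable_model",
                    signatures={"train": m.train, "infer": m.infer})

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/trainable_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # regular TFLite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # TF ops needed by the training graph
]
converter.experimental_enable_resource_variables = True  # keep trainable variables
tflite_model = converter.convert()

# The exported signatures can then be invoked through the TFLite interpreter,
# which is what the on-device (currently Android) runtime does.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
train_step = interpreter.get_signature_runner("train")

So the conversion and export side should already work today; deploying the "train" signature on iOS is what has to wait for the runtime support the article says is coming.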
I'm trying to do some object tracking on a video using Google Colab, but I'm facing the issue below: tracking only happens in the first frame of the video, not in the rest. I'm working with exactly the same files and the same commands on both my computer and Google Colab.
[Screenshots: expected tracking result vs. result on Google Colab]
It seems the TensorFlow version caused this problem. Here is my solution:
!pip install tensorflow==2.3.0
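(If it helps: in Colab you typically need to restart the runtime after downgrading for the new version to be picked up, and you can confirm which version is active with a quick check.)

import tensorflow as tf
print(tf.__version__)  # should report 2.3.0 once the runtime has restarted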
Is it possible to run MediaPipe hand detection with the Coral USB accelerator?
I have seen plenty of tutorials about TFLite but nothing about MediaPipe.
I have trained a custom YOLOX model on Google Colab and want to convert it from .onnx to .ncnn.
I'm using the following as directions: https://github.com/Megvii-BaseDetection/YOLOX/blob/main/demo/ncnn/cpp/README.md#step4
Step 1 requires building ncnn, following these directions: https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos
The directions cover building on several different platforms.
My question: Which instructions should I use to build ncnn on Google Colab?
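Not a definitive answer, but since Colab runs on a Linux (Ubuntu) VM, the generic Linux build steps from the ncnn wiki should be the relevant ones, not the macOS ones. A rough sketch of what the Colab cells might look like — the protobuf packages and CMake flags are my assumptions based on the standard ncnn Linux build, with Vulkan disabled since a Colab VM has no Vulkan driver by default:

!apt-get install -y build-essential cmake libprotobuf-dev protobuf-compiler
!git clone https://github.com/Tencent/ncnn.git
%cd ncnn
!git submodule update --init
!mkdir -p build
%cd build
!cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF -DNCNN_BUILD_EXAMPLES=ON ..
!make -j$(nproc)

If the build succeeds, the onnx2ncnn converter used in Step 4 of the YOLOX guide should end up under build/tools/onnx/ (the exact path may vary with the ncnn version).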
I ran a FaceNet model on my computer and got good results. But when I ran the same model on the Coral Dev Board, the results are not as expected.
FaceNet on my computer - detects accurately,
FaceNet on the Coral Dev Board - very bad results.
Why is it behaving so differently?
Note: using the Coral camera on the Dev Board and a webcam on the computer.