Installing ncnn on Google Colab

I have trained a custom YOLOX model on google colab and want to convert it from .onnx to .ncnn.
I'm using the following as directions: https://github.com/Megvii-BaseDetection/YOLOX/blob/main/demo/ncnn/cpp/README.md#step4
Step 1 requires building ncnn with directions: https://github.com/Tencent/ncnn/wiki/how-to-build#build-for-macos
The directions give instructions for building on different devices.
My question: Which instructions should I use to build ncnn on Google Colab?
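Since a Colab runtime is an Ubuntu Linux VM, the "Build for Linux" section of the ncnn wiki is the relevant one. A minimal sketch of those steps as they might be run in Colab (flags are illustrative; `NCNN_VULKAN` is off because the default Colab runtime has no Vulkan driver):

```shell
# In a Colab notebook, run these in a %%bash cell or prefix each line with "!".
sudo apt-get install -y build-essential cmake libprotobuf-dev protobuf-compiler

git clone https://github.com/Tencent/ncnn.git
cd ncnn
git submodule update --init          # pulls in glslang and other dependencies

mkdir -p build && cd build
# Colab's default runtime has no Vulkan driver, so disable Vulkan support
cmake -DCMAKE_BUILD_TYPE=Release -DNCNN_VULKAN=OFF ..
make -j"$(nproc)"
# the converter tools (e.g. onnx2ncnn) end up under build/tools/
```

After the build finishes, the `onnx2ncnn` tool needed by Step 2 of the YOLOX guide should be available in the build tree.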

Related

Does tensorflow lite support training on iOS?

Can the TFLite on-device training approach described in the blog post below be deployed to iOS?
According to the article: "This new feature is available in TensorFlow 2.7 and later and is currently available for Android apps. (iOS support will be added in the future.)"
https://blog.tensorflow.org/2021/11/on-device-training-in-tensorflow-lite.html

YOLOv4-deepsort does not detect while running on Google Colab's GPU

I'm trying to do some object tracking on a video using Google Colab, but I'm facing the issue below: tracking is only done in the first frame of the video and not in the rest. I'm working with exactly the same files and the same commands both on my computer and on Google Colab.
(Screenshots attached: the expected output vs. the output on Google Colab.)
The problem seems to be caused by the TensorFlow version. Here is my solution:
!pip install tensorflow==2.3.0

Google Colab Widgets

I have been using ipywidgets in Google Colab for a while, but today it started asking me to enable third-party widgets and presented code for custom widget managers. The widgets aren't getting displayed any more. Is this because of a Google Colab update, or am I making an error? I have attached a picture.
This issue has been acknowledged by the Google Colab team; downgrading ipywidgets to 7.1.1 is the easiest way to solve the problem for now.
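Concretely, the downgrade can be done in a Colab cell (the version pin is the one given above; restarting the runtime afterwards is an assumption on my part, but is usually needed for the notebook to pick up the older version):

```shell
# In Colab, prefix with "!" or run in a %%bash cell.
pip install ipywidgets==7.1.1
# Then restart the runtime (Runtime > Restart runtime) so the
# downgraded package is actually imported.
```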

How to implement Teachable Machine RTD inside React Native?

I am looking for a way to integrate a tensorflow.js model into my react-native application which is built using EXPO.
The model needs to **be able to access the camera** to **detect real-time** sign-language letters.
My current solution:
Train a model via Google's Teachable Machine platform and use their code.
The platform supplies a Web script that I uploaded to the cloud.
You can see the website here
Using 'react-native-webview', I was able to present the site inside my app.
<WebView source={{uri: 'https://whatever.tiiny.site/'}} style={{ marginTop: 20 }} />
However, it feels like cheating and doesn't look very good.
I also built my own React.js project with the sign-language model, and tried to convert it to react-native. It failed as well.
I know there are tflite-react-native and @tensorflow/tfjs-react-native packages for React Native, and I have read their documentation time and again, but I wasn't able to adapt them to my needs.
BTW:
I also found this project:
https://github.com/expo/examples/tree/master/with-tfjs-camera
which is very close to what I need, but they are using '#tensorflow-models/mobilenet' and I need to use my own tensorflow model.
Relevant/similar posts:
how to use teachable machine model in react native expo

Plotting Xarray images with Geoviews on a Google Colaboratory Notebook

I'm trying to reproduce the code from this link on Google Colaboratory but my Colab Notebook crashes for reasons I don't understand. Is it possible to get this to work properly?
I can confirm the crash in this notebook.
https://colab.research.google.com/drive/1XwlC2onMlTW0mepTN16Pd1Mj0g_T3dNV
This issue stems from an incompatibility between Cartopy and Shapely.
Shapely comes preinstalled in Google Colab as a binary wheel, so a plain install of Cartopy causes problems.
You will have to uninstall Shapely first and reinstall it from source using the --no-binary option.
Also, for Geoviews to show plots in Google Colab, you have to call gv.extension('bokeh') in every cell where you want to plot something.
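The reinstall steps described above might look like this in a Colab cell (the exact package list is a sketch; versions may differ in your environment):

```shell
# In Colab, prefix with "!" or run in a %%bash cell.
# Shapely ships preinstalled as a binary wheel on Colab; rebuild it from
# source so it is compiled against the same GEOS that Cartopy links to.
pip uninstall -y shapely
pip install shapely --no-binary shapely
pip install cartopy geoviews
```

After installing, remember the note above: call gv.extension('bokeh') in each plotting cell.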
Follow this notebook to see the example code of Geoviews working correctly in Google Colab:
https://colab.research.google.com/drive/1sI51h7l-ySoW2bLrU-K1LMm-TYNVBlif