Does anyone know what the possibilities are for running ML/AI algorithms on a SHArC processor (ADSP 2156x) via a 3rd-party C/C++ library?
The existing algorithm is written in Python using the TensorFlow package. I easily ported it to Arm because the TensorFlow C code can be compiled for Arm (using CMake; the makefile shows which CPUs/OSes are supported).
Is it hopeless to try to compile TensorFlow for SHArC?
Are there any other deep learning libraries available for SHArC processors?
I am working on using TF Lite to get a trained TF model onto my board via the Arduino IDE. I am using the Circuit Playground Bluefruit board (listed as supported on the TF website). When I try to run the hello-world example from the cloned library, I get an "Error compiling for board" message.
Adafruit mentions I need only the non-precompiled library, but it seems the library was removed from the native Library Manager a few years ago, making the non-precompiled version difficult to find. I tried to install using:
git clone https://github.com/tensorflow/tflite-micro-arduino-examples Arduino_TensorFlowLite
which, of course, gets a pre-compiled version. I think this is what is behind the aforementioned error message. Any guidance would be so appreciated!!
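For reference, the non-precompiled workflow ultimately embeds the .tflite flatbuffer in your sketch as a C array (what `xxd -i model.tflite` produces). A minimal Python sketch of that conversion; the function and variable names here are placeholders, not from any official tool:

```python
# Sketch: render a .tflite flatbuffer as a C source snippet for TFLite Micro.
# Equivalent in spirit to `xxd -i`; names are illustrative placeholders.
def tflite_to_c_array(model_bytes, var_name="g_model"):
    """Return C source defining the model bytes and their length."""
    hex_bytes = ", ".join(f"0x{b:02x}" for b in model_bytes)
    return (
        f"const unsigned char {var_name}[] = {{ {hex_bytes} }};\n"
        f"const unsigned int {var_name}_len = {len(model_bytes)};\n"
    )

# Usage:
# with open("model.tflite", "rb") as f:
#     print(tflite_to_c_array(f.read()))
```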
I want to develop a GStreamer plugin that can use the acceleration provided by the GPU of the graphics card (NVIDIA RTX 2xxx). The objective is to have a fast GStreamer pipeline that processes a video stream and applies a custom filter to it.
After two days of googling, I cannot find any example or hint.
One of the best alternatives I found is "nvivafilter", which takes a CUDA module as an argument. However, nowhere explains how to install this plugin or provides an example. Worse, it seems it may be specific to NVIDIA Jetson hardware.
Another alternative seems to be using GStreamer inside an OpenCV Python script, but I do not know how that mix impacts performance.
This GStreamer tutorial talks about several libraries, but it seems outdated and does not provide details.
RidgeRun seems to have something similar to "nvivafilter", but it is not FOSS.
Does anyone have an example or suggestion on how to proceed?
I suggest you start by installing DeepStream 5.0 and exploring the examples and apps provided. It's built on GStreamer. See the DeepStream installation guide.
The installation is straightforward. You will find prebuilt custom parsers.
You will need to install the following: Ubuntu 18.04, GStreamer 1.14.1, NVIDIA driver 440 or later, CUDA 10.2, and TensorRT 7.0 or later.
Here is an example of running an app with 4 streams:
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
The advantage of DS is that the whole video pipeline is optimized on the GPU, including decoding and preprocessing. You can always run GStreamer together with OpenCV only, but in my experience that is not an efficient implementation.
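If you do try the GStreamer-inside-OpenCV route, the key piece is a pipeline string handed to `cv2.VideoCapture` with the `CAP_GSTREAMER` backend. A minimal sketch of building such a string; the element chain here (uridecodebin feeding appsink via videoconvert) is a common pattern, not a DeepStream-specific recipe, and requires OpenCV built with GStreamer support:

```python
# Sketch: build a GStreamer pipeline string that decodes a stream and hands
# BGR frames to OpenCV through appsink. Element names are standard GStreamer
# plugins; swap in NVIDIA elements (e.g. nvvideoconvert) per your install.
def build_pipeline(uri, width=1280, height=720):
    """Return a pipeline string suitable for cv2.VideoCapture(..., CAP_GSTREAMER)."""
    return (
        f"uridecodebin uri={uri} ! "
        f"videoconvert ! videoscale ! "
        f"video/x-raw,format=BGR,width={width},height={height} ! "
        "appsink drop=1"
    )

# Usage (requires OpenCV compiled with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(build_pipeline("file:///tmp/video.mp4"), cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```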
Building custom parsers:
The parsers are required to convert the raw Tensor data from the inference to (x,y) location of bounding boxes around the detected object. This post-processing algorithm will vary based on the detection architecture.
If using Deepstream 4.0, Transfer Learning Toolkit 1.0 and TensorRT 6.0: follow the instructions in the repository https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps
If using Deepstream 5.0, Transfer Learning Toolkit 2.0 and TensorRT 7.0: follow the instructions at https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps
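As a toy illustration of the post-processing step described above, here is a hedged sketch that decodes a flat grid tensor into bounding boxes. The assumed layout (five values [conf, cx, cy, w, h] per grid cell) is purely illustrative; the real layout depends entirely on your detection architecture and is not a specific DeepStream format:

```python
# Sketch: decode a flat detection tensor into (x, y, w, h, conf) boxes.
# Assumed layout: grid_h * grid_w cells, each [conf, cx, cy, w, h], where
# cx/cy are offsets within the cell. This is an illustrative assumption.
def parse_detections(tensor, grid_w, grid_h, conf_threshold=0.5):
    boxes = []
    stride = 5  # conf, cx, cy, w, h per cell
    for gy in range(grid_h):
        for gx in range(grid_w):
            off = (gy * grid_w + gx) * stride
            conf, cx, cy, w, h = tensor[off:off + stride]
            if conf < conf_threshold:
                continue
            # convert cell-relative centre to normalized top-left corner
            x = (gx + cx) / grid_w
            y = (gy + cy) / grid_h
            boxes.append((x - w / 2, y - h / 2, w, h, conf))
    return boxes
```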
Resources:
Starting page: https://developer.nvidia.com/deepstream-sdk
Deepstream download and resources: https://developer.nvidia.com/deepstream-getting-started
Quick start manual: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html
Integrate TLT model with Deepstream SDK: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps
Deepstream Devblog: https://devblogs.nvidia.com/building-iva-apps-using-deepstream-5.0/
Plugin manual: https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html
Deepstream 5.0 release notes: https://docs.nvidia.com/metropolis/deepstream/DeepStream_5.0_Release_Notes.pdf
Transfer Learning Toolkit v2.0 Release Notes: https://docs.nvidia.com/metropolis/TLT/tlt-release-notes/index.html
Transfer Learning Toolkit v2.0 Getting Started Guide: https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html
Metropolis documentation: https://docs.nvidia.com/metropolis/
TensorRT: https://developer.nvidia.com/tensorrt
TensorRT documentation: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
TensorRT Devblog: https://devblogs.nvidia.com/speeding-up-deep-learning-inference-using-tensorrt/
TensorRT Open Source Software: https://github.com/NVIDIA/TensorRT
https://gstreamer.freedesktop.org/documentation/base/gstbasetransform.html?gi-language=c
Good luck.
I want to use TensorFlow on the QNX operating system. The very first step is to integrate TensorFlow into QNX. Any suggestions?
There is a GitHub issue on that; unfortunately it is without a result, but it's a starting point: https://github.com/tensorflow/tensorflow/issues/14753
Depending on your objective, NVIDIA's TensorRT can load TensorFlow models and provides binaries for QNX; see for example https://docs.nvidia.com/deeplearning/sdk/pdf/TensorRT-Release-Notes.pdf
I have many big deep learning tasks in Python 3.6 ahead and wanted to build TensorFlow (CPU only) from source, as TensorFlow noted on my 13" MacBook Pro with Touch Bar that it would run faster if built with SSE4.1, SSE4.2, AVX, AVX2 and FMA support. There are quite a lot of questions on Stack Overflow and GitHub regarding that topic and I have read them all. None of them addresses why it is not working for me.
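Before building, it can help to check which of these instruction sets the CPU actually supports. A minimal sketch that parses the flags line of Linux's /proc/cpuinfo (on macOS the equivalent data comes from `sysctl machdep.cpu.features` instead); note the kernel spells SSE4.1/SSE4.2 as sse4_1/sse4_2:

```python
# Sketch: check which TensorFlow-relevant instruction sets a cpuinfo dump
# reports. Flag spellings follow the Linux kernel's /proc/cpuinfo output.
WANTED = {"sse4_1", "sse4_2", "avx", "avx2", "fma"}

def supported_flags(cpuinfo_text):
    """Return the subset of WANTED flags present in a cpuinfo flags line."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update(line.split(":", 1)[1].lower().split())
    return WANTED & found

# Usage on Linux:
# with open("/proc/cpuinfo") as f:
#     print(supported_flags(f.read()))
```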
I strictly followed the instructions provided by https://www.tensorflow.org/install/install_sources
My configure looks like this:
./configure
Please specify the location of python. [Default is /anaconda/bin/python]: /anaconda/python.app/Contents/MacOS/python
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]:
Do you wish to build TensorFlow with Google Cloud Platform support? [y/N] n
No Google Cloud Platform support will be enabled for TensorFlow
Do you wish to build TensorFlow with Hadoop File System support? [y/N] n
No Hadoop File System support will be enabled for TensorFlow
Do you wish to build TensorFlow with the XLA just-in-time compiler (experimental)? [y/N] n
No XLA JIT support will be enabled for TensorFlow
Do you wish to build TensorFlow with VERBS support? [y/N] n
No VERBS support will be enabled for TensorFlow
Found possible Python library paths:
/anaconda/python.app/Contents/lib/python3.6/site-packages
Please input the desired Python library path to use. Default is [/anaconda/python.app/Contents/lib/python3.6/site-packages]
Using python library path: /anaconda/python.app/Contents/lib/python3.6/site-packages
Do you wish to build TensorFlow with OpenCL support? [y/N] n
No OpenCL support will be enabled for TensorFlow
Do you wish to build TensorFlow with CUDA support? [y/N] n
No CUDA support will be enabled for TensorFlow
INFO: Starting clean (this may take a while). Consider using --async if the clean takes more than several minutes.
Configuration finished
With bazel 0.4.5 I then try to do the build as in the instructions:
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
This executes without error, but it gives literally hundreds of warnings. I could provide some as examples, but there are hardly any snippets that compile without a warning.
I appreciate every bit of help; thank you all very much.
Unfortunately compiler warnings are a fact of life. However, many of these come from external libraries which are pulled into the build. These can be filtered out with the "output_filter" argument to Bazel:
bazel build --config=opt --output_filter='^//tensorflow' //tensorflow/tools/pip_package:build_pip_package
This limits output to warnings generated by TensorFlow code (you can also turn warnings off entirely this way, but that takes all the fun out of compiling). Since the tooling used to build matches what TensorFlow is developed with more closely, there are fewer warnings (I get some about multi-line comment continuations, a bunch of signed/unsigned integer comparisons, and some about variables which "may" be uninitialized).
None of these indicate definite bugs, just patterns of code which are sometimes bug-prone. If the compiler knew something was wrong, it would emit an error instead. Which is a long way of saying there's nothing to worry about.
I am trying to install TensorFlow, and during ./configure I get:
Please specify the location where ComputeCpp for SYCL 1.2 is installed. [Default is /usr/local/computecpp]:
Invalid SYCL 1.2 library path. /usr/local/computecpp/lib/libComputeCpp.so cannot be found
What should I do? What is SYCL 1.2?
SYCL is a C++ abstraction layer for OpenCL. TensorFlow's experimental support for OpenCL uses SYCL, in conjunction with a SYCL-aware C++ compiler.
As Yaroslav pointed out in his comment, SYCL is only required if you are building TensorFlow with OpenCL support. The following question during the execution of ./configure asks about OpenCL support:
Do you wish to build TensorFlow with OpenCL support? [y/N]
If you answer N, you will not have to supply a SYCL path.
It is an optional step, so you can skip it if you want.
OpenCL (Open Computing Language) is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators.
So if you want to install it, you have to set up TensorFlow with OpenCL using SYCL; the link below provides step-by-step information about it:
https://developer.codeplay.com/computecppce/latest/getting-started-with-tensorflow