Like with Ray and Horovod, I want to run TensorFlow distributed using the Apache Ignite framework, but I can't seem to find good examples of how to achieve distributed training.
Are there any good notebooks or tutorials out there?
The use of TensorFlow with Ignite or GridGain Community Edition is described here: https://www.gridgain.com/docs/latest/integrations/tensorflow/deep-learn-tensor
More information can be found here: https://github.com/gridgain/gridgain/tree/master/modules/tensorflow
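To give a feel for the API side, here is a minimal sketch of reading training data straight from an Ignite cache with the IgniteDataset that the integration provides (TensorFlow 1.x contrib namespace; in later releases it moved to the tensorflow-io package). The cache name, host, and port are placeholders for your own cluster, and the cluster/worker orchestration itself is driven from the Ignite side as described in the GridGain docs above.

    # A minimal sketch, assuming TensorFlow 1.x with the contrib Ignite dataset
    # and an Ignite node running locally on the default thin-client port 10800.
    # "MY_CACHE" is a placeholder cache name -- replace it with your own.
    import tensorflow as tf
    from tensorflow.contrib.ignite import IgniteDataset

    dataset = IgniteDataset(cache_name="MY_CACHE", host="localhost", port=10800)
    iterator = dataset.make_one_shot_iterator()
    next_element = iterator.get_next()

    with tf.Session() as sess:
        # Pull a single record out of the Ignite cache to verify connectivity.
        print(sess.run(next_element))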
Related
I want to perform distributed training on Amazon SageMaker. The code is written with TensorFlow and is similar to the following code, for which I think a CPU instance should be enough:
https://github.com/horovod/horovod/blob/master/examples/tensorflow_word2vec.py
Can Horovod with TensorFlow work on non-GPU instances in Amazon SageMaker?
Yes, you should be able to use both CPUs and GPUs with Horovod on Amazon SageMaker. Please follow the example below:
https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-python-sdk/tensorflow_script_mode_horovod/tensorflow_script_mode_horovod.ipynb
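If it helps, here is a rough sketch of what launching that Horovod script on CPU-only instances could look like with the SageMaker Python SDK (v2). The entry point, IAM role ARN, instance type, and S3 path are placeholders, not values taken from the example notebook.

    # A hedged sketch: Horovod (MPI) distribution on CPU-only instances.
    # All names below (script, role ARN, bucket) are placeholders.
    from sagemaker.tensorflow import TensorFlow

    estimator = TensorFlow(
        entry_point="tensorflow_word2vec.py",   # your Horovod training script
        role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role
        instance_count=2,                        # two CPU instances
        instance_type="ml.c5.xlarge",            # no GPU attached
        framework_version="2.4.1",
        py_version="py37",
        distribution={"mpi": {"enabled": True, "processes_per_host": 2}},
    )

    estimator.fit("s3://my-bucket/word2vec-data")  # placeholder input location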
I want to develop a GStreamer plugin that can use the acceleration provided by the GPU of the graphics card (NVIDIA RTX 2xxx). The objective is to have a fast GStreamer pipeline that processes a video stream and applies a custom filter to it.
After two days of googling, I cannot find any example or hint.
One of the best alternatives I have found is to use "nvivafilter", passing a CUDA module as an argument. However, nowhere explains how to install this plugin or provides an example. Worse, it seems that it could be specific to NVIDIA Jetson hardware.
Another alternative seems to be using GStreamer inside an OpenCV Python script, but that is a mix whose performance impact I do not know.
This GStreamer tutorial talks about several libraries, but it seems outdated and does not provide details.
RidgeRun seems to have something similar to "nvivafilter", but it is not FOSS.
Does anyone have an example or suggestion on how to proceed?
I suggest you start by installing DeepStream (DS) 5.0 and exploring the examples and apps provided. It's built on GStreamer. DeepStream installation guide
The installation is straightforward. You will find prebuilt custom parsers.
You will need to install the following: Ubuntu 18.04, GStreamer 1.14.1, NVIDIA driver 440 or later, CUDA 10.2, TensorRT 7.0 or later.
Here is an example of running an app with 4 streams:
deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
The advantage of DS is that the whole video pipeline is optimized on the GPU, including decoding and preprocessing. You can always run GStreamer together with OpenCV only, but in my experience that is not an efficient implementation.
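"GStreamer together with OpenCV" usually means handing OpenCV a GStreamer pipeline string, roughly as in the hedged sketch below (it assumes OpenCV was built with GStreamer support; the file name is a placeholder). Decoding can still happen on the GPU, but every frame is then copied into system memory for OpenCV, which is where the inefficiency comes from.

    # A hedged sketch of the "GStreamer inside an OpenCV script" approach.
    # Requires an OpenCV build with GStreamer support; "sample.mp4" is a placeholder.
    import cv2

    pipeline = "filesrc location=sample.mp4 ! decodebin ! videoconvert ! appsink"
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

    while cap.isOpened():
        ok, frame = cap.read()   # each frame is copied into CPU memory here
        if not ok:
            break
        # ... run your custom filter / inference on `frame` here ...

    cap.release()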
Building custom parsers:
The parsers are required to convert the raw tensor data from the inference into the (x, y) locations of bounding boxes around the detected objects. This post-processing algorithm will vary based on the detection architecture.
If using DeepStream 4.0, Transfer Learning Toolkit 1.0 and TensorRT 6.0: follow the instructions in the repository https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps
If using DeepStream 5.0, Transfer Learning Toolkit 2.0 and TensorRT 7.0: follow the instructions in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps
Resources:
Starting page: https://developer.nvidia.com/deepstream-sdk
Deepstream download and resources: https://developer.nvidia.com/deepstream-getting-started
Quick start manual: https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html
Integrate TLT model with Deepstream SDK: https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps
Deepstream Devblog: https://devblogs.nvidia.com/building-iva-apps-using-deepstream-5.0/
Plugin manual: https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html
Deepstream 5.0 release notes: https://docs.nvidia.com/metropolis/deepstream/DeepStream_5.0_Release_Notes.pdf
Transfer Learning Toolkit v2.0 Release Notes: https://docs.nvidia.com/metropolis/TLT/tlt-release-notes/index.html
Transfer Learning Toolkit v2.0 Getting Started Guide: https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html
Metropolis documentation: https://docs.nvidia.com/metropolis/
TensorRT: https://developer.nvidia.com/tensorrt
TensorRT documentation: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
TensorRT Devblog: https://devblogs.nvidia.com/speeding-up-deep-learning-inference-using-tensorrt/
TensorRT Open Source Software: https://github.com/NVIDIA/TensorRT
GstBaseTransform documentation: https://gstreamer.freedesktop.org/documentation/base/gstbasetransform.html?gi-language=c
Good luck.
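One more pointer on the custom-filter part: for quick prototyping (not CUDA-specific) you can even subclass GstBaseTransform from Python via gst-python. Below is a rough, hedged sketch; the element name, the ANY caps, and the empty processing body are all placeholders of my own choosing.

    # A minimal in-place transform element written with gst-python.
    # "myfilter" and the ANY caps are arbitrary choices for illustration.
    import gi
    gi.require_version("Gst", "1.0")
    gi.require_version("GstBase", "1.0")
    from gi.repository import Gst, GstBase, GObject

    Gst.init(None)

    class MyFilter(GstBase.BaseTransform):
        # (long name, classification, description, author)
        __gstmetadata__ = ("My filter", "Filter/Video",
                           "Example in-place transform in Python", "me")
        __gsttemplates__ = (
            Gst.PadTemplate.new("sink", Gst.PadDirection.SINK,
                                Gst.PadPresence.ALWAYS, Gst.Caps.new_any()),
            Gst.PadTemplate.new("src", Gst.PadDirection.SRC,
                                Gst.PadPresence.ALWAYS, Gst.Caps.new_any()),
        )

        def do_transform_ip(self, buf):
            # Per-buffer processing (e.g. launching a CUDA kernel) would go here;
            # this sketch just passes buffers through unchanged.
            return Gst.FlowReturn.OK

    # Register in-process so the element can be used in pipeline strings,
    # e.g. Gst.parse_launch("videotestsrc ! myfilter ! fakesink")
    GObject.type_register(MyFilter)
    Gst.Element.register(None, "myfilter", Gst.Rank.NONE, MyFilter)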
I want to use TensorFlow on the QNX operating system. The very first step is to integrate TensorFlow into QNX. Any suggestions?
There is an issue about that on GitHub, unfortunately without a resolution, but it's a starting point: https://github.com/tensorflow/tensorflow/issues/14753
Depending on your objective, NVIDIA's TensorRT can load TensorFlow models and provides binaries for QNX; see for example https://docs.nvidia.com/deeplearning/sdk/pdf/TensorRT-Release-Notes.pdf
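To give a sense of that route: you would typically export the TensorFlow model (for example to ONNX with tf2onnx, or to UFF with older TensorRT releases), build a serialized TensorRT engine from it on a development machine, and then deserialize and run that engine with the TensorRT runtime binaries on QNX. A hedged sketch of the engine-building step with the TensorRT 7.x Python API; "model.onnx" and "model.plan" are placeholder file names.

    # Build a serialized TensorRT engine from an ONNX export of a TF model.
    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(EXPLICIT_BATCH)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("Failed to parse the ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30      # 1 GiB of build workspace

    engine = builder.build_engine(network, config)
    with open("model.plan", "wb") as f:
        f.write(engine.serialize())          # deploy this plan with the QNX runtime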
Now that CodeXL is open-source and openly developed, I'd expect it to support more than just AMD GPUs. Is this true?
Which GPUs does CodeXL support?
This information is buried deep in the GPUOpen site, in the Release Notes for the latest version (2.2), under the System Requirements section.
The document is lengthy and copyrighted, so I won't reproduce the information from it here, except for the piece that I was most interested in:
For GPU API-Level Debugging, a working OpenCL/OpenGL configuration is required (AMD or other)
So non-AMD GPUs are supported.
I'm working on a project using Hadoop, and now I want to test a data-intensive application on Hadoop. I checked out the Apache Mahout machine learning algorithms. Are there any open-source applications running on Hadoop that use the Apache Mahout machine learning algorithms?
You can start by looking at the official Mahout page, Powered by Mahout, where you can find a list of commercial and academic uses of the Mahout software. I guess some of them should be open source, but I haven't checked myself.