TensorFlow and OpenCL [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
It seems TensorFlow only supports CUDA and not OpenCL.
I saw the tensorflow-cl project, which compiles the CUDA code into OpenCL, but it is still a development version that does not work in all cases.
My question is whether Google, TensorFlow's developer, will ever develop a multi-platform version of its tool (no, I do not mean the CPU-only version). Are the features of proprietary CUDA so critical that it must focus on a single GPU vendor? Are there any plans to develop an OpenCL/Vulkan/SPIR-V version at any time in the future?

The answer is obviously yes: TensorFlow started supporting OpenCL via SYCL in the last few weeks. The support still lives in the master GitHub branch and covers only a few basic kernels. Many other kernels need to be ported to the new format, and contributions are welcome.

Related

In simple terms, what is the relationship between the GPU, Nvidia driver, CUDA and cuDNN in the context for using a deep learning framework? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 7 months ago.
I have always been doing deep learning on Google Colab or on school clusters that have everything set up nicely. Recently I needed to set up a workstation to do deep learning from scratch and I realized I have very limited understanding of the things that I need to install to run a framework like tensorflow or pytorch on GPU.
So can anyone explain, in the simplest terms possible, what is the purpose of the Nvidia driver, CUDA and cuDNN? How do they work together or on top of one another, and why do I need to install all of them for tensorflow/pytorch?
Python code runs on the CPU, not the GPU, which would be rather slow for complex neural-network layers such as LSTMs or CNNs. Hence, TensorFlow and PyTorch know how to hand those layers off to cuDNN. cuDNN requires CUDA, and CUDA requires the NVIDIA driver. All three of those components are provided by NVIDIA; that is simply how they have chosen to organize their code.
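To make the layering concrete, here is a minimal stdlib-only Python sketch of the kind of probe a framework performs at the bottom of that stack: checking whether the NVIDIA driver's CUDA library can even be located before anything above it (CUDA toolkit, cuDNN) can work. The helper name is mine, not a framework API.

```python
import ctypes.util


def cuda_driver_available() -> bool:
    """Return True if the CUDA driver library can be located on this system.

    This only probes the bottom layer of the stack (the NVIDIA driver);
    the CUDA toolkit and cuDNN sit on top of it and are checked separately
    by the frameworks themselves.
    """
    # find_library searches the standard loader paths; it returns None
    # when the library is not installed. "cuda" covers Linux (libcuda),
    # "nvcuda" covers Windows (nvcuda.dll).
    for name in ("cuda", "nvcuda"):
        if ctypes.util.find_library(name) is not None:
            return True
    return False


print(cuda_driver_available())
```

If this returns False, no amount of installing CUDA or cuDNN on top will give you GPU support; the driver has to come first.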

Can OpenVINO support (and use) Nvidia GPU? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
I've recently installed OpenVINO following this tutorial, and at the beginning it warned me that it doesn't see the graphics card. I assumed it was talking about Intel HD Graphics (or similar), which I don't have; I have an Nvidia GTX 1080 Ti.
I haven't seen anyone talking about this and it is not mentioned in the installation guide, but can it even work with Nvidia graphics cards? And if not, what's the point in using OpenVINO?
OpenVINO™ toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance.
OpenVINO™ toolkit is officially supported by Intel hardware only. OpenVINO™ toolkit does not support other hardware, including Nvidia GPU.
For supported Intel® hardware, refer to System Requirements.
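You can confirm this yourself by asking OpenVINO which devices it can see. A hedged sketch using the OpenVINO Python API (the `Core().available_devices` property; in older releases the class lived under `openvino.runtime`), wrapped so it degrades gracefully when the package is not installed:

```python
def list_openvino_devices():
    """Return the device names OpenVINO can see (e.g. ['CPU', 'GPU']),
    or None if the openvino package is not installed.

    Note: 'GPU' in this list means an Intel GPU; an NVIDIA card such
    as a GTX 1080 Ti will not appear here.
    """
    try:
        from openvino import Core  # older releases: openvino.runtime.Core
    except ImportError:
        return None
    return Core().available_devices


print(list_openvino_devices())
```

On a machine with only an Nvidia GPU you would typically see just `['CPU']`, which is consistent with the answer above.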

What is the best IDE I can use for deep learning? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 1 year ago.
We are trying to train an LSTM under Anaconda (Spyder) on a dataset of size 333,113 KB (3,628,801 rows × 31 columns). The data is stored in a .csv file and imported using the pandas library. Execution is too slow and Spyder sometimes crashes.
NB: we are using an Intel Core™ i5-8300H CPU @ 2.3 GHz with 8 GB of RAM.
Not directly an IDE, but I like to use Visual Studio Code; it's an excellent choice even for machine learning and data science:
it works on any OS
it supports many technologies besides Python: C#, JavaScript, etc.
it is open source and lightweight
it integrates with Pylint
you can easily unit-test your machine-learning models
For me, VS Code makes working with SQL, .NET, Node.js and many other tools a lot easier. It's a great code editor that supports operations like debugging, task running and version control, and many other things a full-featured IDE can also do.
Is this a university project or a commercial one? You could try Google Colab.

Build tensorflow faster from source for contribution purposes [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 1 year ago.
I am building TensorFlow with Bazel on my local PC and, as expected, it takes quite a long time. The problem is that I want to make my own contributions to the source, so recompiling the whole codebase for every small change is not feasible. Since I want to use TensorFlow for a custom project of mine, I cannot expect the community to take up my issues.
Are there specific targets I could build with Bazel, as I would with make?
I have access to a pretty good GPU server, but I cannot figure out whether the Bazel build of TensorFlow uses GPU resources. Are there any configuration options for building with the GPU?
What's the fastest way to recompile TensorFlow after every small tweak?
How do independent contributors work on TensorFlow in general?
Bazel is smart about incremental changes: if you change a single file, only the bare minimum will be recompiled. Just change the file and run the same command you use for a full build.
I have access to a pretty good GPU server, but I cannot figure out whether the Bazel build of TensorFlow uses GPU resources. Are there any configuration options for building with the GPU?
No; compilers are complicated beasts that use only the CPU.
If you want to contribute to TensorFlow, go through the contribution guidelines.
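For reference, a hedged sketch of the kind of Bazel invocations meant here. The pip-package target is the one TensorFlow's build documentation uses; the narrower kernel target and cache path are illustrative and may differ between TensorFlow versions:

```shell
# Full build once; later invocations recompile only what changed.
bazel build //tensorflow/tools/pip_package:build_pip_package

# Build and test just the subtree you are editing instead of the
# whole pip package -- much faster for small tweaks.
bazel test //tensorflow/core/kernels:all

# Keep compiled results in a persistent on-disk cache so they survive
# across checkouts (path is an example).
bazel build --disk_cache=~/.cache/bazel-tf //tensorflow/tools/pip_package:build_pip_package
```

The key habit is always rerunning the same command: Bazel's action cache, not a special flag, is what makes the rebuild fast.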

TensorFlow: CPU Choice AVX-512 AMD, Intel? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 3 years ago.
Most of my training is done using RNNs, either LSTM or GRU, and I've found the CPU takes much of the load.
I'm looking to put a new system together for testing, and I haven't seen any posts on which architecture is more conducive to ML with TensorFlow. It seems to boil down to the Intel Core-X series having AVX-512 and AMD not (specifically, I am looking at the i9-7900X vs. the Threadripper 1950X, as they are similarly priced).
So the question I have is two-fold:
Does TensorFlow make use of AVX-512 extensions, and
are those extensions beneficial enough to make up for the 7900X's six-core deficit relative to the 1950X?
Are there any other considerations I am not taking into account? Any specialized performance optimizations that TF has made for Intel processors over AMD?
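One practical check before buying: whether a given chip actually exposes AVX-512 flags, since a stock TensorFlow binary may not be compiled to use them anyway (TensorFlow logs a message at startup when the CPU supports instructions the binary was not built for). A stdlib-only Python sketch that scans the feature flags on Linux; the parsing helper is mine, not a TensorFlow API:

```python
import os
import re


def has_avx512(cpuinfo_text: str) -> bool:
    """Return True if any avx512* feature flag appears in the given
    /proc/cpuinfo contents (e.g. avx512f, avx512dq, avx512vl)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return any(re.match(r"avx512", flag) for flag in line.split())
    return False


# Linux-only probe; other OSes need a different mechanism
# (e.g. `sysctl -a | grep machdep.cpu` on macOS).
if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as fh:
        print(has_avx512(fh.read()))
```

If the flags are absent (as on the 1950X, which stops at AVX2), the AVX-512 half of the comparison is moot and core count dominates.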