I recently installed OpenVINO following this tutorial, and at the beginning it warned me that it doesn't see the graphics card. I assumed it was talking about Intel HD Graphics (or similar), since I don't have one of those; I have an Nvidia GTX 1080 Ti.
I haven't seen anyone talk about this, and it is not mentioned in the installation guide: can OpenVINO even work with Nvidia graphics cards? And if not, what is the point of using it?
OpenVINO™ toolkit extends computer vision and non-vision workloads across Intel® hardware, maximizing performance.
The OpenVINO™ toolkit officially supports Intel hardware only; it does not support other hardware, including Nvidia GPUs.
For supported Intel® hardware, refer to System Requirements.
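If you want to check which devices OpenVINO can actually see on your machine, here is a minimal sketch, assuming a recent OpenVINO Python package (2022.1 or newer) is installed:

```python
# Minimal sketch: list the devices the OpenVINO runtime can use.
# Assumes the `openvino` Python package (2022.1 or newer) is installed.
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU'] on a machine without Intel graphics
```

On a system with only an Nvidia card, you would typically see just `CPU` listed, which matches the warning you saw during installation.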
I have always done deep learning on Google Colab or on school clusters where everything is set up nicely. Recently I needed to set up a deep-learning workstation from scratch, and I realized I have a very limited understanding of what I need to install to run a framework like TensorFlow or PyTorch on a GPU.
So can anyone explain, in the simplest terms possible, what the purpose of the Nvidia driver, CUDA, and cuDNN is? How do they work together, or on top of one another, and why do I need to install all of them for TensorFlow/PyTorch?
Python code runs on the CPU, not the GPU, which would be rather slow for complex neural-network layers like LSTMs or CNNs. Hence, TensorFlow and PyTorch know how to let cuDNN compute those layers. cuDNN requires CUDA, and CUDA requires the Nvidia driver. All three of those components are provided by Nvidia; that is simply how they have decided to organize their code.
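You can inspect each layer of that stack from Python. A minimal sketch, assuming a CUDA-enabled PyTorch build is installed:

```python
# Minimal sketch: check each layer of the driver/CUDA/cuDNN stack from PyTorch.
# Assumes a CUDA-enabled PyTorch build and an installed NVIDIA driver.
import torch

print(torch.cuda.is_available())       # True only if the driver and CUDA runtime are usable
print(torch.version.cuda)              # the CUDA version this PyTorch build was compiled against
print(torch.backends.cudnn.version())  # the cuDNN version, i.e. the library that computes the layers
```

If the driver is missing or too old, the first line prints `False` even though PyTorch itself imports fine, which is exactly the layering described above.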
I'd like to buy a multi-GPU motherboard, but I want to run separate workloads on each GPU under Windows.
I know you can select a GPU for high performance in Windows 10, but that doesn't separate tasks/workloads.
That is, one GPU would serve one program and the other GPU another program, without sharing workloads.
Is this possible?
This kind of explicit multi-GPU management is supported by DirectX 12.
https://devblogs.microsoft.com/directx/directx-12-multiadapter-lighting-up-dormant-silicon-and-making-it-work-for-you/
https://gpuopen.com/wp-content/uploads/2017/03/GDC2017-Explicit-DirectX-12-Multi-GPU-Rendering.pdf
https://developer.nvidia.com/explicit-multi-gpu-programming-directx-12
https://www.intel.com/content/www/us/en/developer/articles/technical/multi-adapter-support-in-directx-12.html
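For GPU compute workloads (rather than DirectX rendering), you can usually achieve the same separation by pinning each program to one card. A minimal sketch, assuming two CUDA-capable GPUs and a CUDA-enabled PyTorch build (not part of the DirectX 12 approach above):

```python
# Minimal sketch: run each workload on its own GPU so the two never share a device.
# Assumes two CUDA-capable GPUs are visible to PyTorch.
import torch

def run_on(device_index: int) -> None:
    device = torch.device(f"cuda:{device_index}")
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # this matmul executes only on the chosen GPU
    print(device, y.shape)

run_on(0)  # program A's workload on the first GPU
run_on(1)  # program B's workload on the second GPU
```

Alternatively, setting the `CUDA_VISIBLE_DEVICES` environment variable differently for each process restricts each program to a single card without changing its code.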
Can I run PyTorch or TensorFlow on Windows on a GPU that is also acting as the system's graphics card (e.g. when there is no graphics built into the CPU, as with a Ryzen 3600)? If so, is there any downside, or would I be better off getting a CPU with built-in graphics?
Yes, it is possible to run e.g. TensorFlow on the GPU while also using that GPU to drive your system's display. You do not need a second graphics card or an integrated GPU.
Keep in mind that your graphics card will share memory and processing power between all your programs. GPU-intensive work might slow down the frame rate on your system, and vice versa. Also keep an eye on memory usage.
I had tensorflow_gpu running a multi-layer CNN while playing AAA games (e.g. GTA V) on a Ryzen 3600. It worked on my very old NVIDIA GTX 260 (2 GB memory), but it crashed quite often because of the limited memory. After upgrading to a GTX 1080 with 8 GB it worked quite well. Needless to say, you can always fill up your GPU memory and crash, no matter the card's size.
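To make sharing the card with the desktop less crash-prone, you can tell TensorFlow not to reserve all GPU memory at startup. A minimal sketch, assuming TensorFlow 2.x with a CUDA-enabled build:

```python
# Minimal sketch: let TensorFlow allocate GPU memory on demand instead of
# grabbing it all at startup, leaving room for the display and other programs.
# Assumes TensorFlow 2.x with a CUDA-enabled build.
import tensorflow as tf

for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```

This must run before any GPU operation executes; it reduces (but does not eliminate) the out-of-memory crashes described above.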
Sorry if this is considered off-topic; I'm not sure where else to ask. I have an ASUS ROG GL552VW laptop and am trying to figure out whether its HDMI port supports HDMI 2.0. I want to use it to output 4K content to a TV. However, after reading the manual and a few hours of searching online, I simply cannot find this information, and I struggled to find it for other laptops too. Is there some driver on my laptop I can look at that would tell me whether it supports HDMI 2.0? Or something else?
TL;DR: Does the ASUS ROG GL552VW laptop support HDMI 2.0?
I voted to move this to superuser.com, since it might be a better fit for this question. However, I think I can answer as well.
Your laptop appears to have an Nvidia GTX 960M, which apparently does not support HDMI 2.0: https://www.notebookcheck.net/NVIDIA-GeForce-GTX-960M.138006.0.html
You'll be able to drive a 4K TV, but only at 30 Hz, which is enough for most movies but can be limiting for some uses.
From Quora (https://www.quora.com/How-do-I-know-which-version-of-the-HDMI-port-I-have-on-my-laptop?share=1):
"If you have any Intel Core-based laptop up to the 9th gen, you’ll be limited to HDMI 1.4a while the 10th gen Ice lake series support HDMI 2.0b. For AMD Ryzen APUs, they’re capped to HDMI 2.0b.
This is also applicable to laptops with dedicated graphics as all display output is still handled by the iGPU."
It seems TensorFlow only supports CUDA and not OpenCL.
I saw the tensorflow-cl project, which compiles the CUDA code to OpenCL, but it is still a development version that does not work in all cases.
My question is whether Google, TensorFlow's developer, will ever create a multi-platform version of the tool (no, I do not mean the CPU-only version). Are the features of proprietary CUDA so critical that it makes sense to focus on a single GPU vendor? Are there any plans to develop an OpenCL/Vulkan/SPIR-V version at any time in the future?
The answer is apparently yes: TensorFlow started supporting OpenCL via SYCL in the last few weeks. The support lives in the master GitHub branch and covers only a few basic kernels so far; many other kernels still need to be written in the new format, and contributions are welcome.
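To verify whether a given TensorFlow build has actually registered an OpenCL/SYCL device, you can list the local devices. A minimal sketch using the TF 1.x era device-listing API:

```python
# Minimal sketch: list the compute devices this TensorFlow build has registered.
# On a SYCL-enabled build you would expect a SYCL device to appear here;
# on a stock build you will only see the CPU (and CUDA GPUs, if present).
from tensorflow.python.client import device_lib

for d in device_lib.list_local_devices():
    print(d.name, d.device_type)
```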