When I run TensorFlow-GPU, it only uses 'Compute_0'. Why? - tensorflow

'Compute_1' isn't working
I want to make full use of my GTX 1060 6GB. How can I solve this problem?

After reading this post from the MS DirectX blog and seeing that you have what seems to be a single GTX 1060 installed, my best guess is that "compute_0" refers to usage of the primary or first GPU installed. As for "compute_1", it could display usage if you had a GPU with two "computing" or "rendering" units, or a second GPU installed and linked to the first, such as with Nvidia SLI or AMD CrossFire.
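In other words, a single active compute engine in Task Manager does not mean the card is underused. As a sanity check from the TensorFlow side (a minimal sketch, assuming TensorFlow 2.x; the calls below are standard TF API, not something taken from the question), you can confirm that TensorFlow actually sees the card and places operations on it:

```python
# Minimal sketch, assuming TensorFlow 2.x: verify that TF sees the GTX 1060
# and actually places operations on it, independent of Task Manager's graphs.
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # should list one GPU (the GTX 1060)

tf.debugging.set_log_device_placement(True)    # log which device each op runs on

a = tf.random.uniform((1000, 1000))
b = tf.random.uniform((1000, 1000))
print(tf.matmul(a, b).device)                  # expect something like ".../GPU:0"
```

If the matmul lands on /GPU:0, TensorFlow is using the card; Task Manager simply reports that activity under a single compute engine.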

Related

How can I determine the generation/codename of an AMD GPU in Linux?

I want to detect the AMD GPU generation in Python code. My use case: to run a specific application (DaVinci Resolve), the amdgpu-pro drivers are required for GPU cards older than Vega, while they are not required when the AMD GPU is Vega or a newer generation. See the list of AMD GPUs on Wikipedia. I am writing a script (davinci-resolve-checker) that tells the user whether they need to use that driver.
The question is: how do I differentiate between GPU generations/chip codenames? I am using pylspci to get information about the GPUs present. Is there a list of generations that I can check against?
There is a PCI ID list published here: https://pci-ids.ucw.cz/v2.2/pci.ids
In that list, under Advanced Micro Devices, Inc. [AMD/ATI] (vendor ID 1002), you can see their devices. For example, for the AMD Radeon PRO W6600 GPU there is the line 73e3 Navi 23 WKS-XL [Radeon PRO W6600].
There you can check whether the device name contains the codename substring; in this case it is Navi.
For this question, the codenames that currently describe Vega and above are Vega and Navi. So if the device name contains neither substring, I consider the card "older than Vega".
For programming that, you do not actually need the list itself, since you can just take the device name from pylspci's VerboseParser via device.device.name. Just in case, the list is located at /usr/share/hwdata/pci.ids on the system.
It is probably not a very reliable approach, but I have not yet found a better one. Any suggestions are welcome.
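A minimal sketch of this approach, assuming pylspci's VerboseParser is available; the codename tuple, the helper function, and the attribute names other than device.device.name (vendor.id, cls.name) are my own illustration rather than anything official:

```python
# Sketch only: detect AMD display devices and guess whether they need amdgpu-pro.
# Assumes pylspci's VerboseParser; attribute access mirrors its parsed lspci output.
from pylspci.parsers import VerboseParser

AMD_VENDOR_ID = 0x1002
VEGA_OR_NEWER = ("Vega", "Navi")  # codenames treated as "Vega or newer" (assumed list)

def amd_gpus_and_driver_need():
    """Yield (device_name, needs_amdgpu_pro) for each AMD display device found."""
    for device in VerboseParser().run():
        if device.vendor.id != AMD_VENDOR_ID:
            continue
        # Keep only display-class devices (VGA compatible controller / Display controller).
        if "VGA" not in device.cls.name and "Display" not in device.cls.name:
            continue
        name = device.device.name  # e.g. "Navi 23 WKS-XL [Radeon PRO W6600]"
        needs_pro = not any(codename in name for codename in VEGA_OR_NEWER)
        yield name, needs_pro

if __name__ == "__main__":
    for name, needs_pro in amd_gpus_and_driver_need():
        verdict = "amdgpu-pro required (pre-Vega)" if needs_pro else "stock amdgpu is enough"
        print(f"{name}: {verdict}")
```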

External GPU for Mac

I'd like to buy an eGPU for my MacBook Pro to use for simple deep learning tasks. My setup is:
MacBook Pro (15-inch, 2017)
Graphics: Radeon Pro 555 2 GB
Intel HD Graphics 630 1536 MB
Version: Mojave 10.14.5
I understand that deep learning (i.e. use of tensorflow-gpu) is not currently supported on my Mac. Due to previous disputes between Nvidia and Apple, I assume Nvidia is reluctant to offer any kind of hacky solution with their graphics cards. That said, I was recommended the NVIDIA TITAN RTX or NVIDIA Quadro® GV100, but they're quite pricey at thousands of euros/dollars apiece. For now, I just want something to play around with.
I watched this and this to see how to configure the Mac with an eGPU that is CUDA supported.
What Nvidia eGPU would you recommend for simple (i.e. not very large) data sets for DL processing? There seem to be so many models to choose from that it's not clear what would satisfy my needs. Would a GIGABYTE GeForce® GTX 1050 Ti OC 4GB suffice?
I decided to ditch the idea of using my Mac with an external Nvidia graphics card. There are apparently some hacky solutions, but after reading many forum posts and online articles I figured the best way to proceed is just to buy a new (gaming) desktop PC. One recurring theme was that an effective deep learning workstation should have at least 16 GB of RAM (32 GB or more is ideal), an Intel i7 or better, and 0.5 TB of SSD.

How to speed up tensorflow-gpu when using CUDA code simultaneously

I only have one GPU (GTX 1070, 8 GB VRAM) and I would like to use tensorflow-gpu together with other CUDA code simultaneously, on the same GPU.
But using the CUDA code and tensorflow-gpu at the same time slows tensorflow-gpu down by roughly a factor of two.
Are there any solutions to speed things up when tensorflow-gpu and CUDA code are used together?
A slightly longer version of talonmies' comment:
GPUs are awesome, but they still have finite resources. Any competently-built application that uses the GPU will do its best to saturate the device, leaving few resources for other applications. In fact, one of the goals and challenges of optimizing GPU code - whether it be a shader, CUDA or CL kernel - is making sure that all CUs are used as efficiently as possible.
Assuming that TF is already doing that: when you run another GPU-heavy application, you're sharing a resource that's already running full tilt, so things slow down.
Some options are:
Get a second, or faster, GPU.
Optimize your CUDA kernels to reduce requirements and simplify your TF stuff. While this is always important to keep in mind when developing for GPGPU, it's unlikely to help with your current problem.
Don't run these things at the same time. Running them sequentially may turn out to be slightly faster than the quasi time-slicing situation you currently have.
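One practical knob if you do have to share the card (a minimal sketch, assuming TensorFlow 2.x): limiting TensorFlow's memory footprint keeps it from reserving all 8 GB of VRAM, so the standalone CUDA process still has memory to work with. This only eases memory contention; it does nothing about compute contention, so it will not make the time-slicing itself faster.

```python
# Sketch, assuming TensorFlow 2.x: stop TF from grabbing all VRAM upfront so a
# separate CUDA process can still allocate memory. Compute units are still shared.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Option A: allocate GPU memory on demand instead of reserving it all upfront.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option B (use instead of A): hard-cap TF at e.g. 4096 MB of the 8 GB card.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
```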

Vulkan: What could be causing very poor fps on an AMD card, but OK fps on an NVIDIA card?

I am part of a team working on a 3D game engine which has a Vulkan rendering system. So far we have been testing on NVIDIA graphics cards, like the GTX 970, and have had decent performance.
But recently we tested a scene on an AMD card and got really low fps:
For example, rendering a sponza scene:
AMD R9 Fury: 5 fps
NVIDIA GeForce GTX 970: 64 fps
The NVIDIA fps is not great, but much better than on AMD.
Do you guys have any idea what could be causing this difference in fps on the AMD card?
Or do you know how I could go about isolating what is causing the low fps on the AMD card?
Thanks in advance for your help.
AMD drivers have issues when accessing numerous VkDeviceMemory allocations per submission. This is particularly a problem on Windows 7/8, which do not have WDDM 2.0. In fact, if you use too many (~1000) on Windows 7, it is easy to reproduce a BSOD. Nvidia drivers seem to be doing something behind the scenes and aren't subject to these limitations. However, as a result, their driver implementation may be hiding some opportunity for optimization from the user.
Regardless, the recommendation is to pool your memory allocations, such that VkImage and VkBuffer objects are allocated from sub-ranges of the same VkDeviceMemory. There is an open-source library called Vulkan Memory Allocator which attempts to aid in implementing this behavior (and it is, tellingly, authored by AMD!).

SteamVR performance test says my GeForce GTX 980 Ti is not ready for VR

I am thinking about getting a Vive and I wanted to check if my PC can handle it. My motherboard and processor are pretty old (Asus M4A79XTD EVO ATX AM3 and AMD Phenom II X4 965 3.4GHz respectively) but I recently upgraded to a GeForce GTX 980 Ti graphics card.
When I ran the Steam VR test program, I was expecting it to say that my graphics card was OK but that my CPU was a bit too slow. Actually, it's the other way round. Screenshot of steamVR.
Your system isn't capable of rendering low quality VR and it appears to be mostly bound by its GPU.
We recommend upgrading your Graphics Card
I've made sure I have updated my NVidia drivers.
When I look in GeForce Experience, I get the picture I was expecting to see:
GeForce Experience screenshot. It thinks my graphics card is OK but my processor doesn't meet the minimum spec.
But, since the Steam VR test is actually rendering stuff, whereas GeForce Experience is just going by the hardware I've got, it makes me think that my GPU should be capable but something about my setup is throttling it.
I'd love to know what the problem might be. Perhaps because I'm using an NVidia card in an AMD chipset MB?
Well, I never found out exactly what the cause was, but the problem is now resolved. I bought a new motherboard, processor, and RAM but kept the graphics card. After getting everything booted up, the system is reporting "high-quality VR" for both the CPU and the graphics card.
So, for whatever reason, it does seem like the MB/processor was throttling the graphics card in some way.
Steam VR only tests whether your rig is able to keep steady frame rates above 75 fps. I can run VR on my laptop and it's only got a GTX 960M; my CPU is a little more up to date (i7 6700K, 16 GB of DDR4). I also have a buddy able to run VR on a 780 Ti.