Rendering in Blender won't use GPU - gpu

Hi, I'm using a 1050 Ti and did all the advised steps to turn on GPU rendering, but Blender won't use my GPU.
The integrated GPU sits at around 5-8% usage.
The 1050 Ti sits at around 3-5% usage; there is usually an initial spike to around 10-15%, but only for a moment.
The CPU is an i7-8750H and is usually between 40-100% usage during a render.

You need to go to Preferences > System and then change the Cycles Render Devices setting to CUDA. CUDA uses the system's graphics card (if it supports it, which most NVIDIA cards do). When you render your image, also make sure Cycles is selected as the render engine under the Render tab.
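For reference, the same switches can be flipped from Blender's Python console. This is a minimal sketch, assuming Blender 2.8x or later, where the Cycles preferences live under the 'cycles' add-on:

    import bpy

    # Preferences > System: choose CUDA as the compute backend and tick the CUDA device(s).
    prefs = bpy.context.preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'
    prefs.get_devices()  # refresh the device list
    for dev in prefs.devices:
        dev.use = (dev.type == 'CUDA')

    # Render properties: use Cycles and tell it to render on the GPU.
    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.device = 'GPU'

If the 1050 Ti doesn't show up in prefs.devices at all after switching to CUDA, the NVIDIA driver installation is the first thing to check.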

Related

Tensorflow.js examples not using GPU

I have an NVIDIA RTX 2070 GPU with CUDA installed and I have WebGL support, but when I run the various TFJS examples, such as the Addition RNN example or the Visualizing Training example, I see my CPU usage go to 100% while the GPU (as metered via nvidia-smi) never gets used.
How can I troubleshoot this? I don't see any console messages about not finding the GPU. The TFJS docs are really vague about this, only saying that TFJS uses the GPU if WebGL is supported and otherwise falls back to the CPU if it can't find WebGL. But again, WebGL is working. So... how do I help it find my GPU?
Other related SO questions seem to be about tfjs-node-gpu, e.g., getting one's own tfjs-node-gpu installation working. This is not about that.
I'm talking about running the main TFJS examples on the official TFJS pages from my browser.
The browser is the latest Chrome for Linux, running on Ubuntu 18.04.
EDIT: Since someone will ask, chrome://gpu shows that hardware acceleration is enabled. The output log is rather long, but here's the top:
Graphics Feature Status
Canvas: Hardware accelerated
Flash: Hardware accelerated
Flash Stage3D: Hardware accelerated
Flash Stage3D Baseline profile: Hardware accelerated
Compositing: Hardware accelerated
Multiple Raster Threads: Enabled
Out-of-process Rasterization: Disabled
OpenGL: Enabled
Hardware Protected Video Decode: Unavailable
Rasterization: Software only. Hardware acceleration disabled
Skia Renderer: Enabled
Video Decode: Unavailable
Vulkan: Disabled
WebGL: Hardware accelerated
WebGL2: Hardware accelerated
Got it essentially solved. I found an older post pointing out that one needs to check whether WebGL is using the "real" GPU or just the Intel integrated graphics built into the CPU.
To do this, go to https://alteredqualia.com/tmp/webgl-maxparams-test/, scroll down to the very bottom, and look at the Unmasked Renderer and Unmasked Vendor tags.
In my case, these showed Intel, not my NVIDIA GPU.
My System76 laptop can run in "Hybrid Graphics" mode, in which big computations are performed on the NVIDIA GPU while smaller things like GUI elements run on the integrated graphics (this saves battery life). Some applications are able to take advantage of the GPU while in Hybrid Graphics mode (I just ran an Adversarial Latent AutoEncoder demo that maxed out my GPU in that mode), but not all are. Chrome is apparently one of the latter.
To get WebGL to see my NVIDIA GPU, I needed to reboot my system in "full NVIDIA Graphics" mode.
After this reboot, some of the TFJS examples do use the GPU, such as the Visualizing Training example, which now trains almost instantly instead of taking a few minutes. But the Addition RNN example still only uses the CPU. This may be because of the missing backend declaration that @edkeveked pointed out.

Force display from APU and have discrete GPU for OpenCL?

I need a system for OpenCL programming with the following restrictions:
The discrete GPU must not run as a display card --> I can do that from the BIOS.
The internal GPU of the AMD APU must be used as the display GPU --> I can do that from the BIOS.
OpenCL must not recognize the internal APU's GPU and must always default to the discrete GPU.
Why do I need this?
It is because I am working on GPU code that requires the GPU's BIOS to be flashed and a custom BIOS to be installed, which makes the GPU unusable for display.
AMD boards can't boot without a VGA card, so I am getting an APU that has an internal GPU.
The code base I am working on can't deal with conflicting GPUs, so I need to keep OpenCL from seeing the APU's GPU.
How can I approach this?
According to the AMD OpenCL Programming Guide, AMD's drivers support the GPU_DEVICE_ORDINAL environment variable to configure which devices are used (Section 2.3.3):
In some cases, the user might want to mask the visibility of the GPUs seen by the OpenCL application. One example is to dedicate one GPU for regular graphics operations and the other three (in a four-GPU system) for Compute. To do that, set the GPU_DEVICE_ORDINAL environment parameter, which is a comma-separated list variable:
Under Windows: set GPU_DEVICE_ORDINAL=1,2,3
Under Linux: export GPU_DEVICE_ORDINAL=1,2,3
You'll first need to determine the ordinals of the devices you want to include. For this, I would recommend running clinfo with the -l switch, which prints a basic tree of the available OpenCL platforms and devices. If the devices are listed with the APU first and then the dedicated GPU, you would want to enable only device 1 (the GPU), and would set the environment variable to GPU_DEVICE_ORDINAL=1.
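If clinfo isn't available, a few lines of pyopencl (an assumption here; any OpenCL enumeration tool, or a small C program around clGetPlatformIDs/clGetDeviceIDs, works just as well) print the platforms and devices in order, which tells you which ordinal belongs to the discrete GPU:

    import pyopencl as cl

    # Print every OpenCL platform and its devices with their ordinal positions.
    for p_idx, platform in enumerate(cl.get_platforms()):
        print(f"Platform {p_idx}: {platform.name}")
        for d_idx, device in enumerate(platform.get_devices()):
            print(f"  Device {d_idx}: {device.name} "
                  f"({cl.device_type.to_string(device.type)})")

Keep in mind that the environment variable has to be set before your application initializes OpenCL; otherwise the runtime will still report both devices.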

Can GPU be used to run programs that run on CPU?

Can a GPU be used to run programs that normally run on the CPU, like getting input from the keyboard and mouse, playing music, or reading the contents of a text file, using the Direct3D and OpenGL APIs?
The GPU has no direct access to any memory that the OS maps for access by client code (i.e. code that executes in user mode, with the instructions running on the CPU).
In addition, the GPU is not meant to perform tasks like these; it is designed to perform floating-point arithmetic at high speed. And you would never use Direct3D or OpenGL for anything unrelated to graphics, unless you are only using the compute shader.
General-purpose computations, such as image manipulation or physics simulations, are performed on the GPU with OpenCL or CUDA.
You can, however, gather data on the CPU, send it to the GPU for further processing, and finally write the results back into memory accessible from the CPU.
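To illustrate that CPU -> GPU -> CPU round trip, here is a minimal sketch using pyopencl and a trivial squaring kernel (both chosen only for brevity; the same pattern applies to CUDA or to plain C OpenCL):

    import numpy as np
    import pyopencl as cl

    # Data is gathered and prepared on the CPU...
    a = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    # ...copied into GPU buffers...
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # ...processed there by a kernel...
    program = cl.Program(ctx, """
    __kernel void square(__global const float *a, __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] * a[gid];
    }
    """).build()
    program.square(queue, a.shape, None, a_buf, out_buf)

    # ...and the result is written back into CPU-accessible memory.
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    print(np.allclose(result, a * a))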

ADL only works if a monitor is connected to the GPU

I have a system with a discrete GPU, AMD Radeon HD7850, for computations only. The GPU has no monitor connected to it.
I would like to read fan speed and temperature from the GPU. This can normally be done with the ADL (AMD Display Library) API.
E.g. ADL_Overdrive6_FanSpeed_Get and ADL_Overdrive6_Temperature_Get. However, all ADL API calls return an error when no display is active, i.e. when no monitor is connected.
How do I read these values when the GPU has no monitor connected to it?
The AMD Catalyst Control Center has the same problem; it too can't read the values when the display is inactive.
I know the values are accessible because I can see them in HWiNFO64.
After consulting AMD and the guys behind HWiNFO64 I have learned that the only way to get these values from a headless GPU is to read them directly from the GPU registers.
To do this you need to write your own driver, since AMD doesn't make an API available.

Is it possible to do GPU programming if I have an integrated graphics card?

I have an HP Pavilion laptop; its so-called graphics card is some sort of integrated NVIDIA chip running on shared memory. To give you an idea of its capabilities: if a video game was made in the last 5 years at a cost of more than a couple million dollars, it just won't be playable on my computer.
Anyway, I was wondering if I could do GPU programming, like CUDA, on this thing. I don't expect it to be fast; I'd just like to get the experience without making my laptop catch fire in the meantime.
Find out what GPU your laptop has and compare it against this list: http://en.wikipedia.org/wiki/CUDA#Supported_GPUs. Most likely, CUDA will not be supported.
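If it does turn out to be on that list, a quick sanity check from Python, assuming PyCUDA is installed (the deviceQuery sample that ships with the CUDA toolkit reports the same information), looks like this:

    import pycuda.driver as cuda

    cuda.init()  # raises an error if no CUDA-capable driver/device is present
    print("CUDA devices found:", cuda.Device.count())
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        major, minor = dev.compute_capability()
        print(f"  {dev.name()} (compute capability {major}.{minor})")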
This doesn't necessarily prevent you from doing "GPU programming", however. If the GPU supports fragment and vertex shaders, you can use the graphics pipeline to send data to the card (for example, as texture data) and do your processing in a fragment shader. You then read the pixel buffer back to get the data into system memory. Though hackish, this approach was quite popular before CUDA and frameworks like OpenCL were introduced.
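A minimal sketch of that texture-in, fragment-shader-compute, read-pixels-back pattern, written here with moderngl purely to keep it short (the library, the shaders, and the squaring "computation" are illustrative assumptions, not part of the original technique):

    import numpy as np
    import moderngl

    ctx = moderngl.create_standalone_context()

    prog = ctx.program(
        vertex_shader="""
            #version 330
            in vec2 in_pos;
            out vec2 uv;
            void main() {
                uv = in_pos * 0.5 + 0.5;
                gl_Position = vec4(in_pos, 0.0, 1.0);
            }
        """,
        fragment_shader="""
            #version 330
            uniform sampler2D data;
            in vec2 uv;
            out vec4 result;
            void main() {
                vec4 v = texture(data, uv);
                result = v * v;  // the "computation": square every value
            }
        """,
    )

    # Upload the input data as a float texture.
    w, h = 64, 64
    src = np.random.rand(h, w, 4).astype("f4")
    tex = ctx.texture((w, h), 4, src.tobytes(), dtype="f4")
    tex.use(location=0)
    prog["data"].value = 0

    # A full-screen quad makes the fragment shader run once per output texel.
    quad = np.array([-1, -1, 1, -1, -1, 1, 1, 1], dtype="f4")
    vao = ctx.simple_vertex_array(prog, ctx.buffer(quad.tobytes()), "in_pos")

    # Render into an offscreen float framebuffer, then read the pixels back.
    fbo = ctx.simple_framebuffer((w, h), components=4, dtype="f4")
    fbo.use()
    vao.render(moderngl.TRIANGLE_STRIP)
    out = np.frombuffer(fbo.read(components=4, dtype="f4"), dtype="f4").reshape(h, w, 4)

    print(np.allclose(out, src * src))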