Blender 3.3 is not using the GPU on my computer

I am using an NVIDIA GTX 1060 and an Intel i7-7700K. I used Blender 3.2 for rendering; it was fine and fast using the GPU. But after I updated to Blender 3.3 and applied all the render settings, it uses the CPU and takes a lot of time.
https://youtu.be/PcpyDHZiTuQ Blender 3.2 rendering OK
https://youtu.be/UIGFANNUhc4 Blender 3.3 rendering using CPU
This happened before when I was using 3.2, but after updating the NVIDIA drivers the GPU started to work. This time that does not work for 3.3.
I have seen other posts:
Why my GPU load is low when is rendering a scene with Blender?
Rendering in blender wont use GPU
They don't work in my case.

As I can see in the second video, you switched it to None before closing the Preferences.
(Render Properties are affected by the Preferences setting.)
In Preferences > System, switch it to CUDA before closing the Preferences, and then try setting the Render Properties.
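If you want to verify this without the UI, here is a minimal sketch (assuming blender is on your PATH and the NVIDIA driver is working); it forces the Cycles compute device to CUDA and prints the devices Cycles detects:
blender -b --python-expr "import bpy; p = bpy.context.preferences.addons['cycles'].preferences; p.compute_device_type = 'CUDA'; p.get_devices(); print([(d.name, d.type) for d in p.devices])"
If the GTX 1060 does not appear in that list with type 'CUDA', the problem is at the driver level rather than in Blender's settings.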

Related

Blender 2.9 Could not find a matching GPU name warning on Chromebook

I'm using an Asus Chromebook that only has a CPU (I think).
This is what the error says:
Warning: Could not find a matching GPU name. Things may not behave as expected.
Detected OpenGL configuration:
Vendor: Red Hat
Renderer: virgl
/run/user/1000/gvfs/ non-existent directory
found bundled python: /home/sekhong5417/blender/2.90/python
This works on my friend's Chromebook, which has a GPU.
Also, I am kinda young, so I can't replace anything or buy a new device.
If anyone still runs into this issue: there is an incompatibility between Blender and the Intel ChromeOS GPU drivers.
See https://developer.blender.org/T77651#1172666 for more details and an updated working build of v2.93.
Hopefully, the fix gets included in the next release.
I use an Acer Chromebook Spin 13 and I just ran into the same issue. I think the Debian container within ChromeOS may not have a driver that matches the Intel GPU. My Chromebook uses Intel HD Graphics 620. I tried many ways to install the driver, but they all failed. Linux tends to work more easily with NVIDIA GPUs. So my suggestion is to look for an Intel driver that matches your graphics card and try again.
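To see what the Linux container actually exposes to Blender, a quick check (assuming mesa-utils is installed in the Crostini container) is:
sudo apt-get install mesa-utils
glxinfo -B | grep -iE 'vendor|renderer'
If the renderer is reported as virgl, as in the warning above, Blender is talking to the virtualized Mesa driver rather than to the Intel GPU directly.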

Sorry, firefox.exe/chrome.exe is taking a while to load | (unresponsive) firefox.exe/chrome.exe

I'm making an application with WebVR using React VR. I'll test the application with my Oculus Rift and HTC Vive. I'm using the Firefox Nightly browser to access the WebVR APIs.
If I browse to my application using Firefox Nightly or Chromium, I arrive in an empty space with a loading message. A few seconds later I get this message on my Oculus Rift:
Sorry, firefox.exe/chrome.exe is taking a while to load. If this issue persists, please take off your headset and check this app on your computer.
On the HTC Vive I get this message in SteamVR, but it doesn't load at all:
(unresponsive) firefox.exe/chrome.exe
In the web browser, I do get the result that I should be seeing inside the headset, with motion tracking.
I'm using these browsers:
Version Firefox nightly: 55.0a1
Version Chromium: 56.0.2910.0
And these are my specifications:
GeForce: GTX 970
GeForce Game Ready Driver: 378.66
Processor: Intel® Core™ i7-6700 CPU @ 3.40GHz
RAM: 15.87 GB
This isn't the Oculus software at all. The problem is an NVIDIA driver update that broke everything. You need to go to the NVIDIA site and download drivers from the "376" generation (Dec '16 to Feb '17). Install those and the problems go away. I confirmed that things work with Oculus 1.12, 1.14, and the 1.14 beta channel.
It looked like Oculus broke things because 1.14 came out on almost the same day as the "381" NVIDIA driver update.
Downgrading my NVIDIA driver to 376 worked like a charm for me, but I can't run the VR scenes in Nightly, just in Chromium.
I was able to get my Oculus to work with WebVR by enabling the Beta channel within the Oculus app and letting it install an update. It seems like the current Oculus version might be broken.
Try installing SteamVR. SteamVR overrides the Oculus runtime and runs applications on its own.
Try closing Oculus Home before you launch your WebVR application using the WebVR button.
Make sure you are not in the Beta/Dash mode in Oculus.
Try disabling the auto-opening of Oculus Home. Oculus Home opens automatically when you put on the Rift. To disable it, check the "Run as administrator" option in the properties of the Oculus app. More specifically, go to /Oculus-Directory/Support/oculus-client/OculusClient, right-click, select Properties, and check "Run as Administrator".
Make sure that your desktop display and the Rift are connected to the same graphics card. You might need an HDMI-to-DVI converter for this. This is what fixed the problem for me.

Caffe and TensorFlow on a Dell 7559 with NVIDIA Optimus technology

I bought a Dell 7559 laptop for deep learning. I have Ubuntu 16.04 installed on it, but I am having trouble getting Caffe and TensorFlow onto it. The laptop uses NVIDIA Optimus technology to switch between the GPU and the CPU to save battery. I checked the BIOS to see if I could set it to use only the GPU, but there is no option for it. Using Bumblebee or nvidia-prime didn't work either. I now have Ubuntu 16.04 with the MATE desktop environment; that prevents the black screen but didn't help with the CUDA issue. I was able to install the drivers and CUDA, but when I build Caffe and TensorFlow they fail, saying that no GPU was detected. I also wasn't able to install OpenGL. I tried several versions of the NVIDIA drivers, but it didn't help. Any help would be great. Thanks.
I think Bumblebee can enable you to run Caffe/TensorFlow in GPU mode. More generally, it also allows you to run other CUDA programs on a laptop with Optimus technology.
When you have installed Bumblebee correctly (tutorial: Bumblebee Wiki for Ubuntu), you can invoke the Caffe binary by prepending optirun to the command. So it goes like the following:
optirun ../../caffe-master/build/tools/caffe train --solver=solver.prototxt
This works for the NVidia DIGITS server as well:
optirun ./digits-devserver
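A quick sanity check that Bumblebee actually reaches the card (assuming the proprietary NVIDIA driver is installed) is:
optirun nvidia-smi
If that prints the GPU table instead of an error, CUDA programs should work under optirun. With the TensorFlow of that era you can also check device placement; creating a session logs the device mapping, and a line mentioning gpu:0 means TensorFlow found the card:
optirun python -c "import tensorflow as tf; tf.Session(config=tf.ConfigProto(log_device_placement=True))"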
Bumblebee also works on my dual-graphics desktop PC (Intel HD 4600 + GTX 750 Ti). The display on my PC is driven by the Intel HD 4600 through the HDMI port on the motherboard; the NVIDIA GTX 750 Ti is only used for CUDA programs.
In fact, on my desktop PC, nvidia-prime (actually invoked through the command-line program prime-select) is used to choose the GPU that drives the desktop. I have the integrated GPU connected to the display through the HDMI port and the NVIDIA GPU through a DisplayPort. Currently the DisplayPort is inactive; the display signal comes from the HDMI port.
As far as I understand, PRIME does this by modifying /etc/X11/xorg.conf to make either the Intel integrated GPU or the NVIDIA GPU the display adapter available to X. I think the PRIME settings only make sense when both GPUs are connected to some display, which means there need not be an Optimus link between the two GPUs as in a laptop (or, on a laptop with a mux such as the Dell Precision M4600, Optimus can be disabled in the BIOS).
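For reference, a sketch of how PRIME is driven on Ubuntu (assuming the nvidia-prime package is installed; exact behavior varies between releases):
prime-select query           # prints which GPU currently drives the display (intel or nvidia)
sudo prime-select nvidia     # make the NVIDIA GPU the display adapter
Log out and back in (or restart the display manager) for the switch to take effect.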
More information about the Display Mux and Optimus may be found here: Using the NVIDIA Driver with Optimus Laptops
Hope this helps!

TensorFlow - which Docker image to use?

From TensorFlow Download and Setup, under Docker installation, I see:
b.gcr.io/tensorflow/tensorflow latest 4ac133eed955 653.1 MB
b.gcr.io/tensorflow/tensorflow latest-devel 6a90f0a0e005 2.111 GB
b.gcr.io/tensorflow/tensorflow-full latest edc3d721078b 2.284 GB
I know images 2 and 3 include the source code, and I am using 2 for now.
What is the difference between 2 and 3?
Which one is recommended for "normal" use?
TL;DR: start with the CPU version without the source (see below).
First of all, thanks for the Docker images! They are the easiest and cleanest way to start with TF.
A few asides about the images (a workaround is sketched after this list):
there is no PIL
there is no nano (but there is vi) and apt-get cannot find it; yes, I probably could configure the repos for it, but why not out of the box?
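For what it's worth, both can be added by hand inside a running container. This sketch assumes the image is Debian/Ubuntu-based (the apt-get failure is usually just a missing package index):
apt-get update && apt-get install -y nano
pip install Pillow    # Pillow is the maintained fork of PIL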
There are four images:
b.gcr.io/tensorflow/tensorflow: TensorFlow CPU binary image.
b.gcr.io/tensorflow/tensorflow:latest-devel: CPU Binary image plus source code.
b.gcr.io/tensorflow/tensorflow:latest-gpu: TensorFlow GPU binary image.
b.gcr.io/tensorflow/tensorflow:latest-devel-gpu: GPU binary image plus source code.
And the two properties of concern are:
1. CPU or GPU
2. no source or plus source
CPU or GPU: CPU
For a first-time user it is highly recommended to avoid the GPU version, as it can be anywhere from difficult to impossible to get working. The reason is that not all machines have an NVIDIA graphics chip that meets the requirements. You should first get TensorFlow working to understand it, then move on to the GPU version if you want or need it.
From TensorFlow Build Instructions
Optional: Install CUDA (GPUs on Linux)
In order to build or run TensorFlow with GPU support, both Cuda
Toolkit 7.0 and CUDNN 6.5 V2 from NVIDIA need to be installed.
TensorFlow GPU support requires having a GPU card with
NVidia Compute Capability >= 3.5. Supported cards include but are not limited to:
NVidia Titan
NVidia Titan X
NVidia K20
NVidia K40
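A quick way to check whether your own card qualifies (assuming the NVIDIA driver is installed) is to list it by name and look up that model's compute capability on NVIDIA's CUDA GPUs page:
nvidia-smi -L    # lists each GPU by name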
no source or plus source: no source
The Docker images will work without the source. You should only want or need the source if you have to rebuild TensorFlow for some reason, such as adding a new op.
The standard recommendation for someone new to using TensorFlow is to start with the CPU version without the source.
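Concretely, a minimal sketch of that recommendation, using the image names listed above:
docker pull b.gcr.io/tensorflow/tensorflow:latest
docker run -it b.gcr.io/tensorflow/tensorflow:latest /bin/bash
python -c "import tensorflow as tf; print(tf.__version__)"    # run inside the container to verify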

OpenCL: Minimal configuration to work with AMD GPU

Suppose we have an AMD GPU (for example a Radeon HD 7970) and a minimal Linux system without X and so on.
What should be installed, and what should be launched and how, to get a proper OpenCL environment? Ideally it should be headless.
Requirements for the environment:
The GPU is visible to OpenCL programs (clinfo, for example).
It is possible to monitor the temperature and set the fan speed (for example using aticonfig).
P.S. Simply installing an X server and Catalyst and running X :0 won't work properly. See: X server with fglrx driver won't respond after exactly 49 accesses to X server
UPDATE: When you use an AMD GPU on Linux, OpenCL applications don't see the AMD GPU if the X server isn't running.
I had a similar problem, asked a question, and succeeded in solving it myself.
For R9 290 cards and newer, I assume you have:
A kernel built from 4.14 or later with amdgpu driver support. There is an option for it in the Linux kernel config under Graphics Support.
All necessary firmware .bin blobs incorporated. To do this easily with Buildroot, you may edit the buildroot/package/linux-firmware/* contents and manually add a BR2_PACKAGE_LINUX_FIRMWARE_AMDGPU option yourself, alongside BR2_PACKAGE_LINUX_FIRMWARE_RADEON (use it as a template). Actually, we should post that update to their git.
When booting you should see the appropriate dmesg messages about amdgpu initializing, one per adapter, and the screen mode should switch. If you still see large console text and no video-mode switch occurred during init, then you have a problem in the kernel/firmware, and you should sort that out first. (A quick check is sketched below.)
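A quick way to confirm both halves of that (assuming clinfo and an OpenCL implementation such as ROCm or AMDGPU-PRO are installed):
dmesg | grep -i amdgpu                          # did the driver/firmware initialize each adapter?
clinfo | grep -iE 'platform name|device name'   # is the GPU visible to OpenCL programs?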
To answer the second question: controlling fan speeds/temperatures is done via the powerplay filesystem under /sys/class/drm/..., like this:
cd /sys/class/drm/card0/device/hwmon/hwmon0
echo 1 > pwm1_enable    # 1 = manual fan control
cat pwm1_max > pwm1     # set the fan duty cycle to its maximum
You may dig a bit deeper and find the powertune parameters nearby, in the device folder.
But instead of using /sys/class/drm/card0/device/pp_dpm_sclk, I highly recommend flashing those values directly into the card's BIOS, set to the required frequencies/voltages, as that is more reliable, stable, and API-independent: you either init it, or not. :)
PS. Also, put away the 7970 and buy something a bit newer. I don't know if it is still supported in the latest drivers; we don't have such an old card at hand right now. I tested the 290, 390, 480, and 580 card series (for the R9 270, the miner fails to build the CL code). For older cards it is better to use some older software (<= 16.40) and maybe a slightly older kernel (<= 4.13).